Updates from: 05/25/2022 01:52:41
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c B2clogin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/b2clogin.md
Previously updated : 09/15/2021 Last updated : 05/21/2022

# Set redirect URLs to b2clogin.com for Azure Active Directory B2C
-When you set up an identity provider for sign-up and sign-in in your Azure Active Directory B2C (Azure AD B2C) application, you need to specify a redirect URL. You should no longer reference *login.microsoftonline.com* in your applications and APIs for authenticating users with Azure AD B2C. Instead, use *b2clogin.com* for all new applications, and migrate existing applications from *login.microsoftonline.com* to *b2clogin.com*.
+When you set up an identity provider for sign-up and sign-in in your Azure Active Directory B2C (Azure AD B2C) applications, you need to specify the endpoints of the Azure AD B2C identity provider. You should no longer reference *login.microsoftonline.com* in your applications and APIs for authenticating users with Azure AD B2C. Instead, use *b2clogin.com* or a [custom domain](./custom-domain.md) for all applications.
## What endpoints does this apply to
-The transition to b2clogin.com only applies to authentication endpoints that use Azure AD B2C policies (user flows or custom policies) to authenticate users. These endpoints have a `<policy-name>` parameter which specifies the policy Azure AD B2C should use. [Learn more about Azure AD B2C policies](technical-overview.md#identity-experiences-user-flows-or-custom-policies).
-These endpoints may look like:
-- <code>https://login.microsoft.com/\<tenant-name\>.onmicrosoft.com/<b>\<policy-name\></b>/oauth2/v2.0/authorize</code>
+The transition to b2clogin.com only applies to authentication endpoints that use Azure AD B2C policies (user flows or custom policies) to authenticate users. These endpoints have a `<policy-name>` parameter, which specifies the policy Azure AD B2C should use. [Learn more about Azure AD B2C policies](technical-overview.md#identity-experiences-user-flows-or-custom-policies).
-- <code>https://login.microsoft.com/\<tenant-name\>.onmicrosoft.com/<b>\<policy-name\></b>/oauth2/v2.0/token</code>
+Old endpoints may look like:
+- <code>https://<b>login.microsoft.com</b>/\<tenant-name\>.onmicrosoft.com/<b>\<policy-name\></b>/oauth2/v2.0/authorize</code>
+- <code>https://<b>login.microsoft.com</b>/\<tenant-name\>.onmicrosoft.com/oauth2/v2.0/authorize<b>?p=\<policy-name\></b></code>
-Alternatively, the `<policy-name>` may be passed as a query parameter:
-- <code>https://login.microsoft.com/\<tenant-name\>.onmicrosoft.com/oauth2/v2.0/authorize?<b>p=\<policy-name\></b></code>
-- <code>https://login.microsoft.com/\<tenant-name\>.onmicrosoft.com/oauth2/v2.0/token?<b>p=\<policy-name\></b></code>
+A corresponding updated endpoint would look like:
+- <code>https://<b>\<tenant-name\>.b2clogin.com</b>/\<tenant-name\>.onmicrosoft.com/<b>\<policy-name\></b>/oauth2/v2.0/authorize</code>
+- <code>https://<b>\<tenant-name\>.b2clogin.com</b>/\<tenant-name\>.onmicrosoft.com/oauth2/v2.0/authorize?<b>p=\<policy-name\></b></code>
+
+With an Azure AD B2C [custom domain](./custom-domain.md), the corresponding updated endpoint would look like:
-> [!IMPORTANT]
-> Endpoints that use the 'policy' parameter must be updated as well as [identity provider redirect URLs](#change-identity-provider-redirect-urls).
+- <code>https://<b>login.contoso.com</b>/\<tenant-name\>.onmicrosoft.com/<b>\<policy-name\></b>/oauth2/v2.0/authorize</code>
+- <code>https://<b>login.contoso.com</b>/\<tenant-name\>.onmicrosoft.com/oauth2/v2.0/authorize?<b>p=\<policy-name\></b></code>
-Some Azure AD B2C customers use the shared capabilities of Azure AD enterprise tenants like OAuth 2.0 client credentials grant flow. These features are accessed using Azure AD's login.microsoftonline.com endpoints, *which don't contain a policy parameter*. __These endpoints are not affected__.
+## Endpoints that are not affected
-## Benefits of b2clogin.com
+Some customers use the shared capabilities of Azure AD enterprise tenants. For example, they acquire an access token to call the [MS Graph API](microsoft-graph-operations.md#code-discussion) of the Azure AD B2C tenant.
-When you use *b2clogin.com* as your redirect URL:
+Endpoints that don't contain a policy parameter aren't affected by the change. They're accessed only through Azure AD's *login.microsoftonline.com* endpoints and can't be used with *b2clogin.com* or custom domains. The following example shows a valid token endpoint of the Azure AD platform:
-* Space consumed in the cookie header by Microsoft services is reduced.
-* Your redirect URLs no longer need to include a reference to Microsoft.
-* [JavaScript client-side code](javascript-and-page-layout.md) is supported in customized pages. Due to security restrictions, JavaScript code and HTML form elements are removed from custom pages if you use *login.microsoftonline.com*.
+```http
+https://login.microsoftonline.com/<tenant-name>.onmicrosoft.com/oauth2/v2.0/token
+```
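For illustration, a service that calls Microsoft Graph for the Azure AD B2C tenant keeps using the *login.microsoftonline.com* authority. The following MSAL.NET sketch assumes an app registration that uses the client credentials flow; the tenant name, client ID, and secret are placeholders, not values from this article:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Identity.Client;

class GraphTokenSample
{
    static async Task Main()
    {
        // Placeholder values for illustration only; substitute your own tenant and app registration.
        IConfidentialClientApplication app = ConfidentialClientApplicationBuilder
            .Create("<application-client-id>")
            .WithClientSecret("<client-secret>")
            // Token requests for Microsoft Graph go through the Azure AD platform endpoint,
            // not b2clogin.com or a custom domain.
            .WithAuthority("https://login.microsoftonline.com/<tenant-name>.onmicrosoft.com")
            .Build();

        AuthenticationResult result = await app
            .AcquireTokenForClient(new[] { "https://graph.microsoft.com/.default" })
            .ExecuteAsync();

        Console.WriteLine(result.AccessToken);
    }
}
```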
## Overview of required changes
-There are several modifications you might need to make to migrate your applications to *b2clogin.com*:
+There are several modifications you might need to make to migrate your applications from *login.microsoftonline.com* to the Azure AD B2C endpoints:
-* Change the redirect URL in your identity provider's applications to reference *b2clogin.com*.
-* Update your Azure AD B2C applications to use *b2clogin.com* in their user flow and token endpoint references. This may include updating your use of an authentication library like Microsoft Authentication Library (MSAL).
+* Change the redirect URL in your identity provider's applications to reference *b2clogin.com* or your custom domain. For more information, follow the [change identity provider redirect URLs](#change-identity-provider-redirect-urls) guidance.
+* Update your Azure AD B2C applications to use *b2clogin.com* or your custom domain in their user flow and token endpoint references. The change may include updating your use of an authentication library like the Microsoft Authentication Library (MSAL); a sketch follows this list.
* Update any **Allowed Origins** that you've defined in the CORS settings for [user interface customization](customize-ui-with-html.md).
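As a rough sketch of the second point above, an MSAL.NET public client that previously used a *login.microsoftonline.com* authority would be rebuilt with a *b2clogin.com* (or custom domain) authority. The tenant, policy, client ID, and redirect URI below are placeholders, and the `tfp` authority format shown is one accepted form:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Identity.Client;

class B2CAuthoritySample
{
    static async Task Main()
    {
        // Placeholder values for illustration only; substitute your tenant, policy, and app registration.
        string authority =
            "https://<tenant-name>.b2clogin.com/tfp/<tenant-name>.onmicrosoft.com/<policy-name>";

        IPublicClientApplication app = PublicClientApplicationBuilder
            .Create("<application-client-id>")
            .WithB2CAuthority(authority)         // b2clogin.com or custom domain authority
            .WithRedirectUri("http://localhost") // must match a redirect URI registered for the app
            .Build();

        AuthenticationResult result = await app
            .AcquireTokenInteractive(new[] { "openid", "offline_access" })
            .ExecuteAsync();

        Console.WriteLine(result.IdToken);
    }
}
```

If the application uses a custom domain, only the host changes, for example `https://login.contoso.com/tfp/<tenant-name>.onmicrosoft.com/<policy-name>`.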
-An old endpoint may look like:
-- <b><code>https://login.microsoft.com/</b>\<tenant-name\>.onmicrosoft.com/\<policy-name\>/oauth2/v2.0/authorize</code>
-
-A corresponding updated endpoint would look like:
-- <code><b>https://\<tenant-name\>.b2clogin.com/</b>\<tenant-name\>.onmicrosoft.com/\<policy-name\>/oauth2/v2.0/authorize</code>
-
## Change identity provider redirect URLs
-On each identity provider's website in which you've created an application, change all trusted URLs to redirect to `your-tenant-name.b2clogin.com` instead of *login.microsoftonline.com*.
+On each identity provider's website where you've created an application, change all trusted URLs to redirect to `your-tenant-name.b2clogin.com` or your custom domain instead of *login.microsoftonline.com*.
-There are two formats you can use for your b2clogin.com redirect URLs. The first provides the benefit of not having "Microsoft" appear anywhere in the URL by using the Tenant ID (a GUID) in place of your tenant domain name:
+There are two formats you can use for your b2clogin.com redirect URLs. The first provides the benefit of not having "Microsoft" appear anywhere in the URL by using the Tenant ID (a GUID) in place of your tenant domain name. Note that the `authresp` endpoint may not contain a policy name.
```
https://{your-tenant-name}.b2clogin.com/{your-tenant-id}/oauth2/authresp
```
For migrating Azure API Management APIs protected by Azure AD B2C, see the [Migr
### MSAL.NET ValidateAuthority property
-If you're using [MSAL.NET][msal-dotnet] v2 or earlier, set the **ValidateAuthority** property to `false` on client instantiation to allow redirects to *b2clogin.com*. Setting this value to `false` is not required for MSAL.NET v3 and above.
+If you're using [MSAL.NET][msal-dotnet] v2 or earlier, set the **ValidateAuthority** property to `false` on client instantiation to allow redirects to *b2clogin.com*. Setting this value to `false` isn't required for MSAL.NET v3 and above.
```csharp
ConfidentialClientApplication client = new ConfidentialClientApplication(...); // Can also be PublicClientApplication
client.ValidateAuthority = false;
```
active-directory-b2c Partner Xid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-xid.md
The following architecture diagram shows the implementation.
![image shows the architecture diagram](./media/partner-xid/partner-xid-architecture-diagram.png)
-| Step | Description |
-|:--|:--|
-| 1. |User opens Azure AD B2C's sign-in page and then signs in or signs up by entering their username. |
-| 2. |Azure AD B2C redirects the user to xID authorize API endpoint using an OpenID Connect (OIDC) request. An OIDC endpoint is available containing information about the endpoints. xID Identity provider (IdP) redirects the user to the xID authorization sign-in page allowing the user to fill in or select their email address. |
-| 3. |xID IdP sends the push notification to the user's mobile device. |
-| 4. |The user opens the xID app, checks the request, then enters the PIN or authenticates with their biometrics. If PIN or biometrics is successfully verified, xID app activates the private key and creates an electronic signature. |
-| 5. |xID app sends the signature to xID IdP for verification. |
-| 6. |xID IdP shows a consent screen to the user, requesting authorization to give their personal information to the service they're signing in. |
-| 7. |xID IdP returns the OAuth authorization code to Azure AD B2C. |
-| 8. | Azure AD B2C sends a token request using the authorization code. |
-| 9. |xID IdP checks the token request and, if still valid, returns the OAuth access token and the ID token containing the requested user's identifier and email address. |
-| 10. |In addition, if the user's customer content is needed, Azure AD B2C calls the xID userdata API. |
-| 11. |The xID userdata API returns the user's encrypted customer content. Users can decrypt it with their private key, which they create when requesting the xID client information. |
-| 12. | User is either granted or denied access to the customer application based on the verification results. |
+| Step | Description |
+| :-- | :-- |
+| 1. | User opens Azure AD B2C's sign-in page and then signs in or signs up by entering their username. |
+| 2. | Azure AD B2C redirects the user to xID authorize API endpoint using an OpenID Connect (OIDC) request. An OIDC endpoint is available containing information about the endpoints. xID Identity provider (IdP) redirects the user to the xID authorization sign-in page allowing the user to fill in or select their email address. |
+| 3. | xID IdP sends the push notification to the user's mobile device. |
+| 4. | The user opens the xID app, checks the request, then enters the PIN or authenticates with their biometrics. If PIN or biometrics is successfully verified, xID app activates the private key and creates an electronic signature. |
+| 5. | xID app sends the signature to xID IdP for verification. |
+| 6. | xID IdP shows a consent screen to the user, requesting authorization to give their personal information to the service they're signing in. |
+| 7. | xID IdP returns the OAuth authorization code to Azure AD B2C. |
+| 8. | Azure AD B2C sends a token request using the authorization code. |
+| 9. | xID IdP checks the token request and, if still valid, returns the OAuth access token and the ID token containing the requested user's identifier and email address. |
+| 10. | In addition, if the user's customer content is needed, Azure AD B2C calls the xID userdata API. |
+| 11. | The xID userdata API returns the user's encrypted customer content. Users can decrypt it with their private key, which they create when requesting the xID client information. |
+| 12. | User is either granted or denied access to the customer application based on the verification results. |
## Onboard with xID

Request API documents by filling out [the request form](https://xid.inc/contact-us). In the message field, indicate that you'd like to onboard with Azure AD B2C. Then, an xID sales representative will contact you. Follow the instructions provided in the xID API document and request an xID API client. The xID tech team will send client information to you in 3-4 working days.
+Supply the redirect URI. This is the URI in your site to which the user is returned after a successful authentication. The URI that should be provided to xID for your Azure AD B2C tenant follows this pattern: `https://<your-b2c-domain>.b2clogin.com/<your-b2c-domain>.onmicrosoft.com/oauth2/authresp`.
-## Step 1: Create a xID policy key
+## Step 1: Register a web application in Azure AD B2C
+
+Before your [applications](application-types.md) can interact with Azure AD B2C, they must be registered in a tenant that you manage.
+
+For testing purposes like this tutorial, you're registering `https://jwt.ms`, a Microsoft-owned web application that displays the decoded contents of a token (the contents of the token never leave your browser).
+
+Follow the steps mentioned in [this tutorial](tutorial-register-applications.md?tabs=app-reg-ga) to **register a web application** and **enable ID token implicit grant** for testing a user flow or custom policy. There's no need to create a Client Secret at this time.
+
+## Step 2: Create an xID policy key
Store the client secret that you received from xID in your Azure AD B2C tenant.
>[!NOTE]
>In Azure AD B2C, [**custom policies**](./user-flow-overview.md) are designed primarily to address complex scenarios.
-## Step 2: Configure xID as an Identity provider
+## Step 3: Configure xID as an Identity provider
To enable users to sign in using xID, you need to define xID as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that Azure AD B2C uses to verify that a specific user has authenticated using a digital identity available on their device, proving the user's identity. Use the following steps to add xID as a claims provider:
-1. Get the custom policy starter packs from GitHub, then update the XML files in the SocialAndLocalAccounts starter pack with your Azure AD B2C tenant name:
+1. Get the custom policy starter packs from GitHub, then update the XML files in the SocialAccounts starter pack with your Azure AD B2C tenant name:
i. Download the [.zip file](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/archive/master.zip) or [clone the repository](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack).
- ii. In all of the files in the **LocalAccounts** directory, replace the string `yourtenant` with the name of your Azure AD B2C tenant. For example, if the name of your B2C tenant is `contoso`, all instances of `yourtenant.onmicrosoft.com` become `contoso.onmicrosoft.com`.
+ ii. In all of the files in the **SocialAccounts** directory, replace the string `yourtenant` with the name of your Azure AD B2C tenant. For example, if the name of your B2C tenant is `contoso`, all instances of `yourtenant.onmicrosoft.com` become `contoso.onmicrosoft.com`.
-2. Open the `LocalAccounts/ TrustFrameworkExtensions.xml`.
+2. Open the `SocialAccounts/TrustFrameworkExtensions.xml`.
3. Find the **ClaimsProviders** element. If it doesn't exist, add it under the root element.
<Item Key="DiscoverMetadataByTokenIssuer">true</Item> <Item Key="token_endpoint_auth_method">client_secret_basic</Item> <Item Key="ClaimsEndpoint">https://oidc-uat.x-id.io/userinfo</Item>
+ <Item Key="ValidTokenIssuerPrefixes">https://oidc-uat.x-id.io/</Item>
</Metadata>
<CryptographicKeys>
- <Key Id="client_secret" StorageReferenceId="B2C_1A_X-IDClientSecret" />
+ <Key Id="client_secret" StorageReferenceId="B2C_1A_XIDSecAppSecret" />
</CryptographicKeys>
<OutputClaims>
  <OutputClaim ClaimTypeReferenceId="issuerUserId" PartnerClaimType="sub" />
<OutputClaim ClaimTypeReferenceId="email" /> <OutputClaim ClaimTypeReferenceId="sid" /> <OutputClaim ClaimTypeReferenceId="userdataid" />
- <OutputClaim ClaimTypeReferenceId="X-ID_verified" />
+ <OutputClaim ClaimTypeReferenceId="XID_verified" />
<OutputClaim ClaimTypeReferenceId="email_verified" /> <OutputClaim ClaimTypeReferenceId="authenticationSource" DefaultValue="socialIdpAuthentication" AlwaysUseDefaultValue="true" /> <OutputClaim ClaimTypeReferenceId="identityProvider" PartnerClaimType="iss" DefaultValue="https://oidc-uat.x-id.io/" />
5. Save the changes.
-## Step 3: Add a user journey
+## Step 4: Add a user journey
-At this point, you've set up the identity provider, but it's not yet available on any of the sign-in pages. If you have a custom user journey, continue to [step 4](#step-4-add-the-identity-provider-to-a-user-journey). Otherwise, create a duplicate of an existing template user journey as follows:
+At this point, you've set up the identity provider, but it's not yet available on any of the sign-in pages. If you have a custom user journey, continue to [step 5](#step-5-add-the-identity-provider-to-a-user-journey). Otherwise, create a duplicate of an existing template user journey as follows:
1. Open the `TrustFrameworkBase.xml` file from the starter pack.
5. Rename the ID of the user journey. For example, `ID=CustomSignUpSignIn`
-## Step 4: Add the identity provider to a user journey
+## Step 5: Add the identity provider to a user journey
Now that you have a user journey, add the new identity provider to the user journey.

1. Find the orchestration step element that includes Type=`CombinedSignInAndSignUp`, or Type=`ClaimsProviderSelection` in the user journey. It's usually the first orchestration step. The **ClaimsProviderSelections** element contains a list of identity providers used for signing in. The order of the elements controls the order of the sign-in buttons presented to the user. Add a **ClaimsProviderSelection** XML element. Set the value of **TargetClaimsExchangeId** to a friendly name, such as `X-IDExchange`.
-2. In the next orchestration step, add a **ClaimsExchange** element. Set the **Id** to the value of the target claims exchange ID to link the xID button to `X-ID-SignIn` action. Next, update the value of **TechnicalProfileReferenceId** to the ID of the technical profile you created earlier.
+2. In the next orchestration step, add a **ClaimsExchange** element. Set the **Id** to the value of the target claims exchange ID to link the xID button to the `X-IDExchange` action. Next, update the value of **TechnicalProfileReferenceId** to the ID of the technical profile you created earlier, `X-ID-Oauth2`.
- The following XML demonstrates the orchestration steps of a user journey with the identity provider:
+3. Add a new orchestration step that calls the xID UserInfo endpoint, `X-ID-Userdata`, to return claims about the authenticated user.
+
+ The following XML demonstrates the orchestration steps of a user journey with the xID identity provider:
```xml
- <UserJourney Id="X-IDSignUpOrSignIn">
+ <UserJourney Id="CombinedSignInAndSignUp">
<OrchestrationSteps>
  <OrchestrationStep Order="1" Type="CombinedSignInAndSignUp" ContentDefinitionReferenceId="api.signuporsignin">
```
-## Step 5: Upload the custom policy
+There are additional identity claims that xID supports, which are referenced as part of the policy. The claims schema is where you declare these claims. The **ClaimsSchema** element contains a list of **ClaimType** elements. The **ClaimType** element contains the **Id** attribute, which is the claim name.
-1. Sign in to the [Azure portal](https://portal.azure.com/#home).
+1. Open the `TrustFrameworkExtensions.xml` file.
-2. Make sure you're using the directory that contains your Azure AD B2C tenant:
+2. Find the `BuildingBlocks` element.
- a. Select the **Directories + subscriptions** icon in the portal toolbar.
+3. Add the following **ClaimType** elements to the **ClaimsSchema** element of your `TrustFrameworkExtensions.xml` policy:
- b. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and select **Switch**.
-
-3. In the [Azure portal](https://portal.azure.com/#home), search for and select **Azure AD B2C**.
-
-4. Under Policies, select **Identity Experience Framework**.
-
-5. Select **Upload Custom Policy**, and then upload the files in the **LocalAccounts** starter pack in the following order: the extension policy, for example, `TrustFrameworkExtensions.xml`, then the relying party policy, such as `SignUpSignIn.xml`.
+```xml
+ <BuildingBlocks>
+ <ClaimsSchema>
+ <!-- xID -->
+ <ClaimType Id="sid">
+ <DisplayName>sid</DisplayName>
+ <DataType>string</DataType>
+ </ClaimType>
+ <ClaimType Id="userdataid">
+ <DisplayName>userdataid</DisplayName>
+ <DataType>string</DataType>
+ </ClaimType>
+ <ClaimType Id="xid_verified">
+ <DisplayName>xid_verified</DisplayName>
+ <DataType>boolean</DataType>
+ </ClaimType>
+ <ClaimType Id="email_verified">
+ <DisplayName>email_verified</DisplayName>
+ <DataType>boolean</DataType>
+ </ClaimType>
+ <ClaimType Id="identityProviderAccessToken">
+ <DisplayName>Identity Provider Access Token</DisplayName>
+ <DataType>string</DataType>
+ <AdminHelpText>Stores the access token of the identity provider.</AdminHelpText>
+ </ClaimType>
+ <ClaimType Id="last_name">
+ <DisplayName>last_name</DisplayName>
+ <DataType>string</DataType>
+ </ClaimType>
+ <ClaimType Id="first_name">
+ <DisplayName>first_name</DisplayName>
+ <DataType>string</DataType>
+ </ClaimType>
+ <ClaimType Id="previous_name">
+ <DisplayName>previous_name</DisplayName>
+ <DataType>string</DataType>
+ </ClaimType>
+ <ClaimType Id="year">
+ <DisplayName>year</DisplayName>
+ <DataType>string</DataType>
+ </ClaimType>
+ <ClaimType Id="month">
+ <DisplayName>month</DisplayName>
+ <DataType>string</DataType>
+ </ClaimType>
+ <ClaimType Id="date">
+ <DisplayName>date</DisplayName>
+ <DataType>string</DataType>
+ </ClaimType>
+ <ClaimType Id="prefecture">
+ <DisplayName>prefecture</DisplayName>
+ <DataType>string</DataType>
+ </ClaimType>
+ <ClaimType Id="city">
+ <DisplayName>city</DisplayName>
+ <DataType>string</DataType>
+ </ClaimType>
+ <ClaimType Id="address">
+ <DisplayName>address</DisplayName>
+ <DataType>string</DataType>
+ </ClaimType>
+ <ClaimType Id="sub_char_common_name">
+ <DisplayName>sub_char_common_name</DisplayName>
+ <DataType>string</DataType>
+ </ClaimType>
+ <ClaimType Id="sub_char_previous_name">
+ <DisplayName>sub_char_previous_name</DisplayName>
+ <DataType>string</DataType>
+ </ClaimType>
+ <ClaimType Id="sub_char_address">
+ <DisplayName>sub_char_address</DisplayName>
+ <DataType>string</DataType>
+ </ClaimType>
+ <ClaimType Id="verified_at">
+ <DisplayName>verified_at</DisplayName>
+ <DataType>int</DataType>
+ </ClaimType>
+ <ClaimType Id="gender">
+ <DisplayName>Gender</DisplayName>
+ <DataType>string</DataType>
+ <DefaultPartnerClaimTypes>
+ <Protocol Name="OpenIdConnect" PartnerClaimType="gender" />
+ </DefaultPartnerClaimTypes>
+ <AdminHelpText>The user's gender.</AdminHelpText>
+ <UserHelpText>Your gender.</UserHelpText>
+ <UserInputType>TextBox</UserInputType>
+ </ClaimType>
+ <ClaimType Id="correlationId">
+ <DisplayName>correlation ID</DisplayName>
+ <DataType>string</DataType>
+ </ClaimType>
+ <!-- xID -->
+ </ClaimsSchema>
+ </BuildingBlocks>
+```
## Step 6: Configure the relying party policy

The relying party policy, for example [SignUpSignIn.xml](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/blob/main/LocalAccounts/SignUpOrSignin.xml), specifies the user journey which Azure AD B2C will execute. First, find the **DefaultUserJourney** element within the relying party. Then, update the **ReferenceId** to match the user journey ID you added to the identity provider.
-In the following example, for the `X-IDSignUpOrSignIn` user journey, the **ReferenceId** is set to `X-IDSignUpOrSignIn`:
+In the following example, for the xID user journey, the **ReferenceId** is set to `CombinedSignInAndSignUp`:
```xml
<RelyingParty>
- <DefaultUserJourney ReferenceId="X-IDSignUpOrSignIn" />
+ <DefaultUserJourney ReferenceId="CombinedSignInAndSignUp" />
<TechnicalProfile Id="PolicyProfile"> <DisplayName>PolicyProfile</DisplayName> <Protocol Name="OpenIdConnect" />
```
+## Step 7: Upload the custom policy
+
+1. Sign in to the [Azure portal](https://portal.azure.com/#home).
+
+2. Make sure you're using the directory that contains your Azure AD B2C tenant:
+
+ a. Select the **Directories + subscriptions** icon in the portal toolbar.
+
+ b. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and select **Switch**.
+
+3. In the [Azure portal](https://portal.azure.com/#home), search for and select **Azure AD B2C**.
+
+4. Under Policies, select **Identity Experience Framework**.
+
+5. Select **Upload Custom Policy**, and then upload the files in the following order:
+ 1. `TrustFrameworkBase.xml`, the base policy file
+ 2. `TrustFrameworkExtensions.xml`, the extension policy
+ 3. `SignUpSignIn.xml`, then the relying party policy
-## Step 7: Test your custom policy
+## Step 8: Test your custom policy
1. In your Azure AD B2C tenant, under **Policies**, select **Identity Experience Framework**.
active-directory-b2c Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/whats-new-docs.md
Title: "What's new in Azure Active Directory business-to-customer (B2C)" description: "New and updated documentation for the Azure Active Directory business-to-customer (B2C)." Previously updated : 04/04/2022 Last updated : 05/23/2022
Welcome to what's new in Azure Active Directory B2C documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the B2C service, see [What's new in Azure Active Directory](../active-directory/fundamentals/whats-new.md).
+## April 2022
+
+### New articles
+
+- [Tutorial: Configure Azure Web Application Firewall with Azure Active Directory B2C](partner-azure-web-application-firewall.md)
+- [Configure Asignio with Azure Active Directory B2C for multi-factor authentication](partner-asignio.md)
+- [Set up sign-up and sign-in with Mobile ID using Azure Active Directory B2C](identity-provider-mobile-id.md)
+- [Find help and open a support ticket for Azure Active Directory B2C](find-help-open-support-ticket.md)
+
+### Updated articles
+
+- [Configure authentication in a sample single-page application by using Azure AD B2C](configure-authentication-sample-spa-app.md)
+- [Configure xID with Azure Active Directory B2C for passwordless authentication](partner-xid.md)
+- [Azure Active Directory B2C service limits and restrictions](service-limits.md)
+- [Localization string IDs](localization-string-ids.md)
+- [Manage your Azure Active Directory B2C tenant](tenant-management.md)
+- [Page layout versions](page-layout.md)
+- [Secure your API used an API connector in Azure AD B2C](secure-rest-api.md)
+- [Azure Active Directory B2C: What's new](whats-new-docs.md)
+- [Application types that can be used in Active Directory B2C](application-types.md)
+- [Publish your Azure Active Directory B2C app to the Azure Active Directory app gallery](publish-app-to-azure-ad-app-gallery.md)
+- [Quickstart: Set up sign in for a desktop app using Azure Active Directory B2C](quickstart-native-app-desktop.md)
+- [Register a single-page application (SPA) in Azure Active Directory B2C](tutorial-register-spa.md)
+## March 2022

### New articles
active-directory Accidental Deletions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/accidental-deletions.md
Title: Enable accidental deletions prevention in Application Provisioning in Azu
description: Enable accidental deletions prevention in Application Provisioning in Azure Active Directory. -+
active-directory Application Provisioning Config Problem No Users Provisioned https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/application-provisioning-config-problem-no-users-provisioned.md
Title: Users are not being provisioned in my application
description: How to troubleshoot common issues faced when you don't see users appearing in an Azure AD Gallery Application you have configured for user provisioning with Azure AD -+
active-directory Application Provisioning Config Problem Scim Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/application-provisioning-config-problem-scim-compatibility.md
Title: Known issues with System for Cross-Domain Identity Management (SCIM) 2.0
description: How to solve common protocol compatibility issues faced when adding a non-gallery application that supports SCIM 2.0 to Azure AD -+
active-directory Application Provisioning Config Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/application-provisioning-config-problem.md
Title: Problem configuring user provisioning to an Azure Active Directory Galler
description: How to troubleshoot common issues faced when configuring user provisioning to an application already listed in the Azure Active Directory Application Gallery -+
active-directory Application Provisioning Configuration Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/application-provisioning-configuration-api.md
Title: Configure provisioning using Microsoft Graph APIs
description: Learn how to save time by using the Microsoft Graph APIs to automate the configuration of automatic provisioning. -+
active-directory Application Provisioning Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/application-provisioning-log-analytics.md
Title: Understand how Provisioning integrates with Azure Monitor logs in Azure A
description: Understand how Provisioning integrates with Azure Monitor logs in Azure Active Directory. -+
active-directory Application Provisioning Quarantine Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/application-provisioning-quarantine-status.md
Title: Quarantine status in Azure Active Directory Application Provisioning
description: When you've configured an application for automatic user provisioning, learn what a provisioning status of Quarantine means and how to clear it. -+
active-directory Application Provisioning When Will Provisioning Finish Specific User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md
Title: Find out when a specific user will be able to access an app in Azure Acti
description: How to find out when a critically important user will be able to access an application you have configured for user provisioning with Azure Active Directory -+
active-directory Check Status User Account Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/check-status-user-account-provisioning.md
Title: Report automatic user account provisioning from Azure Active Directory to
description: 'Learn how to check the status of automatic user account provisioning jobs, and how to troubleshoot the provisioning of individual users.' -+
active-directory Configure Automatic User Provisioning Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/configure-automatic-user-provisioning-portal.md
Title: User provisioning management for enterprise apps in Azure Active Director
description: Learn how to manage user account provisioning for enterprise apps using the Azure Active Directory. -+
active-directory Customize Application Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/customize-application-attributes.md
Title: Tutorial - Customize Azure Active Directory attribute mappings in Applica
description: Learn what attribute mappings for Software as a Service (SaaS) apps in Azure Active Directory Application Provisioning are and how you can modify them to address your business needs. -+
active-directory Define Conditional Rules For Provisioning User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md
Title: Use scoping filters in Azure Active Directory Application Provisioning
description: Learn how to use scoping filters to prevent objects in apps that support automated user provisioning from being provisioned if an object doesn't satisfy your business requirements in Azure Active Directory Application Provisioning. -+
active-directory Export Import Provisioning Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/export-import-provisioning-configuration.md
Title: Export Application Provisioning configuration and roll back to a known go
description: Learn how to export your Application Provisioning configuration and roll back to a known good state for disaster recovery in Azure Active Directory. -+
active-directory Expression Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/expression-builder.md
Title: Understand how expression builder works with Application Provisioning in
description: Understand how expression builder works with Application Provisioning in Azure Active Directory. -+
active-directory Functions For Customizing Application Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/functions-for-customizing-application-data.md
Title: Reference for writing expressions for attribute mappings in Azure Active Directory Application Provisioning description: Learn how to use expression mappings to transform attribute values into an acceptable format during automated provisioning of SaaS app objects in Azure Active Directory. Includes a reference list of functions. -+
active-directory How Provisioning Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/how-provisioning-works.md
Title: Understand how Application Provisioning in Azure Active Directory
description: Understand how Application Provisioning works in Azure Active Directory. -+
active-directory Hr Attribute Retrieval Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/hr-attribute-retrieval-issues.md
Title: Troubleshoot attribute retrieval issues with HR provisioning description: Learn how to troubleshoot attribute retrieval issues with HR provisioning -+
active-directory Hr Manager Update Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/hr-manager-update-issues.md
Title: Troubleshoot manager update issues with HR provisioning description: Learn how to troubleshoot manager update issues with HR provisioning -+
active-directory Hr User Creation Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/hr-user-creation-issues.md
Title: Troubleshoot user creation issues with HR provisioning description: Learn how to troubleshoot user creation issues with HR provisioning -+
active-directory Hr User Update Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/hr-user-update-issues.md
Title: Troubleshoot user update issues with HR provisioning description: Learn how to troubleshoot user update issues with HR provisioning -+
active-directory Hr Writeback Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/hr-writeback-issues.md
Title: Troubleshoot write back issues with HR provisioning description: Learn how to troubleshoot write back issues with HR provisioning -+
active-directory Isv Automatic Provisioning Multi Tenant Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/isv-automatic-provisioning-multi-tenant-apps.md
Title: Enable automatic user provisioning for multi-tenant applications in Azure
description: A guide for independent software vendors for enabling automated provisioning in Azure Active Directory -+
active-directory Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/known-issues.md
Title: Known issues for application provisioning in Azure Active Directory
description: Learn about known issues when you work with automated application provisioning in Azure Active Directory. -+
active-directory On Premises Application Provisioning Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-application-provisioning-architecture.md
Title: 'Azure AD on-premises application provisioning architecture | Microsoft D
description: Presents an overview of on-premises application provisioning architecture. -+
active-directory On Premises Ecma Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-ecma-troubleshoot.md
Title: 'Troubleshooting issues with provisioning to on-premises applications'
description: Describes how to troubleshoot various issues you might encounter when you install and use the ECMA Connector Host. -+
active-directory On Premises Ldap Connector Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-ldap-connector-configure.md
Title: Azure AD Provisioning to LDAP directories (preview)
description: This document describes how to configure Azure AD to provision users into an LDAP directory. -+
active-directory On Premises Migrate Microsoft Identity Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-migrate-microsoft-identity-manager.md
Title: 'Export a Microsoft Identity Manager connector for use with the Azure AD
description: Describes how to create and export a connector from MIM Sync to be used with the Azure AD ECMA Connector Host. -+
active-directory On Premises Scim Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-scim-provisioning.md
Title: Azure AD on-premises app provisioning to SCIM-enabled apps description: This article describes how to use the Azure AD provisioning service to provision users into an on-premises app that's SCIM enabled. -+
active-directory On Premises Sql Connector Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-sql-connector-configure.md
Title: Provisioning users into SQL based applications using the ECMA Connector h
description: Provisioning users into SQL based applications using the ECMA Connector host -+
active-directory Plan Auto User Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/plan-auto-user-provisioning.md
Title: Plan an automatic user provisioning deployment for Azure Active Directory
description: Guidance for planning and executing automatic user provisioning in Azure Active Directory -+
active-directory Plan Cloud Hr Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/plan-cloud-hr-provision.md
Title: Plan cloud HR application to Azure Active Directory user provisioning
description: This article describes the deployment process of integrating cloud HR systems, such as Workday and SuccessFactors, with Azure Active Directory. Integrating Azure AD with your cloud HR system results in a complete identity lifecycle management system. -+
active-directory Provision On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/provision-on-demand.md
Title: Provision a user on demand by using Azure Active Directory
description: Learn how to provision users on demand in Azure Active Directory. -+
active-directory Provisioning Agent Release Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/provisioning-agent-release-version-history.md
Title: Azure Active Directory Connect Provisioning Agent - Version release histo
description: This article lists all releases of Azure Active Directory Connect Provisioning Agent and describes new features and fixed issues. -+
active-directory Sap Successfactors Attribute Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/sap-successfactors-attribute-reference.md
Title: SAP SuccessFactors attribute reference for Azure Active Directory
description: Learn which attributes from SuccessFactors are supported by SuccessFactors-HR driven provisioning in Azure Active Directory. -+
active-directory Sap Successfactors Integration Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/sap-successfactors-integration-reference.md
Title: Azure Active Directory and SAP SuccessFactors integration reference
description: Technical deep dive into SAP SuccessFactors-HR driven provisioning for Azure Active Directory. -+
active-directory Scim Graph Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/scim-graph-scenarios.md
Title: Use SCIM, Microsoft Graph, and Azure Active Directory to provision users
description: Using SCIM and the Microsoft Graph together to provision users and enrich your application with the data it needs in Azure Active Directory. -+
active-directory Tutorial Ecma Sql Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/tutorial-ecma-sql-connector.md
Title: Azure AD Provisioning to SQL applications (preview)
description: This tutorial describes how to provision users from Azure AD into a SQL database. -+
active-directory Use Scim To Build Users And Groups Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/use-scim-to-build-users-and-groups-endpoints.md
Title: Build a SCIM endpoint for user provisioning to apps from Azure Active Dir
description: Learn to develop a SCIM endpoint, integrate your SCIM API with Azure Active Directory, and automatically provision users and groups into your cloud applications. -+
active-directory Use Scim To Provision Users And Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/use-scim-to-provision-users-and-groups.md
Title: Tutorial - Develop a SCIM endpoint for user provisioning to apps from Azu
description: System for Cross-domain Identity Management (SCIM) standardizes automatic user provisioning. In this tutorial, you learn to develop a SCIM endpoint, integrate your SCIM API with Azure Active Directory, and start automating provisioning users and groups into your cloud applications. -+
active-directory User Provisioning Sync Attributes For Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/user-provisioning-sync-attributes-for-mapping.md
Title: Synchronize attributes to Azure Active Directory for mapping
description: When configuring user provisioning with Azure Active Directory and SaaS apps, use the directory extension feature to add source attributes that aren't synchronized by default. -+
active-directory User Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/user-provisioning.md
Title: What is automated app user provisioning in Azure Active Directory description: An introduction to how you can use Azure Active Directory to automatically provision, de-provision, and continuously update user accounts across multiple third-party applications. -+
active-directory What Is Hr Driven Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/what-is-hr-driven-provisioning.md
Title: 'What is HR driven provisioning with Azure Active Directory? | Microsoft
description: Describes overview of HR driven provisioning. -+
active-directory Workday Attribute Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/workday-attribute-reference.md
Title: Workday attribute reference for Azure Active Directory
description: Learn which attributes you can fetch from Workday using XPATH queries in Azure Active Directory. -+
active-directory Workday Integration Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/workday-integration-reference.md
Title: Azure Active Directory and Workday integration reference
description: Technical deep dive into Workday-HR driven provisioning in Azure Active Directory -+
active-directory Active Directory App Proxy Protect Ndes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/active-directory-app-proxy-protect-ndes.md
Title: Integrate with Azure Active Directory Application Proxy on an NDES server
description: Guidance on deploying an Azure Active Directory Application Proxy to protect your NDES server. -+
active-directory Application Proxy Add On Premises Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-add-on-premises-application.md
Title: Tutorial - Add an on-premises app - Application Proxy in Azure Active Dir
description: Azure Active Directory (Azure AD) has an Application Proxy service that enables users to access on-premises applications by signing in with their Azure AD account. This tutorial shows you how to prepare your environment for use with Application Proxy. Then, it uses the Azure portal to add an on-premises application to your Azure AD tenant. -+
active-directory Application Proxy Back End Kerberos Constrained Delegation How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-back-end-kerberos-constrained-delegation-how-to.md
Title: Troubleshoot Kerberos constrained delegation - App Proxy
description: Troubleshoot Kerberos Constrained Delegation configurations for Application Proxy -+
active-directory Application Proxy Config How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-config-how-to.md
Title: How to configure an Azure Active Directory Application Proxy application
description: Learn how to create and configure an Azure Active Directory Application Proxy application in a few simple steps -+
active-directory Application Proxy Config Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-config-problem.md
Title: Problem creating an Azure Active Directory Application Proxy application
description: How to troubleshoot issues creating Application Proxy applications in the Azure Active Directory Admin portal -+
active-directory Application Proxy Config Sso How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-config-sso-how-to.md
Title: Understand single sign-on with an on-premises app using Application Proxy
description: Understand single sign-on with an on-premises app using Application Proxy. -+
active-directory Application Proxy Configure Complex Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-complex-application.md
Title: Complex applications for Azure Active Directory Application Proxy
description: Provides an understanding of complex application in Azure Active Directory Application Proxy, and how to configure one. -+
active-directory Application Proxy Configure Connectors With Proxy Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-connectors-with-proxy-servers.md
Title: Work with existing on-premises proxy servers and Azure Active Directory
description: Covers how to work with existing on-premises proxy servers with Azure Active Directory. -+
active-directory Application Proxy Configure Cookie Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-cookie-settings.md
Title: Application Proxy cookie settings - Azure Active Directory
description: Azure Active Directory (Azure AD) has access and session cookies for accessing on-premises applications through Application Proxy. In this article, you'll find out how to use and configure the cookie settings. -+
active-directory Application Proxy Configure Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-custom-domain.md
Title: Custom domains in Azure Active Directory Application Proxy
description: Configure and manage custom domains in Azure Active Directory Application Proxy. -+
active-directory Application Proxy Configure Custom Home Page https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-custom-home-page.md
Title: Custom home page for published apps - Azure Active Directory Application
description: Covers the basics about Azure Active Directory Application Proxy connectors -+
active-directory Application Proxy Configure For Claims Aware Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-for-claims-aware-applications.md
Title: Claims-aware apps - Azure Active Directory Application Proxy
description: How to publish on-premises ASP.NET applications that accept AD FS claims for secure remote access by your users. -+
active-directory Application Proxy Configure Hard Coded Link Translation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-hard-coded-link-translation.md
Title: Translate links and URLs Azure Active Directory Application Proxy
description: Learn how to redirect hard-coded links for apps published with Azure Active Directory Application Proxy. -+
active-directory Application Proxy Configure Native Client Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-native-client-application.md
Title: Publish native client apps - Azure Active Directory
description: Covers how to enable native client apps to communicate with Azure Active Directory Application Proxy Connector to provide secure remote access to your on-premises apps. -+
active-directory Application Proxy Configure Single Sign On On Premises Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-single-sign-on-on-premises-apps.md
Title: SAML single sign-on for on-premises apps with Azure Active Directory Appl
description: Learn how to provide single sign-on for on-premises applications that are secured with SAML authentication. Provide remote access to on-premises apps with Application Proxy. -+
active-directory Application Proxy Configure Single Sign On Password Vaulting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-single-sign-on-password-vaulting.md
Title: Single sign-on to apps with Azure Active Directory Application Proxy
description: Turn on single sign-on for your published on-premises applications with Azure Active Directory Application Proxy in the Azure portal. -+
active-directory Application Proxy Configure Single Sign On With Headers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-single-sign-on-with-headers.md
Title: Header-based single sign-on for on-premises apps with Azure AD App Proxy
description: Learn how to provide single sign-on for on-premises applications that are secured with header-based authentication. -+
active-directory Application Proxy Configure Single Sign On With Kcd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-single-sign-on-with-kcd.md
Title: Kerberos-based single sign-on (SSO) in Azure Active Directory with Applic
description: Covers how to provide single sign-on using Azure Active Directory Application Proxy. -+
active-directory Application Proxy Connectivity No Working Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-connectivity-no-working-connector.md
Title: No working connector group found for an Azure Active Directory Applicatio
description: Address problems you might encounter when there is no working Connector in a Connector Group for your application with the Azure Active Directory Application Proxy -+
active-directory Application Proxy Connector Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-connector-groups.md
Title: Publish apps on separate networks via connector groups - Azure Active Dir
description: Covers how to create and manage groups of connectors in Azure Active Directory Application Proxy. -+
active-directory Application Proxy Connector Installation Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-connector-installation-problem.md
Title: Problem installing the Azure Active Directory Application Proxy Agent Con
description: How to troubleshoot issues you might face when installing the Application Proxy Agent Connector for Azure Active Directory. -+
active-directory Application Proxy Connectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-connectors.md
Title: Understand Azure Active Directory Application Proxy connectors
description: Learn about the Azure Active Directory Application Proxy connectors. -+
active-directory Application Proxy Debug Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-debug-apps.md
Title: Debug Application Proxy applications - Azure Active Directory
description: Debug issues with Azure Active Directory (Azure AD) Application Proxy applications. -+
active-directory Application Proxy Debug Connectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-debug-connectors.md
Title: Debug Application Proxy connectors - Azure Active Directory
description: Debug issues with Azure Active Directory (Azure AD) Application Proxy connectors. -+
active-directory Application Proxy Deployment Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-deployment-plan.md
Title: Plan an Azure Active Directory Application Proxy Deployment
description: An end-to-end guide for planning the deployment of Application proxy within your organization -+
active-directory Application Proxy High Availability Load Balancing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-high-availability-load-balancing.md
Title: High availability and load balancing - Azure Active Directory Application
description: How traffic distribution works with your Application Proxy deployment. Includes tips for how to optimize connector performance and use load balancing for back-end servers. -+
active-directory Application Proxy Integrate With Microsoft Cloud Application Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-integrate-with-microsoft-cloud-application-security.md
Title: Use Application Proxy to integrate on-premises apps with Defender for Cloud Apps - Azure Active Directory description: Configure an on-premises application in Azure Active Directory to work with Microsoft Defender for Cloud Apps. Use the Defender for Cloud Apps Conditional Access App Control to monitor and control sessions in real-time based on Conditional Access policies. You can apply these policies to on-premises applications that use Application Proxy in Azure Active Directory (Azure AD). -+
active-directory Application Proxy Integrate With Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-integrate-with-power-bi.md
Title: Enable remote access to Power BI with Azure Active Directory Application
description: Covers the basics about how to integrate an on-premises Power BI with Azure Active Directory Application Proxy. -+
active-directory Application Proxy Integrate With Remote Desktop Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-integrate-with-remote-desktop-services.md
Title: Publish Remote Desktop with Azure Active Directory Application Proxy
description: Covers how to configure App Proxy with Remote Desktop Services (RDS) -+ Previously updated : 07/12/2021 Last updated : 05/19/2022
The configuration outlined in this article is for access to RDS via RD Web or th
| Authentication method | Supported client configuration |
| | |
-| Pre-authentication | RD Web- Windows 7/10 using Internet Explorer* or [Edge Chromium IE mode](/deployedge/edge-ie-mode) + RDS ActiveX add-on |
+| Pre-authentication | RD Web- Windows 7/10/11 using Internet Explorer* or [Edge Chromium IE mode](/deployedge/edge-ie-mode) + RDS ActiveX add-on |
| Pre-authentication | RD Web Client- HTML5-compatible web browser such as Microsoft Edge, Internet Explorer 11, Google Chrome, Safari, or Mozilla Firefox (v55.0 and later) |
| Passthrough | Any other operating system that supports the Microsoft Remote Desktop application |
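For readers who script the publishing step rather than use the portal, the sketch below shows how an RD Web application with Azure AD pre-authentication (the mode the first rows of the table above rely on) might be created with the AzureAD PowerShell module. The display name and URLs are placeholders, and this is an illustrative sketch, not the article's own procedure.

```powershell
# Hedged sketch: publish RD Web through Application Proxy with Azure AD pre-authentication.
# Assumes the AzureAD PowerShell module is installed and Connect-AzureAD has been run;
# the display name and URLs below are placeholders.
Connect-AzureAD

New-AzureADApplicationProxyApplication `
    -DisplayName "Remote Desktop Web (sketch)" `
    -ExternalUrl "https://rdweb-contoso.msappproxy.net/" `
    -InternalUrl "https://rdweb.contoso.local/" `
    -ExternalAuthenticationType AadPreAuthentication
```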
active-directory Application Proxy Integrate With Sharepoint Server Saml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-integrate-with-sharepoint-server-saml.md
Title: Publish an on-premises SharePoint farm with Azure Active Directory Applic
description: Covers the basics about how to integrate an on-premises SharePoint farm with Azure Active Directory Application Proxy for SAML. -+
active-directory Application Proxy Integrate With Sharepoint Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-integrate-with-sharepoint-server.md
Title: Enable remote access to SharePoint - Azure Active Directory Application P
description: Covers the basics about how to integrate on-premises SharePoint Server with Azure Active Directory Application Proxy. -+
active-directory Application Proxy Integrate With Tableau https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-integrate-with-tableau.md
Title: Azure Active Directory Application Proxy and Tableau
description: Learn how to use Azure Active Directory (Azure AD) Application Proxy to provide remote access for your Tableau deployment. -+
active-directory Application Proxy Integrate With Teams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-integrate-with-teams.md
Title: Access Azure Active Directory Application Proxy apps in Teams
description: Use Azure Active Directory Application Proxy to access your on-premises application through Microsoft Teams. -+
active-directory Application Proxy Network Topology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-network-topology.md
Title: Network topology considerations for Azure Active Directory Application Pr
description: Covers network topology considerations when using Azure Active Directory Application Proxy. -+
active-directory Application Proxy Page Appearance Broken Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-page-appearance-broken-problem.md
Title: App page doesn't display correctly for Application Proxy app
description: Guidance when the page isn't displaying correctly in an Application Proxy Application you have integrated with Azure Active Directory -+
active-directory Application Proxy Page Links Broken Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-page-links-broken-problem.md
Title: Links on the page don't work for an Azure Active Directory Application Pr
description: How to troubleshoot issues with broken links on Application Proxy applications you have integrated with Azure Active Directory -+
active-directory Application Proxy Page Load Speed Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-page-load-speed-problem.md
Title: An Azure Active Directory Application Proxy application takes too long to
description: Troubleshoot page load performance issues with Azure Active Directory Application Proxy -+
active-directory Application Proxy Ping Access Publishing Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-ping-access-publishing-guide.md
Title: Header-based authentication with PingAccess for Azure Active Directory Ap
description: Publish applications with PingAccess and App Proxy to support header-based authentication. -+
active-directory Application Proxy Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-powershell-samples.md
Title: PowerShell samples for Azure Active Directory Application Proxy
description: Use these PowerShell samples for Azure Active Directory Application Proxy to get information about Application Proxy apps and connectors in your directory, assign users and groups to apps, and get certificate information. -+
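As a quick illustration of the kind of inventory these samples automate, the sketch below lists connector groups and connectors with the AzureAD PowerShell module. It is a minimal sketch, assuming the AzureAD module and directory permissions to read Application Proxy configuration; see the published samples for the full scripts.

```powershell
# Minimal sketch: enumerate Application Proxy connector groups and connectors.
# Assumes the AzureAD PowerShell module and an account that can read App Proxy settings.
Connect-AzureAD

# All connector groups in the directory.
Get-AzureADApplicationProxyConnectorGroup | Format-Table Name, Id, IsDefault

# All connectors, with the machine they run on and their current status.
Get-AzureADApplicationProxyConnector | Format-Table MachineName, ExternalIp, Status
```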
active-directory Application Proxy Qlik https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-qlik.md
Title: Azure Active Directory Application Proxy and Qlik Sense
description: Integrate Azure Active Directory Application Proxy with Qlik Sense. -+
active-directory Application Proxy Register Connector Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-register-connector-powershell.md
Title: Silent install Azure Active Directory Application Proxy connector
description: Covers how to perform an unattended installation of Azure Active Directory Application Proxy Connector to provide secure remote access to your on-premises apps. -+
active-directory Application Proxy Release Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-release-version-history.md
Title: 'Azure Active Directory Application Proxy: Version release history'
description: This article lists all releases of Azure Active Directory Application Proxy and describes new features and fixed issues. -+
active-directory Application Proxy Remove Personal Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-remove-personal-data.md
Title: Remove personal data - Azure Active Directory Application Proxy description: Remove personal data from connectors installed on devices for Azure Active Directory Application Proxy. -+
active-directory Application Proxy Secure Api Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-secure-api-access.md
Title: Access on-premises APIs with Azure Active Directory Application Proxy
description: Azure Active Directory's Application Proxy lets native apps securely access APIs and business logic you host on-premises or on cloud VMs. -+
active-directory Application Proxy Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-security.md
Title: Security considerations for Azure Active Directory Application Proxy
description: Covers security considerations for using Azure AD Application Proxy -+
active-directory Application Proxy Sign In Bad Gateway Timeout Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-sign-in-bad-gateway-timeout-error.md
Title: Can't access this Corporate Application error with Azure Active Directory
description: How to resolve common access issues with Azure Active Directory Application Proxy applications. -+
active-directory Application Proxy Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-troubleshoot.md
Title: Troubleshoot Azure Active Directory Application Proxy
description: Covers how to troubleshoot errors in Azure Active Directory Application Proxy. -+
active-directory Application Proxy Understand Cors Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-understand-cors-issues.md
Title: Understand and solve Azure Active Directory Application Proxy CORS issues
description: Provides an understanding of CORS in Azure Active Directory Application Proxy, and how to identify and solve CORS issues. -+
active-directory Application Proxy Wildcard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-wildcard.md
Title: Wildcard applications in Azure Active Directory Application Proxy
description: Learn how to use Wildcard applications in Azure Active Directory Application Proxy. -+
active-directory Application Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy.md
Title: Remote access to on-premises apps - Azure AD Application Proxy
description: Azure Active Directory's Application Proxy provides secure remote access to on-premises web applications. After a single sign-on to Azure AD, users can access both cloud and on-premises applications through an external URL or an internal application portal. For example, Application Proxy can provide remote access and single sign-on to Remote Desktop, SharePoint, Teams, Tableau, Qlik, and line of business (LOB) applications. -+
active-directory Application Sign In Problem On Premises Application Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-sign-in-problem-on-premises-application-proxy.md
Title: Problem signing in to on-premises app using Azure Active Directory Applic
description: Troubleshooting common issues faced when you are unable to sign in to an on-premises application integrated using the Azure Active Directory Application Proxy -+
active-directory Powershell Assign Group To App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-assign-group-to-app.md
Title: PowerShell sample - Assign group to an Azure Active Directory Application
description: PowerShell example that assigns a group to an Azure Active Directory (Azure AD) Application Proxy application. -+
active-directory Powershell Assign User To App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-assign-user-to-app.md
Title: PowerShell sample - Assign user to an Azure Active Directory Application
description: PowerShell example that assigns a user to an Azure Active Directory (Azure AD) Application Proxy application. -+
active-directory Powershell Display Users Group Of App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-display-users-group-of-app.md
Title: PowerShell sample - List users & groups for an Azure Active Directory App
description: PowerShell example that lists all the users and groups assigned to a specific Azure Active Directory (Azure AD) Application Proxy application. -+
active-directory Powershell Get All App Proxy Apps Basic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-app-proxy-apps-basic.md
Title: PowerShell sample - List basic info for Application Proxy apps
description: PowerShell example that lists Azure Active Directory (Azure AD) Application Proxy applications along with the application ID (AppId), name (DisplayName), and object ID (ObjId). -+
active-directory Powershell Get All App Proxy Apps By Connector Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-app-proxy-apps-by-connector-group.md
Title: List Azure Active Directory Application Proxy connector groups for apps
description: PowerShell example that lists all Azure Active Directory (Azure AD) Application Proxy Connector groups with the assigned applications. -+
active-directory Powershell Get All App Proxy Apps Extended https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-app-proxy-apps-extended.md
Title: PowerShell sample - List extended info for Azure Active Directory Applica
description: PowerShell example that lists all Azure Active Directory (Azure AD) Application Proxy applications along with the application ID (AppId), name (DisplayName), external URL (ExternalUrl), internal URL (InternalUrl), and authentication type (ExternalAuthenticationType). -+
active-directory Powershell Get All App Proxy Apps With Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-app-proxy-apps-with-policy.md
Title: PowerShell sample - List all Azure Active Directory Application Proxy app
description: PowerShell example that lists all Azure Active Directory (Azure AD) Application Proxy applications in your directory that have a lifetime token policy. -+
active-directory Powershell Get All Connectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-connectors.md
Title: PowerShell sample - List all Azure Active Directory Application Proxy con
description: PowerShell example that lists all Azure Active Directory (Azure AD) Application Proxy connector groups and connectors in your directory. -+
active-directory Powershell Get All Custom Domain No Cert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-custom-domain-no-cert.md
Title: PowerShell sample - Azure Active Directory Application Proxy apps with no
description: PowerShell example that lists all Azure Active Directory (Azure AD) Application Proxy applications that are using custom domains but do not have a valid TLS/SSL certificate uploaded. -+
active-directory Powershell Get All Custom Domains And Certs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-custom-domains-and-certs.md
Title: PowerShell sample - Azure Active Directory Application Proxy apps using c
description: PowerShell example that lists all Azure Active Directory (Azure AD) Application Proxy applications that are using custom domains and certificate information. -+
active-directory Powershell Get All Default Domain Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-default-domain-apps.md
Title: PowerShell sample - Azure Active Directory Application Proxy apps using d
description: PowerShell example that lists all Azure Active Directory (Azure AD) Application Proxy applications that are using default domains (.msappproxy.net). -+
active-directory Powershell Get All Wildcard Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-wildcard-apps.md
Title: PowerShell sample - List Azure Active Directory Application Proxy apps us
description: PowerShell example that lists all Azure Active Directory (Azure AD) Application Proxy applications that are using wildcards. -+
active-directory Powershell Get Custom Domain Identical Cert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-custom-domain-identical-cert.md
Title: PowerShell sample - Azure Active Directory Application Proxy apps with id
description: PowerShell example that lists all Azure Active Directory (Azure AD) Application Proxy applications that are published with the identical certificate. -+
active-directory Powershell Get Custom Domain Replace Cert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-custom-domain-replace-cert.md
Title: PowerShell sample - Replace certificate in Azure Active Directory Applica
description: PowerShell example that bulk replaces a certificate across Azure Active Directory (Azure AD) Application Proxy applications. -+
active-directory Powershell Move All Apps To Connector Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-move-all-apps-to-connector-group.md
Title: PowerShell sample - Move Azure Active Directory Application Proxy apps to
description: Azure Active Directory (Azure AD) Application Proxy PowerShell example used to move all applications currently assigned to a connector group to a different connector group. -+
active-directory What Is Application Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/what-is-application-proxy.md
Title: Publish on-premises apps with Azure Active Directory Application Proxy
description: Understand why to use Application Proxy to publish on-premises web applications externally to remote users. Learn about Application Proxy architecture, connectors, authentication methods, and security benefits. -+
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/whats-new-docs.md
-+ # Azure Active Directory application proxy: What's new
active-directory Concept Registration Mfa Sspr Combined https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-registration-mfa-sspr-combined.md
Previously updated : 03/1/2022 Last updated : 05/24/2022
Users can set one of the following options as the default Multi-Factor Authentic
- Phone call
- Text message
+>[!NOTE]
+>Virtual phone numbers are not supported for Voice calls or SMS messages.
+
Third-party authenticator apps do not provide push notifications. As we continue to add more authentication methods to Azure AD, those methods become available in combined registration.
## Combined registration modes
active-directory Fido2 Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/fido2-compatibility.md
This table shows support for authenticating Azure Active Directory (Azure AD) an
|::|::|::|::|::|::|::|::|::|::|::|::|::|
| | USB | NFC | BLE | USB | NFC | BLE | USB | NFC | BLE | USB | NFC | BLE |
| **Windows** | ![Chrome supports USB on Windows for Azure AD accounts.][y] | ![Chrome supports NFC on Windows for Azure AD accounts.][y] | ![Chrome supports BLE on Windows for Azure AD accounts.][y] | ![Edge supports USB on Windows for Azure AD accounts.][y] | ![Edge supports NFC on Windows for Azure AD accounts.][y] | ![Edge supports BLE on Windows for Azure AD accounts.][y] | ![Firefox supports USB on Windows for Azure AD accounts.][y] | ![Firefox supports NFC on Windows for Azure AD accounts.][y] | ![Firefox supports BLE on Windows for Azure AD accounts.][y] | ![Safari supports USB on Windows for Azure AD accounts.][n] | ![Safari supports NFC on Windows for Azure AD accounts.][n] | ![Safari supports BLE on Windows for Azure AD accounts.][n] |
-| **macOS** | ![Chrome supports USB on macOS for Azure AD accounts.][y] | ![Chrome supports NFC on macOS for Azure AD accounts.][n] | ![Chrome supports BLE on macOS for Azure AD accounts.][n] | ![Edge supports USB on macOS for Azure AD accounts.][y] | ![Edge supports NFC on macOS for Azure AD accounts.][n] | ![Edge supports BLE on macOS for Azure AD accounts.][n] | ![Firefox supports USB on macOS for Azure AD accounts.][y] | ![Firefox supports NFC on macOS for Azure AD accounts.][n] | ![Firefox supports BLE on macOS for Azure AD accounts.][n] | ![Safari supports USB on macOS for Azure AD accounts.][n] | ![Safari supports NFC on macOS for Azure AD accounts.][n] | ![Safari supports BLE on macOS for Azure AD accounts.][n] |
+| **macOS** | ![Chrome supports USB on macOS for Azure AD accounts.][y] | ![Chrome supports NFC on macOS for Azure AD accounts.][n] | ![Chrome supports BLE on macOS for Azure AD accounts.][n] | ![Edge supports USB on macOS for Azure AD accounts.][y] | ![Edge supports NFC on macOS for Azure AD accounts.][n] | ![Edge supports BLE on macOS for Azure AD accounts.][n] | ![Firefox supports USB on macOS for Azure AD accounts.][n] | ![Firefox supports NFC on macOS for Azure AD accounts.][n] | ![Firefox supports BLE on macOS for Azure AD accounts.][n] | ![Safari supports USB on macOS for Azure AD accounts.][n] | ![Safari supports NFC on macOS for Azure AD accounts.][n] | ![Safari supports BLE on macOS for Azure AD accounts.][n] |
| **ChromeOS** | ![Chrome supports USB on ChromeOS for Azure AD accounts.][y] | ![Chrome supports NFC on ChromeOS for Azure AD accounts.][n] | ![Chrome supports BLE on ChromeOS for Azure AD accounts.][n] | ![Edge supports USB on ChromeOS for Azure AD accounts.][n] | ![Edge supports NFC on ChromeOS for Azure AD accounts.][n] | ![Edge supports BLE on ChromeOS for Azure AD accounts.][n] | ![Firefox supports USB on ChromeOS for Azure AD accounts.][n] | ![Firefox supports NFC on ChromeOS for Azure AD accounts.][n] | ![Firefox supports BLE on ChromeOS for Azure AD accounts.][n] | ![Safari supports USB on ChromeOS for Azure AD accounts.][n] | ![Safari supports NFC on ChromeOS for Azure AD accounts.][n] | ![Safari supports BLE on ChromeOS for Azure AD accounts.][n] |
| **Linux** | ![Chrome supports USB on Linux for Azure AD accounts.][y] | ![Chrome supports NFC on Linux for Azure AD accounts.][n] | ![Chrome supports BLE on Linux for Azure AD accounts.][n] | ![Edge supports USB on Linux for Azure AD accounts.][n] | ![Edge supports NFC on Linux for Azure AD accounts.][n] | ![Edge supports BLE on Linux for Azure AD accounts.][n] | ![Firefox supports USB on Linux for Azure AD accounts.][n] | ![Firefox supports NFC on Linux for Azure AD accounts.][n] | ![Firefox supports BLE on Linux for Azure AD accounts.][n] | ![Safari supports USB on Linux for Azure AD accounts.][n] | ![Safari supports NFC on Linux for Azure AD accounts.][n] | ![Safari supports BLE on Linux for Azure AD accounts.][n] |
| **iOS** | ![Chrome supports USB on iOS for Azure AD accounts.][n] | ![Chrome supports NFC on iOS for Azure AD accounts.][n] | ![Chrome supports BLE on iOS for Azure AD accounts.][n] | ![Edge supports USB on iOS for Azure AD accounts.][n] | ![Edge supports NFC on Linux for Azure AD accounts.][n] | ![Edge supports BLE on Linux for Azure AD accounts.][n] | ![Firefox supports USB on Linux for Azure AD accounts.][n] | ![Firefox supports NFC on iOS for Azure AD accounts.][n] | ![Firefox supports BLE on iOS for Azure AD accounts.][n] | ![Safari supports USB on iOS for Azure AD accounts.][n] | ![Safari supports NFC on iOS for Azure AD accounts.][n] | ![Safari supports BLE on iOS for Azure AD accounts.][n] |
active-directory Howto Authentication Passwordless Security Key On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-security-key-on-premises.md
Make sure that enough DCs are patched to respond in time to service your resourc
> [!NOTE]
> The `/keylist` switch in the `nltest` command is available in client Windows 10 v2004 and later.
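For context, a hedged sketch of the client-side check that note refers to is shown below; run it from an elevated prompt on a domain-joined Windows 10 2004 or later client. The domain name is a placeholder and the exact switch combination is an assumption based on common `nltest` usage, not quoted from the article.

```powershell
# Hedged sketch: ask for a domain controller and request the key list,
# which only patched DCs can service. Replace contoso.com with your AD domain;
# the /dsgetdc and /kdc switches are assumptions based on common nltest usage.
nltest /dsgetdc:contoso.com /keylist /kdc
```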
-### What if I have a CloudTGT but it never gets exchange for a OnPremTGT when I am using Windows Hello for Business Cloud Trust?
-
-Make sure that the user you are signed in as, is a member of the groups of users that can use FIDO2 as an authentication method, or enable it for all users.
-
-> [!NOTE]
-> Even if you are not explicitly using a security key to sign-in to your device, the underlying technology is dependent on the FIDO2 infrastructure requirements.
-
### Do FIDO2 security keys work in a Windows login with RODC present in the hybrid environment?
A FIDO2 Windows login looks for a writable DC to exchange the user TGT. As long as you have at least one writable DC per site, the login works fine.
## Next steps
-[Learn more about passwordless authentication](concept-authentication-passwordless.md)
+[Learn more about passwordless authentication](concept-authentication-passwordless.md)
active-directory Cloudknox All Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-all-reports.md
Title: View a list and description of all system reports available in CloudKnox Permissions Management reports description: View a list and description of all system reports available in CloudKnox Permissions Management. --++ Last updated 02/23/2022-+ # View a list and description of system reports
active-directory Cloudknox Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-faqs.md
Title: Frequently asked questions (FAQs) about CloudKnox Permissions Management description: Frequently asked questions (FAQs) about CloudKnox Permissions Management. --++ Last updated 04/20/2022-+ # Frequently asked questions (FAQs)
active-directory Cloudknox Howto Add Remove Role Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-add-remove-role-task.md
Title: Add and remove roles and tasks for groups, users, and service accounts for Microsoft Azure and Google Cloud Platform (GCP) identities in the Remediation dashboard in CloudKnox Permissions Management description: How to attach and detach permissions for groups, users, and service accounts for Microsoft Azure and Google Cloud Platform (GCP) identities in the Remediation dashboard in CloudKnox Permissions Management. --++ Last updated 02/23/2022-+ # Add and remove roles and tasks for Microsoft Azure and Google Cloud Platform (GCP) identities
active-directory Cloudknox Howto Attach Detach Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-attach-detach-permissions.md
Title: Attach and detach permissions for users, roles, and groups for Amazon Web Services (AWS) identities in the Remediation dashboard in CloudKnox Permissions Management description: How to attach and detach permissions for users, roles, and groups for Amazon Web Services (AWS) identities in the Remediation dashboard in CloudKnox Permissions Management. --++ Last updated 02/23/2022-+ # Attach and detach policies for Amazon Web Services (AWS) identities
active-directory Cloudknox Howto Audit Trail Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-audit-trail-results.md
Title: Generate an on-demand report from a query in the Audit dashboard in CloudKnox Permissions Management description: How to generate an on-demand report from a query in the **Audit** dashboard in CloudKnox Permissions Management. --++ Last updated 02/23/2022-+ # Generate an on-demand report from a query
active-directory Cloudknox Howto Clone Role Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-clone-role-policy.md
Title: Clone a role/policy in the Remediation dashboard in CloudKnox Permissions Management description: How to clone a role/policy in the Just Enough Permissions (JEP) Controller. --++ Last updated 02/23/2022-+ # Clone a role/policy in the Remediation dashboard
active-directory Cloudknox Howto Create Alert Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-create-alert-trigger.md
Title: Create and view activity alerts and alert triggers in CloudKnox Permissions Management description: How to create and view activity alerts and alert triggers in CloudKnox Permissions Management. --++ Last updated 02/23/2022-+ # Create and view activity alerts and alert triggers
active-directory Cloudknox Howto Create Approve Privilege Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-create-approve-privilege-request.md
Title: Create or approve a request for permissions in the Remediation dashboard in CloudKnox Permissions Management description: How to create or approve a request for permissions in the Remediation dashboard. --++ Last updated 02/23/2022-+ # Create or approve a request for permissions
active-directory Cloudknox Howto Create Custom Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-create-custom-queries.md
Title: Create a custom query in CloudKnox Permissions Management description: How to create a custom query in the Audit dashboard in CloudKnox Permissions Management. --++ Last updated 02/23/2022-+ # Create a custom query
active-directory Cloudknox Howto Create Group Based Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-create-group-based-permissions.md
Title: Select group-based permissions settings in CloudKnox Permissions Management with the User management dashboard description: How to select group-based permissions settings in CloudKnox Permissions Management with the User management dashboard. --++ Last updated 02/23/2022-+ # Select group-based permissions settings
active-directory Cloudknox Howto Create Role Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-create-role-policy.md
Title: Create a role/policy in the Remediation dashboard in CloudKnox Permissions Management description: How to create a role/policy in the Remediation dashboard in CloudKnox Permissions Management. --++ Last updated 02/23/2022-+ # Create a role/policy in the Remediation dashboard
active-directory Cloudknox Howto Create Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-create-rule.md
Title: Create a rule in the Autopilot dashboard in CloudKnox Permissions Management description: How to create a rule in the Autopilot dashboard in CloudKnox Permissions Management. --++ Last updated 02/23/2022-+ # Create a rule in the Autopilot dashboard
active-directory Cloudknox Howto Delete Role Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-delete-role-policy.md
Title: Delete a role/policy in the Remediation dashboard in CloudKnox Permissions Management description: How to delete a role/policy in the Just Enough Permissions (JEP) Controller. --++ Last updated 02/23/2022-+ # Delete a role/policy in the Remediation dashboard
active-directory Cloudknox Howto Modify Role Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-modify-role-policy.md
Title: Modify a role/policy in the Remediation dashboard in CloudKnox Permissions Management description: How to modify a role/policy in the Remediation dashboard in CloudKnox Permissions Management. --++ Last updated 02/23/2022-+ # Modify a role/policy in the Remediation dashboard
active-directory Cloudknox Howto Notifications Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-notifications-rule.md
Title: View notification settings for a rule in the Autopilot dashboard in CloudKnox Permissions Management description: How to view notification settings for a rule in the Autopilot dashboard in CloudKnox Permissions Management. --++ Last updated 02/23/2022-+ # View notification settings for a rule in the Autopilot dashboard
active-directory Cloudknox Howto Recommendations Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-recommendations-rule.md
Title: Generate, view, and apply rule recommendations in the Autopilot dashboard in CloudKnox Permissions Management description: How to generate, view, and apply rule recommendations in the Autopilot dashboard in CloudKnox Permissions Management. --++ Last updated 02/23/2022-+ # Generate, view, and apply rule recommendations in the Autopilot dashboard
active-directory Cloudknox Howto Revoke Task Readonly Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-revoke-task-readonly-status.md
Title: Revoke access to high-risk and unused tasks or assign read-only status for Microsoft Azure and Google Cloud Platform (GCP) identities in the Remediation dashboard in CloudKnox Permissions Management description: How to revoke access to high-risk and unused tasks or assign read-only status for Microsoft Azure and Google Cloud Platform (GCP) identities in the Remediation dashboard in CloudKnox Permissions Management. --++ Last updated 02/23/2022-+ # Revoke access to high-risk and unused tasks or assign read-only status for Microsoft Azure and Google Cloud Platform (GCP) identities
active-directory Cloudknox Howto View Role Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-view-role-policy.md
Title: View information about roles/ policies in the Remediation dashboard in CloudKnox Permissions Management description: How to view and filter information about roles/ policies in the Remediation dashboard in CloudKnox Permissions Management. --++ Last updated 02/23/2022-+ # View information about roles/ policies in the Remediation dashboard
active-directory Cloudknox Integration Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-integration-api.md
Title: Set and view configuration settings in CloudKnox Permissions Management description: How to view the CloudKnox Permissions Management API integration settings and create service accounts and roles. --++ Last updated 02/23/2022-+ # Set and view configuration settings
active-directory Cloudknox Multi Cloud Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-multi-cloud-glossary.md
Title: CloudKnox Permissions Management - The CloudKnox glossary description: CloudKnox Permissions Management glossary --++ Last updated 02/23/2022-+ # The CloudKnox glossary
active-directory Cloudknox Onboard Add Account After Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-onboard-add-account-after-onboarding.md
Title: Add an account/ subscription/ project to Microsoft CloudKnox Permissions Management after onboarding is complete description: How to add an account/ subscription/ project to Microsoft CloudKnox Permissions Management after onboarding is complete. --++ Last updated 02/23/2022-+ # Add an account/ subscription/ project after onboarding is complete
active-directory Cloudknox Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-onboard-aws.md
Title: Onboard an Amazon Web Services (AWS) account on CloudKnox Permissions Management description: How to onboard an Amazon Web Services (AWS) account on CloudKnox Permissions Management. --++ Last updated 04/20/2022-+ # Onboard an Amazon Web Services (AWS) account
active-directory Cloudknox Onboard Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-onboard-azure.md
Title: Onboard a Microsoft Azure subscription in CloudKnox Permissions Management description: How to onboard a Microsoft Azure subscription on CloudKnox Permissions Management. --++ Last updated 04/20/2022-+ # Onboard a Microsoft Azure subscription
active-directory Cloudknox Onboard Enable Controller After Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-onboard-enable-controller-after-onboarding.md
Title: Enable or disable the controller in Microsoft CloudKnox Permissions Management after onboarding is complete description: How to enable or disable the controller in Microsoft CloudKnox Permissions Management after onboarding is complete. --++ Last updated 02/23/2022-+ # Enable or disable the controller after onboarding is complete
active-directory Cloudknox Onboard Enable Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-onboard-enable-tenant.md
Title: Enable CloudKnox Permissions Management in your organization description: How to enable CloudKnox Permissions Management in your organization. --++ Last updated 04/20/2022-+ # Enable CloudKnox in your organization
active-directory Cloudknox Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-onboard-gcp.md
Title: Onboard a Google Cloud Platform (GCP) project in CloudKnox Permissions Management description: How to onboard a Google Cloud Platform (GCP) project on CloudKnox Permissions Management. --++ Last updated 04/20/2022-+ # Onboard a Google Cloud Platform (GCP) project
active-directory Cloudknox Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-overview.md
Title: What's CloudKnox Permissions Management? description: An introduction to CloudKnox Permissions Management. --++ Last updated 04/20/2022-+ # What's CloudKnox Permissions Management?
active-directory Cloudknox Product Account Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-account-explorer.md
Title: The CloudKnox Permissions Management - View roles and identities that can access account information from an external account description: How to view information about identities that can access accounts from an external account in CloudKnox Permissions Management. -+ -+ Last updated 02/23/2022-+ # View roles and identities that can access account information from an external account
active-directory Cloudknox Product Account Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-account-settings.md
Title: View personal and organization information in CloudKnox Permissions Management description: How to view personal and organization information in the Account settings dashboard in CloudKnox Permissions Management. -+ -+ Last updated 02/23/2022-+ # View personal and organization information
active-directory Cloudknox Product Audit Trail https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-audit-trail.md
Title: Filter and query user activity in CloudKnox Permissions Management description: How to filter and query user activity in CloudKnox Permissions Management. --++ Last updated 02/23/2022-+ # Filter and query user activity
active-directory Cloudknox Product Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-dashboard.md
Title: View data about the activity in your authorization system in CloudKnox Permissions Management description: How to view data about the activity in your authorization system in the CloudKnox Dashboard in CloudKnox Permissions Management. --++ Last updated 02/23/2022-+
active-directory Cloudknox Product Data Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-data-inventory.md
Title: CloudKnox Permissions Management - Display an inventory of created resources and licenses for your authorization system description: How to display an inventory of created resources and licenses for your authorization system in CloudKnox Permissions Management. --++ Last updated 02/23/2022-+ # Display an inventory of created resources and licenses for your authorization system
active-directory Cloudknox Product Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-data-sources.md
Title: View and configure settings for data collection from your authorization system in CloudKnox Permissions Management description: How to view and configure settings for collecting data from your authorization system in CloudKnox Permissions Management. --++ Last updated 02/23/2022-+ # View and configure settings for data collection
active-directory Cloudknox Product Define Permission Levels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-define-permission-levels.md
Title: Define and manage users, roles, and access levels in CloudKnox Permissions Management description: How to define and manage users, roles, and access levels in CloudKnox Permissions Management User management dashboard. --++ Last updated 02/23/2022-+ # Define and manage users, roles, and access levels
active-directory Cloudknox Product Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-integrations.md
Title: View integration information about an authorization system in CloudKnox Permissions Management description: View integration information about an authorization system in CloudKnox Permissions Management. --++ Last updated 02/23/2022-+ # View integration information about an authorization system
active-directory Cloudknox Product Permission Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-permission-analytics.md
Title: Create and view permission analytics triggers in CloudKnox Permissions Management description: How to create and view permission analytics triggers in the Permission analytics tab in CloudKnox Permissions Management. --++ Last updated 02/23/2022-+ # Create and view permission analytics triggers
active-directory Cloudknox Product Permissions Analytics Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-permissions-analytics-reports.md
Title: Generate and download the Permissions analytics report in CloudKnox Permissions Management description: How to generate and download the Permissions analytics report in CloudKnox Permissions Management. --++ Last updated 02/23/2022-+ # Generate and download the Permissions analytics report
active-directory Cloudknox Product Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-reports.md
Title: View system reports in the Reports dashboard in CloudKnox Permissions Management description: How to view system reports in the Reports dashboard in CloudKnox Permissions Management. --++ Last updated 02/23/2022-+ # View system reports in the Reports dashboard
active-directory Cloudknox Product Rule Based Anomalies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-rule-based-anomalies.md
Title: Create and view rule-based anomalies and anomaly triggers in CloudKnox Permissions Management description: How to create and view rule-based anomalies and anomaly triggers in CloudKnox Permissions Management. --++ Last updated 02/23/2022-+ # Create and view rule-based anomaly alerts and anomaly triggers
active-directory Cloudknox Product Statistical Anomalies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-statistical-anomalies.md
Title: Create and view statistical anomalies and anomaly triggers in CloudKnox Permissions Management description: How to create and view statistical anomalies and anomaly triggers in the Statistical Anomaly tab in CloudKnox Permissions Management. --++ Last updated 02/23/2022-+ # Create and view statistical anomalies and anomaly triggers
active-directory Cloudknox Report Create Custom Report https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-report-create-custom-report.md
Title: Create, view, and share a custom report in CloudKnox Permissions Management description: How to create, view, and share a custom report in CloudKnox Permissions Management. --++ Last updated 02/23/2022-+ # Create, view, and share a custom report
active-directory Cloudknox Report View System Report https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-report-view-system-report.md
Title: Generate and view a system report in CloudKnox Permissions Management description: How to generate and view a system report in CloudKnox Permissions Management. --++ Last updated 02/23/2022-+ # Generate and view a system report
active-directory Cloudknox Training Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-training-videos.md
Title: CloudKnox Permissions Management training videos description: CloudKnox Permissions Management training videos. --++ Last updated 04/20/2022-+ # CloudKnox Permissions Management training videos
active-directory Cloudknox Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-troubleshoot.md
Title: Troubleshoot issues with CloudKnox Permissions Management description: Troubleshoot issues with CloudKnox Permissions Management --++ Last updated 02/23/2022-+ # Troubleshoot issues with CloudKnox Permissions Management
active-directory Cloudknox Ui Audit Trail https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-ui-audit-trail.md
Title: Use queries to see how users access information in an authorization system in CloudKnox Permissions Management description: How to use queries to see how users access information in an authorization system in CloudKnox Permissions Management. --++ Last updated 02/23/2022-+ # Use queries to see how users access information
active-directory Cloudknox Ui Autopilot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-ui-autopilot.md
Title: View rules in the Autopilot dashboard in CloudKnox Permissions Management description: How to view rules in the Autopilot dashboard in CloudKnox Permissions Management. --++ Last updated 02/23/2022-+ # View rules in the Autopilot dashboard
active-directory Cloudknox Ui Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-ui-dashboard.md
Title: View key statistics and data about your authorization system in CloudKnox Permissions Management description: How to view statistics and data about your authorization system in CloudKnox Permissions Management. --++ Last updated 02/23/2022-+
active-directory Cloudknox Ui Remediation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-ui-remediation.md
Title: View existing roles/policies and requests for permission in the Remediation dashboard in CloudKnox Permissions Management description: How to view existing roles/policies and requests for permission in the Remediation dashboard in CloudKnox Permissions Management. --++ Last updated 02/23/2022-+ # View roles/policies and requests for permission in the Remediation dashboard
active-directory Cloudknox Ui Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-ui-tasks.md
Title: View information about active and completed tasks in CloudKnox Permissions Management description: How to view information about active and completed tasks in the Activities pane in CloudKnox Permissions Management. --++ Last updated 02/23/2022-+ # View information about active and completed tasks
active-directory Cloudknox Ui Triggers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-ui-triggers.md
Title: View information about activity triggers in CloudKnox Permissions Management description: How to view information about activity triggers in the Activity triggers dashboard in CloudKnox Permissions Management. --++ Last updated 02/23/2022-+ # View information about activity triggers
active-directory Cloudknox Ui User Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-ui-user-management.md
Title: Manage users and groups with the User management dashboard in CloudKnox Permissions Management description: How to manage users and groups in the User management dashboard in CloudKnox Permissions Management. --++ Last updated 02/23/2022-+ # Manage users and groups with the User management dashboard
active-directory Cloudknox Usage Analytics Access Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-usage-analytics-access-keys.md
Title: View analytic information about access keys in CloudKnox Permissions Management description: How to view analytic information about access keys in CloudKnox Permissions Management. --++ Last updated 02/23/2022-+ # View analytic information about access keys
active-directory Cloudknox Usage Analytics Active Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-usage-analytics-active-resources.md
Title: View analytic information about active resources in CloudKnox Permissions Management description: How to view usage analytics about active resources in CloudKnox Permissions Management. --++ Last updated 02/23/2022-+ # View analytic information about active resources
active-directory Cloudknox Usage Analytics Active Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-usage-analytics-active-tasks.md
Title: View analytic information about active tasks in CloudKnox Permissions Management description: How to view analytic information about active tasks in CloudKnox Permissions Management. --++ Last updated 02/23/2022-+ # View analytic information about active tasks
active-directory Cloudknox Usage Analytics Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-usage-analytics-groups.md
Title: View analytic information about groups in CloudKnox Permissions Management description: How to view analytic information about groups in CloudKnox Permissions Management. --++ Last updated 02/23/2022-+ # View analytic information about groups
active-directory Cloudknox Usage Analytics Home https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-usage-analytics-home.md
Title: View analytic information with the Analytics dashboard in CloudKnox Permissions Management description: How to use the Analytics dashboard in CloudKnox Permissions Management to view details about users, groups, active resources, active tasks, access keys, and serverless functions. --++ Last updated 02/23/2022-+ # View analytic information with the Analytics dashboard
active-directory Cloudknox Usage Analytics Serverless Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-usage-analytics-serverless-functions.md
Title: View analytic information about serverless functions in CloudKnox Permissions Management description: How to view analytic information about serverless functions in CloudKnox Permissions Management. --++ Last updated 02/23/2022-+ # View analytic information about serverless functions
active-directory Cloudknox Usage Analytics Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-usage-analytics-users.md
Title: View analytic information about users in CloudKnox Permissions Management description: How to view analytic information about users in CloudKnox Permissions Management. --++ Last updated 02/23/2022-+ # View analytic information about users
active-directory Block Legacy Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/block-legacy-authentication.md
For more information about these authentication protocols and services, see [Sig
Before you can block legacy authentication in your directory, you need to first understand if your users have apps that use legacy authentication and how it affects your overall directory. Azure AD sign-in logs can be used to understand if you're using legacy authentication.
-1. Navigate to the **Azure portal** > **Azure Active Directory** > **Sign-ins**.
+1. Navigate to the **Azure portal** > **Azure Active Directory** > **Sign-in logs**.
1. Add the Client App column if it isn't shown by clicking on **Columns** > **Client App**.
1. **Add filters** > **Client App** > select all of the legacy authentication protocols. Select outside the filtering dialog box to apply your selections and close the dialog box.
1. If you've activated the [new sign-in activity reports preview](../reports-monitoring/concept-all-sign-ins.md), repeat the above steps also on the **User sign-ins (non-interactive)** tab.
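If you prefer to script this check instead of using the portal filters above, a rough alternative with the Microsoft Graph PowerShell SDK is sketched below. It assumes the Microsoft Graph PowerShell modules and AuditLog.Read.All consent, and the client app names listed are illustrative examples of values that appear in the Client App column.

```powershell
# Hedged sketch: summarize recent sign-ins that used legacy authentication clients.
# Assumes the Microsoft Graph PowerShell SDK; requires AuditLog.Read.All consent.
Connect-MgGraph -Scopes "AuditLog.Read.All", "Directory.Read.All"

# Example legacy protocols seen in the Client App column; adjust to your needs.
$legacyClients = "Exchange ActiveSync", "IMAP4", "POP3", "SMTP", "Other clients"

Get-MgAuditLogSignIn -Top 1000 |
    Where-Object { $legacyClients -contains $_.ClientAppUsed } |
    Group-Object ClientAppUsed |
    Select-Object Name, Count
```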
active-directory Concept Conditional Access Grant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-grant.md
The following client apps have been confirmed to support this setting:
- Microsoft Invoicing
- Microsoft Kaizala
- Microsoft Launcher
-- Microsoft Lists
+- Microsoft Lists (iOS)
- Microsoft Office - Microsoft OneDrive - Microsoft OneNote
active-directory Howto Conditional Access Session Lifetime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-session-lifetime.md
The Azure Active Directory (Azure AD) default configuration for user sign-in fre
It might sound alarming to not ask for a user to sign back in, though in reality any violation of IT policies will revoke the session. Some examples include (but aren't limited to) a password change, an incompliant device, or account disablement. You can also explicitly [revoke users' sessions using PowerShell](/powershell/module/azuread/revoke-azureaduserallrefreshtoken). The Azure AD default configuration comes down to "don't ask users to provide their credentials if the security posture of their sessions hasn't changed".
-The sign-in frequency setting works with apps that have implemented OAUTH2 or OIDC protocols according to the standards. Most Microsoft native apps for Windows, Mac, and Mobile including the following web applications comply with the setting.
+The sign-in frequency setting works with apps that have implemented OAuth2 or OIDC protocols according to the standards. Most Microsoft native apps for Windows, Mac, and mobile, including the following web applications, comply with the setting.
- Word, Excel, PowerPoint Online - OneNote Online
The sign-in frequency setting works with apps that have implemented OAUTH2 or OI
- Dynamics CRM Online - Azure portal
-The sign-in frequency setting works with SAML applications as well, as long as they don't drop their own cookies and are redirected back to Azure AD for authentication on regular basis.
+The sign-in frequency setting works with third-party SAML applications and apps that have implemented OAuth2 or OIDC protocols, as long as they don't drop their own cookies and are redirected back to Azure AD for authentication on a regular basis.
### User sign-in frequency and multi-factor authentication
active-directory Active Directory Optional Claims https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-optional-claims.md
This section covers the configuration options under optional claims for changing
{ "name": "groups", "additionalProperties": [
- "netbios_name_and_sam_account_name",
+ "netbios_domain_and_sam_account_name",
"emit_as_roles" ] }
This section covers the configuration options under optional claims for changing
{ "name": "groups", "additionalProperties": [
- "netbios_name_and_sam_account_name",
+ "netbios_domain_and_sam_account_name",
"emit_as_roles" ] }
active-directory Active Directory V2 Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-v2-protocols.md
https://login.microsoftonline.com/<issuer>/oauth2/v2.0/token
# NOTE: These are examples. Endpoint URI format may vary based on application type, # sign-in audience, and Azure cloud instance (global or national cloud).+
+# The {issuer} value in the path of the request can be used to control who can sign into the application.
+# The allowed values are **common** for both Microsoft accounts and work or school accounts,
+# **organizations** for work or school accounts only, **consumers** for Microsoft accounts only,
+# and **tenant identifiers** such as the tenant ID or domain name.
``` To find the endpoints for an application you've registered, in the [Azure portal](https://portal.azure.com) navigate to:
active-directory Permissions Consent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/permissions-consent-overview.md
+
+ Title: Overview of permissions and consent in the Microsoft identity platform
+description: Learn about the foundational concepts and scenarios around consent and permissions in the Microsoft identity platform
+++++++++ Last updated : 05/10/2022++
+#Customer intent: As a developer or admin in the Microsoft identity platform, I want to understand the basic concepts of managing how applications access resources through the permissions and consent framework.
+
+# Introduction to permissions and consent
+
+To _access_ a protected resource like email or calendar data, your application needs the resource owner's _authorization_. The resource owner can _consent_ to or deny your app's request. Understanding these foundational concepts will help you build more secure and trustworthy applications that request only the access they need, when they need it, from their users and administrators.
+
+## Access scenarios
+
+As an application developer, you must identify how your application will access data. The application can use delegated access, acting on behalf of a signed-in user, or direct access, acting only as the application's own identity.
+
+![Image shows illustration of access scenarios.](./media/permissions-consent-overview/access-scenarios.png)
+
+### Delegated access (access on behalf of a user)
+
+In this access scenario, a user has signed into a client application. The client application accesses the resource on behalf of the user. Delegated access requires delegated permissions. Both the client and the user must be authorized separately to make the request.
+
+For the client app, the correct delegated permissions must be granted. Delegated permissions can also be referred to as scopes. Scopes are permissions of a given resource that the client application exercises on behalf of a user. They're strings that represent what the application wants to do on behalf of the user. For more information about scopes, see [scopes and permissions](v2-permissions-and-consent.md#scopes-and-permissions).
+
+For the user, the authorization relies on the privileges that the user has been granted for them to access the resource. For example, the user could be authorized to access directory resources by [Azure Active Directory (Azure AD) role-based access control (RBAC)](../roles/custom-overview.md) or to access mail and calendar resources by [Exchange Online RBAC](/exchange/permissions-exo/permissions-exo).
+
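For illustration only, here's a minimal sketch of the delegated access pattern using MSAL for Python; the client ID is a hypothetical placeholder, and the scope shown is just an example delegated permission:

```python
import msal

# Public client application acting on behalf of a signed-in user (delegated access).
app = msal.PublicClientApplication(
    client_id="00000000-0000-0000-0000-000000000000",   # hypothetical app (client) ID
    authority="https://login.microsoftonline.com/organizations",
)

# "User.Read" is a delegated permission (scope) exercised on the user's behalf.
result = app.acquire_token_interactive(scopes=["User.Read"])

if "access_token" in result:
    print("Token acquired on behalf of the signed-in user.")
else:
    print(result.get("error"), result.get("error_description"))
```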
+### Direct access (App-only access)
+
+In this access scenario, the application acts on its own with no user signed in. Application access is used in scenarios such as automation and backup. This scenario includes apps that run as background services or daemons. It's appropriate when it's undesirable to have a specific user signed in, or when the data required can't be scoped to a single user.
+
+Direct access may require application permissions, but this isn't the only way to grant an application direct access. Application permissions can also be referred to as app roles. When app roles are granted to other applications, they can be called application permissions. The appropriate application permissions or app roles must be granted to the application for it to access the resource. For more information about assigning app roles to applications, see [App roles for applications](howto-add-app-roles-in-azure-ad-apps.md).
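As a hedged sketch of the direct access pattern (client credentials flow with MSAL for Python; the tenant, client ID, and secret shown are hypothetical placeholders):

```python
import msal

# Confidential client acting as its own identity; no user is signed in.
app = msal.ConfidentialClientApplication(
    client_id="00000000-0000-0000-0000-000000000000",   # hypothetical app (client) ID
    client_credential="<client-secret>",                 # hypothetical secret
    authority="https://login.microsoftonline.com/<tenant-id>",
)

# ".default" requests the application permissions (app roles) already granted to the app.
result = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])
print("access_token" in result)
```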
+
+## Types of permissions
+
+**Delegated permissions** are used in the delegated access scenario. They're permissions that allow the application to act on a user's behalf. The application will never be able to access anything users themselves couldn't access.
+
+For example, imagine an application that has been granted the Files.Read.All delegated permission on behalf of Tom, the user. The application will only be able to read files that Tom can personally access.
+
+**Application permissions** are used in the direct access scenario, without a signed-in user present. The application will be able to access any data that the permission is associated with. For example, an application granted the Files.Read.All application permission will be able to read any file in the tenant. Only an administrator or owner of the service principal can consent to application permissions.
+
+There are other ways in which applications can be granted authorization for direct access. For example, an application can be assigned an Azure AD RBAC role.
+
+## Consent
+One way that applications are granted permissions is through consent. Consent is a process where users or admins authorize an application to access a protected resource. For example, when a user attempts to sign into an application for the first time, the application can request permission to see the user's profile and read the contents of the user's mailbox. The user sees the list of permissions the app is requesting through a consent prompt.
+
+The key details of a consent prompt are the list of permissions the application requires and the publisher information. For more information about the consent prompt and the consent experience for both admins and end-users, see [application consent experience](application-consent-experience.md).
+
+### User consent
+
+User consent happens when a user attempts to sign into an application. The user provides their sign-in credentials. These credentials are checked to determine whether consent has already been granted. If no previous record of user or admin consent for the required permissions exists, the user is shown a consent prompt and asked to grant the application the requested permissions. In many cases, an admin may be required to grant consent on behalf of the user.
+
+### Administrator consent
+
+Depending on the permissions they require, some applications might require an administrator to be the one who grants consent. For example, application permissions can only be consented to by an administrator. Administrators can grant consent for themselves or for the entire organization. For more information about user and admin consent, see the [user and admin consent overview](../manage-apps/consent-and-permissions-overview.md).
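For example, a tenant admin can be sent to the admin consent endpoint to grant tenant-wide consent. The following is a small sketch with hypothetical tenant, client ID, and redirect URI values:

```python
from urllib.parse import urlencode

# Hypothetical values for illustration only.
tenant = "contoso.onmicrosoft.com"
params = {
    "client_id": "00000000-0000-0000-0000-000000000000",
    "scope": "https://graph.microsoft.com/.default",
    "redirect_uri": "https://localhost/callback",
}

# Browsing to this URL prompts an administrator for tenant-wide consent.
admin_consent_url = (
    f"https://login.microsoftonline.com/{tenant}/v2.0/adminconsent?{urlencode(params)}"
)
print(admin_consent_url)
```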
+
+### Preauthorization
+
+Preauthorization allows a resource application owner to grant permissions without requiring users to see a consent prompt for the same set of permissions that have been preauthorized. This way, an application that has been preauthorized won't ask users to consent to permissions. Resource owners can preauthorize client apps in the Azure portal or by using PowerShell and APIs, like Microsoft Graph.
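As a hedged sketch of the API route (assuming Microsoft Graph and an access token with Application.ReadWrite.All; all IDs shown are hypothetical), preauthorizing a client app might look like:

```python
import requests

token = "<access-token-with-Application.ReadWrite.All>"   # hypothetical token
app_object_id = "11111111-1111-1111-1111-111111111111"    # resource app's object ID (hypothetical)

# Preauthorize a client app for specific delegated permission (scope) IDs.
body = {
    "api": {
        "preAuthorizedApplications": [
            {
                "appId": "22222222-2222-2222-2222-222222222222",  # client app ID (hypothetical)
                "delegatedPermissionIds": ["33333333-3333-3333-3333-333333333333"],  # scope ID (hypothetical)
            }
        ]
    }
}

resp = requests.patch(
    f"https://graph.microsoft.com/v1.0/applications/{app_object_id}",
    headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
    json=body,
)
print(resp.status_code)  # 204 No Content on success
```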
+
+## Next steps
+- [User and admin consent overview](../manage-apps/consent-and-permissions-overview.md)
+- [Scopes and permissions](v2-permissions-and-consent.md)
active-directory Reference App Manifest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-app-manifest.md
Previously updated : 02/02/2021 Last updated : 05/19/2022
Example:
"id": "f7f9acfc-ae0c-4d6c-b489-0a81dc1652dd", ```
+### acceptMappedClaims attribute
+
+| Key | Value type |
+| : | : |
+| acceptMappedClaims | Nullable Boolean |
+
+As documented on the [apiApplication resource type](/graph/api/resources/apiapplication#properties), this allows an application to use [claims mapping](active-directory-claims-mapping.md) without specifying a custom signing key. Applications that receive tokens rely on the fact that the claim values are authoritatively issued by Azure AD and cannot be tampered with. However, when you modify the token contents through claims-mapping policies, these assumptions may no longer be correct. Applications must explicitly acknowledge that tokens have been modified by the creator of the claims-mapping policy to protect themselves from claims-mapping policies created by malicious actors.
+
+> [!WARNING]
+> Do not set the `acceptMappedClaims` property to `true` for multi-tenant apps; doing so can allow malicious actors to create claims-mapping policies for your app.
+
+Example:
+
+```json
+ "acceptMappedClaims": true,
+```
+ ### accessTokenAcceptedVersion attribute | Key | Value type |
active-directory Reference Breaking Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-breaking-changes.md
If a request fails the validation check, the application API for create/update w
[!INCLUDE [active-directory-identifierUri](../../../includes/active-directory-identifier-uri-patterns.md)]
+> [!NOTE]
+> While it is safe to remove the identifierUris for app registrations within the current tenant, removing the identifierUris may cause clients to fail for other app registrations.
+ ## August 2021 ### Conditional Access will only trigger for explicitly requested scopes
active-directory Sample V2 Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/sample-v2-code.md
The following samples show an application that accesses the Microsoft Graph API
> |.NET Core| &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/tree/master/1-Call-MSGraph) <br/> &#8226; [Call web API](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/tree/master/2-Call-OwnApi)<br/> &#8226; [Call own web API](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/tree/master/4-Call-OwnApi-Pop) <br/> &#8226; [Using managed identity and Azure key vault](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/tree/master/3-Using-KeyVault)| MSAL.NET | Client credentials grant| > | ASP.NET|[Multi-tenant with Microsoft identity platform endpoint](https://github.com/Azure-Samples/ms-identity-aspnet-daemon-webapp) | MSAL.NET | Client credentials grant| > | Java | &#8226; [Call Microsoft Graph with Secret](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/1.%20Server-Side%20Scenarios/msal-client-credential-secret) <br/> &#8226; [Call Microsoft Graph with Certificate](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/1.%20Server-Side%20Scenarios/msal-client-credential-certificate)| MSAL Java | Client credentials grant|
-> | Node.js | [Sign in users and call web API](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-console) | MSAL Node | Client credentials grant |
+> | Node.js | [Call Microsoft Graph with secret](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-console) | MSAL Node | Client credentials grant |
> | Python | &#8226; [Call Microsoft Graph with secret](https://github.com/Azure-Samples/ms-identity-python-daemon/tree/master/1-Call-MsGraph-WithSecret) <br/> &#8226; [Call Microsoft Graph with certificate](https://github.com/Azure-Samples/ms-identity-python-daemon/tree/master/2-Call-MsGraph-WithCertificate) | MSAL Python| Client credentials grant| ## Azure Functions as web APIs
active-directory Scenario Protected Web Api App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-protected-web-api-app-configuration.md
You can create a web API from scratch by using Microsoft.Identity.Web project te
#### Starting from an existing ASP.NET Core 3.1 application
-ASP.NET Core 3.1 uses the Microsoft.AspNetCore.AzureAD.UI library. The middleware is initialized in the Startup.cs file.
+ASP.NET Core 3.1 uses the Microsoft.AspNetCore.Authentication.JwtBearer library. The middleware is initialized in the Startup.cs file.
```csharp using Microsoft.AspNetCore.Authentication.JwtBearer;
active-directory Tutorial V2 Javascript Auth Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-javascript-auth-code.md
Next, implement a small [Express](https://expressjs.com/) web server to serve yo
npm install yargs ``` 2. Next, create a file named *server.js* and add the following code:-
- :::code language="js" source="~/ms-identity-javascript-v2/server.js":::
+
+ :::code language="js" source="~/ms-identity-javascript-v2/server.js":::
## Create the SPA UI
active-directory Howto Hybrid Azure Ad Join https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-hybrid-azure-ad-join.md
Verify devices can access the required Microsoft resources under the system acco
We think most organizations will deploy hybrid Azure AD join with managed domains. Managed domains use [password hash sync (PHS)](../hybrid/whatis-phs.md) or [pass-through authentication (PTA)](../hybrid/how-to-connect-pta.md) with [seamless single sign-on](../hybrid/how-to-connect-sso.md). Managed domain scenarios don't require configuring a federation server.
-> [!NOTE]
-> Azure AD doesn't support smart cards or certificates in managed domains.
- Configure hybrid Azure AD join by using Azure AD Connect for a managed domain: 1. Start Azure AD Connect, and then select **Configure**.
active-directory Hybrid Azuread Join Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/hybrid-azuread-join-plan.md
These scenarios don't require you to configure a federation server for authentic
> [!NOTE] > [Cloud authentication using Staged rollout](../hybrid/how-to-connect-staged-rollout.md) is only supported starting at the Windows 10 1903 update.
->
-> Azure AD doesn't support smartcards or certificates in managed domains.
+ ### Federated environment
active-directory Directory Delete Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/directory-delete-howto.md
You can put a subscription into the **Deprovisioned** state to be deleted in thr
If you have an active or canceled Azure subscription associated with your Azure AD tenant, you won't be able to delete the Azure AD tenant. After you cancel, billing is stopped immediately. However, Microsoft waits 30 - 90 days before permanently deleting your data in case you need to access it or you change your mind. We don't charge you for keeping the data. -- If you have a free trial or pay-as-you-go subscription, you don't have to wait 90 days for the subscription to automatically delete. You can delete your subscription three days after you cancel it. The Delete subscription option isn't available until three days after you cancel your subscription. For more details please read through [Delete free trial or pay-as-you-go subscriptions](../../cost-management-billing/manage/cancel-azure-subscription.md#delete-free-trial-or-pay-as-you-go-subscriptions).
+- If you have a free trial or pay-as-you-go subscription, you don't have to wait 90 days for the subscription to automatically delete. You can delete your subscription three days after you cancel it. The Delete subscription option isn't available until three days after you cancel your subscription. For more details, please read through [Delete free trial or pay-as-you-go subscriptions](../../cost-management-billing/manage/cancel-azure-subscription.md#delete-subscriptions).
- All other subscription types are deleted only through the [subscription cancellation](../../cost-management-billing/manage/cancel-azure-subscription.md#cancel-subscription-in-the-azure-portal) process. In other words, you can't delete a subscription directly unless it's a free trial or pay-as-you-go subscription. However, after you cancel a subscription, you can create an [Azure support request](https://go.microsoft.com/fwlink/?linkid=2083458) to ask to have the subscription deleted immediately. - Alternatively, you can also move/transfer the Azure subscription to another Azure AD tenant account. When you transfer billing ownership of your subscription to an account in another Azure AD tenant, you can move the subscription to the new account's tenant. Additionally, performing Switch Directory on the subscription would not help, as the billing would still be aligned with the Azure AD tenant that was used to sign up for the subscription. For more information, review [Transfer a subscription to another Azure AD tenant account](../../cost-management-billing/manage/billing-subscription-transfer.md#transfer-a-subscription-to-another-azure-ad-tenant-account)
You can put a self-service sign-up product like Microsoft Power BI or Azure Righ
## Next steps
-[Azure Active Directory documentation](../index.yml)
+[Azure Active Directory documentation](../index.yml)
active-directory B2b Direct Connect Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-direct-connect-overview.md
B2B direct connect requires a mutual trust relationship between two Azure AD org
Currently, B2B direct connect capabilities work with Teams shared channels. When B2B direct connect is established between two organizations, users in one organization can create a shared channel in Teams and invite an external B2B direct connect user to it. Then from within Teams, the B2B direct connect user can seamlessly access the shared channel in their home tenant Teams instance, without having to manually sign in to the organization hosting the shared channel.
-For licensing and pricing information related to B2B direct connect users, refer to [Azure Active Directory pricing](https://azure.microsoft.com/pricing/details/active-directory/).
+For licensing and pricing information related to B2B direct connect users, refer to [Azure Active Directory External Identities pricing](https://azure.microsoft.com/pricing/details/active-directory/external-identities/).
## Managing cross-tenant access for B2B direct connect
active-directory B2b Government National Clouds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-government-national-clouds.md
Previously updated : 01/31/2022 Last updated : 05/17/2022
# Azure AD B2B in government and national clouds
-## National clouds
-[National clouds](../develop/authentication-national-cloud.md) are physically isolated instances of Azure. B2B collaboration is not supported across national cloud boundaries. For example, if your Azure tenant is in the public, global cloud, you can't invite a user whose account is in a national cloud. To collaborate with the user, ask them for another email address or create a member user account for them in your directory.
+Microsoft Azure [national clouds](../develop/authentication-national-cloud.md) are physically isolated instances of Azure. B2B collaboration isn't enabled by default across national cloud boundaries, but you can use Microsoft cloud settings (preview) to establish mutual B2B collaboration between the following Microsoft Azure clouds:
-## Azure US Government clouds
-Within the Azure US Government cloud, B2B collaboration is supported between tenants that are both within Azure US Government cloud and that both support B2B collaboration. Azure US Government tenants that support B2B collaboration can also collaborate with social users using Microsoft, Google accounts, or email one-time passcode accounts. If you invite a user outside of these groups (for example, if the user is in a tenant that isn't part of the Azure US Government cloud or doesn't yet support B2B collaboration), the invitation will fail or the user won't be able to redeem the invitation. For Microsoft accounts (MSAs), there are known limitations with accessing the Azure portal: newly invited MSA guests are unable to redeem direct link invitations to the Azure portal, and existing MSA guests are unable to sign in to the Azure portal. For details about other limitations, see [Azure Active Directory Premium P1 and P2 Variations](../../azure-government/compare-azure-government-global-azure.md#azure-active-directory-premium-p1-and-p2).
+- Microsoft Azure global cloud and Microsoft Azure Government
+- Microsoft Azure global cloud and Microsoft Azure China 21Vianet
+
+## B2B collaboration across Microsoft clouds
+
+To set up B2B collaboration between tenants in different clouds, both tenants need to configure their Microsoft cloud settings to enable collaboration with the other cloud. Then each tenant must configure inbound and outbound cross-tenant access with the tenant in the other cloud. For details, see [Microsoft cloud settings (preview)](cross-cloud-settings.md).
+
+## B2B collaboration within the Microsoft Azure Government cloud
+
+Within the Azure US Government cloud, B2B collaboration is enabled between tenants that are both within the Azure US Government cloud and that both support B2B collaboration. Azure US Government tenants that support B2B collaboration can also collaborate with social users who use Microsoft accounts, Google accounts, or email one-time passcode accounts. If you invite a user outside of these groups (for example, if the user is in a tenant that isn't part of the Azure US Government cloud or doesn't yet support B2B collaboration), the invitation will fail or the user won't be able to redeem the invitation. For Microsoft accounts (MSAs), there are known limitations with accessing the Azure portal: newly invited MSA guests are unable to redeem direct link invitations to the Azure portal, and existing MSA guests are unable to sign in to the Azure portal. For details about other limitations, see [Azure Active Directory Premium P1 and P2 Variations](../../azure-government/compare-azure-government-global-azure.md#azure-active-directory-premium-p1-and-p2).
### How can I tell if B2B collaboration is available in my Azure US Government tenant? To find out if your Azure US Government cloud tenant supports B2B collaboration, do the following:
active-directory Cross Cloud Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-cloud-settings.md
+
+ Title: Configure B2B collaboration Microsoft cloud settings - Azure AD
+description: Use Microsoft cloud settings to enable cross-cloud B2B collaboration between sovereign (national) Microsoft Azure clouds.
++++ Last updated : 05/17/2022++++++++
+# Configure Microsoft cloud settings for B2B collaboration (Preview)
+
+> [!NOTE]
+> Microsoft cloud settings are preview features of Azure Active Directory. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+When Azure AD organizations in separate Microsoft Azure clouds need to collaborate, they can use Microsoft cloud settings to enable Azure AD B2B collaboration. B2B collaboration is available between the following global and sovereign Microsoft Azure clouds:
+
+- Microsoft Azure global cloud and Microsoft Azure Government
+- Microsoft Azure global cloud and Microsoft Azure China 21Vianet
+
+To set up B2B collaboration between partner organizations in different Microsoft Azure clouds, each partner mutually agrees to configure B2B collaboration with each other. In each organization, an admin completes the following steps:
+
+1. Configures their Microsoft cloud settings to enable collaboration with the partner's cloud.
+
+1. Uses the partner's tenant ID to find and add the partner to their organizational settings.
+
+1. Configures their inbound and outbound settings for the partner organization. The admin can either apply the default settings or configure specific settings for the partner.
+
+After each organization has completed these steps, Azure AD B2B collaboration between the organizations is enabled.
+
+## Before you begin
+
+- **Obtain the partner's tenant ID.** To enable B2B collaboration with a partner's Azure AD organization in another Microsoft Azure cloud, you'll need the partner's tenant ID. Using an organization's domain name for lookup isn't available in cross-cloud scenarios.
+- **Decide on inbound and outbound access settings for the partner.** Selecting a cloud in your Microsoft cloud settings doesn't automatically enable B2B collaboration. Once you enable another Microsoft Azure cloud, all B2B collaboration is blocked by default for organizations in that cloud. You'll need to add the tenant you want to collaborate with to your Organizational settings. At that point, your default settings go into effect for that tenant only. You can allow the default settings to remain in effect. Or, you can modify the inbound and outbound settings for the organization.
+- **Obtain any required object IDs or app IDs.** If you want to apply access settings to specific users, groups, or applications in the partner organization, you'll need to contact the organization for information before configuring your settings. Obtain their user object IDs, group object IDs, or application IDs (*client app IDs* or *resource app IDs*) so you can target your settings correctly.
+
+## Enable the cloud in your Microsoft cloud settings
+
+In your Microsoft cloud settings, enable the Microsoft Azure cloud you want to collaborate with.
+
+1. Sign in to the [Azure portal](https://portal.azure.com) using a Global administrator or Security administrator account. Then open the **Azure Active Directory** service.
+1. Select **External Identities**, and then select **Cross-tenant access settings (Preview)**.
+1. Select **Cross cloud settings**.
+1. Select the checkboxes next to the external Microsoft Azure clouds you want to enable.
+
+ ![Screenshot showing Microsoft cloud settings.](media/cross-cloud-settings/cross-cloud-settings.png)
+
+> [!NOTE]
+> Selecting a cloud doesn't automatically enable B2B collaboration with organizations in that cloud. You'll need to add the organization you want to collaborate with, as described in the next section.
+
+## Add the tenant to your organizational settings
+
+Follow these steps to add the tenant you want to collaborate with to your Organizational settings.
+
+1. Sign in to the [Azure portal](https://portal.azure.com) using a Global administrator or Security administrator account. Then open the **Azure Active Directory** service.
+1. Select **External Identities**, and then select **Cross-tenant access settings (Preview)**.
+1. Select **Organizational settings**.
+1. Select **Add organization**.
+1. On the **Add organization** pane, type the tenant ID for the organization (cross-cloud lookup by domain name isn't currently available).
+
+ ![Screenshot showing adding an organization.](media/cross-cloud-settings/cross-tenant-add-organization.png)
+
+1. Select the organization in the search results, and then select **Add**.
+1. The organization appears in the **Organizational settings** list. At this point, all access settings for this organization are inherited from your default settings.
+
+ ![Screenshot showing an organization added with default settings.](media/cross-cloud-settings/org-specific-settings-inherited.png)
++
+1. If you want to change the cross-tenant access settings for this organization, select the **Inherited from default** link under the **Inbound access** or **Outbound access** column. Then follow the detailed steps in these sections:
+
+ - [Modify inbound access settings](cross-tenant-access-settings-b2b-collaboration.md#modify-inbound-access-settings)
+ - [Modify outbound access settings](cross-tenant-access-settings-b2b-collaboration.md#modify-outbound-access-settings)
+
+## Next steps
+
+See [Configure external collaboration settings](external-collaboration-settings-configure.md) for B2B collaboration with non-Azure AD identities, social identities, and non-IT managed external accounts.
active-directory Cross Tenant Access Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-overview.md
Previously updated : 03/21/2022 Last updated : 05/17/2022
> [!NOTE] > Cross-tenant access settings are preview features of Azure Active Directory. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-Azure AD organizations can use External Identities cross-tenant access settings to manage how they collaborate with other Azure AD organizations through [B2B collaboration](cross-tenant-access-settings-b2b-collaboration.md) and [B2B direct connect](cross-tenant-access-settings-b2b-direct-connect.md). Cross-tenant access settings give you granular control over how external Azure AD organizations collaborate with you (inbound access) and how your users collaborate with external Azure AD organizations (outbound access). These settings also let you trust multi-factor authentication (MFA) and device claims ([compliant claims and hybrid Azure AD joined claims](../conditional-access/howto-conditional-access-policy-compliant-device.md)) from other Azure AD organizations.
+Azure AD organizations can use External Identities cross-tenant access settings to manage how they collaborate with other Azure AD organizations and other Microsoft Azure clouds through B2B collaboration and [B2B direct connect](cross-tenant-access-settings-b2b-direct-connect.md). [Cross-tenant access settings](cross-tenant-access-settings-b2b-collaboration.md) give you granular control over how external Azure AD organizations collaborate with you (inbound access) and how your users collaborate with external Azure AD organizations (outbound access). These settings also let you trust multi-factor authentication (MFA) and device claims ([compliant claims and hybrid Azure AD joined claims](../conditional-access/howto-conditional-access-policy-compliant-device.md)) from other Azure AD organizations.
-This article describes cross-tenant access settings, which are used to manage B2B collaboration and B2B direct connect with external Azure AD organizations. Additional settings are available for B2B collaboration with non-Azure AD identities (for example, social identities or non-IT managed external accounts). These [external collaboration settings](external-collaboration-settings-configure.md) include options for restricting guest user access, specifying who can invite guests, and allowing or blocking domains.
+This article describes cross-tenant access settings, which are used to manage B2B collaboration and B2B direct connect with external Azure AD organizations, including across Microsoft clouds. Additional settings are available for B2B collaboration with non-Azure AD identities (for example, social identities or non-IT managed external accounts). These [external collaboration settings](external-collaboration-settings-configure.md) include options for restricting guest user access, specifying who can invite guests, and allowing or blocking domains.
-![Overview diagram of cross-tenant access settings](media/cross-tenant-access-overview/cross-tenant-access-settings-overview.png)
+![Overview diagram of cross-tenant access settings.](media/cross-tenant-access-overview/cross-tenant-access-settings-overview.png)
## Manage external access with inbound and outbound settings
The default cross-tenant access settings apply to all Azure AD organizations ext
- **Organizational settings**: No organizations are added to your Organizational settings by default. This means all external Azure AD organizations are enabled for B2B collaboration with your organization.
+The behaviors described above apply to B2B collaboration with other Azure AD tenants in your same Microsoft Azure cloud. In cross-cloud scenarios, default settings work a little differently. See [Microsoft cloud settings](#microsoft-cloud-settings) later in this article.
+ ## Organizational settings You can configure organization-specific settings by adding an organization and modifying the inbound and outbound settings for that organization. Organizational settings take precedence over default settings.
You can configure organization-specific settings by adding an organization and m
- You can use external collaboration settings to limit who can invite external users, allow or block B2B specific domains, and set restrictions on guest user access to your directory.
+## Microsoft cloud settings
+
+Microsoft cloud settings let you collaborate with organizations from different Microsoft Azure clouds. With Microsoft cloud settings, you can establish mutual B2B collaboration between the following clouds:
+
+- Microsoft Azure global cloud and Microsoft Azure Government
+- Microsoft Azure global cloud and Microsoft Azure China 21Vianet
+
+To set up B2B collaboration, both organizations configure their Microsoft cloud settings to enable the partner's cloud. Then each organization uses the partner's tenant ID to find and add the partner to their organizational settings. From there, each organization can allow their default cross-tenant access settings to apply to the partner, or they can configure partner-specific inbound and outbound settings. After you establish B2B collaboration with a partner in another cloud, you'll be able to:
+
+- Use B2B collaboration to invite a user in the partner tenant to access resources in your organization, including web line-of-business apps, SaaS apps, and SharePoint Online sites, documents, and files.
+- Apply Conditional Access policies to the B2B collaboration user and opt to trust device claims (compliant claims and hybrid Azure AD joined claims) from the user's home tenant.
+
+For configuration steps, see [Configure Microsoft cloud settings for B2B collaboration (Preview)](cross-cloud-settings.md).
+
+### Default settings in cross-cloud scenarios
+
+To collaborate with a partner tenant in a different Microsoft Azure cloud, both organizations need to mutually enable B2B collaboration with each other. The first step is to enable the partner's cloud in your cross-tenant settings. When you first enable another cloud, B2B collaboration is blocked for all tenants in that cloud. You need to add the tenant you want to collaborate with to your Organizational settings, and at that point your default settings go into effect for that tenant only. You can allow the default settings to remain in effect, or you can modify the organizational settings for the tenant.
+ ## Important considerations > [!IMPORTANT]
You can configure organization-specific settings by adding an organization and m
Several tools are available to help you identify the access your users and partners need before you set inbound and outbound access settings. To ensure you don't remove access that your users and partners need, you should examine current sign-in behavior. Taking this preliminary step will help prevent loss of desired access for your end users and partner users. However, in some cases these logs are only retained for 30 days, so we strongly recommend you speak with your business stakeholders to ensure required access isn't lost.
+> [!NOTE]
+> During the preview of Microsoft cloud settings, sign-in events for cross-cloud scenarios will be reported in the resource tenant, but not in the home tenant.
+ ### Cross-tenant sign-in activity PowerShell script To review user sign-in activity associated with external tenants, use the [cross-tenant user sign-in activity](https://aka.ms/cross-tenant-signins-ps) PowerShell script. For example, to view all available sign-in events for inbound activity (external users accessing resources in the local tenant) and outbound activity (local users accessing resources in an external tenant), run the following command:
If your organization exports sign-in logs to a Security Information and Event Ma
The Azure AD audit logs capture all activity around cross-tenant access setting changes and activity. To audit changes to your cross-tenant access settings, use the **category** of ***CrossTenantAccessSettings*** to filter all activity to show changes to cross-tenant access settings.
-![Audit logs for cross-tenant access settings](media/cross-tenant-access-overview/cross-tenant-access-settings-audit-logs.png)
+![Audit logs for cross-tenant access settings.](media/cross-tenant-access-overview/cross-tenant-access-settings-audit-logs.png)
## Next steps
active-directory Cross Tenant Access Settings B2b Collaboration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-settings-b2b-collaboration.md
Previously updated : 05/02/2022 Last updated : 05/17/2022
Use External Identities cross-tenant access settings to manage how you collabora
- Decide on the default level of access you want to apply to all external Azure AD organizations. - Identify any Azure AD organizations that will need customized settings so you can configure **Organizational settings** for them. - If you want to apply access settings to specific users, groups, or applications in an external organization, you'll need to contact the organization for information before configuring your settings. Obtain their user object IDs, group object IDs, or application IDs (*client app IDs* or *resource app IDs*) so you can target your settings correctly.
+- If you want to set up B2B collaboration with a partner organization in an external Microsoft Azure cloud, follow the steps in [Configure Microsoft cloud settings](cross-cloud-settings.md). An admin in the partner organization will need to do the same for your tenant.
## Configure default settings
- Default cross-tenant access settings apply to all external tenants for which you haven't created organization-specific customized settings. If you want to modify the Azure AD-provided default settings, follow these steps.
+ Default cross-tenant access settings apply to all external tenants for which you haven't created organization-specific customized settings. If you want to modify the Azure AD-provided default settings, follow these steps.
1. Sign in to the [Azure portal](https://portal.azure.com) using a Global administrator or Security administrator account. Then open the **Azure Active Directory** service. 1. Select **External Identities**, and then select **Cross-tenant access settings (Preview)**. 1. Select the **Default settings** tab and review the summary page.
- ![Screenshot showing the Cross-tenant access settings Default settings tab](media/cross-tenant-access-settings-b2b-collaboration/cross-tenant-defaults.png)
+ ![Screenshot showing the Cross-tenant access settings Default settings tab.](media/cross-tenant-access-settings-b2b-collaboration/cross-tenant-defaults.png)
1. To change the settings, select the **Edit inbound defaults** link or the **Edit outbound defaults** link.
- ![Screenshot showing edit buttons for Default settings](media/cross-tenant-access-settings-b2b-collaboration/cross-tenant-defaults-edit.png)
+ ![Screenshot showing edit buttons for Default settings.](media/cross-tenant-access-settings-b2b-collaboration/cross-tenant-defaults-edit.png)
1. Modify the default settings by following the detailed steps in these sections:
Use External Identities cross-tenant access settings to manage how you collabora
Follow these steps to configure customized settings for specific organizations. 1. Sign in to the [Azure portal](https://portal.azure.com) using a Global administrator or Security administrator account. Then open the **Azure Active Directory** service.
-1. Select **External Identities**, and then select **Cross-tenant access settings (preview)**.
+1. Select **External Identities**, and then select **Cross-tenant access settings (Preview)**.
1. Select **Organizational settings**. 1. Select **Add organization**. 1. On the **Add organization** pane, type the full domain name (or tenant ID) for the organization.
- ![Screenshot showing adding an organization](media/cross-tenant-access-settings-b2b-collaboration/cross-tenant-add-organization.png)
+ ![Screenshot showing adding an organization.](media/cross-tenant-access-settings-b2b-collaboration/cross-tenant-add-organization.png)
1. Select the organization in the search results, and then select **Add**. 1. The organization appears in the **Organizational settings** list. At this point, all access settings for this organization are inherited from your default settings. To change the settings for this organization, select the **Inherited from default** link under the **Inbound access** or **Outbound access** column.
- ![Screenshot showing an organization added with default settings](media/cross-tenant-access-settings-b2b-collaboration/org-specific-settings-inherited.png)
+ ![Screenshot showing an organization added with default settings.](media/cross-tenant-access-settings-b2b-collaboration/org-specific-settings-inherited.png)
1. Modify the organization's settings by following the detailed steps in these sections:
With inbound settings, you select which external users and groups will be able t
1. Sign in to the [Azure portal](https://portal.azure.com) using a Global administrator or Security administrator account. Then open the **Azure Active Directory** service.
-1. Select **External Identities** > **Cross-tenant access settings (preview)**.
+1. Select **External Identities** > **Cross-tenant access settings (Preview)**.
1. Navigate to the settings you want to modify: - **Default settings**: To modify default inbound settings, select the **Default settings** tab, and then under **Inbound access settings**, select **Edit inbound defaults**.
With inbound settings, you select which external users and groups will be able t
- **Allow access**: Allows the users and groups specified under **Applies to** to be invited for B2B collaboration. - **Block access**: Blocks the users and groups specified under **Applies to** from being invited to B2B collaboration.
- ![Screenshot showing selecting the user access status for B2B collaboration](media/cross-tenant-access-settings-b2b-collaboration/generic-inbound-external-users-groups-access.png)
+ ![Screenshot showing selecting the user access status for B2B collaboration.](media/cross-tenant-access-settings-b2b-collaboration/generic-inbound-external-users-groups-access.png)
1. Under **Applies to**, select one of the following:
With inbound settings, you select which external users and groups will be able t
> [!NOTE] > If you block access for all external users and groups, you also need to block access to all your internal applications (on the **Applications** tab).
- ![Screenshot showing selecting the target users and groups](media/cross-tenant-access-settings-b2b-collaboration/generic-inbound-external-users-groups-target.png)
+ ![Screenshot showing selecting the target users and groups.](media/cross-tenant-access-settings-b2b-collaboration/generic-inbound-external-users-groups-target.png)
1. If you chose **Select external users and groups**, do the following for each user or group you want to add:
With inbound settings, you select which external users and groups will be able t
- In the menu next to the search box, choose either **user** or **group**. - Select **Add**.
- ![Screenshot showing adding users and groups](media/cross-tenant-access-settings-b2b-collaboration/generic-inbound-external-users-groups-add.png)
+ ![Screenshot showing adding users and groups.](media/cross-tenant-access-settings-b2b-collaboration/generic-inbound-external-users-groups-add.png)
1. When you're done adding users and groups, select **Submit**.
- ![Screenshot showing submitting users and groups](media/cross-tenant-access-settings-b2b-collaboration/generic-inbound-external-users-groups-submit.png)
+ ![Screenshot showing submitting users and groups.](media/cross-tenant-access-settings-b2b-collaboration/generic-inbound-external-users-groups-submit.png)
1. Select the **Applications** tab.
With inbound settings, you select which external users and groups will be able t
- **Allow access**: Allows the applications specified under **Applies to** to be accessed by B2B collaboration users. - **Block access**: Blocks the applications specified under **Applies to** from being accessed by B2B collaboration users.
- ![Screenshot showing applications access status](media/cross-tenant-access-settings-b2b-collaboration/generic-inbound-applications-access.png)
+ ![Screenshot showing applications access status.](media/cross-tenant-access-settings-b2b-collaboration/generic-inbound-applications-access.png)
1. Under **Applies to**, select one of the following:
With inbound settings, you select which external users and groups will be able t
> [!NOTE] > If you block access to all applications, you also need to block access for all external users and groups (on the **External users and groups** tab).
- ![Screenshot showing target applications](media/cross-tenant-access-settings-b2b-collaboration/generic-inbound-applications-target.png)
+ ![Screenshot showing target applications.](media/cross-tenant-access-settings-b2b-collaboration/generic-inbound-applications-target.png)
1. If you chose **Select applications**, do the following for each application you want to add:
With inbound settings, you select which external users and groups will be able t
- In the **Select** pane, type the application name or the application ID (either the *client app ID* or the *resource app ID*) in the search box. Then select the application in the search results. Repeat for each application you want to add. - When you're done selecting applications, choose **Select**.
- ![Screenshot showing selecting applications](media/cross-tenant-access-settings-b2b-collaboration/generic-inbound-applications-add.png)
+ ![Screenshot showing selecting applications.](media/cross-tenant-access-settings-b2b-collaboration/generic-inbound-applications-add.png)
1. Select **Save**.
With inbound settings, you select which external users and groups will be able t
- **Trust hybrid Azure AD joined devices**: Allows your Conditional Access policies to trust hybrid Azure AD joined device claims from an external organization when their users access your resources.
- ![Screenshot showing trust settings](media/cross-tenant-access-settings-b2b-collaboration/inbound-trust-settings.png)
+ ![Screenshot showing trust settings.](media/cross-tenant-access-settings-b2b-collaboration/inbound-trust-settings.png)
1. Select **Save**.
With outbound settings, you select which of your users and groups will be able t
- **Allow access**: Allows your users and groups specified under **Applies to** to be invited to external organizations for B2B collaboration. - **Block access**: Blocks your users and groups specified under **Applies to** from being invited to B2B collaboration. If you block access for all users and groups, this will also block all external applications from being accessed via B2B collaboration.
- ![Screenshot showing users and groups access status for b2b collaboration](media/cross-tenant-access-settings-b2b-collaboration/generic-outbound-external-users-groups-access.png)
+ ![Screenshot showing users and groups access status for b2b collaboration.](media/cross-tenant-access-settings-b2b-collaboration/generic-outbound-external-users-groups-access.png)
1. Under **Applies to**, select one of the following:
With outbound settings, you select which of your users and groups will be able t
> [!NOTE] > If you block access for all of your users and groups, you also need to block access to all external applications (on the **External applications** tab).
- ![Screenshot showing selecting the target users for b2b collaboration](media/cross-tenant-access-settings-b2b-collaboration/generic-outbound-external-users-groups-target.png)
+ ![Screenshot showing selecting the target users for b2b collaboration.](media/cross-tenant-access-settings-b2b-collaboration/generic-outbound-external-users-groups-target.png)
1. If you chose **Select \<your organization\> users and groups**, do the following for each user or group you want to add:
With outbound settings, you select which of your users and groups will be able t
- **Allow access**: Allows the external applications specified under **Applies to** to be accessed by your users via B2B collaboration. - **Block access**: Blocks the external applications specified under **Applies to** from being accessed by your users via B2B collaboration.
- ![Screenshot showing applications access status for b2b collaboration](media/cross-tenant-access-settings-b2b-collaboration/generic-outbound-applications-access.png)
+ ![Screenshot showing applications access status for b2b collaboration.](media/cross-tenant-access-settings-b2b-collaboration/generic-outbound-applications-access.png)
1. Under **Applies to**, select one of the following:
With outbound settings, you select which of your users and groups will be able t
> [!NOTE] > If you block access to all external applications, you also need to block access for all of your users and groups (on the **Users and groups** tab).
- ![Screenshot showing application targets for b2b collaboration](media/cross-tenant-access-settings-b2b-collaboration/generic-outbound-applications-target.png)
+ ![Screenshot showing application targets for b2b collaboration.](media/cross-tenant-access-settings-b2b-collaboration/generic-outbound-applications-target.png)
1. If you chose **Select external applications**, do the following for each application you want to add:
With outbound settings, you select which of your users and groups will be able t
- In the search box, type the application name or the application ID (either the *client app ID* or the *resource app ID*). Then select the application in the search results. Repeat for each application you want to add. - When you're done selecting applications, choose **Select**.
- ![Screenshot showing selecting applications for b2b collaboration](media/cross-tenant-access-settings-b2b-collaboration/outbound-b2b-collaboration-add-apps.png)
+ ![Screenshot showing selecting applications for b2b collaboration.](media/cross-tenant-access-settings-b2b-collaboration/outbound-b2b-collaboration-add-apps.png)
1. Select **Save**.
active-directory External Identities Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/external-identities-overview.md
Previously updated : 03/21/2022 Last updated : 05/17/2022
The following capabilities make up External Identities:
Depending on how you want to interact with external organizations and the types of resources you need to share, you can use a combination of these capabilities.
-![External Identities overview diagram](media/external-identities-overview/external-identities-b2b-overview.png)
+![External Identities overview diagram.](media/external-identities-overview/external-identities-b2b-overview.png)
## B2B collaboration
There are various ways to add external users to your organization for B2B collab
A user object is created for the B2B collaboration user in the same directory as your employees. This user object can be managed like other user objects in your directory, added to groups, and so on. You can assign permissions to the user object (for authorization) while letting them use their existing credentials (for authentication).
-You can use [cross-tenant access settings](cross-tenant-access-overview.md) to manage B2B collaboration with other Azure AD organizations. For B2B collaboration with non-Azure AD external users and organizations, use [external collaboration settings](external-collaboration-settings-configure.md).
+You can use [cross-tenant access settings](cross-tenant-access-overview.md) to manage B2B collaboration with other Azure AD organizations and across Microsoft Azure clouds. For B2B collaboration with non-Azure AD external users and organizations, use [external collaboration settings](external-collaboration-settings-configure.md).
## B2B direct connect
-B2B direct connect is a new way to collaborate with other Azure AD organizations. With B2B direct connect, you create two-way trust relationships with other Azure AD organizations to allow users to seamlessly sign in to your shared resources and vice versa. B2B direct connect users aren't added as guests to your Azure AD directory. When two organizations mutually enable B2B direct connect, users authenticate in their home organization and receive a token from the resource organization for access. Learn more about [B2B direct connect in Azure AD](b2b-direct-connect-overview.md).
+B2B direct connect is a new way to collaborate with other Azure AD organizations. This feature currently works with Microsoft Teams shared channels. With B2B direct connect, you create two-way trust relationships with other Azure AD organizations to allow users to seamlessly sign in to your shared resources and vice versa. B2B direct connect users aren't added as guests to your Azure AD directory. When two organizations mutually enable B2B direct connect, users authenticate in their home organization and receive a token from the resource organization for access. Learn more about [B2B direct connect in Azure AD](b2b-direct-connect-overview.md).
Currently, B2B direct connect enables the Teams Connect shared channels feature, which lets your users collaborate with external users from multiple organizations with a Teams shared channel for chat, calls, file-sharing, and app-sharing. Once you've set up B2B direct connect with an external organization, the following Teams shared channels capabilities become available:
Cross-tenant access settings let you manage B2B collaboration and B2B direct con
For more information, see [Cross-tenant access in Azure AD External Identities](cross-tenant-access-overview.md).
+### Microsoft cloud settings for B2B collaboration (preview)
+
+Microsoft Azure cloud services are available in separate national clouds, which are physically isolated instances of Azure. Organizations increasingly need to collaborate with organizations and users across global cloud and national cloud boundaries. With Microsoft cloud settings, you can establish mutual B2B collaboration between the following Microsoft Azure clouds:
+
+- Microsoft Azure global cloud and Microsoft Azure Government
+- Microsoft Azure global cloud and Microsoft Azure China 21Vianet
+
+To set up B2B collaboration between tenants in different clouds, both tenants need to configure their Microsoft cloud settings to enable collaboration with the other cloud. Then each tenant must configure inbound and outbound cross-tenant access with the tenant in the other cloud. See [Microsoft cloud settings](cross-cloud-settings.md) for details.
### External collaboration settings External collaboration settings determine whether your users can send B2B collaboration invitations to external users and the level of access guest users have to your directory. With these settings, you can:
As an inviting organization, you might not know ahead of time who the individual
Microsoft Graph APIs are available for creating and managing External Identities features. -- **Cross-tenant access settings API**: The [Microsoft Graph cross-tenant access API](/graph/api/resources/crosstenantaccesspolicy-overview?view=graph-rest-beta) lets you programmatically create the same B2B collaboration and B2B direct connect policies that are configurable in the Azure portal. Using the API, you can set up policies for inbound and outbound collaboration to allow or block features for everyone by default and limit access to specific organizations, groups, users, and applications. The API also allows you to accept MFA and device claims (compliant claims and hybrid Azure AD joined claims) from other Azure AD organizations.
+- **Cross-tenant access settings API**: The [Microsoft Graph cross-tenant access API](/graph/api/resources/crosstenantaccesspolicy-overview?view=graph-rest-beta&preserve-view=true) lets you programmatically create the same B2B collaboration and B2B direct connect policies that are configurable in the Azure portal. Using the API, you can set up policies for inbound and outbound collaboration to allow or block features for everyone by default and limit access to specific organizations, groups, users, and applications. The API also allows you to accept MFA and device claims (compliant claims and hybrid Azure AD joined claims) from other Azure AD organizations.
- **B2B collaboration invitation manager**: The [Microsoft Graph invitation manager API](/graph/api/resources/invitation) is available for building your own onboarding experiences for B2B guest users. You can use the [create invitation API](/graph/api/invitation-post?tabs=http) to automatically send a customized invitation email directly to the B2B user, for example. Or your app can use the inviteRedeemUrl returned in the creation response to craft your own invitation (through your communication mechanism of choice) to the invited user.
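For example, the following minimal sketch calls the create invitation API from Python with MSAL and the `requests` library. It assumes an app registration that has the `User.Invite.All` application permission; the tenant ID, client ID, secret, and invitee address are placeholders.

```python
# Minimal sketch: send a B2B invitation with the Microsoft Graph create invitation API.
# Assumes an app registration with the User.Invite.All application permission; the
# tenant ID, client ID, secret, and email address below are placeholders.
import msal
import requests

TENANT_ID = "<tenant-id>"
CLIENT_ID = "<app-client-id>"
CLIENT_SECRET = "<client-secret>"

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

invitation = {
    "invitedUserEmailAddress": "partner@contoso.com",
    "inviteRedirectUrl": "https://myapps.microsoft.com",
    # Set to False if you plan to craft your own email from inviteRedeemUrl instead.
    "sendInvitationMessage": True,
}

response = requests.post(
    "https://graph.microsoft.com/v1.0/invitations",
    headers={"Authorization": f"Bearer {token['access_token']}"},
    json=invitation,
)
response.raise_for_status()

# The redemption link can be delivered through your own communication channel.
print(response.json()["inviteRedeemUrl"])
```

Setting `sendInvitationMessage` to `False` and delivering the returned `inviteRedeemUrl` yourself matches the custom-invitation flow described above.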
active-directory Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/troubleshoot.md
Previously updated : 03/31/2022 Last updated : 05/17/2022 tags: active-directory
By default, SharePoint Online and OneDrive have their own set of external user o
If you're notified that you don't have permissions to invite users, verify that your user account is authorized to invite external users under Azure Active Directory > User settings > External users > Manage external collaboration settings:
-![Screenshot showing the External Users settings](media/troubleshoot/external-user-settings.png)
+![Screenshot showing the External Users settings.](media/troubleshoot/external-user-settings.png)
If you've recently modified these settings or assigned the Guest Inviter role to a user, there might be a 15-60 minute delay before the changes take effect.
Common errors include:
This error can occur when you invite users whose organization uses Azure Active Directory, but the specific user's account doesn't exist (for example, the user doesn't exist in Azure AD contoso.com). The administrator of contoso.com may have a policy in place preventing users from being created. The user must check with their admin to determine if external users are allowed. The external user's admin may need to allow Email Verified users in their domain (see this [article](/powershell/module/msonline/set-msolcompanysettings) on allowing Email Verified Users).
-![Error stating the tenant doesn't allow email verified users](media/troubleshoot/allow-email-verified-users.png)
+![Screenshot of the error stating the tenant doesn't allow email verified users.](media/troubleshoot/allow-email-verified-users.png)
### External user doesn't exist already in a federated domain
As of November 18, 2019, guest users in your directory (defined as user accounts
## In an Azure US Government tenant, I can't invite a B2B collaboration guest user
-Within the Azure US Government cloud, B2B collaboration is currently only supported between tenants that are both within Azure US Government cloud and that both support B2B collaboration. If you invite a user in a tenant that isn't part of the Azure US Government cloud or that doesn't yet support B2B collaboration, you'll get an error. For details and limitations, see [Azure Active Directory Premium P1 and P2 Variations](../../azure-government/compare-azure-government-global-azure.md#azure-active-directory-premium-p1-and-p2).
+Within the Azure US Government cloud, B2B collaboration is enabled between tenants that are both in the Azure US Government cloud and that both support B2B collaboration. If you invite a user in a tenant that doesn't yet support B2B collaboration, you'll get an error. For details and limitations, see [Azure Active Directory Premium P1 and P2 Variations](../../azure-government/compare-azure-government-global-azure.md#azure-active-directory-premium-p1-and-p2).
+
+If you need to collaborate with an Azure AD organization that's outside of the Azure US Government cloud, you can use [Microsoft cloud settings (preview)](cross-cloud-settings.md) to enable B2B collaboration.
+
+## Invitation is blocked due to cross-tenant access policies
+
+When you try to invite a B2B collaboration user in another Microsoft Azure cloud, this error message will appear if B2B collaboration is supported between the two clouds but is blocked by cross-tenant access settings. The settings that are blocking collaboration could be either in the B2B collaboration user's home tenant or in your tenant. Check your cross-tenant access settings to make sure you've added the B2B collaboration user's home tenant to your Organizational settings and that your settings allow B2B collaboration with the user. Then make sure an admin in the user's tenant does the same.
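If you prefer to script the organizational settings check described above, the following minimal sketch adds the partner tenant to your cross-tenant access settings through the Microsoft Graph beta API. It assumes an app with the `Policy.ReadWrite.CrossTenantAccess` application permission; the IDs and secret are placeholders, and the partner tenant's admin still needs to configure the equivalent settings on their side.

```python
# Minimal sketch: add a partner tenant to cross-tenant access organizational settings
# through the Microsoft Graph beta API. Assumes Policy.ReadWrite.CrossTenantAccess;
# all IDs and secrets are placeholders.
import msal
import requests

TENANT_ID = "<your-tenant-id>"
PARTNER_TENANT_ID = "<partner-tenant-id>"

app = msal.ConfidentialClientApplication(
    "<app-client-id>",
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential="<client-secret>",
)
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])
headers = {"Authorization": f"Bearer {token['access_token']}"}

# Create an organizational (partner-specific) configuration for the partner tenant.
# If a configuration for this tenant already exists, skip this call and PATCH instead.
create = requests.post(
    "https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/partners",
    headers=headers,
    json={"tenantId": PARTNER_TENANT_ID},
)
create.raise_for_status()

# Optionally trust MFA and device claims coming from that tenant.
trust = requests.patch(
    f"https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/partners/{PARTNER_TENANT_ID}",
    headers=headers,
    json={
        "inboundTrust": {
            "isMfaAccepted": True,
            "isCompliantDeviceAccepted": True,
            "isHybridAzureADJoinedDeviceAccepted": True,
        }
    },
)
trust.raise_for_status()
```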
+
+## Invitation is blocked due to disabled Microsoft B2B Cross Cloud Worker application
+
+Rarely, you might see this message: "This action can't be completed because the Microsoft B2B Cross Cloud Worker application has been disabled in the invited user's tenant. Please ask the invited user's admin to re-enable it, then try again." This error means that the Microsoft B2B Cross Cloud Worker application has been disabled in the B2B collaboration user's home tenant. This app is typically enabled, but it might have been disabled by an admin in the user's home tenant, either through PowerShell or the portal (see [Disable how a user signs in](../manage-apps/disable-user-sign-in-portal.md)). An admin in the user's home tenant can re-enable the app through PowerShell or the Azure portal. In the portal, search for "Microsoft B2B Cross Cloud Worker" to find the app, select it, and then choose to re-enable it.
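As an alternative to the portal, an admin in the user's home tenant could re-enable the app with a script along these lines. This is a minimal sketch that assumes an app with the `Application.ReadWrite.All` permission and that the service principal's display name matches the one shown in the error message.

```python
# Minimal sketch: re-enable a disabled enterprise application (service principal)
# by display name. Assumes Application.ReadWrite.All; adjust the display name if
# the app is listed differently in your tenant.
import msal
import requests

app = msal.ConfidentialClientApplication(
    "<app-client-id>",
    authority="https://login.microsoftonline.com/<tenant-id>",
    client_credential="<client-secret>",
)
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])
headers = {"Authorization": f"Bearer {token['access_token']}"}

# Find the service principal named in the error message.
lookup = requests.get(
    "https://graph.microsoft.com/v1.0/servicePrincipals",
    headers=headers,
    params={"$filter": "displayName eq 'Microsoft B2B Cross Cloud Worker'"},
)
lookup.raise_for_status()
sp = lookup.json()["value"][0]

# Re-enable sign-in for the app.
enable = requests.patch(
    f"https://graph.microsoft.com/v1.0/servicePrincipals/{sp['id']}",
    headers=headers,
    json={"accountEnabled": True},
)
enable.raise_for_status()
```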
+
+## Redemption is blocked due to cross-tenant access settings
+
+A B2B collaboration user could see this message when they try to redeem a B2B collaboration invitation: "This invitation is blocked by cross-tenant access settings. Admins in both your organization and the inviter's organization must configure cross-tenant access settings to allow the invitation." This error can occur when cross-tenant policies are changed between the time the invitation was sent to the user and the time the user redeems it. Check your cross-tenant access settings to make sure B2B collaboration is properly configured, and make sure an admin in the user's tenant does the same.
## I receive the error that Azure AD can't find the aad-extensions-app in my tenant
active-directory What Is B2b https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/what-is-b2b.md
Previously updated : 05/09/2022 Last updated : 05/17/2022
Azure Active Directory (Azure AD) B2B collaboration is a feature within External Identities that lets you invite guest users to collaborate with your organization. With B2B collaboration, you can securely share your company's applications and services with external users, while maintaining control over your own corporate data. Work safely and securely with external partners, large or small, even if they don't have Azure AD or an IT department.
-![Diagram illustrating B2B collaboration](media/what-is-b2b/b2b-collaboration-overview.png)
+![Diagram illustrating B2B collaboration.](media/what-is-b2b/b2b-collaboration-overview.png)
A simple invitation and redemption process lets partners use their own credentials to access your company's resources. You can also enable self-service sign-up user flows to let external users sign up for apps or resources themselves. Once the external user has redeemed their invitation or completed sign-up, they're represented in your directory as a [user object](user-properties.md). B2B collaboration user objects are typically given a user type of "guest" and can be identified by the #EXT# extension in their user principal name. Developers can use Azure AD business-to-business APIs to customize the invitation process or write applications like self-service sign-up portals. For licensing and pricing information related to guest users, refer to [Azure Active Directory External Identities pricing](https://azure.microsoft.com/pricing/details/active-directory/external-identities/). - > [!IMPORTANT] > We've begun rolling out a change to turn on the email one-time passcode feature for all existing tenants and enable it by default for new tenants. We're enabling the email one-time passcode feature because it provides a seamless fallback authentication method for your guest users. However, if you don't want to allow this feature to turn on automatically, you can [disable it](one-time-passcode.md#disable-email-one-time-passcode). Soon, we'll stop creating new, unmanaged ("viral") Azure AD accounts and tenants during B2B collaboration invitation redemption.
With Azure AD B2B, the partner uses their own identity management solution, so t
- You don't need to manage external accounts or passwords. - You don't need to sync accounts or manage account lifecycles.
-## Manage external access with inbound and outbound settings
+## Manage collaboration with other organizations and clouds
+
+B2B collaboration is enabled by default, but comprehensive admin settings let you control your inbound and outbound B2B collaboration with external partners and organizations:
-B2B collaboration is enabled by default, but comprehensive admin settings let you control your B2B collaboration with external partners and organizations:
+- For B2B collaboration with other Azure AD organizations, use [cross-tenant access settings (preview)](cross-tenant-access-overview.md). Manage inbound and outbound B2B collaboration, and scope access to specific users, groups, and applications. Set a default configuration that applies to all external organizations, and then create individual, organization-specific settings as needed. Using cross-tenant access settings, you can also trust multifactor authentication (MFA) claims and device claims (compliant claims and hybrid Azure AD joined claims) from other Azure AD organizations.
-- For B2B collaboration with other Azure AD organizations, you can use [cross-tenant access settings](cross-tenant-access-overview.md) to manage inbound and outbound B2B collaboration and scope access to specific users, groups, and applications. You can set a default configuration that applies to all external organizations, and then create individual, organization-specific settings as needed. Using cross-tenant access settings, you can also trust multi-factor (MFA) and device claims (compliant claims and hybrid Azure AD joined claims) from other Azure AD organizations.
+- Use [external collaboration settings](external-collaboration-settings-configure.md) to define who can invite external users, allow or block B2B specific domains, and set restrictions on guest user access to your directory.
-- You can use [external collaboration settings](external-collaboration-settings-configure.md) to limit who can invite external users, allow or block B2B specific domains, and set restrictions on guest user access to your directory.
+- Use [Microsoft cloud settings (preview)](cross-cloud-settings.md) to establish mutual B2B collaboration between the Microsoft Azure global cloud and Microsoft Azure Government or Microsoft Azure China 21Vianet.
## Easily invite guest users from the Azure AD portal
As an administrator, you can easily add guest users to your organization in the
- Assign guest users to apps or groups. - Send an invitation email that contains a redemption link, or send a direct link to an app you want to share.
-![Screenshot showing the New Guest User invitation entry page](media/what-is-b2b/add-a-b2b-user-to-azure-portal.png)
+![Screenshot showing the New Guest User invitation entry page.](media/what-is-b2b/add-a-b2b-user-to-azure-portal.png)
- Guest users follow a few simple [redemption steps](redemption-experience.md) to sign in.
-![Screenshot showing the Review permissions page](media/what-is-b2b/consentscreen.png)
+![Screenshot showing the Review permissions page.](media/what-is-b2b/consentscreen.png)
## Allow self-service sign-up
With a self-service sign-up user flow, you can create a sign-up experience for e
You can also use [API connectors](api-connectors-overview.md) to integrate your self-service sign-up user flows with external cloud systems. You can connect with custom approval workflows, perform identity verification, validate user-provided information, and more.
-![Screenshot showing the user flows page](media/what-is-b2b/self-service-sign-up-user-flow-overview.png)
+![Screenshot showing the user flows page.](media/what-is-b2b/self-service-sign-up-user-flow-overview.png)
## Use policies to securely share your apps and services
You can use authentication and authorization policies to protect your corporate
- At the application level. - For specific guest users to protect corporate apps and data.
-![Screenshot showing the Conditional Access option](media/what-is-b2b/tutorial-mfa-policy-2.png)
--
+![Screenshot showing the Conditional Access option.](media/what-is-b2b/tutorial-mfa-policy-2.png)
## Let application and group owners manage their own guest users
You can delegate guest user management to application owners so that they can ad
- Administrators set up self-service app and group management. - Non-administrators use their [Access Panel](https://myapps.microsoft.com) to add guest users to applications or groups.
-![Screenshot showing the Access panel for a guest user](media/what-is-b2b/access-panel-manage-app.png)
+![Screenshot showing the Access panel for a guest user.](media/what-is-b2b/access-panel-manage-app.png)
## Customize the onboarding experience for B2B guest users
Bring your external partners on board in ways customized to your organization's
Azure AD supports external identity providers like Facebook, Microsoft accounts, Google, or enterprise identity providers. You can set up federation with identity providers so your external users can sign in with their existing social or enterprise accounts instead of creating a new account just for your application. Learn more about [identity providers for External Identities](identity-providers.md).
-![Screenshot showing the Identity providers page](media/what-is-b2b/identity-providers.png)
+![Screenshot showing the Identity providers page.](media/what-is-b2b/identity-providers.png)
## Integrate with SharePoint and OneDrive
-You can [enable integration with SharePoint and OneDrive](/sharepoint/sharepoint-azureb2b-integration) to share files, folders, list items, document libraries, and sites with people outside your organization, while using Azure B2B for authentication and management. The users you share resources with are typically added to your directory as guests, and permissions and groups work the same for these guests as they do for internal users. When enabling integration with SharePoint and OneDrive, you'll also enable the [email one-time passcode](one-time-passcode.md) feature in Azure AD B2B to serve as a fallback authentication method.
+You can [enable integration with SharePoint and OneDrive](/sharepoint/sharepoint-azureb2b-integration) to share files, folders, list items, document libraries, and sites with people outside your organization, while using Azure B2B for authentication and management. The users you share resources with are typically added to your directory as guests, and permissions and groups work the same for these guests as they do for internal users. When enabling integration with SharePoint and OneDrive, you'll also enable the [email one-time passcode](one-time-passcode.md) feature in Azure AD B2B to serve as a fallback authentication method.
![Screenshot of the email one-time-passcode setting.](media/what-is-b2b/enable-email-otp-options.png) - ## Next steps - [External Identities pricing](external-identities-pricing.md) - [Add B2B collaboration guest users in the portal](add-users-administrator.md)-- [Understand the invitation redemption process](redemption-experience.md)
+- [Understand the invitation redemption process](redemption-experience.md)
active-directory Active Directory Groups Membership Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-groups-membership-azure-portal.md
This article helps you to add and remove a group from another group using Azure
You can add an existing Security group to another existing Security group (also known as nested groups), creating a member group (subgroup) and a parent group. The member group inherits the attributes and properties of the parent group, saving you configuration time. >[!Important]
->We don't currently support:<ul><li>Adding groups to a group synced with on-premises Active Directory.</li><li>Adding Security groups to Microsoft 365 groups.</li><li>Adding Microsoft 365 groups to Security groups or other Microsoft 365 groups.</li><li>Assigning apps to nested groups.</li><li>Applying licenses to nested groups.</li><li>Adding distribution groups in nesting scenarios.</li><li> Adding security groups as members of mail-enabled security groups</li></ul>
+>We don't currently support:<ul><li>Adding groups to a group synced with on-premises Active Directory.</li><li>Adding Security groups to Microsoft 365 groups.</li><li>Adding Microsoft 365 groups to Security groups or other Microsoft 365 groups.</li><li>Assigning apps to nested groups.</li><li>Applying licenses to nested groups.</li><li>Adding distribution groups in nesting scenarios.</li><li>Adding security groups as members of mail-enabled security groups.</li><li>Adding groups as members of a role-assignable group.</li></ul>
### To add a group as a member of another group
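As a programmatic alternative to the portal steps, the following minimal sketch nests one security group inside another with Microsoft Graph. It assumes an app with the `GroupMember.ReadWrite.All` permission and uses placeholder object IDs; the unsupported scenarios listed above still apply.

```python
# Minimal sketch: add one security group as a member of another (nested groups)
# with Microsoft Graph. Assumes GroupMember.ReadWrite.All; the group object IDs
# are placeholders.
import msal
import requests

PARENT_GROUP_ID = "<parent-group-object-id>"
MEMBER_GROUP_ID = "<member-group-object-id>"

app = msal.ConfidentialClientApplication(
    "<app-client-id>",
    authority="https://login.microsoftonline.com/<tenant-id>",
    client_credential="<client-secret>",
)
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

response = requests.post(
    f"https://graph.microsoft.com/v1.0/groups/{PARENT_GROUP_ID}/members/$ref",
    headers={"Authorization": f"Bearer {token['access_token']}"},
    json={"@odata.id": f"https://graph.microsoft.com/v1.0/directoryObjects/{MEMBER_GROUP_ID}"},
)
response.raise_for_status()  # Returns 204 No Content on success.
```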
active-directory Add Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/add-custom-domain.md
After you create your directory, you can add your custom domain name.
>[!IMPORTANT] >You must include *.com*, *.net*, or any other top-level extension for this to work properly.
+ >
+ >When adding a custom domain, the Password Policy values will be inherited from the initial domain.
The unverified domain is added. The **contoso.com** page appears showing your DNS information. Save this information. You need it later to create a TXT record to configure DNS.
If Azure AD can't verify a custom domain name, try the following suggestions:
- Manage your domain name information in Azure AD. For more information, see [Managing custom domain names](../enterprise-users/domains-manage.md). -- If you have on-premises versions of Windows Server that you want to use alongside Azure Active Directory, see [Integrate your on-premises directories with Azure Active Directory](../hybrid/whatis-hybrid-identity.md).
+- If you have on-premises versions of Windows Server that you want to use alongside Azure Active Directory, see [Integrate your on-premises directories with Azure Active Directory](../hybrid/whatis-hybrid-identity.md).
active-directory Concept Fundamentals Block Legacy Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/concept-fundamentals-block-legacy-authentication.md
Today, the majority of all compromising sign-in attempts come from legacy authen
Before you can block legacy authentication in your directory, you need to first understand if your users have apps that use legacy authentication and how it affects your overall directory. Azure AD sign-in logs can be used to understand if you're using legacy authentication.
-1. Navigate to the **Azure portal** > **Azure Active Directory** > **Sign-ins**.
+1. Navigate to the **Azure portal** > **Azure Active Directory** > **Sign-in logs**.
1. Add the **Client App** column if it is not shown by clicking on **Columns** > **Client App**. 1. Filter by **Client App** > check all the **Legacy Authentication Clients** options presented. 1. Filter by **Status** > **Success**.
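You can run the same check programmatically against the sign-in logs exposed through Microsoft Graph. The following minimal sketch assumes an app with the `AuditLog.Read.All` permission and treats anything other than browser and modern desktop or mobile client sign-ins as worth reviewing; the exact `clientAppUsed` values vary by tenant.

```python
# Minimal sketch: summarize recent sign-ins by client app to spot legacy authentication.
# Assumes AuditLog.Read.All; only the first page of results is read here, so follow
# @odata.nextLink for a complete picture.
from collections import Counter
from datetime import datetime, timedelta
import msal
import requests

app = msal.ConfidentialClientApplication(
    "<app-client-id>",
    authority="https://login.microsoftonline.com/<tenant-id>",
    client_credential="<client-secret>",
)
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

since = (datetime.utcnow() - timedelta(days=7)).strftime("%Y-%m-%dT%H:%M:%SZ")
response = requests.get(
    "https://graph.microsoft.com/v1.0/auditLogs/signIns",
    headers={"Authorization": f"Bearer {token['access_token']}"},
    params={"$filter": f"createdDateTime ge {since}", "$top": "500"},
)
response.raise_for_status()

MODERN_CLIENTS = {"Browser", "Mobile Apps and Desktop clients"}
counts = Counter(s.get("clientAppUsed", "Unknown") for s in response.json()["value"])

for client, count in counts.most_common():
    flag = "" if client in MODERN_CLIENTS else "  <-- review: possible legacy authentication"
    print(f"{client}: {count}{flag}")
```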
active-directory Protect M365 From On Premises Attacks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/protect-m365-from-on-premises-attacks.md
Title: Protecting Microsoft 365 from on-premises attacks
-description: Guidance about how to ensure an on-premises attack doesn't affect Microsoft 365.
+description: Learn how to configure your systems to help protect your Microsoft 365 cloud environment from on-premises compromise.
- Previously updated : 12/22/2020+ Last updated : 04/29/2022 -+
+ - it-pro
+ - seodec18
+ - kr2b-contr-experiment
# Protecting Microsoft 365 from on-premises attacks
-Many customers connect their private corporate networks to Microsoft 365 to benefit their users, devices, and applications. However, these private networks can be compromised in many well-documented ways. Because Microsoft 365 acts as a sort of nervous system for many organizations, it's critical to protect it from compromised on-premises infrastructure.
+Many customers connect their private corporate networks to Microsoft 365 to benefit their users, devices, and applications. However, these private networks can be compromised in many well-documented ways. Microsoft 365 acts as a sort of nervous system for many organizations. It's critical to protect it from compromised on-premises infrastructure.
-This article shows you how to configure your systems to protect your Microsoft 365 cloud environment from on-premises compromise. We focus primarily on:
+This article shows you how to configure your systems to help protect your Microsoft 365 cloud environment from on-premises compromise, including the following elements:
-- Azure Active Directory (Azure AD) tenant configuration settings.-- How Azure AD tenants can be safely connected to on-premises systems.-- The tradeoffs required to operate your systems in ways that protect your cloud systems from on-premises compromise.
+- Azure Active Directory (Azure AD) tenant configuration settings
+- How Azure AD tenants can be safely connected to on-premises systems
+- The tradeoffs required to operate your systems in ways that protect your cloud systems from on-premises compromise
-We strongly recommend you implement this guidance to secure your Microsoft 365 cloud environment.
+Microsoft strongly recommends that you implement this guidance.
-> [!NOTE]
-> This article was initially published as a blog post. It has been moved to its current location for longevity and maintenance.
->
-> To create an offline version of this article, use your browser's print-to-PDF functionality. Check back here frequently for updates.
+## Threat sources in on-premises environments
-## Primary threat vectors from compromised on-premises environments
+Your Microsoft 365 cloud environment benefits from an extensive monitoring and security infrastructure. Microsoft 365 uses machine learning and human intelligence to look across worldwide traffic. It can rapidly detect attacks and allow you to reconfigure nearly in real time.
-Your Microsoft 365 cloud environment benefits from an extensive monitoring and security infrastructure. Using machine learning and human intelligence, Microsoft 365 looks across worldwide traffic. It can rapidly detect attacks and allow you to reconfigure nearly in real time.
-
-In hybrid deployments that connect on-premises infrastructure to Microsoft 365, many organizations delegate trust to on-premises components for critical authentication and directory object state management decisions. Unfortunately, if the on-premises environment is compromised, these trust relationships become an attacker's opportunities to compromise your Microsoft 365 environment.
+Hybrid deployments can connect on-premises infrastructure to Microsoft 365. In such deployments, many organizations delegate trust to on-premises components for critical authentication and directory object state management decisions. If the on-premises environment is compromised, these trust relationships become an attacker's opportunities to compromise your Microsoft 365 environment.
The two primary threat vectors are *federation trust relationships* and *account synchronization.* Both vectors can grant an attacker administrative access to your cloud.
-* **Federated trust relationships**, such as SAML authentication, are used to authenticate to Microsoft 365 through your on-premises identity infrastructure. If a SAML token-signing certificate is compromised, federation allows anyone who has that certificate to impersonate any user in your cloud. *We recommend you disable federation trust relationships for authentication to Microsoft 365 when possible.*
-
-* **Account synchronization** can be used to modify privileged users (including their credentials) or groups that have administrative privileges in Microsoft 365. *We recommend you ensure that synchronized objects hold no privileges beyond a user in Microsoft 365,* either directly or through inclusion in trusted roles or groups. Ensure these objects have no direct or nested assignment in trusted cloud roles or groups.
+- **Federated trust relationships**, such as Security Assertions Markup Language (SAML) authentication, are used to authenticate to Microsoft 365 through your on-premises identity infrastructure. If a SAML token-signing certificate is compromised, federation allows anyone who has that certificate to impersonate any user in your cloud.
-## Protecting Microsoft 365 from on-premises compromise
+ We recommend that you disable federation trust relationships for authentication to Microsoft 365 when possible.
-To address the threat vectors outlined earlier, we recommend you adhere to the principles illustrated in the following diagram:
+- **Account synchronization** can be used to modify privileged users, including their credentials, or groups that have administrative privileges in Microsoft 365.
-![Reference architecture for protecting Microsoft 365.](media/protect-m365/protect-m365-principles.png)
+ We recommend that you ensure that synchronized objects hold no privileges beyond a user in Microsoft 365. You can control privileges either directly or through inclusion in trusted roles or groups. Ensure these objects have no direct or nested assignment in trusted cloud roles or groups.
-1. **Fully isolate your Microsoft 365 administrator accounts.** They should be:
+## Protecting Microsoft 365 from on-premises compromise
- * Mastered in Azure AD.
+To address the threats described above, we recommend you adhere to the principles illustrated in the following diagram:
- * Authenticated by using multifactor authentication.
+![Reference architecture for protecting Microsoft 365, as described in the following list.](media/protect-m365/protect-m365-principles.png)
- * Secured by Azure AD Conditional Access.
+1. **Fully isolate your Microsoft 365 administrator accounts.** They should be:
- * Accessed only by using Azure-managed workstations.
+ - Mastered in Azure AD.
+ - Authenticated by using multifactor authentication.
+ - Secured by Azure AD Conditional Access.
+ - Accessed only by using Azure-managed workstations.
- These administrator accounts are restricted-use accounts. *No on-premises accounts should have administrative privileges in Microsoft 365.*
+ These administrator accounts are restricted-use accounts. No on-premises accounts should have administrative privileges in Microsoft 365.
- For more information, see the [overview of Microsoft 365 administrator roles](/microsoft-365/admin/add-users/about-admin-roles). Also see [Roles for Microsoft 365 in Azure AD](../roles/m365-workload-docs.md).
+ For more information, see [About admin roles](/microsoft-365/admin/add-users/about-admin-roles). Also, see [Roles for Microsoft 365 in Azure AD](../roles/m365-workload-docs.md).
1. **Manage devices from Microsoft 365.** Use Azure AD join and cloud-based mobile device management (MDM) to eliminate dependencies on your on-premises device management infrastructure. These dependencies can compromise device and security controls. 1. **Ensure no on-premises account has elevated privileges to Microsoft 365.** Some accounts access on-premises applications that require NTLM, LDAP, or Kerberos authentication. These accounts must be in the organization's on-premises identity infrastructure. Ensure that these accounts, including service accounts, aren't included in privileged cloud roles or groups. Also ensure that changes to these accounts can't affect the integrity of your cloud environment. Privileged on-premises software must not be capable of affecting Microsoft 365 privileged accounts or roles.
-1. **Use Azure AD cloud authentication** to eliminate dependencies on your on-premises credentials. Always use strong authentication, such as Windows Hello, FIDO, Microsoft Authenticator, or Azure AD multifactor authentication.
+1. **Use Azure AD cloud authentication to eliminate dependencies on your on-premises credentials.** Always use strong authentication, such as Windows Hello, FIDO, Microsoft Authenticator, or Azure AD multifactor authentication.
## Specific security recommendations
-The following sections provide specific guidance about how to implement the principles described earlier.
+The following sections provide guidance about how to implement the principles described above.
### Isolate privileged identities In Azure AD, users who have privileged roles, such as administrators, are the root of trust to build and manage the rest of the environment. Implement the following practices to minimize the effects of a compromise.
-* Use cloud-only accounts for Azure AD and Microsoft 365 privileged roles.
+- Use cloud-only accounts for Azure AD and Microsoft 365 privileged roles.
-* Deploy [privileged access devices](/security/compass/privileged-access-devices#device-roles-and-profiles) for privileged access to manage Microsoft 365 and Azure AD.
+- Deploy privileged access devices for privileged access to manage Microsoft 365 and Azure AD. See [Device roles and profiles](/security/compass/privileged-access-devices#device-roles-and-profiles).
-* Deploy [Azure AD Privileged Identity Management](../privileged-identity-management/pim-configure.md) (PIM) for just-in-time (JIT) access to all human accounts that have privileged roles. Require strong authentication to activate roles.
+ Deploy Azure AD Privileged Identity Management (PIM) for just-in-time access to all human accounts that have privileged roles. Require strong authentication to activate roles. See [What is Azure AD Privileged Identity Management](../privileged-identity-management/pim-configure.md).
-* Provide administrative roles that allow the [least privilege necessary to do required tasks](../roles/delegate-by-task.md).
+- Provide administrative roles that allow the least privilege necessary to do required tasks. See [Least privileged roles by task in Azure Active Directory](../roles/delegate-by-task.md).
-* To enable a rich role assignment experience that includes delegation and multiple roles at the same time, consider using Azure AD security groups or Microsoft 365 Groups. These groups are collectively called *cloud groups*. Also [enable role-based access control](../roles/groups-assign-role.md). You can use [administrative units](../roles/administrative-units.md) to restrict the scope of roles to a portion of the organization.
+- To enable a rich role assignment experience that includes delegation and multiple roles at the same time, consider using Azure AD security groups or Microsoft 365 Groups. These groups are collectively called *cloud groups*.
-* Deploy [emergency access accounts](../roles/security-emergency-access.md). Do *not* use on-premises password vaults to store credentials.
+ Also, enable role-based access control. See [Assign Azure AD roles to groups](../roles/groups-assign-role.md). You can use administrative units to restrict the scope of roles to a portion of the organization. See [Administrative units in Azure Active Directory](../roles/administrative-units.md).
-For more information, see [Securing privileged access](/security/compass/overview). Also see [Secure access practices for administrators in Azure AD](../roles/security-planning.md).
+- Deploy emergency access accounts. Do *not* use on-premises password vaults to store credentials. See [Manage emergency access accounts in Azure AD](../roles/security-emergency-access.md).
-### Use cloud authentication
+For more information, see [Securing privileged access](/security/compass/overview). Also, see [Secure access practices for administrators in Azure AD](../roles/security-planning.md).
+
+### Use cloud authentication
Credentials are a primary attack vector. Implement the following practices to make credentials more secure:
-* [Deploy passwordless authentication](../authentication/howto-authentication-passwordless-deployment.md). Reduce the use of passwords as much as possible by deploying passwordless credentials. These credentials are managed and validated natively in the cloud. Choose from these authentication methods:
+- **Deploy passwordless authentication**. Reduce the use of passwords as much as possible by deploying passwordless credentials. These credentials are managed and validated natively in the cloud. For more information, see [Plan a passwordless authentication deployment in Azure Active Directory](../authentication/howto-authentication-passwordless-deployment.md).
- * [Windows Hello for business](/windows/security/identity-protection/hello-for-business/passwordless-strategy)
+ Choose from these authentication methods:
- * [The Microsoft Authenticator app](../authentication/howto-authentication-passwordless-phone.md)
+ - [Windows Hello for business](/windows/security/identity-protection/hello-for-business/passwordless-strategy)
+ - [The Microsoft Authenticator app](../authentication/howto-authentication-passwordless-phone.md)
+ - [FIDO2 security keys](../authentication/howto-authentication-passwordless-security-key-windows.md)
- * [FIDO2 security keys](../authentication/howto-authentication-passwordless-security-key-windows.md)
+- **Deploy multifactor authentication**. For more information, see [Plan an Azure Active Directory Multi-Factor Authentication deployment](../authentication/howto-mfa-getstarted.md).
-* [Deploy multifactor authentication](../authentication/howto-mfa-getstarted.md). Provision
- [multiple strong credentials by using Azure AD multifactor authentication](../fundamentals/resilience-in-credentials.md). That way, access to cloud resources will require a credential that's managed in Azure AD in addition to an on-premises password that can be manipulated. For more information, see [Create a resilient access control management strategy by using Azure AD](./resilience-overview.md).
+ Provision multiple strong credentials by using Azure AD multifactor authentication. That way, access to cloud resources requires an Azure AD managed credential in addition to an on-premises password. For more information, see [Build resilience with credential management](../fundamentals/resilience-in-credentials.md) and [Create a resilient access control management strategy by using Azure AD](./resilience-overview.md).
### Limitations and tradeoffs
-* Hybrid account password management requires hybrid components such as password protection agents and password writeback agents. If your on-premises infrastructure is compromised, attackers can control the machines on which these agents reside. This vulnerability won't compromise your cloud infrastructure. But your cloud accounts won't protect these components from on-premises compromise.
+Hybrid account password management requires hybrid components such as password protection agents and password writeback agents. If your on-premises infrastructure is compromised, attackers can control the machines on which these agents reside. This vulnerability won't compromise your cloud infrastructure. But your cloud accounts won't protect these components from on-premises compromise.
-* On-premises accounts synced from Active Directory are marked to never expire in Azure AD. This setting is usually mitigated by on-premises Active Directory password settings. However, if your on-premises instance of Active Directory is compromised and synchronization is disabled, you must set the [EnforceCloudPasswordPolicyForPasswordSyncedUsers](../hybrid/how-to-connect-password-hash-synchronization.md) option to force password changes.
+On-premises accounts synced from Active Directory are marked to never expire in Azure AD. This setting is usually mitigated by on-premises Active Directory password settings. If your instance of Active Directory is compromised and synchronization is disabled, set the [EnforceCloudPasswordPolicyForPasswordSyncedUsers](../hybrid/how-to-connect-password-hash-synchronization.md) option to force password changes.
## Provision user access from the cloud *Provisioning* refers to the creation of user accounts and groups in applications or identity providers.
-![Diagram of provisioning architecture.](media/protect-m365/protect-m365-provision.png)
+![Diagram of provisioning architecture shows the interaction of Azure A D with Cloud HR, Azure A D B 2 B, Azure app provisioning, and group-based licensing.](media/protect-m365/protect-m365-provision.png)
We recommend the following provisioning methods:
-* **Provision from cloud HR apps to Azure AD**: This provisioning enables an on-premises compromise to be isolated, without disrupting your joiner-mover-leaver cycle from your cloud HR apps to Azure AD.
-
-* **Cloud applications**: Where possible, deploy [Azure AD app provisioning](../app-provisioning/user-provisioning.md) as opposed to on-premises provisioning solutions. This method protects some of your software-as-a-service (SaaS) apps from being affected by malicious hacker profiles in on-premises breaches.
+- **Provision from cloud HR apps to Azure AD.** This provisioning enables an on-premises compromise to be isolated. This isolation doesn't disrupt your joiner-mover-leaver cycle from your cloud HR apps to Azure AD.
+- **Cloud applications.** Where possible, deploy Azure AD app provisioning as opposed to on-premises provisioning solutions. This method protects some of your software as a service (SaaS) apps from malicious hacker profiles in on-premises breaches. For more information, see [What is app provisioning in Azure Active Directory](../app-provisioning/user-provisioning.md).
+- **External identities.** Use Azure AD B2B collaboration to reduce the dependency on on-premises accounts for external collaboration with partners, customers, and suppliers. Carefully evaluate any direct federation with other identity providers. For more information, see [B2B collaboration overview](../external-identities/what-is-b2b.md).
-* **External identities**: Use [Azure AD B2B collaboration](../external-identities/what-is-b2b.md) This method reduces the dependency on on-premises accounts for external collaboration with partners, customers, and suppliers. Carefully evaluate any direct federation with other identity providers. We recommend limiting B2B guest accounts in the following ways:
+ We recommend limiting B2B guest accounts in the following ways:
- * Limit guest access to browsing groups and other properties in the directory. Use the external collaboration settings to restrict guests' ability to read groups they're not members of.
+ - Limit guest access to browsing groups and other properties in the directory. Use the external collaboration settings to restrict guests' ability to read groups they're not members of.
+ - Block access to the Azure portal. You can make rare necessary exceptions. Create a Conditional Access policy that includes all guests and external users. Then implement a policy to block access. See [Conditional Access](../conditional-access/concept-conditional-access-cloud-apps.md).
- * Block access to the Azure portal. You can make rare necessary exceptions. Create a Conditional Access policy that includes all guests and external users. Then [implement a policy to block access](../conditional-access/concept-conditional-access-cloud-apps.md).
+- **Disconnected forests.** Use Azure AD cloud provisioning to connect to disconnected forests. This approach eliminates the need to establish cross-forest connectivity or trusts, which can broaden the effect of an on-premises breach. For more information, see [What is Azure AD Connect cloud sync](../cloud-sync/what-is-cloud-sync.md).
-* **Disconnected forests**: Use [Azure AD cloud provisioning](../cloud-sync/what-is-cloud-sync.md). This method enables you to connect to disconnected forests, eliminating the need to establish cross-forest connectivity or trusts, which can broaden the effect of an on-premises breach.
-
### Limitations and tradeoffs When used to provision hybrid accounts, the Azure-AD-from-cloud-HR system relies on on-premises synchronization to complete the data flow from Active Directory to Azure AD. If synchronization is interrupted, new employee records won't be available in Azure AD.
When used to provision hybrid accounts, the Azure-AD-from-cloud-HR system relies
Cloud groups allow you to decouple your collaboration and access from your on-premises infrastructure.
-* **Collaboration**: Use Microsoft 365 Groups and Microsoft Teams for modern collaboration. Decommission on-premises distribution lists, and [upgrade distribution lists to Microsoft 365 Groups in Outlook](/office365/admin/manage/upgrade-distribution-lists).
-
-* **Access**: Use Azure AD security groups or Microsoft 365 Groups to authorize access to applications in Azure AD.
-
-* **Office 365 licensing**: Use group-based licensing to provision to Office 365 by using cloud-only groups. This method decouples control of group membership from on-premises infrastructure.
+- **Collaboration**. Use Microsoft 365 Groups and Microsoft Teams for modern collaboration. Decommission on-premises distribution lists, and [upgrade distribution lists to Microsoft 365 Groups in Outlook](/office365/admin/manage/upgrade-distribution-lists).
+- **Access**. Use Azure AD security groups or Microsoft 365 Groups to authorize access to applications in Azure AD.
+- **Office 365 licensing**. Use group-based licensing to provision to Office 365 by using cloud-only groups. This method decouples control of group membership from on-premises infrastructure.
Owners of groups that are used for access should be considered privileged identities to avoid membership takeover in an on-premises compromise. A takeover would include direct manipulation of group membership on-premises or manipulation of on-premises attributes that can affect dynamic group membership in Microsoft 365.
Owners of groups that are used for access should be considered privileged identi
Use Azure AD capabilities to securely manage devices. -- **Use Windows 10 workstations**: [Deploy Azure AD joined](../devices/azureadjoin-plan.md) devices with MDM policies. Enable [Windows Autopilot](/mem/autopilot/windows-autopilot) for a fully automated provisioning experience.-
- - Deprecate machines that run Windows 8.1 and earlier.
-
- - Don't deploy server OS machines as workstations.
+Deploy Azure AD joined Windows 10 workstations with mobile device management policies. Enable Windows Autopilot for a fully automated provisioning experience. See [Plan your Azure AD join implementation](../devices/azureadjoin-plan.md) and [Windows Autopilot](/mem/autopilot/windows-autopilot).
- - Use [Microsoft Intune](https://www.microsoft.com/microsoft-365/enterprise-mobility-security/microsoft-intune) as the source of authority for all device management workloads.
+- **Use Windows 10 workstations**.
+ - Deprecate machines that run Windows 8.1 and earlier.
+ - Don't deploy computers that have server operating systems as workstations.
+- **Use Microsoft Endpoint Manager as the authority for all device management workloads.** See [Microsoft Endpoint Manager](https://www.microsoft.com/security/business/microsoft-endpoint-manager).
+- **Deploy privileged access devices.** For more information, see [Device roles and profiles](/security/compass/privileged-access-devices#device-roles-and-profiles).
-- [**Deploy privileged access devices**](/security/compass/privileged-access-devices#device-roles-and-profiles):
- Use privileged access to manage Microsoft 365 and Azure AD as part of a complete approach to [Securing privileged access](/security/compass/overview).
+### Workloads, applications, and resources
-## Workloads, applications, and resources
+- **On-premises single-sign-on (SSO) systems**
-- **On-premises single-sign-on (SSO) systems**
+ Deprecate any on-premises federation and web access management infrastructure. Configure applications to use Azure AD.
- Deprecate any on-premises federation and web access management infrastructure. Configure applications to use Azure AD.
+- **SaaS and line-of-business (LOB) applications that support modern authentication protocols**
-- **SaaS and line-of-business (LOB) applications that support modern authentication protocols**
+ Use Azure AD for SSO. The more apps you configure to use Azure AD for authentication, the less risk in an on-premises compromise. For more information, see [What is single sign-on in Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- [Use Azure AD for SSO](../manage-apps/what-is-single-sign-on.md). The more apps you configure to use Azure AD for authentication, the less risk in an on-premises compromise.
+- **Legacy applications**
+ You can enable authentication, authorization, and remote access to legacy applications that don't support modern authentication. Use [Azure AD Application Proxy](../app-proxy/application-proxy.md). Or, enable them through a network or application delivery controller solution by using secure hybrid access partner integrations. See [Secure legacy apps with Azure Active Directory](../manage-apps/secure-hybrid-access.md).
-* **Legacy applications**
+ Choose a VPN vendor that supports modern authentication. Integrate its authentication with Azure AD. In an on-premises compromise, you can use Azure AD to disable or block access by disabling the VPN.
- * You can enable authentication, authorization, and remote access to legacy applications that don't support modern authentication. Use [Azure AD Application Proxy](../app-proxy/application-proxy.md). You can also enable them through a network or application delivery controller solution by using [secure hybrid access partner integrations](../manage-apps/secure-hybrid-access.md).
+- **Application and workload servers**
- * Choose a VPN vendor that supports modern authentication. Integrate its authentication with Azure AD. In an on-premises compromise, you can use Azure AD to disable or block access by disabling the VPN.
+ Applications or resources that required servers can be migrated to Azure infrastructure as a service (IaaS). Use Azure AD Domain Services (Azure AD DS) to decouple trust and dependency on on-premises instances of Active Directory. To achieve this decoupling, make sure virtual networks used for Azure AD DS don't have a connection to corporate networks. See [Azure AD Domain Services](../../active-directory-domain-services/overview.md).
-* **Application and workload servers**
-
- * Applications or resources that required servers can be migrated to Azure infrastructure as a service (IaaS). Use [Azure AD Domain Services](../../active-directory-domain-services/overview.md) (Azure AD DS) to decouple trust and dependency on on-premises instances of Active Directory. To achieve this decoupling, make sure virtual networks used for Azure AD DS don't have a connection to corporate networks.
-
- * Follow the guidance for [credential tiering](/security/compass/privileged-access-access-model#ADATM_BM). Application servers are typically considered tier-1 assets.
+ Use credential tiering. Application servers are typically considered tier-1 assets. For more information, see [Enterprise access model](/security/compass/privileged-access-access-model#ADATM_BM).
## Conditional Access policies Use Azure AD Conditional Access to interpret signals and use them to make authentication decisions. For more information, see the [Conditional Access deployment plan](../conditional-access/plan-conditional-access.md).
-* Use Conditional Access to [block legacy authentication protocols](../conditional-access/howto-conditional-access-policy-block-legacy.md) whenever possible. Additionally, disable legacy authentication protocols at the application level by using an application-specific configuration.
+- Use Conditional Access to block legacy authentication protocols whenever possible; a minimal policy sketch appears at the end of this section. Additionally, disable legacy authentication protocols at the application level by using an application-specific configuration. See [Block legacy authentication](../conditional-access/howto-conditional-access-policy-block-legacy.md).
+
+ For more information, see [Legacy authentication protocols](../fundamentals/auth-sync-overview.md#legacy-authentication-protocols). Or see specific details for [Exchange Online](/exchange/clients-and-mobile-in-exchange-online/disable-basic-authentication-in-exchange-online#how-basic-authentication-works-in-exchange-online) and [SharePoint Online](/powershell/module/sharepoint-online/set-spotenant).
- For more information, see [Legacy authentication protocols](../fundamentals/auth-sync-overview.md). Or see specific details for [Exchange Online](/exchange/clients-and-mobile-in-exchange-online/disable-basic-authentication-in-exchange-online#how-basic-authentication-works-in-exchange-online) and [SharePoint Online](/powershell/module/sharepoint-online/set-spotenant).
+- Implement the recommended identity and device access configurations. See [Common Zero Trust identity and device access policies](/microsoft-365/security/office-365-security/identity-access-policies).
-* Implement the recommended [identity and device access configurations](/microsoft-365/security/office-365-security/identity-access-policies).
+- If you're using a version of Azure AD that doesn't include Conditional Access, use [Security defaults in Azure AD](../fundamentals/concept-fundamentals-security-defaults.md).
-* If you're using a version of Azure AD that doesn't include Conditional Access, ensure that you're using the [Azure AD security defaults](../fundamentals/concept-fundamentals-security-defaults.md).
+ For more information about Azure AD feature licensing, see the [Azure AD pricing guide](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
- For more information about Azure AD feature licensing, see the [Azure AD pricing guide](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
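The following minimal sketch creates the legacy-authentication blocking policy mentioned at the start of this section through Microsoft Graph. It assumes an app with the `Policy.ReadWrite.ConditionalAccess` and `Application.Read.All` permissions, uses placeholder IDs, and creates the policy in report-only mode so you can assess impact before enforcing it.

```python
# Minimal sketch: create a Conditional Access policy that blocks legacy authentication
# clients. Assumes Policy.ReadWrite.ConditionalAccess and Application.Read.All; the
# policy starts in report-only mode, and the excluded account is a placeholder for
# your emergency access (break-glass) account.
import msal
import requests

app = msal.ConfidentialClientApplication(
    "<app-client-id>",
    authority="https://login.microsoftonline.com/<tenant-id>",
    client_credential="<client-secret>",
)
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

policy = {
    "displayName": "Block legacy authentication (report-only sketch)",
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "users": {
            "includeUsers": ["All"],
            "excludeUsers": ["<emergency-access-account-object-id>"],
        },
        "applications": {"includeApplications": ["All"]},
        # Exchange ActiveSync and "other clients" cover the legacy protocols.
        "clientAppTypes": ["exchangeActiveSync", "other"],
    },
    "grantControls": {"operator": "OR", "builtInControls": ["block"]},
}

response = requests.post(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {token['access_token']}"},
    json=policy,
)
response.raise_for_status()
print(response.json()["id"])
```

Switch `state` to `enabled` once the report-only results confirm that no legitimate sign-ins would be blocked.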
+## Monitor
-## Monitor
+After you configure your environment to protect your Microsoft 365 from an on-premises compromise, proactively monitor the environment. For more information, see [What is Azure Active Directory monitoring](../reports-monitoring/overview-monitoring.md).
-After you configure your environment to protect your Microsoft 365
-from an on-premises compromise, [proactively monitor](../reports-monitoring/overview-monitoring.md)
-the environment.
### Scenarios to monitor Monitor the following key scenarios, in addition to any scenarios specific to your organization. For example, you should proactively monitor access to your business-critical applications and resources.
-* **Suspicious activity**
-
- Monitor all [Azure AD risk events](../identity-protection/overview-identity-protection.md#risk-detection-and-remediation) for suspicious activity. [Azure AD Identity Protection](../identity-protection/overview-identity-protection.md) is natively integrated with Microsoft Defender for Cloud.
-
- Define the network [named locations](../conditional-access/location-condition.md) to avoid noisy detections on location-based signals.
-* **User and Entity Behavioral Analytics (UEBA) alerts**
-
- Use UEBA to get insights on anomaly detection.
+- **Suspicious activity**
- * Microsoft Defender for Cloud Apps provides [UEBA in the cloud](/cloud-app-security/tutorial-ueba).
+ Monitor all Azure AD risk events for suspicious activity. See [Risk detection and remediation](../identity-protection/overview-identity-protection.md#risk-detection-and-remediation). Azure AD Identity Protection is natively integrated with Microsoft Defender for Cloud. See [What is Identity Protection](../identity-protection/overview-identity-protection.md).
- * You can [integrate on-premises UEBA from Azure Advanced Threat Protection (ATP)](/defender-for-identity/install-step2). Defender for Cloud Apps reads signals from Azure AD Identity Protection.
+ Define the network named locations to avoid noisy detections on location-based signals. See [Using the location condition in a Conditional Access policy](../conditional-access/location-condition.md).
-* **Emergency access accounts activity**
+- **User and Entity Behavioral Analytics (UEBA) alerts**
- Monitor any access that uses [emergency access accounts](../roles/security-emergency-access.md). Create alerts for investigations. This monitoring must include:
+ Use UEBA to get insights on anomaly detection. Microsoft Defender for Cloud Apps provides UEBA in the cloud. See [Investigate risky users](/cloud-app-security/tutorial-ueba).
- * Sign-ins.
+ You can integrate on-premises UEBA from Azure Advanced Threat Protection (ATP). Microsoft Defender for Cloud Apps reads signals from Azure AD Identity Protection. See [Connect to your Active Directory Forest](/defender-for-identity/install-step2).
- * Credential management.
+- **Emergency access accounts activity**
- * Any updates on group memberships.
+ Monitor any access that uses emergency access accounts. See [Manage emergency access accounts in Azure AD](../roles/security-emergency-access.md). Create alerts for investigations. This monitoring must include the following actions:
- * Application assignments.
+ - Sign-ins
+ - Credential management
+ - Any updates on group memberships
+ - Application assignments
-* **Privileged role activity**
+- **Privileged role activity**
- Configure and review security [alerts generated by Azure AD Privileged Identity Management (PIM)](../privileged-identity-management/pim-how-to-configure-security-alerts.md?tabs=new#security-alerts). Monitor direct assignment of privileged roles outside PIM by generating alerts whenever a user is assigned directly.
+ Configure and review security alerts generated by Azure AD Privileged Identity Management (PIM). Monitor direct assignment of privileged roles outside PIM by generating alerts whenever a user is assigned directly. See [Security alerts](../privileged-identity-management/pim-how-to-configure-security-alerts.md?tabs=new#security-alerts).
-* **Azure AD tenant-wide configurations**
+- **Azure AD tenant-wide configurations**
- Any change to tenant-wide configurations should generate alerts in the system. These changes include but aren't limited to:
+ Any change to tenant-wide configurations should generate alerts in the system. These changes include but aren't limited to the following changes:
- * Updated custom domains.
+ - Updated custom domains
+ - Azure AD B2B changes to allowlists and blocklists
+ - Azure AD B2B changes to allowed identity providers, such as SAML identity providers through direct federation or social sign-ins
+ - Conditional Access or Risk policy changes
- * Azure AD B2B changes to allowlists and blocklists.
+- **Application and service principal objects**
- * Azure AD B2B changes to allowed identity providers (SAML identity providers through direct federation or social sign-ins).
+ - New applications or service principals that might require Conditional Access policies
+ - Credentials added to service principals
+ - Application consent activity
- * Conditional Access or Risk policy changes.
+- **Custom roles**
-* **Application and service principal objects**
-
- * New applications or service principals that might require Conditional Access policies.
-
- * Credentials added to service principals.
- * Application consent activity.
-
-* **Custom roles**
- * Updates to the custom role definitions.
-
- * Newly created custom roles.
+ - Updates to the custom role definitions
+ - Newly created custom roles
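For the emergency access accounts scenario above, the check can be scripted against the Azure AD sign-in logs exposed through Microsoft Graph. The following is a minimal sketch rather than the product's own tooling; it assumes an app registration with the `AuditLog.Read.All` permission, a token you have already acquired, and placeholder break-glass account names and a hypothetical `raise_alert` helper.

```python
# Minimal sketch: flag any sign-in by an emergency access ("break-glass") account.
# Assumes an app with AuditLog.Read.All permission and a pre-acquired Graph token;
# BREAK_GLASS_UPNS and raise_alert() are illustrative placeholders.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
BREAK_GLASS_UPNS = ["bg-admin-1@contoso.com", "bg-admin-2@contoso.com"]  # hypothetical

def check_break_glass_signins(access_token: str) -> None:
    headers = {"Authorization": f"Bearer {access_token}"}
    for upn in BREAK_GLASS_UPNS:
        url = f"{GRAPH}/auditLogs/signIns"
        params = {"$filter": f"userPrincipalName eq '{upn}'", "$top": "10"}
        resp = requests.get(url, headers=headers, params=params, timeout=30)
        resp.raise_for_status()
        for signin in resp.json().get("value", []):
            # Any hit is unexpected for a break-glass account, so alert on every one.
            raise_alert(
                account=upn,
                when=signin.get("createdDateTime"),
                ip=signin.get("ipAddress"),
                app=signin.get("appDisplayName"),
            )

def raise_alert(**details) -> None:
    # Placeholder: forward to your SIEM, ticketing system, or paging tool.
    print("HIGH PRIORITY: emergency access account activity", details)
```

Because a break-glass account should normally show no activity, the sketch treats every returned sign-in as alert-worthy instead of applying a threshold.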
### Log management

Define a log storage and retention strategy, design, and implementation to facilitate a consistent tool set. For example, you could consider security information and event management (SIEM) systems like Microsoft Sentinel, common queries, and investigation and forensics playbooks.
-* **Azure AD logs**: Ingest generated logs and signals by consistently following best practices for settings such as diagnostics, log retention, and SIEM ingestion.
-
- The log strategy must include the following Azure AD logs:
- * Sign-in activity
-
- * Audit logs
-
- * Risk events
+- **Azure AD logs**. Ingest generated logs and signals by consistently following best practices for settings such as diagnostics, log retention, and SIEM ingestion.
- Azure AD provides [Azure Monitor integration](../reports-monitoring/concept-activity-logs-azure-monitor.md) for the sign-in activity log and audit logs. Risk events can be ingested through the [Microsoft Graph API](/graph/api/resources/identityprotection-root). You can [stream Azure AD logs to Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md).
+ The log strategy must include the following Azure AD logs:
-* **Hybrid infrastructure OS security logs**: All hybrid identity infrastructure OS logs should be archived and carefully monitored as a tier-0 system, because of the surface-area implications. Include the following elements:
+ - Sign-in activity
+ - Audit logs
+ - Risk events
- * Azure AD Connect. [Azure AD Connect Health](../hybrid/whatis-azure-ad-connect.md) must be deployed to monitor identity synchronization.
+ Azure AD provides Azure Monitor integration for the sign-in activity log and audit logs. See [Azure AD activity logs in Azure Monitor](../reports-monitoring/concept-activity-logs-azure-monitor.md).
- * Application Proxy agents
+ Use the Microsoft Graph API to ingest risk events. See [Use the Microsoft Graph identity protection APIs](/graph/api/resources/identityprotection-root).
+ You can stream Azure AD logs to Azure Monitor logs. See [Integrate Azure AD logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md).
- * Password writeback agents
+- **Hybrid infrastructure operating system security logs**. All hybrid identity infrastructure operating system logs should be archived and carefully monitored as a tier-0 system, because of the surface-area implications. Include the following elements:
- * Password Protection Gateway machines
+ - Application Proxy agents
+ - Password writeback agents
+ - Password Protection Gateway machines
+ - Network policy servers (NPSs) that have the Azure AD multifactor authentication RADIUS extension
+ - Azure AD Connect
- * Network policy servers (NPSs) that have the Azure AD multifactor authentication RADIUS extension
+ You must deploy Azure AD Connect Health to monitor identity synchronization. See [What is Azure AD Connect](../hybrid/whatis-azure-ad-connect.md).
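The risk events called out in the log strategy above can be pulled through the Microsoft Graph API and handed to your SIEM. The sketch below is one possible approach, not the documented ingestion pipeline; the tenant ID, client ID, secret, and output file name are placeholders, and the app registration is assumed to have the `IdentityRiskEvent.Read.All` application permission.

```python
# Minimal sketch: pull Azure AD Identity Protection risk detections through Microsoft Graph
# and write them as JSON lines that a SIEM forwarder could pick up.
# TENANT_ID, CLIENT_ID, CLIENT_SECRET, and the output path are hypothetical placeholders.
import json
import msal
import requests

TENANT_ID = "<tenant-id>"
CLIENT_ID = "<app-client-id>"
CLIENT_SECRET = "<app-client-secret>"

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])
if "access_token" not in token:
    raise RuntimeError(f"Token request failed: {token.get('error_description')}")

url = "https://graph.microsoft.com/v1.0/identityProtection/riskDetections"
headers = {"Authorization": f"Bearer {token['access_token']}"}

with open("risk_detections.jsonl", "a", encoding="utf-8") as out:
    while url:
        resp = requests.get(url, headers=headers, timeout=30)
        resp.raise_for_status()
        payload = resp.json()
        for detection in payload.get("value", []):
            out.write(json.dumps(detection) + "\n")
        url = payload.get("@odata.nextLink")  # follow paging until exhausted
```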
## Next steps
-* [Build resilience into identity and access management by using Azure AD](resilience-overview.md)
-* [Secure external access to resources](secure-external-access-resources.md)
-* [Integrate all your apps with Azure AD](five-steps-to-full-application-integration-with-azure-ad.md)
+- [Build resilience into identity and access management by using Azure AD](resilience-overview.md)
+- [Secure external access to resources](secure-external-access-resources.md)
+- [Integrate all your apps with Azure AD](five-steps-to-full-application-integration-with-azure-ad.md)
active-directory Resilience Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/resilience-overview.md
Title: Building resilient identity and access management with Azure Active Directory
-description: A guide for architects, IT administrators, and developers on building resilience to disruption of their identity systems.
+ Title: Resilience in identity and access management with Azure Active Directory
+description: Learn how to build resilience into identity and access management. Resilience helps endure disruption to system components and recover with minimal effort.
- Previously updated : 11/30/2020+ Last updated : 04/29/2022 -+
+ - it-pro
+ - seodec18
+ - kr2b-contr-experiment
# Building resilience into identity and access management with Azure Active Directory
-Identity and access management (IAM) is a framework of processes, policies, and technologies that facilitate the management of identities and what they access. It includes the many components supporting the authentication and authorization of user and other accounts in your system.
+Identity and access management (IAM) is a framework of processes, policies, and technologies. IAM facilitates the management of identities and what they access. It includes the many components supporting the authentication and authorization of user and other accounts in your system.
-IAM resilience is the ability to endure disruption to system components and recover with minimal impact to your business, users, customers, and operations. Reducing dependencies, complexity, and single-points-of-failure, while ensuring comprehensive error handling will increase your resilience.
+IAM resilience is the ability to endure disruption to system components and recover with minimal impact to your business, users, customers, and operations. Reducing dependencies, complexity, and single-points-of-failure, while ensuring comprehensive error handling, increases your resilience.
-Disruption can come from any component of your IAM systems. To build a resilient IAM system, assume disruptions will occur and plan for it.
+Disruption can come from any component of your IAM systems. To build a resilient IAM system, assume disruptions will occur and plan for them.
-When planning the resilience of your IAM solution, consider the following elements:
+When planning the resilience of your IAM solution, consider the following elements:
-* Your applications that rely on your IAM system.
+* Your applications that rely on your IAM system
+* The public infrastructures your authentication calls use, including telecom companies, Internet service providers, and public key providers
+* Your cloud and on-premises identity providers
+* Other services that rely on your IAM, and the APIs that connect them
+* Any other on-premises components in your system
-* The public infrastructures your authentication calls use, including telecom companies, Internet service providers, and public key providers.
-
-* Your cloud and on-premises identity providers.
-
-* Other services that rely on your IAM, and the APIs that connect them.
-
-* Any other on-premises components in your system.
-
-Whatever the source, recognizing and planning for the contingencies is important. However, adding additional identity systems, and their resultant dependencies and complexity, may reduce your resilience rather than increase it.
+Whatever the source, recognizing and planning for the contingencies is important. However, adding other identity systems, and their resultant dependencies and complexity, may reduce your resilience rather than increase it.
To build more resilience in your systems, review the following articles:

* [Build resilience in your IAM infrastructure](resilience-in-infrastructure.md)
* [Build IAM resilience in your applications](resilience-app-development-overview.md)
* [Build resilience in your Customer Identity and Access Management (CIAM) systems](resilience-b2c.md)
active-directory Security Operations Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-introduction.md
Title: Azure Active Directory security operations guide
-description: Learn to monitor, identify, and alert on security issues with accounts, applications, devices, and infrastructure
+description: Learn to monitor, identify, and alert on security issues with accounts, applications, devices, and infrastructure in Azure Active Directory.
- Previously updated : 07/15/2021+ Last updated : 04/29/2022 -+
+ - it-pro
+ - seodec18
+ - kr2b-contr-experiment
# Azure Active Directory security operations guide
-Microsoft has a successful and proven approach to [Zero Trust security](https://aka.ms/Zero-Trust) using [Defense in Depth](https://us-cert.cisa.gov/bsi/articles/knowledge/principles/defense-in-depth) principles that leverage identity as a control plane. As organizations continue to embrace a hybrid workload world for scale, cost savings, and security, Azure Active Directory (Azure AD) plays a pivotal role in your strategy for identity management. Recently, news surrounding identity and security compromise has increasingly prompted enterprise IT to consider their identity security posture as a measurement of defensive security success.
+Microsoft has a successful and proven approach to [Zero Trust security](https://aka.ms/Zero-Trust) using [Defense in Depth](https://us-cert.cisa.gov/bsi/articles/knowledge/principles/defense-in-depth) principles that use identity as a control plane. Organizations continue to embrace a hybrid workload world for scale, cost savings, and security. Azure Active Directory (Azure AD) plays a pivotal role in your strategy for identity management. Recently, news surrounding identity and security compromise has increasingly prompted enterprise IT to consider their identity security posture as a measurement of defensive security success.
Increasingly, organizations must embrace a mixture of on-premises and cloud applications, which users access with both on-premises and cloud-only accounts. Managing users, applications, and devices both on-premises and in the cloud poses challenging scenarios.
-Azure Active Directory creates a common user identity for authentication and authorization to all resources, regardless of location. We call this hybrid identity.
+## Hybrid identity
+
+Azure Active Directory creates a common user identity for authentication and authorization to all resources, regardless of location. We call this *hybrid identity*.
To achieve hybrid identity with Azure AD, one of three authentication methods can be used, depending on your scenarios. The three methods are:

* [Password hash synchronization (PHS)](../hybrid/whatis-phs.md)
* [Pass-through authentication (PTA)](../hybrid/how-to-connect-pta.md)
* [Federation (AD FS)](../hybrid/whatis-fed.md)

As you audit your current security operations or establish security operations for your Azure environment, we recommend you:

* Read specific portions of the Microsoft security guidance to establish a baseline of knowledge about securing your cloud-based or hybrid Azure environment.
* Audit your account and password strategy and authentication methods to help deter the most common attack vectors.
* Create a strategy for continuous monitoring and alerting on activities that might indicate a security threat.
-## Audience
+### Audience
The Azure AD SecOps Guide is intended for enterprise IT identity and security operations teams and managed service providers that need to counter threats through better identity security configuration and monitoring profiles. This guide is especially relevant for IT administrators and identity architects advising Security Operations Center (SOC) defensive and penetration testing teams to improve and maintain their identity security posture.
-## Scope
+### Scope
-This introduction provides the suggested prereading and password audit and strategy recommendations. This article also provides an overview of the tools available for hybrid Azure environments as well as fully cloud-based Azure environments. Finally, we provide a list of data sources you can use for monitoring and alerting and configuring your security information and event management (SIEM) strategy and environment. The rest of the guidance presents monitoring and alerting strategies in the following areas:
+This introduction provides the suggested prereading and password audit and strategy recommendations. This article also provides an overview of the tools available for hybrid Azure environments and fully cloud-based Azure environments. Finally, we provide a list of data sources you can use for monitoring and alerting and configuring your security information and event management (SIEM) strategy and environment. The rest of the guidance presents monitoring and alerting strategies in the following areas:
-* [User accounts](security-operations-user-accounts.md) – Guidance specific to non-privileged user accounts without administrative privilege, including anomalous account creation and usage, and unusual sign-ins.
+* [User accounts](security-operations-user-accounts.md). Guidance specific to non-privileged user accounts without administrative privilege, including anomalous account creation and usage, and unusual sign-ins.
-* [Privileged accounts](security-operations-privileged-accounts.md) – Guidance specific to privileged user accounts that have elevated permissions to perform administrative tasks, including Azure AD role assignments, Azure resource role assignments, and access management for Azure resources and subscriptions.
+* [Privileged accounts](security-operations-privileged-accounts.md). Guidance specific to privileged user accounts that have elevated permissions to perform administrative tasks. Tasks include Azure AD role assignments, Azure resource role assignments, and access management for Azure resources and subscriptions.
-* [Privileged Identity Management (PIM)](security-operations-privileged-identity-management.md) – guidance specific to using PIM to manage, control, and monitor access to resources.
+* [Privileged Identity Management (PIM)](security-operations-privileged-identity-management.md). Guidance specific to using PIM to manage, control, and monitor access to resources.
-* [Applications](security-operations-applications.md) – Guidance specific to accounts used to provide authentication for applications.
+* [Applications](security-operations-applications.md). Guidance specific to accounts used to provide authentication for applications.
-* [Devices](security-operations-devices.md) – Guidance specific to monitoring and alerting for devices registered or joined outside of policies, non-compliant usage, managing device administration roles, and sign-ins to virtual machines.
+* [Devices](security-operations-devices.md). Guidance specific to monitoring and alerting for devices registered or joined outside of policies, non-compliant usage, managing device administration roles, and sign-ins to virtual machines.
-* [Infrastructure](security-operations-infrastructure.md) – Guidance specific to monitoring and alerting on threats to your hybrid and purely cloud-based environments.
+* [Infrastructure](security-operations-infrastructure.md). Guidance specific to monitoring and alerting on threats to your hybrid and purely cloud-based environments.
## Important reference content
-Microsoft has many products and services that enable you to customize your IT environment to fit your needs. We recommend as part of your monitoring and alerting strategy you review the following guidance that is relevant to your operating environment:
+Microsoft has many products and services that enable you to customize your IT environment to fit your needs. We recommend that you review the following guidance for your operating environment:
* Windows operating systems
- * [Windows 10 and Windows Server 2016 security auditing and monitoring reference](https://www.microsoft.com/download/details.aspx?id=52630)
-
- * [Security baseline (FINAL) for Windows 10 v1909 and Windows Server v1909](https://techcommunity.microsoft.com/t5/microsoft-security-baselines/security-baseline-final-for-windows-10-v1909-and-windows-server/ba-p/1023093)
+ * [Windows 10 and Windows Server 2016 security auditing and monitoring reference](https://www.microsoft.com/download/details.aspx?id=52630)
+ * [Security baseline (FINAL) for Windows 10 v1909 and Windows Server v1909](https://techcommunity.microsoft.com/t5/microsoft-security-baselines/security-baseline-final-for-windows-10-v1909-and-windows-server/ba-p/1023093)
+ * [Security baseline for Windows 11](https://techcommunity.microsoft.com/t5/microsoft-security-baselines/windows-11-security-baseline/ba-p/2810772)
+ * [Security baseline for Windows Server 2022](https://techcommunity.microsoft.com/t5/microsoft-security-baselines/windows-server-2022-security-baseline/ba-p/2724685)
- * [Security baseline for Windows 11](https://techcommunity.microsoft.com/t5/microsoft-security-baselines/windows-11-security-baseline/ba-p/2810772)
-
- * [Security baseline for Windows Server 2022](https://techcommunity.microsoft.com/t5/microsoft-security-baselines/windows-server-2022-security-baseline/ba-p/2724685)
-
* On-premises environments
- * [Microsoft Defender for Identity architecture](/defender-for-identity/architecture)
-
- * [Connect Microsoft Defender for Identity to Active Directory quickstart](/defender-for-identity/install-step2)
-
- * [Azure security baseline for Microsoft Defender for Identity](/defender-for-identity/security-baseline)
-
- * [Monitoring Active Directory for Signs of Compromise](/windows-server/identity/ad-ds/plan/security-best-practices/monitoring-active-directory-for-signs-of-compromise)
+ * [Microsoft Defender for Identity architecture](/defender-for-identity/architecture)
+ * [Connect Microsoft Defender for Identity to Active Directory quickstart](/defender-for-identity/install-step2)
+ * [Azure security baseline for Microsoft Defender for Identity](/defender-for-identity/security-baseline)
+ * [Monitoring Active Directory for Signs of Compromise](/windows-server/identity/ad-ds/plan/security-best-practices/monitoring-active-directory-for-signs-of-compromise)
* Cloud-based Azure environments
- * [Monitor sign-ins with the Azure AD sign-in log](../reports-monitoring/concept-all-sign-ins.md)
-
- * [Audit activity reports in the Azure Active Directory portal](../reports-monitoring/concept-audit-logs.md)
-
- * [Investigate risk with Azure Active Directory Identity Protection](../identity-protection/howto-identity-protection-investigate-risk.md)
-
- * [Connect Azure AD Identity Protection data to Microsoft Sentinel](../../sentinel/data-connectors-reference.md#azure-active-directory-identity-protection)
+ * [Monitor sign-ins with the Azure AD sign-in log](../reports-monitoring/concept-all-sign-ins.md)
+ * [Audit activity reports in the Azure Active Directory portal](../reports-monitoring/concept-audit-logs.md)
+ * [Investigate risk with Azure Active Directory Identity Protection](../identity-protection/howto-identity-protection-investigate-risk.md)
+ * [Connect Azure AD Identity Protection data to Microsoft Sentinel](../../sentinel/data-connectors-reference.md#azure-active-directory-identity-protection)
* Active Directory Domain Services (AD DS)
- * [Audit Policy Recommendations](/windows-server/identity/ad-ds/plan/security-best-practices/audit-policy-recommendations)
+ * [Audit Policy Recommendations](/windows-server/identity/ad-ds/plan/security-best-practices/audit-policy-recommendations)
* Active Directory Federation Services (AD FS)
- * [AD FS Troubleshooting - Auditing Events and Logging](/windows-server/identity/ad-fs/troubleshooting/ad-fs-tshoot-logging)
+ * [AD FS Troubleshooting - Auditing Events and Logging](/windows-server/identity/ad-fs/troubleshooting/ad-fs-tshoot-logging)
-## Data sources
+## Data sources
The log files you use for investigation and monitoring are:

* [Azure AD Audit logs](../reports-monitoring/concept-audit-logs.md)
* [Sign-in logs](../reports-monitoring/concept-all-sign-ins.md)
* [Microsoft 365 Audit logs](/microsoft-365/compliance/auditing-solutions-overview)
* [Azure Key Vault logs](../../key-vault/general/logging.md?tabs=Vault)
-From the Azure portal you can view the Azure AD Audit logs and download as comma separated value (CSV) or JavaScript Object Notation (JSON) files. The Azure portal has several ways to integrate Azure AD logs with other tools that allow for greater automation of monitoring and alerting:
+From the Azure portal, you can view the Azure AD Audit logs. Download logs as comma separated value (CSV) or JavaScript Object Notation (JSON) files. The Azure portal has several ways to integrate Azure AD logs with other tools that allow for greater automation of monitoring and alerting:
-* **[Microsoft Sentinel](../../sentinel/overview.md)** – enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities.
+* **[Microsoft Sentinel](../../sentinel/overview.md)**. Enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities.
-* **[Azure Monitor](../../azure-monitor/overview.md)** – enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.
+* **[Azure Monitor](../../azure-monitor/overview.md)**. Enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.
-* **[Azure Event Hubs](../../event-hubs/event-hubs-about.md) integrated with a SIEM**- [Azure AD logs can be integrated to other SIEMs](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) such as Splunk, ArcSight, QRadar and Sumo Logic via the Azure Event Hub integration.
+* **[Azure Event Hubs](../../event-hubs/event-hubs-about.md)** integrated with a SIEM. Azure AD logs can be integrated with other SIEMs such as Splunk, ArcSight, QRadar, and Sumo Logic via the Azure Event Hubs integration. For more information, see [Stream Azure Active Directory logs to an Azure event hub](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md).
-* **[Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security)** – enables you to discover and manage apps, govern across apps and resources, and check the compliance of your cloud apps.
+* **[Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security)**. Enables you to discover and manage apps, govern across apps and resources, and check the compliance of your cloud apps.
-* **[Securing workload identities with Identity Protection Preview](..//identity-protection/concept-workload-identity-risk.md)** - Used to detect risk on workload identities across sign-in behavior and offline indicators of compromise.
+* **[Securing workload identities with Identity Protection Preview](../identity-protection/concept-workload-identity-risk.md)**. Used to detect risk on workload identities across sign-in behavior and offline indicators of compromise.
-Much of what you will monitor and alert on are the effects of your Conditional Access policies. You can use the [Conditional Access insights and reporting workbook](../conditional-access/howto-conditional-access-insights-reporting.md) to examine the effects of one or more Conditional Access policies on your sign-ins, as well as the results of policies, including device state. This workbook enables you to view an impact summary, and identify the impact over a specific time period. You can also use the workbook to investigate the sign-ins of a specific user.
+Much of what you will monitor and alert on are the effects of your Conditional Access policies. You can use the Conditional Access insights and reporting workbook to examine the effects of one or more Conditional Access policies on your sign-ins and the results of policies, including device state. This workbook enables you to view an impact summary, and identify the impact over a specific time period. You can also use the workbook to investigate the sign-ins of a specific user. For more information, see [Conditional Access insights and reporting](../conditional-access/howto-conditional-access-insights-reporting.md).
-The remainder of this article describes what we recommend you monitor and alert on, and is organized by the type of threat. Where there are specific pre-built solutions we link to them or provide samples following the table. Otherwise, you can build alerts using the preceding tools.
+The remainder of this article describes what to monitor and alert on. Where there are specific pre-built solutions we link to them or provide samples following the table. Otherwise, you can build alerts using the preceding tools.
-* **[Identity Protection](../identity-protection/overview-identity-protection.md)** -- generates three key reports that you can use to help with your investigation:
+* **[Identity Protection](../identity-protection/overview-identity-protection.md)** generates three key reports that you can use to help with your investigation:
- * **Risky users** – contains information about which users are at risk, details about detections, history of all risky sign-ins, and risk history.
+* **Risky users** contains information about which users are at risk, details about detections, history of all risky sign-ins, and risk history.
- * **Risky sign-ins** – contains information surrounding the circumstance of a sign-in that might indicate suspicious circumstances. For additional information on investigating information from this report, visit [How To: Investigate risk](../identity-protection/howto-identity-protection-investigate-risk.md).
+* **Risky sign-ins** contains information surrounding the circumstance of a sign-in that might indicate suspicious circumstances. For more information on investigating information from this report, see [How To: Investigate risk](../identity-protection/howto-identity-protection-investigate-risk.md).
- * **Risk detections** - contains information on risk signals detected by Azure AD Identity Protection that informs sign-in and user risk. For more information, see the [Azure AD security operations guide for user accounts](security-operations-user-accounts.md).
+* **Risk detections** contains information on risk signals detected by Azure AD Identity Protection that informs sign-in and user risk. For more information, see the [Azure AD security operations guide for user accounts](security-operations-user-accounts.md).
+
+For more information, see [What is Identity Protection](../identity-protection/overview-identity-protection.md).
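As one example of working with the reports above, the following hedged sketch reads the Risky users report through Microsoft Graph and surfaces users currently marked as high risk. It assumes a token with the `IdentityRiskyUser.Read.All` permission; `get_graph_token()` is a hypothetical helper standing in for whatever token acquisition you already use, and the filtering is done client-side.

```python
# Minimal sketch: list users the Identity Protection "Risky users" report currently
# marks as high risk. Assumes a pre-acquired Graph token with IdentityRiskyUser.Read.All;
# get_graph_token() is a placeholder for your own token acquisition.
import requests

def high_risk_users(access_token: str) -> list[dict]:
    url = "https://graph.microsoft.com/v1.0/identityProtection/riskyUsers"
    headers = {"Authorization": f"Bearer {access_token}"}
    flagged = []
    while url:
        resp = requests.get(url, headers=headers, timeout=30)
        resp.raise_for_status()
        payload = resp.json()
        # Filter client-side so the sketch doesn't depend on server-side $filter support.
        flagged.extend(
            u for u in payload.get("value", [])
            if u.get("riskLevel") == "high" and u.get("riskState") != "dismissed"
        )
        url = payload.get("@odata.nextLink")
    return flagged

for user in high_risk_users(get_graph_token()):  # get_graph_token() is hypothetical
    print(user.get("userPrincipalName"), user.get("riskLevel"), user.get("riskLastUpdatedDateTime"))
```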
### Data sources for domain controller monitoring
-For the best results, we recommend that you monitor your domain controllers using Microsoft Defender for Identity. This will enable you for the best detection and automation capabilities. Please follow the guidance from:
+For the best results, we recommend that you monitor your domain controllers using Microsoft Defender for Identity. This approach enables the best detection and automation capabilities. Follow the guidance from these resources:
* [Microsoft Defender for Identity architecture](/defender-for-identity/architecture)
* [Connect Microsoft Defender for Identity to Active Directory quickstart](/defender-for-identity/install-step2)
-If you do not plan to use Microsoft Defender for identity, you can [monitor your domain controllers either by event log messages](/windows-server/identity/ad-ds/plan/security-best-practices/monitoring-active-directory-for-signs-of-compromise) or by [running PowerShell cmdlets](/windows-server/identity/ad-ds/deploy/troubleshooting-domain-controller-deployment).
+If you don't plan to use Microsoft Defender for Identity, monitor your domain controllers by one of these approaches:
+
+* Event log messages. See [Monitoring Active Directory for Signs of Compromise](/windows-server/identity/ad-ds/plan/security-best-practices/monitoring-active-directory-for-signs-of-compromise).
+* PowerShell cmdlets. See [Troubleshooting Domain Controller Deployment](/windows-server/identity/ad-ds/deploy/troubleshooting-domain-controller-deployment).
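If you monitor domain controllers by event log messages, the collection itself can be scripted. The following is a rough sketch, not prescriptive guidance: it shells out to the built-in `wevtutil` tool on a domain controller to pull recent events with ID 4672 (special privileges assigned to a new logon); choose the event IDs to watch from your own baseline of the signs-of-compromise guidance.

```python
# Minimal sketch: pull recent "special privileges assigned to new logon" events (ID 4672)
# from a domain controller's Security log using the built-in wevtutil tool.
# Run locally on the DC, or adapt it with wevtutil's /r:<remote-dc> option.
import subprocess

def recent_special_logons(max_events: int = 10) -> str:
    cmd = [
        "wevtutil", "qe", "Security",
        "/q:*[System[(EventID=4672)]]",  # XPath filter on the event ID
        f"/c:{max_events}",              # cap the number of events returned
        "/rd:true",                      # newest events first
        "/f:text",                       # human-readable output
    ]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    print(recent_special_logons())
```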
## Components of hybrid authentication
-As part of an Azure hybrid environment, the following should be baselined and included in your monitoring and alerting strategy.
+As part of an Azure hybrid environment, the following items should be baselined and included in your monitoring and alerting strategy.
-* **PTA Agent** – The Pass-through authentication agent is used to enable pass-through authentication and is installed on-premises. See [Azure AD Pass-through Authentication agent: Version release history](../hybrid/reference-connect-pta-version-history.md) for information on verifying your agent version and next steps.
+* **PTA Agent**. The pass-through authentication agent is used to enable pass-through authentication and is installed on-premises. See [Azure AD Pass-through Authentication agent: Version release history](../hybrid/reference-connect-pta-version-history.md) for information on verifying your agent version and next steps.
-* **AD FS/WAP** – Azure Active Directory Federation Services (Azure AD FS) and Web Application Proxy (WAP) enable secure sharing of digital identity and entitlement rights across your security and enterprise boundaries. For information on security best practices, see [Best practices for securing Active Directory Federation Services](/windows-server/identity/ad-fs/deployment/best-practices-securing-ad-fs).
+* **AD FS/WAP**. Azure Active Directory Federation Services (Azure AD FS) and Web Application Proxy (WAP) enable secure sharing of digital identity and entitlement rights across your security and enterprise boundaries. For information on security best practices, see [Best practices for securing Active Directory Federation Services](/windows-server/identity/ad-fs/deployment/best-practices-securing-ad-fs).
-* **Azure AD Connect Health Agent** – The agent used to provide a communications link for Azure AD Connect Health. For information on installing the agent, see [Azure AD Connect Health agent installation](../hybrid/how-to-connect-health-agent-install.md).
+* **Azure AD Connect Health Agent**. The agent used to provide a communications link for Azure AD Connect Health. For information on installing the agent, see [Azure AD Connect Health agent installation](../hybrid/how-to-connect-health-agent-install.md).
-* **Azure AD Connect Sync Engine** - The on-premises component, also called the sync engine. For information on the feature, see [Azure AD Connect sync service features](../hybrid/how-to-connect-syncservice-features.md).
+* **Azure AD Connect Sync Engine**. The on-premises component, also called the sync engine. For information on the feature, see [Azure AD Connect sync service features](../hybrid/how-to-connect-syncservice-features.md).
-* **Password Protection DC agent** – Azure password protection DC agent is used to help with monitoring and reporting event log messages. For information, see [Enforce on-premises Azure AD Password Protection for Active Directory Domain Services](../authentication/concept-password-ban-bad-on-premises.md).
+* **Password Protection DC agent**. Azure password protection DC agent is used to help with monitoring and reporting event log messages. For information, see [Enforce on-premises Azure AD Password Protection for Active Directory Domain Services](../authentication/concept-password-ban-bad-on-premises.md).
-* **Password Filter DLL** – The password filter DLL of the DC Agent receives user password-validation requests from the operating system. The filter forwards them to the DC Agent service that's running locally on the DC. For information on using the DLL, see [Enforce on-premises Azure AD Password Protection for Active Directory Domain Services](../authentication/concept-password-ban-bad-on-premises.md).
+* **Password Filter DLL**. The password filter DLL of the DC Agent receives user password-validation requests from the operating system. The filter forwards them to the DC Agent service that's running locally on the DC. For information on using the DLL, see [Enforce on-premises Azure AD Password Protection for Active Directory Domain Services](../authentication/concept-password-ban-bad-on-premises.md).
-* **Password writeback Agent** – Password writeback is a feature enabled with [Azure AD Connect](../hybrid/whatis-hybrid-identity.md) that allows password changes in the cloud to be written back to an existing on-premises directory in real time. For more information on this feature, see [How does self-service password reset writeback work in Azure Active Directory?](../authentication/concept-sspr-writeback.md)
+* **Password writeback Agent**. Password writeback is a feature enabled with [Azure AD Connect](../hybrid/whatis-hybrid-identity.md) that allows password changes in the cloud to be written back to an existing on-premises directory in real time. For more information on this feature, see [How does self-service password reset writeback work in Azure Active Directory](../authentication/concept-sspr-writeback.md).
-* **Azure AD Application Proxy Connector** – Lightweight agents that sit on-premises and facilitate the outbound connection to the Application Proxy service. For more information, see [Understand Azure AD Application Proxy connectors](../app-proxy/application-proxy-connectors.md).
+* **Azure AD Application Proxy Connector**. Lightweight agents that sit on-premises and facilitate the outbound connection to the Application Proxy service. For more information, see [Understand Azure AD Application Proxy connectors](../app-proxy/application-proxy-connectors.md).
## Components of cloud-based authentication
-As part of an Azure cloud-based environment, the following should be baselined and included in your monitoring and alerting strategy.
+As part of an Azure cloud-based environment, the following items should be baselined and included in your monitoring and alerting strategy.
-* **Azure AD Application Proxy** – This cloud service provides secure remote access to on-premises web applications. For more information, see [Remote access to on-premises applications through Azure AD Application Proxy](../app-proxy/application-proxy-connectors.md).
+* **Azure AD Application Proxy**. This cloud service provides secure remote access to on-premises web applications. For more information, see [Remote access to on-premises applications through Azure AD Application Proxy](../app-proxy/application-proxy-connectors.md).
-* **Azure AD Connect** – Services used for an Azure AD Connect solution. For more information, see [What is Azure AD Connect](../hybrid/whatis-azure-ad-connect.md).
+* **Azure AD Connect**. Services used for an Azure AD Connect solution. For more information, see [What is Azure AD Connect](../hybrid/whatis-azure-ad-connect.md).
-* **Azure AD Connect Health** – Service Health provides you with a customizable dashboard which tracks the health of your Azure services in the regions where you use them. For more information, see [Azure AD Connect Health](../hybrid/whatis-azure-ad-connect.md).
+* **Azure AD Connect Health**. Service Health provides you with a customizable dashboard that tracks the health of your Azure services in the regions where you use them. For more information, see [Azure AD Connect Health](../hybrid/whatis-azure-ad-connect.md).
-* **Azure MFA** – Azure AD Multi-Factor Authentication requires a user to provide more than one form of proof for authentication. This can provide a proactive first step to securing your environment. For more information, see [How it works: Azure AD Multi-Factor Authentication](../authentication/concept-mfa-howitworks.md).
+* **Azure AD multifactor authentication**. Multifactor authentication requires a user to provide more than one form of proof for authentication. This approach can provide a proactive first step to securing your environment. For more information, see [Azure AD multi-factor authentication](../authentication/concept-mfa-howitworks.md).
-* **Dynamic Groups** – Dynamic configuration of security group membership for Azure Active Directory (Azure AD) Administrators can set rules to populate groups that are created in Azure AD based on user attributes. For more information, see [Dynamic groups and Azure Active Directory B2B collaboration](../external-identities/use-dynamic-groups.md).
+* **Dynamic groups**. Dynamic configuration of security group membership for Azure AD Administrators can set rules to populate groups that are created in Azure AD based on user attributes. For more information, see [Dynamic groups and Azure Active Directory B2B collaboration](../external-identities/use-dynamic-groups.md).
-* **Conditional Access** – Conditional Access is the tool used by Azure Active Directory to bring signals together, to make decisions, and enforce organizational policies. Conditional Access is at the heart of the new identity driven control plane. For more information, see [What is Conditional Access](../conditional-access/overview.md).
+* **Conditional Access**. Conditional Access is the tool used by Azure Active Directory to bring signals together, to make decisions, and enforce organizational policies. Conditional Access is at the heart of the new identity driven control plane. For more information, see [What is Conditional Access](../conditional-access/overview.md).
-* **Identity Protection** – A tool that enables organizations to automate the detection and remediation of identity-based risks, investigate risks using data in the portal, and export risk detection data to your SIEM. For more information, see [What is Identity Protection](../identity-protection/overview-identity-protection.md)?
+* **Identity Protection**. A tool that enables organizations to automate the detection and remediation of identity-based risks, investigate risks using data in the portal, and export risk detection data to your SIEM. For more information, see [What is Identity Protection](../identity-protection/overview-identity-protection.md).
-* **Group-based licensing** – Licenses can be assigned to groups rather than directly to users. Azure AD stores information about license assignment states for users.
+* **Group-based licensing**. Licenses can be assigned to groups rather than directly to users. Azure AD stores information about license assignment states for users.
-* **Provisioning Service** – Provisioning refers to creating user identities and roles in the cloud applications that users need access to. In addition to creating user identities, automatic provisioning includes the maintenance and removal of user identities as status or roles change. For more information, see [How Application Provisioning works in Azure Active Directory](../app-provisioning/how-provisioning-works.md).
+* **Provisioning Service**. Provisioning refers to creating user identities and roles in the cloud applications that users need access to. In addition to creating user identities, automatic provisioning includes the maintenance and removal of user identities as status or roles change. For more information, see [How Application Provisioning works in Azure Active Directory](../app-provisioning/how-provisioning-works.md).
-* **Graph API** – The Microsoft Graph API is a RESTful web API that enables you to access Microsoft Cloud service resources. After you register your app and get authentication tokens for a user or service, you can make requests to the Microsoft Graph API. For more information, see [Overview of Microsoft Graph](/graph/overview).
+* **Graph API**. The Microsoft Graph API is a RESTful web API that enables you to access Microsoft Cloud service resources. After you register your app and get authentication tokens for a user or service, you can make requests to the Microsoft Graph API. For more information, see [Overview of Microsoft Graph](/graph/overview).
-* **Domain Service** – Azure Active Directory Domain Services (AD DS) provides managed domain services such as domain join, group policy. For more information, see [What is Azure Active Directory Domain Services?](../../active-directory-domain-services/overview.md)
+* **Domain Service**. Azure Active Directory Domain Services (AD DS) provides managed domain services such as domain join, group policy. For more information, see [What is Azure Active Directory Domain Services](../../active-directory-domain-services/overview.md).
-* **Azure Resource Manager** – Azure Resource Manager is the deployment and management service for Azure. It provides a management layer that enables you to create, update, and delete resources in your Azure account. For more information, see [What is Azure Resource Manager?](../../azure-resource-manager/management/overview.md)
+* **Azure Resource Manager**. Azure Resource Manager is the deployment and management service for Azure. It provides a management layer that enables you to create, update, and delete resources in your Azure account. For more information, see [What is Azure Resource Manager](../../azure-resource-manager/management/overview.md).
-* **Managed Identity** – Managed identities eliminate the need for developers to manage credentials. Managed identities provide an identity for applications to use when connecting to resources that support Azure Active Directory (Azure AD) authentication. For more information, see [What are managed identities for Azure resources?](../managed-identities-azure-resources/overview.md)
+* **Managed identity**. Managed identities eliminate the need for developers to manage credentials. Managed identities provide an identity for applications to use when connecting to resources that support Azure AD authentication. For more information, see [What are managed identities for Azure resources](../managed-identities-azure-resources/overview.md).
-* **Privileged Identity Management** – Privileged Identity Management (PIM) is a service in Azure Active Directory (Azure AD) that enables you to manage, control, and monitor access to important resources in your organization. For more information, see [What is Azure AD Privileged Identity Management](../privileged-identity-management/pim-configure.md).
+* **Privileged Identity Management**. PIM is a service in Azure AD that enables you to manage, control, and monitor access to important resources in your organization. For more information, see [What is Azure AD Privileged Identity Management](../privileged-identity-management/pim-configure.md).
-* **Access Reviews** – Azure Active Directory (Azure AD) access reviews enable organizations to efficiently manage group memberships, access to enterprise applications, and role assignments. User's access can be reviewed on a regular basis to make sure only the right people have continued access. For more information, see [What are Azure AD access reviews?](../governance/access-reviews-overview.md)
+* **Access reviews**. Azure AD access reviews enable organizations to efficiently manage group memberships, access to enterprise applications, and role assignments. User's access can be reviewed regularly to make sure only the right people have continued access. For more information, see [What are Azure AD access reviews](../governance/access-reviews-overview.md).
-* **Entitlement Management** – Azure Active Directory (Azure AD) entitlement management is an [identity governance](../governance/identity-governance-overview.md) feature that enables organizations to manage identity and access lifecycle at scale, by automating access request workflows, access assignments, reviews, and expiration. For more information, see [What is Azure AD entitlement management?](../governance/entitlement-management-overview.md)
+* **Entitlement management**. Azure AD entitlement management is an [identity governance](../governance/identity-governance-overview.md) feature. Organizations can manage identity and access lifecycle at scale, by automating access request workflows, access assignments, reviews, and expiration. For more information, see [What is Azure AD entitlement management](../governance/entitlement-management-overview.md).
-* **Activity Logs** – The Activity log is a [platform log](../../azure-monitor/essentials/platform-logs-overview.md) in Azure that provides insight into subscription-level events. This includes such information as when a resource is modified or when a virtual machine is started. For more information, see [Azure Activity log](../../azure-monitor/essentials/activity-log.md).
+* **Activity logs**. The Activity log is an Azure [platform log](../../azure-monitor/essentials/platform-logs-overview.md) that provides insight into subscription-level events. This log includes such information as when a resource is modified or when a virtual machine is started. For more information, see [Azure Activity log](../../azure-monitor/essentials/activity-log.md).
-* **Self-service Password reset service** – Azure Active Directory (Azure AD) self-service password reset (SSPR) gives users the ability to change or reset their password, with no administrator or help desk involvement. For more information, see [How it works: Azure AD self-service password reset](../authentication/concept-sspr-howitworks.md).
+* **Self-service password reset service**. Azure AD self-service password reset (SSPR) gives users the ability to change or reset their password. The administrator or help desk isn't required. For more information, see [How it works: Azure AD self-service password reset](../authentication/concept-sspr-howitworks.md).
-* **Device Services** – Device identity management is the foundation for [device-based Conditional Access](../conditional-access/require-managed-devices.md). With device-based Conditional Access policies, you can ensure that access to resources in your environment is only possible with managed devices. For more information, see [What is a device identity?](../devices/overview.md)
+* **Device services**. Device identity management is the foundation for [device-based Conditional Access](../conditional-access/require-managed-devices.md). With device-based Conditional Access policies, you can ensure that access to resources in your environment is only possible with managed devices. For more information, see [What is a device identity](../devices/overview.md).
-* **Self-Service Group Management** – You can enable users to create and manage their own security groups or Microsoft 365 groups in Azure Active Directory (Azure AD). The owner of the group can approve or deny membership requests and can delegate control of group membership. Self-service group management features are not available for mail-enabled security groups or distribution lists. For more information, see [Set up self-service group management in Azure Active Directory](../enterprise-users/groups-self-service-management.md).
+* **Self-service group management**. You can enable users to create and manage their own security groups or Microsoft 365 groups in Azure AD. The owner of the group can approve or deny membership requests and can delegate control of group membership. Self-service group management features aren't available for mail-enabled security groups or distribution lists. For more information, see [Set up self-service group management in Azure Active Directory](../enterprise-users/groups-self-service-management.md).
-* **Risk detections** – contains information about other risks triggered when a risk is detected and other pertinent information such as sign-in location and any details from Microsoft Defender for Cloud Apps.
+* **Risk detections**. Contains information about other risks triggered when a risk is detected and other pertinent information such as sign-in location and any details from Microsoft Defender for Cloud Apps.
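Several of the components above, such as the Graph API and managed identities, come together in code. The following minimal sketch, offered as an illustration under stated assumptions rather than the documented pattern, uses the `azure-identity` package so an app running on an Azure resource can call Microsoft Graph with its managed identity instead of a stored secret; it assumes that identity has been granted a suitable Graph application permission, such as `User.Read.All`.

```python
# Minimal sketch: call Microsoft Graph from code running on an Azure resource by using
# its managed identity, so no client secret has to be stored or rotated.
# Assumes the azure-identity and requests packages, and a Graph permission such as User.Read.All.
import requests
from azure.identity import DefaultAzureCredential

# DefaultAzureCredential falls back through managed identity, environment variables,
# and developer sign-in, which lets the same code work locally and in Azure.
credential = DefaultAzureCredential()
token = credential.get_token("https://graph.microsoft.com/.default")

resp = requests.get(
    "https://graph.microsoft.com/v1.0/users?$top=5",
    headers={"Authorization": f"Bearer {token.token}"},
    timeout=30,
)
resp.raise_for_status()
for user in resp.json().get("value", []):
    print(user.get("displayName"), user.get("userPrincipalName"))
```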
## Next steps

See these security operations guide articles:
-[Azure AD security operations overview](security-operations-introduction.md)
-
-[Security operations for user accounts](security-operations-user-accounts.md)
-
-[Security operations for privileged accounts](security-operations-privileged-accounts.md)
-
-[Security operations for Privileged Identity Management](security-operations-privileged-identity-management.md)
-
-[Security operations for applications](security-operations-applications.md)
-
-[Security operations for devices](security-operations-devices.md)
-
-
-[Security operations for infrastructure](security-operations-infrastructure.md)
+* [Azure AD security operations overview](security-operations-introduction.md)
+* [Security operations for user accounts](security-operations-user-accounts.md)
+* [Security operations for privileged accounts](security-operations-privileged-accounts.md)
+* [Security operations for Privileged Identity Management](security-operations-privileged-identity-management.md)
+* [Security operations for applications](security-operations-applications.md)
+* [Security operations for devices](security-operations-devices.md)
+* [Security operations for infrastructure](security-operations-infrastructure.md)
active-directory Security Operations Privileged Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-privileged-accounts.md
Title: Azure Active Directory security operations for privileged accounts
-description: Learn to set baselines, and then monitor and alert on potential security issues with privileged accounts in Azure Active directory.
+ Title: Security operations for privileged accounts in Azure Active Directory
+description: Learn about baselines, and how to monitor and alert on potential security issues with privileged accounts in Azure Active Directory.
- Previously updated : 07/15/2021+ Last updated : 04/29/2022 +
-# Security operations for privileged accounts
+# Security operations for privileged accounts in Azure Active Directory
The security of business assets depends on the integrity of the privileged accounts that administer your IT systems. Cyber attackers use credential theft attacks and other means to target privileged accounts and gain access to sensitive data.
You're entirely responsible for all layers of security for your on-premises IT environment.
* For more information on securing access for privileged users, see [Securing privileged access for hybrid and cloud deployments in Azure AD](../roles/security-planning.md).
* For a wide range of videos, how-to guides, and content of key concepts for privileged identity, see [Privileged Identity Management documentation](../privileged-identity-management/index.yml).
-## Where to look
+## Log files to monitor
The log files you use for investigation and monitoring are:
From the Azure portal, you can view the Azure AD Audit logs and download as comma-separated value (CSV) or JavaScript Object Notation (JSON) files. The Azure portal has several ways to integrate Azure AD logs with other tools that allow for greater automation of monitoring and alerting:
-* [Microsoft Sentinel](../../sentinel/overview.md): Enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities.
-* [Azure Monitor](../../azure-monitor/overview.md): Enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.
-* [Azure Event Hubs](../../event-hubs/event-hubs-about.md) integrated with a SIEM: Enables [Azure AD logs to be pushed to other SIEMs](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) such as Splunk, ArcSight, QRadar, and Sumo Logic via the Azure Event Hubs integration.
-* [Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security): Enables you to discover and manage apps, govern across apps and resources, and check your cloud apps' compliance.
-* **Microsoft Graph**: Enables you to export data and use Microsoft Graph to do more analysis. For more information on Microsoft Graph, see [Microsoft Graph PowerShell SDK and Azure Active Directory Identity Protection](../identity-protection/howto-identity-protection-graph-api.md).
-* [Identity Protection](../identity-protection/overview-identity-protection.md): Generates three key reports you can use to help with your investigation:
+* **[Microsoft Sentinel](../../sentinel/overview.md)**. Enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities.
+* **[Azure Monitor](../../azure-monitor/overview.md)**. Enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.
+* **[Azure Event Hubs](../../event-hubs/event-hubs-about.md)** integrated with a SIEM. Enables Azure AD logs to be pushed to other SIEMs such as Splunk, ArcSight, QRadar, and Sumo Logic via the Azure Event Hubs integration. For more information, see [Stream Azure Active Directory logs to an Azure event hub](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md).
+* **[Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security)**. Enables you to discover and manage apps, govern across apps and resources, and check your cloud apps' compliance.
+* **Microsoft Graph**. Enables you to export data and use Microsoft Graph to do more analysis. For more information, see [Microsoft Graph PowerShell SDK and Azure Active Directory Identity Protection](../identity-protection/howto-identity-protection-graph-api.md).
+* **[Identity Protection](../identity-protection/overview-identity-protection.md)**. Generates three key reports you can use to help with your investigation:
- * **Risky users**: Contains information about which users are at risk, details about detections, history of all risky sign-ins, and risk history.
- * **Risky sign-ins**: Contains information about a sign-in that might indicate suspicious circumstances. For more information on investigating information from this report, see [Investigate risk](../identity-protection/howto-identity-protection-investigate-risk.md).
- * **Risk detections**: Contains information about other risks triggered when a risk is detected and other pertinent information such as sign-in location and any details from Microsoft Defender for Cloud Apps.
+ * **Risky users**. Contains information about which users are at risk, details about detections, history of all risky sign-ins, and risk history.
+ * **Risky sign-ins**. Contains information about a sign-in that might indicate suspicious circumstances. For more information on investigating information from this report, see [Investigate risk](../identity-protection/howto-identity-protection-investigate-risk.md).
+ * **Risk detections**. Contains information about other risks triggered when a risk is detected and other pertinent information such as sign-in location and any details from Microsoft Defender for Cloud Apps.
-* **[Securing workload identities with Identity Protection Preview](..//identity-protection/concept-workload-identity-risk.md)** - Used to detect risk on workload identities across sign-in behavior and offline indicators of compromise.
+* **[Securing workload identities with Identity Protection Preview](../identity-protection/concept-workload-identity-risk.md)**. Use to detect risk on workload identities across sign-in behavior and offline indicators of compromise.
Although we discourage the practice, privileged accounts can have standing administration rights. If you choose to use standing privileges, and the account is compromised, it can have a strongly negative effect. We recommend you prioritize monitoring privileged accounts and include the accounts in your Privileged Identity Management (PIM) configuration. For more information on PIM, see [Start using Privileged Identity Management](../privileged-identity-management/pim-getting-started.md). Also, we recommend you validate that admin accounts:
* Are required.
* Have the least privilege to execute the required activities.
-* Are protected with multifactor authentication (MFA) at a minimum.
+* Are protected with multifactor authentication at a minimum.
* Are run from privileged access workstation (PAW) or secure admin workstation (SAW) devices.
-The rest of this article describes what we recommend you monitor and alert on. The article is organized by the type of threat. Where there are specific prebuilt solutions, we link to them following the table. Otherwise, you can build alerts by using the preceding tools.
+The rest of this article describes what we recommend you monitor and alert on. The article is organized by the type of threat. Where there are specific prebuilt solutions, we link to them following the table. Otherwise, you can build alerts by using the tools described above.
-Specifically, this article provides details on setting baselines and auditing sign-in and usage of privileged accounts. It also discusses tools and resources you can use to help maintain the integrity of your privileged accounts. The content is organized into the following subjects:
+This article provides details on setting baselines and auditing sign-in and usage of privileged accounts. It also discusses tools and resources you can use to help maintain the integrity of your privileged accounts. The content is organized into the following subjects:
* Emergency "break-glass" accounts
* Privileged account sign-in
Specifically, this article provides details on setting baselines and auditing si
## Emergency access accounts
-It's important that you prevent being accidentally locked out of your Azure AD tenant. You can mitigate the effect of an accidental lockout by creating emergency access accounts in your organization. Emergency access accounts are also known as break-glass accounts, as in "break glass in case of emergency" messages found on physical security equipment like fire alarms.
+It's important that you prevent being accidentally locked out of your Azure AD tenant. You can mitigate the effect of an accidental lockout by creating emergency access accounts in your organization. Emergency access accounts are also known as *break-glass accounts*, as in "break glass in case of emergency" messages found on physical security equipment like fire alarms.
Emergency access accounts are highly privileged, and they aren't assigned to specific individuals. Emergency access accounts are limited to emergency or break-glass scenarios where normal privileged accounts can't be used. An example is when a Conditional Access policy is misconfigured and locks out all normal administrative accounts. Restrict emergency account use to only the times when it's absolutely necessary.
Send a high-priority alert every time an emergency access account is used.
Because break-glass accounts are only used if there's an emergency, your monitoring should discover no account activity. Send a high-priority alert every time an emergency access account is used or changed. Any of the following events might indicate a bad actor is trying to compromise your environments:
-* **Account used**: Monitor and alert on any activity by using this type of account, such as:
- * Sign-in.
- * Account password change.
- * Account permission or roles changed.
- * Credential or auth method added or changed.
+* Sign-in.
+* Account password change.
+* Account permission or roles changed.
+* Credential or auth method added or changed.
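As a hedged example, the following PowerShell sketch checks the sign-in logs for any activity by your break-glass accounts. The UPNs are placeholders for illustration; in practice you would run an equivalent query from an alert rule in your SIEM or Azure Monitor rather than ad hoc.

```powershell
# Sketch only: surface any sign-in activity by emergency access (break-glass) accounts.
# The UPNs below are hypothetical; requires the Microsoft Graph PowerShell SDK with
# AuditLog.Read.All and Directory.Read.All permissions.
Connect-MgGraph -Scopes "AuditLog.Read.All","Directory.Read.All"

$breakGlassAccounts = @("bg-admin1@contoso.com", "bg-admin2@contoso.com")

foreach ($upn in $breakGlassAccounts) {
    $signIns = Get-MgAuditLogSignIn -Filter "userPrincipalName eq '$upn'" -Top 50
    if ($signIns) {
        # Any result here should raise a high-priority alert.
        $signIns | Select-Object CreatedDateTime, UserPrincipalName, AppDisplayName, IpAddress
    }
}
```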
For more information on managing emergency access accounts, see [Manage emergency access admin accounts in Azure AD](../roles/security-emergency-access.md). For detailed information on creating an alert for an emergency account, see [Create an alert rule](../roles/security-emergency-access.md).
You can monitor privileged account sign-in events in the Azure AD Sign-in logs.
| - | - | - | - | - |
| Sign-in failure, bad password threshold | High | Azure AD Sign-ins log | Status = Failure<br>-and-<br>error code = 50126 | Define a baseline threshold and then monitor and adjust to suit your organizational behaviors and limit false alerts from being generated.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/PrivilegedAccountsSigninFailureSpikes.yaml) |
| Failure because of Conditional Access requirement | High | Azure AD Sign-ins log | Status = Failure<br>-and-<br>error code = 53003<br>-and-<br>Failure reason = Blocked by Conditional Access | This event can be an indication an attacker is trying to get into the account.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/UserAccounts-CABlockedSigninSpikes.yaml) |
-| Privileged accounts that don't follow naming policy| | Azure subscription | [List Azure role assignments using the Azure portal - Azure RBAC](../../role-based-access-control/role-assignments-list-portal.md)| List role assignments for subscriptions and alert where the sign-in name doesn't match your organization's format. An example is the use of ADM_ as a prefix. |
-| Interrupt | High, medium | Azure AD Sign-ins | Status = Interrupted<br>-and-<br>error code = 50074<br>-and-<br>Failure reason = Strong auth required<br>Status = Interrupted<br>-and-<br>Error code = 500121<br>Failure reason = Authentication failed during strong authentication request | This event can be an indication an attacker has the password for the account but can't pass the MFA challenge.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/AADPrivilegedAccountsFailedMFA.yaml) |
+| Privileged accounts that don't follow naming policy| | Azure subscription | [List Azure role assignments using the Azure portal](../../role-based-access-control/role-assignments-list-portal.md)| List role assignments for subscriptions and alert where the sign-in name doesn't match your organization's format. An example is the use of ADM_ as a prefix. |
+| Interrupt | High, medium | Azure AD Sign-ins | Status = Interrupted<br>-and-<br>error code = 50074<br>-and-<br>Failure reason = Strong auth required<br>Status = Interrupted<br>-and-<br>Error code = 500121<br>Failure reason = Authentication failed during strong authentication request | This event can be an indication an attacker has the password for the account but can't pass the multifactor authentication challenge.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/AADPrivilegedAccountsFailedMFA.yaml) |
| Privileged accounts that don't follow naming policy| High | Azure AD directory | [List Azure AD role assignments](../roles/view-assignments.md)| List role assignments for Azure AD roles and alert where the UPN doesn't match your organization's format. An example is the use of ADM_ as a prefix. |
-| Discover privileged accounts not registered for MFA | High | Microsoft Graph API| Query for IsMFARegistered eq false for admin accounts. [List credentialUserRegistrationDetails - Microsoft Graph beta](/graph/api/reportroot-list-credentialuserregistrationdetails?view=graph-rest-beta&preserve-view=true&tabs=http) | Audit and investigate to determine if the event is intentional or an oversight. |
+| Discover privileged accounts not registered for multifactor authentication | High | Microsoft Graph API| Query for IsMFARegistered eq false for admin accounts. [List credentialUserRegistrationDetails - Microsoft Graph beta](/graph/api/reportroot-list-credentialuserregistrationdetails?view=graph-rest-beta&preserve-view=true&tabs=http) | Audit and investigate to determine if the event is intentional or an oversight. |
| Account lockout | High | Azure AD Sign-ins log | Status = Failure<br>-and-<br>error code = 50053 | Define a baseline threshold, and then monitor and adjust to suit your organizational behaviors and limit false alerts from being generated.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/PrivilegedAccountsLockedOut.yaml) |
| Account disabled or blocked for sign-ins | Low | Azure AD Sign-ins log | Status = Failure<br>-and-<br>Target = User UPN<br>-and-<br>error code = 50057 | This event could indicate someone is trying to gain access to an account after they've left the organization. Although the account is blocked, it's still important to log and alert on this activity.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccounts-BlockedAccounts.yaml) |
-| MFA fraud alert or block | High | Azure AD Sign-ins log/Azure Log Analytics | Sign-ins>Authentication details Result details = MFA denied, fraud code entered | Privileged user has indicated they haven't instigated the MFA prompt, which could indicate an attacker has the password for the account.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/MFARejectedbyUser.yaml) |
-| MFA fraud alert or block | High | Azure AD Audit log log/Azure Log Analytics | Activity type = Fraud reported - User is blocked for MFA or fraud reported - No action taken (based on tenant-level settings for fraud report) | Privileged user has indicated they haven't instigated the MFA prompt, which could indicate an attacker has the password for the account.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/MFARejectedbyUser.yaml) |
+| MFA fraud alert or block | High | Azure AD Sign-ins log/Azure Log Analytics | Sign-ins>Authentication details Result details = MFA denied, fraud code entered | Privileged user has indicated they haven't instigated the multi-factor authentication prompt, which could indicate an attacker has the password for the account.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/MFARejectedbyUser.yaml) |
+| MFA fraud alert or block | High | Azure AD Audit log/Azure Log Analytics | Activity type = Fraud reported - User is blocked for MFA or fraud reported - No action taken (based on tenant-level settings for fraud report) | Privileged user has indicated they haven't instigated the multi-factor authentication prompt, which could indicate an attacker has the password for the account.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/MFARejectedbyUser.yaml) |
| Privileged account sign-ins outside of expected controls | | Azure AD Sign-ins log | Status = Failure<br>UserPrincipalName = \<Admin account\><br>Location = \<unapproved location\><br>IP address = \<unapproved IP\><br>Device info = \<unapproved Browser, Operating System\> | Monitor and alert on any entries that you've defined as unapproved.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SuspiciousSignintoPrivilegedAccount.yaml) |
| Outside of normal sign-in times | High | Azure AD Sign-ins log | Status = Success<br>-and-<br>Location =<br>-and-<br>Time = Outside of working hours | Monitor and alert if sign-ins occur outside of expected times. It's important to find the normal working pattern for each privileged account and to alert if there are unplanned changes outside of normal working times. Sign-ins outside of normal working hours could indicate compromise or possible insider threats.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/AnomolousSignInsBasedonTime.yaml) |
| Identity protection risk | High | Identity Protection logs | Risk state = At risk<br>-and-<br>Risk level = Low, medium, high<br>-and-<br>Activity = Unfamiliar sign-in/TOR, and so on | This event indicates there's some abnormality detected with the sign-in for the account and should be alerted on. |
You can monitor privileged account sign-in events in the Azure AD Sign-in logs.
| Change in legacy authentication protocol | High | Azure AD Sign-ins log | Client App = Other client, IMAP, POP3, MAPI, SMTP, and so on<br>-and-<br>Username = UPN<br>-and-<br>Application = Exchange (example) | Many attacks use legacy authentication, so if there's a change in auth protocol for the user, it could be an indication of an attack. |
| New device or location | High | Azure AD Sign-ins log | Device info = Device ID<br>-and-<br>Browser<br>-and-<br>OS<br>-and-<br>Compliant/Managed<br>-and-<br>Target = User<br>-and-<br>Location | Most admin activity should be from [privileged access devices](/security/compass/privileged-access-devices), from a limited number of locations. For this reason, alert on new devices or locations.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SuspiciousSignintoPrivilegedAccount.yaml) |
| Audit alert setting is changed | High | Azure AD Audit logs | Service = PIM<br>-and-<br>Category = Role management<br>-and-<br>Activity = Disable PIM alert<br>-and-<br>Status = Success | Changes to a core alert should be alerted if unexpected. |
-| Administrators authenticating to other Azure AD tenants| Medium| Azure AD Sign-ins log| Status = success<br><br>Resource tenantID != Home Tenant ID| When scoped to Privileged Users this detects when an administrator has successfully authenticated to another Azure AD tenant with an identity in your organization's tenant. <br><br>Alert if Resource TenantID is not equal to Home Tenant ID |
-|Admin User state changed from Guest to Member|Medium|Azure AD Audit logs|Activity: Update user<br><br>Category: UserManagement<br><br>UserType changed from Guest to Member|Monitor and alert on change of user type from Guest to Member.<br><br> Was this expected?
+| Administrators authenticating to other Azure AD tenants| Medium| Azure AD Sign-ins log| Status = success<br><br>Resource tenantID != Home Tenant ID| When scoped to Privileged Users, this monitor detects when an administrator has successfully authenticated to another Azure AD tenant with an identity in your organization's tenant. <br><br>Alert if Resource TenantID isn't equal to Home Tenant ID |
+|Admin User state changed from Guest to Member|Medium|Azure AD Audit logs|Activity: Update user<br><br>Category: UserManagement<br><br>UserType changed from Guest to Member|Monitor and alert on change of user type from Guest to Member.<br><br> Was this change expected?
|Guest users invited to tenant by non-approved inviters|Medium|Azure AD Audit logs|Activity: Invite external user<br><br>Category: UserManagement<br><br>Initiated by (actor): User Principal Name|Monitor and alert on non-approved actors inviting external users.|
## Changes by privileged accounts
Monitor all completed and attempted changes by a privileged account. This data enables you to establish what's normal activity for each privileged account and alert on activity that deviates from the expected. The Azure AD Audit logs are used to record this type of event. For more information on Azure AD Audit logs, see [Audit logs in Azure Active Directory](../reports-monitoring/concept-audit-logs.md).
### Azure Active Directory Domain Services
-Privileged accounts that have been assigned permissions in Azure AD Domain Services can perform tasks for Azure AD Domain Services that affect the security posture of your Azure-hosted virtual machines (VMs) that use Azure AD Domain Services. Enable security audits on VMs and monitor the logs. For more information on enabling Azure AD Domain Services audits and for a list of sensitive privileges, see the following resources:
+Privileged accounts that have been assigned permissions in Azure AD Domain Services can perform tasks for Azure AD Domain Services that affect the security posture of your Azure-hosted virtual machines that use Azure AD Domain Services. Enable security audits on virtual machines and monitor the logs. For more information on enabling Azure AD Domain Services audits and for a list of sensitive privileges, see the following resources:
* [Enable security audits for Azure Active Directory Domain Services](../../active-directory-domain-services/security-audit-events.md)
* [Audit Sensitive Privilege Use](/windows/security/threat-protection/auditing/audit-sensitive-privilege-use)
-| What to monitor | Risk level | Where | Filter/subfilter | Notes |
-|-|||--|--|
-| Attempted and completed changes | High | Azure AD Audit logs | Date and time<br>-and-<br>Service<br>-and-<br>Category and name of the activity (what)<br>-and-<br>Status = Success or failure<br>-and-<br>Target<br>-and-<br>Initiator or actor (who) | Any unplanned changes should be alerted on immediately. These logs should be retained to assist in any investigation. Any tenant-level changes should be investigated immediately (link out to Infra doc) that would lower the security posture of your tenant. An example is excluding accounts from MFA or Conditional Access. Alert on any [additions or changes to applications](security-operations-applications.md). |
-| **EXAMPLE**<br>Attempted or completed change to high-value apps or services | High | Audit log | Service<br>-and-<br>Category and name of the activity | <li>Date and time <li>Service <li>Category and name of the activity <li>Status = Success or failure <li>Target <li>Initiator or actor (who) |
-| Privileged changes in Azure AD Domain Services | High | Azure AD Domain Services | Look for event [4673](/windows/security/threat-protection/auditing/event-4673) | [Enable security audits for Azure Active Directory Domain Services](../../active-directory-domain-services/security-audit-events.md)<br>[Audit Sensitive Privilege use](/windows/security/threat-protection/auditing/audit-sensitive-privilege-use). See the article for a list of all privileged events. |
+| What to monitor | Risk level | Where | Filter/subfilter | Notes |
+||||-|-|
+| Attempted and completed changes | High | Azure AD Audit logs | Date and time<br>-and-<br>Service<br>-and-<br>Category and name of the activity (what)<br>-and-<br>Status = Success or failure<br>-and-<br>Target<br>-and-<br>Initiator or actor (who) | Any unplanned changes should be alerted on immediately. These logs should be retained to help with any investigation. Any tenant-level change that would lower the security posture of your tenant, such as excluding accounts from multifactor authentication or Conditional Access, should be investigated immediately. Alert on any additions or changes to applications. See [Azure Active Directory security operations guide for Applications](security-operations-applications.md). |
+| **EXAMPLE**<br>Attempted or completed change to high-value apps or services | High | Audit log | Service<br>-and-<br>Category and name of the activity | <li>Date and time <li>Service <li>Category and name of the activity <li>Status = Success or failure <li>Target <li>Initiator or actor (who) |
+| Privileged changes in Azure AD Domain Services | High | Azure AD Domain Services | Look for event [4673](/windows/security/threat-protection/auditing/event-4673) | [Enable security audits for Azure Active Directory Domain Services](../../active-directory-domain-services/security-audit-events.md)<br>For a list of all privileged events, see [Audit Sensitive Privilege use](/windows/security/threat-protection/auditing/audit-sensitive-privilege-use). |
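If you route Azure AD audit logs to a Log Analytics workspace, a query along the following lines can surface attempted and completed changes initiated by specific privileged accounts. This is a sketch only: the workspace ID and account names are placeholders, and you would typically attach an equivalent query to an alert rule rather than run it interactively.

```powershell
# Sketch only: list recent directory changes initiated by privileged accounts.
# Requires the Az.OperationalInsights module and Azure AD audit logs exported to the
# workspace. The workspace ID and UPNs are placeholders.
$kql = @"
AuditLogs
| where TimeGenerated > ago(1d)
| where InitiatedBy has_any ("admin1@contoso.com", "admin2@contoso.com")
| project TimeGenerated, ActivityDisplayName, Category, Result, InitiatedBy, TargetResources
| order by TimeGenerated desc
"@

$response = Invoke-AzOperationalInsightsQuery -WorkspaceId "<log-analytics-workspace-id>" -Query $kql
$response.Results | Format-Table
```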
## Changes to privileged accounts
Investigate changes to privileged accounts' authentication rules and privileges,
| Changes to authentication methods| High| Azure AD Audit logs| Service = Authentication Method<br>-and-<br>Activity type = User registered security information<br>-and-<br>Category = User management| This change could be an indication of an attacker adding an auth method to the account so they can have continued access.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/AuthenticationMethodsChangedforPrivilegedAccount.yaml) |
| Alert on changes to privileged account permissions| High| Azure AD Audit logs| Category = Role management<br>-and-<br>Activity type = Add eligible member (permanent)<br>-and-<br>Activity type = Add eligible member (eligible)<br>-and-<br>Status = Success or failure<br>-and-<br>Modified properties = Role.DisplayName| This alert is especially for accounts being assigned roles that aren't known or are outside of their normal responsibilities. |
| Unused privileged accounts| Medium| Azure AD Access Reviews| | Perform a monthly review for inactive privileged user accounts. |
-| Accounts exempt from Conditional Access| High| Azure Monitor Logs<br>-or-<br>Access Reviews| Conditional Access = Insights and reporting| Any account exempt from Conditional Access is most likely bypassing security controls and is more vulnerable to compromise. Break-glass accounts are exempt. See information on how to monitor break-glass accounts in a subsequent section of this article.|
-| Addition of a Temporary Access Pass to a privileged account| High| Azure AD Audit logs| Activity: Admin registered security info<br><br>Status Reason: Admin registered temporary access pass method for user<br><br>Category: UserManagement<br><br>Initiated by (actor): User Principal Name<br><br>Target:User Principal Name|Monitor and alert on a Temporary Access Pass being created for a privileged user.
+| Accounts exempt from Conditional Access| High| Azure Monitor Logs<br>-or-<br>Access Reviews| Conditional Access = Insights and reporting| Any account exempt from Conditional Access is most likely bypassing security controls and is more vulnerable to compromise. Break-glass accounts are exempt. See information on how to monitor break-glass accounts later in this article.|
+| Addition of a Temporary Access Pass to a privileged account| High| Azure AD Audit logs| Activity: Admin registered security info<br><br>Status Reason: Admin registered temporary access pass method for user<br><br>Category: UserManagement<br><br>Initiated by (actor): User Principal Name<br><br>Target: User Principal Name|Monitor and alert on a Temporary Access Pass being created for a privileged user.
For more information on how to monitor for exceptions to Conditional Access policies, see [Conditional Access insights and reporting](../conditional-access/howto-conditional-access-insights-reporting.md).
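To illustrate the Temporary Access Pass row in the preceding table, the following sketch queries the audit logs in Log Analytics for admin-registered security info events that mention a temporary access pass. The workspace ID is a placeholder, and the exact result-reason text can vary, so treat the string match as an assumption to validate against your own logs.

```powershell
# Sketch only: flag Temporary Access Pass registrations, based on the audit log fields
# shown in the table above. The workspace ID is a placeholder and the ResultReason text
# match is an assumption to verify in your tenant.
$kql = @"
AuditLogs
| where TimeGenerated > ago(1d)
| where ActivityDisplayName == "Admin registered security info"
| where ResultReason has "temporary access pass"
| project TimeGenerated, InitiatedBy, TargetResources, ResultReason
"@

Invoke-AzOperationalInsightsQuery -WorkspaceId "<log-analytics-workspace-id>" -Query $kql |
    Select-Object -ExpandProperty Results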
Having privileged accounts that are permanently provisioned with elevated abilit
### Establish a baseline
-To monitor for exceptions, you must first create a baseline. Determine the following information for:
+To monitor for exceptions, you must first create a baseline. Determine the following information for these elements:
-* **Admin accounts**:
+* **Admin accounts**
- * Your privileged account strategy
- * Use of on-premises accounts to administer on-premises resources
- * Use of cloud-based accounts to administer cloud-based resources
- * Approach to separating and monitoring administrative permissions for on-premises and cloud-based resources
+ * Your privileged account strategy
+ * Use of on-premises accounts to administer on-premises resources
+ * Use of cloud-based accounts to administer cloud-based resources
+ * Approach to separating and monitoring administrative permissions for on-premises and cloud-based resources
-* **Privileged role protection**:
+* **Privileged role protection**
- * Protection strategy for roles that have administrative privileges
- * Organizational policy for using privileged accounts
- * Strategy and principles for maintaining permanent privilege versus providing time-bound and approved access
+ * Protection strategy for roles that have administrative privileges
+ * Organizational policy for using privileged accounts
+ * Strategy and principles for maintaining permanent privilege versus providing time-bound and approved access
-The following concepts and information will help you determine policies:
+The following concepts and information help determine policies:
-* **Just-in-time admin principles**: Use the Azure AD logs to capture information for performing administrative tasks that are common in your environment. Determine the typical amount of time needed to complete the tasks.
-* **Just-enough admin principles**: [Determine the least-privileged role](../roles/delegate-by-task.md), which might be a custom role, that's needed for administrative tasks.
-* **Establish an elevation policy**: After you have insight into the type of elevated privilege needed and how long is needed for each task, create policies that reflect elevated privileged usage for your environment. As an example, define a policy to limit Global admin access to one hour.
+* **Just-in-time admin principles**. Use the Azure AD logs to capture information for performing administrative tasks that are common in your environment. Determine the typical amount of time needed to complete the tasks.
+* **Just-enough admin principles**. Determine the least-privileged role, which might be a custom role, that's needed for administrative tasks. For more information, see [Least privileged roles by task in Azure Active Directory](../roles/delegate-by-task.md).
+* **Establish an elevation policy**. After you have insight into the type of elevated privilege needed and how long it's needed for each task, create policies that reflect elevated privileged usage for your environment. As an example, define a policy to limit Global admin access to one hour.
- After you establish your baseline and set policy, you can configure monitoring to detect and alert usage outside of policy.
+After you establish your baseline and set policy, you can configure monitoring to detect and alert usage outside of policy.
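As a starting point for a baseline, the following hedged sketch summarizes PIM role activations per account over the last 30 days from audit logs in Log Analytics. The workspace ID is a placeholder, and the activation activity name shown is an assumption that can differ between tenants and over time.

```powershell
# Sketch only: count recent PIM role activations per initiator to help establish a
# baseline of elevation frequency. Assumes Azure AD audit logs are exported to Log
# Analytics; the ActivityDisplayName filter is an assumption to verify in your tenant.
$kql = @"
AuditLogs
| where TimeGenerated > ago(30d)
| where LoggedByService == "PIM"
| where ActivityDisplayName has "Add member to role completed (PIM activation)"
| summarize Activations = count() by Initiator = tostring(InitiatedBy.user.userPrincipalName)
| order by Activations desc
"@

Invoke-AzOperationalInsightsQuery -WorkspaceId "<log-analytics-workspace-id>" -Query $kql |
    Select-Object -ExpandProperty Results
```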
### Discovery
Pay particular attention to and investigate changes in assignment and elevation
### Things to monitor
-You can monitor privileged account changes by using Azure AD Audit logs and Azure Monitor logs. Specifically, include the following changes in your monitoring process.
+You can monitor privileged account changes by using Azure AD Audit logs and Azure Monitor logs. Include the following changes in your monitoring process.
| What to monitor| Risk level| Where| Filter/subfilter| Notes |
| - | - | - | - | - |
For more information about managing elevation, see [Elevate access to manage all
For information about configuring alerts for Azure roles, see [Configure security alerts for Azure resource roles in Privileged Identity Management](../privileged-identity-management/pim-resource-roles-configure-alerts.md).
- ## Next steps
+## Next steps
See these security operations guide articles:
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md
The registration campaign comes with the ability for an admin to scope users and
**Service category:** User Access Management
**Product capability:** Entitlement Management
-In Azure AD entitlement management, an administrator can define that an access package is incompatible with another access package or with a group. Users who have the incompatible memberships will be then unable to request more access. [Learn more](../governance/entitlement-management-access-package-request-policy.md#prevent-requests-from-users-with-incompatible-access-preview).
+In Azure AD entitlement management, an administrator can define that an access package is incompatible with another access package or with a group. Users who have the incompatible memberships will be then unable to request more access. [Learn more](../governance/entitlement-management-access-package-request-policy.md#prevent-requests-from-users-with-incompatible-access).
active-directory Deploy Access Reviews https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/deploy-access-reviews.md
When you create an access review for groups or applications, you can choose to l
[Access packages](entitlement-management-overview.md) can vastly simplify your governance and access review strategy. An access package is a bundle of all the resources with the access a user needs to work on a project or do their task. For example, you might want to create an access package that includes all the applications that developers in your organization need, or all applications to which external users should have access. An administrator or delegated access package manager then groups the resources (groups or apps) and the roles the users need for those resources.
-When you [create an access package](entitlement-management-access-package-create.md), you can create one or more access policies that set conditions for which users can request an access package, what the approval process looks like, and how often a person would have to re-request access. Access reviews are configured while you create or edit an access package policy.
+When you [create an access package](entitlement-management-access-package-create.md), you can create one or more access package policies that set conditions for which users can request an access package, what the approval process looks like, and how often a person would have to re-request access or have their access reviewed. Access reviews are configured while you create or edit those access package policies.
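If you prefer to inspect these settings programmatically, the following PowerShell sketch reads the access review settings on each policy of an access package through the Microsoft Graph beta endpoints referenced elsewhere in these articles. It's a sketch under assumptions: the access package ID is a placeholder, paging is omitted, and the beta property names may change.

```powershell
# Sketch only: read the access review settings configured on an access package's policies.
# Requires EntitlementManagement.Read.All. The access package ID is a placeholder, and the
# property names follow the beta schema referenced in these articles, so verify them.
Connect-MgGraph -Scopes "EntitlementManagement.Read.All"

$accessPackageId = "<access-package-id>"
$uri = "https://graph.microsoft.com/beta/identityGovernance/entitlementManagement/accessPackageAssignmentPolicies"

$policies = (Invoke-MgGraphRequest -Method GET -Uri $uri).value |
    Where-Object { $_.accessPackageId -eq $accessPackageId }   # paging omitted for brevity

foreach ($policy in $policies) {
    [pscustomobject]@{
        Policy         = $policy.displayName
        ReviewEnabled  = $policy.accessReviewSettings.isEnabled
        RecurrenceType = $policy.accessReviewSettings.recurrenceType
        DurationInDays = $policy.accessReviewSettings.durationInDays
    }
}
```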
Select the **Lifecycle** tab and scroll down to access reviews.
active-directory Entitlement Management Access Package Approval Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-approval-policy.md
# Change approval and requestor information settings for an access package in Azure AD entitlement management
-As an access package manager, you can change the approval and requestor information settings for an access package at any time by editing an existing policy or adding a new policy.
+As an access package manager, you can change the approval and requestor information settings for an access package at any time by editing an existing policy or adding a new policy for requesting access.
-This article describes how to change the approval and requestor information settings for an existing access package.
+This article describes how to change the approval and requestor information settings for an existing access package, through an access package's policy.
## Approval
In order to make sure users are getting access to the right access packages, you
1. Fill out the remaining tabs (e.g., Lifecycle) based on your needs.
-After you have configured requestor information in your access package policy, can view the requestor's responses to the questions. For guidance on seeing requestor information, see [View requestor's answers to questions](entitlement-management-request-approve.md#view-requestors-answers-to-questions).
+After you have configured requestor information in your access package's policy, you can view the requestor's responses to the questions. For guidance on seeing requestor information, see [View requestor's answers to questions](entitlement-management-request-approve.md#view-requestors-answers-to-questions).
## Next steps
- [Change lifecycle settings for an access package](entitlement-management-access-package-lifecycle-policy.md)
active-directory Entitlement Management Access Package Assignments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-assignments.md
$req = New-MgEntitlementManagementAccessPackageAssignmentRequest -AccessPackageI
## Remove an assignment
+You can remove an assignment that a user or an administrator had previously requested.
+
**Prerequisite role:** Global administrator, User administrator, Catalog owner, Access package manager or Access package assignment manager
1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**.
active-directory Entitlement Management Access Package Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-create.md
You can also create an access package using Microsoft Graph. A user in an approp
1. [List the accessPackageResources in the catalog](/graph/api/entitlementmanagement-list-accesspackagecatalogs?tabs=http&view=graph-rest-beta&preserve-view=true) and [create an accessPackageResourceRequest](/graph/api/entitlementmanagement-post-accesspackageresourcerequests?tabs=http&view=graph-rest-beta&preserve-view=true) for any resources that are not yet in the catalog.
1. [List the accessPackageResourceRoles](/graph/api/accesspackage-list-accesspackageresourcerolescopes?tabs=http&view=graph-rest-beta&preserve-view=true) of each accessPackageResource in an accessPackageCatalog. This list of roles will then be used to select a role, when subsequently creating an accessPackageResourceRoleScope.
1. [Create an accessPackage](/graph/tutorial-access-package-api).
-1. [Create an accessPackageAssignmentPolicy](/graph/api/entitlementmanagement-post-accesspackageassignmentpolicies?tabs=http&view=graph-rest-beta&preserve-view=true).
+1. [Create an accessPackageAssignmentPolicy](/graph/api/entitlementmanagement-post-accesspackageassignmentpolicies?tabs=http&view=graph-rest-beta&preserve-view=true) for each policy needed in the access package.
1. [Create an accessPackageResourceRoleScope](/graph/api/accesspackage-post-accesspackageresourcerolescopes?tabs=http&view=graph-rest-beta&preserve-view=true) for each resource role needed in the access package.
## Next steps
active-directory Entitlement Management Access Package Lifecycle Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-lifecycle-policy.md
# Change lifecycle settings for an access package in Azure AD entitlement management
-As an access package manager, you can change the lifecycle settings for an access package at any time by editing an existing policy. If you change the expiration date for a policy, the expiration date for requests that are already in a pending approval or approved state will not change.
+As an access package manager, you can change the lifecycle settings for assignments in an access package at any time by editing an existing policy. If you change the expiration date for assignments on a policy, the expiration date for requests that are already in a pending approval or approved state will not change.
This article describes how to change the lifecycle settings for an existing access package.
active-directory Entitlement Management Access Package Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-request-policy.md
As an access package manager, you can change the users who can request an access
The way you specify who can request an access package is with a policy. Before creating a new policy or editing an existing policy in an access package, you need to determine how many policies the access package needs.
-When you create an access package, you specify the request setting which creates a policy. Most access packages will have a single policy, but a single access package can have multiple policies. You would create multiple policies for an access package if you want to allow different sets of users to be granted assignments with different request and approval settings.
+When you create an access package, you specify the request, approval and lifecycle settings, which are stored on the first policy of the access package. Most access packages will have a single policy, but a single access package can have multiple policies. You would create multiple policies for an access package if you want to allow different sets of users to be granted assignments with different request and approval settings.
For example, a single policy cannot be used to assign internal and external users to the same access package. However, you can create two policies in the same access package, one for internal users and one for external users. If there are multiple policies that apply to a user, they will be prompted at the time of their request to select the policy they would like to be assigned to. The following diagram shows an access package with two policies.
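To see how many policies an access package already has, and what kind of requestors each one allows, you can read the policies through the Microsoft Graph beta endpoints referenced by these articles. The following PowerShell sketch is illustrative only: it assumes EntitlementManagement.Read.All permission, and the requestorSettings property names follow the beta schema and should be verified.

```powershell
# Sketch only: list how many policies each access package has, and which requestor scope
# each policy allows, to help decide whether more policies are needed. Read-only; requires
# EntitlementManagement.Read.All. Property names follow the beta schema and may change.
Connect-MgGraph -Scopes "EntitlementManagement.Read.All"

$uri = "https://graph.microsoft.com/beta/identityGovernance/entitlementManagement/accessPackageAssignmentPolicies"
$policies = (Invoke-MgGraphRequest -Method GET -Uri $uri).value   # paging omitted for brevity

$policies | ForEach-Object {
    [pscustomobject]@{
        AccessPackageId = $_.accessPackageId
        Policy          = $_.displayName
        RequestorScope  = $_.requestorSettings.scopeType
    }
} | Sort-Object AccessPackageId | Format-Table
```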
Follow these steps if you want to allow users not in your directory to request t
1. Once you've selected all your connected organizations, click **Select**.
> [!NOTE]
- > All users from the selected connected organizations will be able to request this access package. This includes users in Azure AD from all subdomains associated with the organization, unless those domains are blocked by the Azure B2B allow or blocklist. For more information, see [Allow or block invitations to B2B users from specific organizations](../external-identities/allow-deny-list.md).
+ > All users from the selected connected organizations can request this access package. For a connected organization that has an Azure AD directory, users from all verified domains associated with the Azure AD directory can request, unless those domains are blocked by the Azure B2B allow or deny list. For more information, see [Allow or block invitations to B2B users from specific organizations](../external-identities/allow-deny-list.md).
1. If you want to require approval, use the steps in [Change approval settings for an access package in Azure AD entitlement management](entitlement-management-access-package-approval-policy.md) to configure approval settings.
To change the request and approval settings for an access package, you need to o
1. If you are editing a policy click **Update**. If you are adding a new policy, click **Create**.
-## Prevent requests from users with incompatible access (preview)
+## Prevent requests from users with incompatible access
In addition to the policy checks on who can request, you may wish to further restrict access, in order to avoid a user who already has some access - via a group or another access package - from obtaining excessive access.
active-directory Entitlement Management Access Package Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-requests.md
# View and remove requests for an access package in Azure AD entitlement management
-In Azure AD entitlement management, you can see who has requested access packages, their policy, and status. This article describes how to view requests for an access package, and remove requests that are no longer needed.
+In Azure AD entitlement management, you can see who has requested access packages, the policy for their request, and the status of their request. This article describes how to view requests for an access package, and remove requests that are no longer needed.
## View requests
active-directory Entitlement Management Access Reviews Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-reviews-create.md
Title: Create an access review of an access package in Azure AD entitlement management
-description: Learn how to create an access review policy for entitlement management access packages in Azure Active Directory access reviews (Preview).
+description: Learn how to set up an access review in a policy for entitlement management access packages in Azure Active Directory.
documentationCenter: ''
-#Customer intent: As an administrator, I want to create an access review policy for my access packages so I can review the active assignments of my users to ensure everyone has the appropriate access.
+#Customer intent: As an administrator, I want to create an access review for my access packages so I can review the active assignments of my users to ensure everyone has the appropriate access.
# Create an access review of an access package in Azure AD entitlement management
active-directory Entitlement Management Access Reviews Self Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-reviews-self-review.md
Title: Self-review of an access package in Azure AD entitlement management
-description: Learn how to review user access of entitlement management access packages in Azure Active Directory access reviews (Preview).
+description: Learn how to review user access of entitlement management access packages in Azure Active Directory access reviews.
documentationCenter: ''
active-directory Entitlement Management External Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-external-users.md
When using the [Azure AD B2B](../external-identities/what-is-b2b.md) invite expe
With entitlement management, you can define a policy that allows users from organizations you specify to be able to self-request an access package. That policy includes whether approval is required, whether access reviews are required, and an expiration date for the access. If approval is required, you might consider inviting one or more users from the external organization to your directory, designating them as sponsors, and configuring that sponsors are approvers - since they are likely to know which external users from their organization need access. Once you have configured the access package, obtain the access package's request link so you can send that link to your contact person (sponsor) at the external organization. That contact can share with other users in their external organization, and they can use this link to request the access package. Users from that organization who have already been invited into your directory can also use that link.
-Typically, when a request is approved, entitlement management will provision the user with the necessary access. If the user is not already in your directory, entitlement management will first invite the user. When the user is invited, Azure AD will automatically create a B2B guest account for them, but will not send the user an email. Note that an administrator may have previously limited which organizations are permitted for collaboration, by setting a [B2B allow or deny list](../external-identities/allow-deny-list.md) to allow or block invites to other organizations. If the user is not permitted by the allow or block list, then they will not be invited, and cannot be assigned access until the lists are updated.
+Typically, when a request is approved, entitlement management will provision the user with the necessary access. If the user isn't already in your directory, entitlement management will first invite the user. When the user is invited, Azure AD will automatically create a B2B guest account for them but won't send the user an email. Note that an administrator may have previously limited which organizations are allowed for collaboration, by setting a [B2B allow or deny list](../external-identities/allow-deny-list.md) to allow or block invites to other organization's domains. If the user's domain isn't allowed by those lists, then they won't be invited and can't be assigned access until the lists are updated.
Since you do not want the external user's access to last forever, you specify an expiration date in the policy, such as 180 days. After 180 days, if their access is not extended, entitlement management will remove all access associated with that access package. By default, if the user who was invited through entitlement management has no other access package assignments, then when they lose their last assignment, their guest account will be blocked from signing in for 30 days, and subsequently removed. This prevents the proliferation of unnecessary accounts. As described in the following sections, these settings are configurable.
The following diagram and steps provide an overview of how external users are gr
1. To access the resources, the external user can either click the link in the email or attempt to access any of the directory resources directly to complete the invitation process.
-1. If the policy settings includes an expiration date, then later when the access package assignment for the external user expires, the external user's access rights from that access package are removed.
+1. If the policy settings include an expiration date, then later when the access package assignment for the external user expires, the external user's access rights from that access package are removed.
1. Depending on the lifecycle of external users settings, when the external user no longer has any access package assignments, the external user is blocked from signing in and the guest user account is removed from your directory.
To ensure people outside of your organization can request access packages and ge
### Configure your Azure AD B2B external collaboration settings
- Allowing guests to invite other guests to your directory means that guest invites can occur outside of entitlement management. We recommend setting **Guests can invite** to **No** to only allow for properly governed invitations.
-- If you are using the B2B allow list, you must make sure all the domains of all the organizations you want to partner with using entitlement management are added to the list. Alternatively, if you are using the B2B deny list, you must make sure no domain of any organization you want to partner with is not present on that list.
-- If you create an entitlement management policy for **All users** (All connected organizations + any new external users), and a user doesn't belong to a connected organization in your directory, a connected organization will automatically be created for them when they request the package. Any B2B allow or deny list settings you have will take precedence. Therefore, be sure to include the domains you intend to include in this policy to your allow list if you are using one, and exclude them from your deny list if you are using a deny list.
+- If you have been previously using the B2B allow list, you must either remove that list, or make sure all the domains of all the organizations you want to partner with using entitlement management are added to the list. Alternatively, if you are using the B2B deny list, you must make sure no domain of any organization you want to partner with is present on that list.
+- If you create an entitlement management policy for **All users** (All connected organizations + any new external users), and a user doesn't belong to a connected organization in your directory, a connected organization will automatically be created for them when they request the package. However, any B2B [allow or deny list](../external-identities/allow-deny-list.md) settings you have will take precedence. Therefore, you will want to remove the allow list, if you were using one, so that **All users** can request access, and exclude all authorized domains from your deny list if you are using a deny list.
- If you want to create an entitlement management policy that includes **All users** (All connected organizations + any new external users), you must first enable email one-time passcode authentication for your directory. For more information, see [Email one-time passcode authentication](../external-identities/one-time-passcode.md).
- For more information about Azure AD B2B external collaboration settings, see [Configure external collaboration settings](../external-identities/external-collaboration-settings-configure.md).
![Azure AD external collaboration settings](./media/entitlement-management-external-users/collaboration-settings.png)
+
+ > [!NOTE]
+ > If you create a connected organization for an Azure AD tenant from a different Microsoft cloud, you also need to configure cross-tenant access settings appropriately. For more information on how to configure these settings, see [Configure cross-tenant access settings](../external-identities/cross-cloud-settings.md).
### Review your Conditional Access policies
active-directory Entitlement Management Logic Apps Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-logic-apps-integration.md
These triggers to Logic Apps are controlled in a new tab within access package p
For more information on creating Logic App workflows, see [Create automated workflows with Azure Logic Apps in the Azure portal](../../logic-apps/quickstart-create-first-logic-app-workflow.md).
-## Add custom extension to access package policy
+## Add custom extension to a policy in an access package
**Prerequisite roles:** Global administrator, Identity Governance administrator, Catalog owner, or Access package manager
These triggers to Logic Apps are controlled in a new tab within access package p
1. In the left menu, select **Access packages**.
-1. Select **New access package** if you want to add a custom extension (Logic App) to a new access package. Or select the access package you want to add a custom extension (Logic App) to from the list of access packages that have already been created.
+1. Select the access package you want to add a custom extension (Logic App) to from the list of access packages that have already been created.
> [!NOTE]
+ > Select **New access package** if you want to create a new access package.
> For more information about how to create an access package, see [Create a new access package in entitlement management](entitlement-management-access-package-create.md). For more information about how to edit an existing access package, see [Change request settings for an access package in Azure AD entitlement management](entitlement-management-access-package-request-policy.md#open-and-edit-an-existing-policy-of-request-settings).
-1. In the policy settings of the access package, go to the **Rules (Preview)** tab.
+1. Change to the policy tab, select the policy and select **Edit**.
-1. In the menu below **When**, select the access package event you wish to use as trigger for this custom extension (Logic App). For example, if you only want to trigger the custom extension Logic App workflow when a user requests the access package, select **when request is created**.
+1. In the policy settings, go to the **Custom Extensions (Preview)** tab.
-1. In the menu below **Do**, select the custom extension (Logic App) you want to add to the access package. The do action you select will execute when the event selected in the when field occurs.
+1. In the menu below **Stage**, select the access package event you wish to use as trigger for this custom extension (Logic App). For example, if you only want to trigger the custom extension Logic App workflow when a user requests the access package, select **Request is created**.
-1. Select **Create** if you want to add the custom extension to a new access package. Select **Update** if you want to add it to an existing access package.
+1. In the menu below **Custom Extension**, select the custom extension (Logic App) you want to add to the access package. The custom extension you select will run when the event selected in the **Stage** field occurs.
+
+1. Select **Update** to add it to an existing access package's policy.
![Add a logic app to access package](./media/entitlement-management-logic-apps/add-logic-apps-access-package.png)
active-directory Entitlement Management Organization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-organization.md
A connected organization is another organization that you have a relationship wi
There are three ways that entitlement management lets you specify the users that form a connected organization. It could be
-* users in another Azure AD directory,
+* users in another Azure AD directory (from any Microsoft cloud),
* users in another non-Azure AD directory that has been configured for direct federation, or * users in another non-Azure AD directory, whose email addresses all have the same domain name in common.
+In addition, you can have a connected organization for users with a Microsoft Account, such as from the domain *live.com*.
+
For example, suppose you work at Woodgrove Bank and you want to collaborate with two external organizations. These two organizations have different configurations:
- Graphic Design Institute uses Azure AD, and their users have a user principal name that ends with *graphicdesigninstitute.com*.
- Contoso does not yet use Azure AD. Contoso users have a user principal name that ends with *contoso.com*.
-In this case, you can configure two connected organizations. You create one connected organization for Graphic Design Institute and one for Contoso. If you then add the two connected organizations to a policy, users from each organization with a user principal name that matches the policy can request access packages. Users with a user principal name that has a domain of contoso.com would match the Contoso-connected organization and would also be allowed to request packages. Users with a user principal name that has a domain of *graphicdesigninstitute.com* would match the Graphic Design Institute-connected organization and be allowed to submit requests. And, because Graphic Design Institute uses Azure AD, any users with a principal name that matches a [verified domain](../fundamentals/add-custom-domain.md#verify-your-custom-domain-name) that's added to their tenant, such as *graphicdesigninstitute.example*, would also be able to request access packages by using the same policy. If you have [email one-time passcode (OTP) authentication](../external-identities/one-time-passcode.md) turned on, that includes users from those domains who do not yet have Azure AD accounts who will authenticate using email OTP when accessing your resources.
+In this case, you can configure two connected organizations. You create one connected organization for Graphic Design Institute and one for Contoso. If you then add the two connected organizations to a policy, users from each organization with a user principal name that matches the policy can request access packages. Users with a user principal name that has a domain of contoso.com would match the Contoso-connected organization and would also be allowed to request packages. Users with a user principal name that has a domain of *graphicdesigninstitute.com* would match the Graphic Design Institute-connected organization and be allowed to submit requests. And, because Graphic Design Institute uses Azure AD, any users with a principal name that matches a [verified domain](../fundamentals/add-custom-domain.md#verify-your-custom-domain-name) that's added to their tenant, such as *graphicdesigninstitute.example*, would also be able to request access packages by using the same policy. If you have [email one-time passcode (OTP) authentication](../external-identities/one-time-passcode.md) turned on, that includes users from those domains that aren't yet part of Azure AD directories who'll authenticate using email OTP when accessing your resources.
![Connected organization example](./media/entitlement-management-organization/connected-organization-example.png)
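As an alternative to the portal steps that follow, connected organizations can also be created through the Microsoft Graph entitlement management API. The following PowerShell sketch creates one for the Contoso example above; it's a sketch only, the identity source shape is an assumption to verify against the Graph reference, and it requires the EntitlementManagement.ReadWrite.All permission.

```powershell
# Sketch only: create a connected organization for a partner identified by its domain
# (the Contoso example above). Requires EntitlementManagement.ReadWrite.All; verify the
# identitySources shape against the Microsoft Graph connectedOrganization reference.
Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All"

$body = @{
    displayName     = "Contoso"
    description     = "Partner organization identified by its domain"
    identitySources = @(
        @{
            "@odata.type" = "#microsoft.graph.domainIdentitySource"
            domainName    = "contoso.com"
            displayName   = "Contoso"
        }
    )
} | ConvertTo-Json -Depth 5

Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/identityGovernance/entitlementManagement/connectedOrganizations" `
    -Body $body -ContentType "application/json"
```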
To add an external Azure AD directory or domain as a connected organization, fol
1. In the search box, enter a domain name to search for the Azure AD directory or domain. Be sure to enter the entire domain name.
-1. Verify that the organization name and authentication type are correct. How users sign in depends on the authentication type.
1. Confirm that the organization name and authentication type are correct. How users sign in, prior to being able to access the myaccess portal, depends on the authentication type for their organization. If the authentication type for a connected organization is Azure AD, then all users with an account in any verified domain of that Azure AD directory will sign in to their directory, and then can request access to access packages that allow that connected organization. If the authentication type is One-time passcode, users with email addresses from just that domain can visit the myaccess portal. Then, after they authenticate with the passcode, the user can make a request.
![The "Select directories + domains" pane](./media/entitlement-management-organization/organization-select-directories-domains.png)
-1. Select **Add** to add the Azure AD directory or domain. Currently, you can add only one Azure AD directory or domain per connected organization.
- > [!NOTE]
- > All users from the Azure AD directory or domain will be able to request this access package. This includes users in Azure AD from all subdomains associated with the directory, unless those domains are blocked by the Azure AD business to business (B2B) allow or deny list. For more information, see [Allow or block invitations to B2B users from specific organizations](../external-identities/allow-deny-list.md).
+ > Access from some domains could be blocked by the Azure AD business to business (B2B) allow or deny list. For more information, see [Allow or block invitations to B2B users from specific organizations](../external-identities/allow-deny-list.md).
+
+1. Select **Add** to add the Azure AD directory or domain. Currently, you can add only one Azure AD directory or domain per connected organization.
1. After you've added the Azure AD directory or domain, select **Select**.
active-directory Entitlement Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-overview.md
You can also read the [common scenarios](entitlement-management-scenarios.md), o
Entitlement management introduces to Azure AD the concept of an *access package*. An access package is a bundle of all the resources with the access a user needs to work on a project or perform their task. Access packages are used to govern access for your internal employees, and also users outside your organization.
- Here are the types of resources you can manage user's access to with entitlement management:
+ Here are the types of resources you can manage users' access to with entitlement management:
- Membership of Azure AD security groups
- Membership of Microsoft 365 Groups and Teams
You can also control access to other resources that rely upon Azure AD security
With an access package, an administrator or delegated access package manager lists the resources (groups, apps, and sites), and the roles the users need for those resources.
-Access packages also include one or more *policies*. A policy defines the rules or guardrails for assignment to access package. Each policy can be used to ensure that only the appropriate users are able to request access, that there are approvers for their request, and that their access to those resources is time-limited and will expire if not renewed.
+Access packages also include one or more *policies*. A policy defines the rules or guardrails for assignment to an access package. Each policy can be used to ensure that only the appropriate users can have access assignments, and that their access is time-limited and will expire if not renewed.
![Access package and policies](./media/entitlement-management-overview/elm-overview-access-package.png)
-Within each policy, an administrator or access package manager defines
+You can have policies for users to request access. In these kinds of policies, an administrator or access package manager defines
- Either the already-existing users (typically employees or already-invited guests), or the partner organizations of external users, that are eligible to request access
- The approval process and the users that can approve or deny access
- The duration of a user's access assignment, once approved, before the assignment expires
+You can also have policies for users to be assigned access, either by an administrator or automatically.
+
The following diagram shows an example of the different elements in entitlement management. It shows one catalog with two example access packages.
- **Access package 1** includes a single group as a resource. Access is defined with a policy that enables a set of users in the directory to request access.
active-directory How To Connect Fed Group Claims https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-fed-group-claims.md
Emit group names to be returned in `NetbiosDomain\sAMAccountName` format as the
"optionalClaims": { "saml2Token": [{ "name": "groups",
- "additionalProperties": ["netbios_name_and_sam_account_name", "emit_as_roles"]
+ "additionalProperties": ["netbios_domain_and_sam_account_name", "emit_as_roles"]
}], "idToken": [{ "name": "groups",
- "additionalProperties": ["netbios_name_and_sam_account_name", "emit_as_roles"]
+ "additionalProperties": ["netbios_domain_and_sam_account_name", "emit_as_roles"]
}] } ```
Emit group names to be returned in `NetbiosDomain\sAMAccountName` format as the
- [Add authorization using groups & group claims to an ASP.NET Core web app (code sample)](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/5-WebApp-AuthZ/5-2-Groups/README.md)
- [Assign a user or group to an enterprise app](../../active-directory/manage-apps/assign-user-or-group-access-portal.md)
-- [Configure role claims](../../active-directory/develop/active-directory-enterprise-app-role-management.md)
+- [Configure role claims](../../active-directory/develop/active-directory-enterprise-app-role-management.md)
active-directory How To Connect Sync Staging Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-staging-server.md
na Previously updated : 02/27/2018 Last updated : 5/18/2022
Most of the file is self-explanatory. Some abbreviations to understand the conte
* AMODT – Attribute Modification Type. Indicates if the operation at an attribute level is an Add, Update, or delete.

**Retrieve common identifiers**
-The export.csv file contains all changes that are about to be exported. Each row corresponds to a change for an object in the connector space and the object is identified by the DN attribute. The DN attribute is a unique identifier assigned to an object in the connector space. When you have many rows/changes in the export.csv to analyze, it may be difficult for you to figure out which objects the changes are for based on the DN attribute alone. To simplify the process of analyzing the changes, use the csanalyzer.ps1 PowerShell script. The script retrieves common identifiers (for example, displayName, userPrincipalName) of the objects. To use the script:
+The export.csv file contains all changes that are about to be exported. Each row corresponds to a change for an object in the connector space and the object is identified by the DN attribute. The DN attribute is a unique identifier assigned to an object in the connector space. When you have many rows/changes in the export.csv to analyze, it may be difficult for you to figure out which objects the changes are for based on the DN attribute alone. To simplify the process of analyzing the changes, use the `csanalyzer.ps1` PowerShell script. The script retrieves common identifiers (for example, displayName, userPrincipalName) of the objects. To use the script:
1. Copy the PowerShell script from the section [CSAnalyzer](#appendix-csanalyzer) to a file named `csanalyzer.ps1`.
2. Open a PowerShell window and browse to the folder where you created the PowerShell script.
3. Run: `.\csanalyzer.ps1 -xmltoimport %temp%\export.xml`.
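The script writes its results to one or more files named `processedusers<number>.csv`. As a quick way to review those results, you could load a batch back into PowerShell; the following is a minimal sketch (the file name and the exact values in the **operation** column depend on your export):

```powershell
# Minimal sketch: review one batch produced by csanalyzer.ps1.
# Assumes processedusers1.csv exists in the current folder.
$processed = Import-Csv -Path .\processedusers1.csv

# List the objects by common identifiers and the pending operation.
$processed |
    Select-Object DN, UPN, displayName, operation |
    Sort-Object operation |
    Format-Table -AutoSize
```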
Support for SQL AOA was added to Azure AD Connect in version 1.1.524.0. You must
## Appendix CSAnalyzer

See the section [verify](#verify) on how to use this script.
-```
+```powershell
Param(
    [Parameter(Mandatory=$true, HelpMessage="Must be a file generated using csexport 'Name of Connector' export.xml /f:x)")]
    [string]$xmltoimport="%temp%\exportedStage1a.xml",
$resolvedXMLtoimport=Resolve-Path -Path ([Environment]::ExpandEnvironmentVariabl
#use an XmlReader to deal with even large files
$result=$reader = [System.Xml.XmlReader]::Create($resolvedXMLtoimport)
$result=$reader.ReadToDescendant('cs-object')
-do 
+if($result)
{
- #create the object placeholder
- #adding them up here means we can enforce consistency
- $objOutputUser=New-Object psobject
- Add-Member -InputObject $objOutputUser -MemberType NoteProperty -Name ID -Value ""
- Add-Member -InputObject $objOutputUser -MemberType NoteProperty -Name Type -Value ""
- Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name DN -Value ""
- Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name operation -Value ""
- Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name UPN -Value ""
- Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name displayName -Value ""
- Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name sourceAnchor -Value ""
- Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name alias -Value ""
- Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name primarySMTP -Value ""
- Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name onPremisesSamAccountName -Value ""
- Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name mail -Value ""
-
- $user = [System.Xml.Linq.XElement]::ReadFrom($reader)
- if ($showOutput) {Write-Host Found an exported object... -ForegroundColor Green}
-
- #object id
- $outID=$user.Attribute('id').Value
- if ($showOutput) {Write-Host ID: $outID}
- $objOutputUser.ID=$outID
-
- #object type
- $outType=$user.Attribute('object-type').Value
- if ($showOutput) {Write-Host Type: $outType}
- $objOutputUser.Type=$outType
-
- #dn
- $outDN= $user.Element('unapplied-export').Element('delta').Attribute('dn').Value
- if ($showOutput) {Write-Host DN: $outDN}
- $objOutputUser.DN=$outDN
-
- #operation
- $outOperation= $user.Element('unapplied-export').Element('delta').Attribute('operation').Value
- if ($showOutput) {Write-Host Operation: $outOperation}
- $objOutputUser.operation=$outOperation
-
- #now that we have the basics, go get the details
-
- foreach ($attr in $user.Element('unapplied-export-hologram').Element('entry').Elements("attr"))
+ do 
{
- $attrvalue=$attr.Attribute('name').Value
- $internalvalue= $attr.Element('value').Value
-
- switch ($attrvalue)
+ #create the object placeholder
+ #adding them up here means we can enforce consistency
+ $objOutputUser=New-Object psobject
+ Add-Member -InputObject $objOutputUser -MemberType NoteProperty -Name ID -Value ""
+ Add-Member -InputObject $objOutputUser -MemberType NoteProperty -Name Type -Value ""
+ Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name DN -Value ""
+ Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name operation -Value ""
+ Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name UPN -Value ""
+ Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name displayName -Value ""
+ Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name sourceAnchor -Value ""
+ Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name alias -Value ""
+ Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name primarySMTP -Value ""
+ Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name onPremisesSamAccountName -Value ""
+ Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name mail -Value ""
+
+ $user = [System.Xml.Linq.XElement]::ReadFrom($reader)
+ if ($showOutput) {Write-Host Found an exported object... -ForegroundColor Green}
+
+ #object id
+ $outID=$user.Attribute('id').Value
+ if ($showOutput) {Write-Host ID: $outID}
+ $objOutputUser.ID=$outID
+
+ #object type
+ $outType=$user.Attribute('object-type').Value
+ if ($showOutput) {Write-Host Type: $outType}
+ $objOutputUser.Type=$outType
+
+ #dn
+ $outDN= $user.Element('unapplied-export').Element('delta').Attribute('dn').Value
+ if ($showOutput) {Write-Host DN: $outDN}
+ $objOutputUser.DN=$outDN
+
+ #operation
+ $outOperation= $user.Element('unapplied-export').Element('delta').Attribute('operation').Value
+ if ($showOutput) {Write-Host Operation: $outOperation}
+ $objOutputUser.operation=$outOperation
+
+ #now that we have the basics, go get the details
+
+ foreach ($attr in $user.Element('unapplied-export-hologram').Element('entry').Elements("attr"))
{
- "userPrincipalName"
- {
- if ($showOutput) {Write-Host UPN: $internalvalue}
- $objOutputUser.UPN=$internalvalue
- }
- "displayName"
- {
- if ($showOutput) {Write-Host displayName: $internalvalue}
- $objOutputUser.displayName=$internalvalue
- }
- "sourceAnchor"
- {
- if ($showOutput) {Write-Host sourceAnchor: $internalvalue}
- $objOutputUser.sourceAnchor=$internalvalue
- }
- "alias"
- {
- if ($showOutput) {Write-Host alias: $internalvalue}
- $objOutputUser.alias=$internalvalue
- }
- "proxyAddresses"
+ $attrvalue=$attr.Attribute('name').Value
+ $internalvalue= $attr.Element('value').Value
+
+ switch ($attrvalue)
{
- if ($showOutput) {Write-Host primarySMTP: ($internalvalue -replace "SMTP:","")}
- $objOutputUser.primarySMTP=$internalvalue -replace "SMTP:",""
+ "userPrincipalName"
+ {
+ if ($showOutput) {Write-Host UPN: $internalvalue}
+ $objOutputUser.UPN=$internalvalue
+ }
+ "displayName"
+ {
+ if ($showOutput) {Write-Host displayName: $internalvalue}
+ $objOutputUser.displayName=$internalvalue
+ }
+ "sourceAnchor"
+ {
+ if ($showOutput) {Write-Host sourceAnchor: $internalvalue}
+ $objOutputUser.sourceAnchor=$internalvalue
+ }
+ "alias"
+ {
+ if ($showOutput) {Write-Host alias: $internalvalue}
+ $objOutputUser.alias=$internalvalue
+ }
+ "proxyAddresses"
+ {
+ if ($showOutput) {Write-Host primarySMTP: ($internalvalue -replace "SMTP:","")}
+ $objOutputUser.primarySMTP=$internalvalue -replace "SMTP:",""
+ }
} }
- }
- $objOutputUsers += $objOutputUser
+ $objOutputUsers += $objOutputUser
- Write-Progress -activity "Processing ${xmltoimport} in batches of ${batchsize}" -status "Batch ${outputfilecount}: " -percentComplete (($objOutputUsers.Count / $batchsize) * 100)
+ Write-Progress -activity "Processing ${xmltoimport} in batches of ${batchsize}" -status "Batch ${outputfilecount}: " -percentComplete (($objOutputUsers.Count / $batchsize) * 100)
- #every so often, dump the processed users in case we blow up somewhere
- if ($count % $batchsize -eq 0)
- {
- Write-Host Hit the maximum users processed without completion... -ForegroundColor Yellow
+ #every so often, dump the processed users in case we blow up somewhere
+ if ($count % $batchsize -eq 0)
+ {
+ Write-Host Hit the maximum users processed without completion... -ForegroundColor Yellow
- #export the collection of users as a CSV
- Write-Host Writing processedusers${outputfilecount}.csv -ForegroundColor Yellow
- $objOutputUsers | Export-Csv -path processedusers${outputfilecount}.csv -NoTypeInformation
+ #export the collection of users as a CSV
+ Write-Host Writing processedusers${outputfilecount}.csv -ForegroundColor Yellow
+ $objOutputUsers | Export-Csv -path processedusers${outputfilecount}.csv -NoTypeInformation
- #increment the output file counter
- $outputfilecount+=1
+ #increment the output file counter
+ $outputfilecount+=1
- #reset the collection and the user counter
- $objOutputUsers = $null
- $count=0
- }
+ #reset the collection and the user counter
+ $objOutputUsers = $null
+ $count=0
+ }
- $count+=1
+ $count+=1
- #need to bail out of the loop if no more users to process
- if ($reader.NodeType -eq [System.Xml.XmlNodeType]::EndElement)
- {
- break
- }
+ #need to bail out of the loop if no more users to process
+ if ($reader.NodeType -eq [System.Xml.XmlNodeType]::EndElement)
+ {
+ break
+ }
-} while ($reader.Read)
+ } while ($reader.Read)
-#need to write out any users that didn't get picked up in a batch of 1000
-#export the collection of users as CSV
-Write-Host Writing processedusers${outputfilecount}.csv -ForegroundColor Yellow
-$objOutputUsers | Export-Csv -path processedusers${outputfilecount}.csv -NoTypeInformation
+ #need to write out any users that didn't get picked up in a batch of 1000
+ #export the collection of users as CSV
+ Write-Host Writing processedusers${outputfilecount}.csv -ForegroundColor Yellow
+ $objOutputUsers | Export-Csv -path processedusers${outputfilecount}.csv -NoTypeInformation
+}
+else
+{
+ Write-Host "Imported XML file is empty. No work to do." -ForegroundColor Red
+}
```
## Next steps
active-directory Tshoot Connect Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-connectivity.md
This article explains how connectivity between Azure AD Connect and Azure AD wor
Azure AD Connect uses the MSAL library for authentication. The installation wizard and the sync engine proper require machine.config to be properly configured since these two are .NET applications.

>[!NOTE]
->Azure AD Connect v1.6.xx.x uses the ADAL library. The ADAL library is being depricated and support will end in June 2022. Microsoft recommends that you upgrade to the latest version of [Azure AD Connect v2](whatis-azure-ad-connect-v2.md).
+>Azure AD Connect v1.6.xx.x uses the ADAL library. The ADAL library is being deprecated and support will end in June 2022. Microsoft recommends that you upgrade to the latest version of [Azure AD Connect v2](whatis-azure-ad-connect-v2.md).
In this article, we show how Fabrikam connects to Azure AD through its proxy. The proxy server is named fabrikamproxy and is using port 8080.
active-directory Concept Identity Protection B2b https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-identity-protection-b2b.md
From the [Risky users report](https://portal.azure.com/#blade/Microsoft_AAD_IAM/
### Manually dismiss user's risk
-If password reset is not an option for you from the Azure AD portal, you can choose to manually dismiss user risk. This process will cause the user to no longer be at risk, but does not have any impact on the existing password. It is important that you change the user's password using whatever means are available to you in order to bring the identity back to a safe state.
+If password reset is not an option for you from the Azure AD portal, you can choose to manually dismiss user risk. Dismissing user risk does not have any impact on the user's existing password, but this process will change the user's Risk State from At Risk to Dismissed. It is important that you change the user's password using whatever means are available to you in order to bring the identity back to a safe state.
To dismiss user risk, go to the [Risky users report](https://portal.azure.com/#blade/Microsoft_AAD_IAM/SecurityMenuBlade/RiskyUsers) in the Azure AD Security menu. Search for the impacted user using the 'User' filter and click on the user. Click on "dismiss user risk" option from the top toolbar. This action may take a few minutes to complete and update the user risk state in the report.
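If you prefer to script the dismissal, the same action is available through the riskyUsers **dismiss** API in Microsoft Graph. The following is a minimal sketch using the Microsoft Graph PowerShell SDK; it assumes the SDK is installed, that you can consent to the `IdentityRiskyUser.ReadWrite.All` permission, and that the user object ID shown is a placeholder:

```powershell
# Sign in with a scope that allows managing risky users.
Connect-MgGraph -Scopes "IdentityRiskyUser.ReadWrite.All"

# Dismiss risk for one or more users by object ID (placeholder GUID shown).
$body = @{ userIds = @("11111111-1111-1111-1111-111111111111") } | ConvertTo-Json

Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/identityProtection/riskyUsers/dismiss" `
    -Body $body -ContentType "application/json"
```

As in the portal, the dismissed users' risk state updates may take a few minutes to appear in the report.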
Excluding B2B users from your organization's risk-based Conditional Access polic
See the following articles on Azure AD B2B collaboration:
-- [What is Azure AD B2B collaboration?](../external-identities/what-is-b2b.md)
+- [What is Azure AD B2B collaboration?](../external-identities/what-is-b2b.md)
active-directory F5 Aad Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-aad-integration.md
Refer to the following guided configuration tutorials using Easy Button template
- [F5 BIG-IP Easy Button for SSO to header-based and LDAP applications](f5-big-ip-ldap-header-easybutton.md)
-- [BIG-IP Easy Button for SSO to Oracle EBS (Enterprise Business Suite)](f5-big-ip-oracle-enterprise-business-suite-easy-button.md)
+- [F5 BIG-IP Easy Button for SSO to Oracle EBS (Enterprise Business Suite)](f5-big-ip-oracle-enterprise-business-suite-easy-button.md)
-- [BIG-IP Easy Button for SSO to Oracle JD Edwards](f5-big-ip-oracle-jde-easy-button.md)
+- [F5 BIG-IP Easy Button for SSO to Oracle JD Edwards](f5-big-ip-oracle-jde-easy-button.md)
-- [BIG-IP Easy Button for SSO to SAP ERP](f5-big-ip-sap-erp-easy-button.md)
+- [F5 BIG-IP Easy Button for SSO to SAP ERP](f5-big-ip-sap-erp-easy-button.md)
## Azure AD B2B guest access

Azure AD B2B guest access to SHA-protected applications is also possible, but some scenarios may require additional steps not covered in the tutorials. One example is Kerberos SSO, where a BIG-IP performs Kerberos constrained delegation (KCD) to obtain a service ticket from domain controllers. Without a local representation of the guest user, a domain controller will refuse the request because the user does not exist. To support this scenario, you would need to ensure external identities are flowed down from your Azure AD tenant to the directory used by the application. See [Grant B2B users in Azure AD access to your on-premises applications](../external-identities/hybrid-cloud-to-on-premises.md) for guidance.
active-directory Powershell Export Apps With Secrets Beyond Required https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/scripts/powershell-export-apps-with-secrets-beyond-required.md
This PowerShell script example exports all app registrations secrets and certifi
## Script explanation

This script runs non-interactively. The admin using it will need to change the values in the "#PARAMETERS TO CHANGE" section with their own App ID, Application Secret, Tenant Name, the period for the apps' credential expiration, and the path where the CSV will be exported.
-This script uses the [Client_Credential Oauth Flow](../../develop/v2-oauth2-client-creds-grant-flow.md)
+This script uses the [Client_Credential Oauth Flow](../../develop/v2-oauth2-client-creds-grant-flow.md)
The function "RefreshToken" will build the access token based on the values of the parameters modified by the admin. The "Add-Member" command is responsible for creating the columns in the CSV file. | Command | Notes | |||
-| [Invoke-WebRequest](/powershell/module/microsoft.powershell.utility/invoke-webrequest?view=powershell-7.1&preserve-view=true) | Sends HTTP and HTTPS requests to a web page or web service. It parses the response and returns collections of links, images, and other significant HTML elements. |
+| [Invoke-WebRequest](/powershell/module/microsoft.powershell.utility/invoke-webrequest) | Sends HTTP and HTTPS requests to a web page or web service. It parses the response and returns collections of links, images, and other significant HTML elements. |
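For context, a client credentials token request of the kind the script's "RefreshToken" function performs could look like the following minimal PowerShell sketch. The tenant name, app ID, and secret values are placeholders, and the real script's variable names may differ:

```powershell
# Minimal sketch of an OAuth 2.0 client credentials token request (placeholders shown).
$tenantName   = "contoso.onmicrosoft.com"
$appId        = "00000000-0000-0000-0000-000000000000"
$clientSecret = "<application-secret>"

$body = @{
    grant_type    = "client_credentials"
    client_id     = $appId
    client_secret = $clientSecret
    scope         = "https://graph.microsoft.com/.default"
}

# Request a token from the Microsoft identity platform v2.0 token endpoint.
$response = Invoke-WebRequest -Method Post `
    -Uri "https://login.microsoftonline.com/$tenantName/oauth2/v2.0/token" `
    -Body $body -UseBasicParsing
$accessToken = ($response.Content | ConvertFrom-Json).access_token
```

The resulting `$accessToken` is then sent as a bearer token in the `Authorization` header of the subsequent Microsoft Graph calls.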
## Next steps
active-directory Concept Privileged Access Versus Role Assignable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/concept-privileged-access-versus-role-assignable.md
na Previously updated : 12/16/2021 Last updated : 05/18/2022
Privileged Identity Management (PIM) supports the ability to enable privileged a
## What are Azure AD role-assignable groups?
-Azure AD lets you assign a cloud Azure AD security group to an Azure AD role. Global Administrators and Privileged Role Administrators must create a new security group and make the group role-assignable at creation time. Only users in the Global Administrator, Privileged Role Administrator, or the group's Owner roles can change the membership of the group. Also, no other users can reset the password of the users who are members of the group. This feature helps prevent admins from elevating to a higher privileged role without going through a request and approval procedure.
+Azure Active Directory (Azure AD) lets you assign a cloud Azure AD security group to an Azure AD role. A Global Administrator or Privileged Role Administrator must create a new security group and make the group role-assignable at creation time. Only users assigned the Global Administrator role, the Privileged Role Administrator role, or the group Owner role can change the membership of the group. Also, no other users can reset the password of the users who are members of the group. This feature helps prevent an admin from elevating to a higher privileged role without going through a request and approval procedure.
## What are Privileged Access groups?
Privileged Access groups enable users to elevate to the owner or member role of
>[!Note]
>For privileged access groups used for elevating into Azure AD roles, Microsoft recommends that you require an approval process for eligible member assignments. Assignments that can be activated without approval can leave you vulnerable to a security risk from less-privileged administrators. For example, the Helpdesk Administrator has permission to reset an eligible user's passwords.
-## When to use each type of group
+## When to use a role-assignable group
You can set up just-in-time access to permissions and roles beyond Azure AD and Azure Resource. If you have other resources whose authorization can be connected to an Azure AD security group (for Azure Key Vault, Intune, Azure SQL, or other apps and services), you should enable privileged access on the group and assign users as eligible for membership in the group.
-If you want to assign a group to an Azure AD or Azure Resource role and require elevation through a PIM process, there are two ways to do it:
+If you want to assign a group to an Azure AD or Azure Resource role and require elevation through a PIM process, there's only one way to do it:
-- **Assign the group persistently to a role**. You then grant users eligible member access to the group in PIM. Each eligible user must then activate their membership to get into the group that is permanently assigned to the role. This path requires a role-assignable group to be enabled in PIM as a privileged access group for the Azure AD role.-- **Assign the group as eligible for a role** through PIM. Everyone in the group gets access to the role assignment at once when the group's assignment is activated. This path requires a role-assignable group for the Azure AD role, and a security group for Azure resources.
+- **Assign the group persistently to a role**. Then, in PIM, you can grant users eligible role assignments to the group. Each eligible user must activate their role assignment to become members of the group, and activation is subject to approval policies. This path requires a role-assignable group to be enabled in PIM as a privileged access group for the Azure AD role.
- ![Diagram showing two ways to assign role using privileged access groups in PIM.](./media/concept-privileged-access-versus-role-assignable/concept-privileged-access.png)
-
-Method one allows maximum granularity of permissions, and method two allows simple, one-step activation for a group of users. Either of these methods will work for the end-to-end scenario. We recommend that you use the second method in most cases. You should use the first method only if you are trying to:
+This method allows maximum granularity of permissions. Use this method to:
- Assign a group to multiple Azure AD or Azure resource roles and have users activate once to get access to multiple roles.
-- Maintain different activation policies for different sets of users to access an Azure AD or Azure resource role. For example, if you want some users to be approved before becoming a Global Administrator while allowing other users to be auto-approved, you can set up two privileged access groups, assign them both persistently (a "permanent" assignment in Privileged Identity Management) to the Global Administrator role and then use a different activation policy for the member role for each group.
+- Maintain different activation policies for different sets of users to access an Azure AD or Azure resource role. For example, if you want some users to be approved before becoming a Global Administrator while allowing other users to be auto-approved, you could set up two privileged access groups, assign them both persistently (a "permanent" assignment in Privileged Identity Management) to the Global Administrator role and then use a different activation policy for the Member role for each group.
## Next steps
active-directory Pim How To Change Default Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-change-default-settings.md
You can require that users enter a business justification when they activate. To
## Require ticket information on activation
-If your organization uses a ticketing system to track help desk items or change requests for your enviornment, you can select the **Require ticket information on activation** box to require the elevation request to contain the name of the ticketing system (optional, if your organization uses multiple systems) and the ticket number that prompted the need for role activation.
+If your organization uses a ticketing system to track help desk items or change requests for your environment, you can select the **Require ticket information on activation** box to require the elevation request to contain the name of the ticketing system (optional, if your organization uses multiple systems) and the ticket number that prompted the need for role activation.
## Require approval to activate
If setting multiple approvers, approval completes as soon as one of them approve
1. Select **Update** to save your changes.
+## Manage role settings through Microsoft Graph
+
+To manage settings for Azure AD roles through Microsoft Graph, use the [unifiedRoleManagementPolicy resource type and related methods](/graph/api/resources/unifiedrolemanagementpolicy).
+
+In Microsoft Graph, role settings are referred to as rules and they're assigned to Azure AD roles through container policies. Each Azure AD role is assigned a specific policy object. You can retrieve all policies that are scoped to Azure AD roles and for each policy, retrieve the associated collection of rules through an `$expand` query parameter. The syntax for the request is as follows:
+
+```http
+GET https://graph.microsoft.com/v1.0/policies/roleManagementPolicies?$filter=scopeId eq '/' and scopeType eq 'DirectoryRole'&$expand=rules
+```
+
+Rules are grouped into containers. The containers are further broken down into rule definitions that are identified by unique IDs for easier management. For example, a **unifiedRoleManagementPolicyEnablementRule** container exposes three rule definitions identified by the following unique IDs.
+
+- `Enablement_Admin_Eligibility` - Rules that apply for admins to carry out operations on role eligibilities. For example, whether justification is required, and whether for all operations (for example, renewal, activation, or deactivation) or only for specific operations.
+- `Enablement_Admin_Assignment` - Rules that apply for admins to carry out operations on role assignments. For example, whether justification is required, and whether for all operations (for example, renewal, deactivation, or extension) or only for specific operations.
+- `Enablement_EndUser_Assignment` - Rules that apply for principals to enable their assignments. For example, whether multifactor authentication is required.
+
+To update these rule definitions, use the [update rules API](/graph/api/unifiedrolemanagementpolicyrule-update). For example, the following request specifies an empty **enabledRules** collection, thereby deactivating the enabled rules for a policy, such as multifactor authentication, ticketing information, and justification.
+
+```http
+PATCH https://graph.microsoft.com/v1.0/policies/roleManagementPolicies/DirectoryRole_cab01047-8ad9-4792-8e42-569340767f1b_70c808b5-0d35-4863-a0ba-07888e99d448/rules/Enablement_EndUser_Assignment
+{
+ "@odata.type": "#microsoft.graph.unifiedRoleManagementPolicyEnablementRule",
+ "id": "Enablement_EndUser_Assignment",
+ "enabledRules": [],
+ "target": {
+ "caller": "EndUser",
+ "operations": [
+ "all"
+ ],
+ "level": "Assignment",
+ "inheritableSettings": [],
+ "enforcedSettings": []
+ }
+}
+```
+
+You can retrieve the collection of rules that are applied to all Azure AD roles or a specific Azure AD role through the [unifiedRoleManagementPolicyAssignment resource type and related methods](/graph/api/resources/unifiedrolemanagementpolicyassignment). For example, the following request uses the `$expand` query parameter to retrieve the rules that are applied to an Azure AD role identified by **roleDefinitionId** or **templateId** `62e90394-69f5-4237-9190-012177145e10`.
+
+```http
+GET https://graph.microsoft.com/v1.0/policies/roleManagementPolicyAssignments?$filter=scopeId eq '/' and scopeType eq 'DirectoryRole' and roleDefinitionId eq '62e90394-69f5-4237-9190-012177145e10'&$expand=policy($expand=rules)
+```
+
+For more information about managing role settings through PIM, see [Role settings and PIM](/graph/api/resources/privilegedidentitymanagementv3-overview#role-settings-and-pim).
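If you'd rather work from PowerShell, the same read can be done with the Microsoft Graph PowerShell SDK. The following is a minimal sketch; it assumes the SDK is installed, that you can consent to the `RoleManagementPolicy.Read.Directory` permission, and it reuses the Global Administrator template ID from the example above:

```powershell
# Sign in with a scope that can read role management policies.
Connect-MgGraph -Scopes "RoleManagementPolicy.Read.Directory"

# Retrieve the policy assignment (and its expanded rules) for the role template
# 62e90394-69f5-4237-9190-012177145e10, mirroring the HTTP request above.
$uri = 'https://graph.microsoft.com/v1.0/policies/roleManagementPolicyAssignments' +
       "?`$filter=scopeId eq '/' and scopeType eq 'DirectoryRole' " +
       "and roleDefinitionId eq '62e90394-69f5-4237-9190-012177145e10'" +
       "&`$expand=policy(`$expand=rules)"

$response = Invoke-MgGraphRequest -Method GET -Uri $uri -OutputType PSObject

# List the rule IDs and types attached to the policy.
$response.value.policy.rules | Format-List id, '@odata.type'
```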
+
## Next steps
- [Assign Azure AD roles in Privileged Identity Management](pim-how-to-add-role-to-user.md)
active-directory Pim Resource Roles Configure Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-configure-alerts.md
na Previously updated : 10/07/2021 Last updated : 05/24/2022
Select an alert to see a report that lists the users or roles that triggered the
## Alerts
-| Alert | Severity | Trigger | Recommendation |
-| | | | |
-| **Too many owners assigned to a resource** |Medium |Too many users have the owner role. |Review the users in the list and reassign some to less privileged roles. |
-| **Too many permanent owners assigned to a resource** |Medium |Too many users are permanently assigned to a role. |Review the users in the list and re-assign some to require activation for role use. |
-| **Duplicate role created** |Medium |Multiple roles have the same criteria. |Use only one of these roles. |
+Alert | Severity | Trigger | Recommendation
+--- | --- | --- | ---
+**Too many owners assigned to a resource** |Medium |Too many users have the owner role. |Review the users in the list and reassign some to less privileged roles.
+**Too many permanent owners assigned to a resource** |Medium |Too many users are permanently assigned to a role. |Review the users in the list and re-assign some to require activation for role use.
+**Duplicate role created** |Medium |Multiple roles have the same criteria. |Use only one of these roles.
+**Roles are being assigned outside of Privileged Identity Management (Preview)** | High | A role is managed directly through the Azure IAM resource blade or the Azure Resource Manager API | Review the users in the list and remove them from privileged roles assigned outside of Privileged Identity Management.
+
+> [!Note]
+> During the public preview of the **Roles are being assigned outside of Privileged Identity Management (Preview)** alert, Microsoft supports only permissions that are assigned at the subscription level.
### Severity
active-directory Administrative Units https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/administrative-units.md
Previously updated : 03/22/2022 Last updated : 05/24/2022
The following sections describe current support for administrative unit scenario
| Permissions | Microsoft Graph/PowerShell | Azure portal | Microsoft 365 admin center |
| --- | :---: | :---: | :---: |
-| Create or delete administrative units | :heavy_check_mark: | :heavy_check_mark: | :x: |
-| Add or remove members individually | :heavy_check_mark: | :heavy_check_mark: | :x: |
-| Add or remove members in bulk by using CSV files | :x: | :heavy_check_mark: | No plan to support |
-| Assign administrative unit-scoped administrators | :heavy_check_mark: | :heavy_check_mark: | :x: |
+| Create or delete administrative units | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| Add or remove members individually | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| Add or remove members in bulk | :x: | :heavy_check_mark: | :heavy_check_mark: |
+| Assign administrative unit-scoped administrators | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Add or remove users or devices dynamically based on rules (Preview) | :heavy_check_mark: | :heavy_check_mark: | :x: |
| Add or remove groups dynamically based on rules | :x: | :x: | :x: |
The following sections describe current support for administrative unit scenario
| Permissions | Microsoft Graph/PowerShell | Azure portal | Microsoft 365 admin center |
| --- | :---: | :---: | :---: |
-| Administrative unit-scoped management of group properties and membership | :heavy_check_mark: | :heavy_check_mark: | :x: |
+| Administrative unit-scoped creation and deletion of groups | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| Administrative unit-scoped management of group properties and membership | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Administrative unit-scoped management of group licensing | :heavy_check_mark: | :heavy_check_mark: | :x: |

> [!NOTE]
active-directory Custom Group Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-group-permissions.md
Title: Group management permissions for Azure AD custom roles (Preview) - Azure Active Directory
-description: Group management permissions for Azure AD custom roles (Preview) in the Azure portal, PowerShell, or Microsoft Graph API.
+ Title: Group management permissions for Azure AD custom roles - Azure Active Directory
+description: Group management permissions for Azure AD custom roles in the Azure portal, PowerShell, or Microsoft Graph API.
Previously updated : 10/26/2021 Last updated : 05/24/2022
-# Group management permissions for Azure AD custom roles (Preview)
-
-> [!IMPORTANT]
-> Group management permissions for Azure AD custom roles are currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+# Group management permissions for Azure AD custom roles
Group management permissions can be used in custom role definitions in Azure Active Directory (Azure AD) to grant fine-grained access such as the following:
Group management permissions can be used in custom role definitions in Azure Act
This article lists the permissions you can use in your custom roles for different group management scenarios. For information about how to create custom roles, see [Create and assign a custom role](custom-create.md).
-> [!NOTE]
-> Assigning custom roles at a group scope using the Azure portal is currently available **only** for Azure AD Premium P1.
+## License requirements
+ ## How to interpret group management permissions
active-directory My Staff Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/my-staff-configure.md
Previously updated : 03/11/2021 Last updated : 05/18/2021
Before you configure My Staff for your organization, we recommend that you revie
## How My Staff works
-My Staff is based on administrative units, which are a container of resources which can be used to restrict the scope of a role assignment's administrative control. For more information, see [Administrative units management in Azure Active Directory](administrative-units.md). In My Staff, administrative units can used to contain a group of users in a store or department. A team manager can then be assigned to an administrative role at a scope of one or more units.
+My Staff is based on administrative units, which are containers of resources that can be used to restrict the scope of a role assignment's administrative control. For more information, see [Administrative units management in Azure Active Directory](administrative-units.md). In My Staff, administrative units can be used to contain a group of users in a store or department. A team manager can then be assigned to an administrative role at a scope of one or more units.
## Before you begin
To complete this article, you need the following resources and privileges:
Once you have configured administrative units, you can apply this scope to your users who access My Staff. Only users who are assigned an administrative role can access My Staff. To enable My Staff, complete the following steps:
-1. Sign in to the [Azure portal](https://portal.azure.com) or [Azure AD admin center](https://aad.portal.azure.com) as a User Administrator.
+1. Sign in to the [Azure portal](https://portal.azure.com) or [Azure AD admin center](https://aad.portal.azure.com) as a Global Administrator, User Administrator, or Group Administrator.
1. Select **Azure Active Directory** > **User settings** > **User feature** > **Manage user feature settings**.
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md
Previously updated : 04/03/2022 Last updated : 05/20/2022
Users in this role can read and update basic information of users, groups, and s
> | microsoft.directory/servicePrincipals/synchronizationCredentials/manage | Manage application provisioning secrets and credentials | > | microsoft.directory/servicePrincipals/synchronizationJobs/manage | Start, restart, and pause application provisioning syncronization jobs | > | microsoft.directory/servicePrincipals/synchronizationSchema/manage | Create and manage application provisioning syncronization jobs and schema |
-> | microsoft.directory/servicePrincipals/managePermissionGrantsForGroup.microsoft-all-application-permissions | Grant a service principal direct access to a group's data |
> | microsoft.directory/servicePrincipals/appRoleAssignedTo/update | Update service principal role assignments | > | microsoft.directory/users/assignLicense | Manage user licenses | > | microsoft.directory/users/create | Add users |
Users with this role have access to all administrative features in Azure Active
> | microsoft.directory/passwordHashSync/allProperties/allTasks | Manage all aspects of Password Hash Synchronization (PHS) in Azure AD | > | microsoft.directory/policies/allProperties/allTasks | Create and delete policies, and read and update all properties | > | microsoft.directory/conditionalAccessPolicies/allProperties/allTasks | Manage all properties of conditional access policies |
-> | microsoft.directory/crossTenantAccessPolicy/standard/read | Read basic properties of cross-tenant access policy |
-> | microsoft.directory/crossTenantAccessPolicy/allowedCloudEndpoints/update | Update allowed cloud endpoints of cross-tenant access policy |
-> | microsoft.directory/crossTenantAccessPolicy/basic/update | Update basic settings of cross-tenant access policy |
-> | microsoft.directory/crossTenantAccessPolicy/default/standard/read | Read basic properties of the default cross-tenant access policy |
-> | microsoft.directory/crossTenantAccessPolicy/default/b2bCollaboration/update | Update Azure AD B2B collaboration settings of the default cross-tenant access policy |
-> | microsoft.directory/crossTenantAccessPolicy/default/b2bDirectConnect/update | Update Azure AD B2B direct connect settings of the default cross-tenant access policy |
-> | microsoft.directory/crossTenantAccessPolicy/default/crossCloudMeetings/update | Update cross-cloud Teams meeting settings of the default cross-tenant access policy |
-> | microsoft.directory/crossTenantAccessPolicy/default/tenantRestrictions/update | Update tenant restrictions of the default cross-tenant access policy |
-> | microsoft.directory/crossTenantAccessPolicy/partners/create | Create cross-tenant access policy for partners |
-> | microsoft.directory/crossTenantAccessPolicy/partners/delete | Delete cross-tenant access policy for partners |
-> | microsoft.directory/crossTenantAccessPolicy/partners/standard/read | Read basic properties of cross-tenant access policy for partners |
-> | microsoft.directory/crossTenantAccessPolicy/partners/b2bCollaboration/update | Update Azure AD B2B collaboration settings of cross-tenant access policy for partners |
-> | microsoft.directory/crossTenantAccessPolicy/partners/b2bDirectConnect/update | Update Azure AD B2B direct connect settings of cross-tenant access policy for partners |
-> | microsoft.directory/crossTenantAccessPolicy/partners/crossCloudMeetings/update | Update cross-cloud Teams meeting settings of cross-tenant access policy for partners |
-> | microsoft.directory/crossTenantAccessPolicy/partners/tenantRestrictions/update | Update tenant restrictions of cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicies/allProperties/allTasks | Manage all aspects of cross-tenant access policies |
> | microsoft.directory/privilegedIdentityManagement/allProperties/read | Read all resources in Privileged Identity Management | > | microsoft.directory/provisioningLogs/allProperties/read | Read all properties of provisioning logs | > | microsoft.directory/roleAssignments/allProperties/allTasks | Create and delete role assignments, and read and update all role assignment properties |
Users with this role have access to all administrative features in Azure Active
> | microsoft.directory/serviceAction/getAvailableExtentionProperties | Can perform the getAvailableExtentionProperties service action | > | microsoft.directory/servicePrincipals/allProperties/allTasks | Create and delete service principals, and read and update all properties | > | microsoft.directory/servicePrincipals/managePermissionGrantsForAll.microsoft-company-admin | Grant consent for any permission to any application |
-> | microsoft.directory/servicePrincipals/managePermissionGrantsForGroup.microsoft-all-application-permissions | Grant a service principal direct access to a group's data |
> | microsoft.directory/servicePrincipals/synchronization/standard/read | Read provisioning settings associated with your service principal | > | microsoft.directory/signInReports/allProperties/read | Read all properties on sign-in reports, including privileged properties | > | microsoft.directory/subscribedSkus/allProperties/allTasks | Buy and manage subscriptions and delete subscriptions |
Users with this role have access to all administrative features in Azure Active
> | microsoft.directory/verifiableCredentials/configuration/delete | Delete configuration required to create and manage verifiable credentials and delete all of its verifiable credentials | > | microsoft.directory/verifiableCredentials/configuration/allProperties/read | Read configuration required to create and manage verifiable credentials | > | microsoft.directory/verifiableCredentials/configuration/allProperties/update | Update configuration required to create and manage verifiable credentials |
+> | microsoft.directory/lifecycleManagement/workflows/allProperties/allTasks | Manage all aspects of lifecycle management workflows and tasks in Azure AD |
> | microsoft.azure.advancedThreatProtection/allEntities/allTasks | Manage all aspects of Azure Advanced Threat Protection | > | microsoft.azure.informationProtection/allEntities/allTasks | Manage all aspects of Azure Information Protection | > | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health |
Users in this role can read settings and administrative information across Micro
> | microsoft.directory/permissionGrantPolicies/standard/read | Read standard properties of permission grant policies | > | microsoft.directory/policies/allProperties/read | Read all properties of policies | > | microsoft.directory/conditionalAccessPolicies/allProperties/read | Read all properties of conditional access policies |
-> | microsoft.directory/crossTenantAccessPolicy/standard/read | Read basic properties of cross-tenant access policy |
-> | microsoft.directory/crossTenantAccessPolicy/default/standard/read | Read basic properties of the default cross-tenant access policy |
-> | microsoft.directory/crossTenantAccessPolicy/partners/standard/read | Read basic properties of cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicies/allProperties/read | Read all properties of cross-tenant access policies |
> | microsoft.directory/deviceManagementPolicies/standard/read | Read standard properties on device management application policies | > | microsoft.directory/deviceRegistrationPolicy/standard/read | Read standard properties on device registration policies | > | microsoft.directory/privilegedIdentityManagement/allProperties/read | Read all resources in Privileged Identity Management |
Users in this role can read settings and administrative information across Micro
> | microsoft.directory/verifiableCredentials/configuration/contracts/cards/allProperties/read | Read a verifiable credential card | > | microsoft.directory/verifiableCredentials/configuration/contracts/allProperties/read | Read a verifiable credential contract | > | microsoft.directory/verifiableCredentials/configuration/allProperties/read | Read configuration required to create and manage verifiable credentials |
+> | microsoft.directory/lifecycleManagement/workflows/allProperties/read | Read all properties of lifecycle management workflows and tasks in Azure AD |
> | microsoft.cloudPC/allEntities/allProperties/read | Read all aspects of Windows 365 | > | microsoft.commerce.billing/allEntities/read | Read all resources of Office 365 billing | > | microsoft.edge/allEntities/allProperties/read | Read all aspects of Microsoft Edge |
Users in this role can create/manage groups and its settings like naming and exp
> | microsoft.directory/groups/owners/update | Update owners of Security groups and Microsoft 365 groups, excluding role-assignable groups | > | microsoft.directory/groups/settings/update | Update settings of groups | > | microsoft.directory/groups/visibility/update | Update the visibility property of Security groups and Microsoft 365 groups, excluding role-assignable groups |
-> | microsoft.directory/servicePrincipals/managePermissionGrantsForGroup.microsoft-all-application-permissions | Grant a service principal direct access to a group's data |
> | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health | > | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets | > | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Service Health in the Microsoft 365 admin center |
Azure Advanced Threat Protection | Monitor and respond to suspicious security ac
> | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, including privileged properties | > | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policy | > | microsoft.directory/bitlockerKeys/key/read | Read bitlocker metadata and key on devices |
-> | microsoft.directory/crossTenantAccessPolicy/standard/read | Read basic properties of cross-tenant access policy |
-> | microsoft.directory/crossTenantAccessPolicy/allowedCloudEndpoints/update | Update allowed cloud endpoints of cross-tenant access policy |
-> | microsoft.directory/crossTenantAccessPolicy/basic/update | Update basic settings of cross-tenant access policy |
-> | microsoft.directory/crossTenantAccessPolicy/default/standard/read | Read basic properties of the default cross-tenant access policy |
-> | microsoft.directory/crossTenantAccessPolicy/default/b2bCollaboration/update | Update Azure AD B2B collaboration settings of the default cross-tenant access policy |
-> | microsoft.directory/crossTenantAccessPolicy/default/b2bDirectConnect/update | Update Azure AD B2B direct connect settings of the default cross-tenant access policy |
-> | microsoft.directory/crossTenantAccessPolicy/default/crossCloudMeetings/update | Update cross-cloud Teams meeting settings of the default cross-tenant access policy |
-> | microsoft.directory/crossTenantAccessPolicy/default/tenantRestrictions/update | Update tenant restrictions of the default cross-tenant access policy |
-> | microsoft.directory/crossTenantAccessPolicy/partners/create | Create cross-tenant access policy for partners |
-> | microsoft.directory/crossTenantAccessPolicy/partners/delete | Delete cross-tenant access policy for partners |
-> | microsoft.directory/crossTenantAccessPolicy/partners/standard/read | Read basic properties of cross-tenant access policy for partners |
-> | microsoft.directory/crossTenantAccessPolicy/partners/b2bCollaboration/update | Update Azure AD B2B collaboration settings of cross-tenant access policy for partners |
-> | microsoft.directory/crossTenantAccessPolicy/partners/b2bDirectConnect/update | Update Azure AD B2B direct connect settings of cross-tenant access policy for partners |
-> | microsoft.directory/crossTenantAccessPolicy/partners/crossCloudMeetings/update | Update cross-cloud Teams meeting settings of cross-tenant access policy for partners |
-> | microsoft.directory/crossTenantAccessPolicy/partners/tenantRestrictions/update | Update tenant restrictions of cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicies/create | Create cross-tenant access policies |
+> | microsoft.directory/crossTenantAccessPolicies/delete | Delete cross-tenant access policies |
+> | microsoft.directory/crossTenantAccessPolicies/standard/read | Read basic properties of cross-tenant access policies |
+> | microsoft.directory/crossTenantAccessPolicies/owners/read | Read owners of cross-tenant access policies |
+> | microsoft.directory/crossTenantAccessPolicies/policyAppliedTo/read | Read the policyAppliedTo property of cross-tenant access policies |
+> | microsoft.directory/crossTenantAccessPolicies/basic/update | Update basic properties of cross-tenant access policies |
+> | microsoft.directory/crossTenantAccessPolicies/owners/update | Update owners of cross-tenant access policies |
+> | microsoft.directory/crossTenantAccessPolicies/tenantDefault/update | Update the default tenant for cross-tenant access policies |
> | microsoft.directory/entitlementManagement/allProperties/read | Read all properties in Azure AD entitlement management |
-> | microsoft.directory/hybridAuthenticationPolicy/allProperties/allTasks | Manage hybrid authentication policy in Azure AD |
> | microsoft.directory/identityProtection/allProperties/read | Read all resources in Azure AD Identity Protection | > | microsoft.directory/identityProtection/allProperties/update | Update all resources in Azure AD Identity Protection |
-> | microsoft.directory/passwordHashSync/allProperties/allTasks | Manage all aspects of Password Hash Synchronization (PHS) in Azure AD |
> | microsoft.directory/policies/create | Create policies in Azure AD | > | microsoft.directory/policies/delete | Delete policies in Azure AD | > | microsoft.directory/policies/basic/update | Update basic properties on policies |
Users in this role can manage all aspects of the Microsoft Teams workload via th
> | microsoft.directory/groups.unified/basic/update | Update basic properties on Microsoft 365 groups, excluding role-assignable groups | > | microsoft.directory/groups.unified/members/update | Update members of Microsoft 365 groups, excluding role-assignable groups | > | microsoft.directory/groups.unified/owners/update | Update owners of Microsoft 365 groups, excluding role-assignable groups |
-> | microsoft.directory/servicePrincipals/managePermissionGrantsForGroup.microsoft-all-application-permissions | Grant a service principal direct access to a group's data |
> | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health | > | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets | > | microsoft.office365.network/performance/allProperties/read | Read all network performance properties in the Microsoft 365 admin center |
Users in this role can manage all aspects of the Microsoft Teams workload via th
> | microsoft.office365.usageReports/allEntities/allProperties/read | Read Office 365 usage reports | > | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center | > | microsoft.teams/allEntities/allProperties/allTasks | Manage all resources in Teams |
-> | microsoft.directory/crossTenantAccessPolicy/standard/read | Read basic properties of cross-tenant access policy |
-> | microsoft.directory/crossTenantAccessPolicy/allowedCloudEndpoints/update | Update allowed cloud endpoints of cross-tenant access policy |
-> | microsoft.directory/crossTenantAccessPolicy/basic/update | Update basic settings of cross-tenant access policy |
-> | microsoft.directory/crossTenantAccessPolicy/default/standard/read | Read basic properties of the default cross-tenant access policy |
-> | microsoft.directory/crossTenantAccessPolicy/default/b2bCollaboration/update | Update Azure AD B2B collaboration settings of the default cross-tenant access policy |
-> | microsoft.directory/crossTenantAccessPolicy/default/b2bDirectConnect/update | Update Azure AD B2B direct connect settings of the default cross-tenant access policy |
-> | microsoft.directory/crossTenantAccessPolicy/default/crossCloudMeetings/update | Update cross-cloud Teams meeting settings of the default cross-tenant access policy |
-> | microsoft.directory/crossTenantAccessPolicy/default/tenantRestrictions/update | Update tenant restrictions of the default cross-tenant access policy |
-> | microsoft.directory/crossTenantAccessPolicy/partners/create | Create cross-tenant access policy for partners |
-> | microsoft.directory/crossTenantAccessPolicy/partners/delete | Delete cross-tenant access policy for partners |
-> | microsoft.directory/crossTenantAccessPolicy/partners/standard/read | Read basic properties of cross-tenant access policy for partners |
-> | microsoft.directory/crossTenantAccessPolicy/partners/b2bCollaboration/update | Update Azure AD B2B collaboration settings of cross-tenant access policy for partners |
-> | microsoft.directory/crossTenantAccessPolicy/partners/b2bDirectConnect/update | Update Azure AD B2B direct connect settings of cross-tenant access policy for partners |
-> | microsoft.directory/crossTenantAccessPolicy/partners/crossCloudMeetings/update | Update cross-cloud Teams meeting settings of cross-tenant access policy for partners |
-> | microsoft.directory/crossTenantAccessPolicy/partners/tenantRestrictions/update | Update tenant restrictions of cross-tenant access policy for partners |
## Teams Communications Administrator
active-directory Alinto Protect Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/alinto-protect-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Alinto Protect for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Alinto Protect.
++
+writer: twimmers
+
+ms.assetid: cc47804c-2d00-402f-8aa5-b6155a81d78d
+ Last updated : 05/15/2022
+# Tutorial: Configure Alinto Protect for automatic user provisioning
+
+This tutorial describes the steps you need to do in both Alinto Protect and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Alinto Protect](https://www.alinto.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Capabilities supported
+> [!div class="checklist"]
+> * Create users in Alinto Protect
+> * Remove users in Alinto Protect when they do not require access anymore
+> * Keep user attributes synchronized between Azure AD and Alinto Protect
+> * [Single sign-on](../manage-apps/add-application-portal-setup-oidc-sso.md) to Alinto Protect (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A user account in Alinto Protect with Admin permission
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Alinto Protect](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Alinto Protect to support provisioning with Azure AD
+
+Contact [Alinto Protect Support](https://www.alinto.com/contact-email-provider/) to configure Alinto to support provisioning with Azure AD.
+
+## Step 3. Add Alinto Protect from the Azure AD application gallery
+
+Add Alinto Protect from the Azure AD application gallery to start managing provisioning to Alinto Protect. If you have previously set up Alinto Protect for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application, or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
++
+## Step 5. Configure automatic user provisioning to Alinto Protect
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and groups in Alinto Protect based on user and group assignments in Azure AD.
+
+### To configure automatic user provisioning for Alinto Protect in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+1. In the applications list, select **Alinto Protect**.
+
+ ![The Alinto Protect link in the Applications list](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Provisioning tab](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning tab automatic](common/provisioning-automatic.png)
+
+1. In the **Admin Credentials** section, input your Alinto Protect Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Alinto Protect. If the connection fails, ensure your Alinto Protect account has Admin permissions and try again.
+
+ ![Token](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. In the **Mappings** section, select **Synchronize Azure Active Directory Users to Alinto Protect**.
+
+1. Review the user attributes that are synchronized from Azure AD to Alinto Protect in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Alinto Protect for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Alinto Protect API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Alinto Protect|
+ |||||
+ |userName|String|&check;|&check;
+ |active|Boolean||&check;
+ |name.givenName|String||
+ |name.familyName|String||
+ |externalId|String||
+
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Alinto Protect, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+1. Define the users and groups that you would like to provision to Alinto Protect by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+1. When you're ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to complete than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
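During each cycle, the provisioning service issues SCIM requests against the endpoint that Alinto Protect exposes. As a rough, hedged illustration of what a create-user call could look like for the attribute mappings listed above, consider the sketch below; the base URL, token, and exact payload shape are placeholders based on the SCIM 2.0 standard rather than confirmed Alinto Protect behavior.

```console
# Illustrative SCIM 2.0 create-user request shaped like the mappings above.
# <scim-base-url> and <token> are placeholders, not real Alinto Protect values.
curl -X POST "<scim-base-url>/Users" \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/scim+json" \
  -d '{
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "userName": "b.simon@contoso.com",
        "active": true,
        "name": { "givenName": "B", "familyName": "Simon" },
        "externalId": "b.simon"
      }'
```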
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Cerby Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cerby-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Cerby for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Cerby.
++
+writer: twimmers
+
+ms.assetid: 465492d5-4f75-4201-bed4-f45b3be18702
+ Last updated : 05/15/2022
+# Tutorial: Configure Cerby for automatic user provisioning
+
+This tutorial describes the steps you need to do in both Cerby and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Cerby](https://app.cerby.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Capabilities supported
+> [!div class="checklist"]
+> * Create users in Cerby
+> * Remove users in Cerby when they do not require access anymore
+> * Keep user attributes synchronized between Azure AD and Cerby
+> * [Single sign-on](cerby-tutorial.md) to Cerby (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A user account in Cerby with the Workspace Owner role.
+* The Cerby SAML2-based integration must be set up. Follow the instructions in the [How to Configure the Cerby App Gallery SAML App with Your Azure AD Tenant](https://help.cerby.com/en/articles/5457563-how-to-configure-the-cerby-app-gallery-saml-app-with-your-azure-ad-tenant) article to set up the integration.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Cerby](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Cerby to support provisioning with Azure AD
+Cerby enables provisioning support for Azure AD by default. You only need to retrieve the SCIM API authentication token by completing the following steps (a quick check of the token is sketched after the steps):
+
+1. Log in to your corresponding [Cerby workspace](https://app.cerby.com/).
+1. Click the **Hi there < user >!** button located at the bottom of the left side navigation menu. A drop-down menu is displayed.
+1. Select the **Workspace Configuration** option related to your account from the drop-down menu. The **Workspace Configuration** page is displayed.
+1. Activate the **IDP Settings** tab.
+1. Click the **View Token** button located in the **Directory Sync** section of the **IDP Settings** tab. A pop-up window is displayed waiting to confirm your identity, and a push notification is sent to your Cerby mobile application.
+**IMPORTANT:** To confirm your identity, you must have installed and logged in to the Cerby mobile application to receive push notifications.
+1. Click the **It's me!** button in the **Confirmation Request** screen of your Cerby mobile application to confirm your identity. The pop-up window in your Cerby workspace is closed, and the **Show Token** pop-up window is displayed.
+1. Click the **Copy** button to copy the SCIM token to the clipboard.
+
+ >[!TIP]
+ >Keep the **Show Token** pop-up window open to copy the token at any time. You need the token to configure provisioning with Azure AD.
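If you want to sanity-check the copied token before moving on, most SCIM services let you query a standard endpoint with it. The sketch below assumes the tenant URL used later in this tutorial, `https://api.cerby.com/v1/scim/v2`, and standard SCIM 2.0 routes; Cerby's actual response format may differ.

```console
# Hedged token check: a 200 response with a SCIM ListResponse suggests the
# token works; a 401 suggests it was copied incorrectly or has expired.
curl -s -H "Authorization: Bearer <SCIM token>" \
  "https://api.cerby.com/v1/scim/v2/Users?count=1"
```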
+
+## Step 3. Add Cerby from the Azure AD application gallery
+
+Add Cerby from the Azure AD application gallery to start managing provisioning to Cerby. If you have previously set up Cerby for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application, or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
++
+## Step 5. Configure automatic user provisioning to Cerby
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and groups in Cerby based on user and group assignments in Azure AD.
+
+### To configure automatic user provisioning for Cerby in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+1. In the applications list, select **Cerby**.
+
+ ![The Cerby link in the Applications list](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Provisioning tab](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning tab automatic](common/provisioning-automatic.png)
+
+1. In the **Admin Credentials** section, input `https://api.cerby.com/v1/scim/v2` as your Cerby Tenant URL and the SCIM API authentication token that you have previously retrieved.
+
+1. Click **Test Connection** to ensure Azure AD can connect to Cerby. If the connection fails, ensure your Cerby account has Admin permissions and try again.
+
+ ![Token](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. In the **Mappings** section, select **Synchronize Azure Active Directory Users to Cerby**.
+
+1. Review the user attributes that are synchronized from Azure AD to Cerby in the **Attribute Mappings** section. The attributes selected as **Matching** properties are used to match the user accounts in Cerby for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Cerby API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Cerby|
+ |||||
+ |userName|String|&check;|&check;
+ |emails[type eq "work"].value|String|&check;|&check;
+ |active|Boolean||&check;
+ |name.givenName|String||&check;
+ |name.familyName|String||&check;
+ |externalId|String||
+
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Cerby, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+1. Define the users and groups that you would like to provision to Cerby by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+1. When you're ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to complete than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
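If you prefer to inspect the cycle programmatically rather than in the portal, the Microsoft Graph synchronization API exposes the provisioning job attached to the enterprise application's service principal. The call below is a hedged sketch: `<servicePrincipalId>` is a placeholder for the object ID of the Cerby service principal, and the request needs an access token with a suitable Graph permission (for example, Synchronization.Read.All).

```console
# List provisioning (synchronization) jobs and their status on the
# service principal; the response includes cycle and quarantine details.
curl -s -H "Authorization: Bearer <graph-access-token>" \
  "https://graph.microsoft.com/v1.0/servicePrincipals/<servicePrincipalId>/synchronization/jobs"
```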
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Chronus Saml Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/chronus-saml-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Chronus SAML'
+description: Learn how to configure single sign-on between Azure Active Directory and Chronus SAML.
+ Last updated : 05/18/2022
+# Tutorial: Azure AD SSO integration with Chronus SAML
+
+In this tutorial, you'll learn how to integrate Chronus SAML with Azure Active Directory (Azure AD). When you integrate Chronus SAML with Azure AD, you can:
+
+* Control in Azure AD who has access to Chronus SAML.
+* Enable your users to be automatically signed-in to Chronus SAML with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Chronus SAML single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Chronus SAML supports **SP and IDP** initiated SSO.
+* Chronus SAML supports **Just In Time** user provisioning.
+
+## Add Chronus SAML from the gallery
+
+To configure the integration of Chronus SAML into Azure AD, you need to add Chronus SAML from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Chronus SAML** in the search box.
+1. Select **Chronus SAML** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Chronus SAML
+
+Configure and test Azure AD SSO with Chronus SAML using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Chronus SAML.
+
+To configure and test Azure AD SSO with Chronus SAML, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Chronus SAML SSO](#configure-chronus-saml-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Chronus SAML test user](#create-chronus-saml-test-user)** - to have a counterpart of B.Simon in Chronus SAML that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Chronus SAML** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a value using the following pattern:
+ `<CustomerName>.domain.extension`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://<CustomerName>.domain.extension/session`
+
+1. Click **Set additional URLs** and perform the following steps if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://<CustomerName>.domain.extension/session`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [Chronus SAML Client support team](mailto:support@chronus.com) to get these values. You can also refer to the patterns shown in the Basic SAML Configuration section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
+
+1. On the **Set up Chronus SAML** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate U R L's.](common/copy-configuration-urls.png "Attributes")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Chronus SAML.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Chronus SAML**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Chronus SAML SSO
+
+To configure single sign-on on Chronus SAML side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Chronus SAML support team](mailto:support@chronus.com). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create Chronus SAML test user
+
+In this section, a user called B.Simon is created in Chronus SAML. Chronus SAML supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Chronus SAML, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This will redirect to the Chronus SAML Sign-on URL where you can initiate the login flow.
+
+* Go to Chronus SAML Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Chronus SAML for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Chronus SAML tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Chronus SAML for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+
+## Next steps
+
+Once you configure Chronus SAML, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Clebex Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/clebex-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Clebex | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Clebex'
description: Learn how to configure single sign-on between Azure Active Directory and Clebex.
Previously updated : 08/27/2021 Last updated : 05/23/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Clebex
+# Tutorial: Azure AD SSO integration with Clebex
In this tutorial, you'll learn how to integrate Clebex with Azure Active Directory (Azure AD). When you integrate Clebex with Azure AD, you can:
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure Clebex SSO
-1. Log in to your Clebex website as an administrator.
+1. To automate the configuration within Clebex, you need to install **My Apps Secure Sign-in browser extension** by clicking **Install the extension**.
+
+ ![My apps extension](common/install-myappssecure-extension.png)
+
+2. After adding the extension to the browser, clicking **Set up Clebex** will direct you to the Clebex application. From there, provide the admin credentials to sign into Clebex. The browser extension will automatically configure the application for you and automate steps 3-10.
+
+ ![Setup configuration](common/setup-sso.png)
+
+3. If you want to set up Clebex manually, in a different web browser window, sign in to your Clebex company site as an administrator.
1. Go to the COMPANY ADMIN -> **Connectors** -> **Single Sign On (SSO)** and click **select**.
active-directory Fortigate Ssl Vpn Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fortigate-ssl-vpn-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with FortiGate SSL VPN | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with FortiGate SSL VPN'
description: Learn the steps you need to perform to integrate FortiGate SSL VPN with Azure Active Directory (Azure AD).
Previously updated : 06/30/2021 Last updated : 05/13/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with FortiGate SSL VPN
+# Tutorial: Azure AD SSO integration with FortiGate SSL VPN
In this tutorial, you'll learn how to integrate FortiGate SSL VPN with Azure Active Directory (Azure AD). When you integrate FortiGate SSL VPN with Azure AD, you can:
In this tutorial, you'll learn how to integrate FortiGate SSL VPN with Azure Act
To get started, you need the following items:

* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* A FortiGate SSL VPN subscription with single sign-on (SSO) enabled.
+* A FortiGate SSL VPN with single sign-on (SSO) enabled.
## Tutorial description
To configure the integration of FortiGate SSL VPN into Azure AD, you need to add
## Configure and test Azure AD SSO for FortiGate SSL VPN
-You'll configure and test Azure AD SSO with FortiGate SSL VPN by using a test user named B.Simon. For SSO to work, you need to establish a link relationship between an Azure AD user and the corresponding user in FortiGate SSL VPN.
+You'll configure and test Azure AD SSO with FortiGate SSL VPN by using a test user named B.Simon. For SSO to work, you need to establish a link relationship between an Azure AD user and the corresponding SAML SSO user group in FortiGate SSL VPN.
To configure and test Azure AD SSO with FortiGate SSL VPN, you'll complete these high-level steps:
To configure and test Azure AD SSO with FortiGate SSL VPN, you'll complete these
 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** to test Azure AD single sign-on.
 1. **[Grant access to the test user](#grant-access-to-the-test-user)** to enable Azure AD single sign-on for that user.
 1. **[Configure FortiGate SSL VPN SSO](#configure-fortigate-ssl-vpn-sso)** on the application side.
- 1. **Create a FortiGate SSL VPN test user** as a counterpart to the Azure AD representation of the user.
+ 1. **Create a FortiGate SAML SSO user group** as a counterpart to the Azure AD representation of the user.
 1. **[Test SSO](#test-sso)** to verify that the configuration works.

### Configure Azure AD SSO
Follow these steps to enable Azure AD SSO in the Azure portal:
1. In the Azure portal, on the **FortiGate SSL VPN** application integration page, in the **Manage** section, select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up Single Sign-On with SAML** page, select the pencil button for **Basic SAML Configuration** to edit the settings:
+1. On the **Set up Single Sign-On with SAML** page, select the **Edit** button for **Basic SAML Configuration** to edit the settings:
- ![Screenshot that shows the pencil button for editing the basic SAML configuration.](common/edit-urls.png)
+ ![Screenshot of showing Basic SAML configuration page.](./media/fortigate-ssl-vpn-tutorial/saml-configuration.png)
1. On the **Set up Single Sign-On with SAML** page, enter the following values:
- a. In the **Sign on URL** box, enter a URL in the pattern
- `https://<FQDN>/remote/saml/login`.
+ a. In the **Identifier** box, enter a URL in the pattern
+ `https://<FortiGate IP or FQDN address>:<Custom SSL VPN port>/remote/saml/metadata`.
- b. In the **Identifier** box, enter a URL in the pattern
- `https://<FQDN>/remote/saml/metadata`.
+ b. In the **Reply URL** box, enter a URL in the pattern
+ `https://<FortiGate IP or FQDN address>:<Custom SSL VPN port>/remote/saml/login`.
- c. In the **Reply URL** box, enter a URL in the pattern
- `https://<FQDN>/remote/saml/login`.
+ c. In the **Sign on URL** box, enter a URL in the pattern
+ `https://<FortiGate IP or FQDN address>:<Custom SSL VPN port>/remote/login`.
d. In the **Logout URL** box, enter a URL in the pattern
- `https://<FQDN>/remote/saml/logout`.
+ `https://<FortiGate IP or FQDN address>:<Custom SSL VPN port>/remote/saml/logout`.
> [!NOTE]
- > These values are just patterns. You need to use the actual **Sign on URL**, **Identifier**, **Reply URL**, and **Logout URL**. Contact [Fortinet support](https://support.fortinet.com) for guidance. You can also refer to the example patterns shown in the Fortinet documentation and the **Basic SAML Configuration** section in the Azure portal.
+ > These values are just patterns. You need to use the actual **Sign on URL**, **Identifier**, **Reply URL**, and **Logout URL** that are configured on the FortiGate.
1. The FortiGate SSL VPN application expects SAML assertions in a specific format, which requires you to add custom attribute mappings to the configuration. The following screenshot shows the list of default attributes.
- ![Screenshot that shows the default attributes.](common/default-attributes.png)
+ ![Screenshot of showing Attributes and Claims section.](./media/fortigate-ssl-vpn-tutorial/claims.png)
-1. The two additional claims required by FortiGate SSL VPN are shown in the following table. The names of these claims must match the names used in the **Perform FortiGate command-line configuration** section of this tutorial.
+
+1. The claims required by FortiGate SSL VPN are shown in the following table. The names of these claims must match the names used in the **Perform FortiGate command-line configuration** section of this tutorial. Names are case-sensitive.
| Name | Source attribute|
| | |
Follow these steps to enable Azure AD SSO in the Azure portal:
g. Select **All groups**.
- h. Select the **Customize the name of the group claim** check box.
+ h. Under **Advanced options**, select the **Customize the name of the group claim** check box.
i. For **Name**, enter **group**.
After the certificate is uploaded, take note of its name under **System** > **Certificates**.
#### Complete FortiGate command-line configuration
-The following steps require that you configure the Azure Logout URL. This URL contains a question mark character (?). You need to take specific steps to successfully submit this character. You can't complete these steps from the FortiGate CLI Console. Instead, establish an SSH session to the FortiGate appliance by using a tool like PuTTY. If your FortiGate appliance is an Azure virtual machine, you can complete the following steps from the serial console for Azure virtual machines.
+Although you can configure SSO from the GUI since FortiOS 7.0, the CLI configurations apply to all versions and are therefore shown here.
To complete these steps, you'll need the values you recorded earlier:
-- Entity ID
-- Reply URL
-- Logout URL
-- Azure Login URL
-- Azure AD Identifier
-- Azure Logout URL
-- Base64 SAML certificate name (REMOTE_Cert_*N*)
+| FortiGate SAML CLI setting | Equivalent Azure configuration |
+ | | |
+ | SP entity ID (`entity-id`) | Identifier (Entity ID) |
+| SP Single Sign-On URL (`single-sign-on-url`) | Reply URL (Assertion Consumer Service URL) |
+| SP Single Logout URL (`single-logout-url`) | Logout URL |
+| IdP Entity ID (`idp-entity-id`) | Azure AD Identifier |
+| IdP Single Sign-On URL (`idp-single-sign-on-url`) | Azure Login URL |
+| IdP Single Logout URL (`idp-single-logout-url`) | Azure Logout URL |
+| IdP certificate (`idp-cert`) | Base64 SAML certificate name (REMOTE_Cert_N) |
+| Username attribute (`user-name`) | username |
+| Group name attribute (`group-name`) | group |
+
+> [!NOTE]
+ > The Sign on URL under Basic SAML Configuration is not used in the FortiGate configurations. It is used to trigger SP-initiated single sign on to redirect the user to the SSL VPN portal page.
1. Establish an SSH session to your FortiGate appliance, and sign in with a FortiGate Administrator account.
-1. Run these commands:
+1. Run these commands and substitute the `<values>` with the information that you collected previously:
```console
config user saml
- edit azure
- set cert <FortiGate VPN Server Certificate Name>
- set entity-id <Entity ID>
- set single-sign-on-url <Reply URL>
- set single-logout-url <Logout URL>
- set idp-single-sign-on-url <Azure Login URL>
- set idp-entity-id <Azure AD Identifier>
- set idp-single-logout-url <Azure Logout URL>
- set idp-cert <Base64 SAML Certificate Name>
- set user-name username
- set group-name group
+ edit azure
+ set cert <FortiGate VPN Server Certificate Name>
+ set entity-id <Identifier (Entity ID)>
+ set single-sign-on-url <Reply URL>
+ set single-logout-url <Logout URL>
+ set idp-entity-id <Azure AD Identifier>
+ set idp-single-sign-on-url <Azure Login URL>
+ set idp-single-logout-url <Azure Logout URL>
+ set idp-cert <Base64 SAML Certificate Name>
+ set user-name username
+ set group-name group
+ next
end
```
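After saving the object, you may want to confirm it was stored as intended before referencing it from a user group, and capture SAML debug output while testing a sign-in. The commands below are a hedged sketch based on common FortiOS CLI conventions: display the stored SAML server object, enable debugging for the SAML daemon while you attempt a login, then turn debugging off again. Exact syntax and output can vary between FortiOS versions, so treat this as a starting point rather than the documented procedure.

```console
show user saml azure
diagnose debug application samld -1
diagnose debug enable
diagnose debug disable
```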
- > [!NOTE]
- > The Azure Logout URL contains a `?` character. You must enter a special key sequence to correctly provide this URL to the FortiGate serial console. The URL is usually `https://login.microsoftonline.com/common/wsfederation?wa=wsignout1.0`.
- >
- > To enter the Azure Logout URL in the serial console, enter `set idp-single-logout-url https://login.microsoftonline.com/common/wsfederation`.
- >
- > Then select CTRL+V and paste the rest of the URL to complete the line: `set idp-single-logout-url https://login.microsoftonline.com/common/wsfederation?wa=wsignout1.0`.
-
#### Configure FortiGate for group matching

In this section, you'll configure FortiGate to recognize the Object ID of the security group that includes the test user. This configuration will allow FortiGate to make access decisions based on the group membership.
To complete these steps, you'll need the Object ID of the FortiGateAccess security group.
1. Establish an SSH session to your FortiGate appliance, and sign in with a FortiGate Administrator account.
1. Run these commands:
- ```
+ ```console
config user group
- edit FortiGateAccess
- set member azure
- config match
- edit 1
- set server-name azure
- set group-name <Object Id>
- next
- end
- next
+ edit FortiGateAccess
+ set member azure
+ config match
+ edit 1
+ set server-name azure
+ set group-name <Object Id>
+ next
+ end
+ next
end
- ```
-
+ ```
+
#### Create a FortiGate VPN Portals and Firewall Policy

In this section, you'll configure a FortiGate VPN portal and firewall policy that grants access to the FortiGateAccess security group you created earlier in this tutorial.
-Work with the [FortiGate support team](mailto:tac_amer@fortinet.com) to add the VPN Portals and Firewall Policy to the FortiGate VPN platform. You need to complete this step before you use single sign-on.
+Refer to [Configuring SAML SSO login for SSL VPN web mode with Azure AD acting as SAML IdP](https://docs.fortinet.com/document/fortigate-public-cloud/7.0.0/azure-administration-guide/584456/configuring-saml-sso-login-for-ssl-vpn-web-mode-with-azure-ad-acting-as-saml-idp) for instructions.
## Test SSO

In this section, you test your Azure AD single sign-on configuration with the following options.
-* Click on **Test this application** in Azure portal. This will redirect to FortiGate VPN Sign-on URL where you can initiate the login flow.
+* In step 5 of the Azure SSO configuration, **Test single sign-on with your App**, click the **Test** button in the Azure portal. This will redirect to the FortiGate VPN Sign-on URL, where you can initiate the login flow.
* Go to FortiGate VPN Sign-on URL directly and initiate the login flow from there.
active-directory Fresh Relevance Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fresh-relevance-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Fresh Relevance | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Fresh Relevance'
description: Learn how to configure single sign-on between Azure Active Directory and Fresh Relevance.
Previously updated : 07/26/2021 Last updated : 05/23/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Fresh Relevance
+# Tutorial: Azure AD SSO integration with Fresh Relevance
In this tutorial, you'll learn how to integrate Fresh Relevance with Azure Active Directory (Azure AD). When you integrate Fresh Relevance with Azure AD, you can:
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure Fresh Relevance SSO
-1. Log in to your Fresh Relevance company site as an administrator.
+1. To automate the configuration within Fresh Relevance, you need to install **My Apps Secure Sign-in browser extension** by clicking **Install the extension**.
+
+ ![My apps extension](common/install-myappssecure-extension.png)
+
+2. After adding the extension to the browser, clicking **Set up Fresh Relevance** will direct you to the Fresh Relevance application. From there, provide the admin credentials to sign into Fresh Relevance. The browser extension will automatically configure the application for you and automate steps 3-10.
+
+ ![Setup configuration](common/setup-sso.png)
+
+3. If you want to set up Fresh Relevance manually, in a different web browser window, sign in to your Fresh Relevance company site as an administrator.
1. Go to **Settings** > **All Settings** > **Security and Privacy** and click **SAML/Azure AD Single Sign-On**.
active-directory Idrive360 Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/idrive360-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with IDrive360 | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with IDrive360'
description: Learn how to configure single sign-on between Azure Active Directory and IDrive360.
Previously updated : 06/18/2021 Last updated : 05/23/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with IDrive360
+# Tutorial: Azure AD SSO integration with IDrive360
In this tutorial, you'll learn how to integrate IDrive360 with Azure Active Directory (Azure AD). When you integrate IDrive360 with Azure AD, you can:
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
### Create IDrive360 test user
-1. In a different web browser window, sign in to your IDrive360 company site as an administrator.
+1. To automate the configuration within IDrive360, you need to install **My Apps Secure Sign-in browser extension** by clicking **Install the extension**.
+
+ ![My apps extension](common/install-myappssecure-extension.png)
+
+2. After adding the extension to the browser, clicking **Set up IDrive360** will direct you to the IDrive360 application. From there, provide the admin credentials to sign into IDrive360. The browser extension will automatically configure the application for you and automate steps 3-10.
+
+ ![Setup configuration](common/setup-sso.png)
+
+3. If you want to set up IDrive360 manually, in a different web browser window, sign in to your IDrive360 company site as an administrator.
2. Navigate to the **Users** tab and click **Add User**.
active-directory Isight Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/isight-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with i-Sight'
+description: Learn how to configure single sign-on between Azure Active Directory and i-Sight.
+ Last updated : 05/18/2022
+# Tutorial: Azure AD SSO integration with i-Sight
+
+In this tutorial, you'll learn how to integrate i-Sight with Azure Active Directory (Azure AD). When you integrate i-Sight with Azure AD, you can:
+
+* Control in Azure AD who has access to i-Sight.
+* Enable your users to be automatically signed-in to i-Sight with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* i-Sight single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* i-Sight supports **IDP** initiated SSO.
+
+## Add i-Sight from the gallery
+
+To configure the integration of i-Sight into Azure AD, you need to add i-Sight from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **i-Sight** in the search box.
+1. Select **i-Sight** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for i-Sight
+
+Configure and test Azure AD SSO with i-Sight using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in i-Sight.
+
+To configure and test Azure AD SSO with i-Sight, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure i-Sight SSO](#configure-i-sight-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create i-Sight test user](#create-i-sight-test-user)** - to have a counterpart of B.Simon in i-Sight that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **i-Sight** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** text box, type a URL using one of the following patterns:
+
+ | **Identifier** |
+ |--|
+ | `https://<CustomerName>.i-sight.com` |
+ | `https://<CustomerName>.i-sightuat.com` |
+
+ b. In the **Reply URL** text box, type a URL using one of the following patterns:
+
+ | **Reply URL** |
+ |--|
+ | `https://<CustomerName>.i-sight.com/auth/wsfed` |
+ | `https://<CustomerName>.i-sightuat.com/auth/wsfed` |
+
+ > [!Note]
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [i-Sight Client support team](mailto:it@i-sight.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up i-Sight** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Attributes")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to i-Sight.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **i-Sight**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure i-Sight SSO
+
+To configure single sign-on on **i-Sight** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [i-Sight support team](mailto:it@i-sight.com). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create i-Sight test user
+
+In this section, you create a user called Britta Simon in i-Sight. Work with [i-Sight support team](mailto:it@i-sight.com) to add the users in the i-Sight platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on Test this application in Azure portal and you should be automatically signed in to the i-Sight for which you set up the SSO.
+
+* You can use Microsoft My Apps. When you click the i-Sight tile in the My Apps, you should be automatically signed in to the i-Sight for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure i-Sight, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Mongodb Cloud Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mongodb-cloud-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with MongoDB Cloud | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with MongoDB Cloud'
description: Learn how to configure single sign-on between Azure Active Directory and MongoDB Cloud.
Previously updated : 04/14/2021 Last updated : 05/13/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with MongoDB Cloud
+# Tutorial: Azure AD SSO integration with MongoDB Cloud
In this tutorial, you'll learn how to integrate MongoDB Cloud with Azure Active Directory (Azure AD). When you integrate MongoDB Cloud with Azure AD, you can:
-* Control in Azure AD who has access to MongoDB Cloud, MongoDB Atlas, the MongoDB community, MongoDB University, and MongoDB Support.
+* Control in Azure AD who has access to MongoDB Atlas, the MongoDB community, MongoDB University, and MongoDB Support.
* Enable your users to be automatically signed in to MongoDB Cloud with their Azure AD accounts.
+* Assign MongoDB Atlas roles to users based on their Azure AD group memberships.
* Manage your accounts in one central location: the Azure portal.

## Prerequisites
Configure and test Azure AD SSO with MongoDB Cloud, by using a test user called
To configure and test Azure AD SSO with MongoDB Cloud, perform the following steps:

1. [Configure Azure AD SSO](#configure-azure-ad-sso) to enable your users to use this feature.
- 1. [Create an Azure AD test user](#create-an-azure-ad-test-user) to test Azure AD single sign-on with B.Simon.
- 1. [Assign the Azure AD test user](#assign-the-azure-ad-test-user) to enable B.Simon to use Azure AD single sign-on.
-1. [Configure MongoDB Cloud SSO](#configure-mongodb-cloud-sso) to configure the single sign-on settings on the application side.
+ 1. [Create an Azure AD test user and test group](#create-an-azure-ad-test-user-and-test-group) to test Azure AD single sign-on with B.Simon.
+ 1. [Assign the Azure AD test user or test group](#assign-the-azure-ad-test-user-or-test-group) to enable B.Simon to use Azure AD single sign-on.
+1. [Configure MongoDB Atlas SSO](#configure-mongodb-atlas-sso) to configure the single sign-on settings on the application side.
1. [Create a MongoDB Cloud test user](#create-a-mongodb-cloud-test-user) to have a counterpart of B.Simon in MongoDB Cloud, linked to the Azure AD representation of the user. 1. [Test SSO](#test-sso) to verify whether the configuration works.
To configure and test Azure AD SSO with MongoDB Cloud, perform the following ste
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **MongoDB Cloud** application integration page, find the **Manage** section. Select **single sign-on**.
+1. In the Azure portal, on the **MongoDB Cloud** application integration page, find the **Manage** section. Select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**. 1. On the **Set up Single Sign-On with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
Follow these steps to enable Azure AD SSO in the Azure portal.
| firstName | user.givenname | | lastName | user.surname |
+1. If you would like to authorize users by using MongoDB Atlas [role mappings](https://docs.atlas.mongodb.com/security/manage-role-mapping/), add the following group claim to send the user's group information within the SAML assertion.
+
+ | Name | Source attribute|
+ | | |
+ | memberOf | Group ID |
+
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML**. Select **Download** to download the certificate and save it on your computer. ![Screenshot of SAML Signing Certificate section, with Download link highlighted](common/metadataxml.png)
Follow these steps to enable Azure AD SSO in the Azure portal.
![Screenshot of Set up Mongo DB Cloud section, with URLs highlighted](common/copy-configuration-urls.png)
-### Create an Azure AD test user
+### Create an Azure AD test user and test group
In this section, you create a test user in the Azure portal called B.Simon.
In this section, you create a test user in the Azure portal called B.Simon.
1. Select the **Show password** check box, and then write the password down. 1. Select **Create**.
-### Assign the Azure AD test user
+If you are using the MongoDB Atlas role mappings feature to assign roles to users based on their Azure AD groups, create a test group and add B.Simon as a member (a CLI-based sketch follows these steps):
+1. From the left pane in Azure portal, select **Azure Active Directory** > **Groups**.
+1. Select **New group** at the top of the screen.
+1. In the **Group** properties, follow these steps:
+ 1. Select **Security** in the **Group type** dropdown.
+ 1. In the **Group name** field, enter 'Group 1'.
+ 1. Select **Create**.
+
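+If you prefer to script this step, the test group can also be created with the Azure CLI. The following is a minimal sketch, assuming B.Simon already exists as `B.Simon@contoso.com` and using placeholder group names; note that older Azure CLI versions expose the user's object ID as `objectId` rather than `id`.
+
+```azurecli
+# Create the security group (display name and mail nickname are placeholders)
+az ad group create --display-name "Group 1" --mail-nickname "group1"
+
+# Look up B.Simon's object ID and add the user to the group
+userId=$(az ad user show --id B.Simon@contoso.com --query id --output tsv)
+az ad group member add --group "Group 1" --member-id $userId
+```
+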
+### Assign the Azure AD test user or test group
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to MongoDB Cloud.
+In this section, you'll enable B.Simon or Group 1 to use Azure single sign-on by granting access to MongoDB Cloud.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **MongoDB Cloud**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**. 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list or, if you are using MongoDB Atlas role mappings, select **Group 1** from the Groups list; then click the **Select** button at the bottom of the screen.
1. In the **Add Assignment** dialog, click the **Assign** button.
-## Configure MongoDB Cloud SSO
+## Configure MongoDB Atlas SSO
+
+To configure single sign-on on the MongoDB Atlas side, you need the appropriate URLs copied from the Azure portal. You also need to configure the Federation Application for your MongoDB Atlas Organization. Follow the instructions in the [MongoDB Atlas documentation](https://docs.atlas.mongodb.com/security/federated-auth-azure-ad/). If you have a problem, contact the [MongoDB support team](https://support.mongodb.com/).
+
+### Configure MongoDB Atlas Role Mapping
-To configure single sign-on on the MongoDB Cloud side, you need the appropriate URLs copied from the Azure portal. You also need to configure the Federation Application for your MongoDB Cloud Organization. Follow the instructions in the [MongoDB Cloud documentation](https://docs.atlas.mongodb.com/security/federated-auth-azure-ad/). If you have a problem, contact the [MongoDB Cloud support team](https://support.mongodb.com/).
+To authorize users in MongoDB Atlas based on their Azure AD group membership, you can map the Azure AD group's Object IDs to MongoDB Atlas Organization/Project roles with the help of MongoDB Atlas role mappings. Follow the instructions in the [MongoDB Atlas documentation](https://docs.atlas.mongodb.com/security/manage-role-mapping/#add-role-mappings-in-your-organization-and-its-projects). If you have a problem, contact the [MongoDB support team](https://support.mongodb.com/).
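+
+When you add the role mapping in Atlas, you'll need the Azure AD group's Object ID. As a minimal sketch, assuming the `Group 1` test group created earlier (older Azure CLI versions expose the property as `objectId` rather than `id`), you could retrieve it with:
+
+```azurecli
+# Retrieve the Object ID of the Azure AD group referenced by the Atlas role mapping
+az ad group show --group "Group 1" --query id --output tsv
+```
+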
### Create a MongoDB Cloud test user
-MongoDB Cloud supports just-in-time user provisioning, which is enabled by default. There is no additional action for you to take. If a user doesn't already exist in MongoDB Cloud, a new one is created after authentication.
+MongoDB Atlas supports just-in-time user provisioning, which is enabled by default. There is no additional action for you to take. If a user doesn't already exist in MongoDB Atlas, a new one is created after authentication.
## Test SSO
In this section, you test your Azure AD single sign-on configuration with follow
#### SP initiated:
-* Click on **Test this application** in Azure portal. This will redirect to MongoDB Cloud Sign on URL where you can initiate the login flow.
+* Click on **Test this application** in Azure portal. This will redirect to MongoDB Atlas Sign-on URL where you can initiate the login flow.
-* Go to MongoDB Cloud Sign-on URL directly and initiate the login flow from there.
+* Go to MongoDB Atlas Sign-on URL directly and initiate the login flow from there.
#### IDP initiated:
-* Click on **Test this application** in Azure portal and you should be automatically signed in to the MongoDB Cloud for which you set up the SSO.
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the MongoDB Atlas for which you set up the SSO.
-You can also use Microsoft My Apps to test the application in any mode. When you click the MongoDB Cloud tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the MongoDB Cloud for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+You can also use Microsoft My Apps to test the application in any mode. When you click the MongoDB Cloud tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the MongoDB Cloud for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
## Next steps
active-directory Sharingcloud Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sharingcloud-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with SharingCloud | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with SharingCloud'
description: Learn how to configure single sign-on between Azure Active Directory and Instant Suite.
Previously updated : 03/10/2021 Last updated : 05/19/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with SharingCloud
+# Tutorial: Azure AD SSO integration with SharingCloud
In this tutorial, you'll learn how to integrate SharingCloud with Azure Active Directory (Azure AD). When you integrate SharingCloud with Azure AD, you can:
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* SharingCloud supports **SP and IDP** initiated SSO
-* SharingCloud supports **Just In Time** user provisioning
+* SharingCloud supports **SP and IDP** initiated SSO.
+* SharingCloud supports **Just In Time** user provisioning.
## Adding SharingCloud from the gallery
Follow these steps to enable Azure AD SSO in the Azure portal.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, perform the following steps:
+1. On the **Set up single sign-on with SAML** page, perform the following steps:
- Upload the metadata file with XML file provided by SharingCloud. Contact the [SharingCloud Client support team](mailto:support@sharingcloud.com) to get the file.
+ a. In the **Identifier** text box, type a URL using the following pattern:
+ `https://auth.sharingcloud.net/auth/realms/<COMPANY_NAME>`
- ![Screenshot of the Basic SAML Configuration user interface with the **Upload metadata file** link highlighted.](common/upload-metadata.png)
-
- Select the metadata file provided and click on **Upload**.
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://auth.sharingcloud.net/auth/realms/<COMPANY_NAME>/broker/saml/endpoint`
- ![Screenshot of the metadata file provided user interface, with the select file icon and **Upload** button highlighted.](common/browse-upload-metadata.png)
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.factset.com/services/saml2/`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact the [SharingCloud support team](mailto:support@sharingcloud.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. SharingCloud application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
active-directory Skillsbase Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/skillsbase-tutorial.md
Previously updated : 07/22/2021 Last updated : 05/13/2022 # Tutorial: Azure Active Directory integration with Skills Base
In this tutorial, you'll learn how to integrate Skills Base with Azure Active Di
To get started, you need the following items: * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* Skills Base single sign-on (SSO) enabled subscription.
+* A Skills Base license that supports single sign-on (SSO).
> [!NOTE] > This integration is also available to use from Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from public cloud.
In this tutorial, you configure and test Azure AD single sign-on in a test envir
* Skills Base supports **SP** initiated SSO. * Skills Base supports **Just In Time** user provisioning.
-> [!NOTE]
-> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+> [!NOTE]
+> Skills Base does not support **IdP** initiated SSO.
## Add Skills Base from the gallery
To configure and test Azure AD SSO with Skills Base, perform the following steps
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
- 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure Skills Base SSO](#configure-skills-base-sso)** - to configure the single sign-on settings on application side.
+ 2. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+2. **[Configure Skills Base SSO](#configure-skills-base-sso)** - to configure the single sign-on settings on application side.
1. **[Create Skills Base test user](#create-skills-base-test-user)** - to have a counterpart of B.Simon in Skills Base that is linked to the Azure AD representation of user.
-1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-
+3. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
## Configure Azure AD SSO Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the Azure portal, on the **Skills Base** application integration page, find the **Manage** section and select **single sign-on**.
-1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+1. In the Azure portal, on the **Skills Base** Enterprise Application Overview page, under **Getting Started** section select **Get started** under **2. Set up single sign on**.
+
+2. On the **Select a single sign-on method** page, select **SAML**.
+
+3. On the **Set up Single Sign-On with SAML** page, click the **Upload metadata file** button at the top of the page.
+
+4. Click the **Select a file** icon and select the metadata file that you downloaded from Skills Base.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+5. Click **Add**.
-4. On the **Basic SAML Configuration** section, perform the following step:
+ ![Screenshot showing Upload SP metadata.](common/browse-upload-metadata.png)
- In the **Sign-on URL** text box, type a URL using the following pattern:
+6. On the **Basic SAML Configuration** page, in the **Sign on URL** text box, enter your Skills Base shortcut link, which should be in the format:
`https://app.skills-base.com/o/<customer-unique-key>` > [!NOTE]
- > You can get the Sign-On URL from Skills Base application. Please login as an Administrator and to go to Admin-> Settings-> Instance details -> Shortcut link. Copy the Sign-On URL and paste it in above textbox.
+ > You can get the Sign on URL from the Skills Base application. Please log in as an Administrator and go to \[Administration > Settings > Instance details > Shortcut link\]. Copy the shortcut link and paste it into the **Sign on URL** textbox in Azure AD.
-5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
+7. Click **Save**.
- ![The Certificate download link](common/metadataxml.png)
+8. Close the **Basic SAML Configuration** dialog.
-6. On the **Set up Skills Base** section, copy the appropriate URL(s) as per your requirement.
+9. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, next to **Federation Metadata XML**, click **Download** to download the Federation Metadata XML and save it on your computer.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+ ![Screenshot showing the Certificate download link.](common/metadataxml.png)
-### Create an Azure AD test user
+## Configure Skills Base SSO
-In this section, you'll create a test user in the Azure portal called B.Simon.
+1. Log in to Skills Base as an Administrator.
-1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
-1. Select **New user** at the top of the screen.
-1. In the **User** properties, follow these steps:
- 1. In the **Name** field, enter `B.Simon`.
- 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
- 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
- 1. Click **Create**.
+1. From the left side of the menu, select **Administration -> Authentication**.
-### Assign the Azure AD test user
+ ![Screenshot showing the Authentication menu.](./media/skillsbase-tutorial/admin.png)
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Skills Base.
+1. On the **Authentication** page in the **Identity Providers** section, select **Add identity provider**.
-1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **Skills Base**.
-1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
-1. In the **Add Assignment** dialog, click the **Assign** button.
+ ![Screenshot shows the "Add identity provider" button.](./media/skillsbase-tutorial/configuration.png)
-## Configure Skills Base SSO
+1. Click **Add** to use the default settings.
-1. In a different web browser window, login to Skills Base as a Security Administrator.
+ ![Screenshot shows the Authentication page where you can enter the values described.](./media/skillsbase-tutorial/save-configuration.png)
-2. From the left side of menu, under **ADMIN** click **Authentication**.
+1. In the **Application Details** panel, next to **SAML SP Metadata**, select **Download XML File** and save the resulting file on your computer.
- ![The admin](./media/skillsbase-tutorial/admin.png)
+ ![Screenshot shows the Application Details panel where you can download the SP Metadata file.](./media/skillsbase-tutorial/download-sp-metadata.png)
-3. On the **Authentication** Page, select Single Sign-On as **SAML 2**.
+1. In the **Identity Providers** section, select the **edit** button (denoted by a pencil icon) for the Identity Provider record you added.
- ![Screenshot shows the Authentication page with SAML 2 selected for Sing Sign-on.](./media/skillsbase-tutorial/configuration.png)
+ ![Screenshot showing the Edit Identity Providers button.](./media/skillsbase-tutorial/edit-identity-provider.png)
-4. On the **Authentication** Page, Perform the following steps:
+1. In the **Edit identity provider** panel, for **SAML IdP Metadata**, select **Upload an XML file**.
- ![Screenshot shows the Authentication page where you can enter the values described.](./media/skillsbase-tutorial/save-configuration.png)
+1. Click **Browse** to choose a file. Select the Federation Metadata XML file that you downloaded from Azure AD and click **Save**.
+
+ ![Screenshot showing Upload certificate type.](./media/skillsbase-tutorial/browse-and-save.png)
+
+1. In the **Authentication** panel, for **Single Sign-On**, select the Identity Provider you added.
+
+ ![Screenshot for Authentication panel for S S O.](./media/skillsbase-tutorial/select-identity-provider.png)
- a. Click on **Update IdP metadata** button next to **Status** option and paste the contents of Metadata XML that you downloaded from the Azure portal in the specified textbox.
+1. Make sure the option to bypass the Skills Base login screen is **deselected** for now. You can enable this option later, once the integration is proven to be working.
+
+1. If you would like to enable **Just In Time** user provisioning, enable the **Automatic user account provisioning** option.
+
+1. Click **Save changes**.
+
+ ![Screenshot for Just in Time provisioning.](./media/skillsbase-tutorial/identity-provider-enabled.png)
+
+> [!Note]
+> The Identity Provider you added in the **Identity Providers** panel should now have a green **Enabled** badge in the **Status** column.
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon. An equivalent Azure CLI sketch follows these steps.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+2. Select **New user** at the top of the screen.
+3. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 2. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 3. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 4. Click **Create**.
+
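+If you prefer the Azure CLI, the following is a minimal sketch of the same step; the user principal name and password shown are placeholders for your own values.
+
+```azurecli
+# Create the B.Simon test user (replace the domain and choose your own password)
+az ad user create \
+  --display-name "B.Simon" \
+  --user-principal-name "B.Simon@contoso.com" \
+  --password "<choose-a-strong-password>"
+```
+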
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Skills Base.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+2. In the applications list, select **Skills Base**.
+3. In the app's overview page, find the **Manage** section and select **Users and groups**.
+4. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+5. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+6. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+7. In the **Add Assignment** dialog, click the **Assign** button.
- > [!Note]
- > You can also validate idp metadata through the **Metadata validator** tool as highlighted in screenshot above.
- b. Click **Save**.
### Create Skills Base test user
-In this section, a user called Britta Simon is created in Skills Base. Skills Base supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Skills Base, a new one is created after authentication.
+Skills Base supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Skills Base, a new one is created after authentication.
> [!Note]
-> If you need to create a user manually, follow the instructions [here](http://wiki.skills-base.net/index.php?title=Adding_people_and_enabling_them_to_log_in).
+> If you need to create a user manually, follow the instructions [here](https://support.skills-base.com/kb/articles/11000024831-adding-people-and-enabling-them-to-log-in).
## Test SSO
active-directory Userzoom Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/userzoom-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with UserZoom'
+description: Learn how to configure single sign-on between Azure Active Directory and UserZoom.
++++++++ Last updated : 05/05/2022++++
+# Tutorial: Azure AD SSO integration with UserZoom
+
+In this tutorial, you'll learn how to integrate UserZoom with Azure Active Directory (Azure AD). When you integrate UserZoom with Azure AD, you can:
+
+* Control in Azure AD who has access to UserZoom.
+* Enable your users to be automatically signed-in to UserZoom with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* UserZoom single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* UserZoom supports **SP** and **IDP** initiated SSO.
+
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
+## Add UserZoom from the gallery
+
+To configure the integration of UserZoom into Azure AD, you need to add UserZoom from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **UserZoom** in the search box.
+1. Select **UserZoom** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for UserZoom
+
+Configure and test Azure AD SSO with UserZoom using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in UserZoom.
+
+To configure and test Azure AD SSO with UserZoom, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure UserZoom SSO](#configure-userzoom-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create UserZoom test user](#create-userzoom-test-user)** - to have a counterpart of B.Simon in UserZoom that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **UserZoom** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, you don't have to perform any steps because the app is already pre-integrated with Azure.
+
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **SP** initiated mode, then perform the following steps:
+
+ a. In the **Identifier** text box, type the value:
+ `urn:auth0:auth-userzoom:microsoft`
+
+ b. In the **Reply URL** text box, type the URL:
+ `https://auth.userzoom.com/login/callback?connection=microsoft`
+
+ c. In the **Sign-on URL** text box, type the URL:
+ `https://www.manager.userzoom.com/microsoft`
+
+1. UserZoom application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot shows the list of default attributes.](common/default-attributes.png "Image")
+
+1. In addition to the above, the UserZoom application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also pre-populated, but you can review them as per your requirements.
+
+ | Name | Source Attribute |
+ |-| |
+ | email | user.mail |
+ | given_name | user.givenname |
+ | family_name | user.surname |
+
+1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up UserZoom** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URLs.](common/copy-configuration-urls.png "Attributes")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to UserZoom.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **UserZoom**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure UserZoom SSO
+
+To configure single sign-on on the **UserZoom** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [UserZoom support team](mailto:support@userzoom.com). They configure these settings to have the SAML SSO connection set properly on both sides.
+
+### Create UserZoom test user
+
+In this section, you create a user called Britta Simon in UserZoom. Work with [UserZoom support team](mailto:support@userzoom.com) to add the users in the UserZoom platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to UserZoom Sign-on URL where you can initiate the login flow.
+
+* Go to UserZoom Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the UserZoom for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the UserZoom tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the UserZoom for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure UserZoom, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Workgrid Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/workgrid-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Workgrid | Microsoft Docs'
+ Title: 'Tutorial: Azure Active Directory integration with Workgrid'
description: Learn how to configure single sign-on between Azure Active Directory and Workgrid.
Previously updated : 09/02/2021 Last updated : 05/13/2022 # Tutorial: Azure Active Directory integration with Workgrid
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Select a single sign-on method** page, select **SAML**. 1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot of Edit Basic SAML Configuration.](common/edit-urls.png)
4. On the **Basic SAML Configuration** section, perform the following steps:
- a. In the **Sign on URL** text box, type a URL using the following pattern:
- `https://<COMPANYCODE>.workgrid.com/console`
+ a. In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://<COMPANYCODE>.workgrid.com/console`
- b. In the **Identifier (Entity ID)** text box, type a value using the following pattern:
- `urn:amazon:cognito:sp:us-east-1_<poolid>`
+ b. In the **Identifier (Entity ID)** text box, type a value using the following pattern:
+ `urn:amazon:cognito:sp:us-east-1_<poolid>`
- > [!NOTE]
- > These values are not real. Update these values with the actual Sign on URL and Identifier. Contact [Workgrid Client support team](mailto:support@workgrid.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Sign on URL and Identifier. Your Sign On URL is the same URL you use to sign in to the Workgrid console. You can find the Entity ID in the Security Section of your Workgrid console.
5. Workgrid application expects the SAML assertions in a specific format. Configure the following claims for this application. You can manage the values of these attributes from the **User Attributes** section on application integration page. On the **Set up Single Sign-On with SAML** page, click **Edit** button to open **User Attributes** dialog.
- ![image](common/edit-attribute.png)
+ ![Screenshot of user attributes.](common/edit-attribute.png)
6. On the **Set-up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
- ![The Certificate download link](common/metadataxml.png)
+ ![Screenshot of The Certificate download link.](common/metadataxml.png)
7. On the **Set-up Workgrid** section, copy the appropriate URL(s) as per your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+ ![Screenshot of Copy configuration U R Ls.](common/copy-configuration-urls.png)
### Create an Azure AD test user
In this section, you'll create a test user in the Azure portal called B.Simon.
1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**. 1. Select **New user** at the top of the screen. 1. In the **User** properties, follow these steps:
- 1. In the **Name** field, enter `B.Simon`.
- 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
- 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
- 1. Click **Create**.
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure Workgrid SSO
-To configure single sign-on on **Workgrid** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Workgrid support team](mailto:support@workgrid.com). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on **Workgrid** side, you need to add the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to your Workgrid console in the **Security section**.
+
+ ![Screenshot of the Workgrid U I with the Security section called out.](media/workgrid-tutorial/security-section.png)
+
+ > [!NOTE]
+ > You will need to use the full schema URI for the Email, Name and Family Name claims when mapping the attributes in Workgrid:
+ >
+ > ![Screenshot of the Workgrid U I with the Security section attribute fields.](media/workgrid-tutorial/attribute-mappings.png)
+ ### Create Workgrid test user
Workgrid also supports automatic user provisioning, you can find more details [h
## Test SSO
-In this section, you test your Azure AD single sign-on configuration with following options.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-* Click on **Test this application** in Azure portal. This will redirect to Workgrid Sign-on URL where you can initiate the login flow.
+* Click on **Test this application** in Azure portal. This will redirect to Workgrid Sign-on URL where you can initiate the login flow.
* Go to Workgrid Sign-on URL directly and initiate the login flow from there.
active-directory Zscaler One Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zscaler-one-provisioning-tutorial.md
You can use the **Synchronization Details** section to monitor progress and foll
For information on how to read the Azure AD provisioning logs, see [Reporting on automatic user account provisioning](../app-provisioning/check-status-user-account-provisioning.md).
+## Change log
+* 05/16/2022 - **Schema Discovery** feature enabled on this app.
+ ## Additional resources * [Manage user account provisioning for enterprise apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
active-directory Credential Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/credential-design.md
Title: How to customize your Azure Active Directory Verifiable Credentials (prev
description: This article shows you how to create your own custom verifiable credential --++ Last updated 04/01/2021
active-directory Decentralized Identifier Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/decentralized-identifier-overview.md
Title: Introduction to Azure Active Directory Verifiable Credentials (preview)
description: An overview Azure Verifiable Credentials. -+ editor:-+ Last updated 04/01/2021
active-directory Get Started Request Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/get-started-request-api.md
description: Learn how to issue and verify by using the Request Service REST API documentationCenter: '' --++ Last updated 05/03/2022
active-directory How To Create A Free Developer Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-create-a-free-developer-account.md
Title: How to create a free Azure Active Directory developer tenant
description: This article shows you how to create a developer account --++ Last updated 04/01/2021
active-directory How To Dnsbind https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-dnsbind.md
Title: Link your Domain to your Decentralized Identifier (DID) (preview) - Azure
description: Learn how to DNS Bind? documentationCenter: '' --++ Last updated 02/22/2022
active-directory How To Issuer Revoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-issuer-revoke.md
Title: How to Revoke a Verifiable Credential as an Issuer - Azure Active Directo
description: Learn how to revoke a Verifiable Credential that you've issued documentationCenter: '' --++ Last updated 04/01/2021
active-directory How To Opt Out https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-opt-out.md
Title: Opt out of the Azure Active Directory Verifiable Credentials (preview)
description: Learn how to Opt Out of the Verifiable Credentials Preview documentationCenter: '' --++ Last updated 02/08/2022
active-directory Introduction To Verifiable Credentials Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/introduction-to-verifiable-credentials-architecture.md
description: Learn foundational information to plan and design your solution
documentationCenter: '' -+ Last updated 07/20/2021
active-directory Issuance Request Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/issuance-request-api.md
description: Learn how to issue a verifiable credential that you've issued. documentationCenter: '' --++ Last updated 10/08/2021
active-directory Issuer Openid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/issuer-openid.md
Title: Issuer service communication examples (preview) - Azure Active Directory Verifiable Credentials description: Details of communication between identity provider and issuer service --++
active-directory Plan Issuance Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/plan-issuance-solution.md
description: Learn to plan your end-to-end issuance solution.
documentationCenter: '' -+ Last updated 07/20/2021
active-directory Plan Verification Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/plan-verification-solution.md
description: Learn foundational information to plan and design your verification
documentationCenter: '' -+ Last updated 07/20/2021
active-directory Presentation Request Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/presentation-request-api.md
description: Learn how to start a presentation request in Verifiable Credentials documentationCenter: '' --++ Last updated 10/08/2021
active-directory Verifiable Credentials Configure Issuer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-issuer.md
Title: Tutorial - Issue Azure AD Verifiable Credentials from an application (preview) description: In this tutorial, you learn how to issue verifiable credentials by using a sample app.-+ -+ Last updated 05/03/2022
active-directory Verifiable Credentials Configure Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-tenant.md
Title: Tutorial - Configure your tenant for Azure AD Verifiable Credentials (preview) description: In this tutorial, you learn how to configure your tenant to support the Verifiable Credentials service. -+ -+ Last updated 05/06/2022
active-directory Verifiable Credentials Configure Verifier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-verifier.md
Title: Tutorial - Configure Azure AD Verifiable Credentials verifier (preview) description: In this tutorial, you learn how to configure your tenant to verify credentials.-+ -+ Last updated 10/08/2021
active-directory Verifiable Credentials Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-faq.md
Title: Frequently asked questions - Azure Verifiable Credentials (preview) description: Find answers to common questions about Verifiable Credentials --++ Last updated 04/28/2022
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/whats-new.md
Title: What's new for Azure Active Directory Verifiable Credentials (preview)
description: Recent updates for Azure Active Directory Verifiable Credentials -+ Last updated 05/10/2022
aks Cluster Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-extensions.md
Title: Cluster extensions for Azure Kubernetes Service (AKS) (preview)
+ Title: Cluster extensions for Azure Kubernetes Service (AKS)
description: Learn how to deploy and manage the lifecycle of extensions on Azure Kubernetes Service (AKS) Previously updated : 10/13/2021+ Last updated : 05/13/2022
-# Deploy and manage cluster extensions for Azure Kubernetes Service (AKS) (preview)
+# Deploy and manage cluster extensions for Azure Kubernetes Service (AKS)
Cluster extensions provide an Azure Resource Manager-driven experience for installation and lifecycle management of services like Azure Machine Learning (ML) on an AKS cluster. This feature enables:
In this article, you will learn about:
A conceptual overview of this feature is available in [Cluster extensions - Azure Arc-enabled Kubernetes][arc-k8s-extensions] article. - ## Prerequisites * An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free).
A conceptual overview of this feature is available in [Cluster extensions - Azur
> clusterconfig.azure.com/managedby: k8s-extension > ```
-### Register provider for cluster extensions
-
-#### [Azure CLI](#tab/azure-cli)
-
-1. Enter the following commands:
-
- ```azurecli-interactive
- az provider register --namespace Microsoft.KubernetesConfiguration
- az provider register --namespace Microsoft.ContainerService
- ```
-
-2. Monitor the registration process. Registration may take up to 10 minutes.
-
- ```azurecli-interactive
- az provider show -n Microsoft.KubernetesConfiguration -o table
- az provider show -n Microsoft.ContainerService -o table
- ```
-
- Once registered, you should see the `RegistrationState` state for these namespaces change to `Registered`.
-
-#### [PowerShell](#tab/azure-powershell)
-
-1. Enter the following commands:
-
- ```azurepowershell
- Register-AzResourceProvider -ProviderNamespace Microsoft.KubernetesConfiguration
- Register-AzResourceProvider -ProviderNamespace Microsoft.ContainerService
- ```
-
-1. Monitor the registration process. Registration may take up to 10 minutes.
-
- ```azurepowershell
- Get-AzResourceProvider -ProviderNamespace Microsoft.KubernetesConfiguration
- Get-AzResourceProvider -ProviderNamespace Microsoft.ContainerService
- ```
-
- Once registered, you should see the `RegistrationState` state for these namespaces change to `Registered`.
---
-### Register the `AKS-ExtensionManager` preview features
-
-To create an AKS cluster that can use cluster extensions, you must enable the `AKS-ExtensionManager` feature flag on your subscription.
-
-Register the `AKS-ExtensionManager` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
-
-```azurecli-interactive
-az feature register --namespace "Microsoft.ContainerService" --name "AKS-ExtensionManager"
-```
-
-It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature list][az-feature-list] command:
-
-```azurecli-interactive
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/AKS-ExtensionManager')].{Name:name,State:properties.state}"
-```
-
-When ready, refresh the registration of the *Microsoft.KubernetesConfiguration* and *Microsoft.ContainerService* resource providers by using the [az provider register][az-provider-register] command:
-
-```azurecli-interactive
-az provider register --namespace Microsoft.KubernetesConfiguration
-az provider register --namespace Microsoft.ContainerService
-```
- ### Setup the Azure CLI extension for cluster extensions > [!NOTE]
az k8s-extension delete --name azureml --cluster-name <clusterName> --resource-g
[az-feature-register]: /cli/azure/feature#az-feature-register [az-feature-list]: /cli/azure/feature#az-feature-list [az-provider-register]: /cli/azure/provider#az-provider-register
-[azure-ml-overview]: ../machine-learning/how-to-attach-arc-kubernetes.md
+[azure-ml-overview]: ../machine-learning/how-to-attach-kubernetes-anywhere.md
[dapr-overview]: ./dapr.md [gitops-overview]: ../azure-arc/kubernetes/conceptual-gitops-flux2.md [k8s-extension-reference]: /cli/azure/k8s-extension [use-azure-ad-pod-identity]: ./use-azure-ad-pod-identity.md <!-- EXTERNAL -->
-[arc-k8s-regions]: https://azure.microsoft.com/global-infrastructure/services/?products=azure-arc&regions=all
+[arc-k8s-regions]: https://azure.microsoft.com/global-infrastructure/services/?products=azure-arc&regions=all
aks Custom Certificate Authority https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/custom-certificate-authority.md
+
+ Title: Custom certificate authority (CA) in Azure Kubernetes Service (AKS) (preview)
+description: Learn how to use a custom certificate authority (CA) in an Azure Kubernetes Service (AKS) cluster.
++++ Last updated : 4/12/2022++
+# Custom certificate authority (CA) in Azure Kubernetes Service (AKS) (preview)
+
+Custom certificate authorities (CAs) allow you to establish trust between your Azure Kubernetes Service (AKS) cluster and your workloads, such as private registries, proxies, and firewalls. A Kubernetes secret is used to store the certificate authority's information, which is then passed to all nodes in the cluster.
+
+This feature is applied per nodepool, so new and existing nodepools must be configured to enable it.
++
+## Prerequisites
+
+* An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free).
+* [Azure CLI installed][azure-cli-install].
+* A base64 encoded certificate string.
+
+### Limitations
+
+This feature isn't currently supported for Windows nodepools.
+
+### Install the `aks-preview` extension
+
+You also need the *aks-preview* Azure CLI extension version 0.5.72 or later. Install the *aks-preview* extension by using the [az extension add][az-extension-add] command, or install any available updates by using the [az extension update][az-extension-update] command.
+
+```azurecli
+# Install the aks-preview extension
+az extension add --name aks-preview
+
+# Update the extension to make sure you have the latest version installed
+az extension update --name aks-preview
+```
+
+### Register the `CustomCATrustPreview` preview feature
+
+Register the `CustomCATrustPreview` feature flag by using the [az feature register][az-feature-register] command:
+
+```azurecli
+az feature register --namespace "Microsoft.ContainerService" --name "CustomCATrustPreview"
+```
+
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature list][az-feature-list] command:
+
+```azurecli
+az feature list --query "[?contains(name, 'Microsoft.ContainerService/CustomCATrustPreview')].{Name:name,State:properties.state}" -o table
+```
+
+Refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
+
+```azurecli
+az provider register --namespace Microsoft.ContainerService
+```
+
+## Configure a new AKS cluster to use a custom CA
+
+To configure a new AKS cluster to use a custom CA, run the [az aks create][az-aks-create] command with the `--enable-custom-ca-trust` parameter.
+
+```azurecli
+az aks create \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --node-count 2 \
+ --enable-custom-ca-trust
+```
+
+## Configure a new nodepool to use a custom CA
+
+To configure a new nodepool to use a custom CA, run the [az aks nodepool add][az-aks-nodepool-add] command with the `--enable-custom-ca-trust` parameter.
+
+```azurecli
+az aks nodepool add \
+ --cluster-name myAKSCluster \
+ --resource-group myResourceGroup \
+ --name myNodepool \
+ --enable-custom-ca-trust
+```
+
+## Configure an existing nodepool to use a custom CA
+
+To configure an existing nodepool to use a custom CA, run the [az aks nodepool update][az-aks-nodepool-update] command with the `--enable-custom-trust-ca` parameter.
+
+```azurecli
+az aks nodepool update \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name myNodepool \
+ --enable-custom-ca-trust
+```
+
+## Create a Kubernetes secret with your CA information
+
+Create a [Kubernetes secret][kubernetes-secrets] YAML manifest with your base64 encoded certificate string in the `data` field. Data from this secret is used to update CAs on all nodes.
+
+You must ensure that:
+* The secret is named `custom-ca-trust-secret`.
+* The secret is created in the `kube-system` namespace.
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: custom-ca-trust-secret
+ namespace: kube-system
+type: Opaque
+data:
+ ca1.crt: |
+ {base64EncodedCertStringHere}
+ ca2.crt: |
+ {anotherBase64EncodedCertStringHere}
+```
+
+To update or remove a CA, edit and apply the YAML manifest. The cluster will poll for changes and update the nodes accordingly. This process may take a couple of minutes before changes are applied.
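+
+As a minimal sketch, assuming a PEM certificate file named `ca1.crt` and the manifest saved as `custom-ca-trust-secret.yaml`, you could produce the encoded string and apply the manifest as follows:
+
+```bash
+# Produce the base64 encoded certificate string (GNU coreutils shown; flags differ on macOS)
+base64 -w 0 ca1.crt
+
+# After pasting the encoded string into the manifest's data field, apply it to the cluster
+kubectl apply -f custom-ca-trust-secret.yaml
+```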
+
+## Next steps
+
+For more information on AKS security best practices, see [Best practices for cluster security and upgrades in Azure Kubernetes Service (AKS)][aks-best-practices-security-upgrades].
+
+<!-- LINKS EXTERNAL -->
+[kubernetes-secrets]:https://kubernetes.io/docs/concepts/configuration/secret/
+
+<!-- LINKS INTERNAL -->
+[aks-best-practices-security-upgrades]: operator-best-practices-cluster-security.md
+[azure-cli-install]: /cli/azure/install-azure-cli
+[az-aks-create]: /cli/azure/aks#az-aks-create
+[az-aks-update]: /cli/azure/aks#az-aks-update
+[az-aks-nodepool-add]: /cli/azure/aks#az-aks-nodepool-add
+[az-aks-nodepool-update]: /cli/azure/aks#az-aks-update
+[az-extension-add]: /cli/azure/extension#az-extension-add
+[az-extension-update]: /cli/azure/extension#az-extension-update
+[az-feature-list]: /cli/azure/feature#az-feature-list
+[az-feature-register]: /cli/azure/feature#az-feature-register
+[az-provider-register]: /cli/azure/provider#az-provider-register
aks Custom Node Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/custom-node-configuration.md
Title: Customize the node configuration for Azure Kubernetes Service (AKS) node pools description: Learn how to customize the configuration on Azure Kubernetes Service (AKS) cluster nodes and node pools. + Last updated 12/03/2020
Add a new node pool specifying the Kubelet parameters using the JSON file you cr
az aks nodepool add --name mynodepool1 --cluster-name myAKSCluster --resource-group myResourceGroup --kubelet-config ./kubeletconfig.json ``` - ## Other configuration The settings below can be used to modify other Operating System settings.
The settings below can be used to modify other Operating System settings.
Pass the ```--message-of-the-day``` flag with the location of the file to replace the Message of the Day on Linux nodes at cluster creation or node pool creation. - #### Cluster creation+ ```azurecli az aks create --cluster-name myAKSCluster --resource-group myResourceGroup --message-of-the-day ./newMOTD.txt ``` #### Nodepool creation+ ```azurecli az aks nodepool add --name mynodepool1 --cluster-name myAKSCluster --resource-group myResourceGroup --message-of-the-day ./newMOTD.txt ```
+## Confirm settings have been applied
+After you have applied custom node configuration, you can confirm that the settings have been applied to the nodes by [connecting to the host][node-access] and verifying that the expected `sysctl` or filesystem configuration changes have been made.
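+
+For example, a minimal sketch using a `kubectl debug` session might look like the following; the node name and container image are placeholders for ones available in your environment.
+
+```bash
+# List the nodes, then start an interactive debug pod on the node you want to inspect
+kubectl get nodes -o wide
+kubectl debug node/aks-nodepool1-12345678-vmss000000 -it --image=mcr.microsoft.com/dotnet/runtime-deps:6.0
+
+# Inside the debug pod, chroot to the host filesystem and check the applied settings
+chroot /host
+sysctl net.core.somaxconn
+cat /etc/motd
+```
+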
## Next steps
az aks nodepool add --name mynodepool1 --cluster-name myAKSCluster --resource-gr
[aks-scale-apps]: tutorial-kubernetes-scale.md [aks-support-policies]: support-policies.md [aks-upgrade]: upgrade-cluster.md
+[node-access]: node-access.md
[aks-view-master-logs]: ../azure-monitor/containers/container-insights-log-query.md#enable-resource-logs [autoscaler-profile-properties]: #using-the-autoscaler-profile [azure-cli-install]: /cli/azure/install-azure-cli
aks Dapr Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr-overview.md
Title: Dapr extension for Azure Kubernetes Service (AKS) overview (preview)
+ Title: Dapr extension for Azure Kubernetes Service (AKS) overview
description: Learn more about using Dapr on your Azure Kubernetes Service (AKS) cluster to develop applications. Previously updated : 10/15/2021- Last updated : 05/03/2022+ # Dapr
The managed Dapr cluster extension is the easiest method to provision Dapr on an
When installing Dapr OSS via helm or the Dapr CLI, runtime versions and configuration options are the responsibility of developers and cluster maintainers.
-Lastly, the Dapr extension is an extension of AKS, therefore you can expect the same support policy as other AKS features that are currently in preview.
+Lastly, the Dapr extension is an extension of AKS, therefore you can expect the same support policy as other AKS features.
### How can I switch to using the Dapr extension if I've already installed Dapr via another method, such as Helm?
After learning about Dapr and some of the challenges it solves, try [Deploying a
<!-- Links External --> [dapr-docs]: https://docs.dapr.io/ [dapr-blocks]: https://docs.dapr.io/concepts/building-blocks-concept/
-[dapr-secrets-block]: https://docs.dapr.io/developing-applications/building-blocks/secrets/secrets-overview/
+[dapr-secrets-block]: https://docs.dapr.io/developing-applications/building-blocks/secrets/secrets-overview/
aks Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr.md
Title: Dapr extension for Azure Kubernetes Service (AKS) (preview)
-description: Install and configure Dapr on your Azure Kubernetes Service (AKS) cluster using the Dapr cluster extension.
+ Title: Dapr extension for Azure Kubernetes Service (AKS) and Arc-enabled Kubernetes
+description: Install and configure Dapr on your Azure Kubernetes Service (AKS) and Arc-enabled Kubernetes clusters using the Dapr cluster extension.
Previously updated : 10/15/2021- Last updated : 05/16/2022+
-# Dapr extension for Azure Kubernetes Service (AKS) (preview)
+# Dapr extension for Azure Kubernetes Service (AKS) and Arc-enabled Kubernetes
[Dapr](https://dapr.io/) is a portable, event-driven runtime that makes it easy for any developer to build resilient, stateless and stateful applications that run on the cloud and edge and embraces the diversity of languages and developer frameworks. Leveraging the benefits of a sidecar architecture, Dapr helps you tackle the challenges that come with building microservices and keeps your code platform agnostic. In particular, it helps with solving problems around services calling other services reliably and securely, building event-driven apps with pub-sub, and building applications that are portable across multiple cloud services and hosts (e.g., Kubernetes vs. a VM).
-By using the AKS Dapr extension to provision Dapr on your AKS cluster, you eliminate the overhead of downloading Dapr tooling and manually installing and managing the runtime on your AKS cluster. Additionally, the extension offers support for all [native Dapr configuration capabilities][dapr-configuration-options] through simple command-line arguments.
+By using the Dapr extension to provision Dapr on your AKS or Arc-enabled Kubernetes cluster, you eliminate the overhead of downloading Dapr tooling and manually installing and managing the runtime on your AKS cluster. Additionally, the extension offers support for all [native Dapr configuration capabilities][dapr-configuration-options] through simple command-line arguments.
> [!NOTE] > If you plan on installing Dapr in a Kubernetes production environment, please see the [Dapr guidelines for production usage][kubernetes-production] documentation page. ## How it works
-The AKS Dapr extension uses the Azure CLI to provision the Dapr control plane on your AKS cluster. This will create:
+The Dapr extension uses the Azure CLI to provision the Dapr control plane on your AKS or Arc-enabled Kubernetes cluster. This will create:
- **dapr-operator**: Manages component updates and Kubernetes services endpoints for Dapr (state stores, pub/subs, etc.) - **dapr-sidecar-injector**: Injects Dapr into annotated deployment pods and adds the environment variables `DAPR_HTTP_PORT` and `DAPR_GRPC_PORT` to enable user-defined applications to easily communicate with Dapr without hard-coding Dapr port values. - **dapr-placement**: Used for actors only. Creates mapping tables that map actor instances to pods - **dapr-sentry**: Manages mTLS between services and acts as a certificate authority. For more information read the [security overview][dapr-security].
-Once Dapr is installed on your AKS cluster, you can begin to develop using the Dapr building block APIs by [adding a few annotations][dapr-deployment-annotations] to your deployments. For a more in-depth overview of the building block APIs and how to best use them, please see the [Dapr building blocks overview][building-blocks-concepts].
+Once Dapr is installed on your cluster, you can begin to develop using the Dapr building block APIs by [adding a few annotations][dapr-deployment-annotations] to your deployments. For a more in-depth overview of the building block APIs and how to best use them, please see the [Dapr building blocks overview][building-blocks-concepts].
> [!WARNING]
-> If you install Dapr through the AKS extension, our recommendation is to continue using the extension for future management of Dapr instead of the Dapr CLI. Combining the two tools can cause conflicts and result in undesired behavior.
+> If you install Dapr through the AKS or Arc-enabled Kubernetes extension, our recommendation is to continue using the extension for future management of Dapr instead of the Dapr CLI. Combining the two tools can cause conflicts and result in undesired behavior.
-## Supported Kubernetes versions
+## Currently supported
-The Dapr extension uses support window similar to AKS, but instead of N-2, Dapr supports N-1. For more, see the [Kubernetes version support policy][k8s-version-support-policy].
-
-## Prerequisites
--- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.-- Install the latest version of the [Azure CLI](/cli/azure/install-azure-cli-windows) and the *aks-preview* extension.-- If you don't have one already, you need to create an [AKS cluster][deploy-cluster].
+### Dapr versions
+The Dapr versions supported by the extension vary depending on how you manage the runtime.
-### Register the `AKS-ExtensionManager` and `AKS-Dapr` preview features
+**Self-managed**
+For self-managed runtime, the Dapr extension supports:
+- [The latest version of Dapr and 1 previous version (N-1)][dapr-supported-version]
+- Upgrading minor version incrementally (for example, 1.5 -> 1.6 -> 1.7)
+Self-managed runtime requires manual upgrade to remain in the support window. To upgrade Dapr via the extension, follow the [Update extension instance instructions][update-extension].
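
For illustration only, a hedged sketch of what such an upgrade might look like with the `k8s-extension` CLI; the exact parameters are covered in the linked instructions, the extension instance name is whatever you chose at create time, and the version value here is a placeholder:

```azurecli
# Pin the Dapr extension to a specific supported version (placeholder shown)
az k8s-extension update --cluster-type managedClusters \
--cluster-name myAKSCluster \
--resource-group myResourceGroup \
--name dapr \
--auto-upgrade-minor-version false \
--version <dapr-version>
```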
-To create an AKS cluster that can use the Dapr extension, you must enable the `AKS-ExtensionManager` and `AKS-Dapr` feature flags on your subscription.
+**Auto-upgrade**
+Enabling auto-upgrade keeps your Dapr extension updated to the latest minor version. You may experience breaking changes between updates.
-Register the `AKS-ExtensionManager` and `AKS-Dapr` feature flags by using the [az feature register][az-feature-register] command, as shown in the following example:
+### Components
-```azurecli-interactive
-az feature register --namespace "Microsoft.ContainerService" --name "AKS-ExtensionManager"
-az feature register --namespace "Microsoft.ContainerService" --name "AKS-Dapr"
-```
+Both Azure and open-source components are supported. Alpha and beta components are supported on a best-effort basis.
-It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature list][az-feature-list] command:
+### Clouds/regions
-```azurecli-interactive
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/AKS-ExtensionManager')].{Name:name,State:properties.state}"
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/AKS-Dapr')].{Name:name,State:properties.state}"
-```
+The global Azure cloud is supported. For Arc-enabled clusters, support is limited to the regions listed in [Azure Products by Region][supported-cloud-regions].
-When ready, refresh the registration of the *Microsoft.KubernetesConfiguration* and *Microsoft.ContainerService* resource providers by using the [az provider register][az-provider-register] command:
+## Prerequisites
-```azurecli-interactive
-az provider register --namespace Microsoft.KubernetesConfiguration
-az provider register --namespace Microsoft.ContainerService
-```
+- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- Install the latest version of the [Azure CLI](/cli/azure/install-azure-cli-windows).
+- If you don't have one already, you need to create an [AKS cluster][deploy-cluster] or connect an [Arc-enabled Kubernetes cluster][arc-k8s-cluster].
### Set up the Azure CLI extension for cluster extensions
-You will also need the `k8s-extension` Azure CLI extension. Install this by running the following commands:
+You will need the `k8s-extension` Azure CLI extension. Install this by running the following commands:
```azurecli-interactive az extension add --name k8s-extension
If the `k8s-extension` extension is already installed, you can update it to the
az extension update --name k8s-extension ```
-## Create the extension and install Dapr on your AKS cluster
+## Create the extension and install Dapr on your AKS or Arc-enabled Kubernetes cluster
-> [!NOTE]
-> It is important that you use the flag `--cluster-type managedClusters` when installing the Dapr extension on your AKS cluster. Using `--cluster-type connectedClusters` is currently not supported.
+When installing the Dapr extension, use the flag value that corresponds to your cluster type:
+
+- **AKS cluster**: `--cluster-type managedClusters`.
+- **Arc-enabled Kubernetes cluster**: `--cluster-type connectedClusters`.
-Once your subscription is registered to use Kubernetes extensions, you can create the Dapr extension, which installs Dapr on your AKS cluster. For example:
+Create the Dapr extension, which installs Dapr on your AKS or Arc-enabled Kubernetes cluster. For example, for an AKS cluster:
```azure-cli-interactive az k8s-extension create --cluster-type managedClusters \
The below JSON is returned, and the error message is captured in the `message` p
], ```
+### Troubleshooting Dapr
+
+Troubleshoot Dapr errors via the [common Dapr issues and solutions guide][dapr-troubleshooting].
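
As a quick first check before working through that guide (assuming the extension's default `dapr-system` namespace), confirm the Dapr control-plane pods are running:

```bash
kubectl get pods --namespace dapr-system
```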
+ ## Delete the extension If you need to delete the extension and remove Dapr from your AKS cluster, you can use the following command:
az k8s-extension delete --resource-group myResourceGroup --cluster-name myAKSClu
[az-provider-register]: /cli/azure/provider#az-provider-register [sample-application]: ./quickstart-dapr.md [k8s-version-support-policy]: ./supported-kubernetes-versions.md?tabs=azure-cli#kubernetes-version-support-policy
+[arc-k8s-cluster]: ../azure-arc/kubernetes/quickstart-connect-cluster.md
+[update-extension]: ./cluster-extensions.md#update-extension-instance
<!-- LINKS EXTERNAL --> [kubernetes-production]: https://docs.dapr.io/operations/hosting/kubernetes/kubernetes-production
az k8s-extension delete --resource-group myResourceGroup --cluster-name myAKSClu
[sample-application]: https://github.com/dapr/quickstarts/tree/master/hello-kubernetes#step-2create-and-configure-a-state-store [dapr-security]: https://docs.dapr.io/concepts/security-concept/ [dapr-deployment-annotations]: https://docs.dapr.io/operations/hosting/kubernetes/kubernetes-overview/#adding-dapr-to-a-kubernetes-deployment
+[dapr-oss-support]: https://docs.dapr.io/operations/support/support-release-policy/
+[dapr-supported-version]: https://docs.dapr.io/operations/support/support-release-policy/#supported-versions
+[dapr-troubleshooting]: https://docs.dapr.io/operations/troubleshooting/common_issues/
+[supported-cloud-regions]: https://azure.microsoft.com/global-infrastructure/services/?products=azure-arc
aks Devops Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/devops-pipeline.md
After the pipeline run is finished, explore what happened and then go see your a
1. Select **View environment**.
-1. Select the instance if your app for the namespace you deployed to. If you stuck to the defaults we mentioned above, then it will be the **myapp** app in the **default** namespace.
+1. Select the instance of your app for the namespace you deployed to. If you stuck to the defaults we mentioned above, then it will be the **myapp** app in the **default** namespace.
1. Select the **Services** tab.
aks Draft https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/draft.md
+
+ Title: Draft extension for Azure Kubernetes Service (AKS) (preview)
+description: Install and use Draft on your Azure Kubernetes Service (AKS) cluster using the Draft extension.
++++ Last updated : 5/02/2022+++
+# Draft for Azure Kubernetes Service (AKS) (preview)
+
+[Draft](https://github.com/Azure/draft) is an open-source project that streamlines Kubernetes development by taking a non-containerized application and generating the Dockerfiles, Kubernetes manifests, Helm charts, Kustomize configurations, and other artifacts associated with a containerized application. Draft can also create a GitHub Action workflow file to quickly build and deploy applications onto any Kubernetes cluster.
+
+## How it works
+
+Draft has the following commands to help ease your development on Kubernetes:
+
+- **draft create**: Creates the Dockerfile and the proper manifest files.
+- **draft setup-gh**: Sets up your GitHub OIDC.
+- **draft generate-workflow**: Generates the GitHub Action workflow file for deployment onto your cluster.
+- **draft up**: Sets up your GitHub OIDC and generates a GitHub Action workflow file, combining the previous two commands.
+
+## Prerequisites
+
+- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- Install the latest version of the [Azure CLI](/cli/azure/install-azure-cli-windows) and the *aks-preview* extension.
+- If you don't have one already, you need to create an [AKS cluster][deploy-cluster].
+
+### Install the `aks-preview` Azure CLI extension
++
+```azurecli-interactive
+# Install the aks-preview extension
+az extension add --name aks-preview
+
+# Update the extension to make sure you have the latest version installed
+az extension update --name aks-preview
+```
+
+## Create artifacts using `draft create`
+
+To create a Dockerfile, Helm chart, Kubernetes manifest, or Kustomize files needed to deploy your application onto an AKS cluster, use the `draft create` command:
+
+```azure-cli-interactive
+az aks draft create
+```
+
+You can also run the command on a specific directory using the `--destination` flag:
+
+```azure-cli-interactive
+az aks draft create --destination /Workspaces/ContosoAir
+```
+
+## Set up GitHub OIDC using `draft setup-gh`
+
+To use Draft, you have to register your application with GitHub using `draft setup-gh`. This step only needs to be done once per repository.
+
+```azure-cli-interactive
+az aks draft setup-gh
+```
+
+## Generate a GitHub Action workflow file for deployment using `draft generate-workflow`
+
+After you create your artifacts and set up GitHub OIDC, you can generate a GitHub Action workflow file, creating an action that deploys your application onto your AKS cluster. Once your workflow file is generated, you must commit it into your repository in order to initiate the GitHub Action.
+
+```azure-cli-interactive
+az aks draft generate-workflow
+```
+
+You can also run the command on a specific directory using the `--destination` flag:
+
+```azure-cli-interactive
+az aks draft generate-workflow --destination /Workspaces/ContosoAir
+```
+
+## Set up GitHub OpenID Connect (OIDC) and generate a GitHub Action workflow file using `draft up`
+
+`draft up` is a single command to accomplish GitHub OIDC setup and generate a GitHub Action workflow file for deployment. It effectively combines the `draft setup-gh` and `draft generate-workflow` commands, meaning it's most commonly used when getting started in a new repository for the first time, and only needs to be run once. Subsequent updates to the GitHub Action workflow file can be made using `draft generate-workflow`.
+
+```azure-cli-interactive
+az aks draft up
+```
+
+You can also run the command on a specific directory using the `--destination` flag:
+
+```azure-cli-interactive
+az aks draft up --destination /Workspaces/ContosoAir
+```
+
+<!-- LINKS INTERNAL -->
+[deploy-cluster]: ./tutorial-kubernetes-deploy-cluster.md
+[az-feature-register]: /cli/azure/feature#az-feature-register
+[az-feature-list]: /cli/azure/feature#az-feature-list
+[az-provider-register]: /cli/azure/provider#az-provider-register
+[sample-application]: ./quickstart-dapr.md
+[k8s-version-support-policy]: ./supported-kubernetes-versions.md?tabs=azure-cli#kubernetes-version-support-policy
+[web-app-routing]: web-app-routing.md
+[az-extension-add]: /cli/azure/extension#az-extension-add
+[az-extension-update]: /cli/azure/extension#az-extension-update
aks Gpu Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/gpu-cluster.md
Title: Use GPUs on Azure Kubernetes Service (AKS)
description: Learn how to use GPUs for high performance compute or graphics-intensive workloads on Azure Kubernetes Service (AKS) + Last updated 08/06/2021- #Customer intent: As a cluster administrator or developer, I want to create an AKS cluster that can use high-performance GPU-based VMs for compute-intensive workloads.
For information on using Azure Kubernetes Service with Azure Machine Learning, s
[aks-spark]: spark-job.md [gpu-skus]: ../virtual-machines/sizes-gpu.md [install-azure-cli]: /cli/azure/install-azure-cli
-[azureml-aks]: ../machine-learning/how-to-deploy-azure-kubernetes-service.md
+[azureml-aks]: ../machine-learning/v1/how-to-deploy-azure-kubernetes-service.md
[azureml-gpu]: ../machine-learning/how-to-deploy-inferencing-gpus.md [azureml-triton]: ../machine-learning/how-to-deploy-with-triton.md [aks-container-insights]: monitor-aks.md#container-insights
aks Howto Deploy Java Liberty App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-liberty-app.md
export PASSWORD=${PASSWORD}
export DB_SERVER_NAME=<Server name>.database.windows.net export DB_PORT_NUMBER=1433 export DB_NAME=<Database name>
-export DB_USER=<Server admin login>@<Database name>
+export DB_USER=<Server admin login>@<Server name>
export DB_PASSWORD=<Server admin password>
-export NAMESPACE=${OPERATOR_NAMESPACE}
+export NAMESPACE=default
mvn clean install ```
aks Http Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/http-proxy.md
description: Use the HTTP proxy configuration feature for Azure Kubernetes Servi
Previously updated : 09/09/2021 Last updated : 05/23/2022
-# HTTP proxy support in Azure Kubernetes Service (preview)
+# HTTP proxy support in Azure Kubernetes Service
Azure Kubernetes Service (AKS) clusters, whether deployed into a managed or custom virtual network, have certain outbound dependencies necessary to function properly. Previously, in environments requiring internet access to be routed through HTTP proxies, this was a problem. Nodes had no way of bootstrapping the configuration, environment variables, and certificates necessary to access internet services.
This feature adds HTTP proxy support to AKS clusters, exposing a straightforward
Some more complex solutions may require creating a chain of trust to establish secure communications across the network. The feature also enables installation of a trusted certificate authority onto the nodes as part of bootstrapping a cluster. - ## Limitations and other details The following scenarios are **not** supported:
By default, *httpProxy*, *httpsProxy*, and *trustedCa* have no value.
## Prerequisites * An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free).
-* [Azure CLI installed](/cli/azure/install-azure-cli).
-
-### Install the `aks-preview` Azure CLI
-
-You also need the *aks-preview* Azure CLI extension version 0.5.25 or later. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. Or install any available updates by using the [az extension update][az-extension-update] command.
-
-```azurecli-interactive
-# Install the aks-preview extension
-az extension add --name aks-preview
-# Update the extension to make sure you have the latest version installed
-az extension update --name aks-preview
-```
-
-### Register the `HTTPProxyConfigPreview` preview feature
-
-To use the feature, you must also enable the `HTTPProxyConfigPreview` feature flag on your subscription.
-
-Register the `HTTPProxyConfigPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
-
-```azurecli-interactive
-az feature register --namespace "Microsoft.ContainerService" --name "HTTPProxyConfigPreview"
-```
-
-It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature list][az-feature-list] command:
-
-```azurecli-interactive
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/HTTPProxyConfigPreview')].{Name:name,State:properties.state}"
-```
-
-When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
-
-```azurecli-interactive
-az provider register --namespace Microsoft.ContainerService
-```
+* Latest version of [Azure CLI installed](/cli/azure/install-azure-cli).
## Configuring an HTTP proxy using Azure CLI
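
As a hedged sketch of what this might look like (the *httpProxy*, *httpsProxy*, and *trustedCa* fields are the settings named above; the `noProxy` list, the file name, and all URL values are illustrative assumptions):

```bash
# Create a proxy configuration file (all values are placeholders)
cat > aks-proxy-config.json <<'EOF'
{
  "httpProxy": "http://myproxy.example.com:8080/",
  "httpsProxy": "https://myproxy.example.com:8080/",
  "noProxy": ["localhost", "127.0.0.1"],
  "trustedCa": "<base64-encoded-proxy-CA-certificate>"
}
EOF

# Pass the file at cluster creation time
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --http-proxy-config aks-proxy-config.json
```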
aks Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/integrations.md
Title: Add-ons, extensions, and other integrations with Azure Kubernetes Service
description: Learn about the add-ons, extensions, and open-source integrations you can use with Azure Kubernetes Service. + Last updated 02/22/2022
The below table shows a few examples of open-source and third-party integrations
| [Grafana][grafana] | An open-source dashboard for observability. | [Deploy Grafana on Kubernetes][grafana-install] | | [Couchbase][couchdb] | A distributed NoSQL cloud database. | [Install Couchbase and the Operator on AKS][couchdb-install] | | [OpenFaaS][open-faas]| An open-source framework for building serverless functions by using containers. | [Use OpenFaaS with AKS][open-faas-aks] |
-| [Apache Spark][apache-spark] | An open source, fast engine for large-scale data processing. | [Run an Apache Spark job with AKS][spark-job] |
+| [Apache Spark][apache-spark] | An open source, fast engine for large-scale data processing. | Running Apache Spark jobs requires a minimum node size of *Standard_D3_v2*. See [running Spark on Kubernetes][spark-kubernetes] for more details on running Spark jobs on Kubernetes. |
| [Istio][istio] | An open-source service mesh. | [Istio Installation Guides][istio-install] | | [Linkerd][linkerd] | An open-source service mesh. | [Linkerd Getting Started][linkerd-install] | | [Consul][consul] | An open source, identity-based networking solution. | [Getting Started with Consul Service Mesh for Kubernetes][consul-install] |
The below table shows a few examples of open-source and third-party integrations
[open-faas]: https://www.openfaas.com/ [open-faas-aks]: openfaas.md [apache-spark]: https://spark.apache.org/
-[spark-job]: spark-job.md
-[azure-ml-overview]: ../machine-learning/how-to-attach-arc-kubernetes.md
+[azure-ml-overview]: ../machine-learning/how-to-attach-kubernetes-anywhere.md
+[spark-kubernetes]: https://spark.apache.org/docs/latest/running-on-kubernetes.html
[dapr-overview]: ./dapr.md [gitops-overview]: ../azure-arc/kubernetes/conceptual-gitops-flux2.md
aks Keda https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda.md
+
+ Title: KEDA add-on on Azure Kubernetes Service (AKS) (Preview)
+description: Use the KEDA add-on to deploy a managed KEDA instance on Azure Kubernetes Service (AKS).
++++ Last updated : 05/24/2021+++
+# Simplified application autoscaling with Kubernetes Event-driven Autoscaling (KEDA) add-on (Preview)
+
+Kubernetes Event-driven Autoscaling (KEDA) is a single-purpose and lightweight component that strives to make application autoscaling simple and is a CNCF Incubation project.
+
+The KEDA add-on makes it even easier by deploying a managed KEDA installation, providing you with [a rich catalog of 40+ KEDA scalers](https://keda.sh/docs/latest/scalers/) that you can use to scale your applications on your Azure Kubernetes Service (AKS) cluster.
++
+## KEDA add-on overview
+
+[KEDA][keda] provides two main components:
+
+- **KEDA operator** allows end-users to scale workloads in/out from 0 to N instances with support for Kubernetes Deployments, Jobs, StatefulSets or any custom resource that defines `/scale` subresource.
+- **Metrics server** exposes external metrics to HPA in Kubernetes for autoscaling purposes such as messages in a Kafka topic, or number of events in an Azure event hub. Due to upstream limitations, this must be the only installed metric adapter.
+
+## Prerequisites
+
+> [!NOTE]
+> KEDA is currently only available in the `westcentralus` region.
+
+- An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free).
+- [Azure CLI installed](/cli/azure/install-azure-cli).
+
+### Register the `AKS-KedaPreview` feature flag
+
+To use the KEDA add-on, you must enable the `AKS-KedaPreview` feature flag on your subscription.
+
+```azurecli
+az feature register --name AKS-KedaPreview --namespace Microsoft.ContainerService
+```
+
+You can check on the registration status by using the `az feature list` command:
+
+```azurecli-interactive
+az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/AKS-KedaPreview')].{Name:name,State:properties.state}"
+```
+
+When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the `az provider register` command:
+
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService
+```
+
+## Deploy the KEDA add-on with Azure Resource Manager (ARM) templates
+
+The KEDA add-on can be enabled by deploying an AKS cluster with an Azure Resource Manager template and specifying the `workloadAutoScalerProfile` field:
+
+```json
+ "workloadAutoScalerProfile": {
+ "keda": {
+ "enabled": true
+ }
+ }
+```
+
+## Connect to your AKS cluster
+
+To connect to the Kubernetes cluster from your local computer, you use [kubectl][kubectl], the Kubernetes command-line client.
+
+If you use the Azure Cloud Shell, `kubectl` is already installed. You can also install it locally using the [az aks install-cli][az aks install-cli] command:
+
+```azurecli
+az aks install-cli
+```
+
+To configure `kubectl` to connect to your Kubernetes cluster, use the [az aks get-credentials][az aks get-credentials] command. The following example gets credentials for the AKS cluster named *MyAKSCluster* in the *MyResourceGroup*:
+
+```azurecli
+az aks get-credentials --resource-group MyResourceGroup --name MyAKSCluster
+```
+
+## Example deployment
+
+The following snippet is a sample deployment that creates a cluster with KEDA enabled and a single node pool consisting of three `Standard_D2S_v5` nodes.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "resources": [
+ {
+ "apiVersion": "2022-05-02-preview",
+ "dependsOn": [],
+ "type": "Microsoft.ContainerService/managedClusters",
+ "location": "westcentralus",
+ "name": "myAKSCluster",
+ "properties": {
+ "kubernetesVersion": "1.23.5",
+ "enableRBAC": true,
+ "dnsPrefix": "myAKSCluster",
+ "agentPoolProfiles": [
+ {
+ "name": "agentpool",
+ "osDiskSizeGB": 200,
+ "count": 3,
+ "enableAutoScaling": false,
+ "vmSize": "Standard_D2S_v5",
+ "osType": "Linux",
+ "storageProfile": "ManagedDisks",
+ "type": "VirtualMachineScaleSets",
+ "mode": "System",
+ "maxPods": 110,
+ "availabilityZones": [],
+ "nodeTaints": [],
+ "enableNodePublicIP": false
+ }
+ ],
+ "networkProfile": {
+ "loadBalancerSku": "standard",
+ "networkPlugin": "kubenet"
+ },
+ "workloadAutoScalerProfile": {
+ "keda": {
+ "enabled": true
+ }
+ }
+ },
+ "identity": {
+ "type": "SystemAssigned"
+ }
+ }
+ ]
+}
+```
+
+## Use KEDA
+
+KEDA scaling will only work once a custom resource definition (CRD) has been defined. To learn more about KEDA CRDs, follow the official [KEDA documentation][keda-scalers] to define your scaler.
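
For example, once you have authored a scaler definition following the KEDA documentation (the file name below is hypothetical), applying and checking it uses standard kubectl commands:

```bash
# Apply a ScaledObject authored per the KEDA docs
kubectl apply -f my-scaledobject.yaml

# Confirm KEDA has registered it
kubectl get scaledobjects
```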
+
+## Clean up
+
+To remove the resource group and all related resources, use the [az group delete][az-group-delete] command:
+
+```azurecli
+az group delete --name MyResourceGroup
+```
+
+<!-- LINKS - internal -->
+[az-aks-create]: /cli/azure/aks#az-aks-create
+[az aks install-cli]: /cli/azure/aks#az-aks-install-cli
+[az aks get-credentials]: /cli/azure/aks#az-aks-get-credentials
+[az aks update]: /cli/azure/aks#az-aks-update
+[az-group-delete]: /cli/azure/group#az-group-delete
+
+<!-- LINKS - external -->
+[kubectl]: https://kubernetes.io/docs/user-guide/kubectl
+[keda]: https://keda.sh/
+[keda-scalers]: https://keda.sh/docs/scalers/
aks Kubernetes Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/kubernetes-action.md
description: Learn how to use GitHub Actions to deploy your container to Kubern
Previously updated : 01/05/2022 Last updated : 05/16/2022
For a workflow targeting AKS, the file has three sections:
|Section |Tasks | |||
-|**Authentication** | Login to a private container registry (ACR) |
+|**Authentication** | Generate deployment credentials. |
|**Build** | Build & push the container image | |**Deploy** | 1. Set the target AKS cluster | | |2. Create a generic/docker-registry secret in Kubernetes cluster |
For a workflow targeting AKS, the file has three sections:
## Create a service principal
+# [Service principal](#tab/userlevel)
+ You can create a [service principal](../active-directory/develop/app-objects-and-service-principals.md#service-principal-object) by using the [az ad sp create-for-rbac](/cli/azure/ad/sp#az-ad-sp-create-for-rbac) command in the [Azure CLI](/cli/azure/). You can run this command using [Azure Cloud Shell](https://shell.azure.com/) in the Azure portal or by selecting the **Try it** button. ```azurecli-interactive
In the above command, replace the placeholders with your subscription ID, and re
``` Copy this JSON object, which you can use to authenticate from GitHub.
+# [OpenID Connect](#tab/openid)
+
+OpenID Connect is an authentication method that uses short-lived tokens. Setting up [OpenID Connect with GitHub Actions](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect) is a more complex process that offers hardened security.
+
+1. If you do not have an existing application, register a [new Active Directory application and service principal that can access resources](../active-directory/develop/howto-create-service-principal-portal.md). Create the Active Directory application.
+
+ ```azurecli-interactive
+ az ad app create --display-name myApp
+ ```
+
+ This command will output JSON with an `appId` that is your `client-id`. Save the value to use as the `AZURE_CLIENT_ID` GitHub secret later.
+
+ You'll use the `objectId` value when creating federated credentials with Graph API and reference it as the `APPLICATION-OBJECT-ID`.
+
+1. Create a service principal. Replace `$appId` with the `appId` from your JSON output.
+
+ This command generates JSON output with a different `objectId`, which will be used in the next step. The new `objectId` is the `assignee-object-id`.
+
+ Copy the `appOwnerTenantId` to use as a GitHub secret for `AZURE_TENANT_ID` later.
+
+ ```azurecli-interactive
+ az ad sp create --id $appId
+ ```
+
+1. Create a new role assignment by subscription and object. By default, the role assignment will be tied to your default subscription. Replace `$subscriptionId` with your subscription ID, `$resourceGroupName` with your resource group name, and `$assigneeObjectId` with the generated `assignee-object-id`. Learn [how to manage Azure subscriptions with the Azure CLI](/cli/azure/manage-azure-subscriptions-azure-cli).
+
+ ```azurecli-interactive
+ az role assignment create --role contributor --subscription $subscriptionId --assignee-object-id $assigneeObjectId --scopes /subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.Web/sites/ --assignee-principal-type ServicePrincipal
+ ```
+
+1. Run the following command to [create a new federated identity credential](/graph/api/application-post-federatedidentitycredentials?view=graph-rest-beta&preserve-view=true) for your active directory application.
+
+ * Replace `APPLICATION-OBJECT-ID` with the **objectId (generated while creating app)** for your Active Directory application.
+ * Set a value for `CREDENTIAL-NAME` to reference later.
+ * Set the `subject`. The value of this is defined by GitHub depending on your workflow:
+ * Jobs in your GitHub Actions environment: `repo:< Organization/Repository >:environment:< Name >`
+ * For Jobs not tied to an environment, include the ref path for branch/tag based on the ref path used for triggering the workflow: `repo:< Organization/Repository >:ref:< ref path>`. For example, `repo:n-username/ node_express:ref:refs/heads/my-branch` or `repo:n-username/ node_express:ref:refs/tags/my-tag`.
+ * For workflows triggered by a pull request event: `repo:< Organization/Repository >:pull_request`.
+
+ ```azurecli
+ az rest --method POST --uri 'https://graph.microsoft.com/beta/applications/<APPLICATION-OBJECT-ID>/federatedIdentityCredentials' --body '{"name":"<CREDENTIAL-NAME>","issuer":"https://token.actions.githubusercontent.com","subject":"repo:organization/repository:ref:refs/heads/main","description":"Testing","audiences":["api://AzureADTokenExchange"]}'
+ ```
+
+To learn how to create an Active Directory application, service principal, and federated credentials in the Azure portal, see [Connect GitHub and Azure](/azure/developer/github/connect-from-azure#use-the-azure-login-action-with-openid-connect).
++++ ## Configure the GitHub secrets
+# [Service principal](#tab/userlevel)
+ Follow the steps to configure the secrets: 1. In [GitHub](https://github.com/), browse to your repository, select **Settings > Secrets > New repository secret**.
Follow the steps to configure the secrets:
:::image type="content" source="media/kubernetes-action/kubernetes-secrets.png" alt-text="Screenshot shows existing secrets for a repository.":::
+# [OpenID Connect](#tab/openid)
+
+You need to provide your application's **Client ID**, **Tenant ID**, and **Subscription ID** to the login action. These values can either be provided directly in the workflow or can be stored in GitHub secrets and referenced in your workflow. Saving the values as GitHub secrets is the more secure option.
+
+1. Open your GitHub repository and go to **Settings**.
+
+1. Select **Settings > Secrets > New secret**.
+
+1. Create secrets for `AZURE_CLIENT_ID`, `AZURE_TENANT_ID`, and `AZURE_SUBSCRIPTION_ID`. Use these values from your Active Directory application for your GitHub secrets:
+
+ |GitHub Secret | Active Directory Application |
+ |||
+ |AZURE_CLIENT_ID | Application (client) ID |
+ |AZURE_TENANT_ID | Directory (tenant) ID |
+ |AZURE_SUBSCRIPTION_ID | Subscription ID |
+
+1. Similarly, define the following additional secrets for the container registry credentials and set them in the Docker login action.
+
+ - REGISTRY_USERNAME
+ - REGISTRY_PASSWORD
++ ## Build a container image and deploy to Azure Kubernetes Service cluster
Before you can deploy to AKS, you'll need to set target Kubernetes namespace and
Complete your deployment with the `azure/k8s-deploy@v1` action. Replace the environment variables with values for your application.
+# [Service principal](#tab/userlevel)
+ ```yaml on: [push]
jobs:
namespace: ${{ env.NAMESPACE }} ```
+# [OpenID Connect](#tab/openid)
+
+The Azure Kubernetes Service set context action ([azure/aks-set-context](https://github.com/Azure/aks-set-context)) can be used to set the cluster context before other actions like [k8s-deploy](https://github.com/Azure/k8s-deploy). For OpenID Connect, you'll use the Azure Login action before setting the cluster context.
+
+```yaml
+
+on: [push]
+
+# Environment variables available to all jobs and steps in this workflow
+env:
+ REGISTRY_NAME: {registry-name}
+ CLUSTER_NAME: {cluster-name}
+ CLUSTER_RESOURCE_GROUP: {resource-group-name}
+ NAMESPACE: {namespace-name}
+ SECRET: {secret-name}
+ APP_NAME: {app-name}
+
+jobs:
+ build:
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@main
+
+ # Connect to Azure Container Registry (ACR)
+ - uses: azure/docker-login@v1
+ with:
+ login-server: ${{ env.REGISTRY_NAME }}.azurecr.io
+ username: ${{ secrets.REGISTRY_USERNAME }}
+ password: ${{ secrets.REGISTRY_PASSWORD }}
+
+ # Container build and push to an Azure Container Registry (ACR)
+ - run: |
+ docker build . -t ${{ env.REGISTRY_NAME }}.azurecr.io/${{ env.APP_NAME }}:${{ github.sha }}
+ docker push ${{ env.REGISTRY_NAME }}.azurecr.io/${{ env.APP_NAME }}:${{ github.sha }}
+ working-directory: ./<path-to-Dockerfile-directory>
+
+ - uses: azure/login@v1
+ with:
+ client-id: ${{ secrets.AZURE_CLIENT_ID }}
+ tenant-id: ${{ secrets.AZURE_TENANT_ID }}
+ subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
+
+ # Set the target Azure Kubernetes Service (AKS) cluster.
+ - uses: azure/aks-set-context@v2.0
+ with:
+ cluster-name: ${{ env.CLUSTER_NAME }}
+ resource-group: ${{ env.CLUSTER_RESOURCE_GROUP }}
+
+ # Create the namespace if it doesn't exist
+ - run: |
+ kubectl create namespace ${{ env.NAMESPACE }} --dry-run=client -o json | kubectl apply -f -
+
+ # Create image pull secret for ACR
+ - uses: azure/k8s-create-secret@v1
+ with:
+ container-registry-url: ${{ env.REGISTRY_NAME }}.azurecr.io
+ container-registry-username: ${{ secrets.REGISTRY_USERNAME }}
+ container-registry-password: ${{ secrets.REGISTRY_PASSWORD }}
+ secret-name: ${{ env.SECRET }}
+ namespace: ${{ env.NAMESPACE }}
+ arguments: --force true
+
+ # Deploy app to AKS
+ - uses: azure/k8s-deploy@v1
+ with:
+ manifests: |
+ ${{ github.workspace }}/manifests/deployment.yaml
+ ${{ github.workspace }}/manifests/service.yaml
+ images: |
+ ${{ env.REGISTRY_NAME }}.azurecr.io/${{ env.APP_NAME }}:${{ github.sha }}
+ imagepullsecrets: |
+ ${{ env.SECRET }}
+ namespace: ${{ env.NAMESPACE }}
+```
+++ ## Clean up resources When your Kubernetes cluster, container registry, and repository are no longer needed, clean up the resources you deployed by deleting the resource group and your GitHub repository.
aks Quick Windows Container Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-windows-container-deploy-cli.md
Title: Create a Windows Server container on an AKS cluster by using Azure CLI
description: Learn how to quickly create a Kubernetes cluster, deploy an application in a Windows Server container in Azure Kubernetes Service (AKS) using the Azure CLI. + Last updated 04/29/2022-- #Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy a Windows Server container so that I can see how to run applications running on a Windows Server container using the managed Kubernetes service in Azure.
This article assumes a basic understanding of Kubernetes concepts. For more info
- This article requires version 2.0.64 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. -- The identity you are using to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md).
+- The identity you're using to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md).
- If you have multiple Azure subscriptions, select the appropriate subscription ID in which the resources should be billed using the [Az account](/cli/azure/account) command.
The following additional limitations apply to Windows Server node pools:
## Create a resource group
-An Azure resource group is a logical group in which Azure resources are deployed and managed. When you create a resource group, you are asked to specify a location. This location is where resource group metadata is stored, it is also where your resources run in Azure if you don't specify another region during resource creation. Create a resource group using the [az group create][az-group-create] command.
+An Azure resource group is a logical group in which Azure resources are deployed and managed. When you create a resource group, you're asked to specify a location. This location is where resource group metadata is stored. It's also where your resources run in Azure if you don't specify another region during resource creation. Create a resource group using the [az group create][az-group-create] command.
The following example creates a resource group named *myResourceGroup* in the *eastus* location. > [!NOTE] > This article uses Bash syntax for the commands in this tutorial.
-> If you are using Azure Cloud Shell, ensure that the dropdown in the upper-left of the Cloud Shell window is set to **Bash**.
+> If you're using Azure Cloud Shell, ensure that the dropdown in the upper-left of the Cloud Shell window is set to **Bash**.
```azurecli-interactive az group create --name myResourceGroup --location eastus
az aks create \
After a few minutes, the command completes and returns JSON-formatted information about the cluster. Occasionally the cluster can take longer than a few minutes to provision. Allow up to 10 minutes in these cases.
-## Add a Windows Server node pool
+## Add a Windows Server 2019 node pool
By default, an AKS cluster is created with a node pool that can run Linux containers. Use `az aks nodepool add` command to add an additional node pool that can run Windows Server containers alongside the Linux node pool.
az aks nodepool add \
The above command creates a new node pool named *npwin* and adds it to the *myAKSCluster*. The above command also uses the default subnet in the default vnet created when running `az aks create`.
+## Add a Windows Server 2022 node pool (preview)
+
+When you create a Windows node pool, the default operating system is Windows Server 2019. To use Windows Server 2022 nodes, you will need to specify an OS SKU type of `Windows2022`.
++
+### Install the `aks-preview` Azure CLI
+
+You also need the *aks-preview* Azure CLI extension version `0.5.68` or later. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command, or install any available updates by using the [az extension update][az-extension-update] command.
+
+```azurecli-interactive
+# Install the aks-preview extension
+az extension add --name aks-preview
+# Update the extension to make sure you have the latest version installed
+az extension update --name aks-preview
+```
+
+### Register the `AKSWindows2022Preview` preview feature
+
+To use the feature, you must also enable the `AKSWindows2022Preview` feature flag on your subscription.
+
+Register the `AKSWindows2022Preview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
+
+```azurecli-interactive
+az feature register --namespace "Microsoft.ContainerService" --name "AKSWindows2022Preview"
+```
+
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature list][az-feature-list] command:
+
+```azurecli-interactive
+az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/AKSWindows2022Preview')].{Name:name,State:properties.state}"
+```
+
+When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
+
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService
+```
+
+Use `az aks nodepool add` command to add a Windows Server 2022 node pool:
+
+```azurecli
+az aks nodepool add \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --os-type Windows \
+ --os-sku Windows2022 \
+ --name npwin \
+ --node-count 1
+```
+ ## Optional: Using `containerd` with Windows Server node pools Beginning in Kubernetes version 1.20 and greater, you can specify `containerd` as the container runtime for Windows Server 2019 node pools. From Kubernetes 1.23, containerd will be the default container runtime for Windows.
Beginning in Kubernetes version 1.20 and greater, you can specify `containerd` a
### Add a Windows Server node pool with `containerd`
-Use the `az aks nodepool add` command to add an additional node pool that can run Windows Server containers with the `containerd` runtime.
+Use the `az aks nodepool add` command to add a node pool that can run Windows Server containers with the `containerd` runtime.
> [!NOTE] > If you do not specify the *WindowsContainerRuntime=containerd* custom header, the node pool will use Docker as the container runtime.
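
For example, a sketch of adding such a node pool with the custom header (the pool name and VM size here are illustrative):

```azurecli
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --os-type Windows \
    --name npwcd \
    --node-vm-size Standard_D4s_v3 \
    --aks-custom-headers WindowsContainerRuntime=containerd \
    --node-count 1
```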
aks Node Auto Repair https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-auto-repair.md
If AKS identifies an unhealthy node that remains unhealthy for 10 minutes, AKS t
1. Reboot the node. 1. If the reboot is unsuccessful, reimage the node.
-1. If the reimage is unsuccessful, redploy the node.
+1. If the reimage is unsuccessful, redeploy the node.
Alternative remediations are investigated by AKS engineers if auto-repair is unsuccessful.
aks Quickstart Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/quickstart-dapr.md
Title: Deploy an application with the Dapr cluster extension (preview) for Azure Kubernetes Service (AKS)
-description: Use the Dapr cluster extension (Preview) for Azure Kubernetes Service (AKS) to deploy an application
+ Title: Deploy an application with the Dapr cluster extension for Azure Kubernetes Service (AKS)
+description: Use the Dapr cluster extension for Azure Kubernetes Service (AKS) to deploy an application
Previously updated : 11/01/2021- Last updated : 05/03/2022+
-# Quickstart: Deploy an application using the Dapr cluster extension (preview) for Azure Kubernetes Service (AKS)
+# Quickstart: Deploy an application using the Dapr cluster extension for Azure Kubernetes Service (AKS)
In this quickstart, you will get familiar with using the [Dapr cluster extension][dapr-overview] in an AKS cluster. You will be deploying a hello world example, consisting of a Python application that generates messages and a Node application that consumes and persists them. - ## Prerequisites * An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free).
cd quickstarts/hello-kubernetes
## Create and configure a state store
-Dapr can use a number of different state stores (Redis, CosmosDB, DynamoDB, Cassandra, etc.) to persist and retrieve state. For this example, we will use Redis.
+Dapr can use a number of different state stores (Redis, Cosmos DB, DynamoDB, Cassandra, etc.) to persist and retrieve state. For this example, we will use Redis.
### Create a Redis store
aks Release Tracker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/release-tracker.md
+
+ Title: AKS release tracker
+description: Learn how to determine which Azure regions have the weekly AKS release deployments rolled out in real time.
++ Last updated : 05/24/2022++++
+# AKS release tracker
+
+AKS releases weekly rounds of fixes and feature and component updates that affect all clusters and customers. However, these releases can take up to two weeks to roll out to all regions from the initial time of shipping due to Azure Safe Deployment Practices (SDP). It's important for customers to know when a particular AKS release reaches their region, and the AKS release tracker provides these details in real time by version and region.
+
+## Why release tracker?
+
+With AKS release tracker, customers can follow specific component updates present in an AKS version release, such as fixes shipped to a core add-on. In addition to providing real-time updates of region release status, the tracker also links to the specific version of the AKS [release notes][aks-release] to help customers identify which instance of the release is relevant to them. As the data is updated in real time, customers can track the entire SDP process with a single tool.
+
+## How to use the release tracker
+
+The top half of the tracker shows the latest and three previously available release versions for each region, and links to the corresponding release notes entry. This view is helpful when you want to track the available versions by region.
++
+The bottom half of the tracker shows the SDP process. The table has two views: one shows the latest version and status update for each grouping of regions and the other shows the status and region availability of each currently supported version.
++
+<!-- LINKS - external -->
+[aks-release]: https://github.com/Azure/AKS/releases
aks Spark Job https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/spark-job.md
- Title: Run an Apache Spark job with Azure Kubernetes Service (AKS)
-description: Use Azure Kubernetes Service (AKS) to create and run an Apache Spark job for large-scale data processing.
- Previously updated : 10/18/2019---
-# Running Apache Spark jobs on AKS
-
-[Apache Spark][apache-spark] is a fast engine for large-scale data processing. As of the [Spark 2.3.0 release][spark-kubernetes-earliest-version], Apache Spark supports native integration with Kubernetes clusters. Azure Kubernetes Service (AKS) is a managed Kubernetes environment running in Azure. This document details preparing and running Apache Spark jobs on an Azure Kubernetes Service (AKS) cluster.
-
-## Prerequisites
-
-In order to complete the steps within this article, you need the following.
-
-* Basic understanding of Kubernetes and [Apache Spark][spark-quickstart].
-* [Docker Hub][docker-hub] account, or an [Azure Container Registry][acr-create].
-* Azure CLI [installed][azure-cli] on your development system.
-* [JDK 8][java-install] installed on your system.
-* [Apache Maven][maven-install] installed on your system.
-* SBT ([Scala Build Tool][sbt-install]) installed on your system.
-* Git command-line tools installed on your system.
-
-## Create an AKS cluster
-
-Spark is used for large-scale data processing and requires that Kubernetes nodes are sized to meet the Spark resources requirements. We recommend a minimum size of `Standard_D3_v2` for your Azure Kubernetes Service (AKS) nodes.
-
-If you need an AKS cluster that meets this minimum recommendation, run the following commands.
-
-Create a resource group for the cluster.
-
-```azurecli
-az group create --name mySparkCluster --location eastus
-```
-
-Create a Service Principal for the cluster. After it is created, you will need the Service Principal appId and password for the next command.
-
-```azurecli
-az ad sp create-for-rbac --name SparkSP --role Contributor --scopes /subscriptions/mySubscriptionID
-```
-
-Create the AKS cluster with nodes that are of size `Standard_D3_v2`, and values of appId and password passed as service-principal and client-secret parameters.
-
-```azurecli
-az aks create --resource-group mySparkCluster --name mySparkCluster --node-vm-size Standard_D3_v2 --generate-ssh-keys --service-principal <APPID> --client-secret <PASSWORD>
-```
-
-Connect to the AKS cluster.
-
-```azurecli
-az aks get-credentials --resource-group mySparkCluster --name mySparkCluster
-```
-
-If you are using Azure Container Registry (ACR) to store container images, configure authentication between AKS and ACR. See the [ACR authentication documentation][acr-aks] for these steps.
-
-## Build the Spark source
-
-Before running Spark jobs on an AKS cluster, you need to build the Spark source code and package it into a container image. The Spark source includes scripts that can be used to complete this process.
-
-Clone the Spark project repository to your development system.
-
-```bash
-git clone -b branch-2.4 https://github.com/apache/spark
-```
-
-Change into the directory of the cloned repository and save the path of the Spark source to a variable.
-
-```bash
-cd spark
-sparkdir=$(pwd)
-```
-
-If you have multiple JDK versions installed, set `JAVA_HOME` to use version 8 for the current session.
-
-```bash
-export JAVA_HOME=`/usr/libexec/java_home -d 64 -v "1.8*"`
-```
-
-Run the following command to build the Spark source code with Kubernetes support.
-
-```bash
-./build/mvn -Pkubernetes -DskipTests clean package
-```
-
-The following commands create the Spark container image and push it to a container image registry. Replace `registry.example.com` with the name of your container registry and `v1` with the tag you prefer to use. If using Docker Hub, this value is the registry name. If using Azure Container Registry (ACR), this value is the ACR login server name.
-
-```bash
-REGISTRY_NAME=registry.example.com
-REGISTRY_TAG=v1
-```
-
-```bash
-./bin/docker-image-tool.sh -r $REGISTRY_NAME -t $REGISTRY_TAG build
-```
-
-Push the container image to your container image registry.
-
-```bash
-./bin/docker-image-tool.sh -r $REGISTRY_NAME -t $REGISTRY_TAG push
-```
-
-## Prepare a Spark job
-
-Next, prepare a Spark job. A jar file is used to hold the Spark job and is needed when running the `spark-submit` command. The jar can be made accessible through a public URL or pre-packaged within a container image. In this example, a sample jar is created to calculate the value of Pi. This jar is then uploaded to Azure storage. If you have an existing jar, feel free to substitute it.
-
-Create a directory where you would like to create the project for a Spark job.
-
-```bash
-mkdir myprojects
-cd myprojects
-```
-
-Create a new Scala project from a template.
-
-```bash
-sbt new sbt/scala-seed.g8
-```
-
-When prompted, enter `SparkPi` for the project name.
-
-```bash
-name [Scala Seed Project]: SparkPi
-```
-
-Navigate to the newly created project directory.
-
-```bash
-cd sparkpi
-```
-
-Run the following commands to add an SBT plugin, which allows packaging the project as a jar file.
-
-```bash
-touch project/assembly.sbt
-echo 'addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.10")' >> project/assembly.sbt
-```
-
-Run these commands to copy the sample code into the newly created project and add all necessary dependencies.
-
-```bash
-EXAMPLESDIR="src/main/scala/org/apache/spark/examples"
-mkdir -p $EXAMPLESDIR
-cp $sparkdir/examples/$EXAMPLESDIR/SparkPi.scala $EXAMPLESDIR/SparkPi.scala
-
-cat <<EOT >> build.sbt
-// https://mvnrepository.com/artifact/org.apache.spark/spark-sql
-libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.3.0" % "provided"
-EOT
-
-sed -ie 's/scalaVersion.*/scalaVersion := "2.11.11"/' build.sbt
-sed -ie 's/name.*/name := "SparkPi",/' build.sbt
-```
-
-To package the project into a jar, run the following command.
-
-```bash
-sbt assembly
-```
-
-After successful packaging, you should see output similar to the following.
-
-```bash
-[info] Packaging /Users/me/myprojects/sparkpi/target/scala-2.11/SparkPi-assembly-0.1.0-SNAPSHOT.jar ...
-[info] Done packaging.
-[success] Total time: 10 s, completed Mar 6, 2018 11:07:54 AM
-```
-
-## Copy job to storage
-
-Create an Azure storage account and container to hold the jar file.
-
-```azurecli
-RESOURCE_GROUP=sparkdemo
-STORAGE_ACCT=sparkdemo$RANDOM
-az group create --name $RESOURCE_GROUP --location eastus
-az storage account create --resource-group $RESOURCE_GROUP --name $STORAGE_ACCT --sku Standard_LRS
-export AZURE_STORAGE_CONNECTION_STRING=`az storage account show-connection-string --resource-group $RESOURCE_GROUP --name $STORAGE_ACCT -o tsv`
-```
-
-Upload the jar file to the Azure storage account with the following commands.
-
-```azurecli
-CONTAINER_NAME=jars
-BLOB_NAME=SparkPi-assembly-0.1.0-SNAPSHOT.jar
-FILE_TO_UPLOAD=target/scala-2.11/SparkPi-assembly-0.1.0-SNAPSHOT.jar
-
-echo "Creating the container..."
-az storage container create --name $CONTAINER_NAME
-az storage container set-permission --name $CONTAINER_NAME --public-access blob
-
-echo "Uploading the file..."
-az storage blob upload --container-name $CONTAINER_NAME --file $FILE_TO_UPLOAD --name $BLOB_NAME
-
-jarUrl=$(az storage blob url --container-name $CONTAINER_NAME --name $BLOB_NAME | tr -d '"')
-```
-
-Variable `jarUrl` now contains the publicly accessible path to the jar file.
-
-## Submit a Spark job
-
-Start kube-proxy in a separate command-line with the following code.
-
-```bash
-kubectl proxy
-```
-
-Navigate back to the root of the Spark repository.
-
-```bash
-cd $sparkdir
-```
-
-Create a service account that has sufficient permissions for running a job.
-
-```bash
-kubectl create serviceaccount spark
-kubectl create clusterrolebinding spark-role --clusterrole=edit --serviceaccount=default:spark --namespace=default
-```
-
-Submit the job using `spark-submit`.
-
-```bash
-./bin/spark-submit \
- --master k8s://http://127.0.0.1:8001 \
- --deploy-mode cluster \
- --name spark-pi \
- --class org.apache.spark.examples.SparkPi \
- --conf spark.executor.instances=3 \
- --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
- --conf spark.kubernetes.container.image=$REGISTRY_NAME/spark:$REGISTRY_TAG \
- $jarUrl
-```
-
-This operation starts the Spark job, which streams job status to your shell session. While the job is running, you can see the Spark driver pod and executor pods by using the `kubectl get pods` command. Open a second terminal session to run these commands.
-
-```console
-kubectl get pods
-```
-
-```output
-NAME READY STATUS RESTARTS AGE
-spark-pi-2232778d0f663768ab27edc35cb73040-driver 1/1 Running 0 16s
-spark-pi-2232778d0f663768ab27edc35cb73040-exec-1 0/1 Init:0/1 0 4s
-spark-pi-2232778d0f663768ab27edc35cb73040-exec-2 0/1 Init:0/1 0 4s
-spark-pi-2232778d0f663768ab27edc35cb73040-exec-3 0/1 Init:0/1 0 4s
-```
-
-While the job is running, you can also access the Spark UI. In the second terminal session, use the `kubectl port-forward` command to provide access to the Spark UI.
-
-```bash
-kubectl port-forward spark-pi-2232778d0f663768ab27edc35cb73040-driver 4040:4040
-```
-
-To access the Spark UI, open the address `127.0.0.1:4040` in a browser.
-
-![Spark UI](media/aks-spark-job/spark-ui.png)
-
-## Get job results and logs
-
-After the job has finished, the driver pod will be in a "Completed" state. Get the name of the pod with the following command.
-
-```bash
-kubectl get pods --show-all
-```
-
-Output:
-
-```output
-NAME READY STATUS RESTARTS AGE
-spark-pi-2232778d0f663768ab27edc35cb73040-driver 0/1 Completed 0 1m
-```
-
-Use the `kubectl logs` command to get logs from the spark driver pod. Replace the pod name with your driver pod's name.
-
-```bash
-kubectl logs spark-pi-2232778d0f663768ab27edc35cb73040-driver
-```
-
-Within these logs, you can see the result of the Spark job, which is the value of Pi.
-
-```output
-Pi is roughly 3.152155760778804
-```
-
-## Package jar with container image
-
-In the above example, the Spark jar file was uploaded to Azure storage. Another option is to package the jar file into custom-built Docker images.
-
-To do so, find the Dockerfile for the Spark image in the `$sparkdir/resource-managers/kubernetes/docker/src/main/dockerfiles/spark/` directory. Add an `ADD` statement for the Spark job jar between the `WORKDIR` and `ENTRYPOINT` declarations.
-
-Update the jar path to the location of the `SparkPi-assembly-0.1.0-SNAPSHOT.jar` file on your development system. You can also use your own custom jar file.
-
-```dockerfile
-WORKDIR /opt/spark/work-dir
-
-ADD /path/to/SparkPi-assembly-0.1.0-SNAPSHOT.jar SparkPi-assembly-0.1.0-SNAPSHOT.jar
-
-ENTRYPOINT [ "/opt/entrypoint.sh" ]
-```
-
-Build and push the image with the included Spark scripts.
-
-```bash
-./bin/docker-image-tool.sh -r <your container repository name> -t <tag> build
-./bin/docker-image-tool.sh -r <your container repository name> -t <tag> push
-```
-
-When you run the job, instead of specifying a remote jar URL, you can use the `local://` scheme with the path to the jar file in the Docker image.
-
-```bash
-./bin/spark-submit \
- --master k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port> \
- --deploy-mode cluster \
- --name spark-pi \
- --class org.apache.spark.examples.SparkPi \
- --conf spark.executor.instances=3 \
- --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
- --conf spark.kubernetes.container.image=<spark-image> \
- local:///opt/spark/work-dir/<your-jar-name>.jar
-```
-
-> [!WARNING]
-> From Spark [documentation][spark-docs]: "The Kubernetes scheduler is currently experimental. In future versions, there may be behavioral changes around configuration, container images and entrypoints".
-
-## Next steps
-
-Check out Spark documentation for more details.
-
-> [!div class="nextstepaction"]
-> [Spark documentation][spark-docs]
-
-<!-- LINKS - external -->
-[apache-spark]: https://spark.apache.org/
-[docker-hub]: https://docs.docker.com/docker-hub/
-[java-install]: /azure/developer/java/fundamentals/java-support-on-azure
-[maven-install]: https://maven.apache.org/install.html
-[sbt-install]: https://www.scala-sbt.org/1.x/docs/Setup.html
-[spark-docs]: https://spark.apache.org/docs/latest/running-on-kubernetes.html
-[spark-kubernetes-earliest-version]: https://spark.apache.org/releases/spark-release-2-3-0.html
-[spark-quickstart]: https://spark.apache.org/docs/latest/quick-start.html
--
-<!-- LINKS - internal -->
-[acr-aks]: cluster-container-registry-integration.md
-[acr-create]: ../container-registry/container-registry-get-started-azure-cli.md
-[aks-quickstart]: ./index.yml
-[azure-cli]: /cli/azure/
-[storage-account]: ../storage/blobs/storage-quickstart-blobs-cli.md
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md
Last updated 08/09/2021 -+ # Supported Kubernetes versions in Azure Kubernetes Service (AKS)
Each number in the version indicates general compatibility with the previous ver
Aim to run the latest patch release of the minor version you're running. For example, your production cluster is on **`1.17.7`**. **`1.17.8`** is the latest available patch version available for the *1.17* series. You should upgrade to **`1.17.8`** as soon as possible to ensure your cluster is fully patched and supported.
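As an illustrative sketch (the resource group and cluster names are assumptions), you can list the versions available to your cluster and then upgrade to the latest patch of the minor version you're running:

```azurecli
# List the Kubernetes versions available for upgrade on the cluster
az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster --output table

# Upgrade to the latest patch release of the current minor version, for example 1.17.8
az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.17.8
```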
-## Kubernetes version alias (Preview)
-
+## Alias minor version
> [!NOTE]
-> Kubernetes version alias requires Azure CLI version 2.31.0 or above with the aks-preview extension installed. Please use `az upgrade` to install the latest version of the CLI.
-
-You will need the *aks-preview* Azure CLI extension version 0.5.49 or greater. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. Or install any available updates by using the [az extension update][az-extension-update] command.
+> Alias minor version requires Azure CLI version 2.31.0 or above. Use `az upgrade` to install the latest version of the CLI.
-```azurecli-interactive
-# Install the aks-preview extension
-az extension add --name aks-preview
-
-# Update the extension to make sure you have the latest version installed
-az extension update --name aks-preview
-```
+Azure Kubernetes Service allows you to create a cluster without specifying the exact patch version. When you create a cluster without designating a patch, the cluster will run the minor version's latest GA patch. For example, if you create a cluster with **`1.21`**, your cluster will be running **`1.21.7`**, which is the latest GA patch version of *1.21*.
-Azure Kubernetes Service allows for you to create a cluster without specifiying the exact patch version. When creating a cluster without specifying a patch, the cluster will run the minor version's latest patch. For example, if you create a cluster with **`1.21`**, your cluster will be running **`1.21.7`**, which is the latest patch version of *1.21*.
+When upgrading by alias minor version, only a higher minor version is supported. For example, upgrading from `1.14.x` to `1.14` will not trigger an upgrade to the latest GA `1.14` patch, but upgrading to `1.15` will trigger an upgrade to the latest GA `1.15` patch.
To see what patch you are on, run the `az aks show --resource-group myResourceGroup --name myAKSCluster` command. The property `currentKubernetesVersion` shows the whole Kubernetes version.
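For example, a minimal sketch (resource group and cluster names are assumptions) of creating a cluster with only the alias minor version and then checking the patch version it resolved to:

```azurecli
# Create a cluster by specifying only the minor version; the latest GA patch is used
az aks create --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.21

# Show the full patch version the cluster is running
az aks show --resource-group myResourceGroup --name myAKSCluster --query currentKubernetesVersion -o tsv
```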
aks Tutorial Kubernetes Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-upgrade-cluster.md
description: In this Azure Kubernetes Service (AKS) tutorial, you learn how to u
Last updated 05/24/2021---+ #Customer intent: As a developer or IT pro, I want to learn how to upgrade an Azure Kubernetes Service (AKS) cluster so that I can use the latest version of Kubernetes and features.
To minimize disruption to running applications, AKS nodes are carefully cordoned
1. When the new node is ready and joined to the cluster, the Kubernetes scheduler begins to run pods on it. 1. The old node is deleted, and the next node in the cluster begins the cordon and drain process. + ### [Azure CLI](#tab/azure-cli) Use the [az aks upgrade][] command to upgrade the AKS cluster.
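A hedged sketch of that command, assuming the tutorial's *myResourceGroup* and *myAKSCluster* names and a `KUBERNETES_VERSION` value obtained earlier:

```azurecli
# Upgrade the cluster control plane and nodes to the target version
az aks upgrade \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --kubernetes-version $KUBERNETES_VERSION
```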
aks Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/upgrade-cluster.md
Title: Upgrade an Azure Kubernetes Service (AKS) cluster
description: Learn how to upgrade an Azure Kubernetes Service (AKS) cluster to get the latest features and security updates. + Last updated 12/17/2020- # Upgrade an Azure Kubernetes Service (AKS) cluster
With a list of available versions for your AKS cluster, use the [az aks upgrade]
- This process repeats until all nodes in the cluster have been upgraded. - At the end of the process, the last buffer node will be deleted, maintaining the existing agent node count and zone balance. + ```azurecli-interactive az aks upgrade \ --resource-group myResourceGroup \
aks Use Multiple Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-multiple-node-pools.md
Title: Use multiple node pools in Azure Kubernetes Service (AKS)
description: Learn how to create and manage multiple node pools for a cluster in Azure Kubernetes Service (AKS) + Last updated 05/16/2022- # Create and manage multiple node pools for a cluster in Azure Kubernetes Service (AKS)
The following example output shows that *mynodepool* has been successfully creat
> [!TIP] > If no *VmSize* is specified when you add a node pool, the default size is *Standard_D2s_v3* for Windows node pools and *Standard_DS2_v2* for Linux node pools. If no *OrchestratorVersion* is specified, it defaults to the same version as the control plane.
+### Add an ARM64 node pool (preview)
+
+The ARM64 processor provides low power compute for your Kubernetes workloads. To create an ARM64 node pool, you will need to choose an [ARM capable instance SKU][arm-vm-sku].
++
+#### Install the `aks-preview` Azure CLI
+
+You also need the *aks-preview* Azure CLI extension version 0.5.23 or later. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. Or install any available updates by using the [az extension update][az-extension-update] command.
+
+```azurecli-interactive
+# Install the aks-preview extension
+az extension add --name aks-preview
+# Update the extension to make sure you have the latest version installed
+az extension update --name aks-preview
+```
+
+#### Register the `AKSARM64Preview` preview feature
+
+To use the feature, you must also enable the `AKSARM64Preview` feature flag on your subscription.
+
+Register the `AKSARM64Preview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
+
+```azurecli-interactive
+az feature register --namespace "Microsoft.ContainerService" --name "AKSARM64Preview"
+```
+
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature list][az-feature-list] command:
+
+```azurecli-interactive
+az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/AKSARM64Preview')].{Name:name,State:properties.state}"
+```
+
+When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
+
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService
+```
+
+Use the `az aks nodepool add` command to add an ARM64 node pool.
+
+```azurecli
+az aks nodepool add \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name armpool \
+ --node-count 3 \
+ --node-vm-size Standard_D2pds_v5
+```
+ ### Add a node pool with a unique subnet A workload may require splitting a cluster's nodes into separate pools for logical isolation. This isolation can be supported with separate subnets dedicated to each node pool in the cluster. This can address requirements such as having non-contiguous virtual network address space to split across node pools.
A workload may require splitting a cluster's nodes into separate pools for logic
#### Limitations
-* All subnets assigned to nodepools must belong to the same virtual network.
+* All subnets assigned to node pools must belong to the same virtual network.
* System pods must have access to all nodes and pods in the cluster to provide critical functionality, such as DNS resolution and tunneling kubectl logs/exec/port-forward proxy.
* If you expand your virtual network after creating the cluster, you must update your cluster (perform any managed cluster operation; node pool operations don't count) before adding a subnet outside the original CIDR range. AKS now returns an error on the agent pool add, although it was originally allowed. If you don't know how to reconcile your cluster, file a support ticket.
* In clusters with Kubernetes version < 1.23.3, kube-proxy will SNAT traffic from new subnets, which can cause Azure Network Policy to drop the packets.
-* Windows nodes will SNAT traffic to the new subnets until the nodepool is reimaged.
+* Windows nodes will SNAT traffic to the new subnets until the node pool is reimaged.
* Internal load balancers default to one of the node pool subnets (usually the first subnet of the node pool at cluster creation). To override this behavior, you can [specify the load balancer's subnet explicitly using an annotation][internal-lb-different-subnet]. To create a node pool with a dedicated subnet, pass the subnet resource ID as an additional parameter when creating a node pool.
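As a sketch, assuming the subnet's resource ID is stored in a `SUBNET_ID` variable and the resource group, cluster, and pool names are illustrative:

```azurecli
# Add a node pool that places its nodes in a dedicated subnet
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name mynodepool \
    --node-count 3 \
    --vnet-subnet-id $SUBNET_ID
```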
AKS offers a separate feature to automatically scale node pools with a feature c
If you no longer need a pool, you can delete it and remove the underlying VM nodes. To delete a node pool, use the [az aks nodepool delete][az-aks-nodepool-delete] command and specify the node pool name. The following example deletes the *mynodepool* created in the previous steps: > [!CAUTION]
-> When you delete a node pool, AKS doesn't perform cordon and drain, and there are no recovery options for data loss that may occur when you delete a node pool. If pods can't be scheduled on other node pools, those applications become unavailable. Make sure you don't delete a node pool when in-use applications don't have data backups or the ability to run on other node pools in your cluster. To minimize the disruption of rescheduling pods currently running on the node pool you are going to delete, perform a cordon and drain on all nodes in the node pool before deleting. For more details, see [cordon and drain node pools][cordon-and-drain].
+> When you delete a node pool, AKS doesn't perform cordon and drain, and there are no recovery options for data loss that may occur when you delete a node pool. If pods can't be scheduled on other node pools, those applications become unavailable. Make sure you don't delete a node pool when in-use applications don't have data backups or the ability to run on other node pools in your cluster. To minimize the disruption of rescheduling pods currently running on the node pool you are going to delete, perform a cordon and drain on all nodes in the node pool before deleting. For more information, see [cordon and drain node pools][cordon-and-drain].
```azurecli-interactive az aks nodepool delete -g myResourceGroup --cluster-name myAKSCluster --name mynodepool --no-wait
Only pods that have this toleration applied can be scheduled on nodes in *taintn
### Setting nodepool labels
-For more details on using labels with node pools, see [Use labels in an Azure Kubernetes Service (AKS) cluster][use-labels].
+For more information on using labels with node pools, see [Use labels in an Azure Kubernetes Service (AKS) cluster][use-labels].
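For example, a minimal sketch of adding a node pool with labels (the pool name and label values here are assumptions):

```azurecli
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name labelnp \
    --node-count 1 \
    --labels dept=IT costcenter=9000
```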
### Setting nodepool Azure tags
-For more details on using Azure tags with node pools, see [Use Azure tags in Azure Kubernetes Service (AKS)][use-tags].
+For more information on using Azure tags with node pools, see [Use Azure tags in Azure Kubernetes Service (AKS)][use-tags].
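Similarly, a sketch of adding a node pool with Azure tags applied to its nodes (names and tag values are assumptions):

```azurecli
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name tagnp \
    --node-count 1 \
    --tags dept=IT costcenter=9000
```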
## Add a FIPS-enabled node pool
-The Federal Information Processing Standard (FIPS) 140-2 is a US government standard that defines minimum security requirements for cryptographic modules in information technology products and systems. AKS allows you to create Linux-based node pools with FIPS 140-2 enabled. Deployments running on FIPS-enabled node pools can use those cryptographic modules to provide increased security and help meet security controls as part of FedRAMP compliance. For more details on FIPS 140-2, see [Federal Information Processing Standard (FIPS) 140-2][fips].
+The Federal Information Processing Standard (FIPS) 140-2 is a US government standard that defines minimum security requirements for cryptographic modules in information technology products and systems. AKS allows you to create Linux-based node pools with FIPS 140-2 enabled. Deployments running on FIPS-enabled node pools can use those cryptographic modules to provide increased security and help meet security controls as part of FedRAMP compliance. For more information on FIPS 140-2, see [Federal Information Processing Standard (FIPS) 140-2][fips].
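A minimal sketch of adding a FIPS-enabled Linux node pool with the `--enable-fips-image` parameter (the pool name is an assumption):

```azurecli
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name fipsnp \
    --enable-fips-image
```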
### Prerequisites
To create and use Windows Server container node pools, see [Create a Windows Ser
Use [proximity placement groups][reduce-latency-ppg] to reduce latency for your AKS applications. <!-- EXTERNAL LINKS -->
+[arm-vm-sku]: https://azure.microsoft.com/updates/public-preview-arm64based-azure-vms-can-deliver-up-to-50-better-priceperformance/
[kubernetes-drain]: https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/ [kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get [kubectl-taint]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#taint
aks Windows Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/windows-faq.md
AKS uses Windows Server 2019 as the host OS version and only supports process is
## Is Kubernetes different on Windows and Linux?
-Windows Server node pool support includes some limitations that are part of the upstream Windows Server in Kubernetes project. These limitations are not specific to AKS. For more information on the upstream support for Windows Server in Kubernetes, see the [Supported functionality and limitations][upstream-limitations] section of the [Intro to Windows support in Kubernetes][intro-windows] document, from the Kubernetes project.
+Windows Server node pool support includes some limitations that are part of the upstream Windows Server in Kubernetes project. These limitations are not specific to AKS. For more information on the upstream support from the Kubernetes project, see the [Supported functionality and limitations][upstream-limitations] section of the [Intro to Windows support in Kubernetes][intro-windows] document.
Historically, Kubernetes is Linux-focused. Many examples used in the upstream [Kubernetes.io][kubernetes] website are intended for use on Linux nodes. When you create deployments that use Windows Server containers, the following considerations at the OS level apply:
Historically, Kubernetes is Linux-focused. Many examples used in the upstream [K
## What kind of disks are supported for Windows?
-Azure Disks and Azure Files are the supported volume types. These are accessed as NTFS volumes in the Windows Server container.
+Azure Disks and Azure Files are the supported volume types, and are accessed as NTFS volumes in the Windows Server container.
## Can I run Windows only clusters in AKS?
The master nodes (the control plane) in an AKS cluster are hosted by the AKS ser
## How do I patch my Windows nodes?
-To get the latest patches for Windows nodes, you can either [upgrade the node pool][nodepool-upgrade] or [upgrade the node image][upgrade-node-image]. Windows Updates are not enabled on nodes in AKS. AKS releases new node pool images as soon as patches are available, and it's the user's responsibility to upgrade node pools to stay current on patches and hotfixes. This is also true for the Kubernetes version being used. [AKS release notes][aks-release-notes] indicate when new versions are available. For more information on upgrading the entire Windows Server node pool, see [Upgrade a node pool in AKS][nodepool-upgrade]. If you're only interested in updating the node image, see [AKS node image upgrades][upgrade-node-image].
+To get the latest patches for Windows nodes, you can either [upgrade the node pool][nodepool-upgrade] or [upgrade the node image][upgrade-node-image]. Windows Updates are not enabled on nodes in AKS. AKS releases new node pool images as soon as patches are available, and it's the user's responsibility to upgrade node pools to stay current on patches and hotfixes. This patch process is also true for the Kubernetes version being used. [AKS release notes][aks-release-notes] indicate when new versions are available. For more information on upgrading the Windows Server node pool, see [Upgrade a node pool in AKS][nodepool-upgrade]. If you're only interested in updating the node image, see [AKS node image upgrades][upgrade-node-image].
> [!NOTE] > The updated Windows Server image will only be used if a cluster upgrade (control plane upgrade) has been performed prior to upgrading the node pool.
->
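For example, a sketch of both options for a Windows node pool (the pool name *npwin* and the version placeholder are assumptions):

```azurecli
# Upgrade the node pool to a newer Kubernetes version, which also refreshes the node image
az aks nodepool upgrade --resource-group myResourceGroup --cluster-name myAKSCluster --name npwin --kubernetes-version <KUBERNETES_VERSION>

# Or upgrade only the node image, keeping the current Kubernetes version
az aks nodepool upgrade --resource-group myResourceGroup --cluster-name myAKSCluster --name npwin --node-image-only
```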
## What network plug-ins are supported?
-AKS clusters with Windows node pools must use the Azure Container Networking Interface (Azure CNI) (advanced) networking model. Kubenet (basic) networking is not supported. For more information on the differences in network models, see [Network concepts for applications in AKS][azure-network-models]. The Azure CNI network model requires additional planning and consideration for IP address management. For more information on how to plan and implement Azure CNI, see [Configure Azure CNI networking in AKS][configure-azure-cni].
+AKS clusters with Windows node pools must use the Azure Container Networking Interface (Azure CNI) (advanced) networking model. Kubenet (basic) networking is not supported. For more information on the differences in network models, see [Network concepts for applications in AKS][azure-network-models]. The Azure CNI network model requires extra planning and consideration for IP address management. For more information on how to plan and implement Azure CNI, see [Configure Azure CNI networking in AKS][configure-azure-cni].
Windows nodes on AKS clusters also have [Direct Server Return (DSR)][dsr] enabled by default when Calico is enabled.
The AKS cluster can have a maximum of 100 node pools. You can have a maximum of
## What can I name my Windows node pools?
-Keep names to a maximum of six characters. This is the current limitation of AKS.
+A Windows node pool name can have a maximum of six characters.
## Are all features supported with Windows nodes?
A cluster with Windows nodes can have approximately 500 services before it encou
Yes. Azure Hybrid Benefit for Windows Server reduces operating costs by letting you bring your on-premises Windows Server license to AKS Windows nodes.
-Azure Hybrid Benefit can be used on your entire AKS cluster or on individual nodes. For individual nodes, you need to browse to the [node resource group][resource-groups] and apply the Azure Hybrid Benefit to the nodes directly. For more information on applying Azure Hybrid Benefit to individual nodes, see [Azure Hybrid Benefit for Windows Server][hybrid-vms].
+Azure Hybrid Benefit can be used on your entire AKS cluster or on individual nodes. For individual nodes, you need to browse to the [node resource group][resource-groups] and apply the Azure Hybrid Benefit to the nodes directly. For more information on applying Azure Hybrid Benefit to individual nodes, see [Azure Hybrid Benefit for Windows Server][hybrid-vms].
-To use Azure Hybrid Benefit on a new AKS cluster, use the `--enable-ahub` argument.
+To use Azure Hybrid Benefit on a new AKS cluster, run the `az aks create` command and use the `--enable-ahub` argument.
```azurecli az aks create \
az aks create \
--enable-ahub ```
-To use Azure Hybrid Benefit on an existing AKS cluster, update the cluster by using the `--enable-ahub` argument.
+To use Azure Hybrid Benefit on an existing AKS cluster, run the `az aks update` command with the `--enable-ahub` argument.
```azurecli az aks update \
az aks update \
--enable-ahub ```
-To check if Azure Hybrid Benefit is set on the cluster, use the following command:
+To check if Azure Hybrid Benefit is set on the Windows nodes in the cluster, run the `az vmss show` command with the `--name` and `--resource-group` arguments to query the virtual machine scale set. To identify the resource group the scale set for the Windows node pool is created in, you can run the `az vmss list -o table` command.
```azurecli
-az vmss show --name myAKSCluster --resource-group MC_CLUSTERNAME
+az vmss show --name myScaleSet --resource-group MC_<resourceGroup>_<clusterName>_<region>
```
-If the cluster has Azure Hybrid Benefit enabled, the output of `az vmss show` will be similar to the following:
+If the Windows nodes in the scale set have Azure Hybrid Benefit enabled, the output of `az vmss show` will be similar to the following:
```console
-"platformFaultDomainCount": 1,
- "provisioningState": "Succeeded",
- "proximityPlacementGroup": null,
- "resourceGroup": "MC_CLUSTERNAME"
+""hardwareProfile": null,
+ "licenseType": "Windows_Server",
+ "networkProfile": {
+ "healthProbe": null,
+ "networkApiVersion": null,
``` ## How do I change the time zone of a running container?
To see the current time zone of the running container or an available list of ti
Although maintaining session affinity from client connections to pods with Windows containers will be supported in the Windows Server 2022 OS version, you achieve session affinity by client IP currently by limiting your desired pod to run a single instance per node and configuring your Kubernetes service to direct traffic to the pod on the local node.
-Use the following configuration:
+Use the following configuration:
1. Use an AKS cluster running a minimum version of 1.20. 1. Constrain your pod to allow only one instance per Windows node. You can achieve this by using anti-affinity in your deployment configuration.
Use the following configuration:
## What if I need a feature that's not supported?
-If you encounter feature gaps, the open-source, upstream [aks-engine][aks-engine] project provides an easy and fully customizable way of running Kubernetes in Azure, including Windows support. For more information, see [AKS roadmap][aks-roadmap].
+If you encounter feature gaps, the open-source [aks-engine][aks-engine] project provides an easy and fully customizable way of running Kubernetes in Azure, including Windows support. For more information, see [AKS roadmap][aks-roadmap].
## Next steps
api-management Api Management Access Restriction Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-access-restriction-policies.md
For more information and examples of this policy, see [Advanced request throttli
<rate-limit-by-key calls="number" renewal-period="seconds" increment-condition="condition"
+ increment-count="number"
counter-key="key value" retry-after-header-name="header name" retry-after-variable-name="policy expression variable name" remaining-calls-header-name="header name" remaining-calls-variable-name="policy expression variable name"
In the following example, the rate limit of 10 calls per 60 seconds is keyed by
| calls | The maximum total number of calls allowed during the time interval specified in the `renewal-period`. Policy expression is allowed. | Yes | N/A | | counter-key | The key to use for the rate limit policy. | Yes | N/A | | increment-condition | The boolean expression specifying if the request should be counted towards the rate (`true`). | No | N/A |
+| increment-count | The number by which the counter is increased per request. | No | 1 |
| renewal-period | The length in seconds of the sliding window during which the number of allowed requests should not exceed the value specified in `calls`. Policy expression is allowed. Maximum allowed value: 300 seconds. | Yes | N/A | | retry-after-header-name | The name of a response header whose value is the recommended retry interval in seconds after the specified call rate is exceeded. | No | N/A | | retry-after-variable-name | The name of a policy expression variable that stores the recommended retry interval in seconds after the specified call rate is exceeded. | No | N/A |
This policy can be used in the following policy [sections](./api-management-howt
- **Policy sections:** inbound - **Policy scopes:** all scopes
api-management Api Management Advanced Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-advanced-policies.md
Title: Azure API Management advanced policies | Microsoft Docs description: Reference for the advanced policies available for use in Azure API Management. Provides policy usage, settings and examples. - Previously updated : 03/07/2022+ Last updated : 04/28/2022
This article provides a reference for advanced API Management policies, such as
- [Control flow](api-management-advanced-policies.md#choose) - Conditionally applies policy statements based on the results of the evaluation of Boolean [expressions](api-management-policy-expressions.md). - [Forward request](#ForwardRequest) - Forwards the request to the backend service.
+- [Include fragment](#IncludeFragment) - Inserts a policy fragment in the policy definition.
- [Limit concurrency](#LimitConcurrency) - Prevents enclosed policies from executing by more than the specified number of requests at a time. - [Log to event hub](#log-to-eventhub) - Sends messages in the specified format to an event hub defined by a Logger entity. - [Emit metrics](#emit-metrics) - Sends custom metrics to Application Insights at execution.
This policy can be used in the following policy [sections](./api-management-howt
- **Policy sections:** backend - **Policy scopes:** all scopes
+## <a name="IncludeFragment"></a> Include fragment
+
+The `include-fragment` policy inserts the contents of a previously created [policy fragment](policy-fragments.md) in the policy definition. A policy fragment is a centrally managed, reusable XML policy snippet that can be included in policy definitions in your API Management instance.
+
+The policy inserts the policy fragment as-is at the location you select in the policy definition.
+
+### Policy statement
+
+```xml
+<include-fragment fragment-id="fragment" />
+```
+
+### Example
+
+In the following example, the policy fragment named *myFragment* is added in the inbound section of a policy definition.
+
+```xml
+<inbound>
+ <include-fragment fragment-id="myFragment" />
+ <base />
+</inbound>
+[...]
+```
+
+## Elements
+
+| Element | Description | Required |
+| -- | - | -- |
+| include-fragment | Root element. | Yes |
+
+### Attributes
+
+| Attribute | Description | Required | Default |
+| | -- | -- | - |
+| fragment-id | A string. Expression allowed. Specifies the identifier (name) of a policy fragment created in the API Management instance. | Yes | N/A |
+
+### Usage
+
+This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
+
+- **Policy sections:** inbound, outbound, backend, on-error
+
+- **Policy scopes:** all scopes
+ ## <a name="LimitConcurrency"></a> Limit concurrency The `limit-concurrency` policy prevents enclosed policies from executing by more than the specified number of requests at any time. When that number is exceeded, new requests will fail immediately with the `429` Too Many Requests status code.
The `limit-concurrency` policy prevents enclosed policies from executing by more
</limit-concurrency> ```
-### Examples
-
-#### Example
+### Example
The following example demonstrates how to limit number of requests forwarded to a backend based on the value of a context variable.
api-management Api Management Howto Configure Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-configure-notifications.md
To view and configure a notification template in the portal:
## Configure email settings
-You can modify general e-mail settings for notifications that are sent from your API Management instance. You can change the administrator email address, the name of the organization sending notification, and the originating email address.
+You can modify general email settings for notifications that are sent from your API Management instance. You can change the administrator email address, the name of the organization sending notifications, and the originating email address.
To modify email settings:
api-management Api Management Howto Create Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-create-groups.md
Once the association is added between the developer and the group, you can view
* Once a developer is added to a group, they can view and subscribe to the products associated with that group. For more information, see [How create and publish a product in Azure API Management][How create and publish a product in Azure API Management], * In addition to creating and managing groups in the Azure portal, you can create and manage your groups using the API Management REST API [Group](/rest/api/apimanagement/apimanagementrest/azure-api-management-rest-api-group-entity) entity.
+* Learn how to manage the administrator [email settings](api-management-howto-configure-notifications.md#configure-email-settings) that are used in notifications to developers from your API Management instance.
+ [Create a group]: #create-group [Associate a group with a product]: #associate-group-product
api-management Api Management Howto Oauth2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-oauth2.md
Configuring OAuth 2.0 user authorization in the test console of the developer po
## Prerequisites
-This article shows you how to configure your API Management service instance to use OAuth 2.0 authorization in the developer portal's test console, but doesn't show you how to configure an OAuth 2.0 provider.
+This article shows you how to configure your API Management service instance to use OAuth 2.0 authorization in the developer portal's test console, but it doesn't show you how to configure an OAuth 2.0 provider.
If you haven't yet created an API Management service instance, see [Create an API Management service instance][Create an API Management service instance].
When configuring OAuth 2.0 user authorization in the test console of the develop
Depending on your scenarios, you may configure more or less restrictive token scopes for other client applications that you create to access backend APIs. * **Take extra care if you enable the Client Credentials flow**. The test console in the developer portal, when working with the Client Credentials flow, doesn't ask for credentials. An access token could be inadvertently exposed to developers or anonymous users of the developer console.
+## Keeping track of key information
+
+Throughout this tutorial you'll be asked to record key information to reference later on:
+
+- **Backend Application (client) ID**: The GUID of the application that represents the backend API
+- **Backend Application Scopes**: One or more scopes you may create to access the API. The scope format is `api://<Backend Application (client) ID>/<Scope Name>` (for example, api://1764e900-1827-4a0b-9182-b2c1841864c2/Read)
+- **Client Application (client) ID**: The GUID of the application that represents the developer portal
+- **Client Application Secret Value**: The GUID that serves as the secret for interaction with the client application in Azure Active Directory
+ ## Register applications with the OAuth server You'll need to register two applications with your OAuth 2.0 provider: one represents the backend API to be protected, and a second represents the client application that calls the API - in this case, the test console of the developer portal.
Optionally:
`https://login.microsoftonline.com/<tenant_id>/oauth2/token` (v1)
- * If you use **v1** endpoints, add a body parameter:
- * Name: **resource**.
+ * If you use **v1** endpoints, add a body parameter:
+ * Name: **resource**.
* Value: the back-end app **Application (client) ID**.
- * If you use **v2** endpoints:
- * Enter the back-end app scope you created in the **Default scope** field.
- * Set the value for the [`accessTokenAcceptedVersion`](../active-directory/develop/reference-app-manifest.md#accesstokenacceptedversion-attribute) property to `2` in the [application manifest](../active-directory/develop/reference-app-manifest.md) for both the backend-app and the client-app registrations.
+ * If you use **v2** endpoints:
+ * Enter the back-end app scope you created in the **Default scope** field.
+ * Set the value for the [`accessTokenAcceptedVersion`](../active-directory/develop/reference-app-manifest.md#accesstokenacceptedversion-attribute) property to `2` in the [application manifest](../active-directory/develop/reference-app-manifest.md) for both the backend-app and the client-app registrations.
* Accept the default settings for **Client authentication methods** and **Access token sending method**.
Optionally:
1. [Republish](api-management-howto-developer-portal-customize.md#publish) the developer portal.
+ > [!NOTE]
+ > When making OAuth 2.0-related changes, remember to republish the developer portal after every modification. Otherwise, relevant changes (for example, a scope change) can't propagate to the portal and be used when trying out the APIs.
+ After saving the OAuth 2.0 server configuration, configure APIs to use this configuration, as shown in the next section. ## Configure an API to use OAuth 2.0 user authorization
For more information about using OAuth 2.0 and API Management, see [Protect a we
[Configure an OAuth 2.0 authorization server in API Management]: #step1 [Configure an API to use OAuth 2.0 user authorization]: #step2 [Test the OAuth 2.0 user authorization in the Developer Portal]: #step3
-[Next steps]: #next-steps
+[Next steps]: #next-steps
api-management Api Management Howto Use Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-use-managed-service-identity.md
You can grant two types of identities to an API Management instance:
- A *system-assigned identity* is tied to your service and is deleted if your service is deleted. The service can have only one system-assigned identity. - A *user-assigned identity* is a standalone Azure resource that can be assigned to your service. The service can have multiple user-assigned identities.
+> [!NOTE]
+> Managed identities are specific to the Azure AD tenant where your Azure subscription is hosted. They don't get updated if a subscription is moved to a different directory. If a subscription is moved, you'll need to recreate and configure the identities.
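As a hedged sketch using the Azure CLI (instance name, resource group, and publisher details are assumptions), a system-assigned identity can be enabled when creating an instance with the `--enable-managed-identity` flag:

```azurecli
az apim create --name myapim --resource-group myResourceGroup \
    --publisher-name Contoso --publisher-email admin@contoso.com \
    --enable-managed-identity
```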
+ ## Create a system-assigned managed identity ### Azure portal
api-management Api Management Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-policies.md
More information about policies:
- [Send message to Pub/Sub topic](api-management-dapr-policies.md#pubsub) - uses Dapr runtime to publish a message to a Publish/Subscribe topic. - [Trigger output binding](api-management-dapr-policies.md#bind) - uses Dapr runtime to invoke an external system via output binding.
-## [GraphQL validation policy](graphql-validation-policies.md)
-- [Validate GraphQL request](graphql-validation-policies.md#validate-graphql-request) - Validates and authorizes a request to a GraphQL API.
+## [GraphQL API policies](graphql-policies.md)
+- [Validate GraphQL request](graphql-policies.md#validate-graphql-request) - Validates and authorizes a request to a GraphQL API.
+- [Set GraphQL resolver](graphql-policies.md#set-graphql-resolver) - Retrieves or sets data for a GraphQL field in an object type specified in a GraphQL schema.
## [Transformation policies](api-management-transformation-policies.md) - [Convert JSON to XML](api-management-transformation-policies.md#ConvertJSONtoXML) - Converts request or response body from JSON to XML.
api-management Api Management Terminology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-terminology.md
editor: ''
Previously updated : 10/11/2017 Last updated : 05/09/2022 # Azure API Management terminology
-This article gives definitions for the terms that are specific to API Management (APIM).
+This article gives definitions for the terms that are specific to Azure API Management.
## Term definitions
-* **Backend API** - An HTTP service that implements your API and its operations. For more information, see [Backends](backends.md).
-* **Frontend API**/**APIM API** - An APIM API does not host APIs, it creates façades for your APIs. You customize the façade according to your needs without touching the backend API. For more information, see [Import and publish an API](import-and-publish.md).
-* **APIM product** - a product contains one or more APIs as well as a usage quota and the terms of use. You can include a number of APIs and offer them to developers through the Developer portal. For more information, see [Create and publish a product](api-management-howto-add-products.md).
-* **APIM API operation** - Each APIM API represents a set of operations available to developers. Each APIM API contains a reference to the backend service that implements the API, and its operations map to the operations implemented by the backend service. For more information, see [Mock API responses](mock-api-responses.md).
-* **Version** - Sometimes you want to publish new or different API features to some users, while others want to stick with the API that currently works for them. For more information, see [Publish multiple versions of your API](api-management-get-started-publish-versions.md).
-* **Revision** - When your API is ready to go and starts to be used by developers, you usually need to take care in making changes to that API and at the same time not to disrupt callers of your API. It's also useful to let developers know about the changes you made. For more information, see [Use revisions](api-management-get-started-revise-api.md).
-* **Developer portal** - Your customers (developers) should use the Developer portal to access your APIs. The Developer portal can be customized. For more information, see [Customize the Developer portal](api-management-customize-styles.md).
+- **Backend API** - A service, most commonly HTTP-based, that implements an API and its operations. Sometimes backend APIs are referred to simply as backends. For more information, see [Backends](backends.md).
+- **Frontend API** - API Management serves as a mediation layer over the backend APIs. A frontend API is an API that is exposed to API consumers from API Management. You can customize the shape and behavior of a frontend API in API Management without making changes to the backend API(s) that it represents. Sometimes frontend APIs are referred to simply as APIs. For more information, see [Import and publish an API](import-and-publish.md).
+- **Product** - A product is a bundle of frontend APIs that can be made available to a specified group of API consumers for self-service onboarding under a single access credential and a set of usage limits. An API can be part of multiple products. For more information, see [Create and publish a product](api-management-howto-add-products.md).
+- **API operation** - A frontend API in API Management can define multiple operations. An operation is a combination of an HTTP verb and a URL template uniquely resolvable within the frontend API. Often operations map one-to-one to backend API endpoints. For more information, see [Mock API responses](mock-api-responses.md).
+- **Version** - A version is a distinct variant of existing frontend API that differs in shape or behavior from the original. Versions give customers a choice of sticking with the original API or upgrading to a new version at the time of their choosing. Versions are a mechanism for releasing breaking changes without impacting API consumers. For more information, see [Publish multiple versions of your API](api-management-get-started-publish-versions.md).
+- **Revision** - A revision is a copy of an existing API that can be changed without impacting API consumers and swapped with the version currently in use by consumers, usually after validation and testing. Revisions provide a mechanism for safely implementing nonbreaking changes. For more information, see [Use revisions](api-management-get-started-revise-api.md).
+- **Policy** - A policy is a reusable and composable component, implementing some commonly used API-related functionality. API Management offers over 50 built-in policies that take care of critical but undifferentiated horizontal concerns - for example, request transformation, routing, security, protection, caching. The policies can be applied at various scopes, which determine the affected APIs or operations and dynamically configured using policy expressions. For more information, see [Policies in Azure API Management](api-management-howto-policies.md).
+- **Developer portal** - The developer portal is a component of API Management. It provides a customizable experience for API discovery and self-service onboarding to API consumers. For more information, see [Customize the Developer portal](api-management-customize-styles.md).
## Next steps > [!div class="nextstepaction"]
+>
> [Create an instance](get-started-create-service-instance.md)-
api-management Api Management Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-versions.md
When the query string versioning scheme is used, the version identifier needs to
The format of an API request URL when using query string-based versioning is: `https://{yourDomain}/{apiName}/{operationId}?{queryStringParameterName}={versionIdentifier}`.
-For example, `https://apis.contoso.com/products?api-version=v1` and `https://apis.contoso.com/products/api-version=v2` could refer to the same `products` API but to versions `v1` and `v2` respectively.
+For example, `https://apis.contoso.com/products?api-version=v1` and `https://apis.contoso.com/products?api-version=v2` could refer to the same `products` API but to versions `v1` and `v2` respectively.
## Original versions
api-management Compute Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/compute-infrastructure.md
The following table summarizes the compute platforms currently used for instance
<sup>1</sup> Newly created instances in these tiers, created using the Azure portal or specifying API version 2021-01-01-preview or later. Includes some existing instances in Developer and Premium tiers configured with virtual networks or availability zones.
+> [!NOTE]
+> Currently, the `stv2` platform isn't available in the US Government cloud or in the following Azure regions: China East, China East 2, China North, China North 2.
+ ## How do I know which platform hosts my API Management instance? Starting with API version `2021-04-01-preview`, the API Management instance exposes a read-only `platformVersion` property that shows this platform information.
If you have an existing Developer or Premium tier instance that's connected to a
### Prerequisites
-* A new or existing virtual network and subnet in the same region and subscription as your API Management instance.
+* A new or existing virtual network and subnet in the same region and subscription as your API Management instance. The subnet must be different from the one currently used for the instance hosted on the `stv1` platform, and a network security group must be attached.
* A new or existing Standard SKU [public IPv4 address](../virtual-network/ip-services/public-ip-addresses.md#sku) resource in the same region and subscription as your API Management instance.
api-management Graphql Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/graphql-api.md
Title: Import a GraphQL API using the Azure portal | Microsoft Docs
+ Title: Import a GraphQL API to Azure API Management using the portal | Microsoft Docs
-description: Learn how API Management supports GraphQL, add a GraphQL API, and GraphQL limitations.
+description: Learn how to add an existing GraphQL service as an API in Azure API Management. Manage the API and enable queries to pass through to the GraphQL endpoint.
Previously updated : 10/21/2021- Last updated : 05/19/2022+
-# Import a GraphQL API (preview)
+# Import a GraphQL API
-GraphQL is an open-source, industry-standard query language for APIs. Unlike endpoint-based (or REST-style) APIs designed around actions over resources, GraphQL APIs support a broader set of use cases and focus on data types, schemas, and queries.
-
-API Management tackles the security, authentication, and authorization challenges that come with publishing GraphQL APIs. Using API Management to expose your GraphQL APIs, you can:
-* Add a GraphQL service as APIs via Azure portal.
-* Secure GraphQL APIs by applying both existing access control policies and a [new policy](graphql-validation-policies.md) to secure and protect against GraphQL-specific attacks.
-* Explore the schema and run test queries against the GraphQL APIs in the Azure and developer portals.
- In this article, you'll: > [!div class="checklist"]
In this article, you'll:
> * Test your GraphQL API. > * Learn the limitations of your GraphQL API in API Management.
+If you want to import a GraphQL schema and set up field resolvers using REST or SOAP API endpoints, see [Import a GraphQL schema and set up field resolvers](graphql-schema-resolve-api.md).
+ ## Prerequisites - An existing API Management instance. [Create one if you haven't already](get-started-create-service-instance.md). - A GraphQL API. + ## Add a GraphQL API 1. Navigate to your API Management instance. 1. From the side navigation menu, under the **APIs** section, select **APIs**. 1. Under **Define a new API**, select the **GraphQL** icon.
- :::image type="content" source="media/graphql-api/import-graphql-api.png" alt-text="Selecting GraphQL icon from list of APIs":::
+ :::image type="content" source="media/graphql-api/import-graphql-api.png" alt-text="Screenshot of selecting GraphQL icon from list of APIs.":::
1. In the dialog box, select **Full** and complete the required form fields.
- :::image type="content" source="media/graphql-api/create-from-graphql-schema.png" alt-text="Demonstrate fields for creating GraphQL":::
+ :::image type="content" source="media/graphql-api/create-from-graphql-schema.png" alt-text="Screenshot of fields for creating a GraphQL API.":::
| Field | Description | |-|-|
- | Display name | The name by which your GraphQL API will be displayed. |
- | Name | Raw name of the GraphQL API. Automatically populates as you type the display name. |
- | GraphQL API endpoint | The base URL with your GraphQL API endpoint name. <br /> For example: *`https://example.com/your-GraphQL-name`*. You can also use the common ["Star Wars" GraphQL endpoint](https://swapi-graphql.netlify.app/.netlify/functions/index) as a demo. |
- | Upload schema file | Select to browse and upload your schema file. |
- | Description | Add a description of your API. |
- | URL scheme | Select HTTP, HTTPS, or Both. Default selection: *Both*. |
- | API URL suffix| Add a URL suffix to identify this specific API in this API Management instance. It has to be unique in this API Management instance. |
- | Base URL | Uneditable field displaying your API base URL |
- | Tags | Associate your GraphQL API with new or existing tags. |
- | Products | Associate your GraphQL API with a product to publish it. |
- | Gateways | Associate your GraphQL API with existing gateways. Default gateway selection: *Managed*. |
- | Version this API? | Select to version control your GraphQL API. |
-
-1. Click **Create**.
-
-## Test your GraphQL API
-
-1. Navigate to your API Management instance.
-1. From the side navigation menu, under the **APIs** section, select **APIs**.
-1. Under **All APIs**, select your GraphQL API.
-1. Select the **Test** tab to access the Test console.
-1. Under **Headers**:
- 1. Select the header from the **Name** drop-down menu.
- 1. Enter the value to the **Value** field.
- 1. Add more headers by selecting **+ Add header**.
- 1. Delete headers using the **trashcan icon**.
-1. If you've added a product to your GraphQL API, apply product scope under **Apply product scope**.
-1. Under **Query editor**, either:
- 1. Select at least one field or subfield from the list in the side menu. The fields and subfields you select appear in the query editor.
- 1. Start typing in the query editor to compose a query.
-
- :::image type="content" source="media/graphql-api/test-graphql-query.png" alt-text="Demonstrating adding fields to the query editor":::
-
-1. Under **Query variables**, add variables to reuse the same query or mutation and pass different values.
-1. Click **Send**.
-1. View the **Response**.
-
- :::image type="content" source="media/graphql-api/graphql-query-response.png" alt-text="View the test query response":::
-
-1. Repeat preceding steps to test different payloads.
-1. When testing is complete, exit test console.
-
-## Limitations
-
-* Only GraphQL pass through is supported.
-* A single GraphQL API in API Management corresponds to only a single GraphQL backend endpoint.
+ | **Display name** | The name by which your GraphQL API will be displayed. |
+ | **Name** | Raw name of the GraphQL API. Automatically populates as you type the display name. |
+ | **GraphQL API endpoint** | The base URL with your GraphQL API endpoint name. <br /> For example: *`https://example.com/your-GraphQL-name`*. You can also use a common ["Star Wars" GraphQL endpoint](https://swapi-graphql.azure-api.net/graphql) as a demo. |
+ | **Upload schema** | Optionally select to browse and upload your schema file to replace the schema retrieved from the GraphQL endpoint (if available). |
+ | **Description** | Add a description of your API. |
+ | **URL scheme** | Select **HTTP**, **HTTPS**, or **Both**. Default selection: *Both*. |
+ | **API URL suffix**| Add a URL suffix to identify this specific API in this API Management instance. It has to be unique in this API Management instance. |
+ | **Base URL** | Uneditable field displaying your API base URL |
+ | **Tags** | Associate your GraphQL API with new or existing tags. |
+ | **Products** | Associate your GraphQL API with a product to publish it. |
+ | **Gateways** | Associate your GraphQL API with existing gateways. Default gateway selection: *Managed*. |
+ | **Version this API?** | Select to apply a versioning scheme to your GraphQL API. |
+
+1. Select **Create**.
+1. After the API is created, browse the schema on the **Design** tab, in the **Frontend** section.
+ :::image type="content" source="media/graphql-api/explore-schema.png" alt-text="Screenshot of exploring the GraphQL schema in the portal.":::
+ [!INCLUDE [api-management-define-api-topics.md](../../includes/api-management-define-api-topics.md)]
api-management Graphql Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/graphql-policies.md
+
+ Title: Azure API Management policies for GraphQL APIs | Microsoft Docs
+description: Reference for Azure API Management policies to validate and resolve GraphQL API queries. Provides policy usage, settings, and examples.
++++ Last updated : 05/17/2022++++
+# API Management policies for GraphQL APIs
+
+This article provides a reference for API Management policies to validate and resolve queries to GraphQL APIs.
++
+## GraphQL API policies
+
+- [Validate GraphQL request](#validate-graphql-request) - Validates and authorizes a request to a GraphQL API.
+- [Set GraphQL resolver](#set-graphql-resolver) - Retrieves or sets data for a GraphQL field in an object type specified in a GraphQL schema.
+
+## Validate GraphQL request
+
+The `validate-graphql-request` policy validates the GraphQL request and authorizes access to specific query paths. An invalid query is a "request error". Authorization is only done for valid requests.
+++
+**Permissions**
+Because GraphQL queries use a flattened schema:
+* Permissions may be applied at any leaf node of an output type:
+ * Mutation, query, or subscription
+ * Individual field in a type declaration.
+* Permissions may not be applied to:
+ * Input types
+ * Fragments
+ * Unions
+ * Interfaces
+ * The schema element
+
+**Authorize element**
+Configure the `authorize` element to set an appropriate authorization rule for one or more paths.
+* Each rule can optionally provide a different action.
+* Use policy expressions to specify conditional actions.
+
+**Introspection system**
+A rule with path=`/__*` applies to the [introspection](https://graphql.org/learn/introspection/) system. You can use it to reject introspection requests (`__schema`, `__type`, etc.).
+
+### Policy statement
+
+```xml
+<validate-graphql-request error-variable-name="variable name" max-size="size in bytes" max-depth="query depth">
+ <authorize>
+ <rule path="query path, for example: '/listUsers' or '/__*'" action="string or policy expression that evaluates to 'allow|remove|reject|ignore'" />
+ </authorize>
+</validate-graphql-request>
+```
+
+### Example: Query validation
+
+This example applies the following validation and authorization rules to a GraphQL query:
+* Requests larger than 100 KB or with query depth greater than 4 are rejected.
+* Requests to the introspection system are rejected.
+* The `/Missions/name` field is removed from requests containing more than two headers.
+
+```xml
+<validate-graphql-request error-variable-name="name" max-size="102400" max-depth="4">
+ <authorize>
+ <rule path="/__*" action="reject" />
+ <rule path="/Missions/name" action="@(context.Request.Headers.Count > 2 ? "remove" : "allow")" />
+ </authorize>
+</validate-graphql-request>
+```
+
+### Example: Mutation validation
+
+This example applies the following validation and authorization rules to a GraphQL mutation:
+* Requests larger than 100 KB or with query depth greater than 4 are rejected.
+* Requests to mutate the `deleteUser` field are denied except when the request is from IP address `198.51.100.1`.
+
+```xml
+<validate-graphql-request error-variable-name="name" max-size="102400" max-depth="4">
+ <authorize>
+ <rule path="/Mutation/deleteUser" action="@(context.Request.IpAddress <> "198.51.100.1" ? "deny" : "allow")" />
+ </authorize>
+</validate-graphql-request>
+```
+
+### Elements
+
+| Name | Description | Required |
+| | | -- |
+| `validate-graphql-request` | Root element. | Yes |
+| `authorize` | Add this element to provide field-level authorization with both request- and field-level errors. | No |
+| `rule` | Add one or more of these elements to authorize specific query paths. Each rule can optionally specify a different [action](#request-actions). | No |
+
+### Attributes
+
+| Name | Description | Required | Default |
+| -- | - | -- | - |
+| `error-variable-name` | Name of the variable in `context.Variables` to log validation errors to. | No | N/A |
+| `max-size` | Maximum size of the request payload in bytes. Maximum allowed value: 102,400 bytes (100 KB). (Contact [support](https://azure.microsoft.com/support/options/) if you need to increase this limit.) | Yes | N/A |
+| `max-depth` | An integer. Maximum query depth. | No | 6 |
+| `path` | Path to execute authorization validation on. It must follow the pattern: `/type/field`. | Yes | N/A |
+| `action` | [Action](#request-actions) to perform if the rule applies. May be specified conditionally using a policy expression. | No | allow |
+
+### Request actions
+
+Available actions are described in the following table.
+
+|Action |Description |
+|||
+|`reject` | A request error happens, and the request is not sent to the back end. Additional rules, if configured, are not applied. |
+|`remove` | A field error happens, and the field is removed from the request. |
+|`allow` | The field is passed to the back end. |
+|`ignore` | The rule is not valid for this case and the next rule is applied. |
+
+### Error handling
+
+A failure to validate against the GraphQL schema, or against the request's size or depth limits, is a request error and results in the request failing with an errors block (but no data block).
+
+Similar to the [`Context.LastError`](api-management-error-handling-policies.md#lasterror) property, all GraphQL validation errors are automatically propagated in the `GraphQLErrors` variable. If the errors need to be propagated separately, you can specify an error variable name. Errors are pushed onto the `error` variable and the `GraphQLErrors` variable.
+
+### Usage
+
+This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
+
+- **Policy sections:** inbound
+
+- **Policy scopes:** all scopes
+
+## Set GraphQL resolver
+
+The `set-graphql-resolver` policy retrieves or sets data for a GraphQL field in an object type specified in a GraphQL schema. The schema must be imported to API Management. Currently the data must be resolved using an HTTP-based data source (REST or SOAP API).
++
+* This policy is invoked only when a matching GraphQL query is executed.
+* The policy resolves data for a single field. To resolve data for multiple fields, configure multiple occurrences of this policy in a policy definition.
+* The context for the HTTP request and HTTP response (if specified) differs from the context for the original gateway API request:
+ * The HTTP request context contains arguments that are passed in the GraphQL query as its body.
+ * The HTTP response context is the response from the independent HTTP call made by the resolver, not the context for the complete response for the gateway request.
+++
+### Policy statement
+
+```xml
+<set-graphql-resolver parent-type="type" field="field">
+ <http-data-source>
+ <http-request>
+ <set-method>HTTP method</set-method>
+ <set-url>URL</set-url>
+ [...]
+ </http-request>
+ <http-response>
+ [...]
+ </http-response>
+ </http-data-source>
+</set-graphql-resolver>
+```
+
+### Examples
+
+### Resolver for GraphQL query
+
+The following example resolves a query by making an HTTP `GET` call to a backend data source.
+
+#### Example schema
+
+```
+type Query {
+ users: [User]
+}
+
+type User {
+ id: String!
+ name: String!
+}
+```
+
+#### Example policy
+
+```xml
+<set-graphql-resolver parent-type="Query" field="users">
+ <http-data-source>
+ <http-request>
+ <set-method>GET</set-method>
+ <set-url>https://data.contoso.com/get/users</set-url>
+ </http-request>
+ </http-data-source>
+</set-graphql-resolver>
+```
+
+### Resolver for a GraphQL query that returns a list, using a Liquid template
+
+The following example uses a Liquid template, supported for use in the [set-body](api-management-transformation-policies.md#SetBody) policy, to return a list in the HTTP response to a query.
+
+#### Example schema
+
+```
+type Query {
+ users: [User]
+}
+
+type User {
+ id: String!
+ name: String!
+}
+```
+
+#### Example policy
+
+```xml
+<set-graphql-resolver parent-type="Query" field="users">
+ <http-data-source>
+ <http-request>
+ <set-method>GET</set-method>
+ <set-url>https://data.contoso.com/users</set-url>
+ </http-request>
+ <http-response>
+ <set-body template="liquid">
+ [
+ {% JSONArrayFor elem in body %}
+ {
+ "name": "{{elem.title}}"
+ }
+ {% endJSONArrayFor %}
+ ]
+ </set-body>
+ </http-response>
+ </http-data-source>
+</set-graphql-resolver>
+```
+
+### Resolver for GraphQL mutation
+
+The following example resolves a mutation that inserts data by making a `POST` request to an HTTP data source. The policy expression in the `set-body` policy of the HTTP request modifies a `name` argument that is passed in the GraphQL query as its body.
+
+#### Example schema
+
+```
+type Query {
+ users: [User]
+}
+
+type Mutation {
+ makeUser(name: String!): User
+}
+
+type User {
+ id: String!
+ name: String!
+}
+```
+
+#### Example policy
+
+```xml
+<set-graphql-resolver parent-type="Mutation" field="makeUser">
+ <http-data-source>
+ <http-request>
+ <set-method>POST</set-method>
+ <set-url>https://data.contoso.com/user/create</set-url>
+ <set-header name="Content-Type" exists-action="override">
+ <value>application/json</value>
+ </set-header>
+ <set-body>@{
+ var body = context.Request.Body.As<JObject>(true);
+ JObject jsonObject = new JObject();
+ jsonObject.Add("name", body["name"]);
+ return jsonObject.ToString();
+ }</set-body>
+ </http-request>
+ </http-data-source>
+</set-graphql-resolver>
+```
+
+### Elements
+
+| Name | Description | Required |
+| | | -- |
+| `set-graphql-resolver` | Root element. | Yes |
+| `http-data-source` | Configures the HTTP request and optionally the HTTP response that are used to resolve data for the given `parent-type` and `field`. | Yes |
+| `http-request` | Specifies a URL and child policies to configure the resolver's HTTP request. Each of the following policies can be specified at most once in the element. <br/><br/>Required policy: [set-method](api-management-advanced-policies.md#SetRequestMethod)<br/><br/>Optional policies: [set-header](api-management-transformation-policies.md#SetHTTPheader), [set-body](api-management-transformation-policies.md#SetBody), [authentication-certificate](api-management-authentication-policies.md#ClientCertificate) | Yes |
+| `set-url` | The URL of the resolver's HTTP request. | Yes |
+| `http-response` | Optionally specifies child policies to configure the resolver's HTTP response. If not specified, the response is returned as a raw string. Each of the following policies can be specified at most once. <br/><br/>Optional policies: [set-body](api-management-transformation-policies.md#SetBody), [json-to-xml](api-management-transformation-policies.md#ConvertJSONtoXML), [xml-to-json](api-management-transformation-policies.md#ConvertXMLtoJSON), [find-and-replace](api-management-transformation-policies.md#Findandreplacestringinbody) | No |
+
+### Attributes
+
+| Name | Description | Required | Default |
+| -- | - | -- | - |
+| `parent-type`| An object type in the GraphQL schema. | Yes | N/A |
+| `field`| A field of the specified `parent-type` in the GraphQL schema. | Yes | N/A |
+
+> [!NOTE]
+> Currently, the values of `parent-type` and `field` aren't validated by this policy. If they aren't valid, the policy is ignored, and the GraphQL query is forwarded to a GraphQL endpoint (if one is configured).
+
+### Usage
+
+This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
+
+- **Policy sections:** backend
+
+- **Policy scopes:** all scopes
+
api-management Graphql Schema Resolve Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/graphql-schema-resolve-api.md
+
+ Title: Import GraphQL schema and set up field resolvers | Microsoft Docs
+
+description: Import a GraphQL schema to API Management and configure a policy to resolve a GraphQL query using an HTTP-based data source.
++++ Last updated : 05/17/2022+++
+# Import a GraphQL schema and set up field resolvers
+
+++
+In this article, you'll:
+> [!div class="checklist"]
+> * Import a GraphQL schema to your API Management instance
+> * Set up a resolver for a GraphQL query using an existing HTTP endpoint
+> * Test your GraphQL API
+
+If you want to expose an existing GraphQL endpoint as an API, see [Import a GraphQL API](graphql-api.md).
+
+## Prerequisites
+
+- An existing API Management instance. [Create one if you haven't already](get-started-create-service-instance.md).
+- A valid GraphQL schema file with the `.graphql` extension.
+- A backend GraphQL endpoint (optional for this scenario).+++
+++
+## Add a GraphQL schema
+
+1. From the side navigation menu, under the **APIs** section, select **APIs**.
+1. Under **Define a new API**, select the **Synthetic GraphQL** icon.
+
+ :::image type="content" source="media/graphql-schema-resolve-api/import-graphql-api.png" alt-text="Screenshot of selecting Synthetic GraphQL icon from list of APIs.":::
+
+1. In the dialog box, select **Full** and complete the required form fields.
+
+ :::image type="content" source="media/graphql-schema-resolve-api/create-from-graphql-schema.png" alt-text="Screenshot of fields for creating a GraphQL API.":::
+
+ | Field | Description |
+ |-|-|
+ | **Display name** | The name by which your GraphQL API will be displayed. |
+ | **Name** | Raw name of the GraphQL API. Automatically populates as you type the display name. |
+ | **Fallback GraphQL endpoint** | For this scenario, optionally enter a URL with a GraphQL API endpoint name. API Management passes GraphQL queries to this endpoint when a custom resolver isn't set for a field. |
+ | **Upload schema file** | Select to browse and upload a valid GraphQL schema file with the `.graphql` extension. |
+ | **Description** | Add a description of your API. |
+ | **URL scheme** | Select **HTTP**, **HTTPS**, or **Both**. Default selection: *Both*. |
+ | **API URL suffix**| Add a URL suffix to identify this specific API in this API Management instance. It has to be unique in this API Management instance. |
+ | **Base URL** | Uneditable field displaying your API base URL |
+ | **Tags** | Associate your GraphQL API with new or existing tags. |
+ | **Products** | Associate your GraphQL API with a product to publish it. |
+ | **Gateways** | Associate your GraphQL API with existing gateways. Default gateway selection: *Managed*. |
+ | **Version this API?** | Select to apply a versioning scheme to your GraphQL API. |
+
+1. Select **Create**.
+
+1. After the API is created, browse the schema on the **Design** tab, in the **Frontend** section.
+
+## Configure resolver
+
+Configure the [set-graphql-resolver](graphql-policies.md#set-graphql-resolver) policy to map a field in the schema to an existing HTTP endpoint.
+
+Suppose you imported the following basic GraphQL schema and wanted to set up a resolver for the *users* query.
+
+```
+type Query {
+ users: [User]
+}
+
+type User {
+ id: String!
+ name: String!
+}
+```
+
+1. From the side navigation menu, under the **APIs** section, select **APIs** > your GraphQL API.
+1. On the **Design** tab of your GraphQL API, select **All operations**.
+1. In the **Backend** processing section, select **+ Add policy**.
+1. Configure the `set-graphql-resolver` policy to resolve the *users* query using an HTTP data source.
+
+ For example, the following `set-graphql-resolver` policy retrieves the *users* field by using a `GET` call on an existing HTTP data source.
+
+ ```xml
+ <set-graphql-resolver parent-type="Query" field="users">
+ <http-data-source>
+ <http-request>
+ <set-method>GET</set-method>
+ <set-url>https://myapi.contoso.com/users</set-url>
+ </http-request>
+ </http-data-source>
+ </set-graphql-resolver>
+ ```
+1. To resolve data for other fields in the schema, repeat the preceding step.
+1. Select **Save**.
+++
+## Next steps
+> [!div class="nextstepaction"]
+> [Transform and protect a published API](transform-api.md)
api-management Graphql Validation Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/graphql-validation-policies.md
- Title: Azure API Management validation policy for GraphQL requests | Microsoft Docs
-description: Reference for an Azure API Management policy to validate and authorize GraphQL requests. Provides policy usage, settings, and examples.
---- Previously updated : 03/07/2022----
-# API Management policy to validate and authorize GraphQL requests (preview)
-
-This article provides a reference for an API Management policy to validate and authorize requests to a [GraphQL API](graphql-api.md) imported to API Management.
--
-## Validation policy
-
-| Policy | Description |
-| | -- |
-| [Validate GraphQL request](#validate-graphql-request) | Validates and authorizes a request to a GraphQL API. |
--
-## Validate GraphQL request
-
-The `validate-graphql-request` policy validates the GraphQL request and authorizes access to specific query paths. An invalid query is a "request error". Authorization is only done for valid requests.
---
-**Permissions**
-Because GraphQL queries use a flattened schema:
-* Permissions may be applied at any leaf node of an output type:
- * Mutation, query, or subscription
- * Individual field in a type declaration.
-* Permissions may not be applied to:
- * Input types
- * Fragments
- * Unions
- * Interfaces
- * The schema element
-
-**Authorize element**
-Configure the `authorize` element to set an appropriate authorization rule for one or more paths.
-* Each rule can optionally provide a different action.
-* Use policy expressions to specify conditional actions.
-
-**Introspection system**
-The policy for path=`/__*` is the [introspection](https://graphql.org/learn/introspection/) system. You can use it to reject introspection requests (`__schema`, `__type`, etc.).
-
-### Policy statement
-
-```xml
-<validate-graphql-request error-variable-name="variable name" max-size="size in bytes" max-depth="query depth">
- <authorize>
- <rule path="query path, for example: '/listUsers' or '/__*'" action="string or policy expression that evaluates to 'allow|remove|reject|ignore'" />
- </authorize>
-</validate-graphql-request>
-```
-
-### Example: Query validation
-
-This example applies the following validation and authorization rules to a GraphQL query:
-* Requests larger than 100 kb or with query depth greater than 4 are rejected.
-* Requests to the introspection system are rejected.
-* The `/Missions/name` field is removed from requests containing more than two headers.
-
-```xml
-<validate-graphql-request error-variable-name="name" max-size="102400" max-depth="4">
- <authorize>
- <rule path="/__*" action="reject" />
- <rule path="/Missions/name" action="@(context.Request.Headers.Count > 2 ? "remove" : "allow")" />
- </authorize>
-</validate-graphql-request>
-```
-
-### Example: Mutation validation
-
-This example applies the following validation and authorization rules to a GraphQL mutation:
-* Requests larger than 100 kb or with query depth greater than 4 are rejected.
-* Requests to mutate the `deleteUser` field are denied except when the request is from IP address `198.51.100.1`.
-
-```xml
-<validate-graphql-request error-variable-name="name" max-size="102400" max-depth="4">
- <authorize>
- <rule path="/Mutation/deleteUser" action="@(context.Request.IpAddress <> "198.51.100.1" ? "deny" : "allow")" />
- </authorize>
-</validate-graphql-request>
-```
-
-### Elements
-
-| Name | Description | Required |
-| | | -- |
-| `validate-graphql-request` | Root element. | Yes |
-| `authorize` | Add this element to provide field-level authorization with both request- and field-level errors. | No |
-| `rule` | Add one or more of these elements to authorize specific query paths. Each rule can optionally specify a different [action](#request-actions). | No |
-
-### Attributes
-
-| Name | Description | Required | Default |
-| -- | - | -- | - |
-| `error-variable-name` | Name of the variable in `context.Variables` to log validation errors to. | No | N/A |
-| `max-size` | Maximum size of the request payload in bytes. Maximum allowed value: 102,400 bytes (100 KB). (Contact [support](https://azure.microsoft.com/support/options/) if you need to increase this limit.) | Yes | N/A |
-| `max-depth` | An integer. Maximum query depth. | No | 6 |
-| `path` | Path to execute authorization validation on. It must follow the pattern: `/type/field`. | Yes | N/A |
-| `action` | [Action](#request-actions) to perform if the rule applies. May be specified conditionally using a policy expression. | No | allow |
-
-### Request actions
-
-Available actions are described in the following table.
-
-|Action |Description |
-|||
-|`reject` | A request error happens, and the request is not sent to the back end. Additional rules if configured are not applied. |
-|`remove` | A field error happens, and the field is removed from the request. |
-|`allow` | The field is passed to the back end. |
-|`ignore` | The rule is not valid for this case and the next rule is applied. |
-
-### Usage
-
-This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
--- **Policy sections:** inbound--- **Policy scopes:** all scopes-
-## Error handling
-
-Failure to validate against the GraphQL schema, or a failure for the request's size or depth, is a request error and results in the request being failed with an errors block (but no data block).
-
-Similar to the [`Context.LastError`](api-management-error-handling-policies.md#lasterror) property, all GraphQL validation errors are automatically propagated in the `GraphQLErrors` variable. If the errors need to be propagated separately, you can specify an error variable name. Errors are pushed onto the `error` variable and the `GraphQLErrors` variable.
-
api-management Policy Fragments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policy-fragments.md
+
+ Title: Reuse policy configurations in Azure API Management | Microsoft Docs
+description: Learn how to create and manage reusable policy fragments in Azure API Management. Policy fragments are XML elements containing policy configurations that can be included in any policy definition.
+
+documentationcenter: ''
++++ Last updated : 04/28/2022++++
+# Reuse policy configurations in your API Management policy definitions
+
+This article shows you how to create and use *policy fragments* in your API Management policy definitions. Policy fragments are centrally managed, reusable XML snippets containing one or more API Management [policy](api-management-howto-policies.md) configurations.
+
+Policy fragments help you configure policies consistently and maintain policy definitions without needing to repeat or retype XML code.
+
+A policy fragment:
+
+* Must be valid XML containing one or more policy configurations
+* May include [policy expressions](api-management-policy-expressions.md), if a referenced policy supports them
+* Is inserted as-is in a policy definition by using the [include-fragment](api-management-advanced-policies.md#IncludeFragment) policy
+
+Limitations:
+
+* A policy fragment can't include a policy section identifier (`<inbound>`, `<outbound>`, etc.) or the `<base/>` element.
+* Currently, a policy fragment can't nest another policy fragment.
+
+## Prerequisites
+
+If you don't already have an API Management instance and a backend API, see:
+
+- [Create an Azure API Management instance](get-started-create-service-instance.md)
+- [Import and publish an API](import-and-publish.md)
+
+While not required, you may want to [configure](set-edit-policies.md) one or more policy definitions. You can copy policy elements from these definitions when creating policy fragments.
++
+## Create a policy fragment
+
+1. In the left navigation of your API Management instance, under **APIs**, select **Policy fragments** > **+ Create**.
+1. In the **Create a new policy fragment** window, enter a **Name** and an optional **Description** of the policy fragment. The name must be unique within your API Management instance.
+
+ Example name: *ForwardContext*
+1. In the **XML policy fragment** editor, type or paste one or more policy XML elements between the `<fragment>` and `</fragment>` tags.
+
+ :::image type="content" source="media/policy-fragments/create-fragment.png" alt-text="Screenshot showing the create a new policy fragment form.":::
+
+ For example, the following fragment contains a [`set-header`](api-management-transformation-policies.md#SetHTTPheader) policy configuration to forward context information to a backend service. This fragment would be included in an inbound policy section. The policy expressions in this example access the built-in [`context` variable](api-management-policy-expressions.md#ContextVariables).
+
+ ```xml
+ <fragment>
+ <set-header name="x-request-context-data" exists-action="override">
+ <value>@(context.User.Id)</value>
+ <value>@(context.Deployment.Region)</value>
+ </set-header>
+ </fragment>
+ ```
+
+1. Select **Create**. The fragment is added to the list of policy fragments.
+
+## Include a fragment in a policy definition
+
+Configure the [`include-fragment`](api-management-advanced-policies.md#IncludeFragment) policy to insert a policy fragment in a policy definition. For more information about policy definitions, see [Set or edit policies](set-edit-policies.md).
+
+* You may include a fragment at any scope and in any policy section, as long as the underlying policy or policies in the fragment support that usage.
+* You may include multiple policy fragments in a policy definition.
+
+For example, insert the policy fragment named *ForwardContext* in the inbound policy section:
+
+```xml
+<policies>
+ <inbound>
+ <include-fragment fragment-id="ForwardContext" />
+ <base />
+ </inbound>
+[...]
+```
+
+> [!TIP]
+> To see the content of an included fragment displayed in the policy definition, select **Recalculate effective policy** in the policy editor.
+
+## Manage policy fragments
+
+After creating a policy fragment, you can view and update the fragment's properties, or delete the fragment at any time.
+
+**To view properties of a fragment:**
+
+1. In the left navigation of your API Management instance, under **APIs**, select **Policy fragments**. Select the name of your fragment.
+1. On the **Overview** page, review the **Policy document references** to see the policy definitions that include the fragment.
+1. On the **Properties** page, review the name and description of the policy fragment. The name can't be changed.
+
+**To edit a policy fragment:**
+
+1. In the left navigation of your API Management instance, under **APIs**, select **Policy fragments**. Select the name of your fragment.
+1. Select **Policy editor**.
+1. Update the statements in the fragment and then select **Apply**.
+
+> [!NOTE]
+> Updates affect all policy definitions where the fragment is included.
+
+**To delete a policy fragment:**
+
+1. In the left navigation of your API Management instance, under **APIs**, select **Policy fragments**. Select the name of your fragment.
+1. Review **Policy document references** for policy definitions that include the fragment. Before a fragment can be deleted, you must remove the fragment references from all policy definitions.
+1. After all references are removed, select **Delete**.
+
+For more information about working with policies, see:
+
++ [Tutorial: Transform and protect APIs](transform-api.md)
++ [Set or edit policies](set-edit-policies.md)
++ [Policy reference](./api-management-policies.md) for a full list of policy statements
++ [Policy samples](./policies/index.md)
api-management Set Edit Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-edit-policies.md
Operation scope is configured for a selected API operation.
1. Select **Save** to propagate changes to the API Management gateway immediately.
+## Reuse policy configurations
+
+You can create reusable [policy fragments](policy-fragments.md) in your API Management instance. Policy fragments are XML elements containing your configurations of one or more policies. Policy fragments help you configure policies consistently and maintain policy definitions without needing to repeat or retype XML code.
+
+Use the [`include-fragment`](api-management-advanced-policies.md#IncludeFragment) policy to insert a policy fragment in a policy definition.
+ ## Use `base` element to set policy evaluation order If you configure policy definitions at more than one scope, multiple policies could apply to an API request or response. Depending on the order that the policies from the different scopes are applied, the transformation of the request or response could differ.
api-management Zone Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/zone-redundancy.md
Previously updated : 02/02/2022 Last updated : 05/11/2022
Configuring API Management for zone redundancy is currently supported in the fol
* South Africa North (*) * South Central US * Southeast Asia
+* Switzerland North
* UK South * West Europe * West US 2 * West US 3 > [!IMPORTANT]
-> The regions with * against them have restrictive access in an Azure Subscription to enable Availability Zone support. Please work with your Microsoft sales or customer representative
+> The regions with * against them have restrictive access in an Azure subscription to enable availability zone support. Please work with your Microsoft sales or customer representative.
## Prerequisites
app-service Configure Common https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-common.md
Here, you can configure some common settings for the app. Some settings require
- **Platform settings**: Lets you configure settings for the hosting platform, including: - **FTP state**: Allow only FTPS or disable FTP altogether.
- - **Bitness**: 32-bit or 64-bit. (Defaults to 32-bit for App Service created in the portal.)
+ - **Bitness**: 32-bit or 64-bit. For Windows apps only.
- **WebSocket protocol**: For [ASP.NET SignalR] or [socket.io](https://socket.io/), for example. - **Always On**: Keeps the app loaded even when there's no traffic. When **Always On** is not turned on (default), the app is unloaded after 20 minutes without any incoming requests. The unloaded app can cause high latency for new requests because of its warm-up time. When **Always On** is turned on, the front-end load balancer sends a GET request to the application root every five minutes. The continuous ping prevents the app from being unloaded.
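
If you script app configuration, these platform settings can also be set with the Azure CLI. The following is a minimal sketch; the app and resource group names are placeholders, and the values shown are only examples.

```azurecli
# Hypothetical example: apply common platform settings from the CLI.
az webapp config set \
  --name <app-name> \
  --resource-group <resource-group> \
  --ftps-state FtpsOnly \
  --use-32bit-worker-process false \
  --web-sockets-enabled true \
  --always-on true
```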
app-service Configure Language Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-java.md
To confirm that the datasource was added to the JBoss server, SSH into your weba
## Choosing a Java runtime version
-App Service allows users to choose the major version of the JVM, such as Java 8 or Java 11, and the patch version, such as 1.8.0_232 or 11.0.5. You can also choose to have the patch version automatically updated as new minor versions become available. In most cases, production sites should use pinned patch JVM versions. This will prevent unnanticipated outages during a patch version auto-update. All Java web apps use 64-bit JVMs, this is not configurable.
+App Service allows users to choose the major version of the JVM, such as Java 8 or Java 11, and the patch version, such as 1.8.0_232 or 11.0.5. You can also choose to have the patch version automatically updated as new minor versions become available. In most cases, production sites should use pinned patch JVM versions. This will prevent unanticipated outages during a patch version auto-update. All Java web apps use 64-bit JVMs; this is not configurable.
If you are using Tomcat, you can choose to pin the patch version of Tomcat. On Windows, you can pin the patch versions of the JVM and Tomcat independently. On Linux, you can pin the patch version of Tomcat; the patch version of the JVM will also be pinned but is not separately configurable.
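
As an illustration only, a minimal Azure CLI sketch for pinning the runtime on a Windows app; the names are placeholders and the version strings shown may differ from what's currently offered.

```azurecli
# Hypothetical example: pin the JVM major version and the Tomcat version
# for a Windows web app. Check the portal for the versions available to you.
az webapp config set \
  --name <app-name> \
  --resource-group <resource-group> \
  --java-version 11 \
  --java-container Tomcat \
  --java-container-version 9.0
```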
app-service How To Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/how-to-migrate.md
az network vnet subnet update -g $ASE_RG -n <subnet-name> --vnet-name <vnet-name
## 6. Migrate to App Service Environment v3
-Only start this step once you've completed all pre-migration actions listed previously and understand the [implications of migration](migrate.md#migrate-to-app-service-environment-v3) including what will happen during this time. There will be about one hour of downtime. Scaling and modifications to your existing App Service Environment will be blocked during this step.
+Only start this step once you've completed all pre-migration actions listed previously and understand the [implications of migration](migrate.md#migrate-to-app-service-environment-v3) including what will happen during this time. This step takes up to three hours and during that time there will be about one hour of application downtime. Scaling and modifications to your existing App Service Environment will be blocked during this step.
```azurecli az rest --method post --uri "${ASE_ID}/migrate?api-version=2021-02-01&phase=fullmigration"
App Service Environment v3 requires the subnet it's in to have a single delegati
## 5. Migrate to App Service Environment v3
-Once you've completed all of the above steps, you can start migration. Make sure you understand the [implications of migration](migrate.md#migrate-to-app-service-environment-v3) including what will happen during this time. There will be about one hour of downtime. Scaling and modifications to your existing App Service Environment will be blocked during this step.
+Once you've completed all of the above steps, you can start migration. Make sure you understand the [implications of migration](migrate.md#migrate-to-app-service-environment-v3) including what will happen during this time. This step takes up to three hours and during that time there will be about one hour of application downtime. Scaling and modifications to your existing App Service Environment will be blocked during this step.
When migration is complete, you'll have an App Service Environment v3 and all of your apps will be running in your new environment. You can confirm the environment's version by checking the **Configuration** page for your App Service Environment.
app-service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md
Title: Migrate to App Service Environment v3 by using the migration feature
description: Overview of the migration feature for migration to App Service Environment v3 Previously updated : 4/29/2022 Last updated : 5/23/2022
App Service Environment v3 requires the subnet it's in to have a single delegati
After updating all dependent resources with your new IPs and properly delegating your subnet, you should continue with migration as soon as possible.
-During migration, the following events will occur:
+During migration, which requires a service window of up to three hours, the following events will occur:
- The existing App Service Environment is shut down and replaced by the new App Service Environment v3. - All App Service plans in the App Service Environment are converted from Isolated to Isolated v2.-- All of the apps that are on your App Service Environment are temporarily down. You should expect about one hour of downtime.
+- All of the apps that are on your App Service Environment are temporarily down. You should expect about one hour of downtime during this period.
- If you can't support downtime, see [migration-alternatives](migration-alternatives.md#guidance-for-manual-migration). - The public addresses that are used by the App Service Environment will change to the IPs identified during the previous step.
There's no cost to migrate your App Service Environment. You'll stop being charg
- **What if migrating my App Service Environment is not currently supported?** You won't be able migrate using the migration feature at this time. If you have an unsupported environment and want to migrate immediately, see the [manual migration options](migration-alternatives.md). This doc will be updated as additional regions and supported scenarios become available. - **Will I experience downtime during the migration?**
- Yes, you should expect about one hour of downtime during the migration step so plan accordingly. If downtime isn't an option for you, see the [manual migration options](migration-alternatives.md).
+ Yes, you should expect about one hour of downtime within the three-hour service window of the migration step, so plan accordingly. If downtime isn't an option for you, see the [manual migration options](migration-alternatives.md).
- **Will I need to do anything to my apps after the migration to get them running on the new App Service Environment?** No, all of your apps running on the old environment will be automatically migrated to the new environment and run like before. No user input is needed. - **What if my App Service Environment has a custom domain suffix?**
app-service Nat Gateway Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/networking/nat-gateway-integration.md
az network vnet subnet update --resource-group [myResourceGroup] --vnet-name [my
The same NAT gateway can be used across multiple subnets in the same Virtual Network allowing a NAT gateway to be used across multiple apps and App Service plans.
-NAT gateway supports both public IP addresses and public IP prefixes. A NAT gateway can support up to 16 IP addresses across individual IP addresses and prefixes. Each IP address allocates 64,000 ports (SNAT ports) allowing up to 1M available ports. Learn more in the [Scaling section](../../virtual-network/nat-gateway/nat-gateway-resource.md#scale-nat-gateway) of NAT gateway.
+NAT gateway supports both public IP addresses and public IP prefixes. A NAT gateway can support up to 16 IP addresses across individual IP addresses and prefixes. Each IP address allocates 64,512 ports (SNAT ports) allowing up to 1M available ports. Learn more in the [Scaling section](../../virtual-network/nat-gateway/nat-gateway-resource.md#scale-nat-gateway) of NAT gateway.
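+
+For illustration, a minimal sketch (placeholder names) of creating a NAT gateway backed by a public IP prefix and associating it with the integration subnet:
+
+```azurecli
+# Hypothetical example: create a public IP prefix, a NAT gateway that uses it,
+# and attach the gateway to the subnet used for virtual network integration.
+az network public-ip prefix create --resource-group myResourceGroup --name myPublicIPPrefix --length 31
+az network nat gateway create --resource-group myResourceGroup --name myNATgateway --public-ip-prefixes myPublicIPPrefix
+az network vnet subnet update --resource-group myResourceGroup --vnet-name myVnet --name myIntegrationSubnet --nat-gateway myNATgateway
+```
+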
## Next steps
app-service Overview Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-managed-identity.md
The **IDENTITY_ENDPOINT** is a local URL from which your app can request tokens.
> | Parameter name | In | Description | > |-|--|--| > | resource | Query | The Azure AD resource URI of the resource for which a token should be obtained. This could be one of the [Azure services that support Azure AD authentication](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication) or any other resource URI. |
-> | api-version | Query | The version of the token API to be used. Use "2019-08-01" or later. |
+> | api-version | Query | The version of the token API to be used. Use `2019-08-01`. |
> | X-IDENTITY-HEADER | Header | The value of the IDENTITY_HEADER environment variable. This header is used to help mitigate server-side request forgery (SSRF) attacks. | > | client_id | Query | (Optional) The client ID of the user-assigned identity to be used. Cannot be used on a request that includes `principal_id`, `mi_res_id`, or `object_id`. If all ID parameters (`client_id`, `principal_id`, `object_id`, and `mi_res_id`) are omitted, the system-assigned identity is used. | > | principal_id | Query | (Optional) The principal ID of the user-assigned identity to be used. `object_id` is an alias that may be used instead. Cannot be used on a request that includes client_id, mi_res_id, or object_id. If all ID parameters (`client_id`, `principal_id`, `object_id`, and `mi_res_id`) are omitted, the system-assigned identity is used. |
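
For example, a minimal sketch (assuming a bash session inside the app and Azure Key Vault as the target resource) of requesting a token with these parameters:

```bash
# IDENTITY_ENDPOINT and IDENTITY_HEADER are injected by App Service.
curl "${IDENTITY_ENDPOINT}?resource=https://vault.azure.net&api-version=2019-08-01" \
  -H "X-IDENTITY-HEADER: ${IDENTITY_HEADER}"
```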
app-service Tutorial Python Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-python-postgresql-app.md
ms.devlang: python Last updated 03/09/2022-+ # Deploy a Python (Django or Flask) web app with PostgreSQL in Azure
To configure environment variables for the web app from VS Code, you must have t
Having issues? Refer first to the [Troubleshooting guide](configure-language-python.md#troubleshooting), otherwise, [let us know](https://aka.ms/DjangoCLITutorialHelp).
-> [!NOTE]
-> If you want to try an alternative approach to connect your app to the Postgres database in Azure, see the [Service Connector version](../service-connector/tutorial-django-webapp-postgres-cli.md) of this tutorial. Service Connector is a new Azure service that is currently in public preview. [Section 4.2](../service-connector/tutorial-django-webapp-postgres-cli.md#42-configure-environment-variables-to-connect-the-database) of that tutorial introduces a simplified process for creating the connection.
- ## 6 - Deploy your application code to Azure Azure App service supports multiple methods to deploy your application code to Azure including support for GitHub Actions and all major CI/CD tools. This article focuses on how to deploy your code from your local workstation to Azure.
application-gateway Disabled Listeners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/disabled-listeners.md
Title: Identifying and fixing a disabled listener
+ Title: Understanding disabled listeners
description: The article explains the details of a disabled listener and ways to resolve the problem.
-# Identifying and fixing a disabled listener on your gateway
+# Understanding disabled listeners
The SSL/TLS certificates for Azure Application GatewayΓÇÖs listeners can be referenced from a customerΓÇÖs Key Vault resource. Your application gateway must always have access to such linked key vault resource and its certificate object to ensure smooth operations of the TLS termination feature and the overall health of the gateway resource.
application-gateway Tutorial Ssl Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-ssl-cli.md
az network public-ip create \
--resource-group myResourceGroupAG \ --name myAGPublicIPAddress \ --allocation-method Static \
- --sku Standard
+ --sku Standard \
+ --location eastus
``` ## Create the application gateway
az group delete --name myResourceGroupAG --location eastus
## Next steps
-[Create an application gateway that hosts multiple web sites](./tutorial-multiple-sites-cli.md)
+[Create an application gateway that hosts multiple web sites](./tutorial-multiple-sites-cli.md)
applied-ai-services Managed Identities Secured Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/managed-identities-secured-access.md
+
+ Title: "Configure secure access with managed identities and private endpoints"
+
+description: Learn how to configure secure communications between Form Recognizer and other Azure Services.
+++++ Last updated : 05/23/2022+++
+# Configure secure access with managed identities and private endpoints
+
+This how-to guide will walk you through the process of enabling secure connections for your Form Recognizer resource. You can secure the following connections:
+
+* Communication between a client application within a Virtual Network (VNET) and your Form Recognizer resource.
+
+* Communication between Form Recognizer Studio or the sample labeling tool (FOTT) and your Form Recognizer resource.
+
+* Communication between your Form Recognizer resource and a storage account (needed when training a custom model).
+
+You'll be setting up your environment to secure the resources:
+
+ :::image type="content" source="media/managed-identities/secure-config.png" alt-text="Screenshot of secure configuration with managed identity and private endpoints.":::
+
+## Prerequisites
+
+To get started, you'll need:
+
+* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/)ΓÇöif you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
+
+* A [**Form Recognizer**](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) or [**Cognitive Services**](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource in the Azure portal. For detailed steps, _see_ [Create a Cognitive Services resource using the Azure portal](../../cognitive-services/cognitive-services-apis-create-account.md?tabs=multiservice%2cwindows).
+
+* An [**Azure blob storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM) in the same region as your Form Recognizer resource. You'll create containers to store and organize your blob data within your storage account.
+
+* An [**Azure virtual network**](https://portal.azure.com/#create/Microsoft.VirtualNetwork-ARM) in the same region as your Form Recognizer resource. You'll create a virtual network to deploy your application resources to train models and analyze documents.
+
+* (Optional) An [**Azure data science VM**](https://portal.azure.com/#create/Microsoft.VirtualNetwork-ARM) deployed in the virtual network to test the secure connections being established.
+
+## Configure resources
+
+Configure each of the resources to ensure that the resources can communicate with each other:
+
+* Configure the Form Recognizer Studio to use the newly created Form Recognizer resource by accessing the settings page and selecting the resource.
+
+* Validate that the configuration works by selecting the Read API and analyzing a sample document. If the resource was configured correctly, the request will successfully complete.
+
+* Add a training dataset to a container in the Storage account you created.
+
+* Select the custom model tile to create a custom project. Ensure that you select the same Form Recognizer resource and the storage account you created in the previous step.
+
+* Select the container with the training dataset you uploaded in the previous step. Ensure that if the training dataset is within a folder, the folder path is set appropriately.
+
+* If you have the required permissions, the Studio will set the CORS setting required to access the storage account. If you don't have the permissions, you'll need to ensure that the CORS settings are configured on the storage account before you can proceed (a CLI sketch follows this list).
+
+* Validate that the Studio is configured to access your training data. If you can see your documents in the labeling experience, all the required connections have been established.
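+
+If you need to configure CORS yourself, a minimal Azure CLI sketch follows; the account name is a placeholder, and the origin shown assumes you're using the hosted Form Recognizer Studio.
+
+```azurecli
+# Hypothetical example: allow the Form Recognizer Studio origin to access
+# blobs in the training storage account.
+az storage cors add \
+  --account-name <storage-account-name> \
+  --services b \
+  --methods DELETE GET HEAD MERGE OPTIONS POST PUT \
+  --origins "https://formrecognizer.appliedai.azure.com" \
+  --allowed-headers "*" \
+  --exposed-headers "*" \
+  --max-age 200
+```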
+
+You now have a working implementation of all the components needed to build a Form Recognizer solution with the default security model:
+
+ :::image type="content" source="media/managed-identities/default-config.png" alt-text="Screenshot of default security configuration.":::
+
+Next, you'll complete the following steps:
+
+* Set up managed identity on the Form Recognizer resource.
+
+* Secure the storage account to restrict traffic from only specific virtual networks and IP addresses.
+
+* Configure the Form Recognizer managed identity to communicate with the storage account.
+
+* Disable public access to the Form Recognizer resource and create a private endpoint to make it accessible from the virtual network.
+
+* Add a private endpoint for the storage account in a selected virtual network.
+
+* Validate that you can train models and analyze documents from within the virtual network.
+
+## Set up managed identity for Form Recognizer
+
+Navigate to the Form Recognizer resource in the Azure portal and select the **Identity** tab. Toggle the **System assigned** managed identity to **On** and save the changes:
+
+ :::image type="content" source="media/managed-identities/v2-fr-mi.png" alt-text="Screenshot of configure managed identity.":::
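+
+If you prefer scripting, a minimal sketch (placeholder names, and assuming your Azure CLI version includes the `az cognitiveservices account identity` commands) for enabling the system-assigned identity:
+
+```azurecli
+# Hypothetical example: enable the system-assigned managed identity on the
+# Form Recognizer (Cognitive Services) resource.
+az cognitiveservices account identity assign \
+  --name <form-recognizer-name> \
+  --resource-group <resource-group>
+```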
+
+## Secure the Storage account to limit traffic
+
+Start configuring secure communications by navigating to the **Networking** tab on your **Storage account** in the Azure portal.
+
+1. Under **Firewalls and virtual networks**, choose **Enabled from selected virtual networks and IP addresses** from the **Public network access** list.
+
+1. Ensure that **Allow Azure services on the trusted services list to access this storage account** is selected from the **Exceptions** list.
+
+1. **Save** your changes.
+
+ :::image type="content" source="media/managed-identities/v2-stg-firewall.png" alt-text="Screenshot of configure storage firewall.":::
+
+> [!NOTE]
+>
+> Your storage account won't be accessible from the public internet.
+>
+> Refreshing the custom model labeling page in the Studio will result in an error message.
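+
+The same network rules can be scripted; a minimal sketch with placeholder names:
+
+```azurecli
+# Hypothetical example: deny public traffic by default while keeping the
+# trusted Azure services exception enabled.
+az storage account update \
+  --name <storage-account-name> \
+  --resource-group <resource-group> \
+  --default-action Deny \
+  --bypass AzureServices
+```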
+
+## Enable access to storage from Form Recognizer
+
+To ensure that the Form Recognizer resource can access the training dataset, you'll need to add a role assignment for the managed identity that was created earlier.
+
+1. Staying on the storage account window in the Azure portal, navigate to the **Access Control (IAM)** tab in the left navigation bar.
+
+1. Select the **Add role assignment** button.
+
+ :::image type="content" source="media/managed-identities/v2-stg-role-assign-role.png" alt-text="Screenshot of add role assignment window.":::
+
+1. On the **Role** tab, search for and select the **Storage Blob Data Reader** permission and select **Next**.
+
+ :::image type="content" source="media/managed-identities/v2-stg-role-assignment.png" alt-text="Screenshot of choose a role tab.":::
+
+1. On the **Members** tab, select the **Managed identity** option and choose **+ Select members**
+
+1. On the **Select managed identities** dialog window, select the following options:
+
+ * **Subscription**. Select your subscription.
+
+ * **Managed Identity**. Select **Form Recognizer**.
+
+ * **Select**. Choose the Form Recognizer resource you enabled with a managed identity.
+
+ :::image type="content" source="media/managed-identities/v2-stg-role-assign-resource.png" alt-text="Screenshot of managed identities dialog window.":::
+
+1. **Close** the dialog window.
+
+1. Finally, select **Review + assign** to save your changes.
+
+Great! You've configured your Form Recognizer resource to use a managed identity to connect to a storage account.
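+
+If you prefer the CLI, a minimal sketch (placeholder names) of the same role assignment:
+
+```azurecli
+# Hypothetical example: grant the Form Recognizer system-assigned identity
+# the "Storage Blob Data Reader" role on the training storage account.
+principalId=$(az cognitiveservices account show \
+  --name <form-recognizer-name> --resource-group <resource-group> \
+  --query identity.principalId --output tsv)
+storageId=$(az storage account show \
+  --name <storage-account-name> --resource-group <resource-group> \
+  --query id --output tsv)
+az role assignment create \
+  --assignee-object-id $principalId \
+  --assignee-principal-type ServicePrincipal \
+  --role "Storage Blob Data Reader" \
+  --scope $storageId
+```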
+
+> [!TIP]
+>
+> When you try the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio), you'll see the READ API and other prebuilt models don't require storage access to process documents. However, training a custom model requires additional configuration because the Studio can't directly communicate with a storage account.
+ > You can enable storage access by selecting **Add your client IP address** from the **Networking** tab of the storage account to configure your machine to access the storage account via IP allowlisting.
+
+## Configure private endpoints for access from VNETs
+
+When you connect to resources from a virtual network, adding private endpoints will ensure both the storage account and the Form Recognizer resource are accessible from the virtual network.
+
+Next, you'll configure the virtual network to ensure that only resources within the virtual network, or traffic routed through the network, will have access to the Form Recognizer resource and the storage account.
+
+### Enable your virtual network and private endpoints
+
+1. In the Azure portal, navigate to your Form Recognizer resource.
+
+1. Select the **Networking** tab from the left navigation bar.
+
+1. Enable the **Selected Networking and Private Endpoints** option from the **Firewalls and virtual networks** tab and select **Save**.
+
+> [!NOTE]
+>
+> If you try accessing any of the Form Recognizer Studio features, you'll see an access denied message. To enable access from the Studio on your machine, select the **client IP address** checkbox and **Save** to restore access.
+
+ :::image type="content" source="media/managed-identities/v2-fr-network.png" alt-text="Screenshot showing how to disable public access to Form Recognizer.":::
+
+### Configure your private endpoint
+
+1. Navigate to the **Private endpoint connections** tab and select the **+ Private endpoint**. You'll be
+navigated to the **Create a private endpoint** dialog page.
+
+1. On the **Create private endpoint** dialog page, select the following options:
+
+ * **Subscription**. Select your billing subscription.
+
+ * **Resource group**. Select the appropriate resource group.
+
+ * **Name**. Enter a name for your private endpoint.
+
+ * **Region**. Select the same region as your virtual network.
+
+ * Select **Next: Resource**.
+
+ :::image type="content" source="media/managed-identities/v2-fr-private-end-basics.png" alt-text="Screenshot showing how to set up a private endpoint.":::
+
+### Configure your virtual network
+
+1. On the **Resource** tab, accept the default values and select **Next: Virtual Network**.
+
+1. On the **Virtual Network** tab, ensure that the virtual network you created is selected in the **Virtual network** list.
+
+1. If you have multiple subnets, select the subnet where you want the private endpoint to connect. Accept the default value to **Dynamically allocate IP address**.
+
+1. Select **Next: DNS**.
+
+1. Accept the default value **Yes** to **integrate with private DNS zone**.
+
+ :::image type="content" source="media/managed-identities/v2-fr-private-end-vnet.png" alt-text="Screenshot showing how to configure private endpoint":::
+
+1. Accept the remaining defaults and select **Next: Tags**.
+
+1. Select **Next: Review + create**.
+
+Well done! Your Form Recognizer resource is now accessible only from the virtual network and from any IP addresses in the IP allowlist.
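+
+For reference, a minimal Azure CLI sketch of the same private endpoint; the names are placeholders, `account` is assumed to be the sub-resource (group ID) for Cognitive Services accounts, and the private DNS zone configuration handled by the portal isn't shown.
+
+```azurecli
+# Hypothetical example: create a private endpoint for the Form Recognizer
+# resource in an existing subnet.
+frId=$(az cognitiveservices account show \
+  --name <form-recognizer-name> --resource-group <resource-group> \
+  --query id --output tsv)
+az network private-endpoint create \
+  --name <endpoint-name> \
+  --resource-group <resource-group> \
+  --vnet-name <vnet-name> \
+  --subnet <subnet-name> \
+  --private-connection-resource-id $frId \
+  --group-id account \
+  --connection-name <connection-name>
+```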
+
+### Configure private endpoints for storage
+
+Navigate to your **storage account** on the Azure portal.
+
+1. Select the **Networking** tab from the left navigation menu.
+
+1. Select the **Private endpoint connections** tab.
+
+1. Choose add **+ Private endpoint**.
+
+1. Provide a name and choose the same region as the virtual network.
+
+1. Select **Next: Resource**.
+
+ :::image type="content" source="media/managed-identities/v2-stg-private-end-basics.png" alt-text="Screenshot showing how to create a private endpoint":::
+
+1. On the **Resource** tab, select **blob** from the **Target sub-resource** list.
+
+1. Select **Next: Virtual Network**.
+
+ :::image type="content" source="media/managed-identities/v2-stg-private-end-resource.png" alt-text="Screenshot showing how to configure a private endpoint for a blob.":::
+
+1. Select the **Virtual network** and **Subnet**. Make sure **Enable network policies for all private endpoints in this subnet** is selected and the **Dynamically allocate IP address** is enabled.
+
+1. Select **Next: DNS**.
+
+1. Make sure that **Yes** is enabled for **Integrate with private DNS zone**.
+
+1. Select **Next: Tags**.
+
+1. Select **Next: Review + create**.
+
+Great work! You now have all the connections between the Form Recognizer resource and storage configured to use managed identities.
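+
+A corresponding sketch for the storage account's blob sub-resource (placeholder names; DNS zone setup not shown):
+
+```azurecli
+# Hypothetical example: create a private endpoint that targets the blob
+# sub-resource of the training storage account.
+storageId=$(az storage account show \
+  --name <storage-account-name> --resource-group <resource-group> \
+  --query id --output tsv)
+az network private-endpoint create \
+  --name <storage-endpoint-name> \
+  --resource-group <resource-group> \
+  --vnet-name <vnet-name> \
+  --subnet <subnet-name> \
+  --private-connection-resource-id $storageId \
+  --group-id blob \
+  --connection-name <storage-connection-name>
+```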
+
+> [!NOTE]
+> The resources are only accessible from the virtual network.
+>
+> Studio access and analyze requests to your Form Recognizer resource will fail unless the request originates from the virtual network or is routed via the virtual network.
+
+## Validate your deployment
+
+To validate your deployment, you can deploy a virtual machine (VM) to the virtual network and connect to the resources.
+
+1. Configure a [Data Science VM](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.dsvm-win-2019?tab=Overview) in the virtual network.
+
+1. Remotely connect to the VM from your desktop and launch a browser session to access Form Recognizer Studio.
+
+1. Analyze requests and training operations should now complete successfully.
+
+That's it! You can now configure secure access for your Form Recognizer resource with managed identities and private endpoints.
+
+## Common error messages
+
+* **Failed to access Blob container**:
+
+ :::image type="content" source="media/managed-identities/cors-error.png" alt-text="Screenshot of error message when CORS config is required":::
+
+ **Resolution**: [Configure CORS](quickstarts/try-v3-form-recognizer-studio.md#prerequisites-for-new-users).
+
+* **AuthorizationFailure**:
+
+ :::image type="content" source="media/managed-identities/auth-failure.png" alt-text="Screenshot of authorization failure error.":::
+
+ **Resolution**: Ensure that there's a network line-of-sight between the computer accessing the form recognizer studio and the storage account. For example, you may need to add the client IP address in the storage account's networking tab.
+
+* **ContentSourceNotAccessible**:
+
+ :::image type="content" source="media/managed-identities/content-source-error.png" alt-text="Screenshot of content source not accessible error.":::
+
+ **Resolution**: Make sure you've given your Form Recognizer managed identity the role of **Storage Blob Data Reader** and enabled **Trusted services** access or **Resource instance** rules on the networking tab.
+
+* **AccessDenied**:
+
+ :::image type="content" source="media/managed-identities/access-denied.png" alt-text="Screenshot of a access denied error.":::
+
+  **Resolution**: Check to make sure there's connectivity between the computer accessing Form Recognizer Studio and the Form Recognizer service. For example, you may need to add the client IP address to the Form Recognizer service's networking tab. Equivalent Azure CLI commands for these resolutions are sketched after this list.
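+
+The resolutions above can also be applied from the command line. The following Azure CLI commands are a hedged sketch for the **AuthorizationFailure**, **ContentSourceNotAccessible**, and **AccessDenied** cases; the resource names, resource group, IP address, and identity ID are placeholders, and you should confirm the exact rule or role your setup needs:
+
+```console
+# Allow your client IP address through the storage account and Form Recognizer networking rules.
+az storage account network-rule add \
+    --account-name <storage-account> \
+    --resource-group <resource-group> \
+    --ip-address <client-ip-address>
+
+az cognitiveservices account network-rule add \
+    --name <form-recognizer-resource> \
+    --resource-group <resource-group> \
+    --ip-address <client-ip-address>
+
+# Grant the Form Recognizer managed identity read access to the blob data.
+az role assignment create \
+    --assignee <managed-identity-object-id> \
+    --role "Storage Blob Data Reader" \
+    --scope <storage-account-resource-id>
+```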
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Access Azure Storage from a web app using managed identities](../../app-service/scenario-secure-app-access-storage.md?bc=%2fazure%2fapplied-ai-services%2fform-recognizer%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fapplied-ai-services%2fform-recognizer%2ftoc.json)
+
applied-ai-services Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/managed-identities.md
Previously updated : 02/22/2022 Last updated : 05/23/2022
You need to grant Form Recognizer access to your storage account before it can c
That's it! You've completed the steps to enable a system-assigned managed identity. With managed identity and Azure RBAC, you granted Form Recognizer specific access rights to your storage resource without having to manage credentials such as SAS tokens.
-## Learn more about managed identity
-
+## Next steps
> [!div class="nextstepaction"]
-> [Access Azure Storage form a web app using managed identities](../../app-service/scenario-secure-app-access-storage.md?bc=%2fazure%2fapplied-ai-services%2fform-recognizer%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fapplied-ai-services%2fform-recognizer%2ftoc.json)
+> [Configure secure access with managed identities and private endpoints](managed-identities-secured-access.md)
applied-ai-services Try V3 Java Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-java-sdk.md
In this quickstart you'll use following features to analyze and extract data and
* If you aren't using VS Code, make sure you have the following installed in your development environment:
- * A [**Java Development Kit** (JDK)](https://www.oracle.com/java/technologies/downloads/) version 8 or later.
+ * A [**Java Development Kit** (JDK)](https://wiki.openjdk.java.net/display/jdk8u) version 8 or later. For more information, *see* [supported Java Versions and update schedule](/azure/developer/java/fundamentals/java-support-on-azure#supported-java-versions-and-update-schedule).
* [**Gradle**](https://gradle.org/), version 6.8 or later.
applied-ai-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/service-limits.md
Previously updated : 05/09/2022 Last updated : 05/23/2022 # Form Recognizer service Quotas and Limits
-This article contains a quick reference and the **detailed description** of Azure Form Recognizer service Quotas and Limits for all [pricing tiers](https://azure.microsoft.com/pricing/details/form-recognizer/). It also contains some best practices to avoid request throttling.
+This article contains a quick reference and the **detailed description** of Azure Form Recognizer service Quotas and Limits for all [pricing tiers](https://azure.microsoft.com/pricing/details/form-recognizer/). It also contains some best practices to avoid request throttling.
For the usage with [Form Recognizer SDK](quickstarts/try-v3-csharp-sdk.md), [Form Recognizer REST API](quickstarts/try-v3-rest-api.md), [Form Recognizer Studio](quickstarts/try-v3-form-recognizer-studio.md) and [Sample Labeling Tool](https://fott-2-1.azurewebsites.net/).

| Quota | Free (F0)<sup>1</sup> | Standard (S0) |
|--|--|--|
| **Concurrent Request limit** | 1 | 15 (default value) |
-| Adjustable | No<sup>2</sup> | Yes<sup>2</sup> |
-| **Compose Model limit** | 5 | 100 (default value) |
-| Adjustable | No<sup>2</sup> | No<sup>2</sup> |
+| Adjustable | No | Yes<sup>2</sup> |
+| **Max document size** | 500 MB | 500 MB |
+| Adjustable | No | No |
+| **Max number of pages (Analysis)** | 2 | No limit |
+| Adjustable | No | No |
+| **Max size of labels file** | 10 MB | 10 MB |
+| Adjustable | No | No |
+| **Max size of OCR json response** | 500 MB | 500 MB |
+| Adjustable | No | No |
+
+# [Form Recognizer v3.0 (Preview)](#tab/v30)
+
+| Quota | Free (F0)<sup>1</sup> | Standard (S0) |
+|--|--|--|
+| **Compose Model limit** | 5 | 200 (default value) |
+| Adjustable | No | No |
+| **Training dataset size - Template** | 50 MB | 50 MB (default value) |
+| Adjustable | No | No |
+| **Training dataset size - Neural** | 1 GB | 1 GB (default value) |
+| Adjustable | No | No |
+| **Max number of pages (Training) - Template** | 500 | 500 (default value) |
+| Adjustable | No | No |
+| **Max number of pages (Training) - Neural** | 50,000 | 50,000 (default value) |
+| Adjustable | No | No |
| **Custom neural model train** | 10 per month | 10 per month |
-| Adjustable | No<sup>2</sup> | Yes<sup>2</sup> |
+| Adjustable | No | Yes<sup>3</sup> |
+
+<sup>3</sup> Open a support request to increase the monthly training limit.
+
+# [Form Recognizer v2.1 (GA)](#tab/v21)
+
+| Quota | Free (F0)<sup>1</sup> | Standard (S0) |
+|--|--|--|
+| **Compose Model limit** | 5 | 100 (default value) |
+| Adjustable | No | No |
+| **Training dataset size** | 50 MB | 50 MB (default value) |
+| Adjustable | No | No |
+| **Max number of pages (Training)** | 500 | 500 (default value) |
+| Adjustable | No | No |
+
+--
-<sup>1</sup> For **Free (F0)** pricing tier see also monthly allowances at the [pricing page](https://azure.microsoft.com/pricing/details/form-recognizer/).
+<sup>1</sup> For the **Free (F0)** pricing tier, see also monthly allowances at the [pricing page](https://azure.microsoft.com/pricing/details/form-recognizer/).<br/>
<sup>2</sup> See [best practices](#example-of-a-workload-pattern-best-practice), and [adjustment instructions](#create-and-submit-support-request).

## Detailed description, Quota adjustment, and best practices
-Before requesting a quota increase (where applicable), ensure that it is necessary. Form Recognizer service uses autoscaling to bring the required computational resources in "on-demand" and at the same time to keep the customer costs low, deprovision unused resources by not maintaining an excessive amount of hardware capacity. Every time your application receives a Response Code 429 ("Too many requests") while your workload is within the defined limits (see [Quotas and Limits quick reference](#form-recognizer-service-quotas-and-limits)) the most likely explanation is that the Service is scaling up to your demand and didn't reach the required scale yet, thus it doesn't immediately have enough resources to serve the request. This state is transient and shouldn't last long.
+
+Before requesting a quota increase (where applicable), ensure that it's necessary. The Form Recognizer service uses autoscaling to bring in the required computational resources on demand while keeping customer costs low, deprovisioning unused resources rather than maintaining an excessive amount of hardware capacity. If your application receives a Response Code 429 ("Too many requests") while your workload is within the defined limits (see [Quotas and Limits quick reference](#form-recognizer-service-quotas-and-limits)), the most likely explanation is that the service is still scaling up to meet your demand and doesn't yet have enough resources to serve the request. This state is transient and shouldn't last long.
### General best practices to mitigate throttling during autoscaling

To minimize issues related to throttling (Response Code 429), we recommend using the following techniques:

- Implement retry logic in your application
- Avoid sharp changes in the workload. Increase the workload gradually <br/> *Example.* Your application is using Form Recognizer and your current workload is 10 TPS (transactions per second). The next second you increase the load to 40 TPS (that is four times more). The Service immediately starts scaling up to fulfill the new load, but likely it will not be able to do it within a second, so some of the requests will get Response Code 429.
The next sections describe specific cases of adjusting quotas.
Jump to [Form Recognizer: increasing concurrent request limit](#create-and-submit-support-request)

### Increasing transactions per second request limit

By default the number of concurrent requests is limited to 15 transactions per second for a Form Recognizer resource. For the Standard pricing tier, this amount can be increased. Before submitting the request, ensure you're familiar with the material in [this section](#detailed-description-quota-adjustment-and-best-practices) and aware of these [best practices](#example-of-a-workload-pattern-best-practice).

Increasing the Concurrent Request limit does **not** directly affect your costs. Form Recognizer service uses a "Pay only for what you use" model. The limit defines how high the service may scale before it starts to throttle your requests.

The existing value of the Concurrent Request limit parameter is **not** visible via the Azure portal, command-line tools, or API requests. To verify the existing value, create an Azure Support Request.
-#### Have the required information ready:
+#### Have the required information ready
- Form Recognizer Resource ID
- Region
Existing value of Concurrent Request limit parameter is **not** visible via Azur
- **Location** (your endpoint Region)

#### Create and submit support request

Initiate the increase of transactions per second (TPS) limit for your resource by submitting the Support Request:

- Ensure you have the [required information](#have-the-required-information-ready)
Initiate the increase of transactions per second(TPS) limit for your resource by
- Select *New support request* (*Support + troubleshooting* group)
- A new window will appear with auto-populated information about your Azure Subscription and Azure Resource
- Enter *Summary* (like "Increase Form Recognizer TPS limit")
-- In *Problem type* select "Quota or usage validation"
+- In *Problem type*, select "Quota or usage validation"
- Select *Next: Solutions*
- Proceed further with the request creation
- Under the *Details* tab, enter the following in the *Description* field:
 - a note that the request is about **Form Recognizer** quota.
- - Provide a TPS expectation you would like to scale to meet.
+ - Provide a TPS expectation you would like to scale to meet.
 - Azure resource information you [collected](#have-the-required-information-ready).
- Complete entering the required information and select the *Create* button in the *Review + create* tab
- Note the support request number in Azure portal notifications. You'll be contacted shortly for further processing

## Example of a workload pattern best practice

This example presents the approach we recommend following to mitigate possible request throttling due to [Autoscaling being in progress](#detailed-description-quota-adjustment-and-best-practices). It isn't an "exact recipe", but merely a template we invite you to follow and adjust as necessary.
-Let us suppose that a Form Recognizer resource has the default limit set. Start the workload to submit your analyze requests. If you find that you're seeing frequent throttling with response code 429, start by backing off on the GET analyze response request and retry using the 2-3-5-8 pattern. In general it's recommended that you not call the get analyze response more than once every 2 seconds for a corresponding POST request.
+Let us suppose that a Form Recognizer resource has the default limit set. Start the workload to submit your analyze requests. If you find that you're seeing frequent throttling with response code 429, start by implementing an exponential backoff on the GET analyze response request, using a progressively longer wait time between retries for consecutive error responses (for example, a 2-5-13-34 pattern of delays between requests). In general, it's recommended to not call the get analyze response more than once every 2 seconds for a corresponding POST request.
If you find that you're being throttled on the number of POST requests for documents being submitted, consider adding a delay between the requests. If your workload requires a higher degree of concurrent processing, you'll then need to create a support request to increase your service limits on transactions per second. Generally, it's highly recommended to test the workload and the workload patterns before going to production.
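
A minimal Python sketch of this backoff pattern is shown below. It isn't tied to any particular Form Recognizer SDK, and the URL, headers, and delay sequence are illustrative assumptions; in practice you can also rely on the retry policies built into the SDKs:

```python
import time

import requests

# Progressively longer delays between retries, for example a 2-5-13-34 pattern.
RETRY_DELAYS_SECONDS = [2, 5, 13, 34]


def get_with_backoff(result_url: str, headers: dict) -> requests.Response:
    """GET an analyze-result URL, backing off progressively on HTTP 429 responses."""
    response = requests.get(result_url, headers=headers)
    for delay in RETRY_DELAYS_SECONDS:
        if response.status_code != 429:
            break
        time.sleep(delay)
        response = requests.get(result_url, headers=headers)
    # Either a non-429 response or the last throttled response after all retries.
    return response
```
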
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn about error codes and troubleshooting](preview-error-guide.md)
automation Update Agent Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/update-agent-issues.md
Results are shown on the page when they're ready. The checks sections show what'
### Operating system
-The operating system check verifies whether the Hybrid Runbook Worker is running one of the operating systems shown in the next table.
-
-|Operating system |Notes |
-|||
-|Windows Server 2012 and later |.NET Framework 4.6 or later is required. ([Download the .NET Framework](/dotnet/framework/install/guide-for-developers).)<br/> Windows PowerShell 5.1 is required. ([Download Windows Management Framework 5.1](https://www.microsoft.com/download/details.aspx?id=54616).) |
+The operating system check verifies whether the Hybrid Runbook Worker is running [one of the supported operating systems](/azure/automation/update-management/operating-system-requirements.md#windows-operating-system).
### .NET 4.6.2
automation Operating System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/operating-system-requirements.md
All operating systems are assumed to be x64. x86 is not supported for any operat
> [!NOTE] > Update assessment of Linux machines is only supported in certain regions as listed in the Automation account and Log Analytics workspace [mappings table](../how-to/region-mappings.md#supported-mappings).
+### Windows
+
|Operating system |Notes |
|||
|Windows Server 2019 (Datacenter/Standard including Server Core)<br><br>Windows Server 2016 (Datacenter/Standard excluding Server Core)<br><br>Windows Server 2012 R2 (Datacenter/Standard)<br><br>Windows Server 2012 | |
|Windows Server 2008 R2 (RTM and SP1 Standard)| Update Management supports assessments and patching for this operating system. The [Hybrid Runbook Worker](../automation-windows-hrw-install.md) is supported for Windows Server 2008 R2. |
+
+### Linux
+|Operating system |Notes |
+|||
|CentOS 6, 7, and 8 | Linux agents require access to an update repository. Classification-based patching requires `yum` to return security data that CentOS doesn't have in its RTM releases. For more information on classification-based patching on CentOS, see [Update classifications on Linux](view-update-assessments.md#linux). |
|Oracle Linux 6.x, 7.x, 8.x | Linux agents require access to an update repository. |
|Red Hat Enterprise 6, 7, and 8 | Linux agents require access to an update repository. |
availability-zones Az Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/az-overview.md
Some organizations require high availability of availability zones and protectio
## Azure regions with availability zones
-Azure provides the most extensive global footprint of any cloud provider and is rapidly opening new regions and availability zones. The following regions currently support availability zones.
-
-| Americas | Europe | Africa | Asia Pacific |
-|--|-||-|
-| Brazil South | France Central | South Africa North | Australia East |
-| Canada Central | Germany West Central | | Central India |
-| Central US | North Europe | | Japan East |
-| East US | Norway East | | Korea Central |
-| East US 2 | UK South | | Southeast Asia |
-| South Central US | West Europe | | East Asia |
-| US Gov Virginia | Sweden Central | | China North 3 |
-| West US 2 | Switzerland North* | | |
-| West US 3 | | | |
-
-\* To learn more about Availability Zones and available services support in these regions, contact your Microsoft sales or customer representative. For the upcoming regions that will support Availability Zones, see [Azure geographies](https://azure.microsoft.com/global-infrastructure/geographies/).
## Next steps
availability-zones Az Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/az-region.md
Azure strives to enable high resiliency across every service and offering. Runni
## Azure regions with availability zones
-Azure provides the most extensive global footprint of any cloud provider and is rapidly opening new regions and availability zones. The following regions currently support availability zones.
-
-| Americas | Europe | Africa | Asia Pacific |
-|--|-||-|
-| Brazil South | France Central | South Africa North | Australia East |
-| Canada Central | Germany West Central | | Central India |
-| Central US | North Europe | | Japan East |
-| East US | Norway East | | Korea Central |
-| East US 2 | UK South | | Southeast Asia |
-| South Central US | West Europe | | East Asia |
-| US Gov Virginia | Sweden Central | | China North 3 |
-| West US 2 | Switzerland North* | | |
-| West US 3 | | | |
-
-\* To learn more about Availability Zones and available services support in these regions, contact your Microsoft sales or customer representative. For the upcoming regions that will support Availability Zones, see [Azure geographies](https://azure.microsoft.com/global-infrastructure/geographies/).
For a list of Azure services that support availability zones by Azure region, see the [availability zones documentation](az-overview.md).
In the Product Catalog, always-available services are listed as "non-regional" s
| [Azure Load Balancer](../load-balancer/load-balancer-standard-availability-zones.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal](media/icon-zonal.svg) |
| [Azure Service Bus](../service-bus-messaging/service-bus-geo-dr.md#availability-zones) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
| [Azure Service Fabric](../service-fabric/service-fabric-cross-availability-zones.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal](media/icon-zonal.svg) |
-| [Azure Storage account](../storage/common/storage-redundancy.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
-| [Azure Storage: Azure Data Lake Storage](../storage/blobs/data-lake-storage-introduction.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
-| [Azure Storage: Disk Storage](../storage/common/storage-redundancy.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
-| Azure Storage:ΓÇ»[Blob Storage](../storage/common/storage-redundancy.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
-| Azure Storage:ΓÇ»[Managed Disks](../virtual-machines/managed-disks-overview.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal](media/icon-zonal.svg) |
-| [Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/scripts/cli-sample-zone-redundant-scale-set.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
-| [Azure Virtual Machines](../virtual-machines/windows/create-powershell-availability-zone.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
-| Virtual Machines:ΓÇ»[Av2-Series](../virtual-machines/windows/create-powershell-availability-zone.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
-| Virtual Machines:ΓÇ»[Bs-Series](../virtual-machines/windows/create-powershell-availability-zone.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
-| Virtual Machines:ΓÇ»[DSv2-Series](../virtual-machines/windows/create-powershell-availability-zone.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
-| Virtual Machines:ΓÇ»[DSv3-Series](../virtual-machines/windows/create-powershell-availability-zone.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
-| Virtual Machines:ΓÇ»[Dv2-Series](../virtual-machines/windows/create-powershell-availability-zone.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
-| Virtual Machines:ΓÇ»[Dv3-Series](../virtual-machines/windows/create-powershell-availability-zone.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
-| Virtual Machines:ΓÇ»[ESv3-Series](../virtual-machines/windows/create-powershell-availability-zone.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
-| Virtual Machines:ΓÇ»[Ev3-Series](../virtual-machines/windows/create-powershell-availability-zone.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
-| Virtual Machines:ΓÇ»[F-Series](../virtual-machines/windows/create-powershell-availability-zone.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
-| Virtual Machines:ΓÇ»[FS-Series](../virtual-machines/windows/create-powershell-availability-zone.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
-| Virtual Machines:ΓÇ»[Azure Compute Gallery](../virtual-machines/azure-compute-gallery.md#high-availability)| ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure Storage account](migrate-storage.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure Storage: Azure Data Lake Storage](migrate-storage.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure Storage: Disk Storage](migrate-storage.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure Storage: Blob Storage](migrate-storage.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure Storage: Managed Disks](migrate-storage.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal](media/icon-zonal.svg) |
+| [Azure Virtual Machine Scale Sets](migrate-vm.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| [Azure Virtual Machines](migrate-vm.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| Virtual Machines: [Av2-Series](migrate-vm.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| Virtual Machines: [Bs-Series](migrate-vm.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| Virtual Machines: [DSv2-Series](migrate-vm.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| Virtual Machines: [DSv3-Series](migrate-vm.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| Virtual Machines: [Dv2-Series](migrate-vm.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| Virtual Machines: [Dv3-Series](migrate-vm.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| Virtual Machines: [ESv3-Series](migrate-vm.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| Virtual Machines: [Ev3-Series](migrate-vm.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| Virtual Machines: [F-Series](migrate-vm.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| Virtual Machines: [FS-Series](migrate-vm.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| Virtual Machines: [Azure Compute Gallery](migrate-vm.md)| ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
| [Azure Virtual Network](../vpn-gateway/create-zone-redundant-vnet-gateway.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
| [Azure VPN Gateway](../vpn-gateway/about-zone-redundant-vnet-gateways.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
availability-zones Migrate Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/migrate-storage.md
+
+ Title: Migrate Azure Storage accounts to availability zone support
+description: Learn how to migrate your Azure storage accounts to availability zone support.
+++ Last updated : 05/09/2022+++++
+# Migrate Azure Storage accounts to availability zone support
+
+This guide describes how to migrate Azure Storage accounts from non-availability zone support to availability zone support. We'll take you through the different options for migration.
+
+Azure Storage always stores multiple copies of your data so that it is protected from planned and unplanned events, including transient hardware failures, network or power outages, and massive natural disasters. Redundancy ensures that your storage account meets the Service-Level Agreement (SLA) for Azure Storage even in the face of failures.
+
+Azure Storage offers the following types of replication:
+
+- Locally redundant storage (LRS)
+- Zone-redundant storage (ZRS)
+- Geo-redundant storage (GRS) or read-access geo-redundant storage (RA-GRS)
+- Geo-zone-redundant storage (GZRS) or read-access geo-zone-redundant storage (RA-GZRS)
+
+For an overview of each of these options, see [Azure Storage redundancy](../storage/common/storage-redundancy.md).
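+
+To confirm the current redundancy setting of an account before planning a migration, one option is the Azure CLI. This is a minimal example; the account and resource group names are placeholders:
+
+```console
+# Returns the current SKU, for example Standard_LRS or Standard_ZRS.
+az storage account show \
+    --name <storage-account> \
+    --resource-group <resource-group> \
+    --query sku.name \
+    --output tsv
+```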
+
+You can switch a storage account from one type of replication to any other type, but some scenarios are more straightforward than others. This article describes two basic options for migration. The first is a manual migration and the second is a live migration that you must initiate by contacting Microsoft support.
+
+## Prerequisites
+
+- Make sure your storage account(s) are in a region that supports ZRS. To determine whether or not the region supports ZRS, see [Zone-redundant storage](../storage/common/storage-redundancy.md#zone-redundant-storage).
+
+- Confirm that your storage account(s) is a general-purpose v2 account. If your storage account is v1, you'll need to upgrade it to v2. To learn how to upgrade your v1 account, see [Upgrade to a general-purpose v2 storage account](../storage/common/storage-account-upgrade.md).
+
+## Downtime requirements
+
+If you choose manual migration, downtime is required. If you choose live migration, there's no downtime requirement.
+
+## Migration option 1: Manual migration
+
+### When to use a manual migration
+
+Use a manual migration if:
+
+- You need the migration to be completed by a certain date.
+
+- You want to migrate your data to a ZRS storage account that's in a different region than the source account.
+
+- You want to migrate data from ZRS to LRS, GRS or RA-GRS.
+
+- Your storage account is a premium page blob or block blob account.
+
+- Your storage account includes data that's in the archive tier.
+
+### How to manually migrate Azure Storage accounts
+
+To manually migrate your Azure Storage accounts (a sample command sequence follows these steps):
+
+1. Create a new storage account in the primary region with Zone Redundant Storage (ZRS) as the redundancy setting.
+
+1. Copy the data from your existing storage account to the new storage account. To perform a copy operation, use one of the following options:
+
+ - **Option 1:** Copy data by using an existing tool such as [AzCopy](../storage/common/storage-use-azcopy-v10.md), [Azure Data factory](../data-factory/connector-azure-blob-storage.md?tabs=data-factory), one of the Azure Storage client libraries, or a reliable third-party tool.
+
+ - **Option 2:** If you're familiar with Hadoop or HDInsight, you can attach both the source storage account and destination storage account to your cluster. Then, parallelize the data copy process with a tool like [DistCp](https://hadoop.apache.org/docs/r1.2.1/distcp.html).
+
+1. Determine which type of replication you need and follow the directions in [Switch between types of replication](../storage/common/redundancy-migration.md#switch-between-types-of-replication).
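+
+A minimal sketch of these steps with the Azure CLI and AzCopy is shown below. The account names, resource group, region, and container are placeholders; substitute your own values and confirm that ZRS is available in your target region:
+
+```console
+# Create the destination account with ZRS redundancy.
+az storage account create \
+    --name <new-zrs-account> \
+    --resource-group <resource-group> \
+    --location <region> \
+    --kind StorageV2 \
+    --sku Standard_ZRS
+
+# Copy blob data from the source account with AzCopy (assumes you've signed in with
+# 'azcopy login' or appended SAS tokens to the URLs).
+azcopy copy \
+    "https://<source-account>.blob.core.windows.net/<container>" \
+    "https://<new-zrs-account>.blob.core.windows.net/<container>" \
+    --recursive
+```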
+
+## Migration option 2: Request a live migration
+
+### When to request a live migration
+
+Request a live migration if:
+
+- You want to migrate your storage account from LRS to ZRS in the primary region with no application downtime.
+
+- You want to migrate your storage account from ZRS to GZRS or RA-GZRS.
+
+- You don't need the migration to be completed by a certain date. While Microsoft handles your request for live migration promptly, there's no guarantee as to when a live migration will complete. Generally, the more data you have in your account, the longer it takes to migrate that data.
+
+### Live migration considerations
+
+During a live migration, you can access data in your storage account with no loss of durability or availability. The Azure Storage SLA is maintained during the migration process. There's no data loss associated with a live migration. Service endpoints, access keys, shared access signatures, and other account options remain unchanged after the migration.
+
+However, be aware of the following limitations:
+
+- The archive tier is not currently supported for ZRS accounts.
+
+- Unmanaged disks don't support ZRS or GZRS.
+
+- Only general-purpose v2 storage accounts support GZRS and RA-GZRS. GZRS and RA-GZRS support block blobs, page blobs (except for VHD disks), files, tables, and queues.
+
+- Live migration from LRS to ZRS isn't supported if the storage account contains Azure Files NFSv4.1 shares.
+
+- For premium performance, live migration is supported for premium file share accounts, but not for premium block blob or premium page blob accounts.
+
+### How to request a live migration
+
+[Request a live migration](../storage/common/redundancy-migration.md) by creating a new support request from the Azure portal.
+
+## Next steps
+
+For more guidance on moving an Azure Storage account to another region, see:
+
+> [!div class="nextstepaction"]
+> [Move an Azure Storage account to another region](../storage/common/storage-account-move.md).
+
+Learn more about:
+
+> [!div class="nextstepaction"]
+> [Azure Storage redundancy](../storage/common/storage-redundancy.md)
+
+> [!div class="nextstepaction"]
+> [Regions and Availability Zones in Azure](az-overview.md)
+
+> [!div class="nextstepaction"]
+> [Azure Services that support Availability Zones](az-region.md)
availability-zones Migrate Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/migrate-vm.md
+
+ Title: Migrate Azure Virtual Machines and Azure Virtual Machine Scale Sets to availability zone support
+description: Learn how to migrate your Azure Virtual Machines and Virtual Machine Scale Sets to availability zone support.
+++ Last updated : 04/21/2022++++
+
+# Migrate Virtual Machines and Virtual Machine Scale Sets to availability zone support
+
+This guide describes how to migrate Virtual Machines (VMs) and Virtual Machine Scale Sets (VMSS) from non-availability zone support to availability zone support. We'll take you through the different options for migration, including how you can use availability zone support for Disaster Recovery solutions.
+
+Virtual Machines (VMs) and Virtual Machine Scale Sets (VMSS) are zonal services, which means that VM resources can be deployed by using one of the following methods:
+
+- VM resources are deployed to a specific, self-selected availability zone to achieve more stringent latency or performance requirements.
+
+- VM resources are replicated to one or more zones within the region to improve the resiliency of the application and data in a High Availability (HA) architecture.
+
+When you migrate resources to availability zone support, we recommend that you select multiple zones for your new VMs and VMSS, to ensure high-availability of your compute resources.
+
+## Prerequisites
+
+To migrate to availability zone support, your VM SKUs must be available across the zones in your region. To check for VM SKU availability, use one of the following methods (a sample CLI query follows this list):
+
+- Use PowerShell to [Check VM SKU availability](../virtual-machines/windows/create-PowerShell-availability-zone.md#check-vm-sku-availability).
+- Use the Azure CLI to [Check VM SKU availability](../virtual-machines/linux/create-cli-availability-zone.md#check-vm-sku-availability).
+- Go to [Foundational Services](az-region.md#an-icon-that-signifies-this-service-is-foundational-foundational-services).
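+
+For example, the following Azure CLI query lists availability zone support for VM sizes in a region. The region and size filter are placeholders:
+
+```console
+az vm list-skus \
+    --location <region> \
+    --size <vm-size-or-prefix> \
+    --zone \
+    --output table
+```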
+
+## Downtime requirements
+
+All migration options mentioned in this article require downtime during deployment because zonal VMs are created across the availability zones.
+
+## Migration Option 1: Redeployment
+
+### When to use redeployment
+
+Use the redeployment option if you have good Infrastructure as Code (IaC) practices set up to manage infrastructure. The redeployment option gives you more control, and the ability to automate various processes within your deployment pipelines.
+
+### Redeployment considerations
+
+- When you redeploy your VM and VMSS resources, the underlying resources such as managed disk and IP address for the VM are created in the same availability zone. You must use a Standard SKU public IP address and load balancer to create zone-redundant network resources.
+
+- For zonal deployments that require reasonably low network latency and good performance between application tier and data tier, use [proximity placement groups](../virtual-machines/co-location.md). Proximity groups can force grouping of different VM resources under a single network spine. For an example of an SAP workload that uses proximity placement groups, see [Azure proximity placement groups for optimal network latency with SAP applications](../virtual-machines/workloads/sap/sap-proximity-placement-scenarios.md)
+
+### How to redeploy
+
+To redeploy, you'll need to recreate your VM and VMSS resources. To ensure high-availability of your compute resources, it's recommended that you select multiple zones for your new VMs and VMSS.
+
+To learn how to create VMs in an availability zone, see:
+
+- [Create VM using Azure CLI](../virtual-machines/linux/create-cli-availability-zone.md)
+- [Create VM using Azure PowerShell](../virtual-machines/windows/create-PowerShell-availability-zone.md)
+- [Create VM using Azure portal](../virtual-machines/create-portal-availability-zone.md?tabs=standard)
+
+To learn how to create VMSS in an availability zone, see [Create a virtual machine scale set that uses Availability Zones](../virtual-machine-scale-sets/virtual-machine-scale-sets-use-availability-zones.md).
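+
+As a brief illustration, the following Azure CLI commands create a zonal VM and a zone-spanning scale set. The resource names, image, and zones are placeholders, and the full set of parameters you need (size, networking, credentials) depends on your environment:
+
+```console
+# Create a VM pinned to availability zone 1.
+az vm create \
+    --resource-group <resource-group> \
+    --name <vm-name> \
+    --image <image> \
+    --zone 1
+
+# Create a scale set whose instances are spread across zones 1, 2, and 3.
+az vmss create \
+    --resource-group <resource-group> \
+    --name <scale-set-name> \
+    --image <image> \
+    --zones 1 2 3
+```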
+
+## Migration Option 2: Azure Resource Mover
+
+### When to use Azure Resource Mover
+
+Use Azure Resource Mover for an easy way to move VMs or encrypted VMs from one region without availability zones to another with availability zones. If you want to learn more about the benefits of using Azure Resource Mover, see [Why use Azure Resource Mover?](../resource-mover/overview.md#why-use-resource-mover).
+
+### Azure Resource Mover considerations
+
+When you use Azure Resource Mover, all keys and secrets are copied from the source key vault to the newly created destination key vault in your target region. All resources related to your customer-managed keys, such as Azure Key Vaults, disk encryption sets, VMs, disks, and snapshots, must be in the same subscription and region. Azure Key Vault's default availability and redundancy feature can't be used as the destination key vault for the moved VM resources, even if the target region is a secondary region to which your source key vault is replicated.
+
+### How to use Azure Resource Mover
+
+To learn how to move VMs to another region, see [Move Azure VMs to an availability zone in another region](../resource-mover/move-region-availability-zone.md)
+
+To learn how to move encrypted VMs to another region, see [Tutorial: Move encrypted Azure VMs across regions](../resource-mover/tutorial-move-region-encrypted-virtual-machines.md)
+
+## Disaster Recovery Considerations
+
+Typically, availability zones are used to deploy VMs in a High Availability configuration. They may be too close to each other to serve as a Disaster Recovery solution during a natural disaster. However, there are scenarios where availability zones can be used for Disaster Recovery. To learn more, see [Using Availability Zones for Disaster Recovery](../site-recovery/azure-to-azure-how-to-enable-zone-to-zone-disaster-recovery.md#using-availability-zones-for-disaster-recovery).
+
+The following requirements should be part of a disaster recovery strategy that helps your organization run its workloads during planned or unplanned outages across zones:
+
+- The source VM must already be a zonal VM, which means that it's placed in a logical zone.
+- You'll need to replicate your VM from one zone to another zone using Azure Site Recovery service.
+- Once your VM is replicated to another zone, you can follow steps to run a Disaster Recovery drill, fail over, reprotect, and failback.
+- To enable VM disaster recovery between availability zones, follow the instructions in [Enable Azure VM disaster recovery between availability zones](../site-recovery/azure-to-azure-how-to-enable-zone-to-zone-disaster-recovery.md).
+
+## Next steps
+
+Learn more about:
+
+> [!div class="nextstepaction"]
+> [Regions and Availability Zones in Azure](az-overview.md)
+
+> [!div class="nextstepaction"]
+> [Azure Services that support Availability Zones](az-region.md)
azure-app-configuration Concept Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-github-action.md
A GitHub Actions [workflow](https://docs.github.com/en/actions/learn-github-acti
The GitHub [documentation](https://docs.github.com/en/actions/learn-github-actions/introduction-to-github-actions) provides in-depth view of GitHub workflows and actions. ## Enable GitHub Actions in your repository
-To start using this GitHub action, go to your repository and select the **Actions** tab. Select **New workflow**, then **Set up a workflow yourself**. Finally, search the marketplace for ΓÇ£Azure App Configuration Sync.ΓÇ¥
+To start using this GitHub Action, go to your repository and select the **Actions** tab. Select **New workflow**, then **Set up a workflow yourself**. Finally, search the marketplace for ΓÇ£Azure App Configuration Sync.ΓÇ¥
> [!div class="mx-imgBorder"] > ![Select the Action tab](media/find-github-action.png)
jobs:
```

## Use strict sync
-By default the GitHub action does not enable strict mode, meaning that the sync will only add key-values from the configuration file to the App Configuration instance (no key-value pairs will be deleted). Enabling strict mode will mean key-value pairs that aren't in the configuration file are deleted from the App Configuration instance, so that it matches the configuration file. If you are syncing from multiple sources or using Azure Key Vault with App Configuration, you'll want to use different prefixes or labels with strict sync to avoid wiping out configuration settings from other files (see samples below).
+By default the GitHub Action does not enable strict mode, meaning that the sync will only add key-values from the configuration file to the App Configuration instance (no key-value pairs will be deleted). Enabling strict mode will mean key-value pairs that aren't in the configuration file are deleted from the App Configuration instance, so that it matches the configuration file. If you are syncing from multiple sources or using Azure Key Vault with App Configuration, you'll want to use different prefixes or labels with strict sync to avoid wiping out configuration settings from other files (see samples below).
```json on:
azure-arc Configure Transparent Data Encryption Manually https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/configure-transparent-data-encryption-manually.md
+
+ Title: Turn on transparent data encryption manually in Azure Arc-enabled SQL Managed Instance
+description: How-to guide to turn on transparent data encryption in an Azure Arc-enabled SQL Managed Instance
+++++++ Last updated : 05/22/2022+++
+# Enable transparent data encryption on Azure Arc-enabled SQL Managed Instance
+
+This article describes how to enable transparent data encryption on a database created in an Azure Arc-enabled SQL Managed Instance.
+
+## Prerequisites
+
+Before you proceed with this article, you must have an Azure Arc-enabled SQL Managed Instance resource created and have connected to it.
+
+- [An Azure Arc-enabled SQL Managed Instance created](./create-sql-managed-instance.md)
+- [Connect to Azure Arc-enabled SQL Managed Instance](./connect-managed-instance.md)
+
+## Turn on transparent data encryption on a database in Azure Arc-enabled SQL Managed Instance
+
+Turning on transparent data encryption in Azure Arc-enabled SQL Managed Instance follows the same steps as SQL Server on-premises. Follow the steps described in [SQL Server's transparent data encryption guide](/sql/relational-databases/security/encryption/transparent-data-encryption#enable-tde).
+
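+For reference, the SQL Server procedure boils down to T-SQL along the following lines. This is a minimal sketch rather than the complete guidance; the password, certificate name, and database name are placeholders:
+
+```sql
+USE master;
+GO
+-- Create a master key and a certificate to protect the database encryption key.
+CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<UseStrongPasswordHere>';
+CREATE CERTIFICATE MyServerCert WITH SUBJECT = 'TDE certificate';
+GO
+
+USE <database-name>;
+GO
+-- Create the database encryption key and turn on TDE for the database.
+CREATE DATABASE ENCRYPTION KEY
+WITH ALGORITHM = AES_256
+ENCRYPTION BY SERVER CERTIFICATE MyServerCert;
+GO
+ALTER DATABASE <database-name> SET ENCRYPTION ON;
+GO
+```
+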
+After creating the necessary credentials, it's highly recommended to back them up.
+
+## Back up a transparent data encryption credential from Azure Arc-enabled SQL Managed Instance
+
+When backing up from Azure Arc-enabled SQL Managed Instance, the credentials will be stored within the container. It isn't necessary to store the credentials on a persistent volume, but you may use the mount path for the data volume within the container if you'd like: `/var/opt/mssql/data`. Otherwise, the credentials will be stored in-memory in the container. Below is an example of backing up a certificate from Azure Arc-enabled SQL Managed Instance.
+
+> [!NOTE]
+> If the `kubectl cp` command is run from Windows, the command may fail when using absolute Windows paths. `kubectl` can mistake the drive in the path as a pod name. For example, `kubectl` might mistake `C` to be a pod name in `C:\folder`. Users can avoid this issue by using relative paths or removing the `C:` from the provided path while in the `C:` drive. This issue also applies to environment variables on Windows like `$HOME`.
+
+1. Back up the certificate from the container to `/var/opt/mssql/data`.
+
+ ```sql
+ USE master;
+ GO
+
+ BACKUP CERTIFICATE <cert-name> TO FILE = '<cert-path>'
+ WITH PRIVATE KEY ( FILE = '<private-key-path>',
+ ENCRYPTION BY PASSWORD = '<UseStrongPasswordHere>');
+ ```
+
+ Example:
+
+ ```sql
+ USE master;
+ GO
+
+ BACKUP CERTIFICATE MyServerCert TO FILE = '/var/opt/mssql/data/servercert.crt'
+ WITH PRIVATE KEY ( FILE = '/var/opt/mssql/data/servercert.key',
+ ENCRYPTION BY PASSWORD = '<UseStrongPasswordHere>');
+ ```
+
+2. Copy the certificate from the container to your file system.
+
+ ```console
+ kubectl cp --namespace <namespace> --container arc-sqlmi <pod-name>:<pod-certificate-path> <local-certificate-path>
+ ```
+
+ Example:
+
+ ```console
+ kubectl cp --namespace arc-ns --container arc-sqlmi sql-0:/var/opt/mssql/data/servercert.crt ./sqlcerts/servercert.crt
+ ```
+
+3. Copy the private key from the container to your file system.
+
+ ```console
+ kubectl cp --namespace <namespace> --container arc-sqlmi <pod-name>:<pod-private-key-path> <local-private-key-path>
+ ```
+
+ Example:
+
+ ```console
+ kubectl cp --namespace arc-ns --container arc-sqlmi sql-0:/var/opt/mssql/data/servercert.key ./sqlcerts/servercert.key
+ ```
+
+4. Delete the certificate and private key from the container.
+
+ ```console
+    kubectl exec -it --namespace <namespace> --container arc-sqlmi <pod-name> -- bash -c "rm <certificate-path> <private-key-path>"
+ ```
+
+ Example:
+
+ ```console
+ kubectl exec -it --namespace arc-ns --container arc-sqlmi sql-0 -- bash -c "rm /var/opt/mssql/data/servercert.crt /var/opt/mssql/data/servercert.key"
+ ```
+
+## Restore a transparent data encryption credential to Azure Arc-enabled SQL Managed Instance
+
+Similar to above, restore the credentials by copying them into the container and running the corresponding T-SQL afterwards.
+
+> [!NOTE]
+> If the `kubectl cp` command is run from Windows, the command may fail when using absolute Windows paths. `kubectl` can mistake the drive in the path as a pod name. For example, `kubectl` might mistake `C` to be a pod name in `C:\folder`. Users can avoid this issue by using relative paths or removing the `C:` from the provided path while in the `C:` drive. This issue also applies to environment variables on Windows like `$HOME`.
+
+1. Copy the certificate from your file system to the container.
+
+ ```console
+ kubectl cp --namespace <namespace> --container arc-sqlmi <local-certificate-path> <pod-name>:<pod-certificate-path>
+ ```
+
+ Example:
+
+ ```console
+ kubectl cp --namespace arc-ns --container arc-sqlmi ./sqlcerts/servercert.crt sql-0:/var/opt/mssql/data/servercert.crt
+ ```
+
+2. Copy the private key from your file system to the container.
+
+ ```console
+ kubectl cp --namespace <namespace> --container arc-sqlmi <local-private-key-path> <pod-name>:<pod-private-key-path>
+ ```
+
+ Example:
+
+ ```console
+ kubectl cp --namespace arc-ns --container arc-sqlmi ./sqlcerts/servercert.key sql-0:/var/opt/mssql/data/servercert.key
+ ```
+
+3. Create the certificate using file paths from `/var/opt/mssql/data`.
+
+ ```sql
+ USE master;
+ GO
+
+    CREATE CERTIFICATE <certificate-name>
+ FROM FILE = '<certificate-path>'
+ WITH PRIVATE KEY ( FILE = '<private-key-path>',
+ DECRYPTION BY PASSWORD = '<UseStrongPasswordHere>' );
+ ```
+
+ Example:
+
+ ```sql
+ USE master;
+ GO
+
+ CREATE CERTIFICATE MyServerCertRestored
+ FROM FILE = '/var/opt/mssql/data/servercert.crt'
+ WITH PRIVATE KEY ( FILE = '/var/opt/mssql/data/servercert.key',
+ DECRYPTION BY PASSWORD = '<UseStrongPasswordHere>' );
+ ```
+
+4. Delete the certificate and private key from the container.
+
+ ```console
+    kubectl exec -it --namespace <namespace> --container arc-sqlmi <pod-name> -- bash -c "rm <certificate-path> <private-key-path>"
+ ```
+
+ Example:
+
+ ```console
+ kubectl exec -it --namespace arc-ns --container arc-sqlmi sql-0 -- bash -c "rm /var/opt/mssql/data/servercert.crt /var/opt/mssql/data/servercert.key"
+ ```
+
+## Next steps
+
+[Transparent data encryption](/sql/relational-databases/security/encryption/transparent-data-encryption)
azure-arc Deploy Active Directory Connector Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-active-directory-connector-portal.md
+
+ Title: Tutorial ΓÇô Deploy Active Directory connector using Azure portal
+description: Tutorial to deploy an Active Directory connector using Azure portal
++++++ Last updated : 05/24/2022+++
+# Tutorial – Deploy Active Directory connector using Azure portal
+
+Active Directory (AD) connector is a key component to enable Active Directory authentication on Azure Arc-enabled SQL Managed Instances.
+
+This article explains how to deploy, manage, and delete an Active Directory (AD) connector in directly connected mode from the Azure portal.
+
+## Prerequisites
+
+For details about how to set up OU and AD account, go to [Deploy Azure Arc-enabled data services in Active Directory authentication - prerequisites](active-directory-prerequisites.md).
+
+Make sure you have the following deployed before you proceed with the steps in this article:
+
+- An Arc-enabled Azure Kubernetes cluster.
+- A data controller in directly connected mode.
+
+## Create a new AD connector
+
+1. Log in to [Azure portal](https://portal.azure.com).
+1. In the search resources field at the top of the portal, type **data controllers**, and select **Azure Arc data controllers**.
+
+The portal lists all the data controllers deployed in your selected Azure subscription.
+
+1. Select the data controller where you wish to add an AD connector.
+1. Under **Settings** select **Active Directory**. The portal shows the Active Directory connectors for this data controller.
+1. Select **+ Add Connector**, the portal presents an **Add Connector** interface.
+1. Under **Active Directory connector**
+ 1. Specify your **Connector name**.
+ 2. Choose the account provisioning type - either **Automatic** or **Manual**.
+
+The account provisioning type determines whether you deploy a customer-managed keytab AD connector or a system-managed keytab AD connector.
+
+### Create a new customer-managed keytab AD connector
+
+1. Click **Add Connector**.
+
+1. Choose the account provisioning type **Manual**.
+
+1. Set the editable fields for your connector:
+ - **Realm**: The name of the Active Directory (AD) domain in uppercase. For example *CONTOSO.COM*.
+ - **Nameserver IP address**: A comma-separated list of Active Directory DNS server IP addresses. For example: *10.10.10.11, 10.10.10.12*.
+ - **Netbios domain name**: Optional. The NETBIOS name of the Active Directory domain. For example *CONTOSO*. Defaults to the first label of realm.
+ - **DNS domain name**: Optional. The DNS domain name associated with the Active Directory domain. For example, *contoso.com*.
+ - **DNS replicas**: Optional. The number of replicas to deploy for the DNS proxy service. Defaults to `1`.
+ - **Prefer Kubernetes DNS for PTR lookups**: Optional. Check to set Kubernetes DNS for IP address lookup. Clear to use Active Directory DNS.
+
+ ![Screenshot of the portal interface to add customer managed keytab.](media/active-directory-deployment/add-ad-customer-managed-keytab-connector-portal.png)
+
+1. Click **Add Connector** to create a new customer-managed keytab AD connector.
+
+### Create a new system-managed keytab AD connector
+1. Click **Add Connector**.
+1. Choose the account provisioning type **Automatic**.
+1. Set the editable fields for your connector:
+ - **Realm**: The name of the Active Directory (AD) domain in uppercase. For example *CONTOSO.COM*.
+ - **Nameserver IP address**: A comma-separated list of Active Directory DNS server IP addresses. For example: *10.10.10.11, 10.10.10.12*.
+ - **OU distinguished name** The distinguished name of the Organizational Unit (OU) pre-created in the Active Directory (AD) domain. For example, `OU=arcou,DC=contoso,DC=com`.
+ - **Domain Service Account username** The username of the Domain Service Account in Active Directory.
+ - **Domain Service Account password** The password of the Domain Service Account in Active Directory.
+ - **Primary domain controller hostname (Optional)** The hostname of the primary Active Directory domain controller. For example, `azdc01.contoso.com`.
+ - **Secondary domain controller hostname (Optional)** The secondary domain controller hostname.
+ - **Netbios domain name**: Optional. The NETBIOS name of the Active Directory domain. For example *CONTOSO*. Defaults to the first label of realm.
+ - **DNS domain name**: Optional. The DNS domain name associated with the Active Directory domain. For example, *contoso.com*.
+ - **DNS replicas (Optional)** The number of replicas to deploy for the DNS proxy service. Defaults to `1`.
+ - **Prefer Kubernetes DNS for PTR lookups**: Optional. Check to set Kubernetes DNS for IP address lookup. Clear to use Active Directory DNS.
+
+ ![Screenshot of the portal interface to add system managed keytab.](media/active-directory-deployment/add-ad-system-managed-keytab-connector-portal.png)
+
+1. Click **Add Connector** to create a new system-managed keytab AD connector.
+
+## Edit an existing AD connector
+
+1. Select the AD connector that you want to edit. Select the ellipses (**...**), and then **Edit**. The portal presents an **Edit Connector** interface.
+
+1. You may update any editable fields. For example:
+ - **Primary domain controller hostname** The hostname of the primary Active Directory domain controller. For example, `azdc01.contoso.com`.
+ - **Secondary domain controller hostname** The secondary domain controller hostname.
+ - **Nameserver IP address**: A comma-separated list of Active Directory DNS server IP addresses. For example: *10.10.10.11, 10.10.10.12*.
+ - **DNS replicas** The number of replicas to deploy for the DNS proxy service. Defaults to `1`.
+ - **Prefer Kubernetes DNS for PTR lookups**: Check to set Kubernetes DNS for IP address lookup. Clear to use Active Directory DNS.
+
+1. Click on **Apply** for changes to take effect.
++
+## Delete an AD connector
+
+1. Select the ellipses (**...**) on the right of the Active Directory connector you would like to delete.
+1. Select **Delete**.
+
+To delete multiple AD connectors at one time:
+
+1. Select the checkbox at the beginning of the row for each AD connector you want to delete.
+
+ Alternatively, select the checkbox in the top row to select all the AD connectors in the table.
+
+1. Click **Delete** in the management bar to delete the AD connectors that you selected.
+
+## Next steps
+* [Tutorial – Deploy Active Directory connector using Azure CLI](deploy-active-directory-connector-cli.md)
+* [Tutorial – Deploy AD connector in customer-managed keytab mode](deploy-customer-managed-keytab-active-directory-connector.md)
+* [Tutorial – Deploy Active Directory connector in system-managed keytab mode](deploy-system-managed-keytab-active-directory-connector.md)
+* [Deploy Arc-enabled SQL Managed Instance with Active Directory Authentication](deploy-active-directory-sql-managed-instance.md).
azure-arc Limitations Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/limitations-managed-instance.md
description: Limitations of Azure Arc-enabled SQL Managed Instance
+
This article describes limitations of Azure Arc-enabled SQL Managed Instance.
-At this time, the business critical service tier is public preview. The general purpose service tier is generally available.
- ## Backup and restore ### Automated backups
azure-arc Managed Instance Business Continuity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-business-continuity-overview.md
description: Overview business continuity for Azure Arc-enabled SQL Managed Inst
+
Last updated 01/27/2022
-# Overview: Azure Arc-enabled SQL Managed Instance business continuity (preview)
+# Overview: Azure Arc-enabled SQL Managed Instance business continuity
Business continuity is a combination of people, processes, and technology that enables businesses to recover and continue operating in the event of disruptions. In hybrid scenarios there is a joint responsibility between Microsoft and the customer, such that the customer owns and manages the on-premises infrastructure while the software is provided by Microsoft.
-Business continuity for Azure Arc-enabled SQL Managed Instance is available as preview.
-- ## Features This overview describes the set of capabilities that come built-in with Azure Arc-enabled SQL Managed Instance and how you can leverage them to recover from disruptions.
azure-arc Managed Instance Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-disaster-recovery.md
description: Describes disaster recovery for Azure Arc-enabled SQL Managed Insta
+
Last updated 04/06/2022
-# Azure Arc-enabled SQL Managed Instance - disaster recovery (preview)
+# Azure Arc-enabled SQL Managed Instance - disaster recovery
To configure disaster recovery in Azure Arc-enabled SQL Managed Instance, set up failover groups. - ## Background The distributed availability groups used in Azure Arc-enabled SQL Managed Instance is the same technology that is in SQL Server. Because Azure Arc-enabled SQL Managed Instance runs on Kubernetes, there's no Windows failover cluster involved. For more information, see [Distributed availability groups](/sql/database-engine/availability-groups/windows/distributed-availability-groups).
azure-arc Managed Instance Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-features.md
description: Features and Capabilities of Azure Arc-enabled SQL Managed Instance
+
Azure Arc-enabled SQL Managed Instance share a common code base with the latest
|Feature|Azure Arc-enabled SQL Managed Instance| |-|-| |Always On failover cluster instance<sup>1</sup>| Not Applicable. Similar capabilities available.|
-|Always On availability groups<sup>2</sup>|Business critical service tier. In preview.|
+|Always On availability groups<sup>2</sup>|Business Critical service tier.|
|Basic availability groups <sup>2</sup>|Not Applicable. Similar capabilities available.|
-|Minimum replica commit availability group <sup>2</sup>|Business critical service tier. In preview.|
+|Minimum replica commit availability group <sup>2</sup>|Business Critical service tier.|
|Clusterless availability group|Yes| |Backup database | Yes - `COPY_ONLY` See [BACKUP - (Transact-SQL)](/sql/t-sql/statements/backup-transact-sql?view=azuresqldb-mi-current&preserve-view=true)| |Backup compression|Yes|
Azure Arc-enabled SQL Managed Instance supports various data tools that can help
| **Tool** | Azure Arc-enabled SQL Managed Instance| | | | |
-| Azure portal <sup>1</sup> | No |
+| Azure portal | Yes |
| Azure CLI | Yes | | [Azure Data Studio](/sql/azure-data-studio/what-is) | Yes | | Azure PowerShell | No |
Azure Arc-enabled SQL Managed Instance supports various data tools that can help
| [SQL Server PowerShell](/sql/relational-databases/scripting/sql-server-powershell) | Yes | | [SQL Server Profiler](/sql/tools/sql-server-profiler/sql-server-profiler) | Yes |
-<sup>1</sup> The Azure portal can be used to create, view, and delete Azure Arc-enabled SQL Managed Instances. Updates cannot be done through the Azure portal currently.
- [!INCLUDE [use-insider-azure-data-studio](includes/use-insider-azure-data-studio.md)] ### <a name="Unsupported"></a> Unsupported Features & Services
azure-arc Managed Instance High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-high-availability.md
+
-# High Availability with Azure Arc-enabled SQL Managed Instance (preview)
+# High Availability with Azure Arc-enabled SQL Managed Instance
Azure Arc-enabled SQL Managed Instance is deployed on Kubernetes as a containerized application. It uses Kubernetes constructs such as stateful sets and persistent storage to provide built-in health monitoring, failure detection, and failover mechanisms to maintain service health. For increased reliability, you can also configure Azure Arc-enabled SQL Managed Instance to deploy with extra replicas in a high availability configuration. Monitoring, failure detection, and automatic failover are managed by the Arc data services data controller, which provides this service without user intervention. The service sets up the availability group, configures database mirroring endpoints, adds databases to the availability group, and coordinates failover and upgrade. This document explores both types of high availability. - Azure Arc-enabled SQL Managed Instance provides different levels of high availability depending on whether the SQL managed instance was deployed as a *General Purpose* service tier or *Business Critical* service tier. ## High availability in General Purpose service tier
Azure Arc-enabled SQL Managed Instance availability groups has the same limitati
## Next steps Learn more about [Features and Capabilities of Azure Arc-enabled SQL Managed Instance](managed-instance-features.md)-
azure-arc Point In Time Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/point-in-time-restore.md
+ Last updated 03/01/2022
The backups are stored under `/var/opt/mssql/backups/archived/<dbname>/<datetime
Point-in-time restore to Azure Arc-enabled SQL Managed Instance has the following limitations: - Point-in-time restore of a whole Azure Arc-enabled SQL Managed Instance is not possible. -- An Azure Arc-enabled SQL managed instance that is deployed with high availability (preview) does not currently support point-in-time restore.
+- An Azure Arc-enabled SQL managed instance that is deployed with high availability does not currently support point-in-time restore.
- You can only restore to the same Azure Arc-enabled SQL managed instance. - Dropping and creating different databases with same names isn't handled properly at this time. - Providing a future date when executing the restore operation using ```--dry-run``` will result in an error
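As a hedged illustration of the `--dry-run` validation mentioned above, the following sketch assumes the `az sql midb-arc restore` command and parameter names from the `arcdata` CLI extension; verify them with `az sql midb-arc restore --help` before use.

```azurecli
# Hedged sketch: validate a point-in-time restore request without actually performing it.
az sql midb-arc restore \
  --managed-instance sql1 \
  --name mydb \
  --dest-name mydb-restored \
  --time "2022-05-20T01:30:00Z" \
  --k8s-namespace arc \
  --use-k8s \
  --dry-run
```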
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/release-notes.md
Previously updated : 05/04/2022 Last updated : 05/24/2022 -
-# Customer intent: As a data professional, I want to understand why my solutions would benefit from running with Azure Arc-enabled data services so that I can leverage the capability of the feature.
+
+#Customer intent: As a data professional, I want to understand why my solutions would benefit from running with Azure Arc-enabled data services so that I can leverage the capability of the feature.
# Release notes - Azure Arc-enabled data services This article highlights capabilities, features, and enhancements recently released or improved for Azure Arc-enabled data services.
+## May 24, 2022
+
+This release is published May 24, 2022.
+
+### Image tag
+
+`v1.7.0_2022-05-24`
+
+For complete release version information, see [Version log](version-log.md).
+
+### Data controller reminders and warnings
+
+Reminders and warnings are surfaced in the Azure portal, in the custom resource status, and through the CLI when billing data for the resources managed by the data controller hasn't been uploaded or exported for an extended period.
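In indirect connectivity mode, a hedged sketch of exporting and uploading that billing data manually is shown below; it assumes the `az arcdata dc export` and `az arcdata dc upload` commands in the `arcdata` CLI extension, so check the flag names against your installed version.

```azurecli
# Hedged sketch: export usage (billing) data from the data controller, then upload it to Azure.
az arcdata dc export --type usage --path usage.json --k8s-namespace arc --use-k8s
az arcdata dc upload --path usage.json
```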
+
+### SQL Managed Instance
+
+The Business Critical service tier is now generally available. Azure Arc-enabled SQL Managed Instance instances on version v1.7.0 or later will be charged through Azure billing meters.
+
+### User experience improvements
+
+#### Azure portal
+
+Added ability to create AD Connectors from Azure portal.
+
+Preview expected costs for Azure Arc-enabled SQL Managed Instance Business Critical tier when you create new instances.
+
+#### Azure Data Studio
+
+Added ability to upgrade Azure Arc-enabled SQL Managed Instances from Azure Data Studio in the indirect and direct connectivity modes.
+
+Preview expected costs for Azure Arc-enabled SQL Managed Instance Business Critical tier when you create new instances.
+ ## May 4, 2022 This release is published May 4, 2022.
Separated the availability group and failover group status into two different se
Updated SQL engine binaries to the latest version.
-Add support for `NodeSelector`, `TopologySpreadConstraints` and `Affinity`. Only available through Kubernetes yaml/json file create/edit currently. No Azure CLI, Azure Portal, or Azure Data Studio user experience yet.
+Add support for `NodeSelector`, `TopologySpreadConstraints` and `Affinity`. Only available through Kubernetes yaml/json file create/edit currently. No Azure CLI, Azure portal, or Azure Data Studio user experience yet.
Add support for specifying labels and annotations on the secondary service endpoint. `REQUIRED_SECONDARIES_TO_COMMIT` is now a function of the number of replicas.
In this release, the default value of the readable secondary service is `Cluster
### User experience improvements
-Notifications added in Azure Portal if billing data has not been uploaded to Azure recently.
+Notifications added in Azure portal if billing data has not been uploaded to Azure recently.
#### Azure Data Studio
For complete release version information, see [Version log](version-log.md).
You can create a maintenance window on the data controller, and if you have SQL managed instances with a desired version set to `auto`, they will be upgraded in the next maintenance windows after a data controller upgrade.
-Metrics for each replica in a business critical instance are now sent to the Azure portal so you can view them in the monitoring charts.
+Metrics for each replica in a Business Critical instance are now sent to the Azure portal so you can view them in the monitoring charts.
AD authentication connectors can now be set up in an `automatic mode` or *system-managed keytab* which will use a service account to automatically create SQL service accounts, SPNs, and DNS entries as an alternative to the AD authentication connectors which use the *customer-managed keytab* mode.
For complete release version information, see [Version log](version-log.md).
### Data controller - Initiate an upgrade of the data controller from the portal in the direct connected mode-- Removed block on data controller upgrade if there are business critical instances that exist
+- Removed block on data controller upgrade if there are Business Critical instances that exist
- Better handling of delete user experiences in Azure portal ### SQL Managed Instance -- Azure Arc-enabled SQL Managed Instance business critical instances can be upgraded from the January release and going forward (preview)
+- Azure Arc-enabled SQL Managed Instance Business Critical instances can be upgraded from the January release and going forward (preview)
- Business critical distributed availability group failover can now be done through a Kubernetes-native experience or the Azure CLI (indirect mode only) (preview)-- Added support for `LicenseType: DisasterRecovery` which will ensure that instances which are used for business critical distributed availability group secondary replicas:
+- Added support for `LicenseType: DisasterRecovery` which will ensure that instances which are used for Business Critical distributed availability group secondary replicas:
- Are not billed for - Automatically seed the system databases from the primary replica when the distributed availability group is created. (preview) - New option added to `desiredVersion` called `auto` - automatically upgrades a given SQL instance when there is a new upgrade available (preview)
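A loosely hedged illustration of the `auto` option: the sketch below assumes the `--desired-version` flag on `az sql mi-arc upgrade` also accepts the `auto` value; if your CLI version doesn't, set `desiredVersion` on the custom resource instead.

```azurecli
# Hedged sketch: opt a SQL managed instance into automatic upgrades when a new version is available.
az sql mi-arc upgrade --name sql1 --k8s-namespace arc --use-k8s --desired-version auto
```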
This release is published December 16, 2021.
- Active Directory authentication in preview for SQL Managed Instance - Direct mode upgrade of SQL Managed Instance via Azure CLI - Edit memory and CPU configuration in Azure portal in directly connected mode-- Ability to specify a single replica for a business critical instance using Azure CLI or Kubernetes yaml file
+- Ability to specify a single replica for a Business Critical instance using Azure CLI or Kubernetes yaml file
- Updated SQL binaries to latest Azure PaaS-compatible binary version - Resolved issue where the point in time restore did not respect the configured time zone
The following `sql` commands now support directly connected mode:
- Automatically upload metrics to Azure Monitor - Automatically upload logs to Azure Log Analytics - Enable or disable automatic upload of Metrics and/or logs to Azure after deployment of Azure Arc data controller.-- Upgrade from July 2021 release in-place (only for generally available services such as Azure Arc data controller and general purpose SQL Managed Instance) using Azure CLI.
+- Upgrade from July 2021 release in-place (only for generally available services such as Azure Arc data controller and General Purpose SQL Managed Instance) using Azure CLI.
- Set the metrics and logs dashboards usernames and passwords separately at DC deployment time using the new environment variables: ```console
For complete list, see [Supported regions](overview.md#supported-regions).
### Azure Arc-enabled SQL Managed Instance -- Upgrade instances of Azure Arc-enabled SQL Managed Instance general purpose in-place
+- Upgrade instances of Azure Arc-enabled SQL Managed Instance General Purpose in-place
- The SQL binaries are updated to a new version - Direct connected mode deployment of Azure Arc enabled SQL Managed Instance using Azure CLI-- Point in time restore for Azure Arc enabled SQL Managed Instance is being made generally available with this release. Currently point in time restore is only supported for the general purpose SQL Managed Instance. Point in time restore for business critical SQL Managed Instance is still under preview.
+- Point in time restore for Azure Arc enabled SQL Managed Instance is being made generally available with this release. Currently point in time restore is only supported for the General Purpose SQL Managed Instance. Point in time restore for Business Critical SQL Managed Instance is still under preview.
- New `--dry-run` option provided for point in time restore - Recovery point objective is set to 5 minutes by default and is not configurable - Backup retention period is set to 7 days by default. A new option to set the retention period to zero disables automatic backups for development and test instances that do not require backups
For complete list, see [Supported regions](overview.md#supported-regions).
#### Data controller upgrade - At this time, upgrade of a directly connected data controller via CLI or the portal is not supported.-- You can only upgrade generally available services such as Azure Arc data controller and general purpose SQL Managed Instance at this time. If you also have business critical SQL Managed Instance and/or Azure Arc enabled PostgreSQL Hyperscale, remove them first, before proceeding to upgrade.
+- You can only upgrade generally available services such as Azure Arc data controller and General Purpose SQL Managed Instance at this time. If you also have Business Critical SQL Managed Instance and/or Azure Arc enabled PostgreSQL Hyperscale, remove them first, before proceeding to upgrade.
#### Commands
az arcdata sql mi-arc update
This release is published July 30, 2021.
-This release announces general availability for Azure Arc-enabled SQL Managed Instance [general purpose service tier](service-tiers.md) in indirectly connected mode.
+This release announces general availability for Azure Arc-enabled SQL Managed Instance [General Purpose service tier](service-tiers.md) in indirectly connected mode.
> [!NOTE] > In addition, this release provides the following Azure Arc-enabled services in preview: > - SQL Managed Instance in directly connected mode
- > - SQL Managed Instance [business critical service tier](service-tiers.md)
+ > - SQL Managed Instance [Business Critical service tier](service-tiers.md)
> - PostgreSQL Hyperscale ### Breaking changes
azure-arc Reserved Capacity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/reserved-capacity-overview.md
description: Learn how to buy Azure Arc-enabled SQL Managed Instance reserved ca
+
The size of reservation should be based on the total amount of compute resources
The following list demonstrates a scenario to project how you would reserve resources: * **Current**:
- - One general purpose, 16 vCore managed instance
- - Two business critical, 8-vCore managed instances
+ - One General Purpose, 16 vCore managed instance
+ - Two Business Critical, 8-vCore managed instances
* **In the next year you will add**:
- - One more general purpose, 16 vCore managed instance
- - One more business critical, 32 vCore managed instance
+ - One more General Purpose, 16 vCore managed instance
+ - One more Business Critical, 32 vCore managed instance
* **Purchase reservations for**:
- - 32 (2x16) vCore one year reservation for general purpose managed instance
- - 48 (2x8 + 32) vCore one year reservation for business critical managed instance
+ - 32 (2x16) vCore one year reservation for General Purpose managed instance
+ - 48 (2x8 + 32) vCore one year reservation for Business Critical managed instance
## Buy reserved capacity
azure-arc Service Tiers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/service-tiers.md
description: Explains the service tiers available for Azure Arc-enabled SQL Mana
+
As part of the family of Azure SQL products, Azure Arc-enabled SQL Managed Instance is available in two [vCore](/azure/azure-sql/database/service-tiers-vcore) service tiers. -- **General purpose** is a budget-friendly tier designed for most workloads with common performance and availability features.-- **Business critical** tier is designed for performance-sensitive workloads with higher availability features.-
-At this time, the business critical service tier is [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). The general purpose service tier is generally available.
+- **General Purpose** is a budget-friendly tier designed for most workloads with common performance and availability features.
+- **Business Critical** tier is designed for performance-sensitive workloads with higher availability features.
In Azure, storage and compute are provided by Microsoft with guaranteed service level agreements (SLAs) for performance, throughput, and availability across each of the service tiers. With Azure Arc-enabled data services, customers provide the storage and compute. Hence, there are no guaranteed SLAs for Azure Arc-enabled data services. However, customers get the flexibility to bring their own performant hardware irrespective of the service tier.
In Azure, storage and compute is provided by Microsoft with guaranteed service l
Following is a description of the various capabilities available from Azure Arc-enabled data services across the two service tiers:
-Area | Business critical (preview)* | General purpose
+Area | Business Critical | General Purpose
-|--| SQL Feature set | Same as SQL Server Enterprise Edition | Same as SQL Server Standard Edition CPU limit/instance | Unlimited | 24 cores
Disaster Recovery | Available via Failover Groups | Available via Failover Groups
AHB exchange rates for IP component of price | 1:1 Enterprise Edition <br> 4:1 Standard Edition | 1:4 Enterprise Edition​ <br> 1:1 Standard Edition Dev/Test pricing | No cost | No cost
-\* Currently business critical service tier is in preview and does not incur any charges for use use during this preview. Some of the features may change as we get closer to general availability.
- ## How to choose between the service tiers Since customers bring their own hardware with performance and availability requirements based on their business needs, the primary differentiators between the service tiers are what is provided at the software level.
-### Choose general purpose if
+### Choose General Purpose if
-- CPU/memory requirements meet or are within the limits of the general purpose service tier
+- CPU/memory requirements meet or are within the limits of the General Purpose service tier
- The high availability options provided by Kubernetes, such as pod redeployments, are sufficient for the workload - Application does not need read scale out-- The application does not require any of the features found in the business critical service tier (same as SQL Server Enterprise Edition)
+- The application does not require any of the features found in the Business Critical service tier (same as SQL Server Enterprise Edition)
-### Choose business critical if
+### Choose Business Critical if
-- CPU/memory requirements exceed the limits of the general purpose service tier
+- CPU/memory requirements exceed the limits of the General Purpose service tier
- Application requires a higher level of high availability than Kubernetes offers, such as built-in availability groups to handle application failovers. - Application can take advantage of read scale out to offload read workloads to the secondary replicas-- Application requires features found only in the business critical service tier (same as SQL Server Enterprise Edition)
+- Application requires features found only in the Business Critical service tier (same as SQL Server Enterprise Edition)
azure-arc Sizing Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/sizing-guidance.md
description: Plan for the size of a deployment of Azure Arc-enabled data service
+
See the [storage-configuration](storage-configuration.md) article for details on
Each SQL managed instance must have the following minimum resource requests and limits:
-|Service tier|General purpose|Business critical (preview)|
+|Service tier|General Purpose|Business Critical|
|||| |CPU request|Minimum: 1; Maximum: 24; Default: 2|Minimum: 1; Maximum: unlimited; Default: 4| |CPU limit|Minimum: 1; Maximum: 24; Default: 2|Minimum: 1; Maximum: unlimited; Default: 4|
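To illustrate how these requests and limits are typically set at create time, here's a hedged sketch; the parameter names are assumed from the `arcdata` CLI extension, so confirm them with `az sql mi-arc create --help`.

```azurecli
# Hedged sketch: create a General Purpose instance with explicit CPU and memory requests/limits.
az sql mi-arc create \
  --name sql-gp-01 \
  --k8s-namespace arc \
  --use-k8s \
  --tier GeneralPurpose \
  --cores-request 2 --cores-limit 4 \
  --memory-request 4Gi --memory-limit 8Gi
```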
azure-arc Storage Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/storage-configuration.md
description: Explains Azure Arc-enabled data services storage configuration opti
+
If there are multiple databases on a given database instance, all of the databas
Important factors to consider when choosing a storage class for the database instance pods: -- Database instances can be deployed in either a single pod pattern or a multiple pod pattern. An example of a single pod pattern is a general purpose pricing tier Azure SQL managed instance. An example of a multiple pod pattern is a highly available business critical pricing tier Azure SQL managed instance. Database instances deployed with the single pod pattern **must** use a remote, shared storage class in order to ensure data durability and so that if a pod or node dies that when the pod is brought back up it can connect again to the persistent volume. In contrast, a highly available Azure SQL managed instance uses Always On Availability Groups to replicate the data from one instance to another either synchronously or asynchronously. Especially in the case where the data is replicated synchronously, there is always multiple copies of the data - typically three copies. Because of this, it is possible to use local storage or remote, shared storage classes for data and log files. If utilizing local storage, the data is still preserved even in the case of a failed pod, node, or storage hardware because there are multiple copies of the data. Given this flexibility, you might choose to use local storage for better performance.
+- Database instances can be deployed in either a single pod pattern or a multiple pod pattern. An example of a single pod pattern is a General Purpose pricing tier Azure SQL managed instance. An example of a multiple pod pattern is a highly available Business Critical pricing tier Azure SQL managed instance. Database instances deployed with the single pod pattern **must** use a remote, shared storage class to ensure data durability, so that if a pod or node dies, the pod can reconnect to the persistent volume when it's brought back up. In contrast, a highly available Azure SQL managed instance uses Always On Availability Groups to replicate the data from one instance to another either synchronously or asynchronously. Especially in the case where the data is replicated synchronously, there are always multiple copies of the data - typically three. Because of this, it's possible to use local storage or remote, shared storage classes for data and log files. If utilizing local storage, the data is still preserved even in the case of a failed pod, node, or storage hardware because there are multiple copies of the data. Given this flexibility, you might choose to use local storage for better performance.
- Database performance is largely a function of the I/O throughput of a given storage device. If your database is heavy on reads or heavy on writes, then you should choose a storage class with hardware designed for that type of workload. For example, if your database is mostly used for writes, you might choose local storage with RAID 0. If your database is mostly used for reads of a small amount of "hot data", but there is a large overall storage volume of cold data, then you might choose a SAN device capable of tiered storage. Choosing the right storage class is not any different than choosing the type of storage you would use for any database. - If you are using a local storage volume provisioner, ensure that the local volumes that are provisioned for data, logs, and backups are each landing on different underlying storage devices to avoid contention on disk I/O. The OS should also be on a volume that is mounted to a separate disk(s). This is essentially the same guidance as would be followed for a database instance on physical hardware. - Because all databases on a given instance share a persistent volume claim and persistent volume, be sure not to colocate busy database instances on the same database instance. If possible, separate busy databases on to their own database instances to avoid I/O contention. Further, use node label targeting to land database instances onto separate nodes so as to distribute overall I/O traffic across multiple nodes. If you are using virtualization, be sure to consider distributing I/O traffic not just at the node level but also the combined I/O activity happening by all the node VMs on a given physical host.
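As a hedged example of applying this guidance, the sketch below assigns separate storage classes for data, logs, and backups at create time; the flag names and storage class names are assumptions, so verify them with `az sql mi-arc create --help` and against the storage classes available in your cluster.

```azurecli
# Hedged sketch: separate storage classes for data, logs, and backups to reduce I/O contention.
az sql mi-arc create \
  --name sql-bc-01 \
  --k8s-namespace arc \
  --use-k8s \
  --tier BusinessCritical \
  --replicas 3 \
  --storage-class-data fast-ssd \
  --storage-class-logs fast-ssd-logs \
  --storage-class-backups backup-nfs
```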
azure-arc Upgrade Data Controller Direct Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-data-controller-direct-cli.md
description: Article describes how to upgrade a directly connected Azure Arc dat
--++ Previously updated : 12/10/2021 Last updated : 05/24/2022
Preparing to upgrade dc arcdc in namespace arc to version <version-tag>.
Arcdata Control Plane would be upgraded to: <version-tag> ```
-To upgrade the data controller, run the `az arcdata dc upgrade` command. If you don't specify a target image, the data controller will be upgraded to the latest version.
+Before you upgrade the data controller, first upgrade the Arc data controller extension as follows:
+
+```azurecli
+az k8s-extension update --resource-group <resource-group> --cluster-name <connected cluster name> --cluster-type connectedClusters --name <name of extension> --version <extension version> --release-train stable --config systemDefaultValues.image="<registry>/<repository>/arc-bootstrapper:<imageTag>"
+```
+You can retrieve the name and version of your extension by browsing to your Arc-enabled Kubernetes cluster in the Azure portal and selecting the **Extensions** tab on the left. You can also retrieve them by running the `az` CLI as follows:
+
+```azurecli
+az k8s-extension list --resource-group <resource-group> --cluster-name <connected cluster name> --cluster-type connectedClusters
+```
+
+For example:
+
+```azurecli
+az k8s-extension list --resource-group myresource-group --cluster-name myconnected-cluster --cluster-type connectedClusters
+```
+
+After retrieving the Arc data controller extension name and its version, upgrade the extension. For example:
+
+```azurecli
+az k8s-extension update --resource-group myresource-group --cluster-name myconnected-cluster --cluster-type connectedClusters --name arcdc-ext --version 1.2.19481002 --release-train stable --config systemDefaultValues.image="mcr.microsoft.com/arcdata/arc-bootstrapper:v1.6.0_2022-05-02"
+```
+
+Once the extension is upgraded, run the `az arcdata dc upgrade` command to upgrade the data controller. If you don't specify a target image, the data controller will be upgraded to the latest version.
```azurecli az arcdata dc upgrade --resource-group <resource group> --name <data controller name> [--no-wait]
az arcdata dc upgrade --resource-group <resource group> --name <data controller
In example above, you can include `--desired-version <version>` to specify a version if you do not want the latest version.
+> [!NOTE]
+> Currently, upgrade is only supported to the next immediate version. If you are more than one version behind, specify `--desired-version` to avoid compatibility issues.
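For example, a hedged invocation pinning an explicit target version (the resource names and version tag are illustrative):

```azurecli
# Hedged sketch: upgrade the data controller to a specific image version instead of the latest.
az arcdata dc upgrade --resource-group myresource-group --name arcdc --desired-version v1.7.0_2022-05-24 --no-wait
```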
++ ## Monitor the upgrade status You can monitor the progress of the upgrade with CLI.
azure-arc Upgrade Sql Managed Instance Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-sql-managed-instance-cli.md
description: Article describes how to upgrade an indirectly connected Azure Arc-
+
During a SQL Managed Instance General Purpose upgrade, the containers in the pod
### Business Critical - ### Upgrade To upgrade the Managed Instance, use the following command:
azure-arc Upgrade Sql Managed Instance Direct Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-sql-managed-instance-direct-cli.md
description: Article describes how to upgrade a directly connected Azure Arc-ena
+ Previously updated : 11/10/2021 Last updated : 05/21/2022
During a SQL Managed Instance General Purpose upgrade, the containers in the pod
### Business Critical - ### Upgrade To upgrade the Managed Instance, use the following command: ````cli
-az sql mi-arc upgrade --resource-group <resource group> --name <instance name> [--no-wait]
+az sql mi-arc upgrade --resource-group <resource group> --name <instance name> --desired-version <imageTag> [--no-wait]
```` Example: ````cli
-az sql mi-arc upgrade --resource-group rgarc --name sql1 [--no-wait]
+az sql mi-arc upgrade --resource-group myresource-group --name sql1 --desired-version v1.6.0_2022-05-02 [--no-wait]
```` ## Monitor
Status:
Observed Generation: 2 Primary Endpoint: 30.76.129.38,1433 Ready Replicas: 1/1
- Running Version: v1.0.0_2021-07-30
+ Running Version: v1.5.0_2022-04-05
State: Updating ```
Status:
Observed Generation: 2 Primary Endpoint: 30.76.129.38,1433 Ready Replicas: 1/1
- Running Version: <version-tag>
+ Running Version: v1.6.0_2022-05-02
State: Ready ```
azure-arc Upgrade Sql Managed Instance Indirect Kubernetes Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-sql-managed-instance-indirect-kubernetes-tools.md
description: Article describes how to upgrade an indirectly connected Azure Arc-
+
During a SQL Managed Instance General Purpose upgrade, the containers in the pod
### Business Critical - ### Upgrade Use a kubectl command to view the existing spec in yaml.
azure-arc Version Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/version-log.md
+ Last updated 5/04/2022
-# Customer intent: As a data professional, I want to understand what versions of components align with specific releases.
+#Customer intent: As a data professional, I want to understand what versions of components align with specific releases.
# Version log This article identifies the component versions with each release of Azure Arc-enabled data services.
+## May 24, 2022
+
+|Component |Value |
+|--||
+|Container images tag |`v1.7.0_2022-05-24`|
+|CRD names and versions |`datacontrollers.arcdata.microsoft.com`: v1beta1, v1 through v6</br>`exporttasks.tasks.arcdata.microsoft.com`: v1beta1, v1, v2</br>`kafkas.arcdata.microsoft.com`: v1beta1</br>`monitors.arcdata.microsoft.com`: v1beta1, v1, v2</br>`sqlmanagedinstances.sql.arcdata.microsoft.com`: v1beta1, v1 through v6</br>`postgresqls.arcdata.microsoft.com`: v1beta1, v1beta2</br>`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`: v1beta1, v1</br>`failovergroups.sql.arcdata.microsoft.com`: v1beta1, v1beta2,v1</br>`activedirectoryconnectors.arcdata.microsoft.com`: v1beta1, v1beta2|
+|ARM API version|2022-03-01-preview (No change)|
+|`arcdata` Azure CLI extension version| 1.4.1|
+|Arc enabled Kubernetes helm chart extension version|1.2.19581002|
+|Arc Data extension for Azure Data Studio|1.3.0|
+ ## May 4, 2022 |Component |Value |
All other components are the same as previously released.
## July 30, 2021
-This release introduces general availability for Azure Arc-enabled SQL Managed Instance general purpose and Azure Arc-enabled SQL Server. The following table describes the components in this release.
+This release introduces general availability for Azure Arc-enabled SQL Managed Instance General Purpose and Azure Arc-enabled SQL Server. The following table describes the components in this release.
|Component |Value | |--||
azure-arc Conceptual Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-configurations.md
Title: "Configurations and GitOps - Azure Arc-enabled Kubernetes" Previously updated : 03/02/2021 Last updated : 05/24/2022
keywords: "Kubernetes, Arc, Azure, containers, configuration, GitOps"
# Configurations and GitOps with Azure Arc-enabled Kubernetes > [!NOTE]
-> This document is for GitOps with Flux v1. GitOps with Flux v2 is now available in preview for Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters; [learn about GitOps with Flux v2](./conceptual-gitops-flux2.md).
+> This document is for GitOps with Flux v1. GitOps with Flux v2 is now available for Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters; [learn about GitOps with Flux v2](./conceptual-gitops-flux2.md). Eventually Azure will stop supporting GitOps with Flux v1, so begin using Flux v2 as soon as possible.
In relation to Kubernetes, GitOps is the practice of declaring the desired state of Kubernetes cluster configurations (deployments, namespaces, etc.) in a Git repository. This declaration is followed by a polling and pull-based deployment of these cluster configurations using an operator. The Git repository can contain: * YAML-format manifests describing any valid Kubernetes resources, including Namespaces, ConfigMaps, Deployments, DaemonSets, etc.
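Since the note above points readers at Flux v2, here is a hedged sketch of attaching a Flux v2 configuration with the CLI; the repository URL, branch, and kustomization values are placeholders, and the `k8s-configuration` CLI extension is assumed to be installed.

```azurecli
# Hedged sketch: create a Flux v2 configuration that syncs a Git repo to an Arc-enabled cluster.
az k8s-configuration flux create \
  --resource-group my-resource-group \
  --cluster-name my-arc-cluster \
  --cluster-type connectedClusters \
  --name cluster-config \
  --namespace cluster-config \
  --url https://github.com/Azure/gitops-flux2-kustomize-helm-mt \
  --branch main \
  --kustomization name=infra path=./infrastructure prune=true
```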
azure-arc Conceptual Gitops Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-gitops-ci-cd.md
Title: "CI/CD Workflow using GitOps - Azure Arc-enabled Kubernetes" Previously updated : 03/03/2021 Last updated : 05/24/2022
-description: "This article provides a conceptual overview of a CI/CD workflow using GitOps with Flux v2"
+description: "This article provides a conceptual overview of a CI/CD workflow using GitOps with Flux"
keywords: "GitOps, Kubernetes, K8s, Azure, Helm, Arc, AKS, Azure Kubernetes Service, containers, CI, CD, Azure DevOps" # CI/CD workflow using GitOps - Azure Arc-enabled Kubernetes > [!NOTE]
-> The workflow described in this document uses GitOps with Flux v1. GitOps with Flux v2 is now available in preview for Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters; [learn about CI/CD workflow using GitOps with Flux v2](./conceptual-gitops-flux2-ci-cd.md).
+> The workflow described in this document uses GitOps with Flux v1. GitOps with Flux v2 is now available for Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters; [learn about CI/CD workflow using GitOps with Flux v2](./conceptual-gitops-flux2-ci-cd.md). Eventually Azure will stop supporting GitOps with Flux v1, so begin using Flux v2 as soon as possible.
Modern Kubernetes deployments house multiple applications, clusters, and environments. With GitOps, you can manage these complex setups more easily, tracking the desired state of the Kubernetes environments declaratively with Git. Using common Git tooling to track cluster state, you can increase accountability, facilitate fault investigation, and enable automation to manage environments.
The CD pipeline is automatically triggered by successful CI builds. It uses the
### GitOps repo The GitOps repo represents the current desired state of all environments across clusters. Any change to this repo is picked up by the Flux service in each cluster and deployed. PRs are created with changes to the desired state, reviewed, and merged. These PRs contain changes to both deployment templates and the resulting rendered Kubernetes manifests. Low-level rendered manifests allow more careful inspection of changes typically unseen at the template-level. ### Kubernetes clusters
-At least one Azure Arc-enabled Kubernetes clusters serves the different environments needed by the application. For example, a single cluster can serve both a dev and QA environment through different namespaces. A second cluster can provide easier separation of environments and more fine-grained control.
+At least one Azure Arc-enabled Kubernetes cluster serves the different environments needed by the application. For example, a single cluster can serve both a dev and QA environment through different namespaces. A second cluster can provide easier separation of environments and more fine-grained control.
## Example workflow As an application developer, Alice: * Writes application code.
azure-arc Custom Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/custom-locations.md
If you are logged into Azure CLI as an Azure AD user, to enable this feature on
az connectedk8s enable-features -n <clusterName> -g <resourceGroupName> --features cluster-connect custom-locations ```
-If you are logged into Azure CLI using a service principal, to enable this feature on your cluster, execute the following steps:
+If you run the above command while logged in to Azure CLI with a service principal, you may observe the following warning:
-1. Fetch the Object ID of the Azure AD application used by Azure Arc service:
+```console
+Unable to fetch oid of 'custom-locations' app. Proceeding without enabling the feature. Insufficient privileges to complete the operation.
+```
+
+This is because a service principal doesn't have permission to get information about the application used by the Azure Arc service. To avoid this error, follow these steps:
+
+1. Sign in to Azure CLI with your user account. Fetch the Object ID of the Azure AD application used by the Azure Arc service:
```azurecli
- az ad sp show --id 'bc313c14-388c-4e7d-a58e-70017303ee3b' --query objectId -o tsv
+ az ad sp show --id bc313c14-388c-4e7d-a58e-70017303ee3b --query objectId -o tsv
```
-1. Use the `<objectId>` value from above step to enable custom locations feature on the cluster:
+1. Sign in to Azure CLI with the service principal. Use the `<objectId>` value from the previous step to enable the custom locations feature on the cluster:
```azurecli az connectedk8s enable-features -n <cluster-name> -g <resource-group-name> --custom-locations-oid <objectId> --features cluster-connect custom-locations
azure-arc Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/extensions.md
Title: "Azure Arc-enabled Kubernetes cluster extensions" Previously updated : 11/24/2021+ Last updated : 05/24/2022
The Kubernetes extensions feature enables the following on Azure Arc-enabled Kub
In this article, you learn: > [!div class="checklist"]
-> * Current available Azure Arc-enabled Kubernetes cluster extensions.
+
+> * Which Azure Arc-enabled Kubernetes cluster extensions are currently available.
> * How to create extension instances. > * Required and optional parameters.
-> * How to view, list, update, and delete extension instances.
+> * How to view, list, update, and delete extension instances.
-A conceptual overview of this feature is available in [Cluster extensions - Azure Arc-enabled Kubernetes](conceptual-extensions.md) article.
+A conceptual overview of this feature is available in [Cluster extensions - Azure Arc-enabled Kubernetes](conceptual-extensions.md).
[!INCLUDE [preview features note](./includes/preview/preview-callout.md)] ## Prerequisites -- [Install or upgrade Azure CLI](/cli/azure/install-azure-cli) to version >= 2.16.0.-- `connectedk8s` (version >= 1.2.0) and `k8s-extension` (version >= 1.0.0) Azure CLI extensions. Install these Azure CLI extensions by running the following commands:
+* [Install or upgrade Azure CLI](/cli/azure/install-azure-cli) to version >= 2.16.0.
+* `connectedk8s` (version >= 1.2.0) and `k8s-extension` (version >= 1.0.0) Azure CLI extensions. Install these Azure CLI extensions by running the following commands:
```azurecli az extension add --name connectedk8s az extension add --name k8s-extension ```
-
+ If the `connectedk8s` and `k8s-extension` extension are already installed, you can update them to the latest version using the following command: ```azurecli
A conceptual overview of this feature is available in [Cluster extensions - Azur
az extension update --name k8s-extension ``` -- An existing Azure Arc-enabled Kubernetes connected cluster.
- - If you haven't connected a cluster yet, use our [quickstart](quickstart-connect-cluster.md).
- - [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to version >= 1.5.3.
+* An existing Azure Arc-enabled Kubernetes connected cluster.
+ * If you haven't connected a cluster yet, use our [quickstart](quickstart-connect-cluster.md).
+ * [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to version >= 1.5.3.
## Currently available extensions
A conceptual overview of this feature is available in [Cluster extensions - Azur
| [Azure App Service on Azure Arc](../../app-service/overview-arc-integration.md) | Allows you to provision an App Service Kubernetes environment on top of Azure Arc-enabled Kubernetes clusters. | | [Event Grid on Kubernetes](../../event-grid/kubernetes/overview.md) | Create and manage event grid resources such as topics and event subscriptions on top of Azure Arc-enabled Kubernetes clusters. | | [Azure API Management on Azure Arc](../../api-management/how-to-deploy-self-hosted-gateway-azure-arc.md) | Deploy and manage API Management gateway on Azure Arc-enabled Kubernetes clusters. |
-| [Azure Arc-enabled Machine Learning](../../machine-learning/how-to-attach-arc-kubernetes.md) | Deploy and run Azure Machine Learning on Azure Arc-enabled Kubernetes clusters. |
+| [Azure Arc-enabled Machine Learning](../../machine-learning/how-to-attach-kubernetes-anywhere.md) | Deploy and run Azure Machine Learning on Azure Arc-enabled Kubernetes clusters. |
| [Flux (GitOps)](./conceptual-gitops-flux2.md) | Use GitOps with Flux to manage cluster configuration and application deployment. |
+| [Dapr extension for Azure Kubernetes Service (AKS) and Arc-enabled Kubernetes](/azure/aks/dapr)| Eliminates the overhead of downloading Dapr tooling and manually installing and managing the runtime on your clusters. |
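For instance, a hedged sketch of installing one of the listed extensions is shown below; the `Microsoft.Dapr` extension type is an assumption, so confirm the exact type string in that extension's documentation.

```azurecli
# Hedged sketch: install the Dapr cluster extension on an Arc-enabled Kubernetes cluster.
az k8s-extension create \
  --cluster-name my-arc-cluster \
  --resource-group my-resource-group \
  --cluster-type connectedClusters \
  --extension-type Microsoft.Dapr \
  --name dapr
```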
## Usage of cluster extensions
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/overview.md
Title: "Overview of Azure Arc-enabled Kubernetes" + Last updated 05/03/2022
Azure Arc-enabled Kubernetes supports the following scenarios for connected clus
* Deploy [Open Service Mesh](tutorial-arc-enabled-open-service-mesh.md) on top of your cluster for observability and policy enforcement on service-to-service interactions
-* Deploy machine learning workloads using [Azure Machine Learning for Kubernetes clusters](../../machine-learning/how-to-attach-arc-kubernetes.md?toc=/azure/azure-arc/kubernetes/toc.json).
+* Deploy machine learning workloads using [Azure Machine Learning for Kubernetes clusters](../../machine-learning/how-to-attach-kubernetes-anywhere.md?toc=/azure/azure-arc/kubernetes/toc.json).
* Create [custom locations](./custom-locations.md) as target locations for deploying Azure Arc-enabled Data Services (SQL Managed Instances, PostgreSQL Hyperscale.), [App Services on Azure Arc](../../app-service/overview-arc-integration.md) (including web, function, and logic apps), and [Event Grid on Kubernetes](../../event-grid/kubernetes/overview.md).
azure-arc Quickstart Connect Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/quickstart-connect-cluster.md
Title: "Quickstart: Connect an existing Kubernetes cluster to Azure Arc" description: In this quickstart, you learn how to connect an Azure Arc-enabled Kubernetes cluster. Previously updated : 05/16/2022 Last updated : 05/24/2022 ms.devlang: azurecli
Remove-AzConnectedKubernetes -ClusterName AzureArcTest1 -ResourceGroupName Azure
Advance to the next article to learn how to deploy configurations to your connected Kubernetes cluster using GitOps. > [!div class="nextstepaction"]
-> [Deploy configurations using GitOps](tutorial-use-gitops-connected-cluster.md)
+> [Deploy configurations using GitOps with Flux v2](tutorial-use-gitops-flux2.md)
azure-arc Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/troubleshooting.md
When you are connecting your cluster to Azure Arc or when you are enabling custo
Unable to fetch oid of 'custom-locations' app. Proceeding without enabling the feature. Insufficient privileges to complete the operation. ```
-The above warning is observed when you have used a service principal to log into Azure and this service principal doesn't have permissions to get information of the application used by Azure Arc service. To avoid this error, execute the following steps:
+The above warning is observed when you have used a service principal to log in to Azure. This is because a service principal doesn't have permission to get information about the application used by the Azure Arc service. To avoid this error, follow these steps:
-1. Fetch the Object ID of the Azure AD application used by Azure Arc service:
+1. Sign in to Azure CLI with your user account. Fetch the Object ID of the Azure AD application used by the Azure Arc service:
```azurecli az ad sp show --id bc313c14-388c-4e7d-a58e-70017303ee3b --query objectId -o tsv ```
-1. Use the `<objectId>` value from above step to enable custom locations feature on the cluster:
+1. Sign in to Azure CLI with the service principal. Use the `<objectId>` value from the previous step to enable the custom locations feature on the cluster:
- If you are enabling custom locations feature as part of connecting the cluster to Arc, run the following command: ```azurecli
The above warning is observed when you have used a service principal to log into
az connectedk8s enable-features -n <cluster-name> -g <resource-group-name> --custom-locations-oid <objectId> --features cluster-connect custom-locations ```
-Once above permissions are granted, you can now proceed to [enabling the custom location feature](custom-locations.md#enable-custom-locations-on-cluster) on the cluster.
- ## Azure Arc-enabled Open Service Mesh The following troubleshooting steps provide guidance on validating the deployment of all the Open Service Mesh extension components on your cluster.
azure-arc Tutorial Akv Secrets Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-akv-secrets-provider.md
Title: Azure Key Vault Secrets Provider extension (Preview)
+ Title: Azure Key Vault Secrets Provider extension
description: Tutorial for setting up Azure Key Vault provider for Secrets Store CSI Driver interface as an extension on Azure Arc enabled Kubernetes cluster Previously updated : 11/15/2021 Last updated : 5/13/2022
-# Using Azure Key Vault Secrets Provider extension to fetch secrets into Arc clusters (Preview)
+# Using Azure Key Vault Secrets Provider extension to fetch secrets into Arc clusters
The Azure Key Vault Provider for Secrets Store CSI Driver allows for the integration of Azure Key Vault as a secrets store with a Kubernetes cluster via a [CSI volume](https://kubernetes-csi.github.io/docs/).
The Azure Key Vault Provider for Secrets Store CSI Driver allows for the integra
- OpenShift Kubernetes Distribution - Canonical Kubernetes Distribution - Elastic Kubernetes Service
+ - Tanzu Kubernetes Grid
- ## Features - Mounts secrets/keys/certs to pod using a CSI Inline volume
The Azure Key Vault Provider for Secrets Store CSI Driver allows for the integra
The following steps assume that you already have a cluster with supported Kubernetes distribution connected to Azure Arc.
+To deploy using the Azure portal, go to the cluster's **Extensions** blade under **Settings**, then select **+ Add**.
+
+[![Extensions located under Settings for Arc enabled Kubernetes cluster](media/tutorial-akv-secrets-provider/extension-install-add-button.jpg)](media/tutorial-akv-secrets-provider/extension-install-add-button.jpg#lightbox)
+
+From the list of available extensions, select **Azure Key Vault Secrets Provider** to deploy the latest version of the extension. You can also customize the installation through the portal by changing the defaults on the **Configuration** tab.
+
+[![AKV Secrets Provider available as an extension by clicking on Add button on Extensions blade](media/tutorial-akv-secrets-provider/extension-install-new-resource.jpg)](media/tutorial-akv-secrets-provider/extension-install-new-resource.jpg#lightbox)
+
+Alternatively, you can use the CLI experience captured below.
+ Set the environment variables: ```azurecli-interactive export CLUSTER_NAME=<arc-cluster-name> export RESOURCE_GROUP=<resource-group-name> ```
-While AKV secrets provider extension is in preview, the `az k8s-extension create` command only accepts `preview` for the `--release-train` flag.
```azurecli-interactive
-az k8s-extension create --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --cluster-type connectedClusters --extension-type Microsoft.AzureKeyVaultSecretsProvider --release-train preview --name akvsecretsprovider
+az k8s-extension create --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --cluster-type connectedClusters --extension-type Microsoft.AzureKeyVaultSecretsProvider --name akvsecretsprovider
``` The above will install the Secrets Store CSI Driver and the Azure Key Vault Provider on your cluster nodes. You should see output similar to the output shown below. It may take 3-5 minutes for the actual AKV secrets provider helm chart to get deployed to the cluster.
Note that only one instance of AKV secrets provider extension can be deployed on
"type": "SystemAssigned" }, "location": null,
- "name": "sscsi",
+ "name": "akvsecretsprovider",
"packageUri": null, "provisioningState": "Succeeded",
- "releaseTrain": "preview",
+ "releaseTrain": "Stable",
"resourceGroup": "$RESOURCE_GROUP", "scope": { "cluster": {
Note that only one instance of AKV secrets provider extension can be deployed on
}, "statuses": [], "systemData": {
- "createdAt": "2021-11-15T18:55:33.952130+00:00",
+ "createdAt": "2022-05-12T18:35:56.552889+00:00",
"createdBy": null, "createdByType": null,
- "lastModifiedAt": "2021-11-15T18:55:33.952130+00:00",
+ "lastModifiedAt": "2022-05-12T18:35:56.552889+00:00",
"lastModifiedBy": null, "lastModifiedByType": null }, "type": "Microsoft.KubernetesConfiguration/extensions",
- "version": "1.0.0"
+ "version": "1.1.3"
} ```
After connecting your cluster to Azure Arc, create a json file with the followin
} }, "ReleaseTrain": {
- "defaultValue": "preview",
+ "defaultValue": "stable",
"type": "String", "metadata": { "description": "The release train."
You should see a JSON output similar to the output below:
"name": "akvsecretsprovider", "packageUri": null, "provisioningState": "Succeeded",
- "releaseTrain": "preview",
+ "releaseTrain": "Stable",
"resourceGroup": "$RESOURCE_GROUP", "scope": { "cluster": {
You should see a JSON output similar to the output below:
}, "statuses": [], "systemData": {
- "createdAt": "2021-11-15T21:17:52.751916+00:00",
+ "createdAt": "2022-05-12T18:35:56.552889+00:00",
"createdBy": null, "createdByType": null,
- "lastModifiedAt": "2021-11-15T21:17:52.751916+00:00",
+ "lastModifiedAt": "2022-05-12T18:35:56.552889+00:00",
"lastModifiedBy": null, "lastModifiedByType": null }, "type": "Microsoft.KubernetesConfiguration/extensions",
- "version": "1.0.0"
+ "version": "1.1.3"
} ```
spec:
- "/bin/sleep" - "10000" volumeMounts:
- - name: secrets-store01-inline
+ - name: secrets-store-inline
mountPath: "/mnt/secrets-store" readOnly: true volumes:
- - name: secrets-store01-inline
+ - name: secrets-store-inline
csi: driver: secrets-store.csi.k8s.io readOnly: true
These settings can be changed either at the time of extension installation using
Use the following command to add configuration settings while creating the extension instance: ```azurecli-interactive
-az k8s-extension create --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --cluster-type connectedClusters --extension-type Microsoft.AzureKeyVaultSecretsProvider --release-train preview --name akvsecretsprovider --configuration-settings secrets-store-csi-driver.enableSecretRotation=true secrets-store-csi-driver.rotationPollInterval=3m secrets-store-csi-driver.syncSecret.enabled=true
+az k8s-extension create --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --cluster-type connectedClusters --extension-type Microsoft.AzureKeyVaultSecretsProvider --name akvsecretsprovider --configuration-settings secrets-store-csi-driver.enableSecretRotation=true secrets-store-csi-driver.rotationPollInterval=3m secrets-store-csi-driver.syncSecret.enabled=true
+Use the following command to update configuration settings of an existing extension instance:
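A hedged sketch of that update call, which mirrors the create command above and assumes `az k8s-extension update` accepts `--configuration-settings` in the same way:

```azurecli
# Hedged sketch: update configuration settings on an existing AKV secrets provider extension instance.
az k8s-extension update --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --cluster-type connectedClusters --name akvsecretsprovider --configuration-settings secrets-store-csi-driver.enableSecretRotation=true secrets-store-csi-driver.rotationPollInterval=3m
```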
azure-arc Tutorial Gitops Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-gitops-ci-cd.md
Title: 'Tutorial: Implement CI/CD with GitOps using Azure Arc-enabled Kubernetes
description: This tutorial walks through setting up a CI/CD solution using GitOps with Azure Arc-enabled Kubernetes clusters. For a conceptual take on this workflow, see the CI/CD Workflow using GitOps - Azure Arc-enabled Kubernetes article. Previously updated : 03/03/2021 Last updated : 05/24/2021 # Tutorial: Implement CI/CD with GitOps using Azure Arc-enabled Kubernetes clusters > [!NOTE]
-> This tutorial uses GitOps with Flux v1. GitOps with Flux v2 is now available in preview for Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters; [go to the tutorial that uses GitOps with Flux v2](./tutorial-gitops-flux2-ci-cd.md).
+> This tutorial uses GitOps with Flux v1. GitOps with Flux v2 is now available for Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters; [go to the tutorial that uses GitOps with Flux v2](./tutorial-gitops-flux2-ci-cd.md). Eventually Azure will stop supporting GitOps with Flux v1, so begin using Flux v2 as soon as possible.
In this tutorial, you'll set up a CI/CD solution using GitOps with Azure Arc-enabled Kubernetes clusters. Using the sample Azure Vote app, you'll:
azure-arc Tutorial Gitops Flux2 Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-gitops-flux2-ci-cd.md
Previously updated : 12/15/2021 Last updated : 05/24/2022 # Tutorial: Implement CI/CD with GitOps (Flux v2)
-In this tutorial, you'll set up a CI/CD solution using GitOps (Flux v2) and Azure Arc-enabled Kubernetes or Azure Kubernetes Service (AKS) clusters. Using the sample Azure Vote app, you'll:
+In this tutorial, you'll set up a CI/CD solution using GitOps with Flux v2 and Azure Arc-enabled Kubernetes or Azure Kubernetes Service (AKS) clusters. Using the sample Azure Vote app, you'll:
> [!div class="checklist"] > * Create an Azure Arc-enabled Kubernetes or AKS cluster.
In this tutorial, you'll set up a CI/CD solution using GitOps (Flux v2) and Azur
> * Deploy the `dev` and `stage` environments. > * Test the application environments.
-General Availability of Azure Arc-enabled Kubernetes includes GitOps with Flux v1. The public preview of GitOps with Flux v2, documented here, is available in both Azure Arc-enabled Kubernetes and AKS. Flux v2 is the way forward, and Flux v1 will eventually be deprecated.
+> [!NOTE]
+> Eventually Azure will stop supporting GitOps with Flux v1, so begin using Flux v2 as soon as possible.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
Import an [application repository](./conceptual-gitops-ci-cd.md#application-repo
* **arc-cicd-demo-src** application repository * URL: https://github.com/Azure/arc-cicd-demo-src * Contains the example Azure Vote App that you will deploy using GitOps.
-> [!NOTE]
-> Until Flux V2 integration is in Public Preview, work with [FluxV2 branch](https://github.com/Azure/arc-cicd-demo-src/tree/FluxV2) of the application repository. Once Flux V2 integration is in Public Preview, FluxV2 branch will be merged into the default branch.
* **arc-cicd-demo-gitops** GitOps repository * URL: https://github.com/Azure/arc-cicd-demo-gitops
Fork an [application repository](./conceptual-gitops-ci-cd.md#application-repo)
* **arc-cicd-demo-src** application repository * URL: https://github.com/Azure/arc-cicd-demo-src * Contains the example Azure Vote App that you will deploy using GitOps.
-> [!NOTE]
-> Until Flux V2 integration is in Public Preview, work with [FluxV2 branch](https://github.com/Azure/arc-cicd-demo-src/tree/FluxV2) of the application repository. Once Flux V2 integration is in Public Preview, FluxV2 branch will be merged into the default branch.
* **arc-cicd-demo-gitops** GitOps repository * URL: https://github.com/Azure/arc-cicd-demo-gitops
azure-arc Tutorial Use Gitops Connected Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md
Previously updated : 03/02/2021 Last updated : 05/24/2022 # Tutorial: Deploy configurations using GitOps on an Azure Arc-enabled Kubernetes cluster > [!NOTE]
-> This tutorial is for GitOps with Flux v1. GitOps with Flux v2 is now available in preview for Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters; [go to the tutorial for GitOps with Flux v2](./tutorial-use-gitops-flux2.md).
+> This tutorial is for GitOps with Flux v1. GitOps with Flux v2 is now available for Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters; [go to the tutorial for GitOps with Flux v2](./tutorial-use-gitops-flux2.md). Eventually Azure will stop supporting GitOps with Flux v1, so begin using Flux v2 as soon as possible.
In this tutorial, you will apply configurations using GitOps on an Azure Arc-enabled Kubernetes cluster. You'll learn how to:
azure-arc Tutorial Use Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-flux2.md
description: "This tutorial shows how to use GitOps with Flux v2 to manage confi
keywords: "GitOps, Flux, Kubernetes, K8s, Azure, Arc, AKS, Azure Kubernetes Service, containers, devops" Previously updated : 04/11/2022 Last updated : 05/24/2022
-# Tutorial: Use GitOps with Flux v2 in Azure Arc-enabled Kubernetes or AKS clusters (preview)
+# Tutorial: Use GitOps with Flux v2 in Azure Arc-enabled Kubernetes or AKS clusters
GitOps with Flux v2 can be enabled in Azure Kubernetes Service (AKS) managed clusters or Azure Arc-enabled Kubernetes connected clusters as a cluster extension. After the `microsoft.flux` cluster extension is installed, you can create one or more `fluxConfigurations` resources that sync your Git repository sources to the cluster and reconcile the cluster to the desired state. With GitOps, you can use your Git repository as the source of truth for cluster configuration and application deployment.
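As an illustration of creating such a `fluxConfigurations` resource with the Azure CLI, the following is a hedged sketch; the resource group, cluster name, configuration name, namespace, repository URL, and kustomization values are placeholders, not values from this tutorial:

```azurecli
az k8s-configuration flux create \
  --resource-group <resource-group> \
  --cluster-name <cluster-name> \
  --cluster-type connectedClusters \
  --name cluster-config \
  --namespace cluster-config \
  --scope cluster \
  --url https://github.com/<org>/<gitops-repo> \
  --branch main \
  --kustomization name=infra path=./infrastructure prune=true
```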
+> [!NOTE]
+> Eventually Azure will stop supporting GitOps with Flux v1, so begin using Flux v2 as soon as possible.
+ This tutorial describes how to use GitOps in a Kubernetes cluster. Before you dive in, take a moment to [learn how GitOps with Flux works conceptually](./conceptual-gitops-flux2.md).
-General availability of Azure Arc-enabled Kubernetes includes GitOps with Flux v1. The public preview of GitOps with Flux v2, documented here, is available in both AKS and Azure Arc-enabled Kubernetes. Eventually Azure will stop supporting GitOps with Flux v1, so begin using Flux v2 as soon as possible.
+> [!IMPORTANT]
+> Add-on Azure management services, like Kubernetes Configuration, are charged when enabled. Costs related to use of Flux v2 will start to be billed on July 1, 2022. For more information, see [Azure Arc pricing](https://azure.microsoft.com/pricing/details/azure-arc/).
>[!IMPORTANT]
->GitOps with Flux v2 is in preview. In preparation for general availability, features are still being added to the preview. One recently-released feature, multi-tenancy, could affect some users. To understand how to work with multi-tenancy, [please review these details](#multi-tenancy).
->
->The `microsoft.flux` extension released major version 1.0.0. This includes the multi-tenancy feature. If you have existing GitOps Flux v2 configurations that use a previous version of the `microsoft.flux` extension you can upgrade to the latest extension manually using the Azure CLI: "az k8s-extension create -g <RESOURCE_GROUP> -c <CLUSTER_NAME> -n flux --extension-type microsoft.flux -t <CLUSTER_TYPE>" (use "-t connectedClusters" for Arc clusters and "-t managedClusters" for AKS clusters).
+> The `microsoft.flux` extension has released major version 1.0.0, which includes the [multi-tenancy feature](#multi-tenancy). If you have existing GitOps Flux v2 configurations that use a previous version of the `microsoft.flux` extension, you can upgrade to the latest extension manually using the Azure CLI: "az k8s-extension create -g <RESOURCE_GROUP> -c <CLUSTER_NAME> -n flux --extension-type microsoft.flux -t <CLUSTER_TYPE>" (use "-t connectedClusters" for Arc clusters and "-t managedClusters" for AKS clusters).
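Written out as commands for both cluster types (taken directly from the note above; the resource group and cluster name are placeholders):

```azurecli
# Azure Arc-enabled Kubernetes (connected) clusters
az k8s-extension create -g <RESOURCE_GROUP> -c <CLUSTER_NAME> -n flux --extension-type microsoft.flux -t connectedClusters

# AKS (managed) clusters
az k8s-extension create -g <RESOURCE_GROUP> -c <CLUSTER_NAME> -n flux --extension-type microsoft.flux -t managedClusters
```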
## Prerequisites
To manage GitOps through the Azure CLI or the Azure portal, you need the followi
### Supported regions
-GitOps is currently supported in all regions that Azure Arc-enabled Kubernetes supports. [See the supported regions](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=kubernetes-service,azure-arc). GitOps (preview) is currently supported in a subset of the regions that AKS supports. The GitOps service is adding new supported regions on a regular cadence.
+GitOps is currently supported in all regions that Azure Arc-enabled Kubernetes supports. [See the supported regions](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=kubernetes-service,azure-arc). GitOps is currently supported in a subset of the regions that AKS supports. The GitOps service is adding new supported regions on a regular cadence.
### Network requirements
az k8s-configuration flux kustomization -h
Group az k8s-configuration flux kustomization : Commands to manage Kustomizations associated with Flux v2 Kubernetes configurations.
- Command group 'k8s-configuration flux' is in preview and under development. Reference
- and support levels: https://aka.ms/CLI_refstatus
Commands: create : Create a Kustomization associated with a Flux v2 Kubernetes configuration.
This command is from the following extension: k8s-configuration
Command az k8s-configuration flux kustomization create : Create a Kustomization associated with a Kubernetes Flux v2 Configuration.
- Command group 'k8s-configuration flux kustomization' is in preview and under
- development. Reference and support levels: https://aka.ms/CLI_refstatus
+ Arguments --cluster-name -c [Required] : Name of the Kubernetes cluster. --cluster-type -t [Required] : Specify Arc connected clusters or AKS managed clusters.
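A hedged usage sketch based on the help text above; the configuration name, kustomization name, and path flags are assumptions about the command's parameters rather than values from this article:

```azurecli
az k8s-configuration flux kustomization create \
  --resource-group <resource-group> \
  --cluster-name <cluster-name> \
  --cluster-type connectedClusters \
  --name <flux-configuration-name> \
  --kustomization-name apps \
  --path ./apps \
  --prune
```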
For usage details, see the following documents:
* [Migrate to Flux v2 Helm from Flux v1 Helm](https://fluxcd.io/docs/migration/helm-operator-migration/) * [Flux Helm controller](https://fluxcd.io/docs/components/helm/)
+> [!TIP]
+> Because of how Helm handles index files, processing Helm charts is an expensive operation that can have a very high memory footprint. As a result, reconciling a large number of Helm charts in parallel can cause memory spikes and OOMKilled errors. By default, the source-controller sets its memory limit at 1Gi and its memory request at 64Mi. If you need to increase this limit and request because of a high number of large Helm chart reconciliations, run the following command after installing the Microsoft.Flux extension.
+>
+> `az k8s-extension update -g <resource-group> -c <cluster-name> -n flux -t connectedClusters --config source-controller.resources.limits.memory=2Gi source-controller.resources.requests.memory=300Mi`
+ ### Use the GitRepository source for Helm charts If your Helm charts are stored in the `GitRepository` source that you configure as part of the `fluxConfigurations` resource, you can add an annotation to your HelmRelease yaml to indicate that the configured source should be used as the source of the Helm charts. The annotation is `clusterconfig.azure.com/use-managed-source: "true"`, and here is a usage example:
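As a hedged sketch of that usage, a `HelmRelease` carrying the annotation might be applied as follows; the release name, namespace, chart path, and interval are placeholders, and the API version shown is the Flux v2 `v2beta1` HelmRelease schema assumed for illustration:

```bash
# Apply a HelmRelease that reuses the GitRepository source managed by the fluxConfiguration.
# The annotation tells the Flux Helm controller to use the configured managed source,
# so no explicit sourceRef is shown here (per the text above); all names are placeholders.
kubectl apply -f - <<'EOF'
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: somename
  namespace: somenamespace
  annotations:
    clusterconfig.azure.com/use-managed-source: "true"
spec:
  interval: 5m
  chart:
    spec:
      chart: ./charts/my-chart
EOF
```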
azure-arc Use Gitops With Helm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/use-gitops-with-helm.md
Title: "Deploy Helm Charts using GitOps on Azure Arc-enabled Kubernetes cluster"
# Previously updated : 03/03/2021 Last updated : 05/24/2022 description: "Use GitOps with Helm for an Azure Arc-enabled cluster configuration" keywords: "GitOps, Kubernetes, K8s, Azure, Helm, Arc, AKS, Azure Kubernetes Service, containers"
keywords: "GitOps, Kubernetes, K8s, Azure, Helm, Arc, AKS, Azure Kubernetes Serv
# Deploy Helm Charts using GitOps on an Azure Arc-enabled Kubernetes cluster > [!NOTE]
-> This article is for GitOps with Flux v1. GitOps with Flux v2 is now available in preview for Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters; [go to the tutorial for GitOps with Flux v2](./tutorial-use-gitops-flux2.md).
+> This article is for GitOps with Flux v1. GitOps with Flux v2 is now available for Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters; [go to the tutorial for GitOps with Flux v2](./tutorial-use-gitops-flux2.md). Eventually Azure will stop supporting GitOps with Flux v1, so begin using Flux v2 as soon as possible.
Helm is an open-source packaging tool that helps you install and manage the lifecycle of Kubernetes applications. Similar to Linux package managers like APT and Yum, Helm is used to manage Kubernetes charts, which are packages of pre-configured Kubernetes resources.
azure-arc Agent Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes-archive.md
Title: Archive for What's new with Azure Arc-enabled servers agent description: The What's new release notes in the Overview section for Azure Arc-enabled servers agent contains six months of activity. Thereafter, the items are removed from the main article and put into this article. Previously updated : 04/15/2022 Last updated : 05/24/2022
The Azure Connected Machine agent receives improvements on an ongoing basis. Thi
- Known issues - Bug fixes
+## Version 1.13 - November 2021
+
+### Known issues
+
+- Extensions may get stuck in transient states (creating, deleting, updating) on Windows machines running the 1.13 agent in certain conditions. Microsoft recommends upgrading to agent version 1.14 as soon as possible to resolve this issue.
+
+### Fixed
+
+- Improved reliability when installing or upgrading the agent.
+
+### New features
+
+- Local configuration of agent settings now available using the [azcmagent config command](manage-agent.md#config).
+- Proxy server settings can be [configured using agent-specific settings](manage-agent.md#update-or-remove-proxy-settings) instead of environment variables.
+- Extension operations will execute faster using a new notification pipeline. You may need to adjust your firewall or proxy server rules to allow the new network addresses for this notification service (see [networking configuration](network-requirements.md)). The extension manager will fall back to the existing behavior of checking every 5 minutes when the notification service cannot be reached.
+- Detection of the AWS account ID, instance ID, and region information for servers running in Amazon Web Services.
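As a brief, hedged illustration of the local configuration and proxy settings called out above (the proxy URL is a placeholder):

```bash
# Set a proxy server for the Azure Connected Machine agent locally (agent 1.13 and later).
azcmagent config set proxy.url "http://proxyserver.local:8080"

# Remove the proxy setting again.
azcmagent config clear proxy.url
```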
++ ## Version 1.12 - October 2021 ### Fixed
azure-arc Agent Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes.md
Title: What's new with Azure Arc-enabled servers agent description: This article has release notes for Azure Arc-enabled servers agent. For many of the summarized issues, there are links to more details. Previously updated : 04/18/2022 Last updated : 05/24/2022
The Azure Connected Machine agent receives improvements on an ongoing basis. To
This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [archive for What's new with Azure Arc-enabled servers agent](agent-release-notes-archive.md).
+## Version 1.18 - May 2022
+
+### New features
+
+- The agent can now be configured to operate in [monitoring mode](security-overview.md#agent-modes), which simplifies configuration of the agent when you only want to use Arc for monitoring and security scenarios. This mode disables other agent functionality and prevents the use of extensions that could make changes to the system (for example, the Custom Script Extension).
+- VMs and hosts running on Azure Stack HCI now report the cloud provider as "HCI" when [Azure benefits are enabled](/azure-stack/hci/manage/azure-benefits#enable-azure-benefits).
+
+### Fixed
+
+- `systemd` is now an official prerequisite on Linux, and your package manager will alert you if you try to install the Azure Connected Machine agent on a server without systemd.
+- Guest configuration policies no longer create unnecessary files in the `/tmp` directory on Linux servers.
+- Improved reliability when extracting extensions and guest configuration policy packages.
+- Improved reliability for guest configuration policies that have child processes.
+ ## Version 1.17 - April 2022 ### New features
This page is updated monthly, so revisit it regularly. If you're looking for ite
- A state corruption issue in the extension manager that could cause extension operations to get stuck in transient states has been fixed. Customers running agent version 1.13 are encouraged to upgrade to version 1.14 as soon as possible. If you continue to have issues with extensions after upgrading the agent, [submit a support ticket](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
-## Version 1.13 - November 2021
-
-### Known issues
--- Extensions may get stuck in transient states (creating, deleting, updating) on Windows machines running the 1.13 agent in certain conditions. Microsoft recommends upgrading to agent version 1.14 as soon as possible to resolve this issue.-
-### Fixed
--- Improved reliability when installing or upgrading the agent.-
-### New features
--- Local configuration of agent settings now available using the [azcmagent config command](manage-agent.md#config).-- Proxy server settings can be [configured using agent-specific settings](manage-agent.md#update-or-remove-proxy-settings) instead of environment variables.-- Extension operations will execute faster using a new notification pipeline. You may need to adjust your firewall or proxy server rules to allow the new network addresses for this notification service (see [networking configuration](network-requirements.md)). The extension manager will fall back to the existing behavior of checking every 5 minutes when the notification service cannot be reached.-- Detection of the AWS account ID, instance ID, and region information for servers running in Amazon Web Services.- ## Next steps - Before evaluating or enabling Azure Arc-enabled servers across multiple hybrid machines, review [Connected Machine agent overview](agent-overview.md) to understand requirements, technical details about the agent, and deployment methods.
azure-arc Onboard Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-service-principal.md
Title: Connect hybrid machines to Azure at scale description: In this article, you learn how to connect machines to Azure using Azure Arc-enabled servers using a service principal. Previously updated : 02/23/2022 Last updated : 05/23/2022 # Connect hybrid machines to Azure at scale
-You can enable Azure Arc-enabled servers for multiple Windows or Linux machines in your environment with several flexible options depending on your requirements. Using the template script we provide, you can automate every step of the installation, including establishing the connection to Azure Arc. However, you are required to interactively execute this script with an account that has elevated permissions on the target machine and in Azure.
+You can enable Azure Arc-enabled servers for multiple Windows or Linux machines in your environment with several flexible options depending on your requirements. Using the template script we provide, you can automate every step of the installation, including establishing the connection to Azure Arc. However, you are required to execute this script manually with an account that has elevated permissions on the target machine and in Azure.
-To connect the machines to Azure Arc-enabled servers, you can use an Azure Active Directory [service principal](../../active-directory/develop/app-objects-and-service-principals.md) instead of using your privileged identity to [interactively connect the machine](onboard-portal.md). This service principal is a special limited management identity that is granted only the minimum permission necessary to connect machines to Azure using the `azcmagent` command. This is safer than using a higher privileged account like a Tenant Administrator, and follows our access control security best practices. The service principal is used only during onboarding; it is not used for any other purpose.
+One method to connect the machines to Azure Arc-enabled servers is to use an Azure Active Directory [service principal](../../active-directory/develop/app-objects-and-service-principals.md). This service principal method can be used instead of your privileged identity to [interactively connect the machine](onboard-portal.md). This service principal is a special limited management identity that has only the minimum permission necessary to connect machines to Azure using the `azcmagent` command. This method is safer than using a higher privileged account like a Tenant Administrator and follows our access control security best practices. **The service principal is used only during onboarding; it is not used for any other purpose.**
-The installation methods to install and configure the Connected Machine agent requires that the automated method you use has administrator permissions on the machines: on Linux by using the root account, and on Windows as a member of the Local Administrators group.
+Before you start connecting your machines, review the following requirements:
-Before you get started, be sure to review the [prerequisites](prerequisites.md) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions). Also review our [at-scale planning guide](plan-at-scale-deployment.md) to understand the design and deployment criteria, as well as our management and monitoring recommendations.
+1. Make sure you have administrator permission on the machines you want to onboard.
+
+ Administrator permissions are required to install the Connected Machine agent on the machines: on Linux, by using the root account, and on Windows, as a member of the Local Administrators group.
+1. Review the [prerequisites](prerequisites.md) and verify that your subscription and resources meet the requirements. You will need to have the **Azure Connected Machine Onboarding** role or the **Contributor** role for the resource group of the machine.
+
+ For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions). Also review our [at-scale planning guide](plan-at-scale-deployment.md) to understand the design and deployment criteria, as well as our management and monitoring recommendations.
+
+<!--The installation methods to install and configure the Connected Machine agent requires that the automated method you use has administrator permissions on the machines: on Linux by using the root account, and on Windows as a member of the Local Administrators group.
+
+Before you get started, be sure to review the [prerequisites](prerequisites.md) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions). Also review our [at-scale planning guide](plan-at-scale-deployment.md) to understand the design and deployment criteria, as well as our management and monitoring recommendations.-->
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
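A minimal sketch of creating such an onboarding service principal with the Azure CLI follows; the display name, subscription ID, and resource group are placeholders, and the scope shown is only one reasonable choice:

```azurecli
az ad sp create-for-rbac \
  --name "Hybrid-Machine-Onboarding" \
  --role "Azure Connected Machine Onboarding" \
  --scopes /subscriptions/<subscription-id>/resourceGroups/<resource-group>
```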
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/overview.md
Azure Arc-enabled servers lets you manage Windows and Linux physical servers and
When a hybrid machine is connected to Azure, it becomes a connected machine and is treated as a resource in Azure. Each connected machine has a Resource ID enabling the machine to be included in a resource group.
-To connect hybrid machines, you install the [Azure Connected Machine agent](agent-overview.md) on each machine. This agent does not deliver any other functionality, and it doesn't replace the Azure [Log Analytics agent](../../azure-monitor/agents/log-analytics-agent.md). The Log Analytics agent for Windows and Linux is required in order to:
+To connect hybrid machines, you install the [Azure Connected Machine agent](agent-overview.md) on each machine. This agent does not deliver any other functionality, and it doesn't replace the Azure [Log Analytics agent](../../azure-monitor/agents/log-analytics-agent.md) / [Azure Monitor Agent](../../azure-monitor/agents/azure-monitor-agent-overview.md). The Log Analytics agent or Azure Monitor Agent for Windows and Linux is required in order to:
* Proactively monitor the OS and workloads running on the machine * Manage it using Automation runbooks or solutions like Update Management
azure-arc Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prerequisites.md
Title: Connected Machine agent prerequisites description: Learn about the prerequisites for installing the Connected Machine agent for Azure Arc-enabled servers. Previously updated : 05/10/2022 Last updated : 05/24/2022
The following versions of the Windows and Linux operating system are officially
## Software requirements
+Windows operating systems:
+ * .NET Framework 4.6 or later is required. [Download the .NET Framework](/dotnet/framework/install/guide-for-developers). * Windows PowerShell 5.1 is required. [Download Windows Management Framework 5.1](https://www.microsoft.com/download/details.aspx?id=54616).
+Linux operating systems:
+
+* systemd
+* wget (to download the installation script)
+ ## Required permissions The following Azure built-in roles are required for different aspects of managing connected machines:
azure-arc Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/security-overview.md
Title: Security overview description: Security information about Azure Arc-enabled servers. Previously updated : 04/15/2022 Last updated : 05/24/2022 # Azure Arc-enabled servers security overview
sudo azcmagent config set extensions.allowlist "Microsoft.EnterpriseCloud.Monito
sudo azcmagent config set guestconfiguration.enabled true ```
+## Agent modes
+
+A simpler way to configure local security controls for monitoring and security scenarios is to use the *monitor mode*, available with agent version 1.18 and newer. Modes are pre-defined configurations of the extension allowlist and guest configuration agent maintained by Microsoft. As new extensions become available that enable monitoring scenarios, Microsoft will update the allowlist and agent configuration to include or exclude the new functionality, as appropriate.
+
+There are two modes to choose from:
+
+1. **full** - the default mode. This allows all agent functionality.
+1. **monitor** - a restricted mode that disables the guest configuration policy agent and only allows the use of extensions related to monitoring and security.
+
+To enable monitor mode, run the following command:
+
+```bash
+azcmagent config set config.mode monitor
+```
+
+You can check the current mode of the agent and allowed extensions with the following command:
+
+```bash
+azcmagent config list
+```
+
+While in monitor mode, you cannot modify the extension allowlist or blocklist. If you need to change either list, change the agent back to full mode and specify your own allowlist and blocklist.
+
+To change the agent back to full mode, run the following command:
+
+```bash
+azcmagent config set config.mode full
+```
+ ## Using a managed identity with Azure Arc-enabled servers
-By default, the Azure Active Directory system assigned identity used by Arc can only be used to update the status of the Azure Arc-enabled server in Azure. For example, the *last seen* heartbeat status. You can optionally assign other roles to the identity if an application on your server uses the system assigned identity to access other Azure services. To learn more about configuring a system-assigned managed identity to access Azure resources, see [Authenticate against Azure resources with Azure Arc-enabled servers](managed-identity-authentication.md).
+By default, the Azure Active Directory system assigned identity used by Arc can only be used to update the status of the Azure Arc-enabled server in Azure. For example, the *last seen* heartbeat status. You can optionally assign other roles to the identity if an application on your server uses the system assigned identity to access other Azure services. To learn more about configuring a system-assigned managed identity to access Azure resources, see [Authenticate against Azure resources with Azure Arc-enabled servers](managed-identity-authentication.md).
While the Hybrid Instance Metadata Service can be accessed by any application running on the machine, only authorized applications can request an Azure AD token for the system assigned identity. On the first attempt to access the token URI, the service will generate a randomly generated cryptographic blob in a location on the file system that only trusted callers can read. The caller must then read the file (proving it has appropriate permission) and retry the request with the file contents in the authorization header to successfully retrieve an Azure AD token.
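As a hedged sketch of that challenge flow on a Linux Arc-enabled server (the endpoint port, API version, and file paths shown are assumed defaults, not values taken from this article):

```bash
# First request returns HTTP 401 with a Www-Authenticate header that points at the challenge file.
CHALLENGE_TOKEN_PATH=$(curl -s -D - -H "Metadata: true" \
  "http://127.0.0.1:40342/metadata/identity/oauth2/token?api-version=2020-06-01&resource=https%3A%2F%2Fmanagement.azure.com" \
  | grep -i "Www-Authenticate" | cut -d "=" -f 2 | tr -d "[:cntrl:]")

# Only trusted callers can read the challenge file; echoing its contents back proves permission.
CHALLENGE_TOKEN=$(cat "$CHALLENGE_TOKEN_PATH")

# Retry with the challenge token in the Authorization header to receive the Azure AD token.
curl -s -H "Metadata: true" -H "Authorization: Basic $CHALLENGE_TOKEN" \
  "http://127.0.0.1:40342/metadata/identity/oauth2/token?api-version=2020-06-01&resource=https%3A%2F%2Fmanagement.azure.com"
```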
azure-arc Ssh Arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/ssh-arc-overview.md
To add access to SSH connections, run the following:
> If you are using a non-default port for your SSH connection, replace port 22 with your desired port in the previous command. ## Examples
-To view examples of using the ```az ssh vm``` command, view the az CLI documentation page for [az ssh](/cli/azure/ssh).
+To view examples of using the ```az ssh arc``` command, view the az CLI documentation page for [az ssh](/cli/azure/ssh).
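For instance, a connection to an Arc-enabled server might look like the following sketch; the resource group, machine name, and local user are placeholders:

```azurecli
az ssh arc --resource-group myResourceGroup --name myArcServer --local-user myLocalUser
```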
azure-cache-for-redis Cache Administration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-administration.md
description: Learn how to perform administration tasks such as reboot and schedu
Previously updated : 07/05/2017 Last updated : 05/21/2021
azure-cache-for-redis Cache Aspnet Output Cache Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-aspnet-output-cache-provider.md
ms.devlang: csharp Previously updated : 04/22/2018 Last updated : 05/18/2021+ # ASP.NET Output Cache Provider for Azure Cache for Redis
azure-cache-for-redis Cache Go Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-go-get-started.md
ms.devlang: golang Previously updated : 01/08/2021 Last updated : 09/09/2021
-#Customer intent: As a Go developer new to Azure Cache for Redis, I want to create a new Go app that uses Azure Cache for Redis.
+ # Quickstart: Use Azure Cache for Redis with Go
azure-cache-for-redis Cache How To Manage Redis Cache Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-manage-redis-cache-powershell.md
Title: Manage Azure Cache for Redis with Azure PowerShell description: Learn how to perform administrative tasks for Azure Cache for Redis using Azure PowerShell. - Previously updated : 07/13/2017 Last updated : 06/03/2021
azure-cache-for-redis Cache How To Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-monitor.md
Each metric includes two versions. One metric measures performance for the entir
| Connected Clients |The number of client connections to the cache during the specified reporting interval. This number maps to `connected_clients` from the Redis INFO command. Once the [connection limit](cache-configure.md#default-redis-server-configuration) is reached, later attempts to connect to the cache fail. Even if there are no active client applications, there may still be a few instances of connected clients because of internal processes and connections. | | Connections Created Per Second | The number of instantaneous connections created per second on the cache via port 6379 or 6380 (SSL). This metric can help identify whether clients are frequently disconnecting and reconnecting, which can cause higher CPU usage and Redis Server Load. | | Connections Closed Per Second | The number of instantaneous connections closed per second on the cache via port 6379 or 6380 (SSL). This metric can help identify whether clients are frequently disconnecting and reconnecting, which can cause higher CPU usage and Redis Server Load. |
-| CPU |The CPU utilization of the Azure Cache for Redis server as a percentage during the specified reporting interval. This value maps to the operating system `\Processor(_Total)\% Processor Time` performance counter. Note: This metric can be noisy due to low priority background security processes running on the node, so we recommend monitoring Server Load metric to track redis-server load.|
+| CPU |The CPU utilization of the Azure Cache for Redis server as a percentage during the specified reporting interval. This value maps to the operating system `\Processor(_Total)\% Processor Time` performance counter. Note: This metric can be noisy due to low priority background security processes running on the node, so we recommend monitoring the Server Load metric to track the load on a Redis server.|
| Errors | Specific failures and performance issues that the cache could be experiencing during a specified reporting interval. This metric has eight dimensions representing different error types, but could have more added in the future. The error types represented now are as follows: <br/><ul><li>**Failover** – when a cache fails over (subordinate promotes to primary)</li><li>**Dataloss** – when there's data loss on the cache</li><li>**UnresponsiveClients** – when the clients aren't reading data from the server fast enough</li><li>**AOF** – when there's an issue related to AOF persistence</li><li>**RDB** – when there's an issue related to RDB persistence</li><li>**Import** – when there's an issue related to Import RDB</li><li>**Export** – when there's an issue related to Export RDB</li></ul> | | Evicted Keys |The number of items evicted from the cache during the specified reporting interval because of the `maxmemory` limit. This number maps to `evicted_keys` from the Redis INFO command. | | Expired Keys |The number of items expired from the cache during the specified reporting interval. This value maps to `expired_keys` from the Redis INFO command.|
azure-cache-for-redis Cache How To Premium Persistence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-premium-persistence.md
Previously updated : 10/13/2021 Last updated : 05/17/2022 # Configure data persistence for a Premium Azure Cache for Redis instance
Azure Cache for Redis offers Redis persistence using the Redis database (RDB) an
- **RDB persistence** - When you use RDB persistence, Azure Cache for Redis persists a snapshot of your cache in a binary format. The snapshot is saved in an Azure Storage account. The configurable backup frequency determines how often to persist the snapshot. If a catastrophic event occurs that disables both the primary and replica cache, the cache is reconstructed using the most recent snapshot. Learn more about the [advantages](https://redis.io/topics/persistence#rdb-advantages) and [disadvantages](https://redis.io/topics/persistence#rdb-disadvantages) of RDB persistence. - **AOF persistence** - When you use AOF persistence, Azure Cache for Redis saves every write operation to a log. The log is saved at least once per second into an Azure Storage account. If a catastrophic event occurs that disables both the primary and replica cache, the cache is reconstructed using the stored write operations. Learn more about the [advantages](https://redis.io/topics/persistence#aof-advantages) and [disadvantages](https://redis.io/topics/persistence#aof-disadvantages) of AOF persistence.
+Azure Cache for Redis persistence features are intended to be used to restore data after data loss, not to import data into a new cache. You can't import AOF page blob backups into a new cache. To export data for importing into a new cache, use the export RDB feature or automatic recurring RDB export. For more information on importing to a new cache, see [Import](cache-how-to-import-export-data.md#import).
+
+> [!NOTE]
+> Importing from AOF page blob backups to a new cache is not a supported option.
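As a hedged sketch, an on-demand RDB export with the Azure CLI might look like this; the cache name, resource group, prefix, and container SAS URL are placeholders:

```azurecli
az redis export \
  --name myPremiumCache \
  --resource-group myResourceGroup \
  --prefix mybackup \
  --container "<SAS URL of the destination blob container>" \
  --file-format RDB
```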
+ Persistence writes Redis data into an Azure Storage account that you own and manage. You configure the **New Azure Cache for Redis** on the left during cache creation. For existing premium caches, use the **Resource menu**. > [!NOTE]
No, you can enable RDB or AOF, but not both at the same time.
### How does persistence work with geo-replication?
-If you enable data persistence, geo-replication cannot be enabled for your premium cache.
+If you enable data persistence, geo-replication can't be enabled for your premium cache.
### Which persistence model should I choose?
All RDB persistence backups, except for the most recent one, are automatically d
### When should I use a second storage account?
-Use a second storage account for AOF persistence when you believe you have higher than expected set operations on the cache. Setting up the secondary storage account helps ensure your cache doesn't reach storage bandwidth limits.
+Use a second storage account for AOF persistence when you believe you have higher than expected set operations on the cache. Setting up the secondary storage account helps ensure your cache doesn't reach storage bandwidth limits.
### Does AOF persistence affect throughput, latency, or performance of my cache?
azure-cache-for-redis Cache Java Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-java-get-started.md
Title: 'Quickstart: Use Azure Cache for Redis in Java'
description: In this quickstart, you'll create a new Java app that uses Azure Cache for Redis Previously updated : 05/22/2020 Last updated : 03/21/2021 ms.devlang: java
Clone the repo [Java quickstart](https://github.com/Azure-Samples/azure-cache-re
[!INCLUDE [redis-cache-access-keys](includes/redis-cache-access-keys.md)]
-## Setting up the working environment
+## Setting up the working environment
Depending on your operating system, add environment variables for your **Host name** and **Primary access key** that you noted above. Open a command prompt or a terminal window, and set up the following values:
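For example, on Linux or macOS the values might be set like this (the variable names match those used by the quickstart sample; the Windows `set` equivalents appear later in the sample):

```bash
export REDISCACHEHOSTNAME=<YOUR_HOST_NAME>.redis.cache.windows.net
export REDISCACHEKEY=<YOUR_PRIMARY_ACCESS_KEY>
```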
Replace the placeholders with the following values:
## Understanding the Java sample
-In this sample, you use Maven to run the quickstart app.
+In this sample, you use Maven to run the quickstart app.
1. Change to the new *redistest* project directory.
In this sample, you use Maven to run the quickstart app.
} ```
- This code shows you how to connect to an Azure Cache for Redis instance using the cache host name and key environment variables. The code also stores and retrieves a string value in the cache. The `PING` and `CLIENT LIST` commands are also executed.
+ This code shows you how to connect to an Azure Cache for Redis instance using the cache host name and key environment variables. The code also stores and retrieves a string value in the cache. The `PING` and `CLIENT LIST` commands are also executed.
1. Close the *App.java*.
In this sample, you use Maven to run the quickstart app.
set REDISCACHEHOSTNAME=<YOUR_HOST_NAME>.redis.cache.windows.net set REDISCACHEKEY=<YOUR_PRIMARY_ACCESS_KEY> ```
-
+ 1. Execute the following Maven command to build and run the app: ```dos
In the example below, you see the `Message` key previously had a cached value. T
If you continue to use the quickstart code, you can keep the resources created in this quickstart and reuse them.
-Otherwise, if you're finished with the quickstart sample application, you can delete the Azure resources created in this quickstart to avoid charges.
+Otherwise, if you're finished with the quickstart sample application, you can delete the Azure resources created in this quickstart to avoid charges.
> [!IMPORTANT] > Deleting a resource group is irreversible; the resource group and all the resources in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources. If you created the resources for hosting this sample inside an existing resource group that contains resources you want to keep, you can delete each resource individually on the left instead of deleting the resource group.
Otherwise, if you're finished with the quickstart sample application, you can de
1. In the **Filter by name** textbox, type the name of your resource group. The instructions for this article used a resource group named *TestResources*. On your resource group in the result list, select **...** then **Delete resource group**.
- :::image type="content" source="./media/cache-java-get-started/azure-cache-redis-delete-resource-group.png" alt-text="Azure resource group deleted":::
+ :::image type="content" source="./media/cache-java-get-started/azure-cache-redis-delete-resource-group.png" alt-text="Azure resource group deleted":::
1. You'll be asked to confirm the deletion of the resource group. Type the name of your resource group to confirm, and select **Delete**.
azure-cache-for-redis Cache Manage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-manage-cli.md
Title: Manage Azure Cache for Redis using Azure classic CLI description: Learn how to install the Azure classic CLI on any platform, how to use it to connect to your Azure account, and how to create and manage an Azure Cache for Redis from the classic CLI. - Previously updated : 01/23/2017 Last updated : 05/25/2021
The Azure classic CLI is a great way to manage your Azure infrastructure from any platform. This article shows how to create and manage your Azure Cache for Redis instances using the Azure classic CLI. [!INCLUDE [outdated-cli-content](../../includes/contains-classic-cli-content.md)]+ > [!NOTE] > For the latest Azure CLI sample scripts, see [Azure CLI Azure Cache for Redis samples](cli-samples.md).
azure-cache-for-redis Cache Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-ml.md
Previously updated : 09/30/2020 Last updated : 06/09/2021+ # Deploy a machine learning model to Azure Functions with Azure Cache for Redis
You'll be able to deploy a machine learning model to Azure Functions with any
| Setting | Suggested value | Description | | | - | -- |
- | **DNS name** | Enter a globally unique name. | The cache name must be a string between 1 and 63 characters. The string can contain only numbers, letters, or hyphens. The name must start and end with a number or letter, and can't contain consecutive hyphens. Your cache instance's *host name* will be *\<DNS name>.redis.cache.windows.net*. |
+ | **DNS name** | Enter a globally unique name. | The cache name must be a string between 1 and 63 characters. The string can contain only numbers, letters, or hyphens. The name must start and end with a number or letter, and can't contain consecutive hyphens. Your cache instance's _host name_ will be _\<DNS name>.redis.cache.windows.net_. |
| **Subscription** | Drop down and select your subscription. | The subscription under which to create this new Azure Cache for Redis instance. | | **Resource group** | Drop down and select a resource group, or select **Create new** and enter a new resource group name. | Name for the resource group in which to create your cache and other resources. By putting all your app resources in one resource group, you can easily manage or delete them together. | | **Location** | Drop down and select a location. | Select a [region](https://azure.microsoft.com/regions/) near other services that will use your cache. |
It takes a while for the cache to create. You can monitor progress on the Azure
Before deploying, you must define what is needed to run the model as a web service. The following list describes the core items needed for a deployment:
-* An __entry script__. This script accepts requests, scores the request using the model, and returns the results.
+* An **entry script**. This script accepts requests, scores the request using the model, and returns the results.
> [!IMPORTANT] > The entry script is specific to your model; it must understand the format of the incoming request data, the format of the data expected by your model, and the format of the data returned to clients.
For more information on entry script, see [Define scoring code.](../machine-lear
* **Dependencies**, such as helper scripts or Python/Conda packages required to run the entry script or model
-These entities are encapsulated into an __inference configuration__. The inference configuration references the entry script and other dependencies.
+These entities are encapsulated into an **inference configuration**. The inference configuration references the entry script and other dependencies.
> [!IMPORTANT] > When creating an inference configuration for use with Azure Functions, you must use an [Environment](/python/api/azureml-core/azureml.core.environment%28class%29) object. Please note that if you are defining a custom environment, you must add azureml-defaults with version >= 1.0.45 as a pip dependency. This package contains the functionality needed to host the model as a web service. The following example demonstrates creating an environment object and using it with an inference configuration:
For more information on environments, see [Create and manage environments for tr
For more information on inference configuration, see [Deploy models with Azure Machine Learning](../machine-learning/how-to-deploy-and-where.md?tabs=python#define-an-inference-configuration). > [!IMPORTANT]
-> When deploying to Functions, you do not need to create a __deployment configuration__.
+> When deploying to Functions, you do not need to create a **deployment configuration**.
## Install the SDK preview package for Functions support
When `show_output=True`, the output of the Docker build process is shown. Once t
} ```
- Save the value for __username__ and one of the __passwords__.
+ Save the value for **username** and one of the **passwords**.
1. If you don't already have a resource group or app service plan to deploy the service, these commands demonstrate how to create both:
azure-cache-for-redis Cache Redis Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-redis-samples.md
Previously updated : 01/23/2017 Last updated : 05/11/2021 # Azure Cache for Redis samples
-You'll find a list of Azure Cache for Redis samples in this article.
-The samples cover scenarios such as:
+
+You'll find a list of Azure Cache for Redis samples in this article.
+The samples cover scenarios such as:
* Connecting to a cache * Reading and writing data to and from a cache
-* And using the ASP.NET Azure Cache for Redis providers.
+* And using the ASP.NET Azure Cache for Redis providers.
Some samples are downloadable projects. Other samples provide step-by-step guidance that includes code snippets but don't link to a downloadable project. ## Hello world samples+ The samples in this section show the basics of connecting to an Azure Cache for Redis instance. The sample also shows reading and writing data to the cache using different languages and Redis clients. The [Hello world](https://github.com/rustd/RedisSamples/tree/master/HelloWorld) sample shows how to do various cache operations using the [StackExchange.Redis](https://github.com/StackExchange/StackExchange.Redis) .NET client.
For more information, see the [StackExchange.Redis](https://github.com/StackExch
[How to use Azure Cache for Redis with Python](cache-python-get-started.md) shows how to get started with Azure Cache for Redis using Python and the [redis-py](https://github.com/andymccurdy/redis-py) client.
-[Work with .NET objects in the cache](cache-dotnet-how-to-use-azure-redis-cache.md#work-with-net-objects-in-the-cache) shows you one way to serialize .NET objects to write them to and read them from an Azure Cache for Redis instance.
+[Work with .NET objects in the cache](cache-dotnet-how-to-use-azure-redis-cache.md#work-with-net-objects-in-the-cache) shows you one way to serialize .NET objects to write them to and read them from an Azure Cache for Redis instance.
## Use Azure Cache for Redis as a Scale out Backplane for ASP.NET SignalR+ The [Use Azure Cache for Redis as a Scale out Backplane for ASP.NET SignalR](https://github.com/rustd/RedisSamples/tree/master/RedisAsSignalRBackplane) sample demonstrates how to use Azure Cache for Redis as a SignalR backplane. For more information about backplane, see [SignalR Scaleout with Redis](https://www.asp.net/signalr/overview/performance/scaleout-with-redis). ## Azure Cache for Redis customer query sample+ This sample compares performance between accessing data from a cache and accessing data from persistence storage. This sample has two projects. * [Demo how Azure Cache for Redis can improve performance by Caching data](https://github.com/rustd/RedisSamples/tree/master/RedisCacheCustomerQuerySample) * [Seed the Database and Cache for the demo](https://github.com/rustd/RedisSamples/tree/master/SeedCacheForCustomerQuerySample) ## ASP.NET Session State and Output Caching+ The [Use Azure Cache for Redis to store ASP.NET SessionState and OutputCache](https://github.com/rustd/RedisSamples/tree/master/SessionState_OutputCaching) sample demonstrates: * How to use Azure Cache for Redis to store ASP.NET Session and Output Cache * Using the SessionState and OutputCache providers for Redis. ## Manage Azure Cache for Redis with MAML
-The [Manage Azure Cache for Redis using Azure Management Libraries](https://github.com/rustd/RedisSamples/tree/master/ManageCacheUsingMAML) sample demonstrates how to use Azure Management Libraries to manage - (Create/ Update/ delete) your Cache.
+
+The [Manage Azure Cache for Redis using Azure Management Libraries](https://github.com/rustd/RedisSamples/tree/master/ManageCacheUsingMAML) sample demonstrates how to use the Azure Management Libraries to manage (create, update, and delete) your cache.
## Custom monitoring sample+ The [Access Azure Cache for Redis Monitoring data](https://github.com/rustd/RedisSamples/tree/master/CustomMonitoring) sample demonstrates how to access monitoring data for your Azure Cache for Redis outside of the Azure portal. ## A Twitter-style clone written using PHP and Redis+ The [Retwis](https://github.com/SyntaxC4-MSFT/retwis) sample is the Redis Hello World. It's a minimal Twitter-style social network clone written using Redis and PHP using the [Predis](https://github.com/nrk/predis) client. The source code is designed to be simple and at the same time to show different Redis data structures. ## Bandwidth monitor+ The [Bandwidth monitor](https://github.com/JonCole/SampleCode/tree/master/BandWidthMonitor) sample allows you to monitor the bandwidth used on the client. To measure the bandwidth, run the sample on the cache client machine, make calls to the cache, and observe the bandwidth reported by the bandwidth monitor sample.
azure-cache-for-redis Cache Remove Tls 10 11 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-remove-tls-10-11.md
Title: Remove TLS 1.0 and 1.1 from use with Azure Cache for Redis description: Learn how to remove TLS 1.0 and 1.1 from your application when communicating with Azure Cache for Redis - Previously updated : 10/22/2019 Last updated : 05/25/2021 ms.devlang: csharp, golang, java, javascript, php, python+ # Remove TLS 1.0 and 1.1 from use with Azure Cache for Redis
Redigo uses TLS 1.2 by default.
## Additional information -- [How to configure Azure Cache for Redis](cache-configure.md)
+- [How to configure Azure Cache for Redis](cache-configure.md)
azure-cache-for-redis Cache Reserved Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-reserved-pricing.md
Previously updated : 02/20/2020 Last updated : 06/01/2021+ # Prepay for Azure Cache for Redis compute resources with reserved capacity Azure Cache for Redis now helps you save money by prepaying for compute resources compared to pay-as-you-go prices. With Azure Cache for Redis reserved capacity, you make an upfront commitment on cache for one or three years to get a significant discount on the compute costs. To purchase Azure Cache for Redis reserved capacity, you need to specify the Azure region, service tier, and term.
-You do not need to assign the reservation to specific Azure Cache for Redis instances. An already running Azure Cache for Redis or ones that are newly deployed will automatically get the benefit of reserved pricing, up to the reserved cache size. By purchasing a reservation, you are pre-paying for the compute costs for one or three years. As soon as you buy a reservation, the Azure Cache for Redis compute charges that match the reservation attributes are no longer charged at the pay-as-you go rates. A reservation does not cover networking or storage charges associated with the cache. At the end of the reservation term, the billing benefit expires and the Azure Cache for Redis is billed at the pay-as-you go price. Reservations do not autorenew. For pricing information, see the [Azure Cache for Redis reserved capacity offering](https://azure.microsoft.com/pricing/details/cache).
+You do not need to assign the reservation to specific Azure Cache for Redis instances. An already running Azure Cache for Redis or ones that are newly deployed will automatically get the benefit of reserved pricing, up to the reserved cache size. By purchasing a reservation, you are pre-paying for the compute costs for one or three years. As soon as you buy a reservation, the Azure Cache for Redis compute charges that match the reservation attributes are no longer charged at the pay-as-you-go rates. A reservation does not cover networking or storage charges associated with the cache. At the end of the reservation term, the billing benefit expires and the Azure Cache for Redis is billed at the pay-as-you-go price. Reservations do not auto-renew. For pricing information, see the [Azure Cache for Redis reserved capacity offering](https://azure.microsoft.com/pricing/details/cache).
You can buy Azure Cache for Redis reserved capacity in the [Azure portal](https://portal.azure.com/). To buy the reserved capacity:
You can buy Azure Cache for Redis reserved capacity in the [Azure portal](https:
For the details on how enterprise customers and Pay-As-You-Go customers are charged for reservation purchases, see [understand Azure reservation usage for your Enterprise enrollment](../cost-management-billing/reservations/understand-reserved-instance-usage-ea.md) and [understand Azure reservation usage for your Pay-As-You-Go subscription](../cost-management-billing/reservations/understand-reserved-instance-usage.md). - ## Determine the right cache size before purchase The size of reservation should be based on the total amount of memory size that is used by the existing or soon-to-be-deployed cache within a specific region, and using the same service tier. For example, let's suppose that you're running two caches - one at 13 GB and the other at 26 GB. You'll need both for at least one year. Further, let's suppose that you plan to scale the existing 13-GB caches to 26 GB for a month to meet your seasonal demand, and then scale back. In this case, you can purchase either one P2-cache and one P3-cache or three P2-caches on a one-year reservation to maximize savings. You'll receive discount on the total amount of cache memory you reserve, independent of how that amount is allocated across your caches. - ## Buy Azure Cache for Redis reserved capacity You can buy a reserved VM instance in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Reservations/CreateBlade/). Pay for the reservation [up front or with monthly payments](../cost-management-billing/reservations/prepare-buy-reservation.md).
You can buy a reserved VM instance in the [Azure portal](https://portal.azure.co
3. Select **Add** and then in the Purchase reservations pane, select **Azure Cache for Redis** to purchase a new reservation for your caches. 4. Fill in the required fields. Existing or new databases that match the attributes you select qualify to get the reserved capacity discount. The actual number of your Azure Cache for Redis instances that get the discount depends on the scope and quantity selected. - ![Overview of reserved pricing](media/cache-reserved-pricing/cache-reserved-price.png) - The following table describes required fields.
If you have questions or need help, [create a support request](https://portal.az
The reservation discount is applied automatically to the Azure Cache for Redis instances that match the reservation scope and attributes. You can update the scope of the reservation through the Azure portal, PowerShell, Azure CLI, or the API.
-* To learn how reserved capacity discounts are applied to Azure Cache for Redis, see [Understand the Azure reservation discount](../cost-management-billing/reservations/understand-azure-cache-for-redis-reservation-charges.md)
+* To learn how reserved capacity discounts are applied to Azure Cache for Redis, see [Understand the Azure reservation discount](../cost-management-billing/reservations/understand-azure-cache-for-redis-reservation-charges.md)
* To learn more about Azure Reservations, see the following articles:
- * [What are Azure Reservations?](../cost-management-billing/reservations/save-compute-costs-reservations.md)
- * [Manage Azure Reservations](../cost-management-billing/reservations/manage-reserved-vm-instance.md)
- * [Understand Azure Reservations discount](../cost-management-billing/reservations/understand-reservation-charges.md)
- * [Understand reservation usage for your Pay-As-You-Go subscription](../cost-management-billing/reservations/understand-reservation-charges-mysql.md)
- * [Understand reservation usage for your Enterprise enrollment](../cost-management-billing/reservations/understand-reserved-instance-usage-ea.md)
- * [Azure Reservations in Partner Center Cloud Solution Provider (CSP) program](/partner-center/azure-reservations)
+ * [What are Azure Reservations?](../cost-management-billing/reservations/save-compute-costs-reservations.md)
+ * [Manage Azure Reservations](../cost-management-billing/reservations/manage-reserved-vm-instance.md)
+ * [Understand Azure Reservations discount](../cost-management-billing/reservations/understand-reservation-charges.md)
+ * [Understand reservation usage for your Pay-As-You-Go subscription](../cost-management-billing/reservations/understand-reservation-charges-mysql.md)
+ * [Understand reservation usage for your Enterprise enrollment](../cost-management-billing/reservations/understand-reserved-instance-usage-ea.md)
+ * [Azure Reservations in Partner Center Cloud Solution Provider (CSP) program](/partner-center/azure-reservations)
azure-cache-for-redis Cache Web App Cache Aside Leaderboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-web-app-cache-aside-leaderboard.md
ms.devlang: csharp Previously updated : 03/30/2018-
-#Customer intent: As an ASP.NET developer, new to Azure Cache for Redis, I want to use Azure Cache for Redis to improve performance and reduce back-end database load.
Last updated : 06/09/2021 # Tutorial: Create a cache-aside leaderboard on ASP.NET
The scaffolding code that was generated as part of this sample includes methods
Run the application locally on your machine to verify the functionality that has been added to support the teams.
-In this test, the application and database, are both running locally. The Azure Cache for Redis is not local. It is hosted remotely in Azure. That's why the cache will likely under-perform the database slightly. For best performance, the client application and Azure Cache for Redis instance should be in the same location.
+In this test, the application and database are both running locally, but the Azure Cache for Redis instance is hosted remotely in Azure. That's why the cache will likely slightly underperform the database. For best performance, the client application and Azure Cache for Redis instance should be in the same location.
In the next section, you deploy all resources to Azure to see the improved performance from using a cache.
azure-fluid-relay Container Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/concepts/container-management.md
In most cases, developers will want to manage an inventory of containers and con
### Accessing containers
-Containers are referenced by container ID. Before a user can create or open a container, they must request a JWT that the Fluid Runtime will use when communicating with the Azure Fluid Relay Service. Any process with a valid JWT can access a container. It is the responsibility of the developer to generate JWTs for container access, which puts them in control of the business logic to control access as appropriate for their scenario. The Azure Fluid Relay service has no knowledge of which users should have access to a container. For more information on this topic, see [Azure Fluid Relay token contract](../how-tos/fluid-json-web-token.md)
+Containers are referenced by container ID. Before a user can create or open a container, they must request a JWT that the Fluid Runtime will use when communicating with the Azure Fluid Relay service. Any process with a valid JWT can access a container. It is the responsibility of the developer to generate JWTs for container access, which puts them in control of the business logic that governs access as appropriate for their scenario. The Azure Fluid Relay service has no knowledge of which users should have access to a container. For more information on this topic, see [Azure Fluid Relay token contract](../how-tos/fluid-json-web-token.md).
> [!NOTE] > The JWT field **documentID** corresponds to the Fluid container ID.
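Because token generation is the developer's responsibility, a minimal sketch can make the flow concrete. The following example signs a container-access token with the tenant key using the `jsonwebtoken` npm package; the claim names, scopes, and one-hour lifetime shown here are illustrative assumptions, so verify them against the token contract article linked above before relying on them.

```javascript
// Minimal sketch: signing a container-access token with the tenant key.
// Claim names and values below are illustrative assumptions; check them
// against the Azure Fluid Relay token contract before using them.
const jwt = require("jsonwebtoken");

function getContainerToken(tenantId, tenantKey, containerId, user) {
    const claims = {
        documentId: containerId,   // the Fluid container ID
        tenantId: tenantId,
        scopes: ["doc:read", "doc:write", "summary:write"],
        user: user,                // for example: { id: "user-123", name: "Ada" }
        ver: "1.0",
    };
    // HS256 with the tenant key; the token lifetime is the developer's choice.
    return jwt.sign(claims, tenantKey, { algorithm: "HS256", expiresIn: "1h" });
}
```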
azure-fluid-relay Data Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/concepts/data-encryption.md
> [!NOTE] > This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-Microsoft Azure Fluid Relay Server leverages the encryption-at-rest capability of [Azure Kubernetes](../../aks/enable-host-encryption.md), [Microsoft Azure Cosmos DB]/azure/cosmos-db/database-encryption-at-rest) and [Azure Blob Storage](../../storage/common/storage-service-encryption.md). The service-to-service communication between AFRS and these resources is TLS encrypted and is enclosed in with the Azure Virtual Network Boundary, protected from external interference by Network Security Rules.
+Azure Fluid Relay leverages the encryption-at-rest capability of [Azure Kubernetes Service](../../aks/enable-host-encryption.md), [Azure Cosmos DB](../../cosmos-db/database-encryption-at-rest.md), and [Azure Blob Storage](../../storage/common/storage-service-encryption.md). The service-to-service communication between Azure Fluid Relay and these resources is TLS encrypted and is enclosed within the Azure Virtual Network boundary, protected from external interference by network security rules.
-The diagram below shows at a high level how Azure Fluid Relay Server is implemented and how it handles data storage.
+The diagram below shows at a high level how Azure Fluid Relay is implemented and how it handles data storage.
:::image type="content" source="../images/data-encryption.png" alt-text="A diagram of data storage in Azure Fluid Relay"::: ## Frequently asked questions
-### How much more does Azure Fluid Relay Server cost if Encryption is enabled?
+### How much more does Azure Fluid Relay cost if encryption is enabled?
Encryption-at-rest is enabled by default. There is no additional cost.
The keys are managed by Microsoft.
### How often are encryption keys rotated?
-Microsoft has a set of internal guidelines for encryption key rotation, which Azure Fluid Relay Server follows. The specific guidelines are not published. Microsoft does publish the [Security Development Lifecycle (SDL)](https://www.microsoft.com/sdl/default.aspx), which is seen as a subset of internal guidance and has useful best practices for developers.
+Microsoft has a set of internal guidelines for encryption key rotation which Azure Fluid Relay follows. The specific guidelines are not published. Microsoft does publish the [Security Development Lifecycle (SDL)](https://www.microsoft.com/sdl/default.aspx), which is seen as a subset of internal guidance and has useful best practices for developers.
### Can I use my own encryption keys?
No, this feature is not available yet. Keep an eye out for more updates on this.
### What regions have encryption turned on?
-All Azure Fluid Relay Server regions have encryption turned on for all user data.
+All Azure Fluid Relay regions have encryption turned on for all user data.
-### Does encryption affect the performance latency and throughput SLAs?
+### Does encryption affect the performance latency and throughput?
-A: There is no impact or changes to the performance SLAs with encryption at rest enabled.
+No. There is no impact on performance or changes to latency and throughput when encryption at rest is enabled.
## See also - [Overview of Azure Fluid Relay architecture](architecture.md) - [Azure Fluid Relay token contract](../how-tos/fluid-json-web-token.md)-- [Authentication and authorization in your app](authentication-authorization.md)
+- [Authentication and authorization in your app](authentication-authorization.md)
azure-fluid-relay Data Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/concepts/data-storage.md
+
+ Title: Data storage in Azure Fluid Relay
+description: Better understand data storage in Azure Fluid Relay
++ Last updated : 5/18/2022++++
+# Data storage in Azure Fluid Relay
+
+> [!NOTE]
+> This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+
+A container is the atomic unit of storage in the Azure Fluid Relay service and represents the data stored from a Fluid session, including operations and snapshots. The Fluid runtime uses the container to rehydrate the state of a Fluid session when a user joins for the first time or rejoins after leaving.
+
+You have control of the Azure region where container data is stored. During provisioning of the Azure Fluid Relay resource, you can select the region where you want that data to be stored at rest. All containers created in that Azure Fluid Relay resource will be stored in that region. Once selected, the region can't be changed; to store data in a different region, you need to create a new Azure Fluid Relay resource in that region.
+
+To deliver a highly available service, the container data is replicated to another region. This replication enables disaster recovery in the face of a full regional outage. Internally, Azure Fluid Relay uses Azure Blob Storage cross-region replication to achieve this. The region where data is replicated is defined by the Azure regional pairs listed on the [Cross-region replication in Azure](../../availability-zones/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies) page.
+
+## Single region offering
+
+For regions where cross-region replication is done outside of the geography (like Brazil South), Azure Fluid Relay provides a single region offering. You can select between cross-region replication and the single region offering during provisioning of the Azure Fluid Relay resource. If you select the single region offering, you don't get the benefit of recovery from a regional outage, and your application will experience downtime for the entire time the region is down.
+
+## What about in-transit data?
+During the session's lifetime, some data may live temporarily in transit outside the region selected during resource provisioning. This allows the Azure Fluid Relay service to distribute changes in the DDSes between users at lower latency by placing the session in the region closest to your end users, resulting in a better user experience.
+For the single region offering, in-transit data is scoped to the region selected. This may result in higher latencies distributing changes in DDSes to your end users if they are not close to that region.
+
+If the Fluid container is required for the duration of the collaborative session only, you can delete the container from the Azure Fluid Relay service. This helps you control the storage cost of your Azure Fluid Relay resource.
+
+## See also
+
+- [Overview of Azure Fluid Relay architecture](architecture.md)
+- [How to: Provision an Azure Fluid Relay service](../how-tos/provision-fluid-azure-portal.md)
+- [Delete Fluid containers in Azure Fluid Relay](../how-tos/container-deletion.md)
azure-fluid-relay Container Deletion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/how-tos/container-deletion.md
-# Delete Fluid containers in Microsoft Azure Fluid Relay Server
+# Delete Fluid containers in Azure Fluid Relay
> [!NOTE] > This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
azure-fluid-relay Deploy Fluid Static Web Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/how-tos/deploy-fluid-static-web-apps.md
If you don't have an Azure subscription, [create a free trial account](https://a
[!INCLUDE [fork-fluidhelloworld](../includes/fork-fluidhelloworld.md)]
-## Connect to Azure Fluid Relay Service
+## Connect to Azure Fluid Relay
-You can connect to Azure Fluid Relay service by providing the tenant ID and key that is uniquely generated for you when creating the Azure resource. You can build your own token provider implementation or you can use the two token provider implementations that the Fluid Framework provides: **InsecureTokenProvider** and **AzureFunctionTokenProvider**.
+You can connect to Azure Fluid Relay by providing the tenant ID and key that are uniquely generated for you when creating the Azure resource. You can build your own token provider implementation, or use one of the two token provider implementations that the Fluid Framework provides: **InsecureTokenProvider** and **AzureFunctionTokenProvider**.
To learn more about using InsecureTokenProvider for local development, see [Connecting to the service](connect-fluid-azure-service.md#connecting-to-the-service) and [Authentication and authorization in your app](../concepts/authentication-authorization.md#the-token-provider).
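As a rough local-development sketch (not the article's exact sample), the following shows how an **InsecureTokenProvider** might be wired into an `AzureClient`. The package names are real, but the exact shape of the `connection` object (for example, `endpoint` versus separate `orderer`/`storage` URLs) varies across versions of `@fluidframework/azure-client`, so treat the property names as assumptions and confirm them against the version you install.

```javascript
// Minimal sketch for local development only: the tenant key must never be
// shipped in client code for production scenarios.
const { AzureClient } = require("@fluidframework/azure-client");
const { InsecureTokenProvider } = require("@fluidframework/test-client-utils");

const client = new AzureClient({
    connection: {
        tenantId: "<tenant-id>",          // from your Azure Fluid Relay resource
        tokenProvider: new InsecureTokenProvider("<tenant-key>", {
            id: "local-user",
            name: "Local User",
        }),
        endpoint: "<service-endpoint>",   // property name may differ by package version
    },
});
```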
azure-fluid-relay Provision Fluid Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/how-tos/provision-fluid-azure-portal.md
> [!NOTE] > This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-Before you can connect your app to an Azure Fluid Relay server, you must provision an Azure Fluid Relay server resource on your Azure account. This article walks through the steps to get your Azure Fluid Relay service provisioned and ready to use.
+Before you can connect your app to Azure Fluid Relay, you must provision an Azure Fluid Relay resource in your Azure account. This article walks through the steps to get your Azure Fluid Relay service provisioned and ready to use.
## Prerequisites
-To create an Azure Fluid Relay service, you must have an Azure account. If you don't have an account, you can [try Azure for free](https://azure.com/free).
+To create an Azure Fluid Relay resource, you must have an Azure account. If you don't have an account, you can [try Azure for free](https://azure.com/free).
## Create a resource group A resource group is a logical collection of Azure resources. All resources are deployed and managed in a resource group. To create a resource group:
azure-fluid-relay Quickstart Dice Roll https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/quickstarts/quickstart-dice-roll.md
The sample code used in this quickstart is available [here](https://github.com/m
## Set up your development environment
-To follow along with this quickstart, you'll need an Azure account an [Azure Fluid Relay service provisioned](../how-tos/provision-fluid-azure-portal.md). If you don't have an account, you can [try Azure for free](https://azure.com/free).
+To follow along with this quickstart, you'll need an Azure account and an [Azure Fluid Relay resource provisioned](../how-tos/provision-fluid-azure-portal.md). If you don't have an account, you can [try Azure for free](https://azure.com/free).
You'll also need the following software installed on your computer.
azure-functions Configure Networking How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/configure-networking-how-to.md
When you create a function app, you must create or link to a general-purpose Azu
> [!NOTE]
-> This feature currently works for all Windows and Linux virtual network-supported SKUs in the Dedicated (App Service) plan and for Windows Elastic Premium plans. ASEv3 is not supported yet. Consumption tier isn't supported.
+> This feature currently works for all Windows and Linux virtual network-supported SKUs in the Dedicated (App Service) plan and for Windows Elastic Premium plans. Consumption tier isn't supported.
To set up a function with a storage account restricted to a private network:
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md
description: Learn how to use a .NET isolated process to run your C# functions i
Previously updated : 05/12/2022 Last updated : 05/24/2022 recommendations: false #Customer intent: As a developer, I need to know how to create functions that run in an isolated process so that I can run my function code on current (not LTS) releases of .NET.
This section describes the current state of the functional and behavioral differ
| Output binding types | `IAsyncCollector`, [DocumentClient], [BrokeredMessage], and other client-specific types | Simple types, JSON serializable types, and arrays. | | Multiple output bindings | Supported | [Supported](#multiple-output-bindings) | | HTTP trigger | [HttpRequest]/[ObjectResult] | [HttpRequestData]/[HttpResponseData] |
-| Durable Functions | [Supported](durable/durable-functions-overview.md) | Not supported |
+| Durable Functions | [Supported](durable/durable-functions-overview.md) | [Supported (public preview)](https://github.com/microsoft/durabletask-dotnet#usage-with-azure-functions) |
| Imperative bindings | [Supported](functions-dotnet-class-library.md#binding-at-runtime) | Not supported | | function.json artifact | Generated | Not generated | | Configuration | [host.json](functions-host-json.md) | [host.json](functions-host-json.md) and custom initialization |
azure-functions Durable Functions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-overview.md
Title: Durable Functions Overview - Azure
description: Introduction to the Durable Functions extension for Azure Functions. Previously updated : 12/23/2020 Last updated : 05/24/2022 #Customer intent: As a < type of user >, I want < what? > so that < why? >.
## <a name="language-support"></a>Supported languages
-Durable Functions currently supports the following languages:
+Durable Functions is designed to work with all Azure Functions programming languages but may have different minimum requirements for each language. The following table shows the minimum supported app configurations:
-* **C#**: both [precompiled class libraries](../functions-dotnet-class-library.md) and [C# script](../functions-reference-csharp.md).
-* **JavaScript**: supported only for version 2.x or later of the Azure Functions runtime. Requires version 1.7.0 of the Durable Functions extension, or a later version.
-* **Python**: requires version 2.3.1 of the Durable Functions extension, or a later version.
-* **F#**: precompiled class libraries and F# script. F# script is only supported for version 1.x of the Azure Functions runtime.
-* **PowerShell**: Supported only for version 3.x of the Azure Functions runtime and PowerShell 7. Requires version 2.x of the bundle extensions.
-
-To access the latest features and updates, it is recommended you use the latest versions of the Durable Functions extension and the language-specific Durable Functions libraries. Learn more about [Durable Functions versions](durable-functions-versions.md).
-
-Durable Functions has a goal of supporting all [Azure Functions languages](../supported-languages.md). See the [Durable Functions issues list](https://github.com/Azure/azure-functions-durable-extension/issues) for the latest status of work to support additional languages.
+| Language stack | Azure Functions Runtime versions | Language worker version | Minimum bundles version |
+| - | - | - | - |
+| .NET / C# / F# | Functions 1.0+ | In-process (GA) <br/> Out-of-process ([preview](https://github.com/microsoft/durabletask-dotnet#usage-with-azure-functions)) | N/A |
+| JavaScript/TypeScript | Functions 2.0+ | Node 8+ | 2.x bundles |
+| Python | Functions 2.0+ | Python 3.7+ | 2.x bundles |
+| PowerShell | Functions 3.0+ | PowerShell 7+ | 2.x bundles |
+| Java (coming soon) | Functions 3.0+ | Java 8+ | 4.x bundles |
Like Azure Functions, there are templates to help you develop Durable Functions using [Visual Studio 2019](durable-functions-create-first-csharp.md), [Visual Studio Code](quickstart-js-vscode.md), and the [Azure portal](durable-functions-create-portal.md).
azure-functions Functions Bindings Azure Sql Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-input.md
Title: Azure SQL input binding for Functions
description: Learn to use the Azure SQL input binding in Azure Functions. Previously updated : 5/3/2022+ Last updated : 5/24/2022 zone_pivot_groups: programming-languages-set-functions-lang-workers
When a function runs, the Azure SQL input binding retrieves data from a database
For information on setup and configuration details, see the [overview](./functions-bindings-azure-sql.md).
-## Example
+## Examples
+<a id="example"></a>
::: zone pivot="programming-language-csharp"
+More samples for the Azure SQL input binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-csharp).
+ # [In-process](#tab/in-process) This section contains the following examples:
-* [HTTP trigger, look up ID from query string](#http-trigger-look-up-id-from-query-string-c)
-* [HTTP trigger, get multiple docs from route data](#http-trigger-get-multiple-items-from-route-data-c)
+* [HTTP trigger, get row by ID from query string](#http-trigger-look-up-id-from-query-string-c)
+* [HTTP trigger, get multiple rows from route data](#http-trigger-get-multiple-items-from-route-data-c)
+* [HTTP trigger, delete rows](#http-trigger-delete-one-or-multiple-rows-c)
The examples refer to a `ToDoItem` class and a corresponding database table:
The examples refer to a `ToDoItem` class and a corresponding database table:
:::code language="sql" source="~/functions-sql-todo-sample/sql/create.sql" range="1-7"::: - <a id="http-trigger-look-up-id-from-query-string-c"></a>
+### HTTP trigger, get row by ID from query string
The following example shows a [C# function](functions-dotnet-class-library.md) that retrieves a single record. The function is triggered by an HTTP request that uses a query string to specify the ID. That ID is used to retrieve a `ToDoItem` record with the specified query.
namespace AzureSQLSamples
``` <a id="http-trigger-get-multiple-items-from-route-data-c"></a>
+### HTTP trigger, get multiple rows from route data
The following example shows a [C# function](functions-dotnet-class-library.md) that retrieves documents returned by the query. The function is triggered by an HTTP request that uses route data to specify the value of a query parameter. That parameter is used to filter the `ToDoItem` records in the specified query.
namespace AzureSQLSamples
``` <a id="http-trigger-delete-one-or-multiple-rows-c"></a>
+### HTTP trigger, delete rows
The following example shows a [C# function](functions-dotnet-class-library.md) that executes a stored procedure with input from the HTTP request query parameter.
Isolated process isn't currently supported.
::: zone-end+ > [!NOTE]
-> In the current preview, Azure SQL bindings are only supported by [C# class library functions](functions-dotnet-class-library.md).
+> In the current preview, Azure SQL bindings are only supported by [C# class library functions](functions-dotnet-class-library.md), [JavaScript functions](functions-reference-node.md), and [Python functions](functions-reference-python.md).
::: zone-end
-<!### Use these pivots when we get other non-C# languages added. ###
::: zone pivot="programming-language-javascript"
+More samples for the Azure SQL input binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-js).
+
+This section contains the following examples:
+
+* [HTTP trigger, get multiple rows](#http-trigger-get-multiple-items-javascript)
+* [HTTP trigger, get row by ID from query string](#http-trigger-look-up-id-from-query-string-javascript)
+* [HTTP trigger, delete rows](#http-trigger-delete-one-or-multiple-rows-javascript)
+
+The examples refer to a database table:
++
+<a id="http-trigger-get-multiple-items-javascript"></a>
+### HTTP trigger, get multiple rows
+
+The following example shows a SQL input binding in a function.json file and a JavaScript function that reads from a query and returns the results in the HTTP response.
+
+The following is binding data in the function.json file:
+
+```json
+{
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req",
+ "methods": [
+ "get"
+ ]
+},
+{
+ "type": "http",
+ "direction": "out",
+ "name": "res"
+},
+{
+ "name": "todoItems",
+ "type": "sql",
+ "direction": "in",
+ "commandText": "select [Id], [order], [title], [url], [completed] from dbo.ToDo",
+ "commandType": "Text",
+ "connectionStringSetting": "SqlConnectionString"
+}
+```
+
+The [configuration](#configuration) section explains these properties.
+
+The following is sample JavaScript code:
++
+```javascript
+module.exports = async function (context, req, todoItems) {
+ context.log('JavaScript HTTP trigger and SQL input binding function processed a request.');
+
+ context.res = {
+ // status: 200, /* Defaults to 200 */
+ mimetype: "application/json",
+ body: todoItems
+ };
+}
+```
+
+<a id="http-trigger-look-up-id-from-query-string-javascript"></a>
+### HTTP trigger, get row by ID from query string
+
+The following example shows a SQL input binding in a JavaScript function that reads from a query filtered by a parameter from the query string and returns the row in the HTTP response.
+
+The following is binding data in the function.json file:
+
+```json
+{
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req",
+ "methods": [
+ "get"
+ ]
+},
+{
+ "type": "http",
+ "direction": "out",
+ "name": "res"
+},
+{
+ "name": "todoItem",
+ "type": "sql",
+ "direction": "in",
+ "commandText": "select [Id], [order], [title], [url], [completed] from dbo.ToDo where Id = @Id",
+ "commandType": "Text",
+ "parameters": "@Id = {Query.id}",
+ "connectionStringSetting": "SqlConnectionString"
+}
+```
+
+The [configuration](#configuration) section explains these properties.
+
+The following is sample JavaScript code:
++
+```javascript
+module.exports = async function (context, req, todoItem) {
+ context.log('JavaScript HTTP trigger and SQL input binding function processed a request.');
+
+ context.res = {
+ // status: 200, /* Defaults to 200 */
+ mimetype: "application/json",
+ body: todoItem
+ };
+}
+```
+
+<a id="http-trigger-delete-one-or-multiple-rows-javascript"></a>
+### HTTP trigger, delete rows
+
+The following example shows a SQL input binding in a function.json file and a JavaScript function that executes a stored procedure with input from the HTTP request query parameter.
+
+The stored procedure `dbo.DeleteToDo` must be created on the database. In this example, the stored procedure deletes a single record or all records depending on the value of the parameter.
+++
+```json
+{
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req",
+ "methods": [
+ "get"
+ ]
+},
+{
+ "type": "http",
+ "direction": "out",
+ "name": "res"
+},
+{
+ "name": "todoItems",
+ "type": "sql",
+ "direction": "in",
+ "commandText": "DeleteToDo",
+ "commandType": "StoredProcedure",
+ "parameters": "@Id = {Query.id}",
+ "connectionStringSetting": "SqlConnectionString"
+}
+```
+
+The [configuration](#configuration) section explains these properties.
+
+The following is sample JavaScript code:
++
+```javascript
+module.exports = async function (context, req, todoItems) {
+ context.log('JavaScript HTTP trigger and SQL input binding function processed a request.');
+
+ context.res = {
+ // status: 200, /* Defaults to 200 */
+ mimetype: "application/json",
+ body: todoItems
+ };
+}
+```
+ ::: zone-end
-
::: zone pivot="programming-language-python"
+More samples for the Azure SQL input binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-python).
+
+This section contains the following examples:
+
+* [HTTP trigger, get multiple rows](#http-trigger-get-multiple-items-python)
+* [HTTP trigger, get row by ID from query string](#http-trigger-look-up-id-from-query-string-python)
+* [HTTP trigger, delete rows](#http-trigger-delete-one-or-multiple-rows-python)
+
+The examples refer to a database table:
++
+<a id="http-trigger-get-multiple-items-python"></a>
+### HTTP trigger, get multiple rows
+
+The following example shows a SQL input binding in a function.json file and a Python function that reads from a query and returns the results in the HTTP response.
+
+The following is binding data in the function.json file:
+
+```json
+{
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req",
+ "methods": [
+ "get"
+ ]
+},
+{
+ "type": "http",
+ "direction": "out",
+ "name": "$return"
+},
+{
+ "name": "todoItems",
+ "type": "sql",
+ "direction": "in",
+ "commandText": "select [Id], [order], [title], [url], [completed] from dbo.ToDo",
+ "commandType": "Text",
+ "connectionStringSetting": "SqlConnectionString"
+}
+```
+
+The [configuration](#configuration) section explains these properties.
+
+The following is sample Python code:
++
+```python
+import azure.functions as func
+import json
+
+def main(req: func.HttpRequest, todoItems: func.SqlRowList) -> func.HttpResponse:
+ rows = list(map(lambda r: json.loads(r.to_json()), todoItems))
+
+ return func.HttpResponse(
+ json.dumps(rows),
+ status_code=200,
+ mimetype="application/json"
+ )
+```
+
+<a id="http-trigger-look-up-id-from-query-string-python"></a>
+### HTTP trigger, get row by ID from query string
+
+The following example shows a SQL input binding in a Python function that reads from a query filtered by a parameter from the query string and returns the row in the HTTP response.
+
+The following is binding data in the function.json file:
+
+```json
+{
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req",
+ "methods": [
+ "get"
+ ]
+},
+{
+ "type": "http",
+ "direction": "out",
+ "name": "$return"
+},
+{
+ "name": "todoItem",
+ "type": "sql",
+ "direction": "in",
+ "commandText": "select [Id], [order], [title], [url], [completed] from dbo.ToDo where Id = @Id",
+ "commandType": "Text",
+ "parameters": "@Id = {Query.id}",
+ "connectionStringSetting": "SqlConnectionString"
+}
+```
+
+The [configuration](#configuration) section explains these properties.
+
+The following is sample Python code:
++
+```python
+import azure.functions as func
+import json
+
+def main(req: func.HttpRequest, todoItem: func.SqlRowList) -> func.HttpResponse:
+ rows = list(map(lambda r: json.loads(r.to_json()), todoItem))
+
+ return func.HttpResponse(
+ json.dumps(rows),
+ status_code=200,
+ mimetype="application/json"
+ )
+```
++
+<a id="http-trigger-delete-one-or-multiple-rows-python"></a>
+### HTTP trigger, delete rows
+
+The following example shows a SQL input binding in a function.json file and a Python function that executes a stored procedure with input from the HTTP request query parameter.
+
+The stored procedure `dbo.DeleteToDo` must be created on the database. In this example, the stored procedure deletes a single record or all records depending on the value of the parameter.
+++
+```json
+{
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req",
+ "methods": [
+ "get"
+ ]
+},
+{
+ "type": "http",
+ "direction": "out",
+ "name": "$return"
+},
+{
+ "name": "todoItems",
+ "type": "sql",
+ "direction": "in",
+ "commandText": "DeleteToDo",
+ "commandType": "StoredProcedure",
+ "parameters": "@Id = {Query.id}",
+ "connectionStringSetting": "SqlConnectionString"
+}
+```
+
+The [configuration](#configuration) section explains these properties.
+
+The following is sample Python code:
++
+```python
+import azure.functions as func
+import json
+
+def main(req: func.HttpRequest, todoItems: func.SqlRowList) -> func.HttpResponse:
+ rows = list(map(lambda r: json.loads(r.to_json()), todoItems))
+
+ return func.HttpResponse(
+ json.dumps(rows),
+ status_code=200,
+ mimetype="application/json"
+ )
+```
::: zone-end+
+<!### Use these pivots when we get other non-C# languages added. ###
+
+ ::: zone pivot="programming-language-java" ::: zone-end
In [C# class libraries](functions-dotnet-class-library.md), use the [Sql](https:
| Attribute property |Description| ||| | **CommandText** | Required. The Transact-SQL query command or name of the stored procedure executed by the binding. |
-| **ConnectionStringSetting** | The name of an app setting that contains the connection string for the database against which the query or stored procedure is being executed. This isn't the actual connection string and must instead resolve to an environment variable. |
-| **CommandType** | A [CommandType](/dotnet/api/system.data.commandtype) value, which is [Text](/dotnet/api/system.data.commandtype#fields) for a query and [StoredProcedure](/dotnet/api/system.data.commandtype#fields) for a stored procedure. |
-| **Parameters** | Zero or more parameter values passed to the command during execution as a single string. Must follow the format `@param1=param1,@param2=param2`. Neither the parameter name nor the parameter value can contain a comma (`,`) or an equals sign (`=`). |
+| **ConnectionStringSetting** | Required. The name of an app setting that contains the connection string for the database against which the query or stored procedure is being executed. This value isn't the actual connection string and must instead resolve to an environment variable name. |
+| **CommandType** | Required. A [CommandType](/dotnet/api/system.data.commandtype) value, which is [Text](/dotnet/api/system.data.commandtype#fields) for a query and [StoredProcedure](/dotnet/api/system.data.commandtype#fields) for a stored procedure. |
+| **Parameters** | Optional. Zero or more parameter values passed to the command during execution as a single string. Must follow the format `@param1=param1,@param2=param2`. Neither the parameter name nor the parameter value can contain a comma (`,`) or an equals sign (`=`). |
::: zone-end <!### Use these pivots when we get other non-C# languages added. ###
In the [Java functions runtime library](/java/api/overview/azure/functions/runti
| Element |Description| ||| | **commandText** | Required. The Transact-SQL query command or name of the stored procedure executed by the binding. |
-| **connectionStringSetting** | The name of an app setting that contains the connection string for the database against which the query or stored procedure is being executed. This isn't the actual connection string and must instead resolve to an environment variable. |
+| **connectionStringSetting** | The name of an app setting that contains the connection string for the database against which the query or stored procedure is being executed. This value isn't the actual connection string and must instead resolve to an environment variable name. |
| **commandType** | A [CommandType](/dotnet/api/system.data.commandtype) value, which is [Text](/dotnet/api/system.data.commandtype#fields) for a query and [StoredProcedure](/dotnet/api/system.data.commandtype#fields) for a stored procedure. | | **parameters** | Zero or more parameter values passed to the command during execution as a single string. Must follow the format `@param1=param1,@param2=param2`. Neither the parameter name nor the parameter value can contain a comma (`,`) or an equals sign (`=`). |
+-->
## Configuration
-The following table explains the binding configuration properties that you set in the *function.json* file.
+The following table explains the binding configuration properties that you set in the function.json file.
|function.json property | Description| ||-|
-|**type** | Must be set to `table`. This property is set automatically when you create the binding in the Azure portal.|
-|**direction** | Must be set to `in`. This property is set automatically when you create the binding in the Azure portal. |
-|**name** | The name of the variable that represents the table or entity in function code. |
+|**type** | Required. Must be set to `sql`. |
+|**direction** | Required. Must be set to `in`. |
+|**name** | Required. The name of the variable that represents the query results in function code. |
| **commandText** | Required. The Transact-SQL query command or name of the stored procedure executed by the binding. |
-| **connectionStringSetting** | The name of an app setting that contains the connection string for the database against which the query or stored procedure is being executed. This isn't the actual connection string and must instead resolve to an environment variable. |
-| **commandType** | A [CommandType](/dotnet/api/system.data.commandtype) value, which is [Text](/dotnet/api/system.data.commandtype#fields) for a query and [StoredProcedure](/dotnet/api/system.data.commandtype#fields) for a stored procedure. |
-| **parameters** | Zero or more parameter values passed to the command during execution as a single string. Must follow the format `@param1=param1,@param2=param2`. Neither the parameter name nor the parameter value can contain a comma (`,`) or an equals sign (`=`). |
+| **connectionStringSetting** | Required. The name of an app setting that contains the connection string for the database against which the query or stored procedure is being executed. This value isn't the actual connection string and must instead resolve to an environment variable name. |
+| **commandType** | Required. A [CommandType](/dotnet/api/system.data.commandtype) value, which is [Text](/dotnet/api/system.data.commandtype#fields) for a query and [StoredProcedure](/dotnet/api/system.data.commandtype#fields) for a stored procedure. |
+| **parameters** | Optional. Zero or more parameter values passed to the command during execution as a single string. Must follow the format `@param1=param1,@param2=param2`. Neither the parameter name nor the parameter value can contain a comma (`,`) or an equals sign (`=`). |
::: zone-end >+ [!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)] ## Usage The attribute's constructor takes the SQL command text, the command type, parameters, and the connection string setting name. The command can be a Transact-SQL (T-SQL) query with the command type `System.Data.CommandType.Text` or stored procedure name with the command type `System.Data.CommandType.StoredProcedure`. The connection string setting name corresponds to the application setting (in `local.settings.json` for local development) that contains the [connection string](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-3.0&preserve-view=true#Microsoft_Data_SqlClient_SqlConnection_ConnectionString) to the Azure SQL or SQL Server instance.
azure-functions Functions Bindings Azure Sql Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-output.md
Title: Azure SQL output binding for Functions
description: Learn to use the Azure SQL output binding in Azure Functions. Previously updated : 4/1/2022+ Last updated : 5/24/2022 zone_pivot_groups: programming-languages-set-functions-lang-workers
The Azure SQL output binding lets you write to a database.
For information on setup and configuration details, see the [overview](./functions-bindings-azure-sql.md).
-## Example
+## Examples
+<a id="example"></a>
+ ::: zone pivot="programming-language-csharp"
+More samples for the Azure SQL output binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-csharp).
+ # [In-process](#tab/in-process) This section contains the following examples:
-* [Http trigger, write one record](#http-trigger-write-one-record-c)
-* [Http trigger, write to two tables](#http-trigger-write-to-two-tables-c)
-* [Http trigger, write records using IAsyncCollector](#http-trigger-write-records-using-iasynccollector-c)
+* [HTTP trigger, write one record](#http-trigger-write-one-record-c)
+* [HTTP trigger, write to two tables](#http-trigger-write-to-two-tables-c)
+* [HTTP trigger, write records using IAsyncCollector](#http-trigger-write-records-using-iasynccollector-c)
The examples refer to a `ToDoItem` class and a corresponding database table:
The examples refer to a `ToDoItem` class and a corresponding database table:
<a id="http-trigger-write-one-record-c"></a>
-### Http trigger, write one record
+### HTTP trigger, write one record
The following example shows a [C# function](functions-dotnet-class-library.md) that adds a record to a database, using data provided in an HTTP POST request as a JSON body.
The following example shows a [C# function](functions-dotnet-class-library.md) t
<a id="http-trigger-write-to-two-tables-c"></a>
-### Http trigger, write to two tables
+### HTTP trigger, write to two tables
-The following example shows a [C# function](functions-dotnet-class-library.md) that adds records to a database in two different tables (`dbo.ToDo` and `dbo.RequestLog`), using data provided in an HTTP POST request as a JSON body and multiple output bindings.
+The following example shows a [C# function](functions-dotnet-class-library.md) that adds records to a database in two different tables (`dbo.ToDo` and `dbo.RequestLog`), using data provided in an HTTP POST request as a JSON body and multiple output bindings.
+
+```sql
+CREATE TABLE dbo.RequestLog (
+ Id int identity(1,1) primary key,
+ RequestTimeStamp datetime2 not null,
+ ItemCount int not null
+)
+```
```cs
Isolated process isn't currently supported.
::: zone-end+ > [!NOTE]
-> In the current preview, Azure SQL bindings are only supported by [C# class library functions](functions-dotnet-class-library.md).
+> In the current preview, Azure SQL bindings are only supported by [C# class library functions](functions-dotnet-class-library.md), [JavaScript functions](functions-reference-node.md), and [Python functions](functions-reference-python.md).
::: zone-end
-<!### Use these pivots when we get other non-C# languages added. ###
+ ::: zone pivot="programming-language-javascript"
-
+More samples for the Azure SQL output binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-js).
+This section contains the following examples:
+
+* [HTTP trigger, write records to a table](#http-trigger-write-records-to-table-javascript)
+* [HTTP trigger, write to two tables](#http-trigger-write-to-two-tables-javascript)
+
+The examples refer to a database table:
+++
+<a id="http-trigger-write-records-to-table-javascript"></a>
+### HTTP trigger, write records to a table
+
+The following example shows a SQL input binding in a function.json file and a JavaScript function that adds records to a table, using data provided in an HTTP POST request as a JSON body.
+
+The following is binding data in the function.json file:
+
+```json
+{
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req",
+ "methods": [
+ "post"
+ ]
+},
+{
+ "type": "http",
+ "direction": "out",
+ "name": "res"
+},
+{
+ "name": "todoItems",
+ "type": "sql",
+ "direction": "out",
+ "commandText": "dbo.ToDo",
+ "connectionStringSetting": "SqlConnectionString"
+}
+```
+
+The [configuration](#configuration) section explains these properties.
+
+The following is sample JavaScript code:
+
+```javascript
+module.exports = async function (context, req) {
+ context.log('JavaScript HTTP trigger and SQL output binding function processed a request.');
+ context.log(req.body);
+
+ if (req.body) {
+ context.bindings.todoItems = req.body;
+ context.res = {
+ body: req.body,
+ mimetype: "application/json",
+ status: 201
+ }
+ } else {
+ context.res = {
+ status: 400,
+ body: "Error reading request body"
+ }
+ }
+}
+```
+
+<a id="http-trigger-write-to-two-tables-javascript"></a>
+### HTTP trigger, write to two tables
+
+The following example shows a SQL input binding in a function.json file and a JavaScript function that adds records to a database in two different tables (`dbo.ToDo` and `dbo.RequestLog`), using data provided in an HTTP POST request as a JSON body and multiple output bindings.
+
+The second table, `dbo.RequestLog`, corresponds to the following definition:
+
+```sql
+CREATE TABLE dbo.RequestLog (
+ Id int identity(1,1) primary key,
+ RequestTimeStamp datetime2 not null,
+ ItemCount int not null
+)
+```
+
+The following is binding data in the function.json file:
+
+```json
+{
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req",
+ "methods": [
+ "post"
+ ]
+},
+{
+ "type": "http",
+ "direction": "out",
+ "name": "res"
+},
+{
+ "name": "todoItems",
+ "type": "sql",
+ "direction": "out",
+ "commandText": "dbo.ToDo",
+ "connectionStringSetting": "SqlConnectionString"
+},
+{
+ "name": "requestLog",
+ "type": "sql",
+ "direction": "out",
+ "commandText": "dbo.RequestLog",
+ "connectionStringSetting": "SqlConnectionString"
+}
+```
+
+The [configuration](#configuration) section explains these properties.
+
+The following is sample JavaScript code:
+
+```javascript
+module.exports = async function (context, req) {
+ context.log('JavaScript HTTP trigger and SQL output binding function processed a request.');
+ context.log(req.body);
+
+ const newLog = {
+    RequestTimeStamp: new Date().toISOString(),  // ISO 8601 string for the datetime2 column
+    ItemCount: 1
+ }
+
+ if (req.body) {
+ context.bindings.todoItems = req.body;
+ context.bindings.requestLog = newLog;
+ context.res = {
+ body: req.body,
+ mimetype: "application/json",
+ status: 201
+ }
+ } else {
+ context.res = {
+ status: 400,
+ body: "Error reading request body"
+ }
+ }
+}
+```
++ ::: zone pivot="programming-language-python"
+More samples for the Azure SQL output binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-python).
+
+This section contains the following examples:
+
+* [HTTP trigger, write records to a table](#http-trigger-write-records-to-table-python)
+* [HTTP trigger, write to two tables](#http-trigger-write-to-two-tables-python)
+
+The examples refer to a database table:
+++
+<a id="http-trigger-write-records-to-table-python"></a>
+### HTTP trigger, write records to a table
+
+The following example shows a SQL input binding in a function.json file and a Python function that adds records to a table, using data provided in an HTTP POST request as a JSON body.
+
+The following is binding data in the function.json file:
+
+```json
+{
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req",
+ "methods": [
+ "post"
+ ]
+},
+{
+ "type": "http",
+ "direction": "out",
+ "name": "$return"
+},
+{
+ "name": "todoItems",
+ "type": "sql",
+ "direction": "out",
+ "commandText": "dbo.ToDo",
+ "connectionStringSetting": "SqlConnectionString"
+}
+```
+
+The [configuration](#configuration) section explains these properties.
+
+The following is sample Python code:
+
+```python
+import json
+import logging
+import azure.functions as func
+
+
+def main(req: func.HttpRequest, todoItems: func.Out[func.SqlRow]) -> func.HttpResponse:
+    logging.info('Python HTTP trigger and SQL output binding function processed a request.')
+
+    try:
+        req_body = req.get_json()
+        # Convert each JSON object in the request body into a SqlRow for the output binding.
+        rows = func.SqlRowList(map(lambda r: func.SqlRow.from_dict(r), req_body))
+    except ValueError:
+        req_body = None
+
+    if req_body:
+        todoItems.set(rows)
+        return func.HttpResponse(
+            json.dumps(req_body),
+            status_code=201,
+            mimetype="application/json"
+        )
+    else:
+        return func.HttpResponse(
+            "Error accessing request body",
+            status_code=400
+        )
+```
+
+<a id="http-trigger-write-to-two-tables-python"></a>
+### HTTP trigger, write to two tables
+
+The following example shows a SQL input binding in a function.json file and a Python function that adds records to a database in two different tables (`dbo.ToDo` and `dbo.RequestLog`), using data provided in an HTTP POST request as a JSON body and multiple output bindings.
+
+The second table, `dbo.RequestLog`, corresponds to the following definition:
+
+```sql
+CREATE TABLE dbo.RequestLog (
+ Id int identity(1,1) primary key,
+ RequestTimeStamp datetime2 not null,
+ ItemCount int not null
+)
+```
+
+The following is binding data in the function.json file:
+
+```json
+{
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req",
+ "methods": [
+ "post"
+ ]
+},
+{
+ "type": "http",
+ "direction": "out",
+ "name": "$return"
+},
+{
+ "name": "todoItems",
+ "type": "sql",
+ "direction": "out",
+ "commandText": "dbo.ToDo",
+ "connectionStringSetting": "SqlConnectionString"
+},
+{
+ "name": "requestLog",
+ "type": "sql",
+ "direction": "out",
+ "commandText": "dbo.RequestLog",
+ "connectionStringSetting": "SqlConnectionString"
+}
+```
+
+The [configuration](#configuration) section explains these properties.
+
+The following is sample Python code:
+
+```python
+import json
+import logging
+from datetime import datetime
+import azure.functions as func
+
+
+def main(req: func.HttpRequest, todoItems: func.Out[func.SqlRow], requestLog: func.Out[func.SqlRow]) -> func.HttpResponse:
+    logging.info('Python HTTP trigger and SQL output binding function processed a request.')
+
+    try:
+        req_body = req.get_json()
+        # Convert each JSON object in the request body into a SqlRow for the output binding.
+        rows = func.SqlRowList(map(lambda r: func.SqlRow.from_dict(r), req_body))
+    except ValueError:
+        req_body = None
+
+    # Log the request to the second output binding. An ISO 8601 string is used
+    # for the datetime2 column.
+    requestLog.set(func.SqlRow({
+        "RequestTimeStamp": datetime.now().isoformat(),
+        "ItemCount": 1
+    }))
+
+    if req_body:
+        todoItems.set(rows)
+        return func.HttpResponse(
+            json.dumps(req_body),
+            status_code=201,
+            mimetype="application/json"
+        )
+    else:
+        return func.HttpResponse(
+            "Error accessing request body",
+            status_code=400
+        )
+```
+ ::: zone-end+
+<!### Use these pivots when we get other non-C# languages added. ###
+
+ ::: zone pivot="programming-language-java" ::: zone-end
In [C# class libraries](functions-dotnet-class-library.md), use the [Sql](https:
| Attribute property |Description| ||| | **CommandText** | Required. The name of the table being written to by the binding. |
-| **ConnectionStringSetting** | The name of an app setting that contains the connection string for the database to which data is being written. This isn't the actual connection string and must instead resolve to an environment variable. |
+| **ConnectionStringSetting** | Required. The name of an app setting that contains the connection string for the database to which data is being written. This isn't the actual connection string and must instead resolve to an environment variable. |
::: zone-end
In the [Java functions runtime library](/java/api/overview/azure/functions/runti
| **connectionStringSetting** | The name of an app setting that contains the connection string for the database to which data is being written. This isn't the actual connection string and must instead resolve to an environment variable.| ::: zone-end
+-->
+ ## Configuration The following table explains the binding configuration properties that you set in the *function.json* file. |function.json property | Description| ||-|
-|**type** | Must be set to `table`. This property is set automatically when you create the binding in the Azure portal.|
-|**direction** | Must be set to `in`. This property is set automatically when you create the binding in the Azure portal. |
-|**name** | The name of the variable that represents the table or entity in function code. |
-| **commandText** | Required. The Transact-SQL query command or name of the stored procedure executed by the binding. |
-| **connectionStringSetting** | The name of an app setting that contains the connection string for the database to which data is being written. This isn't the actual connection string and must instead resolve to an environment variable.|
+|**type** | Required. Must be set to `sql`.|
+|**direction** | Required. Must be set to `out`. |
+|**name** | Required. The name of the variable that represents the entity in function code. |
+| **commandText** | Required. The name of the table being written to by the binding. |
+| **connectionStringSetting** | Required. The name of an app setting that contains the connection string for the database to which data is being written. This isn't the actual connection string and must instead resolve to an environment variable.|
::: zone-end > [!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)] ## Usage The `CommandText` property is the name of the table where the data is to be stored. The connection string setting name corresponds to the application setting that contains the [connection string](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-3.0&preserve-view=true#Microsoft_Data_SqlClient_SqlConnection_ConnectionString) to the Azure SQL or SQL Server instance. ::: zone-end
azure-functions Functions Bindings Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql.md
Title: Azure SQL bindings for Functions
description: Understand how to use Azure SQL bindings in Azure Functions. Previously updated : 5/3/2022+ Last updated : 5/24/2022 zone_pivot_groups: programming-languages-set-functions-lang-workers
You can install this version of the extension in your function app by registerin
::: zone-end ++
+## Install bundle
+
+The SQL bindings extension is part of a preview [extension bundle], which is specified in your host.json project file.
+
+# [Preview Bundle v3.x](#tab/extensionv3)
+
+You can add the preview extension bundle by adding or replacing the following code in your `host.json` file:
+
+```json
+{
+ "version": "2.0",
+ "extensionBundle": {
+ "id": "Microsoft.Azure.Functions.ExtensionBundle.Preview",
+ "version": "[3.*, 4.0.0)"
+ }
+}
+```
+
+# [Preview Bundle v4.x](#tab/extensionv4)
+
+You can add the preview extension bundle by adding or replacing the following code in your `host.json` file:
+
+```json
+{
+ "version": "2.0",
+ "extensionBundle": {
+ "id": "Microsoft.Azure.Functions.ExtensionBundle.Preview",
+ "version": "[4.*, 5.0.0)"
+ }
+}
+```
+++++ > [!NOTE]
-> In the current preview, Azure SQL bindings are only supported by [C# class library functions](functions-dotnet-class-library.md).
+> Python language support for the SQL bindings extension is only available for v4 of the [functions runtime](./set-runtime-version.md#view-and-update-the-current-runtime-version) and requires runtime v4.5.0 for deployment in Azure. Learn more about determining the runtime in the [functions runtime](./set-runtime-version.md#view-and-update-the-current-runtime-version) documentation. Please see the tracking [GitHub issue](https://github.com/Azure/azure-functions-sql-extension/issues/250) for the latest update on availability.
-<!-- awaiting bundle support
## Install bundle
-The Kafka extension is part of an [extension bundle], which is specified in your host.json project file. When you create a project that targets version 2.x or later, you should already have this bundle installed. To learn more, see [extension bundle].
+The SQL bindings extension is part of a preview [extension bundle], which is specified in your host.json project file.
+
+# [Preview Bundle v4.x](#tab/extensionv4)
+
+You can add the preview extension bundle by adding or replacing the following code in your `host.json` file:
+
+```json
+{
+ "version": "2.0",
+ "extensionBundle": {
+ "id": "Microsoft.Azure.Functions.ExtensionBundle.Preview",
+ "version": "[4.*, 5.0.0)"
+ }
+}
+```
+
+# [Preview Bundle v3.x](#tab/extensionv3)
+
+Python support is not available with the SQL bindings extension in the v3 version of the functions runtime.
+++
+## Update packages
+
+Support for the SQL bindings extension is available in version 1.11.3b1 of the [Azure Functions Python library](https://pypi.org/project/azure-functions/). Add this version of the library to your Python functions project by updating the `azure-functions==` line in the `requirements.txt` file, as seen in the following snippet:
+
+```
+azure-functions==1.11.3b1
+```
+
+After setting the library version, update your application settings to [isolate the dependencies](./functions-app-settings.md#python_isolate_worker_dependencies-preview) by adding `PYTHON_ISOLATE_WORKER_DEPENDENCIES` with the value `1` to your application settings. Locally, this is set in the `local.settings.json` file as seen below:
+
+```json
+"PYTHON_ISOLATE_WORKER_DEPENDENCIES": "1"
+```
+
+Support for Python durable functions with SQL bindings isn't yet available.
>++++
+> [!NOTE]
+> In the current preview, Azure SQL bindings are only supported by [C# class library functions](functions-dotnet-class-library.md), [JavaScript functions](functions-reference-node.md), and [Python functions](functions-reference-python.md).
::: zone-end
The Kafka extension is part of an [extension bundle], which is specified in your
[core tools]: ./functions-run-local.md [extension bundle]: ./functions-bindings-register.md#extension-bundles [Azure Tools extension]: https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack-
azure-functions Functions Deployment Technologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-deployment-technologies.md
Title: Deployment technologies in Azure Functions
description: Learn the different ways you can deploy code to Azure Functions. Previously updated : 04/25/2019 Last updated : 05/18/2022
The following table describes the available deployment methods for your Function
| -- | -- | -- | | Tools-based | &bull;&nbsp;[Visual&nbsp;Studio&nbsp;Code&nbsp;publish](functions-develop-vs-code.md#publish-to-azure)<br/>&bull;&nbsp;[Visual Studio publish](functions-develop-vs.md#publish-to-azure)<br/>&bull;&nbsp;[Core Tools publish](functions-run-local.md#publish) | Deployments during development and other ad hoc deployments. Deployments are managed locally by the tooling. | | App Service-managed| &bull;&nbsp;[Deployment&nbsp;Center&nbsp;(CI/CD)](functions-continuous-deployment.md)<br/>&bull;&nbsp;[Container&nbsp;deployments](functions-create-function-linux-custom-image.md#enable-continuous-deployment-to-azure) | Continuous deployment (CI/CD) from source control or from a container registry. Deployments are managed by the App Service platform (Kudu).|
-| External pipelines|&bull;&nbsp;[Azure Pipelines](functions-how-to-azure-devops.md)<br/>&bull;&nbsp;[GitHub actions](functions-how-to-github-actions.md) | Production and DevOps pipelines that include additional validation, testing, and other actions be run as part of an automated deployment. Deployments are managed by the pipeline. |
+| External pipelines|&bull;&nbsp;[Azure Pipelines](functions-how-to-azure-devops.md)<br/>&bull;&nbsp;[GitHub Actions](functions-how-to-github-actions.md) | Production and DevOps pipelines that include additional validation, testing, and other actions to be run as part of an automated deployment. Deployments are managed by the pipeline. |
While specific Functions deployments use the best technology based on their context, most deployment methods are based on [zip deployment](#zip-deploy).
Some key concepts are critical to understanding how deployments work in Azure Fu
When you change any of your triggers, the Functions infrastructure must be aware of the changes. Synchronization happens automatically for many deployment technologies. However, in some cases, you must manually sync your triggers. When you deploy your updates by referencing an external package URL, local Git, cloud sync, or FTP, you must manually sync your triggers. You can sync triggers in one of three ways:
-* Restart your function app in the Azure portal
+* Restart your function app in the Azure portal.
* Send an HTTP POST request to `https://{functionappname}.azurewebsites.net/admin/host/synctriggers?code=<API_KEY>` using the [master key](functions-bindings-http-webhook-trigger.md#authorization-keys). * Send an HTTP POST request to `https://management.azure.com/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP_NAME>/providers/Microsoft.Web/sites/<FUNCTION_APP_NAME>/syncfunctiontriggers?api-version=2016-08-01`. Replace the placeholders with your subscription ID, resource group name, and the name of your function app.
+When you deploy using an external package URL and the contents of the package change but the URL itself doesn't change, you need to manually restart your function app to fully sync your updates.
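As an illustration of the HTTP option above, a manual trigger sync can be issued with a plain HTTP client such as `curl`. This is a sketch only; `<function-app-name>` and `<API_KEY>` stand in for your app name and its master key:

```bash
# Manually sync triggers using the function app's master key.
# Replace <function-app-name> and <API_KEY> with your own values.
curl -X POST "https://<function-app-name>.azurewebsites.net/admin/host/synctriggers?code=<API_KEY>"
```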
+ ### Remote build Azure Functions can automatically perform builds on the code it receives after zip deployments. These builds behave slightly differently depending on whether your app is running on Windows or Linux. Remote builds are not performed when an app has previously been set to run in [Run From Package](run-functions-from-deployment-package.md) mode. To learn how to use remote build, navigate to [zip deploy](#zip-deploy).
You can use an external package URL to reference a remote package (.zip) file th
> >If you use Azure Blob storage, use a private container with a [shared access signature (SAS)](../vs-azure-tools-storage-manage-with-storage-explorer.md#generate-a-sas-in-storage-explorer) to give Functions access to the package. Any time the application restarts, it fetches a copy of the content. Your reference must be valid for the lifetime of the application.
->__When to use it:__ External package URL is the only supported deployment method for Azure Functions running on Linux in the Consumption plan, if the user doesn't want a [remote build](#remote-build) to occur. When you update the package file that a function app references, you must [manually sync triggers](#trigger-syncing) to tell Azure that your application has changed.
+>__When to use it:__ External package URL is the only supported deployment method for Azure Functions running on Linux in the Consumption plan when you don't want a [remote build](#remote-build) to occur. When you update the package file that a function app references, you must [manually sync triggers](#trigger-syncing) to tell Azure that your application has changed. When you change the contents of the package file but not the URL itself, you must also restart your function app manually.
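As a hedged sketch of this method, the external package URL is typically supplied through the `WEBSITE_RUN_FROM_PACKAGE` app setting. The app name, resource group, and blob SAS URL below are placeholders:

```bash
# Point the function app at an externally hosted package (.zip) via a SAS URL.
# <app-name>, <resource-group>, and the blob SAS URL are placeholders.
az functionapp config appsettings set \
  --name <app-name> \
  --resource-group <resource-group> \
  --settings WEBSITE_RUN_FROM_PACKAGE="https://<storage-account>.blob.core.windows.net/<container>/package.zip?<sas-token>"
```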
### Zip deploy
The following table shows the operating systems and languages that support porta
| F# | | | | | | | | Java | | | | | | | | JavaScript (Node.js) |✔|✔|✔| |✔<sup>\*</sup>|✔<sup>\*</sup>|
-| Python (Preview) | | | | | | |
-| PowerShell (Preview) |✔|✔|✔| | | |
+| Python | | | | | | |
+| PowerShell |✔|✔|✔| | | |
| TypeScript (Node.js) | | | | | | | <sup>*</sup> Portal editing is enabled only for HTTP and Timer triggers for Functions on Linux using Premium and Dedicated plans.
azure-functions Language Support Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/language-support-policy.md
Title: Azure Functions language runtime support policy
-description: Learn about Azure Functions language runtime support policy
+ Title: Azure Functions language runtime support policy
+description: Learn about Azure Functions language runtime support policy
Last updated 08/17/2021 # Language runtime support policy
-This article explains Azure functions language runtime support policy.
+This article explains the Azure Functions language runtime support policy.
## Retirement process
-Azure Functions runtime is built around various components, including operating systems, the Azure Functions host, and language-specific workers. To maintain full support coverages for function apps, Azure Functions uses a phased reduction in support as programming language versions reach their end-of-life dates. For most language versions, the retirement date coincides with the community end-of-life date.
+The Azure Functions runtime is built around various components, including operating systems, the Azure Functions host, and language-specific workers. To maintain full support coverage for function apps, Azure Functions uses a phased reduction in support as programming language versions reach their end-of-life dates. For most language versions, the retirement date coincides with the community end-of-life date.
### Notification phase
We'll send notification emails to function app users about upcoming language ver
Starting on the end-of-life date for a language version, you can no longer create new function apps targeting that language version.
-After the language end-of-life date, function apps that use retired language versions won't be eligible for new features, security patches, and performance optimizations. However, these function apps will continue to run on the platform.
+After the language end-of-life date, function apps that use retired language versions won't be eligible for new features, security patches, and performance optimizations. However, these function apps will continue to run on the platform.
> [!IMPORTANT]
->You're highly encouraged to upgrade the language version of your affected function apps to a supported version.
+>You're highly encouraged to upgrade the language version of your affected function apps to a supported version.
>If you're running functions apps using an unsupported language version, you'll be required to upgrade before receiving support for the function apps.
There are few exceptions to the retirement policy outlined above. Here is a list
|Language Versions |EOL Date |Retirement Date| |--|--|-| |.NET 5|8 May 2022|TBA|
-|Node 6|30 April 2019|28 February 2022|
-|Node 8|31 December 2019|28 February 2022|
-|Node 10|30 April 2021|30 September 2022|
+|Node 6|30 April 2019|28 February 2022|
+|Node 8|31 December 2019|28 February 2022|
+|Node 10|30 April 2021|30 September 2022|
|PowerShell Core 6| 4 September 2020|30 September 2022|
-|Python 3.6 |23 December 2021|30 September 2022|
-
+|Python 3.6 |23 December 2021|30 September 2022|
+ ## Language version support timeline
To learn more about specific language version support policy timeline, visit the
* .NET - [dotnet.microsoft.com](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) * Node - [github.com](https://github.com/nodejs/Release#release-schedule) * Java - [azul.com](https://www.azul.com/products/azul-support-roadmap/)
-* PowerShell - [dotnet.microsoft.com](/powershell/scripting/powershell-support-lifecycle?view=powershell-7.1&preserve-view=true#powershell-releases-end-of-life)
+* PowerShell - [docs.microsoft.com](/powershell/scripting/powershell-support-lifecycle#powershell-end-of-support-dates)
* Python - [devguide.python.org](https://devguide.python.org/#status-of-python-branches) ## Configuring language versions
To learn more about specific language version support policy timeline, visit the
|Node |[link](./functions-reference-node.md#setting-the-node-version)| |PowerShell |[link](./functions-reference-powershell.md#changing-the-powershell-version)| |Python |[link](./functions-reference-python.md#python-version)|
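As one hedged example of what these per-language steps typically involve, the Python version of a Linux function app can be changed by updating its `linuxFxVersion` site setting with the Azure CLI. The app name, resource group, and target version below are placeholders:

```bash
# Update the Python language version for a Linux function app.
# <app-name> and <resource-group> are placeholders; 3.9 is an example target version.
az functionapp config set \
  --name <app-name> \
  --resource-group <resource-group> \
  --linux-fx-version "PYTHON|3.9"
```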
-
+ ## Next steps
azure-functions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/start-stop-vms/overview.md
The Start/Stop VMs v2 (preview) feature starts or stops Azure virtual machines (
This new version of Start/Stop VMs v2 (preview) provides a decentralized low-cost automation option for customers who want to optimize their VM costs. It offers all of the same functionality as the [original version](../../automation/automation-solution-vm-management.md) available with Azure Automation, but it is designed to take advantage of newer technology in Azure.
+> [!NOTE]
+> We've added a plan (**AZ - Availability Zone**) to our Start/Stop V2 solution to enable a high-availability offering. You can now choose between Consumption and Availability Zone plans before you start your deployment. In most cases, the monthly cost of the Availability Zone plan is higher when compared to the Consumption plan.
+ > [!NOTE] > Automatic updating functionality was introduced on April 28th, 2022. This new auto update feature helps you stay on the latest version of the solution. This feature is enabled by default when you perform a new installation. > If you deployed your solution before this date, you can reinstall to the latest version from our [GitHub repository](https://github.com/microsoft/startstopv2-deployments)
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
The following tables list the operating systems that are supported by the Azure
| Operating system | Azure Monitor agent | Log Analytics agent | Dependency agent | Diagnostics extension | |:|::|::|::|::| | Windows Server 2022 | X | | | |
+| Windows Server 2022 Core | X | | | |
| Windows Server 2019 | X | X | X | X | | Windows Server 2019 Core | X | | | | | Windows Server 2016 | X | X | X | X |
azure-monitor Azure Monitor Agent Extension Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-extension-versions.md
description: This article describes the version details for the Azure Monitor ag
Previously updated : 4/11/2022 Last updated : 5/19/2022
We strongly recommended to update to the latest version at all times, or opt in
## Version details | Release Date | Release notes | Windows | Linux | |:|:|:|:|
+| April 2022 | <ul><li>Private IP information added in Log Analytics <i>Heartbeat</i> table for Windows</li><li>Fixed bugs in Windows IIS log collection (preview) <ul><li>Updated IIS site column name to match backend KQL transform</li><li>Added delay to IIS upload task to account for IIS buffering</li></ul></li></ul> | 1.4.1.0<sup>Hotfix</sup> | Coming soon |
| March 2022 | <ul><li>Fixed timestamp and XML format bugs in Windows Event logs</li><li>Full Windows OS information in Log Analytics Heartbeat table</li><li>Fixed Linux performance counters to collect instance values instead of 'total' only</li></ul> | 1.3.0.0 | 1.17.5.0 | | February 2022 | <ul><li>Bugfixes for the AMA Client installer (private preview)</li><li>Versioning fix to reflect appropriate Windows major/minor/hotfix versions</li><li>Internal test improvement on Linux</li></ul> | 1.2.0.0 | 1.15.3 | | January 2022 | <ul><li>Syslog RFC compliance for Linux</li><li>Fixed issue for Linux perf counters not flowing on restart</li><li>Fixed installation failure on Windows Server 2008 R2 SP1</li></ul> | 1.1.5.1<sup>Hotfix</sup> | 1.15.2.0<sup>Hotfix</sup> |
We strongly recommended to update to the latest version at all times, or opt in
## Next steps - [Install and manage the extension](azure-monitor-agent-manage.md).-- [Create a data collection rule](data-collection-rule-azure-monitor-agent.md) to collect data from the agent and send it to Azure Monitor.
+- [Create a data collection rule](data-collection-rule-azure-monitor-agent.md) to collect data from the agent and send it to Azure Monitor.
azure-monitor Azure Monitor Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-overview.md
description: Overview of the Azure Monitor agent, which collects monitoring data
Previously updated : 3/31/2022 Last updated : 5/19/2022 # Azure Monitor agent overview
-The Azure Monitor agent (AMA) collects monitoring data from the guest operating system of Azure virtual machines and delivers it to Azure Monitor. This article provides an overview of the Azure Monitor agent and includes information on how to install it and how to configure data collection.
+The Azure Monitor agent (AMA) collects monitoring data from the guest operating system of [supported infrastructure](#supported-resource-types) and delivers it to Azure Monitor. This article provides an overview of the Azure Monitor agent and includes information on how to install it and how to configure data collection.
Here's an **introductory video** explaining all about this new agent, including a quick demo of how to set things up using the Azure portal: [ITOps Talk: Azure Monitor Agent](https://www.youtube.com/watch?v=f8bIrFU8tCs) ## Relationship to other agents
-Eventually, the Azure Monitor agent will replace the following legacy monitoring agents that are currently used by Azure Monitor to collect guest data from virtual machines ([view known gaps](../faq.yml)):
+Eventually, the Azure Monitor agent will replace the following legacy monitoring agents that are currently used by Azure Monitor.
- [Log Analytics agent](./log-analytics-agent.md): Sends data to a Log Analytics workspace and supports VM insights and monitoring solutions. - [Telegraf agent](../essentials/collect-custom-metrics-linux-telegraf.md): Sends data to Azure Monitor Metrics (Linux only).
The Azure Monitor agent can coexist (run side by side on the same machine) with
> When using both agents during evaluation or migration, you can use the **'Category'** column of the [Heartbeat](/azure/azure-monitor/reference/tables/Heartbeat) table in your Log Analytics workspace, and filter for 'Azure Monitor Agent'. ## Supported resource types
-Azure virtual machines, virtual machine scale sets, and Azure Arc-enabled servers are currently supported. Azure Kubernetes Service and other compute resource types aren't currently supported.
+
+| Resource type | Installation method | Additional information |
+|:|:|:|
+| Virtual machines, scale sets | [Virtual machine extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) | Installs the agent using the Azure extension framework |
+| On-premises servers (Arc-enabled servers) | [Virtual machine extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) (after installing the [Arc agent](/azure/azure-arc/servers/deployment-options)) | Installs the agent using the Azure extension framework; available for on-premises servers after you install the [Arc agent](/azure/azure-arc/servers/deployment-options) |
+| Windows 10, 11 desktops, workstations | [Client installer (preview)](./azure-monitor-agent-windows-client.md) | Installs the agent using a Windows MSI installer |
+| Windows 10, 11 laptops | [Client installer (preview)](./azure-monitor-agent-windows-client.md) | Installs the agent using a Windows MSI installer. The installer works on laptops, but the agent is **not yet optimized** for battery or network consumption |
## Supported regions Azure Monitor agent is available in all public regions that support Log Analytics, as well as the Azure Government and China clouds. Air-gapped clouds are not yet supported.
azure-monitor Azure Monitor Agent Windows Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-windows-client.md
description: This article describes the instructions to install the agent on Win
Previously updated : 4/13/2022 Last updated : 5/20/2022
This article provides instructions and guidance for using the client installer f
With the new client installer available in this preview, you can now collect telemetry data from your Windows client devices in addition to servers and virtual machines. Both the [generally available extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) and this installer use Data Collection rules to configure the **same underlying agent**.
+### Comparison with virtual machine extension
+Here is a comparison between the client installer and the VM extension for the Azure Monitor agent. It also highlights which parts are in preview:
+
+| Functional component | For VMs/servers via extension | For clients via installer|
+|:|:|:|
+| Agent installation method | Via VM extension | Via client installer <sup>preview</sup> |
+| Agent installed | Azure Monitor Agent | Same |
+| Authentication | Using Managed Identity | Using AAD device token <sup>preview</sup> |
+| Central configuration | Via Data collection rules | Same |
+| Associating config rules to agents | DCRs associate directly to individual VM resources | DCRs associate to Monitored Object (MO), which maps to all devices within the AAD tenant <sup>preview</sup> |
+| Data upload to Log Analytics | Via Log Analytics endpoints | Same |
+| Feature support | All features documented [here](./azure-monitor-agent-overview.md) | Features dependent on AMA agent extension that don't require additional extensions. This includes support for Sentinel Windows Event filtering |
+| [Networking options](./azure-monitor-agent-overview.md#networking) | Proxy support, Private link support | Proxy support only |
+++ ## Supported device types | Device type | Supported? | Installation method | Additional information |
Make sure to start the installer on administrator command prompt. Silent install
## Questions and feedback
-Take this [quick survey](https://forms.microsoft.com/r/CBhWuT1rmM) or share your feedback/questions regarding the preview on the [Azure Monitor Agent User Community](https://teams.microsoft.com/l/team/19%3af3f168b782f64561b52abe75e59e83bc%40thread.tacv2/conversations?groupId=770d6aa5-c2f7-4794-98a0-84fd6ae7f193&tenantId=72f988bf-86f1-41af-91ab-2d7cd011db47).
+Take this [quick survey](https://forms.microsoft.com/r/CBhWuT1rmM) or share your feedback/questions regarding the preview on the [Azure Monitor Agent User Community](https://teams.microsoft.com/l/team/19%3af3f168b782f64561b52abe75e59e83bc%40thread.tacv2/conversations?groupId=770d6aa5-c2f7-4794-98a0-84fd6ae7f193&tenantId=72f988bf-86f1-41af-91ab-2d7cd011db47).
azure-monitor Alerts Metric Near Real Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-metric-near-real-time.md
Previously updated : 5/11/2022 Last updated : 5/18/2022 # Supported resources for metric alerts in Azure Monitor
Here's the full list of Azure Monitor metric sources supported by the newer aler
|Microsoft.Search/searchServices | No | No | [Search services](../essentials/metrics-supported.md#microsoftsearchsearchservices) | |Microsoft.ServiceBus/namespaces | Yes | No | [Service Bus](../essentials/metrics-supported.md#microsoftservicebusnamespaces) | |Microsoft.SignalRService/WebPubSub | Yes | No | [Web PubSub Service](../essentials/metrics-supported.md#microsoftsignalrservicewebpubsub) |
-|Microsoft.Sql/managedInstances | No | Yes | [SQL Managed Instances](../essentials/metrics-supported.md#microsoftsqlmanagedinstances) |
+|Microsoft.Sql/managedInstances | No | No | [SQL Managed Instances](../essentials/metrics-supported.md#microsoftsqlmanagedinstances) |
|Microsoft.Sql/servers/databases | No | Yes | [SQL Databases](../essentials/metrics-supported.md#microsoftsqlserversdatabases) | |Microsoft.Sql/servers/elasticPools | No | Yes | [SQL Elastic Pools](../essentials/metrics-supported.md#microsoftsqlserverselasticpools) | |Microsoft.Storage/storageAccounts |Yes | No | [Storage Accounts](../essentials/metrics-supported.md#microsoftstoragestorageaccounts)|
azure-monitor Itsmc Connections Servicenow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-connections-servicenow.md
For information about installing ITSMC, see [Add the IT Service Management Conne
### OAuth setup
-ServiceNow supported versions include Quebec, Paris, Orlando, New York, Madrid, London, Kingston, Jakarta, Istanbul, Helsinki, and Geneva.
+ServiceNow supported versions include San Diego, Rome, Quebec, Paris, Orlando, New York, Madrid, London, Kingston, Jakarta, Istanbul, Helsinki, and Geneva.
ServiceNow admins must generate a client ID and client secret for their ServiceNow instance. See the following information as required: +
+- [Set up OAuth for San Diego](https://docs.servicenow.com/bundle/sandiego-platform-administration/page/administer/security/task/t_SettingUpOAuth.html)
+- [Set up OAuth for Rome](https://docs.servicenow.com/bundle/rome-platform-administration/page/administer/security/task/t_SettingUpOAuth.html)
- [Set up OAuth for Quebec](https://docs.servicenow.com/bundle/quebec-platform-administration/page/administer/security/task/t_SettingUpOAuth.html) - [Set up OAuth for Paris](https://docs.servicenow.com/bundle/paris-platform-administration/page/administer/security/task/t_SettingUpOAuth.html) - [Set up OAuth for Orlando](https://docs.servicenow.com/bundle/orlando-platform-administration/page/administer/security/task/t_SettingUpOAuth.html)
azure-monitor Resource Manager Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/resource-manager-action-groups.md
description: Sample Azure Resource Manager templates to deploy Azure Monitor act
Previously updated : 12/03/2020 Last updated : 04/27/2022 # Resource Manager template samples for action groups in Azure Monitor+ This article includes sample [Azure Resource Manager templates](../../azure-resource-manager/templates/syntax.md) to create [action groups](../alerts/action-groups.md) in Azure Monitor. Each sample includes a template file and a parameters file with sample values to provide to the template. [!INCLUDE [azure-monitor-samples](../../../includes/azure-monitor-resource-manager-samples.md)] ## Create an action group
-The following sample creates an action group.
+The following sample creates an action group.
### Template file
+# [Bicep](#tab/bicep)
+
+```bicep
+@description('Unique name within the resource group for the Action group.')
+param actionGroupName string
+
+@description('Short name up to 12 characters for the Action group.')
+param actionGroupShortName string
+
+resource actionGroup 'Microsoft.Insights/actionGroups@2021-09-01' = {
+ name: actionGroupName
+ location: 'Global'
+ properties: {
+ groupShortName: actionGroupShortName
+ enabled: true
+ smsReceivers: [
+ {
+ name: 'contosoSMS'
+ countryCode: '1'
+ phoneNumber: '5555551212'
+ }
+ {
+ name: 'contosoSMS2'
+ countryCode: '1'
+ phoneNumber: '5555552121'
+ }
+ ]
+ emailReceivers: [
+ {
+ name: 'contosoEmail'
+ emailAddress: 'devops@contoso.com'
+ useCommonAlertSchema: true
+ }
+ {
+ name: 'contosoEmail2'
+ emailAddress: 'devops2@contoso.com'
+ useCommonAlertSchema: true
+ }
+ ]
+ webhookReceivers: [
+ {
+ name: 'contosoHook'
+ serviceUri: 'http://requestb.in/1bq62iu1'
+ useCommonAlertSchema: true
+ }
+ {
+ name: 'contosoHook2'
+ serviceUri: 'http://requestb.in/1bq62iu2'
+ useCommonAlertSchema: true
+ }
+ ]
+ }
+}
+
+output actionGroupId string = actionGroup.id
+```
+
+# [JSON](#tab/json)
+ ```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0", "parameters": { "actionGroupName": {
The following sample creates an action group.
"resources": [ { "type": "Microsoft.Insights/actionGroups",
- "apiVersion": "2018-03-01",
+ "apiVersion": "2021-09-01",
"name": "[parameters('actionGroupName')]", "location": "Global", "properties": {
The following sample creates an action group.
} } ],
- "outputs":{
- "actionGroupId":{
- "type":"string",
- "value":"[resourceId('Microsoft.Insights/actionGroups',parameters('actionGroupName'))]"
- }
+ "outputs": {
+ "actionGroupId": {
+ "type": "string",
+ "value": "[resourceId('Microsoft.Insights/actionGroups', parameters('actionGroupName'))]"
+ }
} } ``` ++ ### Parameter file ```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0", "parameters": { "actionGroupName": {
The following sample creates an action group.
} ``` -- ## Next steps * [Get other sample templates for Azure Monitor](../resource-manager-samples.md).
-* [Learn more about action groups](../alerts/action-groups.md).
+* [Learn more about action groups](../alerts/action-groups.md).
azure-monitor Resource Manager Alerts Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/resource-manager-alerts-metric.md
Previously updated : 2/23/2022 Last updated : 04/27/2022+ + # Resource Manager template samples for metric alert rules in Azure Monitor This article provides samples of using [Azure Resource Manager templates](../../azure-resource-manager/templates/syntax.md) to configure [metric alert rules](../alerts/alerts-metric-near-real-time.md) in Azure Monitor. Each sample includes a template file and a parameters file with sample values to provide to the template.
See [Supported resources for metric alerts in Azure Monitor](../alerts/alerts-me
> [!NOTE] > Resource template for creating metric alerts for resource type: Azure Log Analytics Workspace (i.e.) `Microsoft.OperationalInsights/workspaces`, requires additional steps. For details, see [Metric Alert for Logs - Resource Template](../alerts/alerts-metric-logs.md#resource-template-for-metric-alerts-for-logs). -- ## Template references - [Microsoft.Insights metricAlerts](/azure/templates/microsoft.insights/2018-03-01/metricalerts) ## Single criteria, static threshold+ The following sample creates a metric alert rule using a single criteria and a static threshold. ### Template file
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "alertName": {
- "type": "string",
- "minLength": 1,
- "metadata": {
- "description": "Name of the alert"
- }
- },
- "alertDescription": {
- "type": "string",
- "defaultValue": "This is a metric alert",
- "metadata": {
- "description": "Description of alert"
- }
- },
- "alertSeverity": {
- "type": "int",
- "defaultValue": 3,
- "allowedValues": [
- 0,
- 1,
- 2,
- 3,
- 4
- ],
- "metadata": {
- "description": "Severity of alert {0,1,2,3,4}"
- }
- },
- "isEnabled": {
- "type": "bool",
- "defaultValue": true,
- "metadata": {
- "description": "Specifies whether the alert is enabled"
- }
- },
- "resourceId": {
- "type": "string",
- "minLength": 1,
- "metadata": {
- "description": "Full Resource ID of the resource emitting the metric that will be used for the comparison. For example /subscriptions/00000000-0000-0000-0000-0000-00000000/resourceGroups/ResourceGroupName/providers/Microsoft.compute/virtualMachines/VM_xyz"
- }
- },
- "metricName": {
- "type": "string",
- "minLength": 1,
- "metadata": {
- "description": "Name of the metric used in the comparison to activate the alert."
- }
- },
- "operator": {
- "type": "string",
- "defaultValue": "GreaterThan",
- "allowedValues": [
- "Equals",
- "GreaterThan",
- "GreaterThanOrEqual",
- "LessThan",
- "LessThanOrEqual"
- ],
- "metadata": {
- "description": "Operator comparing the current value with the threshold value."
- }
- },
- "threshold": {
- "type": "string",
- "defaultValue": "0",
- "metadata": {
- "description": "The threshold value at which the alert is activated."
- }
- },
- "timeAggregation": {
- "type": "string",
- "defaultValue": "Average",
- "allowedValues": [
- "Average",
- "Minimum",
- "Maximum",
- "Total",
- "Count"
- ],
- "metadata": {
- "description": "How the data that is collected should be combined over time."
- }
- },
- "windowSize": {
- "type": "string",
- "defaultValue": "PT5M",
- "allowedValues": [
- "PT1M",
- "PT5M",
- "PT15M",
- "PT30M",
- "PT1H",
- "PT6H",
- "PT12H",
- "PT24H"
- ],
- "metadata": {
- "description": "Period of time used to monitor alert activity based on the threshold. Must be between one minute and one day. ISO 8601 duration format."
- }
- },
- "evaluationFrequency": {
- "type": "string",
- "defaultValue": "PT1M",
- "allowedValues": [
- "PT1M",
- "PT5M",
- "PT15M",
- "PT30M",
- "PT1H"
- ],
- "metadata": {
- "description": "how often the metric alert is evaluated represented in ISO 8601 duration format"
- }
- },
- "actionGroupId": {
- "type": "string",
- "defaultValue": "",
- "metadata": {
- "description": "The ID of the action group that is triggered when the alert is activated or deactivated"
- }
- }
- },
- "variables": { },
- "resources": [
+# [Bicep](#tab/bicep)
+
+```bicep
+@description('Name of the alert')
+@minLength(1)
+param alertName string
+
+@description('Description of alert')
+param alertDescription string = 'This is a metric alert'
+
+@description('Severity of alert {0,1,2,3,4}')
+@allowed([
+ 0
+ 1
+ 2
+ 3
+ 4
+])
+param alertSeverity int = 3
+
+@description('Specifies whether the alert is enabled')
+param isEnabled bool = true
+
+@description('Full Resource ID of the resource emitting the metric that will be used for the comparison. For example /subscriptions/00000000-0000-0000-0000-0000-00000000/resourceGroups/ResourceGroupName/providers/Microsoft.compute/virtualMachines/VM_xyz')
+@minLength(1)
+param resourceId string
+
+@description('Name of the metric used in the comparison to activate the alert.')
+@minLength(1)
+param metricName string
+
+@description('Operator comparing the current value with the threshold value.')
+@allowed([
+ 'Equals'
+ 'GreaterThan'
+ 'GreaterThanOrEqual'
+ 'LessThan'
+ 'LessThanOrEqual'
+])
+param operator string = 'GreaterThan'
+
+@description('The threshold value at which the alert is activated.')
+param threshold int = 0
+
+@description('How the data that is collected should be combined over time.')
+@allowed([
+ 'Average'
+ 'Minimum'
+ 'Maximum'
+ 'Total'
+ 'Count'
+])
+param timeAggregation string = 'Average'
+
+@description('Period of time used to monitor alert activity based on the threshold. Must be between one minute and one day. ISO 8601 duration format.')
+@allowed([
+ 'PT1M'
+ 'PT5M'
+ 'PT15M'
+ 'PT30M'
+ 'PT1H'
+ 'PT6H'
+ 'PT12H'
+ 'PT24H'
+])
+param windowSize string = 'PT5M'
+
+@description('how often the metric alert is evaluated represented in ISO 8601 duration format')
+@allowed([
+ 'PT1M'
+ 'PT5M'
+ 'PT15M'
+ 'PT30M'
+ 'PT1H'
+])
+param evaluationFrequency string = 'PT1M'
+
+@description('The ID of the action group that is triggered when the alert is activated or deactivated')
+param actionGroupId string = ''
+
+resource metricAlert 'Microsoft.Insights/metricAlerts@2018-03-01' = {
+ name: alertName
+ location: 'global'
+ properties: {
+ description: alertDescription
+ severity: alertSeverity
+ enabled: isEnabled
+ scopes: [
+ resourceId
+ ]
+ evaluationFrequency: evaluationFrequency
+ windowSize: windowSize
+ criteria: {
+ 'odata.type': 'Microsoft.Azure.Monitor.SingleResourceMultipleMetricCriteria'
+ allOf: [
{
- "name": "[parameters('alertName')]",
- "type": "Microsoft.Insights/metricAlerts",
- "location": "global",
- "apiVersion": "2018-03-01",
- "tags": {},
- "properties": {
- "description": "[parameters('alertDescription')]",
- "severity": "[parameters('alertSeverity')]",
- "enabled": "[parameters('isEnabled')]",
- "scopes": ["[parameters('resourceId')]"],
- "evaluationFrequency":"[parameters('evaluationFrequency')]",
- "windowSize": "[parameters('windowSize')]",
- "criteria": {
- "odata.type": "Microsoft.Azure.Monitor.SingleResourceMultipleMetricCriteria",
- "allOf": [
- {
- "name" : "1st criterion",
- "metricName": "[parameters('metricName')]",
- "dimensions":[],
- "operator": "[parameters('operator')]",
- "threshold" : "[parameters('threshold')]",
- "timeAggregation": "[parameters('timeAggregation')]"
- }
- ]
- },
- "actions": [
- {
- "actionGroupId": "[parameters('actionGroupId')]"
- }
- ]
- }
+ name: '1st criterion'
+ metricName: metricName
+ dimensions: []
+ operator: operator
+ threshold: threshold
+ timeAggregation: timeAggregation
+ criterionType: 'StaticThresholdCriterion'
}
+ ]
+ }
+ actions: [
+ {
+ actionGroupId: actionGroupId
+ }
]
+ }
} ```
-### Parameter file
+# [JSON](#tab/json)
+ ```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "alertName": {
- "value": "New Metric Alert"
- },
- "alertDescription": {
- "value": "New metric alert created via template"
- },
- "alertSeverity": {
- "value":3
- },
- "isEnabled": {
- "value": true
- },
- "resourceId": {
- "value": "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resourceGroup-name/providers/Microsoft.Compute/virtualMachines/replace-with-resource-name"
- },
- "metricName": {
- "value": "Percentage CPU"
- },
- "operator": {
- "value": "GreaterThan"
- },
- "threshold": {
- "value": "80"
- },
- "timeAggregation": {
- "value": "Average"
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "alertName": {
+ "type": "string",
+ "minLength": 1,
+ "metadata": {
+ "description": "Name of the alert"
+ }
+ },
+ "alertDescription": {
+ "type": "string",
+ "defaultValue": "This is a metric alert",
+ "metadata": {
+ "description": "Description of alert"
+ }
+ },
+ "alertSeverity": {
+ "type": "int",
+ "defaultValue": 3,
+ "allowedValues": [
+ 0,
+ 1,
+ 2,
+ 3,
+ 4
+ ],
+ "metadata": {
+ "description": "Severity of alert {0,1,2,3,4}"
+ }
+ },
+ "isEnabled": {
+ "type": "bool",
+ "defaultValue": true,
+ "metadata": {
+ "description": "Specifies whether the alert is enabled"
+ }
+ },
+ "resourceId": {
+ "type": "string",
+ "minLength": 1,
+ "metadata": {
+ "description": "Full Resource ID of the resource emitting the metric that will be used for the comparison. For example /subscriptions/00000000-0000-0000-0000-0000-00000000/resourceGroups/ResourceGroupName/providers/Microsoft.compute/virtualMachines/VM_xyz"
+ }
+ },
+ "metricName": {
+ "type": "string",
+ "minLength": 1,
+ "metadata": {
+ "description": "Name of the metric used in the comparison to activate the alert."
+ }
+ },
+ "operator": {
+ "type": "string",
+ "defaultValue": "GreaterThan",
+ "allowedValues": [
+ "Equals",
+ "GreaterThan",
+ "GreaterThanOrEqual",
+ "LessThan",
+ "LessThanOrEqual"
+ ],
+ "metadata": {
+ "description": "Operator comparing the current value with the threshold value."
+ }
+ },
+ "threshold": {
+ "type": "int",
+ "defaultValue": 0,
+ "metadata": {
+ "description": "The threshold value at which the alert is activated."
+ }
+ },
+ "timeAggregation": {
+ "type": "string",
+ "defaultValue": "Average",
+ "allowedValues": [
+ "Average",
+ "Minimum",
+ "Maximum",
+ "Total",
+ "Count"
+ ],
+ "metadata": {
+ "description": "How the data that is collected should be combined over time."
+ }
+ },
+ "windowSize": {
+ "type": "string",
+ "defaultValue": "PT5M",
+ "allowedValues": [
+ "PT1M",
+ "PT5M",
+ "PT15M",
+ "PT30M",
+ "PT1H",
+ "PT6H",
+ "PT12H",
+ "PT24H"
+ ],
+ "metadata": {
+ "description": "Period of time used to monitor alert activity based on the threshold. Must be between one minute and one day. ISO 8601 duration format."
+ }
+ },
+ "evaluationFrequency": {
+ "type": "string",
+ "defaultValue": "PT1M",
+ "allowedValues": [
+ "PT1M",
+ "PT5M",
+ "PT15M",
+ "PT30M",
+ "PT1H"
+ ],
+ "metadata": {
+ "description": "how often the metric alert is evaluated represented in ISO 8601 duration format"
+ }
+ },
+ "actionGroupId": {
+ "type": "string",
+ "defaultValue": "",
+ "metadata": {
+ "description": "The ID of the action group that is triggered when the alert is activated or deactivated"
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Insights/metricAlerts",
+ "apiVersion": "2018-03-01",
+ "name": "[parameters('alertName')]",
+ "location": "global",
+ "properties": {
+ "description": "[parameters('alertDescription')]",
+ "severity": "[parameters('alertSeverity')]",
+ "enabled": "[parameters('isEnabled')]",
+ "scopes": [
+ "[parameters('resourceId')]"
+ ],
+ "evaluationFrequency": "[parameters('evaluationFrequency')]",
+ "windowSize": "[parameters('windowSize')]",
+ "criteria": {
+ "odata.type": "Microsoft.Azure.Monitor.SingleResourceMultipleMetricCriteria",
+ "allOf": [
+ {
+ "name": "1st criterion",
+ "metricName": "[parameters('metricName')]",
+ "dimensions": [],
+ "operator": "[parameters('operator')]",
+ "threshold": "[parameters('threshold')]",
+ "timeAggregation": "[parameters('timeAggregation')]",
+ "criterionType": "StaticThresholdCriterion"
+ }
+ ]
},
- "actionGroupId": {
- "value": "/subscriptions/replace-with-subscription-id/resourceGroups/resource-group-name/providers/Microsoft.Insights/actionGroups/replace-with-action-group"
- }
+ "actions": [
+ {
+ "actionGroupId": "[parameters('actionGroupId')]"
+ }
+ ]
+ }
}
+ ]
} ``` +
-## Single criteria, dynamic threshold
-The following sample creates a metric alert rule using a single criteria and a dynamic threshold.
-
-### Template file
-Save the json below as simpledynamicmetricalert.json for the purpose of this walkthrough.
+### Parameter file
```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "alertName": {
- "type": "string",
- "minLength": 1,
- "metadata": {
- "description": "Name of the alert"
- }
- },
- "alertDescription": {
- "type": "string",
- "defaultValue": "This is a metric alert",
- "metadata": {
- "description": "Description of alert"
- }
- },
- "alertSeverity": {
- "type": "int",
- "defaultValue": 3,
- "allowedValues": [
- 0,
- 1,
- 2,
- 3,
- 4
- ],
- "metadata": {
- "description": "Severity of alert {0,1,2,3,4}"
- }
- },
- "isEnabled": {
- "type": "bool",
- "defaultValue": true,
- "metadata": {
- "description": "Specifies whether the alert is enabled"
- }
- },
- "resourceId": {
- "type": "string",
- "minLength": 1,
- "metadata": {
- "description": "Full Resource ID of the resource emitting the metric that will be used for the comparison. For example /subscriptions/00000000-0000-0000-0000-0000-00000000/resourceGroups/ResourceGroupName/providers/Microsoft.compute/virtualMachines/VM_xyz"
- }
- },
- "metricName": {
- "type": "string",
- "minLength": 1,
- "metadata": {
- "description": "Name of the metric used in the comparison to activate the alert."
- }
- },
- "operator": {
- "type": "string",
- "defaultValue": "GreaterOrLessThan",
- "allowedValues": [
- "GreaterThan",
- "LessThan",
- "GreaterOrLessThan"
- ],
- "metadata": {
- "description": "Operator comparing the current value with the threshold value."
- }
- },
- "alertSensitivity": {
- "type": "string",
- "defaultValue": "Medium",
- "allowedValues": [
- "High",
- "Medium",
- "Low"
- ],
- "metadata": {
- "description": "Tunes how 'noisy' the Dynamic Thresholds alerts will be: 'High' will result in more alerts while 'Low' will result in fewer alerts."
- }
- },
- "numberOfEvaluationPeriods": {
- "type": "string",
- "defaultValue": "4",
- "metadata": {
- "description": "The number of periods to check in the alert evaluation."
- }
- },
- "minFailingPeriodsToAlert": {
- "type": "string",
- "defaultValue": "3",
- "metadata": {
- "description": "The number of unhealthy periods to alert on (must be lower or equal to numberOfEvaluationPeriods)."
- }
- },
- "ignoreDataBefore": {
- "type": "string",
- "defaultValue": "",
- "metadata": {
- "description": "Use this option to set the date from which to start learning the metric historical data and calculate the dynamic thresholds (in ISO8601 format, e.g. '2019-12-31T22:00:00Z')."
- }
- },
- "timeAggregation": {
- "type": "string",
- "defaultValue": "Average",
- "allowedValues": [
- "Average",
- "Minimum",
- "Maximum",
- "Total",
- "Count"
- ],
- "metadata": {
- "description": "How the data that is collected should be combined over time."
- }
- },
- "windowSize": {
- "type": "string",
- "defaultValue": "PT5M",
- "allowedValues": [
- "PT5M",
- "PT15M",
- "PT30M",
- "PT1H"
- ],
- "metadata": {
- "description": "Period of time used to monitor alert activity based on the threshold. Must be between five minutes and one hour. ISO 8601 duration format."
- }
- },
- "evaluationFrequency": {
- "type": "string",
- "defaultValue": "PT5M",
- "allowedValues": [
- "PT5M",
- "PT15M",
- "PT30M",
- "PT1H"
- ],
- "metadata": {
- "description": "how often the metric alert is evaluated represented in ISO 8601 duration format"
- }
- },
- "actionGroupId": {
- "type": "string",
- "defaultValue": "",
- "metadata": {
- "description": "The ID of the action group that is triggered when the alert is activated or deactivated"
- }
- }
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "alertName": {
+ "value": "New Metric Alert"
},
- "variables": { },
- "resources": [
- {
- "name": "[parameters('alertName')]",
- "type": "Microsoft.Insights/metricAlerts",
- "location": "global",
- "apiVersion": "2018-03-01",
- "tags": {},
- "properties": {
- "description": "[parameters('alertDescription')]",
- "severity": "[parameters('alertSeverity')]",
- "enabled": "[parameters('isEnabled')]",
- "scopes": ["[parameters('resourceId')]"],
- "evaluationFrequency":"[parameters('evaluationFrequency')]",
- "windowSize": "[parameters('windowSize')]",
- "criteria": {
- "odata.type": "Microsoft.Azure.Monitor.MultipleResourceMultipleMetricCriteria",
- "allOf": [
- {
- "criterionType": "DynamicThresholdCriterion",
- "name" : "1st criterion",
- "metricName": "[parameters('metricName')]",
- "dimensions":[],
- "operator": "[parameters('operator')]",
- "alertSensitivity": "[parameters('alertSensitivity')]",
- "failingPeriods": {
- "numberOfEvaluationPeriods": "[parameters('numberOfEvaluationPeriods')]",
- "minFailingPeriodsToAlert": "[parameters('minFailingPeriodsToAlert')]"
- },
- "ignoreDataBefore": "[parameters('ignoreDataBefore')]",
- "timeAggregation": "[parameters('timeAggregation')]"
- }
- ]
- },
- "actions": [
- {
- "actionGroupId": "[parameters('actionGroupId')]"
- }
- ]
- }
- }
- ]
+ "alertDescription": {
+ "value": "New metric alert created via template"
+ },
+ "alertSeverity": {
+ "value":3
+ },
+ "isEnabled": {
+ "value": true
+ },
+ "resourceId": {
+ "value": "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resourceGroup-name/providers/Microsoft.Compute/virtualMachines/replace-with-resource-name"
+ },
+ "metricName": {
+ "value": "Percentage CPU"
+ },
+ "operator": {
+ "value": "GreaterThan"
+ },
+ "threshold": {
+ "value": "80"
+ },
+ "timeAggregation": {
+ "value": "Average"
+ },
+ "actionGroupId": {
+ "value": "/subscriptions/replace-with-subscription-id/resourceGroups/resource-group-name/providers/Microsoft.Insights/actionGroups/replace-with-action-group"
+ }
+ }
} ```
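Either template variant above can be deployed together with this parameter file using a resource group deployment. The following is a minimal sketch; the file names and `<resource-group>` are placeholders for your own values:

```bash
# Deploy the metric alert rule template with its parameter file.
# File names and <resource-group> are placeholders for your own values.
az deployment group create \
  --resource-group <resource-group> \
  --template-file metricalert.json \
  --parameters @metricalert.parameters.json
```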
-### Parameter file
+## Single criteria, dynamic threshold
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "alertName": {
- "value": "New Metric Alert with Dynamic Thresholds"
- },
- "alertDescription": {
- "value": "New metric alert with Dynamic Thresholds created via template"
- },
- "alertSeverity": {
- "value":3
- },
- "isEnabled": {
- "value": true
- },
- "resourceId": {
- "value": "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resourceGroup-name/providers/Microsoft.Compute/virtualMachines/replace-with-resource-name"
- },
- "metricName": {
- "value": "Percentage CPU"
- },
- "operator": {
- "value": "GreaterOrLessThan"
- },
- "alertSensitivity": {
- "value": "Medium"
- },
- "numberOfEvaluationPeriods": {
- "value": "4"
- },
- "minFailingPeriodsToAlert": {
- "value": "3"
- },
- "ignoreDataBefore": {
- "value": ""
- },
- "timeAggregation": {
- "value": "Average"
- },
- "actionGroupId": {
- "value": "/subscriptions/replace-with-subscription-id/resourceGroups/resource-group-name/providers/Microsoft.Insights/actionGroups/replace-with-action-group"
- }
- }
-}
-```
+The following sample creates a metric alert rule using a single criteria and a dynamic threshold.
+### Template file
+# [Bicep](#tab/bicep)
-## Multiple criteria, static threshold
-Metric alerts support alerting on multi-dimensional metrics and up to 5 criteria per alert rule. The following sample creates a metric alert rule on dimensional metrics and specify multiple criteria.
+```bicep
+@description('Name of the alert')
+@minLength(1)
+param alertName string
-The following constraints apply when using dimensions in an alert rule that contains multiple criteria:
-- You can only select one value per dimension within each criterion.-- You cannot use "\*" as a dimension value.-- When metrics that are configured in different criteria support the same dimension, then a configured dimension value must be explicitly set in the same way for all of those metrics in the relevant criteria.
- - In the example below, because both the **Transactions** and **SuccessE2ELatency** metrics have an **ApiName** dimension, and *criterion1* specifies the *"GetBlob"* value for the **ApiName** dimension, then *criterion2* must also set a *"GetBlob"* value for the **ApiName** dimension.
+@description('Description of alert')
+param alertDescription string = 'This is a metric alert'
-### Template file
+@description('Severity of alert {0,1,2,3,4}')
+@allowed([
+ 0
+ 1
+ 2
+ 3
+ 4
+])
+param alertSeverity int = 3
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "alertName": {
- "type": "string",
- "metadata": {
- "description": "Name of the alert"
- }
- },
- "alertDescription": {
- "type": "string",
- "defaultValue": "This is a metric alert",
- "metadata": {
- "description": "Description of alert"
- }
- },
- "alertSeverity": {
- "type": "int",
- "defaultValue": 3,
- "allowedValues": [
- 0,
- 1,
- 2,
- 3,
- 4
- ],
- "metadata": {
- "description": "Severity of alert {0,1,2,3,4}"
- }
- },
- "isEnabled": {
- "type": "bool",
- "defaultValue": true,
- "metadata": {
- "description": "Specifies whether the alert is enabled"
- }
- },
- "resourceId": {
- "type": "string",
- "defaultValue": "",
- "metadata": {
- "description": "Resource ID of the resource emitting the metric that will be used for the comparison."
- }
- },
- "criterion1":{
- "type": "object",
- "metadata": {
- "description": "Criterion includes metric name, dimension values, threshold and an operator. The alert rule fires when ALL criteria are met"
- }
- },
- "criterion2": {
- "type": "object",
- "metadata": {
- "description": "Criterion includes metric name, dimension values, threshold and an operator. The alert rule fires when ALL criteria are met"
- }
- },
- "windowSize": {
- "type": "string",
- "defaultValue": "PT5M",
- "allowedValues": [
- "PT1M",
- "PT5M",
- "PT15M",
- "PT30M",
- "PT1H",
- "PT6H",
- "PT12H",
- "PT24H"
- ],
- "metadata": {
- "description": "Period of time used to monitor alert activity based on the threshold. Must be between one minute and one day. ISO 8601 duration format."
- }
- },
- "evaluationFrequency": {
- "type": "string",
- "defaultValue": "PT1M",
- "allowedValues": [
- "PT1M",
- "PT5M",
- "PT15M",
- "PT30M",
- "PT1H"
- ],
- "metadata": {
- "description": "how often the metric alert is evaluated represented in ISO 8601 duration format"
- }
- },
- "actionGroupId": {
- "type": "string",
- "defaultValue": "",
- "metadata": {
- "description": "The ID of the action group that is triggered when the alert is activated or deactivated"
- }
- }
- },
- "variables": {
- "criterion1": "[array(parameters('criterion1'))]",
- "criterion2": "[array(parameters('criterion2'))]",
- "criteria": "[concat(variables('criterion1'),variables('criterion2'))]"
- },
- "resources": [
+@description('Specifies whether the alert is enabled')
+param isEnabled bool = true
+
+@description('Full Resource ID of the resource emitting the metric that will be used for the comparison. For example /subscriptions/00000000-0000-0000-0000-0000-00000000/resourceGroups/ResourceGroupName/providers/Microsoft.compute/virtualMachines/VM_xyz')
+@minLength(1)
+param resourceId string
+
+@description('Name of the metric used in the comparison to activate the alert.')
+@minLength(1)
+param metricName string
+
+@description('Operator comparing the current value with the threshold value.')
+@allowed([
+ 'GreaterThan'
+ 'LessThan'
+ 'GreaterOrLessThan'
+])
+param operator string = 'GreaterOrLessThan'
+
+@description('Tunes how \'noisy\' the Dynamic Thresholds alerts will be: \'High\' will result in more alerts while \'Low\' will result in fewer alerts.')
+@allowed([
+ 'High'
+ 'Medium'
+ 'Low'
+])
+param alertSensitivity string = 'Medium'
+
+@description('The number of periods to check in the alert evaluation.')
+param numberOfEvaluationPeriods int = 4
+
+@description('The number of unhealthy periods to alert on (must be lower or equal to numberOfEvaluationPeriods).')
+param minFailingPeriodsToAlert int = 3
+
+@description('Use this option to set the date from which to start learning the metric historical data and calculate the dynamic thresholds (in ISO8601 format, e.g. \'2019-12-31T22:00:00Z\').')
+param ignoreDataBefore string = ''
+
+@description('How the data that is collected should be combined over time.')
+@allowed([
+ 'Average'
+ 'Minimum'
+ 'Maximum'
+ 'Total'
+ 'Count'
+])
+param timeAggregation string = 'Average'
+
+@description('Period of time used to monitor alert activity based on the threshold. Must be between five minutes and one hour. ISO 8601 duration format.')
+@allowed([
+ 'PT5M'
+ 'PT15M'
+ 'PT30M'
+ 'PT1H'
+])
+param windowSize string = 'PT5M'
+
+@description('how often the metric alert is evaluated represented in ISO 8601 duration format')
+@allowed([
+ 'PT5M'
+ 'PT15M'
+ 'PT30M'
+ 'PT1H'
+])
+param evaluationFrequency string = 'PT5M'
+
+@description('The ID of the action group that is triggered when the alert is activated or deactivated')
+param actionGroupId string = ''
+
+resource metricAlert 'Microsoft.Insights/metricAlerts@2018-03-01' = {
+ name: alertName
+ location: 'global'
+ properties: {
+ description: alertDescription
+ severity: alertSeverity
+ enabled: isEnabled
+ scopes: [
+ resourceId
+ ]
+ evaluationFrequency: evaluationFrequency
+ windowSize: windowSize
+ criteria: {
+ 'odata.type': 'Microsoft.Azure.Monitor.MultipleResourceMultipleMetricCriteria'
+ allOf: [
{
- "name": "[parameters('alertName')]",
- "type": "Microsoft.Insights/metricAlerts",
- "location": "global",
- "apiVersion": "2018-03-01",
- "tags": {},
- "properties": {
- "description": "[parameters('alertDescription')]",
- "severity": "[parameters('alertSeverity')]",
- "enabled": "[parameters('isEnabled')]",
- "scopes": ["[parameters('resourceId')]"],
- "evaluationFrequency":"[parameters('evaluationFrequency')]",
- "windowSize": "[parameters('windowSize')]",
- "criteria": {
- "odata.type": "Microsoft.Azure.Monitor.SingleResourceMultipleMetricCriteria",
- "allOf": "[variables('criteria')]"
- },
- "actions": [
- {
- "actionGroupId": "[parameters('actionGroupId')]"
- }
- ]
- }
+ criterionType: 'DynamicThresholdCriterion'
+ name: '1st criterion'
+ metricName: metricName
+ dimensions: []
+ operator: operator
+ alertSensitivity: alertSensitivity
+ failingPeriods: {
+ numberOfEvaluationPeriods: numberOfEvaluationPeriods
+ minFailingPeriodsToAlert: minFailingPeriodsToAlert
+ }
+ ignoreDataBefore: ignoreDataBefore
+ timeAggregation: timeAggregation
}
+ ]
+ }
+ actions: [
+ {
+ actionGroupId: actionGroupId
+ }
]
+ }
}+ ```
-### Parameter file
+# [JSON](#tab/json)
```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "alertName": {
- "value": "New Multi-dimensional Metric Alert (Replace with your alert name)"
- },
- "alertDescription": {
- "value": "New multi-dimensional metric alert created via template (Replace with your alert description)"
- },
- "alertSeverity": {
- "value":3
- },
- "isEnabled": {
- "value": true
- },
- "resourceId": {
- "value": "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resourcegroup-name/providers/Microsoft.Storage/storageAccounts/replace-with-storage-account"
- },
- "criterion1": {
- "value": {
- "name": "1st criterion",
- "metricName": "Transactions",
- "dimensions": [
- {
- "name":"ResponseType",
- "operator": "Include",
- "values": ["Success"]
- },
- {
- "name":"ApiName",
- "operator": "Include",
- "values": ["GetBlob"]
- }
- ],
- "operator": "GreaterThan",
- "threshold": "5",
- "timeAggregation": "Total"
- }
- },
- "criterion2": {
- "value":{
- "name": "2nd criterion",
- "metricName": "SuccessE2ELatency",
- "dimensions": [
- {
- "name":"ApiName",
- "operator": "Include",
- "values": ["GetBlob"]
- }
- ],
- "operator": "GreaterThan",
- "threshold": "250",
- "timeAggregation": "Average"
- }
- },
- "actionGroupId": {
- "value": "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resource-group-name/providers/Microsoft.Insights/actionGroups/replace-with-actiongroup-name"
- }
- }
-}
-```
--
-## Multiple dimensions, static threshold
-A single alert rule can monitor multiple metric time series at a time, which results in fewer alert rules to manage. The following sample creates a static metric alert rule on dimensional metrics.
-
-In this sample, the alert rule monitors the dimensions value combinations of the **ResponseType** and **ApiName** dimensions for the **Transactions** metric:
-1. **ResponsType** - The use of the "\*" wildcard means that for each value of the **ResponseType** dimension, including future values, a different time series is monitored individually.
-2. **ApiName** - A different time series is monitored only for the **GetBlob** and **PutBlob** dimension values.
-
-For example, a few of the potential time series that are monitored by this alert rule are:
-- Metric = *Transactions*, ResponseType = *Success*, ApiName = *GetBlob*-- Metric = *Transactions*, ResponseType = *Success*, ApiName = *PutBlob*-- Metric = *Transactions*, ResponseType = *Server Timeout*, ApiName = *GetBlob*-- Metric = *Transactions*, ResponseType = *Server Timeout*, ApiName = *PutBlob*-
-### Template file
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "alertName": {
- "type": "string",
- "metadata": {
- "description": "Name of the alert"
- }
- },
- "alertDescription": {
- "type": "string",
- "defaultValue": "This is a metric alert",
- "metadata": {
- "description": "Description of alert"
- }
- },
- "alertSeverity": {
- "type": "int",
- "defaultValue": 3,
- "allowedValues": [
- 0,
- 1,
- 2,
- 3,
- 4
- ],
- "metadata": {
- "description": "Severity of alert {0,1,2,3,4}"
- }
- },
- "isEnabled": {
- "type": "bool",
- "defaultValue": true,
- "metadata": {
- "description": "Specifies whether the alert is enabled"
- }
- },
- "resourceId": {
- "type": "string",
- "defaultValue": "",
- "metadata": {
- "description": "Resource ID of the resource emitting the metric that will be used for the comparison."
- }
- },
- "criterion":{
- "type": "object",
- "metadata": {
- "description": "Criterion includes metric name, dimension values, threshold and an operator. The alert rule fires when ALL criteria are met"
- }
- },
- "windowSize": {
- "type": "string",
- "defaultValue": "PT5M",
- "allowedValues": [
- "PT1M",
- "PT5M",
- "PT15M",
- "PT30M",
- "PT1H",
- "PT6H",
- "PT12H",
- "PT24H"
- ],
- "metadata": {
- "description": "Period of time used to monitor alert activity based on the threshold. Must be between one minute and one day. ISO 8601 duration format."
- }
- },
- "evaluationFrequency": {
- "type": "string",
- "defaultValue": "PT1M",
- "allowedValues": [
- "PT1M",
- "PT5M",
- "PT15M",
- "PT30M",
- "PT1H"
- ],
- "metadata": {
- "description": "how often the metric alert is evaluated represented in ISO 8601 duration format"
- }
- },
- "actionGroupId": {
- "type": "string",
- "defaultValue": "",
- "metadata": {
- "description": "The ID of the action group that is triggered when the alert is activated or deactivated"
- }
- }
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "alertName": {
+ "type": "string",
+ "minLength": 1,
+ "metadata": {
+ "description": "Name of the alert"
+ }
},
- "variables": {
- "criteria": "[array(parameters('criterion'))]"
- },
- "resources": [
- {
- "name": "[parameters('alertName')]",
- "type": "Microsoft.Insights/metricAlerts",
- "location": "global",
- "apiVersion": "2018-03-01",
- "tags": {},
- "properties": {
- "description": "[parameters('alertDescription')]",
- "severity": "[parameters('alertSeverity')]",
- "enabled": "[parameters('isEnabled')]",
- "scopes": ["[parameters('resourceId')]"],
- "evaluationFrequency":"[parameters('evaluationFrequency')]",
- "windowSize": "[parameters('windowSize')]",
- "criteria": {
- "odata.type": "Microsoft.Azure.Monitor.SingleResourceMultipleMetricCriteria",
- "allOf": "[variables('criteria')]"
- },
- "actions": [
- {
- "actionGroupId": "[parameters('actionGroupId')]"
- }
- ]
- }
- }
- ]
+ "alertDescription": {
+ "type": "string",
+ "defaultValue": "This is a metric alert",
+ "metadata": {
+ "description": "Description of alert"
+ }
+ },
+ "alertSeverity": {
+ "type": "int",
+ "defaultValue": 3,
+ "allowedValues": [
+ 0,
+ 1,
+ 2,
+ 3,
+ 4
+ ],
+ "metadata": {
+ "description": "Severity of alert {0,1,2,3,4}"
+ }
+ },
+ "isEnabled": {
+ "type": "bool",
+ "defaultValue": true,
+ "metadata": {
+ "description": "Specifies whether the alert is enabled"
+ }
+ },
+ "resourceId": {
+ "type": "string",
+ "minLength": 1,
+ "metadata": {
+ "description": "Full Resource ID of the resource emitting the metric that will be used for the comparison. For example /subscriptions/00000000-0000-0000-0000-0000-00000000/resourceGroups/ResourceGroupName/providers/Microsoft.compute/virtualMachines/VM_xyz"
+ }
+ },
+ "metricName": {
+ "type": "string",
+ "minLength": 1,
+ "metadata": {
+ "description": "Name of the metric used in the comparison to activate the alert."
+ }
+ },
+ "operator": {
+ "type": "string",
+ "defaultValue": "GreaterOrLessThan",
+ "allowedValues": [
+ "GreaterThan",
+ "LessThan",
+ "GreaterOrLessThan"
+ ],
+ "metadata": {
+ "description": "Operator comparing the current value with the threshold value."
+ }
+ },
+ "alertSensitivity": {
+ "type": "string",
+ "defaultValue": "Medium",
+ "allowedValues": [
+ "High",
+ "Medium",
+ "Low"
+ ],
+ "metadata": {
+ "description": "Tunes how 'noisy' the Dynamic Thresholds alerts will be: 'High' will result in more alerts while 'Low' will result in fewer alerts."
+ }
+ },
+ "numberOfEvaluationPeriods": {
+ "type": "int",
+ "defaultValue": 4,
+ "metadata": {
+ "description": "The number of periods to check in the alert evaluation."
+ }
+ },
+ "minFailingPeriodsToAlert": {
+ "type": "int",
+ "defaultValue": 3,
+ "metadata": {
+ "description": "The number of unhealthy periods to alert on (must be lower or equal to numberOfEvaluationPeriods)."
+ }
+ },
+ "ignoreDataBefore": {
+ "type": "string",
+ "defaultValue": "",
+ "metadata": {
+ "description": "Use this option to set the date from which to start learning the metric historical data and calculate the dynamic thresholds (in ISO8601 format, e.g. '2019-12-31T22:00:00Z')."
+ }
+ },
+ "timeAggregation": {
+ "type": "string",
+ "defaultValue": "Average",
+ "allowedValues": [
+ "Average",
+ "Minimum",
+ "Maximum",
+ "Total",
+ "Count"
+ ],
+ "metadata": {
+ "description": "How the data that is collected should be combined over time."
+ }
+ },
+ "windowSize": {
+ "type": "string",
+ "defaultValue": "PT5M",
+ "allowedValues": [
+ "PT5M",
+ "PT15M",
+ "PT30M",
+ "PT1H"
+ ],
+ "metadata": {
+ "description": "Period of time used to monitor alert activity based on the threshold. Must be between five minutes and one hour. ISO 8601 duration format."
+ }
+ },
+ "evaluationFrequency": {
+ "type": "string",
+ "defaultValue": "PT5M",
+ "allowedValues": [
+ "PT5M",
+ "PT15M",
+ "PT30M",
+ "PT1H"
+ ],
+ "metadata": {
+ "description": "how often the metric alert is evaluated represented in ISO 8601 duration format"
+ }
+ },
+ "actionGroupId": {
+ "type": "string",
+ "defaultValue": "",
+ "metadata": {
+ "description": "The ID of the action group that is triggered when the alert is activated or deactivated"
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Insights/metricAlerts",
+ "apiVersion": "2018-03-01",
+ "name": "[parameters('alertName')]",
+ "location": "global",
+ "properties": {
+ "description": "[parameters('alertDescription')]",
+ "severity": "[parameters('alertSeverity')]",
+ "enabled": "[parameters('isEnabled')]",
+ "scopes": [
+ "[parameters('resourceId')]"
+ ],
+ "evaluationFrequency": "[parameters('evaluationFrequency')]",
+ "windowSize": "[parameters('windowSize')]",
+ "criteria": {
+ "odata.type": "Microsoft.Azure.Monitor.MultipleResourceMultipleMetricCriteria",
+ "allOf": [
+ {
+ "criterionType": "DynamicThresholdCriterion",
+ "name": "1st criterion",
+ "metricName": "[parameters('metricName')]",
+ "dimensions": [],
+ "operator": "[parameters('operator')]",
+ "alertSensitivity": "[parameters('alertSensitivity')]",
+ "failingPeriods": {
+ "numberOfEvaluationPeriods": "[parameters('numberOfEvaluationPeriods')]",
+ "minFailingPeriodsToAlert": "[parameters('minFailingPeriodsToAlert')]"
+ },
+ "ignoreDataBefore": "[parameters('ignoreDataBefore')]",
+ "timeAggregation": "[parameters('timeAggregation')]"
+ }
+ ]
+ },
+ "actions": [
+ {
+ "actionGroupId": "[parameters('actionGroupId')]"
+ }
+ ]
+ }
+ }
+ ]
}
```

### Parameter file

```json
{
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "alertName": {
- "value": "New multi-dimensional metric alert rule (replace with your alert name)"
- },
- "alertDescription": {
- "value": "New multi-dimensional metric alert rule created via template (replace with your alert description)"
- },
- "alertSeverity": {
- "value":3
- },
- "isEnabled": {
- "value": true
- },
- "resourceId": {
- "value": "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resourcegroup-name/providers/Microsoft.Storage/storageAccounts/replace-with-storage-account"
- },
- "criterion": {
- "value": {
- "name": "Criterion",
- "metricName": "Transactions",
- "dimensions": [
- {
- "name":"ResponseType",
- "operator": "Include",
- "values": ["*"]
- },
- {
- "name":"ApiName",
- "operator": "Include",
- "values": ["GetBlob", "PutBlob"]
- }
- ],
- "operator": "GreaterThan",
- "threshold": "5",
- "timeAggregation": "Total"
- }
- },
- "actionGroupId": {
- "value": "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resource-group-name/providers/Microsoft.Insights/actionGroups/replace-with-actiongroup-name"
- }
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "alertName": {
+ "value": "New Metric Alert with Dynamic Thresholds"
+ },
+ "alertDescription": {
+ "value": "New metric alert with Dynamic Thresholds created via template"
+ },
+ "alertSeverity": {
+ "value":3
+ },
+ "isEnabled": {
+ "value": true
+ },
+ "resourceId": {
+ "value": "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resourceGroup-name/providers/Microsoft.Compute/virtualMachines/replace-with-resource-name"
+ },
+ "metricName": {
+ "value": "Percentage CPU"
+ },
+ "operator": {
+ "value": "GreaterOrLessThan"
+ },
+ "alertSensitivity": {
+ "value": "Medium"
+ },
+ "numberOfEvaluationPeriods": {
+      "value": 4
+ },
+ "minFailingPeriodsToAlert": {
+      "value": 3
+ },
+ "ignoreDataBefore": {
+ "value": ""
+ },
+ "timeAggregation": {
+ "value": "Average"
+ },
+ "actionGroupId": {
+ "value": "/subscriptions/replace-with-subscription-id/resourceGroups/resource-group-name/providers/Microsoft.Insights/actionGroups/replace-with-action-group"
}
+ }
}
```
+## Multiple criteria, static threshold
+Metric alerts support alerting on multi-dimensional metrics and up to 5 criteria per alert rule. The following sample creates a metric alert rule on dimensional metrics and specifies multiple criteria.
-## Multiple dimensions, dynamic thresholds
-A single dynamic thresholds alert rule can create tailored thresholds for hundreds of metric time series (even different types) at a time, which results in fewer alert rules to manage. The following sample creates a dynamic thresholds metric alert rule on dimensional metrics.
--
-In this sample, the alert rule monitors the dimensions value combinations of the **ResponseType** and **ApiName** dimensions for the **Transactions** metric:
-1. **ResponsType** - For each value of the **ResponseType** dimension, including future values, a different time series is monitored individually.
-2. **ApiName** - A different time series is monitored only for the **GetBlob** and **PutBlob** dimension values.
-
-For example, a few of the potential time series that are monitored by this alert rule are:
-- Metric = *Transactions*, ResponseType = *Success*, ApiName = *GetBlob*
-- Metric = *Transactions*, ResponseType = *Success*, ApiName = *PutBlob*
-- Metric = *Transactions*, ResponseType = *Server Timeout*, ApiName = *GetBlob*
-- Metric = *Transactions*, ResponseType = *Server Timeout*, ApiName = *PutBlob*
+The following constraints apply when using dimensions in an alert rule that contains multiple criteria:
->[!NOTE]
-> Multiple criteria are not currently supported for metric alert rules that use dynamic thresholds.
+- You can only select one value per dimension within each criterion.
+- You cannot use "\*" as a dimension value.
+- When metrics configured in different criteria support the same dimension, a configured dimension value must be set explicitly in the same way for all of those metrics in the relevant criteria.
+  - In the example below, because both the **Transactions** and **SuccessE2ELatency** metrics have an **ApiName** dimension, and *criterion1* specifies the *"GetBlob"* value for the **ApiName** dimension, *criterion2* must also set the *"GetBlob"* value for the **ApiName** dimension.
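
For instance, here's a condensed preview of the two criterion objects from the parameter file at the end of this section (operators, thresholds, and aggregations omitted). Both criteria set the shared **ApiName** dimension to the same *"GetBlob"* value:

```json
{
  "criterion1": {
    "value": {
      "name": "1st criterion",
      "metricName": "Transactions",
      "dimensions": [
        { "name": "ResponseType", "operator": "Include", "values": [ "Success" ] },
        { "name": "ApiName", "operator": "Include", "values": [ "GetBlob" ] }
      ]
    }
  },
  "criterion2": {
    "value": {
      "name": "2nd criterion",
      "metricName": "SuccessE2ELatency",
      "dimensions": [
        { "name": "ApiName", "operator": "Include", "values": [ "GetBlob" ] }
      ]
    }
  }
}
```
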
### Template file
+# [Bicep](#tab/bicep)
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "alertName": {
- "type": "string",
- "metadata": {
- "description": "Name of the alert"
- }
- },
- "alertDescription": {
- "type": "string",
- "defaultValue": "This is a metric alert",
- "metadata": {
- "description": "Description of alert"
- }
- },
- "alertSeverity": {
- "type": "int",
- "defaultValue": 3,
- "allowedValues": [
- 0,
- 1,
- 2,
- 3,
- 4
- ],
- "metadata": {
- "description": "Severity of alert {0,1,2,3,4}"
- }
- },
- "isEnabled": {
- "type": "bool",
- "defaultValue": true,
- "metadata": {
- "description": "Specifies whether the alert is enabled"
- }
- },
- "resourceId": {
- "type": "string",
- "defaultValue": "",
- "metadata": {
- "description": "Resource ID of the resource emitting the metric that will be used for the comparison."
- }
- },
- "criterion":{
- "type": "object",
- "metadata": {
- "description": "Criterion includes metric name, dimension values, threshold and an operator."
- }
- },
- "windowSize": {
- "type": "string",
- "defaultValue": "PT5M",
- "allowedValues": [
- "PT5M",
- "PT15M",
- "PT30M",
- "PT1H"
- ],
- "metadata": {
- "description": "Period of time used to monitor alert activity based on the threshold. Must be between five minutes and one hour. ISO 8601 duration format."
- }
- },
- "evaluationFrequency": {
- "type": "string",
- "defaultValue": "PT5M",
- "allowedValues": [
- "PT5M",
- "PT15M",
- "PT30M",
- "PT1H"
- ],
- "metadata": {
- "description": "how often the metric alert is evaluated represented in ISO 8601 duration format"
- }
- },
- "actionGroupId": {
- "type": "string",
- "defaultValue": "",
- "metadata": {
- "description": "The ID of the action group that is triggered when the alert is activated or deactivated"
- }
- }
- },
- "variables": {
- "criteria": "[array(parameters('criterion'))]"
- },
- "resources": [
- {
- "name": "[parameters('alertName')]",
- "type": "Microsoft.Insights/metricAlerts",
- "location": "global",
- "apiVersion": "2018-03-01",
- "tags": {},
- "properties": {
- "description": "[parameters('alertDescription')]",
- "severity": "[parameters('alertSeverity')]",
- "enabled": "[parameters('isEnabled')]",
- "scopes": ["[parameters('resourceId')]"],
- "evaluationFrequency":"[parameters('evaluationFrequency')]",
- "windowSize": "[parameters('windowSize')]",
- "criteria": {
- "odata.type": "Microsoft.Azure.Monitor.MultipleResourceMultipleMetricCriteria",
- "allOf": "[variables('criteria')]"
- },
- "actions": [
- {
- "actionGroupId": "[parameters('actionGroupId')]"
- }
- ]
- }
- }
- ]
-}
-```
+```bicep
+@description('Name of the alert')
+param alertName string
-### Parameter file
+@description('Description of alert')
+param alertDescription string = 'This is a metric alert'
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "alertName": {
- "value": "New Multi-dimensional Metric Alert with Dynamic Thresholds (Replace with your alert name)"
- },
- "alertDescription": {
- "value": "New multi-dimensional metric alert with Dynamic Thresholds created via template (Replace with your alert description)"
- },
- "alertSeverity": {
- "value":3
- },
- "isEnabled": {
- "value": true
- },
- "resourceId": {
- "value": "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resourcegroup-name/providers/Microsoft.Storage/storageAccounts/replace-with-storage-account"
- },
- "criterion": {
- "value": {
- "criterionType": "DynamicThresholdCriterion",
- "name": "1st criterion",
- "metricName": "Transactions",
- "dimensions": [
- {
- "name":"ResponseType",
- "operator": "Include",
- "values": ["*"]
- },
- {
- "name":"ApiName",
- "operator": "Include",
- "values": ["GetBlob", "PutBlob"]
- }
- ],
- "operator": "GreaterOrLessThan",
- "alertSensitivity": "Medium",
- "failingPeriods": {
- "numberOfEvaluationPeriods": "4",
- "minFailingPeriodsToAlert": "3"
- },
- "timeAggregation": "Total"
- }
- },
- "actionGroupId": {
- "value": "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resource-group-name/providers/Microsoft.Insights/actionGroups/replace-with-actiongroup-name"
- }
- }
-}
-```
+@description('Severity of alert {0,1,2,3,4}')
+@allowed([
+ 0
+ 1
+ 2
+ 3
+ 4
+])
+param alertSeverity int = 3
+@description('Specifies whether the alert is enabled')
+param isEnabled bool = true
+@description('Resource ID of the resource emitting the metric that will be used for the comparison.')
+param resourceId string = ''
-## Custom metric, static threshold
+@description('Criterion includes metric name, dimension values, threshold and an operator. The alert rule fires when ALL criteria are met')
+param criterion1 object
-You can use the following template to create a more advanced static threshold metric alert rule on a custom metric.
+@description('Criterion includes metric name, dimension values, threshold and an operator. The alert rule fires when ALL criteria are met')
+param criterion2 object
-To learn more about custom metrics in Azure Monitor, see [Custom metrics in Azure Monitor](../essentials/metrics-custom-overview.md).
+@description('Period of time used to monitor alert activity based on the threshold. Must be between one minute and one day. ISO 8601 duration format.')
+@allowed([
+ 'PT1M'
+ 'PT5M'
+ 'PT15M'
+ 'PT30M'
+ 'PT1H'
+ 'PT6H'
+ 'PT12H'
+ 'PT24H'
+])
+param windowSize string = 'PT5M'
-When creating an alert rule on a custom metric, you need to specify both the metric name and the metric namespace. You should also make sure that the custom metric is already being reported, as you cannot create an alert rule on a custom metric that doesn't yet exist.
+@description('how often the metric alert is evaluated represented in ISO 8601 duration format')
+@allowed([
+ 'PT1M'
+ 'PT5M'
+ 'PT15M'
+ 'PT30M'
+ 'PT1H'
+])
+param evaluationFrequency string = 'PT1M'
-### Template file
+@description('The ID of the action group that is triggered when the alert is activated or deactivated')
+param actionGroupId string = ''
-Save the json below as customstaticmetricalert.json for the purpose of this walkthrough.
+var criterion1_var = array(criterion1)
+var criterion2_var = array(criterion2)
+var criteria = concat(criterion1_var, criterion2_var)
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "alertName": {
- "type": "string",
- "minLength": 1,
- "metadata": {
- "description": "Name of the alert"
- }
- },
- "alertDescription": {
- "type": "string",
- "defaultValue": "This is a metric alert",
- "metadata": {
- "description": "Description of alert"
- }
- },
- "alertSeverity": {
- "type": "int",
- "defaultValue": 3,
- "allowedValues": [
- 0,
- 1,
- 2,
- 3,
- 4
- ],
- "metadata": {
- "description": "Severity of alert {0,1,2,3,4}"
- }
- },
- "isEnabled": {
- "type": "bool",
- "defaultValue": true,
- "metadata": {
- "description": "Specifies whether the alert is enabled"
- }
- },
- "resourceId": {
- "type": "string",
- "minLength": 1,
- "metadata": {
- "description": "Full Resource ID of the resource emitting the metric that will be used for the comparison. For example /subscriptions/00000000-0000-0000-0000-0000-00000000/resourceGroups/ResourceGroupName/providers/Microsoft.compute/virtualMachines/VM_xyz"
- }
- },
- "metricName": {
- "type": "string",
- "minLength": 1,
- "metadata": {
- "description": "Name of the metric used in the comparison to activate the alert."
- }
- },
- "metricNamespace": {
- "type": "string",
- "minLength": 1,
- "metadata": {
- "description": "Namespace of the metric used in the comparison to activate the alert."
- }
- },
- "operator": {
- "type": "string",
- "defaultValue": "GreaterThan",
- "allowedValues": [
- "Equals",
- "GreaterThan",
- "GreaterThanOrEqual",
- "LessThan",
- "LessThanOrEqual"
- ],
- "metadata": {
- "description": "Operator comparing the current value with the threshold value."
- }
- },
- "threshold": {
- "type": "string",
- "defaultValue": "0",
- "metadata": {
- "description": "The threshold value at which the alert is activated."
- }
- },
- "timeAggregation": {
- "type": "string",
- "defaultValue": "Average",
- "allowedValues": [
- "Average",
- "Minimum",
- "Maximum",
- "Total",
- "Count"
- ],
- "metadata": {
- "description": "How the data that is collected should be combined over time."
- }
- },
- "windowSize": {
- "type": "string",
- "defaultValue": "PT5M",
- "allowedValues": [
- "PT1M",
- "PT5M",
- "PT15M",
- "PT30M",
- "PT1H",
- "PT6H",
- "PT12H",
- "PT24H"
- ],
- "metadata": {
- "description": "Period of time used to monitor alert activity based on the threshold. Must be between one minute and one day. ISO 8601 duration format."
- }
- },
- "evaluationFrequency": {
- "type": "string",
- "defaultValue": "PT1M",
- "allowedValues": [
- "PT1M",
- "PT5M",
- "PT15M",
- "PT30M",
- "PT1H"
- ],
- "metadata": {
- "description": "How often the metric alert is evaluated represented in ISO 8601 duration format"
- }
- },
- "actionGroupId": {
- "type": "string",
- "defaultValue": "",
- "metadata": {
- "description": "The ID of the action group that is triggered when the alert is activated or deactivated"
- }
- }
- },
- "variables": { },
- "resources": [
- {
- "name": "[parameters('alertName')]",
- "type": "Microsoft.Insights/metricAlerts",
- "location": "global",
- "apiVersion": "2018-03-01",
- "tags": {},
- "properties": {
- "description": "[parameters('alertDescription')]",
- "severity": "[parameters('alertSeverity')]",
- "enabled": "[parameters('isEnabled')]",
- "scopes": ["[parameters('resourceId')]"],
- "evaluationFrequency":"[parameters('evaluationFrequency')]",
- "windowSize": "[parameters('windowSize')]",
- "criteria": {
- "odata.type": "Microsoft.Azure.Monitor.SingleResourceMultipleMetricCriteria",
- "allOf": [
- {
- "name" : "1st criterion",
- "metricName": "[parameters('metricName')]",
- "metricNamespace": "[parameters('metricNamespace')]",
- "dimensions":[],
- "operator": "[parameters('operator')]",
- "threshold" : "[parameters('threshold')]",
- "timeAggregation": "[parameters('timeAggregation')]"
- }
- ]
- },
- "actions": [
- {
- "actionGroupId": "[parameters('actionGroupId')]"
- }
- ]
- }
- }
+resource metricAlert 'Microsoft.Insights/metricAlerts@2018-03-01' = {
+ name: alertName
+ location: 'global'
+ properties: {
+ description: alertDescription
+ severity: alertSeverity
+ enabled: isEnabled
+ scopes: [
+ resourceId
+ ]
+ evaluationFrequency: evaluationFrequency
+ windowSize: windowSize
+ criteria: {
+ 'odata.type': 'Microsoft.Azure.Monitor.SingleResourceMultipleMetricCriteria'
+ allOf: criteria
+ }
+ actions: [
+ {
+ actionGroupId: actionGroupId
+ }
]
+ }
}
```
-### Parameter file
+# [JSON](#tab/json)
```json
{
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "alertName": {
- "value": "New alert rule on a custom metric"
- },
- "alertDescription": {
- "value": "New alert rule on a custom metric created via template"
- },
- "alertSeverity": {
- "value":3
- },
- "isEnabled": {
- "value": true
- },
- "resourceId": {
- "value": "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resourceGroup-name/providers/microsoft.insights/components/replace-with-application-insights-resource-name"
- },
- "metricName": {
- "value": "The custom metric name"
- },
- "metricNamespace": {
- "value": "Azure.ApplicationInsights"
- },
- "operator": {
- "value": "GreaterThan"
- },
- "threshold": {
- "value": "80"
- },
- "timeAggregation": {
- "value": "Average"
- },
- "actionGroupId": {
- "value": "/subscriptions/replace-with-subscription-id/resourceGroups/resource-group-name/providers/Microsoft.Insights/actionGroups/replace-with-action-group"
- }
- }
-}
-```
--
->[!NOTE]
->
-> You can find the metric namespace of a specific custom metric by [browsing your custom metrics via the Azure portal](../essentials/metrics-custom-overview.md#browse-your-custom-metrics-via-the-azure-portal)
--
-## Multiple resources
-
-Azure Monitor supports monitoring multiple resources of the same type with a single metric alert rule, for resources that exist in the same Azure region. This feature is currently only supported in Azure public cloud and only for Virtual machines, SQL server databases, SQL server elastic pools and Data Box Edge devices. Also, this feature is only available for platform metrics, and isn't supported for custom metrics.
-
-Dynamic Thresholds alerts rule can also help create tailored thresholds for hundreds of metric series (even different types) at a time, which results in fewer alert rules to manage.
-
-This section will describe Azure Resource Manager templates for three scenarios to monitor multiple resources with a single rule.
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "alertName": {
+ "type": "string",
+ "metadata": {
+ "description": "Name of the alert"
+ }
+ },
+ "alertDescription": {
+ "type": "string",
+ "defaultValue": "This is a metric alert",
+ "metadata": {
+ "description": "Description of alert"
+ }
+ },
+ "alertSeverity": {
+ "type": "int",
+ "defaultValue": 3,
+ "allowedValues": [
+ 0,
+ 1,
+ 2,
+ 3,
+ 4
+ ],
+ "metadata": {
+ "description": "Severity of alert {0,1,2,3,4}"
+ }
+ },
+ "isEnabled": {
+ "type": "bool",
+ "defaultValue": true,
+ "metadata": {
+ "description": "Specifies whether the alert is enabled"
+ }
+ },
+ "resourceId": {
+ "type": "string",
+ "defaultValue": "",
+ "metadata": {
+ "description": "Resource ID of the resource emitting the metric that will be used for the comparison."
+ }
+ },
+ "criterion1": {
+ "type": "object",
+ "metadata": {
+ "description": "Criterion includes metric name, dimension values, threshold and an operator. The alert rule fires when ALL criteria are met"
+ }
+ },
+ "criterion2": {
+ "type": "object",
+ "metadata": {
+ "description": "Criterion includes metric name, dimension values, threshold and an operator. The alert rule fires when ALL criteria are met"
+ }
+ },
+ "windowSize": {
+ "type": "string",
+ "defaultValue": "PT5M",
+ "allowedValues": [
+ "PT1M",
+ "PT5M",
+ "PT15M",
+ "PT30M",
+ "PT1H",
+ "PT6H",
+ "PT12H",
+ "PT24H"
+ ],
+ "metadata": {
+ "description": "Period of time used to monitor alert activity based on the threshold. Must be between one minute and one day. ISO 8601 duration format."
+ }
+ },
+ "evaluationFrequency": {
+ "type": "string",
+ "defaultValue": "PT1M",
+ "allowedValues": [
+ "PT1M",
+ "PT5M",
+ "PT15M",
+ "PT30M",
+ "PT1H"
+ ],
+ "metadata": {
+ "description": "how often the metric alert is evaluated represented in ISO 8601 duration format"
+ }
+ },
+ "actionGroupId": {
+ "type": "string",
+ "defaultValue": "",
+ "metadata": {
+ "description": "The ID of the action group that is triggered when the alert is activated or deactivated"
+ }
+ }
+ },
+ "variables": {
+ "criterion1_var": "[array(parameters('criterion1'))]",
+ "criterion2_var": "[array(parameters('criterion2'))]",
+ "criteria": "[concat(variables('criterion1_var'), variables('criterion2_var'))]"
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Insights/metricAlerts",
+ "apiVersion": "2018-03-01",
+ "name": "[parameters('alertName')]",
+ "location": "global",
+ "properties": {
+ "description": "[parameters('alertDescription')]",
+ "severity": "[parameters('alertSeverity')]",
+ "enabled": "[parameters('isEnabled')]",
+ "scopes": [
+ "[parameters('resourceId')]"
+ ],
+ "evaluationFrequency": "[parameters('evaluationFrequency')]",
+ "windowSize": "[parameters('windowSize')]",
+ "criteria": {
+ "odata.type": "Microsoft.Azure.Monitor.SingleResourceMultipleMetricCriteria",
+ "allOf": "[variables('criteria')]"
+ },
+ "actions": [
+ {
+ "actionGroupId": "[parameters('actionGroupId')]"
+ }
+ ]
+ }
+ }
+ ]
+}
+```
-- Monitoring all virtual machines (in one Azure region) in one or more resource groups.
-- Monitoring all virtual machines (in one Azure region) in a subscription.
-- Monitoring a list of virtual machines (in one Azure region) in a subscription.
+
-> [!NOTE]
->
-> In a metric alert rule that monitors multiple resources, only one condition is allowed.
+### Parameter file
-### Static threshold alert on all virtual machines in one or more resource groups
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "alertName": {
+ "value": "New Multi-dimensional Metric Alert (Replace with your alert name)"
+ },
+ "alertDescription": {
+ "value": "New multi-dimensional metric alert created via template (Replace with your alert description)"
+ },
+ "alertSeverity": {
+ "value": 3
+ },
+ "isEnabled": {
+ "value": true
+ },
+ "resourceId": {
+ "value": "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resourcegroup-name/providers/Microsoft.Storage/storageAccounts/replace-with-storage-account"
+ },
+ "criterion1": {
+ "value": {
+ "name": "1st criterion",
+ "metricName": "Transactions",
+ "dimensions": [
+ {
+ "name": "ResponseType",
+ "operator": "Include",
+ "values": [ "Success" ]
+ },
+ {
+ "name": "ApiName",
+ "operator": "Include",
+ "values": [ "GetBlob" ]
+ }
+ ],
+ "operator": "GreaterThan",
+ "threshold": "5",
+ "timeAggregation": "Total"
+ }
+ },
+ "criterion2": {
+ "value": {
+ "name": "2nd criterion",
+ "metricName": "SuccessE2ELatency",
+ "dimensions": [
+ {
+ "name": "ApiName",
+ "operator": "Include",
+ "values": [ "GetBlob" ]
+ }
+ ],
+ "operator": "GreaterThan",
+ "threshold": "250",
+ "timeAggregation": "Average"
+ }
+ },
+ "actionGroupId": {
+ "value": "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resource-group-name/providers/Microsoft.Insights/actionGroups/replace-with-actiongroup-name"
+ }
+ }
+}
+```
-This template will create a static threshold metric alert rule that monitors Percentage CPU for all virtual machines (in one Azure region) in one or more resource groups.
+## Multiple dimensions, static threshold
-Save the json below as all-vms-in-resource-group-static.json for the purpose of this walk-through.
+A single alert rule can monitor multiple metric time series at a time, which results in fewer alert rules to manage. The following sample creates a static metric alert rule on dimensional metrics.
+
+In this sample, the alert rule monitors the dimension value combinations of the **ResponseType** and **ApiName** dimensions for the **Transactions** metric:
+
+1. **ResponseType** - The use of the "\*" wildcard means that for each value of the **ResponseType** dimension, including future values, a different time series is monitored individually.
+2. **ApiName** - A different time series is monitored only for the **GetBlob** and **PutBlob** dimension values.
+
+For example, a few of the potential time series that are monitored by this alert rule are:
+
+- Metric = *Transactions*, ResponseType = *Success*, ApiName = *GetBlob*
+- Metric = *Transactions*, ResponseType = *Success*, ApiName = *PutBlob*
+- Metric = *Transactions*, ResponseType = *Server Timeout*, ApiName = *GetBlob*
+- Metric = *Transactions*, ResponseType = *Server Timeout*, ApiName = *PutBlob*
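
Both behaviors are configured inside the `criterion` parameter object. As a condensed preview of the parameter file at the end of this section, the object looks like this:

```json
{
  "criterion": {
    "value": {
      "name": "Criterion",
      "metricName": "Transactions",
      "dimensions": [
        { "name": "ResponseType", "operator": "Include", "values": [ "*" ] },
        { "name": "ApiName", "operator": "Include", "values": [ "GetBlob", "PutBlob" ] }
      ],
      "operator": "GreaterThan",
      "threshold": "5",
      "timeAggregation": "Total"
    }
  }
}
```
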
### Template file
+# [Bicep](#tab/bicep)
+
+```bicep
+@description('Name of the alert')
+param alertName string
+
+@description('Description of alert')
+param alertDescription string = 'This is a metric alert'
+
+@description('Severity of alert {0,1,2,3,4}')
+@allowed([
+ 0
+ 1
+ 2
+ 3
+ 4
+])
+param alertSeverity int = 3
+
+@description('Specifies whether the alert is enabled')
+param isEnabled bool = true
+
+@description('Resource ID of the resource emitting the metric that will be used for the comparison.')
+param resourceId string = ''
+
+@description('Criterion includes metric name, dimension values, threshold and an operator. The alert rule fires when ALL criteria are met')
+param criterion object
+
+@description('Period of time used to monitor alert activity based on the threshold. Must be between one minute and one day. ISO 8601 duration format.')
+@allowed([
+ 'PT1M'
+ 'PT5M'
+ 'PT15M'
+ 'PT30M'
+ 'PT1H'
+ 'PT6H'
+ 'PT12H'
+ 'PT24H'
+])
+param windowSize string = 'PT5M'
+
+@description('how often the metric alert is evaluated represented in ISO 8601 duration format')
+@allowed([
+ 'PT1M'
+ 'PT5M'
+ 'PT15M'
+ 'PT30M'
+ 'PT1H'
+])
+param evaluationFrequency string = 'PT1M'
+
+@description('The ID of the action group that is triggered when the alert is activated or deactivated')
+param actionGroupId string = ''
+
+var criteria = array(criterion)
+
+resource metricAlert 'Microsoft.Insights/metricAlerts@2018-03-01' = {
+ name: alertName
+ location: 'global'
+ properties: {
+ description: alertDescription
+ severity: alertSeverity
+ enabled: isEnabled
+ scopes: [
+ resourceId
+ ]
+ evaluationFrequency: evaluationFrequency
+ windowSize: windowSize
+ criteria: {
+ 'odata.type': 'Microsoft.Azure.Monitor.SingleResourceMultipleMetricCriteria'
+ allOf: criteria
+ }
+ actions: [
+ {
+ actionGroupId: actionGroupId
+ }
+ ]
+ }
+}
+```
+
+# [JSON](#tab/json)
```json
{
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "alertName": {
- "type": "string",
- "minLength": 1,
- "metadata": {
- "description": "Name of the alert"
- }
- },
- "alertDescription": {
- "type": "string",
- "defaultValue": "This is a metric alert",
- "metadata": {
- "description": "Description of alert"
- }
- },
- "alertSeverity": {
- "type": "int",
- "defaultValue": 3,
- "allowedValues": [
- 0,
- 1,
- 2,
- 3,
- 4
- ],
- "metadata": {
- "description": "Severity of alert {0,1,2,3,4}"
- }
- },
- "isEnabled": {
- "type": "bool",
- "defaultValue": true,
- "metadata": {
- "description": "Specifies whether the alert is enabled"
- }
- },
- "targetResourceGroup":{
- "type": "array",
- "minLength": 1,
- "metadata": {
- "description": "Full path of the resource group(s) where target resources to be monitored are in. For example - /subscriptions/00000000-0000-0000-0000-0000-00000000/resourceGroups/ResourceGroupName"
- }
- },
- "targetResourceRegion":{
- "type": "string",
- "allowedValues": [
- "EastUS",
- "EastUS2",
- "CentralUS",
- "NorthCentralUS",
- "SouthCentralUS",
- "WestCentralUS",
- "WestUS",
- "WestUS2",
- "CanadaEast",
- "CanadaCentral",
- "BrazilSouth",
- "NorthEurope",
- "WestEurope",
- "FranceCentral",
- "FranceSouth",
- "UKWest",
- "UKSouth",
- "GermanyCentral",
- "GermanyNortheast",
- "GermanyNorth",
- "GermanyWestCentral",
- "SwitzerlandNorth",
- "SwitzerlandWest",
- "NorwayEast",
- "NorwayWest",
- "SoutheastAsia",
- "EastAsia",
- "AustraliaEast",
- "AustraliaSoutheast",
- "AustraliaCentral",
- "AustraliaCentral2",
- "ChinaEast",
- "ChinaNorth",
- "ChinaEast2",
- "ChinaNorth2",
- "CentralIndia",
- "WestIndia",
- "SouthIndia",
- "JapanEast",
- "JapanWest",
- "KoreaCentral",
- "KoreaSouth",
- "SouthAfricaWest",
- "SouthAfricaNorth",
- "UAECentral",
- "UAENorth"
- ],
- "metadata": {
- "description": "Azure region in which target resources to be monitored are in (without spaces). For example: EastUS"
- }
- },
- "targetResourceType": {
- "type": "string",
- "minLength": 1,
- "metadata": {
- "description": "Resource type of target resources to be monitored."
- }
- },
- "metricName": {
- "type": "string",
- "minLength": 1,
- "metadata": {
- "description": "Name of the metric used in the comparison to activate the alert."
- }
- },
- "operator": {
- "type": "string",
- "defaultValue": "GreaterThan",
- "allowedValues": [
- "Equals",
- "GreaterThan",
- "GreaterThanOrEqual",
- "LessThan",
- "LessThanOrEqual"
- ],
- "metadata": {
- "description": "Operator comparing the current value with the threshold value."
- }
- },
- "threshold": {
- "type": "string",
- "defaultValue": "0",
- "metadata": {
- "description": "The threshold value at which the alert is activated."
- }
- },
- "timeAggregation": {
- "type": "string",
- "defaultValue": "Average",
- "allowedValues": [
- "Average",
- "Minimum",
- "Maximum",
- "Total",
- "Count"
- ],
- "metadata": {
- "description": "How the data that is collected should be combined over time."
- }
- },
- "windowSize": {
- "type": "string",
- "defaultValue": "PT5M",
- "allowedValues": [
- "PT1M",
- "PT5M",
- "PT15M",
- "PT30M",
- "PT1H",
- "PT6H",
- "PT12H",
- "PT24H"
- ],
- "metadata": {
- "description": "Period of time used to monitor alert activity based on the threshold. Must be between one minute and one day. ISO 8601 duration format."
- }
- },
- "evaluationFrequency": {
- "type": "string",
- "defaultValue": "PT1M",
- "allowedValues": [
- "PT1M",
- "PT5M",
- "PT15M",
- "PT30M"
- ],
- "metadata": {
- "description": "how often the metric alert is evaluated represented in ISO 8601 duration format"
- }
- },
- "actionGroupId": {
- "type": "string",
- "defaultValue": "",
- "metadata": {
- "description": "The ID of the action group that is triggered when the alert is activated or deactivated"
- }
- }
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "alertName": {
+ "type": "string",
+ "metadata": {
+ "description": "Name of the alert"
+ }
},
- "variables": { },
- "resources": [
- {
- "name": "[parameters('alertName')]",
- "type": "Microsoft.Insights/metricAlerts",
- "location": "global",
- "apiVersion": "2018-03-01",
- "tags": {},
- "properties": {
- "description": "[parameters('alertDescription')]",
- "severity": "[parameters('alertSeverity')]",
- "enabled": "[parameters('isEnabled')]",
- "scopes": "[parameters('targetResourceGroup')]",
- "targetResourceType": "[parameters('targetResourceType')]",
- "targetResourceRegion": "[parameters('targetResourceRegion')]",
- "evaluationFrequency":"[parameters('evaluationFrequency')]",
- "windowSize": "[parameters('windowSize')]",
- "criteria": {
- "odata.type": "Microsoft.Azure.Monitor.MultipleResourceMultipleMetricCriteria",
- "allOf": [
- {
- "name" : "1st criterion",
- "metricName": "[parameters('metricName')]",
- "dimensions":[],
- "operator": "[parameters('operator')]",
- "threshold" : "[parameters('threshold')]",
- "timeAggregation": "[parameters('timeAggregation')]"
- }
- ]
- },
- "actions": [
- {
- "actionGroupId": "[parameters('actionGroupId')]"
- }
- ]
- }
- }
- ]
+ "alertDescription": {
+ "type": "string",
+ "defaultValue": "This is a metric alert",
+ "metadata": {
+ "description": "Description of alert"
+ }
+ },
+ "alertSeverity": {
+ "type": "int",
+ "defaultValue": 3,
+ "allowedValues": [
+ 0,
+ 1,
+ 2,
+ 3,
+ 4
+ ],
+ "metadata": {
+ "description": "Severity of alert {0,1,2,3,4}"
+ }
+ },
+ "isEnabled": {
+ "type": "bool",
+ "defaultValue": true,
+ "metadata": {
+ "description": "Specifies whether the alert is enabled"
+ }
+ },
+ "resourceId": {
+ "type": "string",
+ "defaultValue": "",
+ "metadata": {
+ "description": "Resource ID of the resource emitting the metric that will be used for the comparison."
+ }
+ },
+ "criterion": {
+ "type": "object",
+ "metadata": {
+ "description": "Criterion includes metric name, dimension values, threshold and an operator. The alert rule fires when ALL criteria are met"
+ }
+ },
+ "windowSize": {
+ "type": "string",
+ "defaultValue": "PT5M",
+ "allowedValues": [
+ "PT1M",
+ "PT5M",
+ "PT15M",
+ "PT30M",
+ "PT1H",
+ "PT6H",
+ "PT12H",
+ "PT24H"
+ ],
+ "metadata": {
+ "description": "Period of time used to monitor alert activity based on the threshold. Must be between one minute and one day. ISO 8601 duration format."
+ }
+ },
+ "evaluationFrequency": {
+ "type": "string",
+ "defaultValue": "PT1M",
+ "allowedValues": [
+ "PT1M",
+ "PT5M",
+ "PT15M",
+ "PT30M",
+ "PT1H"
+ ],
+ "metadata": {
+ "description": "how often the metric alert is evaluated represented in ISO 8601 duration format"
+ }
+ },
+ "actionGroupId": {
+ "type": "string",
+ "defaultValue": "",
+ "metadata": {
+ "description": "The ID of the action group that is triggered when the alert is activated or deactivated"
+ }
+ }
+ },
+ "variables": {
+ "criteria": "[array(parameters('criterion'))]"
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Insights/metricAlerts",
+ "apiVersion": "2018-03-01",
+ "name": "[parameters('alertName')]",
+ "location": "global",
+ "properties": {
+ "description": "[parameters('alertDescription')]",
+ "severity": "[parameters('alertSeverity')]",
+ "enabled": "[parameters('isEnabled')]",
+ "scopes": [
+ "[parameters('resourceId')]"
+ ],
+ "evaluationFrequency": "[parameters('evaluationFrequency')]",
+ "windowSize": "[parameters('windowSize')]",
+ "criteria": {
+ "odata.type": "Microsoft.Azure.Monitor.SingleResourceMultipleMetricCriteria",
+ "allOf": "[variables('criteria')]"
+ },
+ "actions": [
+ {
+ "actionGroupId": "[parameters('actionGroupId')]"
+ }
+ ]
+ }
+ }
+ ]
}
```

### Parameter file

```json
{
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "alertName": {
- "value": "Multi-resource metric alert via Azure Resource Manager template"
- },
- "alertDescription": {
- "value": "New Multi-resource metric alert created via template"
- },
- "alertSeverity": {
- "value":3
- },
- "isEnabled": {
- "value": true
- },
- "targetResourceGroup":{
- "value": [
- "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resource-group-name1",
- "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resource-group-name2"
- ]
- },
- "targetResourceRegion":{
- "value": "SouthCentralUS"
- },
- "targetResourceType":{
- "value": "Microsoft.Compute/virtualMachines"
- },
- "metricName": {
- "value": "Percentage CPU"
- },
- "operator": {
- "value": "GreaterThan"
- },
- "threshold": {
- "value": "0"
- },
- "timeAggregation": {
- "value": "Average"
- },
- "actionGroupId": {
- "value": "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resource-group-name/providers/Microsoft.Insights/actionGroups/replace-with-action-group-name"
- }
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "alertName": {
+ "value": "New multi-dimensional metric alert rule (replace with your alert name)"
+ },
+ "alertDescription": {
+ "value": "New multi-dimensional metric alert rule created via template (replace with your alert description)"
+ },
+ "alertSeverity": {
+ "value": 3
+ },
+ "isEnabled": {
+ "value": true
+ },
+ "resourceId": {
+ "value": "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resourcegroup-name/providers/Microsoft.Storage/storageAccounts/replace-with-storage-account"
+ },
+ "criterion": {
+ "value": {
+ "name": "Criterion",
+ "metricName": "Transactions",
+ "dimensions": [
+ {
+ "name": "ResponseType",
+ "operator": "Include",
+ "values": [ "*" ]
+ },
+ {
+ "name": "ApiName",
+ "operator": "Include",
+ "values": [ "GetBlob", "PutBlob" ]
+ }
+ ],
+ "operator": "GreaterThan",
+ "threshold": "5",
+ "timeAggregation": "Total"
+ }
+ },
+ "actionGroupId": {
+ "value": "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resource-group-name/providers/Microsoft.Insights/actionGroups/replace-with-actiongroup-name"
}
+ }
}
```
+## Multiple dimensions, dynamic thresholds
+A single dynamic thresholds alert rule can create tailored thresholds for hundreds of metric time series (even of different types) at a time, which results in fewer alert rules to manage. The following sample creates a dynamic thresholds metric alert rule on dimensional metrics.
-### Dynamic Thresholds alert on all virtual machines in one or more resource groups
-This sample creates a dynamic thresholds metric alert rule that monitors Percentage CPU for all virtual machines in one Azure region in one or more resource groups.
+In this sample, the alert rule monitors the dimension value combinations of the **ResponseType** and **ApiName** dimensions for the **Transactions** metric:
+
+1. **ResponseType** - For each value of the **ResponseType** dimension, including future values, a different time series is monitored individually.
+2. **ApiName** - A different time series is monitored only for the **GetBlob** and **PutBlob** dimension values.
+
+For example, a few of the potential time series that are monitored by this alert rule are:
+
+- Metric = *Transactions*, ResponseType = *Success*, ApiName = *GetBlob*
+- Metric = *Transactions*, ResponseType = *Success*, ApiName = *PutBlob*
+- Metric = *Transactions*, ResponseType = *Server Timeout*, ApiName = *GetBlob*
+- Metric = *Transactions*, ResponseType = *Server Timeout*, ApiName = *PutBlob*
+
+>[!NOTE]
+> Multiple criteria are not currently supported for metric alert rules that use dynamic thresholds.
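
Compared to the static multi-dimensional sample, the only structural change is inside the criterion object: `criterionType` is set to `DynamicThresholdCriterion`, and the fixed `threshold` is replaced by `alertSensitivity` and `failingPeriods`. Here's a condensed preview of the object from the parameter file at the end of this section:

```json
{
  "criterion": {
    "value": {
      "criterionType": "DynamicThresholdCriterion",
      "name": "1st criterion",
      "metricName": "Transactions",
      "dimensions": [
        { "name": "ResponseType", "operator": "Include", "values": [ "*" ] },
        { "name": "ApiName", "operator": "Include", "values": [ "GetBlob", "PutBlob" ] }
      ],
      "operator": "GreaterOrLessThan",
      "alertSensitivity": "Medium",
      "failingPeriods": {
        "numberOfEvaluationPeriods": "4",
        "minFailingPeriodsToAlert": "3"
      },
      "timeAggregation": "Total"
    }
  }
}
```
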
### Template file
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "alertName": {
- "type": "string",
- "minLength": 1,
- "metadata": {
- "description": "Name of the alert"
- }
- },
- "alertDescription": {
- "type": "string",
- "defaultValue": "This is a metric alert",
- "metadata": {
- "description": "Description of alert"
- }
- },
- "alertSeverity": {
- "type": "int",
- "defaultValue": 3,
- "allowedValues": [
- 0,
- 1,
- 2,
- 3,
- 4
- ],
- "metadata": {
- "description": "Severity of alert {0,1,2,3,4}"
- }
- },
- "isEnabled": {
- "type": "bool",
- "defaultValue": true,
- "metadata": {
- "description": "Specifies whether the alert is enabled"
- }
- },
- "targetResourceGroup":{
- "type": "array",
- "minLength": 1,
- "metadata": {
- "description": "Full path of the resource group(s) where target resources to be monitored are in. For example - /subscriptions/00000000-0000-0000-0000-0000-00000000/resourceGroups/ResourceGroupName"
- }
- },
- "targetResourceRegion":{
- "type": "string",
- "allowedValues": [
- "EastUS",
- "EastUS2",
- "CentralUS",
- "NorthCentralUS",
- "SouthCentralUS",
- "WestCentralUS",
- "WestUS",
- "WestUS2",
- "CanadaEast",
- "CanadaCentral",
- "BrazilSouth",
- "NorthEurope",
- "WestEurope",
- "FranceCentral",
- "FranceSouth",
- "UKWest",
- "UKSouth",
- "GermanyCentral",
- "GermanyNortheast",
- "GermanyNorth",
- "GermanyWestCentral",
- "SwitzerlandNorth",
- "SwitzerlandWest",
- "NorwayEast",
- "NorwayWest",
- "SoutheastAsia",
- "EastAsia",
- "AustraliaEast",
- "AustraliaSoutheast",
- "AustraliaCentral",
- "AustraliaCentral2",
- "ChinaEast",
- "ChinaNorth",
- "ChinaEast2",
- "ChinaNorth2",
- "CentralIndia",
- "WestIndia",
- "SouthIndia",
- "JapanEast",
- "JapanWest",
- "KoreaCentral",
- "KoreaSouth",
- "SouthAfricaWest",
- "SouthAfricaNorth",
- "UAECentral",
- "UAENorth"
- ],
- "metadata": {
- "description": "Azure region in which target resources to be monitored are in (without spaces). For example: EastUS"
- }
- },
- "targetResourceType": {
- "type": "string",
- "minLength": 1,
- "metadata": {
- "description": "Resource type of target resources to be monitored."
- }
- },
- "metricName": {
- "type": "string",
- "minLength": 1,
- "metadata": {
- "description": "Name of the metric used in the comparison to activate the alert."
- }
- },
- "operator": {
- "type": "string",
- "defaultValue": "GreaterOrLessThan",
- "allowedValues": [
- "GreaterThan",
- "LessThan",
- "GreaterOrLessThan"
- ],
- "metadata": {
- "description": "Operator comparing the current value with the threshold value."
- }
- },
- "alertSensitivity": {
- "type": "string",
- "defaultValue": "Medium",
- "allowedValues": [
- "High",
- "Medium",
- "Low"
- ],
- "metadata": {
- "description": "Tunes how 'noisy' the Dynamic Thresholds alerts will be: 'High' will result in more alerts while 'Low' will result in fewer alerts."
- }
- },
- "numberOfEvaluationPeriods": {
- "type": "string",
- "defaultValue": "4",
- "metadata": {
- "description": "The number of periods to check in the alert evaluation."
- }
- },
- "minFailingPeriodsToAlert": {
- "type": "string",
- "defaultValue": "3",
- "metadata": {
- "description": "The number of unhealthy periods to alert on (must be lower or equal to numberOfEvaluationPeriods)."
- }
- },
- "timeAggregation": {
- "type": "string",
- "defaultValue": "Average",
- "allowedValues": [
- "Average",
- "Minimum",
- "Maximum",
- "Total",
- "Count"
- ],
- "metadata": {
- "description": "How the data that is collected should be combined over time."
- }
- },
- "windowSize": {
- "type": "string",
- "defaultValue": "PT5M",
- "allowedValues": [
- "PT5M",
- "PT15M",
- "PT30M",
- "PT1H"
- ],
- "metadata": {
- "description": "Period of time used to monitor alert activity based on the threshold. Must be between five minutes and one hour. ISO 8601 duration format."
- }
+# [Bicep](#tab/bicep)
+
+```bicep
+@description('Name of the alert')
+param alertName string
+
+@description('Description of alert')
+param alertDescription string = 'This is a metric alert'
+
+@description('Severity of alert {0,1,2,3,4}')
+@allowed([
+ 0
+ 1
+ 2
+ 3
+ 4
+])
+param alertSeverity int = 3
+
+@description('Specifies whether the alert is enabled')
+param isEnabled bool = true
+
+@description('Resource ID of the resource emitting the metric that will be used for the comparison.')
+param resourceId string = ''
+
+@description('Criterion includes metric name, dimension values, threshold and an operator.')
+param criterion object
+
+@description('Period of time used to monitor alert activity based on the threshold. Must be between five minutes and one hour. ISO 8601 duration format.')
+@allowed([
+ 'PT5M'
+ 'PT15M'
+ 'PT30M'
+ 'PT1H'
+])
+param windowSize string = 'PT5M'
+
+@description('how often the metric alert is evaluated represented in ISO 8601 duration format')
+@allowed([
+ 'PT5M'
+ 'PT15M'
+ 'PT30M'
+ 'PT1H'
+])
+param evaluationFrequency string = 'PT5M'
+
+@description('The ID of the action group that is triggered when the alert is activated or deactivated')
+param actionGroupId string = ''
+
+var criteria = array(criterion)
+
+resource metricAlert 'Microsoft.Insights/metricAlerts@2018-03-01' = {
+ name: alertName
+ location: 'global'
+ properties: {
+ description: alertDescription
+ severity: alertSeverity
+ enabled: isEnabled
+ scopes: [
+ resourceId
+ ]
+ evaluationFrequency: evaluationFrequency
+ windowSize: windowSize
+ criteria: {
+ 'odata.type': 'Microsoft.Azure.Monitor.MultipleResourceMultipleMetricCriteria'
+ allOf: criteria
+ }
+ actions: [
+ {
+ actionGroupId: actionGroupId
+ }
+ ]
+ }
+}
+```
+
+# [JSON](#tab/json)
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "alertName": {
+ "type": "string",
+ "metadata": {
+ "description": "Name of the alert"
+ }
+ },
+ "alertDescription": {
+ "type": "string",
+ "defaultValue": "This is a metric alert",
+ "metadata": {
+ "description": "Description of alert"
+ }
+ },
+ "alertSeverity": {
+ "type": "int",
+ "defaultValue": 3,
+ "allowedValues": [
+ 0,
+ 1,
+ 2,
+ 3,
+ 4
+ ],
+ "metadata": {
+ "description": "Severity of alert {0,1,2,3,4}"
+ }
+ },
+ "isEnabled": {
+ "type": "bool",
+ "defaultValue": true,
+ "metadata": {
+ "description": "Specifies whether the alert is enabled"
+ }
+ },
+ "resourceId": {
+ "type": "string",
+ "defaultValue": "",
+ "metadata": {
+ "description": "Resource ID of the resource emitting the metric that will be used for the comparison."
+ }
+ },
+ "criterion": {
+ "type": "object",
+ "metadata": {
+ "description": "Criterion includes metric name, dimension values, threshold and an operator."
+ }
+ },
+ "windowSize": {
+ "type": "string",
+ "defaultValue": "PT5M",
+ "allowedValues": [
+ "PT5M",
+ "PT15M",
+ "PT30M",
+ "PT1H"
+ ],
+ "metadata": {
+ "description": "Period of time used to monitor alert activity based on the threshold. Must be between five minutes and one hour. ISO 8601 duration format."
+ }
+ },
+ "evaluationFrequency": {
+ "type": "string",
+ "defaultValue": "PT5M",
+ "allowedValues": [
+ "PT5M",
+ "PT15M",
+ "PT30M",
+ "PT1H"
+ ],
+ "metadata": {
+ "description": "how often the metric alert is evaluated represented in ISO 8601 duration format"
+ }
+ },
+ "actionGroupId": {
+ "type": "string",
+ "defaultValue": "",
+ "metadata": {
+ "description": "The ID of the action group that is triggered when the alert is activated or deactivated"
+ }
+ }
+ },
+ "variables": {
+ "criteria": "[array(parameters('criterion'))]"
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Insights/metricAlerts",
+ "apiVersion": "2018-03-01",
+ "name": "[parameters('alertName')]",
+ "location": "global",
+ "properties": {
+ "description": "[parameters('alertDescription')]",
+ "severity": "[parameters('alertSeverity')]",
+ "enabled": "[parameters('isEnabled')]",
+ "scopes": [
+ "[parameters('resourceId')]"
+ ],
+ "evaluationFrequency": "[parameters('evaluationFrequency')]",
+ "windowSize": "[parameters('windowSize')]",
+ "criteria": {
+ "odata.type": "Microsoft.Azure.Monitor.MultipleResourceMultipleMetricCriteria",
+ "allOf": "[variables('criteria')]"
},
- "evaluationFrequency": {
- "type": "string",
- "defaultValue": "PT5M",
- "allowedValues": [
- "PT5M",
- "PT15M",
- "PT30M",
- "PT1H"
- ],
- "metadata": {
- "description": "how often the metric alert is evaluated represented in ISO 8601 duration format"
- }
+ "actions": [
+ {
+ "actionGroupId": "[parameters('actionGroupId')]"
+ }
+ ]
+ }
+ }
+ ]
+}
+```
+++
+### Parameter file
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "alertName": {
+ "value": "New Multi-dimensional Metric Alert with Dynamic Thresholds (Replace with your alert name)"
+ },
+ "alertDescription": {
+ "value": "New multi-dimensional metric alert with Dynamic Thresholds created via template (Replace with your alert description)"
+ },
+ "alertSeverity": {
+ "value": 3
+ },
+ "isEnabled": {
+ "value": true
+ },
+ "resourceId": {
+ "value": "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resourcegroup-name/providers/Microsoft.Storage/storageAccounts/replace-with-storage-account"
+ },
+ "criterion": {
+ "value": {
+ "criterionType": "DynamicThresholdCriterion",
+ "name": "1st criterion",
+ "metricName": "Transactions",
+ "dimensions": [
+ {
+ "name": "ResponseType",
+ "operator": "Include",
+ "values": [ "*" ]
+ },
+ {
+ "name": "ApiName",
+ "operator": "Include",
+ "values": [ "GetBlob", "PutBlob" ]
+ }
+ ],
+ "operator": "GreaterOrLessThan",
+ "alertSensitivity": "Medium",
+ "failingPeriods": {
+ "numberOfEvaluationPeriods": "4",
+ "minFailingPeriodsToAlert": "3"
},
- "actionGroupId": {
- "type": "string",
- "defaultValue": "",
- "metadata": {
- "description": "The ID of the action group that is triggered when the alert is activated or deactivated"
- }
- }
+ "timeAggregation": "Total"
+ }
},
- "variables": { },
- "resources": [
+ "actionGroupId": {
+ "value": "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resource-group-name/providers/Microsoft.Insights/actionGroups/replace-with-actiongroup-name"
+ }
+ }
+}
+```
+
+## Custom metric, static threshold
+
+You can use the following template to create a more advanced static threshold metric alert rule on a custom metric.
+
+To learn more about custom metrics in Azure Monitor, see [Custom metrics in Azure Monitor](../essentials/metrics-custom-overview.md).
+
+When creating an alert rule on a custom metric, you need to specify both the metric name and the metric namespace. You should also make sure that the custom metric is already being reported, as you cannot create an alert rule on a custom metric that doesn't yet exist.
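
As a minimal sketch, the corresponding entries in a parameter file would supply both values (the values below are placeholders only):

```json
{
  "metricName": {
    "value": "replace-with-your-custom-metric-name"
  },
  "metricNamespace": {
    "value": "replace-with-your-custom-metric-namespace"
  }
}
```
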
+
+### Template file
+
+# [Bicep](#tab/bicep)
+
+```bicep
+@description('Name of the alert')
+@minLength(1)
+param alertName string
+
+@description('Description of alert')
+param alertDescription string = 'This is a metric alert'
+
+@description('Severity of alert {0,1,2,3,4}')
+@allowed([
+ 0
+ 1
+ 2
+ 3
+ 4
+])
+param alertSeverity int = 3
+
+@description('Specifies whether the alert is enabled')
+param isEnabled bool = true
+
+@description('Full Resource ID of the resource emitting the metric that will be used for the comparison. For example /subscriptions/00000000-0000-0000-0000-0000-00000000/resourceGroups/ResourceGroupName/providers/Microsoft.compute/virtualMachines/VM_xyz')
+@minLength(1)
+param resourceId string
+
+@description('Name of the metric used in the comparison to activate the alert.')
+@minLength(1)
+param metricName string
+
+@description('Namespace of the metric used in the comparison to activate the alert.')
+@minLength(1)
+param metricNamespace string
+
+@description('Operator comparing the current value with the threshold value.')
+@allowed([
+ 'Equals'
+ 'GreaterThan'
+ 'GreaterThanOrEqual'
+ 'LessThan'
+ 'LessThanOrEqual'
+])
+param operator string = 'GreaterThan'
+
+@description('The threshold value at which the alert is activated.')
+param threshold int = 0
+
+@description('How the data that is collected should be combined over time.')
+@allowed([
+ 'Average'
+ 'Minimum'
+ 'Maximum'
+ 'Total'
+ 'Count'
+])
+param timeAggregation string = 'Average'
+
+@description('Period of time used to monitor alert activity based on the threshold. Must be between one minute and one day. ISO 8601 duration format.')
+@allowed([
+ 'PT1M'
+ 'PT5M'
+ 'PT15M'
+ 'PT30M'
+ 'PT1H'
+ 'PT6H'
+ 'PT12H'
+ 'PT24H'
+])
+param windowSize string = 'PT5M'
+
+@description('How often the metric alert is evaluated represented in ISO 8601 duration format')
+@allowed([
+ 'PT1M'
+ 'PT5M'
+ 'PT15M'
+ 'PT30M'
+ 'PT1H'
+])
+param evaluationFrequency string = 'PT1M'
+
+@description('The ID of the action group that is triggered when the alert is activated or deactivated')
+param actionGroupId string = ''
+
+resource metricAlert 'Microsoft.Insights/metricAlerts@2018-03-01' = {
+ name: alertName
+ location: 'global'
+ properties: {
+ description: alertDescription
+ severity: alertSeverity
+ enabled: isEnabled
+ scopes: [
+ resourceId
+ ]
+ evaluationFrequency: evaluationFrequency
+ windowSize: windowSize
+ criteria: {
+ 'odata.type': 'Microsoft.Azure.Monitor.SingleResourceMultipleMetricCriteria'
+ allOf: [
{
- "name": "[parameters('alertName')]",
- "type": "Microsoft.Insights/metricAlerts",
- "location": "global",
- "apiVersion": "2018-03-01",
- "tags": {},
- "properties": {
- "description": "[parameters('alertDescription')]",
- "severity": "[parameters('alertSeverity')]",
- "enabled": "[parameters('isEnabled')]",
- "scopes": "[parameters('targetResourceGroup')]",
- "targetResourceType": "[parameters('targetResourceType')]",
- "targetResourceRegion": "[parameters('targetResourceRegion')]",
- "evaluationFrequency":"[parameters('evaluationFrequency')]",
- "windowSize": "[parameters('windowSize')]",
- "criteria": {
- "odata.type": "Microsoft.Azure.Monitor.MultipleResourceMultipleMetricCriteria",
- "allOf": [
- {
- "criterionType": "DynamicThresholdCriterion",
- "name" : "1st criterion",
- "metricName": "[parameters('metricName')]",
- "dimensions":[],
- "operator": "[parameters('operator')]",
- "alertSensitivity": "[parameters('alertSensitivity')]",
- "failingPeriods": {
- "numberOfEvaluationPeriods": "[parameters('numberOfEvaluationPeriods')]",
- "minFailingPeriodsToAlert": "[parameters('minFailingPeriodsToAlert')]"
- },
- "timeAggregation": "[parameters('timeAggregation')]"
- }
- ]
- },
- "actions": [
- {
- "actionGroupId": "[parameters('actionGroupId')]"
- }
- ]
- }
+ name: '1st criterion'
+ metricName: metricName
+ metricNamespace: metricNamespace
+ dimensions: []
+ operator: operator
+ threshold: threshold
+ timeAggregation: timeAggregation
+ criterionType: 'StaticThresholdCriterion'
}
+ ]
+ }
+ actions: [
+ {
+ actionGroupId: actionGroupId
+ }
]
+ }
+}
+```
+
+# [JSON](#tab/json)
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "alertName": {
+ "type": "string",
+ "minLength": 1,
+ "metadata": {
+ "description": "Name of the alert"
+ }
+ },
+ "alertDescription": {
+ "type": "string",
+ "defaultValue": "This is a metric alert",
+ "metadata": {
+ "description": "Description of alert"
+ }
+ },
+ "alertSeverity": {
+ "type": "int",
+ "defaultValue": 3,
+ "allowedValues": [
+ 0,
+ 1,
+ 2,
+ 3,
+ 4
+ ],
+ "metadata": {
+ "description": "Severity of alert {0,1,2,3,4}"
+ }
+ },
+ "isEnabled": {
+ "type": "bool",
+ "defaultValue": true,
+ "metadata": {
+ "description": "Specifies whether the alert is enabled"
+ }
+ },
+ "resourceId": {
+ "type": "string",
+ "minLength": 1,
+ "metadata": {
+ "description": "Full Resource ID of the resource emitting the metric that will be used for the comparison. For example /subscriptions/00000000-0000-0000-0000-0000-00000000/resourceGroups/ResourceGroupName/providers/Microsoft.compute/virtualMachines/VM_xyz"
+ }
+ },
+ "metricName": {
+ "type": "string",
+ "minLength": 1,
+ "metadata": {
+ "description": "Name of the metric used in the comparison to activate the alert."
+ }
+ },
+ "metricNamespace": {
+ "type": "string",
+ "minLength": 1,
+ "metadata": {
+ "description": "Namespace of the metric used in the comparison to activate the alert."
+ }
+ },
+ "operator": {
+ "type": "string",
+ "defaultValue": "GreaterThan",
+ "allowedValues": [
+ "Equals",
+ "GreaterThan",
+ "GreaterThanOrEqual",
+ "LessThan",
+ "LessThanOrEqual"
+ ],
+ "metadata": {
+ "description": "Operator comparing the current value with the threshold value."
+ }
+ },
+ "threshold": {
+ "type": "int",
+ "defaultValue": 0,
+ "metadata": {
+ "description": "The threshold value at which the alert is activated."
+ }
+ },
+ "timeAggregation": {
+ "type": "string",
+ "defaultValue": "Average",
+ "allowedValues": [
+ "Average",
+ "Minimum",
+ "Maximum",
+ "Total",
+ "Count"
+ ],
+ "metadata": {
+ "description": "How the data that is collected should be combined over time."
+ }
+ },
+ "windowSize": {
+ "type": "string",
+ "defaultValue": "PT5M",
+ "allowedValues": [
+ "PT1M",
+ "PT5M",
+ "PT15M",
+ "PT30M",
+ "PT1H",
+ "PT6H",
+ "PT12H",
+ "PT24H"
+ ],
+ "metadata": {
+ "description": "Period of time used to monitor alert activity based on the threshold. Must be between one minute and one day. ISO 8601 duration format."
+ }
+ },
+ "evaluationFrequency": {
+ "type": "string",
+ "defaultValue": "PT1M",
+ "allowedValues": [
+ "PT1M",
+ "PT5M",
+ "PT15M",
+ "PT30M",
+ "PT1H"
+ ],
+ "metadata": {
+ "description": "How often the metric alert is evaluated represented in ISO 8601 duration format"
+ }
+ },
+ "actionGroupId": {
+ "type": "string",
+ "defaultValue": "",
+ "metadata": {
+ "description": "The ID of the action group that is triggered when the alert is activated or deactivated"
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Insights/metricAlerts",
+ "apiVersion": "2018-03-01",
+ "name": "[parameters('alertName')]",
+ "location": "global",
+ "properties": {
+ "description": "[parameters('alertDescription')]",
+ "severity": "[parameters('alertSeverity')]",
+ "enabled": "[parameters('isEnabled')]",
+ "scopes": [
+ "[parameters('resourceId')]"
+ ],
+ "evaluationFrequency": "[parameters('evaluationFrequency')]",
+ "windowSize": "[parameters('windowSize')]",
+ "criteria": {
+ "odata.type": "Microsoft.Azure.Monitor.SingleResourceMultipleMetricCriteria",
+ "allOf": [
+ {
+ "name": "1st criterion",
+ "metricName": "[parameters('metricName')]",
+ "metricNamespace": "[parameters('metricNamespace')]",
+ "dimensions": [],
+ "operator": "[parameters('operator')]",
+ "threshold": "[parameters('threshold')]",
+ "timeAggregation": "[parameters('timeAggregation')]",
+ "criterionType": "StaticThresholdCriterion"
+ }
+ ]
+ },
+ "actions": [
+ {
+ "actionGroupId": "[parameters('actionGroupId')]"
+ }
+ ]
+ }
+ }
+ ]
+}
+```
+++
+### Parameter file
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "alertName": {
+ "value": "New alert rule on a custom metric"
+ },
+ "alertDescription": {
+ "value": "New alert rule on a custom metric created via template"
+ },
+ "alertSeverity": {
+ "value": 3
+ },
+ "isEnabled": {
+ "value": true
+ },
+ "resourceId": {
+ "value": "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resourceGroup-name/providers/microsoft.insights/components/replace-with-application-insights-resource-name"
+ },
+ "metricName": {
+ "value": "The custom metric name"
+ },
+ "metricNamespace": {
+ "value": "Azure.ApplicationInsights"
+ },
+ "operator": {
+ "value": "GreaterThan"
+ },
+ "threshold": {
+ "value": "80"
+ },
+ "timeAggregation": {
+ "value": "Average"
+ },
+ "actionGroupId": {
+ "value": "/subscriptions/replace-with-subscription-id/resourceGroups/resource-group-name/providers/Microsoft.Insights/actionGroups/replace-with-action-group"
+ }
+ }
+}
+```
+
+> [!NOTE]
+>
+> You can find the metric namespace of a specific custom metric by [browsing your custom metrics via the Azure portal](../essentials/metrics-custom-overview.md#browse-your-custom-metrics-via-the-azure-portal).
+
+## Multiple resources
+
+Azure Monitor supports monitoring multiple resources of the same type with a single metric alert rule, as long as the resources are in the same Azure region. This feature is currently available only in the Azure public cloud, and only for virtual machines, SQL Server databases, SQL Server elastic pools, and Azure Stack Edge devices. It's also limited to platform metrics; custom metrics aren't supported.
+
+A Dynamic Thresholds alert rule can also help create tailored thresholds for hundreds of metric series (even of different types) at a time, which results in fewer alert rules to manage.
+
+This section describes Azure Resource Manager templates for three scenarios that monitor multiple resources with a single rule; a short scopes sketch after the list shows how they differ:
+
+- Monitoring all virtual machines (in one Azure region) in one or more resource groups.
+- Monitoring all virtual machines (in one Azure region) in a subscription.
+- Monitoring a list of virtual machines (in one Azure region) in a subscription.
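+
+The three scenarios differ mainly in the `scopes` the alert rule targets. The following Bicep sketch shows the three forms side by side; the subscription ID, resource group names, and VM names are placeholders.
+
+```bicep
+// Illustrative scopes only; each array below matches one scenario in the list above.
+var subscriptionId = '00000000-0000-0000-0000-000000000000'
+
+// 1. All VMs in one or more resource groups: scope to the resource group IDs.
+var resourceGroupScopes = [
+  '/subscriptions/${subscriptionId}/resourceGroups/rg-prod-1'
+  '/subscriptions/${subscriptionId}/resourceGroups/rg-prod-2'
+]
+
+// 2. All VMs in a subscription: scope to the subscription itself.
+var subscriptionScope = [
+  '/subscriptions/${subscriptionId}'
+]
+
+// 3. A specific list of VMs: scope to each VM's full resource ID.
+var vmListScopes = [
+  '/subscriptions/${subscriptionId}/resourceGroups/rg-prod-1/providers/Microsoft.Compute/virtualMachines/vm-01'
+  '/subscriptions/${subscriptionId}/resourceGroups/rg-prod-1/providers/Microsoft.Compute/virtualMachines/vm-02'
+]
+```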
+
+> [!NOTE]
+>
+> In a metric alert rule that monitors multiple resources, only one condition is allowed.
+
+### Static threshold alert on all virtual machines in one or more resource groups
+
+This template creates a static threshold metric alert rule that monitors Percentage CPU for all virtual machines (in one Azure region) in one or more resource groups.
+
+For this walkthrough, save the JSON version of the template below as all-vms-in-resource-group-static.json.
+
+### Template file
+
+# [Bicep](#tab/bicep)
+
+```bicep
+@description('Name of the alert')
+@minLength(1)
+param alertName string
+
+@description('Description of alert')
+param alertDescription string = 'This is a metric alert'
+
+@description('Severity of alert {0,1,2,3,4}')
+@allowed([
+ 0
+ 1
+ 2
+ 3
+ 4
+])
+param alertSeverity int = 3
+
+@description('Specifies whether the alert is enabled')
+param isEnabled bool = true
+
+@description('Full path of the resource group(s) where target resources to be monitored are in. For example - /subscriptions/00000000-0000-0000-0000-0000-00000000/resourceGroups/ResourceGroupName')
+@minLength(1)
+param targetResourceGroup array
+
+@description('Azure region in which target resources to be monitored are in (without spaces). For example: EastUS')
+@allowed([
+ 'EastUS'
+ 'EastUS2'
+ 'CentralUS'
+ 'NorthCentralUS'
+ 'SouthCentralUS'
+ 'WestCentralUS'
+ 'WestUS'
+ 'WestUS2'
+ 'CanadaEast'
+ 'CanadaCentral'
+ 'BrazilSouth'
+ 'NorthEurope'
+ 'WestEurope'
+ 'FranceCentral'
+ 'FranceSouth'
+ 'UKWest'
+ 'UKSouth'
+ 'GermanyCentral'
+ 'GermanyNortheast'
+ 'GermanyNorth'
+ 'GermanyWestCentral'
+ 'SwitzerlandNorth'
+ 'SwitzerlandWest'
+ 'NorwayEast'
+ 'NorwayWest'
+ 'SoutheastAsia'
+ 'EastAsia'
+ 'AustraliaEast'
+ 'AustraliaSoutheast'
+ 'AustraliaCentral'
+ 'AustraliaCentral2'
+ 'ChinaEast'
+ 'ChinaNorth'
+ 'ChinaEast2'
+ 'ChinaNorth2'
+ 'CentralIndia'
+ 'WestIndia'
+ 'SouthIndia'
+ 'JapanEast'
+ 'JapanWest'
+ 'KoreaCentral'
+ 'KoreaSouth'
+ 'SouthAfricaWest'
+ 'SouthAfricaNorth'
+ 'UAECentral'
+ 'UAENorth'
+])
+param targetResourceRegion string
+
+@description('Resource type of target resources to be monitored.')
+@minLength(1)
+param targetResourceType string
+
+@description('Name of the metric used in the comparison to activate the alert.')
+@minLength(1)
+param metricName string
+
+@description('Operator comparing the current value with the threshold value.')
+@allowed([
+ 'Equals'
+ 'GreaterThan'
+ 'GreaterThanOrEqual'
+ 'LessThan'
+ 'LessThanOrEqual'
+])
+param operator string = 'GreaterThan'
+
+@description('The threshold value at which the alert is activated.')
+param threshold string = '0'
+
+@description('How the data that is collected should be combined over time.')
+@allowed([
+ 'Average'
+ 'Minimum'
+ 'Maximum'
+ 'Total'
+ 'Count'
+])
+param timeAggregation string = 'Average'
+
+@description('Period of time used to monitor alert activity based on the threshold. Must be between one minute and one day. ISO 8601 duration format.')
+@allowed([
+ 'PT1M'
+ 'PT5M'
+ 'PT15M'
+ 'PT30M'
+ 'PT1H'
+ 'PT6H'
+ 'PT12H'
+ 'PT24H'
+])
+param windowSize string = 'PT5M'
+
+@description('how often the metric alert is evaluated represented in ISO 8601 duration format')
+@allowed([
+ 'PT1M'
+ 'PT5M'
+ 'PT15M'
+ 'PT30M'
+])
+param evaluationFrequency string = 'PT1M'
+
+@description('The ID of the action group that is triggered when the alert is activated or deactivated')
+param actionGroupId string = ''
+
+resource metricAlert 'Microsoft.Insights/metricAlerts@2018-03-01' = {
+ name: alertName
+ location: 'global'
+ properties: {
+ description: alertDescription
+ severity: alertSeverity
+ enabled: isEnabled
+ scopes: targetResourceGroup
+ targetResourceType: targetResourceType
+ targetResourceRegion: targetResourceRegion
+ evaluationFrequency: evaluationFrequency
+ windowSize: windowSize
+ criteria: {
+ 'odata.type': 'Microsoft.Azure.Monitor.MultipleResourceMultipleMetricCriteria'
+ allOf: [
+ {
+ name: '1st criterion'
+ metricName: metricName
+ dimensions: []
+ operator: operator
+ threshold: threshold
+ timeAggregation: timeAggregation
+ criterionType: 'StaticThresholdCriterion'
+ }
+ ]
+ }
+ actions: [
+ {
+ actionGroupId: actionGroupId
+ }
+ ]
+ }
+}
+```
+
+# [JSON](#tab/json)
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "alertName": {
+ "type": "string",
+ "minLength": 1,
+ "metadata": {
+ "description": "Name of the alert"
+ }
+ },
+ "alertDescription": {
+ "type": "string",
+ "defaultValue": "This is a metric alert",
+ "metadata": {
+ "description": "Description of alert"
+ }
+ },
+ "alertSeverity": {
+ "type": "int",
+ "defaultValue": 3,
+ "allowedValues": [
+ 0,
+ 1,
+ 2,
+ 3,
+ 4
+ ],
+ "metadata": {
+ "description": "Severity of alert {0,1,2,3,4}"
+ }
+ },
+ "isEnabled": {
+ "type": "bool",
+ "defaultValue": true,
+ "metadata": {
+ "description": "Specifies whether the alert is enabled"
+ }
+ },
+ "targetResourceGroup": {
+ "type": "array",
+ "minLength": 1,
+ "metadata": {
+ "description": "Full path of the resource group(s) where target resources to be monitored are in. For example - /subscriptions/00000000-0000-0000-0000-0000-00000000/resourceGroups/ResourceGroupName"
+ }
+ },
+ "targetResourceRegion": {
+ "type": "string",
+ "allowedValues": [
+ "EastUS",
+ "EastUS2",
+ "CentralUS",
+ "NorthCentralUS",
+ "SouthCentralUS",
+ "WestCentralUS",
+ "WestUS",
+ "WestUS2",
+ "CanadaEast",
+ "CanadaCentral",
+ "BrazilSouth",
+ "NorthEurope",
+ "WestEurope",
+ "FranceCentral",
+ "FranceSouth",
+ "UKWest",
+ "UKSouth",
+ "GermanyCentral",
+ "GermanyNortheast",
+ "GermanyNorth",
+ "GermanyWestCentral",
+ "SwitzerlandNorth",
+ "SwitzerlandWest",
+ "NorwayEast",
+ "NorwayWest",
+ "SoutheastAsia",
+ "EastAsia",
+ "AustraliaEast",
+ "AustraliaSoutheast",
+ "AustraliaCentral",
+ "AustraliaCentral2",
+ "ChinaEast",
+ "ChinaNorth",
+ "ChinaEast2",
+ "ChinaNorth2",
+ "CentralIndia",
+ "WestIndia",
+ "SouthIndia",
+ "JapanEast",
+ "JapanWest",
+ "KoreaCentral",
+ "KoreaSouth",
+ "SouthAfricaWest",
+ "SouthAfricaNorth",
+ "UAECentral",
+ "UAENorth"
+ ],
+ "metadata": {
+ "description": "Azure region in which target resources to be monitored are in (without spaces). For example: EastUS"
+ }
+ },
+ "targetResourceType": {
+ "type": "string",
+ "minLength": 1,
+ "metadata": {
+ "description": "Resource type of target resources to be monitored."
+ }
+ },
+ "metricName": {
+ "type": "string",
+ "minLength": 1,
+ "metadata": {
+ "description": "Name of the metric used in the comparison to activate the alert."
+ }
+ },
+ "operator": {
+ "type": "string",
+ "defaultValue": "GreaterThan",
+ "allowedValues": [
+ "Equals",
+ "GreaterThan",
+ "GreaterThanOrEqual",
+ "LessThan",
+ "LessThanOrEqual"
+ ],
+ "metadata": {
+ "description": "Operator comparing the current value with the threshold value."
+ }
+ },
+ "threshold": {
+ "type": "string",
+ "defaultValue": "0",
+ "metadata": {
+ "description": "The threshold value at which the alert is activated."
+ }
+ },
+ "timeAggregation": {
+ "type": "string",
+ "defaultValue": "Average",
+ "allowedValues": [
+ "Average",
+ "Minimum",
+ "Maximum",
+ "Total",
+ "Count"
+ ],
+ "metadata": {
+ "description": "How the data that is collected should be combined over time."
+ }
+ },
+ "windowSize": {
+ "type": "string",
+ "defaultValue": "PT5M",
+ "allowedValues": [
+ "PT1M",
+ "PT5M",
+ "PT15M",
+ "PT30M",
+ "PT1H",
+ "PT6H",
+ "PT12H",
+ "PT24H"
+ ],
+ "metadata": {
+ "description": "Period of time used to monitor alert activity based on the threshold. Must be between one minute and one day. ISO 8601 duration format."
+ }
+ },
+ "evaluationFrequency": {
+ "type": "string",
+ "defaultValue": "PT1M",
+ "allowedValues": [
+ "PT1M",
+ "PT5M",
+ "PT15M",
+ "PT30M"
+ ],
+ "metadata": {
+ "description": "how often the metric alert is evaluated represented in ISO 8601 duration format"
+ }
+ },
+ "actionGroupId": {
+ "type": "string",
+ "defaultValue": "",
+ "metadata": {
+ "description": "The ID of the action group that is triggered when the alert is activated or deactivated"
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Insights/metricAlerts",
+ "apiVersion": "2018-03-01",
+ "name": "[parameters('alertName')]",
+ "location": "global",
+ "properties": {
+ "description": "[parameters('alertDescription')]",
+ "severity": "[parameters('alertSeverity')]",
+ "enabled": "[parameters('isEnabled')]",
+ "scopes": "[parameters('targetResourceGroup')]",
+ "targetResourceType": "[parameters('targetResourceType')]",
+ "targetResourceRegion": "[parameters('targetResourceRegion')]",
+ "evaluationFrequency": "[parameters('evaluationFrequency')]",
+ "windowSize": "[parameters('windowSize')]",
+ "criteria": {
+ "odata.type": "Microsoft.Azure.Monitor.MultipleResourceMultipleMetricCriteria",
+ "allOf": [
+ {
+ "name": "1st criterion",
+ "metricName": "[parameters('metricName')]",
+ "dimensions": [],
+ "operator": "[parameters('operator')]",
+ "threshold": "[parameters('threshold')]",
+ "timeAggregation": "[parameters('timeAggregation')]",
+ "criterionType": "StaticThresholdCriterion"
+ }
+ ]
+ },
+ "actions": [
+ {
+ "actionGroupId": "[parameters('actionGroupId')]"
+ }
+ ]
+ }
+ }
+ ]
+}
+```
+++
+### Parameter file
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "alertName": {
+ "value": "Multi-resource metric alert via Azure Resource Manager template"
+ },
+ "alertDescription": {
+ "value": "New Multi-resource metric alert created via template"
+ },
+ "alertSeverity": {
+ "value": 3
+ },
+ "isEnabled": {
+ "value": true
+ },
+ "targetResourceGroup": {
+ "value": [
+ "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resource-group-name1",
+ "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resource-group-name2"
+ ]
+ },
+ "targetResourceRegion": {
+ "value": "SouthCentralUS"
+ },
+ "targetResourceType": {
+ "value": "Microsoft.Compute/virtualMachines"
+ },
+ "metricName": {
+ "value": "Percentage CPU"
+ },
+ "operator": {
+ "value": "GreaterThan"
+ },
+ "threshold": {
+ "value": "0"
+ },
+ "timeAggregation": {
+ "value": "Average"
+ },
+ "actionGroupId": {
+ "value": "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resource-group-name/providers/Microsoft.Insights/actionGroups/replace-with-action-group-name"
+ }
+ }
+}
+```
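+
+If you save the Bicep version of this template as its own file, you can also consume it as a module instead of pairing the JSON template with a parameter file. The file name and all parameter values in this sketch are assumptions for illustration only.
+
+```bicep
+// Assumes the Bicep template above is saved as 'all-vms-in-resource-group-static.bicep'
+// alongside this file; every value shown is a placeholder.
+module vmCpuAlert 'all-vms-in-resource-group-static.bicep' = {
+  name: 'deploy-vm-cpu-alert'
+  params: {
+    alertName: 'Percentage CPU on all VMs in rg-prod-1 and rg-prod-2'
+    targetResourceGroup: [
+      '/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-prod-1'
+      '/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-prod-2'
+    ]
+    targetResourceRegion: 'SouthCentralUS'
+    targetResourceType: 'Microsoft.Compute/virtualMachines'
+    metricName: 'Percentage CPU'
+    threshold: '80'
+    actionGroupId: '/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-ops/providers/Microsoft.Insights/actionGroups/ops-action-group'
+  }
+}
+```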
+
+### Dynamic Thresholds alert on all virtual machines in one or more resource groups
+
+This sample creates a Dynamic Thresholds metric alert rule that monitors Percentage CPU for all virtual machines (in one Azure region) in one or more resource groups.
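+
+Compared with the static threshold template above, only the criterion changes: the fixed `threshold` is replaced by Dynamic Thresholds settings, roughly as in this sketch (all values are examples).
+
+```bicep
+// Sketch of the Dynamic Thresholds criterion used in the template below.
+var dynamicCpuCriterion = {
+  criterionType: 'DynamicThresholdCriterion'
+  name: '1st criterion'
+  metricName: 'Percentage CPU'
+  dimensions: []
+  operator: 'GreaterOrLessThan'   // alert on deviations in either direction
+  alertSensitivity: 'Medium'      // Low / Medium / High
+  failingPeriods: {
+    numberOfEvaluationPeriods: 4
+    minFailingPeriodsToAlert: 3
+  }
+  timeAggregation: 'Average'
+}
+```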
+
+### Template file
+
+# [Bicep](#tab/bicep)
+
+```bicep
+@description('Name of the alert')
+@minLength(1)
+param alertName string
+
+@description('Description of alert')
+param alertDescription string = 'This is a metric alert'
+
+@description('Severity of alert {0,1,2,3,4}')
+@allowed([
+ 0
+ 1
+ 2
+ 3
+ 4
+])
+param alertSeverity int = 3
+
+@description('Specifies whether the alert is enabled')
+param isEnabled bool = true
+
+@description('Full path of the resource group(s) where target resources to be monitored are in. For example - /subscriptions/00000000-0000-0000-0000-0000-00000000/resourceGroups/ResourceGroupName')
+@minLength(1)
+param targetResourceGroup array
+
+@description('Azure region in which target resources to be monitored are in (without spaces). For example: EastUS')
+@allowed([
+ 'EastUS'
+ 'EastUS2'
+ 'CentralUS'
+ 'NorthCentralUS'
+ 'SouthCentralUS'
+ 'WestCentralUS'
+ 'WestUS'
+ 'WestUS2'
+ 'CanadaEast'
+ 'CanadaCentral'
+ 'BrazilSouth'
+ 'NorthEurope'
+ 'WestEurope'
+ 'FranceCentral'
+ 'FranceSouth'
+ 'UKWest'
+ 'UKSouth'
+ 'GermanyCentral'
+ 'GermanyNortheast'
+ 'GermanyNorth'
+ 'GermanyWestCentral'
+ 'SwitzerlandNorth'
+ 'SwitzerlandWest'
+ 'NorwayEast'
+ 'NorwayWest'
+ 'SoutheastAsia'
+ 'EastAsia'
+ 'AustraliaEast'
+ 'AustraliaSoutheast'
+ 'AustraliaCentral'
+ 'AustraliaCentral2'
+ 'ChinaEast'
+ 'ChinaNorth'
+ 'ChinaEast2'
+ 'ChinaNorth2'
+ 'CentralIndia'
+ 'WestIndia'
+ 'SouthIndia'
+ 'JapanEast'
+ 'JapanWest'
+ 'KoreaCentral'
+ 'KoreaSouth'
+ 'SouthAfricaWest'
+ 'SouthAfricaNorth'
+ 'UAECentral'
+ 'UAENorth'
+])
+param targetResourceRegion string
+
+@description('Resource type of target resources to be monitored.')
+@minLength(1)
+param targetResourceType string
+
+@description('Name of the metric used in the comparison to activate the alert.')
+@minLength(1)
+param metricName string
+
+@description('Operator comparing the current value with the threshold value.')
+@allowed([
+ 'GreaterThan'
+ 'LessThan'
+ 'GreaterOrLessThan'
+])
+param operator string = 'GreaterOrLessThan'
+
+@description('Tunes how \'noisy\' the Dynamic Thresholds alerts will be: \'High\' will result in more alerts while \'Low\' will result in fewer alerts.')
+@allowed([
+ 'High'
+ 'Medium'
+ 'Low'
+])
+param alertSensitivity string = 'Medium'
+
+@description('The number of periods to check in the alert evaluation.')
+param numberOfEvaluationPeriods int = 4
+
+@description('The number of unhealthy periods to alert on (must be lower or equal to numberOfEvaluationPeriods).')
+param minFailingPeriodsToAlert int = 3
+
+@description('How the data that is collected should be combined over time.')
+@allowed([
+ 'Average'
+ 'Minimum'
+ 'Maximum'
+ 'Total'
+ 'Count'
+])
+param timeAggregation string = 'Average'
+
+@description('Period of time used to monitor alert activity based on the threshold. Must be between five minutes and one hour. ISO 8601 duration format.')
+@allowed([
+ 'PT5M'
+ 'PT15M'
+ 'PT30M'
+ 'PT1H'
+])
+param windowSize string = 'PT5M'
+
+@description('how often the metric alert is evaluated represented in ISO 8601 duration format')
+@allowed([
+ 'PT5M'
+ 'PT15M'
+ 'PT30M'
+ 'PT1H'
+])
+param evaluationFrequency string = 'PT5M'
+
+@description('The ID of the action group that is triggered when the alert is activated or deactivated')
+param actionGroupId string = ''
+
+resource metricAlert 'Microsoft.Insights/metricAlerts@2018-03-01' = {
+ name: alertName
+ location: 'global'
+ properties: {
+ description: alertDescription
+ severity: alertSeverity
+ enabled: isEnabled
+ scopes: targetResourceGroup
+ targetResourceType: targetResourceType
+ targetResourceRegion: targetResourceRegion
+ evaluationFrequency: evaluationFrequency
+ windowSize: windowSize
+ criteria: {
+ 'odata.type': 'Microsoft.Azure.Monitor.MultipleResourceMultipleMetricCriteria'
+ allOf: [
+ {
+ criterionType: 'DynamicThresholdCriterion'
+ name: '1st criterion'
+ metricName: metricName
+ dimensions: []
+ operator: operator
+ alertSensitivity: alertSensitivity
+ failingPeriods: {
+ numberOfEvaluationPeriods: numberOfEvaluationPeriods
+ minFailingPeriodsToAlert: minFailingPeriodsToAlert
+ }
+ timeAggregation: timeAggregation
+ }
+ ]
+ }
+ actions: [
+ {
+ actionGroupId: actionGroupId
+ }
+ ]
+ }
+}
+```
+
+# [JSON](#tab/json)
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "alertName": {
+ "type": "string",
+ "minLength": 1,
+ "metadata": {
+ "description": "Name of the alert"
+ }
+ },
+ "alertDescription": {
+ "type": "string",
+ "defaultValue": "This is a metric alert",
+ "metadata": {
+ "description": "Description of alert"
+ }
+ },
+ "alertSeverity": {
+ "type": "int",
+ "defaultValue": 3,
+ "allowedValues": [
+ 0,
+ 1,
+ 2,
+ 3,
+ 4
+ ],
+ "metadata": {
+ "description": "Severity of alert {0,1,2,3,4}"
+ }
+ },
+ "isEnabled": {
+ "type": "bool",
+ "defaultValue": true,
+ "metadata": {
+ "description": "Specifies whether the alert is enabled"
+ }
+ },
+ "targetResourceGroup": {
+ "type": "array",
+ "minLength": 1,
+ "metadata": {
+ "description": "Full path of the resource group(s) where target resources to be monitored are in. For example - /subscriptions/00000000-0000-0000-0000-0000-00000000/resourceGroups/ResourceGroupName"
+ }
+ },
+ "targetResourceRegion": {
+ "type": "string",
+ "allowedValues": [
+ "EastUS",
+ "EastUS2",
+ "CentralUS",
+ "NorthCentralUS",
+ "SouthCentralUS",
+ "WestCentralUS",
+ "WestUS",
+ "WestUS2",
+ "CanadaEast",
+ "CanadaCentral",
+ "BrazilSouth",
+ "NorthEurope",
+ "WestEurope",
+ "FranceCentral",
+ "FranceSouth",
+ "UKWest",
+ "UKSouth",
+ "GermanyCentral",
+ "GermanyNortheast",
+ "GermanyNorth",
+ "GermanyWestCentral",
+ "SwitzerlandNorth",
+ "SwitzerlandWest",
+ "NorwayEast",
+ "NorwayWest",
+ "SoutheastAsia",
+ "EastAsia",
+ "AustraliaEast",
+ "AustraliaSoutheast",
+ "AustraliaCentral",
+ "AustraliaCentral2",
+ "ChinaEast",
+ "ChinaNorth",
+ "ChinaEast2",
+ "ChinaNorth2",
+ "CentralIndia",
+ "WestIndia",
+ "SouthIndia",
+ "JapanEast",
+ "JapanWest",
+ "KoreaCentral",
+ "KoreaSouth",
+ "SouthAfricaWest",
+ "SouthAfricaNorth",
+ "UAECentral",
+ "UAENorth"
+ ],
+ "metadata": {
+ "description": "Azure region in which target resources to be monitored are in (without spaces). For example: EastUS"
+ }
+ },
+ "targetResourceType": {
+ "type": "string",
+ "minLength": 1,
+ "metadata": {
+ "description": "Resource type of target resources to be monitored."
+ }
+ },
+ "metricName": {
+ "type": "string",
+ "minLength": 1,
+ "metadata": {
+ "description": "Name of the metric used in the comparison to activate the alert."
+ }
+ },
+ "operator": {
+ "type": "string",
+ "defaultValue": "GreaterOrLessThan",
+ "allowedValues": [
+ "GreaterThan",
+ "LessThan",
+ "GreaterOrLessThan"
+ ],
+ "metadata": {
+ "description": "Operator comparing the current value with the threshold value."
+ }
+ },
+ "alertSensitivity": {
+ "type": "string",
+ "defaultValue": "Medium",
+ "allowedValues": [
+ "High",
+ "Medium",
+ "Low"
+ ],
+ "metadata": {
+ "description": "Tunes how 'noisy' the Dynamic Thresholds alerts will be: 'High' will result in more alerts while 'Low' will result in fewer alerts."
+ }
+ },
+ "numberOfEvaluationPeriods": {
+ "type": "int",
+ "defaultValue": 4,
+ "metadata": {
+ "description": "The number of periods to check in the alert evaluation."
+ }
+ },
+ "minFailingPeriodsToAlert": {
+ "type": "int",
+ "defaultValue": 3,
+ "metadata": {
+ "description": "The number of unhealthy periods to alert on (must be lower or equal to numberOfEvaluationPeriods)."
+ }
+ },
+ "timeAggregation": {
+ "type": "string",
+ "defaultValue": "Average",
+ "allowedValues": [
+ "Average",
+ "Minimum",
+ "Maximum",
+ "Total",
+ "Count"
+ ],
+ "metadata": {
+ "description": "How the data that is collected should be combined over time."
+ }
+ },
+ "windowSize": {
+ "type": "string",
+ "defaultValue": "PT5M",
+ "allowedValues": [
+ "PT5M",
+ "PT15M",
+ "PT30M",
+ "PT1H"
+ ],
+ "metadata": {
+ "description": "Period of time used to monitor alert activity based on the threshold. Must be between five minutes and one hour. ISO 8601 duration format."
+ }
+ },
+ "evaluationFrequency": {
+ "type": "string",
+ "defaultValue": "PT5M",
+ "allowedValues": [
+ "PT5M",
+ "PT15M",
+ "PT30M",
+ "PT1H"
+ ],
+ "metadata": {
+ "description": "how often the metric alert is evaluated represented in ISO 8601 duration format"
+ }
+ },
+ "actionGroupId": {
+ "type": "string",
+ "defaultValue": "",
+ "metadata": {
+ "description": "The ID of the action group that is triggered when the alert is activated or deactivated"
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Insights/metricAlerts",
+ "apiVersion": "2018-03-01",
+ "name": "[parameters('alertName')]",
+ "location": "global",
+ "properties": {
+ "description": "[parameters('alertDescription')]",
+ "severity": "[parameters('alertSeverity')]",
+ "enabled": "[parameters('isEnabled')]",
+ "scopes": "[parameters('targetResourceGroup')]",
+ "targetResourceType": "[parameters('targetResourceType')]",
+ "targetResourceRegion": "[parameters('targetResourceRegion')]",
+ "evaluationFrequency": "[parameters('evaluationFrequency')]",
+ "windowSize": "[parameters('windowSize')]",
+ "criteria": {
+ "odata.type": "Microsoft.Azure.Monitor.MultipleResourceMultipleMetricCriteria",
+ "allOf": [
+ {
+ "criterionType": "DynamicThresholdCriterion",
+ "name": "1st criterion",
+ "metricName": "[parameters('metricName')]",
+ "dimensions": [],
+ "operator": "[parameters('operator')]",
+ "alertSensitivity": "[parameters('alertSensitivity')]",
+ "failingPeriods": {
+ "numberOfEvaluationPeriods": "[parameters('numberOfEvaluationPeriods')]",
+ "minFailingPeriodsToAlert": "[parameters('minFailingPeriodsToAlert')]"
+ },
+ "timeAggregation": "[parameters('timeAggregation')]"
+ }
+ ]
+ },
+ "actions": [
+ {
+ "actionGroupId": "[parameters('actionGroupId')]"
+ }
+ ]
+ }
+ }
+ ]
+}
+```
+++
+### Parameter file
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "alertName": {
+ "value": "Multi-resource metric alert with Dynamic Thresholds via Azure Resource Manager template"
+ },
+ "alertDescription": {
+ "value": "New Multi-resource metric alert with Dynamic Thresholds created via template"
+ },
+ "alertSeverity": {
+ "value": 3
+ },
+ "isEnabled": {
+ "value": true
+ },
+ "targetResourceGroup": {
+ "value": [
+ "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resource-group-name1",
+ "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resource-group-name2"
+ ]
+ },
+ "targetResourceRegion": {
+ "value": "SouthCentralUS"
+ },
+ "targetResourceType": {
+ "value": "Microsoft.Compute/virtualMachines"
+ },
+ "metricName": {
+ "value": "Percentage CPU"
+ },
+ "operator": {
+ "value": "GreaterOrLessThan"
+ },
+ "alertSensitivity": {
+ "value": "Medium"
+ },
+ "numberOfEvaluationPeriods": {
+ "value": "4"
+ },
+ "minFailingPeriodsToAlert": {
+ "value": "3"
+ },
+ "timeAggregation": {
+ "value": "Average"
+ },
+ "actionGroupId": {
+ "value": "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resource-group-name/providers/Microsoft.Insights/actionGroups/replace-with-action-group-name"
+ }
+ }
+}
+```
+
+### Static threshold alert on all virtual machines in a subscription
+
+This sample creates a static threshold metric alert rule that monitors Percentage CPU for all virtual machines in one Azure region in a subscription.
+
+### Template file
+
+# [Bicep](#tab/bicep)
+
+```bicep
+@description('Name of the alert')
+@minLength(1)
+param alertName string
+
+@description('Description of alert')
+param alertDescription string = 'This is a metric alert'
+
+@description('Severity of alert {0,1,2,3,4}')
+@allowed([
+ 0
+ 1
+ 2
+ 3
+ 4
+])
+param alertSeverity int = 3
+
+@description('Specifies whether the alert is enabled')
+param isEnabled bool = true
+
+@description('Azure Resource Manager path up to subscription ID. For example - /subscriptions/00000000-0000-0000-0000-0000-00000000')
+@minLength(1)
+param targetSubscription string
+
+@description('Azure region in which target resources to be monitored are in (without spaces). For example: EastUS')
+@allowed([
+ 'EastUS'
+ 'EastUS2'
+ 'CentralUS'
+ 'NorthCentralUS'
+ 'SouthCentralUS'
+ 'WestCentralUS'
+ 'WestUS'
+ 'WestUS2'
+ 'CanadaEast'
+ 'CanadaCentral'
+ 'BrazilSouth'
+ 'NorthEurope'
+ 'WestEurope'
+ 'FranceCentral'
+ 'FranceSouth'
+ 'UKWest'
+ 'UKSouth'
+ 'GermanyCentral'
+ 'GermanyNortheast'
+ 'GermanyNorth'
+ 'GermanyWestCentral'
+ 'SwitzerlandNorth'
+ 'SwitzerlandWest'
+ 'NorwayEast'
+ 'NorwayWest'
+ 'SoutheastAsia'
+ 'EastAsia'
+ 'AustraliaEast'
+ 'AustraliaSoutheast'
+ 'AustraliaCentral'
+ 'AustraliaCentral2'
+ 'ChinaEast'
+ 'ChinaNorth'
+ 'ChinaEast2'
+ 'ChinaNorth2'
+ 'CentralIndia'
+ 'WestIndia'
+ 'SouthIndia'
+ 'JapanEast'
+ 'JapanWest'
+ 'KoreaCentral'
+ 'KoreaSouth'
+ 'SouthAfricaWest'
+ 'SouthAfricaNorth'
+ 'UAECentral'
+ 'UAENorth'
+])
+param targetResourceRegion string
+
+@description('Resource type of target resources to be monitored.')
+@minLength(1)
+param targetResourceType string
+
+@description('Name of the metric used in the comparison to activate the alert.')
+@minLength(1)
+param metricName string
+
+@description('Operator comparing the current value with the threshold value.')
+@allowed([
+ 'Equals'
+ 'GreaterThan'
+ 'GreaterThanOrEqual'
+ 'LessThan'
+ 'LessThanOrEqual'
+])
+param operator string = 'GreaterThan'
+
+@description('The threshold value at which the alert is activated.')
+param threshold string = '0'
+
+@description('How the data that is collected should be combined over time.')
+@allowed([
+ 'Average'
+ 'Minimum'
+ 'Maximum'
+ 'Total'
+ 'Count'
+])
+param timeAggregation string = 'Average'
+
+@description('Period of time used to monitor alert activity based on the threshold. Must be between one minute and one day. ISO 8601 duration format.')
+@allowed([
+ 'PT1M'
+ 'PT5M'
+ 'PT15M'
+ 'PT30M'
+ 'PT1H'
+ 'PT6H'
+ 'PT12H'
+ 'PT24H'
+])
+param windowSize string = 'PT5M'
+
+@description('how often the metric alert is evaluated represented in ISO 8601 duration format')
+@allowed([
+ 'PT1M'
+ 'PT5M'
+ 'PT15M'
+ 'PT30M'
+ 'PT1H'
+])
+param evaluationFrequency string = 'PT1M'
+
+@description('The ID of the action group that is triggered when the alert is activated or deactivated')
+param actionGroupId string = ''
+
+resource metricAlert 'Microsoft.Insights/metricAlerts@2018-03-01' = {
+ name: alertName
+ location: 'global'
+ properties: {
+ description: alertDescription
+ severity: alertSeverity
+ enabled: isEnabled
+ scopes: [
+ targetSubscription
+ ]
+ targetResourceType: targetResourceType
+ targetResourceRegion: targetResourceRegion
+ evaluationFrequency: evaluationFrequency
+ windowSize: windowSize
+ criteria: {
+ 'odata.type': 'Microsoft.Azure.Monitor.MultipleResourceMultipleMetricCriteria'
+ allOf: [
+ {
+ name: '1st criterion'
+ metricName: metricName
+ dimensions: []
+ operator: operator
+ threshold: threshold
+ timeAggregation: timeAggregation
+ criterionType: 'StaticThresholdCriterion'
+ }
+ ]
+ }
+ actions: [
+ {
+ actionGroupId: actionGroupId
+ }
+ ]
+ }
+}
+```
+
+# [JSON](#tab/json)
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "alertName": {
+ "type": "string",
+ "minLength": 1,
+ "metadata": {
+ "description": "Name of the alert"
+ }
+ },
+ "alertDescription": {
+ "type": "string",
+ "defaultValue": "This is a metric alert",
+ "metadata": {
+ "description": "Description of alert"
+ }
+ },
+ "alertSeverity": {
+ "type": "int",
+ "defaultValue": 3,
+ "allowedValues": [
+ 0,
+ 1,
+ 2,
+ 3,
+ 4
+ ],
+ "metadata": {
+ "description": "Severity of alert {0,1,2,3,4}"
+ }
+ },
+ "isEnabled": {
+ "type": "bool",
+ "defaultValue": true,
+ "metadata": {
+ "description": "Specifies whether the alert is enabled"
+ }
+ },
+ "targetSubscription": {
+ "type": "string",
+ "minLength": 1,
+ "metadata": {
+ "description": "Azure Resource Manager path up to subscription ID. For example - /subscriptions/00000000-0000-0000-0000-0000-00000000"
+ }
+ },
+ "targetResourceRegion": {
+ "type": "string",
+ "allowedValues": [
+ "EastUS",
+ "EastUS2",
+ "CentralUS",
+ "NorthCentralUS",
+ "SouthCentralUS",
+ "WestCentralUS",
+ "WestUS",
+ "WestUS2",
+ "CanadaEast",
+ "CanadaCentral",
+ "BrazilSouth",
+ "NorthEurope",
+ "WestEurope",
+ "FranceCentral",
+ "FranceSouth",
+ "UKWest",
+ "UKSouth",
+ "GermanyCentral",
+ "GermanyNortheast",
+ "GermanyNorth",
+ "GermanyWestCentral",
+ "SwitzerlandNorth",
+ "SwitzerlandWest",
+ "NorwayEast",
+ "NorwayWest",
+ "SoutheastAsia",
+ "EastAsia",
+ "AustraliaEast",
+ "AustraliaSoutheast",
+ "AustraliaCentral",
+ "AustraliaCentral2",
+ "ChinaEast",
+ "ChinaNorth",
+ "ChinaEast2",
+ "ChinaNorth2",
+ "CentralIndia",
+ "WestIndia",
+ "SouthIndia",
+ "JapanEast",
+ "JapanWest",
+ "KoreaCentral",
+ "KoreaSouth",
+ "SouthAfricaWest",
+ "SouthAfricaNorth",
+ "UAECentral",
+ "UAENorth"
+ ],
+ "metadata": {
+ "description": "Azure region in which target resources to be monitored are in (without spaces). For example: EastUS"
+ }
+ },
+ "targetResourceType": {
+ "type": "string",
+ "minLength": 1,
+ "metadata": {
+ "description": "Resource type of target resources to be monitored."
+ }
+ },
+ "metricName": {
+ "type": "string",
+ "minLength": 1,
+ "metadata": {
+ "description": "Name of the metric used in the comparison to activate the alert."
+ }
+ },
+ "operator": {
+ "type": "string",
+ "defaultValue": "GreaterThan",
+ "allowedValues": [
+ "Equals",
+ "GreaterThan",
+ "GreaterThanOrEqual",
+ "LessThan",
+ "LessThanOrEqual"
+ ],
+ "metadata": {
+ "description": "Operator comparing the current value with the threshold value."
+ }
+ },
+ "threshold": {
+ "type": "string",
+ "defaultValue": "0",
+ "metadata": {
+ "description": "The threshold value at which the alert is activated."
+ }
+ },
+ "timeAggregation": {
+ "type": "string",
+ "defaultValue": "Average",
+ "allowedValues": [
+ "Average",
+ "Minimum",
+ "Maximum",
+ "Total",
+ "Count"
+ ],
+ "metadata": {
+ "description": "How the data that is collected should be combined over time."
+ }
+ },
+ "windowSize": {
+ "type": "string",
+ "defaultValue": "PT5M",
+ "allowedValues": [
+ "PT1M",
+ "PT5M",
+ "PT15M",
+ "PT30M",
+ "PT1H",
+ "PT6H",
+ "PT12H",
+ "PT24H"
+ ],
+ "metadata": {
+ "description": "Period of time used to monitor alert activity based on the threshold. Must be between one minute and one day. ISO 8601 duration format."
+ }
+ },
+ "evaluationFrequency": {
+ "type": "string",
+ "defaultValue": "PT1M",
+ "allowedValues": [
+ "PT1M",
+ "PT5M",
+ "PT15M",
+ "PT30M",
+ "PT1H"
+ ],
+ "metadata": {
+ "description": "how often the metric alert is evaluated represented in ISO 8601 duration format"
+ }
+ },
+ "actionGroupId": {
+ "type": "string",
+ "defaultValue": "",
+ "metadata": {
+ "description": "The ID of the action group that is triggered when the alert is activated or deactivated"
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Insights/metricAlerts",
+ "apiVersion": "2018-03-01",
+ "name": "[parameters('alertName')]",
+ "location": "global",
+ "properties": {
+ "description": "[parameters('alertDescription')]",
+ "severity": "[parameters('alertSeverity')]",
+ "enabled": "[parameters('isEnabled')]",
+ "scopes": [
+ "[parameters('targetSubscription')]"
+ ],
+ "targetResourceType": "[parameters('targetResourceType')]",
+ "targetResourceRegion": "[parameters('targetResourceRegion')]",
+ "evaluationFrequency": "[parameters('evaluationFrequency')]",
+ "windowSize": "[parameters('windowSize')]",
+ "criteria": {
+ "odata.type": "Microsoft.Azure.Monitor.MultipleResourceMultipleMetricCriteria",
+ "allOf": [
+ {
+ "name": "1st criterion",
+ "metricName": "[parameters('metricName')]",
+ "dimensions": [],
+ "operator": "[parameters('operator')]",
+ "threshold": "[parameters('threshold')]",
+ "timeAggregation": "[parameters('timeAggregation')]",
+ "criterionType": "StaticThresholdCriterion"
+ }
+ ]
+ },
+ "actions": [
+ {
+ "actionGroupId": "[parameters('actionGroupId')]"
+ }
+ ]
+ }
+ }
+ ]
+}
+```
+++
+### Parameter file
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "alertName": {
+ "value": "Multi-resource sub level metric alert via Azure Resource Manager template"
+ },
+ "alertDescription": {
+ "value": "New Multi-resource sub level metric alert created via template"
+ },
+ "alertSeverity": {
+ "value": 3
+ },
+ "isEnabled": {
+ "value": true
+ },
+ "targetSubscription": {
+ "value": "/subscriptions/replace-with-subscription-id"
+ },
+ "targetResourceRegion": {
+ "value": "SouthCentralUS"
+ },
+ "targetResourceType": {
+ "value": "Microsoft.Compute/virtualMachines"
+ },
+ "metricName": {
+ "value": "Percentage CPU"
+ },
+ "operator": {
+ "value": "GreaterThan"
+ },
+ "threshold": {
+ "value": "0"
+ },
+ "timeAggregation": {
+ "value": "Average"
+ },
+ "actionGroupId": {
+ "value": "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resource-group-name/providers/Microsoft.Insights/actionGroups/replace-with-action-group-name"
+ }
+ }
+}
+```
+
+### Dynamic Thresholds alert on all virtual machines in a subscription
+
+This sample creates a Dynamic Thresholds metric alert rule that monitors Percentage CPU for all virtual machines (in one Azure region) in a subscription.
+
+### Template file
+
+# [Bicep](#tab/bicep)
+
+```bicep
+@description('Name of the alert')
+@minLength(1)
+param alertName string
+
+@description('Description of alert')
+param alertDescription string = 'This is a metric alert'
+
+@description('Severity of alert {0,1,2,3,4}')
+@allowed([
+ 0
+ 1
+ 2
+ 3
+ 4
+])
+param alertSeverity int = 3
+
+@description('Specifies whether the alert is enabled')
+param isEnabled bool = true
+
+@description('Azure Resource Manager path up to subscription ID. For example - /subscriptions/00000000-0000-0000-0000-0000-00000000')
+@minLength(1)
+param targetSubscription string
+
+@description('Azure region in which target resources to be monitored are in (without spaces). For example: EastUS')
+@allowed([
+ 'EastUS'
+ 'EastUS2'
+ 'CentralUS'
+ 'NorthCentralUS'
+ 'SouthCentralUS'
+ 'WestCentralUS'
+ 'WestUS'
+ 'WestUS2'
+ 'CanadaEast'
+ 'CanadaCentral'
+ 'BrazilSouth'
+ 'NorthEurope'
+ 'WestEurope'
+ 'FranceCentral'
+ 'FranceSouth'
+ 'UKWest'
+ 'UKSouth'
+ 'GermanyCentral'
+ 'GermanyNortheast'
+ 'GermanyNorth'
+ 'GermanyWestCentral'
+ 'SwitzerlandNorth'
+ 'SwitzerlandWest'
+ 'NorwayEast'
+ 'NorwayWest'
+ 'SoutheastAsia'
+ 'EastAsia'
+ 'AustraliaEast'
+ 'AustraliaSoutheast'
+ 'AustraliaCentral'
+ 'AustraliaCentral2'
+ 'ChinaEast'
+ 'ChinaNorth'
+ 'ChinaEast2'
+ 'ChinaNorth2'
+ 'CentralIndia'
+ 'WestIndia'
+ 'SouthIndia'
+ 'JapanEast'
+ 'JapanWest'
+ 'KoreaCentral'
+ 'KoreaSouth'
+ 'SouthAfricaWest'
+ 'SouthAfricaNorth'
+ 'UAECentral'
+ 'UAENorth'
+])
+param targetResourceRegion string
+
+@description('Resource type of target resources to be monitored.')
+@minLength(1)
+param targetResourceType string
+
+@description('Name of the metric used in the comparison to activate the alert.')
+@minLength(1)
+param metricName string
+
+@description('Operator comparing the current value with the threshold value.')
+@allowed([
+ 'GreaterThan'
+ 'LessThan'
+ 'GreaterOrLessThan'
+])
+param operator string = 'GreaterOrLessThan'
+
+@description('Tunes how \'noisy\' the Dynamic Thresholds alerts will be: \'High\' will result in more alerts while \'Low\' will result in fewer alerts.')
+@allowed([
+ 'High'
+ 'Medium'
+ 'Low'
+])
+param alertSensitivity string = 'Medium'
+
+@description('The number of periods to check in the alert evaluation.')
+param numberOfEvaluationPeriods int = 4
+
+@description('The number of unhealthy periods to alert on (must be lower or equal to numberOfEvaluationPeriods).')
+param minFailingPeriodsToAlert int = 3
+
+@description('How the data that is collected should be combined over time.')
+@allowed([
+ 'Average'
+ 'Minimum'
+ 'Maximum'
+ 'Total'
+ 'Count'
+])
+param timeAggregation string = 'Average'
+
+@description('Period of time used to monitor alert activity based on the threshold. Must be between five minutes and one hour. ISO 8601 duration format.')
+@allowed([
+ 'PT5M'
+ 'PT15M'
+ 'PT30M'
+ 'PT1H'
+])
+param windowSize string = 'PT5M'
+
+@description('how often the metric alert is evaluated represented in ISO 8601 duration format')
+@allowed([
+ 'PT5M'
+ 'PT15M'
+ 'PT30M'
+ 'PT1H'
+])
+param evaluationFrequency string = 'PT5M'
+
+@description('The ID of the action group that is triggered when the alert is activated or deactivated')
+param actionGroupId string = ''
+
+resource metricAlert 'Microsoft.Insights/metricAlerts@2018-03-01' = {
+ name: alertName
+ location: 'global'
+ properties: {
+ description: alertDescription
+ severity: alertSeverity
+ enabled: isEnabled
+ scopes: [
+ targetSubscription
+ ]
+ targetResourceType: targetResourceType
+ targetResourceRegion: targetResourceRegion
+ evaluationFrequency: evaluationFrequency
+ windowSize: windowSize
+ criteria: {
+ 'odata.type': 'Microsoft.Azure.Monitor.MultipleResourceMultipleMetricCriteria'
+ allOf: [
+ {
+ criterionType: 'DynamicThresholdCriterion'
+ name: '1st criterion'
+ metricName: metricName
+ dimensions: []
+ operator: operator
+ alertSensitivity: alertSensitivity
+ failingPeriods: {
+ numberOfEvaluationPeriods: numberOfEvaluationPeriods
+ minFailingPeriodsToAlert: minFailingPeriodsToAlert
+ }
+ timeAggregation: timeAggregation
+ }
+ ]
+ }
+ actions: [
+ {
+ actionGroupId: actionGroupId
+ }
+ ]
+ }
+}
+```
+
+# [JSON](#tab/json)
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "alertName": {
+ "type": "string",
+ "minLength": 1,
+ "metadata": {
+ "description": "Name of the alert"
+ }
+ },
+ "alertDescription": {
+ "type": "string",
+ "defaultValue": "This is a metric alert",
+ "metadata": {
+ "description": "Description of alert"
+ }
+ },
+ "alertSeverity": {
+ "type": "int",
+ "defaultValue": 3,
+ "allowedValues": [
+ 0,
+ 1,
+ 2,
+ 3,
+ 4
+ ],
+ "metadata": {
+ "description": "Severity of alert {0,1,2,3,4}"
+ }
+ },
+ "isEnabled": {
+ "type": "bool",
+ "defaultValue": true,
+ "metadata": {
+ "description": "Specifies whether the alert is enabled"
+ }
+ },
+ "targetSubscription": {
+ "type": "string",
+ "minLength": 1,
+ "metadata": {
+ "description": "Azure Resource Manager path up to subscription ID. For example - /subscriptions/00000000-0000-0000-0000-0000-00000000"
+ }
+ },
+ "targetResourceRegion": {
+ "type": "string",
+ "allowedValues": [
+ "EastUS",
+ "EastUS2",
+ "CentralUS",
+ "NorthCentralUS",
+ "SouthCentralUS",
+ "WestCentralUS",
+ "WestUS",
+ "WestUS2",
+ "CanadaEast",
+ "CanadaCentral",
+ "BrazilSouth",
+ "NorthEurope",
+ "WestEurope",
+ "FranceCentral",
+ "FranceSouth",
+ "UKWest",
+ "UKSouth",
+ "GermanyCentral",
+ "GermanyNortheast",
+ "GermanyNorth",
+ "GermanyWestCentral",
+ "SwitzerlandNorth",
+ "SwitzerlandWest",
+ "NorwayEast",
+ "NorwayWest",
+ "SoutheastAsia",
+ "EastAsia",
+ "AustraliaEast",
+ "AustraliaSoutheast",
+ "AustraliaCentral",
+ "AustraliaCentral2",
+ "ChinaEast",
+ "ChinaNorth",
+ "ChinaEast2",
+ "ChinaNorth2",
+ "CentralIndia",
+ "WestIndia",
+ "SouthIndia",
+ "JapanEast",
+ "JapanWest",
+ "KoreaCentral",
+ "KoreaSouth",
+ "SouthAfricaWest",
+ "SouthAfricaNorth",
+ "UAECentral",
+ "UAENorth"
+ ],
+ "metadata": {
+ "description": "Azure region in which target resources to be monitored are in (without spaces). For example: EastUS"
+ }
+ },
+ "targetResourceType": {
+ "type": "string",
+ "minLength": 1,
+ "metadata": {
+ "description": "Resource type of target resources to be monitored."
+ }
+ },
+ "metricName": {
+ "type": "string",
+ "minLength": 1,
+ "metadata": {
+ "description": "Name of the metric used in the comparison to activate the alert."
+ }
+ },
+ "operator": {
+ "type": "string",
+ "defaultValue": "GreaterOrLessThan",
+ "allowedValues": [
+ "GreaterThan",
+ "LessThan",
+ "GreaterOrLessThan"
+ ],
+ "metadata": {
+ "description": "Operator comparing the current value with the threshold value."
+ }
+ },
+ "alertSensitivity": {
+ "type": "string",
+ "defaultValue": "Medium",
+ "allowedValues": [
+ "High",
+ "Medium",
+ "Low"
+ ],
+ "metadata": {
+ "description": "Tunes how 'noisy' the Dynamic Thresholds alerts will be: 'High' will result in more alerts while 'Low' will result in fewer alerts."
+ }
+ },
+ "numberOfEvaluationPeriods": {
+ "type": "int",
+ "defaultValue": 4,
+ "metadata": {
+ "description": "The number of periods to check in the alert evaluation."
+ }
+ },
+ "minFailingPeriodsToAlert": {
+ "type": "int",
+ "defaultValue": 3,
+ "metadata": {
+ "description": "The number of unhealthy periods to alert on (must be lower or equal to numberOfEvaluationPeriods)."
+ }
+ },
+ "timeAggregation": {
+ "type": "string",
+ "defaultValue": "Average",
+ "allowedValues": [
+ "Average",
+ "Minimum",
+ "Maximum",
+ "Total",
+ "Count"
+ ],
+ "metadata": {
+ "description": "How the data that is collected should be combined over time."
+ }
+ },
+ "windowSize": {
+ "type": "string",
+ "defaultValue": "PT5M",
+ "allowedValues": [
+ "PT5M",
+ "PT15M",
+ "PT30M",
+ "PT1H"
+ ],
+ "metadata": {
+ "description": "Period of time used to monitor alert activity based on the threshold. Must be between five minutes and one hour. ISO 8601 duration format."
+ }
+ },
+ "evaluationFrequency": {
+ "type": "string",
+ "defaultValue": "PT5M",
+ "allowedValues": [
+ "PT5M",
+ "PT15M",
+ "PT30M",
+ "PT1H"
+ ],
+ "metadata": {
+ "description": "how often the metric alert is evaluated represented in ISO 8601 duration format"
+ }
+ },
+ "actionGroupId": {
+ "type": "string",
+ "defaultValue": "",
+ "metadata": {
+ "description": "The ID of the action group that is triggered when the alert is activated or deactivated"
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Insights/metricAlerts",
+ "apiVersion": "2018-03-01",
+ "name": "[parameters('alertName')]",
+ "location": "global",
+ "properties": {
+ "description": "[parameters('alertDescription')]",
+ "severity": "[parameters('alertSeverity')]",
+ "enabled": "[parameters('isEnabled')]",
+ "scopes": [
+ "[parameters('targetSubscription')]"
+ ],
+ "targetResourceType": "[parameters('targetResourceType')]",
+ "targetResourceRegion": "[parameters('targetResourceRegion')]",
+ "evaluationFrequency": "[parameters('evaluationFrequency')]",
+ "windowSize": "[parameters('windowSize')]",
+ "criteria": {
+ "odata.type": "Microsoft.Azure.Monitor.MultipleResourceMultipleMetricCriteria",
+ "allOf": [
+ {
+ "criterionType": "DynamicThresholdCriterion",
+ "name": "1st criterion",
+ "metricName": "[parameters('metricName')]",
+ "dimensions": [],
+ "operator": "[parameters('operator')]",
+ "alertSensitivity": "[parameters('alertSensitivity')]",
+ "failingPeriods": {
+ "numberOfEvaluationPeriods": "[parameters('numberOfEvaluationPeriods')]",
+ "minFailingPeriodsToAlert": "[parameters('minFailingPeriodsToAlert')]"
+ },
+ "timeAggregation": "[parameters('timeAggregation')]"
+ }
+ ]
+ },
+ "actions": [
+ {
+ "actionGroupId": "[parameters('actionGroupId')]"
+ }
+ ]
+ }
+ }
+ ]
+}
+```
+++
+### Parameter file
+
+```json
+{
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "alertName": {
- "value": "Multi-resource metric alert with Dynamic Thresholds via Azure Resource Manager template"
- },
- "alertDescription": {
- "value": "New Multi-resource metric alert with Dynamic Thresholds created via template"
- },
- "alertSeverity": {
- "value":3
- },
- "isEnabled": {
- "value": true
- },
- "targetResourceGroup":{
- "value": [
- "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resource-group-name1",
- "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resource-group-name2"
- ]
- },
- "targetResourceRegion":{
- "value": "SouthCentralUS"
- },
- "targetResourceType":{
- "value": "Microsoft.Compute/virtualMachines"
- },
- "metricName": {
- "value": "Percentage CPU"
- },
- "operator": {
- "value": "GreaterOrLessThan"
- },
- "alertSensitivity": {
- "value": "Medium"
- },
- "numberOfEvaluationPeriods": {
- "value": "4"
- },
- "minFailingPeriodsToAlert": {
- "value": "3"
- },
- "timeAggregation": {
- "value": "Average"
- },
- "actionGroupId": {
- "value": "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resource-group-name/providers/Microsoft.Insights/actionGroups/replace-with-action-group-name"
- }
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "alertName": {
+ "value": "Multi-resource sub level metric alert with Dynamic Thresholds via Azure Resource Manager template"
+ },
+ "alertDescription": {
+ "value": "New Multi-resource sub level metric alert with Dynamic Thresholds created via template"
+ },
+ "alertSeverity": {
+ "value": 3
+ },
+ "isEnabled": {
+ "value": true
+ },
+ "targetSubscription": {
+ "value": "/subscriptions/replace-with-subscription-id"
+ },
+ "targetResourceRegion": {
+ "value": "SouthCentralUS"
+ },
+ "targetResourceType": {
+ "value": "Microsoft.Compute/virtualMachines"
+ },
+ "metricName": {
+ "value": "Percentage CPU"
+ },
+ "operator": {
+ "value": "GreaterOrLessThan"
+ },
+ "alertSensitivity": {
+ "value": "Medium"
+ },
+ "numberOfEvaluationPeriods": {
+ "value": "4"
+ },
+ "minFailingPeriodsToAlert": {
+ "value": "3"
+ },
+ "timeAggregation": {
+ "value": "Average"
+ },
+ "actionGroupId": {
+ "value": "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resource-group-name/providers/Microsoft.Insights/actionGroups/replace-with-action-group-name"
+    }
+  }
+}
+```
+### Static threshold alert on a list of virtual machines
-### Static threshold alert on all virtual machines in a subscription
-This sample creates a static threshold metric alert rule that monitors Percentage CPU for all virtual machines in one Azure region in a subscription.
+This sample creates a static threshold metric alert rule that monitors Percentage CPU for a list of virtual machines in one Azure region in a subscription.
### Template file
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "alertName": {
- "type": "string",
- "minLength": 1,
- "metadata": {
- "description": "Name of the alert"
- }
- },
- "alertDescription": {
- "type": "string",
- "defaultValue": "This is a metric alert",
- "metadata": {
- "description": "Description of alert"
- }
- },
- "alertSeverity": {
- "type": "int",
- "defaultValue": 3,
- "allowedValues": [
- 0,
- 1,
- 2,
- 3,
- 4
- ],
- "metadata": {
- "description": "Severity of alert {0,1,2,3,4}"
- }
- },
- "isEnabled": {
- "type": "bool",
- "defaultValue": true,
- "metadata": {
- "description": "Specifies whether the alert is enabled"
- }
- },
- "targetSubscription":{
- "type": "string",
- "minLength": 1,
- "metadata": {
- "description": "Azure Resource Manager path up to subscription ID. For example - /subscriptions/00000000-0000-0000-0000-0000-00000000"
- }
- },
- "targetResourceRegion":{
- "type": "string",
- "allowedValues": [
- "EastUS",
- "EastUS2",
- "CentralUS",
- "NorthCentralUS",
- "SouthCentralUS",
- "WestCentralUS",
- "WestUS",
- "WestUS2",
- "CanadaEast",
- "CanadaCentral",
- "BrazilSouth",
- "NorthEurope",
- "WestEurope",
- "FranceCentral",
- "FranceSouth",
- "UKWest",
- "UKSouth",
- "GermanyCentral",
- "GermanyNortheast",
- "GermanyNorth",
- "GermanyWestCentral",
- "SwitzerlandNorth",
- "SwitzerlandWest",
- "NorwayEast",
- "NorwayWest",
- "SoutheastAsia",
- "EastAsia",
- "AustraliaEast",
- "AustraliaSoutheast",
- "AustraliaCentral",
- "AustraliaCentral2",
- "ChinaEast",
- "ChinaNorth",
- "ChinaEast2",
- "ChinaNorth2",
- "CentralIndia",
- "WestIndia",
- "SouthIndia",
- "JapanEast",
- "JapanWest",
- "KoreaCentral",
- "KoreaSouth",
- "SouthAfricaWest",
- "SouthAfricaNorth",
- "UAECentral",
- "UAENorth"
- ],
- "metadata": {
- "description": "Azure region in which target resources to be monitored are in (without spaces). For example: EastUS"
- }
- },
- "targetResourceType": {
- "type": "string",
- "minLength": 1,
- "metadata": {
- "description": "Resource type of target resources to be monitored."
- }
- },
- "metricName": {
- "type": "string",
- "minLength": 1,
- "metadata": {
- "description": "Name of the metric used in the comparison to activate the alert."
- }
- },
- "operator": {
- "type": "string",
- "defaultValue": "GreaterThan",
- "allowedValues": [
- "Equals",
- "GreaterThan",
- "GreaterThanOrEqual",
- "LessThan",
- "LessThanOrEqual"
- ],
- "metadata": {
- "description": "Operator comparing the current value with the threshold value."
- }
- },
- "threshold": {
- "type": "string",
- "defaultValue": "0",
- "metadata": {
- "description": "The threshold value at which the alert is activated."
- }
- },
- "timeAggregation": {
- "type": "string",
- "defaultValue": "Average",
- "allowedValues": [
- "Average",
- "Minimum",
- "Maximum",
- "Total",
- "Count"
- ],
- "metadata": {
- "description": "How the data that is collected should be combined over time."
- }
- },
- "windowSize": {
- "type": "string",
- "defaultValue": "PT5M",
- "allowedValues": [
- "PT1M",
- "PT5M",
- "PT15M",
- "PT30M",
- "PT1H",
- "PT6H",
- "PT12H",
- "PT24H"
- ],
- "metadata": {
- "description": "Period of time used to monitor alert activity based on the threshold. Must be between one minute and one day. ISO 8601 duration format."
- }
- },
- "evaluationFrequency": {
- "type": "string",
- "defaultValue": "PT1M",
- "allowedValues": [
- "PT1M",
- "PT5M",
- "PT15M",
- "PT30M",
- "PT1H"
- ],
- "metadata": {
- "description": "how often the metric alert is evaluated represented in ISO 8601 duration format"
- }
- },
- "actionGroupId": {
- "type": "string",
- "defaultValue": "",
- "metadata": {
- "description": "The ID of the action group that is triggered when the alert is activated or deactivated"
- }
- }
- },
- "variables": { },
- "resources": [
- {
- "name": "[parameters('alertName')]",
- "type": "Microsoft.Insights/metricAlerts",
- "location": "global",
- "apiVersion": "2018-03-01",
- "tags": {},
- "properties": {
- "description": "[parameters('alertDescription')]",
- "severity": "[parameters('alertSeverity')]",
- "enabled": "[parameters('isEnabled')]",
- "scopes": ["[parameters('targetSubscription')]"],
- "targetResourceType": "[parameters('targetResourceType')]",
- "targetResourceRegion": "[parameters('targetResourceRegion')]",
- "evaluationFrequency":"[parameters('evaluationFrequency')]",
- "windowSize": "[parameters('windowSize')]",
- "criteria": {
- "odata.type": "Microsoft.Azure.Monitor.MultipleResourceMultipleMetricCriteria",
- "allOf": [
- {
- "name" : "1st criterion",
- "metricName": "[parameters('metricName')]",
- "dimensions":[],
- "operator": "[parameters('operator')]",
- "threshold" : "[parameters('threshold')]",
- "timeAggregation": "[parameters('timeAggregation')]"
- }
- ]
- },
- "actions": [
- {
- "actionGroupId": "[parameters('actionGroupId')]"
- }
- ]
- }
- }
- ]
-}
-```
+# [Bicep](#tab/bicep)
-### Parameter file
+```bicep
+@description('Name of the alert')
+@minLength(1)
+param alertName string
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "alertName": {
- "value": "Multi-resource sub level metric alert via Azure Resource Manager template"
- },
- "alertDescription": {
- "value": "New Multi-resource sub level metric alert created via template"
- },
- "alertSeverity": {
- "value":3
- },
- "isEnabled": {
- "value": true
- },
- "targetSubscription":{
- "value": "/subscriptions/replace-with-subscription-id"
- },
- "targetResourceRegion":{
- "value": "SouthCentralUS"
- },
- "targetResourceType":{
- "value": "Microsoft.Compute/virtualMachines"
- },
- "metricName": {
- "value": "Percentage CPU"
- },
- "operator": {
- "value": "GreaterThan"
- },
- "threshold": {
- "value": "0"
- },
- "timeAggregation": {
- "value": "Average"
- },
- "actionGroupId": {
- "value": "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resource-group-name/providers/Microsoft.Insights/actionGroups/replace-with-action-group-name"
- }
- }
-}
-```
+@description('Description of alert')
+param alertDescription string = 'This is a metric alert'
-### Dynamic Thresholds alert on all virtual machines in a subscription
-This sample creates a Dynamic Thresholds metric alert rule that monitors Percentage CPU for all virtual machines (in one Azure region) in a subscription.
+@description('Severity of alert {0,1,2,3,4}')
+@allowed([
+ 0
+ 1
+ 2
+ 3
+ 4
+])
+param alertSeverity int = 3
-### Template file
+@description('Specifies whether the alert is enabled')
+param isEnabled bool = true
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "alertName": {
- "type": "string",
- "minLength": 1,
- "metadata": {
- "description": "Name of the alert"
- }
- },
- "alertDescription": {
- "type": "string",
- "defaultValue": "This is a metric alert",
- "metadata": {
- "description": "Description of alert"
- }
- },
- "alertSeverity": {
- "type": "int",
- "defaultValue": 3,
- "allowedValues": [
- 0,
- 1,
- 2,
- 3,
- 4
- ],
- "metadata": {
- "description": "Severity of alert {0,1,2,3,4}"
- }
- },
- "isEnabled": {
- "type": "bool",
- "defaultValue": true,
- "metadata": {
- "description": "Specifies whether the alert is enabled"
- }
- },
- "targetSubscription":{
- "type": "string",
- "minLength": 1,
- "metadata": {
- "description": "Azure Resource Manager path up to subscription ID. For example - /subscriptions/00000000-0000-0000-0000-0000-00000000"
- }
- },
- "targetResourceRegion":{
- "type": "string",
- "allowedValues": [
- "EastUS",
- "EastUS2",
- "CentralUS",
- "NorthCentralUS",
- "SouthCentralUS",
- "WestCentralUS",
- "WestUS",
- "WestUS2",
- "CanadaEast",
- "CanadaCentral",
- "BrazilSouth",
- "NorthEurope",
- "WestEurope",
- "FranceCentral",
- "FranceSouth",
- "UKWest",
- "UKSouth",
- "GermanyCentral",
- "GermanyNortheast",
- "GermanyNorth",
- "GermanyWestCentral",
- "SwitzerlandNorth",
- "SwitzerlandWest",
- "NorwayEast",
- "NorwayWest",
- "SoutheastAsia",
- "EastAsia",
- "AustraliaEast",
- "AustraliaSoutheast",
- "AustraliaCentral",
- "AustraliaCentral2",
- "ChinaEast",
- "ChinaNorth",
- "ChinaEast2",
- "ChinaNorth2",
- "CentralIndia",
- "WestIndia",
- "SouthIndia",
- "JapanEast",
- "JapanWest",
- "KoreaCentral",
- "KoreaSouth",
- "SouthAfricaWest",
- "SouthAfricaNorth",
- "UAECentral",
- "UAENorth"
- ],
- "metadata": {
- "description": "Azure region in which target resources to be monitored are in (without spaces). For example: EastUS"
- }
- },
- "targetResourceType": {
- "type": "string",
- "minLength": 1,
- "metadata": {
- "description": "Resource type of target resources to be monitored."
- }
- },
- "metricName": {
- "type": "string",
- "minLength": 1,
- "metadata": {
- "description": "Name of the metric used in the comparison to activate the alert."
- }
- },
- "operator": {
- "type": "string",
- "defaultValue": "GreaterOrLessThan",
- "allowedValues": [
- "GreaterThan",
- "LessThan",
- "GreaterOrLessThan"
- ],
- "metadata": {
- "description": "Operator comparing the current value with the threshold value."
- }
- },
- "alertSensitivity": {
- "type": "string",
- "defaultValue": "Medium",
- "allowedValues": [
- "High",
- "Medium",
- "Low"
- ],
- "metadata": {
- "description": "Tunes how 'noisy' the Dynamic Thresholds alerts will be: 'High' will result in more alerts while 'Low' will result in fewer alerts."
- }
- },
- "numberOfEvaluationPeriods": {
- "type": "string",
- "defaultValue": "4",
- "metadata": {
- "description": "The number of periods to check in the alert evaluation."
- }
- },
- "minFailingPeriodsToAlert": {
- "type": "string",
- "defaultValue": "3",
- "metadata": {
- "description": "The number of unhealthy periods to alert on (must be lower or equal to numberOfEvaluationPeriods)."
- }
- },
- "timeAggregation": {
- "type": "string",
- "defaultValue": "Average",
- "allowedValues": [
- "Average",
- "Minimum",
- "Maximum",
- "Total",
- "Count"
- ],
- "metadata": {
- "description": "How the data that is collected should be combined over time."
- }
- },
- "windowSize": {
- "type": "string",
- "defaultValue": "PT5M",
- "allowedValues": [
- "PT5M",
- "PT15M",
- "PT30M",
- "PT1H"
- ],
- "metadata": {
- "description": "Period of time used to monitor alert activity based on the threshold. Must be between five minutes and one hour. ISO 8601 duration format."
- }
- },
- "evaluationFrequency": {
- "type": "string",
- "defaultValue": "PT5M",
- "allowedValues": [
- "PT5M",
- "PT15M",
- "PT30M",
- "PT1H"
- ],
- "metadata": {
- "description": "how often the metric alert is evaluated represented in ISO 8601 duration format"
- }
- },
- "actionGroupId": {
- "type": "string",
- "defaultValue": "",
- "metadata": {
- "description": "The ID of the action group that is triggered when the alert is activated or deactivated"
- }
- }
- },
- "variables": { },
- "resources": [
- {
- "name": "[parameters('alertName')]",
- "type": "Microsoft.Insights/metricAlerts",
- "location": "global",
- "apiVersion": "2018-03-01",
- "tags": {},
- "properties": {
- "description": "[parameters('alertDescription')]",
- "severity": "[parameters('alertSeverity')]",
- "enabled": "[parameters('isEnabled')]",
- "scopes": ["[parameters('targetSubscription')]"],
- "targetResourceType": "[parameters('targetResourceType')]",
- "targetResourceRegion": "[parameters('targetResourceRegion')]",
- "evaluationFrequency":"[parameters('evaluationFrequency')]",
- "windowSize": "[parameters('windowSize')]",
- "criteria": {
- "odata.type": "Microsoft.Azure.Monitor.MultipleResourceMultipleMetricCriteria",
- "allOf": [
- {
- "criterionType": "DynamicThresholdCriterion",
- "name" : "1st criterion",
- "metricName": "[parameters('metricName')]",
- "dimensions":[],
- "operator": "[parameters('operator')]",
- "alertSensitivity": "[parameters('alertSensitivity')]",
- "failingPeriods": {
- "numberOfEvaluationPeriods": "[parameters('numberOfEvaluationPeriods')]",
- "minFailingPeriodsToAlert": "[parameters('minFailingPeriodsToAlert')]"
- },
- "timeAggregation": "[parameters('timeAggregation')]"
- }
- ]
- },
- "actions": [
- {
- "actionGroupId": "[parameters('actionGroupId')]"
- }
- ]
- }
- }
- ]
-}
-```
+@description('array of Azure resource Ids. For example - /subscriptions/00000000-0000-0000-0000-0000-00000000/resourceGroup/resource-group-name/Microsoft.compute/virtualMachines/vm-name')
+@minLength(1)
+param targetResourceId array
-### Parameter file
+@description('Azure region in which target resources to be monitored are in (without spaces). For example: EastUS')
+@allowed([
+ 'EastUS'
+ 'EastUS2'
+ 'CentralUS'
+ 'NorthCentralUS'
+ 'SouthCentralUS'
+ 'WestCentralUS'
+ 'WestUS'
+ 'WestUS2'
+ 'CanadaEast'
+ 'CanadaCentral'
+ 'BrazilSouth'
+ 'NorthEurope'
+ 'WestEurope'
+ 'FranceCentral'
+ 'FranceSouth'
+ 'UKWest'
+ 'UKSouth'
+ 'GermanyCentral'
+ 'GermanyNortheast'
+ 'GermanyNorth'
+ 'GermanyWestCentral'
+ 'SwitzerlandNorth'
+ 'SwitzerlandWest'
+ 'NorwayEast'
+ 'NorwayWest'
+ 'SoutheastAsia'
+ 'EastAsia'
+ 'AustraliaEast'
+ 'AustraliaSoutheast'
+ 'AustraliaCentral'
+ 'AustraliaCentral2'
+ 'ChinaEast'
+ 'ChinaNorth'
+ 'ChinaEast2'
+ 'ChinaNorth2'
+ 'CentralIndia'
+ 'WestIndia'
+ 'SouthIndia'
+ 'JapanEast'
+ 'JapanWest'
+ 'KoreaCentral'
+ 'KoreaSouth'
+ 'SouthAfricaWest'
+ 'SouthAfricaNorth'
+ 'UAECentral'
+ 'UAENorth'
+])
+param targetResourceRegion string
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "alertName": {
- "value": "Multi-resource sub level metric alert with Dynamic Thresholds via Azure Resource Manager template"
- },
- "alertDescription": {
- "value": "New Multi-resource sub level metric alert with Dynamic Thresholds created via template"
- },
- "alertSeverity": {
- "value":3
- },
- "isEnabled": {
- "value": true
- },
- "targetSubscription":{
- "value": "/subscriptions/replace-with-subscription-id"
- },
- "targetResourceRegion":{
- "value": "SouthCentralUS"
- },
- "targetResourceType":{
- "value": "Microsoft.Compute/virtualMachines"
- },
- "metricName": {
- "value": "Percentage CPU"
- },
- "operator": {
- "value": "GreaterOrLessThan"
- },
- "alertSensitivity": {
- "value": "Medium"
- },
- "numberOfEvaluationPeriods": {
- "value": "4"
- },
- "minFailingPeriodsToAlert": {
- "value": "3"
- },
- "timeAggregation": {
- "value": "Average"
- },
- "actionGroupId": {
- "value": "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resource-group-name/providers/Microsoft.Insights/actionGroups/replace-with-action-group-name"
+@description('Resource type of target resources to be monitored.')
+@minLength(1)
+param targetResourceType string
+
+@description('Name of the metric used in the comparison to activate the alert.')
+@minLength(1)
+param metricName string
+
+@description('Operator comparing the current value with the threshold value.')
+@allowed([
+ 'Equals'
+ 'GreaterThan'
+ 'GreaterThanOrEqual'
+ 'LessThan'
+ 'LessThanOrEqual'
+])
+param operator string = 'GreaterThan'
+
+@description('The threshold value at which the alert is activated.')
+param threshold string = '0'
+
+@description('How the data that is collected should be combined over time.')
+@allowed([
+ 'Average'
+ 'Minimum'
+ 'Maximum'
+ 'Total'
+ 'Count'
+])
+param timeAggregation string = 'Average'
+
+@description('Period of time used to monitor alert activity based on the threshold. Must be between one minute and one day. ISO 8601 duration format.')
+@allowed([
+ 'PT1M'
+ 'PT5M'
+ 'PT15M'
+ 'PT30M'
+ 'PT1H'
+ 'PT6H'
+ 'PT12H'
+ 'PT24H'
+])
+param windowSize string = 'PT5M'
+
+@description('how often the metric alert is evaluated represented in ISO 8601 duration format')
+@allowed([
+ 'PT1M'
+ 'PT5M'
+ 'PT15M'
+ 'PT30M'
+ 'PT1H'
+])
+param evaluationFrequency string = 'PT1M'
+
+@description('The ID of the action group that is triggered when the alert is activated or deactivated')
+param actionGroupId string = ''
+
+resource metricAlert 'Microsoft.Insights/metricAlerts@2018-03-01' = {
+ name: alertName
+ location: 'global'
+ properties: {
+ description: alertDescription
+ severity: alertSeverity
+ enabled: isEnabled
+ scopes: targetResourceId
+ targetResourceType: targetResourceType
+ targetResourceRegion: targetResourceRegion
+ evaluationFrequency: evaluationFrequency
+ windowSize: windowSize
+ criteria: {
+ 'odata.type': 'Microsoft.Azure.Monitor.MultipleResourceMultipleMetricCriteria'
+ allOf: [
+ {
+ name: '1st criterion'
+ metricName: metricName
+ dimensions: []
+ operator: operator
+ threshold: threshold
+ timeAggregation: timeAggregation
+ criterionType: 'StaticThresholdCriterion'
}
+ ]
}
+ actions: [
+ {
+ actionGroupId: actionGroupId
+ }
+ ]
+ }
}
```
-
-### Static threshold alert on a list of virtual machines
-This sample creates a static threshold metric alert rule that monitors Percentage CPU for a list of virtual machines in one Azure region in a subscription.
-
-### Template file
+# [JSON](#tab/json)
```json
{
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "alertName": {
- "type": "string",
- "minLength": 1,
- "metadata": {
- "description": "Name of the alert"
- }
- },
- "alertDescription": {
- "type": "string",
- "defaultValue": "This is a metric alert",
- "metadata": {
- "description": "Description of alert"
- }
- },
- "alertSeverity": {
- "type": "int",
- "defaultValue": 3,
- "allowedValues": [
- 0,
- 1,
- 2,
- 3,
- 4
- ],
- "metadata": {
- "description": "Severity of alert {0,1,2,3,4}"
- }
- },
- "isEnabled": {
- "type": "bool",
- "defaultValue": true,
- "metadata": {
- "description": "Specifies whether the alert is enabled"
- }
- },
- "targetResourceId":{
- "type": "array",
- "minLength": 1,
- "metadata": {
- "description": "array of Azure resource Ids. For example - /subscriptions/00000000-0000-0000-0000-0000-00000000/resourceGroup/resource-group-name/Microsoft.compute/virtualMachines/vm-name"
- }
- },
- "targetResourceRegion":{
- "type": "string",
- "allowedValues": [
- "EastUS",
- "EastUS2",
- "CentralUS",
- "NorthCentralUS",
- "SouthCentralUS",
- "WestCentralUS",
- "WestUS",
- "WestUS2",
- "CanadaEast",
- "CanadaCentral",
- "BrazilSouth",
- "NorthEurope",
- "WestEurope",
- "FranceCentral",
- "FranceSouth",
- "UKWest",
- "UKSouth",
- "GermanyCentral",
- "GermanyNortheast",
- "GermanyNorth",
- "GermanyWestCentral",
- "SwitzerlandNorth",
- "SwitzerlandWest",
- "NorwayEast",
- "NorwayWest",
- "SoutheastAsia",
- "EastAsia",
- "AustraliaEast",
- "AustraliaSoutheast",
- "AustraliaCentral",
- "AustraliaCentral2",
- "ChinaEast",
- "ChinaNorth",
- "ChinaEast2",
- "ChinaNorth2",
- "CentralIndia",
- "WestIndia",
- "SouthIndia",
- "JapanEast",
- "JapanWest",
- "KoreaCentral",
- "KoreaSouth",
- "SouthAfricaWest",
- "SouthAfricaNorth",
- "UAECentral",
- "UAENorth"
- ],
- "metadata": {
- "description": "Azure region in which target resources to be monitored are in (without spaces). For example: EastUS"
- }
- },
- "targetResourceType": {
- "type": "string",
- "minLength": 1,
- "metadata": {
- "description": "Resource type of target resources to be monitored."
- }
- },
- "metricName": {
- "type": "string",
- "minLength": 1,
- "metadata": {
- "description": "Name of the metric used in the comparison to activate the alert."
- }
- },
- "operator": {
- "type": "string",
- "defaultValue": "GreaterThan",
- "allowedValues": [
- "Equals",
- "GreaterThan",
- "GreaterThanOrEqual",
- "LessThan",
- "LessThanOrEqual"
- ],
- "metadata": {
- "description": "Operator comparing the current value with the threshold value."
- }
- },
- "threshold": {
- "type": "string",
- "defaultValue": "0",
- "metadata": {
- "description": "The threshold value at which the alert is activated."
- }
- },
- "timeAggregation": {
- "type": "string",
- "defaultValue": "Average",
- "allowedValues": [
- "Average",
- "Minimum",
- "Maximum",
- "Total",
- "Count"
- ],
- "metadata": {
- "description": "How the data that is collected should be combined over time."
- }
- },
- "windowSize": {
- "type": "string",
- "defaultValue": "PT5M",
- "allowedValues": [
- "PT1M",
- "PT5M",
- "PT15M",
- "PT30M",
- "PT1H",
- "PT6H",
- "PT12H",
- "PT24H"
- ],
- "metadata": {
- "description": "Period of time used to monitor alert activity based on the threshold. Must be between one minute and one day. ISO 8601 duration format."
- }
- },
- "evaluationFrequency": {
- "type": "string",
- "defaultValue": "PT1M",
- "allowedValues": [
- "PT1M",
- "PT5M",
- "PT15M",
- "PT30M",
- "PT1H"
- ],
- "metadata": {
- "description": "how often the metric alert is evaluated represented in ISO 8601 duration format"
- }
- },
- "actionGroupId": {
- "type": "string",
- "defaultValue": "",
- "metadata": {
- "description": "The ID of the action group that is triggered when the alert is activated or deactivated"
- }
- }
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "alertName": {
+ "type": "string",
+ "minLength": 1,
+ "metadata": {
+ "description": "Name of the alert"
+ }
},
- "variables": { },
- "resources": [
- {
- "name": "[parameters('alertName')]",
- "type": "Microsoft.Insights/metricAlerts",
- "location": "global",
- "apiVersion": "2018-03-01",
- "tags": {},
- "properties": {
- "description": "[parameters('alertDescription')]",
- "severity": "[parameters('alertSeverity')]",
- "enabled": "[parameters('isEnabled')]",
- "scopes": "[parameters('targetResourceId')]",
- "targetResourceType": "[parameters('targetResourceType')]",
- "targetResourceRegion": "[parameters('targetResourceRegion')]",
- "evaluationFrequency":"[parameters('evaluationFrequency')]",
- "windowSize": "[parameters('windowSize')]",
- "criteria": {
- "odata.type": "Microsoft.Azure.Monitor.MultipleResourceMultipleMetricCriteria",
- "allOf": [
- {
- "name" : "1st criterion",
- "metricName": "[parameters('metricName')]",
- "dimensions":[],
- "operator": "[parameters('operator')]",
- "threshold" : "[parameters('threshold')]",
- "timeAggregation": "[parameters('timeAggregation')]"
- }
- ]
- },
- "actions": [
- {
- "actionGroupId": "[parameters('actionGroupId')]"
- }
- ]
- }
- }
- ]
+ "alertDescription": {
+ "type": "string",
+ "defaultValue": "This is a metric alert",
+ "metadata": {
+ "description": "Description of alert"
+ }
+ },
+ "alertSeverity": {
+ "type": "int",
+ "defaultValue": 3,
+ "allowedValues": [
+ 0,
+ 1,
+ 2,
+ 3,
+ 4
+ ],
+ "metadata": {
+ "description": "Severity of alert {0,1,2,3,4}"
+ }
+ },
+ "isEnabled": {
+ "type": "bool",
+ "defaultValue": true,
+ "metadata": {
+ "description": "Specifies whether the alert is enabled"
+ }
+ },
+ "targetResourceId": {
+ "type": "array",
+ "minLength": 1,
+ "metadata": {
+ "description": "array of Azure resource Ids. For example - /subscriptions/00000000-0000-0000-0000-0000-00000000/resourceGroup/resource-group-name/Microsoft.compute/virtualMachines/vm-name"
+ }
+ },
+ "targetResourceRegion": {
+ "type": "string",
+ "allowedValues": [
+ "EastUS",
+ "EastUS2",
+ "CentralUS",
+ "NorthCentralUS",
+ "SouthCentralUS",
+ "WestCentralUS",
+ "WestUS",
+ "WestUS2",
+ "CanadaEast",
+ "CanadaCentral",
+ "BrazilSouth",
+ "NorthEurope",
+ "WestEurope",
+ "FranceCentral",
+ "FranceSouth",
+ "UKWest",
+ "UKSouth",
+ "GermanyCentral",
+ "GermanyNortheast",
+ "GermanyNorth",
+ "GermanyWestCentral",
+ "SwitzerlandNorth",
+ "SwitzerlandWest",
+ "NorwayEast",
+ "NorwayWest",
+ "SoutheastAsia",
+ "EastAsia",
+ "AustraliaEast",
+ "AustraliaSoutheast",
+ "AustraliaCentral",
+ "AustraliaCentral2",
+ "ChinaEast",
+ "ChinaNorth",
+ "ChinaEast2",
+ "ChinaNorth2",
+ "CentralIndia",
+ "WestIndia",
+ "SouthIndia",
+ "JapanEast",
+ "JapanWest",
+ "KoreaCentral",
+ "KoreaSouth",
+ "SouthAfricaWest",
+ "SouthAfricaNorth",
+ "UAECentral",
+ "UAENorth"
+ ],
+ "metadata": {
+ "description": "Azure region in which target resources to be monitored are in (without spaces). For example: EastUS"
+ }
+ },
+ "targetResourceType": {
+ "type": "string",
+ "minLength": 1,
+ "metadata": {
+ "description": "Resource type of target resources to be monitored."
+ }
+ },
+ "metricName": {
+ "type": "string",
+ "minLength": 1,
+ "metadata": {
+ "description": "Name of the metric used in the comparison to activate the alert."
+ }
+ },
+ "operator": {
+ "type": "string",
+ "defaultValue": "GreaterThan",
+ "allowedValues": [
+ "Equals",
+ "GreaterThan",
+ "GreaterThanOrEqual",
+ "LessThan",
+ "LessThanOrEqual"
+ ],
+ "metadata": {
+ "description": "Operator comparing the current value with the threshold value."
+ }
+ },
+ "threshold": {
+ "type": "string",
+ "defaultValue": "0",
+ "metadata": {
+ "description": "The threshold value at which the alert is activated."
+ }
+ },
+ "timeAggregation": {
+ "type": "string",
+ "defaultValue": "Average",
+ "allowedValues": [
+ "Average",
+ "Minimum",
+ "Maximum",
+ "Total",
+ "Count"
+ ],
+ "metadata": {
+ "description": "How the data that is collected should be combined over time."
+ }
+ },
+ "windowSize": {
+ "type": "string",
+ "defaultValue": "PT5M",
+ "allowedValues": [
+ "PT1M",
+ "PT5M",
+ "PT15M",
+ "PT30M",
+ "PT1H",
+ "PT6H",
+ "PT12H",
+ "PT24H"
+ ],
+ "metadata": {
+ "description": "Period of time used to monitor alert activity based on the threshold. Must be between one minute and one day. ISO 8601 duration format."
+ }
+ },
+ "evaluationFrequency": {
+ "type": "string",
+ "defaultValue": "PT1M",
+ "allowedValues": [
+ "PT1M",
+ "PT5M",
+ "PT15M",
+ "PT30M",
+ "PT1H"
+ ],
+ "metadata": {
+ "description": "how often the metric alert is evaluated represented in ISO 8601 duration format"
+ }
+ },
+ "actionGroupId": {
+ "type": "string",
+ "defaultValue": "",
+ "metadata": {
+ "description": "The ID of the action group that is triggered when the alert is activated or deactivated"
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Insights/metricAlerts",
+ "apiVersion": "2018-03-01",
+ "name": "[parameters('alertName')]",
+ "location": "global",
+ "properties": {
+ "description": "[parameters('alertDescription')]",
+ "severity": "[parameters('alertSeverity')]",
+ "enabled": "[parameters('isEnabled')]",
+ "scopes": "[parameters('targetResourceId')]",
+ "targetResourceType": "[parameters('targetResourceType')]",
+ "targetResourceRegion": "[parameters('targetResourceRegion')]",
+ "evaluationFrequency": "[parameters('evaluationFrequency')]",
+ "windowSize": "[parameters('windowSize')]",
+ "criteria": {
+ "odata.type": "Microsoft.Azure.Monitor.MultipleResourceMultipleMetricCriteria",
+ "allOf": [
+ {
+ "name": "1st criterion",
+ "metricName": "[parameters('metricName')]",
+ "dimensions": [],
+ "operator": "[parameters('operator')]",
+ "threshold": "[parameters('threshold')]",
+ "timeAggregation": "[parameters('timeAggregation')]",
+ "criterionType": "StaticThresholdCriterion"
+ }
+ ]
+ },
+ "actions": [
+ {
+ "actionGroupId": "[parameters('actionGroupId')]"
+ }
+ ]
+ }
+ }
+ ]
}
```
+
+
### Parameter file

```json
{
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "alertName": {
- "value": "Multi-resource metric alert by list via Azure Resource Manager template"
- },
- "alertDescription": {
- "value": "New Multi-resource metric alert by list created via template"
- },
- "alertSeverity": {
- "value":3
- },
- "isEnabled": {
- "value": true
- },
- "targetResourceId":{
- "value": [
- "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resource-group-name1/Microsoft.Compute/virtualMachines/replace-with-vm-name1",
- "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resource-group-name2/Microsoft.Compute/virtualMachines/replace-with-vm-name2"
- ]
- },
- "targetResourceRegion":{
- "value": "SouthCentralUS"
- },
- "targetResourceType":{
- "value": "Microsoft.Compute/virtualMachines"
- },
- "metricName": {
- "value": "Percentage CPU"
- },
- "operator": {
- "value": "GreaterThan"
- },
- "threshold": {
- "value": "0"
- },
- "timeAggregation": {
- "value": "Average"
- },
- "actionGroupId": {
- "value": "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resource-group-name/providers/Microsoft.Insights/actionGroups/replace-with-action-group-name"
- }
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "alertName": {
+ "value": "Multi-resource metric alert by list via Azure Resource Manager template"
+ },
+ "alertDescription": {
+ "value": "New Multi-resource metric alert by list created via template"
+ },
+ "alertSeverity": {
+ "value": 3
+ },
+ "isEnabled": {
+ "value": true
+ },
+ "targetResourceId": {
+ "value": [
+ "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resource-group-name1/Microsoft.Compute/virtualMachines/replace-with-vm-name1",
+ "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resource-group-name2/Microsoft.Compute/virtualMachines/replace-with-vm-name2"
+ ]
+ },
+ "targetResourceRegion": {
+ "value": "SouthCentralUS"
+ },
+ "targetResourceType": {
+ "value": "Microsoft.Compute/virtualMachines"
+ },
+ "metricName": {
+ "value": "Percentage CPU"
+ },
+ "operator": {
+ "value": "GreaterThan"
+ },
+ "threshold": {
+ "value": "0"
+ },
+ "timeAggregation": {
+ "value": "Average"
+ },
+ "actionGroupId": {
+ "value": "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resource-group-name/providers/Microsoft.Insights/actionGroups/replace-with-action-group-name"
}
+ }
}
```
-
### Dynamic Thresholds alert on a list of virtual machines
+
This sample creates a dynamic thresholds metric alert rule that monitors Percentage CPU for a list of virtual machines in one Azure region in a subscription.

### Template file
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "alertName": {
- "type": "string",
- "minLength": 1,
- "metadata": {
- "description": "Name of the alert"
- }
- },
- "alertDescription": {
- "type": "string",
- "defaultValue": "This is a metric alert",
- "metadata": {
- "description": "Description of alert"
- }
- },
- "alertSeverity": {
- "type": "int",
- "defaultValue": 3,
- "allowedValues": [
- 0,
- 1,
- 2,
- 3,
- 4
- ],
- "metadata": {
- "description": "Severity of alert {0,1,2,3,4}"
- }
- },
- "isEnabled": {
- "type": "bool",
- "defaultValue": true,
- "metadata": {
- "description": "Specifies whether the alert is enabled"
- }
- },
- "targetResourceId":{
- "type": "array",
- "minLength": 1,
- "metadata": {
- "description": "array of Azure resource Ids. For example - /subscriptions/00000000-0000-0000-0000-0000-00000000/resourceGroup/resource-group-name/Microsoft.compute/virtualMachines/vm-name"
- }
- },
- "targetResourceRegion":{
- "type": "string",
- "allowedValues": [
- "EastUS",
- "EastUS2",
- "CentralUS",
- "NorthCentralUS",
- "SouthCentralUS",
- "WestCentralUS",
- "WestUS",
- "WestUS2",
- "CanadaEast",
- "CanadaCentral",
- "BrazilSouth",
- "NorthEurope",
- "WestEurope",
- "FranceCentral",
- "FranceSouth",
- "UKWest",
- "UKSouth",
- "GermanyCentral",
- "GermanyNortheast",
- "GermanyNorth",
- "GermanyWestCentral",
- "SwitzerlandNorth",
- "SwitzerlandWest",
- "NorwayEast",
- "NorwayWest",
- "SoutheastAsia",
- "EastAsia",
- "AustraliaEast",
- "AustraliaSoutheast",
- "AustraliaCentral",
- "AustraliaCentral2",
- "ChinaEast",
- "ChinaNorth",
- "ChinaEast2",
- "ChinaNorth2",
- "CentralIndia",
- "WestIndia",
- "SouthIndia",
- "JapanEast",
- "JapanWest",
- "KoreaCentral",
- "KoreaSouth",
- "SouthAfricaWest",
- "SouthAfricaNorth",
- "UAECentral",
- "UAENorth"
- ],
- "metadata": {
- "description": "Azure region in which target resources to be monitored are in (without spaces). For example: EastUS"
- }
- },
- "targetResourceType": {
- "type": "string",
- "minLength": 1,
- "metadata": {
- "description": "Resource type of target resources to be monitored."
- }
- },
- "metricName": {
- "type": "string",
- "minLength": 1,
- "metadata": {
- "description": "Name of the metric used in the comparison to activate the alert."
- }
- },
- "operator": {
- "type": "string",
- "defaultValue": "GreaterOrLessThan",
- "allowedValues": [
- "GreaterThan",
- "LessThan",
- "GreaterOrLessThan"
- ],
- "metadata": {
- "description": "Operator comparing the current value with the threshold value."
- }
- },
- "alertSensitivity": {
- "type": "string",
- "defaultValue": "Medium",
- "allowedValues": [
- "High",
- "Medium",
- "Low"
- ],
- "metadata": {
- "description": "Tunes how 'noisy' the Dynamic Thresholds alerts will be: 'High' will result in more alerts while 'Low' will result in fewer alerts."
- }
- },
- "numberOfEvaluationPeriods": {
- "type": "string",
- "defaultValue": "4",
- "metadata": {
- "description": "The number of periods to check in the alert evaluation."
- }
- },
- "minFailingPeriodsToAlert": {
- "type": "string",
- "defaultValue": "3",
- "metadata": {
- "description": "The number of unhealthy periods to alert on (must be lower or equal to numberOfEvaluationPeriods)."
- }
- },
- "timeAggregation": {
- "type": "string",
- "defaultValue": "Average",
- "allowedValues": [
- "Average",
- "Minimum",
- "Maximum",
- "Total",
- "Count"
- ],
- "metadata": {
- "description": "How the data that is collected should be combined over time."
- }
- },
- "windowSize": {
- "type": "string",
- "defaultValue": "PT5M",
- "allowedValues": [
- "PT5M",
- "PT15M",
- "PT30M",
- "PT1H"
- ],
- "metadata": {
- "description": "Period of time used to monitor alert activity based on the threshold. Must be between five minutes and one hour. ISO 8601 duration format."
- }
- },
- "evaluationFrequency": {
- "type": "string",
- "defaultValue": "PT5M",
- "allowedValues": [
- "PT5M",
- "PT15M",
- "PT30M",
- "PT1H"
- ],
- "metadata": {
- "description": "how often the metric alert is evaluated represented in ISO 8601 duration format"
- }
- },
- "actionGroupId": {
- "type": "string",
- "defaultValue": "",
- "metadata": {
- "description": "The ID of the action group that is triggered when the alert is activated or deactivated"
- }
- }
- },
- "variables": { },
- "resources": [
+# [Bicep](#tab/bicep)
+
+```bicep
+@description('Name of the alert')
+@minLength(1)
+param alertName string
+
+@description('Description of alert')
+param alertDescription string = 'This is a metric alert'
+
+@description('Severity of alert {0,1,2,3,4}')
+@allowed([
+ 0
+ 1
+ 2
+ 3
+ 4
+])
+param alertSeverity int = 3
+
+@description('Specifies whether the alert is enabled')
+param isEnabled bool = true
+
+@description('array of Azure resource Ids. For example - /subscriptions/00000000-0000-0000-0000-0000-00000000/resourceGroup/resource-group-name/Microsoft.compute/virtualMachines/vm-name')
+@minLength(1)
+param targetResourceId array
+
+@description('Azure region in which target resources to be monitored are in (without spaces). For example: EastUS')
+@allowed([
+ 'EastUS'
+ 'EastUS2'
+ 'CentralUS'
+ 'NorthCentralUS'
+ 'SouthCentralUS'
+ 'WestCentralUS'
+ 'WestUS'
+ 'WestUS2'
+ 'CanadaEast'
+ 'CanadaCentral'
+ 'BrazilSouth'
+ 'NorthEurope'
+ 'WestEurope'
+ 'FranceCentral'
+ 'FranceSouth'
+ 'UKWest'
+ 'UKSouth'
+ 'GermanyCentral'
+ 'GermanyNortheast'
+ 'GermanyNorth'
+ 'GermanyWestCentral'
+ 'SwitzerlandNorth'
+ 'SwitzerlandWest'
+ 'NorwayEast'
+ 'NorwayWest'
+ 'SoutheastAsia'
+ 'EastAsia'
+ 'AustraliaEast'
+ 'AustraliaSoutheast'
+ 'AustraliaCentral'
+ 'AustraliaCentral2'
+ 'ChinaEast'
+ 'ChinaNorth'
+ 'ChinaEast2'
+ 'ChinaNorth2'
+ 'CentralIndia'
+ 'WestIndia'
+ 'SouthIndia'
+ 'JapanEast'
+ 'JapanWest'
+ 'KoreaCentral'
+ 'KoreaSouth'
+ 'SouthAfricaWest'
+ 'SouthAfricaNorth'
+ 'UAECentral'
+ 'UAENorth'
+])
+param targetResourceRegion string
+
+@description('Resource type of target resources to be monitored.')
+@minLength(1)
+param targetResourceType string
+
+@description('Name of the metric used in the comparison to activate the alert.')
+@minLength(1)
+param metricName string
+
+@description('Operator comparing the current value with the threshold value.')
+@allowed([
+ 'GreaterThan'
+ 'LessThan'
+ 'GreaterOrLessThan'
+])
+param operator string = 'GreaterOrLessThan'
+
+@description('Tunes how \'noisy\' the Dynamic Thresholds alerts will be: \'High\' will result in more alerts while \'Low\' will result in fewer alerts.')
+@allowed([
+ 'High'
+ 'Medium'
+ 'Low'
+])
+param alertSensitivity string = 'Medium'
+
+@description('The number of periods to check in the alert evaluation.')
+param numberOfEvaluationPeriods int = 4
+
+@description('The number of unhealthy periods to alert on (must be lower or equal to numberOfEvaluationPeriods).')
+param minFailingPeriodsToAlert int = 3
+
+@description('How the data that is collected should be combined over time.')
+@allowed([
+ 'Average'
+ 'Minimum'
+ 'Maximum'
+ 'Total'
+ 'Count'
+])
+param timeAggregation string = 'Average'
+
+@description('Period of time used to monitor alert activity based on the threshold. Must be between five minutes and one hour. ISO 8601 duration format.')
+@allowed([
+ 'PT5M'
+ 'PT15M'
+ 'PT30M'
+ 'PT1H'
+])
+param windowSize string = 'PT5M'
+
+@description('how often the metric alert is evaluated represented in ISO 8601 duration format')
+@allowed([
+ 'PT5M'
+ 'PT15M'
+ 'PT30M'
+ 'PT1H'
+])
+param evaluationFrequency string = 'PT5M'
+
+@description('The ID of the action group that is triggered when the alert is activated or deactivated')
+param actionGroupId string = ''
+
+resource metricAlert 'Microsoft.Insights/metricAlerts@2018-03-01' = {
+ name: alertName
+ location: 'global'
+ properties: {
+ description: alertDescription
+ severity: alertSeverity
+ enabled: isEnabled
+ scopes: targetResourceId
+ targetResourceType: targetResourceType
+ targetResourceRegion: targetResourceRegion
+ evaluationFrequency: evaluationFrequency
+ windowSize: windowSize
+ criteria: {
+ 'odata.type': 'Microsoft.Azure.Monitor.MultipleResourceMultipleMetricCriteria'
+ allOf: [
{
- "name": "[parameters('alertName')]",
- "type": "Microsoft.Insights/metricAlerts",
- "location": "global",
- "apiVersion": "2018-03-01",
- "tags": {},
- "properties": {
- "description": "[parameters('alertDescription')]",
- "severity": "[parameters('alertSeverity')]",
- "enabled": "[parameters('isEnabled')]",
- "scopes": "[parameters('targetResourceId')]",
- "targetResourceType": "[parameters('targetResourceType')]",
- "targetResourceRegion": "[parameters('targetResourceRegion')]",
- "evaluationFrequency":"[parameters('evaluationFrequency')]",
- "windowSize": "[parameters('windowSize')]",
- "criteria": {
- "odata.type": "Microsoft.Azure.Monitor.MultipleResourceMultipleMetricCriteria",
- "allOf": [
- {
- "criterionType": "DynamicThresholdCriterion",
- "name" : "1st criterion",
- "metricName": "[parameters('metricName')]",
- "dimensions":[],
- "operator": "[parameters('operator')]",
- "alertSensitivity": "[parameters('alertSensitivity')]",
- "failingPeriods": {
- "numberOfEvaluationPeriods": "[parameters('numberOfEvaluationPeriods')]",
- "minFailingPeriodsToAlert": "[parameters('minFailingPeriodsToAlert')]"
- },
- "timeAggregation": "[parameters('timeAggregation')]"
- }
- ]
- },
- "actions": [
- {
- "actionGroupId": "[parameters('actionGroupId')]"
- }
- ]
- }
+ criterionType: 'DynamicThresholdCriterion'
+ name: '1st criterion'
+ metricName: metricName
+ dimensions: []
+ operator: operator
+ alertSensitivity: alertSensitivity
+ failingPeriods: {
+ numberOfEvaluationPeriods: numberOfEvaluationPeriods
+ minFailingPeriodsToAlert: minFailingPeriodsToAlert
+ }
+ timeAggregation: timeAggregation
}
+ ]
+ }
+ actions: [
+ {
+ actionGroupId: actionGroupId
+ }
]
+ }
}
```
-### Parameter file
+# [JSON](#tab/json)
```json
{
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "alertName": {
- "value": "Multi-resource metric alert with Dynamic Thresholds by list via Azure Resource Manager template"
- },
- "alertDescription": {
- "value": "New Multi-resource metric alert with Dynamic Thresholds by list created via template"
- },
- "alertSeverity": {
- "value":3
- },
- "isEnabled": {
- "value": true
- },
- "targetResourceId":{
- "value": [
- "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resource-group-name1/Microsoft.Compute/virtualMachines/replace-with-vm-name1",
- "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resource-group-name2/Microsoft.Compute/virtualMachines/replace-with-vm-name2"
- ]
- },
- "targetResourceRegion":{
- "value": "SouthCentralUS"
- },
- "targetResourceType":{
- "value": "Microsoft.Compute/virtualMachines"
- },
- "metricName": {
- "value": "Percentage CPU"
- },
- "operator": {
- "value": "GreaterOrLessThan"
- },
- "alertSensitivity": {
- "value": "Medium"
- },
- "numberOfEvaluationPeriods": {
- "value": "4"
- },
- "minFailingPeriodsToAlert": {
- "value": "3"
- },
- "timeAggregation": {
- "value": "Average"
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "alertName": {
+ "type": "string",
+ "minLength": 1,
+ "metadata": {
+ "description": "Name of the alert"
+ }
+ },
+ "alertDescription": {
+ "type": "string",
+ "defaultValue": "This is a metric alert",
+ "metadata": {
+ "description": "Description of alert"
+ }
+ },
+ "alertSeverity": {
+ "type": "int",
+ "defaultValue": 3,
+ "allowedValues": [
+ 0,
+ 1,
+ 2,
+ 3,
+ 4
+ ],
+ "metadata": {
+ "description": "Severity of alert {0,1,2,3,4}"
+ }
+ },
+ "isEnabled": {
+ "type": "bool",
+ "defaultValue": true,
+ "metadata": {
+ "description": "Specifies whether the alert is enabled"
+ }
+ },
+ "targetResourceId": {
+ "type": "array",
+ "minLength": 1,
+ "metadata": {
+ "description": "array of Azure resource Ids. For example - /subscriptions/00000000-0000-0000-0000-0000-00000000/resourceGroup/resource-group-name/Microsoft.compute/virtualMachines/vm-name"
+ }
+ },
+ "targetResourceRegion": {
+ "type": "string",
+ "allowedValues": [
+ "EastUS",
+ "EastUS2",
+ "CentralUS",
+ "NorthCentralUS",
+ "SouthCentralUS",
+ "WestCentralUS",
+ "WestUS",
+ "WestUS2",
+ "CanadaEast",
+ "CanadaCentral",
+ "BrazilSouth",
+ "NorthEurope",
+ "WestEurope",
+ "FranceCentral",
+ "FranceSouth",
+ "UKWest",
+ "UKSouth",
+ "GermanyCentral",
+ "GermanyNortheast",
+ "GermanyNorth",
+ "GermanyWestCentral",
+ "SwitzerlandNorth",
+ "SwitzerlandWest",
+ "NorwayEast",
+ "NorwayWest",
+ "SoutheastAsia",
+ "EastAsia",
+ "AustraliaEast",
+ "AustraliaSoutheast",
+ "AustraliaCentral",
+ "AustraliaCentral2",
+ "ChinaEast",
+ "ChinaNorth",
+ "ChinaEast2",
+ "ChinaNorth2",
+ "CentralIndia",
+ "WestIndia",
+ "SouthIndia",
+ "JapanEast",
+ "JapanWest",
+ "KoreaCentral",
+ "KoreaSouth",
+ "SouthAfricaWest",
+ "SouthAfricaNorth",
+ "UAECentral",
+ "UAENorth"
+ ],
+ "metadata": {
+ "description": "Azure region in which target resources to be monitored are in (without spaces). For example: EastUS"
+ }
+ },
+ "targetResourceType": {
+ "type": "string",
+ "minLength": 1,
+ "metadata": {
+ "description": "Resource type of target resources to be monitored."
+ }
+ },
+ "metricName": {
+ "type": "string",
+ "minLength": 1,
+ "metadata": {
+ "description": "Name of the metric used in the comparison to activate the alert."
+ }
+ },
+ "operator": {
+ "type": "string",
+ "defaultValue": "GreaterOrLessThan",
+ "allowedValues": [
+ "GreaterThan",
+ "LessThan",
+ "GreaterOrLessThan"
+ ],
+ "metadata": {
+ "description": "Operator comparing the current value with the threshold value."
+ }
+ },
+ "alertSensitivity": {
+ "type": "string",
+ "defaultValue": "Medium",
+ "allowedValues": [
+ "High",
+ "Medium",
+ "Low"
+ ],
+ "metadata": {
+ "description": "Tunes how 'noisy' the Dynamic Thresholds alerts will be: 'High' will result in more alerts while 'Low' will result in fewer alerts."
+ }
+ },
+ "numberOfEvaluationPeriods": {
+ "type": "int",
+ "defaultValue": 4,
+ "metadata": {
+ "description": "The number of periods to check in the alert evaluation."
+ }
+ },
+ "minFailingPeriodsToAlert": {
+ "type": "int",
+ "defaultValue": 3,
+ "metadata": {
+ "description": "The number of unhealthy periods to alert on (must be lower or equal to numberOfEvaluationPeriods)."
+ }
+ },
+ "timeAggregation": {
+ "type": "string",
+ "defaultValue": "Average",
+ "allowedValues": [
+ "Average",
+ "Minimum",
+ "Maximum",
+ "Total",
+ "Count"
+ ],
+ "metadata": {
+ "description": "How the data that is collected should be combined over time."
+ }
+ },
+ "windowSize": {
+ "type": "string",
+ "defaultValue": "PT5M",
+ "allowedValues": [
+ "PT5M",
+ "PT15M",
+ "PT30M",
+ "PT1H"
+ ],
+ "metadata": {
+ "description": "Period of time used to monitor alert activity based on the threshold. Must be between five minutes and one hour. ISO 8601 duration format."
+ }
+ },
+ "evaluationFrequency": {
+ "type": "string",
+ "defaultValue": "PT5M",
+ "allowedValues": [
+ "PT5M",
+ "PT15M",
+ "PT30M",
+ "PT1H"
+ ],
+ "metadata": {
+ "description": "how often the metric alert is evaluated represented in ISO 8601 duration format"
+ }
+ },
+ "actionGroupId": {
+ "type": "string",
+ "defaultValue": "",
+ "metadata": {
+ "description": "The ID of the action group that is triggered when the alert is activated or deactivated"
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Insights/metricAlerts",
+ "apiVersion": "2018-03-01",
+ "name": "[parameters('alertName')]",
+ "location": "global",
+ "properties": {
+ "description": "[parameters('alertDescription')]",
+ "severity": "[parameters('alertSeverity')]",
+ "enabled": "[parameters('isEnabled')]",
+ "scopes": "[parameters('targetResourceId')]",
+ "targetResourceType": "[parameters('targetResourceType')]",
+ "targetResourceRegion": "[parameters('targetResourceRegion')]",
+ "evaluationFrequency": "[parameters('evaluationFrequency')]",
+ "windowSize": "[parameters('windowSize')]",
+ "criteria": {
+ "odata.type": "Microsoft.Azure.Monitor.MultipleResourceMultipleMetricCriteria",
+ "allOf": [
+ {
+ "criterionType": "DynamicThresholdCriterion",
+ "name": "1st criterion",
+ "metricName": "[parameters('metricName')]",
+ "dimensions": [],
+ "operator": "[parameters('operator')]",
+ "alertSensitivity": "[parameters('alertSensitivity')]",
+ "failingPeriods": {
+ "numberOfEvaluationPeriods": "[parameters('numberOfEvaluationPeriods')]",
+ "minFailingPeriodsToAlert": "[parameters('minFailingPeriodsToAlert')]"
+ },
+ "timeAggregation": "[parameters('timeAggregation')]"
+ }
+ ]
},
- "actionGroupId": {
- "value": "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resource-group-name/providers/Microsoft.Insights/actionGroups/replace-with-action-group-name"
- }
+ "actions": [
+ {
+ "actionGroupId": "[parameters('actionGroupId')]"
+ }
+ ]
+ }
}
+ ]
}
```
+
+
+### Parameter file
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "alertName": {
+ "value": "Multi-resource metric alert with Dynamic Thresholds by list via Azure Resource Manager template"
+ },
+ "alertDescription": {
+ "value": "New Multi-resource metric alert with Dynamic Thresholds by list created via template"
+ },
+ "alertSeverity": {
+ "value": 3
+ },
+ "isEnabled": {
+ "value": true
+ },
+ "targetResourceId": {
+ "value": [
+ "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resource-group-name1/Microsoft.Compute/virtualMachines/replace-with-vm-name1",
+ "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resource-group-name2/Microsoft.Compute/virtualMachines/replace-with-vm-name2"
+ ]
+ },
+ "targetResourceRegion": {
+ "value": "SouthCentralUS"
+ },
+ "targetResourceType": {
+ "value": "Microsoft.Compute/virtualMachines"
+ },
+ "metricName": {
+ "value": "Percentage CPU"
+ },
+ "operator": {
+ "value": "GreaterOrLessThan"
+ },
+ "alertSensitivity": {
+ "value": "Medium"
+ },
+ "numberOfEvaluationPeriods": {
+      "value": 4
+ },
+ "minFailingPeriodsToAlert": {
+      "value": 3
+ },
+ "timeAggregation": {
+ "value": "Average"
+ },
+ "actionGroupId": {
+ "value": "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resource-group-name/providers/Microsoft.Insights/actionGroups/replace-with-action-group-name"
+ }
+ }
+}
+```
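The metric alert templates and parameter files above can be deployed with any standard Azure Resource Manager deployment method. As a minimal sketch, assuming the JSON tab of a sample is saved as `template.json`, its parameter file as `parameters.json`, and that a resource group named `my-monitoring-rg` already exists (all three names are placeholders), an Azure CLI deployment could look like this:

```azurecli
# Deploy the alert rule template together with its parameter file.
# The resource group and file names are placeholders; substitute your own.
az deployment group create \
  --resource-group my-monitoring-rg \
  --template-file template.json \
  --parameters parameters.json
```

A Bicep file can be deployed the same way by pointing `--template-file` at the `.bicep` file instead of the JSON template.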
## Availability test with metric alert
+
[Application Insights availability tests](../app/monitor-web-app-availability.md) help you monitor the availability of your web site/application from various locations around the globe. Availability test alerts notify you when availability tests fail from a certain number of locations. Availability test alerts are of the same resource type as metric alerts (Microsoft.Insights/metricAlerts). The following sample creates a simple availability test and an associated alert.

> [!NOTE]
This sample creates a dynamic thresholds metric alert rule that monitors Percent
### Template file
+# [Bicep](#tab/bicep)
+
+```bicep
+param appName string
+param pingURL string
+param pingText string = ''
+param actionGroupId string
+param location string
+
+var pingTestName = 'PingTest-${toLower(appName)}'
+var pingAlertRuleName = 'PingAlert-${toLower(appName)}-${subscription().subscriptionId}'
+
+resource pingTest 'Microsoft.Insights/webtests@2020-10-05-preview' = {
+ name: pingTestName
+ location: location
+ tags: {
+ 'hidden-link:${resourceId('Microsoft.Insights/components', appName)}': 'Resource'
+ }
+ properties: {
+ Name: pingTestName
+ Description: 'Basic ping test'
+ Enabled: true
+ Frequency: 300
+ Timeout: 120
+ Kind: 'ping'
+ RetryEnabled: true
+ Locations: [
+ {
+ Id: 'us-va-ash-azr'
+ }
+ {
+ Id: 'emea-nl-ams-azr'
+ }
+ {
+ Id: 'apac-jp-kaw-edge'
+ }
+ ]
+ Configuration: {
+ WebTest: '<WebTest Name="${pingTestName}" Enabled="True" CssProjectStructure="" CssIteration="" Timeout="120" WorkItemIds="" xmlns="http://microsoft.com/schemas/VisualStudio/TeamTest/2010" Description="" CredentialUserName="" CredentialPassword="" PreAuthenticate="True" Proxy="default" StopOnError="False" RecordedResultFile="" ResultsLocale=""> <Items> <Request Method="GET" Version="1.1" Url="${pingURL}" ThinkTime="0" Timeout="300" ParseDependentRequests="True" FollowRedirects="True" RecordResult="True" Cache="False" ResponseTimeGoal="0" Encoding="utf-8" ExpectedHttpStatusCode="200" ExpectedResponseUrl="" ReportingName="" IgnoreHttpStatusCode="False" /> </Items> <ValidationRules> <ValidationRule Classname="Microsoft.VisualStudio.TestTools.WebTesting.Rules.ValidationRuleFindText, Microsoft.VisualStudio.QualityTools.WebTestFramework, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" DisplayName="Find Text" Description="Verifies the existence of the specified text in the response." Level="High" ExecutionOrder="BeforeDependents"> <RuleParameters> <RuleParameter Name="FindText" Value="${pingText}" /> <RuleParameter Name="IgnoreCase" Value="False" /> <RuleParameter Name="UseRegularExpression" Value="False" /> <RuleParameter Name="PassIfTextFound" Value="True" /> </RuleParameters> </ValidationRule> </ValidationRules> </WebTest>'
+ }
+ SyntheticMonitorId: pingTestName
+ }
+}
+
+resource metricAlert 'Microsoft.Insights/metricAlerts@2018-03-01' = {
+ name: pingAlertRuleName
+ location: 'global'
+ tags: {
+ 'hidden-link:${resourceId('Microsoft.Insights/components', appName)}': 'Resource'
+ 'hidden-link:${pingTest.id}': 'Resource'
+ }
+ properties: {
+ description: 'Alert for web test'
+ severity: 1
+ enabled: true
+ scopes: [
+ pingTest.id
+ resourceId('Microsoft.Insights/components', appName)
+ ]
+ evaluationFrequency: 'PT1M'
+ windowSize: 'PT5M'
+ criteria: {
+ 'odata.type': 'Microsoft.Azure.Monitor.WebtestLocationAvailabilityCriteria'
+ webTestId: pingTest.id
+ componentId: resourceId('Microsoft.Insights/components', appName)
+ failedLocationCount: 2
+ }
+ actions: [
+ {
+ actionGroupId: actionGroupId
+ }
+ ]
+ }
+}
+```
+
+# [JSON](#tab/json)
```json
{
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
+  "metadata": {
  "parameters": {
    "appName": {
      "type": "string"
This sample creates a dynamic thresholds metric alert rule that monitors Percent
    }
  },
  "variables": {
- "pingTestName": "[concat('PingTest-', toLower(parameters('appName')))]",
- "pingAlertRuleName": "[concat('PingAlert-', toLower(parameters('appName')), '-', subscription().subscriptionId)]"
+ "pingTestName": "[format('PingTest-{0}', toLower(parameters('appName')))]",
+ "pingAlertRuleName": "[format('PingAlert-{0}-{1}', toLower(parameters('appName')), subscription().subscriptionId)]"
  },
  "resources": [
    {
- "name": "[variables('pingTestName')]",
"type": "Microsoft.Insights/webtests",
- "apiVersion": "2014-04-01",
+ "apiVersion": "2020-10-05-preview",
+      "name": "[variables('pingTestName')]",
      "location": "[parameters('location')]",
      "tags": {
- "[concat('hidden-link:', resourceId('Microsoft.Insights/components', parameters('appName')))]": "Resource"
+ "[format('hidden-link:{0}', resourceId('Microsoft.Insights/components', parameters('appName')))]": "Resource"
      },
      "properties": {
        "Name": "[variables('pingTestName')]",
This sample creates a dynamic thresholds metric alert rule that monitors Percent
          }
        ],
        "Configuration": {
- "WebTest": "[concat('<WebTest Name=\"', variables('pingTestName'), '\" Enabled=\"True\" CssProjectStructure=\"\" CssIteration=\"\" Timeout=\"120\" WorkItemIds=\"\" xmlns=\"http://microsoft.com/schemas/VisualStudio/TeamTest/2010\" Description=\"\" CredentialUserName=\"\" CredentialPassword=\"\" PreAuthenticate=\"True\" Proxy=\"default\" StopOnError=\"False\" RecordedResultFile=\"\" ResultsLocale=\"\"> <Items> <Request Method=\"GET\" Version=\"1.1\" Url=\"', parameters('pingURL'), '\" ThinkTime=\"0\" Timeout=\"300\" ParseDependentRequests=\"True\" FollowRedirects=\"True\" RecordResult=\"True\" Cache=\"False\" ResponseTimeGoal=\"0\" Encoding=\"utf-8\" ExpectedHttpStatusCode=\"200\" ExpectedResponseUrl=\"\" ReportingName=\"\" IgnoreHttpStatusCode=\"False\" /> </Items> <ValidationRules> <ValidationRule Classname=\"Microsoft.VisualStudio.TestTools.WebTesting.Rules.ValidationRuleFindText, Microsoft.VisualStudio.QualityTools.WebTestFramework, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a\" DisplayName=\"Find Text\" Description=\"Verifies the existence of the specified text in the response.\" Level=\"High\" ExecutionOrder=\"BeforeDependents\"> <RuleParameters> <RuleParameter Name=\"FindText\" Value=\"', parameters('pingText'), '\" /> <RuleParameter Name=\"IgnoreCase\" Value=\"False\" /> <RuleParameter Name=\"UseRegularExpression\" Value=\"False\" /> <RuleParameter Name=\"PassIfTextFound\" Value=\"True\" /> </RuleParameters> </ValidationRule> </ValidationRules> </WebTest>')]"
+ "WebTest": "[format('<WebTest Name=\"{0}\" Enabled=\"True\" CssProjectStructure=\"\" CssIteration=\"\" Timeout=\"120\" WorkItemIds=\"\" xmlns=\"http://microsoft.com/schemas/VisualStudio/TeamTest/2010\" Description=\"\" CredentialUserName=\"\" CredentialPassword=\"\" PreAuthenticate=\"True\" Proxy=\"default\" StopOnError=\"False\" RecordedResultFile=\"\" ResultsLocale=\"\"> <Items> <Request Method=\"GET\" Version=\"1.1\" Url=\"{1}\" ThinkTime=\"0\" Timeout=\"300\" ParseDependentRequests=\"True\" FollowRedirects=\"True\" RecordResult=\"True\" Cache=\"False\" ResponseTimeGoal=\"0\" Encoding=\"utf-8\" ExpectedHttpStatusCode=\"200\" ExpectedResponseUrl=\"\" ReportingName=\"\" IgnoreHttpStatusCode=\"False\" /> </Items> <ValidationRules> <ValidationRule Classname=\"Microsoft.VisualStudio.TestTools.WebTesting.Rules.ValidationRuleFindText, Microsoft.VisualStudio.QualityTools.WebTestFramework, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a\" DisplayName=\"Find Text\" Description=\"Verifies the existence of the specified text in the response.\" Level=\"High\" ExecutionOrder=\"BeforeDependents\"> <RuleParameters> <RuleParameter Name=\"FindText\" Value=\"{2}\" /> <RuleParameter Name=\"IgnoreCase\" Value=\"False\" /> <RuleParameter Name=\"UseRegularExpression\" Value=\"False\" /> <RuleParameter Name=\"PassIfTextFound\" Value=\"True\" /> </RuleParameters> </ValidationRule> </ValidationRules> </WebTest>', variables('pingTestName'), parameters('pingURL'), parameters('pingText'))]"
        },
        "SyntheticMonitorId": "[variables('pingTestName')]"
      }
    },
    {
-      "name": "[variables('pingAlertRuleName')]",
      "type": "Microsoft.Insights/metricAlerts",
      "apiVersion": "2018-03-01",
+ "name": "[variables('pingAlertRuleName')]",
"location": "global",
- "dependsOn": [
- "[resourceId('Microsoft.Insights/webtests', variables('pingTestName'))]"
- ],
"tags": {
- "[concat('hidden-link:', resourceId('Microsoft.Insights/components', parameters('appName')))]": "Resource",
- "[concat('hidden-link:', resourceId('Microsoft.Insights/webtests', variables('pingTestName')))]": "Resource"
+ "[format('hidden-link:{0}', resourceId('Microsoft.Insights/components', parameters('appName')))]": "Resource",
+ "[format('hidden-link:{0}', resourceId('Microsoft.Insights/webtests', variables('pingTestName')))]": "Resource"
      },
      "properties": {
        "description": "Alert for web test",
        "severity": 1,
        "enabled": true,
        "scopes": [
- "[resourceId('Microsoft.Insights/webtests',variables('pingTestName'))]",
- "[resourceId('Microsoft.Insights/components',parameters('appName'))]"
+ "[resourceId('Microsoft.Insights/webtests', variables('pingTestName'))]",
+ "[resourceId('Microsoft.Insights/components', parameters('appName'))]"
        ],
        "evaluationFrequency": "PT1M",
        "windowSize": "PT5M",
This sample creates a dynamic thresholds metric alert rule that monitors Percent
            "actionGroupId": "[parameters('actionGroupId')]"
          }
        ]
- }
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.Insights/webtests', variables('pingTestName'))]"
+ ]
    }
  ]
}
```
+
+
### Parameter file

```json
{
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "appName": {
- "value": "Replace with your Application Insights resource name"
- },
- "pingURL": {
- "value": "https://www.yoursite.com"
- },
- "actionGroupId": {
- "value": "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resourceGroup-name/providers/microsoft.insights/actiongroups/replace-with-action-group-name"
- },
- "location": {
- "value": "Replace with the location of your Application Insights resource"
- },
- "pingText": {
- "defaultValue": "Optional parameter that allows you to perform a content-match for the presence of a specific string within the content returned from a pingURL response",
- "type": "String"
- },
- }
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "appName": {
+ "value": "Replace with your Application Insights resource name"
+ },
+ "pingURL": {
+ "value": "https://www.yoursite.com"
+ },
+ "actionGroupId": {
+ "value": "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resourceGroup-name/providers/microsoft.insights/actiongroups/replace-with-action-group-name"
+ },
+ "location": {
+ "value": "Replace with the location of your Application Insights resource"
+ },
+ "pingText": {
+ "defaultValue": "Optional parameter that allows you to perform a content-match for the presence of a specific string within the content returned from a pingURL response",
+ "type": "String"
+    }
+ }
}
```
Additional configuration of the content-match `pingText` parameter is controlled
```xml
<RuleParameter Name=\"FindText\" Value=\"',parameters('pingText'), '\" />
<RuleParameter Name=\"IgnoreCase\" Value=\"False\" />
-<RuleParameter Name=\"UseRegularExpression\" Value=\"False\" />
+<RuleParameter Name=\"UseRegularExpression\" Value=\"False\" />
<RuleParameter Name=\"PassIfTextFound\" Value=\"True\" />
```
+
### Test locations

|Id | Region |
azure-monitor App Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-map.md
Title: Application Map in Azure Application Insights | Microsoft Docs
-description: Monitor complex application topologies with the application map
+description: Monitor complex application topologies with Application Map and Intelligent View
Previously updated : 03/15/2019 Last updated : 05/16/2022 ms.devlang: csharp, java, javascript, python
Application Map helps you spot performance bottlenecks or failure hotspots across all components of your distributed application. Each node on the map represents an application component or its dependencies; and has health KPI and alerts status. You can select any component to get more detailed diagnostics, such as Application Insights events. If your app uses Azure services, you can also select Azure diagnostics, such as SQL Database Advisor recommendations.
+Application Map also features an [Intelligent View](#application-map-intelligent-view-public-preview) to assist with fast service health investigations.
## What is a Component? Components are independently deployable parts of your distributed/microservices application. Developers and operations teams have code-level visibility or access to telemetry generated by these application components.
-* Components are different from "observed" external dependencies such as SQL, Event Hubs etc. which your team/organization may not have access to (code or telemetry).
+* Components are different from "observed" external dependencies such as SQL, Event Hubs etc., which your team/organization may not have access to (code or telemetry).
* Components run on any number of server/role/container instances. * Components can be separate Application Insights resources (even if subscriptions are different) or different roles reporting to a single Application Insights resource. The preview map experience shows the components regardless of how they're set up.
Components are independently deployable parts of your distributed/microservices
You can see the full application topology across multiple levels of related application components. Components could be different Application Insights resources, or different roles in a single resource. The app map finds components by following HTTP dependency calls made between servers with the Application Insights SDK installed.
-This experience starts with progressive discovery of the components. When you first load the application map, a set of queries is triggered to discover the components related to this component. A button at the top-left corner will update with the number of components in your application as they're discovered.
+This experience starts with progressive discovery of the components. When you first load the Application Map, a set of queries is triggered to discover the components related to this component. A button at the top-left corner will update with the number of components in your application as they're discovered.
-On clicking "Update map components", the map is refreshed with all components discovered until that point. Depending on the complexity of your application, this may take a minute to load.
+On clicking "Update map components", the map is refreshed with all components discovered until that point. Depending on the complexity of your application, this update may take a minute to load.
If all of the components are roles within a single Application Insights resource, then this discovery step isn't required. The initial load for such an application will have all its components.
-![Screenshot shows an example of an application map.](media/app-map/app-map-001.png)
+![Screenshot shows an example of an Application Map.](media/app-map/app-map-001.png)
One of the key objectives with this experience is to be able to visualize complex topologies with hundreds of components.
To troubleshoot performance problems, select **investigate performance**.
### Go to details
-Select **go to details** to explore the end-to-end transaction experience, which can offer views to the call stack level.
+The **Go to details** button displays the end-to-end transaction experience, which offers views at the call stack level.
![Screenshot of go-to-details button](media/app-map/go-to-details.png)
To view active alerts and the underlying rules that cause the alerts to be trigg
## Set or override cloud role name
-Application Map uses the **cloud role name** property to identify the components on the map. To manually set or override cloud role name and change what gets displayed on the Application Map:
+Application Map uses the **cloud role name** property to identify the components on the map.
+
+Follow this guidance to manually set or override cloud role names and change what gets displayed on the Application Map:
> [!NOTE] > The Application Insights SDK or Agent automatically adds the cloud role name property to the telemetry emitted by components in an Azure App Service environment.
namespace CustomInitializer.Telemetry
} ```
-**ASP.NET apps: Load initializer to the active TelemetryConfiguration**
+**ASP.NET apps: Load initializer in the active TelemetryConfiguration**
In ApplicationInsights.config:
An alternate method for ASP.NET Web apps is to instantiate the initializer in co
**ASP.NET Core apps: Load initializer to the TelemetryConfiguration**
-For [ASP.NET Core](asp-net-core.md#adding-telemetryinitializers) applications, adding a new `TelemetryInitializer` is done by adding it to the Dependency Injection container, as shown below. This is done in `ConfigureServices` method of your `Startup.cs` class.
+For [ASP.NET Core](asp-net-core.md#adding-telemetryinitializers) applications, adding a new `TelemetryInitializer` is done by adding it to the Dependency Injection container, as shown below. This step is done in `ConfigureServices` method of your `Startup.cs` class.
```csharp using Microsoft.ApplicationInsights.Extensibility;
As far as how to think about **cloud role name**, it can be helpful to look at a
![Application Map Screenshot](media/app-map/cloud-rolename.png)
-In the Application Map above each of the names in green boxes are cloud role name values for different aspects of this particular distributed application. So for this app its roles consist of: `Authentication`, `acmefrontend`, `Inventory Management`, a `Payment Processing Worker Role`.
+In the Application Map above, each of the names in the green boxes is a cloud role name value for a different aspect of this particular distributed application. For this app, the roles consist of `Authentication`, `acmefrontend`, `Inventory Management`, and `Payment Processing Worker Role`.
-In the case of this app each of those cloud role names also represents a different unique Application Insights resource with their own instrumentation keys. Since the owner of this application has access to each of those four disparate Application Insights resources, Application Map is able to stitch together a map of the underlying relationships.
+In this app, each of those cloud role names also represents a different unique Application Insights resource with their own instrumentation keys. Since the owner of this application has access to each of those four disparate Application Insights resources, Application Map is able to stitch together a map of the underlying relationships.
For the [official definitions](https://github.com/Microsoft/ApplicationInsights-dotnet/blob/39a5ef23d834777eefdd72149de705a016eb06b0/Schema/PublicSchema/ContextTagKeys.bond#L93):
For the [official definitions](https://github.com/Microsoft/ApplicationInsights-
715: string CloudRoleInstance = "ai.cloud.roleInstance"; ```
-Alternatively, **cloud role instance** can be helpful for scenarios where **cloud role name** tells you the problem is somewhere in your web front-end, but you might be running your web front-end across multiple load-balanced servers so being able to drill in a layer deeper via Kusto queries and knowing if the issue is impacting all web front-end servers/instances or just one can be important.
+Alternatively, **cloud role instance** can be helpful for scenarios where **cloud role name** tells you the problem is somewhere in your web front-end, but you might be running your web front-end across multiple load-balanced servers. Being able to drill in a layer deeper via Kusto queries, and knowing whether the issue impacts all web front-end servers/instances or just one, can be important.
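As an illustrative sketch (not from the article), a log query along these lines breaks failures down by role and instance, assuming the standard Application Insights `requests` schema:

```kusto
// Compare failure counts across role instances to see whether one instance is the outlier
requests
| where timestamp > ago(1h)
| summarize failedCount = countif(success == false), totalCount = count() by cloud_RoleName, cloud_RoleInstance
| order by failedCount desc
```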
-A scenario where you might want to override the value for cloud role instance could be if your app is running in a containerized environment where just knowing the individual server might not be enough information to locate a given issue.
+A scenario where you might want to override the value for cloud role instance could be if your app is running in a containerized environment. In this case, just knowing the individual server might not be enough information to locate a given issue.
For more information about how to override the cloud role name property with telemetry initializers, see [Add properties: ITelemetryInitializer](api-filtering-sampling.md#addmodify-properties-itelemetryinitializer).
+## Application Map Intelligent View (public preview)
+
+### Intelligent View summary
+
+Application Map's Intelligent View is designed to aid in service health investigations. It applies machine learning (ML) to quickly identify potential root cause(s) of issues by filtering out noise. The ML model learns from Application Map's historical behavior to identify dominant patterns and anomalies that indicate potential causes of an incident.
+
+In large distributed applications, there's always some degree of noise coming from "benign" failures, which can cause Application Map to show many red edges. The Intelligent View shows only the most probable causes of service failure and removes node-to-node red edges (service-to-service communication) in healthy services. It not only highlights (in red) the edges that should be investigated but also offers actionable insights for the highlighted edge.
+
+### Intelligent View benefits
+
+> [!div class="checklist"]
+> * Reduces time to resolution by highlighting only failures that need to be investigated
+> * Provides actionable insights on why a certain red edge was highlighted
+> * Enables Application Map to be used seamlessly for large distributed applications, by focusing only on the edges marked in red.
+
+### Enabling Intelligent View in Application Map
+
+Enable the Intelligent View toggle. Optionally, change the sensitivity of the detections by choosing **Low**, **Medium**, or **High**. For more detail, see [How does Intelligent View sensitivity work?](#how-does-intelligent-view-sensitivity-work).
++
+After the Intelligent View has been enabled, select one of the highlighted edges to see the "actionable insights". The insights will be visible in the panel on the right and explain why the edge was highlighted.
++
+Begin your troubleshooting journey by selecting **Investigate Failures**. This button will launch the failures pane, in which you may investigate if the detected issue is the root cause. If no edges are red, the ML model didn't find potential incidents in the dependencies of your application.
+
+Provide your feedback by pressing the **Feedback** button on the map.
+
+### How does Intelligent View determine where red edges are highlighted?
+
+Intelligent View uses the patented AIOps machine learning model to highlight what's truly important in an Application Map.
+
+A non-exhaustive list of example considerations includes:
+
+* Failure rates
+* Request counts
+* Durations
+* Anomalies in the data
+* Types of dependency
+
+For comparison, the normal view only utilizes the raw failure rate.
+
+### How does Intelligent View sensitivity work?
+
+Intelligent View sensitivity adjusts the probability that a service issue will be detected.
+
+Adjust sensitivity to achieve the desired confidence level in highlighted edges.
+
+|Sensitivity Setting | Result |
+|||
+|High | Fewer edges will be highlighted. |
+|Medium (default) | A balanced number of edges will be highlighted. |
+|Low | More edges will be highlighted. |
+
+### Limitations of Intelligent View
+
+The Intelligent View works well for large distributed applications, but sometimes it can take around one minute to load.
+ ## Troubleshooting If you're having trouble getting Application Map to work as expected, try these steps:
If you're having trouble getting Application Map to work as expected, try these
### Too many nodes on the map
-Application Map constructs an application node for each unique cloud role name present in your request telemetry and a dependency node for each unique combination of type, target, and cloud role name in your dependency telemetry. If there are more than 10,000 nodes in your telemetry, Application Map won't be able to fetch all the nodes and links, so your map will be incomplete. If this happens, a warning message will appear when viewing the map.
+Application Map constructs an application node for each unique cloud role name present in your request telemetry. In addition, a dependency node is constructed for each unique combination of type, target, and cloud role name.
-In addition, Application Map only supports up to 1000 separate ungrouped nodes rendered at once. Application Map reduces visual complexity by grouping dependencies together that have the same type and callers, but if your telemetry has too many unique cloud role names or too many dependency types, that grouping will be insufficient, and the map will be unable to render.
+If there are more than 10,000 nodes in your telemetry, Application Map won't be able to fetch all the nodes and links, so your map will be incomplete. If this scenario occurs, a warning message will appear when viewing the map.
-To fix this, you'll need to change your instrumentation to properly set the cloud role name, dependency type, and dependency target fields.
+Application Map only supports up to 1000 separate ungrouped nodes rendered at once. Application Map reduces visual complexity by grouping dependencies together that have the same type and callers.
-* Dependency target should represent the logical name of a dependency. In many cases, it's equivalent to the server or resource name of the dependency. For example, in the case of HTTP dependencies it's set to the hostname. It shouldn't contain unique IDs or parameters that change from one request to another.
+If your telemetry has too many unique cloud role names or too many dependency types, that grouping will be insufficient, and the map will be unable to render.
+
+To fix this issue, you'll need to change your instrumentation to properly set the cloud role name, dependency type, and dependency target fields.
+
+* Dependency target should represent the logical name of a dependency. In many cases, it's equivalent to the server or resource name of the dependency. For example, for HTTP dependencies it's set to the hostname. It shouldn't contain unique IDs or parameters that change from one request to another (see the sketch after this list).
* Dependency type should represent the logical type of a dependency. For example, HTTP, SQL or Azure Blob are typical dependency types. It shouldn't contain unique IDs. * The purpose of cloud role name is described in the [above section](#set-or-override-cloud-role-name).
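As a minimal sketch of that kind of instrumentation fix (the class name and normalization rule are illustrative, not from the article), a telemetry initializer can trim HTTP dependency targets down to the host name. Register it the same way as the cloud role name initializers described earlier.

```csharp
using System;
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

public class DependencyTargetInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        // Keep only the host name as the dependency target so per-request IDs or
        // query parameters don't create thousands of distinct nodes on the map.
        if (telemetry is DependencyTelemetry dependency &&
            Uri.TryCreate(dependency.Data, UriKind.Absolute, out var uri))
        {
            dependency.Target = uri.Host;
        }
    }
}
```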
+### Intelligent View
+
+#### Why isn't this edge highlighted, even with low sensitivity?
+
+Try these steps if a dependency appears to be failing but the model doesn't indicate it's a potential incident:
+
+* If this dependency has been failing for a while, the model might consider it a regular state and not highlight the edge for you. The model focuses on problems occurring in real time.
+* If this dependency has a minimal effect on the overall performance of the app, the model may also ignore it.
+* If neither of the above applies, use the **Feedback** option and describe your experience. You can help us improve future model versions.
+
+#### Why is the edge highlighted?
+
+When an edge is highlighted, the explanation from the model should point you to the most important features that led the model to give this dependency a high probability score. The recommendation isn't based solely on failures; it also considers other indicators, like unexpected latency in dominant flows.
+
+#### Intelligent View doesn't load
+
+If Intelligent View doesn't load, ensure that you've opted into the preview on Application Map.
++
+#### Intelligent View takes a long time to load
+
+Avoid selecting **Update map components**.
+
+Enable Intelligent View for only a single Application Insights resource.
++ ## Portal feedback To provide feedback, use the feedback option.
To provide feedback, use the feedback option.
* To learn more about how correlation works in Application Insights consult the [telemetry correlation article](correlation.md). * The [end-to-end transaction diagnostic experience](transaction-diagnostics.md) correlates server-side telemetry from across all your Application Insights monitored components into a single view.
-* For advanced correlation scenarios in ASP.NET Core and ASP.NET consult the [track custom operations](custom-operations-tracking.md) article.
+* For advanced correlation scenarios in ASP.NET Core and ASP.NET, consult the [track custom operations](custom-operations-tracking.md) article.
azure-monitor Azure Web Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps.md
There are two ways to enable application monitoring for Azure App Services hoste
- This method is the easiest to enable, and no code change or advanced configurations are required. It is often referred to as "runtime" monitoring. For Azure App Services we recommend at a minimum enabling this level of monitoring, and then based on your specific scenario you can evaluate whether more advanced monitoring through manual instrumentation is needed.
- - The following are support for auto-instrumentation monitoring:
+ - The following are supported for auto-instrumentation monitoring:
- [.NET Core](./azure-web-apps-net-core.md) - [.NET](./azure-web-apps-net.md) - [Java](./azure-web-apps-java.md)
azure-monitor Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript.md
Here's a sample of how to create a dynamic JS using Razor:
src: "https://js.monitor.azure.com/scripts/b/ai.2.min.js", // The SDK URL Source onInit: function(appInsights) { var serverId = "@this.Context.GetRequestTelemetry().Context.Operation.Id";
- appInsights.context.telemetryContext.parentID = serverId;
+ appInsights.context.telemetryTrace.parentID = serverId;
}, cfg: { // Application Insights Configuration instrumentationKey: "YOUR_INSTRUMENTATION_KEY_GOES_HERE"
azure-monitor Proactive Application Security Detection Pack https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/proactive-application-security-detection-pack.md
There are three types of security issues that are detected:
3. Suspicious user activity: the same user accesses the application from multiple countries or regions, around the same time. For example, the same user accessed the application from Spain and the United States within the same hour. This detection indicates a potentially malicious access attempt to your application. ## Does my app definitely have a security issue?
-A notification doesn't mean that your app definitely has a security issue. A detection of any of the scenarios above can, in many cases, indicate a security issue. in other cases, the detection may have a natural business justification, and can be ignored.
+A notification doesn't mean that your app definitely has a security issue. A detection of any of the scenarios above can, in many cases, indicate a security issue. In other cases, the detection may have a natural business justification, and can be ignored.
## How do I fix the "Insecure URL access" detection? 1. **Triage.** The notification provides the number of users who accessed insecure URLs, and the URL that was most affected by insecure access. This information can help you assign a priority to the problem.
azure-monitor Profiler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/profiler.md
Title: Profile live Azure App Service apps with Application Insights | Microsoft Docs
+ Title: Enable Profiler for Azure App Service apps | Microsoft Docs
description: Profile live apps on Azure App Service with Application Insights Profiler. Previously updated : 08/06/2018 Last updated : 05/11/2022
-# Profile live Azure App Service apps with Application Insights
+# Enable Profiler for Azure App Service apps
-You can run Profiler on ASP.NET and ASP.NET Core apps that are running on Azure App Service using Basic service tier or higher. Enabling Profiler on Linux is currently only possible via [this method](profiler-aspnetcore-linux.md).
+Application Insights Profiler is pre-installed as part of the App Services runtime. You can run Profiler on ASP.NET and ASP.NET Core apps running on Azure App Service using Basic service tier or higher. Follow these steps even if you've included the App Insights SDK in your application at build time.
-## <a id="installation"></a> Enable Profiler for your app
-To enable Profiler for an app, follow the instructions below. If you're running a different type of Azure service, here are instructions for enabling Profiler on other supported platforms:
-* [Cloud Services](./profiler-cloudservice.md?toc=%2fazure%2fazure-monitor%2ftoc.json)
-* [Service Fabric Applications](./profiler-servicefabric.md?toc=%2fazure%2fazure-monitor%2ftoc.json)
-* [Virtual Machines](./profiler-vm.md?toc=%2fazure%2fazure-monitor%2ftoc.json)
-
-Application Insights Profiler is pre-installed as part of the App Services runtime. The steps below will show you how to enable it for your App Service. Follow these steps even if you've included the App Insights SDK in your application at build time.
+To enable Profiler on Linux, walk through the [ASP.NET Core Azure Linux web apps instructions](profiler-aspnetcore-linux.md).
> [!NOTE]
-> Codeless installation of Application Insights Profiler follows the .NET Core support policy.
-> For more information about supported runtimes, see [.NET Core Support Policy](https://dotnet.microsoft.com/platform/support/policy/dotnet-core).
+> Codeless installation of Application Insights Profiler follows the .NET Core support policy.
+> For more information about supported runtimes, see [.NET Core Support Policy](https://dotnet.microsoft.com/platform/support/policy/dotnet-core).
++
+## Prerequisites
+
+- An [Azure App Services ASP.NET/ASP.NET Core app](/app-service/quickstart-dotnetcore.md).
+- [Application Insights resource](./create-new-resource.md) connected to your App Service app.
+
+## Verify "Always On" setting is enabled
+
+1. In the Azure portal, navigate to your App Service.
+1. Under **Settings** in the left side menu, select **Configuration**.
+
+ :::image type="content" source="./media/profiler/configuration-menu.png" alt-text="Screenshot of selecting Configuration from the left side menu.":::
-1. Navigate to the Azure control panel for your App Service.
-1. Enable "Always On" setting for your app service. You can find this setting under **Settings**, **Configuration** page (see screenshot in the next step), and select the **General settings** tab.
-1. Navigate to **Settings > Application Insights** page.
+1. Select the **General settings** tab.
+1. Verify **Always On** > **On** is selected.
- ![Enable App Insights on App Services portal](./media/profiler/AppInsights-AppServices.png)
+ :::image type="content" source="./media/profiler/always-on.png" alt-text="Screenshot of the General tab on the Configuration pane and showing the Always On being enabled.":::
-1. Either follow the instructions on the pane to create a new resource or select an existing App Insights resource to monitor your app. Also make sure the Profiler is **On**. If your Application Insights resource is in a different subscription from your App Service, you can't use this page to configure Application Insights. You can still do it manually though by creating the necessary app settings manually. [The next section contains instructions for manually enabling Profiler.](#enable-profiler-manually-or-with-azure-resource-manager)
+1. Select **Save** if you've made changes.
- ![Add App Insights site extension][Enablement UI]
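If you prefer to script this setting instead of using the portal, here's a hedged Azure CLI sketch (resource names are placeholders):

```azurecli
# Turn on Always On for the App Service (resource names are placeholders)
az webapp config set --resource-group MyResourceGroup --name my-web-app --always-on true
```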
+## Enable Application Insights and Profiler
-1. Profiler is now enabled using an App Services App Setting.
+1. Under **Settings** in the left side menu, select **Application Insights**.
- ![App Setting for Profiler][profiler-app-setting]
+ :::image type="content" source="./media/profiler/app-insights-menu.png" alt-text="Screenshot of selecting Application Insights from the left side menu.":::
-## Enable Profiler manually or with Azure Resource Manager
-Application Insights Profiler can be enabled by creating app settings for your Azure App Service. The page with the options shown above creates these app settings for you. But you can automate the creation of these settings using a template or other means. These settings will also work if your Application Insights resource is in a different subscription from your Azure App Service.
-Here are the settings needed to enable the profiler:
+1. Under **Application Insights**, select **Enable**.
+1. Verify you've connected an Application Insights resource to your app.
+
+ :::image type="content" source="./media/profiler/enable-app-insights.png" alt-text="Screenshot of enabling App Insights on your app.":::
+
+1. Scroll down and select the **.NET** or **.NET Core** tab, depending on your app.
+1. Verify **Collection Level** > **Recommended** is selected.
+1. Under **Profiler**, select **On**.
+ - If you chose the **Basic** collection level earlier, the Profiler setting is disabled.
+1. Select **Apply**, then **Yes** to confirm.
+
+ :::image type="content" source="./media/profiler/enable-profiler.png" alt-text="Screenshot of enabling Profiler on your app.":::
+
+## Enable Profiler manually
+
+If your Application Insights resource is in a different subscription from your App Service, you'll need to enable Profiler manually by creating app settings for your Azure App Service. You can automate the creation of these settings using a template or other means (a CLI sketch follows the list below). The following settings are needed to enable Profiler:
|App Setting | Value | ||-|
Here are the settings needed to enable the profiler:
|APPINSIGHTS_PROFILERFEATURE_VERSION | 1.0.0 | |DiagnosticServices_EXTENSION_VERSION | ~3 | -
-You can set these values using [Azure Resource Manager Templates](./azure-web-apps-net-core.md#app-service-application-settings-with-azure-resource-manager), [Azure PowerShell](/powershell/module/az.websites/set-azwebapp), [Azure CLI](/cli/azure/webapp/config/appsettings).
+Set these values using:
+- [Azure Resource Manager Templates](./azure-web-apps-net-core.md#app-service-application-settings-with-azure-resource-manager)
+- [Azure PowerShell](/powershell/module/az.websites/set-azwebapp)
+- [Azure CLI](/cli/azure/webapp/config/appsettings)
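For example, a hedged Azure CLI sketch that applies the settings from the table above (resource names are placeholders):

```azurecli
# Apply the Profiler app settings to the App Service (resource names are placeholders)
az webapp config appsettings set \
  --resource-group MyResourceGroup \
  --name my-web-app \
  --settings APPINSIGHTS_PROFILERFEATURE_VERSION=1.0.0 DiagnosticServices_EXTENSION_VERSION=~3
```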
## Enable Profiler for other clouds
Currently the only regions that require endpoint modifications are [Azure Govern
## Enable Azure Active Directory authentication for profile ingestion
-Application Insights Profiler supports Azure AD authentication for profiles ingestion. This means, for all profiles of your application to be ingested, your application must be authenticated and provide the required application settings to the Profiler agent.
+Application Insights Profiler supports Azure AD authentication for profiles ingestion. For all profiles of your application to be ingested, your application must be authenticated and provide the required application settings to the Profiler agent.
-As of today, Profiler only supports Azure AD authentication when you reference and configure Azure AD using the Application Insights SDK in your application.
+Profiler only supports Azure AD authentication when you reference and configure Azure AD using the Application Insights SDK in your application.
-Below you can find all the steps required to enable Azure AD for profiles ingestion:
-1. Create and add the managed identity you want to use to authenticate against your Application Insights resource to your App Service.
+To enable Azure AD for profiles ingestion:
- a. For System-Assigned Managed identity, see the following [documentation](../../app-service/overview-managed-identity.md?tabs=portal%2chttp#add-a-system-assigned-identity)
+1. Create and add the managed identity to authenticate against your Application Insights resource to your App Service.
- b. For User-Assigned Managed identity, see the following [documentation](../../app-service/overview-managed-identity.md?tabs=portal%2chttp#add-a-user-assigned-identity)
+ a. [System-Assigned Managed identity documentation](../../app-service/overview-managed-identity.md?tabs=portal%2chttp#add-a-system-assigned-identity)
-2. Configure and enable Azure AD in your Application Insights resource. For more information, see the following [documentation](./azure-ad-authentication.md?tabs=net#configuring-and-enabling-azure-ad-based-authentication)
-3. Add the following application setting, used to let Profiler agent know which managed identity to use:
+ b. [User-Assigned Managed identity documentation](../../app-service/overview-managed-identity.md?tabs=portal%2chttp#add-a-user-assigned-identity)
-For System-Assigned Identity:
+1. [Configure and enable Azure AD](./azure-ad-authentication.md?tabs=net#configuring-and-enabling-azure-ad-based-authentication) in your Application Insights resource.
-|App Setting | Value |
-||-|
-|APPLICATIONINSIGHTS_AUTHENTICATION_STRING | Authorization=AAD |
+1. Add the following application setting to let the Profiler agent know which managed identity to use:
-For User-Assigned Identity:
+ For System-Assigned Identity:
-|App Setting | Value |
-||-|
-|APPLICATIONINSIGHTS_AUTHENTICATION_STRING | Authorization=AAD;ClientId={Client id of the User-Assigned Identity} |
+ |App Setting | Value |
+ ||-|
+ |APPLICATIONINSIGHTS_AUTHENTICATION_STRING | Authorization=AAD |
+
+ For User-Assigned Identity:
+
+ |App Setting | Value |
+ ||-|
+ |APPLICATIONINSIGHTS_AUTHENTICATION_STRING | Authorization=AAD;ClientId={Client id of the User-Assigned Identity} |
## Disable Profiler
-To stop or restart Profiler for an individual app's instance, on the left sidebar, select **WebJobs** and stop the webjob named `ApplicationInsightsProfiler3`.
+To stop or restart Profiler for an individual app's instance:
+
+1. Under **Settings** in the left side menu, select **WebJobs**.
- ![Disable Profiler for a web job][disable-profiler-webjob]
+ :::image type="content" source="./media/profiler/web-jobs-menu.png" alt-text="Screenshot of selecting web jobs from the left side menu.":::
+
+1. Select the webjob named `ApplicationInsightsProfiler3`.
+
+1. Click **Stop** from the top menu.
+
+ :::image type="content" source="./media/profiler/stop-web-job.png" alt-text="Screenshot of selecting stop for stopping the webjob.":::
+
+1. Select **Yes** to confirm.
We recommend that you have Profiler enabled on all your apps to discover any performance issues as early as possible. Profiler's files can be deleted when using WebDeploy to deploy changes to your web application. You can prevent the deletion by excluding the App_Data folder from being deleted during deployment. - ## Next steps * [Working with Application Insights in Visual Studio](./visual-studio.md)-
-[Enablement UI]: ./media/profiler/Enablement_UI.png
-[profiler-app-setting]:./media/profiler/profiler-app-setting.png
-[disable-profiler-webjob]: ./media/profiler/disable-profiler-webjob.png
azure-monitor Status Monitor V2 Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/status-monitor-v2-overview.md
It replaces Status Monitor.
Telemetry is sent to the Azure portal, where you can [monitor](./app-insights-overview.md) your app. > [!NOTE]
-> The module currently supports codeless instrumentation of .NET and .NET Core web apps hosted with IIS. Use an SDK to instrument Java and Node.js applications.
+> The module currently supports codeless instrumentation of ASP.NET and ASP.NET Core web apps hosted with IIS. Use an SDK to instrument Java and Node.js applications.
## PowerShell Gallery
azure-monitor Tutorial Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-performance.md
To complete this tutorial:
- ASP.NET and web development - Azure development - Deploy a .NET application to Azure and [enable the Application Insights SDK](../app/asp-net.md).-- [Enable the Application Insights profiler](../app/profiler.md#installation) for your application.
+- [Enable the Application Insights profiler](../app/profiler.md) for your application.
## Log in to Azure Log in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
azure-monitor Change Analysis Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-query.md
+
+ Title: Pin and share a Change Analysis query to the Azure dashboard
+description: Learn how to pin an Azure Monitor Change Analysis query to the Azure dashboard and share with your team.
+++
+ms.contributor: cawa
Last updated : 05/12/2022 ++++
+# Pin and share a Change Analysis query to the Azure dashboard
+
+Let's say you want to curate a change view on specific resources, like all Virtual Machine changes in your subscription, and include it in a report sent periodically. You can pin the view to an Azure dashboard for monitoring or sharing scenarios. If you'd like to share a specific change with your team members, you can use the share feature in the Change Details page.
+
+## Pin to the Azure dashboard
+
+Once you have applied filters to the Change Analysis homepage:
+
+1. Select **Pin current filters** from the top menu.
+1. Enter a name for the pin.
+1. Click **OK** to proceed.
+
+ :::image type="content" source="./media/change-analysis/click-pin-menu.png" alt-text="Screenshot of selecting Pin current filters button in Change Analysis":::
+
+A side pane will open to configure the dashboard where you'll place your pin. You can select one of two dashboard types:
+
+| Dashboard type | Description |
+| -- | -- |
+| Private | Only you can access a private dashboard. Choose this option if you're creating the pin for your own easy access to the changes. |
+| Shared | A shared dashboard supports role-based access control for view/read access. Shared dashboards are created as a resource in your subscription with a region and resource group to host it. Choose this option if you're creating the pin to share with your team. |
+
+### Select an existing dashboard
+
+If you already have a dashboard to place the pin:
+
+1. Select the **Existing** tab.
+1. Select either **Private** or **Shared**.
+1. Select the dashboard you'd like to use.
+1. If you've selected **Shared**, select the subscription in which you'd like to place the dashboard.
+1. Select **Pin**.
+
+ :::image type="content" source="./media/change-analysis/existing-dashboard-small.png" alt-text="Screenshot of selecting an existing dashboard to pin your changes to. ":::
+
+### Create a new dashboard
+
+You can create a new dashboard for this pin.
+
+1. Select the **Create new** tab.
+1. Select either **Private** or **Shared**.
+1. Enter the name of the new dashboard.
+1. If you're creating a shared dashboard, enter the resource group and region information.
+1. Click **Create and pin**.
+
+ :::image type="content" source="./media/change-analysis/create-pin-dashboard-small.png" alt-text="Screenshot of creating a new dashboard to pin your changes to.":::
+
+Once the dashboard and pin are created, navigate to the Azure dashboard to view them.
+
+1. From the Azure portal home menu, select **Dashboard**.
+1. Use the **Manage Sharing** button in the top menu to handle access or to unshare.
+1. Select the pin to navigate to the curated view of changes.
+
+ :::image type="content" source="./media/change-analysis/azure-dashboard.png" alt-text="Screenshot of selecting the Dashboard in the Azure portal home menu.":::
+
+ :::image type="content" source="./media/change-analysis/view-share-dashboard.png" alt-text="Screenshot of the pin in the dashboard.":::
+
+## Share a single change with your team
+
+In the Change Analysis homepage, select a line of change to view details on the change.
+
+1. On the Changed properties page, select **Share** from the top menu.
+1. On the Share Change Details pane, copy the deep link of the page and share with your team in messages, emails, reports, or whichever communication channel your team prefers.
+
+ :::image type="content" source="./media/change-analysis/share-single-change.png" alt-text="Screenshot of selecting the share button on the dashboard and copying link.":::
+++
+## Next steps
+
+Learn more about [Change Analysis](change-analysis.md).
azure-monitor Change Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis.md
ms.contributor: cawa Previously updated : 04/18/2022 Last updated : 05/20/2022
Building on the power of [Azure Resource Graph](../../governance/resource-graph/
> > For more information, see [Supplemental terms of use for Microsoft Azure previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> [!NOTE]
+> Change Analysis is currently only available in Public Azure Cloud.
+ ## Overview Change Analysis detects various types of changes, from the infrastructure layer through application deployment. Change Analysis is a subscription-level Azure resource provider that:
azure-monitor Container Insights Logging V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-logging-v2.md
+
+ Title: Configure ContainerLogv2 schema (preview) for Container Insights
+description: Switch your ContainerLog table to the ContainerLogv2 schema
++++++ Last updated : 05/11/2022++
+# Enable ContainerLogV2 schema (preview)
+Azure Monitor Container Insights now offers a public preview of a new schema for container logs, called ContainerLogV2. This schema includes new fields that make it easier to run common queries against AKS (Azure Kubernetes Service) and Azure Arc-enabled Kubernetes data. In addition, this schema is compatible with [Basic Logs](../logs/basic-logs-configure.md), which offers a low-cost alternative to standard analytics logs.
+
+> [!NOTE]
+> The ContainerLogV2 schema is currently a preview feature. Some capabilities may be limited in the Container Insights portal experience.
+
+>[!NOTE]
+>The new fields are:
+>* ContainerName
+>* PodName
+>* PodNamespace
+
+## ContainerLogV2 schema
+```kusto
+ Computer: string,
+ ContainerId: string,
+ ContainerName: string,
+ PodName: string,
+ PodNamespace: string,
+ LogMessage: dynamic,
+ LogSource: string,
+ TimeGenerated: datetime
+```
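As an illustrative example (not from the article), once data flows into the new schema, the added fields can be used directly in log queries; the namespace and container names below are hypothetical:

```kusto
// Filter recent logs by namespace and container using the new ContainerLogV2 fields
ContainerLogV2
| where TimeGenerated > ago(1h)
| where PodNamespace == "default" and ContainerName == "my-container"
| project TimeGenerated, Computer, PodName, LogSource, LogMessage
```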
+## Enable ContainerLogV2 schema
+1. You can enable the ContainerLogV2 schema at the cluster level.
+2. To enable the ContainerLogV2 schema, configure the cluster's ConfigMap. Learn more about [ConfigMaps](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/) in the Kubernetes documentation and about the [Azure Monitor ConfigMap](./container-insights-agent-config.md#configmap-file-settings-overview).
+3. Follow the instructions below, depending on whether you're configuring an existing ConfigMap or using a new one.
+
+### Configuring an existing ConfigMap
+When configuring an existing ConfigMap, append the following section to your existing ConfigMap YAML file:
+
+```yaml
+[log_collection_settings.schema]
+ # In the absence of this configmap, default value for containerlog_schema_version is "v1"
+ # Supported values for this setting are "v1","v2"
+ # See documentation for benefits of v2 schema over v1 schema before opting for "v2" schema
+ containerlog_schema_version = "v2"
+```
+
+### Configuring a new ConfigMap
+1. Download the new ConfigMap from [here](https://aka.ms/container-azm-ms-agentconfig). In the newly downloaded ConfigMap, the default value for `containerlog_schema_version` is `"v1"`.
+1. Update the setting to `containerlog_schema_version = "v2"`:
+
+ ```yaml
+ [log_collection_settings.schema]
+ # In the absence of this configmap, default value for containerlog_schema_version is "v1"
+ # Supported values for this setting are "v1","v2"
+ # See documentation for benefits of v2 schema over v1 schema before opting for "v2" schema
+ containerlog_schema_version = "v2"
+ ```
+1. Once you've finished configuring the ConfigMap, apply it to your cluster by running the following kubectl command: `kubectl apply -f <configname>`
+
+>[!TIP]
+>Example: `kubectl apply -f container-azm-ms-agentconfig.yaml`
+
+>[!NOTE]
+>* The configuration change can take a few minutes to take effect, and all omsagent pods in the cluster will restart.
+>* The restart is a rolling restart for all omsagent pods; they won't all restart at the same time.
+
+## Next steps
+* Configure [Basic Logs](../logs/basic-logs-configure.md) for ContainerLogv2
azure-monitor Container Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-overview.md
Container insights delivers a comprehensive monitoring experience using differen
>Support for Azure Red Hat OpenShift is a feature in public preview at this time. >
-* Monitor container workloads [deployed to Azure Arc-enabled Kubernetes (preview)](../../azure-arc/kubernetes/overview.md).
+* Monitor container workloads [deployed to Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/overview.md).
The main differences in monitoring a Windows Server cluster compared to a Linux cluster are the following:
azure-monitor Container Insights Prometheus Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-prometheus-integration.md
Perform the following steps to configure your ConfigMap configuration file for t
Example: `kubectl apply -f container-azm-ms-agentconfig.yaml`.
-The configuration change can take a few minutes to finish before taking effect, and all omsagent pods in the cluster will restart. The restart is a rolling restart for all omsagent pods, not all restart at the same time. When the restarts are finished, a message is displayed that's similar to the following and includes the result: `configmap "container-azm-ms-agentconfig" created`.
+The configuration change can take a few minutes to finish before taking effect. You must restart all omsagent pods manually. When the restarts are finished, a message is displayed that's similar to the following and includes the result: `configmap "container-azm-ms-agentconfig" created`.
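One hedged way to do that restart, assuming the agent runs as the `omsagent` DaemonSet and the `omsagent-rs` Deployment in the `kube-system` namespace (names may differ in your cluster):

```bash
# Rolling-restart the Container insights agent pods (resource names are assumptions)
kubectl rollout restart daemonset/omsagent -n kube-system
kubectl rollout restart deployment/omsagent-rs -n kube-system
```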
## Configure and deploy ConfigMaps - Azure Red Hat OpenShift v3
azure-monitor Resource Manager Container Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/resource-manager-container-insights.md
description: Sample Azure Resource Manager templates to deploy and configureCont
Previously updated : 05/18/2020 Last updated : 05/05/2022 # Resource Manager template samples for Container insights+ This article includes sample [Azure Resource Manager templates](../../azure-resource-manager/templates/syntax.md) to deploy and configure the Log Analytics agent for virtual machines in Azure Monitor. Each sample includes a template file and a parameters file with sample values to provide to the template. [!INCLUDE [azure-monitor-samples](../../../includes/azure-monitor-resource-manager-samples.md)] - ## Enable for AKS cluster
-The following sample enables Container insights on an AKS cluster.
+The following sample enables Container insights on an AKS cluster.
### Template file
+# [Bicep](#tab/bicep)
+
+```bicep
+@description('AKS Cluster Resource ID')
+param aksResourceId string
+
+@description('Location of the AKS resource e.g. "East US"')
+param aksResourceLocation string
+
+@description('Existing all tags on AKS Cluster Resource')
+param aksResourceTagValues object
+
+@description('Azure Monitor Log Analytics Resource ID')
+param workspaceResourceId string
+
+resource aksResourceId_8 'Microsoft.ContainerService/managedClusters@2022-01-02-preview' = {
+ name: split(aksResourceId, '/')[8]
+ location: aksResourceLocation
+ tags: aksResourceTagValues
+ properties: {
+ addonProfiles: {
+ omsagent: {
+ enabled: true
+ config: {
+ logAnalyticsWorkspaceResourceID: workspaceResourceId
+ }
+ }
+ }
+ }
+}
+
+```
+
+# [JSON](#tab/json)
+ ```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0", "parameters": { "aksResourceId": {
The following sample enables Container insights on an AKS cluster.
}, "resources": [ {
- "name": "[split(parameters('aksResourceId'),'/')[8]]",
"type": "Microsoft.ContainerService/managedClusters",
+ "apiVersion": "2022-01-02-preview",
+ "name": "[split(parameters('aksResourceId'), '/')[8]]",
"location": "[parameters('aksResourceLocation')]", "tags": "[parameters('aksResourceTagValues')]",
- "apiVersion": "2018-03-31",
"properties": {
- "mode": "Incremental",
- "id": "[parameters('aksResourceId')]",
"addonProfiles": { "omsagent": { "enabled": true,
The following sample enables Container insights on an AKS cluster.
} ``` ++ ### Parameter file ```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0", "parameters": { "aksResourceId": {
The following sample enables Container insights on an AKS cluster.
} ``` - ## Enable for new Azure Red Hat OpenShift v3 cluster ### Template file
+# [Bicep](#tab/bicep)
+
+```bicep
+@description('Location')
+param location string
+
+@description('Unique name for the cluster')
+param clusterName string
+
+@description('number of master nodes')
+param masterNodeCount int = 3
+
+@description('number of compute nodes')
+param computeNodeCount int = 3
+
+@description('number of infra nodes')
+param infraNodeCount int = 3
+
+@description('The ID of an Azure Active Directory tenant')
+param aadTenantId string
+
+@description('The ID of an Azure Active Directory client application')
+param aadClientId string
+
+@description('The secret of an Azure Active Directory client application')
+@secure()
+param aadClientSecret string
+
+@description('The Object ID of an Azure Active Directory Group that memberships will get synced into the OpenShift group \'osa-customer-admins\'. If not specified, no cluster admin access will be granted.')
+param aadCustomerAdminGroupId string
+
+@description('Azure ResourceId of an existing Log Analytics Workspace')
+param workspaceResourceId string
+
+resource clusterName_resource 'Microsoft.ContainerService/openShiftManagedClusters@2019-10-27-preview' = {
+ location: location
+ name: clusterName
+ properties: {
+ openShiftVersion: 'v3.11'
+ networkProfile: {
+ vnetCidr: '10.0.0.0/8'
+ }
+ authProfile: {
+ identityProviders: [
+ {
+ name: 'Azure AD'
+ provider: {
+ kind: 'AADIdentityProvider'
+ clientId: aadClientId
+ secret: aadClientSecret
+ tenantId: aadTenantId
+ customerAdminGroupId: aadCustomerAdminGroupId
+ }
+ }
+ ]
+ }
+ masterPoolProfile: {
+ count: masterNodeCount
+ subnetCidr: '10.0.0.0/24'
+ vmSize: 'Standard_D4s_v3'
+ }
+ agentPoolProfiles: [
+ {
+ role: 'compute'
+ name: 'compute'
+ count: computeNodeCount
+ subnetCidr: '10.0.0.0/24'
+ vmSize: 'Standard_D4s_v3'
+ osType: 'Linux'
+ }
+ {
+ role: 'infra'
+ name: 'infra'
+ count: infraNodeCount
+ subnetCidr: '10.0.0.0/24'
+ vmSize: 'Standard_D4s_v3'
+ osType: 'Linux'
+ }
+ ]
+ routerProfiles: [
+ {
+ name: 'default'
+ }
+ ]
+ monitorProfile: {
+ workspaceResourceID: workspaceResourceId
+ enabled: true
+ }
+ }
+}
+
+```
+
+# [JSON](#tab/json)
+ ```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0", "parameters": { "location": {
The following sample enables Container insights on an AKS cluster.
} }, "aadClientSecret": {
- "type": "securestring",
+ "type": "secureString",
"metadata": { "description": "The secret of an Azure Active Directory client application" }
The following sample enables Container insights on an AKS cluster.
}, "resources": [ {
- "location": "[parameters('location')]",
- "name": "[parameters('clusterName')]",
"type": "Microsoft.ContainerService/openShiftManagedClusters",
- "apiVersion": "2019-09-30-preview",
+ "apiVersion": "2019-10-27-preview",
+ "name": "[parameters('clusterName')]",
+ "location": "[parameters('location')]",
"properties": { "openShiftVersion": "v3.11",
- "fqdn": "[concat(parameters('clusterName'), '.', parameters('location'), '.', 'cloudapp.azure.com')]",
"networkProfile": { "vnetCidr": "10.0.0.0/8" },
The following sample enables Container insights on an AKS cluster.
] }, "masterPoolProfile": {
- "name": "master",
"count": "[parameters('masterNodeCount')]", "subnetCidr": "10.0.0.0/24",
- "vmSize": "Standard_D4s_v3",
- "osType": "Linux"
+ "vmSize": "Standard_D4s_v3"
}, "agentPoolProfiles": [ {
The following sample enables Container insights on an AKS cluster.
} ``` ++ ### Parameter file ```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0", "parameters": { "location": {
The following sample enables Container insights on an AKS cluster.
### Template file
+# [Bicep](#tab/bicep)
+
+```bicep
+@description('ARO Cluster Resource ID')
+param aroResourceId string
+
+@description('Location of the aro cluster resource e.g. westcentralus')
+param aroResourceLocation string
+
+@description('Azure Monitor Log Analytics Resource ID')
+param workspaceResourceId string
+
+resource aroResourceId_8 'Microsoft.ContainerService/openShiftManagedClusters@2019-10-27-preview' = {
+ name: split(aroResourceId, '/')[8]
+ location: aroResourceLocation
+ properties: {
+ openShiftVersion: 'v3.11'
+ monitorProfile: {
+ enabled: true
+ workspaceResourceID: workspaceResourceId
+ }
+ }
+}
+```
+
+# [JSON](#tab/json)
+ ```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0", "parameters": { "aroResourceId": {
The following sample enables Container insights on an AKS cluster.
}, "resources": [ {
- "name": "[split(parameters('aroResourceId'),'/')[8]]",
"type": "Microsoft.ContainerService/openShiftManagedClusters",
+ "apiVersion": "2019-10-27-preview",
+ "name": "[split(parameters('aroResourceId'), '/')[8]]",
"location": "[parameters('aroResourceLocation')]",
- "apiVersion": "2019-09-30-preview",
"properties": {
- "mode": "Incremental",
- "id": "[parameters('aroResourceId')]",
+ "openShiftVersion": "v3.11",
"monitorProfile": { "enabled": true, "workspaceResourceID": "[parameters('workspaceResourceId')]"
The following sample enables Container insights on an AKS cluster.
} ``` ++ ### Parameter file ```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0", "parameters": { "aroResourceId": {
azure-monitor Analyze Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/analyze-usage.md
SecurityEvent
```kusto Usage | where Solution == "LogManagement" and iff(isnotnull(toint(IsBillable)), IsBillable == true, IsBillable == "true") == true
-| summarize AggregatedValue = count() by DataType`
+| summarize AggregatedValue = count() by DataType
``` **Perf** data type ```kusto Perf
-| summarize AggregatedValue = count() by CounterPath`
+| summarize AggregatedValue = count() by CounterPath
``` ```kusto
azure-monitor Basic Logs Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md
Title: Configure Basic Logs in Azure Monitor (Preview) description: Configure a table for Basic Logs in Azure Monitor. + Last updated 05/15/2022- # Configure Basic Logs in Azure Monitor (Preview)
All tables in your Log Analytics are Analytics tables, by default. You can confi
You can currently configure the following tables for Basic Logs: - All tables created with the [Data Collection Rule (DCR)-based custom logs API.](custom-logs-overview.md) -- [ContainerLog](/azure/azure-monitor/reference/tables/containerlog) and [ContainerLogV2](/azure/azure-monitor/reference/tables/containerlogv2), which [Container Insights](../containers/container-insights-overview.md) uses and which include verbose text-based log records.
+- [ContainerLogV2](/azure/azure-monitor/reference/tables/containerlogv2), which [Container Insights](../containers/container-insights-overview.md) uses and which include verbose text-based log records.
- [AppTraces](/azure/azure-monitor/reference/tables/apptraces), which contains freeform log records for application traces in Application Insights. > [!NOTE]
azure-monitor Cost Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cost-logs.md
The default pricing for Log Analytics is a Pay-As-You-Go model that's based on i
## Data size calculation Data volume is measured as the size of the data that will be stored in GB (10^9 bytes). The data size of a single record is calculated from a string representation of the columns that are stored in the Log Analytics workspace for that record, regardless of whether the data is sent from an agent or added during the ingestion process. This includes any custom columns added by the [custom logs API](custom-logs-overview.md), [ingestion-time transformations](ingestion-time-transformations.md), or [custom fields](custom-fields.md) that are added as data is collected and then stored in the workspace.
+>[!NOTE]
+>The billable data volume calculation is substantially smaller than the size of the entire incoming JSON-packaged event, often less than 50%. It is essential to understand this calculation of billed data size when estimating costs and comparing to other pricing models.
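As an illustrative sketch (not from the article), the standard `_BilledSize` and `_IsBillable` columns show the calculated billable size directly, which you can compare against the raw payload you sent:

```kusto
// Billable volume per table over the last day, in GB (10^9 bytes)
union withsource = TableName *
| where TimeGenerated > ago(1d) and _IsBillable == true
| summarize BilledGB = sum(_BilledSize) / 1e9 by TableName
| order by BilledGB desc
```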
+ ### Excluded columns The following [standard columns](log-standard-columns.md) that are common to all tables, are excluded in the calculation of the record size. All other columns stored in Log Analytics are included in the calculation of the record size.
azure-monitor Log Analytics Workspace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-analytics-workspace-overview.md
The following table summarizes the differences between the plans.
| Category | Analytics Logs | Basic Logs | |:|:|:| | Ingestion | Cost for ingestion. | Reduced cost for ingestion. |
-| Log queries | No additional cost. Full query capabilities. | Additional cost. [Subset of query capabilities](basic-logs-query.md#limitations). |
+| Log queries | No additional cost. Full query capabilities. | Additional cost.<br>[Subset of query capabilities](basic-logs-query.md#limitations). |
| Retention | Configure retention from 30 days to 730 days. | Retention fixed at 8 days. | | Alerts | Supported. | Not supported. |
azure-monitor Logs Dedicated Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-dedicated-clusters.md
Log Analytics Dedicated Clusters use a commitment tier pricing model of at least
## Create a dedicated cluster
-You must specify the following properties when you create a new dedicated cluster:
+Provide the following properties when creating new dedicated cluster:
-- **ClusterName**-- **ResourceGroupName**: You should use a central IT resource group because clusters are usually shared by many teams in the organization. For more design considerations, review [Designing your Azure Monitor Logs deployment](../logs/design-logs-deployment.md).
+- **ClusterName**--must be unique per resource group
+- **ResourceGroupName**--use central IT resource group since clusters are usually shared by many teams in the organization. For more design considerations, review [Designing your Azure Monitor Logs deployment](../logs/design-logs-deployment.md).
- **Location**-- **SkuCapacity**: The Commitment Tier (formerly called capacity reservations) can be set to 500, 1000, 2000 or 5000 GB/day. For more information on cluster costs, see [Dedicate clusters](./cost-logs.md#dedicated-clusters).
+- **SkuCapacity**--the Commitment Tier (formerly called capacity reservations) can be set to 500, 1000, 2000, or 5000 GB/day. For more information on cluster costs, see [Dedicated clusters](./cost-logs.md#dedicated-clusters).
The user account that creates the clusters must have the standard Azure resource creation permission `Microsoft.Resources/deployments/*` and the cluster write permission `Microsoft.OperationalInsights/clusters/write`, granted through a role assignment that includes that specific action, `Microsoft.OperationalInsights/*`, or `*/write`.
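The same properties can also be expressed declaratively in a Bicep template. The following is a rough sketch only, with placeholder parameter names; it is not the article's own procedure:

```bicep
// Minimal sketch of a dedicated Log Analytics cluster (placeholder names).
param clusterName string
param location string = resourceGroup().location

@allowed([500, 1000, 2000, 5000])
param commitmentTierGbPerDay int = 500

resource cluster 'Microsoft.OperationalInsights/clusters@2021-06-01' = {
  name: clusterName
  location: location
  identity: {
    type: 'SystemAssigned' // managed identity, used for example with customer-managed keys
  }
  sku: {
    name: 'CapacityReservation'
    capacity: commitmentTierGbPerDay // Commitment Tier in GB/day
  }
}
```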
azure-monitor Tutorial Custom Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-custom-logs.md
The following PowerShell script both generates sample data to configure the cust
} ```
-3. Copy the sample log data from [sample data](#sample-data) or copy your own Apache log data into a file called *sample_access.log*.
+3. Copy the sample log data from [sample data](#sample-data) or copy your own Apache log data into a file called `sample_access.log`.
+
+4. To read the data in the file and create a JSON file called `data_sample.json` that you can send to the custom logs API, run:
   ```PowerShell
   .\LogGenerator.ps1 -Log "sample_access.log" -Type "file" -Output "data_sample.json"
   ```
-4. Run the script using the following command to read this data and create a JSON file called *data_sample.json* that you can send to the custom logs API.
## Add custom log table

Before you can send data to the workspace, you need to create the custom table that the data will be sent to.
azure-netapp-files Azacsnap Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-release-notes.md
na Previously updated : 03/08/2022 Last updated : 05/24/2022
This page lists major changes made to AzAcSnap to provide new functionality or resolve defects.
+## May-2022
+
+### AzAcSnap v5.0.3 (Build: 20220524.14204) - Patch update to v5.0.2
+
+AzAcSnap v5.0.3 (Build: 20220524.14204) is provided as a patch update to the v5.0 branch with the following fix:
+
+- Fix for handling delimited identifiers when querying SAP HANA. This issue only impacted SAP HANA in an HSR-HA configuration when a secondary node is configured with 'logreplay_readaccess', and it has been resolved.
+
+Download the [latest release](https://aka.ms/azacsnapinstaller) of the installer and review how to [get started](azacsnap-get-started.md).
+
+### AzAcSnap v5.1 Preview (Build: 20220524.15550)
+
+AzAcSnap v5.1 Preview (Build: 20220524.15550) is an updated build to extend the preview expiry date for 90 days. This update contains the fix for handling delimited identifiers when querying SAP HANA as provided in v5.0.3.
+
+Read about the [AzAcSnap Preview](azacsnap-preview.md).
+Download the [latest release of the Preview installer](https://aka.ms/azacsnap-preview-installer).
+ ## Mar-2022 ### AzAcSnap v5.1 Preview (Build: 20220302.81795)
AzAcSnap v5.1 Preview (Build: 20220302.81795) has been released with the followi
- Azure Key Vault support for securely storing the Service Principal. - A new option for `-c backup --volume` which has the `all` parameter value.
-Details of these new features are in the AzAcSnap Preview documentation.
-
-Read about the new features and how to use the [AzAcSnap Preview](azacsnap-preview.md).
-Download the [latest release of the Preview installer](https://aka.ms/azacsnap-preview-installer).
- ## Feb-2022 ### AzAcSnap v5.1 Preview (Build: 20220220.55340)
AzAcSnap v5.0.2 (Build: 20210827.19086) is provided as a patch update to the v5.
- Fix the installer's check for the location of the hdbuserstore. The installer would check for the existence of an incorrect source directory for the hdbuserstore for the user running the install - this is fixed to check for `~/.hdb`. This fix is applicable to systems (for example, Azure Large Instance) where the hdbuserstore was pre-configured for the `root` user before installing `azacsnap`. - Installer now shows the version it will install/extract (if the installer is run without any arguments).
-Download the [latest release](https://aka.ms/azacsnapinstaller) of the installer and review how to [get started](azacsnap-get-started.md).
- ## May-2021 ### AzAcSnap v5.0.1 (Build: 20210524.14837) - Patch update to v5.0
azure-netapp-files Azure Netapp Files Create Volumes Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-create-volumes-smb.md
na Previously updated : 03/02/2022 Last updated : 05/18/2022 # Create an SMB volume for Azure NetApp Files
Before creating an SMB volume, you need to create an Active Directory connection
> [!IMPORTANT] > The SMB Continuous Availability feature is currently in public preview. You need to submit a waitlist request for accessing the feature through the **[Azure NetApp Files SMB Continuous Availability Shares Public Preview waitlist submission page](https://aka.ms/anfsmbcasharespreviewsignup)**. Wait for an official confirmation email from the Azure NetApp Files team before using the Continuous Availability feature. >
- You should enable Continuous Availability only for SQL Server and [FSLogix user profile containers](../virtual-desktop/create-fslogix-profile-container.md). Using SMB Continuous Availability shares for workloads other than SQL Server and FSLogix user profile containers is *not* supported. This feature is currently supported on Windows SQL Server. Linux SQL Server is not currently supported. If you are using a non-administrator (domain) account to install SQL Server, ensure that the account has the required security privilege assigned. If the domain account does not have the required security privilege (`SeSecurityPrivilege`), and the privilege cannot be set at the domain level, you can grant the privilege to the account by using the **Security privilege users** field of Active Directory connections. See [Create an Active Directory connection](create-active-directory-connections.md#create-an-active-directory-connection).
+ You should enable Continuous Availability only for Citrix App Layering, SQL Server, and [FSLogix user profile containers](../virtual-desktop/create-fslogix-profile-container.md). Using SMB Continuous Availability shares for workloads other than Citrix App Layering, SQL Server, and FSLogix user profile containers is *not* supported. This feature is currently supported on Windows SQL Server. Linux SQL Server is not currently supported. If you are using a non-administrator (domain) account to install SQL Server, ensure that the account has the required security privilege assigned. If the domain account does not have the required security privilege (`SeSecurityPrivilege`), and the privilege cannot be set at the domain level, you can grant the privilege to the account by using the **Security privilege users** field of Active Directory connections. See [Create an Active Directory connection](create-active-directory-connections.md#create-an-active-directory-connection).
<!-- [1/13/21] Commenting out command-based steps below, because the plan is to use form-based (URL) registration, similar to CRR feature registration --> <!--
azure-netapp-files Backup Requirements Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-requirements-considerations.md
na Previously updated : 03/18/2022 Last updated : 05/23/2022 # Requirements and considerations for Azure NetApp Files backup
Azure NetApp Files backup in a region can only protect an Azure NetApp Files vol
* [Reverting a volume using snapshot revert](snapshots-revert-volume.md) is not supported on Azure NetApp Files volumes that have backups.
+* See [Restore a backup to a new volume](backup-restore-new-volume.md) for additional considerations related to restoring backups.
## Next steps
azure-netapp-files Backup Restore New Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-restore-new-volume.md
na Previously updated : 09/27/2021 Last updated : 05/23/2022 # Restore a backup to a new volume
Restoring a backup creates a new volume with the same protocol type. This articl
## Considerations
+* You can restore backups only within the same NetApp account. Restoring backups across NetApp accounts is not supported.
+
+* You can restore backups to a different capacity pool within the same NetApp account.
+
+* You can restore a backup only to a new volume. You cannot overwrite the existing volume with the backup.
+
* The new volume created by the restore operation cannot be mounted until the restore completes.

* You should trigger the restore operation when there are no baseline backups. Otherwise, the restore might increase the load on the Azure Blob account where your data is backed up.
+See [Requirements and considerations for Azure NetApp Files backup](backup-requirements-considerations.md) for additional considerations about using Azure NetApp Files backup.
+
## Steps

1. Select **Volumes**. Navigate to **Backups**.
azure-netapp-files Enable Continuous Availability Existing SMB https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/enable-continuous-availability-existing-SMB.md
na Previously updated : 02/23/2022 Last updated : 05/18/2022 # Enable Continuous Availability on existing SMB volumes
You can enable the SMB Continuous Availability (CA) feature when you [create a n
1. Make sure that you have [registered the SMB Continuous Availability Shares](https://aka.ms/anfsmbcasharespreviewsignup) feature.
- You should enable Continuous Availability only for SQL Server and [FSLogix user profile containers](../virtual-desktop/create-fslogix-profile-container.md). Using SMB Continuous Availability shares for workloads other than SQL Server and FSLogix user profile containers is *not* supported. This feature is currently supported on Windows SQL Server. Linux SQL Server is not currently supported. If you are using a non-administrator (domain) account to install SQL Server, ensure that the account has the required security privilege assigned. If the domain account does not have the required security privilege (`SeSecurityPrivilege`), and the privilege cannot be set at the domain level, you can grant the privilege to the account by using the **Security privilege users** field of Active Directory connections. See [Create an Active Directory connection](create-active-directory-connections.md#create-an-active-directory-connection).
+ You should enable Continuous Availability only for [Citrix App Layering](https://docs.citrix.com/en-us/citrix-app-layering/4.html), SQL Server, and [FSLogix user profile containers](../virtual-desktop/create-fslogix-profile-container.md). Using SMB Continuous Availability shares for workloads other than Citrix App Layering, SQL Server, and FSLogix user profile containers is *not* supported. This feature is currently supported on Windows SQL Server. Linux SQL Server is not currently supported. If you are using a non-administrator (domain) account to install SQL Server, ensure that the account has the required security privilege assigned. If the domain account does not have the required security privilege (`SeSecurityPrivilege`), and the privilege cannot be set at the domain level, you can grant the privilege to the account by using the **Security privilege users** field of Active Directory connections. See [Create an Active Directory connection](create-active-directory-connections.md#create-an-active-directory-connection).
3. Click the SMB volume that you want to have SMB CA enabled. Then click **Edit**. 4. On the Edit window that appears, select the **Enable Continuous Availability** checkbox.
azure-netapp-files Faq Application Volume Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-application-volume-group.md
Previously updated : 03/09/2022 Last updated : 05/19/2022 # Application volume group FAQs
Creating a volume group involves many different steps, and not all of them can b
In the current implementation, the application volume group has a focus on the initial creation and deletion of a volume group only.
+## Can I clone a volume created with application volume group?
+
+Yes, you can clone a volume created by the application volume group. You can do so by selecting a snapshot and [restoring it to a new volume](snapshots-restore-new-volume.md). Cloning is a process outside of the application volume group workflow. As such, consider the following restrictions:
+
+* When you clone a single volume, none of the dependencies specific to the volume group are checked.
+* The cloned volume is not part of the volume group.
+* The cloned volume is always placed on the same storage endpoint as the source volume.
+* Currently, the listed IP addresses for the mount instructions might not display the optimal IP address as the recommended address for mounting the volume. To achieve the lowest latency for the cloned volume, you need to mount with the same IP address as the source volume.
+
+
## What are the rules behind the proposed throughput for my HANA data and log volumes?

SAP defines the Key Performance Indicators (KPIs) for the HANA data and log volumes as 400 MiB/s for the data volume and 250 MiB/s for the log volume. This definition is independent of the size or the workload of the HANA database. Application volume group scales the throughput values so that even the smallest database meets the SAP HANA KPIs, and larger databases benefit from a higher throughput level, scaling the proposal based on the entered HANA database size.
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
na Previously updated : 04/12/2022 Last updated : 05/18/2022
Azure NetApp Files is updated regularly. This article provides a summary about the latest new features and enhancements.
+## May 2022
+
+* [SMB Continuous Availability (CA) shares support for Citrix App Layering](enable-continuous-availability-existing-smb.md) (Preview)
+
+ [Citrix App Layering](https://docs.citrix.com/en-us/citrix-app-layering/4.html) radically reduces the time it takes to manage Windows applications and images. App Layering separates the management of your OS and apps from your infrastructure. You can install each app and OS patch once, update the associated templates, and redeploy your images. You can publish layered images as open standard virtual disks, usable in any environment. App Layering can be used to provide dynamic access to application layer virtual disks stored on SMB shared networked storage, including Azure NetApp Files. To enhance App Layering resiliency against storage service maintenance events, Azure NetApp Files has extended support for [SMB Transparent Failover via SMB Continuous Availability (CA) shares on Azure NetApp Files](azure-netapp-files-create-volumes-smb.md#continuous-availability) for App Layering virtual disks. For more information, see [Azure NetApp Files Azure Virtual Desktop Infrastructure solutions | Citrix](azure-netapp-files-solution-architectures.md#citrix).
++ ## April 2022 * Features that are now generally available (GA)
azure-portal Azure Portal Safelist Urls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-safelist-urls.md
Title: Allow the Azure portal URLs on your firewall or proxy server description: To optimize connectivity between your network and the Azure portal and its services, we recommend you add these URLs to your allowlist. Previously updated : 03/09/2022 Last updated : 05/24/2022
The URL endpoints to allow for the Azure portal are specific to the Azure cloud
*.usgovcloudapi.net *.usgovtrafficmanager.net *.windowsazure.us
+graph.microsoftazure.us
``` #### [China Government Cloud](#tab/china-government-cloud)
azure-resource-manager Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/best-practices.md
description: Describes practices to follow when creating your Bicep files so the
Previously updated : 05/12/2022 Last updated : 05/16/2022 # Best practices for Bicep
If you would rather learn about Bicep best practices through step-by-step guidan
* It's a good practice to provide descriptions for your parameters. Try to make the descriptions helpful, and provide any important information about what the template needs the parameter values to be.
- You can also use `//` comments for some information.
+ You can also use `//` comments to add notes within your Bicep files.
* You can put parameter declarations anywhere in the template file, although it's usually a good idea to put them at the top of the file so your Bicep code is easy to read.
azure-resource-manager Scenarios Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/scenarios-monitoring.md
+
+ Title: Create monitoring resources by using Bicep
+description: Describes how to create monitoring resources by using Bicep.
+++ Last updated : 05/16/2022+
+# Create monitoring resources by using Bicep
+
+Azure has a comprehensive suite of tools that can monitor your applications and services. You can programmatically create your monitoring resources using Bicep to automate the creation of rules, diagnostic settings, and alerts when provisioning your Azure infrastructure.
+
+Bringing your monitoring configuration into your Bicep code might seem unusual, considering that there are tools available inside the Azure portal to set up alert rules, diagnostic settings and dashboards.
+
+However, alerts and diagnostic settings are essentially the same as your other infrastructure resources. By including them in your Bicep code, you can deploy and test your alerting resources as you would for other Azure resources.
+
+If you use Git or another version control tool to manage your Bicep files, you also gain the benefit of having a history of your monitoring configuration so that you can see how alerts were set up and configured.
+
+## Log Analytics and Application Insights workspaces
+
+You can create Log Analytics workspaces with the resource type [Microsoft.OperationalInsights/workspaces](/azure/templates/microsoft.operationalinsights/workspaces?tabs=bicep) and Application Insights workspaces with the type [Microsoft.Insights/components](/azure/templates/microsoft.insights/components?tabs=bicep). Both of these components are deployed to resource groups.
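As a minimal sketch (resource names are placeholders), declaring both resource types might look like this:

```bicep
param location string = resourceGroup().location

// Log Analytics workspace
resource logAnalyticsWorkspace 'Microsoft.OperationalInsights/workspaces@2021-06-01' = {
  name: 'log-monitoring-example'
  location: location
  properties: {
    sku: {
      name: 'PerGB2018'
    }
    retentionInDays: 30
  }
}

// Workspace-based Application Insights component
resource applicationInsights 'Microsoft.Insights/components@2020-02-02' = {
  name: 'appi-monitoring-example'
  location: location
  kind: 'web'
  properties: {
    Application_Type: 'web'
    WorkspaceResourceId: logAnalyticsWorkspace.id
  }
}
```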
+
+## Diagnostic settings
+
+When creating [diagnostic settings](../../azure-monitor/essentials/diagnostic-settings.md) in Bicep, remember that this resource is an [extension resource](scope-extension-resources.md), which means it's applied to another resource. You can create diagnostic settings in Bicep by using the resource type [Microsoft.Insights/diagnosticSettings](/azure/templates/microsoft.insights/diagnosticsettings?tabs=bicep).
+
+When creating diagnostic settings in Bicep, you need to apply the scope of the diagnostic setting. The scope can be applied at the management, subscription, or resource group level. [Use the scope property on this resource to set the scope for this resource](../../azure-resource-manager/bicep/scope-extension-resources.md).
+
+Consider the following example:
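A minimal sketch of such a diagnostic setting, assuming an existing App Service plan and Log Analytics workspace (the names below are placeholders):

```bicep
param appServicePlanName string
param logAnalyticsWorkspaceName string

resource appServicePlan 'Microsoft.Web/serverfarms@2022-03-01' existing = {
  name: appServicePlanName
}

resource logAnalyticsWorkspace 'Microsoft.OperationalInsights/workspaces@2021-06-01' existing = {
  name: logAnalyticsWorkspaceName
}

resource appServicePlanDiagnostics 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = {
  name: 'send-to-log-analytics'
  scope: appServicePlan // the resource the setting applies to
  properties: {
    workspaceId: logAnalyticsWorkspace.id // destination Log Analytics workspace
    metrics: [
      {
        category: 'AllMetrics' // App Service plans emit metrics only
        enabled: true
      }
    ]
  }
}
```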
+In the preceding example, you create a diagnostic setting for the App Service plan and send those diagnostics to Log Analytics. You can use the `scope` property to define your App Service plan as the scope for your diagnostic setting, and use the `workspaceId` property to define the Log Analytics workspace to send the diagnostic logs to. You can also export diagnostic settings to Event Hubs and Azure Storage Accounts.
+
+Diagnostic settings differ between resources, so ensure that the diagnostic settings you want to create are applicable for the resource you're using.
+
+## Alerts
+
+Alerts proactively notify you when issues are found within your Azure infrastructure and applications by monitoring data within Azure Monitor. By defining your monitoring and alerting configuration in your Bicep code, you can automate the creation of these alerts alongside the infrastructure that you're provisioning in Azure.
+
+For more information about how alerts work in Azure, see [Overview of alerts in Microsoft Azure](../../azure-monitor/alerts/alerts-overview.md).
+
+The following sections demonstrate how you can configure different types of alerts using Bicep code.
+
+### Action groups
+
+To be notified when alerts have been triggered, you need to create an action group. An action group is a collection of notification preferences that are defined by the owner of an Azure subscription. Action groups are used to notify users that an alert has been triggered, or to trigger automated responses to alerts.
+
+To create action groups in Bicep, you can use the type [Microsoft.Insights/actionGroups](/azure/templates/microsoft.insights/actiongroups?tabs=bicep). Here is an example:
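A minimal sketch, with a placeholder group name and email address:

```bicep
resource onCallActionGroup 'Microsoft.Insights/actionGroups@2021-09-01' = {
  name: 'on-call-action-group'
  location: 'Global' // action groups are global resources
  properties: {
    groupShortName: 'oncall' // 12 characters or fewer
    enabled: true
    emailReceivers: [
      {
        name: 'On-call engineer'
        emailAddress: 'oncall@contoso.com' // placeholder address
        useCommonAlertSchema: true
      }
    ]
  }
}
```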
+The preceding example creates an action group that sends alerts to an email address, but you can also define action groups that send alerts to Event Hubs, Azure Functions, Logic Apps and more.
+
+### Alert processing rules
+
+Alert processing rules (previously referred to as action rules) allow you to apply processing on alerts that have fired. You can create alert processing rules in Bicep using the type [Microsoft.AlertsManagement/actionRules](/azure/templates/microsoft.alertsmanagement/actionrules?tabs=bicep).
+
+Each alert processing rule has a scope, which could be a list of one or more specific resources, a specific resource group or your entire Azure subscription. When you define alert processing rules in Bicep, you define a list of resource IDs in the *scope* property, which targets those resources for the alert processing rule.
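As a rough sketch (placeholder names; schema per the 2021-08-08 API version), a rule that adds an existing action group to Azure Backup alerts fired in a resource group might look like this:

```bicep
param actionGroupName string

resource actionGroup 'Microsoft.Insights/actionGroups@2021-09-01' existing = {
  name: actionGroupName
}

resource backupAlertProcessingRule 'Microsoft.AlertsManagement/actionRules@2021-08-08' = {
  name: 'add-action-group-to-backup-alerts'
  location: 'Global'
  properties: {
    enabled: true
    scopes: [
      resourceGroup().id // resources the processing rule applies to
    ]
    conditions: [
      {
        field: 'MonitorService'
        operator: 'Equals'
        values: [
          'Azure Backup'
        ]
      }
    ]
    actions: [
      {
        actionType: 'AddActionGroups'
        actionGroupIds: [
          actionGroup.id
        ]
      }
    ]
  }
}
```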
+In the preceding example, an alert processing rule for the Azure Backup `MonitorService` is defined and applied to the existing action group. Alerts that match the rule are routed to that action group.
+
+### Log alert rules
+
+Log alerts automatically run a Log Analytics query at an interval that you define. The query evaluates resource logs, determines whether the results meet criteria that you specify, and then fires an alert.
+
+You can create log alert rules in Bicep by using the type [Microsoft.Insights/scheduledQueryRules](/azure/templates/microsoft.insights/scheduledqueryrules?tabs=bicep).
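As a rough sketch (placeholder names; assumes an existing workspace and action group), a log alert that fires when no heartbeat records arrive in a 15-minute window might look like this:

```bicep
param workspaceName string
param actionGroupName string
param location string = resourceGroup().location

resource workspace 'Microsoft.OperationalInsights/workspaces@2021-06-01' existing = {
  name: workspaceName
}

resource actionGroup 'Microsoft.Insights/actionGroups@2021-09-01' existing = {
  name: actionGroupName
}

resource heartbeatAlert 'Microsoft.Insights/scheduledQueryRules@2021-08-01' = {
  name: 'no-heartbeat-alert'
  location: location
  properties: {
    displayName: 'No heartbeat received'
    severity: 2
    enabled: true
    scopes: [
      workspace.id
    ]
    evaluationFrequency: 'PT5M' // how often the query runs
    windowSize: 'PT15M'         // time range the query evaluates
    criteria: {
      allOf: [
        {
          query: 'Heartbeat'
          timeAggregation: 'Count'
          operator: 'LessThan'
          threshold: 1 // fire when no rows are returned
          failingPeriods: {
            numberOfEvaluationPeriods: 1
            minFailingPeriodsToAlert: 1
          }
        }
      ]
    }
    actions: {
      actionGroups: [
        actionGroup.id
      ]
    }
  }
}
```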
+
+### Metric alert rules
+
+Metric alerts notify you when one of your metrics crosses a defined threshold. You can define a metric alert rule in your Bicep code by using the type [Microsoft.Insights/metricAlerts](/azure/templates/microsoft.insights/metricalerts?tabs=bicep).
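For example, a sketch of a metric alert (placeholder names; assumes an existing App Service plan and action group) that fires when average CPU exceeds 80%:

```bicep
param appServicePlanName string
param actionGroupName string

resource appServicePlan 'Microsoft.Web/serverfarms@2022-03-01' existing = {
  name: appServicePlanName
}

resource actionGroup 'Microsoft.Insights/actionGroups@2021-09-01' existing = {
  name: actionGroupName
}

resource highCpuAlert 'Microsoft.Insights/metricAlerts@2018-03-01' = {
  name: 'plan-high-cpu'
  location: 'global' // metric alerts are global resources
  properties: {
    description: 'Average CPU of the App Service plan is above 80%.'
    severity: 2
    enabled: true
    scopes: [
      appServicePlan.id
    ]
    evaluationFrequency: 'PT5M'
    windowSize: 'PT15M'
    criteria: {
      'odata.type': 'Microsoft.Azure.Monitor.SingleResourceMultipleMetricCriteria'
      allOf: [
        {
          criterionType: 'StaticThresholdCriterion'
          name: 'HighCpu'
          metricName: 'CpuPercentage'
          operator: 'GreaterThan'
          threshold: 80
          timeAggregation: 'Average'
        }
      ]
    }
    actions: [
      {
        actionGroupId: actionGroup.id
      }
    ]
  }
}
```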
+
+### Activity log alerts
+
+The [Azure activity log](../../azure-monitor/essentials/activity-log.md) is a platform log in Azure that provides insights into events at the subscription level. This includes information such as when a resource in Azure is modified.
+
+Activity log alerts are alerts that are activated when a new activity log event occurs that matches the conditions that are specified in the alert.
+
+You can use the `scope` property within the type [Microsoft.Insights/activityLogAlerts](/azure/templates/microsoft.insights/activitylogalerts?tabs=bicep) to create activity log alerts on a specific resource or a list of resources using the resource IDs as a prefix.
+
+You define your alert rule conditions within the `condition` property, and then configure the action groups to route these alerts to by using the `actionGroups` array. You can pass a single action group or multiple action groups to send activity log alerts to, depending on your requirements.
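For example, a sketch (placeholder names; assumes an existing action group) of an activity log alert that fires when a virtual machine in the resource group is deleted:

```bicep
param actionGroupName string

resource actionGroup 'Microsoft.Insights/actionGroups@2021-09-01' existing = {
  name: actionGroupName
}

resource vmDeleteAlert 'Microsoft.Insights/activityLogAlerts@2020-10-01' = {
  name: 'vm-delete-alert'
  location: 'Global'
  properties: {
    enabled: true
    scopes: [
      resourceGroup().id // resource ID prefix the alert applies to
    ]
    condition: {
      allOf: [
        {
          field: 'category'
          equals: 'Administrative'
        }
        {
          field: 'operationName'
          equals: 'Microsoft.Compute/virtualMachines/delete'
        }
      ]
    }
    actions: {
      actionGroups: [
        {
          actionGroupId: actionGroup.id
        }
      ]
    }
  }
}
```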
+### Resource health alerts
+
+Azure Resource Health keeps you informed about the current and historical health status of your Azure resources. By creating your resource health alerts using Bicep, you can create and customize these alerts in bulk.
+
+In Bicep, you can create resource health alerts with the type [Microsoft.Insights/activityLogAlerts](/azure/templates/microsoft.insights/activitylogalerts?tabs=bicep).
+
+Resource health alerts can be configured to monitor events at the level of a subscription, resource group, or individual resource.
+
+Consider the following example, where you create a resource health alert that reports on the health of resources across your subscription. The alert is applied at the subscription level (using the `scopes` property), and sends alerts to an existing action group:
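A minimal sketch of such an alert, assuming an existing action group (placeholder name):

```bicep
param actionGroupName string

resource actionGroup 'Microsoft.Insights/actionGroups@2021-09-01' existing = {
  name: actionGroupName
}

resource resourceHealthAlert 'Microsoft.Insights/activityLogAlerts@2020-10-01' = {
  name: 'resource-health-alert'
  location: 'Global'
  properties: {
    enabled: true
    scopes: [
      subscription().id // alert on resource health events across the subscription
    ]
    condition: {
      allOf: [
        {
          field: 'category'
          equals: 'ResourceHealth'
        }
      ]
    }
    actions: {
      actionGroups: [
        {
          actionGroupId: actionGroup.id
        }
      ]
    }
  }
}
```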
+### Smart detection alerts
+
+Smart detection alerts warn you of potential performance problems and failure anomalies in your web application. You can create smart detection alerts in Bicep using the type [Microsoft.AlertsManagement/smartDetectorAlertRules](/azure/templates/microsoft.alertsmanagement/smartdetectoralertrules?tabs=bicep).
+
+## Dashboards
+
+In Bicep, you can create portal dashboards by using the resource type [Microsoft.Portal/dashboards](/azure/templates/microsoft.portal/dashboards?tabs=bicep).
+
+For more information about creating dashboards with code, see [Programmatically create an Azure Dashboard](../../azure-portal/azure-portal-dashboards-create-programmatically.md).
+
+## Autoscale rules
+
+To create an autoscale setting, define it by using the resource type [Microsoft.Insights/autoscaleSettings](/azure/templates/microsoft.insights/autoscalesettings?tabs=bicep).
+
+To target the resource that you want to apply the autoscaling setting to, you need to provide the target resource identifier of the resource that the setting should be added to.
+
+In this example, you define a *scale-out* condition for the App Service plan based on the average CPU percentage over a 10-minute time period. If the App Service plan exceeds 70% average CPU consumption over 10 minutes, the autoscale engine scales out the plan by adding one instance.
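A sketch of such an autoscale setting (placeholder names; assumes an existing App Service plan):

```bicep
param appServicePlanName string
param location string = resourceGroup().location

resource appServicePlan 'Microsoft.Web/serverfarms@2022-03-01' existing = {
  name: appServicePlanName
}

resource scaleOutOnCpu 'Microsoft.Insights/autoscaleSettings@2022-10-01' = {
  name: 'scale-out-on-cpu'
  location: location
  properties: {
    enabled: true
    targetResourceUri: appServicePlan.id
    profiles: [
      {
        name: 'default'
        capacity: {
          minimum: '1'
          maximum: '3'
          default: '1'
        }
        rules: [
          {
            metricTrigger: {
              metricName: 'CpuPercentage'
              metricResourceUri: appServicePlan.id
              timeGrain: 'PT1M'
              statistic: 'Average'
              timeWindow: 'PT10M'        // evaluate the last 10 minutes
              timeAggregation: 'Average'
              operator: 'GreaterThan'
              threshold: 70              // 70% average CPU
            }
            scaleAction: {
              direction: 'Increase'
              type: 'ChangeCount'
              value: '1'                 // add one instance
              cooldown: 'PT10M'
            }
          }
        ]
      }
    ]
  }
}
```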
+> [!NOTE]
+> When defining autoscaling rules, keep best practices in mind to avoid issues when attempting to autoscale, such as flapping. For more information, see the following documentation on [best practices for Autoscale](../../azure-monitor/autoscale/autoscale-best-practices.md).
+
+## Related resources
+
+- Resource documentation
+ - [Microsoft.OperationalInsights/workspaces](/azure/templates/microsoft.operationalinsights/workspaces?tabs=bicep)
+ - [Microsoft.Insights/components](/azure/templates/microsoft.insights/components?tabs=bicep)
+ - [Microsoft.Insights/diagnosticSettings](/azure/templates/microsoft.insights/diagnosticsettings?tabs=bicep)
+ - [Microsoft.Insights/actionGroups](/azure/templates/microsoft.insights/actiongroups?tabs=bicep)
+ - [Microsoft.Insights/scheduledQueryRules](/azure/templates/microsoft.insights/scheduledqueryrules?tabs=bicep)
+ - [Microsoft.Insights/metricAlerts](/azure/templates/microsoft.insights/metricalerts?tabs=bicep)
+ - [Microsoft.Portal/dashboards](/azure/templates/microsoft.portal/dashboards?tabs=bicep)
+ - [Microsoft.Insights/activityLogAlerts](/azure/templates/microsoft.insights/activitylogalerts?tabs=bicep)
+ - [Microsoft.AlertsManagement/smartDetectorAlertRules](/azure/templates/microsoft.alertsmanagement/smartdetectoralertrules?tabs=bicep).
+ - [Microsoft.Insights/autoscaleSettings](/azure/templates/microsoft.insights/autoscalesettings?tabs=bicep)
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-name-rules.md
description: Shows the rules and restrictions for naming Azure resources.
Previously updated : 05/16/2022 Last updated : 05/17/2022 # Naming rules and restrictions for Azure resources
In the following tables, the term alphanumeric refers to:
> | labplans | resource group | 1-100 | Alphanumerics, hyphens, periods, and underscores.<br><br>Start with letter and end with alphanumeric. | > | labs | resource group | 1-100 | Alphanumerics, hyphens, periods, and underscores.<br><br>Start with letter and end with alphanumeric. |
+## Microsoft.LoadTestService
+
+> [!div class="mx-tableFixed"]
+> | Entity | Scope | Length | Valid Characters |
+> | | | | |
+> | loadtests | global | 1-64 | Can't use:<br>`<>*&@:?+/\,;=.|[]"` or space.<br><br>Can't start with underscore, hyphen, or number. Can't end with underscore or hyphen. |
+ ## Microsoft.Logic > [!div class="mx-tableFixed"]
In the following tables, the term alphanumeric refers to:
> | Entity | Scope | Length | Valid Characters | > | | | | | > | certificates | resource group | 1-260 | Can't use:<br>`/` <br><br>Can't end with space or period. |
-> | serverfarms | resource group | 1-40 | Alphanumerics and hyphens. |
-> | sites | global or per domain. See note below. | 2-60 | Contains alphanumerics and hyphens.<br><br>Can't start or end with hyphen. |
-> | sites / slots | site | 2-59 | Alphanumerics and hyphens. |
+> | serverfarms | resource group | 1-40 | Alphanumerics, hyphens, and Unicode characters that can be mapped to Punycode. |
+> | sites / functions / slots | global or per domain. See note below. | 2-60 | Alphanumerics, hyphens, and Unicode characters that can be mapped to Punycode.<br><br>Can't start or end with hyphen. |
> [!NOTE] > A web site must have a globally unique URL. When you create a web site that uses a hosting plan, the URL is `http://<app-name>.azurewebsites.net`. The app name must be globally unique. When you create a web site that uses an App Service Environment, the app name must be unique within the [domain for the App Service Environment](../../app-service/environment/using-an-ase.md#app-access). For both cases, the URL of the site is globally unique. > > Azure Functions has the same naming rules and restrictions as Microsoft.Web/sites. When generating the host ID, the function app name is truncated to 32 characters. This can cause host ID collision when a shared storage account is used. For more information, see [Host ID considerations](../../azure-functions/storage-considerations.md#host-id-considerations).
+>
+> Unicode characters are converted to Punycode by using the following method: https://docs.microsoft.com/dotnet/api/system.globalization.idnmapping.getascii
## Next steps
azure-resource-manager Deploy Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deploy-github-actions.md
In the example above, replace the placeholders with your subscription ID and res
# [OpenID Connect](#tab/openid)
-You need to provide your application's **Client ID**, **Tenant ID**, and **Subscription ID** to the login action. These values can either be provided directly in the workflow or can be stored in GitHub secrets and referenced in your workflow. Saving the values as GitHub secrets is the more secure option.
-1. Open your GitHub repository and go to **Settings**.
+OpenID Connect is an authentication method that uses short-lived tokens. Setting up [OpenID Connect with GitHub Actions](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect) is a more complex process that offers hardened security.
-1. Select **Settings > Secrets > New secret**.
+1. If you do not have an existing application, register a [new Active Directory application and service principal that can access resources](../../active-directory/develop/howto-create-service-principal-portal.md). Create the Active Directory application.
-1. Create secrets for `AZURE_CLIENT_ID`, `AZURE_TENANT_ID`, and `AZURE_SUBSCRIPTION_ID`. Use these values from your Active Directory application for your GitHub secrets:
+ ```azurecli-interactive
+ az ad app create --display-name myApp
+ ```
- |GitHub Secret | Active Directory Application |
- |||
- |AZURE_CLIENT_ID | Application (client) ID |
- |AZURE_TENANT_ID | Directory (tenant) ID |
- |AZURE_SUBSCRIPTION_ID | Subscription ID |
+ This command will output JSON with an `appId` that is your `client-id`. Save the value to use as the `AZURE_CLIENT_ID` GitHub secret later.
-1. Save each secret by selecting **Add secret**.
+ You'll use the `objectId` value when creating federated credentials with Graph API and reference it as the `APPLICATION-OBJECT-ID`.
+
+1. Create a service principal. Replace `$appId` with the `appId` value from your JSON output.
+
+ This command generates JSON output with a different `objectId` value, which is used in the next step. The new `objectId` is the `assignee-object-id`.
+
+ Copy the `appOwnerTenantId` to use as a GitHub secret for `AZURE_TENANT_ID` later.
+
+ ```azurecli-interactive
+ az ad sp create --id $appId
+ ```
+
+1. Create a new role assignment by subscription and object. By default, the role assignment will be tied to your default subscription. Replace `$subscriptionId` with your subscription ID, `$resourceGroupName` with your resource group name, and `$assigneeObjectId` with the generated `assignee-object-id`. Learn [how to manage Azure subscriptions with the Azure CLI](/cli/azure/manage-azure-subscriptions-azure-cli).
+
+ ```azurecli-interactive
+ az role assignment create --role contributor --subscription $subscriptionId --assignee-object-id $assigneeObjectId --assignee-principal-type ServicePrincipal --scopes /subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.Web/sites/
+ ```
+
+1. Run the following command to [create a new federated identity credential](/graph/api/application-post-federatedidentitycredentials?view=graph-rest-beta&preserve-view=true) for your active directory application.
+
+ * Replace `APPLICATION-OBJECT-ID` with the **objectId (generated while creating app)** for your Active Directory application.
+ * Set a value for `CREDENTIAL-NAME` to reference later.
+ * Set the `subject`. The value of this is defined by GitHub depending on your workflow:
+ * Jobs in your GitHub Actions environment: `repo:< Organization/Repository >:environment:< Name >`
+ * For jobs not tied to an environment, include the ref path for the branch or tag based on the ref path used for triggering the workflow: `repo:< Organization/Repository >:ref:< ref path>`. For example, `repo:n-username/node_express:ref:refs/heads/my-branch` or `repo:n-username/node_express:ref:refs/tags/my-tag`.
+ * For workflows triggered by a pull request event: `repo:< Organization/Repository >:pull_request`.
+
+ ```azurecli
+ az rest --method POST --uri 'https://graph.microsoft.com/beta/applications/<APPLICATION-OBJECT-ID>/federatedIdentityCredentials' --body '{"name":"<CREDENTIAL-NAME>","issuer":"https://token.actions.githubusercontent.com","subject":"repo:organization/repository:ref:refs/heads/main","description":"Testing","audiences":["api://AzureADTokenExchange"]}'
+ ```
+
+ To learn how to create an Active Directory application, service principal, and federated credentials in the Azure portal, see [Connect GitHub and Azure](/azure/developer/github/connect-from-azure#use-the-azure-login-action-with-openid-connect).
+
## Configure the GitHub secrets
azure-signalr Signalr Quickstart Azure Signalr Service Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-quickstart-azure-signalr-service-bicep.md
+
+ Title: 'Quickstart: Create an Azure SignalR Service - Bicep'
+description: In this quickstart, learn how to create an Azure SignalR Service using Bicep.
++ Last updated : 05/18/2022+++++
+# Quickstart: Use Bicep to deploy Azure SignalR Service
+
+This quickstart describes how to use Bicep to create an Azure SignalR Service using Azure CLI or PowerShell.
++
+## Prerequisites
+
+An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/signalr/).
++
+The Bicep file defines one Azure resource:
+
+* [**Microsoft.SignalRService/SignalR**](/azure/templates/microsoft.signalrservice/signalr)
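The quickstart deploys the full template from the gallery; as a rough sketch only (not the exact quickstart file), a minimal Bicep file for a SignalR resource might look like this:

```bicep
param name string = 'signalr-${uniqueString(resourceGroup().id)}' // must be globally unique
param location string = resourceGroup().location

resource signalR 'Microsoft.SignalRService/signalR@2021-10-01' = {
  name: name
  location: location
  kind: 'SignalR'
  sku: {
    name: 'Standard_S1'
    capacity: 1
  }
}
```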
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep
+ ```
+
+
+
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Review deployed resources
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+## Clean up resources
+
+When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and its resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+For a step-by-step tutorial that guides you through the process of creating a Bicep file using Visual Studio Code, see:
+
+> [!div class="nextstepaction"]
+> [Quickstart: Create Bicep files with Visual Studio Code](../azure-resource-manager/bicep/quickstart-create-bicep-use-visual-studio-code.md)
azure-video-indexer Animated Characters Recognition How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/animated-characters-recognition-how-to.md
Title: Animated character detection with Azure Video Indexer (formerly Azure Video Analyzer for Media) how to
-description: This how to demonstrates how to use animated character detection with Azure Video Indexer (formerly Azure Video Analyzer for Media).
+ Title: Animated character detection with Azure Video Indexer how to
+description: This how to demonstrates how to use animated character detection with Azure Video Indexer.
# Use the animated character detection (preview) with portal and API
-Azure Video Indexer (formerly Azure Video Analyzer for Media) supports detection, grouping, and recognition of characters in animated content, this functionality is available through the Azure portal and through API. Review [this overview](animated-characters-recognition.md) topic.
+Azure Video Indexer supports detection, grouping, and recognition of characters in animated content. This functionality is available through the Azure portal and through the API. Review [this overview](animated-characters-recognition.md) topic.
This article demonstrates to how to use the animated character detection with the Azure portal and the Azure Video Indexer API.
azure-video-indexer Animated Characters Recognition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/animated-characters-recognition.md
Title: Animated character detection with Azure Video Indexer (formerly Azure Video Analyzer for Media)
-description: This topic demonstrates how to use animated character detection with Azure Video Indexer (formerly Azure Video Analyzer for Media).
+ Title: Animated character detection with Azure Video Indexer
+description: This topic demonstrates how to use animated character detection with Azure Video Indexer.
Last updated 11/19/2019
# Animated character detection (preview)
-Azure Video Indexer (formerly Azure Video Analyzer for Media) supports detection, grouping, and recognition of characters in animated content via integration with [Cognitive Services custom vision](https://azure.microsoft.com/services/cognitive-services/custom-vision-service/). This functionality is available both through the portal and through the API.
+Azure Video Indexer supports detection, grouping, and recognition of characters in animated content via integration with [Cognitive Services custom vision](https://azure.microsoft.com/services/cognitive-services/custom-vision-service/). This functionality is available both through the portal and through the API.
After uploading an animated video with a specific animation model, Azure Video Indexer extracts keyframes, detects animated characters in these frames, groups similar characters, and chooses the best sample. Then, it sends the grouped characters to Custom Vision, which identifies characters based on the models it was trained on.
azure-video-indexer Compare Video Indexer With Media Services Presets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/compare-video-indexer-with-media-services-presets.md
Title: Comparison of Azure Video Indexer (formerly Azure Video Analyzer for Media) and Azure Media Services v3 presets
-description: This article compares Azure Video Indexer (formerly Azure Video Analyzer for Media) capabilities and Azure Media Services v3 presets.
+ Title: Comparison of Azure Video Indexer and Azure Media Services v3 presets
+description: This article compares Azure Video Indexer capabilities and Azure Media Services v3 presets.
Last updated 02/24/2020
azure-video-indexer Concepts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/concepts-overview.md
Title: Azure Video Indexer (formerly Azure Video Analyzer for Media) concepts - Azure
-description: This article gives a brief overview of Azure Video Indexer (formerly Azure Video Analyzer for Media) terminology and concepts.
+ Title: Azure Video Indexer concepts - Azure
+description: This article gives a brief overview of Azure Video Indexer terminology and concepts.
Last updated 01/19/2021
# Azure Video Indexer concepts
-This article gives a brief overview of Azure Video Indexer (formerly Azure Video Analyzer for Media) terminology and concepts.
+This article gives a brief overview of Azure Video Indexer terminology and concepts.
## Audio/video/combined insights
azure-video-indexer Connect Classic Account To Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/connect-classic-account-to-arm.md
# Connect an existing classic paid Azure Video Indexer account to ARM-based account

This article details how to connect an existing classic paid Azure Video Indexer account to an Azure Resource Manager (ARM) based account.
-Today, Azure Video Indexer (formerly Azure Video Analyzer for Media), is a GA(general availability) product that is not an ARM resource on Azure.
+Today, Azure Video Indexer is a GA (general availability) product that is not an ARM resource on Azure.
In this article, we will go through options on connecting your **existing** Azure Video Indexer account to [ARM][docs-arm-overview]. ## Prerequisites
azure-video-indexer Connect To Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/connect-to-azure.md
Title: Create an Azure Video Indexer account connected to Azure
-description: Learn how to create an Azure Video Indexer (formerly Azure Video Analyzer for Media) account connected to Azure.
+description: Learn how to create an Azure Video Indexer account connected to Azure.
Last updated 05/03/2022
# Create an Azure Video Indexer account
-When creating an Azure Video Indexer (formerly Azure Video Analyzer for Media) account, you can choose a free trial account (where you get a certain number of free indexing minutes) or a paid option (where you're not limited by the quota). With a free trial, Azure Video Indexer provides up to 600 minutes of free indexing to users and up to 2400 minutes of free indexing to users that subscribe to the Azure Video Indexer API on the [developer portal](https://aka.ms/avam-dev-portal). With the paid options, Azure Video Indexer offers two types of accounts: classic accounts(General Availability), and ARM-based accounts(Public Preview). Main difference between the two is account management platform. While classic accounts are built on the API Management, ARM-based accounts management is built on Azure, enables to apply access control to all services with role-based access control (Azure RBAC) natively.
+When creating an Azure Video Indexer account, you can choose a free trial account (where you get a certain number of free indexing minutes) or a paid option (where you're not limited by the quota). With a free trial, Azure Video Indexer provides up to 600 minutes of free indexing to users and up to 2400 minutes of free indexing to users that subscribe to the Azure Video Indexer API on the [developer portal](https://aka.ms/avam-dev-portal). With the paid options, Azure Video Indexer offers two types of accounts: classic accounts (general availability) and ARM-based accounts (public preview). The main difference between the two is the account management platform. While classic accounts are built on API Management, ARM-based account management is built on Azure, which enables you to apply access control to all services with role-based access control (Azure RBAC) natively.
* You can create an Azure Video Indexer **classic** account through our [API](https://aka.ms/avam-dev-portal). * You can create an Azure Video Indexer **ARM-based** account through one of the following:
azure-video-indexer Considerations When Use At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/considerations-when-use-at-scale.md
Title: Things to consider when using Azure Video Indexer (formerly Azure Video Analyzer for Media) at scale - Azure
-description: This topic explains what things to consider when using Azure Video Indexer (formerly Azure Video Analyzer for Media) at scale.
+ Title: Things to consider when using Azure Video Indexer at scale - Azure
+description: This topic explains what things to consider when using Azure Video Indexer at scale.
Last updated 11/13/2020
# Things to consider when using Azure Video Indexer at scale
-When using Azure Video Indexer (formerly Azure Video Analyzer for Media) to index videos and your archive of videos is growing, consider scaling.
+When using Azure Video Indexer to index videos and your archive of videos is growing, consider scaling.
This article answers questions like:
azure-video-indexer Customize Brands Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-brands-model-overview.md
Title: Customize a Brands model in Azure Video Indexer (formerly Azure Video Analyzer for Media) - Azure
-description: This article gives an overview of what is a Brands model in Azure Video Indexer (formerly Azure Video Analyzer for Media) and how to customize it.
+ Title: Customize a Brands model in Azure Video Indexer - Azure
+description: This article gives an overview of what is a Brands model in Azure Video Indexer and how to customize it.
Last updated 12/15/2019
# Customize a Brands model in Azure Video Indexer
-Azure Video Indexer (formerly Azure Video Analyzer for Media) supports brand detection from speech and visual text during indexing and reindexing of video and audio content. The brand detection feature identifies mentions of products, services, and companies suggested by Bing's brands database. For example, if Microsoft is mentioned in a video or audio content or if it shows up in visual text in a video, Azure Video Indexer detects it as a brand in the content. Brands are disambiguated from other terms using context.
+Azure Video Indexer supports brand detection from speech and visual text during indexing and reindexing of video and audio content. The brand detection feature identifies mentions of products, services, and companies suggested by Bing's brands database. For example, if Microsoft is mentioned in a video or audio content or if it shows up in visual text in a video, Azure Video Indexer detects it as a brand in the content. Brands are disambiguated from other terms using context.
Brand detection is useful in a wide variety of business scenarios such as contents archive and discovery, contextual advertising, social media analysis, retail compete analysis, and many more. Azure Video Indexer brand detection enables you to index brand mentions in speech and visual text, using Bing's brands database as well as with customization by building a custom Brands model for each Azure Video Indexer account. The custom Brands model feature allows you to select whether or not Azure Video Indexer will detect brands from the Bing brands database, exclude certain brands from being detected (essentially creating a list of unapproved brands), and include brands that should be part of your model that might not be in Bing's brands database (essentially creating a list of approved brands). The custom Brands model that you create will only be available in the account in which you created the model.
azure-video-indexer Customize Brands Model With Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-brands-model-with-api.md
Title: Customize a Brands model with Azure Video Indexer (formerly Azure Video Analyzer for Media) API
-description: Learn how to customize a Brands model with the Azure Video Indexer (formerly Azure Video Analyzer for Media) API.
+ Title: Customize a Brands model with Azure Video Indexer API
+description: Learn how to customize a Brands model with the Azure Video Indexer API.
# Customize a Brands model with the Azure Video Indexer API
-Azure Video Indexer (formerly Azure Video Analyzer for Media) supports brand detection from speech and visual text during indexing and reindexing of video and audio content. The brand detection feature identifies mentions of products, services, and companies suggested by Bing's brands database. For example, if Microsoft is mentioned in video or audio content or if it shows up in visual text in a video, Azure Video Indexer detects it as a brand in the content. A custom Brands model allows you to exclude certain brands from being detected and include brands that should be part of your model that might not be in Bing's brands database. For more information, see [Overview](customize-brands-model-overview.md).
+Azure Video Indexer supports brand detection from speech and visual text during indexing and reindexing of video and audio content. The brand detection feature identifies mentions of products, services, and companies suggested by Bing's brands database. For example, if Microsoft is mentioned in video or audio content or if it shows up in visual text in a video, Azure Video Indexer detects it as a brand in the content. A custom Brands model allows you to exclude certain brands from being detected and include brands that should be part of your model that might not be in Bing's brands database. For more information, see [Overview](customize-brands-model-overview.md).
> [!NOTE] > If your video was indexed prior to adding a brand, you need to reindex it.
azure-video-indexer Customize Brands Model With Website https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-brands-model-with-website.md
Title: Customize a Brands model with the Azure Video Indexer (formerly Azure Video Analyzer for Media) website
-description: Learn how to customize a Brands model with the Azure Video Indexer (formerly Azure Video Analyzer for Media) website.
+ Title: Customize a Brands model with the Azure Video Indexer website
+description: Learn how to customize a Brands model with the Azure Video Indexer website.
# Customize a Brands model with the Azure Video Indexer website
-Azure Video Indexer (formerly Azure Video Analyzer for Media) supports brand detection from speech and visual text during indexing and reindexing of video and audio content. The brand detection feature identifies mentions of products, services, and companies suggested by Bing's brands database. For example, if Microsoft is mentioned in video or audio content or if it shows up in visual text in a video, Azure Video Indexer detects it as a brand in the content.
+Azure Video Indexer supports brand detection from speech and visual text during indexing and reindexing of video and audio content. The brand detection feature identifies mentions of products, services, and companies suggested by Bing's brands database. For example, if Microsoft is mentioned in video or audio content or if it shows up in visual text in a video, Azure Video Indexer detects it as a brand in the content.
A custom Brands model allows you to:
azure-video-indexer Customize Content Models Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-content-models-overview.md
Title: Customizing content models in Azure Video Indexer (formerly Azure Video Analyzer for Media)
+ Title: Customizing content models in Azure Video Indexer
description: This article gives links to the conceptual articles that explain the benefits of each type of customization. This article also links to how-to guides that show how you can implement the customization of each model. Last updated 06/26/2019
# Customizing content models in Azure Video Indexer
-Azure Video Indexer (formerly Azure Video Analyzer for Media) allows you to customize some of its models to be adapted to your specific use case. These models include [brands](customize-brands-model-overview.md), [language](customize-language-model-overview.md), and [person](customize-person-model-overview.md). You can easily customize these models using the Azure Video Indexer website or API.
+Azure Video Indexer allows you to customize some of its models to be adapted to your specific use case. These models include [brands](customize-brands-model-overview.md), [language](customize-language-model-overview.md), and [person](customize-person-model-overview.md). You can easily customize these models using the Azure Video Indexer website or API.
This article gives links to articles that explain the benefits of each type of customization. The article also links to how-to guides that show how you can implement the customization of each model.
azure-video-indexer Customize Language Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-language-model-overview.md
Title: Customize a Language model in Azure Video Indexer (formerly Azure Video Analyzer for Media) - Azure
-description: This article gives an overview of what is a Language model in Azure Video Indexer (formerly Azure Video Analyzer for Media) and how to customize it.
+ Title: Customize a Language model in Azure Video Indexer - Azure
+description: This article gives an overview of what is a Language model in Azure Video Indexer and how to customize it.
Last updated 02/02/2022
# Customize a Language model with Azure Video Indexer
-Azure Video Indexer (formerly Azure Video Analyzer for Media) supports automatic speech recognition through integration with the Microsoft [Custom Speech Service](https://azure.microsoft.com/services/cognitive-services/custom-speech-service/). You can customize the Language model by uploading adaptation text, namely text from the domain whose vocabulary you'd like the engine to adapt to. Once you train your model, new words appearing in the adaptation text will be recognized, assuming default pronunciation, and the Language model will learn new probable sequences of words. See the list of supported by Azure Video Indexer languages in [supported langues](language-support.md).
+Azure Video Indexer supports automatic speech recognition through integration with the Microsoft [Custom Speech Service](https://azure.microsoft.com/services/cognitive-services/custom-speech-service/). You can customize the Language model by uploading adaptation text, namely text from the domain whose vocabulary you'd like the engine to adapt to. Once you train your model, new words appearing in the adaptation text will be recognized, assuming default pronunciation, and the Language model will learn new probable sequences of words. See the list of languages supported by Azure Video Indexer in [supported languages](language-support.md).
Let's take a word that is highly specific, like "Kubernetes" (in the context of Azure Kubernetes service), as an example. Since the word is new to Azure Video Indexer, it is recognized as "communities". You need to train the model to recognize it as "Kubernetes". In other cases, the words exist, but the Language model is not expecting them to appear in a certain context. For example, "container service" is not a 2-word sequence that a non-specialized Language model would recognize as a specific set of words.
azure-video-indexer Customize Language Model With Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-language-model-with-api.md
Title: Customize a Language model with Azure Video Indexer (formerly Azure Video Analyzer for Media) API
-description: Learn how to customize a Language model with the Azure Video Indexer (formerly Azure Video Analyzer for Media) API.
+ Title: Customize a Language model with Azure Video Indexer API
+description: Learn how to customize a Language model with the Azure Video Indexer API.
# Customize a Language model with the Azure Video Indexer API
-Azure Video Indexer (formerly Azure Video Analyzer for Media) lets you create custom Language models to customize speech recognition by uploading adaptation text, namely text from the domain whose vocabulary you'd like the engine to adapt to. Once you train your model, new words appearing in the adaptation text will be recognized.
+Azure Video Indexer lets you create custom Language models to customize speech recognition by uploading adaptation text, namely text from the domain whose vocabulary you'd like the engine to adapt to. Once you train your model, new words appearing in the adaptation text will be recognized.
For a detailed overview and best practices for custom Language models, see [Customize a Language model with Azure Video Indexer](customize-language-model-overview.md).
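As an illustration of the flow described above, the following is a minimal PowerShell sketch that creates a custom Language model from an adaptation text file and then trains it. The `Customization/Language` path, the query parameter names, the form field name, and the response's `id` property are assumptions modeled on the developer portal pattern, not the authoritative API reference; verify them in the [Azure Video Indexer Developer Portal](https://api-portal.videoindexer.ai/).

```powershell
# Hypothetical sketch: create and train a custom Language model from adaptation text.
# Endpoint paths, parameter names, and the form field name are assumptions; confirm them
# in the Azure Video Indexer developer portal before use.
$location    = "westus2"                 # Azure region of the account
$accountId   = "<account-id>"
$accessToken = "<account-access-token>"  # obtained from the authorization API

# Create a Language model from an adaptation text file (hypothetical file name).
$createUri = "https://api.videoindexer.ai/$location/Accounts/$accountId/Customization/Language" +
             "?modelName=MyDomainModel&language=en-US&accessToken=$accessToken"
$model = Invoke-RestMethod -Uri $createUri -Method Post -Form @{ adaptationText = Get-Item ".\kubernetes-terms.txt" }

# Train the model so new terms such as "Kubernetes" are recognized (assumed path and response shape).
$trainUri = "https://api.videoindexer.ai/$location/Accounts/$accountId/Customization/Language/$($model.id)/Train?accessToken=$accessToken"
Invoke-RestMethod -Uri $trainUri -Method Put
```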
azure-video-indexer Customize Language Model With Website https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-language-model-with-website.md
Title: Customize Language model with Azure Video Indexer (formerly Azure Video Analyzer for Media) website
-description: Learn how to customize a Language model with the Azure Video Indexer (formerly Azure Video Analyzer for Media) website.
+ Title: Customize Language model with Azure Video Indexer website
+description: Learn how to customize a Language model with the Azure Video Indexer website.
# Customize a Language model with the Azure Video Indexer website
-Azure Video Indexer (formerly Azure Video Analyzer for Media) lets you create custom Language models to customize speech recognition by uploading adaptation text, namely text from the domain whose vocabulary you'd like the engine to adapt to. Once you train your model, new words appearing in the adaptation text will be recognized.
+Azure Video Indexer lets you create custom Language models to customize speech recognition by uploading adaptation text, namely text from the domain whose vocabulary you'd like the engine to adapt to. Once you train your model, new words appearing in the adaptation text will be recognized.
For a detailed overview and best practices for custom language models, see [Customize a Language model with Azure Video Indexer](customize-language-model-overview.md).
azure-video-indexer Customize Person Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-person-model-overview.md
Title: Customize a Person model in Azure Video Indexer (formerly Azure Video Analyzer for Media) - Azure
-description: This article gives an overview of what is a Person model in Azure Video Indexer (formerly Azure Video Analyzer for Media) and how to customize it.
+ Title: Customize a Person model in Azure Video Indexer - Azure
+description: This article gives an overview of what is a Person model in Azure Video Indexer and how to customize it.
Last updated 05/15/2019
# Customize a Person model in Azure Video Indexer
-Azure Video Indexer (formerly Azure Video Analyzer for Media) supports celebrity recognition in your videos. The celebrity recognition feature covers approximately one million faces based on commonly requested data source such as IMDB, Wikipedia, and top LinkedIn influencers. Faces that are not recognized by Azure Video Indexer are still detected but are left unnamed. Customers can build custom Person models and enable Azure Video Indexer to recognize faces that are not recognized by default. Customers can build these Person models by pairing a person's name with image files of the person's face.
+Azure Video Indexer supports celebrity recognition in your videos. The celebrity recognition feature covers approximately one million faces based on commonly requested data sources such as IMDB, Wikipedia, and top LinkedIn influencers. Faces that are not recognized by Azure Video Indexer are still detected but are left unnamed. Customers can build custom Person models and enable Azure Video Indexer to recognize faces that are not recognized by default. Customers can build these Person models by pairing a person's name with image files of the person's face.
If your account caters to different use-cases, you can benefit from being able to create multiple Person models per account. For example, if the content in your account is meant to be sorted into different channels, you might want to create a separate Person model for each channel.
azure-video-indexer Customize Person Model With Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-person-model-with-api.md
Title: Customize a Person model with Azure Video Indexer (formerly Azure Video Analyzer for Media) API
-description: Learn how to customize a Person model with the Azure Video Indexer (formerly Azure Video Analyzer for Media) API.
+ Title: Customize a Person model with Azure Video Indexer API
+description: Learn how to customize a Person model with the Azure Video Indexer API.
# Customize a Person model with the Azure Video Indexer API
-Azure Video Indexer (formerly Azure Video Analyzer for Media) supports face detection and celebrity recognition for video content. The celebrity recognition feature covers about one million faces based on commonly requested data source such as IMDB, Wikipedia, and top LinkedIn influencers. Faces that aren't recognized by the celebrity recognition feature are detected but left unnamed. After you upload your video to Azure Video Indexer and get results back, you can go back and name the faces that weren't recognized. Once you label a face with a name, the face and name get added to your account's Person model. Azure Video Indexer will then recognize this face in your future videos and past videos.
+Azure Video Indexer supports face detection and celebrity recognition for video content. The celebrity recognition feature covers about one million faces based on commonly requested data sources such as IMDB, Wikipedia, and top LinkedIn influencers. Faces that aren't recognized by the celebrity recognition feature are detected but left unnamed. After you upload your video to Azure Video Indexer and get results back, you can go back and name the faces that weren't recognized. Once you label a face with a name, the face and name get added to your account's Person model. Azure Video Indexer will then recognize this face in your future videos and past videos.
You can use the Azure Video Indexer API to edit faces that were detected in a video, as described in this topic. You can also use the Azure Video Indexer website, as described in [Customize Person model using the Azure Video Indexer website](customize-person-model-with-website.md).
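As a rough, hypothetical sketch of that API flow (the `Customization/PersonModels` path, query parameters, and response shape are assumptions to verify in the developer portal), creating a Person model and adding a named person might look like this:

```powershell
# Hypothetical sketch: create a Person model and add a named person to it.
# The Customization/PersonModels path, parameters, and response shape are assumptions;
# verify them in the Azure Video Indexer developer portal.
$location    = "westus2"
$accountId   = "<account-id>"
$accessToken = "<account-access-token>"
$base        = "https://api.videoindexer.ai/$location/Accounts/$accountId/Customization/PersonModels"

# Create a Person model for a specific channel or use case.
$model = Invoke-RestMethod -Method Post -Uri "${base}?name=NewsChannel&accessToken=$accessToken"

# Add a person to the model; face images can then be associated with that person.
$person = Invoke-RestMethod -Method Post -Uri "$base/$($model.id)/Persons?name=Jane%20Doe&accessToken=$accessToken"
```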
azure-video-indexer Customize Person Model With Website https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-person-model-with-website.md
Title: Customize a Person model with Azure Video Indexer (formerly Azure Video Analyzer for Media) website
-description: Learn how to customize a Person model with the Azure Video Indexer (formerly Azure Video Analyzer for Media) website.
+ Title: Customize a Person model with Azure Video Indexer website
+description: Learn how to customize a Person model with the Azure Video Indexer website.
# Customize a Person model with the Azure Video Indexer website
-Azure Video Indexer (formerly Azure Video Analyzer for Media) supports celebrity recognition for video content. The celebrity recognition feature covers approximately one million faces based on commonly requested data source such as IMDB, Wikipedia, and top LinkedIn influencers. For a detailed overview, see [Customize a Person model in Azure Video Indexer](customize-person-model-overview.md).
+Azure Video Indexer supports celebrity recognition for video content. The celebrity recognition feature covers approximately one million faces based on commonly requested data sources such as IMDB, Wikipedia, and top LinkedIn influencers. For a detailed overview, see [Customize a Person model in Azure Video Indexer](customize-person-model-overview.md).
You can use the Azure Video Indexer website to edit faces that were detected in a video, as described in this topic. You can also use the API, as described in [Customize a Person model using APIs](customize-person-model-with-api.md).
azure-video-indexer Deploy With Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/deploy-with-arm-template.md
## Overview
-In this tutorial you will create an Azure Video Indexer (formerly Azure Video Analyzer for Media) account by using Azure Resource Manager (ARM) template (preview).
+In this tutorial, you will create an Azure Video Indexer account by using an Azure Resource Manager (ARM) template (preview).
The resource will be deployed to your subscription and will create the Azure Video Indexer resource based on parameters defined in the avam.template file. > [!NOTE]
The resource will be deployed to your subscription and will create the Azure Vid
## Reference documentation
-If you're new to Azure Video Indexer (formerly Azure Video Analyzer for Media), see:
+If you're new to Azure Video Indexer, see:
* [Azure Video Indexer Documentation](/azure/azure-video-indexer) * [Azure Video Indexer Developer Portal](https://api-portal.videoindexer.ai/)
azure-video-indexer Edit Transcript Lines Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/edit-transcript-lines-portal.md
+
+ Title: Insert or remove transcript lines in Azure Video Indexer portal
+description: This article explains how to insert or remove a transcript line in Azure Video Indexer portal.
++ Last updated : 05/03/2022++
+# Insert or remove transcript lines in Video Indexer portal
+
+This article explains how to insert or remove a transcript line in Azure Video Indexer portal.
+
+## Add new line to the transcript timeline
+
+While in edit mode, hover between two transcription lines. In the gap between the **ending time** of one **transcript line** and the beginning of the following transcript line, you'll see the **add new transcription line** option.
++
+After selecting **add new transcription line**, you'll have the option to add the new text and the timestamp for the new line. Enter the text, choose the timestamp for the new line, and select **save**. The default timestamp is the gap between the previous and next transcript line.
++
+If there isn't an option to add a new line, you can adjust the end/start time of the relevant transcript lines to fit a new line in your desired place.
+
+Choose an existing line in the transcript, select the **three dots** icon, select **edit**, and change the timestamp accordingly.
+
+> [!NOTE]
+> New lines will not appear as part of the **From transcript edits** in the **Content model customization** under languages.
+>
+> While using the API, when adding a new line, **Speaker name** can be added using free text. For example, *Speaker 1* can now become *Adam*.
+
+## Edit existing line
+
+While in edit mode, select the **three dots** icon. The editing options have been enhanced; they now contain not just the text but also the timestamp with millisecond accuracy.
+
+## Delete line
+
+Lines can now be deleted through the same three dots icon.
+
+## Example of how and when to use this feature
+
+To consolidate two lines that you believe should appear as one:
+
+1. Go to line number 2 and select **edit**.
+1. Copy the text.
+1. Delete the line.
+1. Go to line 1, select **edit**, paste the text, and save.
+
+## Next steps
+
+To update transcript lines and text using the API, visit the [Azure Video Indexer Developer portal](https://aka.ms/avam-dev-portal).
azure-video-indexer Invite Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/invite-users.md
# Quickstart: Invite users to Azure Video Indexer
-To collaborate with your colleagues, you can invite them to your Azure Video Indexer (formerly Azure Video Analyzer for Media) account.
+To collaborate with your colleagues, you can invite them to your Azure Video Indexer account.
> [!NOTE] > Only the account's admin can add or remove users.</br>
azure-video-indexer Language Identification Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/language-identification-model.md
Title: Use Azure Video Indexer (formerly Azure Video Analyzer for Media) to auto identify spoken languages - Azure
-description: This article describes how the Azure Video Indexer (formerly Azure Video Analyzer for Media) language identification model is used to automatically identifying the spoken language in a video.
+ Title: Use Azure Video Indexer to auto identify spoken languages - Azure
+description: This article describes how the Azure Video Indexer language identification model is used to automatically identify the spoken language in a video.
Last updated 04/12/2020
# Automatically identify the spoken language with language identification model
-Azure Video Indexer (formerly Azure Video Analyzer for Media) supports automatic language identification (LID), which is the process of automatically identifying the spoken language content from audio and sending the media file to be transcribed in the dominant identified language.
+Azure Video Indexer supports automatic language identification (LID), which is the process of automatically identifying the spoken language content from audio and sending the media file to be transcribed in the dominant identified language.
Currently LID supports: English, Spanish, French, German, Italian, Mandarin Chinese, Japanese, Russian, and Portuguese (Brazilian).
azure-video-indexer Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/language-support.md
Title: Language support in Azure Video Indexer
-description: This article provides a comprehensive list of language support by service features in Azure Video Indexer (formerly Azure Video Analyzer for Media).
+description: This article provides a comprehensive list of language support by service features in Azure Video Indexer.
Last updated 04/07/2022
# Language support in Azure Video Indexer
-This article provides a comprehensive list of language support by service features in Azure Video Indexer (formerly Azure Video Analyzer for Media). For the list and definitions of all the features, see [Overview](video-indexer-overview.md).
+This article provides a comprehensive list of language support by service features in Azure Video Indexer. For the list and definitions of all the features, see [Overview](video-indexer-overview.md).
## General language support
azure-video-indexer Live Stream Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/live-stream-analysis.md
Title: Live stream analysis using Azure Video Indexer (formerly Azure Video Analyzer for Media)
-description: This article shows how to perform a live stream analysis using Azure Video Indexer (formerly Azure Video Analyzer for Media).
+ Title: Live stream analysis using Azure Video Indexer
+description: This article shows how to perform a live stream analysis using Azure Video Indexer.
Last updated 11/13/2019 # Live stream analysis with Azure Video Indexer
-Azure Video Indexer (formerly Azure Video Analyzer for Media) is an Azure service designed to extract deep insights from video and audio files offline. This is to analyze a given media file already created in advance. However, for some use cases it's important to get the media insights from a live feed as quick as possible to unlock operational and other use cases pressed in time. For example, such rich metadata on a live stream could be used by content producers to automate TV production.
+Azure Video Indexer is an Azure service designed to extract deep insights from video and audio files offline, that is, to analyze a given media file that was already created in advance. However, for some use cases it's important to get the media insights from a live feed as quickly as possible to unlock time-sensitive operational and other use cases. For example, such rich metadata on a live stream could be used by content producers to automate TV production.
The solution described in this article allows customers to use Azure Video Indexer in near real-time resolutions on live feeds. The delay in indexing can be as low as four minutes using this solution, depending on the chunks of data being indexed, the input resolution, the type of content, and the compute power used for this process.
azure-video-indexer Logic Apps Connector Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/logic-apps-connector-tutorial.md
Title: The Azure Video Indexer (formerly Azure Video Analyzer for Media) connectors with Logic App and Power Automate tutorial.
-description: This tutorial shows how to unlock new experiences and monetization opportunities Azure Video Indexer (formerly Azure Video Analyzer for Media) connectors with Logic App and Power Automate.
+ Title: The Azure Video Indexer connectors with Logic App and Power Automate tutorial.
+description: This tutorial shows how to unlock new experiences and monetization opportunities using Azure Video Indexer connectors with Logic App and Power Automate.
Last updated 09/21/2020
Last updated 09/21/2020
# Tutorial: use Azure Video Indexer with Logic App and Power Automate
-Azure Video Indexer (formerly Azure Video Analyzer for Media) [REST API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Delete-Video) supports both server-to-server and client-to-server communication and enables Azure Video Indexer users to integrate video and audio insights easily into their application logic, unlocking new experiences and monetization opportunities.
+Azure Video Indexer [REST API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Delete-Video) supports both server-to-server and client-to-server communication and enables Azure Video Indexer users to integrate video and audio insights easily into their application logic, unlocking new experiences and monetization opportunities.
To make the integration even easier, we support [Logic Apps](https://azure.microsoft.com/services/logic-apps/) and [Power Automate](https://preview.flow.microsoft.com/connectors/shared_videoindexer-v2/video-indexer-v2/) connectors that are compatible with our API. You can use the connectors to set up custom workflows to effectively index and extract insights from a large amount of video and audio files, without writing a single line of code. Furthermore, using the connectors for your integration gives you better visibility on the health of your workflow and an easy way to debug it.
azure-video-indexer Manage Account Connected To Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/manage-account-connected-to-azure.md
Title: Manage an Azure Video Indexer (formerly Azure Video Analyzer for Media) account
-description: Learn how to manage an Azure Video Indexer (formerly Azure Video Analyzer for Media) account connected to Azure.
+ Title: Manage an Azure Video Indexer account
+description: Learn how to manage an Azure Video Indexer account connected to Azure.
Last updated 01/14/2021
# Manage an Azure Video Indexer account connected to Azure
-This article demonstrates how to manage an Azure Video Indexer (formerly Azure Video Analyzer for Media) account that's connected to your Azure subscription and an Azure Media Services account.
+This article demonstrates how to manage an Azure Video Indexer account that's connected to your Azure subscription and an Azure Media Services account.
> [!NOTE] > You have to be the Azure Video Indexer account owner to do account configuration adjustments discussed in this topic.
azure-video-indexer Manage Multiple Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/manage-multiple-tenants.md
Title: Manage multiple tenants with Azure Video Indexer (formerly Azure Video Analyzer for Media) - Azure
-description: This article suggests different integration options for managing multiple tenants with Azure Video Indexer (formerly Azure Video Analyzer for Media).
+ Title: Manage multiple tenants with Azure Video Indexer - Azure
+description: This article suggests different integration options for managing multiple tenants with Azure Video Indexer.
Last updated 05/15/2019
# Manage multiple tenants
-This article discusses different options for managing multiple tenants with Azure Video Indexer (formerly Azure Video Analyzer for Media). Choose a method that is most suitable for your scenario:
+This article discusses different options for managing multiple tenants with Azure Video Indexer. Choose a method that is most suitable for your scenario:
* Azure Video Indexer account per tenant * Single Azure Video Indexer account for all tenants
azure-video-indexer Multi Language Identification Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/multi-language-identification-transcription.md
Title: Automatically identify and transcribe multi-language content with Azure Video Indexer (formerly Azure Video Analyzer for Media)
-description: This topic demonstrates how to automatically identify and transcribe multi-language content with Azure Video Indexer (formerly Azure Video Analyzer for Media).
+ Title: Automatically identify and transcribe multi-language content with Azure Video Indexer
+description: This topic demonstrates how to automatically identify and transcribe multi-language content with Azure Video Indexer.
# Automatically identify and transcribe multi-language content
-Azure Video Indexer (formerly Azure Video Analyzer for Media) supports automatic language identification and transcription in multi-language content. This process involves automatically identifying the spoken language in different segments from audio, sending each segment of the media file to be transcribed and combine the transcription back to one unified transcription.
+Azure Video Indexer supports automatic language identification and transcription in multi-language content. This process involves automatically identifying the spoken language in different segments from audio, sending each segment of the media file to be transcribed, and combining the transcriptions back into one unified transcription.
## Choosing multilingual identification on indexing with portal
azure-video-indexer Observed People Tracing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/observed-people-tracing.md
# Trace observed people in a video (preview)
-Azure Video Indexer (formerly Azure Video Analyzer for Media) detects observed people in videos and provides information such as the location of the person in the video frame and the exact timestamp (start, end) when a person appears. The API returns the bounding box coordinates (in pixels) for each person instance detected, including detection confidence.
+Azure Video Indexer detects observed people in videos and provides information such as the location of the person in the video frame and the exact timestamp (start, end) when a person appears. The API returns the bounding box coordinates (in pixels) for each person instance detected, including detection confidence.
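For illustration, here is a minimal sketch that walks the insights JSON returned for an indexed video and prints each observed person's time ranges. The property names (`observedPeople`, `instances`, `start`, `end`) are assumptions based on the description above; check them against your own index output.

```powershell
# Hypothetical sketch: list observed people from a saved Get Video Index response.
# Property names (observedPeople, instances, start, end) are assumptions based on the
# description above; confirm them against your own index JSON.
$index = Get-Content ".\video-index.json" -Raw | ConvertFrom-Json

foreach ($person in $index.videos[0].insights.observedPeople) {
    foreach ($instance in $person.instances) {
        # Each instance carries the time range in which the person was observed.
        "{0}: {1} -> {2}" -f $person.id, $instance.start, $instance.end
    }
}
```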
Some scenarios where this feature could be useful:
azure-video-indexer Odrv Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/odrv-download.md
Title: Index videos stored on OneDrive - Azure Video Indexer
-description: Learn how to index videos stored on OneDrive by using Azure Video Indexer (formerly Azure Video Analyzer for Media).
+description: Learn how to index videos stored on OneDrive by using Azure Video Indexer.
Last updated 12/17/2021
azure-video-indexer Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/regions.md
Title: Regions in which Azure Video Indexer (formerly Azure Video Analyzer for Media) is available
-description: This article talks about Azure regions in which Azure Video Indexer (formerly Azure Video Analyzer for Media) is available.
+ Title: Regions in which Azure Video Indexer is available
+description: This article talks about Azure regions in which Azure Video Indexer is available.
# Azure regions in which Azure Video Indexer exists
-Azure Video Indexer (formerly Azure Video Analyzer for Media) APIs contain a **location** parameter that you should set to the Azure region to which the call should be routed. This must be an [Azure region in which Azure Video Indexer is available](https://azure.microsoft.com/global-infrastructure/services/?products=cognitive-services&regions=all).
+Azure Video Indexer APIs contain a **location** parameter that you should set to the Azure region to which the call should be routed. This must be an [Azure region in which Azure Video Indexer is available](https://azure.microsoft.com/global-infrastructure/services/?products=cognitive-services&regions=all).
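As a minimal sketch of where the **location** parameter fits, the example below lists an account's videos. The location value is the first path segment after the API host; the exact `Videos` operation path is an assumption to confirm in the developer portal.

```powershell
# Hypothetical sketch: the location parameter is the first segment after the API host.
# The List Videos path is an assumption; confirm the exact operation in the developer portal.
$location    = "westus2"                  # must be a region where Azure Video Indexer is available
$accountId   = "<account-id>"
$accessToken = "<account-access-token>"

$uri = "https://api.videoindexer.ai/$location/Accounts/$accountId/Videos?accessToken=$accessToken"
Invoke-RestMethod -Uri $uri -Method Get
```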
## Locations
azure-video-indexer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md
Title: Azure Video Indexer (formerly Azure Video Analyzer for Media) release notes | Microsoft Docs
-description: To stay up-to-date with the most recent developments, this article provides you with the latest updates on Azure Video Indexer (formerly Azure Video Analyzer for Media).
+ Title: Azure Video Indexer release notes | Microsoft Docs
+description: To stay up-to-date with the most recent developments, this article provides you with the latest updates on Azure Video Indexer.
Last updated 05/16/2022
>Get notified about when to revisit this page for updates by copying and pasting this URL: `https://docs.microsoft.com/api/search/rss?search=%22Azure+Media+Services+Video+Indexer+release+notes%22&locale=en-us` into your RSS feed reader.
-To stay up-to-date with the most recent Azure Video Indexer (formerly Azure Video Analyzer for Media) developments, this article provides you with information about:
+To stay up-to-date with the most recent Azure Video Indexer developments, this article provides you with information about:
* [Important notice](#upcoming-critical-changes) about planned changes * The latest releases
Fixed bugs related to CSS, theming and accessibility:
### Automatic Scaling of Media Reserved Units
-Starting August 1st 2021, Azure Video Analyzer for Media (formerly Video Indexer) enabled [Media Reserved Units (MRUs)](/azure/azure/media-services/latest/concept-media-reserved-units) auto scaling by [Azure Media Services](/azure/azure/media-services/latest/media-services-overview), as a result you do not need to manage them through Azure Video Analyzer for Media. That will allow price optimization, for example price reduction in many cases, based on your business needs as it is being auto scaled.
+Starting August 1st, 2021, Azure Video Indexer enabled [Media Reserved Units (MRUs)](/azure/azure/media-services/latest/concept-media-reserved-units) auto scaling by [Azure Media Services](/azure/azure/media-services/latest/media-services-overview). As a result, you don't need to manage them through Azure Video Indexer. That allows price optimization, for example price reduction in many cases, based on your business needs as it is being auto scaled.
## June 2021
azure-video-indexer Scenes Shots Keyframes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/scenes-shots-keyframes.md
Title: Azure Video Indexer (formerly Azure Video Analyzer for Media) scenes, shots, and keyframes
-description: This topic gives an overview of the Azure Video Indexer (formerly Azure Video Analyzer for Media) scenes, shots, and keyframes.
+ Title: Azure Video Indexer scenes, shots, and keyframes
+description: This topic gives an overview of the Azure Video Indexer scenes, shots, and keyframes.
Last updated 07/05/2019
# Scenes, shots, and keyframes
-Azure Video Indexer (formerly Azure Video Analyzer for Media) supports segmenting videos into temporal units based on structural and semantic properties. This capability enables customers to easily browse, manage, and edit their video content based on varying granularities. For example, based on scenes, shots, and keyframes, described in this topic.
+Azure Video Indexer supports segmenting videos into temporal units based on structural and semantic properties. This capability enables customers to easily browse, manage, and edit their video content based on varying granularities, such as scenes, shots, and keyframes, as described in this topic.
![Scenes, shots, and keyframes](./media/scenes-shots-keyframes/scenes-shots-keyframes.png)
azure-video-indexer Upload Index Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/upload-index-videos.md
Title: Upload and index videos with Azure Video Indexer (formerly Azure Video Analyzer for Media)
-description: Learn two methods for uploading and indexing videos by using Azure Video Indexer (formerly Azure Video Analyzer for Media).
+ Title: Upload and index videos with Azure Video Indexer
+description: Learn two methods for uploading and indexing videos by using Azure Video Indexer.
Last updated 11/15/2021
# Upload and index your videos
-This article shows how to upload and index videos by using the Azure Video Indexer (formerly Azure Video Analyzer for Media) website and the Upload Video API.
+This article shows how to upload and index videos by using the Azure Video Indexer website and the Upload Video API.
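A minimal sketch of the Upload Video API call from PowerShell follows. The `Videos` path and the `name`, `privacy`, and `videoUrl` query parameters are assumptions based on the developer portal pattern, the sample video URL is hypothetical, and the URL you pass must be publicly accessible.

```powershell
# Hypothetical sketch: upload a video by URL and start indexing.
# Path and query parameter names are assumptions; verify them in the developer portal.
$location    = "westus2"
$accountId   = "<account-id>"
$accessToken = "<account-access-token>"
$videoUrl    = [uri]::EscapeDataString("https://example.com/videos/demo.mp4")   # hypothetical, must be publicly reachable

$uploadUri = "https://api.videoindexer.ai/$location/Accounts/$accountId/Videos" +
             "?name=DemoVideo&privacy=Private&videoUrl=$videoUrl&accessToken=$accessToken"

# The response is expected to include the new video's id, which you then use to poll for the index.
$video = Invoke-RestMethod -Uri $uploadUri -Method Post
$video.id
```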
When you're creating an Azure Video Indexer account, you choose between:
azure-video-indexer Use Editor Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/use-editor-create-project.md
Title: Use the Azure Video Indexer (formerly Azure Video Analyzer for Media) editor to create projects and add video clips
-description: This topic demonstrates how to use the Azure Video Indexer (formerly Azure Video Analyzer for Media) editor to create projects and add video clips.
+ Title: Use the Azure Video Indexer editor to create projects and add video clips
+description: This topic demonstrates how to use the Azure Video Indexer editor to create projects and add video clips.
# Add video clips to your projects
-The [Azure Video Indexer (formerly Azure Video Analyzer for Media)](https://www.videoindexer.ai/) website enables you to use your video's deep insights to: find the right media content, locate the parts that youΓÇÖre interested in, and use the results to create an entirely new project.
+The [Azure Video Indexer](https://www.videoindexer.ai/) website enables you to use your video's deep insights to: find the right media content, locate the parts that you're interested in, and use the results to create an entirely new project.
Once created, the project can be rendered and downloaded from Azure Video Indexer and be used in your own editing applications or downstream workflows.
azure-video-indexer Video Indexer Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-disaster-recovery.md
Title: Azure Video Indexer (formerly Azure Video Analyzer for Media) failover and disaster recovery
-description: Learn how to fail over to a secondary Azure Video Indexer (formerly Azure Video Analyzer for Media) account if a regional datacenter failure or disaster occurs.
+ Title: Azure Video Indexer failover and disaster recovery
+description: Learn how to fail over to a secondary Azure Video Indexer account if a regional datacenter failure or disaster occurs.
editor: ''
# Azure Video Indexer failover and disaster recovery
-Azure Video Indexer (formerly Azure Video Analyzer for Media) doesn't provide instant failover of the service if there's a regional datacenter outage or failure. This article explains how to configure your environment for a failover to ensure optimal availability for apps and minimized recovery time if a disaster occurs.
+Azure Video Indexer doesn't provide instant failover of the service if there's a regional datacenter outage or failure. This article explains how to configure your environment for a failover to ensure optimal availability for apps and minimized recovery time if a disaster occurs.
We recommend that you configure business continuity disaster recovery (BCDR) across regional pairs to benefit from Azure's isolation and availability policies. For more information, see [Azure paired regions](../availability-zones/cross-region-replication-azure.md).
azure-video-indexer Video Indexer Embed Widgets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-embed-widgets.md
Title: Embed Azure Video Indexer (formerly Azure Video Analyzer for Media) widgets in your apps
-description: Learn how to embed Azure Video Indexer (formerly Azure Video Analyzer for Media) widgets in your apps.
+ Title: Embed Azure Video Indexer widgets in your apps
+description: Learn how to embed Azure Video Indexer widgets in your apps.
Last updated 04/15/2022
# Embed Video Analyzer for Media widgets in your apps
-This article shows how you can embed Azure Video Indexer (formerly Azure Video Analyzer for Media) widgets in your apps. Azure Video Indexer supports embedding three types of widgets into your apps: *Cognitive Insights*, *Player*, and *Editor*.
+This article shows how you can embed Azure Video Indexer widgets in your apps. Azure Video Indexer supports embedding three types of widgets into your apps: *Cognitive Insights*, *Player*, and *Editor*.
Starting with version 2, the widget base URL includes the region of the specified account. For example, an account in the West US region generates: `https://www.videoindexer.ai/embed/insights/.../?location=westus2`.
azure-video-indexer Video Indexer Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-get-started.md
Title: Sign up for Azure Video Indexer (formerly Azure Video Analyzer for Media) and upload your first video - Azure
-description: Learn how to sign up and upload your first video using the Azure Video Indexer (formerly Azure Video Analyzer for Media) portal.
+ Title: Sign up for Azure Video Indexer and upload your first video - Azure
+description: Learn how to sign up and upload your first video using the Azure Video Indexer portal.
Last updated 01/25/2021
# Quickstart: How to sign up and upload your first video
-This getting started quickstart shows how to sign in to the Azure Video Indexer (formerly Azure Video Analyzer for Media) website and how to upload your first video.
+This getting started quickstart shows how to sign in to the Azure Video Indexer website and how to upload your first video.
When creating an Azure Video Indexer account, you can choose a free trial account (where you get a certain number of free indexing minutes) or a paid option (where you aren't limited by the quota). With a free trial, Azure Video Indexer provides up to 600 minutes of free indexing to website users and up to 2400 minutes of free indexing to API users. With a paid option, you create an Azure Video Indexer account that is [connected to your Azure subscription and an Azure Media Services account](connect-to-azure.md). You pay for minutes indexed; for more information, see [Media Services pricing](https://azure.microsoft.com/pricing/details/azure/media-services/).
azure-video-indexer Video Indexer Output Json V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-output-json-v2.md
Title: Examine the Azure Video Indexer output
-description: This topic examines the Azure Video Indexer (formerly Azure Video Analyzer for Media) output produced by the Get Video Index API.
+description: This topic examines the Azure Video Indexer output produced by the Get Video Index API.
# Examine the Azure Video Indexer output
-When a video is indexed, Azure Video Indexer (formerly Azure Video Analyzer for Media) produces the JSON content that contains details of the specified video insights. The insights include transcripts, optical character recognition elements (OCRs), faces, topics, blocks, and similar details. Each insight type includes instances of time ranges that show when the insight appears in the video.
+When a video is indexed, Azure Video Indexer produces the JSON content that contains details of the specified video insights. The insights include transcripts, optical character recognition elements (OCRs), faces, topics, blocks, and similar details. Each insight type includes instances of time ranges that show when the insight appears in the video.
You can visually examine the video's summarized insights by pressing the **Play** button on the video on the [Azure Video Indexer](https://www.videoindexer.ai/) website.
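As a minimal sketch (assuming the Get Video Index operation lives at `.../Videos/{videoId}/Index` and the transcript property names shown below, both of which should be verified in the developer portal), you can also pull the JSON programmatically and print the transcript:

```powershell
# Hypothetical sketch: download the index JSON and print the transcript lines.
# The Index path and the transcript property names are assumptions; confirm them
# against your own Get Video Index output.
$location    = "westus2"
$accountId   = "<account-id>"
$videoId     = "<video-id>"
$accessToken = "<account-access-token>"

$indexUri = "https://api.videoindexer.ai/$location/Accounts/$accountId/Videos/$videoId/Index?accessToken=$accessToken"
$index    = Invoke-RestMethod -Uri $indexUri -Method Get

foreach ($line in $index.videos[0].insights.transcript) {
    "{0} - {1}: {2}" -f $line.instances[0].start, $line.instances[0].end, $line.text
}
```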
azure-video-indexer Video Indexer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-overview.md
Title: What is Azure Video Indexer (formerly Azure Video Analyzer for Media)?
-description: This article gives an overview of the Azure Video Indexer (formerly Azure Video Analyzer for Media) service.
+ Title: What is Azure Video Indexer?
+description: This article gives an overview of the Azure Video Indexer service.
Last updated 02/15/2022
[!INCLUDE [regulation](./includes/regulation.md)]
-Azure Video Indexer (formerly Azure Video Analyzer for Media) is a cloud application, part of Azure Applied AI Services, built on Azure Media Services and Azure Cognitive Services (such as the Face, Translator, Computer Vision, and Speech). It enables you to extract the insights from your videos using Azure Video Indexer video and audio models.
+Azure Video Indexer is a cloud application, part of Azure Applied AI Services, built on Azure Media Services and Azure Cognitive Services (such as the Face, Translator, Computer Vision, and Speech). It enables you to extract the insights from your videos using Azure Video Indexer video and audio models.
To start extracting insights with Azure Video Indexer, you need to create an account and upload videos. When you upload your videos to Azure Video Indexer, it analyzes both visuals and audio by running different AI models. As Azure Video Indexer analyzes your video, the AI models extract the insights.
azure-video-indexer Video Indexer Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-search.md
Title: Search for exact moments in videos with Azure Video Indexer (formerly Azure Video Analyzer for Media)
-description: Learn how to search for exact moments in videos using Azure Video Indexer (formerly Azure Video Analyzer for Media).
+ Title: Search for exact moments in videos with Azure Video Indexer
+description: Learn how to search for exact moments in videos using Azure Video Indexer.
Last updated 11/23/2019
# Search for exact moments in videos with Azure Video Indexer
-This topic shows you how to use the Azure Video Indexer (formerly Azure Video Analyzer for Media) website to search for exact moments in videos.
+This topic shows you how to use the Azure Video Indexer website to search for exact moments in videos.
1. Go to the [Azure Video Indexer](https://www.videoindexer.ai/) website and sign in. 1. Specify the search keywords and the search will be performed among all videos in your account's library.
azure-video-indexer Video Indexer Use Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-use-apis.md
Title: Use the Azure Video Indexer (formerly Azure Video Analyzer for Media) API
-description: This article describes how to get started with Azure Video Indexer (formerly Azure Video Analyzer for Media) API.
+ Title: Use the Azure Video Indexer API
+description: This article describes how to get started with Azure Video Indexer API.
Last updated 01/07/2021
# Tutorial: Use the Azure Video Indexer API
-Azure Video Indexer (formerly Azure Video Analyzer for Media) consolidates various audio and video artificial intelligence (AI) technologies offered by Microsoft into one integrated service, making development simpler. The APIs are designed to enable developers to focus on consuming Media AI technologies without worrying about scale, global reach, availability, and reliability of cloud platforms. You can use the API to upload your files, get detailed video insights, get URLs of embeddable insight and player widgets, and more.
+Azure Video Indexer consolidates various audio and video artificial intelligence (AI) technologies offered by Microsoft into one integrated service, making development simpler. The APIs are designed to enable developers to focus on consuming Media AI technologies without worrying about scale, global reach, availability, and reliability of cloud platforms. You can use the API to upload your files, get detailed video insights, get URLs of embeddable insight and player widgets, and more.
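A minimal sketch of the classic authorization step is shown below; the `Auth` path, the `Ocp-Apim-Subscription-Key` header, and the `allowEdit` parameter are assumptions based on the developer portal pattern and should be verified there before use. The returned token is then appended as the `accessToken` query parameter on subsequent calls such as uploading a video or retrieving its index.

```powershell
# Hypothetical sketch: request an account-level access token with the API subscription key.
# Path, header name, and parameters are assumptions; verify them in the developer portal.
$location        = "westus2"
$accountId       = "<account-id>"
$subscriptionKey = "<api-subscription-key>"   # from your developer portal profile

$tokenUri    = "https://api.videoindexer.ai/Auth/$location/Accounts/$accountId/AccessToken?allowEdit=true"
$accessToken = Invoke-RestMethod -Uri $tokenUri -Method Get -Headers @{ "Ocp-Apim-Subscription-Key" = $subscriptionKey }

# $accessToken is then passed as the accessToken query parameter on data-plane calls.
$accessToken
```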
When creating an Azure Video Indexer account, you can choose a free trial account (where you get a certain number of free indexing minutes) or a paid option (where you're not limited by the quota). With a free trial, Azure Video Indexer provides up to 600 minutes of free indexing to website users and up to 2400 minutes of free indexing to API users. With a paid option, you create an Azure Video Indexer account that's [connected to your Azure subscription and an Azure Media Services account](connect-to-azure.md). You pay for minutes indexed, for more information, see [Media Services pricing](https://azure.microsoft.com/pricing/details/azure/media-services/).
azure-video-indexer Video Indexer View Edit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-view-edit.md
Title: View and edit Azure Video Indexer (formerly Azure Video Analyzer for Media) insights
-description: This article demonstrates how to view and edit Azure Video Indexer (formerly Azure Video Analyzer for Media) insights.
+ Title: View and edit Azure Video Indexer insights
+description: This article demonstrates how to view and edit Azure Video Indexer insights.
# View and edit Azure Video Indexer insights
-This topic shows you how to view and edit the Azure Video Indexer (formerly Azure Video Analyzer for Media) insights of a video.
+This topic shows you how to view and edit the Azure Video Indexer insights of a video.
1. Browse to the [Azure Video Indexer](https://www.videoindexer.ai/) website and sign in. 2. Find a video from which you want to create your Azure Video Indexer insights. For more information, see [Find exact moments within videos](video-indexer-search.md).
azure-vmware Concepts Hub And Spoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-hub-and-spoke.md
The architecture has the following main components:
- **ExpressRoute gateway:** Enables the communication between Azure VMware Solution private cloud, shared services on Hub virtual network, and workloads running on Spoke virtual networks. -- **ExpressRoute Global Reach:** Enables the connectivity between on-premises and Azure VMware Solution private cloud. The connectivity between Azure VMware Solution and the Azure fabric is through ExpressRoute Global Reach only. You can't select any option beyond ExpressRoute Fast Path. ExpressRoute Direct isn't supported.
+- **ExpressRoute Global Reach:** Enables the connectivity between on-premises and Azure VMware Solution private cloud. The connectivity between Azure VMware Solution and the Azure fabric is through ExpressRoute Global Reach only.
- **S2S VPN considerations:** For Azure VMware Solution production deployments, Azure S2S VPN isn't supported due to network requirements for VMware HCX. However, you can use it for a PoC deployment.
azure-vmware Configure Vmware Syslogs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-vmware-syslogs.md
Last updated 04/11/2022
Diagnostic settings are used to configure streaming export of platform logs and metrics for a resource to the destination of your choice. You can create up to five different diagnostic settings to send different logs and metrics to independent destinations. In this article, you'll configure a diagnostic setting to collect VMware syslogs for your Azure VMware Solution private cloud. You'll store the syslogs in a storage account so you can view the vCenter Server logs and analyze them for diagnostic purposes.
+ >[!IMPORTANT]
+ >The **VMware syslogs** contain the following logs:
+ > - Distributed Firewall Logs
+ >- NSX-T Manager Logs
+ >- Gateway Firewall Logs
+ >- ESXi Logs
+ >- vCenter Logs
+ >- NSX Edge Logs
+
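If you prefer scripting over the portal steps in this article, the following is a hypothetical PowerShell sketch that routes a syslog category to a storage account. The `vmwaresyslog` category name and the classic `Set-AzDiagnosticSetting` parameters are assumptions; newer Az.Monitor versions expose `New-AzDiagnosticSetting` instead, so confirm the cmdlet and category against your module version and the private cloud's diagnostic settings blade.

```powershell
# Hypothetical sketch: route the private cloud's syslog category to a storage account.
# The category name ("vmwaresyslog") and cmdlet parameters are assumptions; newer Az.Monitor
# versions use New-AzDiagnosticSetting instead of Set-AzDiagnosticSetting.
$privateCloudId   = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.AVS/privateClouds/<private-cloud>"
$storageAccountId = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<storage-account>"

Set-AzDiagnosticSetting -Name "avs-syslog-to-storage" `
    -ResourceId $privateCloudId `
    -StorageAccountId $storageAccountId `
    -Category "vmwaresyslog" `
    -Enabled $true
```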
## Prerequisites
backup Backup Azure Delete Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-delete-vault.md
Title: Delete a Microsoft Azure Recovery Services vault description: In this article, learn how to remove dependencies and then delete an Azure Backup Recovery Services vault. Previously updated : 04/11/2022 Last updated : 05/23/2022
First, read the **[Before you start](#before-you-start)** section to understand
>[!Note] >- To download the PowerShell file to delete your vault, go to vault **Overview** -> **Delete** -> **Delete using PowerShell Script**, and then click **Generate and Download Script** as shown in the screenshot below. This generates a customized script specific to the vault, which requires no additional changes. You can run the script in the PowerShell console by switching to the downloaded script's directory and running the file using: _.\NameofFile.ps1_
->- Ensure PowerShell version 7 or later and the latest _Az module_ are installed. To install the same, see the [instructions here](?tabs=powershell#powershell-install-az-module).
+>- Ensure PowerShell version 7 or higher is installed. To install it, see the [instructions here](?tabs=powershell#powershell-install-az-module).
If you're sure that all the items backed up in the vault are no longer required and wish to delete them at once without reviewing, you can directly run the PowerShell script in this section. The script will delete all the backup items recursively and eventually the entire vault.
Follow these steps:
- **Step 1**: Seek the necessary permissions from the security administrator to delete the vault if Multi-User Authorization has been enabled against the vault. [Learn more](./multi-user-authorization.md#authorize-critical-protected-operations-using-azure-ad-privileged-identity-management) -- <a id="powershell-install-az-module">**Step 2**</a>: Install the _Az module_ and upgrade to PowerShell 7 version by performing these steps:
+- <a id="powershell-install-az-module">**Step 2**</a>: Upgrade to PowerShell 7 version by performing these steps:
1. Upgrade to PowerShell 7: Run the following command in your console:
Follow these steps:
1. Open PowerShell 7 as administrator.
- 1. Uninstall old Az module and install the latest version by running the following commands:
-
- ```azurepowershell-interactive
- Uninstall-Module -Name Az.RecoveryServices
- Set-ExecutionPolicy -ExecutionPolicy Unrestricted
- Install-Module -Name Az.RecoveryServices -Repository PSGallery -Force -AllowClobber
- ```
- **Step 3**: Save the PowerShell script in .ps1 format. Then, to run the script in your PowerShell console, type `./NameOfFile.ps1`. This recursively deletes all backup items and eventually the entire Recovery Services vault.
For more information on the ARMClient command, see [ARMClient README](https://gi
## Next steps - [Learn about Recovery Services vaults](backup-azure-recovery-services-vault-overview.md).-- [Learn about monitoring and managing Recovery Services vaults](backup-azure-manage-windows-server.md).
+- [Learn about monitoring and managing Recovery Services vaults](backup-azure-manage-windows-server.md).
backup Backup Azure Monitoring Built In Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-monitoring-built-in-monitor.md
Title: Monitor Azure Backup protected workloads description: In this article, learn about the monitoring and notification capabilities for Azure Backup workloads using the Azure portal. Previously updated : 03/21/2022 Last updated : 05/16/2022 ms.assetid: 86ebeb03-f5fa-4794-8a5f-aa5cbbf68a81
The following table summarizes the different backup alerts currently available (
| **Alert Category** | **Alert Name** | **Supported workload types / vault types** | **Description** | | | - | | -- |
-| Security | Delete Backup Data | <li> Azure Virtual Machine <br><br> <li> SQL in Azure VM (non-AG scenarios) <br><br> <li> SAP HANA in Azure VM <br><br> <li> Azure Backup Agent <br><br> <li> DPM <br><br> <li> Azure Backup Server <br><br> <li> Azure Database for PostgreSQL Server <br><br> <li> Azure Blobs <br><br> <li> Azure Managed Disks | This alert is fired when a user stops backup and deletes backup data (Note ΓÇô If soft-delete feature is disabled for the vault, Delete Backup Data alert is not received) |
-| Security | Upcoming Purge | <li> Azure Virtual Machine <br><br> <li> SQL in Azure VM (non-AG scenarios) <br><br> <li> SAP HANA in Azure VM | For all workloads which support soft-delete, this alert is fired when the backup data for an item is 2 days away from being permanently purged by the Azure Backup service |
-| Security | Purge Complete | <li> Azure Virtual Machine <br><br> <li> SQL in Azure VM (non-AG scenarios) <br><br> <li> SAP HANA in Azure VM | Delete Backup Data |
+| Security | Delete Backup Data | - Azure Virtual Machine <br><br> - SQL in Azure VM (non-AG scenarios) <br><br> - SAP HANA in Azure VM <br><br> - Azure Backup Agent <br><br> - DPM <br><br> - Azure Backup Server <br><br> - Azure Database for PostgreSQL Server <br><br> - Azure Blobs <br><br> - Azure Managed Disks | This alert is fired when a user stops backup and deletes backup data (Note – If soft-delete feature is disabled for the vault, Delete Backup Data alert is not received) |
+| Security | Upcoming Purge | - Azure Virtual Machine <br><br> - SQL in Azure VM (non-AG scenarios) <br><br> - SAP HANA in Azure VM | For all workloads which support soft-delete, this alert is fired when the backup data for an item is 2 days away from being permanently purged by the Azure Backup service |
+| Security | Purge Complete | - Azure Virtual Machine <br><br> - SQL in Azure VM (non-AG scenarios) <br><br> - SAP HANA in Azure VM | Delete Backup Data |
| Security | Soft Delete Disabled for Vault | Recovery Services vaults | This alert is fired when the soft-deleted backup data for an item has been permanently deleted by the Azure Backup service |
-| Jobs | Backup Failure | <li> Azure Virtual Machine <br><br> <li> SQL in Azure VM (non-AG scenarios) <br><br> <li> SAP HANA in Azure VM <br><br> <li> Azure Backup Agent <br><br> <li> Azure Files <br><br> <li> Azure Database for PostgreSQL Server <br><br> <li> Azure Managed Disks | This alert is fired when a backup job failure has occurred. By default, alerts for backup failures are turned off. Refer to the [section on turning on alerts for this scenario](#turning-on-azure-monitor-alerts-for-job-failure-scenarios) for more details. |
-| Jobs | Restore Failure | <li> Azure Virtual Machine <br><br> <li> SQL in Azure VM (non-AG scenarios) <br><br> <li> SAP HANA in Azure VM <br><br> <li> Azure Backup Agent <br><br> <li> Azure Files <br><br> <li> Azure Database for PostgreSQL Server <br><br> <li> Azure Blobs <br><br> <li> Azure Managed Disks| This alert is fired when a restore job failure has occurred. By default, alerts for restore failures are turned off. Refer to the [section on turning on alerts for this scenario](#turning-on-azure-monitor-alerts-for-job-failure-scenarios) for more details. |
+| Jobs | Backup Failure | - Azure Virtual Machine <br><br> - SQL in Azure VM (non-AG scenarios) <br><br> - SAP HANA in Azure VM <br><br> - Azure Backup Agent <br><br> - Azure Files <br><br> - Azure Database for PostgreSQL Server <br><br> - Azure Managed Disks | This alert is fired when a backup job failure has occurred. By default, alerts for backup failures are turned off. Refer to the [section on turning on alerts for this scenario](#turning-on-azure-monitor-alerts-for-job-failure-scenarios) for more details. |
+| Jobs | Restore Failure | - Azure Virtual Machine <br><br> - SQL in Azure VM (non-AG scenarios) <br><br> - SAP HANA in Azure VM <br><br> - Azure Backup Agent <br><br> - Azure Files <br><br> - Azure Database for PostgreSQL Server <br><br> - Azure Blobs <br><br> - Azure Managed Disks | This alert is fired when a restore job failure has occurred. By default, alerts for restore failures are turned off. Refer to the [section on turning on alerts for this scenario](#turning-on-azure-monitor-alerts-for-job-failure-scenarios) for more details. |
### Turning on Azure Monitor alerts for job failure scenarios To opt in to Azure Monitor alerts for backup failure and restore failure scenarios, follow the below steps:
-1. Navigate to the Azure portal and search for **Preview Features**.
+**Choose a vault type**:
- ![Screenshot for viewing preview features in portal](media/backup-azure-monitoring-laworkspace/portal-preview-features.png)
+# [Recovery Services vaults](#tab/recovery-services-vaults)
-2. You can view the list of all preview features that are available for you to opt in to.
+1. Go to the Azure portal and search for **Preview Features**.
- * If you wish to receive job failure alerts for workloads backed up to Recovery Services vaults, select the flag named **EnableAzureBackupJobFailureAlertsToAzureMonitor** corresponding to Microsoft.RecoveryServices provider (column 3).
- * If you wish to receive job failure alerts for workloads backed up to the Backup vaults, select the flag named **EnableAzureBackupJobFailureAlertsToAzureMonitor** corresponding to Microsoft.DataProtection provider (column 3).
+ :::image type="content" source="media/backup-azure-monitoring-laworkspace/portal-preview-features.png" alt-text="Screenshot for viewing preview features in portal.":::
- ![Screenshot for Alerts preview registration](media/backup-azure-monitoring-laworkspace/alert-preview-feature-flags.png)
+1. You can view the list of all preview features that are available for you to opt in to.
-3. Click **Register** to enable this feature for your subscription.
- > [!NOTE]
- > It may take up to 24 hours for the registration to take effect. To enable this feature for multiple subscriptions, repeat the above process by selecting the relevant subscription at the top of the screen. We also recommend to re-register the preview flag if a new resource has been created in the subscription after the initial registration to continue receiving alerts.
+ To receive job failure alerts for workloads backed up to Recovery Services vaults, select the flag named **EnableAzureBackupJobFailureAlertsToAzureMonitor** corresponding to *Microsoft.RecoveryServices* provider (column 3).
-4. As a best practice, we also recommend you to register the resource provider to ensure that the feature registration information gets synced with the Azure Backup service as expected. To register the resource provider, run the following PowerShell command in the subscription for which you have registered the feature flag.
+ :::image type="content" source="media/backup-azure-monitoring-laworkspace/alert-preview-feature-flags.png" alt-text="Screenshot for Alerts preview registration.":::
-```powershell
-Register-AzResourceProvider -ProviderNamespace <ProviderNamespace>
-```
+1. Click **Register** to enable this feature for your subscription.
-To receive alerts for Recovery Services vaults, use the value _Microsoft.RecoveryServices_ for the _ProviderNamespace_ parameter. To receive alerts for Backup vaults, use the value _Microsoft.DataProtection_.
+ > [!NOTE]
+ > It may take up to 24 hours for the registration to take effect. To enable this feature for multiple subscriptions, repeat the above process by selecting the relevant subscription at the top of the screen. We also recommend re-registering the preview flag if a new resource has been created in the subscription after the initial registration to continue receiving alerts.
+
+1. As a best practice, we also recommend that you register the resource provider to ensure that the feature registration information syncs with the Azure Backup service as expected.
+
+ To register the resource provider, run the following PowerShell command in the subscription for which you have registered the feature flag.
+
+ ```powershell
+ Register-AzResourceProvider -ProviderNamespace <ProviderNamespace>
+ ```
+
+ To receive alerts for Recovery Services vaults, use the value *Microsoft.RecoveryServices* for the *ProviderNamespace* parameter.
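If you prefer scripting over the portal, the same registration can be sketched with Azure PowerShell. This is a minimal sketch, assuming the Az.Resources module and the feature flag name shown in the portal steps above:

```powershell
# Register the preview feature flag on the current subscription (flag name taken from the portal steps above).
Register-AzProviderFeature -FeatureName "EnableAzureBackupJobFailureAlertsToAzureMonitor" `
    -ProviderNamespace "Microsoft.RecoveryServices"

# Check the registration state; it can take up to 24 hours to take effect.
Get-AzProviderFeature -FeatureName "EnableAzureBackupJobFailureAlertsToAzureMonitor" `
    -ProviderNamespace "Microsoft.RecoveryServices"

# Re-register the resource provider, as recommended above.
Register-AzResourceProvider -ProviderNamespace "Microsoft.RecoveryServices"
```

Repeat the same commands in each subscription for which you want to receive alerts.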
+
+# [Backup vaults](#tab/backup-vaults)
+
+For Backup vaults, you no longer need to use a feature flag to opt in to alerts for job failure scenarios. Built-in Azure Monitor alerts are generated for job failures by default. If you want to turn off alerts for these scenarios, you can edit the monitoring settings property of the vault accordingly.
+
+To manage monitoring settings for a Backup vault, follow these steps:
+
+1. Go to the vault and click **Properties**.
+
+1. Locate the **Monitoring Settings** vault property and click **Update**.
+
+ :::image type="content" source="media/backup-azure-monitoring-laworkspace/monitoring-settings-backup-vault.png" alt-text="Screenshot for monitoring settings in backup vault.":::
++
+1. In the context pane, select the appropriate options to enable or disable built-in Azure Monitor alerts for job failures, depending on your requirements.
+
+1. Click **Update** to save the setting for the vault.
+
+ :::image type="content" source="media/backup-azure-monitoring-laworkspace/job-failure-alert-setting-inline.png" alt-text="Screenshot for updating Azure Monitor alert settings in backup vault." lightbox="media/backup-azure-monitoring-laworkspace/job-failure-alert-setting-expanded.png":::
++
### Viewing fired alerts in the Azure portal
-Once an alert is fired for a vault, you can view the alert in the Azure portal by navigating to Backup center. On the **Overview** tab, you can see a summary of active alerts split by severity. There're two kinds of alerts displayed:
+Once an alert is fired for a vault, you can go to Backup center to view the alert in the Azure portal. On the **Overview** tab, you can see a summary of active alerts split by severity. Two kinds of alerts are displayed:
* **Datasource Alerts**: Alerts that are tied to a specific datasource being backed up (for example, backup or restore failure for a VM, deleting backup data for a database, and so on) appear under the **Datasource Alerts** section.
* **Global Alerts**: Alerts that are not tied to a specific datasource (for example, disabling soft-delete functionality for a vault) appear under the **Global Alerts** section.

Each of the above types of alerts is further split into **Security** and **Configured** alerts. Currently, Security alerts include the scenarios of deleting backup data or disabling soft delete for a vault (for the applicable workloads as detailed in the above section). Configured alerts include backup failure and restore failure, because these alerts are only fired after registering the feature in the preview portal.
-![Screenshot for viewing alerts in Backup center](media/backup-azure-monitoring-laworkspace/backup-center-azure-monitor-alerts.png)
Clicking any of the numbers (or the **Alerts** menu item) opens a list of all active alerts with the relevant filters applied. You can filter on a range of properties, such as subscription, resource group, vault, severity, and state. You can click any alert to get more details, such as the affected datasource, alert description, and recommended action.
-![Screenshot for viewing details of the alert](media/backup-azure-monitoring-laworkspace/backup-center-alert-details.png)
You can change the state of an alert to **Acknowledged** or **Closed** by clicking on **Change Alert State**.
-![Screenshot for changing state of the alert](media/backup-azure-monitoring-laworkspace/backup-center-change-alert-state.png)
> [!NOTE]
> - In Backup center, only alerts for Azure-based workloads are currently displayed. To view alerts for on-premises resources, go to the Recovery Services vault and click the **Alerts** menu item.
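If you manage alerts at scale, the portal actions above can also be approximated from PowerShell. This is a sketch only, assuming the Az.AlertsManagement module and that Azure Backup alerts surface with the monitor service name "Azure Backup":

```powershell
# List active (New) alerts raised by Azure Backup.
Get-AzAlert -MonitorService "Azure Backup" -State "New"

# Acknowledge a specific alert by its identifier (the GUID below is a placeholder).
Update-AzAlertState -AlertId "00000000-0000-0000-0000-000000000000" -State "Acknowledged"
```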
To configure notifications for Azure Monitor alerts, create an [alert processing
1. On the **Basics** tab, select the name of the action group, the subscription, and resource group under which it should be created.
- ![Screenshot for basic properties of action group](media/backup-azure-monitoring-laworkspace/azure-monitor-action-groups-basic.png)
+ :::image type="content" source="media/backup-azure-monitoring-laworkspace/azure-monitor-action-groups-basic.png" alt-text="Screenshot for basic properties of action group.":::
1. On the **Notifications** tab, select **Email/SMS message/Push/Voice** and enter the recipient email ID.
- ![Screenshot for setting notification properties](media/backup-azure-monitoring-laworkspace/azure-monitor-email.png)
+ :::image type="content" source="media/backup-azure-monitoring-laworkspace/azure-monitor-email.png" alt-text="Screenshot for setting notification properties.":::
1. Click **Review+Create** and then **Create** to deploy the action group.
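The same action group can also be created with Azure PowerShell. This is a minimal sketch, assuming an older Az.Monitor release (newer releases replace these cmdlets with `New-AzActionGroup` and receiver object cmdlets); the resource group, action group name, and email address are placeholders:

```powershell
# Create an email receiver for the action group.
$email = New-AzActionGroupReceiver -Name "notify-backup-admin" -EmailReceiver -EmailAddress "backupadmin@contoso.com"

# Create (or update) the action group with that receiver.
Set-AzActionGroup -ResourceGroupName "backup-rg" -Name "backup-alerts-ag" -ShortName "bkpalerts" -Receiver $email
```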
backup Backup Azure Sap Hana Database Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sap-hana-database-troubleshoot.md
Title: Troubleshoot SAP HANA databases backup errors description: Describes how to troubleshoot common errors that might occur when you use Azure Backup to back up SAP HANA databases. Previously updated : 04/01/2022 Last updated : 05/16/2022
Upgrades from SDC to MDC that cause a SID change can be handled as follows:
- Ensure that the new MDC version is currently [supported by Azure Backup](sap-hana-backup-support-matrix.md#scenario-support)
- **Stop protection with retain data** for the old SDC database
-- Move the _config.json_ file located at _/opt/msawb/etc/config/SAPHana/_.
-- Perform the upgrade. After completion, the HANA system is now MDC with a system DB and tenant DBs
+- Move the *config.json* file located at `/opt/msawb/etc/config/SAPHana/`.
+- Perform the upgrade. After completion, the HANA system is now MDC with a system DB and tenant DBs.
- Rerun the [pre-registration script](https://aka.ms/scriptforpermsonhana) with correct details (new SID and MDC). Due to a change in SID, you might face issues with successful execution of the script. Contact Azure Backup support if you face issues.
-- Re-register the extension for the same machine in the Azure portal (**Backup** -> **View details** -> Select the relevant Azure VM -> Re-register)
-- Select **Rediscover DBs** for the same VM. This action should show the new DBs in step 3 as SYSTEMDB and Tenant DB, not SDC
-- The older SDC database continues to exist in the vault and have old backed up data retained according to the policy
-- Configure backup for these databases
+- Re-register the extension for the same machine in the Azure portal (**Backup** -> **View details** -> Select the relevant Azure VM -> Re-register).
+- Select **Rediscover DBs** for the same VM. This action should show the new DBs in step 3 as SYSTEMDB and Tenant DB, not SDC.
+- The older SDC database continues to exist in the vault and have old backed up data retained according to the policy.
+- Configure backup for these databases.
## Re-registration failures
backup Backup Azure Vms Enhanced Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-vms-enhanced-policy.md
Title: Back up Azure VMs with Enhanced policy description: Learn how to configure Enhanced policy to back up VMs. Previously updated : 04/30/2022 Last updated : 05/06/2022
This article explains how to use _Enhanced policy_ to configure _Multiple Backup
Azure Backup now supports _Enhanced policy_ that's needed to support new Azure offerings. For example, [Trusted Launch VM](../virtual-machines/trusted-launch.md) is supported with _Enhanced policy_ only. >[!Important]
->The existing [default policy](./backup-during-vm-creation.md#create-a-vm-with-backup-configured) wonΓÇÖt support protecting newer Azure offerings, such as Trusted Launch VM, UltraSSD, Shared disk, and Confidential Azure VMs.
+>- [Default policy](./backup-during-vm-creation.md#create-a-vm-with-backup-configured) will not support protecting newer Azure offerings, such as [Trusted Launch VM](backup-support-matrix-iaas.md#tvm-backup), [Ultra SSD](backup-support-matrix-iaas.md#vm-storage-support), [Shared disk](backup-support-matrix-iaas.md#vm-storage-support), and Confidential Azure VMs.
+>- Enhanced policy currently doesn't support protecting Ultra SSD.
You must enable backup of Trusted Launch VM through enhanced policy only. Enhanced policy provides the following features:
Follow these steps:
- [Run a backup immediately](./backup-azure-vms-first-look-arm.md#run-a-backup-immediately) - [Verify Backup job status](./backup-azure-arm-vms-prepare.md#verify-backup-job-status) - [Restore Azure virtual machines](./backup-azure-arm-restore-vms.md#restore-disks)
+- [Troubleshoot VM backup](backup-azure-vms-troubleshoot.md#usererrormigrationfromtrustedlaunchvm-tonontrustedvmnotallowed)
backup Backup Azure Vms Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-vms-troubleshoot.md
Title: Troubleshoot backup errors with Azure VMs
description: In this article, learn how to troubleshoot errors encountered with backup and restore of Azure virtual machines. Previously updated : 05/13/2022 Last updated : 05/17/2022+++ # Troubleshooting backup failures on Azure virtual machines
To resolve this issue, try to restore the VM from a different restore point.<br>
| The selected subnet doesn't exist: <br>Select a subnet that exists. |None | | The Backup service doesn't have authorization to access resources in your subscription. |To resolve this error, first restore disks by using the steps in [Restore backed-up disks](backup-azure-arm-restore-vms.md#restore-disks). Then use the PowerShell steps in [Create a VM from restored disks](backup-azure-vms-automation.md#restore-an-azure-vm). |
+### UserErrorMigrationFromTrustedLaunchVM ToNonTrustedVMNotAllowed
+
+**Error code**: UserErrorMigrationFromTrustedLaunchVMToNonTrustedVMNotAllowed
+
+**Error message**: Backup cannot be configured for the VM which has migrated from Trusted Launch mode to non Trusted Launch mode.
+
+**Scenario 1**: Migration of Trusted Launch VM to Generation 2 VM is blocked.
+
+Migration of a Trusted Launch VM to a Generation 2 VM isn't supported. This is because the VM Guest State (VMGS) blob created for Trusted Launch VMs isn't present for Generation 2 VMs. Therefore, the VM won't start.
+
+**Scenario 2**: Unable to protect a Standard VM that has the same name as a Trusted Launch VM that was previously deleted.
+
+To resolve this issue:
+
+1. [Disable soft delete](backup-azure-security-feature-cloud.md#disabling-soft-delete-using-azure-portal).
+1. [Stop VM protection with delete backup data](backup-azure-manage-vms.md#stop-protection-and-delete-backup-data).
+1. Re-enable soft delete.
+1. Configure VM protection again with the appropriate policy after the old backup data deletion is complete from the Recovery Services vault.
+
+>[!Note]
+>You can also create a VM:
+>
+>- With a different name than the original one, **or**
+>- In a different resource group with the same name.
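The portal steps above could be scripted roughly as follows. This is only a sketch, assuming the Az.RecoveryServices module; the vault, resource group, VM, and policy names are placeholders (the policy name "EnhancedPolicy" is just an example):

```powershell
# Get the Recovery Services vault.
$vault = Get-AzRecoveryServicesVault -ResourceGroupName "backup-rg" -Name "myVault"

# 1. Disable soft delete on the vault.
Set-AzRecoveryServicesVaultProperty -VaultId $vault.ID -SoftDeleteFeatureState "Disable"

# 2. Stop protection and delete backup data for the old VM's backup item.
$container = Get-AzRecoveryServicesBackupContainer -ContainerType "AzureVM" -FriendlyName "myVM" -VaultId $vault.ID
$item = Get-AzRecoveryServicesBackupItem -Container $container -WorkloadType "AzureVM" -VaultId $vault.ID
Disable-AzRecoveryServicesBackupProtection -Item $item -RemoveRecoveryPoints -VaultId $vault.ID -Force

# 3. Re-enable soft delete.
Set-AzRecoveryServicesVaultProperty -VaultId $vault.ID -SoftDeleteFeatureState "Enable"

# 4. Configure protection again once the old backup data is purged from the vault.
$policy = Get-AzRecoveryServicesBackupProtectionPolicy -Name "EnhancedPolicy" -VaultId $vault.ID
Enable-AzRecoveryServicesBackupProtection -ResourceGroupName "vm-rg" -Name "myVM" -Policy $policy -VaultId $vault.ID
```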
+
## Backup or restore takes time

If your backup takes more than 12 hours, or restore takes more than 6 hours, review [best practices](backup-azure-vms-introduction.md#best-practices), and
backup Backup Rbac Rs Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-rbac-rs-vault.md
The following table captures the Backup management actions and corresponding Azu
>[!Note]
>If you have Contributor access at the resource group level and want to configure backup from the file share blade, ensure that you have the *microsoft.recoveryservices/Locations/backupStatus/action* permission at the subscription level. To do so, create a [*custom role*](../role-based-access-control/custom-roles-portal.md#start-from-scratch) and assign this permission.
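A minimal sketch of creating such a custom role with Azure PowerShell, using the common clone-and-trim pattern; the role name and subscription ID are placeholders:

```powershell
# Clone a built-in role and trim it down to the single backupStatus action mentioned in the note above.
$role = Get-AzRoleDefinition -Name "Contributor"
$role.Id = $null
$role.Name = "Backup Status Checker (custom)"
$role.Description = "Allows checking backup status from the file share blade."
$role.Actions.Clear()
$role.Actions.Add("microsoft.recoveryservices/Locations/backupStatus/action")
$role.NotActions.Clear()
$role.AssignableScopes.Clear()
$role.AssignableScopes.Add("/subscriptions/<subscription-id>")

# Create the custom role at subscription scope.
New-AzRoleDefinition -Role $role
```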
+### Minimum role requirements for Azure disk backup
+
+| Management Operation | Minimum Azure role required | Scope Required | Alternative |
+| | | | |
+| Validate before configuring backup | Backup Operator | Backup vault | |
+| | Disk Backup Reader | Disk to be backed up| |
+| Enable backup from backup vault | Backup Operator | Backup vault | |
+| | Disk Backup Reader | Disk to be backed up | In addition, the backup vault MSI should be given [these permissions](/azure/backup/disk-backup-faq#what-are-the-permissions-used-by-azure-backup-during-backup-and-restore-operation-) |
+| On demand backup of disk | Backup Operator | Backup vault | |
+| Validate before restoring a disk | Backup Operator | Backup vault | |
+| | Disk Restore Operator | Resource group where disks will be restored to | |
+| Restoring a disk | Backup Operator | Backup vault | |
+| | Disk Restore Operator | Resource group where disks will be restored to | In addition, the backup vault MSI should be given [these permissions](/azure/backup/disk-backup-faq#what-are-the-permissions-used-by-azure-backup-during-backup-and-restore-operation-) |
+
+### Minimum role requirements for Azure blob backup
+
+| Management Operation | Minimum Azure role required | Scope Required | Alternative |
+| | | | |
+| Validate before configuring backup | Backup Operator | Backup vault | |
+| | Storage account backup contributor | Storage account containing the blob | |
+| Enable backup from backup vault | Backup Operator | Backup vault | |
+| | Storage account backup contributor | Storage account containing the blob | In addition, the backup vault MSI should be given [these permissions](/azure/backup/blob-backup-configure-manage#grant-permissions-to-the-backup-vault-on-storage-accounts) |
+| On demand backup of blob | Backup Operator | Backup vault | |
+| Validate before restoring a blob | Backup Operator | Backup vault | |
+| | Storage account backup contributor | Storage account containing the blob | |
+| Restoring a blob | Backup Operator | Backup vault | |
+| | Storage account backup contributor | Storage account containing the blob | In addition, the backup vault MSI should be given [these permissions](/azure/backup/blob-backup-configure-manage#grant-permissions-to-the-backup-vault-on-storage-accounts) |
+
+### Minimum role requirements for Azure Database for PostgreSQL server backup
+
+| Management Operation | Minimum Azure role required | Scope Required | Alternative |
+| | | | |
+| Validate before configuring backup | Backup Operator | Backup vault | |
+| | Reader | Azure PostgreSQL server | |
+| Enable backup from backup vault | Backup Operator | Backup vault | |
+| | Contributor | Azure PostgreSQL server | Alternatively, instead of a built-in role, you can consider a custom role that has the following permissions: *Microsoft.DBforPostgreSQL/servers/write*, *Microsoft.DBforPostgreSQL/servers/read*. In addition, the backup vault MSI should be given [these permissions](/azure/backup/backup-azure-database-postgresql-overview#set-of-permissions-needed-for-azure-postgresql-database-backup) |
+| On demand backup of PostgreSQL server | Backup Operator | Backup vault | |
+| Validate before restoring a server | Backup Operator | Backup vault | |
+| | Contributor | Target Azure PostgreSQL server | Alternatively, instead of a built-in role, you can consider a custom role that has the following permissions: *Microsoft.DBforPostgreSQL/servers/write*, *Microsoft.DBforPostgreSQL/servers/read* |
+| Restoring a server | Backup Operator | Backup vault | |
+| | Contributor | Target Azure PostgreSQL server | Alternatively, instead of a built-in role, you can consider a custom role that has the following permissions: *Microsoft.DBforPostgreSQL/servers/write*, *Microsoft.DBforPostgreSQL/servers/read*. In addition, the backup vault MSI should be given [these permissions](/azure/backup/backup-azure-database-postgresql-overview#set-of-permissions-needed-for-azure-postgresql-database-restore) |
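The roles in the preceding tables can be granted with standard Azure RBAC tooling. Here is a minimal sketch using Azure PowerShell, with placeholder identities and resource IDs (shown for the disk backup case):

```powershell
# Grant Backup Operator on the Backup vault.
New-AzRoleAssignment -SignInName "backupadmin@contoso.com" -RoleDefinitionName "Backup Operator" `
    -Scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.DataProtection/backupVaults/<vault-name>"

# Grant Disk Backup Reader on the disk to be backed up.
New-AzRoleAssignment -SignInName "backupadmin@contoso.com" -RoleDefinitionName "Disk Backup Reader" `
    -Scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Compute/disks/<disk-name>"
```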
+
## Next steps

* [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.md): Get started with Azure RBAC in the Azure portal.
backup Backup Support Matrix Iaas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix-iaas.md
Title: Support matrix for Azure VM backup description: Provides a summary of support settings and limitations when backing up Azure VMs with the Azure Backup service. Previously updated : 04/30/2022 Last updated : 05/24/2022
Data disk size | Individual disk size can be up to 32 TB and a maximum of 256 TB
Storage type | Standard HDD, Standard SSD, Premium SSD. <br><br> Backup and restore of [ZRS disks](../virtual-machines/disks-redundancy.md#zone-redundant-storage-for-managed-disks) is supported.
Managed disks | Supported.
Encrypted disks | Supported.<br/><br/> Azure VMs enabled with Azure Disk Encryption can be backed up (with or without the Azure AD app).<br/><br/> Encrypted VMs can't be recovered at the file/folder level. You must recover the entire VM.<br/><br/> You can enable encryption on VMs that are already protected by Azure Backup.
-Disks with Write Accelerator enabled | Currently, Azure VM with WA disk backup is previewed in all Azure public regions. <br><br> To enroll your subscription for WA Disk, write to us at [askazurebackupteam@microsoft.com](mailto:askazurebackupteam@microsoft.com). <br><br> Snapshots donΓÇÖt include WA disk snapshots for unsupported subscriptions as WA disk will be excluded. <br><br>**Important** <br> Virtual machines with WA disks need internet connectivity for a successful backup (even though those disks are excluded from the backup).
+Disks with Write Accelerator enabled | Azure VM with WA disk backup is available in all Azure public regions starting from May 18, 2020. If WA disk backup is not required as part of VM backup, you can choose to exclude the WA disk by using the [**Selective disk** feature](selective-disk-backup-restore.md). <br><br>**Important** <br> Virtual machines with WA disks need internet connectivity for a successful backup (even though those disks are excluded from the backup).
Back up & Restore deduplicated VMs/disks | Azure Backup doesn't support deduplication. For more information, see this [article](./backup-support-matrix.md#disk-deduplication-support) <br/> <br/> - Azure Backup doesn't deduplicate across VMs in the Recovery Services vault <br/> <br/> - If there are VMs in deduplication state during restore, the files can't be restored because the vault doesn't understand the format. However, you can successfully perform the full VM restore.
Add disk to protected VM | Supported.
Resize disk on protected VM | Supported.
backup Enable Multi User Authorization Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/enable-multi-user-authorization-quickstart.md
+
+ Title: Quickstart - Multi-user authorization using Resource Guard
+description: In this quickstart, learn how to use Multi-user authorization to protect against unauthorized operation.
+ Last updated : 05/05/2022+++++
+# Quickstart: Enable protection using Multi-user authorization on Recovery Services vault in Azure Backup
+
+Multi-user authorization (MUA) for Azure Backup allows you to add an additional layer of protection to critical operations on your Recovery Services vaults. For MUA, Azure Backup uses another Azure resource called the Resource Guard to ensure critical operations are performed only with applicable authorization. Learn about [MUA concepts](multi-user-authorization-concept.md).
+
+This quickstart describes how to enable Multi-user authorization (MUA) for Azure Backup.
+
+## Prerequisites
+
+Before you start:
+
+- Ensure the Resource Guard and the Recovery Services vault are in the same Azure region.
+- Ensure the Backup admin does **not** have **Contributor** permissions on the Resource Guard. You can choose to have the Resource Guard in another subscription of the same directory or in another directory to ensure maximum isolation.
+- Ensure that your subscriptions containing the Recovery Services vault as well as the Resource Guard (in different subscriptions or tenants) are registered to use the **Microsoft.RecoveryServices** provider. For more details, see [Azure resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider-1).
+- Ensure that you [create a Resource Guard](multi-user-authorization.md#create-a-resource-guard) in a different subscription or tenant than the vault, but in the same region as the vault.
+- Ensure that you [assign permissions to the Backup admin on the Resource Guard to enable MUA](multi-user-authorization.md#assign-permissions-to-the-backup-admin-on-the-resource-guard-to-enable-mua).
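For the provider registration prerequisite above, a quick check and registration might look like the following sketch (run it in each subscription involved):

```powershell
# Check whether the Microsoft.RecoveryServices provider is registered in the current subscription.
Get-AzResourceProvider -ProviderNamespace "Microsoft.RecoveryServices" |
    Select-Object ProviderNamespace, RegistrationState

# Register the provider if it isn't already registered.
Register-AzResourceProvider -ProviderNamespace "Microsoft.RecoveryServices"
```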
+
+## Enable MUA
+
+The Backup admin now has the Reader role on the Resource Guard and can enable multi-user authorization on the vaults they manage.
+
+Follow these steps:
+
+1. Go to the Recovery Services vault.
+1. Go to **Properties** on the left navigation panel, then to **Multi-User Authorization** and click **Update**.
+1. The option to enable MUA appears. Choose a Resource Guard in one of the following ways:
+
+    1. Specify the URI of the Resource Guard. Make sure you specify the URI of a Resource Guard that you have **Reader** access to and that is in the same region as the vault. You can find the URI (Resource Guard ID) of the Resource Guard on its **Overview** screen.
+
+    1. Or, select the Resource Guard from the list of Resource Guards that you have **Reader** access to and that are available in the region:
+
+ 1. Click **Select Resource Guard**
+ 1. Click on the dropdown and select the directory the Resource Guard is in.
+ 1. Click **Authenticate** to validate your identity and access.
+ 1. After authentication, choose the **Resource Guard** from the list displayed.
+
+1. Click **Save** once done to enable MUA.
+
+## Next steps
+
+- [Protect against unauthorized (protected) operations](multi-user-authorization.md#protect-against-unauthorized-protected-operations)
+- [Authorize critical (protected) operations using Azure AD Privileged Identity Management](multi-user-authorization.md#authorize-critical-protected-operations-using-azure-ad-privileged-identity-management)
+- [Performing a protected operation after approval](multi-user-authorization.md#performing-a-protected-operation-after-approval)
+- [Disable MUA on a Recovery Services vault](multi-user-authorization.md#disable-mua-on-a-recovery-services-vault)
backup Multi User Authorization Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/multi-user-authorization-concept.md
+
+ Title: Multi-user authorization using Resource Guard
+description: An overview of Multi-user authorization using Resource Guard.
+ Last updated : 05/05/2022++++
+# Multi-user authorization using Resource Guard
+
+Multi-user authorization (MUA) for Azure Backup allows you to add an additional layer of protection to critical operations on your Recovery Services vaults. For MUA, Azure Backup uses another Azure resource called the Resource Guard to ensure critical operations are performed only with applicable authorization.
+
+## How does MUA for Backup work?
+
+Azure Backup uses the Resource Guard as an authorization service for a Recovery Services vault. Therefore, to perform a critical operation (described below) successfully, you must have sufficient permissions on the associated Resource Guard as well.
+
+> [!Important]
+> To function as intended, the Resource Guard must be owned by a different user, and the vault admin must not have Contributor permissions. You can place Resource Guard in a subscription or tenant different from the one containing the Recovery Services vault to provide better protection.
+
+## Critical operations
+
+The following table lists the operations that are defined as critical operations and that can be protected by a Resource Guard. You can choose to exclude certain operations from being protected by the Resource Guard when associating vaults with it. Note that operations denoted as Mandatory can't be excluded from protection for vaults associated with the Resource Guard. Also, the excluded critical operations apply to all vaults associated with a Resource Guard.
+
+**Operation** | **Mandatory/Optional**
+ |
+Disable soft delete | Mandatory
+Disable MUA protection | Mandatory
+Modify backup policy | Optional: Can be excluded
+Modify protection | Optional: Can be excluded
+Stop protection | Optional: Can be excluded
+Change MARS security PIN | Optional: Can be excluded
+
+### Concepts and process
+The concepts and the processes involved when using MUA for Backup are explained below.
+
+Let's consider the following two users for a clear understanding of the process and responsibilities. These two roles are referenced throughout this article.
+
+**Backup admin**: Owner of the Recovery Services vault and performs management operations on the vault. To begin with, the Backup admin must not have any permissions on the Resource Guard.
+
+**Security admin**: Owner of the Resource Guard and serves as the gatekeeper of critical operations on the vault. Hence, the Security admin controls permissions that the Backup admin needs to perform critical operations on the vault.
+
+Following is a diagrammatic representation for performing a critical operation on a vault that has MUA configured using a Resource Guard.
+
+
+Here is the flow of events in a typical scenario:
+
+1. The Backup admin creates the Recovery Services vault.
+1. The Security admin creates the Resource Guard. The Resource Guard can be in a different subscription or a different tenant from the Recovery Services vault. Ensure that the Backup admin doesn't have Contributor permissions on the Resource Guard.
+1. The Security admin grants the **Reader** role to the Backup Admin for the Resource Guard (or a relevant scope). The Backup admin requires the reader role to enable MUA on the vault.
+1. The Backup admin now configures the Recovery Services vault to be protected by MUA via the Resource Guard.
+1. Now, if the Backup admin wants to perform a critical operation on the vault, they need to request access to the Resource Guard. The Backup admin can contact the Security admin for details on gaining access to perform such operations. They can do this using Privileged Identity Management (PIM) or other processes as mandated by the organization.
+1. The Security admin temporarily grants the **Contributor** role on the Resource Guard to the Backup admin to perform critical operations.
+1. Now, the Backup admin initiates the critical operation.
+1. Azure Resource Manager checks whether the Backup admin has sufficient permissions. Because the Backup admin now has the Contributor role on the Resource Guard, the request is completed.
+ - If the Backup admin did not have the required permissions/roles, the request would have failed.
+1. The security admin ensures that the privileges to perform critical operations are revoked after authorized actions are performed or after a defined duration. JIT tools such as [Azure Active Directory Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md) can help ensure this.
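As a non-PIM alternative for step 6, the Security admin could grant the Contributor role directly and revoke it after the critical operation. This is only a sketch; the sign-in name and the Resource Guard resource ID are placeholders:

```powershell
# Temporarily grant Contributor on the Resource Guard to the Backup admin.
New-AzRoleAssignment -SignInName "backupadmin@contoso.com" -RoleDefinitionName "Contributor" `
    -Scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.DataProtection/resourceGuards/<resource-guard-name>"

# Revoke the role after the critical operation is complete.
Remove-AzRoleAssignment -SignInName "backupadmin@contoso.com" -RoleDefinitionName "Contributor" `
    -Scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.DataProtection/resourceGuards/<resource-guard-name>"
```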
+
+>[!NOTE]
+>- MUA provides protection on the above listed operations performed on the Recovery Services vaults only. Any operations performed directly on the data source (i.e., the Azure resource/workload that is protected) are beyond the scope of the Resource Guard.
+>- This feature is currently available via the Azure portal only.
+>- This feature is currently supported for Recovery Services vaults only and not available for Backup vaults.
+
+## Usage scenarios
+
+The following table depicts scenarios for creating your Resource Guard and Recovery Services vault (RS vault), along with the relative protection offered by each.
+
+>[!Important]
+> The Backup admin must not have Contributor permissions to the Resource Guard in any scenario.
+
+**Usage scenario** | **Protection due to MUA** | **Ease of implementation** | **Notes**
+ | | | |
+RS vault and Resource Guard are **in the same subscription.** </br> The Backup admin does not have access to the Resource Guard. | Least isolation between the Backup admin and the Security admin. | Relatively easy to implement since only one subscription is required. | Ensure that resource-level permissions/roles are correctly assigned.
+RS vault and Resource Guard are **in different subscriptions but the same tenant.** </br> The Backup admin does not have access to the Resource Guard or the corresponding subscription. | Medium isolation between the Backup admin and the Security admin. | Relatively medium ease of implementation since two subscriptions (but a single tenant) are required. | Ensure that permissions/roles are correctly assigned for the resource or the subscription.
+RS vault and Resource Guard are **in different tenants.** </br> The Backup admin does not have access to the Resource Guard, the corresponding subscription, or the corresponding tenant.| Maximum isolation between the Backup admin and the Security admin, hence, maximum security. | Relatively difficult to test since it requires two tenants or directories. | Ensure that permissions/roles are correctly assigned for the resource, the subscription, or the directory.
+
+ >[!NOTE]
+ > For this article, we demonstrate creating the Resource Guard in a different tenant, which offers maximum protection. For requesting and approving requests to perform critical operations, this article demonstrates the use of [Azure Active Directory Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md) in the tenant housing the Resource Guard. You can optionally use other mechanisms to manage JIT permissions on the Resource Guard as per your setup.
+
+## Next steps
+
+[Configure Multi-user authorization using Resource Guard](multi-user-authorization.md)
backup Multi User Authorization Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/multi-user-authorization-tutorial.md
+
+ Title: Tutorial - Enable Multi-user authorization using Resource Guard
+description: In this tutorial, you'll learn how to create a Resource Guard and enable Multi-user authorization on a Recovery Services vault for Azure Backup.
+ Last updated : 05/05/2022++++
+# Tutorial: Create a Resource Guard and enable Multi-user authorization in Azure Backup
+
+This tutorial describes how to create a Resource Guard and enable Multi-user authorization on a Recovery Services vault. This adds an additional layer of protection to critical operations on your Recovery Services vaults.
+
+This tutorial includes the following:
+
+>[!div class="checklist"]
+>- Prerequisites
+>- Create a Resource Guard
+>- Enable MUA on a Recovery Services vault
+
+>[!NOTE]
+> Multi-user authorization for Azure Backup is available in all public Azure regions.
+
+## Prerequisites
+
+Before you start:
+
+- Ensure the Resource Guard and the Recovery Services vault are in the same Azure region.
+- Ensure the Backup admin does **not** have **Contributor** permissions on the Resource Guard. You can choose to have the Resource Guard in another subscription of the same directory or in another directory to ensure maximum isolation.
+- Ensure that your subscriptions containing the Recovery Services vault as well as the Resource Guard (in different subscriptions or tenants) are registered to use the **Microsoft.RecoveryServices** provider. For more details, see [Azure resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider-1).
+
+Learn about various [MUA usage scenarios](multi-user-authorization-concept.md#usage-scenarios).
+
+## Create a Resource Guard
+
+>[!Note]
+>The **Security admin** creates the Resource Guard. We recommend that you create it in a **different subscription** or a **different tenant** than the vault. However, it should be in the **same region** as the vault. The Backup admin must **NOT** have *contributor* access on the Resource Guard or the subscription that contains it.
+>
+>Create the Resource Guard in a tenant different from the vault tenant.
+
+Follow these steps:
+
+1. In the Azure portal, go to the directory under which you wish to create the Resource Guard.
+
+1. Search for **Resource Guards** in the search bar and select the corresponding item from the drop-down.
+
+ - Click **Create** to start creating a Resource Guard.
+ - In the create blade, fill in the required details for this Resource Guard.
+   - Make sure the Resource Guard is in the same Azure region as the Recovery Services vault.
+ - Also, it is helpful to add a description of how to get or request access to perform actions on associated vaults when needed. This description would also appear in the associated vaults to guide the backup admin on getting the required permissions. You can edit the description later if needed, but having a well-defined description at all times is encouraged.
+
+1. On the **Protected operations** tab, select the operations you need to protect using this resource guard.
+
+ You can also [select the operations to be protected after creating the resource guard](#select-operations-to-protect-using-resource-guard).
+
+1. Optionally, add any tags to the Resource Guard as per your requirements.
+1. Click **Review + Create**.
+1. Follow notifications for status and successful creation of the Resource Guard.
+
+### Select operations to protect using Resource Guard
+
+>[!Note]
+>Choose the operations you want to protect using the Resource Guard from among all supported critical operations. By default, all supported critical operations are enabled. However, the security admin can exempt certain operations from falling under the purview of MUA using the Resource Guard.
+
+Follow these steps:
+
+1. In the Resource Guard created above, go to **Properties**.
+2. Select **Disable** for operations that you wish to exclude from being authorized using the Resource Guard. Note that the operations **Disable soft delete** and **Remove MUA protection** cannot be disabled.
+1. Optionally, you can also update the description for the Resource Guard using this blade.
+1. Click **Save**.
+
+## Assign permissions to the Backup admin on the Resource Guard to enable MUA
+
+>[!Note]
+>To enable MUA on a vault, the vault admin must have the **Reader** role on the Resource Guard or the subscription containing the Resource Guard. To assign the **Reader** role on the Resource Guard:
+
+Follow these steps:
+
+1. In the Resource Guard created above, go to the Access Control (IAM) blade, and then go to **Add role assignment**.
+1. Select **Reader** from the list of built-in roles and click **Next** on the bottom of the screen.
+1. Click **Select members** and add the Backup admin's email ID to add them as the **Reader**. Since the Backup admin is in another tenant in this case, they will be added as guests to the tenant containing the Resource Guard.
+1. Click **Select** and then proceed to **Review + assign** to complete the role assignment.
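The same Reader assignment can be scripted. Here's a minimal sketch with Azure PowerShell; the sign-in name (the guest user's UPN in the Resource Guard tenant) and the Resource Guard resource ID are placeholders:

```powershell
# Grant the Backup admin the Reader role on the Resource Guard.
New-AzRoleAssignment -SignInName "backupadmin@contoso.com" -RoleDefinitionName "Reader" `
    -Scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.DataProtection/resourceGuards/<resource-guard-name>"
```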
+
+## Enable MUA on a Recovery Services vault
+
+>[!Note]
+>The Backup admin now has the Reader role on the Resource Guard and can enable multi-user authorization on the vaults they manage by performing the following steps.
+
+1. Go to the Recovery Services vault.
+1. Go to **Properties** on the left navigation panel, then to **Multi-User Authorization** and click **Update**.
+1. You're now presented with the option to enable MUA and choose a Resource Guard in one of the following ways:
+
+    1. Specify the URI of the Resource Guard. Make sure you specify the URI of a Resource Guard that you have **Reader** access to and that is in the same region as the vault. You can find the URI (Resource Guard ID) of the Resource Guard on its **Overview** screen.
+
+    1. Or, select the Resource Guard from the list of Resource Guards that you have **Reader** access to and that are available in the region:
+
+ 1. Click **Select Resource Guard**
+ 1. Click on the dropdown and select the directory the Resource Guard is in.
+ 1. Click **Authenticate** to validate your identity and access.
+ 1. After authentication, choose the **Resource Guard** from the list displayed.
+
+1. Click **Save** once done to enable MUA.
+
+## Next steps
+
+- [Protect against unauthorized (protected) operations](multi-user-authorization.md#protect-against-unauthorized-protected-operations)
+- [Authorize critical (protected) operations using Azure AD Privileged Identity Management](multi-user-authorization.md#authorize-critical-protected-operations-using-azure-ad-privileged-identity-management)
+- [Performing a protected operation after approval](multi-user-authorization.md#performing-a-protected-operation-after-approval)
+- [Disable MUA on a Recovery Services vault](multi-user-authorization.md#disable-mua-on-a-recovery-services-vault)
+
backup Multi User Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/multi-user-authorization.md
Title: Multi-user authorization using Resource Guard
-description: An overview of Multi-user authorization using Resource Guard.
+ Title: Configure Multi-user authorization using Resource Guard
+description: This article explains how to configure Multi-user authorization using Resource Guard.
Previously updated : 12/06/2021 Last updated : 05/05/2022
-# Multi-user authorization using Resource Guard (Preview)
+# Configure Multi-user authorization using Resource Guard in Azure Backup
-Multi-user authorization (MUA) for Azure Backup allows you to add an additional layer of protection to critical operations on your Recovery Services vaults. For MUA, Azure Backup uses another Azure resource called the Resource Guard to ensure critical operations are performed only with applicable authorization.
+This article describes how to configure Multi-user authorization (MUA) for Azure Backup to add an additional layer of protection to critical operations on your Recovery Services vaults.
This document includes the following:
-- How MUA using Resource Guard works
-- Before you start
-- Testing scenarios
-- Create a Resource Guard
-- Enable MUA on a Recovery Services vault
-- Protect against unauthorized operations on a vault
-- Authorize critical operations on a vault
-- Disable MUA on a Recovery Services vault
+>[!div class="checklist"]
+>- Before you start
+>- Testing scenarios
+>- Create a Resource Guard
+>- Enable MUA on a Recovery Services vault
+>- Protect against unauthorized operations on a vault
+>- Authorize critical operations on a vault
+>- Disable MUA on a Recovery Services vault
>[!NOTE]
-> Multi-user authorization for Backup is currently in preview and is available in all public Azure regions.
-
-## How does MUA for Backup work?
-
-Azure Backup uses the Resource Guard as an authorization service for a Recovery Services vault. Therefore, to perform a critical operation (described below) successfully, you must have sufficient permissions on the associated Resource Guard as well.
-
-> [!Important]
-> To function as intended, the Resource Guard must be owned by a different user, and the vault admin must not have Contributor permissions. You can place Resource Guard in a subscription or tenant different from the one containing the Recovery Services vault to provide better protection.
-
-### Critical operations
-
-The following table lists the operations defined as critical operations and can be protected by a Resource Guard. You can choose to exclude certain operations from being protected using the Resource Guard when associating vaults with it. Note that operations denoted as Mandatory cannot be excluded from being protected using the Resource Guard for vaults associated with it. Also, the excluded critical operations would apply to all vaults associated with a Resource Guard.
-
-**Operation** | **Mandatory/Optional**
- |
-Disable soft delete | Mandatory
-Disable MUA protection | Mandatory
-Modify backup policy | Optional: Can be excluded
-Modify protection | Optional: Can be excluded
-Stop protection | Optional: Can be excluded
-Change MARS security PIN | Optional: Can be excluded
-
-### Concepts
-The concepts and the processes involved when using MUA for Backup are explained below.
-
-LetΓÇÖs consider the following two users for a clear understanding of the process and responsibilities. These two roles are referenced throughout this article.
-
-**Backup admin**: Owner of the Recovery Services vault and performs management operations on the vault. To begin with, the Backup admin must not have any permissions on the Resource Guard.
-
-**Security admin**: Owner of the Resource Guard and serves as the gatekeeper of critical operations on the vault. Hence, the Security admin controls permissions that the Backup admin needs to perform critical operations on the vault.
-
-Following is a diagrammatic representation for performing a critical operation on a vault that has MUA configured using a Resource Guard.
-
-
-
-Here is the flow of events in a typical scenario:
-
-1. The Backup admin creates the Recovery Services vault.
-1. The Security admin creates the Resource Guard. The Resource Guard can be in a different subscription or a different tenant w.r.t the Recovery Services vault. It must be ensured that the Backup admin does not have Contributor permissions on the Resource Guard.
-1. The Security admin grants the **Reader** role to the Backup Admin for the Resource Guard (or a relevant scope). The Backup admin requires the reader role to enable MUA on the vault.
-1. The Backup admin now configures the Recovery Services vault to be protected by MUA via the Resource Guard.
-1. Now, if the Backup admin wants to perform a critical operation on the vault, they need to request access to the Resource Guard. The Backup admin can contact the Security admin for details on gaining access to perform such operations. They can do this using Privileged Identity Management (PIM) or other processes as mandated by the organization.
-1. The Security admin temporarily grants the **Contributor** role on the Resource Guard to the Backup admin to perform critical operations.
-1. Now, the Backup admin initiates the critical operation.
-1. The Azure Resource Manager checks if the Backup admin has sufficient permissions or not. Since the Backup admin now has Contributor role on the Resource Guard, the request is completed.
- - If the Backup admin did not have the required permissions/roles, the request would have failed.
-1. The security admin ensures that the privileges to perform critical operations are revoked after authorized actions are performed or after a defined duration. Using JIT tools [Azure Active Directory Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md) may be useful in ensuring this.
-
->[!NOTE]
->- MUA provides protection on the above listed operations performed on the Recovery Services vaults only. Any operations performed directly on the data source (i.e., the Azure resource/workload that is protected) are beyond the scope of the Resource Guard.
->- This feature is currently available via the Azure portal only.
->- This feature is currently supported for Recovery Services vaults only and not available for Backup vaults.
+> Multi-user authorization for Azure Backup is available in all public Azure regions.
## Before you start
-- The Resource Guard and the Recovery Services vault must be in the same Azure region.
-- As stated in the previous section, ensure the Backup admin does **not** have **Contributor** permissions on the Resource Guard. You can choose to have the Resource Guard in another subscription of the same directory or in another directory to ensure maximum isolation.
+- Ensure the Resource Guard and the Recovery Services vault are in the same Azure region.
+- Ensure the Backup admin does **not** have **Contributor** permissions on the Resource Guard. You can choose to have the Resource Guard in another subscription of the same directory or in another directory to ensure maximum isolation.
- Ensure that your subscriptions containing the Recovery Services vault as well as the Resource Guard (in different subscriptions or tenants) are registered to use the **Microsoft.RecoveryServices** provider. For more details, see [Azure resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider-1).
-## Usage scenarios
-
-The following table depicts scenarios for creating your Resource Guard and Recovery Services vault (RS vault), along with the relative protection offered by each.
-
->[!Important]
-> The Backup admin must not have Contributor permissions to the Resource Guard in any scenario.
+Learn about various [MUA usage scenarios](multi-user-authorization-concept.md#usage-scenarios).
-**Usage scenario** | **Protection due to MUA** | **Ease of implementation** | **Notes**
- | | | |
-RS vault and Resource Guard are **in the same subscription.** </br> The Backup admin does not have access to the Resource Guard. | Least isolation between the Backup admin and the Security admin. | Relatively easy to implement since only one subscription is required. | Resource level permissions/ roles need to be ensured are correctly assigned.
-RS vault and Resource Guard are **in different subscriptions but the same tenant.** </br> The Backup admin does not have access to the Resource Guard or the corresponding subscription. | Medium isolation between the Backup admin and the Security admin. | Relatively medium ease of implementation since two subscriptions (but a single tenant) are required. | Ensure that that permissions/ roles are correctly assigned for the resource or the subscription.
-RS vault and Resource Guard are **in different tenants.** </br> The Backup admin does not have access to the Resource Guard, the corresponding subscription, or the corresponding tenant.| Maximum isolation between the Backup admin and the Security admin, hence, maximum security. | Relatively difficult to test since requires two tenants or directories to test. | Ensure that permissions/ roles are correctly assigned for the resource, the subscription or the directory.
-
- >[!NOTE]
- > For this article, we will demonstrate creation of the Resource Guard in a different tenant that offers maximum protection. In terms of requesting and approving requests for performing critical operations, this article demonstrates the same using [Azure Active Directory Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md) in the tenant housing the Resource Guard. You can optionally use other mechanisms to manage JIT permissions on the Resource Guard as per your setup.
-
-## Creating a Resource Guard
+## Create a Resource Guard
The **Security admin** creates the Resource Guard. We recommend that you create it in a **different subscription** or a **different tenant** than the vault. However, it should be in the **same region** as the vault. The Backup admin must **NOT** have *contributor* access on the Resource Guard or the subscription that contains it. For the following example, create the Resource Guard in a tenant different from the vault tenant.
-1. In the Azure portal, navigate to the directory under which you wish to create the Resource Guard.
+1. In the Azure portal, go to the directory under which you wish to create the Resource Guard.
   :::image type="content" source="./media/multi-user-authorization/portal-settings-directories-subscriptions.png" alt-text="Screenshot showing the portal settings.":::

1. Search for **Resource Guards** in the search bar and select the corresponding item from the drop-down.
- :::image type="content" source="./media/multi-user-authorization/resource-guards-preview-inline.png" alt-text="Screenshot showing resource guards in preview." lightbox="./media/multi-user-authorization/resource-guards-preview-expanded.png":::
+ :::image type="content" source="./media/multi-user-authorization/resource-guards-preview-inline.png" alt-text="Screenshot showing resource guards." lightbox="./media/multi-user-authorization/resource-guards-preview-expanded.png":::
- Click **Create** to start creating a Resource Guard.
- In the create blade, fill in the required details for this Resource Guard.
- Make sure the Resource Guard is in the same Azure region as the Recovery Services vault.
- Also, it is helpful to add a description of how to get or request access to perform actions on associated vaults when needed. This description also appears in the associated vaults to guide the backup admin on getting the required permissions. You can edit the description later if needed, but having a well-defined description at all times is encouraged.
- :::image type="content" source="./media/multi-user-authorization/create-resource-guard.png" alt-text="Screenshot showing to create resource guard.":::
-
+1. On the **Protected operations** tab, select the operations you need to protect using this resource guard.
+
+ You can also [select the operations to be protected after creating the resource guard](#select-operations-to-protect-using-resource-guard).
1. Optionally, add any tags to the Resource Guard as per your requirements.
-1. Click **Review + Create**
+1. Click **Review + Create**.
1. Follow notifications for status and successful creation of the Resource Guard.

### Select operations to protect using Resource Guard

Choose the operations you want to protect using the Resource Guard out of all supported critical operations. By default, all supported critical operations are enabled. However, you can exempt certain operations from falling under the purview of MUA using Resource Guard. The security admin can perform the following steps:
-1. In the Resource Guard created above, navigate to **Properties**.
+1. In the Resource Guard created above, go to **Properties**.
2. Select **Disable** for operations that you wish to exclude from being authorized using the Resource Guard. Note that the operations **Disable soft delete** and **Remove MUA protection** cannot be disabled.
3. Optionally, you can also update the description for the Resource Guard using this blade.
4. Click **Save**.
Choose the operations you want to protect using the Resource Guard out of all su
To enable MUA on a vault, the admin of the vault must have **Reader** role on the Resource Guard or subscription containing the Resource Guard. To assign the **Reader** role on the Resource Guard:
-1. In the Resource Guard created above, navigate to the Access Control (IAM) blade, and then go to **Add role assignment**.
+1. In the Resource Guard created above, go to the Access Control (IAM) blade, and then go to **Add role assignment**.
:::image type="content" source="./media/multi-user-authorization/demo-resource-guard-access-control.png" alt-text="Screenshot showing demo resource guard-access control.":::
To enable MUA on a vault, the admin of the vault must have **Reader** role on th
Now that the Backup admin has the Reader role on the Resource Guard, they can enable multi-user authorization on the vaults they manage. The following steps are performed by the **Backup admin**.
-1. Go to the Recovery Services vault. Navigate to **Properties** on the left navigation panel, then to **Multi-User Authorization** and click **Update**.
+1. Go to the Recovery Services vault. Go to **Properties** on the left navigation panel, then to **Multi-User Authorization** and click **Update**.
- :::image type="content" source="./media/multi-user-authorization/test-vault-properties.png" alt-text="Screenshot showing the Recovery services vault-properties.":::
+ :::image type="content" source="./media/multi-user-authorization/test-vault-properties.png" alt-text="Screenshot showing the Recovery services vault properties.":::
1. Now you are presented with the option to enable MUA and choose a Resource Guard using one of the following ways:
Once you have enabled MUA, the operations in scope will be restricted on the vau
The following is an illustration of what happens when the Backup admin tries to perform such a protected operation (for example, disabling soft delete is depicted here; other protected operations have a similar experience). The following steps are performed by a Backup admin without the required permissions.
-1. To disable soft delete, navigate to the Recovery Services Vault > Properties > Security Settings and click **Update**, which brings up the Security Settings.
+1. To disable soft delete, go to the Recovery Services Vault > Properties > Security Settings and click **Update**, which brings up the Security Settings.
1. Disable the soft delete using the slider. You are informed that this is a protected operation, and you need to verify your access to the Resource Guard.
1. Select the directory containing the Resource Guard and authenticate yourself. This step may not be required if the Resource Guard is in the same directory as the vault.
1. Proceed to click **Save**. The request fails with an error informing you that you don't have sufficient permissions on the Resource Guard to perform this operation.
The following sub-sections discuss authorizing these requests using PIM. There a
### Create an eligible assignment for the Backup admin (if using Azure AD Privileged Identity Management)
-Using PIM, the Security admin can create an eligible assignment for the Backup admin as a Contributor to the Resource Guard. This enables the Backup admin to raise a request (for the Contributor role) when they need to perform a protected operation. To do so, the **security admin** performs the following:
+The Security admin can use PIM to create an eligible assignment for the Backup admin as a Contributor to the Resource Guard. This enables the Backup admin to raise a request (for the Contributor role) when they need to perform a protected operation. To do so, the **security admin** performs the following:
-1. In the security tenant (which contains the Resource Guard), navigate to **Privileged Identity Management** (search for this in the search bar in the Azure portal) and then go to **Azure Resources** (under **Manage** on the left menu).
+1. In the security tenant (which contains the Resource Guard), go to **Privileged Identity Management** (search for this in the search bar in the Azure portal) and then go to **Azure Resources** (under **Manage** on the left menu).
1. Select the resource (the Resource Guard or the containing subscription/RG) to which you want to assign the **Contributor** role.
1. If you don't see the corresponding resource in the list of resources, ensure you add the containing subscription to be managed by PIM.
-1. In the selected resource, navigate to **Assignments** (under **Manage** on the left menu) and go to **Add assignments**.
+1. In the selected resource, go to **Assignments** (under **Manage** on the left menu) and go to **Add assignments**.
:::image type="content" source="./media/multi-user-authorization/add-assignments.png" alt-text="Screenshot showing how to add assignments.":::
Note if this is not configured, any requests will be automatically approved with
1. In Azure AD PIM, select **Azure Resources** on the left navigation bar and select your Resource Guard.
-1. Go to **Settings** and then navigate to the **Contributor** role.
+1. Go to **Settings** and then go to the **Contributor** role.
:::image type="content" source="./media/multi-user-authorization/add-contributor.png" alt-text="Screenshot showing how to add contributor.":::
Note if this is not configured, any requests will be automatically approved with
After the security admin creates an eligible assignment, the Backup admin needs to activate the assignment for the Contributor role to be able to perform protected actions. The following actions are performed by the **Backup admin** to activate the role assignment.
-1. Navigate to [Azure AD Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md). If the Resource Guard is in another directory, switch to that directory and then navigate to [Azure AD Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md).
-1. Navigate to My roles > Azure resources on the left menu.
+1. Go to [Azure AD Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md). If the Resource Guard is in another directory, switch to that directory and then go to [Azure AD Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md).
+1. Go to My roles > Azure resources on the left menu.
1. The Backup admin can see an eligible assignment for the Contributor role. Select **Activate** to activate it.
1. The Backup admin is informed via a portal notification that the request is sent for approval.
After the security admin creates an eligible assignment, the Backup admin needs
### Approve activation of requests to perform critical operations

Once the Backup admin raises a request to activate the Contributor role, the request must be reviewed and approved by the **Security admin**.
-1. In the security tenant, navigates to [Azure AD Privileged Identity Management.](../active-directory/privileged-identity-management/pim-configure.md)
-1. Navigate to **Approve Requests**.
+1. In the security tenant, go to [Azure AD Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md).
+1. Go to **Approve Requests**.
1. Under **Azure resources**, you can see the request raised by the Backup admin requesting activation as a **Contributor**.
1. Review the request. If genuine, select the request and then select **Approve**.
1. The Backup admin is informed by email (or other organizational alerting mechanisms) that their request is now approved.
The following screenshot shows an example of disabling soft delete for an MUA-en
Disabling MUA is a protected operation, and so it's protected by MUA. This means that the Backup admin must have the required Contributor role in the Resource Guard. Details on obtaining this role are described here. Following is a summary of steps to disable MUA on a vault.
1. The Backup admin requests the **Contributor** role on the Resource Guard from the Security admin. They can request this role by using the methods approved by the organization, such as JIT procedures like [Azure AD Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md), or other internal tools and procedures.
1. The Security admin approves the request (if they find it worthy of being approved) and informs the Backup admin. Now the Backup admin has the 'Contributor' role on the Resource Guard.
-1. The Backup admin navigates to the vault > Properties > Multi-user Authorization
+1. The Backup admin goes to the vault -> **Properties** -> **Multi-user Authorization**.
1. Select **Update**.
1. Clear the **Protect with Resource Guard** check box.
1. Choose the directory that contains the Resource Guard and verify access by using the **Authenticate** button (if applicable).
backup Register Microsoft Azure Recovery Services Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/scripts/register-microsoft-azure-recovery-services-agent.md
Last updated 06/23/2021
# PowerShell Script to register an on-premises Windows server or a client machine with Recovery Services vault
-This script helps you to register your on-premises Windows server or client machine with a Recovery Services vault.
+This script helps you to register your on-premises Windows server or client machine with a Recovery Services vault.
## Sample script
Catch {
1. Save the above script on your machine with a name of your choice and a .ps1 extension.
1. Execute the script by providing the following parameters:
- - – vaultcredPath - Complete Path of downloaded vault credential file
- - – passphrase - Plain text string converted into secure string using [ConvertTo-SecureString](/powershell/module/microsoft.powershell.security/convertto-securestring?view=powershell-7.1&preserve-view=true) cmdlet.
+ - `$vaultcredPath` - Complete Path of downloaded vault credential file
+ - `$passphrase` - Plain text string converted into secure string using [ConvertTo-SecureString](/powershell/module/microsoft.powershell.security/convertto-securestring) cmdlet.
>[!Note]
>You also need to provide the Security PIN generated from the Azure portal. To generate the PIN, go to **Settings** -> **Properties** -> **Security PIN** in the Recovery Services vault blade, and then select **Generate**.
Catch {
## Next steps

[Learn more](../backup-client-automation.md) about how to use PowerShell to deploy and manage on-premises backups using the MARS agent.
backup Tutorial Backup Sap Hana Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-backup-sap-hana-db.md
Title: Tutorial - Back up SAP HANA databases in Azure VMs description: In this tutorial, learn how to back up SAP HANA databases running on Azure VM to an Azure Backup Recovery Services vault. Previously updated : 04/01/2022 Last updated : 05/16/2022
Running the pre-registration script performs the following functions:
* Based on your Linux distribution, the script installs or updates any necessary packages required by the Azure Backup agent.
* It performs outbound network connectivity checks with Azure Backup servers and dependent services like Azure Active Directory and Azure Storage.
* It logs into your HANA system using the custom user key or SYSTEM user key mentioned as part of the [prerequisites](#prerequisites). This is used to create a backup user (AZUREWLBACKUPHANAUSER) in the HANA system and the user key can be deleted after the pre-registration script runs successfully. _Note that the SYSTEM user key must not be deleted_.
+* It checks and warns if the */opt/msawb* folder is placed in the root partition and the root partition is only 2 GB in size. The script recommends that you increase the root partition size to 4 GB or move the */opt/msawb* folder to a different location where it has space to grow to a maximum of 4 GB. If you leave the */opt/msawb* folder in a 2-GB root partition, the root partition can fill up and cause backups to fail.
* AZUREWLBACKUPHANAUSER is assigned these required roles and permissions:
  * For MDC: DATABASE ADMIN and BACKUP ADMIN (from HANA 2.0 SPS05 onwards): to create new databases during restore.
  * For SDC: BACKUP ADMIN: to create new databases during restore.
backup Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/whats-new.md
Title: What's new in Azure Backup description: Learn about new features in Azure Backup. Previously updated : 05/16/2022 Last updated : 05/24/2022
You can learn more about the new releases by bookmarking this page or by [subscr
## Updates summary

- May 2022
+ - [Multi-user authorization using Resource Guard is now generally available](#multi-user-authorization-using-resource-guard-is-now-generally-available)
  - [Archive tier support for Azure Virtual Machines is now generally available](#archive-tier-support-for-azure-virtual-machines-is-now-generally-available)
- February 2022
  - [Multiple backups per day for Azure Files is now generally available](#multiple-backups-per-day-for-azure-files-is-now-generally-available)
You can learn more about the new releases by bookmarking this page or by [subscr
- February 2021
  - [Backup for Azure Blobs (in preview)](#backup-for-azure-blobs-in-preview)
+## Multi-user authorization using Resource Guard is now generally available
+
+Azure Backup now supports multi-user authorization (MUA), which lets you add an additional layer of protection to critical operations on your Recovery Services vaults. For MUA, Azure Backup uses an Azure resource called Resource Guard to ensure that critical operations are performed only with applicable authorization.
+
+For more information, see [how to protect Recovery Services vault and manage critical operations with MUA](multi-user-authorization.md).
+
## Archive tier support for Azure Virtual Machines is now generally available

Azure Backup now supports the movement of recovery points to the Vault-archive tier for Azure Virtual Machines from the Azure portal. This allows you to move the archivable/recommended recovery points (corresponding to a backup item) to the Vault-archive tier in one go.
bastion Bastion Connect Vm Scale Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-connect-vm-scale-set.md
Title: 'Connect to a Windows virtual machine scale set using Azure Bastion'
+ Title: 'Connect to a virtual machine scale set using Azure Bastion'
description: Learn how to connect to an Azure virtual machine scale set using Azure Bastion.- - Previously updated : 09/20/2021 Last updated : 05/24/2022 # Connect to a virtual machine scale set using Azure Bastion
-This article shows you how to securely and seamlessly RDP to your Windows virtual machine scale set instance in an Azure virtual network using Azure Bastion. You can connect to a virtual machine scale set instance directly from the Azure portal. When using Azure Bastion, VMs don't require a client, agent, or additional software. For more information about Azure Bastion, see the [Overview](bastion-overview.md).
+This article shows you how to securely and seamlessly connect to your virtual machine scale set instance in an Azure virtual network directly from the Azure portal using Azure Bastion. When you use Azure Bastion, VMs don't require a client, agent, or additional software. For more information about Azure Bastion, see the [Overview](bastion-overview.md). For more information about virtual machine scale sets, see [What are virtual machine scale sets?](../virtual-machine-scale-sets/overview.md)
## Prerequisites
-Make sure that you have set up an Azure Bastion host for the virtual network in which the virtual machine scale set resides. For more information, see [Create an Azure Bastion host](./tutorial-create-host-portal.md). Once the Bastion service is provisioned and deployed in your virtual network, you can use it to connect to a virtual machine scale set instance in this virtual network. Bastion assumes that you are using RDP to connect to a Windows virtual machine scale set, and SSH to connect to your Linux virtual machine scale set. For information about connection to a Linux VM, see [Connect to a VM - Linux](bastion-connect-vm-ssh-linux.md).
+Make sure that you have set up an Azure Bastion host for the virtual network in which the virtual machine scale set resides. For more information, see [Create an Azure Bastion host](tutorial-create-host-portal.md). Once the Bastion service is provisioned and deployed in your virtual network, you can use it to connect to a virtual machine scale set instance in this virtual network.
+
+## <a name="rdp"></a>Connect
-## <a name="rdp"></a>Connect using RDP
+This section shows you the basic steps to connect to your virtual machine scale set.
1. Open the [Azure portal](https://portal.azure.com). Go to the virtual machine scale set that you want to connect to.
- ![navigate](./media/bastion-connect-vm-scale-set/1.png)
-2. Go to the virtual machine scale set instance that you want to connect to, then select **Connect**. When using an RDP connection, the virtual machine scale set should be a Windows virtual machine scale set.
+ :::image type="content" source="./media/bastion-connect-vm-scale-set/select-scale-set.png" alt-text="Screenshot shows virtual machine scale sets." lightbox="./media/bastion-connect-vm-scale-set/select-scale-set.png":::
+
+1. Go to the virtual machine scale set instance that you want to connect to.
+
+ :::image type="content" source="./media/bastion-connect-vm-scale-set/select-instance.png" alt-text="Screenshot shows virtual machine scale set instances." lightbox="./media/bastion-connect-vm-scale-set/select-instance.png":::
+
+1. Select **Connect** at the top of the page, then choose **Bastion** from the dropdown.
+
+ :::image type="content" source="./media/bastion-connect-vm-scale-set/select-connect.png" alt-text="Screenshot shows select the connect button and choose Bastion from the dropdown." lightbox="./media/bastion-connect-vm-scale-set/select-connect.png":::
- ![virtual machine scale set](./media/bastion-connect-vm-scale-set/2.png)
-3. After you select **Connect**, a side bar appears that has three tabs – RDP, SSH, and Bastion. Select the **Bastion** tab from the side bar. If you didn't provision Bastion for the virtual network, you can select the link to configure Bastion. For configuration instructions, see [Configure Bastion](./tutorial-create-host-portal.md).
+1. On the **Bastion** page, fill in the required settings. The settings you can select depend on the virtual machine to which you're connecting, and the [Bastion SKU](configuration-settings.md#skus) tier that you're using. The Standard SKU gives you more connection options than the Basic SKU. For more information about settings, see [Bastion configuration settings](configuration-settings.md).
- ![Bastion tab](./media/bastion-connect-vm-scale-set/3.png)
-4. On the Bastion tab, enter the username and password for your virtual machine scale set, then select **Connect**.
+ :::image type="content" source="./media/bastion-connect-vm-scale-set/connection-settings.png" alt-text="Screenshot shows connection settings options with Open in new browser tab selected." lightbox="./media/bastion-connect-vm-scale-set/connection-settings.png":::
- ![connect](./media/bastion-connect-vm-scale-set/4.png)
-5. The RDP connection to this virtual machine via Bastion will open directly in the Azure portal (over HTML5) using port 443 and the Bastion service.
+1. After filling in the values on the Bastion page, select **Connect** to connect to the instance.
## Next steps
bastion Bastion Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-faq.md
description: Learn about frequently asked questions for Azure Bastion.
Previously updated : 03/22/2022 Last updated : 04/26/2022 # Azure Bastion FAQ
At this time, IPv6 isn't supported. Azure Bastion supports IPv4 only. This means
Azure Bastion doesn't move or store customer data out of the region it's deployed in.
+### <a name="vwan"></a>Does Azure Bastion support Virtual WAN?
+
+Yes, you can use Azure Bastion for Virtual WAN deployments. However, deploying Azure Bastion within a Virtual WAN hub isn't supported. You can deploy Azure Bastion in a spoke VNet and use the [IP-based connection](connect-ip-address.md) feature to connect to virtual machines deployed across a different VNet via the Virtual WAN hub. For more information, see [Set up routing configuration for a virtual network connection](../virtual-wan/how-to-virtual-hub-routing.md#routing-configuration).
+ ### <a name="dns"></a>Can I use Azure Bastion with Azure Private DNS Zones?
-Azure Bastion needs to be able to communicate with certain internal endpoints to successfully connect to target resources. Therefore, you *can* use Azure Bastion with Azure Private DNS Zones as long as the zone name you select doesn't overlap with the naming of these internal endpoints. Before you deploy your Azure Bastion resource, please make sure that the host virtual network is not linked to a private DNS zone with the following exact names:
+Azure Bastion needs to be able to communicate with certain internal endpoints to successfully connect to target resources. Therefore, you *can* use Azure Bastion with Azure Private DNS Zones as long as the zone name you select doesn't overlap with the naming of these internal endpoints. Before you deploy your Azure Bastion resource, make sure that the host virtual network isn't linked to a private DNS zone with the following exact names:
* management.azure.com
* blob.core.windows.net
Azure Bastion needs to be able to communicate with certain internal endpoints to
You may use a private DNS zone ending with one of the names listed above (ex: dummy.blob.core.windows.net).
-The use of Azure Bastion is also not supported with Azure Private DNS Zones in national clouds.
+Azure Bastion isn't supported with Azure Private DNS Zones in national clouds.
### <a name="subnet"></a>Can I have an Azure Bastion subnet of size /27 or smaller (/28, /29, etc.)?
bastion Connect Ip Address https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/connect-ip-address.md
+
+ Title: 'Connect to a VM - specified private IP address: Azure portal'
+
+description: Learn how to connect to your virtual machines using a specified private IP address via Azure Bastion.
++++ Last updated : 04/26/2022++++
+# Connect to a VM via specified private IP address through the portal
+
+IP-based connection lets you connect to your on-premises, non-Azure, and Azure virtual machines via Azure Bastion over ExpressRoute or a VPN site-to-site connection using a specified private IP address. The steps in this article show you how to configure your Bastion deployment, and then connect to an on-premises resource using IP-based connection. For more information about Azure Bastion, see the [Overview](bastion-overview.md).
++
+> [!NOTE]
+> This configuration requires the Standard SKU tier for Azure Bastion. To upgrade, see [Upgrade a SKU](upgrade-sku.md).
+>
+
+**Limitations**
+
+IP-based connection won't work with force tunneling over VPN, or when a default route is advertised over an ExpressRoute circuit. Azure Bastion requires access to the internet; force tunneling or a default route advertisement results in traffic blackholing.
+
+## Prerequisites
+
+Before you begin these steps, verify that you have the following environment set up:
+
+* A VNet with Bastion already deployed.
+
+ * Make sure that you have deployed Bastion to the virtual network. Once the Bastion service is provisioned and deployed in your virtual network, you can use it to connect to any VM in any virtual network that is reachable from Bastion.
+ * To deploy Bastion, see [Quickstart: Deploy Bastion with default settings](quickstart-host-portal.md).
+
+* A virtual machine in any reachable virtual network. This is the virtual machine to which you'll connect.
+
+## Configure Bastion
+
+1. Sign in to the [Azure portal](https://ms.portal.azure.com/).
+
+1. In the Azure portal, go to your Bastion deployment.
+
+1. IP based connection requires the Standard SKU tier. On the **Configuration** page, for **Tier**, verify the tier is set to the **Standard** SKU. If the tier is set to the Basic SKU, select **Standard** from the dropdown.
+1. To enable **IP based connection**, select **IP based connection**.
+
+ :::image type="content" source="./media/connect-ip-address/ip-connection.png" alt-text="Screenshot that shows the Configuration page." lightbox="./media/connect-ip-address/ip-connection.png":::
+
+1. Select **Apply** to apply the changes. It takes a few minutes for the Bastion configuration to complete.
+
+## Connect to VM
+
+1. To connect to a VM using a specified private IP address, you make the connection from Bastion to the VM, not directly from the VM page. On your Bastion page, select **Connect** to open the Connect page.
+
+1. On the Bastion **Connect** page, for **Hostname**, enter the private IP address of the target VM.
+
+ :::image type="content" source="./media/connect-ip-address/ip-address.png" alt-text="Screenshot of the Connect using Azure Bastion page." lightbox="./media/connect-ip-address/ip-address.png":::
+
+1. Adjust your connection settings to the desired **Protocol** and **Port**.
+
+1. Enter your credentials in **Username** and **Password**.
+
+1. Select **Connect** to connect to your virtual machine.
+
+## Next steps
+
+Read the [Bastion FAQ](bastion-faq.md) for additional information.
batch Pool Endpoint Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/pool-endpoint-configuration.md
Each NAT pool configuration includes one or more [network security group (NSG) r
The following C# snippet shows how to configure the RDP endpoint on compute nodes in a Windows pool to deny all network traffic. The endpoint uses a frontend pool of ports in the range *60000 - 60099*.

```csharp
-pool.NetworkConfiguration = new NetworkConfiguration
+using Microsoft.Azure.Batch;
+using Microsoft.Azure.Batch.Common;
+
+namespace AzureBatch
{
- EndpointConfiguration = new PoolEndpointConfiguration(new InboundNatPool[]
- {
- new InboundNatPool("RDP", InboundEndpointProtocol.Tcp, 3389, 60000, 60099, new NetworkSecurityGroupRule[]
+ public void SetPortsPool()
+ {
+ pool.NetworkConfiguration = new NetworkConfiguration
{
- new NetworkSecurityGroupRule(162, NetworkSecurityGroupRuleAccess.Deny, "*"),
- })
- })
-};
+ EndpointConfiguration = new PoolEndpointConfiguration(new InboundNatPool[]
+ {
+ new InboundNatPool("RDP", InboundEndpointProtocol.Tcp, 3389, 60000, 60099, new NetworkSecurityGroupRule[]
+ {
+ new NetworkSecurityGroupRule(162, NetworkSecurityGroupRuleAccess.Deny, "*"),
+ })
+ })
+ };
+ }
+}
```

## Example: Deny all SSH traffic from the internet
pool.NetworkConfiguration = new NetworkConfiguration
The following Python snippet shows how to configure the SSH endpoint on compute nodes in a Linux pool to deny all internet traffic. The endpoint uses a frontend pool of ports in the range *4000 - 4100*.

```python
-pool.network_configuration = batchmodels.NetworkConfiguration(
- endpoint_configuration=batchmodels.PoolEndpointConfiguration(
- inbound_nat_pools=[batchmodels.InboundNATPool(
- name='SSH',
- protocol='tcp',
- backend_port=22,
- frontend_port_range_start=4000,
- frontend_port_range_end=4100,
- network_security_group_rules=[
- batchmodels.NetworkSecurityGroupRule(
- priority=170,
- access=batchmodels.NetworkSecurityGroupRuleAccess.deny,
- source_address_prefix='Internet'
+from azure.batch import models as batchmodels
+
+class AzureBatch(object):
+ def set_ports_pool(self, **kwargs):
+ pool.network_configuration = batchmodels.NetworkConfiguration(
+ endpoint_configuration=batchmodels.PoolEndpointConfiguration(
+ inbound_nat_pools=[batchmodels.InboundNATPool(
+ name='SSH',
+ protocol='tcp',
+ backend_port=22,
+ frontend_port_range_start=4000,
+ frontend_port_range_end=4100,
+ network_security_group_rules=[
+ batchmodels.NetworkSecurityGroupRule(
+ priority=170,
+ access=batchmodels.NetworkSecurityGroupRuleAccess.deny,
+ source_address_prefix='Internet'
+ )
+ ]
)
- ]
+ ]
+ )
)
- ]
- )
-)
```

## Example: Allow RDP traffic from a specific IP address
pool.network_configuration = batchmodels.NetworkConfiguration(
The following C# snippet shows how to configure the RDP endpoint on compute nodes in a Windows pool to allow RDP access only from IP address *198.51.100.7*. The second NSG rule denies traffic that does not match the IP address.

```csharp
-pool.NetworkConfiguration = new NetworkConfiguration
+using Microsoft.Azure.Batch;
+using Microsoft.Azure.Batch.Common;
+
+namespace AzureBatch
{
- EndpointConfiguration = new PoolEndpointConfiguration(new InboundNatPool[]
+ public void SetPortsPool()
{
- new InboundNatPool("RDP", InboundEndpointProtocol.Tcp, 3389, 7500, 8000, new NetworkSecurityGroupRule[]
- {
- new NetworkSecurityGroupRule(179,NetworkSecurityGroupRuleAccess.Allow, "198.51.100.7"),
- new NetworkSecurityGroupRule(180,NetworkSecurityGroupRuleAccess.Deny, "*")
- })
- })
-};
+ pool.NetworkConfiguration = new NetworkConfiguration
+ {
+ EndpointConfiguration = new PoolEndpointConfiguration(new InboundNatPool[]
+ {
+ new InboundNatPool("RDP", InboundEndpointProtocol.Tcp, 3389, 7500, 8000, new NetworkSecurityGroupRule[]
+ {
+ new NetworkSecurityGroupRule(179, NetworkSecurityGroupRuleAccess.Allow, "198.51.100.7"),
+ new NetworkSecurityGroupRule(180, NetworkSecurityGroupRuleAccess.Deny, "*")
+ })
+ })
+ };
+ }
+}
```

## Example: Allow SSH traffic from a specific subnet
pool.NetworkConfiguration = new NetworkConfiguration
The following Python snippet shows how to configure the SSH endpoint on compute nodes in a Linux pool to allow access only from the subnet *192.168.1.0/24*. The second NSG rule denies traffic that does not match the subnet.

```python
-pool.network_configuration = batchmodels.NetworkConfiguration(
- endpoint_configuration=batchmodels.PoolEndpointConfiguration(
- inbound_nat_pools=[batchmodels.InboundNATPool(
- name='SSH',
- protocol='tcp',
- backend_port=22,
- frontend_port_range_start=4000,
- frontend_port_range_end=4100,
- network_security_group_rules=[
- batchmodels.NetworkSecurityGroupRule(
- priority=170,
- access='allow',
- source_address_prefix='192.168.1.0/24'
- ),
- batchmodels.NetworkSecurityGroupRule(
- priority=175,
- access='deny',
- source_address_prefix='*'
+from azure.batch import models as batchmodels
+
+class AzureBatch(object):
+ def set_ports_pool(self, **kwargs):
+ pool.network_configuration = batchmodels.NetworkConfiguration(
+ endpoint_configuration=batchmodels.PoolEndpointConfiguration(
+ inbound_nat_pools=[batchmodels.InboundNATPool(
+ name='SSH',
+ protocol='tcp',
+ backend_port=22,
+ frontend_port_range_start=4000,
+ frontend_port_range_end=4100,
+ network_security_group_rules=[
+ batchmodels.NetworkSecurityGroupRule(
+ priority=170,
+ access='allow',
+ source_address_prefix='192.168.1.0/24'
+ ),
+ batchmodels.NetworkSecurityGroupRule(
+ priority=175,
+ access='deny',
+ source_address_prefix='*'
+ )
+ ]
)
- ]
+ ]
+ )
)
- ]
- )
-)
```

## Next steps
cognitive-services Howtocallvisionapi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/Vision-API-How-to-Topics/HowToCallVisionAPI.md
This article demonstrates how to call the Image Analysis API to return information about an image's visual features. It also shows you how to parse the returned information using the client SDKs or REST API.
-This guide assumes you have already <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="created a Computer Vision resource" target="_blank">created a Computer Vision resource </a> and obtained a subscription key and endpoint URL. If you're using a client SDK, you'll also need to authenticate a client object. If you haven't done these steps, follow the [quickstart](../quickstarts-sdk/image-analysis-client-library.md) to get started.
+This guide assumes you have already <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="created a Computer Vision resource" target="_blank">created a Computer Vision resource </a> and obtained a key and endpoint URL. If you're using a client SDK, you'll also need to authenticate a client object. If you haven't done these steps, follow the [quickstart](../quickstarts-sdk/image-analysis-client-library.md) to get started.
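As an illustration, a minimal Python sketch for authenticating a client object and requesting a single visual feature might look like the following. This isn't the full quickstart: the endpoint, key, and image URL are placeholders, and it assumes the `azure-cognitiveservices-vision-computervision` package is installed.

```python
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from azure.cognitiveservices.vision.computervision.models import VisualFeatureTypes
from msrest.authentication import CognitiveServicesCredentials

# Placeholders: substitute the endpoint and key from your own Computer Vision resource.
endpoint = "https://<your-resource-name>.cognitiveservices.azure.com/"
key = "<your-key>"

client = ComputerVisionClient(endpoint, CognitiveServicesCredentials(key))

# Analyze a publicly accessible image URL (placeholder) and request only the Tags feature.
analysis = client.analyze_image(
    "https://example.com/sample-image.jpg",
    visual_features=[VisualFeatureTypes.tags],
)

for tag in analysis.tags:
    print(f"{tag.name}: {tag.confidence:.2f}")
```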
## Submit data to the service
cognitive-services Call Read Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/Vision-API-How-to-Topics/call-read-api.md
In this guide, you'll learn how to call the Read API to extract text from images. You'll learn the different ways you can configure the behavior of this API to meet your needs.
-This guide assumes you have already <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="created a Computer Vision resource" target="_blank">create a Computer Vision resource </a> and obtained a subscription key and endpoint URL. If you haven't, follow a [quickstart](../quickstarts-sdk/client-library.md) to get started.
+This guide assumes you have already <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="created a Computer Vision resource" target="_blank">created a Computer Vision resource</a> and obtained a key and endpoint URL. If you haven't, follow a [quickstart](../quickstarts-sdk/client-library.md) to get started.
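Because the Read call is asynchronous, you submit the image and then poll for results using the returned operation ID. The following Python sketch illustrates that pattern only; the endpoint, key, and image URL are placeholders, and it assumes the `azure-cognitiveservices-vision-computervision` package is installed.

```python
import time

from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from azure.cognitiveservices.vision.computervision.models import OperationStatusCodes
from msrest.authentication import CognitiveServicesCredentials

# Placeholders: use the endpoint and key from your own Computer Vision resource.
client = ComputerVisionClient(
    "https://<your-resource-name>.cognitiveservices.azure.com/",
    CognitiveServicesCredentials("<your-key>"),
)

# Start the Read operation; the operation ID is returned in the Operation-Location header.
read_response = client.read("https://example.com/printed-text.jpg", raw=True)
operation_id = read_response.headers["Operation-Location"].split("/")[-1]

# Poll until the operation leaves the not-started/running states.
while True:
    result = client.get_read_result(operation_id)
    if result.status not in (OperationStatusCodes.running, OperationStatusCodes.not_started):
        break
    time.sleep(1)

# Print each line of recognized text.
if result.status == OperationStatusCodes.succeeded:
    for page in result.analyze_result.read_results:
        for line in page.lines:
            print(line.text)
```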
## Determine how to process the data (optional)
cognitive-services Spatial Analysis Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/spatial-analysis-operations.md
Spatial Analysis lets you analyze video streams from camera devices in real time. For each camera device you configure, the Spatial Analysis operations will generate an output stream of JSON messages sent to your instance of Azure IoT Hub.
-The Spatial Analysis container implements the following operations:
+The Spatial Analysis container implements the following operations. You can configure these operations in the deployment manifest of your container.
| Operation Identifier| Description|
|--|--|
The following are the parameters required by each of the Spatial Analysis operat
| `INPUT_VIDEO_WIDTH` | Input video/stream's frame width (for example, 1920). This is an optional field and if provided, the frame will be scaled to this dimension while preserving the aspect ratio.|
| `DETECTOR_NODE_CONFIG` | JSON indicating which GPU to run the detector node on. It should be in the following format: `"{ \"gpu_index\": 0 }",`|
| `TRACKER_NODE_CONFIG` | JSON indicating whether to compute speed in the tracker node or not. It should be in the following format: `"{ \"enable_speed\": true }",`|
-| `CAMERA_CONFIG` | JSON indicating the calibrated camera parameters for multiple cameras. If the skill you used requires calibration and you already have the camera parameter, you can use this config to provide them directly. Should be in the following format: `"{ \"cameras\": [{\"source_id\": \"endcomputer.0.persondistancegraph.detector+end_computer1\", \"camera_height\": 13.105561256408691, \"camera_focal_length\": 297.60003662109375, \"camera_tiltup_angle\": 0.9738943576812744}] }"`, the `source_id` is used to identify each camera. It can be get from the `source_info` of the event we published. It will only take effect when `do_calibration=false` in `DETECTOR_NODE_CONFIG`.|
+| `CAMERA_CONFIG` | JSON indicating the calibrated camera parameters for multiple cameras. If the skill you used requires calibration and you already have the camera parameters, you can use this config to provide them directly. Should be in the following format: `"{ \"cameras\": [{\"source_id\": \"endcomputer.0.persondistancegraph.detector+end_computer1\", \"camera_height\": 13.105561256408691, \"camera_focal_length\": 297.60003662109375, \"camera_tiltup_angle\": 0.9738943576812744}] }"`, the `source_id` is used to identify each camera. It can be obtained from the `source_info` of the event we published. It will only take effect when `do_calibration=false` in `DETECTOR_NODE_CONFIG`.|
| `CAMERACALIBRATOR_NODE_CONFIG` | JSON indicating which GPU to run the camera calibrator node on and whether to use calibration or not. It should be in the following format: `"{ \"gpu_index\": 0, \"do_calibration\": true, \"enable_orientation\": true}",`|
| `CALIBRATION_CONFIG` | JSON indicating parameters to control how the camera calibration works. It should be in the following format: `"{\"enable_recalibration\": true, \"quality_check_frequency_seconds\": 86400}",`|
| `SPACEANALYTICS_CONFIG` | JSON configuration for zone and line as outlined below.|
cognitive-services Export Delete Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/export-delete-data.md
To learn how to view and delete user data in Custom Vision, see the following ta
| Data | View operation | Delete operation |
| - | - | - |
-| Account info (Subscription Keys) | [GetAccountInfo](https://go.microsoft.com/fwlink/?linkid=865446) | Delete using Azure portal (Azure Subscriptions). Or using "Delete Your Account" button in CustomVision.ai settings page (Microsoft Account Subscriptions) |
+| Account info (Keys) | [GetAccountInfo](https://go.microsoft.com/fwlink/?linkid=865446) | Delete using Azure portal (Azure Subscriptions). Or using "Delete Your Account" button in CustomVision.ai settings page (Microsoft Account Subscriptions) |
| Iteration details | [GetIteration](https://go.microsoft.com/fwlink/?linkid=865446) | [DeleteIteration](https://go.microsoft.com/fwlink/?linkid=865446) |
| Iteration performance details | [GetIterationPerformance](https://go.microsoft.com/fwlink/?linkid=865446) | [DeleteIteration](https://go.microsoft.com/fwlink/?linkid=865446) |
| List of iterations | [GetIterations](https://go.microsoft.com/fwlink/?linkid=865446) | [DeleteIteration](https://go.microsoft.com/fwlink/?linkid=865446) |
cognitive-services Export Programmatically https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/export-programmatically.md
This guide shows you how to export your model to an ONNX file with the Python SD
## Create a training client
-You need to have a [CustomVisionTrainingClient](/python/api/azure-cognitiveservices-vision-customvision/azure.cognitiveservices.vision.customvision.training.customvisiontrainingclient) object to export a model iteration. Create variables for your Custom Vision training resources Azure endpoint and subscription keys, and use them to create the client object.
+You need to have a [CustomVisionTrainingClient](/python/api/azure-cognitiveservices-vision-customvision/azure.cognitiveservices.vision.customvision.training.customvisiontrainingclient) object to export a model iteration. Create variables for your Custom Vision training resource's Azure endpoint and key, and use them to create the client object.
```python
ENDPOINT = "PASTE_YOUR_CUSTOM_VISION_TRAINING_ENDPOINT_HERE"
cognitive-services Get Started Build Detector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/get-started-build-detector.md
The training process should only take a few minutes. During this time, informati
## Evaluate the detector
-After training has completed, the model's performance is calculated and displayed. The Custom Vision service uses the images that you submitted for training to calculate precision, recall, and mean average precision. Precision and recall are two different measurements of the effectiveness of a detector:
+After training has completed, the model's performance is calculated and displayed. The Custom Vision service uses the images that you submitted for training to calculate precision, recall, and mean average precision, using a process called [k-fold cross validation](https://wikipedia.org/wiki/Cross-validation_(statistics)). Precision and recall are two different measurements of the effectiveness of a detector:
- **Precision** indicates the fraction of identified classifications that were correct. For example, if the model identified 100 images as dogs, and 99 of them were actually of dogs, then the precision would be 99%.
- **Recall** indicates the fraction of actual classifications that were correctly identified. For example, if there were actually 100 images of apples, and the model identified 80 as apples, the recall would be 80%.
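To make the two definitions concrete, here's a small illustrative Python sketch (not part of the service or SDK) that computes both metrics from raw counts, using the dog and apple examples above.

```python
def precision(true_positives: int, false_positives: int) -> float:
    """Fraction of identified classifications that were correct."""
    return true_positives / (true_positives + false_positives)

def recall(true_positives: int, false_negatives: int) -> float:
    """Fraction of actual classifications that were correctly identified."""
    return true_positives / (true_positives + false_negatives)

# 99 of the 100 images identified as dogs really were dogs -> precision 0.99
print(precision(true_positives=99, false_positives=1))

# 80 of the 100 actual apple images were identified as apples -> recall 0.80
print(recall(true_positives=80, false_negatives=20))
```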
cognitive-services Logo Detector Mobile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/logo-detector-mobile.md
To learn more about how the app handles this data, start with the **GetResources
The Custom Vision portion of the tutorial is complete. If you want to run the app, you'll need to integrate the Computer Vision service as well. The app uses the Computer Vision text recognition feature to supplement the logo detection process. An Azure logo can be recognized by its appearance *or* by the text printed near it. Unlike Custom Vision models, Computer Vision is pretrained to perform certain operations on images or videos.
-Subscribe to the Computer Vision service to get a key and endpoint URL. For help on this step, see [How to obtain subscription keys](../cognitive-services-apis-create-account.md?tabs=singleservice%2Cwindows).
+Subscribe to the Computer Vision service to get a key and endpoint URL. For help on this step, see [How to obtain keys](../cognitive-services-apis-create-account.md?tabs=singleservice%2Cwindows).
![The Computer Vision service in the Azure portal, with the Quickstart menu selected. A link for keys is outlined, as is the API endpoint URL](media/azure-logo-tutorial/comvis-keys.png)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/overview.md
keywords: image recognition, image identifier, image recognition app, custom vis
# What is Custom Vision?
-Azure Custom Vision is an image recognition service that lets you build, deploy, and improve your own image identifier models. An image identifier applies labels to images, according to their detected visual characteristics. Each label represents a classifications or objects. Unlike the [Computer Vision](../computer-vision/overview.md) service, Custom Vision allows you to specify your own labels and train custom models to detect them.
+Azure Custom Vision is an image recognition service that lets you build, deploy, and improve your own image identifier models. An image identifier applies labels to images, according to their detected visual characteristics. Each label represents a classification or object. Unlike the [Computer Vision](../computer-vision/overview.md) service, Custom Vision allows you to specify your own labels and train custom models to detect them.
This documentation contains the following types of articles:
* The [quickstarts](./getting-started-build-a-classifier.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
This documentation contains the following types of articles:
## What it does
-The Custom Vision service uses a machine learning algorithm to analyze images. You, the developer, submit groups of images that have and don't have the characteristics in question. You label the images yourself at the time of submission. Then the algorithm trains to this data and calculates its own accuracy by testing itself on those same images. Once you've trained the algorithm, you can test, retrain, and eventually use it in your image recognition app to [classify images](getting-started-build-a-classifier.md). You can also [export the model](export-your-model.md) itself for offline use.
+The Custom Vision service uses a machine learning algorithm to analyze images. You, the developer, submit groups of images that have and don't have the characteristics in question. You label the images yourself with your own custom labels (tags) at the time of submission. Then the algorithm trains to this data and calculates its own accuracy by testing itself on those same images. Once you've trained the algorithm, you can test, retrain, and eventually use it in your image recognition app to [classify images](getting-started-build-a-classifier.md). You can also [export the model](export-your-model.md) itself for offline use.
### Classification and object detection
Custom Vision primarily doesn't replicate data out of the specified region, exce
## Next steps
-Follow the [Build a classifier](getting-started-build-a-classifier.md) quickstart to get started using Custom Vision on the web portal, or complete an [SDK quickstart](quickstarts/image-classification.md) to implement the basic scenarios in code.
+Follow the [Build a classifier](getting-started-build-a-classifier.md) quickstart to get started using Custom Vision on the web portal, or complete an [SDK quickstart](quickstarts/image-classification.md) to implement the basic scenarios in code.
cognitive-services Cognitive Services Encryption Keys Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Encryption/cognitive-services-encryption-keys-portal.md
The process to enable Customer-Managed Keys with Azure Key Vault for Cognitive S
* [Language Understanding service encryption of data at rest](../LUIS/encrypt-data-at-rest.md)
* [QnA Maker encryption of data at rest](../QnAMaker/encrypt-data-at-rest.md)
* [Translator encryption of data at rest](../translator/encrypt-data-at-rest.md)
+* [Language service encryption of data at rest](../language-service/concepts/encryption-data-at-rest.md)
## Speech
The process to enable Customer-Managed Keys with Azure Key Vault for Cognitive S
## Next steps

* [What is Azure Key Vault](../../key-vault/general/overview.md)?
-* [Cognitive Services Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk)
+* [Cognitive Services Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk)
cognitive-services Howtodetectfacesinimage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Face/Face-API-How-to-Topics/HowtoDetectFacesinImage.md
The code snippets in this guide are written in C# by using the Azure Cognitive S
## Setup
-This guide assumes that you already constructed a [FaceClient](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceclient) object, named `faceClient`, with a Face subscription key and endpoint URL. For instructions on how to set up this feature, follow one of the quickstarts.
+This guide assumes that you already constructed a [FaceClient](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceclient) object, named `faceClient`, with a Face key and endpoint URL. For instructions on how to set up this feature, follow one of the quickstarts.
## Submit data to the service
cognitive-services Find Similar Faces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Face/Face-API-How-to-Topics/find-similar-faces.md
The following code uses the above method to get face data from a series of image
#### [REST API](#tab/rest)
-Copy the following cURL command and insert your subscription key and endpoint where appropriate. Then run the command to detect one of the target faces.
+Copy the following cURL command and insert your key and endpoint where appropriate. Then run the command to detect one of the target faces.
:::code language="shell" source="~/cognitive-services-quickstart-code/curl/face/detect.sh" ID="detect_for_similar":::
The following method takes a set of target faces and a single source face. Then,
#### [REST API](#tab/rest)
-Copy the following cURL command and insert your subscription key and endpoint where appropriate.
+Copy the following cURL command and insert your key and endpoint where appropriate.
:::code language="shell" source="~/cognitive-services-quickstart-code/curl/face/detect.sh" ID="similar":::
cognitive-services How To Add Faces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Face/Face-API-How-to-Topics/how-to-add-faces.md
static async Task WaitCallLimitPerSecondAsync()
## Step 2: Authorize the API call
-When you use a client library, you must pass your subscription key to the constructor of the **FaceClient** class. For example:
+When you use a client library, you must pass your key to the constructor of the **FaceClient** class. For example:
```csharp
private readonly IFaceClient faceClient = new FaceClient(
private readonly IFaceClient faceClient = new FaceClient(
new System.Net.Http.DelegatingHandler[] { });
```
-To get the subscription key, go to the Azure Marketplace from the Azure portal. For more information, see [Subscriptions](https://www.microsoft.com/cognitive-services/sign-up).
+To get the key, go to the Azure Marketplace from the Azure portal. For more information, see [Subscriptions](https://www.microsoft.com/cognitive-services/sign-up).
## Step 3: Create the PersonGroup
cognitive-services How To Migrate Face Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Face/Face-API-How-to-Topics/how-to-migrate-face-data.md
This same migration strategy also applies to LargePersonGroup and LargeFaceList
You need the following items:

-- Two Face subscription keys, one with the existing data and one to migrate to. To subscribe to the Face service and get your key, follow the instructions in [Create a Cognitive Services account](../../cognitive-services-apis-create-account.md).
+- Two Face keys, one with the existing data and one to migrate to. To subscribe to the Face service and get your key, follow the instructions in [Create a Cognitive Services account](../../cognitive-services-apis-create-account.md).
- The Face subscription ID string that corresponds to the target subscription. To find it, select **Overview** in the Azure portal.
- Any edition of [Visual Studio 2015 or 2017](https://www.visualstudio.com/downloads/).
In the **Main** method in *Program.cs*, create two [FaceClient](/dotnet/api/micr
[!INCLUDE [subdomains-note](../../../../includes/cognitive-services-custom-subdomains-note.md)]

```csharp
-var FaceClientEastAsia = new FaceClient(new ApiKeyServiceClientCredentials("<East Asia Subscription Key>"))
+var FaceClientEastAsia = new FaceClient(new ApiKeyServiceClientCredentials("<East Asia Key>"))
{ Endpoint = "https://southeastasia.api.cognitive.microsoft.com/>" };
-var FaceClientWestUS = new FaceClient(new ApiKeyServiceClientCredentials("<West US Subscription Key>"))
+var FaceClientWestUS = new FaceClient(new ApiKeyServiceClientCredentials("<West US Key>"))
{ Endpoint = "https://westus.api.cognitive.microsoft.com/" }; ```
-Fill in the subscription key values and endpoint URLs for your source and target subscriptions.
+Fill in the key values and endpoint URLs for your source and target subscriptions.
## Prepare a PersonGroup for migration
cognitive-services How To Use Large Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Face/Face-API-How-to-Topics/how-to-use-large-scale.md
The samples are written in C# by using the Azure Cognitive Services Face client
## Step 1: Initialize the client object
-When you use the Face client library, the subscription key and subscription endpoint are passed in through the constructor of the FaceClient class. For example:
+When you use the Face client library, the key and subscription endpoint are passed in through the constructor of the FaceClient class. For example:
```csharp
-string SubscriptionKey = "<Subscription Key>";
-// Use your own subscription endpoint corresponding to the subscription key.
+string SubscriptionKey = "<Key>";
+// Use your own subscription endpoint corresponding to the key.
string SubscriptionEndpoint = "https://westus.api.cognitive.microsoft.com"; private readonly IFaceClient faceClient = new FaceClient( new ApiKeyServiceClientCredentials(subscriptionKey),
private readonly IFaceClient faceClient = new FaceClient(
faceClient.Endpoint = SubscriptionEndpoint;
```
-To get the subscription key with its corresponding endpoint, go to the Azure Marketplace from the Azure portal.
+To get the key with its corresponding endpoint, go to the Azure Marketplace from the Azure portal.
For more information, see [Subscriptions](https://azure.microsoft.com/services/cognitive-services/directory/vision/).

## Step 2: Code migration
cognitive-services Orchestration Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/how-to/orchestration-projects.md
ms. Previously updated : 03/08/2022 Last updated : 05/23/2022 # Combine LUIS and question answering capabilities
-Cognitive Services provides two natural language processing services, [Language Understanding](../what-is-luis.md) (LUIS) and question answering, each with a different purpose. Understand when to use each service and how they compliment each other.
+Cognitive Services provides two natural language processing services, [Language Understanding](../what-is-luis.md) (LUIS) and question answering, each with a different purpose. Understand when to use each service and how they complement each other.
Natural language processing (NLP) allows your client application, such as a chat bot, to work with your users' natural language.
As an example, if your chat bot receives the text "How do I get to the Human Res
Orchestration helps you connect more than one project and service together. Each connection in the orchestration is represented by a type and relevant data. The intent needs to have a name, a project type (LUIS, question answering, or conversational language understanding), and the name of the project you want to connect to.
-You can use conversational language understanding to create a new orchestration project, See the [conversational language understanding documentation](../../language-service/orchestration-workflow/how-to/create-project.md).
-
+You can use orchestration workflow to create new orchestration projects. See [orchestration workflow](../../language-service/orchestration-workflow/how-to/create-project.md) for more information.
## Set up orchestration between Cognitive Services features

To use an orchestration project to connect LUIS, question answering, and conversational language understanding, you need:
* A language resource in [Language Studio](https://language.azure.com/) or the Azure portal.
-* To change your LUIS authoring resource to the Language resource. You can also optionally export your application from LUIS, and then [import it into conversational language understanding](../../language-service/orchestration-workflow/how-to/create-project.md#export-and-import-a-project).
+* To change your LUIS authoring resource to the Language resource. You can also optionally export your application from LUIS, and then [import it into conversational language understanding](../../language-service/orchestration-workflow/how-to/create-project.md#import-an-orchestration-workflow-project).
>[!Note]
>LUIS can be used with Orchestration projects in West Europe only, and requires the authoring resource to be a Language resource. You can either import the application in the West Europe Language resource or change the authoring resource from the portal.
cognitive-services Batch Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-transcription.md
and can read audio or write transcriptions by using a SAS URI with [Blob Storage
## Batch transcription result
-For each audio input, one transcription result file is created. The [Get transcriptions files](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscriptionFiles) operation returns a list of result files for this transcription.
-To find the transcription file for a specific input file, filter all returned files with `kind` set to `Transcription`, and `name` set to `{originalInputName.suffix}.json`.
+For each audio input, one transcription result file is created. The [Get transcriptions files](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscriptionFiles) operation returns a list of result files for this transcription. The only way to confirm the audio input for a transcription is to check the `source` field in the transcription result file.
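As an illustration, the following Python sketch lists a transcription's files and prints the `source` of each result file. Treat it as a sketch, not the official sample: the region, key, and transcription ID are placeholders, and it assumes each entry returned by the Get transcriptions files operation carries `kind`, `name`, and a `links.contentUrl` that points to the result content.

```python
import requests

# Placeholders: your Speech resource region and key, plus the ID of a completed transcription.
region = "<your-region>"
key = "<your-speech-key>"
transcription_id = "<transcription-id>"

files_url = (
    f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.0/"
    f"transcriptions/{transcription_id}/files"
)
headers = {"Ocp-Apim-Subscription-Key": key}

files = requests.get(files_url, headers=headers).json()

# Keep only the per-audio transcription results, then read each result's `source` field.
for entry in files.get("values", []):
    if entry.get("kind") == "Transcription":
        result = requests.get(entry["links"]["contentUrl"]).json()
        print(f"{entry['name']} -> {result.get('source')}")
```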
Each transcription result file has this format:
cognitive-services Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/faq.md
Title: Frequently asked questions - Document Translation
-description: Get answers to frequently asked questions about Document Translation in the Translator service from Azure Cognitive Services.
+description: Get answers to frequently asked questions about Azure Cognitive Services Document Translation.
+ Previously updated : 07/15/2021 Last updated : 05/24/2022
-# Document Translation FAQ
+<!-- markdownlint-disable MD001 -->
-This article contains answers to frequently asked questions about Document Translation.
+# Answers to frequently asked questions
-|Frequently asked questions|
-|:--|
-|**When should I specify the source language of the document in the request?**<br/>If the language of the content in the source document is known, its recommended to specify the source language in the request to get a better translation. If the document has content in multiple languages or the language is unknown, then don't specify the source language in the request. Document translation automatically identifies language for each text segment and translates.|
-|**To what extent are the layout, structure, and formatting maintained?**<br/>While translating text from the source to the target language, the overall length of translated text may differ from source. This could result in reflow of text across pages. The same fonts may not be available both in source and target language. In general, the same font style is applied in target language to retain formatting closer to source.|
-|**Will the text embedded in an image within a document gets translated?**<br/>No. The text embedded in an image within a document will not get translated.|
-|**Does document translation translate content from scanned documents?**<br/>No. Document translation doesn't translate content from scanned documents.|
+## Document Translation: FAQ
+#### Should I specify the source language in a request?
+If the language of the content in the source document is known, it's recommended to specify the source language in the request to get a better translation. If the document has content in multiple languages or the language is unknown, don't specify the source language in the request. Document translation automatically identifies the language of each text segment and translates it.
+#### To what extent are the layout, structure, and formatting maintained?
+
+When text is translated from the source to the target language, the overall length of the translated text may differ from the source, which can cause text to reflow across pages. The same fonts may not be available in both the source and target languages. In general, the same font style is applied in the target language to keep the formatting close to the source.
+
+#### Will the text in an image within a document get translated?
+
+No. The text in an image within a document won't get translated.
+
+#### Can Document Translation translate content from scanned documents?
+
+Yes. Document translation translates content from _scanned PDF_ documents.
+
+#### Will my document be translated if it's password protected?
+
+No. If your scanned or text-embedded PDFs are password-locked, you must remove the lock before submission.
+
+#### If I'm using managed identities, do I also need a SAS token URL?
+
+No. Don't include SAS token URLs; your requests will fail. Managed identities eliminate the need for you to include shared access signature (SAS) tokens with your HTTP requests.
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/overview.md
Title: What is Microsoft Azure Cognitive Services Document Translation?
-description: An overview of the cloud-based batch document translation service and process.
+description: An overview of the cloud-based batch Document Translation service and process.
+ Previously updated : 05/25/2021 Last updated : 05/24/2022 recommendations: false # What is Document Translation?
-Document Translation is a cloud-based feature of the [Azure Translator](../translator-overview.md) service and is part of the Azure Cognitive Service family of REST APIs. In this overview, you'll learn how the Document Translation API can be used to translate multiple and complex documents across all [supported languages and dialects](../../language-support.md) while preserving original document structure and data format.
+Document Translation is a cloud-based feature of the [Azure Translator](../translator-overview.md) service and is part of the Azure Cognitive Service family of REST APIs. The Document Translation API can be used to translate multiple and complex documents across all [supported languages and dialects](../../language-support.md), while preserving original document structure and data format.
This documentation contains the following article types:
This documentation contains the following article types:
> [!NOTE] > When translating documents with content in multiple languages, the feature is intended for complete sentences in a single language. If sentences are composed of more than one language, the content may not all translate into the target language.
->
+> For more information on input requirements, *see* [content limits](get-started-with-document-translation.md#content-limits)
## Document Translation development options
You can add Document Translation to your applications using the REST API or a cl
## Get started
-In our how-to guide, you'll learn how to quickly get started using Document Translator. To begin, you'll need an active [Azure account](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [create a free account](https://azure.microsoft.com/free).
+In our how-to guide, you'll learn how to quickly get started using Document Translation. To begin, you'll need an active [Azure account](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [create a free account](https://azure.microsoft.com/free).
> [!div class="nextstepaction"] > [Start here](get-started-with-document-translation.md "Learn how to use Document Translation with HTTP REST")
The following document file types are supported by Document Translation:
| File type| File extension|Description|
|--|--|--|
-|Adobe PDF|pdf|Adobe Acrobat portable document format|
+|Adobe PDF|pdf|Portable document file format.|
|Comma-Separated Values |csv| A comma-delimited raw-data file used by spreadsheet programs.|
|HTML|html, htm|Hyper Text Markup Language.|
|Localization Interchange File Format|xlf, xliff| A parallel document format, export of Translation Memory systems. The languages used are defined inside the file.|
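You can also retrieve the supported formats programmatically at run time. The sketch below assumes the Document Translation `v1.0` batch route and uses placeholder endpoint and key values; verify the route against the REST reference for your resource.

```python
import requests

endpoint = "https://<your-resource-name>.cognitiveservices.azure.com"  # placeholder Document Translation endpoint
key = "<your-key>"  # placeholder resource key

# List the document formats and file extensions the service currently accepts.
response = requests.get(
    f"{endpoint}/translator/text/batch/v1.0/documents/formats",
    headers={"Ocp-Apim-Subscription-Key": key},
)
print(response.json())
```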
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/language-support.md
+ Previously updated : 02/01/2022 Last updated : 05/24/2022 # Translator language support
-**Translation - Cloud:** Cloud translation is available in all languages for the Translate operation of Text Translation and for Document Translation.
+**Translation - Cloud:** Cloud translation is available in all languages for the Translate operation of Text Translation and for Document Translation.
**Translation – Containers:** Language support for Containers.
-**Custom Translator:** Custom Translator can be used to create customized translation models which you can then use to customize your translated output while using the Text Translation or Document Translation features.
+**Custom Translator:** Custom Translator can be used to create customized translation models that you can then use to customize your translated output while using the Text Translation or Document Translation features.
**Auto Language Detection:** Automatically detect the language of the source text while using Text Translation or Document Translation.
## Translation
-| Language | Language code | Cloud – Text Translation and Document Translation| Containers – Text Translation|Custom Translator|Auto Language Detection|Dictionary
+> [!NOTE]
+> Language code `pt` will default to `pt-br`, Portuguese (Brazil).
+>
+> ☼ Indicates the language is not available for scanned PDF document translation.
+
+|Language | Language code | ☼ Cloud – Text Translation and Document Translation | Containers – Text Translation|Custom Translator|Auto Language Detection|Dictionary
|:-|:-:|:-:|:-:|:-:|:-:|:-:|
-| Afrikaans | `af` |✔|✔|✔|✔|✔|
+| Afrikaans | `af` |✔|✔|✔|✔|✔|
| Albanian | `sq` |✔|✔||✔||
-| Amharic | `am` |✔|✔||||
+| Amharic ☼ | `am` |✔|✔||||
| Arabic | `ar` |✔|✔|✔|✔|✔|
-| Armenian | `hy` |✔|✔||✔||
-| Assamese | `as` |✔|✔|✔|||
-| Azerbaijani | `az` |✔|✔||||
-| Bangla | `bn` |✔|✔|✔||✔|
-| Bashkir | `ba` |✔|||||
-| 🆕Basque | `eu` |✔|||||
+| Armenian ☼ | `hy` |✔|✔||✔||
+| Assamese ☼ | `as` |✔|✔|✔|||
+| Azerbaijani (Latin) | `az` |✔|✔||||
+| Bangla ☼ | `bn` |✔|✔|✔||✔|
+| Bashkir ☼ | `ba` |✔|||||
+| Basque | `eu` |✔|||||
| Bosnian (Latin) | `bs` |✔|✔|✔||✔|
| Bulgarian | `bg` |✔|✔|✔|✔|✔|
-| Cantonese (Traditional) | `yue` |✔|✔||||
+| Cantonese (Traditional) ☼ | `yue` |✔|✔||||
| Catalan | `ca` |✔|✔|✔|✔|✔|
| Chinese (Literary) | `lzh` |✔|||||
| Chinese Simplified | `zh-Hans` |✔|✔|✔|✔|✔|
| Czech | `cs` |✔|✔|✔|✔|✔|
| Danish | `da` |✔|✔|✔|✔|✔|
| Dari | `prs` |✔|✔||||
-| Divehi | `dv` |✔|||✔||
+| Divehi ☼ | `dv` |✔|||✔||
| Dutch | `nl` |✔|✔|✔|✔|✔|
| English | `en` |✔|✔|✔|✔|✔|
| Estonian | `et` |✔|✔|✔|✔||
-| 🆕Faroese | `fo` |✔|||||
+| Faroese | `fo` |✔|||||
| Fijian | `fj` |✔|✔|✔|||
| Filipino | `fil` |✔|✔|✔|||
| Finnish | `fi` |✔|✔|✔|✔|✔|
| French | `fr` |✔|✔|✔|✔|✔|
| French (Canada) | `fr-ca` |✔|✔||||
-| 🆕Galician | `gl` |✔|||||
-| Georgian | `ka` |✔|||✔||
+| Galician | `gl` |✔|||||
+| Georgian ☼ | `ka` |✔|||✔||
| German | `de` |✔|✔|✔|✔|✔|
-| Greek | `el` |✔|✔|✔|✔|✔|
-| Gujarati | `gu` |✔|✔|✔|✔||
+| Greek ☼ | `el` |✔|✔|✔|✔|✔|
+| Gujarati ☼ | `gu` |✔|✔|✔|✔||
| Haitian Creole | `ht` |✔|✔||✔|✔|
-| Hebrew | `he` |✔|✔|✔|✔|✔|
+| Hebrew ☼ | `he` |✔|✔|✔|✔|✔|
| Hindi | `hi` |✔|✔|✔|✔|✔|
-| Hmong Daw | `mww` |✔|✔|||✔|
+| Hmong Daw (Latin) | `mww` |✔|✔|||✔|
| Hungarian | `hu` |✔|✔|✔|✔|✔|
| Icelandic | `is` |✔|✔|✔|✔|✔|
| Indonesian | `id` |✔|✔|✔|✔|✔|
-| Inuinnaqtun | `ikt` |✔|||||
-| Inuktitut | `iu` |✔|✔|✔|✔||
-| Inuktitut (Latin) | `iu-Latn` |✔|||||
+| Inuinnaqtun ☼ | `ikt` |✔|||||
+| Inuktitut ☼ | `iu` |✔|✔|✔|✔||
+| Inuktitut (Latin) | `iu-Latn` |✔|||||
| Irish | `ga` |✔|✔|✔|✔||
| Italian | `it` |✔|✔|✔|✔|✔|
| Japanese | `ja` |✔|✔|✔|✔|✔|
-| Kannada | `kn` |✔|✔|✔|||
+| Kannada ☼ | `kn` |✔|✔|✔|||
| Kazakh | `kk` |✔|✔||||
-| Khmer | `km` |✔|✔||✔||
+| Khmer ☼ | `km` |✔|✔||✔||
| Klingon | `tlh-Latn` |✔| ||✔|✔|
-| Klingon (plqaD) | `tlh-Piqd` |✔| ||✔||
+| Klingon (plqaD) ☼ | `tlh-Piqd` |✔| ||✔||
| Korean | `ko` |✔|✔|✔|✔|✔|
-| Kurdish (Central) | `ku` |✔|✔||✔||
-| Kurdish (Northern) | `kmr` |✔|✔||||
-| Kyrgyz | `ky` |✔|||||
-| Lao | `lo` |✔|✔||✔||
-| Latvian | `lv` |✔|✔|✔|✔|✔|
+| Kurdish (Central) | `ku` |✔|✔||✔||
+| Kurdish (Northern) ☼ | `kmr` |✔|✔||||
+| Kyrgyz (Cyrillic) | `ky` |✔|||||
+| Lao ☼ | `lo` |✔|✔||✔||
+| Latvian ☼ | `lv` |✔|✔|✔|✔|✔|
| Lithuanian | `lt` |✔|✔|✔|✔|✔|
-| Macedonian | `mk` |✔|||✔||
-| Malagasy | `mg` |✔|✔|✔|||
-| Malay | `ms` |✔|✔|✔|✔|✔|
-| Malayalam | `ml` |✔|✔|✔|||
+| Macedonian ☼ | `mk` |✔|||✔||
+| Malagasy ☼ | `mg` |✔|✔|✔|||
+| Malay (Latin) | `ms` |✔|✔|✔|✔|✔|
+| Malayalam ☼ | `ml` |✔|✔|✔|||
| Maltese | `mt` |✔|✔|✔|✔|✔|
| Maori | `mi` |✔|✔|✔|||
| Marathi | `mr` |✔|✔|✔|||
-| Mongolian (Cyrillic) | `mn-Cyrl` |✔|||||
-| Mongolian (Traditional) | `mn-Mong` |✔|||✔||
-| Myanmar | `my` |✔|✔||✔||
+| Mongolian (Cyrillic) ☼ | `mn-Cyrl` |✔|||||
+| Mongolian (Traditional) ☼ | `mn-Mong` |✔|||✔||
+| Myanmar ☼ | `my` |✔|✔||✔||
| Nepali | `ne` |✔|✔||||
| Norwegian | `nb` |✔|✔|✔|✔|✔|
-| Odia | `or` |✔|✔|✔|||
+| Odia ☼ | `or` |✔|✔|✔|||
| Pashto | `ps` |✔|✔||✔||
| Persian | `fa` |✔|✔|✔|✔|✔|
| Polish | `pl` |✔|✔|✔|✔|✔|
| Portuguese (Brazil) | `pt` |✔|✔|✔|✔|✔|
| Portuguese (Portugal) | `pt-pt` |✔|✔||||
| Punjabi | `pa` |✔|✔|✔|||
-| Queretaro Otomi | `otq` |✔|✔||||
+| Queretaro Otomi ☼ | `otq` |✔|✔||||
| Romanian | `ro` |✔|✔|✔|✔|✔|
| Russian | `ru` |✔|✔|✔|✔|✔|
-| Samoan | `sm` |✔| |✔|||
+| Samoan (Latin) | `sm` |✔| |✔|||
| Serbian (Cyrillic) | `sr-Cyrl` |✔|✔||✔||
| Serbian (Latin) | `sr-Latn` |✔|✔|✔|✔|✔|
| Slovak | `sk` |✔|✔|✔|✔|✔|
| Slovenian | `sl` |✔|✔|✔|✔|✔|
-| 🆕Somali | `so` |✔|||✔||
+| Somali (Arabic) | `so` |✔|||✔||
| Spanish | `es` |✔|✔|✔|✔|✔|
-| Swahili | `sw` |✔|✔|✔|✔|✔|
+| Swahili (Latin) | `sw` |✔|✔|✔|✔|✔|
| Swedish | `sv` |✔|✔|✔|✔|✔|
-| Tahitian | `ty` |✔| |✔|✔||
-| Tamil | `ta` |✔|✔|✔||✔|
-| Tatar | `tt` |✔|||||
-| Telugu | `te` |✔|✔|✔|||
-| Thai | `th` |✔| |✔|✔|✔|
-| Tibetan | `bo` |✔||||
-| Tigrinya | `ti` |✔|✔||||
+| Tahitian ☼ | `ty` |✔| |✔|✔||
+| Tamil ☼ | `ta` |✔|✔|✔||✔|
+| Tatar (Latin) | `tt` |✔|||||
+| Telugu ☼ | `te` |✔|✔|✔|||
+| Thai ☼ | `th` |✔| |✔|✔|✔|
+| Tibetan ☼ | `bo` |✔||||
+| Tigrinya ☼ | `ti` |✔|✔||||
| Tongan | `to` |✔|✔|✔|||
| Turkish | `tr` |✔|✔|✔|✔|✔|
-| Turkmen | `tk` |✔||||
+| Turkmen (Latin) | `tk` |✔||||
| Ukrainian | `uk` |✔|✔|✔|✔|✔|
| Upper Sorbian | `hsb` |✔|||||
| Urdu | `ur` |✔|✔|✔|✔|✔|
-| Uyghur | `ug` |✔||||
+| Uyghur (Arabic) | `ug` |✔||||
| Uzbek (Latin) | `uz` |✔|||✔||
-| Vietnamese | `vi` |✔|✔|✔|✔|✔|
+| Vietnamese ☼ | `vi` |✔|✔|✔|✔|✔|
| Welsh | `cy` |✔|✔|✔|✔|✔|
| Yucatec Maya | `yua` |✔|✔||✔||
-| 🆕Zulu | `zu` |✔|||||
+| Zulu | `zu` |✔|||||
-> [!NOTE]
-> Language code `pt` will default to `pt-br`, Portuguese (Brazil).
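To see how these language codes are used in practice, here's a minimal Text Translation sketch that requests Portuguese with the bare `pt` code, which the service resolves to `pt-br` as noted above. The key and region values are placeholders.

```python
import requests

key = "<your-translator-key>"      # placeholder
region = "<your-resource-region>"  # placeholder

response = requests.post(
    "https://api.cognitive.microsofttranslator.com/translate",
    params={"api-version": "3.0", "to": "pt"},  # bare `pt` resolves to pt-br (Portuguese, Brazil)
    headers={
        "Ocp-Apim-Subscription-Key": key,
        "Ocp-Apim-Subscription-Region": region,
        "Content-Type": "application/json",
    },
    json=[{"Text": "Hello, how are you?"}],
)
print(response.json())
```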
+## Document Translation: scanned PDF support
+
+|Language|Language Code|Supported as source language for scanned PDF?|Supported as target language for scanned PDF?|
+|:-|:-:|:-:|:-:|
+|Afrikaans|`af`|Yes|Yes|
+|Albanian|`sq`|Yes|Yes|
+|Amharic|`am`|No|No|
+|Arabic|`ar`|No|No|
+|Armenian|`hy`|No|No|
+|Assamese|`as`|No|No|
+|Azerbaijani (Latin)|`az`|Yes|Yes|
+|Bangla|`bn`|No|No|
+|Bashkir|`ba`|No|Yes|
+|Basque|`eu`|Yes|Yes|
+|Bosnian (Latin)|`bs`|Yes|Yes|
+|Bulgarian|`bg`|Yes|Yes|
+|Cantonese (Traditional)|`yue`|No|Yes|
+|Catalan|`ca`|Yes|Yes|
+|Chinese (Literary)|`lzh`|No|Yes|
+|Chinese Simplified|`zh-Hans`|Yes|Yes|
+|Chinese Traditional|`zh-Hant`|Yes|Yes|
+|Croatian|`hr`|Yes|Yes|
+|Czech|`cs`|Yes|Yes|
+|Danish|`da`|Yes|Yes|
+|Dari|`prs`|No|No|
+|Divehi|`dv`|No|No|
+|Dutch|`nl`|Yes|Yes|
+|English|`en`|Yes|Yes|
+|Estonian|`et`|Yes|Yes|
+|Faroese|`fo`|Yes|Yes|
+|Fijian|`fj`|Yes|Yes|
+|Filipino|`fil`|Yes|Yes|
+|Finnish|`fi`|Yes|Yes|
+|French|`fr`|Yes|Yes|
+|French (Canada)|`fr-ca`|Yes|Yes|
+|Galician|`gl`|Yes|Yes|
+|Georgian|`ka`|No|No|
+|German|`de`|Yes|Yes|
+|Greek|`el`|Yes|Yes|
+|Gujarati|`gu`|No|No|
+|Haitian Creole|`ht`|Yes|Yes|
+|Hebrew|`he`|No|No|
+|Hindi|`hi`|Yes|Yes|
+|Hmong Daw (Latin)|`mww`|Yes|Yes|
+|Hungarian|`hu`|Yes|Yes|
+|Icelandic|`is`|Yes|Yes|
+|Indonesian|`id`|Yes|Yes|
+|Interlingua|`ia`|Yes|Yes|
+|Inuinnaqtun|`ikt`|No|Yes|
+|Inuktitut|`iu`|No|No|
+|Inuktitut (Latin)|`iu-Latn`|Yes|Yes|
+|Irish|`ga`|Yes|Yes|
+|Italian|`it`|Yes|Yes|
+|Japanese|`ja`|Yes|Yes|
+|Kannada|`kn`|No|Yes|
+|Kazakh (Cyrillic)|`kk`, `kk-cyrl`|Yes|Yes|
+|Kazakh (Latin)|`kk-latn`|Yes|Yes|
+|Khmer|`km`|No|No|
+|Klingon|`tlh-Latn`|No|No|
+|Klingon (plqaD)|`tlh-Piqd`|No|No|
+|Korean|`ko`|Yes|Yes|
+|Kurdish (Arabic) (Central)|`ku-arab`,`ku`|No|No|
+|Kurdish (Latin) (Northern)|`ku-latn`, `kmr`|Yes|Yes|
+|Kyrgyz (Cyrillic)|`ky`|Yes|Yes|
+|Lao|`lo`|No|No|
+|Latvian|`lv`|No|Yes|
+|Lithuanian|`lt`|Yes|Yes|
+|Macedonian|`mk`|No|Yes|
+|Malagasy|`mg`|No|Yes|
+|Malay (Latin)|`ms`|Yes|Yes|
+|Malayalam|`ml`|No|Yes|
+|Maltese|`mt`|Yes|Yes|
+|Maori|`mi`|Yes|Yes|
+|Marathi|`mr`|Yes|Yes|
+|Mongolian (Cyrillic)|`mn-Cyrl`|Yes|Yes|
+|Mongolian (Traditional)|`mn-Mong`|No|No|
+|Myanmar (Burmese)|`my`|No|No|
+|Nepali|`ne`|Yes|Yes|
+|Norwegian|`nb`|Yes|Yes|
+|Odia|`or`|No|No|
+|Pashto|`ps`|No|No|
+|Persian|`fa`|No|No|
+|Polish|`pl`|Yes|Yes|
+|Portuguese (Brazil)|`pt`, `pt-br`|Yes|Yes|
+|Portuguese (Portugal)|`pt-pt`|Yes|Yes|
+|Punjabi|`pa`|No|Yes|
+|Queretaro Otomi|`otq`|No|Yes|
+|Romanian|`ro`|Yes|Yes|
+|Russian|`ru`|Yes|Yes|
+|Samoan (Latin)|`sm`|Yes|Yes|
+|Serbian (Cyrillic)|`sr-Cyrl`|No|Yes|
+|Serbian (Latin)|`sr`, `sr-latn`|Yes|Yes|
+|Slovak|`sk`|Yes|Yes|
+|Slovenian|`sl`|Yes|Yes|
+|Somali|`so`|No|Yes|
+|Spanish|`es`|Yes|Yes|
+|Swahili (Latin)|`sw`|Yes|Yes|
+|Swedish|`sv`|Yes|Yes|
+|Tahitian|`ty`|No|Yes|
+|Tamil|`ta`|No|Yes|
+|Tatar (Latin)|`tt`|Yes|Yes|
+|Telugu|`te`|No|Yes|
+|Thai|`th`|No|No|
+|Tibetan|`bo`|No|No|
+|Tigrinya|`ti`|No|No|
+|Tongan|`to`|Yes|Yes|
+|Turkish|`tr`|Yes|Yes|
+|Turkmen (Latin)|`tk`|Yes|Yes|
+|Ukrainian|`uk`|No|Yes|
+|Upper Sorbian|`hsb`|Yes|Yes|
+|Urdu|`ur`|No|No|
+|Uyghur (Arabic)|`ug`|No|No|
+|Uzbek (Latin)|`uz`|Yes|Yes|
+|Vietnamese|`vi`|No|Yes|
+|Welsh|`cy`|Yes|Yes|
+|Yucatec Maya|`yua`|Yes|Yes|
+|Zulu|`zu`|Yes|Yes|
## Transliteration
+
The [Transliterate operation](reference/v3-0-transliterate.md) in the Text Translation feature supports the following languages. In the "To/From", "<-->" indicates that the language can be transliterated from or to either of the scripts listed. The "-->" indicates that the language can only be transliterated from one script to the other.

| Language | Language code | Script | To/From | Script|
The [Transliterate operation](reference/v3-0-transliterate.md) in the Text Trans
|Urdu| `ur` | Arabic `Arab` | <--> | Latin `Latn` |

## Other Cognitive Services
-Add additional capabilities to your apps and workflows by utilizing other Cognitive Services with Translator. Language lists for additional services are below.
+
+Add more capabilities to your apps and workflows by utilizing other Cognitive Services with Translator. Language lists for other services are below:
+
* [Computer Vision](../computer-vision/language-support.md)
* [Speech](../speech-service/language-support.md)
* [Language service](../language-service/index.yml)
- * Select the feature you want to use, and then **Language support** on the left navigation menu.
+ * Select the feature you want to use, and then **Language support** on the left navigation menu.
-View all [Cognitive Services](../index.yml).
+View all [Cognitive Services](../index.yml).
## Next steps
+
* [Text Translation reference](reference/v3-0-reference.md)
* [Document Translation reference](document-translation/reference/rest-api-guide.md)
* [Custom Translator overview](custom-translator/overview.md)
-
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/whats-new.md
Title: What's new in Translator?
+ Title: What's new in Azure Cognitive Services Translator?
description: Learn of the latest changes to the Translator Service API. + Previously updated : 04/25/2022 Last updated : 05/24/2022 - <!-- markdownlint-disable MD024 --> <!-- markdownlint-disable MD036 -->
-# What's new in Azure Cognitive Services Translator
+# What's new in Azure Cognitive Services Translator?
Bookmark this page to stay up to date with release notes, feature enhancements, and documentation updates.
-* Translator is a language service that enables users to translate text and documents, helps entities expand their global outreach, and supports preservation of at-risk and endangered languages.
+Translator is a language service that enables users to translate text and documents, helps entities expand their global outreach, and supports preservation of at-risk and endangered languages.
+
+Translator service supports language translation for more than 100 languages. If your language community is interested in partnering with Microsoft to add your language to Translator, contact us via the [Translator community partner onboarding form](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR-riVR3Xj0tOnIRdZOALbM9UOU1aMlNaWFJOOE5YODhRR1FWVzY0QzU1OS4u).
+
+## May 2022
+
+### [Document Translation support for scanned PDF documents](https://aka.ms/blog_ScannedPdfTranslation)
-* Translator supports language translation for more than 100 languages. If your language community is interested in partnering with Microsoft to add your language to Translator, contact us via the [Translator community partner onboarding form](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR-riVR3Xj0tOnIRdZOALbM9UOU1aMlNaWFJOOE5YODhRR1FWVzY0QzU1OS4u).
+* Document Translation uses optical character recognition (OCR) technology to extract and translate text in scanned PDF documents while retaining the original layout.
## April 2022
Bookmark this page to stay up to date with release notes, feature enhancements,
* **Mongolian (Traditional)**. Traditional Mongolian script is the first writing system created specifically for the Mongolian language. Mongolian is the official language of Mongolia.
 * **Tatar**. A Turkic language used by speakers in modern Tatarstan. It's closely related to Crimean Tatar and Siberian Tatar but each belongs to different subgroups.
 * **Tibetan**. It has nearly 6 million speakers and can be found in many Tibetan Buddhist publications.
- * **Turkmen**. The official language of Turkmenistan. It's very similar to Turkish and Azerbaijani.
- * **Uyghur**. A Turkic language with nearly 15 million speakers. It is spoken primarily in Western China.
+ * **Turkmen**. The official language of Turkmenistan. It's similar to Turkish and Azerbaijani.
+ * **Uyghur**. A Turkic language with nearly 15 million speakers. It's spoken primarily in Western China.
* **Uzbek (Latin)**. A Turkic language that is the official language of Uzbekistan. It's spoken by 34 million native speakers.

These additions bring the total number of languages supported in Translator to 103.
These additions bring the total number of languages supported in Translator to 1
### [Text and document translation support for literary Chinese](https://www.microsoft.com/translator/blog/2021/08/25/microsoft-translator-releases-literary-chinese-translation/)
-* Azure Cognitive Services Translator has [text and document language support](language-support.md) for literary Chinese, a traditional style of written Chinese used by classical Chinese poets and in ancient Chinese poetry.
+* Azure Cognitive Services Translator has [text and document language support](language-support.md) for literary Chinese. Classical or literary Chinese is a traditional style of written Chinese used by traditional Chinese poets and in ancient Chinese poetry.
## June 2021
cognitive-services Cognitive Services Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-security.md
NSString* value =
Customer Lockbox is available for this service: * Translator
+* Conversational language understanding
+* Custom text classification
+* Custom named entity recognition
+* Orchestration workflow
For the following services, Microsoft engineers will not access any customer data in the E0 tier:
cognitive-services Cognitive Services Virtual Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-virtual-networks.md
Virtual networks (VNETs) are supported in [regions where Cognitive Services are
> [!NOTE]
-> If you're using LUIS or Speech Services, the **CognitiveServicesManagement** tag only enables you use the service using the SDK or REST API. To access and use LUIS portal and/or Speech Studio from a virtual network, you will need to use the following tags:
+> If you're using LUIS, Speech Services, or Language services, the **CognitiveServicesManagement** tag only enables you to use the service through the SDK or REST API. To access and use the LUIS portal, Speech Studio, or Language Studio from a virtual network, you'll need to use the following tags:
> * **AzureActiveDirectory** > * **AzureFrontDoor.Frontend** > * **AzureResourceManager**
cognitive-services Container Image Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/containers/container-image-tags.md
This container image has the following tags available. You can also find a full
# [Latest version](#tab/current)
-* Release notes for version `1.7.0`:
- * Update langauge detection engine, and fix the support of throttling rate for continuous accuracy mode
+* Release notes for version `1.9.0`:
+ * Bug fixes
| Image Tags | Notes |
||:|
| `latest` | |
-| `1.8.0-amd64-preview` | |
+| `1.9.0-amd64-preview` | |
# [Previous versions](#tab/previous)
cognitive-services Data Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/data-limits.md
+ Last updated 02/25/2022
The following limit specifies the maximum number of characters that can be in a
| Feature | Value |
|||
+| Conversation summarization | 7,000 characters as measured by [StringInfo.LengthInTextElements](/dotnet/api/system.globalization.stringinfo.lengthintextelements).|
| Text Analytics for health | 30,720 characters as measured by [StringInfo.LengthInTextElements](/dotnet/api/system.globalization.stringinfo.lengthintextelements). |
| All other pre-configured features (synchronous) | 5,120 as measured by [StringInfo.LengthInTextElements](/dotnet/api/system.globalization.stringinfo.lengthintextelements). |
| All other pre-configured features ([asynchronous](use-asynchronously.md)) | 125,000 characters across all submitted documents, as measured by [StringInfo.LengthInTextElements](/dotnet/api/system.globalization.stringinfo.lengthintextelements) (maximum of 25 documents). |
Exceeding the following document limits will generate an HTTP 400 error code.
| Feature | Max Documents Per Request |
|-|--|
+| Conversation summarization | 1 |
| Language Detection | 1000 |
| Sentiment Analysis | 10 |
| Opinion Mining | 10 |
Requests rates are measured for each feature separately. You can send the maximu
## See also * [What is Azure Cognitive Service for Language](../overview.md)
-* [Pricing details](https://aka.ms/unifiedLanguagePricing)
+* [Pricing details](https://aka.ms/unifiedLanguagePricing)
cognitive-services Encryption Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/encryption-data-at-rest.md
+
+ Title: Language service encryption of data at rest
+description: Learn how the Language service encrypts your data when it's persisted to the cloud.
+++++ Last updated : 05/24/2022+
+#Customer intent: As a user of the Language service, I want to learn how encryption at rest works.
++
+# Language services encryption of data at rest
+
+The Language services automatically encrypt your data when it's persisted to the cloud. This encryption protects your data and helps you meet your organizational security and compliance commitments.
+
+## About Cognitive Services encryption
+
+Data is encrypted and decrypted using [FIPS 140-2](https://en.wikipedia.org/wiki/FIPS_140-2) compliant [256-bit AES](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard) encryption. Encryption and decryption are transparent, meaning encryption and access are managed for you. Your data is secure by default and you don't need to modify your code or applications to take advantage of encryption.
+
+## About encryption key management
+
+By default, your subscription uses Microsoft-managed encryption keys. There is also the option to manage your subscription with your own keys called customer-managed keys (CMK). CMK offers greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data.
+
+## Customer-managed keys with Azure Key Vault
+
+There is also an option to manage your subscription with your own keys. Customer-managed keys (CMK), also known as Bring your own key (BYOK), offer greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data.
+
+You must use Azure Key Vault to store your customer-managed keys. You can either create your own keys and store them in a key vault, or you can use the Azure Key Vault APIs to generate keys. The Cognitive Services resource and the key vault must be in the same region and in the same Azure Active Directory (Azure AD) tenant, but they can be in different subscriptions. For more information about Azure Key Vault, see [What is Azure Key Vault?](/azure/key-vault/general/overview).
+
+### Customer-managed keys for Language services
+
+To request the ability to use customer-managed keys, fill out and submit the [Language Service Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk). It will take approximately 3-5 business days to hear back on the status of your request. Depending on demand, you may be placed in a queue and approved as space becomes available. Once approved for using CMK with Language services, you'll need to create a new Language resource from the Azure portal.
++
+### Enable customer-managed keys
+
+A new Cognitive Services resource is always encrypted using Microsoft-managed keys. It's not possible to enable customer-managed keys at the time that the resource is created. Customer-managed keys are stored in Azure Key Vault, and the key vault must be provisioned with access policies that grant key permissions to the managed identity that is associated with the Cognitive Services resource. The managed identity is available only after the resource is created using the Pricing Tier for CMK.
+
+To learn how to use customer-managed keys with Azure Key Vault for Cognitive Services encryption, see:
+
+- [Configure customer-managed keys with Key Vault for Cognitive Services encryption from the Azure portal](../../encryption/cognitive-services-encryption-keys-portal.md)
+
+Enabling customer managed keys will also enable a system assigned managed identity, a feature of Azure AD. Once the system assigned managed identity is enabled, this resource will be registered with Azure Active Directory. After being registered, the managed identity will be given access to the Key Vault selected during customer managed key setup. You can learn more about [Managed Identities](../../../active-directory/managed-identities-azure-resources/overview.md).
+
+> [!IMPORTANT]
+> If you disable system assigned managed identities, access to the key vault will be removed and any data encrypted with the customer keys will no longer be accessible. Any features that depend on this data will stop working.
+
+> [!IMPORTANT]
+> Managed identities do not currently support cross-directory scenarios. When you configure customer-managed keys in the Azure portal, a managed identity is automatically assigned under the covers. If you subsequently move the subscription, resource group, or resource from one Azure AD directory to another, the managed identity associated with the resource is not transferred to the new tenant, so customer-managed keys may no longer work. For more information, see **Transferring a subscription between Azure AD directories** in [FAQs and known issues with managed identities for Azure resources](../../../active-directory/managed-identities-azure-resources/known-issues.md#transferring-a-subscription-between-azure-ad-directories).
+
+### Store customer-managed keys in Azure Key Vault
+
+To enable customer-managed keys, you must use an Azure Key Vault to store your keys. You must enable both the **Soft Delete** and **Do Not Purge** properties on the key vault.
+
+Only RSA keys of size 2048 are supported with Cognitive Services encryption. For more information about keys, see **Key Vault keys** in [About Azure Key Vault keys, secrets and certificates](../../../key-vault/general/about-keys-secrets-certificates.md).
+
+### Rotate customer-managed keys
+
+You can rotate a customer-managed key in Azure Key Vault according to your compliance policies. When the key is rotated, you must update the Cognitive Services resource to use the new key URI. To learn how to update the resource to use a new version of the key in the Azure portal, see the section titled **Update the key version** in [Configure customer-managed keys for Cognitive Services by using the Azure portal](../../encryption/cognitive-services-encryption-keys-portal.md).
+
+Rotating the key does not trigger re-encryption of data in the resource. There is no further action required from the user.
+
+### Revoke access to customer-managed keys
+
+To revoke access to customer-managed keys, use PowerShell or Azure CLI. For more information, see [Azure Key Vault PowerShell](/powershell/module/az.keyvault//) or [Azure Key Vault CLI](/cli/azure/keyvault). Revoking access effectively blocks access to all data in the Cognitive Services resource, as the encryption key is inaccessible by Cognitive Services.
+
+## Next steps
+
+* [Language Service Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk)
+* [Learn more about Azure Key Vault](../../../key-vault/general/overview.md)
cognitive-services Model Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/model-lifecycle.md
+ Previously updated : 04/21/2022 Last updated : 05/09/2022
Language service features utilize AI models that are versioned. We update the language service with new model versions to improve accuracy, support, and quality. As models become older, they are retired. Use this article for information on that process, and what you can expect for your applications.
+## Prebuilt features
+ ### Expiration timeline
-As new models and new functionality become available and older, less accurate models are retired, see the following timelines for model and endpoint expiration:
+Our standard (not customized) language service is built upon AI models that we call pre-trained models. We update the language service with new model versions every few months to improve model accuracy, support, and quality.
+
+Pre-built Model Capabilities: As new models and new functionality become available, older and less accurate models are retired. Unless otherwise noted, retired pre-built models will be automatically updated to the newest model version.
-New Language AI models are being released every few months. So, an expiration of any publicly available model is six months after a deprecation notice is issued followed by new model-version release.
+During the model version deprecation period, API calls to the soon-to-be retired model versions will return a warning. After model-version deprecation, API calls to deprecated model-versions will return responses using the newest model version with an additional warning message. So, your implementation will never break, but results might change.
-Model-version retirement period is defined from a release of a newer model-version for the capability until a specific older version is deprecated. This period is defined as six months for stable model versions, and three months for previews. For example, a stable model-version `2021-01-01` will be deprecated six months after a successor model-version `2021-07-01` is released, on January 1, 2022. Preview capabilities in preview APIs do not maintain a minimum retirement period and can be deprecated at any time.
+The model-version retirement period is defined as: the period of time from a release of a newer model-version for the capability, until a specific older version is deprecated. This period is defined as six months for stable model versions, and three months for previews. For example, a stable model-version `2021-01-01` will be deprecated six months after a successor model-version `2021-07-01` is released, on January 1, 2022. Preview capabilities in preview APIs do not maintain a minimum retirement period and can be deprecated at any time.
-After model-version deprecation, API calls to deprecated model-versions will return an error.
-## Choose the model-version used on your data
+#### Choose the model-version used on your data
-By default, API requests will use the latest Generally Available model. You can use an optional parameter to select the version of the model to be used. To use preview versions, you must specify this parameter.
+By default, API requests will use the latest Generally Available model. You can use an optional parameter to select the version of the model to be used.
> [!TIP] > If you're using the SDK for C#, Java, JavaScript or Python, see the reference documentation for information on the appropriate model-version parameter.- For synchronous endpoints, use the `model-version` query parameter. For example: POST `<resource-url>/text/analytics/v3.1/sentiment?model-version=2021-10-01-preview`.
For asynchronous endpoints, use the `model-version` property in the request body
The model-version used in your API request will be included in the response object.
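As a minimal sketch of the synchronous case, the request below pins Sentiment Analysis to a specific model version with the `model-version` query parameter. The endpoint and key are placeholders, and the version string should be one listed in the table that follows.

```python
import requests

endpoint = "https://<your-language-resource>.cognitiveservices.azure.com"  # placeholder
key = "<your-key>"  # placeholder

response = requests.post(
    f"{endpoint}/text/analytics/v3.1/sentiment",
    params={"model-version": "2021-10-01"},  # pin a specific model version
    headers={"Ocp-Apim-Subscription-Key": key},
    json={"documents": [{"id": "1", "language": "en", "text": "The rooms were beautiful."}]},
)
print(response.json().get("modelVersion"))  # the model version used is echoed in the response
```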
+> [!NOTE]
+> If you are using a model version that is not listed in the table, it has been subject to the expiration policy.
-## Available versions
-
-Use the table below to find which model versions are supported by each feature.
+Use the table below to find which model versions are supported by each feature:
| Feature | Supported versions | Latest Generally Available version | Latest preview version |
|--||||
-| Custom text classification | `2021-11-01-preview` | | `2021-11-01-preview` |
-| Conversational language understanding | `2021-11-01-preview` | | `2021-11-01-preview` |
-| Sentiment Analysis and opinion mining | `2019-10-01`, `2020-04-01`, `2021-10-01` | `2021-10-01` | |
-| Language Detection | `2019-10-01`, `2020-07-01`, `2020-09-01`, `2021-01-05` | `2021-01-05` | |
-| Entity Linking | `2019-10-01`, `2020-02-01` | `2020-02-01` | |
-| Named Entity Recognition (NER) | `2019-10-01`, `2020-02-01`, `2020-04-01`,`2021-01-15`,`2021-06-01` | `2021-06-01` | |
-| Custom NER | `2021-11-01-preview` | | `2021-11-01-preview` |
-| Personally Identifiable Information (PII) detection | `2019-10-01`, `2020-02-01`, `2020-04-01`,`2020-07-01`, `2021-01-15` | `2021-01-15` | |
-| Question answering | `2021-10-01` | `2021-10-01` |
-| Text Analytics for health | `2021-05-15`, `2022-03-01` | `2022-03-01` | |
-| Key phrase extraction | `2019-10-01`, `2020-07-01`, `2021-06-01` | `2021-06-01` | |
+| Sentiment Analysis and opinion mining | `2019-10-01` | `2021-10-01` | |
+| Language Detection | `2021-11-20` | `2021-11-20` | |
+| Entity Linking | `2021-06-01` | `2021-06-01` | |
+| Named Entity Recognition (NER) | `2021-06-01` | `2021-06-01` | |
+| Personally Identifiable Information (PII) detection | `2020-07-01`, `2021-01-15` | `2021-01-15` | |
+| PII detection for conversations (Preview) | `2022-05-15-preview` | | `2022-05-15-preview` |
+| Question answering | `2021-10-01` | `2021-10-01` | |
+| Text Analytics for health | `2021-05-15`, `2022-03-01` | `2022-03-01` | |
+| Key phrase extraction | `2021-06-01` | `2021-06-01` | |
| Text summarization | `2021-08-01` | `2021-08-01` | | +
+## Custom features
+
+### Expiration timeline
+
+As new training configs and new functionality become available, older and less accurate configs are retired. See the following timelines for config expiration:
+
+New configs are released every few months, so any publicly available training config expires **six months** after its release. If you have assigned a trained model to a deployment, that deployment expires **twelve months** after the training config expires.
+
+After a training config version expires, API calls that use it will return an error. By default, training requests use the latest available training config version. To change the config version, use `trainingConfigVersion` when submitting a training job and assign the version you want.
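For example, a training request can pin the config version explicitly. The sketch below assumes the conversational language understanding authoring route, and the field values other than `trainingConfigVersion` are illustrative; check the authoring REST reference for the exact request shape for your feature.

```python
import requests

endpoint = "https://<your-language-resource>.cognitiveservices.azure.com"  # placeholder
key = "<your-key>"          # placeholder
project = "<your-project>"  # placeholder

# Submit a training job that pins a specific training config version.
response = requests.post(
    f"{endpoint}/language/authoring/analyze-conversations/projects/{project}/:train",
    params={"api-version": "2022-05-01"},
    headers={"Ocp-Apim-Subscription-Key": key},
    json={
        "modelLabel": "model-v1",               # illustrative
        "trainingMode": "standard",             # illustrative
        "trainingConfigVersion": "2022-05-01",  # the config version to train with
    },
)
print(response.status_code, response.headers.get("operation-location"))
```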
+
+> [!Tip]
+> It's recommended to use the latest supported config version
+
+You can train and deploy a custom AI model from the date of training config version release, up until the **Training config expiration** date. After this date, you will have to use another supported training config version for submitting any training or deployment jobs.
+
+Deployment expiration is the date after which your deployed model becomes unavailable for prediction.
+
+Use the table below to find which model versions are supported by each feature:
+
+| Feature | Supported Training config versions | Training config expiration | Deployment expiration |
+||--|||
+| Custom text classification | `2022-05-01` | `10/28/2022` | `10/28/2023` |
+| Conversational language understanding | `2022-05-01` | `10/28/2022` | `10/28/2023` |
+| Custom named entity recognition | `2022-05-01` | `10/28/2022` | `10/28/2023` |
+| Orchestration workflow | `2022-05-01` | `10/28/2022` | `10/28/2023` |
++
+## API versions
+
+When you're making API calls to the following features, you need to specify the `API-VERSION` you want to use to complete your request. It's recommended to use the latest available API version.
+
+If you're using [Language Studio](https://aka.ms/languageStudio) to build your project, it uses the latest available API version. If you need to use another API version, you can only do so by calling the APIs directly.
+
+Use the table below to find which API versions are supported by each feature:
+
+| Feature | Supported versions | Latest Generally Available version | Latest preview version |
+|--||||
+| Custom text classification | `2022-03-01-preview` | | `2022-03-01-preview` |
+| Conversational language understanding | `2022-03-01-preview` | | `2022-03-01-preview` |
+| Custom named entity recognition | `2022-03-01-preview` | | `2022-03-01-preview` |
+| Orchestration workflow | `2022-03-01-preview` | | `2022-03-01-preview` |
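In a direct API call, the version is passed as the `api-version` query parameter. The sketch below lists authoring projects under an assumed route with placeholder values; substitute a version from the table above and confirm the route in the REST reference for the feature you're using.

```python
import requests

endpoint = "https://<your-language-resource>.cognitiveservices.azure.com"  # placeholder
key = "<your-key>"  # placeholder

response = requests.get(
    f"{endpoint}/language/authoring/analyze-conversations/projects",
    params={"api-version": "2022-03-01-preview"},  # pick a version from the table above
    headers={"Ocp-Apim-Subscription-Key": key},
)
print(response.json())
```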
++ ## Next steps [Azure Cognitive Service for Language overview](../overview.md)
cognitive-services Backwards Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/concepts/backwards-compatibility.md
- Previously updated : 03/03/2022+ Last updated : 05/13/2022 # Backwards compatibility with LUIS applications
-You can reuse some of the content of your existing LUIS applications in [Conversational Language Understanding](../overview.md). When working with Conversational Language Understanding projects, you can:
-* Create CLU conversation projects from LUIS application JSON files.
+You can reuse some of the content of your existing [LUIS](../../../LUIS/what-is-luis.md) applications in [conversational language understanding](../overview.md). When working with conversational language understanding projects, you can:
+* Create conversational language understanding conversation projects from LUIS application JSON files.
* Create LUIS applications that can be connected to [orchestration workflow](../../orchestration-workflow/overview.md) projects. > [!NOTE]
You can reuse some of the content of your existing LUIS applications in [Convers
## Import a LUIS application JSON file into Conversational Language Understanding
+### [Language Studio](#tab/studio)
+ To import a LUIS application JSON file, click on the icon next to **Create a new project** and select **Import**. Then select the LUIS file. When you import a new project into Conversational Language Understanding, you can select an exported LUIS application JSON file, and the service will automatically create a project with the currently available features. :::image type="content" source="../media/import.png" alt-text="A screenshot showing the import button for conversation projects." lightbox="../media/import.png":::
-### Supported features
-When importing the LUIS JSON application into CLU, it will create a **Conversations** project with the following features will be selected:
+### [REST API](#tab/rest-api)
++++
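A rough sketch of the REST flow is shown below: it posts an exported LUIS application JSON file to the project import operation. The route, the `format` query parameter, and all placeholder values are assumptions based on the authoring API pattern, so verify them against the import REST reference before use.

```python
import json
import requests

endpoint = "https://<your-language-resource>.cognitiveservices.azure.com"  # placeholder
key = "<your-key>"          # placeholder
project = "<your-project>"  # placeholder

with open("exported-luis-app.json", encoding="utf-8") as f:  # your exported LUIS application
    luis_app = json.load(f)

response = requests.post(
    f"{endpoint}/language/authoring/analyze-conversations/projects/{project}/:import",
    params={"api-version": "2022-03-01-preview", "format": "luis"},  # assumed parameters
    headers={"Ocp-Apim-Subscription-Key": key},
    json=luis_app,
)
print(response.status_code, response.headers.get("operation-location"))
```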
+## Supported features
+When you import the LUIS JSON application into conversational language understanding, it will create a **Conversations** project, and the following features will be selected:
|**Feature**|**Notes**|
-| :- | :- |
-|Intents|All of your intents will be transferred as CLU intents with the same names.|
-|ML entities|All of your ML entities will be transferred as CLU entities with the same names. The labels will be persisted and used to train the Learned component of the entity. Structured ML entities will transfer over the leaf nodes of the structure as different entities and apply their labels accordingly.|
-|Utterances|All of your LUIS utterances will be transferred as CLU utterances with their intent and entity labels. Structured ML entity labels will only consider the top-level entity labels, and the individual sub-entity labels will be ignored.|
+| :- | :- |
+|Intents|All of your intents will be transferred as conversational language understanding intents with the same names.|
+|ML entities|All of your ML entities will be transferred as conversational language understanding entities with the same names. The labels will be persisted and used to train the Learned component of the entity. Structured ML entities will transfer over the leaf nodes of the structure as different entities and apply their labels accordingly.|
+|Utterances|All of your LUIS utterances will be transferred as conversational language understanding utterances with their intent and entity labels. Structured ML entity labels will only consider the top-level entity labels, and the individual subentity labels will be ignored.|
|Culture|The primary language of the Conversation project will be the LUIS app culture. If the culture is not supported, the importing will fail. |
-|List entities|All of your list entities will be transferred as CLU entities with the same names. The normalized values and synonyms of each list will be transferred as keys and synonyms in the list component for the CLU entity.|
-|Prebuilt entities|All of your prebuilt entities will be transferred as CLU entities with the same names. The CLU entity will have the relevant [prebuilt entities](entity-components.md#prebuilt-component) enabled if they are supported. |
-|Required entity features in ML entities|If you had a prebuilt entity or a list entity as a required feature to another ML entity, then the ML entity will be transferred as a CLU entity with the same name and its labels will apply. The CLU entity will include the required feature entity as a component. The [overlap method](entity-components.md#overlap-methods) will be set as "Exact Overlap" for the CLU entity.|
-|Non-required entity features in ML entities|If you had a prebuilt entity or a list entity as a non-required feature to another ML entity, then the ML entity will be transferred as a CLU entity with the same name and its ML labels will apply. If an ML entity was used as a feature to another ML entity, it will not be transferred over.|
-|Roles|All of your roles will be transferred as CLU entities with the same names. Each role will be its own CLU entity. The role's entity type will determine which component is populated for the role. Roles on prebuilt entities will transfer as CLU entities with the prebuilt entity component enabled and the role labels transferred over to train the Learned component. Roles on list entities will transfer as CLU entities with the list entity component populated and the role labels transferred over to train the Learned component. Roles on ML entities will be transferred as CLU entities with their labels applied to train the Learned component of the entity. |
+|List entities|All of your list entities will be transferred as conversational language understanding entities with the same names. The normalized values and synonyms of each list will be transferred as keys and synonyms in the list component for the conversational language understanding entity.|
+|Prebuilt entities|All of your prebuilt entities will be transferred as conversational language understanding entities with the same names. The conversational language understanding entity will have the relevant [prebuilt entities](entity-components.md#prebuilt-component) enabled if they are supported. |
+|Required entity features in ML entities|If you had a prebuilt entity or a list entity as a required feature to another ML entity, then the ML entity will be transferred as a conversational language understanding entity with the same name and its labels will apply. The conversational language understanding entity will include the required feature entity as a component. The [overlap method](entity-components.md#entity-options) will be set as "Exact Overlap" for the conversational language understanding entity.|
+|Non-required entity features in ML entities|If you had a prebuilt entity or a list entity as a non-required feature to another ML entity, then the ML entity will be transferred as a conversational language understanding entity with the same name and its ML labels will apply. If an ML entity was used as a feature to another ML entity, it will not be transferred over.|
+|Roles|All of your roles will be transferred as conversational language understanding entities with the same names. Each role will be its own conversational language understanding entity. The role's entity type will determine which component is populated for the role. Roles on prebuilt entities will transfer as conversational language understanding entities with the prebuilt entity component enabled and the role labels transferred over to train the Learned component. Roles on list entities will transfer as conversational language understanding entities with the list entity component populated and the role labels transferred over to train the Learned component. Roles on ML entities will be transferred as conversational language understanding entities with their labels applied to train the Learned component of the entity. |
-### Unsupported features
+## Unsupported features
-When importing the LUIS JSON application into CLU, certain features will be ignored, but they will not block you from importing the application. The following features will be ignored:
+When you import the LUIS JSON application into conversational language understanding, certain features will be ignored, but they will not block you from importing the application. The following features will be ignored:
|**Feature**|**Notes**|
-| :- | :- |
-|Application Settings|The settings such as Normalize Punctuation, Normalize Diacritics, and Use All Training Data were meant to improve predictions for intents and entities. The new models in CLU are not sensitive to small changes such as punctuation and are therefore not available as settings.|
-|Features|Phrase list features and features to intents will all be ignored. Features were meant to introduce semantic understanding for LUIS that CLU can provide out of the box with its new models.|
-|Patterns|Patterns were used to cover for lack of quality in intent classification. The new models in CLU are expected to perform better without needing patterns.|
-|Pattern.Any Entities|Pattern.Any entities were used to cover for lack of quality in ML entity extraction. The new models in CLU are expected to perform better without needing pattern.any entities.|
+| :- | :- |
+|Application Settings|The settings such as Normalize Punctuation, Normalize Diacritics, and Use All Training Data were meant to improve predictions for intents and entities. The new models in conversational language understanding are not sensitive to small changes such as punctuation and are therefore not available as settings.|
+|Features|Phrase list features and features to intents will all be ignored. Features were meant to introduce semantic understanding for LUIS that conversational language understanding can provide out of the box with its new models.|
+|Patterns|Patterns were used to cover for lack of quality in intent classification. The new models in conversational language understanding are expected to perform better without needing patterns.|
+|`Pattern.Any` Entities|`Pattern.Any` entities were used to cover for lack of quality in ML entity extraction. The new models in conversational language understanding are expected to perform better without needing `Pattern.Any` entities.|
|Regex Entities| Not currently supported | |Structured ML Entities| Not currently supported |
-## Use a published LUIS application in Conversational Language Understanding orchestration projects
+## Use a published LUIS application in orchestration workflow projects
You can only connect to published LUIS applications that are owned by the same Language resource that you use for Conversational Language Understanding. You can change the authoring resource to a Language **S** resource in **West Europe** applications. See the [LUIS documentation](../../../luis/luis-how-to-azure-subscription.md#assign-luis-resources) for steps on assigning a different resource to your LUIS application. You can also export then import the LUIS applications into your Language resource. You must train and publish LUIS applications for them to appear in Conversational Language Understanding when you want to connect them to orchestration projects.
cognitive-services Data Formats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/concepts/data-formats.md
+
+ Title: conversational language understanding data formats
+
+description: Learn about the data formats accepted by conversational language understanding.
++++++ Last updated : 05/13/2022++++
+# Data formats accepted by conversational language understanding
++
+If you're uploading your data into CLU, it has to follow a specific format. Use this article to learn more about accepted data formats.
+
+## Import project file format
+
+If you're [importing a project](../how-to/create-project.md#import-project) into CLU, the uploaded file has to be in the following format.
+
+```json
+{
+ "projectFileVersion": "2022-05-01",
+ "stringIndexType": "Utf16CodeUnit",
+ "metadata": {
+ "projectKind": "Conversation",
+ "projectName": "{PROJECT-NAME}",
+ "multilingual": true,
+ "description": "DESCRIPTION",
+ "language": "{LANGUAGE-CODE}"
+ },
+ "assets": {
+ "projectKind": "Conversation",
+ "intents": [
+ {
+ "category": "intent1"
+ }
+ ],
+ "entities": [
+ {
+ "category": "entity1",
+ "compositionSetting": "requireExactOverlap",
+ "list": {
+ "sublists": [
+ {
+ "listKey": "list1",
+ "synonyms": [
+ {
+ "language": "{LANGUAGE-CODE}",
+ "values": [
+ "{VALUES-FOR-LIST}"
+ ]
+ }
+ ]
+ }
+ ]
+ },
+ "prebuilts": [
+ {
+ "category": "PREBUILT1"
+ }
+ ]
+ }
+ ],
+ "utterances": [
+ {
+ "text": "utterance1",
+ "intent": "intent1",
+ "language": "{LANGUAGE-CODE}",
+ "dataset": "{DATASET}",
+ "entities": [
+ {
+ "category": "ENTITY1",
+ "offset": 6,
+ "length": 4
+ }
+ ]
+ }
+ ]
+ }
+}
+
+```
+
+|Key |Placeholder |Value | Example |
+|||-|--|
+| `api-version` | `{API-VERSION}` | The version of the API you're calling. The value referenced here is for the latest released [model version](../../concepts/model-lifecycle.md#choose-the-model-version-used-on-your-data) released. | `2022-03-01-preview` |
+|`confidenceThreshold`|`{CONFIDENCE-THRESHOLD}`|This is the threshold score below which the intent will be predicted as [none intent](none-intent.md)|`0.7`|
+| `projectName` | `{PROJECT-NAME}` | The name of your project. This value is case-sensitive. | `EmailApp` |
+| `multilingual` | `true`| A boolean value that enables you to have documents in multiple languages in your dataset. When your model is deployed, you can query the model in any supported language (not necessarily included in your training documents). See [Language support](../language-support.md#multi-lingual-option) for more information about supported language codes. | `true`|
+|`sublists`|`[]`|Array containing the sublists of the list entity.|`[]`|
+|`synonyms`|`[]`|Array containing all the synonyms|synonym|
+| `language` | `{LANGUAGE-CODE}` | A string specifying the language code for the utterances used in your project. If your project is a multilingual project, choose the [language code](../language-support.md) of the majority of the utterances. |`en-us`|
+| `intents` | `[]` | Array containing all the intents you have in the project. These are the intent types that will be extracted from your utterances.| `[]` |
+| `entities` | `[]` | Array containing all the entities in your project. These are the entities that will be extracted from your utterances.| `[]` |
+| `dataset` | `{DATASET}` | The dataset split this utterance is assigned to before training. Learn more about data splitting [here](../how-to/train-model.md#data-splitting). Possible values for this field are `Train` and `Test`. |`Train`|
+| `category` | ` ` | The type of entity associated with the span of text specified. | `Entity1`|
+| `offset` | ` ` | The inclusive character position of the start of the entity. |`5`|
+| `length` | ` ` | The character length of the entity. |`5`|
+| `language` | `{LANGUAGE-CODE}` | A string specifying the language code for the utterances used in your project. If your project is a multilingual project, choose the [language code](../language-support.md) of the majority of the utterances. |`en-us`|
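To make the `offset` and `length` fields concrete, here's a small sketch that computes them for a hypothetical utterance and entity and builds the matching utterance object; the intent and entity names are illustrative.

```python
utterance = "Send an email to Matt"   # hypothetical utterance
entity_text = "Matt"                  # span labeled with a hypothetical "ContactName" entity

offset = utterance.index(entity_text)  # 17: inclusive start position of the entity
length = len(entity_text)              # 4: character length of the entity

labeled_utterance = {
    "text": utterance,
    "intent": "SendEmail",            # illustrative intent name
    "language": "en-us",
    "dataset": "Train",
    "entities": [
        {"category": "ContactName", "offset": offset, "length": length}
    ],
}
print(labeled_utterance)
```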
+
+## Utterance file format
+
+CLU offers the option to upload your utterances directly to the project rather than typing them in one by one. You can find this option on the [data labeling](../how-to/tag-utterances.md) page for your project.
+
+```json
+[
+ {
+ "text": "{Utterance-Text}",
+ "language": "{LANGUAGE-CODE}",
+ "dataset": "{DATASET}",
+ "intent": "{intent}",
+ "entities": [
+ {
+ "entityName": "{entity}",
+ "offset": 19,
+ "length": 10
+ }
+ ]
+ },
+ {
+ "text": "{Utterance-Text}",
+ "language": "{LANGUAGE-CODE}",
+ "dataset": "{DATASET}",
+ "intent": "{intent}",
+ "entities": [
+ {
+ "entityName": "{entity}",
+ "offset": 20,
+ "length": 10
+ },
+ {
+ "entityName": "{entity}",
+ "offset": 31,
+ "length": 5
+ }
+ ]
+ }
+]
+
+```
+
+|Key |Placeholder |Value | Example |
+|||-|--|
+|`text`|`{Utterance-Text}`|Your utterance text|Testing|
+| `language` | `{LANGUAGE-CODE}` | A string specifying the language code for the utterances used in your project. If your project is a multilingual project, choose the language code of the majority of the utterances. See [Language support](../language-support.md) for more information about supported language codes. |`en-us`|
+| `dataset` | `{DATASET}` | The dataset split this utterance is assigned to before training. Learn more about data splitting [here](../how-to/train-model.md#data-splitting). Possible values for this field are `Train` and `Test`. |`Train`|
+|`intent`|`{intent}`|The assigned intent| intent1|
+|`entity`|`{entity}`|Entity to be extracted| entity1|
+| `category` | ` ` | The type of entity associated with the span of text specified. | `Entity1`|
+| `offset` | ` ` | The inclusive character position of the start of the entity. |`0`|
+| `length` | ` ` | The character length of the entity. |`5`|
++
+## Next steps
+
+* You can import your labeled data into your project directly. See [import project](../how-to/create-project.md#import-project) for more information.
+* See the [how-to article](../how-to/tag-utterances.md) for more information about labeling your data. When you're done labeling your data, you can [train your model](../how-to/train-model.md).
cognitive-services Entity Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/concepts/entity-components.md
Previously updated : 11/02/2021 Last updated : 05/13/2022 # Entity components
-In Conversational Language Understanding, entities are relevant pieces of information that are extracted from your utterances. An entity can be extracted by different methods. They can be learned through context, matched from a list, or detected by a prebuilt recognized entity. Every entity in your project is composed of one or more of these methods, which are defined as your entity's components. When an entity is defined by more than one component, their predictions can overlap. You can determine the behavior of an entity prediction when its components overlap by using a fixed set of options in the **Overlap Method**.
+In Conversational Language Understanding, entities are relevant pieces of information that are extracted from your utterances. An entity can be extracted by different methods. They can be learned through context, matched from a list, or detected by a prebuilt recognized entity. Every entity in your project is composed of one or more of these methods, which are defined as your entity's components. When an entity is defined by more than one component, their predictions can overlap. You can determine the behavior of an entity prediction when its components overlap by using a fixed set of options in the **Entity options**.
## Component types
The learned component uses the entity tags you label your utterances with to tra
### List component
-The list component represents a fixed, closed set of related words along with their synonyms. The component performs an exact text match against the list of values you provide as synonyms. Each synonym belongs to a "list key" which can be used as the normalized, standard value for the synonym that will return in the output if the list component is matched. List keys are **not** used for matching.
+The list component represents a fixed, closed set of related words along with their synonyms. The component performs an exact text match against the list of values you provide as synonyms. Each synonym belongs to a "list key", which can be used as the normalized, standard value for the synonym that will return in the output if the list component is matched. List keys are **not** used for matching.
+
+In multilingual projects, you can specify a different set of synonyms for each language. While using the prediction API, you can specify the language in the input request, which will only match the synonyms associated to that language.
:::image type="content" source="../media/list-component.png" alt-text="A screenshot showing an example of list components for entities." lightbox="../media/list-component.png"::: ### Prebuilt component
-The prebuilt component allows you to select from a library of common types such as numbers, datetimes, and names. When added, a prebuilt component is automatically detected. You can have up to 5 prebuilt components per entity. See [the list of supported prebuilt components](../prebuilt-component-reference.md) for more information.
+The prebuilt component allows you to select from a library of common types such as numbers, datetimes, and names. When added, a prebuilt component is automatically detected. You can have up to five prebuilt components per entity. See [the list of supported prebuilt components](../prebuilt-component-reference.md) for more information.
:::image type="content" source="../media/prebuilt-component.png" alt-text="A screenshot showing an example of prebuilt components for entities." lightbox="../media/prebuilt-component.png":::
-## Overlap methods
+## Entity options
When multiple components are defined for an entity, their predictions may overlap. When an overlap occurs, each entity's final prediction is determined by one of the following options.
-### Longest overlap
-
-When two or more components are found in the text and the overlap method is used, the component with the **longest set of characters** is returned.
-
-This option is best used when you're interested in extracting the longest possible prediction by the different components. This method guarantees that whenever there is confusion (overlap), the returned component will be the longest.
-
-#### Examples
-
-If "Palm Beach" was matched by the List component and "Palm Beach Extension" was predicted by the Learned component, then "**Palm Beach Extension**" is returned because it is the longest set of characters in this overlap.
--
-If "Palm Beach" was matched by the List component and "Beach Extension" was predicted by the Learned component, then "**Beach Extension**" is returned because it is the component with longest set of characters in this overlap.
--
-If "Palm Beach" was matched from the List component and "Extension" was predicted by the Learned component, then 2 separate instances of the entities are returned, as there is no overlap between them: one for "**Palm Beach**" and one for "**Extension**", as no overlap has occurred in this instance.
--
-### Exact overlap
+### Combine components
-All components must overlap at the **exact same characters** in the text for the entity to return. If one of the defined components is not matched or predicted, the entity will not return.
+Combine components as one entity when they overlap by taking the union of all the components.
-This option is best when you have a strict entity that needs to have several components detected at the same time to be extracted.
+Use this option to combine all components when they overlap. When components are combined, you get all the extra information that's tied to a list or prebuilt component when they are present.
-#### Examples
+#### Example
-If "Palm Beach" was matched by the list component and "Palm Beach" was predicted by the learned component, and those were the only 2 components defined in the entity, then "**Palm Beach**" is returned because all the components overlapped at the exact same characters.
+Suppose you have an entity called Software that has a list component, which contains "Proseware OS" as an entry. In your utterance data, you have "I want to buy Proseware OS 9" with "Proseware OS 9" tagged as Software:
-If "Palm Beach" was matched by the list component and "Beach Extension" was predicted by the learned component, then the entity is **not** returned because all the components did not overlap at the exact same characters.
+By using combine components, the entity will return with the full context as "Proseware OS 9" along with the key from the list component:
-If "Palm Beach" was matched from the list component and "Extension" was predicted by the learned component, then the entity is **not** returned because no overlap has occurred in this instance.
+Suppose you had the same utterance but only "OS 9" was predicted by the learned component:
-### Union overlap
+With combine components, the entity will still return as "Proseware OS 9" with the key from the list component:
-When two or more components are found in the text and overlap, the **union** of the components' spans are returned.
-This option is best when you're optimizing for recall and attempting to get the longest possible match that can be combined.
-#### Examples
+### Do not combine components
-If "Palm Beach" was matched by the list component and "Palm Beach Extension" was predicted by the learned component, then "**Palm Beach Extension**" is returned because the first character at the beginning of the overlap is the "P" in "Palm", and the last letter at the end of the overlapping components is the "n" in "Extension".
+Each overlapping component will return as a separate instance of the entity. Apply your own logic after prediction with this option.
+#### Example
-If "Palm Beach" was matched by the list component and "Beach Extension" was predicted by the learned component, then "**Palm Beach Extension**" is returned because the first character at the beginning of the overlap is the "P" in "Palm", and the last letter at the end of the overlapping components is the "n" in "Extension".
+Suppose you have an entity called Software that has a list component, which contains "Proseware Desktop" as an entry. In your utterance data, you have "I want to buy Proseware Desktop Pro" with "Proseware Desktop Pro" tagged as Software:
-If "New York" was predicted by the prebuilt component, "York Beach" was matched by the list component, and "Beach Extension" was predicted by the learned component, then " __**New York Beach Extension**__" is returned because the first character at the beginning of the overlap is the "N" in "New" and the last letter at the end of the overlapping components is the "n" in "Extension".
+When you do not combine components, the entity will return twice:
-### Return all separately
-Every component's match or prediction is returned as a **separate instance** of the entity.
+> [!NOTE]
+> During public preview of the service, there were 4 available options: **Longest overlap**, **Exact overlap**, **Union overlap**, and **Return all separately**. **Longest overlap** and **exact overlap** are deprecated and will only be supported for projects that previously had those options selected. **Union overlap** has been renamed to **Combine components**, while **Return all separately** has been renamed to **Do not combine components**.
-This option is best when you'd like to apply your own overlap logic for the entity after the prediction.
+## How to use components and options
-#### Examples
+Components give you the flexibility to define your entity in more than one way. When you combine components, you make sure that each component is represented and you reduce the number of entities returned in your predictions.
-If "Palm Beach" was matched by the list component and "Palm Beach Extension" was predicted by the learned component, then the entity returns two instances: one for "**Palm Beach**" and another for "**Palm Beach Extension**".
+A common practice is to extend a prebuilt component with a list of values that the prebuilt might not support. For example, if you have an **Organization** entity, which has a _General.Organization_ prebuilt component added to it, the entity may not predict all the organizations specific to your domain. You can use a list component to extend the values of the Organization entity and thereby extend the prebuilt component with your own organizations.
+Other times you may be interested in extracting an entity through context, such as a **Product** in a retail project. You would label utterances for the learned component of the product so the model learns _where_ a product appears based on its position within the sentence. You may also have a list of products that you already know beforehand and that you'd like to always extract. Combining both components in one entity gives you both behaviors for the entity.
-If "New York" was predicted by the prebuilt component, "York Beach" was matched by the list component, and "Beach Extension" was predicted by the learned component, then the entity returns with 3 instances: one for "**New York**", one for "**York Beach**", and one for "**Beach Extension**".
+When you do not combine components, you simply allow every component to act as an independent entity extractor. One way of using this option is to separate the entities extracted from a list from the ones extracted through the learned or prebuilt components, so you can handle and treat them differently.
## Next steps
cognitive-services Evaluation Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/concepts/evaluation-metrics.md
Previously updated : 11/02/2021 Last updated : 05/13/2022
-# Evaluation metrics for Conversational Language Understanding models
+# Evaluation metrics for conversational language understanding models
-Model evaluation in conversational language understanding uses the following metrics:
+Your [dataset is split](../how-to/train-model.md#data-splitting) into two parts: a set for training, and a set for testing. The training set is used to train the model, while the testing set is used to test the model after training and calculate its performance. The testing set isn't introduced to the model during training, to make sure that the model is evaluated on new data.
+Model evaluation is triggered automatically after training is completed successfully. The evaluation process starts by using the trained model to predict user-defined intents and entities for utterances in the test set, and compares them with the provided tags (which establishes a baseline of truth). The results are returned so you can review the model's performance. For evaluation, conversational language understanding uses the following metrics:
-|Metric |Description |Calculation |
-||||
-|Precision | The ratio of successful recognitions to all attempted recognitions. This shows how many times the model's entity recognition is truly a good recognition. | `Precision = #True_Positive / (#True_Positive + #False_Positive)` |
-|Recall | The ratio of successful recognitions to the actual number of entities present. | `Recall = #True_Positive / (#True_Positive + #False_Negatives)` |
-|F1 score | The combination of precision and recall. | `F1 Score = 2 * Precision * Recall / (Precision + Recall)` |
+* **Precision**: Measures how precise/accurate your model is. It's the ratio between the correctly identified positives (true positives) and all identified positives. The precision metric reveals how many of the predicted classes are correctly labeled.
-## Model-level and entity-level evaluation metrics
+ `Precision = #True_Positive / (#True_Positive + #False_Positive)`
-Precision, recall, and F1 score are calculated for each entity separately (entity-level evaluation) and for the model collectively (model-level evaluation).
+* **Recall**: Measures the model's ability to predict actual positive classes. It's the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the actual classes are correctly predicted.
-The definitions of precision, recall, and evaluation are the same for both entity-level and model-level evaluations. However, the counts for *True Positives*, *False Positives*, and *False Negatives* differ can differ. For example, consider the following text.
+ `Recall = #True_Positive / (#True_Positive + #False_Negatives)`
+
+* **F1 score**: The F1 score is a function of Precision and Recall. It's needed when you seek a balance between Precision and Recall.
+
+ `F1 Score = 2 * Precision * Recall / (Precision + Recall)`
++
+Precision, recall, and F1 score are calculated for:
+* Each entity separately (entity-level evaluation)
+* Each intent separately (intent-level evaluation)
+* For the model collectively (model-level evaluation).
+
+The definitions of precision, recall, and F1 score are the same for entity-level, intent-level, and model-level evaluations. However, the counts for *True Positives*, *False Positives*, and *False Negatives* can differ. For example, consider the following text.
### Example
-*The first party of this contract is John Smith, resident of 5678 Main Rd., City of Frederick, state of Nebraska. And the second party is Forrest Ray, resident of 123-345 Integer Rd., City of Corona, state of New Mexico. There is also Fannie Thomas resident of 7890 River Road, city of Colorado Springs, State of Colorado.*
+* Make a response with thank you very much.
+* reply with saying yes.
+* Check my email please.
+* email to cynthia that dinner last week was splendid.
+* send email to mike
+
+These are the intents used: *Reply*, *sendEmail*, *readEmail*. These are the entities: *contactName*, *message*.
+
+The model could make the following predictions:
+
+| Utterance | Predicted intent | Actual intent |Predicted entity| Actual entity|
+|--|--|--|--|--|
+|Make a response with thank you very much|Reply|Reply|`thank you very much` as `message` |`thank you very much` as `message` |
+|reply with saying yes| sendEmail|Reply|--|`yes` as `message`|
+|Check my email please|readEmail|readEmail|--|--|
+|email to cynthia that dinner last week was splendid|Reply|sendEmail|`dinner last week was splendid` as `message`| `cynthia` as `contactName`, `dinner last week was splendid` as `message`|
+|send email to mike|sendEmail|sendEmail|`mike` as `message`|`mike` as `contactName`|
-The model extracting entities from this text could have the following predictions:
-| Entity | Predicted as | Actual type |
+### Intent level evaluation for *Reply* intent
+
+| Key | Count | Explanation |
|--|--|--|
-| John Smith | Person | Person |
-| Frederick | Person | City |
-| Forrest | City | Person |
-| Fannie Thomas | Person | Person |
-| Colorado Springs | City | City |
+| True Positive | 1 | Utterance 1 was correctly predicted as *Reply*. |
+| False Positive | 1 |Utterance 4 was mistakenly predicted as *Reply*. |
+| False Negative | 1 | Utterance 2 was mistakenly predicted as *sendEmail*. |
+
+**Precision** = `#True_Positive / (#True_Positive + #False_Positive) = 1 / (1 + 1) = 0.5`
+
+**Recall** = `#True_Positive / (#True_Positive + #False_Negatives) = 1 / (1 + 1) = 0.5`
-### Entity-level evaluation for the *person* entity
+**F1 Score** = `2 * Precision * Recall / (Precision + Recall) = (2 * 0.5 * 0.5) / (0.5 + 0.5) = 0.5 `
-The model would have the following entity-level evaluation, for the *person* entity:
+
+### Intent level evaluation for *sendEmail* intent
| Key | Count | Explanation | |--|--|--|
-| True Positive | 2 | *John Smith* and *Fannie Thomas* were correctly predicted as *person*. |
-| False Positive | 1 | *Frederick* was incorrectly predicted as *person* while it should have been *city*. |
-| False Negative | 1 | *Forrest* was incorrectly predicted as *city* while it should have been *person*. |
+| True Positive | 1 | Utterance 5 was correctly predicted as *sendEmail* |
+| False Positive | 1 |Utterance 2 was mistakenly predicted as *sendEmail*. |
+| False Negative | 1 | Utterance 4 was mistakenly predicted as *Reply*. |
+
+**Precision** = `#True_Positive / (#True_Positive + #False_Positive) = 1 / (1 + 1) = 0.5`
-* **Precision**: `#True_Positive / (#True_Positive + #False_Positive)` = `2 / (2 + 1) = 0.67`
-* **Recall**: `#True_Positive / (#True_Positive + #False_Negatives)` = `2 / (2 + 1) = 0.67`
-* **F1 Score**: `2 * Precision * Recall / (Precision + Recall)` = `(2 * 0.67 * 0.67) / (0.67 + 0.67) = 0.67`
+**Recall** = `#True_Positive / (#True_Positive + #False_Negatives) = 1 / (1 + 1) = 0.5`
-### Entity-level evaluation for the *city* entity
+**F1 Score** = `2 * Precision * Recall / (Precision + Recall) = (2 * 0.5 * 0.5) / (0.5 + 0.5) = 0.5 `
-The model would have the following entity-level evaluation, for the *city* entity:
+### Intent level evaluation for *readEmail* intent
| Key | Count | Explanation | |--|--|--|
-| True Positive | 1 | *Colorado Springs* was correctly predicted as *city*. |
-| False Positive | 1 | *Forrest* was incorrectly predicted as *city* while it should have been *person*. |
-| False Negative | 1 | *Frederick* was incorrectly predicted as *person* while it should have been *city*. |
+| True Positive | 1 | Utterance 3 was correctly predicted as *readEmail*. |
+| False Positive | 0 |--|
+| False Negative | 0 |--|
-* **Precision** = `#True_Positive / (#True_Positive + #False_Positive)` = `2 / (2 + 1) = 0.67`
-* **Recall** = `#True_Positive / (#True_Positive + #False_Negatives)` = `2 / (2 + 1) = 0.67`
-* **F1 Score** = `2 * Precision * Recall / (Precision + Recall)` = `(2 * 0.67 * 0.67) / (0.67 + 0.67) = 0.67`
+**Precision** = `#True_Positive / (#True_Positive + #False_Positive) = 1 / (1 + 0) = 1`
-### Model-level evaluation for the collective model
+**Recall** = `#True_Positive / (#True_Positive + #False_Negatives) = 1 / (1 + 0) = 1`
+
+**F1 Score** = `2 * Precision * Recall / (Precision + Recall) = (2 * 1 * 1) / (1 + 1) = 1`
-The model would have the following evaluation for the model in its entirety:
+### Entity level evaluation for *contactName* entity
| Key | Count | Explanation | |--|--|--|
-| True Positive | 3 | *John Smith* and *Fannie Thomas* were correctly predicted as *person*. *Colorado Springs* was correctly predicted as *city*. This is the sum of true positives for all entities. |
-| False Positive | 2 | *Forrest* was incorrectly predicted as *city* while it should have been *person*. *Frederick* was incorrectly predicted as *person* while it should have been *city*. This is the sum of false positives for all entities. |
-| False Negative | 2 | *Forrest* was incorrectly predicted as *city* while it should have been *person*. *Frederick* was incorrectly predicted as *person* while it should have been *city*. This is the sum of false negatives for all entities. |
+| True Positive | 1 | `cynthia` was correctly predicted as `contactName` in utterance 4|
+| False Positive | 0 |--|
+| False Negative | 1 | `mike` was mistakenly predicted as `message` in utterance 5 |
-* **Precision** = `#True_Positive / (#True_Positive + #False_Positive)` = `3 / (3 + 2) = 0.6`
-* **Recall** = `#True_Positive / (#True_Positive + #False_Negatives)` = `3 / (3 + 2) = 0.6`
-* **F1 Score** = `2 * Precision * Recall / (Precision + Recall)` = `(2 * 0.6 * 0.6) / (0.6 + 0.6) = 0.6`
+**Precision** = `#True_Positive / (#True_Positive + #False_Positive) = 1 / (1 + 0) = 1`
-## Interpreting entity-level evaluation metrics
+**Recall** = `#True_Positive / (#True_Positive + #False_Negatives) = 1 / (1 + 1) = 0.5`
-So what does it actually mean to have high precision or high recall for a certain entity?
+**F1 Score** = `2 * Precision * Recall / (Precision + Recall) = (2 * 1 * 0.5) / (1 + 0.5) = 0.67`
-| Recall | Precision | Interpretation |
+### Entity level evaluation for *message* entity
+
+| Key | Count | Explanation |
|--|--|--|
-| High | High | This entity is handled well by the model. |
-| Low | High | The model cannot always extract this entity, but when it does it is with high confidence. |
-| High | Low | The model extracts this entity well, however it is with low confidence as it is sometimes extracted as another type. |
-| Low | Low | This entity type is poorly handled by the model, because it is not usually extracted. When it is, it is not with high confidence. |
+| True Positive | 2 |`thank you very much` was correctly predicted as `message` in utterance 1 and `dinner last week was splendid` was correctly predicted as `message` in utterance 4 |
+| False Positive | 1 |`mike` was mistakenly predicted as `message` in utterance 5 |
+| False Negative | 1 | `yes` was not predicted as `message` in utterance 2 |
+
+**Precision** = `#True_Positive / (#True_Positive + #False_Positive) = 2 / (2 + 1) = 0.67`
+
+**Recall** = `#True_Positive / (#True_Positive + #False_Negatives) = 2 / (2 + 1) = 0.67`
+
+**F1 Score** = `2 * Precision * Recall / (Precision + Recall) = (2 * 0.67 * 0.67) / (0.67 + 0.67) = 0.67`
++
+### Model-level evaluation for the collective model
+
+| Key | Count | Explanation |
+|--|--|--|
+| True Positive | 6 | Sum of TP for all intents and entities |
+| False Positive | 3| Sum of FP for all intents and entities |
+| False Negative | 4 | Sum of FN for all intents and entities |
+
+**Precision** = `#True_Positive / (#True_Positive + #False_Positive) = 6 / (6 + 3) = 0.67`
+
+**Recall** = `#True_Positive / (#True_Positive + #False_Negatives) = 6 / (6 + 4) = 0.60`
+
+**F1 Score** = `2 * Precision * Recall / (Precision + Recall) = (2 * 0.67 * 0.60) / (0.67 + 0.60) = 0.63`
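As a rough illustration (not part of the service), the following Python sketch reproduces the intent-level, entity-level, and model-level numbers above from the counts in the tables, using the precision, recall, and F1 formulas defined earlier:

```python
# (TP, FP, FN) counts taken directly from the per-intent and per-entity tables above.
counts = {
    "Reply":       (1, 1, 1),
    "sendEmail":   (1, 1, 1),
    "readEmail":   (1, 0, 0),
    "contactName": (1, 0, 1),
    "message":     (2, 1, 1),
}

def metrics(tp, fp, fn):
    """Precision, recall, and F1 score as defined earlier in this article."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Intent-level and entity-level evaluation
for name, (tp, fp, fn) in counts.items():
    p, r, f1 = metrics(tp, fp, fn)
    print(f"{name}: precision={p:.2f}, recall={r:.2f}, F1={f1:.2f}")

# Model-level evaluation: sum the counts across all intents and entities
tp, fp, fn = (sum(c[i] for c in counts.values()) for i in range(3))  # 6, 3, 4
p, r, f1 = metrics(tp, fp, fn)
print(f"model: precision={p:.2f}, recall={r:.2f}, F1={f1:.2f}")  # 0.67, 0.60, 0.63
```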
## Confusion matrix
-A Confusion matrix is an N x N matrix used for model performance evaluation, where N is the number of entities.
-The matrix compares the actual tags with the tags predicted by the model.
+A confusion matrix is an N x N matrix used for model performance evaluation, where N is the number of entities or intents.
+The matrix compares the expected labels with the ones predicted by the model.
This gives a holistic view of how well the model is performing and what kinds of errors it is making.
-You can use the Confusion matrix to identify entities that are too close to each other and often get mistaken (ambiguity). In this case consider merging these entity types together. If that isn't possible, consider adding more tagged examples of both entities to help the model differentiate between them.
+You can use the confusion matrix to identify intents or entities that are too close to each other and often get confused with each other (ambiguity). In this case, consider merging these intents or entities together. If that isn't possible, consider adding more tagged examples of both intents or entities to help the model differentiate between them.
The highlighted diagonal in the image below is the correctly predicted entities, where the predicted tag is the same as the actual tag. :::image type="content" source="../media/confusion-matrix-example.png" alt-text="A screenshot of an example confusion matrix" lightbox="../media/confusion-matrix-example.png":::
-You can calculate the entity-level and model-level evaluation metrics from the confusion matrix:
+You can calculate the intent-level or entity-level and model-level evaluation metrics from the confusion matrix:
-* The values in the diagonal are the *True Positive* values of each entity.
-* The sum of the values in the entity rows (excluding the diagonal) is the *false positive* of the model.
-* The sum of the values in the entity columns (excluding the diagonal) is the *false Negative* of the model.
+* The values in the diagonal are the *true positive* values of each intent or entity.
+* The sum of the values in an intent's or entity's row (excluding the diagonal) is the *false positive* value of that intent or entity.
+* The sum of the values in an intent's or entity's column (excluding the diagonal) is the *false negative* value of that intent or entity.
Similarly,
-* The *true positive* of the model is the sum of *true Positives* for all entities.
-* The *false positive* of the model is the sum of *false positives* for all entities.
-* The *false Negative* of the model is the sum of *false negatives* for all entities.
+* The *true positive* of the model is the sum of *true positives* for all intents or entities.
+* The *false positive* of the model is the sum of *false positives* for all intents or entities.
+* The *false negative* of the model is the sum of *false negatives* for all intents or entities, as shown in the sketch below.
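The following Python sketch illustrates that derivation on a made-up 3 x 3 confusion matrix, assuming a NumPy array whose rows and columns follow the convention described in the lists above; the numbers themselves are illustrative only:

```python
import numpy as np

confusion_matrix = np.array([
    [10, 1, 0],
    [ 2, 8, 1],
    [ 0, 1, 9],
])

diagonal = confusion_matrix.diagonal()

true_positive = diagonal                                    # diagonal value per intent/entity
false_positive = confusion_matrix.sum(axis=1) - diagonal    # row sums excluding the diagonal
false_negative = confusion_matrix.sum(axis=0) - diagonal    # column sums excluding the diagonal

# Model-level counts are the sums across all intents and entities
print(true_positive.sum(), false_positive.sum(), false_negative.sum())
```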
+ ## Next steps
cognitive-services None Intent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/concepts/none-intent.md
Previously updated : 02/28/2022 Last updated : 05/13/2022
# None intent
-Every project in Conversational Language Understanding includes a default None intent. The None intent is a required intent and can't be deleted or renamed. The intent is meant to categorize any utterances that do not belong to any of your other custom intents.
+Every project in conversational language understanding includes a default **None** intent. The None intent is a required intent and can't be deleted or renamed. The intent is meant to categorize any utterances that do not belong to any of your other custom intents.
An utterance can be predicted as the None intent if the top scoring intent's score is **lower** than the None score threshold. It can also be predicted if the utterance is similar to examples added to the None intent. ## None score threshold
-You can go to the project settings of any project and set the **None score threshold**. The threshold is a decimal score from **0.0** to **1.0**.
+You can go to the **project settings** of any project and set the **None score threshold**. The threshold is a decimal score from **0.0** to **1.0**.
For any query and utterance, if the highest scoring intent's score is **lower** than the threshold score, the top intent will be automatically replaced with the None intent. The scores of all the other intents remain unchanged.
The score should be set according to your own observations of prediction scores,
When you export a project's JSON file, the None score threshold is defined in the _**"settings"**_ parameter of the JSON as the _**"confidenceThreshold"**_, which accepts a decimal value between 0.0 and 1.0.
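For example, the relevant portion of an exported project's JSON might look like the following snippet; the threshold value shown here is illustrative:

```json
"settings": {
    "confidenceThreshold": 0.7
}
```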
-The default score for Orchestration Workflow projects is set at **0.5** and regular conversation projects at **0.0** when creating new projects in the language studio. If you are importing a project and the "confidenceThreshold" setting is absent, the threshold is set at **0.0**.
- > [!NOTE] > During model evaluation of your test set, the None score threshold is not applied.
You should also consider adding false positive examples to the None intent. For
## Next steps
-[Conversational Language Understanding overview](../overview.md)
+[Conversational language understanding overview](../overview.md)
cognitive-services Fail Over https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/fail-over.md
- Title: Back up and recover your conversational language understanding models-
-description: Learn how to save and recover your conversational language understanding models.
------ Previously updated : 02/07/2022----
-# Back up and recover your conversational language understanding models
-
-When you create a Language resource in the Azure portal, you specify a region for it to be created in. From then on, your resource and all of the operations related to it take place in the specified Azure server region. It's rare, but not impossible, to encounter a network issue that hits an entire region. If your solution needs to always be available, then you should design it to fail over into another region. This requires two Azure Language resources in different regions and the ability to sync your CLU models across regions.
-
-If your app or business depends on the use of a CLU model, we recommend that you create a replica of your project into another supported region. So that if a regional outage occurs, you can then access your model in the other fail-over region where you replicated your project.
-
-Replicating a project means that you export your project metadata and assets and import them into a new project. This only makes a copy of your project settings, intents, entities and utterances. You still need to [train](./how-to/train-model.md) and [deploy](how-to/deploy-query-model.md) the models to be available for use with [runtime APIs](https://aka.ms/clu-apis).
--
-In this article, you will learn how to use the export and import APIs to replicate your project from one resource to another in a different supported geographical region, along with guidance on keeping your projects in sync and the changes needed to your runtime consumption.
-
-## Prerequisites
-
-* Two Azure Language resources, each of them in a different Azure region.
-
-## Get your resource keys and endpoint
-
-Use the following steps to get the keys and endpoint of your primary and secondary resources. These will be used in the following steps.
-
-* Go to your resource overview page in the [Azure portal](https://ms.portal.azure.com/#home)
-
-* From the menu on the left side of the screen, select **Keys and Endpoint**. Use the endpoint for the API requests, and you'll need the key for the `Ocp-Apim-Subscription-Key` header.
-
-> [!TIP]
-> Keep a note of keys and endpoints for both primary and secondary resources. Use these values to replace the following placeholders:
-`{YOUR-PRIMARY-ENDPOINT}`, `{YOUR-PRIMARY-RESOURCE-KEY}`, `{YOUR-SECONDARY-ENDPOINT}` and `{YOUR-SECONDARY-RESOURCE-KEY}`.
-> Also take note of your project name, your model name and your deployment name. Use these values to replace the following placeholders: `{PROJECT-NAME}`, `{MODEL-NAME}` and `{DEPLOYMENT-NAME}`.
-
-## Export your primary project assets
-
-Start by exporting the project assets from the project in your primary resource.
-
-### Submit export job
-
-Create a **POST** request using the following URL, headers, and JSON body to export project metadata and assets.
-
-Use the following URL to export your primary project assets. Replace the placeholder values below with your own values.
-
-```rest
-{YOUR-PRIMARY-ENDPOINT}/language/analyze-conversations/projects/{PROJECT-NAME}/:export?api-version=2021-11-01-preview
-```
-
-|Placeholder |Value | Example |
-||||
-|`{YOUR-PRIMARY-ENDPOINT}` | The endpoint for authenticating your API request. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
-|`{PROJECT-NAME}` | The name for your project. This value is case-sensitive. | `myProject` |
-
-#### Headers
-
-Use the following headers to authenticate your request.
-
-|Key|Description|Value|
-|--|--|--|
-|`Ocp-Apim-Subscription-Key`| The key to your resource. Used for authenticating your API requests.| `{YOUR-PRIMARY-RESOURCE-KEY}` |
-|`format`| The format you want to use for the exported assets. | `JSON` |
-
-#### Body
-
-Use the following JSON in your request body specifying that you want to export all the assets.
-
-```json
-{
- "assetsToExport": ["*"]
-}
-```
-
-Once you send your API request, you'll receive a `202` response indicating success. In the response headers, extract the `location` value. It will be formatted like this:
-
-```rest
-{YOUR-PRIMARY-ENDPOINT}/language/analyze-conversations/projects/{PROJECT-NAME}/export/jobs/{JOB-ID}?api-version=2021-11-01-preview
-```
-
-`JOB-ID` is used to identify your request, since this operation is asynchronous. You'll use this URL in the next step to get the training status.
-
-### Get export job status
-
-Use the following **GET** request to query the status of your export job status. You can use the URL you received from the previous step, or replace the placeholder values below with your own values.
-
-```rest
-{YOUR-PRIMARY-ENDPOINT}/language/analyze-conversations/projects/{PROJECT-NAME}/export/jobs/{JOB-ID}?api-version=2021-11-01-preview
-```
-
-|Placeholder |Value | Example |
-||||
-|`{YOUR-PRIMARY-ENDPOINT}` | The endpoint for authenticating your API request. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
-|`{PROJECT-NAME}` | The name for your project. This value is case-sensitive. | `myProject` |
-|`{JOB-ID}` | The ID for locating your export job status. This is in the `location` header value you received in the previous step. | `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx` |
-
-#### Headers
-
-Use the following header to authenticate your request.
-
-|Key|Description|Value|
-|--|--|--|
-|`Ocp-Apim-Subscription-Key`| The key to your resource. Used for authenticating your API requests.| `{YOUR-PRIMARY-RESOURCE-KEY}` |
-
-#### Response body
-
-```json
-{
- "resultUrl": "{RESULT-URL}",
- "jobId": "string",
- "createdDateTime": "2021-10-19T23:24:41.572Z",
- "lastUpdatedDateTime": "2021-10-19T23:24:41.572Z",
- "expirationDateTime": "2021-10-19T23:24:41.572Z",
- "status": "unknown",
- "errors": [
- {
- "code": "unknown",
- "message": "string"
- }
- ]
-}
-```
-
-Use the url from the `resultUrl` key in the body to view the exported assets from this job.
-
-### Get export results
-
-Submit a **GET** request using the `{RESULT-URL}` you received from the previous step to view the results of the export job.
-
-#### Headers
-
-Use the following header to authenticate your request.
-
-|Key|Description|Value|
-|--|--|--|
-|`Ocp-Apim-Subscription-Key`| The key to your resource. Used for authenticating your API requests.| `{YOUR-PRIMARY-RESOURCE-KEY}` |
-
-Copy the response body as you will use it as the body for the next import job.
-
-## Import to a new project
-
-Now go ahead and import the exported project assets in your new project in the secondary region so you can replicate it.
-
-### Submit import job
-
-Create a **POST** request using the following URL, headers, and JSON body to create your project and import the tags file.
-
-Use the following URL to create a project and import your tags file. Replace the placeholder values below with your own values.
-
-```rest
-{YOUR-SECONDARY-ENDPOINT}/language/analyze-conversations/projects/{PROJECT-NAME}/:import?api-version=2021-11-01-preview
-```
-
-|Placeholder |Value | Example |
-||||
-|`{YOUR-SECONDARY-ENDPOINT}` | The endpoint for authenticating your API request. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
-|`{PROJECT-NAME}` | The name for your project. This value is case-sensitive. | `myProject` |
-
-#### Headers
-
-Use the following header to authenticate your request.
-
-|Key|Description|Value|
-|--|--|--|
-|`Ocp-Apim-Subscription-Key`| The key to your resource. Used for authenticating your API requests.| `{YOUR-SECONDARY-RESOURCE-KEY}` |
-
-#### Body
-
-Use the response body you got from the previous export step. It's formatted like this:
-
-```json
-{
- "api-version": "2021-11-01-preview",
- "metadata": {
- "name": "myProject",
- "description": "A test application",
- "type": "conversation",
- "multilingual": true,
- "language": "en-us",
- "settings": {
- }
- },
- "assets": {
- "intents": [
- {
- "name": "Read"
- },
- {
- "name": "Attach"
- }
- ],
- "entities": [
- {
- "name": "Sender"
- },
- {
- "name": "FileName"
- },
- {
- "name": "FileType"
- }
- ],
- "examples": [
- {
- "text": "Open Blake's email",
- "language": "en-us",
- "intent": "Read",
- "entities": [
- {
- "entityName": "Sender",
- "offset": 5,
- "length": 5
- }
- ]
- },
- {
- "text": "Attach the excel file called reports q1",
- "language": "en-us",
- "intent": "Attach",
- "entities": [
- {
- "entityName": "FileType",
- "offset": 11,
- "length": 5
- },
- {
- "entityName": "FileName",
- "offset": 29,
- "length": 10
- }
- ]
- }
- ]
- }
-}
-```
-Once you send your API request, you'll receive a `202` response indicating success. In the response headers, extract the `location` value. It will be formatted like this:
-
-```rest
-{YOUR-PRIMARY-ENDPOINT}/language/analyze-conversations/projects/{PROJECT-NAME}/import/jobs/{JOB-ID}?api-version=2021-11-01-preview
-```
-
-`JOB-ID` is used to identify your request, since this operation is asynchronous. You'll use this URL in the next step to get the import status.
-
-### Get import job status
-
-Use the following **GET** request to query the status of your import job status. You can use the URL you received from the previous step, or replace the placeholder values below with your own values.
-
-```rest
-{YOUR-PRIMARY-ENDPOINT}/language/analyze-conversations/projects/{PROJECT-NAME}/import/jobs/{JOB-ID}?api-version=2021-11-01-preview
-```
-
-|Placeholder |Value | Example |
-||||
-|`{YOUR-PRIMARY-ENDPOINT}` | The endpoint for authenticating your API request. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
-|`{PROJECT-NAME}` | The name for your project. This value is case-sensitive. | `myProject` |
-|`{JOB-ID}` | The ID for locating your export job status. This is in the `location` header value you received in the previous step. | `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx` |
-
-#### Headers
-
-Use the following header to authenticate your request.
-
-|Key|Description|Value|
-|--|--|--|
-|`Ocp-Apim-Subscription-Key`| The key to your resource. Used for authenticating your API requests.| `{YOUR-PRIMARY-RESOURCE-KEY}` |
-
-#### Response body
-
-```json
-{
- "jobId": "string",
- "createdDateTime": "2021-10-19T23:24:41.572Z",
- "lastUpdatedDateTime": "2021-10-19T23:24:41.572Z",
- "expirationDateTime": "2021-10-19T23:24:41.572Z",
- "status": "unknown",
- "errors": [
- {
- "code": "unknown",
- "message": "string"
- }
- ]
-}
-```
-
-Now you have replicated your project into another resource in another region.
--
-## Train your model
-
-After importing your project, you have only copied the project's assets and metadata. You still need to train your model, which will incur usage on your account.
-
-### Submit training job
-
-Create a **POST** request using the following URL, headers, and JSON body to start training your model. Replace the placeholder values below with your own values.
-
-```rest
-{YOUR-SECONDARY-ENDPOINT}/language/analyze-conversations/projects/{PROJECT-NAME}/:train?api-version=2021-11-01-preview
-```
-
-|Placeholder |Value | Example |
-||||
-|`{YOUR-SECONDARY-ENDPOINT}` | The endpoint for authenticating your API request. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
-|`{PROJECT-NAME}` | The name for your project. This value is case-sensitive. | `myProject` |
-
-### Headers
-
-Use the following header to authenticate your request.
-
-|Key|Description|Value|
-|--|--|--|
-|`Ocp-Apim-Subscription-Key`| The key to your resource. Used for authenticating your API requests.| `{YOUR-SECONDARY-RESOURCE-KEY}` |
-
-### Request body
-
-Use the following JSON in your request. Use the same model name and `runValidation` setting you have in your primary project for consistency.
-
-```json
-{
- "modelLabel": "{MODEL-NAME}",
- "runValidation": true
-}
-```
-
-|Key |Value | Example |
-||||
-|`modelLabel ` | Your Model name. | {MODEL-NAME} |
-|`runValidation` | Boolean value to run validation on the test set. | true |
-
-Once you send your API request, you'll receive a `202` response indicating success. In the response headers, extract the `location` value. It will be formatted like this:
-
-```rest
-{YOUR-SECONDARY-ENDPOINT}/language/analyze-conversations/projects/{PROJECT-NAME}/train/jobs/{JOB-ID}?api-version=2021-11-01-preview
-```
-
-`JOB-ID` is used to identify your request, since this operation is asynchronous. You'll use this URL in the next step to get the training status.
-
-### Get Training Status
-
-Use the following **GET** request to query the status of your model's training process. You can use the URL you received from the previous step, or replace the placeholder values below with your own values.
-
-```rest
-{YOUR-SECONDARY-ENDPOINT}/language/analyze-conversations/projects/{PROJECT-NAME}/train/jobs/{JOB-ID}?api-version=2021-11-01-preview
-```
-
-|Placeholder |Value | Example |
-||||
-|`{YOUR-SECONDARY-ENDPOINT}` | The endpoint for authenticating your API request. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
-|`{PROJECT-NAME}` | The name for your project. This value is case-sensitive. | `myProject` |
-|`{JOB-ID}` | The ID for locating your model's training status. This is in the `location` header value you received in the previous step. | `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx` |
-
-#### Headers
-
-Use the following header to authenticate your request.
-
-|Key|Description|Value|
-|--|--|--|
-|`Ocp-Apim-Subscription-Key`| The key to your resource. Used for authenticating your API requests.| `{YOUR-SECONDARY-RESOURCE-KEY}` |
-
-## Deploy your model
-
-This is the step where you make your trained model available for consumption via the [runtime API](https://aka.ms/clu-apis).
-> [!TIP]
-> Use the same deployment name as your primary project for easier maintenance and minimal changes to your system to handle redirecting your traffic.
-
-## Submit deploy job
-
-Create a **PUT** request using the following URL, headers, and JSON body to start deploying your model.
-
-```rest
-{YOUR-SECONDARY-ENDPOINT}/language/analyze-conversations/projects/{PROJECT-NAME}/deployments/{DEPLOYMENT-NAME}?api-version=2021-11-01-preview
-```
-
-|Placeholder |Value | Example |
-||||
-|`{YOUR-SECONDARY-ENDPOINT}` | The endpoint for authenticating your API request. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
-|`{PROJECT-NAME}` | The name for your project. This value is case-sensitive. | `myProject` |
-|`{DEPLOYMENT-NAME}` | The name of your deployment. This value is case-sensitive. | `prod` |
-
-#### Headers
-
-Use the following header to authenticate your request.
-
-|Key|Description|Value|
-|--|--|--|
-|`Ocp-Apim-Subscription-Key`| The key to your resource. Used for authenticating your API requests.| `{YOUR-SECONDARY-RESOURCE-KEY}` |
-
-#### Request body
-
-Use the following JSON in your request. Use the name of the model you want to deploy.
-
-```json
-{
- "trainedModelLabel": "{MODEL-NAME}",
- "deploymentName": {DEPLOYMENT-NAME}
-}
-```
-
-Once you send your API request, you'll receive a `202` response indicating success. In the response headers, extract the `location` value. It will be formatted like this:
-
-```rest
-{YOUR-SECONDARY-ENDPOINT}/language/analyze-conversations/projects/{PROJECT-NAME}/deployments/{DEPLOYMENT-NAME}/jobs/{JOB-ID}?api-version=2021-11-01-preview
-```
-
-`JOB-ID` is used to identify your request, since this operation is asynchronous. You will use this URL in the next step to get the publishing status.
-
-### Get the deployment status
-
-Use the following **GET** request to query the status of your model's publishing process. You can use the URL you received from the previous step, or replace the placeholder values below with your own values.
-
-```rest
-{YOUR-SECONDARY-ENDPOINT}/language/analyze-conversations/projects/{PROJECT-NAME}/deployments/{DEPLOYMENT-NAME}/jobs/{JOB-ID}?api-version=2021-11-01-preview
-```
-
-|Placeholder |Value | Example |
-||||
-|`{YOUR-SECONDARY-ENDPOINT}` | The endpoint for authenticating your API request. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
-|`{PROJECT-NAME}` | The name for your project. This value is case-sensitive. | `myProject` |
-|`{DEPLOYMENT-NAME}` | The name of your deployment. This value is case-sensitive. | `prod` |
-|`{JOB-ID}` | The ID for locating your model's training status. This is in the `location` header value you received in the previous step. | `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx` |
-
-#### Headers
-
-Use the following header to authenticate your request.
-
-|Key|Description|Value|
-|--|--|--|
-|`Ocp-Apim-Subscription-Key`| The key to your resource. Used for authenticating your API requests.| `{YOUR-SECONDARY-RESOURCE-KEY}` |
-
-At this point you have replicated your project into another resource, which is in another region, trained and deployed the model. Now you would want to make changes to your system to handle traffic redirection in case of failure.
-
-## Changes in calling the runtime
-
-Within your system, at the step where you call the [runtime API](https://aka.ms/clu-apis), check the response code returned from the submit task API. If you observe a **consistent** failure in submitting the request, this could indicate an outage in your primary region. A single failure doesn't mean an outage; it may be a transient issue. Retry submitting the job through the secondary resource you have created. For the second request, use your `{YOUR-SECONDARY-ENDPOINT}` and secondary key; if you have followed the steps above, `{PROJECT-NAME}` and `{DEPLOYMENT-NAME}` would be the same, so no changes are required to the request body.
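As a rough sketch of this logic (not official sample code), the following Python snippet retries the runtime request against the primary resource and fails over to the secondary resource after consistent failures; the endpoint path and request body are placeholders for whatever prediction request your application already sends:

```python
import requests

PRIMARY = ("{YOUR-PRIMARY-ENDPOINT}", "{YOUR-PRIMARY-RESOURCE-KEY}")
SECONDARY = ("{YOUR-SECONDARY-ENDPOINT}", "{YOUR-SECONDARY-RESOURCE-KEY}")

def submit(endpoint, key, path, body, attempts=3):
    """Submit the runtime request; return the response, or None after repeated failures."""
    for _ in range(attempts):
        try:
            response = requests.post(
                f"{endpoint}{path}",
                headers={"Ocp-Apim-Subscription-Key": key},
                json=body,
                timeout=10,
            )
            if response.ok:
                return response
        except requests.RequestException:
            pass  # transient network error; retry
    return None

def submit_with_failover(path, body):
    # Consistent failure on the primary region may indicate an outage: fail over.
    response = submit(*PRIMARY, path, body)
    if response is None:
        response = submit(*SECONDARY, path, body)
    return response
```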
-
-In case you revert to using your secondary resource, you will observe a slight increase in latency because of the difference in regions where your model is deployed.
-
-## Check if your projects are out of sync
-
-Maintaining the freshness of both projects is an important part of the process. You need to frequently check whether any updates were made to your primary project so that you can move them over to your secondary project. This way, if your primary region fails and you move to the secondary region, you should expect similar model performance since it already contains the latest updates. Setting the frequency of checking if your projects are in sync is an important choice; we recommend that you do this check daily in order to guarantee the freshness of data in your secondary model.
-
-### Get project details
-
-Use the following URL to get your project details; the `lastModifiedDateTime` key returned in the response body is used to check whether the project has been updated.
-
-Use the following **GET** request to get your project details. You can use the URL you received from the previous step, or replace the placeholder values below with your own values.
-
-```rest
-{YOUR-PRIMARY-ENDPOINT}/language/analyze-conversations/projects/{PROJECT-NAME}?api-version=2021-11-01-preview
-```
-
-|Placeholder |Value | Example |
-||||
-|`{YOUR-PRIMARY-ENDPOINT}` | The endpoint for authenticating your API request. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
-|`{PROJECT-NAME}` | The name for your project. This value is case-sensitive. | `myProject` |
-
-#### Headers
-
-Use the following header to authenticate your request.
-
-|Key|Description|Value|
-|--|--|--|
-|`Ocp-Apim-Subscription-Key`| The key to your resource. Used for authenticating your API requests.| `{YOUR-PRIMARY-RESOURCE-KEY}` |
-
-#### Response body
-
-```json
- {
- "createdDateTime": "2021-10-19T23:24:41.572Z",
- "lastModifiedDateTime": "2021-10-19T23:24:41.572Z",
- "lastTrainedDateTime": "2021-10-19T23:24:41.572Z",
- "lastDeployedDateTime": "2021-10-19T23:24:41.572Z",
- "type": "conversation",
- "name": "myProject",
- "multiLingual": true,
- "description": "string",
- "language": "en-us",
- "settings": {}
- }
-```
-
-Repeat the same steps for your replicated project using `{YOUR-SECONDARY-ENDPOINT}` and `{YOUR-SECONDARY-RESOURCE-KEY}`. Compare the returned `lastModifiedDateTime` from both projects. If your primary project was modified more recently than your secondary one, you need to repeat the steps of [exporting](#export-your-primary-project-assets), [importing](#import-to-a-new-project), [training](#train-your-model) and [deploying](#deploy-your-model).
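A minimal Python sketch of this daily check, assuming the project-details request shown above, might look like the following; the endpoints, keys, and project name are placeholders:

```python
import requests

API_VERSION = "2021-11-01-preview"

def last_modified(endpoint, key, project_name):
    url = f"{endpoint}/language/analyze-conversations/projects/{project_name}?api-version={API_VERSION}"
    response = requests.get(url, headers={"Ocp-Apim-Subscription-Key": key}, timeout=10)
    response.raise_for_status()
    return response.json()["lastModifiedDateTime"]

primary = last_modified("{YOUR-PRIMARY-ENDPOINT}", "{YOUR-PRIMARY-RESOURCE-KEY}", "{PROJECT-NAME}")
secondary = last_modified("{YOUR-SECONDARY-ENDPOINT}", "{YOUR-SECONDARY-RESOURCE-KEY}", "{PROJECT-NAME}")

# ISO 8601 timestamps in the same format compare correctly as strings.
if primary > secondary:
    print("Primary project is newer: export, import, train, and deploy to the secondary resource.")
```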
-
-## Next steps
-
-In this article, you have learned how to use the export and import APIs to replicate your project to a secondary Language resource in another region. Next, explore the API reference docs to see what else you can do with authoring APIs.
-
-* [Authoring REST API reference ](https://westus.dev.cognitive.microsoft.com/docs/services/language-authoring-clu-apis-2022-03-01-preview/operations/Projects_TriggerImportProjectJob)
-
-* [Runtime prediction REST API reference ](https://aka.ms/ct-runtime-swagger)
cognitive-services Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/faq.md
Previously updated : 01/10/2022 Last updated : 05/23/2022
Yes, you can [import any LUIS application](./concepts/backwards-compatibility.md
No, the service only supports JSON format. You can go to LUIS, import the `.LU` file and export it as a JSON file.
+## How do I handle out of scope or domain utterances that aren't relevant to my intents?
+
+Add any out of scope utterances to the [none intent](./concepts/none-intent.md).
+ ## Is there any SDK support? Yes, only for predictions, and [samples are available](https://aka.ms/cluSampleCode). There is currently no authoring support for the SDK.
+## Can I connect to Orchestration workflow projects?
+
+Yes, you can connect your CLU project in an orchestration workflow project. All you need to do is make sure that both projects are under the same Language resource.
+ ## Are there APIs for this feature?
-Yes, all the APIs [are available](https://aka.ms/clu-apis).
+Yes, all the APIs are available.
+* [Authoring APIs](https://aka.ms/clu-authoring-apis)
+* [Prediction API](https://aka.ms/clu-runtime-api)
## Next steps
cognitive-services Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/glossary.md
+
+ Title: Definitions used in conversational language understanding
+
+description: Learn about definitions used in conversational language understanding.
++++++ Last updated : 05/13/2022++++
+# Terms and definitions used in conversational language understanding
+
+Use this article to learn about some of the definitions and terms you may encounter when using conversational language understanding.
+
+## Entity
+Entities are words in utterances that describe information used to fulfill or identify an intent. If your entity is complex and you would like your model to identify specific parts, you can break your model into subentities. For example, you might want your model to predict an address, but also the subentities of street, city, state, and zipcode.
+
+## F1 score
+The F1 score is a function of Precision and Recall. It's needed when you seek a balance between [precision](#precision) and [recall](#recall).
+
+## Intent
+An intent represents a task or action the user wants to perform. It's a purpose or goal expressed in a user's input, such as booking a flight, or paying a bill.
+
+## List entity
+A list entity represents a fixed, closed set of related words along with their synonyms. List entities are exact matches, unlike machine-learned entities.
+
+The entity will be predicted if a word in the utterance matches an entry in the list. For example, if you have a list entity called "size" and you have the words "small, medium, large" in the list, then the size entity will be predicted for all utterances where the words "small", "medium", or "large" are used, regardless of the context.
+
+## Model
+A model is an object that's trained to do a certain task, in this case conversation understanding tasks. Models are trained by providing labeled data to learn from so they can later be used to understand utterances.
+
+* **Model evaluation** is the process that happens right after training to determine how well your model performs.
+* **Deployment** is the process of assigning your model to a deployment to make it available for use via the [prediction API](https://aka.ms/ct-runtime-swagger).
+
+## Overfitting
+
+Overfitting happens when the model is fixated on the specific examples and is not able to generalize well.
+
+## Precision
+Measures how precise/accurate your model is. It's the ratio between the correctly identified positives (true positives) and all identified positives. The precision metric reveals how many of the predicted classes are correctly labeled.
+
+## Project
+A project is a work area for building your custom ML models based on your data. Your project can only be accessed by you and others who have access to the Azure resource being used.
+
+## Recall
+Measures the model's ability to predict actual positive classes. It's the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
+
+## Regular expression
+A regular expression entity represents a regular expression. Regular expression entities are exact matches.
+
+## Schema
+Schema is defined as the combination of intents and entities within your project. Schema design is a crucial part of your project's success. When creating a schema, you want to think about which intents and entities should be included in your project.
+
+## Training data
+Training data is the set of information that is needed to train a model.
+
+## Utterance
+
+An utterance is user input that is short text representative of a sentence in a conversation. It is a natural language phrase such as "book 2 tickets to Seattle next Tuesday". Example utterances are added to train the model, and the model predicts on new utterances at runtime.
++
+## Next steps
+
+* [Data and service limits](service-limits.md).
+* [Conversation language understanding overview](../overview.md).
cognitive-services Build Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/build-schema.md
Previously updated : 03/03/2022 Last updated : 05/13/2022 # How to build your project schema
-In Conversational Language Understanding, the *schema* is defined as the combination of intents and entities within your project. Schema design is a crucial part of your project's success. When creating a schema, you want think about which intents and entities should be included in your project.
+In conversational language understanding projects, the *schema* is defined as the combination of intents and entities within your project. Schema design is a crucial part of your project's success. When creating a schema, you want to think about which intents and entities should be included in your project.
## Guidelines and recommendations Consider the following guidelines when picking intents for your project:
- 1. Create distinct, separable intents. An intent is best described as a user's desired overall action. Think of the project you're building and identify all the different actions your users may take when interacting with your project. Sending, calling, and canceling are all actions that are best represented as different intents. "Canceling an order" and "canceling an appointment" are very similar, with the distinction being *what* they are canceling. Those two actions should be represented under the same intent.
+ 1. Create distinct, separable intents. An intent is best described as an action the user wants to perform. Think of the project you're building and identify all the different actions your users may take when interacting with your project. Sending, calling, and canceling are all actions that are best represented as different intents. "Canceling an order" and "canceling an appointment" are very similar, with the distinction being *what* they are canceling. Those two actions should be represented under the same intent, *Cancel*.
2. Create entities to extract relevant pieces of information within your text. The entities should be used to capture the relevant information needed to fulfill your user's action. For example, *order* or *appointment* could be different things a user is trying to cancel, and you should create an entity to capture that piece of information.
They might create an intent to represent each of these actions. They might also
* Date * Meeting durations
-## Build project schema for conversation projects
+## Add intents
-To build a project schema for conversation projects:
+To build a project schema within [Language Studio](https://aka.ms/languageStudio):
-1. Select the **Intents** or **Entities** tab in the Build Schema page, and select **Add**. You will be prompted for a name before completing the creation of the intent or entity.
+1. Select **Schema definition** from the left side menu.
-2. Clicking on an intent will take you to the [tag utterances](tag-utterances.md) page, where you can add examples for intents, and label examples entities.
+2. From the top pivots, you can change the view to be **Intents** or **Entities**.
+
+3. To create an intent, select **Add** from the top menu. You will be prompted to type in a name before the intent is created.
+
+4. Repeat the above step to create intents that capture all the actions you think the user will want to perform while using the project.
:::image type="content" source="../media/build-schema-page.png" alt-text="A screenshot showing the schema creation page for conversation projects in Language Studio." lightbox="../media/build-schema-page.png":::+
+5. When you select an intent, you will be directed to the [Data labeling](tag-utterances.md) page, with a filter set for the intent you selected. You can add examples for intents and label them with entities.
-3. After creating an entity, you'll be routed to the entity details page. Every component is defined by multiple components. You can label examples in the tag utterances page to train a learned component, add a list of values to match against in the list component, or add a set of prebuilt components from the available library. Learn more about components [here](../concepts/entity-components.md)
+## Add entities
+
+1. Move to the **Entities** pivot at the top of the page.
+
+2. To add an entity, select **Add** from the top menu. You will be prompted to type in a name before the entity is created.
+
+3. After creating an entity, you'll be routed to the entity details page where you can define the composition settings for this entity.
+
+4. Every entity can be defined by multiple components: learned, list or prebuilt. A learned component is added to all your entities once you label them in your utterances.
+
+ :::image type="content" source="../media/entity-details.png" alt-text="A screenshot showing the entity details page for conversation projects in Language Studio." lightbox="../media/entity-details.png":::
+
+5. You can add a [list](../concepts/entity-components.md#list-component) or [prebuilt](../concepts/entity-components.md#prebuilt-component) component to each entity.
+
+### Add prebuilt component
+
+To add a **prebuilt** component, select **Add new prebuilt** and, from the drop-down menu, select the prebuilt type you want to add to this entity.
- :::image type="content" source="../media/entity-details.png" alt-text="A screenshot showing the entity details page for conversation projects in Language Studio." lightbox="../media/entity-details.png":::
+ <!--:::image type="content" source="../media/add-prebuilt-component.png" alt-text="A screenshot showing a prebuilt-component in Language Studio." lightbox="../media/add-prebuilt-component.png":::-->
+
+### Add list component
+To add a **list** component, select **Add new list**. You can add multiple lists to each entity.
+
+1. To create a new list, in the *Enter value* text box, enter a value. This is the normalized value that will be returned when any of the synonym values is extracted.
+
+2. From the *language* drop-down menu, select the language of the synonym list, then type your synonyms and press Enter after each one. It is recommended to have synonym lists in multiple languages.
+
+ <!--:::image type="content" source="../media/add-list-component.png" alt-text="A screenshot showing a list component in Language Studio." lightbox="../media/add-list-component.png":::-->
+
+### Define entity options
+
+Change to the **Entity options** pivot in the entity details page. When multiple components are defined for an entity, their predictions may overlap. When an overlap occurs, each entity's final prediction is determined based on the [entity option](../concepts/entity-components.md#entity-options) you select in this step. Select the one that you want to apply to this entity and click on the **Save** button at the top.
+
+ <!--:::image type="content" source="../media/entity-options.png" alt-text="A screenshot showing an entity option in Language Studio." lightbox="../media/entity-options.png":::-->
++
+After you create your entities, you can come back and edit them. You can **Edit entity components** or **delete** them by selecting the corresponding option from the top menu.
+
## Next Steps
-* [Tag utterances](tag-utterances.md)
+
+* [Add utterances and label your data](tag-utterances.md)
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/call-api.md
+
+ Title: Send prediction requests to a conversational language understanding deployment
+
+description: Learn about sending prediction requests for conversational language understanding.
++++++ Last updated : 05/13/2022++++
+# Send prediction requests to a deployment
+
+After the deployment is added successfully, you can query the deployment for intent and entity predictions from your utterances, based on the model you assigned to the deployment.
+You can query the deployment programmatically through the [prediction API](https://aka.ms/ct-runtime-swagger) or through the client libraries (Azure SDK).
+
+## Test deployed model
+
+You can use the Language Studio to submit an utterance, get predictions and visualize the results.
++++
+## Send a conversational language understanding request
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/REST-APIs)
+
+First you will need to get your resource key and endpoint:
++
+### Query your model
++
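
The snippet below is a minimal Python sketch (using the `requests` package) of what a prediction call can look like. The route, `api-version`, body shape, and response structure shown here are assumptions based on the prediction API linked above, and every `<placeholder>` is a value you supply; confirm the exact contract against the Swagger before relying on it.

```python
import requests

# Assumed placeholders: replace with your own resource endpoint, key, project, and deployment.
endpoint = "https://<your-resource>.cognitiveservices.azure.com"
headers = {"Ocp-Apim-Subscription-Key": "<your-resource-key>", "Content-Type": "application/json"}

body = {
    "kind": "Conversation",
    "analysisInput": {
        "conversationItem": {"id": "1", "participantId": "1", "text": "Cancel my next meeting"}
    },
    "parameters": {
        "projectName": "<project-name>",
        "deploymentName": "<deployment-name>",
    },
}

# Route and api-version are assumptions; check the prediction API Swagger for the current values.
response = requests.post(
    f"{endpoint}/language/:analyze-conversations",
    params={"api-version": "2022-05-01"},
    headers=headers,
    json=body,
)
response.raise_for_status()

# The response structure below is an assumption; inspect the full payload to confirm it.
prediction = response.json()["result"]["prediction"]
print("Top intent:", prediction["topIntent"])
for entity in prediction.get("entities", []):
    print(entity["category"], "->", entity["text"])
```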
+# [Client libraries (Azure SDK)](#tab/azure-sdk)
+
+First you will need to get your resource key and endpoint:
++
+### Use the client libraries (Azure SDK)
+
+You can also use the client libraries provided by the Azure SDK to send requests to your model.
+
+> [!NOTE]
+> The client library for conversational language understanding is only available for:
+> * .NET
+> * Python
+
+1. Go to your resource overview page in the [Azure portal](https://portal.azure.com/#home)
+
+2. From the menu on the left side, select **Keys and Endpoint**. Use the endpoint for the API requests, and the key for the `Ocp-Apim-Subscription-Key` header.
+
+ :::image type="content" source="../../custom-text-classification/media/get-endpoint-azure.png" alt-text="A screenshot showing a key and endpoint in the Azure portal." lightbox="../../custom-text-classification/media/get-endpoint-azure.png":::
++
+3. Download and install the client library package for your language of choice:
+
+ |Language |Package version |
+ |||
+ |.NET | [5.2.0-beta.2](https://www.nuget.org/packages/Azure.AI.TextAnalytics/5.2.0-beta.2) |
+ |Python | [5.2.0b2](https://pypi.org/project/azure-ai-textanalytics/5.2.0b2/) |
+
+4. After you've installed the client library, use the following samples on GitHub to start calling the API.
+
+ * [C#](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/cognitivelanguage/Azure.AI.Language.Conversations/samples)
+ * [Python](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cognitivelanguage/azure-ai-language-conversations/samples)
+
+5. See the following reference documentation for more information:
+
+ * [C#](/dotnet/api/azure.ai.language.conversations?view=azure-dotnet-preview&preserve-view=true)
+ * [Python](/python/api/azure-ai-language-conversations/azure.ai.language.conversations?view=azure-python-preview&preserve-view=true)
+
++
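
As a rough illustration of an SDK call, the sketch below assumes the `azure-ai-language-conversations` Python package (the one used by the samples linked above) and the `analyze_conversation` task shape from its more recent releases. Method names and payloads have changed between preview versions, so treat this as a starting point and follow the linked samples and reference documentation for the version you install.

```python
# Assumes: pip install azure-ai-language-conversations
# Method name and task shape vary by package version; verify against the linked samples.
from azure.core.credentials import AzureKeyCredential
from azure.ai.language.conversations import ConversationAnalysisClient

endpoint = "https://<your-resource>.cognitiveservices.azure.com"
client = ConversationAnalysisClient(endpoint, AzureKeyCredential("<your-resource-key>"))

result = client.analyze_conversation(
    task={
        "kind": "Conversation",
        "analysisInput": {
            "conversationItem": {"id": "1", "participantId": "1", "text": "Send an email to Matt"}
        },
        "parameters": {
            "projectName": "<project-name>",
            "deploymentName": "<deployment-name>",
        },
    }
)

# The exact response structure depends on the package version; print it to inspect.
print(result)
```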
+## Next steps
+
+* [Conversational language understanding overview](../overview.md)
cognitive-services Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/create-project.md
Previously updated : 03/03/2022 Last updated : 05/13/2022
-# How to create projects in Conversational Language Understanding
+# How to create a CLU project
-Conversational Language Understanding allows you to create conversation projects. To create orchestration projects, see the [orchestration workflow](../../orchestration-workflow/overview.md) documentation.
+Use this article to learn how to set up these requirements and create a project.
-## Sign in to Language Studio
-To get started, you have to first sign in to [Language Studio](https://aka.ms/languageStudio) and create a Language resource. Select **Done** once selection is complete.
-## Navigate to Conversational Language Understanding
+## Prerequisites
+
+Before you start using CLU, you will need several things:
+
+* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services).
+* An Azure Language resource
+
+### Create a Language resource
-In language studio, find the **Understand conversational language** section, and select **Conversational language understanding**.
+Before you start using CLU, you will need an Azure Language resource.
-You will see the Conversational Language Understanding projects page.
+> [!NOTE]
+> * You need to have an **owner** role assigned on the resource group to create a Language resource.
+++
+## Sign in to Language Studio
+ ## Create a conversation project
-After selecting conversation, you need to provide the following details:
-- Name: Project name-- Description: Optional project description-- Text primary language: The primary language of your project. Your training data should be mainly be in this language.-- Enable multiple languages: Whether you would like to enable your project to support multiple languages at once.
+Once you have a Language resource created, create a Conversational Language Understanding project.
-Once you're done, click next, review the details, and then click create project to complete the process.
+### [Language Studio](#tab/language-studio)
-## Import a project
-You can export a Conversational Language Understanding project as a JSON file at any time by going to the conversation projects page, selecting a project, and pressing **Export**.
-That project can be reimported as a new project. If you import a project with the exact same name, it replaces the project's data with the newly imported project's data.
+### [REST APIs](#tab/rest-api)
++++
+## Import project
+
+### [Language Studio](#tab/language-studio)
+
+You can export a Conversational Language Understanding project as a JSON file at any time by going to the conversation projects page, selecting a project, and from the top menu, clicking on **Export**.
:::image type="content" source="../media/export.png" alt-text="A screenshot showing the Conversational Language Understanding export button." lightbox="../media/export.png":::
+That project can be reimported as a new project. If you import a project with the exact same name, it replaces the project's data with the newly imported project's data.
+ If you have an existing LUIS application, you can _import_ the LUIS application JSON to Conversational Language Understanding directly, and it will create a Conversation project with all the pieces that are currently available: Intents, ML entities, and utterances. See [backwards compatibility with LUIS](../concepts/backwards-compatibility.md) for more information.
-Click on the arrow button next to **Create a new project** and select **Import**, then select the LUIS or Conversational Language Understanding JSON file.
+To import a project, click on the arrow button next to **Create a new project** and select **Import**, then select the LUIS or Conversational Language Understanding JSON file.
:::image type="content" source="../media/import.png" alt-text="A screenshot showing the Conversational Language Understanding import button." lightbox="../media/import.png":::
+### [REST APIs](#tab/rest-api)
+
+You can import a CLU JSON file into the service.
++++
+## Export project
+
+### [Language Studio](#tab/Language-Studio)
+
+You can export a Conversational Language Understanding project as a JSON file at any time by going to the conversation projects page, selecting a project, and pressing **Export**.
+
+### [REST APIs](#tab/rest-apis)
+
+You can export a Conversational Language Understanding project as a JSON file at any time.
++++
+## Get CLU project details
+
+### [Language Studio](#tab/language-studio)
++
+### [Rest APIs](#tab/rest-api)
++++
+## Delete resources
+
+### [Language Studio](#tab/language-studio)
++
+### [REST APIs](#tab/rest-api)
+
+When you don't need your project anymore, you can delete your project using the APIs.
## Next Steps

[Build schema](./build-schema.md)
cognitive-services Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/deploy-model.md
+
+ Title: How to deploy a model for conversational language understanding
+
+description: Use this article to learn how to deploy models for conversational language understanding.
++++++ Last updated : 04/26/2022++++
+# Deploy a model
+
+Once you are satisfied with how your model performs, you can deploy it and query it for predictions from utterances. Deploying a model makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger).
+
+## Prerequisites
+
+* A successfully [created project](create-project.md)
+* [Labeled utterances](tag-utterances.md) and successfully [trained model](train-model.md)
+* Reviewed the [model evaluation details](view-model-evaluation.md) to determine how your model is performing.
+
+See [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
+
+## Deploy model
+
+After you have reviewed the model's performance and decided it's fit to be used in your environment, you need to assign it to a deployment to be able to query it. Assigning the model to a deployment makes it available for use through the [prediction API](https://aka.ms/clu-apis). It is recommended to create a deployment named `production` to which you assign the best model you have built so far and use it in your system. You can create another deployment called `staging` to which you can assign the model you're currently working on to be able to test it. You can have a maximum of 10 deployments in your project.
+
+# [Language Studio](#tab/language-studio)
+
+
+# [REST APIs](#tab/rest-api)
+
+### Submit deployment job
++
+### Get deployment job status
++++
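
For reference, the following Python sketch (using `requests`) illustrates the two steps above: submitting a deployment job and polling its status. The authoring route, `api-version`, body property, and `operation-location` polling pattern are assumptions; confirm them against the REST API reference before relying on this.

```python
import requests, time

# Assumed placeholders: replace with your own values.
endpoint = "https://<your-resource>.cognitiveservices.azure.com"
headers = {"Ocp-Apim-Subscription-Key": "<your-resource-key>", "Content-Type": "application/json"}
project, deployment = "<project-name>", "production"
params = {"api-version": "2022-05-01"}  # assumed; check the API reference

# Submit the deployment job by assigning a trained model to the deployment name (route is assumed).
url = f"{endpoint}/language/authoring/analyze-conversations/projects/{project}/deployments/{deployment}"
submit = requests.put(url, params=params, headers=headers, json={"trainedModelLabel": "<model-name>"})
submit.raise_for_status()

# Long-running operations typically return a job URL in the operation-location header; poll it.
job_url = submit.headers["operation-location"]
while True:
    status = requests.get(job_url, headers=headers).json().get("status")
    print("Deployment status:", status)
    if status in ("succeeded", "failed", "cancelled"):
        break
    time.sleep(5)
```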
+## Swap deployments
+
+After you are done testing a model assigned to one deployment, you might want to assign it to another deployment. Swapping deployments involves:
+* Taking the model assigned to the first deployment, and assigning it to the second deployment.
+* Taking the model assigned to the second deployment, and assigning it to the first deployment.
+
+This can be used to swap your `production` and `staging` deployments when you want to take the model assigned to `staging` and assign it to `production`.
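
As a sketch only: the call below assumes the authoring API exposes a swap operation roughly like this. The route and property names are guesses used to illustrate the idea; use the REST API reference for the actual request.

```python
import requests

# All names below are illustrative assumptions; verify the real route and body in the API reference.
endpoint = "https://<your-resource>.cognitiveservices.azure.com"
headers = {"Ocp-Apim-Subscription-Key": "<your-resource-key>", "Content-Type": "application/json"}
project = "<project-name>"

response = requests.post(
    f"{endpoint}/language/authoring/analyze-conversations/projects/{project}/deployments/:swap",
    params={"api-version": "2022-05-01"},
    headers=headers,
    json={"firstDeploymentName": "production", "secondDeploymentName": "staging"},
)
print(response.status_code)
```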
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++
+## Delete deployment
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++
+## Next steps
+
+* Use [prediction API to query your model](call-api.md)
cognitive-services Deploy Query Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/deploy-query-model.md
- Title: How to send a Conversational Language Understanding job-
-description: Learn about sending a request for Conversational Language Understanding.
------ Previously updated : 03/15/2022----
-# Deploy and test a conversational language understanding model
-
-After you have [trained a model](./train-model.md) on your dataset, you're ready to deploy it. After deploying your model, you'll be able to query it for predictions.
-
-> [!Tip]
-> Before deploying a model, make sure to view the model details to make sure that the model is performing as expected.
-
-## Deploy model
-
-Deploying a model hosts and makes it available for predictions through an endpoint.
-
-When a model is deployed, you will be able to test the model directly in the portal or by calling the API associated to it.
-
-### Conversation projects deployments
-
-1. Click on *Add deployment* to submit a new deployment job
-
- :::image type="content" source="../media/add-deployment-model.png" alt-text="A screenshot showing the model deployment button in Language Studio." lightbox="../media/add-deployment-model.png":::
-
-2. In the window that appears, you can create a new deployment name by giving the deployment a name or override an existing deployment name. Then, you can add a trained model to this deployment name.
-
- :::image type="content" source="../media/create-deployment-job.png" alt-text="A screenshot showing the add deployment job screen in Language Studio." lightbox="../media/create-deployment-job.png":::
--
-#### Swap deployments
-
-If you would like to swap the models between two deployments, simply select the two deployment names you want to swap and click on **Swap deployments**. From the window that appears, select the deployment name you want to swap with.
--
-#### Delete deployment
-
-To delete a deployment, select the deployment you want to delete and click on **Delete deployment**.
-
-> [!TIP]
-> If you're using the REST API, see the [quickstart](../quickstart.md?pivots=rest-api#deploy-your-model) and REST API [reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/language-authoring-clu-apis-2022-03-01-preview/operations/Projects_TriggerImportProjectJob) for examples and more information.
-
-> [!NOTE]
-> You can only have ten deployment names.
-
-## Send a Conversational Language Understanding request
-
-Once your model is deployed, you can begin using the deployed model for predictions. Outside of the test model page, you can begin calling your deployed model via API requests to your provided custom endpoint. This endpoint request obtains the intent and entity predictions defined within the model.
-
-You can get the full URL for your endpoint by going to the **Deploy model** page, selecting your deployed model, and clicking on "Get prediction URL".
--
-Add your key to the `Ocp-Apim-Subscription-Key` header value, and replace the query and language parameters.
-
-> [!TIP]
-> As you construct your requests, see the [quickstart](../quickstart.md?pivots=rest-api#query-model) and REST API [reference documentation](https://aka.ms/clu-apis) for more information.
-
-### Use the client libraries (Azure SDK)
-
-You can also use the client libraries provided by the Azure SDK to send requests to your model.
-
-> [!NOTE]
-> The client library for conversational language understanding is only available for:
-> * .NET
-> * Python
-
-1. Go to your resource overview page in the [Azure portal](https://portal.azure.com/#home)
-
-2. From the menu on the left side, select **Keys and Endpoint**. Use endpoint for the API requests and you will need the key for `Ocp-Apim-Subscription-Key` header.
-
- :::image type="content" source="../../custom-classification/media/get-endpoint-azure.png" alt-text="Get the Azure endpoint" lightbox="../../custom-classification/media/get-endpoint-azure.png":::
-
-3. Download and install the client library package for your language of choice:
-
- |Language |Package version |
- |||
- |.NET | [5.2.0-beta.2](https://www.nuget.org/packages/Azure.AI.TextAnalytics/5.2.0-beta.2) |
- |Python | [5.2.0b2](https://pypi.org/project/azure-ai-textanalytics/5.2.0b2/) |
-
-4. After you've installed the client library, use the following samples on GitHub to start calling the API.
-
- * [C#](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/cognitivelanguage/Azure.AI.Language.Conversations/samples)
- * [Python](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cognitivelanguage/azure-ai-language-conversations/samples)
-
-5. See the following reference documentation for more information:
-
- * [C#](/dotnet/api/azure.ai.language.conversations?view=azure-dotnet-preview&preserve-view=true)
- * [Python](/python/api/azure-ai-language-conversations/azure.ai.language.conversations?view=azure-python-preview&preserve-view=true)
-
-## API response for a conversations project
-
-In a conversations project, you'll get predictions for both your intents and entities that are present within your project.
-- The intents and entities include a confidence score between 0.0 to 1.0 associated with how confident the model is about predicting a certain element in your project. -- The top scoring intent is contained within its own parameter.-- Only predicted entities will show up in your response.-- Entities indicate:
- - The text of the entity that was extracted
- - Its start location denoted by an offset value
- - The length of the entity text denoted by a length value.
-
-## Next steps
-
-* [Conversational Language Understanding overview](../overview.md)
cognitive-services Fail Over https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/fail-over.md
+
+ Title: Back up and recover your conversational language understanding models
+
+description: Learn how to save and recover your conversational language understanding models.
++++++ Last updated : 05/16/2022++++
+# Back up and recover your conversational language understanding models
+
+When you create a Language resource in the Azure portal, you specify a region for it to be created in. From then on, your resource and all of the operations related to it take place in the specified Azure server region. It's rare, but not impossible, to encounter a network issue that hits an entire region. If your solution needs to always be available, you should design it to fail over into another region. This requires two Azure Language resources in different regions and the ability to sync your CLU models across regions.
+
+If your app or business depends on the use of a CLU model, we recommend that you create a replica of your project in another supported region, so that if a regional outage occurs, you can access your model in the fail-over region where you replicated your project.
+
+Replicating a project means that you export your project metadata and assets and import them into a new project. This only makes a copy of your project settings, intents, entities and utterances. You still need to [train](../how-to/train-model.md) and [deploy](../how-to/deploy-model.md) the models to be available for use with [runtime APIs](https://aka.ms/clu-apis).
++
+In this article, you will learn how to use the export and import APIs to replicate your project from one resource to another in a different supported geographical region, along with guidance on keeping your projects in sync and the changes needed to your runtime consumption.
+
+## Prerequisites
+
+* Two Azure Language resources, each in a different Azure region.
+
+## Get your resource keys and endpoints
+
+Use the following steps to get the keys and endpoint of your primary and secondary resources. These will be used in the following steps.
++
+> [!TIP]
+> Keep a note of keys and endpoints for both primary and secondary resources. Use these values to replace the following placeholders:
+`{PRIMARY-ENDPOINT}`, `{PRIMARY-RESOURCE-KEY}`, `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}`.
+> Also take note of your project name, your model name and your deployment name. Use these values to replace the following placeholders: `{PROJECT-NAME}`, `{MODEL-NAME}` and `{DEPLOYMENT-NAME}`.
+
+## Export your primary project assets
+
+Start by exporting the project assets from the project in your primary resource.
+
+### Submit export job
+
+Replace the placeholders in the following request with your `{PRIMARY-ENDPOINT}` and `{PRIMARY-RESOURCE-KEY}` that you obtained in the first step.
++
+### Get export job status
+
+Replace the placeholders in the following request with your `{PRIMARY-ENDPOINT}` and `{PRIMARY-RESOURCE-KEY}` that you obtained in the first step.
++
+Copy the response body as you will use it as the body for the next import job.
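
As an illustration of the two steps above, here is a hedged Python sketch using `requests`. The export route, `api-version`, and `operation-location` polling pattern are assumptions based on the general shape of the authoring APIs; substitute the placeholders and verify the exact call against the authoring API reference linked at the end of this article.

```python
import requests, time

# Assumed placeholders for the primary resource; replace with your own values.
PRIMARY_ENDPOINT = "https://<primary-resource>.cognitiveservices.azure.com"
headers = {"Ocp-Apim-Subscription-Key": "<PRIMARY-RESOURCE-KEY>"}
params = {"api-version": "2022-05-01"}  # assumed; check the reference docs

# Submit the export job (the route is an assumption based on the authoring API shape).
export_url = (f"{PRIMARY_ENDPOINT}/language/authoring/analyze-conversations/"
              f"projects/<PROJECT-NAME>/:export")
job = requests.post(export_url, params=params, headers=headers)
job.raise_for_status()

# Poll the job status URL returned in the operation-location header until it completes.
job_url = job.headers["operation-location"]
while True:
    details = requests.get(job_url, headers=headers).json()
    if details.get("status") in ("succeeded", "failed", "cancelled"):
        break
    time.sleep(2)

# On success, the job details point to the exported project assets; keep them for the import step.
print(details)
```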
+
+## Import to a new project
+
+Now go ahead and import the exported project assets in your new project in the secondary region so you can replicate it.
+
+### Submit import job
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
++
+### Get import job status
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
+++
+## Train your model
+
+After importing your project, you have only copied the project's assets and metadata. You still need to train your model, which will incur usage on your account.
+
+### Submit training job
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
+++
+### Get Training Status
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
++
+## Deploy your model
+
+This is the step where you make your trained model available for consumption via the [runtime prediction API](https://aka.ms/ct-runtime-swagger).
+
+> [!TIP]
+> Use the same deployment name as your primary project for easier maintenance and minimal changes to your system to handle redirecting your traffic.
+
+### Submit deployment job
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
++
+### Get the deployment status
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
++
+## Changes in calling the runtime
+
+Within your system, at the step where you call the [runtime API](https://aka.ms/clu-apis), check the response code returned from the submit task API. If you observe a **consistent** failure in submitting the request, this could indicate an outage in your primary region. A single failure doesn't mean an outage; it may be a transient issue. Retry submitting the job through the secondary resource you have created. For the second request, use your `{YOUR-SECONDARY-ENDPOINT}` and secondary key; if you have followed the steps above, `{PROJECT-NAME}` and `{DEPLOYMENT-NAME}` would be the same, so no changes are required to the request body.
+
+In case you revert to using your secondary resource, you will observe a slight increase in latency because of the difference in regions where your model is deployed.
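
The following Python sketch (with `requests`) shows one simplified way to implement that logic: try the primary endpoint first, and only fall back to the secondary resource after repeated failures. The runtime route and body shape are assumptions, and the retry threshold is arbitrary; adapt both to your system.

```python
import requests

def predict(endpoint, key, body, retries=3):
    """Call the CLU runtime on one resource; return None after repeated failures."""
    url = f"{endpoint}/language/:analyze-conversations"  # assumed route
    headers = {"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"}
    for _ in range(retries):
        try:
            response = requests.post(url, params={"api-version": "2022-05-01"},
                                     headers=headers, json=body, timeout=10)
            if response.ok:
                return response.json()
        except requests.RequestException:
            pass  # transient network error; try again
    return None

# The same project and deployment names exist in both regions, so the body is unchanged.
body = {
    "kind": "Conversation",
    "analysisInput": {"conversationItem": {"id": "1", "participantId": "1", "text": "Book a flight"}},
    "parameters": {"projectName": "<PROJECT-NAME>", "deploymentName": "<DEPLOYMENT-NAME>"},
}

result = predict("https://<primary-resource>.cognitiveservices.azure.com", "<PRIMARY-RESOURCE-KEY>", body)
if result is None:  # consistent failures on the primary region: fail over to the secondary resource
    result = predict("https://<secondary-resource>.cognitiveservices.azure.com", "<SECONDARY-RESOURCE-KEY>", body)
```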
+
+## Check if your projects are out of sync
+
+Maintaining the freshness of both projects is an important part of the process. You need to frequently check if any updates were made to your primary project so that you can move them over to your secondary project. This way, if your primary region fails and you move to the secondary region, you should expect similar model performance since it already contains the latest updates. Setting the frequency of checking if your projects are in sync is an important choice; we recommend that you do this check daily in order to guarantee the freshness of data in your secondary model.
+
+### Get project details
+
+Use the following URL to get your project details; one of the keys returned in the body indicates the last modified date of the project.
+Repeat the following step twice, once for your primary project and once for your secondary project, and compare the timestamps returned for both of them to check if they are out of sync.
++
+Repeat the same steps for your replicated project using `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}`. Compare the returned `lastModifiedDateTime` from both projects. If your primary project was modified more recently than your secondary one, you need to repeat the steps of [exporting](#export-your-primary-project-assets), [importing](#import-to-a-new-project), [training](#train-your-model) and [deploying](#deploy-your-model) your model.
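
Here's a minimal Python sketch of that comparison. The project-details route and `api-version` are assumptions; the `lastModifiedDateTime` property is the one described above.

```python
import requests

def last_modified(endpoint, key, project):
    """Return the project's lastModifiedDateTime (route and api-version are assumptions)."""
    url = f"{endpoint}/language/authoring/analyze-conversations/projects/{project}"
    response = requests.get(url, params={"api-version": "2022-05-01"},
                            headers={"Ocp-Apim-Subscription-Key": key})
    response.raise_for_status()
    return response.json()["lastModifiedDateTime"]

primary = last_modified("https://<primary-resource>.cognitiveservices.azure.com",
                        "<PRIMARY-RESOURCE-KEY>", "<PROJECT-NAME>")
secondary = last_modified("https://<secondary-resource>.cognitiveservices.azure.com",
                          "<SECONDARY-RESOURCE-KEY>", "<PROJECT-NAME>")

# ISO-8601 timestamps in the same format compare correctly as strings.
if primary > secondary:
    print("Projects are out of sync: export, import, train, and deploy again.")
```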
+
+## Next steps
+
+In this article, you have learned how to use the export and import APIs to replicate your project to a secondary Language resource in other region. Next, explore the API reference docs to see what else you can do with authoring APIs.
+
+* [Authoring REST API reference ](https://aka.ms/clu-authoring-apis)
+
+* [Runtime prediction REST API reference ](https://aka.ms/clu-apis)
+
cognitive-services Tag Utterances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/tag-utterances.md
Previously updated : 04/05/2022 Last updated : 04/26/2022
-# How to tag utterances
+# Label your utterances in Language Studio
-Once you have [built a schema](build-schema.md) for your project, you should add training utterances to your project. The utterances should be similar to what your users will use when interacting with the project. When you add an utterance, you have to assign which intent it belongs to. After the utterance is added, label the words within your utterance with the entities in your project. Your labels for entities should be consistent across the different utterances.
+Once you have [built a schema](build-schema.md) for your project, you should add training utterances to your project. The utterances should be similar to what your users will use when interacting with the project. When you add an utterance, you have to assign which intent it belongs to. After the utterance is added, label the words within your utterance that you want to extract as entities.
-Tagging is the process of assigning your utterances to intents, and labeling them with entities. You will want to spend time tagging your utterances - introducing and refining the data that will train the underlying machine learning models for your project. The machine learning models generalize based on the examples you provide it. The more examples you provide, the more data points the model has to make better generalizations.
+Data labeling is a crucial step in the development lifecycle; this data will be used in the next step when training your model so that your model can learn from the labeled data. If you already have labeled utterances, you can [import them into your project](create-project.md#import-project) directly, but you need to make sure that your data follows the [accepted data format](../concepts/data-formats.md). See [create project](create-project.md#import-project) to learn more about importing labeled data into your project. Labeled data informs the model how to interpret text, and is used for training and evaluation.
-> [!NOTE]
-> An entity's learned components is only defined when you label utterances for that entity. You can also have entities that include _only_ list or prebuilt components without labelling learned components. see the [entity components](../concepts/entity-components.md) article for more information.
+## Prerequisites
-When you enable multiple languages in your project, you must also specify the language of the utterance you're adding. As part of the multilingual capabilities of Conversational Language Understanding, you can train your project in a dominant language, and then predict in the other available languages. Adding examples to other languages increases the model's performance in these languages if you determine it isn't doing well, but avoid duplicating your data across all the languages you would like to support.
+Before you can label your data, you need:
-For example, to improve a calender bot's performance with users, a developer might add examples mostly in English, and a few in Spanish or French as well. They might add utterances such as:
+* A successfully [created project](create-project.md).
-* "_Set a meeting with **Matt** and **Kevin** **tomorrow** at **12 PM**._" (English)
-* "_Reply as **tentative** to the **weekly update** meeting._" (English)
-* "_Cancelar mi **pr├│xima** reuni├│n_." (Spanish)
+See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
-## Tag utterances
+## Data labeling guidelines
+After [building your schema](build-schema.md) and [creating your project](create-project.md), you will need to label your data. Labeling your data is important so your model knows which words will be associated with the entities you need to extract. You will want to spend time labeling your utterances - introducing and refining the data that will be used in training your models.
-*Section 1* is where you choose which dataset you're viewing. You can add utterances in the training set or testing set.
-Your utterances are split into two sets:
-* Training set: These utterances are used to create your conversational model during training. The training set is processed as part of the training job to produce a trained model.
-* Testing set: These utterances are used to test the performance of your conversation model when the model is created. Testing set is also used to output the evaluation of the model.
+<!-- Composition guidance where does this live -->
-When adding utterances, you have the option to add to a specific set explicitly (training or testing). If you choose to do this you need to set your split type in train model page to **manual split of training and testing data**, otherwise keep all your utterances in the train set and use **Automatically split the testing set from training data**. See [How to train your model](train-model.md#train-model) for more information.
+<!--
+ > [!NOTE]
+ > An entity's learned components is only defined when you label utterances for that entity. You can also have entities that include _only_ list or prebuilt components without labelling learned components. see the [entity components](../concepts/entity-components.md) article for more information.
+ -->
-*Section 2* is where you add your utterances. You must select one of the intents from the drop-down list, the language of the utterance (if applicable), and the utterance itself. Press the enter key in the utterance's text box to add the utterance.
+As you add utterances and label them, keep in mind:
-*Section 3* includes your project's entities and distribution of intents and entities across your training set and testing set.
+* The machine learning models generalize based on the labeled examples you provide it; the more examples you provide, the more data points the model has to make better generalizations.
-You can select the highlight button next to any of the entities you've added, and then hover over the text to label the entities within your utterances, shown in *section 4*. You can also add new entities here by clicking the **+ Add Entity** button.
+* The precision, consistency and completeness of your labeled data are key factors to determining model performance.
-When you select your distribution, you can also view tag distribution across:
+ * **Label precisely**: Label each entity to its right type always. Only include what you want extracted, avoid unnecessary data in your labels.
+ * **Label consistently**: The same entity should have the same label across all the utterances.
+ * **Label completely**: Label all the instances of the entity in all your utterances.
-* Total instances per tagged entity: The distribution of each of your entities across the training and testing sets.
+* For [Multilingual projects](../language-support.md#multi-lingual-option), adding utterances in other languages increases the model's performance in these languages, but avoid duplicating your data across all the languages you would like to support. For example, to improve a calendar bot's performance with users, a developer might add examples mostly in English, and a few in Spanish or French as well. They might add utterances such as:
-* Unique utterances per tagged entity: How your entities are distributed among the different utterances you have.
-* Utterances per intent: The distribution of utterances among intents across your training and testing sets.
+ * "_Set a meeting with **Matt** and **Kevin** **tomorrow** at **12 PM**._" (English)
+ * "_Reply as **tentative** to the **weekly update** meeting._" (English)
+ * "_Cancelar mi **pr├│xima** reuni├│n_." (Spanish)
+## How to label your utterances
+
+Use the following steps to label your utterances:
+
+1. Go to your project page in [Language Studio](https://aka.ms/languageStudio).
+
+2. From the left side menu, select **Data labeling**. In this page, you can start adding your utterances and labeling them. You can also upload your utterances directly by clicking on **Upload utterance file** from the top menu; make sure they follow the [accepted format](../concepts/data-formats.md#utterance-file-format).
-*Section 4* includes the utterances you've added. You can drag over the text you want to label, and a contextual menu of the entities will appear.
+3. From the top pivots, you can change the view to be **training set** or **testing set**. Learn more about [training and testing sets](train-model.md#data-splitting) and how they're used for model training and evaluation.
+
+ :::image type="content" source="../media/tag-utterances.png" alt-text="A screenshot of the page for tagging utterances in Language Studio." lightbox="../media/tag-utterances.png":::
+
+ > [!TIP]
+ > If you are planning on using **Automatically split the testing set from training data** splitting, add all your utterances to the training set.
+
+
+4. From the **Select intent** dropdown menu, select one of the intents, the language of the utterance (for multilingual projects), and the utterance itself. Press the enter key in the utterance's text box to add the utterance.
+
+5. You have two options to label entities in an utterance:
+
+ |Option |Description |
+ |||
+ |Label using a brush | Select the brush icon next to an entity in the right pane, then highlight the text in the utterance you want to label. |
+ |Label using inline menu | Highlight the word you want to label as an entity, and a menu will appear. Select the entity you want to label these words with. |
+
+6. In the right side pane, under the **Labels** pivot, you can find all the entity types in your project and the count of labeled instances for each.
+
+7. Under the **Distribution** pivot you can view the distribution across training and testing sets. You have two options for viewing:
+ * *Total instances per labeled entity* where you can view count of all labeled instances of a specific entity.
+ * *Unique utterances per labeled entity* where each utterance is counted if it contains at least one labeled instance of this entity.
+ * *Utterances per intent* where you can view count of utterances per intent.
+
-> [!NOTE]
-> Unlike LUIS, you cannot label overlapping entities. The same characters cannot be labeled by more than one entity.
-> list and prebuilt components are not shown in the tag utterances page, and all labels here only apply to the learned component.
+ > [!NOTE]
+ > List and prebuilt components are not shown in the data labeling page, and all labels here only apply to the **learned component**.
-## Filter Utterances
+To remove a label:
+ 1. From within your utterance, select the entity you want to remove a label from.
+ 2. Scroll through the menu that appears, and select **Remove label**.
-Clicking on **Filter** lets you view only the utterances associated to the intents and entities you select in the filter pane.
-When clicking on an intent in the [build schema](./build-schema.md) page then you'll be moved to the **Tag Utterances** page, with that intent filtered automatically.
+To delete or rename an entity:
+ 1. Select the entity you want to edit in the right side pane.
+ 2. Click on the three dots next to the entity, and select the option you want from the drop-down menu.
## Next Steps
-* [Train and Evaluate Model](./train-model.md)
+* [Train Model](./train-model.md)
cognitive-services Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/train-model.md
Previously updated : 04/05/2022 Last updated : 05/12/2022
-# Train and evaluate models
+# Train your conversational language understanding model
-After you have completed [tagging your utterances](tag-utterances.md), you can train your model. Training is the process of converting the current state of your project's training data to build a model that can be used for predictions. Every time you train, you must name your training instance.
+After you have completed [labeling your utterances](tag-utterances.md), you can start training a model. Training is the process where the model learns from your [labeled utterances](tag-utterances.md). <!--After training is completed, you will be able to [view model performance](view-model-evaluation.md).-->
-You can create and train multiple models within the same project. However, if you re-train a specific model it overwrites the last state.
+To train a model, start a training job. Only successfully completed jobs create a model. Training jobs expire after seven days; after this time you will no longer be able to retrieve the job details. If your training job completed successfully and a model was created, it won't be affected by the job expiring. You can only have one training job running at a time, and you can't start other jobs in the same project.
-To train a model, you need to start a training job. The output of a successful training job is your trained model. Training jobs will be automatically deleted after seven days from creation, but the output trained model will not be deleted.
+The training times can be anywhere from a few seconds when dealing with simple projects, up to a couple of hours when you reach the [maximum limit](../service-limits.md) of utterances.
-The training times can be anywhere from a few seconds when dealing with orchestration workflow projects, up to a couple of hours when you reach the [maximum limit](../service-limits.md) of utterances. Before training, you will have the option to enable evaluation, which lets you view how your model performs.
+Model evaluation is triggered automatically after training is completed successfully. The evaluation process starts by using the trained model to run predictions on the utterances in the testing set, and compares the predicted results with the provided labels (which establishes a baseline of truth). <!--The results are returned so you can review the [modelΓÇÖs performance](view-model-evaluation.md).-->
+
+## Prerequisites
+
+* A successfully [created project](create-project.md) with a configured Azure blob storage account
+* [Labeled utterances](tag-utterances.md)
+
+<!--See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.-->
+
+## Data splitting
+
+Before you start the training process, labeled utterances in your project are divided into a training set and a testing set. Each one of them serves a different function.
+The **training set** is used in training the model; this is the set from which the model learns the labeled utterances.
+The **testing set** is a blind set that isn't introduced to the model during training but only during evaluation.
+
+After the model is trained successfully, the model can be used to make predictions from the utterances in the testing set. These predictions are used to calculate [evaluation metrics](../concepts/evaluation-metrics.md).
+It is recommended to make sure that all your intents and entities are adequately represented in both the training and testing set.
+
+Conversational language understanding supports two methods for data splitting:
+
+* **Automatically splitting the testing set from training data**: The system will split your tagged data between the training and testing sets, according to the percentages you choose. The recommended percentage split is 80% for training and 20% for testing.
+
+ > [!NOTE]
+ > If you choose the **Automatically splitting the testing set from training data** option, only the data assigned to training set will be split according to the percentages provided.
+
+* **Use a manual split of training and testing data**: This method enables users to define which utterances should belong to which set. This step is only enabled if you have added utterances to your testing set during [labeling](tag-utterances.md).
+
+## Training modes
+
+CLU supports two modes for training your models:
+
+* **Standard training** uses fast machine learning algorithms to train your models relatively quickly. This is currently only available for **English** and is disabled for any project that doesn't use English (US), or English (UK) as its primary language. This training option is free of charge. Standard training allows you to add utterances and test them quickly at no cost. The evaluation scores shown should guide you on where to make changes in your project and add more utterances. Once you've iterated a few times and made incremental improvements, you can consider using advanced training to train another version of your model.
+
+* **Advanced training** uses the latest in machine learning technology to customize models with your data. This is expected to show better performance scores for your models and will enable you to use the [multilingual capabilities](../language-support.md#multi-lingual-option) of CLU as well. Advanced training is priced differently. See the [pricing information](https://azure.microsoft.com/pricing/details/cognitive-services/language-service) for details.
+
+Use the evaluation scores to guide your decisions. There might be times where a specific example is predicted incorrectly in advanced training as opposed to when you used standard training mode. However, if the overall evaluation results are better using advanced training, then it is recommended to use that model as your final model. If that isn't the case, and you are not looking to use any multilingual capabilities, you can continue to use the model trained with standard mode.
+
+> [!Note]
+> You should expect to see a difference in behaviors in intent confidence scores between the training modes as each algorithm calibrates their scores differently.
## Train model
-1. Go to your project page in Language Studio.
-2. Select **Train** from the left side menu.
-3. Select **Start a training job** from the top menu.
-4. To train a new model, select **Train a new model** and type in the model name in the text box below. You can **overwrite an existing model** by selecting this option and select the model you want from the dropdown below.
+# [Language Studio](#tab/language-studio)
-Choose if you want to evaluate and measure your model's performance. The **Run evaluation with training toggle** is enabled by default, meaning you want to measure your model's performance and you will have the option to choose how you want to split your training and testing utterances. You are provided with the two options below:
-* **Automatically split the testing set from training data**: A selected stratified sample from all training utterances according to the percentages that you configure in the text box. The default value is set to 80% for training and 20% for testing. Any utterances already assigned to the testing set will be ignored completely if you choose this option.
+# [REST APIs](#tab/rest-api)
-* **Use a manual split of training and testing data**: The training and testing utterances that you've provided and assigned during tagging to create your custom model and measure its performance. Note that this option will only be enabled if you add utterances to the testing set in the tag data page. Otherwise, it will be disabled.
+### Start training job
+### Get training job status
-Click the **Train** button and wait for training to complete. You will see the training status of your training job. Only successfully completed jobs will generate models.
+Training could take some time depending on the size of your training data and the complexity of your schema. You can use the following request to keep polling the status of the training job until it is successfully completed.
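
As a rough sketch of that polling loop in Python with `requests` (the training route, body properties, `operation-location` header, and status values are assumptions; use the REST API reference for the exact contract):

```python
import requests, time

# Assumed placeholders; replace with your endpoint, key, project, and model names.
endpoint = "https://<your-resource>.cognitiveservices.azure.com"
headers = {"Ocp-Apim-Subscription-Key": "<your-resource-key>", "Content-Type": "application/json"}
project = "<project-name>"

# Start a training job (body properties are assumptions; see the API reference for the full schema).
train_url = f"{endpoint}/language/authoring/analyze-conversations/projects/{project}/:train"
job = requests.post(train_url, params={"api-version": "2022-05-01"}, headers=headers,
                    json={"modelLabel": "<model-name>", "trainingMode": "standard"})
job.raise_for_status()

# Poll the job until it reaches a terminal state.
job_url = job.headers["operation-location"]
while True:
    status = requests.get(job_url, headers=headers).json().get("status")
    print("Training status:", status)
    if status in ("succeeded", "failed", "cancelled"):
        break
    time.sleep(10)
```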
-## Evaluate model
-After model training is completed, you can view your model details and see how well it performs against the test set if you enabled evaluation in the training step. Observing how well your model performed is called evaluation. The test set is composed of 20% of your utterances, and this split is done at random before training. The test set consists of data that was not introduced to the model during the training process. For the evaluation process to complete there must be at least 10 utterances in your training set.
++
+### Cancel training job
-In the **view model details** page, you'll be able to see all your models, with their current training status, and the date they were last trained.
+# [Language Studio](#tab/language-studio)
-* Click on the model name for more details. A model name is only clickable if you've enabled evaluation before hand.
-* In the **Overview** section you can find the macro precision, recall and F1 score for the collective intents or entities, based on which option you select.
-* Under the **Intents** and **Entities** tabs you can find the micro precision, recall and F1 score for each intent or entity separately.
+# [REST APIs](#tab/rest-api)
-> [!NOTE]
-> If you don't see any of the intents or entities you have in your model displayed here, it is because they weren't in any of the utterances that were used for the test set.
++
-You can view the [confusion matrix](../concepts/evaluation-metrics.md#confusion-matrix) for intents and entities by clicking on the **Test set confusion matrix** tab at the top fo the screen.
## Next steps

* [Model evaluation metrics](../concepts/evaluation-metrics.md)
-* [Deploy and query the model](./deploy-query-model.md)
+<!--* [Deploy and query the model](./deploy-model.md)-->
cognitive-services View Model Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/view-model-evaluation.md
+
+ Title: How to view conversational language understanding models details
+description: Use this article to learn about viewing the details for a conversational language understanding model.
+++++++ Last updated : 05/16/2022++++
+# View conversational language understanding model details
+
+After model training is completed, you can view your model details and see how well it performs against the test set.
+
+> [!NOTE]
+> Using the **Automatically split the testing set from training data** option may result in different model evaluation results every time you [train a new model](train-model.md), as the test set is selected randomly from your utterances. To make sure that the evaluation is calculated on the same test set every time you train a model, make sure to use the **Use a manual split of training and testing data** option when starting a training job and define your **Testing set** when [adding your utterances](tag-utterances.md).
+
+## Prerequisites
+
+Before viewing a model's evaluation, you need:
+
+* A successfully [created project](create-project.md).
+* A successfully [trained model](train-model.md).
+
+See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
+
+## Model details
+
+### [Language studio](#tab/Language-studio)
+
+In the **view model details** page, you'll be able to see all your models, with their current training status, and the date they were last trained.
++
+### [REST APIs](#tab/REST-APIs)
++++
+## Delete model
+
+### [Language studio](#tab/Language-studio)
+++
+### [REST APIs](#tab/REST-APIs)
++++++
+## Next steps
+
+* As you review how your model performs, learn about the [evaluation metrics](../concepts/evaluation-metrics.md) that are used.
+* If you're happy with your model performance, you can [deploy your model](deploy-model.md).
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/language-support.md
Title: Conversational Language Understanding language support
+ Title: Conversational language understanding language support
-description: This article explains which natural languages are supported by the Conversational Language Understanding feature of Azure Cognitive Service for Language.
+description: This article explains which natural languages are supported by the conversational language understanding feature of Azure Cognitive Service for Language.
Previously updated : 03/14/2022 Last updated : 05/12/2022
-# Conversational Language Understanding language support
+# Language support for conversational language understanding
-Use this article to learn which natural languages are supported by the Conversational Language Understanding feature.
+Use this article to learn about the languages currently supported by the CLU feature.
-### Supported languages for conversation projects
+## Multi-lingual option
-When creating a conversation project in CLU, you can specify the primary language of your project. The primary language is used as the default language of the project.
+With conversational language understanding, you can train a model in one language and use it to predict intents and entities from utterances in another language. This feature is powerful because it helps save time and effort. Instead of building separate projects for every language, you can handle a multi-lingual dataset in one project. Your dataset doesn't have to be entirely in the same language, but you should enable the multi-lingual option for your project while creating it, or later in the project settings. If you notice your model performing poorly in certain languages during the evaluation process, consider adding more data in these languages to your training set.
-The supported languages for conversation projects are:
+You can train your project entirely with English utterances, and query it in: French, German, Mandarin, Japanese, Korean, and others. Conversational language understanding makes it easy for you to scale your projects to multiple languages by using multilingual technology to train your models.
+
+Whenever you identify that a particular language is not performing as well as other languages, you can add utterances for that language in your project. In the tag utterances page in Language Studio, you can select the language of the utterance you're adding. When you introduce examples for that language to the model, it is introduced to more of the syntax of that language, and learns to predict it better.
+
+You aren't expected to add the same number of utterances for every language. You should build the majority of your project in one language, and only add a few utterances in languages you observe aren't performing well. If you create a project that is primarily in English, and start testing it in French, German, and Spanish, you might observe that German doesn't perform as well as the other two languages. In that case, consider adding 5% of your original English examples in German, train a new model and test in German again. You should see better results for German queries. The more utterances you add, the more likely the results are going to get better.
+
+When you add data in another language, you shouldn't expect it to negatively affect other languages.
+
+### List and prebuilt components in multiple languages
+
+Projects with multiple languages enabled will allow you to specify synonyms **per language** for every list key. Depending on the language you query your project with, you will only get matches for the list component with synonyms of that language. When you query your project, you can specify the language in the request body:
+
+```json
+"query": "{query}"
+"language": "{language code}"
+```
+
+If you do not provide a language, it will fall back to the default language of your project.
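
For example, a query against a deployment can carry the language alongside the query text. The Python sketch below mirrors the fragment above; the surrounding fields and the runtime URL placeholder are assumptions, so check the prediction API reference for the full body your API version expects.

```python
import requests

headers = {"Ocp-Apim-Subscription-Key": "<your-resource-key>", "Content-Type": "application/json"}

# Mirrors the "query"/"language" fragment above; omit "language" to fall back to the project default.
body = {
    "query": "Cancelar mi próxima reunión",
    "language": "es",
}

# "<runtime-url>" stands in for your deployment's prediction endpoint; see the prediction API reference.
response = requests.post("<runtime-url>", headers=headers, json=body)
print(response.json())
```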
+
+Prebuilt components are similar, where you should expect to get predictions for prebuilt components that are available in specific languages. The request's language again determines which components are attempting to be predicted. <!--See the [prebuilt components](../prebuilt-component-reference.md) reference article for the language support of each prebuilt component.-->
+++
+## Languages supported by conversational language understanding
+
+Conversational language understanding supports utterances in the following languages:
| Language | Language code | | | |
The supported languages for conversation projects are:
| German | `de` | | Greek | `el` | | English (US) | `en-us` |
-| English (Uk) | `en-gb` |
+| English (UK) | `en-gb` |
| Esperanto | `eo` | | Spanish | `es` | | Estonian | `et` |
The supported languages for conversation projects are:
| Chinese (Traditional) | `zh-hant` | | Zulu | `zu` |
-#### Multilingual conversation projects
-When you enable multiple languages in a project, you can add data in multiple languages to your project. You can also train the project in one language and immediately predict it in other languages. The quality of predictions may vary between languages – and certain language groups work better than others with respect to multilingual predictions.
## Next steps
-[Conversational language understanding overview](overview.md)
+* [Conversational language understanding overview](overview.md)
+* [Service limits](service-limits.md)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/overview.md
Previously updated : 11/02/2021 Last updated : 05/13/2022
-# What is Conversational language understanding (preview)?
+# What is conversational language understanding?
-Conversational language understanding is a cloud-based service that enables you to train conversational language models using state-of-the-art technology.
+Conversational language understanding is one of the custom features offered by [Azure Cognitive Service for Language](../overview.md). It is a cloud-based API service that applies machine-learning intelligence to enable you to build a natural language understanding component to be used in an end-to-end conversational application.
-The API is a part of [Azure Cognitive Services](../../index.yml), a collection of machine learning and AI algorithms in the cloud for your development projects. You can use these features with the REST API, or the client libraries.
+Conversational language understanding (CLU) enables users to build custom natural language understanding models to predict the overall intention of an incoming utterance and extract important information from it. CLU only provides the intelligence to understand the input text for the client application and doesn't perform any actions. By creating a CLU project, developers can iteratively tag utterances, train and evaluate model performance before making it available for consumption. The quality of the tagged data greatly impacts model performance. To simplify building and customizing your model, the service offers a custom web portal that can be accessed through the [Language studio](https://aka.ms/languageStudio). You can easily get started with the service by following the steps in this [quickstart](quickstart.md).
-## Features
+This documentation contains the following article types:
-Conversational language understanding applies custom machine-learning intelligence to a user's conversational, natural language text to predict overall meaning, and pull out relevant, detailed information.
+* [Quickstarts](quickstart.md) are getting-started instructions to guide you through making requests to the service.
+* [Concepts](concepts/evaluation-metrics.md) provide explanations of the service functionality and features.
+* [How-to guides](how-to/tag-utterances.md) contain instructions for using the service in more specific or customized ways.
++
+## Example usage scenarios
+
+CLU can be used in multiple scenarios across a variety of industries. Some examples are:
+
+### End-to-end conversational bot
+
+Use CLU to build and train a custom natural language understanding model based on a specific domain and the expected users' utterances. Integrate it with any end-to-end conversational bot so that it can process and analyze incoming text in real time to identify the intention of the text and extract important information from it. Have the bot perform the desired action based on the intention and extracted information. An example would be a customized retail bot for online shopping or food ordering.
+
+### Human assistant bots
+
+One example of a human assistant bot is to help staff improve customer engagements by triaging customer queries and assigning them to the appropriate support engineer. Another example would be a human resources bot in an enterprise that allows employees to communicate in natural language and receive guidance based on the query.
+
+### Command and control application
+
+When you integrate a client application with a speech-to-text component, users can speak a command in natural language for CLU to process, identify intent, and extract information from the text for the client application to perform an action. This use case has many applications, such as to stop, play, forward, and rewind a song or turn lights on or off.
+
+### Enterprise chat bot
+
+In a large corporation, an enterprise chat bot may handle a variety of employee affairs. It might handle frequently asked questions served by a custom question answering knowledge base, a calendar-specific skill served by conversational language understanding, and an interview feedback skill served by LUIS. Use Orchestration workflow to connect all these skills together and appropriately route the incoming requests to the correct service.
++
+## Project development lifecycle
+
+Creating a CLU project typically involves several different steps.
++
+Follow these steps to get the most out of your model:
+
+1. **Build schema**: Know your data and define the actions and relevant information that need to be recognized from users' input utterances. In this step you create the [intents](glossary.md#intent) that you want to assign to users' utterances, and the relevant [entities](glossary.md#entity) you want extracted.
+
+2. **Tag data**: The quality of data tagging is a key factor in determining model performance.
+
+3. **Train model**: Your model starts learning from your tagged data.
+
+4. **View model evaluation details**: View the evaluation details for your model to determine how well it performs when introduced to new data.
+
+5. **Deploy model**: Deploying a model makes it available for use via the [Runtime API](https://aka.ms/clu-apis).
+
+6. **Predict intents and entities**: Use your custom model to predict intents and entities from users' utterances.
+
+## Reference documentation and code samples
+
+As you use CLU, see the following reference documentation and samples for Azure Cognitive Services for Language:
+
+|Development option / language |Reference documentation |Samples |
+||||
+|REST APIs (Authoring) | [REST API documentation](https://aka.ms/clu-authoring-apis) | |
+|REST APIs (Runtime) | [REST API documentation](https://aka.ms/clu-apis) | |
+|C# | [C# documentation](/dotnet/api/azure.ai.textanalytics?view=azure-dotnet-preview&preserve-view=true) | [C# samples](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/cognitivelanguage/Azure.AI.Language.Conversations/samples) |
+|Python | [Python documentation](/python/api/overview/azure/ai-textanalytics-readme?view=azure-python-preview&preserve-view=true) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cognitivelanguage/azure-ai-language-conversations/samples) |
+
+## Responsible AI
+
+An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it's deployed. Read the [transparency note for CLU]() to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
++
+## Next steps
+
+* Use the [quickstart article](quickstart.md) to start using conversational language understanding.
+
+* As you go through the project development lifecycle, review the [glossary](glossary.md) to learn more about the terms used throughout the documentation for this feature.
+
+* Remember to view the [service limits](service-limits.md) for information such as [regional availability](service-limits.md#regional-availability).
-* Advanced natural language understanding technology using advanced neural networks.
-* Robust and semantically aware classification and extraction models.
-* Simplified model building experience, using Language Studio.
-* Natively multilingual models that let you to train in one language, and test in others.
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/quickstart.md
zone_pivot_groups: usage-custom-language-features
-# Quickstart: Conversational language understanding (preview)
+# Quickstart: Conversational language understanding
-Use this article to get started with Custom text classification using Language Studio and the REST API. Follow these steps to try out an example.
+Use this article to get started with conversational language understanding using Language Studio and the REST API. Follow these steps to try out an example.
::: zone pivot="language-studio"
cognitive-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/service-limits.md
Previously updated : 11/02/2021 Last updated : 05/12/2022 -+
-# Service limits for Conversational Language Understanding
+# Conversational language understanding limits
-Learn about the data, region, and throughput limits for the Conversational Language Understanding service
+Use this article to learn about the data and service limits when using conversational language understanding.
-## Region limits
+## Language resource limits
-- Conversational Language Understanding is only available in 2 regions: **West US 2** and **West Europe**. -- The only available SKU to access Conversational Language Understanding is the **Language** resource with the **S** sku.
+* Your Language resource must be one of the following [pricing tiers](https://azure.microsoft.com/pricing/details/cognitive-services/language-service/):
-## Data limits
+ |Tier|Description|Limit|
+ |--|--|--|
+ |F0|Free tier|You are only allowed **one** Language resource **per subscription**.|
+ |S |Paid tier|You can have up to 100 Language resources in the S tier per region.|
++
+See [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/language-service/) for more information.
+
+* You can have up to **500** projects per resource.
+
+* Project names have to be unique within the same resource across all custom features.
+
+### Regional availability
-The following limits are observed for the Conversational Language Understanding service.
+Conversational language understanding is only available in some Azure regions. To use conversational language understanding, you must choose a Language resource in one of the following regions:
+* West US 2
+* East US
+* East US 2
+* West US 3
+* South Central US
+* West Europe
+* North Europe
+* UK South
+* Australia East
-|Item|Limit|
-| | |
-|Utterances|15,000 per project|
-|Intents|500 per project|
-|Entities|100 per project|
-|Utterance length|500 characters|
-|Intent and entity name length|50 characters|
-|Models|10 per project|
-|Projects|500 per resource|
-|Synonyms|20,000 per list component|
-## Throughput limits
+## API limits
-|Item | Limit |
- |
-|Authoring Calls| 60 requests per minute|
-|Prediction Calls| 60 requests per minute|
+|Item|Request type| Maximum limit|
+|:-|:-|:-|
+|Authoring API|POST|10 per minute|
+|Authoring API|GET|100 per minute|
+|Prediction API|GET/POST|1,000 per minute|
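+
+Because these limits are enforced per minute, client code that issues many authoring or prediction calls typically backs off and retries when a request is rejected for exceeding the rate (HTTP 429). The following pattern is an illustrative sketch rather than part of the service or any SDK; the delay values are arbitrary assumptions.
+
+```csharp
+// Illustrative retry helper: back off and retry when a request is rejected with
+// HTTP 429 because a per-minute limit was exceeded.
+using System;
+using System.Net;
+using System.Net.Http;
+using System.Threading.Tasks;
+
+static class RateLimitSketch
+{
+    public static async Task<HttpResponseMessage> SendWithBackoffAsync(
+        HttpClient http, Func<HttpRequestMessage> createRequest, int maxAttempts = 4)
+    {
+        for (var attempt = 1; ; attempt++)
+        {
+            // HttpRequestMessage instances can't be reused, so build a fresh one per attempt.
+            var response = await http.SendAsync(createRequest());
+            if (response.StatusCode != (HttpStatusCode)429 || attempt == maxAttempts)
+            {
+                return response;
+            }
+
+            // Wait longer after each rejected attempt: 2s, 4s, 8s, ...
+            await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, attempt)));
+        }
+    }
+}
+```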
## Quota limits
-| Item | Limit |
- |
-|Authoring Calls| Unlimited|
-|Prediction Calls| 100,000 per month|
+|Pricing tier |Item |Limit |
+| | | |
+|F|Training time| 1 hour per month |
+|S|Training time| Unlimited, Pay as you go |
+|F|Prediction Calls| 5,000 requests per month |
+|S|Prediction Calls| Unlimited, Pay as you go |
+
+## Data limits
+
+The following limits are observed for conversational language understanding.
+
+|Item|Lower Limit| Upper Limit |
+| | | |
+|Count of utterances per project | 1 | 15,000|
+|Utterance length in characters | 1 | 500 |
+|Count of intents per project | 1 | 500|
+|Count of entities per project | 1 | 500|
+|Count of list synonyms per entity| 0 | 20,000 |
+|Count of prebuilt components per entity| 0 | 7 |
+|Count of trained models per project| 0 | 10 |
+|Count of deployments per project| 0 | 10 |
+
+## Naming limits
+
+| Item | Limits |
+|--|--|
+| Project name | You can only use letters `(a-z, A-Z)`, and numbers `(0-9)` with no spaces. Maximum allowed length is 50 characters. |
+| Model name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. |
+| Deployment name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. |
+| Intent name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. |
+| Entity name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. |
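+
+If you create projects, models, or deployments programmatically, it can help to validate names against these rules before calling the authoring APIs. The regular expressions below are a client-side convenience derived from the table above; they aren't exposed by the service itself.
+
+```csharp
+// Client-side checks derived from the naming limits table: project names allow only
+// letters and digits; the other names also allow _ . - ; all are capped at 50 characters.
+using System.Text.RegularExpressions;
+
+static class NamingSketch
+{
+    static readonly Regex ProjectName = new Regex(@"^[A-Za-z0-9]{1,50}$");
+    static readonly Regex OtherName = new Regex(@"^[A-Za-z0-9_.\-]{1,50}$");
+
+    public static bool IsValidProjectName(string name) => ProjectName.IsMatch(name);
+
+    // Model, deployment, intent, and entity names share the same pattern.
+    public static bool IsValidMemberName(string name) => OtherName.IsMatch(name);
+}
+```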
## Next steps
-[Conversational Language Understanding overview](overview.md)
+* [Conversational language understanding overview](overview.md)
cognitive-services Bot Framework https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/tutorials/bot-framework.md
+
+ Title: Add natural language understanding to your bot in Bot Framework SDK using conversational language understanding
+description: Learn how to train a bot to understand natural language.
+keywords: conversational language understanding, bot framework, bot, language understanding, nlu
+++++++ Last updated : 05/17/2022++
+# Integrate conversational language understanding with Bot Framework
+
+A dialog is the interaction that occurs between user queries and an application. Dialog management is the process that defines the automatic behavior that should occur for different customer interactions. While conversational language understanding can classify intents and extract information through entities, the [Bot Framework SDK](/azure/bot-service/bot-service-overview) allows you to configure the applied logic for the responses returned from it.
+
+This tutorial will explain how to integrate your own conversational language understanding (CLU) flight booking project with the Bot Framework SDK. The project includes three intents: **Book Flight**, **Get Weather**, and **None**.
++
+## Prerequisites
+
+- Create a [Language resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics) in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**.
+ - You will need the key and endpoint from the resource you create to connect your bot to the API. You'll paste your key and endpoint into the code below later in the tutorial.
+- Download the **Core Bot** for CLU [sample in C#](https://aka.ms/clu-botframework-overview).
+ - Clone the entire Bot Framework Samples repository to get access to this sample project.
++
+## Import a project in conversational language understanding
+
+1. Copy the [FlightBooking.json](https://aka.ms/clu-botframework-json) file in the **Core Bot** for CLU sample.
+2. Sign into the [Language Studio](https://language.cognitive.azure.com/) and select your Language resource.
+3. Navigate to [Conversational Language Understanding](https://language.cognitive.azure.com/clu/projects) and click on the service. This will route you to the projects page. Click the Import button next to the Create New Project button. Import the FlightBooking.json file with **FlightBooking** as the project name. This will automatically import the CLU project with all the intents, entities, and utterances.
+
+ :::image type="content" source="../media/import.png" alt-text="A screenshot showing where to import a J son file." lightbox="../media/import.png":::
+
+4. Once the project is loaded, click on **Training** on the left. Click **Start a training job**, provide the model name **v1**, and click **Train**. All other settings such as **Standard Training** and the evaluation settings can be left as is.
+
+ :::image type="content" source="../media/train-model.png" alt-text="A screenshot of the training page in C L U." lightbox="../media/train-model.png":::
+
+5. Once training is complete, click on **Deployments** on the left. Click **Add deployment**, create a new deployment named **Testing**, and assign model **v1** to the deployment.
+
+ :::image type="content" source="../media/deploy-model-tutorial.png" alt-text="A screenshot of the deployment page within the deploy model screen in C L U." lightbox="../media/deploy-model-tutorial.png":::
+
+## Update the settings file
+
+Now that your CLU project is deployed and ready, update the settings that will connect to the deployment.
+
+In the **Core Bot** sample, update your [appsettings.json](https://aka.ms/clu-botframework-settings) with the appropriate values.
+
+- The _CluProjectName_ is **FlightBooking**.
+- The _CluDeploymentName_ is **Testing**.
+- The _CluAPIKey_ can be either of the keys in the **Keys and Endpoint** section for your Language resource in the [Azure portal](https://portal.azure.com). You can also copy your key from the Project Settings tab in CLU.
+- The _CluAPIHostName_ is the endpoint found in the **Keys and Endpoint** section for your Language resource in the Azure portal. Note that the format should be ```<Language_Resource_Name>.cognitiveservices.azure.com``` without `https://`.
+
+```json
+{
+ "MicrosoftAppId": "",
+ "MicrosoftAppPassword": "",
+ "CluProjectName": "",
+ "CluDeploymentName": "",
+ "CluAPIKey": "",
+ "CluAPIHostName": ""
+}
+```
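+
+As you'll see in the next section, the sample builds the endpoint by prepending `https://` to _CluAPIHostName_, so the host name you paste must not already include a scheme. The helper below is a hypothetical convenience for normalizing that value before you save it; it isn't part of the Core Bot sample.
+
+```csharp
+using System;
+
+static class SettingsSketch
+{
+    // Hypothetical helper (not in the sample): strip a scheme and trailing slash if they
+    // were pasted in, so the sample's "https://" + CluAPIHostName concatenation stays valid.
+    public static string NormalizeCluHostName(string configuredValue)
+    {
+        var value = configuredValue.Trim();
+        if (value.StartsWith("https://", StringComparison.OrdinalIgnoreCase))
+        {
+            value = value.Substring("https://".Length);
+        }
+        return value.TrimEnd('/');
+    }
+}
+```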
+
+## Identify integration points
+
+In the Core Bot sample, under the CLU folder, check out the **FlightBookingRecognizer.cs** file. This is where the call to the deployed CLU endpoint is made to retrieve the CLU prediction for intents and entities.
+
+```csharp
+ public FlightBookingRecognizer(IConfiguration configuration)
+ {
+ var cluIsConfigured = !string.IsNullOrEmpty(configuration["CluProjectName"]) && !string.IsNullOrEmpty(configuration["CluDeploymentName"]) && !string.IsNullOrEmpty(configuration["CluAPIKey"]) && !string.IsNullOrEmpty(configuration["CluAPIHostName"]);
+ if (cluIsConfigured)
+ {
+ var cluApplication = new CluApplication(
+ configuration["CluProjectName"],
+ configuration["CluDeploymentName"],
+ configuration["CluAPIKey"],
+ "https://" + configuration["CluAPIHostName"]);
+ // Set the recognizer options depending on which endpoint version you want to use.
+ var recognizerOptions = new CluOptions(cluApplication)
+ {
+ Language = "en"
+ };
+
+ _recognizer = new CluRecognizer(recognizerOptions);
+        }
+    }
+```
++
+Under the **Dialogs** folder, find **MainDialog**, which uses the following to make a CLU prediction.
+
+```csharp
+ var cluResult = await _cluRecognizer.RecognizeAsync<FlightBooking>(stepContext.Context, cancellationToken);
+```
+
+The logic that determines what to do with the CLU result follows it.
+
+```csharp
+ switch (cluResult.TopIntent().intent)
+ {
+ case FlightBooking.Intent.BookFlight:
+ // Initialize BookingDetails with any entities we may have found in the response.
+ var bookingDetails = new BookingDetails()
+ {
+ Destination = cluResult.Entities.toCity,
+ Origin = cluResult.Entities.fromCity,
+ TravelDate = cluResult.Entities.flightDate,
+ };
+
+ // Run the BookingDialog giving it whatever details we have from the CLU call, it will fill out the remainder.
+ return await stepContext.BeginDialogAsync(nameof(BookingDialog), bookingDetails, cancellationToken);
+
+ case FlightBooking.Intent.GetWeather:
+ // We haven't implemented the GetWeatherDialog so we just display a TODO message.
+ var getWeatherMessageText = "TODO: get weather flow here";
+ var getWeatherMessage = MessageFactory.Text(getWeatherMessageText, getWeatherMessageText, InputHints.IgnoringInput);
+ await stepContext.Context.SendActivityAsync(getWeatherMessage, cancellationToken);
+ break;
+
+ default:
+ // Catch all for unhandled intents
+ var didntUnderstandMessageText = $"Sorry, I didn't get that. Please try asking in a different way (intent was {cluResult.TopIntent().intent})";
+ var didntUnderstandMessage = MessageFactory.Text(didntUnderstandMessageText, didntUnderstandMessageText, InputHints.IgnoringInput);
+ await stepContext.Context.SendActivityAsync(didntUnderstandMessage, cancellationToken);
+ break;
+ }
+```
+
+## Run the bot locally
+
+Run the sample locally on your machine, either from a terminal or from Visual Studio:
+
+### Run the bot from a terminal
+
+From a terminal, navigate to `samples/csharp_dotnetcore/90.core-bot-with-clu/90.core-bot-with-clu`
+
+Then run the following command
+
+```bash
+# run the bot
+dotnet run
+```
+
+### Run the bot from Visual Studio
+
+1. Launch Visual Studio
+1. From the top navigation menu, select **File**, **Open**, then **Project/Solution**
+1. Navigate to the `samples/csharp_dotnetcore/90.core-bot-with-clu/90.core-bot-with-clu` folder
+1. Select the `CoreBotWithCLU.csproj` file
+1. Press `F5` to run the project
++
+## Testing the bot using Bot Framework Emulator
+
+[Bot Framework Emulator](https://github.com/microsoft/botframework-emulator) is a desktop application that allows bot developers to test and debug their bots on localhost or running remotely through a tunnel.
+
+- Install the [latest Bot Framework Emulator](https://github.com/Microsoft/BotFramework-Emulator/releases).
+
+## Connect to the bot using Bot Framework Emulator
+
+1. Launch Bot Framework Emulator
+1. Select **File**, then **Open Bot**
+1. Enter a Bot URL of `http://localhost:3978/api/messages` and press Connect and wait for it to load
+1. You can now query for different examples such as "Travel from Cairo to Paris" and observe the results
+
+If the top intent returned from CLU resolves to "_Book flight_", your bot will ask additional questions until it has enough information stored to create a travel booking. At that point, it will return this booking information back to your user.
+
+## Next steps
+
+Learn more about the [Bot Framework SDK](/azure/bot-service/bot-service-overview).
++
cognitive-services Data Formats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/concepts/data-formats.md
- Title: Custom text classification data formats-
-description: Learn about the data formats accepted by custom text classification.
------ Previously updated : 11/02/2021----
-# Accepted data formats
-
-When data is used by your model for learning, it expects the data to be in a specific format. When you tag your data in Language Studio, it gets converted to the JSON format described in this article. You can also manually tag your files.
--
-## JSON file format
-
-Your tags file should be in the `json` format below.
-
-```json
-{
- "classifiers": [
- {
- "name": "Class1"
- },
- {
- "name": "Class2"
- }
- ],
- "documents": [
- {
- "location": "file1.txt",
- "language": "en-us",
- "classifiers": [
- {
- "classifierName": "Class2"
- },
- {
- "classifierName": "Class1"
- }
- ]
- },
- {
- "location": "file2.txt",
- "language": "en-us",
- "classifiers": [
- {
- "classifierName": "Class2"
- }
- ]
- }
- ]
-}
-```
-
-### Data description
-
-* `classifiers`: An array of classifiers for your data. Each classifier represents one of the classes you want to tag your data with.
-* `documents`: An array of tagged documents.
- * `location`: The path of the file. The file has to be in root of the storage container.
- * `language`: Language of the file. Use one of the [supported culture locales](../language-support.md).
- * `classifiers`: Array of classifier objects assigned to the file. If you're working on a single label classification project, there should be one classifier per file only.
-
-## Next steps
-
-See the [how-to article](../how-to/tag-data.md) more information about tagging your data. When you're done tagging your data, you can [train your model](../how-to/train-model.md).
cognitive-services Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/concepts/evaluation.md
- Title: Custom text classification evaluation metrics-
-description: Learn about evaluation metrics in custom text classification.
------ Previously updated : 04/05/2022----
-# Evaluation metrics
-
-Your [dataset is split](../how-to/train-model.md) into two parts: a set for training, and a set for testing. The training set is used while building the model, and the testing set is used as a blind set to evaluate model performance after training is completed.
-
-Model evaluation is triggered after training is completed successfully. The evaluation process starts by using the trained model to predict user defined classes for files in the test set, and compares them with the provided data tags (which establishes a baseline of truth). The results are returned so you can review the model's performance. For evaluation, custom text classification uses the following metrics:
-
-* **Precision**: Measures how precise/accurate your model is. It is the ratio between the correctly identified positives (true positives) and all identified positives. The precision metric reveals how many of the predicted classes are correctly tagged.
-
- `Precision = #True_Positive / (#True_Positive + #False_Positive)`
-
-* **Recall**: Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
-
- `Recall = #True_Positive / (#True_Positive + #False_Negatives)`
-
-* **F1 score**: The F1 score is a function of Precision and Recall. It's needed when you seek a balance between Precision and Recall.
-
- `F1 Score = 2 * Precision * Recall / (Precision + Recall)` <br>
-
->[!NOTE]
-> Precision, recall and F1 score are calculated for each class separately (*class-level* evaluation) and for the model collectively (*model-level* evaluation).
-
-## Model-level and Class-level evaluation metrics
-
-The definitions of precision, recall, and evaluation are the same for both class-level and model-level evaluations. However, the count of *True Positive*, *False Positive*, and *False Negative* differ as shown in the following example.
-
-The below sections use the following example dataset:
-
-| File | Actual classes | Predicted classes |
-|--|--|--|
-| 1 | action, comedy | comedy|
-| 2 | action | action |
-| 3 | romance | romance |
-| 4 | romance, comedy | romance |
-| 5 | comedy | action |
-
-### Class-level evaluation for the *action* class
-
-| Key | Count | Explanation |
-|--|--|--|
-| True Positive | 1 | File 2 was correctly classified as *action*. |
-| False Positive | 1 | File 5 was mistakenly classified as *action*. |
-| False Negative | 1 | File 1 was not classified as *Action* though it should have. |
-
-**Precision** = `#True_Positive / (#True_Positive + #False_Positive) = 1 / (1 + 1) = 0.5`
-
-**Recall** = `#True_Positive / (#True_Positive + #False_Negatives) = 1 / (1 + 1) = 0.5`
-
-**F1 Score** = `2 * Precision * Recall / (Precision + Recall) = (2 * 0.5 * 0.5) / (0.5 + 0.5) = 0.5`
-
-### Class-level evaluation for the *comedy* class
-
-| Key | Count | Explanation |
-|--|--|--|
-| True positive | 1 | File 1 was correctly classified as *comedy*. |
-| False positive | 0 | No files were mistakenly classified as *comedy*. |
-| False negative | 2 | Files 5 and 4 were not classified as *comedy* though they should have. |
-
-**Precision** = `#True_Positive / (#True_Positive + #False_Positive) = 1 / (1 + 0) = 1`
-
-**Recall** = `#True_Positive / (#True_Positive + #False_Negatives) = 1 / (1 + 2) = 0.67`
-
-**F1 Score** = `2 * Precision * Recall / (Precision + Recall) = (2 * 1 * 0.67) / (1 + 0.67) = 0.8`
-
-### Model-level evaluation for the collective model
-
-| Key | Count | Explanation |
-|--|--|--|
-| True Positive | 4 | Files 1, 2, 3 and 4 were given correct classes at prediction. |
-| False Positive | 1 | File 5 was given a wrong class at prediction. |
-| False Negative | 2 | Files 1 and 4 were not given all correct class at prediction. |
-
-**Precision** = `#True_Positive / (#True_Positive + #False_Positive) = 4 / (4 + 1) = 0.8`
-
-**Recall** = `#True_Positive / (#True_Positive + #False_Negatives) = 4 / (4 + 2) = 0.67`
-
-**F1 Score** = `2 * Precision * Recall / (Precision + Recall) = (2 * 0.8 * 0.67) / (0.8 + 0.67) = 0.73`
-
-> [!NOTE]
-> For single-label classification models, the count of false negatives and false positives are always equal. Custom single-label classification models always predict one class for each file. If the prediction is not correct, FP count of the predicted class increases by one and FN of the actual class increases by one, overall count of FP and FN for the model will always be equal. This is not the case for multi-label classification, because failing to predict one of the classes of a file is counted as a false negative.
-
-## Interpreting class-level evaluation metrics
-
-So what does it actually mean to have a high precision or a high recall for a certain class?
-
-| Recall | Precision | Interpretation |
-|--|--|--|
-| High | High | This class is perfectly handled by the model. |
-| Low | High | The model cannot always predict this class but when it does it is with high confidence. This may be because this class is underrepresented in the dataset so consider balancing your data distribution.|
-| High | Low | The model predicts this class well, however it is with low confidence. This may be because this class is over represented in the dataset so consider balancing your data distribution. |
-| Low | Low | This class is poorly handled by the model where it is not usually predicted and when it is, it is not with high confidence. |
-
-Custom text classification models are expected to experience both false negatives and false positives. You need to consider how each will affect the overall system, and carefully think through scenarios where the model will ignore correct predictions, and recognize incorrect predictions. Depending on your scenario, either *precision* or *recall* could be more suitable evaluating your model's performance.
-
-For example, if your scenario involves processing technical support tickets, predicting the wrong class could cause it to be forwarded to the wrong department/team. In this example, you should consider making your system more sensitive to false positives, and precision would be a more relevant metric for evaluation.
-
-As another example, if your scenario involves categorizing email as "*important*" or "*spam*", an incorrect prediction could cause you to miss a useful email if it's labeled "*spam*". However, if a spam email is labeled *important* you can simply disregard it. In this example, you should consider making your system more sensitive to false negatives, and recall would be a more relevant metric for evaluation.
-
-If you want to optimize for general purpose scenarios or when precision and recall are both important, you can utilize the F1 score. Evaluation scores are subjective based on your scenario and acceptance criteria. There is no absolute metric that works for every scenario.
-
-## See also
-
-* [View the model evaluation](../how-to/view-model-evaluation.md)
-* [Train a model](../how-to/train-model.md)
cognitive-services Fail Over https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/fail-over.md
- Title: Back up and recover your custom text classification models-
-description: Learn how to save and recover your custom text classification models.
------ Previously updated : 02/07/2022----
-# Back up and recover your custom text classification models
-
-When you create a Language resource, you specify a region for it to be created in. From then on, your resource and all of the operations related to it take place in the specified Azure server region. It's rare, but not impossible, to encounter a network issue that hits an entire region. If your solution needs to always be available, then you should design it to either fail-over into another region. This requires two Azure Language resources in different regions and the ability to sync custom models across regions.
-
-If your app or business depends on the use of a custom text classification model, we recommend that you create a replica of your project into another supported region. So that if a regional outage occurs, you can then access your model in the other fail-over region where you replicated your project.
-
-Replicating a project means that you export your project metadata and assets and import them into a new project. This only makes a copy of your project settings and tagged data. You still need to [train](./how-to/train-model.md) and [deploy](how-to/call-api.md#deploy-your-model) the models to be available for use with [prediction APIs](https://aka.ms/ct-runtime-swagger).
-
-In this article, you will learn how to use the export and import APIs to replicate your project from one resource to another in a different supported geographical region, along with guidance on keeping your projects in sync and the changes needed to your runtime consumption.
-
-## Prerequisites
-
-* Two Azure Language resources in different Azure regions. Follow the instructions mentioned [here](./how-to/create-project.md#azure-resources) to create your resources and link it to Azure storage account. It is recommended that you link both your Language resources to the same storage account, this might introduce a bit higher latency in importing and training.
-
-## Get your resource keys endpoint
-
-Use the following steps to get the keys and endpoint of your primary and secondary resources. These will be used in the following steps.
-
-* Go to your resource overview page in the [Azure portal](https://ms.portal.azure.com/#home)
-
-* From the menu of the left side of the screen, select **Keys and Endpoint**. Use endpoint for the API requests and you'll need the key for `Ocp-Apim-Subscription-Key` header.
-
-> [!TIP]
-> Keep a note of keys and endpoints for both primary and secondary resources. Use these values to replace the following placeholders:
-`{YOUR-PRIMARY-ENDPOINT}`, `{YOUR-PRIMARY-RESOURCE-KEY}`, `{YOUR-SECONDARY-ENDPOINT}` and `{YOUR-SECONDARY-RESOURCE-KEY}`.
-> Also take note of your project name, your model name and your deployment name. Use these values to replace the following placeholders: `{PROJECT-NAME}`, `{MODEL-NAME}` and `{DEPLOYMENT-NAME}`.
-
-## Export your primary project assets
-
-Start by exporting the project assets from the project in your primary resource.
-
-### Submit export job
-
-Create a **POST** request using the following URL, headers, and JSON body to export project metadata and assets.
-
-Use the following URL to export your primary project assets. Replace the placeholder values below with your own values.
-
-```rest
-{YOUR-PRIMARY-ENDPOINT}/language/analyze-text/projects/{PROJECT-NAME}/:export?api-version=2021-11-01-preview
-```
-
-|Placeholder |Value | Example |
-||||
-|`{YOUR-PRIMARY-ENDPOINT}` | The endpoint for authenticating your API request. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
-|`{PROJECT-NAME}` | The name for your project. This value is case-sensitive. | `myProject` |
-
-#### Headers
-
-Use the following headers to authenticate your request.
-
-|Key|Description|Value|
-|--|--|--|
-|`Ocp-Apim-Subscription-Key`| The key to your resource. Used for authenticating your API requests.| `{YOUR-PRIMARY-RESOURCE-KEY}` |
-|`format`| The format you want to use for the exported assets. | `JSON` |
-
-#### Body
-
-Use the following JSON in your request body specifying that you want to export all the assets.
-
-```json
-{
- "assetsToExport": ["*"]
-}
-```
-
-Once you send your API request, you'll receive a `202` response indicating success. In the response headers, extract the `location` value. It will be formatted like this:
-
-```rest
-{YOUR-PRIMARY-ENDPOINT}/language/analyze-text/projects/{PROJECT-NAME}/export/jobs/{JOB-ID}?api-version=2021-11-01-preview
-```
-
-`JOB-ID` is used to identify your request, since this operation is asynchronous. You'll use this URL in the next step to get the export job status.
-
-### Get export job status
-
-Use the following **GET** request to query the status of your export job status. You can use the URL you received from the previous step, or replace the placeholder values below with your own values.
-
-```rest
-{YOUR-PRIMARY-ENDPOINT}/language/analyze-text/projects/{PROJECT-NAME}/export/jobs/{JOB-ID}?api-version=2021-11-01-preview
-```
-
-|Placeholder |Value | Example |
-||||
-|`{YOUR-PRIMARY-ENDPOINT}` | The endpoint for authenticating your API request. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
-|`{PROJECT-NAME}` | The name for your project. This value is case-sensitive. | `myProject` |
-|`{JOB-ID}` | The ID for locating your export job status. This is in the `location` header value you received in the previous step. | `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx` |
-
-#### Headers
-
-Use the following header to authenticate your request.
-
-|Key|Description|Value|
-|--|--|--|
-|`Ocp-Apim-Subscription-Key`| The key to your resource. Used for authenticating your API requests.| `{YOUR-PRIMARY-RESOURCE-KEY}` |
-
-#### Response body
-
-```json
-{
- "resultUrl": "{RESULT-URL}",
- "jobId": "string",
- "createdDateTime": "2021-10-19T23:24:41.572Z",
- "lastUpdatedDateTime": "2021-10-19T23:24:41.572Z",
- "expirationDateTime": "2021-10-19T23:24:41.572Z",
- "status": "unknown",
- "errors": [
- {
- "code": "unknown",
- "message": "string"
- }
- ]
-}
-```
-
-Use the url from the `resultUrl` key in the body to view the exported assets from this job.
-
-### Get export results
-
-Submit a **GET** request using the `{RESULT-URL}` you received from the previous step to view the results of the export job.
-
-#### Headers
-
-Use the following header to authenticate your request.
-
-|Key|Description|Value|
-|--|--|--|
-|`Ocp-Apim-Subscription-Key`| The key to your resource. Used for authenticating your API requests.| `{YOUR-PRIMARY-RESOURCE-KEY}` |
-
-Copy the response body as you will use it as the body for the next import job.
-
-## Import to a new project
-
-Now go ahead and import the exported project assets in your new project in the secondary region so you can replicate it.
-
-### Submit import job
-
-Create a **POST** request using the following URL, headers, and JSON body to create your project and import the tags file.
-
-Use the following URL to create a project and import your tags file. Replace the placeholder values below with your own values.
-
-```rest
-{YOUR-SECONDARY-ENDPOINT}/language/analyze-text/projects/{PROJECT-NAME}/:import?api-version=2021-11-01-preview
-```
-
-|Placeholder |Value | Example |
-||||
-|`{YOUR-SECONDARY-ENDPOINT}` | The endpoint for authenticating your API request. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
-|`{PROJECT-NAME}` | The name for your project. This value is case-sensitive. | `myProject` |
-
-### Headers
-
-Use the following header to authenticate your request.
-
-|Key|Description|Value|
-|--|--|--|
-|`Ocp-Apim-Subscription-Key`| The key to your resource. Used for authenticating your API requests.| `{YOUR-SECONDARY-RESOURCE-KEY}` |
-
-### Body
-
-Use the response body you got from the previous export step. It will have a format similar to this:
-
-```json
-{
- "api-version": "2021-11-01-preview",
- "metadata": {
- "name": "myProject",
- "multiLingual": true,
- "description": "Trying out custom text classification",
- "modelType": "multiClassification",
- "language": "string",
- "storageInputContainerName": "YOUR-CONTAINER-NAME",
- "settings": {}
- },
- "assets": {
- "classifiers": [
- {
- "name": "Class1"
- }
- ],
- "documents": [
- {
- "location": "doc1.txt",
- "language": "en-us",
- "classifiers": [
- {
- "classifierName": "Class1"
- }
- ]
- }
- ]
- }
-}
-```
-Once you send your API request, you'll receive a `202` response indicating success. In the response headers, extract the `location` value. It will be formatted like this:
-
-```rest
-{YOUR-PRIMARY-ENDPOINT}/language/analyze-text/projects/{PROJECT-NAME}/import/jobs/{JOB-ID}?api-version=2021-11-01-preview
-```
-
-`JOB-ID` is used to identify your request, since this operation is asynchronous. You'll use this URL in the next step to get the import status.
-
-### Get import job status
-
-Use the following **GET** request to query the status of your import job status. You can use the URL you received from the previous step, or replace the placeholder values below with your own values.
-
-```rest
-{YOUR-PRIMARY-ENDPOINT}/language/analyze-text/projects/{PROJECT-NAME}/import/jobs/{JOB-ID}?api-version=2021-11-01-preview
-```
-
-|Placeholder |Value | Example |
-||||
-|`{YOUR-PRIMARY-ENDPOINT}` | The endpoint for authenticating your API request. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
-|`{PROJECT-NAME}` | The name for your project. This value is case-sensitive. | `myProject` |
-|`{JOB-ID}` | The ID for locating your export job status. This is in the `location` header value you received in the previous step. | `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx` |
-
-#### Headers
-
-Use the following header to authenticate your request.
-
-|Key|Description|Value|
-|--|--|--|
-|`Ocp-Apim-Subscription-Key`| The key to your resource. Used for authenticating your API requests.| `{YOUR-PRIMARY-RESOURCE-KEY}` |
-
-#### Response body
-
-```json
-{
- "jobId": "string",
- "createdDateTime": "2021-10-19T23:24:41.572Z",
- "lastUpdatedDateTime": "2021-10-19T23:24:41.572Z",
- "expirationDateTime": "2021-10-19T23:24:41.572Z",
- "status": "unknown",
- "errors": [
- {
- "code": "unknown",
- "message": "string"
- }
- ]
-}
-```
-
-Now you have replicated your project into another resource in another region.
-
-## Train your model
-
-After importing your project, you have only copied the project's metadata and assets. You still need to train your model, which will incur usage on your account.
-
-### Submit training job
-
-Create a **POST** request using the following URL, headers, and JSON body to start training an NER model. Replace the placeholder values below with your own values.
-
-```rest
-{YOUR-SECONDARY-ENDPOINT}/language/analyze-text/projects/{PROJECT-NAME}/:train?api-version=2021-11-01-preview
-```
-
-|Placeholder |Value | Example |
-||||
-|`{YOUR-SECONDARY-ENDPOINT}` | The endpoint for authenticating your API request. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
-|`{PROJECT-NAME}` | The name for your project. This value is case-sensitive. | `myProject` |
-
-### Headers
-
-Use the following header to authenticate your request.
-
-|Key|Description|Value|
-|--|--|--|
-|`Ocp-Apim-Subscription-Key`| The key to your resource. Used for authenticating your API requests.| `{YOUR-SECONDARY-RESOURCE-KEY}` |
-
-### Request body
-
-Use the following JSON in your request. Use the same model name and `runValidation` setting you have in your primary project for consistency.
-
-```json
-{
- "modelLabel": "{MODEL-NAME}",
- "runValidation": true
-}
-```
-
-|Key |Value | Example |
-||||
-|`modelLabel ` | Your Model name. | {MODEL-NAME} |
-|`runValidation` | Boolean value to run validation on the test set. | true |
-
-Once you send your API request, you'll receive a `202` response indicating success. In the response headers, extract the `location` value. It will be formatted like this:
-
-```rest
-{YOUR-SECONDARY-ENDPOINT}/language/analyze-text/projects/{PROJECT-NAME}/train/jobs/{JOB-ID}?api-version=2021-11-01-preview
-```
-
-`JOB-ID` is used to identify your request, since this operation is asynchronous. You'll use this URL in the next step to get the training status.
-
-### Get Training Status
-
-Use the following **GET** request to query the status of your model's training process. You can use the URL you received from the previous step, or replace the placeholder values below with your own values.
-
-```rest
-{YOUR-SECONDARY-ENDPOINT}/language/analyze-text/projects/{PROJECT-NAME}/train/jobs/{JOB-ID}?api-version=2021-11-01-preview
-```
-
-|Placeholder |Value | Example |
-||||
-|`{YOUR-SECONDARY-ENDPOINT}` | The endpoint for authenticating your API request. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
-|`{PROJECT-NAME}` | The name for your project. This value is case-sensitive. | `myProject` |
-|`{JOB-ID}` | The ID for locating your model's training status. This is in the `location` header value you received in the previous step. | `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx` |
-
-#### Headers
-
-Use the following header to authenticate your request.
-
-|Key|Description|Value|
-|--|--|--|
-|`Ocp-Apim-Subscription-Key`| The key to your resource. Used for authenticating your API requests.| `{YOUR-SECONDARY-RESOURCE-KEY}` |
-
-## Deploy your model
-
-This is the step where you make your trained model available for consumption via the [runtime prediction API](https://aka.ms/ct-runtime-swagger).
-
-> [!TIP]
-> Use the same deployment name as your primary project for easier maintenance and minimal changes to your system to handle redirecting your traffic.
-
-## Submit deploy job
-
-Create a **PUT** request using the following URL, headers, and JSON body to start deploying a custom NER model.
-
-```rest
-{YOUR-SECONDARY-ENDPOINT}/language/analyze-text/projects/{PROJECT-NAME}/deployments/{DEPLOYMENT-NAME}?api-version=2021-11-01-preview
-```
-
-|Placeholder |Value | Example |
-||||
-|`{YOUR-SECONDARY-ENDPOINT}` | The endpoint for authenticating your API request. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
-|`{PROJECT-NAME}` | The name for your project. This value is case-sensitive. | `myProject` |
-|`{DEPLOYMENT-NAME}` | The name of your deployment. This value is case-sensitive. | `prod` |
-
-#### Headers
-
-Use the following header to authenticate your request.
-
-|Key|Description|Value|
-|--|--|--|
-|`Ocp-Apim-Subscription-Key`| The key to your resource. Used for authenticating your API requests.| `{YOUR-SECONDARY-RESOURCE-KEY}` |
-
-#### Request body
-
-Use the following JSON in your request. Use the name of the model you want to deploy.
-
-```json
-{
- "trainedModelLabel": "{MODEL-NAME}",
- "deploymentName": "{DEPLOYMENT-NAME}"
-}
-```
-
-Once you send your API request, you'll receive a `202` response indicating success. In the response headers, extract the `location` value. It will be formatted like this:
-
-```rest
-{YOUR-SECONDARY-ENDPOINT}/language/analyze-text/projects/{PROJECT-NAME}/deployments/{DEPLOYMENT-NAME}/jobs/{JOB-ID}?api-version=2021-11-01-preview
-```
-
-`JOB-ID` is used to identify your request, since this operation is asynchronous. You will use this URL in the next step to get the publishing status.
-
-### Get the deployment status
-
-Use the following **GET** request to query the status of your model's publishing process. You can use the URL you received from the previous step, or replace the placeholder values below with your own values.
-
-```rest
-{YOUR-SECONDARY-ENDPOINT}/language/analyze-text/projects/{PROJECT-NAME}/deployments/{DEPLOYMENT-NAME}/jobs/{JOB-ID}?api-version=2021-11-01-preview
-```
-
-|Placeholder |Value | Example |
-||||
-|`{YOUR-SECONDARY-ENDPOINT}` | The endpoint for authenticating your API request. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
-|`{PROJECT-NAME}` | The name for your project. This value is case-sensitive. | `myProject` |
-|`{DEPLOYMENT-NAME}` | The name of your deployment. This value is case-sensitive. | `prod` |
-|`{JOB-ID}` | The ID for locating your model's training status. This is in the `location` header value you received in the previous step. | `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx` |
-
-#### Headers
-
-Use the following header to authenticate your request.
-
-|Key|Description|Value|
-|--|--|--|
-|`Ocp-Apim-Subscription-Key`| The key to your resource. Used for authenticating your API requests.| `{YOUR-SECONDARY-RESOURCE-KEY}` |
-
-At this point you have replicated your project into another resource, which is in another region, trained and deployed the model. Now you would want to make changes to your system to handle traffic redirection in case of failure.
-
-## Changes in calling the runtime
-
-Within your system, at the step where you call [runtime prediction API](https://aka.ms/ct-runtime-swagger) check for the response code returned from the submit task API. If you observe a **consistent** failure in submitting the request, this could indicate an outage in your primary region. Failure once doesn't mean an outage, it may be transient issue. Retry submitting the job through the secondary resource you have created. For the second request use your `{YOUR-SECONDARY-ENDPOINT}` and secondary key, if you have followed the steps above, `{PROJECT-NAME}` and `{DEPLOYMENT-NAME}` would be the same so no changes are required to the request body.
-
-In case you revert to using your secondary resource you will observe slight increase in latency because of the difference in regions where your model is deployed.
-
-## Check if your projects are out of sync
-
-Maintaining the freshness of both projects is an important part of process. You need to frequently check if any updates were made to your primary project so that you move them over to your secondary project. This way if your primary region fail and you move into the secondary region you should expect similar model performance since it already contains the latest updates. Setting the frequency of checking if your projects are in sync is an important choice, we recommend that you do this check daily in order to guarantee the freshness of data in your secondary model.
-
-### Get project details
-
-Use the following url to get your project details, one of the keys returned in the body
-
-Use the following **GET** request to get your project details. You can use the URL you received from the previous step, or replace the placeholder values below with your own values.
-
-```rest
-{YOUR-PRIMARY-ENDPOINT}/language/analyze-text/projects/{PROJECT-NAME}?api-version=2021-11-01-preview
-```
-
-|Placeholder |Value | Example |
-||||
-|`{YOUR-PRIMARY-ENDPOINT}` | The endpoint for authenticating your API request. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
-|`{PROJECT-NAME}` | The name for your project. This value is case-sensitive. | `myProject` |
-
-#### Headers
-
-Use the following header to authenticate your request.
-
-|Key|Description|Value|
-|--|--|--|
-|`Ocp-Apim-Subscription-Key`| The key to your resource. Used for authenticating your API requests.| `{YOUR-PRIMARY-RESOURCE-KEY}` |
-
-#### Response body
-
-```json
- {
- "createdDateTime": "2021-10-19T23:24:41.572Z",
- "lastModifiedDateTime": "2021-10-19T23:24:41.572Z",
- "lastTrainedDateTime": "2021-10-19T23:24:41.572Z",
- "lastDeployedDateTime": "2021-10-19T23:24:41.572Z",
- "modelType": "multiClassification",
- "storageInputContainerName": "YOUR-CONTAINER-NAME",
- "name": "myProject",
- "multiLingual": true,
- "description": "string",
- "language": "en-us",
- "settings": {}
- }
-```
-
-Repeat the same steps for your replicated project using `{YOUR-SECONDARY-ENDPOINT}` and `{YOUR-SECONDARY-RESOURCE-KEY}`. Compare the returned `lastModifiedDateTime` from both project. If your primary project was modified sooner than your secondary one, you need to repeat the steps of [exporting](#export-your-primary-project-assets), [importing](#import-to-a-new-project), [training](#train-your-model) and [deploying](#deploy-your-model).
--
-## Next steps
-
-In this article, you have learned how to use the export and import APIs to replicate your project to a secondary Language resource in other region. Next, explore the API reference docs to see what else you can do with authoring APIs.
-
-* [Authoring REST API reference ](https://westus.dev.cognitive.microsoft.com/docs/services/language-authoring-clu-apis-2022-03-01-preview/operations/Projects_TriggerImportProjectJob)
-
-* [Runtime prediction REST API reference ](https://aka.ms/ct-runtime-swagger)
cognitive-services Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/faq.md
- Title: Custom text classification FAQ-
-description: Learn about Frequently asked questions when using the custom text classification API.
------ Previously updated : 04/05/2022-----
-# Frequently asked questions
-
-Find answers to commonly asked questions about concepts, and scenarios related to custom text classification in Azure Cognitive Service for Language.
-
-## How do I get started with the service?
-
-See the [quickstart](./quickstart.md) to quickly create your first project, or view [how to create projects](how-to/create-project.md) for more details.
-
-## What are the service limits?
-
-See the [service limits article](service-limits.md) for more information.
-
-## Which languages are supported in this feature?
-
-See the [language support](./language-support.md) article.
-
-## How many tagged files are needed?
-
-Generally, diverse and representative [tagged data](how-to/tag-data.md) leads to better results, given that the tagging is done precisely, consistently and completely. There is no set number of tagged classes that will make every model perform well. Performance is highly dependent on your schema and the ambiguity of your schema. Ambiguous classes need more tags. Performance also depends on the quality of your tagging. The recommended number of tagged instances per class is 50.
-
-## Training is taking a long time, is this expected?
-
-The training process can take some time. As a rough estimate, the expected training time for files with a combined length of 12,800,000 chars is 6 hours.
-
-## How do I build my custom model programmatically?
-
-You can use the [REST APIs](https://westus.dev.cognitive.microsoft.com/docs/services/language-authoring-clu-apis-2022-03-01-preview/operations/Projects_TriggerImportProjectJob) to build your custom models. Follow this [quickstart](quickstart.md?pivots=rest-api) to get started with creating a project and creating a model through APIs for examples of how to call the Authoring API.
-
-When you're ready to start [using your model to make predictions](#how-do-i-use-my-trained-model-to-make-predictions), you can use the REST API, or the client library.
-
-## What is the recommended CI/CD process?
-
-You can train multiple models on the same dataset within the same project. After you have trained your model successfully, you can [view its evaluation](how-to/view-model-evaluation.md). You can [deploy and test](quickstart.md#deploy-your-model) your model within [Language studio](https://aka.ms/languageStudio). You can add or remove tags from your data and train a **new** model and test it as well. View [service limits](service-limits.md)to learn about maximum number of trained models with the same project. When you [tag your data](how-to/tag-data.md#tag-your-data) you can determine how your dataset is split into training and testing sets.
-
-## Does a low or high model score guarantee bad or good performance in production?
-
-Model evaluation may not always be comprehensive. This is dependent on:
-* If the **test set** is too small, the good/bad scores are not representative of model's actual performance. Also if a specific class is missing or under-represented in your test set it will affect model performance.
-* **Data diversity** if your data only covers few scenarios/examples of the text you expect in production, your model will not be exposed to all possible scenarios and might perform poorly on the scenarios it hasn't been trained on.
-* **Data representation** if the dataset used to train the model is not representative of the data that would be introduced to the model in production, model performance will be affected greatly.
-
-See the [data selection and schema design](how-to/design-schema.md) article for more information.
-
-## How do I improve model performance?
-
-* View the model [confusion matrix](how-to/view-model-evaluation.md), if you notice that a certain class is frequently classified incorrectly, consider adding more tagged instances for this class. If you notice that two classes are frequently classified as each other, this means the schema is ambiguous, consider merging them both into one class for better performance.
-
-* [Examine Data distribution](how-to/improve-model.md#examine-data-distribution-from-language-studio) If one of the classes has a lot more tagged instances than the others, your model may be biased towards this class. Add more data to the other classes or remove most of the examples from the dominating class.
-
-* Learn more about data selection and schema design [here](how-to/design-schema.md).
-
-* [Review your test set](how-to/improve-model.md) to see predicted and tagged classes side-by-side so you can get a better idea of your model performance, and decide if any changes in the schema or the tags are necessary.
-
-## When I retrain my model I get different results, why is this?
-
-* When you [tag your data](how-to/tag-data.md#tag-your-data) you can determine how your dataset is split into training and testing sets. You can also have your data split randomly into training and testing sets, so there is no guarantee that the reflected model evaluation is on the same test set, so results are not comparable.
-
-* If you are retraining the same model, your test set will be the same, but you might notice a slight change in predictions made by the model. This is because the trained model is not robust enough, which is a factor of how representative and distinct your data is, and the quality of your tagged data.
-
-## How do I get predictions in different languages?
-
-First, you need to enable the multilingual option when [creating your project](how-to/create-project.md) or you can enable it later from the project settings page. After you train and deploy your model, you can start querying it in [multiple languages](language-support.md#multiple-language-support). You may get varied results for different languages. To improve the accuracy of any language, add more tagged instances to your project in that language to introduce the trained model to more syntax of that language.
-
-## I trained my model, but I can't test it
-
-You need to [deploy your model](quickstart.md#deploy-your-model) before you can test it.
-
-## How do I use my trained model to make predictions?
-
-After deploying your model, you [call the prediction API](how-to/call-api.md), using either the [REST API](how-to/call-api.md?tabs=rest-api) or [client libraries](how-to/call-api.md?tabs=client).
-
-## Data privacy and security
-
-Custom text classification is a data processor for General Data Protection Regulation (GDPR) purposes. In compliance with GDPR policies, custom text classification users have full control to view, export, or delete any user content either through the [Language Studio](https://aka.ms/languageStudio) or programmatically by using [REST APIs](https://westus.dev.cognitive.microsoft.com/docs/services/language-authoring-clu-apis-2022-03-01-preview/operations/Projects_TriggerImportProjectJob).
-
-Your data is only stored in your Azure Storage account. Custom text classification only has access to read from it during training.
-
-## How do I clone my project?
-
-To clone your project, use the export API to export the project assets, and then import them into a new project. See the [REST APIs](https://westus.dev.cognitive.microsoft.com/docs/services/language-authoring-clu-apis-2022-03-01-preview/operations/Projects_TriggerImportProjectJob) reference for both operations.
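-A minimal sketch of that export-then-import flow with Python's `requests` library might look like the following. Every URL path and response field here is a placeholder; copy the exact export and import request URLs, headers, and body shapes from the REST API reference linked above before using it.
-
-```python
-import time
-
-import requests
-
-# Placeholders -- take the real values from the Azure portal and the REST API reference.
-ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
-HEADERS = {"Ocp-Apim-Subscription-Key": "<your-resource-key>",
-           "Content-Type": "application/json"}
-
-def wait_for_job(job_url: str) -> dict:
-    """Poll an authoring job URL until it completes and return its final status payload."""
-    while True:
-        status = requests.get(job_url, headers=HEADERS).json()
-        if status.get("status") in ("succeeded", "failed"):
-            return status
-        time.sleep(2)
-
-# 1. Trigger the export job on the source project; the job URL is returned in the
-#    operation-location response header.
-export_response = requests.post(ENDPOINT + "<export-path-from-the-REST-reference>", headers=HEADERS)
-export_response.raise_for_status()
-export_result = wait_for_job(export_response.headers["operation-location"])
-
-# 2. Retrieve the exported assets as described in the reference, then send them to the
-#    import operation of the new project. The import body shape comes from the export result.
-exported_assets = {"...": "exported project assets go here"}
-import_response = requests.post(ENDPOINT + "<import-path-for-the-new-project>",
-                                headers=HEADERS, json=exported_assets)
-import_response.raise_for_status()
-print(export_result.get("status"), import_response.status_code)
-```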
-
-## Next steps
-
-* [Custom text classification overview](overview.md)
-* [Quickstart](quickstart.md)
cognitive-services Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/glossary.md
- Title: Definitions used in custom text classification-
-description: Learn about definitions used in custom text classification.
------ Previously updated : 11/02/2021----
-# Terms and definitions used in custom text classification
-
-Learn about definitions used in custom text classification.
-
-## Project
-
-A project is a work area for building your custom AI models based on your data. Your project can only be accessed by you and others who have contributor access to the Azure resource being used.
-As a prerequisite to creating a custom text classification project, you have to [connect your resource to a storage account](how-to/create-project.md).
-As part of the project creation flow, you need to connect it to a blob container where you have uploaded your dataset. Your project automatically includes all the `.txt` files available in your container. You can have multiple models within your project, all built on the same dataset. See the [service limits](service-limits.md) article for more information.
-
-Within your project you can do the following operations:
-
-* [Tag your data](./how-to/tag-data.md): The process of tagging each file of your dataset with the respective class/classes so that when you train your model it learns how to classify your files.
-* [Build and train your model](./how-to/train-model.md): The core step of your project. In this step, your model starts learning from your tagged data.
-* [View the model evaluation details](./how-to/view-model-evaluation.md): Review your model performance to decide if there is room for improvement or you are satisfied with the results.
-* [Improve the model (optional)](./how-to/improve-model.md): Determine what went wrong with your model and improve performance.
-* [Deploy the model](quickstart.md?pivots=language-studio#deploy-your-model): Make your model available for use.
-* [Test model](quickstart.md?pivots=language-studio#test-your-model): Test your model and see how it performs.
-
-### Project types
-
-Custom text classification supports two types of projects:
-
-* **Single label classification**: You can only assign one class to each file in your dataset. For example, if it is a movie script, your file can only be `Action`, `Thriller`, or `Romance`.
-
-* **Multiple label classification**: You can assign **multiple** classes to each file in your dataset. For example, if it is a movie script, your file can be `Action`, or both `Action` and `Thriller`, or `Romance`.
-
-## Model
-
-A model is an object that is trained to do a certain task, in our case custom text classification. Models are trained by providing tagged data to learn from so they can later be used for classifying text. After you're satisfied with the model's performance, it can be deployed, which makes it [available for consumption](https://aka.ms/ct-runtime-swagger).
-
-## Class
-
-A class is a user-defined category that indicates the overall classification of the text. You tag your data with your assigned classes before passing it to the model for training.
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/how-to/call-api.md
- Title: How to submit custom text classification tasks-
-description: Learn about sending a request for custom text classification.
------ Previously updated : 03/15/2022----
-# Deploy a model and classify text using the runtime API
-
-After you're satisfied with your model and have made any necessary improvements, you can deploy it and start classifying text. Deploying a model makes it available for use through the runtime API.
-
-## Prerequisites
-
-* [A custom text classification project](create-project.md) with a configured Azure blob storage account.
-* Text data that has [been uploaded](create-project.md#prepare-training-data) to your storage account.
-* [Tagged data](tag-data.md) and a successfully [trained model](train-model.md).
-* Reviewed the [model evaluation details](view-model-evaluation.md) to determine how your model is performing.
-* (Optional) [Made improvements](improve-model.md) to your model if its performance isn't satisfactory.
-
-See the [application development lifecycle](../overview.md#project-development-lifecycle) for more information.
-
-## Deploy your model
-
-Deploying a model hosts it and makes it available for predictions through an endpoint.
---
-When a model is deployed, you will be able to test the model directly in the portal or by calling the API associated with it.
-
-> [!NOTE]
-> You can only have ten deployment names.
-
-
-### Delete deployment
-
-To delete a deployment, select the deployment you want to delete and select **Delete deployment**.
-
-> [!TIP]
-> You can [test your model in Language Studio](../quickstart.md?pivots=language-studio#test-your-model) by sending samples of text for it to classify.
-
-## Send a text classification request to your model
-
-# [Using Language Studio](#tab/language-studio)
-
-### Using Language Studio
-
-1. After the deployment is completed, select the model you want to use. From the top menu, select **Get prediction URL**, then copy the URL and body.
-
- :::image type="content" source="../media/get-prediction-url-1.png" alt-text="run-inference" lightbox="../media/get-prediction-url-1.png":::
-
-2. In the window that appears, under the **Submit** pivot, copy the sample request into your command line
-
-3. Replace `<YOUR_DOCUMENT_HERE>` with the actual text you want to classify.
-
- :::image type="content" source="../media/get-prediction-url-2.png" alt-text="run-inference-2" lightbox="../media/get-prediction-url-2.png":::
-
-4. Submit the request
-
-5. From the `operation-location` value in the response headers, extract the `jobId`. The value has the format: `{YOUR-ENDPOINT}/text/analytics/v3.2-preview.2/analyze/jobs/{jobId}`
-
-6. Copy the retrieve request, replace `<OPERATION-ID>` with the `jobId` you received in the last step, and submit the request.
-
- :::image type="content" source="../media/get-prediction-url-3.png" alt-text="run-inference-3" lightbox="../media/get-prediction-url-3.png":::
-
-To retrieve the results programmatically, you will need to use the REST API. Select the **Using the API** tab above for more information.
-
-# [Using the API](#tab/rest-api)
-
-### Using the REST API
-
-First, you will need to get your resource key and endpoint:
-
-1. Go to your resource overview page in the [Azure portal](https://portal.azure.com/#home)
-
-2. From the menu on the left side, select **Keys and Endpoint**. Use the endpoint for the API requests, and use the key for the `Ocp-Apim-Subscription-Key` header.
-
- :::image type="content" source="../media/get-endpoint-azure.png" alt-text="Get the Azure endpoint" lightbox="../media/get-endpoint-azure.png":::
-
-### Submit a custom text classification task
--
-### Get the results for a custom text classification task
--
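-As a rough end-to-end illustration of these two operations, the sketch below submits a classification job and then polls for its results using Python's `requests` library. The job URL format follows the `{YOUR-ENDPOINT}/text/analytics/v3.2-preview.2/analyze/jobs/{jobId}` pattern shown in the Language Studio steps, but the request body shape (including the task collection name) is an assumption about the preview API; copy the exact body from the **Get prediction URL** pane in Language Studio or from the reference if it differs.
-
-```python
-import time
-
-import requests
-
-ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # from Keys and Endpoint
-HEADERS = {"Ocp-Apim-Subscription-Key": "<your-resource-key>",
-           "Content-Type": "application/json"}
-
-# Sketch of a v3.2-preview.2 /analyze request body; the "customMultiClassificationTasks"
-# name and parameter keys are assumptions -- verify them against the sample request
-# that Language Studio generates for your project.
-body = {
-    "displayName": "Classify documents",
-    "analysisInput": {"documents": [
-        {"id": "1", "language": "en-us", "text": "<YOUR_DOCUMENT_HERE>"},
-    ]},
-    "tasks": {"customMultiClassificationTasks": [
-        {"parameters": {"project-name": "<project-name>", "deployment-name": "prod"}},
-    ]},
-}
-
-# Submit the job; the job URL is returned in the operation-location response header.
-submit = requests.post(f"{ENDPOINT}/text/analytics/v3.2-preview.2/analyze",
-                       headers=HEADERS, json=body)
-submit.raise_for_status()
-job_url = submit.headers["operation-location"]
-
-# Poll the job until it finishes, then print the classification results.
-while True:
-    result = requests.get(job_url, headers=HEADERS).json()
-    if result.get("status") in ("succeeded", "failed"):
-        break
-    time.sleep(2)
-print(result)
-```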
-# [Using the client libraries (Azure SDK)](#tab/client)
-
-## Use the client libraries
-
-1. Go to your resource overview page in the [Azure portal](https://portal.azure.com/#home)
-
-2. From the menu on the left side, select **Keys and Endpoint**. Use the endpoint for the API requests, and use the key for the `Ocp-Apim-Subscription-Key` header.
-
- :::image type="content" source="../../custom-classification/media/get-endpoint-azure.png" alt-text="Get the Azure endpoint" lightbox="../../custom-classification/media/get-endpoint-azure.png":::
-
-3. Download and install the client library package for your language of choice:
-
- |Language |Package version |
- |||
- |.NET | [5.2.0-beta.2](https://www.nuget.org/packages/Azure.AI.TextAnalytics/5.2.0-beta.2) |
- |Java | [5.2.0-beta.2](https://mvnrepository.com/artifact/com.azure/azure-ai-textanalytics/5.2.0-beta.2) |
- |JavaScript | [5.2.0-beta.2](https://www.npmjs.com/package/@azure/ai-text-analytics/v/5.2.0-beta.2) |
- |Python | [5.2.0b2](https://pypi.org/project/azure-ai-textanalytics/5.2.0b2/) |
-
-4. After you've installed the client library, use the following samples on GitHub to start calling the API.
-
- Single label classification:
- * [C#](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/samples/Sample10_SingleCategoryClassify.md)
- * [Java](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/textanalytics/azure-ai-textanalytics/src/samples/java/com/azure/ai/textanalytics/lro/ClassifyDocumentSingleCategory.java)
- * [JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/textanalytics/ai-text-analytics/samples/v5/javascript/customText.js)
- * [Python](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_single_category_classify.py)
-
- Multi label classification:
- * [C#](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/samples/Sample11_MultiCategoryClassify.md)
- * [Java](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/textanalytics/azure-ai-textanalytics/src/samples/java/com/azure/ai/textanalytics/lro/ClassifyDocumentMultiCategory.java)
- * [JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/textanalytics/ai-text-analytics/samples/v5/javascript/customText.js)
- * [Python](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_multi_category_classify.py)
-
-5. See the following reference documentation for more information on the client and the return object:
-
- * [C#](/dotnet/api/azure.ai.textanalytics?view=azure-dotnet-preview&preserve-view=true)
- * [Java](/java/api/overview/azure/ai-textanalytics-readme?view=azure-java-preview&preserve-view=true)
- * [JavaScript](/javascript/api/overview/azure/ai-text-analytics-readme?view=azure-node-preview&preserve-view=true)
- * [Python](/python/api/azure-ai-textanalytics/azure.ai.textanalytics?view=azure-python-preview&preserve-view=true)
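-For Python, a minimal sketch along the lines of the linked multi label sample might look like the following. It assumes the 5.2.0b2 beta listed above; the action and result type names shown here are taken from that beta and may change in later versions, so treat the linked samples and reference documentation as the source of truth.
-
-```python
-from azure.core.credentials import AzureKeyCredential
-from azure.ai.textanalytics import TextAnalyticsClient, MultiCategoryClassifyAction
-
-# Endpoint and key come from the Keys and Endpoint page of your Language resource.
-client = TextAnalyticsClient(
-    endpoint="https://<your-resource>.cognitiveservices.azure.com",
-    credential=AzureKeyCredential("<your-resource-key>"),
-)
-
-documents = ["<YOUR_DOCUMENT_HERE>"]
-
-# Submit a multi label classification action against your deployed model.
-poller = client.begin_analyze_actions(
-    documents,
-    actions=[MultiCategoryClassifyAction(project_name="<project-name>",
-                                         deployment_name="prod")],
-)
-
-# The result pages yield one list of action results per submitted document.
-for document_results in poller.result():
-    for action_result in document_results:
-        if not action_result.is_error:
-            for classification in action_result.classifications:
-                print(classification.category, classification.confidence_score)
-```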
--
cognitive-services Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/how-to/create-project.md
- Title: How to create custom text classification projects-
-description: Learn about the steps for using Azure resources with custom text classification.
------ Previously updated : 03/21/2022----
-# How to create custom text classification projects
-
-Use this article to learn how to set up the requirements for using custom text classification, and how to create a project.
-
-## Prerequisites
-
-Before you start using custom text classification, you will need several things:
-
-* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services).
-* An Azure Language resource
-* An Azure storage account to store data for your project
-* You should have an idea of the [project schema](design-schema.md) you will use for your data.
-
-## Azure resources
-
-Before you start using custom text classification, you will need an Azure Language resource. We recommend following the steps below for creating your resource in the Azure portal. Creating a resource in the Azure portal lets you create an Azure storage account at the same time, with all of the required permissions pre-configured. You can also read further in the article to learn how to use a pre-existing resource, and configure it to work with custom text classification.
-
-You also will need an Azure storage account where you will upload your `.txt` files that will be used to train a model to classify text.
-
-# [Azure portal](#tab/portal)
--
-<!-- :::image type="content" source="../../media/azure-portal-resource-credentials.png" alt-text="A screenshot showing the resource creation screen in Language Studio." lightbox="../../media/azure-portal-resource-credentials.png"::: -->
-
-# [Language Studio](#tab/studio)
-
-### Create a new resource from Language Studio
-
-If it's your first time logging in, you'll see a window in [Language Studio](https://aka.ms/languageStudio) that will let you choose a language resource or create a new one. You can also create a resource by clicking the settings icon in the top-right corner, selecting **Resources**, then clicking **Create a new resource**.
-
-> [!IMPORTANT]
-> * To use custom text classification, you'll need a Language resource in **West US 2** or **West Europe** with the Standard (**S**) pricing tier.
-> * Be sure to select **Managed Identity** when you create a resource.
--
-To use custom text classification, you'll need to [create an Azure storage account](../../../../storage/common/storage-account-create.md) if you don't have one already.
-
-Next you'll need to assign the [correct roles](#roles-for-your-storage-account) for the storage account to connect it to your Language resource.
-
-# [Azure PowerShell](#tab/powershell)
-
-### Create a new resource with Azure PowerShell
-
-You can create a new resource and a storage account using the following Azure Resource Manager (ARM) [template](https://github.com/Azure-Samples/cognitive-services-sample-data-files) and [parameters](https://github.com/Azure-Samples/cognitive-services-sample-data-files) files, which are hosted on GitHub.
-
-Edit the following values in the parameters file:
-
-| Parameter name | Value description |
-|--|--|
-|`name`| Name of your Language resource|
-|`location`| Region in which your resource is hosted. Custom text classification is only available in **West US 2** and **West Europe**.|
-|`sku`| Pricing tier of your resource. Custom text only works with **S** tier|
-|`storageResourceName`| Name of your storage account|
-|`storageLocation`| Region in which your storage account is hosted.|
-|`storageSkuType`| SKU of your [storage account](/rest/api/storagerp/srp_sku_types).|
-|`storageResourceGroupName`| Resource group of your storage account|
-<!-- |builtInRoleType| Set this role to **"Contributor"**| -->
-
-Use the following PowerShell command to deploy the Azure Resource Manager (ARM) template with the files you edited.
-
-```powershell
-New-AzResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName ExampleResourceGroup `
- -TemplateFile <path-to-arm-template> `
- -TemplateParameterFile <path-to-parameters-file>
-```
-
-See the ARM template documentation for information on [deploying templates](../../../../azure-resource-manager/templates/deploy-powershell.md#parameter-files) and [parameter files](../../../../azure-resource-manager/templates/parameter-files.md?tabs=json).
-
-
-
-## Using a pre-existing Azure resource
-
-You can use an existing Language resource to get started with custom text classification, as long as the resource meets the following requirements:
-
-|Requirement |Description |
-|||
-|Regions | Make sure your existing resource is provisioned in one of the two supported regions, **West US 2** or **West Europe**. If not, you will need to create a new resource in these regions. |
-|Pricing tier | Make sure your existing resource is in the Standard (**S**) pricing tier. Only this pricing tier is supported. If your resource doesn't use this pricing tier, you will need to create a new resource. |
-|Managed identity | Make sure that the resource-managed identity setting is enabled. Otherwise, read the next section. |
-
-To use custom text classification, you'll need to [create an Azure storage account](../../../../storage/common/storage-account-create.md) if you don't have one already.
-
-Next you'll need to assign the [correct roles](#roles-for-your-storage-account) for the storage account to connect it to your Language resource.
-
-## Roles for your Azure Language resource
-
-You should have the **owner** or **contributor** role assigned on your Azure Language resource.
-
-## Enable identity management for your resource
-
-Your Language resource must have identity management, which can be enabled either using the Azure portal or from Language Studio. To enable it using [Language Studio](https://aka.ms/languageStudio):
-1. Click the settings icon in the top right corner of the screen
-2. Select **Resources**
-3. Select **Managed Identity** for your Azure resource.
-
-## Roles for your storage account
-
-Your Language resource must have the following roles assigned on your Azure blob storage account:
-
-* Your resource has the **owner** or **contributor** role on the storage account.
-* Your resource has the **Storage blob data owner** or **Storage blob data contributor** role on the storage account.
-* Your resource has the **Reader** role on the storage account.
-
-To set proper roles on your storage account:
-
-1. Go to your storage account page in the [Azure portal](https://portal.azure.com/).
-2. Select **Access Control (IAM)** in the left navigation menu.
-3. Select **Add** to **Add Role Assignments**, and choose the appropriate role for your Language resource.
-4. Select **Managed identity** under **Assign access to**.
-5. Select **Members** and find your resource. In the window that appears, select your subscription, and **Language** as the managed identity. You can search for user names in the **Select** field. Repeat this for all roles.
--
-### Enable CORS for your storage account
-
-When you enable Cross-Origin Resource Sharing (CORS), allow the **GET**, **PUT**, and **DELETE** methods. Add an asterisk (`*`) to the fields, and set the recommended maximum age of 500.
--
-## Prepare training data
-
-* As a prerequisite for creating a custom text classification project, your training data needs to be uploaded to a blob container in your storage account. You can create and upload training files from Azure directly, or by using the Azure Storage Explorer tool. Using the Azure Storage Explorer tool allows you to upload more data in less time. For a programmatic option, see the sketch after the tip below.
-
- * [Create and upload files from Azure](../../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container)
- * [Create and upload files using Azure Storage Explorer](../../../../vs-azure-tools-storage-explorer-blobs.md)
-
-* You can only use `.txt` files for custom text classification. If your data is in another format, you can use the [Cognitive Services Language Utilities tool](https://aka.ms/CognitiveServicesLanguageUtilities) to parse your files into `.txt` format.
-
-* You can either upload tagged data, or you can tag your data in Language Studio. Tagged data must follow the [tags file format](../concepts/data-formats.md).
-
->[!TIP]
-> See [How to design a schema](design-schema.md) for information on data selection and preparation.
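-If you prefer to upload training files programmatically rather than through the Azure portal or Azure Storage Explorer, a minimal sketch using the `azure-storage-blob` package might look like the following; the connection string, container name, and local folder are placeholders.
-
-```python
-import os
-
-from azure.storage.blob import BlobServiceClient
-
-# Placeholders: use the storage account connected to your Language resource and the
-# container your project reads from.
-CONNECTION_STRING = "<storage-account-connection-string>"
-CONTAINER_NAME = "<your-container-name>"
-LOCAL_FOLDER = "training-data"  # local folder containing only .txt files
-
-service = BlobServiceClient.from_connection_string(CONNECTION_STRING)
-container = service.get_container_client(CONTAINER_NAME)
-
-# Upload every .txt file to the root of the container, since training files
-# need to be available at the container root.
-for file_name in os.listdir(LOCAL_FOLDER):
-    if file_name.endswith(".txt"):
-        with open(os.path.join(LOCAL_FOLDER, file_name), "rb") as data:
-            container.upload_blob(name=file_name, data=data, overwrite=True)
-```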
-
-## Create a project
--
-## Next steps
-
-After your project is created, you can start [tagging your data](tag-data.md), which will inform your text classification model how to interpret text, and is used for training and evaluation.
cognitive-services Design Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/how-to/design-schema.md
- Title: How to prepare data and define a schema-
-description: Learn about data selection, preparation, and creating a schema for custom text classification projects.
------ Previously updated : 11/02/2021----
-# How to prepare data and define a schema
-
-In order to create a custom text classification model, you will need quality data to train it. This article covers how you should approach selecting and preparing your data, along with defining a schema. A schema defines the classes that you need your model to classify your text into at runtime, and is the first step of [developing a custom classification application](../overview.md#project-development-lifecycle).
--
-## Data selection
-
-The quality of data you train your model with affects model performance greatly.
-
-* Use real-life data that reflects your domain's problem space to effectively train your model. You can use synthetic data to accelerate the initial model training process, but it will likely differ from your real-life data and make your model less effective when used.
-
-* Balance your data distribution as much as possible without deviating far from the distribution in real-life.
-
-* Use diverse data whenever possible to avoid overfitting your model. Less diversity in training data may lead to your model learning spurious correlations that may not exist in real-life data.
-
-* Avoid duplicate files in your data. Duplicate data has a negative effect on the training process, model metrics, and model performance.
-
-* Consider where your data comes from. If you are collecting data from one person, department, or part of your scenario, you are likely missing diversity that may be important for your model to learn about.
-
-> [!NOTE]
-> If your files are in multiple languages, select the **multiple languages** option during [project creation](../quickstart.md) and set the **language** option to the language of the majority of your files.
-
-## Data preparation
-
-As a prerequisite for creating a custom text classification project, your training data needs to be uploaded to a blob container in your storage account. You can create and upload training files from Azure directly, or by using the Azure Storage Explorer tool. Using the Azure Storage Explorer tool allows you to upload more data quickly.
-
-* [Create and upload files from Azure](../../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container)
-* [Create and upload files using Azure Storage Explorer](../../../../vs-azure-tools-storage-explorer-blobs.md)
-
-You can only use `.txt` files for custom text classification. If your data is in another format, you can use the [CLUtils parse command](https://github.com/microsoft/CognitiveServicesLanguageUtilities/blob/main/CustomTextAnalytics.CLUtils/Solution/CogSLanguageUtilities.ViewLayer.CliCommands/Commands/ParseCommand/README.md) to change your file format.
-
-You can upload an annotated dataset, or you can upload an unannotated one and [tag your data](../how-to/tag-data.md) in Language Studio.
-
-## Schema design
-
-The schema defines the classes that you need your model to classify your text into at runtime.
-
-* **Review and identify**: Review files in your dataset to be familiar with their structure and content, then identify how you want to classify your data.
-
- For example, if you are classifying support tickets, you might need the following classes: *login issue*, *hardware issue*, *connectivity issue*, and *new equipment request*.
-
-* **Avoid ambiguity in classes**: Ambiguity arises when the classes you specify share similar meanings. The more ambiguous your schema is, the more tagged data you may need to train your model.
-
- For example, if you are classifying food recipes, they may be similar to an extent. To differentiate between *dessert recipe* and *main dish recipe*, you may need to tag more examples to help your model distinguish between the two classes. Avoiding ambiguity saves time and yields better results.
-
-* **Out of scope data**: When using your model in production, consider adding an *out of scope* class to your schema if you expect files that don't belong to any of your classes. Then add a few files to your dataset to be tagged as *out of scope*. The model can learn to recognize irrelevant files, and predict their tags accordingly.
-
-## Next steps
-
-If you haven't already, create a custom text classification project. If it's your first time using custom text classification, consider following the [quickstart](../quickstart.md) to create an example project. You can also see the [project requirements](../how-to/create-project.md) for more details on what you need to create a project.
cognitive-services Improve Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/how-to/improve-model.md
- Title: How to improve custom text classification model performance-
-description: Learn about improving a model for Custom Text Classification.
------ Previously updated : 11/02/2021----
-# Improve model performance
-
-After you've trained your model and reviewed its evaluation details, you can decide whether you need to improve its performance. In this article, you will review inconsistencies between the classes predicted by the model and the classes you tagged, and examine data distribution.
-
-## Prerequisites
-
-To improve a model, you will need:
-
-* [A custom text classification project](create-project.md) with a configured Azure blob storage account.
-* Text data that has [been uploaded](create-project.md#prepare-training-data) to your storage account.
-* [Tagged data](tag-data.md) used to successfully [train a model](train-model.md).
-* Reviewed the [model evaluation details](view-model-evaluation.md) to determine how your model is performing.
-* Familiarity with the [evaluation metrics](../concepts/evaluation.md).
-
-See the [application development lifecycle](../overview.md#project-development-lifecycle) for more information.
-
-## Review test set predictions
-
-Using Language Studio, you can review how your model performs versus how you expected it to perform. You can review predicted and tagged classes side by side for each model you have trained.
-
-1. Go to your project page in [Language Studio](https://aka.ms/languageStudio).
- 1. Look for the section in Language Studio labeled **Classify text**.
- 2. Select **Custom text classification**.
-
-2. Select **Improve model** from the left side menu.
-
-3. Select **Review test set**.
-
-4. Choose your trained model from the **Model** drop-down menu.
-
-5. For easier analysis, you can toggle on **Show incorrect predictions only** to view mistakes only.
-
- :::image type="content" source="../media/review-validation-set.png" alt-text="Review the validation set" lightbox="../media/review-validation-set.png":::
-
-6. If a file that should belong to class `X` is consistently classified as class `Y`, there is ambiguity between these classes, and you should reconsider your schema.
-
-## Examine data distribution from Language studio
-
-By examining the data distribution in your files, you can decide if any class is under-represented. Data imbalance happens when the files used for training aren't distributed equally among the classes, and it introduces a risk to model performance. For example, if *class 1* has 50 tagged files while *class 2* has only 10, *class 1* is over-represented and *class 2* is under-represented.
-
-In this case, the model is biased towards classifying your files as *class 1* and might overlook *class 2*. A more complex issue arises when data imbalance is combined with an ambiguous schema: if the two classes don't have a clear distinction between them and *class 2* is under-represented, the model will most likely classify the text as *class 1*.
-
-In the [evaluation metrics](../concepts/evaluation.md), an over-represented class tends to have higher recall than other classes, while under-represented classes have lower recall.
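-Before using the Language Studio view described in the following steps, you can also get a quick sense of imbalance by counting tagged files per class yourself. A minimal sketch, assuming you keep a simple mapping of file names to tagged classes (the file names and classes below are hypothetical):
-
-```python
-from collections import Counter
-
-# Hypothetical example: file name -> list of classes tagged on that file.
-tags = {
-    "ticket-001.txt": ["login issue"],
-    "ticket-002.txt": ["hardware issue"],
-    "ticket-003.txt": ["login issue", "connectivity issue"],
-}
-
-# Count how many tagged files each class has to spot over- and under-represented classes.
-distribution = Counter(cls for classes in tags.values() for cls in classes)
-total = sum(distribution.values())
-for cls, count in distribution.most_common():
-    print(f"{cls}: {count} tagged files ({count / total:.0%} of all tags)")
-```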
-
-To examine data distribution in your dataset:
-
-1. Go to your project page in [Language Studio](https://aka.ms/languageStudio).
- 1. Look for the section in Language Studio labeled **Classify text**.
- 2. Select **Custom text classification**.
-
-2. Select **Improve model** from the left side menu.
-
-3. Select **Examine data distribution**
-
- :::image type="content" source="../media/examine-data-distribution.png" alt-text="Examine the data distribution" lightbox="../media/examine-data-distribution.png":::
-
-4. Go back to the **Tag data** page and make adjustments once you have formed an idea of how you should tag your data differently.
-
-## Next steps
-
-* Once you're satisfied with how your model performs, you can start [sending text classification requests](call-api.md) using the runtime API.
cognitive-services Tag Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/how-to/tag-data.md
- Title: How to tag your data for custom classification - Azure Cognitive Services-
-description: Learn about how to tag your data for use with the custom text classification API.
------ Previously updated : 04/05/2022----
-# Tag text data for training your model
-
-Before creating a custom text classification model, you need tagged data. If your data isn't tagged already, you can tag it in Language Studio. Tagged data informs the model how to interpret text, and is used for training and evaluation.
-
-## Prerequisites
-
-Before you can tag data, you need:
-* [A successfully created project](create-project.md) with a configured Azure blob storage account.
-* Text data that has [been uploaded](create-project.md#prepare-training-data) to your storage account.
-
-See the [application development lifecycle](../overview.md#project-development-lifecycle) for more information.
-
-## Tag your data
-
-After training data is uploaded to your Azure storage account, you will need to tag it, so your model knows which words are associated with the classes you need. When you tag data in Language Studio (or tag it manually), the tags are stored in [the JSON format](../concepts/data-formats.md) that your model will use during training.
-
-As you tag your data, keep in mind:
-
-* In general, more tagged data leads to better results, provided the data is tagged accurately.
-
-* Although we recommended having around 50 tagged files per class, there's no fixed number that can guarantee your model will perform the best, because model performance also depends on possible ambiguity in your [schema](design-schema.md), and the quality of your tagged data.
-
-Use the following steps to tag your data:
-
-1. Go to your project page in [Language Studio](https://aka.ms/custom-classification).
-
-1. From the left side menu, select **Tag data**
-
-3. You can find a list of all the `.txt` files available in your project on the left. You can select the file you want to start tagging, or use the **Back** and **Next** buttons at the bottom of the page to navigate.
-
-4. You can view all files or only tagged files by changing the selection in the **Viewing** drop-down menu.
-
- > [!NOTE]
- > If you enabled multiple languages for your project, you will find an additional **Language** drop-down menu. Select the language of each document.
-
-5. Before you start tagging, add classes to your project from the top-right corner
-
- :::image type="content" source="../media/tag-1.png" alt-text="A screenshot showing the data tagging screen" lightbox="../media/tag-1.png":::
-
-6. Start tagging your files. In the images below:
-
-    * *Section 1* displays the content of the text file.
-
-    * *Section 2* includes your project's classes and their distribution across your files and tags.
-
-    * *Section 3* is the split-project-data toggle. You can choose to add the selected text file to your training set or your testing set. By default, the toggle is off, and all text files are added to your training set.
-
-    **Single label classification**: your file can only be tagged with one class. Tag it by selecting the button next to the class you want to tag this file with.
-
- :::image type="content" source="../media/single.png" alt-text="A screenshot showing the single label classification tag page" lightbox="../media/single.png":::
-
-    **Multi label classification**: your file can be tagged with multiple classes. Tag it by selecting the check boxes next to all applicable classes for this file.
-
- :::image type="content" source="../media/multiple.png" alt-text="A screenshot showing the multiple label classification tag page." lightbox="../media/multiple.png":::
-
-In the distribution section, you can view the class distribution across the training and testing sets.
-
-
-To add a text file to the training or testing set, use the buttons to choose the set it belongs to.
-
-> [!TIP]
-> It is recommended to define your testing set.
-
-Your changes will be saved periodically as you add tags. If they haven't been saved yet, you will see a warning at the top of the page. If you want to save manually, select **Save tags** at the top of the page.
-
-## Remove tags
-
-If you want to remove a tag, uncheck the button next to the class.
-
-## Delete or rename classes
-
-To delete or rename a class:
-
-1. Select the class you want to edit from the menu on the right.
-2. Select the three dots and choose the option you want from the drop-down menu.
-
-## Next steps
-
-After you've tagged your data, you can begin [training a model](train-model.md) that will learn based on your data.
cognitive-services Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/how-to/train-model.md
- Title: How to train your custom text classification model - Azure Cognitive Services-
-description: Learn about how to train your model for custom text classification.
------ Previously updated : 04/05/2022----
-# How to train a custom text classification model
--
-Training is the process where the model learns from your [tagged data](tag-data.md). After training is completed, you will be able to [use the model evaluation metrics](../how-to/view-model-evaluation.md) to determine if you need to [improve your model](../how-to/improve-model.md).
-
-## Prerequisites
-
-Before you train your model you need:
-
-* [A successfully created project](create-project.md) with a configured Azure blob storage account.
-* Text data that has [been uploaded](create-project.md#prepare-training-data) to your storage account.
-* [Tagged data](tag-data.md)
-
-See the [application development lifecycle](../overview.md#project-development-lifecycle) for more information.
-
-## Train model in Language Studio
-
-1. Go to your project page in [Language Studio](https://aka.ms/LanguageStudio).
-
-2. Select **Train** from the left side menu.
-
-3. Select **Start a training job** from the top menu.
-
-4. To train a new model, select **Train a new model** and type the model name in the text box below it. To **overwrite an existing model**, select that option and choose the model you want to overwrite from the drop-down menu.
-
- :::image type="content" source="../media/train-model.png" alt-text="Create a new model" lightbox="../media/train-model.png":::
-
-If you enabled the [**Split project data manually** option](tag-data.md#tag-your-data) when you tagged your data, you will see two training options:
-
-* **Automatically split the testing set**: The data for each class is randomly split between the training and testing sets, according to the percentages you choose (see the sketch after these steps for intuition). The default value is 80% for training and 20% for testing. To change these values, choose the set you want to change and enter the new value.
-
-* **Use a manual split**: Assign each document to either the training or testing set. This option requires that you first add files to the testing set when tagging your data.
-
-5. Click on the **Train** button.
-
-6. You can check the status of the training job in the same page. Only successfully completed training jobs will generate models.
-
-You can only have one training job running at a time. You cannot create or start other tasks in the same project.
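-For intuition about the automatic split option above, the following sketch shows what a per-class 80/20 random split of tagged files looks like. The split itself is performed by the service; this is only an illustration, and the class and file names are hypothetical.
-
-```python
-import random
-
-# Hypothetical tagged files grouped by class.
-files_by_class = {
-    "login issue": [f"login-{i}.txt" for i in range(50)],
-    "hardware issue": [f"hardware-{i}.txt" for i in range(50)],
-}
-
-train, test = [], []
-for cls, files in files_by_class.items():
-    shuffled = random.sample(files, len(files))  # shuffle within each class
-    cutoff = int(len(shuffled) * 0.8)            # default: 80% training, 20% testing
-    train.extend(shuffled[:cutoff])
-    test.extend(shuffled[cutoff:])
-
-print(len(train), "training files,", len(test), "testing files")
-```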
-
-<!-- After training has completed successfully, keep in mind:
-
-* [View the model's evaluation details](../how-to/view-model-evaluation.md) After model training, model evaluation is done against the [test set](../how-to/train-model.md#data-splits), which was not introduced to the model during training. By viewing the evaluation, you can get a sense of how the model performs in real-life scenarios.
-
-* [Examine data distribution](../how-to/improve-model.md#examine-data-distribution-from-language-studio) Make sure that all classes are well represented and that you have a balanced data distribution to make sure that all your classes are adequately represented. If a certain class is tagged far less frequent than the others, this class is likely under-represented and most occurrences probably won't be recognized properly by the model at runtime. In this case, consider adding more files that belong to this class to your dataset.
- -->
-* [Improve performance (optional)](../how-to/improve-model.md): Other than revising [tagged data](tag-data.md) based on error analysis, you may want to increase the number of tagged instances for under-performing classes, or improve the diversity of your tagged data. This helps your model learn to give correct predictions for the potential linguistic phenomena that cause failures.
-
-<!-- * Define your own test set: If you are using a random split option and the resulting test set was not comprehensive enough, consider defining your own test to include a variety of data layouts and balanced tagged classes.
- -->
-## Next steps
-
-After training is completed, you will be able to [use the model evaluation metrics](../how-to/view-model-evaluation.md) to optionally [improve your model](../how-to/improve-model.md). Once you're satisfied with your model, you can deploy it, making it available to use for [classifying text](call-api.md).
cognitive-services View Model Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/how-to/view-model-evaluation.md
- Title: View a custom text classification model evaluation - Azure Cognitive Services-
-description: Learn how to view the evaluation scores for a custom text classification model
------ Previously updated : 11/02/2021----
-# View the model evaluation
-
-Reviewing model evaluation is an important step in developing a custom text classification model. It helps you learn how well your model is performing, and gives you an idea about how it will perform when used in production.
--
-## Prerequisites
-
-Before you view your model's evaluation, you need:
-* [A custom text classification project](create-project.md) with a configured Azure blob storage account.
-* Text data that has [been uploaded](create-project.md#prepare-training-data) to your storage account.
-* [Tagged data](tag-data.md)
-* A successfully [trained model](train-model.md)
-
-See the [application development lifecycle](../overview.md#project-development-lifecycle) for more information.
-
-## Model evaluation
-
-The evaluation process uses the trained model to predict user-defined classes for files in the test set, and compares them with the provided data tags. The test set consists of data that was not introduced to the model during the training process.
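-For intuition about what this comparison produces, here is a minimal sketch of computing per-class precision and recall from predicted versus tagged classes. The service calculates these metrics for you; the tagged and predicted values below are hypothetical.
-
-```python
-from collections import Counter
-
-# Hypothetical tagged (true) and predicted classes for the files in the test set.
-tagged = ["Action", "Thriller", "Action", "Romance", "Thriller"]
-predicted = ["Action", "Action", "Action", "Romance", "Thriller"]
-
-true_positives, predicted_counts, tagged_counts = Counter(), Counter(), Counter()
-for true_cls, pred_cls in zip(tagged, predicted):
-    predicted_counts[pred_cls] += 1
-    tagged_counts[true_cls] += 1
-    if true_cls == pred_cls:
-        true_positives[true_cls] += 1
-
-for cls in sorted(tagged_counts):
-    precision = true_positives[cls] / predicted_counts[cls] if predicted_counts[cls] else 0.0
-    recall = true_positives[cls] / tagged_counts[cls]
-    print(f"{cls}: precision={precision:.2f}, recall={recall:.2f}")
-```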
-
-## View the model details using Language Studio
---
-Under the **Test set confusion matrix**, you can find the confusion matrix for the model.
-
-> [!NOTE]
-> The confusion matrix is currently not supported for multi label classification projects.
-
-**Single label classification**
--
-<!-- **Multi label classification**
--
-## Next steps
-
-As you review how your model performs, learn about the [evaluation metrics](../concepts/evaluation.md) that are used. Once you know whether your model's performance needs to improve, you can begin [improving the model](improve-model.md).
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/language-support.md
- Title: Language support in custom text classification-
-description: Learn about which languages are supported by custom text classification.
------ Previously updated : 03/14/2022----
-# Language support
-
-Use this article to learn which languages are supported by custom text classification.
-
-## Multiple language support
-
-With custom text classification, you can train a model in one language and test it in another. This feature saves time and effort; instead of building separate projects for every language, you can handle a multilingual dataset in one project. Your dataset doesn't have to be entirely in the same language, but you do have to enable this option at project creation. If you notice your model performing poorly in certain languages during the evaluation process, consider adding more data in those languages to your training set.
-
-> [!NOTE]
-> To enable support for multiple languages, you need to enable this option when [creating your project](how-to/create-project.md), or you can enable it later from the project settings page.
-
-## Languages supported by custom text classification
-
-Custom text classification supports `.txt` files in the following languages:
-
-| Language | Language Code |
-| | |
-| Afrikaans | `af` |
-| Amharic | `am` |
-| Arabic | `ar` |
-| Assamese | `as` |
-| Azerbaijani | `az` |
-| Belarusian | `be` |
-| Bulgarian | `bg` |
-| Bengali | `bn` |
-| Breton | `br` |
-| Bosnian | `bs` |
-| Catalan | `ca` |
-| Czech | `cs` |
-| Welsh | `cy` |
-| Danish | `da` |
-| German | `de` |
-| Greek | `el` |
-| English (US) | `en-us` |
-| Esperanto | `eo` |
-| Spanish | `es` |
-| Estonian | `et` |
-| Basque | `eu` |
-| Persian (Farsi) | `fa` |
-| Finnish | `fi` |
-| French | `fr` |
-| Western Frisian | `fy` |
-| Irish | `ga` |
-| Scottish Gaelic | `gd` |
-| Galician | `gl` |
-| Gujarati | `gu` |
-| Hausa | `ha` |
-| Hebrew | `he` |
-| Hindi | `hi` |
-| Croatian | `hr` |
-| Hungarian | `hu` |
-| Armenian | `hy` |
-| Indonesian | `id` |
-| Italian | `it` |
-| Japanese | `ja` |
-| Javanese | `jv` |
-| Georgian | `ka` |
-| Kazakh | `kk` |
-| Khmer | `km` |
-| Kannada | `kn` |
-| Korean | `ko` |
-| Kurdish (Kurmanji) | `ku` |
-| Kyrgyz | `ky` |
-| Latin | `la` |
-| Lao | `lo` |
-| Lithuanian | `lt` |
-| Latvian | `lv` |
-| Malagasy | `mg` |
-| Macedonian | `mk` |
-| Malayalam | `ml` |
-| Mongolian | `mn` |
-| Marathi | `mr` |
-| Malay | `ms` |
-| Burmese | `my` |
-| Nepali | `ne` |
-| Dutch | `nl` |
-| Norwegian (Bokmal) | `nb` |
-| Oriya | `or` |
-| Punjabi | `pa` |
-| Polish | `pl` |
-| Pashto | `ps` |
-| Portuguese (Brazil) | `pt-br` |
-| Portuguese (Portugal) | `pt-pt` |
-| Romanian | `ro` |
-| Russian | `ru` |
-| Sanskrit | `sa` |
-| Sindhi | `sd` |
-| Sinhala | `si` |
-| Slovak | `sk` |
-| Slovenian | `sl` |
-| Somali | `so` |
-| Albanian | `sq` |
-| Serbian | `sr` |
-| Sundanese | `su` |
-| Swedish | `sv` |
-| Swahili | `sw` |
-| Tamil | `ta` |
-| Telugu | `te` |
-| Thai | `th` |
-| Filipino | `tl` |
-| Turkish | `tr` |
-| Uyghur | `ug` |
-| Ukrainian | `uk` |
-| Urdu | `ur` |
-| Uzbek | `uz` |
-| Vietnamese | `vi` |
-| Xhosa | `xh` |
-| Yiddish | `yi` |
-| Chinese (Simplified) | `zh-hans` |
-| Zulu | `zu` |
-
-## Next steps
-
-* [Custom text classification overview](overview.md)
-* [Data limits](service-limits.md)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/overview.md
- Title: What is custom text classification (preview) in Azure Cognitive Services for Language?-
-description: Learn how use custom text classification.
------ Previously updated : 11/02/2021----
-# What is custom text classification (preview)?
-
-Custom text classification is one of the features offered by [Azure Cognitive Service for Language](../overview.md). It is a cloud-based API service that applies machine-learning intelligence to enable you to build custom models for text classification tasks.
-
-Custom text classification is offered as part of the custom features within Azure Cognitive Service for Language. This feature enables its users to build custom AI models to classify text into custom categories pre-defined by the user. By creating a custom text classification project, developers can iteratively tag data, train, evaluate, and improve model performance before making it available for consumption. The quality of the tagged data greatly impacts model performance. To simplify building and customizing your model, the service offers a custom web portal that can be accessed through the [Language studio](https://aka.ms/languageStudio). You can easily get started with the service by following the steps in this [quickstart](quickstart.md).
-
-Custom text classification supports two types of projects:
-
-* **Single label classification** - You can assign a single class to each file in your dataset. For example, a movie script could only be classified as "Action" or "Thriller".
-* **Multiple label classification** - You can assign multiple classes to each file in your dataset. For example, a movie script could be classified as "Action", or as both "Action" and "Thriller".
-
-This documentation contains the following article types:
-
-* [Quickstarts](quickstart.md) are getting-started instructions to guide you through making requests to the service.
-* [Concepts](concepts/evaluation.md) provide explanations of the service functionality and features.
-* [How-to guides](how-to/tag-data.md) contain instructions for using the service in more specific or customized ways.
-
-## Example usage scenarios
-
-### Automatic emails/ticket triage
-
-Support centers of all types receive thousands to hundreds of thousands of emails/tickets containing unstructured, free-form text and attachments. Timely review, acknowledgment, and routing to subject matter experts within internal teams is critical. However, email triage at this scale, involving people who review and route tickets to the right departments, takes time and precious resources. Custom text classification can be used to analyze incoming text and categorize the content, so it is automatically routed to the relevant department for the necessary actions.
-
-### Knowledge mining to enhance/enrich semantic search
-
-Search is foundational to apps that display text content to users, with common scenarios including: catalog or document search, retail product search, or knowledge mining for data science. Many enterprises across various industries are looking into building a rich search experience over private, heterogeneous content, which includes both structured and unstructured documents. As a part of their pipeline, developers can use custom text classification to categorize text into classes that are relevant to their industry. The predicted classes could be used to enrich the indexing of the file for a more customized search experience.
-
-## Project development lifecycle
-
-Creating a custom text classification project typically involves several different steps.
--
-Follow these steps to get the most out of your model:
-
-1. **Define schema**: Know your data and identify the classes you want to differentiate between, and avoid ambiguity.
-
-2. **Tag data**: The quality of data tagging is a key factor in determining model performance. Tag all the files you want to include in training, and tag them consistently: files that belong to the same class should always get the same class. If a file can fall into two classes, use a **multiple label classification** project. Avoid class ambiguity, and make sure that your classes are clearly separable from each other, especially with single label classification projects.
-
-3. **Train model**: Your model starts learning from your tagged data.
-
-4. **View model evaluation details**: View the evaluation details for your model to determine how well it performs when introduced to new data.
-
-5. **Improve model**: Work on improving your model performance by examining the incorrect model predictions and examining data distribution.
-
-6. **Deploy model**: Deploying a model makes it available for use via the [Analyze API](https://aka.ms/ct-runtime-swagger).
-
-7. **Classify text**: Use your custom model for text classification tasks.
-
-## Next steps
-
-* Use the [quickstart article](quickstart.md) to start using custom text classification.
-
-* As you go through the project development lifecycle, review the [glossary](glossary.md) to learn more about the terms used throughout the documentation for this feature.
-
-* Remember to view the [service limits](service-limits.md) for information such as [regional availability](service-limits.md#regional-availability).
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/quickstart.md
- Title: "Quickstart: Custom text classification"-
-description: Use this quickstart to start using the custom text classification feature.
------ Previously updated : 02/28/2022--
-zone_pivot_groups: usage-custom-language-features
--
-# Quickstart: Custom text classification (preview)
-
-Use this article to get started with creating a custom text classification project where you train custom models for text classification. A model is a machine learning object that learns from the example data you provide, and can then be used to classify text.
-------
-## Next steps
-
-After you've created a custom text classification model, you can:
-* [Use the runtime API to classify text](how-to/call-api.md)
-
-When you start to create your own custom text classification projects, use the how-to articles to learn more about developing your model in greater detail:
-
-* [Data selection and schema design](how-to/design-schema.md)
-* [Tag data](how-to/tag-data.md)
-* [Train a model](how-to/train-model.md)
-* [View model evaluation](how-to/view-model-evaluation.md)
-* [Improve a model](how-to/improve-model.md)
cognitive-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/service-limits.md
- Title: Custom text classification limits-
-description: Learn about the data and rate limits when using custom text classification.
--- Previously updated : 01/25/2022-------
-# Custom text classification limits
-
-Use this article to learn about the data and rate limits when using custom text classification
-
-## File limits
-
-* You can only use `.txt` files for custom text classification. If your data is in another format, you can use the [CLUtils parse command](https://aka.ms/CognitiveServicesLanguageUtilities) to crack your documents and extract the text.
-
-* All files uploaded in your container must contain data; no empty files are allowed for training.
-
-* All files should be available at the root of your container.
-
-* Your [training dataset](how-to/train-model.md) should include at least 10 files and no more than 1,000,000 files.
-
-## API limits
-
-**Authoring API**
-
-* You can send a maximum of 10 POST requests and 100 GET requests per minute.
-
-**Analyze API**
-
-* You can send a maximum of 20 GET or POST requests per minute.
-
-* The maximum size of files per request is 125,000 characters. You can send up to 25 files, as long as their collective size does not exceed 125,000 characters.
-
-> [!NOTE]
-> If you need to send files larger than the limit allows, you can break the text into smaller chunks before sending them to the API. You can use the [chunk command from CLUtils](https://github.com/microsoft/CognitiveServicesLanguageUtilities/blob/main/CustomTextAnalytics.CLUtils/Solution/CogSLanguageUtilities.ViewLayer.CliCommands/Commands/ChunkCommand/README.md) for this process.
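-If you'd rather group and chunk text yourself instead of using CLUtils, a minimal sketch that keeps each request within the limits above (at most 25 files and 125,000 characters in total per request) could look like this:
-
-```python
-MAX_CHARS_PER_REQUEST = 125_000
-MAX_DOCS_PER_REQUEST = 25
-
-def batch_documents(documents):
-    """Group documents into batches that respect the Analyze API request limits."""
-    batch, batch_chars = [], 0
-    for doc in documents:
-        # Start a new batch if adding this document would exceed either limit.
-        if batch and (batch_chars + len(doc) > MAX_CHARS_PER_REQUEST
-                      or len(batch) == MAX_DOCS_PER_REQUEST):
-            yield batch
-            batch, batch_chars = [], 0
-        batch.append(doc)
-        batch_chars += len(doc)
-    if batch:
-        yield batch
-
-# Documents longer than 125,000 characters still need to be split first (for example
-# with the CLUtils chunk command); this sketch only groups already-sized documents.
-for i, batch in enumerate(batch_documents(["first document text", "second document text"])):
-    print(f"request {i}: {len(batch)} documents")
-```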
-
-## Azure resource limits
-
-* You can only connect 1 storage account per resource. This process is irreversible. If you connect a storage account to your resource you can't disconnect it later.
-
-* You can have up to 500 projects per resource.
-
-* Project names have to be unique within the same resource, across both the custom Named Entity Recognition (NER) and custom text classification features.
-
-## Regional availability
-
-Custom text classification is only available in select Azure regions. When you create an [Azure resource](how-to/create-project.md), it must be deployed into one of the following regions:
-* **West US 2**
-* **West Europe**
-
-## Project limits
-
-* You can only connect 1 storage container for each project. This process is irreversible, if you connect a container to your project you can't disconnect it later.
-
-* You can have only 1 tags file per project. You can't change the tags file later, you can only update the tags within your project.
-
-* You can't rename your project after creation.
-
-* Your project name must only contain alphanumeric characters (letters and numbers). Spaces and special characters are not allowed. Project names can have a maximum of 50 characters.
-
-* You must have a minimum of 10 files in your project, and a maximum of 1,000,000 files.
-
-* You can have up to 10 trained models per project.
-
-* Model names have to be unique within the same project.
-
-* Model names must only contain alphanumeric characters (letters and numbers). Spaces and special characters are not allowed. Model names can have a maximum of 50 characters.
-
-* You can't rename your model after creation.
-
-* You can only train one model at a time per project.
-
-## Classes limits
-
-* You must have at least 2 classes in your project. <!-- The maximum is 200 classes. -->
-
-* It is recommended to have around 200 tagged instances per class, and you must have a minimum of 10 tagged instances per class.
-
-## Naming limits
-
-| Attribute | Limits |
-|--|--|
-| Project name | You can only use letters `(a-z, A-Z)`, and numbers `(0-9)` with no spaces. Maximum allowed length is 50 characters. |
-| Model name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `@ # _ . , ^ \ [ ]`. Maximum allowed length is 50 characters. |
-| Class name| You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `@ # _ . , ^ \ [ ]`. Maximum allowed length is 50 characters. |
-| File name | You can only use letters `(a-z, A-Z)`, and numbers `(0-9)` with no spaces. |
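-If you build project or model names programmatically, you can check them against these limits locally before calling the service. A minimal sketch using Python regular expressions that mirrors the table above:
-
-```python
-import re
-
-# Patterns mirror the naming limits table above.
-PROJECT_NAME = re.compile(r"^[a-zA-Z0-9]{1,50}$")                     # letters and numbers only
-MODEL_OR_CLASS_NAME = re.compile(r"^[a-zA-Z0-9@#_.,^\\\[\]]{1,50}$")  # plus the allowed symbols
-
-def is_valid_project_name(name: str) -> bool:
-    return bool(PROJECT_NAME.match(name))
-
-def is_valid_model_name(name: str) -> bool:
-    return bool(MODEL_OR_CLASS_NAME.match(name))
-
-print(is_valid_project_name("SupportTickets2022"))  # True
-print(is_valid_project_name("support tickets"))     # False: spaces are not allowed
-print(is_valid_model_name("tickets_v1.2"))          # True
-```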
cognitive-services Cognitive Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/tutorials/cognitive-search.md
- Title: Enrich a Cognitive Search index with custom classes-
-description: Improve your cognitive search indices using custom text classification
------ Previously updated : 02/28/2022----
-# Tutorial: Enrich Cognitive search index with custom classes from your data
-
-With the abundance of electronic documents within the enterprise, searching through them becomes a tiring and expensive task. [Azure Cognitive Search](../../../../search/search-create-service-portal.md) helps you search through your files based on their indexes. Custom text classification helps enrich the indexing of these files by classifying them into your custom classes.
-
-In this tutorial, you will learn how to:
-
-* Create a custom text classification project.
-* Publish an Azure function.
-* Add an index to your Azure Cognitive Search service.
-
-## Prerequisites
-
-* [An Azure Language resource connected to an Azure blob storage account](../how-to/create-project.md).
- * We recommend following the instructions for creating a resource using the Azure portal, for easier setup.
-
-* [An Azure Cognitive Search service](../../../../search/search-create-service-portal.md) in your current subscription
- * You can use any tier, and any region for this service.
-
-* An [Azure function app](../../../../azure-functions/functions-create-function-app-portal.md)
-
-* Download this [sample data](https://github.com/Azure-Samples/cognitive-services-sample-data-files/raw/master/language-service/Custom%20text%20classification/Custom%20multi%20classification%20-%20movies%20summary.zip).
-
-## Create a custom text classification project through Language studio
--
-## Train your model
--
-## Deploy your model
-
-To deploy your model, go to your project in [Language Studio](https://aka.ms/custom-classification). You can also use the [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/language-authoring-clu-apis-2022-03-01-preview/operations/Projects_TriggerImportProjectJob).
--
-If you deploy your model through Language Studio, your `deployment-name` will be `prod`.
-
-## Use CogSvc language utilities tool for Cognitive search integration
-
-### Publish your Azure Function
-
-1. Download and use the [provided sample function](https://aka.ms/CustomTextAzureFunction).
-
-2. After you download the sample function, open the *program.cs* file in Visual Studio and [publish the function to Azure](../../../../azure-functions/functions-develop-vs.md?tabs=in-process#publish-to-azure).
-
-### Prepare configuration file
-
-1. Download the [sample configuration file](https://aka.ms/CognitiveSearchIntegrationToolAssets) and open it in a text editor. A rough sketch of a filled-in file appears after these steps.
-
-2. Get your storage account connection string by:
-
- 1. Navigating to your storage account overview page in the [Azure portal](https://portal.azure.com/#home).
- 2. In the **Access Keys** section in the menu to the left of the screen, copy your **Connection string** to the `connectionString` field in the configuration file, under `blobStorage`.
- 3. Go to the container where you have the files you want to index and copy container name to the `containerName` field in the configuration file, under `blobStorage`.
-
-3. Get your cognitive search endpoint and keys by:
-
- 1. Navigating to your resource overview page in the [Azure portal](https://portal.azure.com/#home).
- 2. Copy the **Url** at the top-right section of the page to the `endpointUrl` field within `cognitiveSearch`.
- 3. Go to the **Keys** section in the menu to the left of the screen. Copy your **Primary admin key** to the `apiKey` field within `cognitiveSearch`.
-
-4. Get Azure Function endpoint and keys
-
- 1. To get your Azure Function endpoint and keys, go to your function overview page in the [Azure portal](https://portal.azure.com/#home).
- 2. Go to **Functions** menu on the left of the screen, and click on the function you created.
- 3. From the top menu, click **Get Function Url**. The URL will be formatted like this: `YOUR-ENDPOINT-URL?code=YOUR-API-KEY`.
- 4. Copy `YOUR-ENDPOINT-URL` to the `endpointUrl` field in the configuration file, under `azureFunction`.
- 5. Copy `YOUR-API-KEY` to the `apiKey` field in the configuration file, under `azureFunction`.
-
-5. Get your resource keys and endpoint
-
- 1. Navigate to your resource in the [Azure portal](https://portal.azure.com/#home).
- 2. From the menu on the left side, select **Keys and Endpoint**. You will need the endpoint and one of the keys for the API requests.
-
- :::image type="content" source="../../media/azure-portal-resource-credentials.png" alt-text="A screenshot showing the key and endpoint screen in the Azure portal" lightbox="../../media/azure-portal-resource-credentials.png":::
-
-6. Get your custom text classification project secrets
-
- 1. You will need your **project-name**; project names are case-sensitive.
-
- 2. You will also need the **deployment-name**.
- * If you've deployed your model via Language Studio, your deployment name will be `prod` by default.
- * If you've deployed your model programmatically, using the API, this is the deployment name you assigned in your request.
-
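The following is a rough, hypothetical sketch of assembling the configuration values described in the steps above, written as a small Python script for illustration. The `blobStorage`, `cognitiveSearch`, and `azureFunction` field names come from the steps above; verify every field name against the downloaded sample configuration file, which is the authoritative layout.

```python
# Hypothetical sketch: assemble the configuration values from the steps above
# and write them to configs.json. Field names are taken from the steps above;
# check them against the downloaded sample configuration file.
import json

config = {
    "blobStorage": {
        "connectionString": "<your-storage-connection-string>",  # step 2
        "containerName": "<your-container-name>",                # step 2
    },
    "cognitiveSearch": {
        "endpointUrl": "<your-cognitive-search-url>",            # step 3
        "apiKey": "<your-cognitive-search-admin-key>",           # step 3
    },
    "azureFunction": {
        "endpointUrl": "<your-function-endpoint-url>",           # step 4
        "apiKey": "<your-function-api-key>",                     # step 4
    },
    # The sample file also expects your Language resource endpoint and key, and
    # your custom text classification project and deployment names (steps 5 and 6);
    # the exact field names for these are defined in the downloaded sample file.
}

with open("configs.json", "w", encoding="utf-8") as f:
    json.dump(config, f, indent=2)
```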
-### Run the indexer command
-
-After you've published your Azure function and prepared your configuration file, you can run the indexer command.
-```cli
- indexer index --index-name <name-your-index-here> --configs <absolute-path-to-configs-file>
-```
-
-Replace `name-your-index-here` with the index name that appears in your Cognitive Search instance.
-
-## Next steps
-
-* [Search your app with the Cognitive Search SDK](../../../../search/search-howto-dotnet-sdk.md#run-queries)
cognitive-services Data Formats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/concepts/data-formats.md
Previously updated : 11/02/2021 Last updated : 05/24/2022 -+
-# Data formats accepted by custom NER
+# Accepted custom NER data formats
-When data is used by your model for learning, it expects the data to be in a specific format. When you tag your data in Language Studio, it gets converted to the JSON format described in this article. You can also manually tag your files.
+If you are trying to [import your data](../how-to/create-project.md#import-project) into custom NER, it has to follow a specific format. If you don't have data to import, you can [create your project](../how-to/create-project.md) and use the Language Studio to [label your documents](../how-to/tag-data.md).
+## Labels file format
-## JSON file format
-
-When you tag entities, the tags are saved as in the following JSON format. If you upload a tags file, it should follow the same format.
+Your labels file should be in the JSON format below to be used when [importing](../how-to/create-project.md#import-project) your labels into a project. A short sketch that reads this format programmatically appears after the field descriptions below.
```json {
- "extractors": [
- {
- "name": "Entity1"
- },
- {
- "name": "Entity2"
- }
+ "projectFileVersion": "2022-05-01",
+ "stringIndexType": "Utf16CodeUnit",
+ "metadata": {
+ "projectKind": "CustomEntityRecognition",
+ "storageInputContainerName": "{CONTAINER-NAME}",
+ "projectName": "{PROJECT-NAME}",
+ "multilingual": false,
+ "description": "Project-description",
+ "language": "en-us"
+ },
+ "assets": {
+ "projectKind": "CustomEntityRecognition",
+ "entities": [
+ {
+ "category": "Entity1"
+ },
+ {
+ "category": "Entity2"
+ }
], "documents": [
- {
- "location": "file1.txt",
- "language": "en-us",
- "extractors": [
- {
- "regionOffset": 0,
- "regionLength": 5129,
- "labels": [
- {
- "extractorName": "Entity1",
- "offset": 77,
- "length": 10
- },
- {
- "extractorName": "Entity2",
- "offset": 3062,
- "length": 8
- }
- ]
- }
+ {
+ "location": "{DOCUMENT-NAME}",
+ "language": "{LANGUAGE-CODE}",
+ "dataset": "{DATASET}",
+ "entities": [
+ {
+ "regionOffset": 0,
+ "regionLength": 500,
+ "labels": [
+ {
+ "category": "Entity1",
+ "offset": 25,
+ "length": 10
+ },
+ {
+ "category": "Entity2",
+ "offset": 120,
+ "length": 8
+ }
]
- },
- {
- "location": "file2.txt",
- "language": "en-us",
- "extractors": [
- {
- "regionOffset": 0,
- "regionLength": 6873,
- "labels": [
- {
- "extractorName": "Entity2",
- "offset": 60,
- "length": 7
- },
- {
- "extractorName": "Entity1",
- "offset": 2805,
- "length": 10
- }
- ]
- }
+ }
+ ]
+ },
+ {
+ "location": "{DOCUMENT-NAME}",
+ "language": "{LANGUAGE-CODE}",
+ "dataset": "{DATASET}",
+ "entities": [
+ {
+ "regionOffset": 0,
+ "regionLength": 100,
+ "labels": [
+ {
+ "category": "Entity2",
+ "offset": 20,
+ "length": 5
+ }
]
- }
+ }
+ ]
+ }
]
+ }
}+ ```
-### Data description
+|Key |Placeholder |Value | Example |
+|||-|--|
+| `multilingual` | `true`| A boolean value that enables you to have documents in multiple languages in your dataset. When your model is deployed, you can query the model in any supported language (not necessarily included in your training documents). See [language support](../language-support.md#multi-lingual-option) to learn more about multilingual support. | `true`|
+|`projectName`|`{PROJECT-NAME}`|Project name|`myproject`|
+| `storageInputContainerName`|`{CONTAINER-NAME}`|Container name|`mycontainer`|
+| `entities` | | Array containing all the entity types you have in the project. These are the entity types that will be extracted from your documents.| |
+| `documents` | | Array containing all the documents in your project and the list of entities labeled within each document. | [] |
+| `location` | `{DOCUMENT-NAME}` | The location of the documents in the storage container. Since all the documents are in the root of the container, this should be the document name.|`doc1.txt`|
+| `dataset` | `{DATASET}` | The dataset split this document is assigned to before training. Learn more about [data splitting](../how-to/train-model.md#data-splitting). Possible values for this field are `Train` and `Test`. |`Train`|
+| `regionOffset` | | The inclusive character position of the start of the text. |`0`|
+| `regionLength` | | The length of the bounding box in terms of UTF16 characters. Training only considers the data in this region. |`500`|
+| `category` | | The type of entity associated with the span of text specified. | `Entity1`|
+| `offset` | | The inclusive character position of the start of the entity. | `25`|
+| `length` | | The length of the entity in terms of UTF16 characters. | `20`|
+| `language` | `{LANGUAGE-CODE}` | A string specifying the language code for the document used in your project. If your project is a multilingual project, choose the language code of the majority of the documents. See [Language support](../language-support.md) for more information about supported language codes. |`en-us`|
-* `extractors`: An array of extractors for your data. Each extractor represents one of the entities you want to extract from your data.
-* `documents`: An array of tagged documents.
- * `location`: The path of the file. The file has to be in root of the storage container.
- * `language`: Language of the file. Use one of the [supported culture locales](../language-support.md).
- * `extractors`: Array of extractor objects to be extracted from the file.
- * `regionOffset`: The inclusive character position of the start of the text.
- * `regionLength`: The length of the bounding box in terms of UTF16 characters. Training only considers the data in this region.
- * `labels`: Array of all the tagged entities within the specified region.
- * `extractorName`: Type of the entity to be extracted.
- * `offset`: The inclusive character position of the start of the entity. This is not relative to the bounding box.
- * `length`: The length of the entity in terms of UTF16 characters.
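As an illustration of how the offsets in this format are used, here is a minimal, hypothetical Python sketch that reads a labels file in the format above and prints each labeled span next to its document. It assumes the documents are available locally under the same names as their `location` values. Note that `offset` and `length` are expressed in UTF-16 code units, while Python slices strings by code point, so the slice is only exact for text without characters outside the Basic Multilingual Plane.

```python
# Minimal sketch (assumptions noted above): print each labeled entity span
# from a labels file that follows the format shown in this article.
import json

with open("labels.json", encoding="utf-8") as f:
    labels = json.load(f)

for doc in labels["assets"]["documents"]:
    with open(doc["location"], encoding="utf-8") as doc_file:
        text = doc_file.read()
    for region in doc["entities"]:
        for label in region["labels"]:
            # Offsets are character positions in the document (not relative to the region).
            span = text[label["offset"] : label["offset"] + label["length"]]
            print(f'{doc["location"]}: {label["category"]} -> "{span}"')
```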
-## Next steps
-See the [how-to article](../how-to/tag-data.md) more information about tagging your data. When you're done tagging your data, you can [train your model](../how-to/train-model.md).
+## Next steps
+* You can import your labeled data into your project directly. Learn how to [import project](../how-to/create-project.md#import-project)
+* See the [how-to article](../how-to/tag-data.md) for more information about labeling your data. When you're done labeling your data, you can [train your model](../how-to/train-model.md).
cognitive-services Evaluation Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/concepts/evaluation-metrics.md
- Previously updated : 11/02/2021+ Last updated : 05/24/2022 -+
-# Evaluation metrics for Custom NER models
+# Evaluation metrics for custom named entity recognition models
-Model evaluation in custom entity extraction uses the following metrics:
+Your [dataset is split](../how-to/train-model.md#data-splitting) into two parts: a set for training, and a set for testing. The training set is used to train the model, while the testing set is used after training to evaluate the model's performance. The testing set is not introduced to the model through the training process, to make sure that the model is tested on new data.
+Model evaluation is triggered automatically after training completes successfully. The evaluation process starts by using the trained model to predict user-defined entities for documents in the test set, and compares them with the provided data tags (which establish a baseline of truth). The results are returned so you can review the model's performance. For evaluation, custom NER uses the following metrics (a small computation sketch follows the list):
-|Metric |Description |Calculation |
-||||
-|Precision | The ratio of successful recognitions to all attempted recognitions. This shows how many times the model's entity recognition is truly a good recognition. | `Precision = #True_Positive / (#True_Positive + #False_Positive)` |
-|Recall | The ratio of successful recognitions to the actual number of entities present. | `Recall = #True_Positive / (#True_Positive + #False_Negatives)` |
-|F1 score | The combination of precision and recall. | `F1 Score = 2 * Precision * Recall / (Precision + Recall)` |
+* **Precision**: Measures how precise/accurate your model is. It is the ratio between the correctly identified positives (true positives) and all identified positives. The precision metric reveals how many of the predicted entities are correctly labeled.
+
+ `Precision = #True_Positive / (#True_Positive + #False_Positive)`
+
+* **Recall**: Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the tagged entities are correctly predicted.
+
+ `Recall = #True_Positive / (#True_Positive + #False_Negatives)`
+
+* **F1 score**: The F1 score is a function of Precision and Recall. It's needed when you seek a balance between Precision and Recall.
+
+ `F1 Score = 2 * Precision * Recall / (Precision + Recall)` <br>
+
+>[!NOTE]
+> Precision, recall and F1 score are calculated for each entity separately (*entity-level* evaluation) and for the model collectively (*model-level* evaluation).
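As a quick illustration of the formulas above, the following Python snippet computes precision, recall, and F1 from raw true positive, false positive, and false negative counts. The counts themselves are made up for the example.

```python
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Compute precision, recall, and F1 from entity-level counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Example with made-up counts for a single entity type.
p, r, f1 = precision_recall_f1(tp=40, fp=10, fn=10)
print(f"precision={p:.2f} recall={r:.2f} F1={f1:.2f}")  # precision=0.80 recall=0.80 F1=0.80
```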
## Model-level and entity-level evaluation metrics
So what does it actually mean to have high precision or high recall for a certai
## Confusion matrix A Confusion matrix is an N x N matrix used for model performance evaluation, where N is the number of entities.
-The matrix compares the actual tags with the tags predicted by the model.
+The matrix compares the expected labels with the ones predicted by the model.
This gives a holistic view of how well the model is performing and what kinds of errors it is making. You can use the Confusion matrix to identify entities that are too close to each other and often get mistaken (ambiguity). In this case consider merging these entity types together. If that isn't possible, consider adding more tagged examples of both entities to help the model differentiate between them.
Similarly,
## Next steps
-[View a model's evaluation in Language Studio](../how-to/view-model-evaluation.md)
+* [View a model's performance in Language Studio](../how-to/view-model-evaluation.md)
+* [Train a model](../how-to/train-model.md)
cognitive-services Fail Over https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/fail-over.md
Previously updated : 02/07/2022 Last updated : 04/25/2022 -+ # Back up and recover your custom NER models
When you create a Language resource, you specify a region for it to be created i
If your app or business depends on the use of a custom NER model, we recommend that you create a replica of your project in an additional supported region. If a regional outage occurs, you can then access your model in the other fail-over region where you replicated your project.
-Replicating a project means that you export your project metadata and assets, and import them into a new project. This only makes a copy of your project settings and tagged data. You still need to [train](./how-to/train-model.md) and [deploy](how-to/call-api.md#deploy-your-model) the models to be available for use with [prediction APIs](https://aka.ms/ct-runtime-swagger).
+Replicating a project means that you export your project metadata and assets, and import them into a new project. This only makes a copy of your project settings and tagged data. You still need to [train](./how-to/train-model.md) and [deploy](how-to/deploy-model.md) the models to be available for use with [prediction APIs](https://aka.ms/ct-runtime-swagger).
In this article, you will learn how to use the export and import APIs to replicate your project from one resource to another in a different supported geographical region, along with guidance on keeping your projects in sync and the changes needed to your runtime consumption. ## Prerequisites
-* Two Azure Language resources in different Azure regions. [Create your resources](./how-to/create-project.md#azure-resources) and link them to an Azure storage account. It's recommended that you link both of your Language resources to the same storage account, though this might introduce slightly higher latency when importing your project, and training a model.
+* Two Azure Language resources in different Azure regions. [Create your resources](./how-to/create-project.md#create-a-language-resource) and connect them to an Azure storage account. It's recommended that you connect both of your Language resources to the same storage account, though this might introduce slightly higher latency when importing your project, and training a model.
-## Get your resource keys and endpoint
+## Get your resource keys and endpoint
Use the following steps to get the keys and endpoint of your primary and secondary resources. These will be used in the following steps.
-1. Go to your resource overview page in the [Azure portal](https://ms.portal.azure.com/#home)
-2. From the menu of the left side of the screen, select **Keys and Endpoint**. Use endpoint for the API requests and youΓÇÖll need the key for `Ocp-Apim-Subscription-Key` header.
-
- :::image type="content" source="../media/azure-portal-resource-credentials.png" alt-text="A screenshot showing the key and endpoint screen for an Azure resource." lightbox="../media/azure-portal-resource-credentials.png":::
-
- > [!TIP]
- > Keep a note of the keys and endpoints for both your primary and secondary resources. You will use these values to replace the placeholder values in the following examples.
+> [!TIP]
+> Keep a note of keys and endpoints for both primary and secondary resources. Use these values to replace the following placeholders:
+`{PRIMARY-ENDPOINT}`, `{PRIMARY-RESOURCE-KEY}`, `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}`.
+> Also take note of your project name, your model name and your deployment name. Use these values to replace the following placeholders: `{PROJECT-NAME}`, `{MODEL-NAME}` and `{DEPLOYMENT-NAME}`.
## Export your primary project assets
Start by exporting the project assets from the project in your primary resource.
### Submit export job
-Create a **POST** request using the following URL, headers, and JSON body to export project metadata and assets.
-
-Use the following URL to export your primary project assets. Replace the placeholder values below with your own values.
-
-```rest
-{YOUR-PRIMARY-ENDPOINT}/language/analyze-text/projects/{PROJECT-NAME}/:export?api-version=2021-11-01-preview
-```
-
-|Placeholder |Value | Example |
-||||
-|`{YOUR-PRIMARY-ENDPOINT}` | The endpoint for authenticating your API request. This is the endpoint for your primary resource. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
-|`{PROJECT-NAME}` | The name for your project. This value is case-sensitive. | `myProject` |
-
-#### Headers
-
-Use the following headers to authenticate your request.
+Replace the placeholders in the following request with your `{PRIMARY-ENDPOINT}` and `{PRIMARY-RESOURCE-KEY}` that you obtained in the first step.
-|Key|Description|Value|
-|--|--|--|
-|`Ocp-Apim-Subscription-Key`| The key to your primary resource. Used for authenticating your API requests.| `{YOUR-PRIMARY-RESOURCE-KEY}` |
-|`format`| The format you want to use for the exported assets. | `JSON` |
-
-#### Body
-
-Use the following JSON in your request body, specifying that you want to export all the assets.
-
-```json
-{
- "assetsToExport": ["*"]
-}
-```
-
-Once you send your API request, you'll receive a `202` response indicating success. In the response headers, extract the `location` value. It will be formatted like this:
-
-```rest
-{YOUR-PRIMARY-ENDPOINT}/language/analyze-text/projects/{PROJECT-NAME}/export/jobs/{JOB-ID}?api-version=2021-11-01-preview
-```
-
-`JOB-ID` is used to identify your request, since this operation is asynchronous. You'll use this URL in the next step to get the export status.
### Get export job status
-Use the following **GET** request to query the status of your export job status. You can use the URL you received from the previous step, or replace the placeholder values below with your own values.
-
-```rest
-{YOUR-PRIMARY-ENDPOINT}/language/analyze-text/projects/{PROJECT-NAME}/export/jobs/{JOB-ID}?api-version=2021-11-01-preview
-```
+Replace the placeholders in the following request with your `{PRIMARY-ENDPOINT}` and `{PRIMARY-RESOURCE-KEY}` that you obtained in the first step.
-|Placeholder |Value | Example |
-||||
-|`{YOUR-PRIMARY-ENDPOINT}` | The endpoint for authenticating your API request. This is the endpoint for your primary resource. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
-|`{PROJECT-NAME}` | The name for your project. This value is case-sensitive. | `myProject` |
-|`{JOB-ID}` | The ID for locating your export job status. This is in the `location` header value you received in the previous step. | `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx` |
-#### Headers
-
-Use the following header to authenticate your request.
-
-|Key|Description|Value|
-|--|--|--|
-|`Ocp-Apim-Subscription-Key`| The key to your primary resource. Used for authenticating your API requests.| `{YOUR-PRIMARY-RESOURCE-KEY}` |
-
-#### Response body
-
-```json
-{
- "resultUrl": "{RESULT-URL}",
- "jobId": "string",
- "createdDateTime": "2021-10-19T23:24:41.572Z",
- "lastUpdatedDateTime": "2021-10-19T23:24:41.572Z",
- "expirationDateTime": "2021-10-19T23:24:41.572Z",
- "status": "unknown",
- "errors": [
- {
- "code": "unknown",
- "message": "string"
- }
- ]
-}
-```
-
-Use the url from the `resultUrl` key in the body to view the exported assets from this job.
-
-### Get export results
-
-Submit a **GET** request using the `{RESULT-URL}` you received from the previous step to view the results of the export job.
-
-#### Headers
-
-Use the following header to authenticate your request.
-
-|Key|Description|Value|
-|--|--|--|
-|`Ocp-Apim-Subscription-Key`| The key to your primary resource. Used for authenticating your API requests.| `{YOUR-PRIMARY-RESOURCE-KEY}` |
Copy the response body as you will use it as the body for the next import job.
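For reference, the export flow can also be scripted. The sketch below is a rough Python illustration based on the request shapes shown in the older examples above (the `:export` route, the `2021-11-01-preview` API version, and the `Ocp-Apim-Subscription-Key` header); verify the exact routes, versions, and job status values against the current authoring API reference.

```python
# Sketch: export project assets from the primary resource and keep the result
# body so it can be imported into the secondary resource.
import time
import requests

PRIMARY_ENDPOINT = "https://<primary-custom-subdomain>.cognitiveservices.azure.com"
PRIMARY_KEY = "<primary-resource-key>"
PROJECT_NAME = "<project-name>"
API_VERSION = "2021-11-01-preview"  # assumed from the examples above; newer versions may exist

headers = {"Ocp-Apim-Subscription-Key": PRIMARY_KEY}

# 1. Submit the export job.
resp = requests.post(
    f"{PRIMARY_ENDPOINT}/language/analyze-text/projects/{PROJECT_NAME}/:export",
    params={"api-version": API_VERSION},
    headers=headers,
    json={"assetsToExport": ["*"]},
)
resp.raise_for_status()
job_url = resp.headers["location"]  # .../projects/{PROJECT-NAME}/export/jobs/{JOB-ID}?...

# 2. Poll the job until it reaches a terminal state, then fetch the exported assets.
while True:
    status = requests.get(job_url, headers=headers).json()
    # Terminal status names are assumptions; inspect the job response in your environment.
    if status.get("status") in ("succeeded", "failed", "cancelled"):
        break
    time.sleep(5)

exported_assets = requests.get(status["resultUrl"], headers=headers).json()
# `exported_assets` is the body you send to the `:import` endpoint of the secondary resource.
```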
Now go ahead and import the exported project assets in your new project in the s
### Submit import job
-Create a **POST** request using the following URL, headers, and JSON body to create your project and import the tags file.
-
-Use the following URL to create a project and import your tags file. Replace the placeholder values below with your own values.
-
-```rest
-{YOUR-SECONDARY-ENDPOINT}/language/analyze-text/projects/{PROJECT-NAME}/:import?api-version=2021-11-01-preview
-```
-
-|Placeholder |Value | Example |
-||||
-|`{YOUR-SECONDARY-ENDPOINT}` | The endpoint for authenticating your API request. This is the endpoint for your secondary resource. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
-|`{PROJECT-NAME}` | The name for your project. This value is case-sensitive. | `myProject` |
--
-#### Headers
-
-Use the following header to authenticate your request.
-
-|Key|Description|Value|
-|--|--|--|
-|`Ocp-Apim-Subscription-Key`| The key to your secondary resource. Used for authenticating your API requests.| `{YOUR-SECONDARY-RESOURCE-KEY}` |
-
-#### Body
-
-Use the response body you got from the previous export step. It will be formatted like this:
-
-```json
-{
- "api-version": "2021-11-01-preview",
- "metadata": {
- "name": "myProject",
- "multiLingual": true,
- "description": "Trying out custom NER",
- "modelType": "Extraction",
- "language": "en-us",
- "storageInputContainerName": "YOUR-CONTAINER-NAME",
- "settings": {}
- },
- "assets": {
- "extractors": [
- {
- "name": "Entity1"
- },
- {
- "name": "Entity2"
- }
- ],
- "documents": [
- {
- "location": "doc1.txt",
- "language": "en-us",
- "extractors": [
- {
- "regionOffset": 0,
- "regionLength": 500,
- "labels": [
- {
- "extractorName": "Entity1",
- "offset": 25,
- "length": 10
- },
- {
- "extractorName": "Entity2",
- "offset": 120,
- "length": 8
- }
- ]
- }
- ]
- },
- {
- "location": "doc2.txt",
- "language": "en-us",
- "extractors": [
- {
- "regionOffset": 0,
- "regionLength": 100,
- "labels": [
- {
- "extractorName": "Entity2",
- "offset": 20,
- "length": 5
- }
- ]
- }
- ]
- }
- ]
- }
-}
-```
-
-Now you have replicated your project into another resource in another region.
-
-## Train your model
-
-Importing a project only copies its assets and metadata. You still need to train your model, which will incur usage on your account.
-
-### Submit training job
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
-Create a **POST** request using the following URL, headers, and JSON body to start training an NER model. Replace the placeholder values below with your own values.
-```rest
-{YOUR-SECONDARY-ENDPOINT}/language/analyze-text/projects/{PROJECT-NAME}/:train?api-version=2021-11-01-preview
-```
+### Get import job status
-|Placeholder |Value | Example |
-||||
-|`{YOUR-SECONDARY-ENDPOINT}` | The endpoint for authenticating your API request. This is the endpoint for your secondary resource. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
-|`{PROJECT-NAME}` | The name for your project. This value is case-sensitive. | `myProject` |
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
-### Headers
-Use the following header to authenticate your request.
-|Key|Description|Value|
-|--|--|--|
-|`Ocp-Apim-Subscription-Key`| The key to your secondary resource. Used for authenticating your API requests.| `{YOUR-SECONDARY-RESOURCE-KEY}` |
-
-### Request body
-
-Use the following JSON in your request. Use the same model name and the `runValidation` setting if you ran validation in your primary project, for consistency.
-
-```json
-{
- "modelLabel": "{MODEL-NAME}",
- "runValidation": true
-}
-```
-
-|Key |Value | Example |
-||||
-|`modelLabel` | Your Model name. | {MODEL-NAME} |
-|`runValidation` | Boolean value to run validation on the test set. | true |
-
-Once you send your API request, you'll receive a `202` response indicating success. In the response headers, extract the `location` value. It will be formatted like this:
-
-```rest
-{YOUR-SECONDARY-ENDPOINT}/language/analyze-text/projects/{PROJECT-NAME}/train/jobs/{JOB-ID}?api-version=2021-11-01-preview
-```
+## Train your model
-`JOB-ID` is used to identify your request, since this operation is asynchronous. You'll use this URL in the next step to get the training status.
+After importing your project, you have only copied the project's assets and metadata. You still need to train your model, which will incur usage on your account.
-### Get Training Status
+### Submit training job
-Use the following **GET** request to query the status of your model's training process. You can use the URL you received from the previous step, or replace the placeholder values below with your own values.
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
-```rest
-{YOUR-SECONDARY-ENDPOINT}/language/analyze-text/projects/{PROJECT-NAME}/train/jobs/{JOB-ID}?api-version=2021-11-01-preview
-```
-|Placeholder |Value | Example |
-||||
-|`{YOUR-SECONDARY-ENDPOINT}` | The endpoint for authenticating your API request. This is the endpoint for your secondary resource. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
-|`{PROJECT-NAME}` | The name for your project. This value is case-sensitive. | `myProject` |
-|`{JOB-ID}` | The ID for locating your model's training status. This is in the `location` header value you received in the previous step. | `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx` |
-#### Headers
+### Get training status
-Use the following header to authenticate your request.
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
-|Key|Description|Value|
-|--|--|--|
-|`Ocp-Apim-Subscription-Key`| The key to your secondary resource. Used for authenticating your API requests.| `{YOUR-SECONDARY-RESOURCE-KEY}` |
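As a rough illustration based on the older request format shown above, training on the secondary resource can be scripted like this; the API version and job status values are assumptions, so check them against the current authoring API reference.

```python
# Sketch: start training in the secondary project and poll the training job.
import time
import requests

SECONDARY_ENDPOINT = "https://<secondary-custom-subdomain>.cognitiveservices.azure.com"
SECONDARY_KEY = "<secondary-resource-key>"
PROJECT_NAME = "<project-name>"
MODEL_NAME = "<model-name>"         # use the same model name as in your primary project
API_VERSION = "2021-11-01-preview"  # assumed from the examples above

headers = {"Ocp-Apim-Subscription-Key": SECONDARY_KEY}

resp = requests.post(
    f"{SECONDARY_ENDPOINT}/language/analyze-text/projects/{PROJECT_NAME}/:train",
    params={"api-version": API_VERSION},
    headers=headers,
    json={"modelLabel": MODEL_NAME, "runValidation": True},
)
resp.raise_for_status()
train_job_url = resp.headers["location"]

while True:
    job = requests.get(train_job_url, headers=headers).json()
    print("training status:", job.get("status"))
    if job.get("status") in ("succeeded", "failed", "cancelled"):  # assumed terminal states
        break
    time.sleep(30)
```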
## Deploy your model This is the step where you make your trained model available for consumption via the [runtime prediction API](https://aka.ms/ct-runtime-swagger). > [!TIP]
-> Use the same deployment name as your primary project for easier maintenance, and more minimal changes to your system for redirecting your traffic.
-
-## Submit deploy job
-
-Create a **PUT** request using the following URL, headers, and JSON body to start deploying a custom NER model.
-
-```rest
-{YOUR-SECONDARY-ENDPOINT}/language/analyze-text/projects/{PROJECT-NAME}/deployments/{DEPLOYMENT-NAME}?api-version=2021-11-01-preview
-```
-
-|Placeholder |Value | Example |
-||||
-|`{YOUR-SECONDARY-ENDPOINT}` | The endpoint for authenticating your API request. This is the endpoint for your secondary resource. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
-|`{PROJECT-NAME}` | The name for your project. This value is case-sensitive. | `myProject` |
-|`{DEPLOYMENT-NAME}` | The name of your deployment. This value is case-sensitive. | `prod` |
-
-#### Headers
-
-Use the following header to authenticate your request.
-
-|Key|Description|Value|
-|--|--|--|
-|`Ocp-Apim-Subscription-Key`| The key to your secondary resource. Used for authenticating your API requests.| `{YOUR-SECONDARY-RESOURCE-KEY}` |
+> Use the same deployment name as your primary project for easier maintenance and minimal changes to your system to handle redirecting your traffic.
-#### Request body
+### Submit deployment job
-Use the following JSON in your request. Use the name of the model you wan to deploy.
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
-```json
-{
- "trainedModelLabel": "{MODEL-NAME}",
- "deploymentName": "{DEPLOYMENT-NAME}"
-}
-```
-
-Once you send your API request, you'll receive a `202` response indicating success. In the response headers, extract the `location` value. It will be formatted like this:
-
-```rest
-{YOUR-SECONDARY-ENDPOINT}/language/analyze-text/projects/{PROJECT-NAME}/deployments/{DEPLOYMENT-NAME}/jobs/{JOB-ID}?api-version=2021-11-01-preview
-```
-
-`JOB-ID` is used to identify your request, since this operation is asynchronous. You will use this URL in the next step to get the publishing status.
### Get the deployment status
-Use the following **GET** request to query the status of your model's publishing process. You can use the URL you received from the previous step, or replace the placeholder values below with your own values.
-
-```rest
-{YOUR-SECONDARY-ENDPOINT}/language/analyze-text/projects/{PROJECT-NAME}/deployments/{DEPLOYMENT-NAME}/jobs/{JOB-ID}?api-version=2021-11-01-preview
-```
-
-|Placeholder |Value | Example |
-||||
-|`{YOUR-SECONDARY-ENDPOINT}` | The endpoint for authenticating your API request. This is the endpoint for your secondary resource. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
-|`{PROJECT-NAME}` | The name for your project. This value is case-sensitive. | `myProject` |
-|`{DEPLOYMENT-NAME}` | The name of your deployment. This value is case-sensitive. | `prod` |
-|`{JOB-ID}` | The ID for locating your model's training status. This is in the `location` header value you received in the previous step. | `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx` |
-
-#### Headers
-
-Use the following header to authenticate your request.
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
-|Key|Description|Value|
-|--|--|--|
-|`Ocp-Apim-Subscription-Key`| The key to your secondary resource. Used for authenticating your API requests.| `{YOUR-SECONDARY-RESOURCE-KEY}` |
-
-At this point you have:
-* Replicated your project into another resource, which is in another region
-* Trained and deployed the model.
-
-Now you should make changes to your system to handle traffic redirection in case of failure.
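The deployment call follows the same pattern as training. Below is a rough sketch based on the older example above (a PUT to the deployments route with the trained model name); again, the API version and status values are assumptions.

```python
# Sketch: assign the trained model to a deployment on the secondary resource.
import time
import requests

SECONDARY_ENDPOINT = "https://<secondary-custom-subdomain>.cognitiveservices.azure.com"
SECONDARY_KEY = "<secondary-resource-key>"
PROJECT_NAME = "<project-name>"
MODEL_NAME = "<model-name>"
DEPLOYMENT_NAME = "<deployment-name>"   # ideally the same as in your primary project
API_VERSION = "2021-11-01-preview"      # assumed from the examples above

headers = {"Ocp-Apim-Subscription-Key": SECONDARY_KEY}

resp = requests.put(
    f"{SECONDARY_ENDPOINT}/language/analyze-text/projects/{PROJECT_NAME}"
    f"/deployments/{DEPLOYMENT_NAME}",
    params={"api-version": API_VERSION},
    headers=headers,
    json={"trainedModelLabel": MODEL_NAME, "deploymentName": DEPLOYMENT_NAME},
)
resp.raise_for_status()
deploy_job_url = resp.headers["location"]

while True:
    job = requests.get(deploy_job_url, headers=headers).json()
    if job.get("status") in ("succeeded", "failed", "cancelled"):  # assumed terminal states
        break
    time.sleep(10)
```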
## Changes in calling the runtime
-Within your system, at the step where you call [runtime prediction API](https://aka.ms/ct-runtime-swagger) check for the response code returned from the submit task API. If you observe a **consistent** failure in submitting the request, this could indicate an outage in your primary region. Failure once doesn't mean an outage, it may be transient issue. Retry submitting the job through the secondary resource you have created. For the second request use your `{YOUR-SECONDARY-ENDPOINT}` and secondary key, if you have followed the steps above, `{PROJECT-NAME}` and `{DEPLOYMENT-NAME}` would be the same so no changes are required to the request body.
+Within your system, at the step where you call the [runtime prediction API](https://aka.ms/ct-runtime-swagger), check the response code returned from the submit task API. If you observe a **consistent** failure in submitting the request, this could indicate an outage in your primary region. A single failure doesn't mean an outage; it may be a transient issue. Retry submitting the job through the secondary resource you have created. For the second request, use your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}`. If you have followed the steps above, `{PROJECT-NAME}` and `{DEPLOYMENT-NAME}` would be the same, so no changes are required to the request body.
-If you revert to using your secondary resource, you will observe a slight increase in latency because of the difference in regions where your model is deployed.
+If you revert to using your secondary resource, you will observe a slight increase in latency because of the difference in regions where your model is deployed.
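The retry logic described above might look roughly like the following sketch. `submit_prediction_job` is a hypothetical placeholder for whatever prediction request your system already sends to the runtime prediction API; only the endpoint and key change between the primary and secondary resources, while `{PROJECT-NAME}` and `{DEPLOYMENT-NAME}` stay the same.

```python
# Sketch: fall back to the secondary resource after consistent failures on the primary.
import time
import requests

def submit_prediction_job(endpoint: str, key: str, body: dict) -> requests.Response:
    # Hypothetical placeholder: send your usual prediction request here
    # (see the runtime prediction API reference for the actual request shape).
    raise NotImplementedError

def submit_with_failover(body: dict, primary: tuple[str, str], secondary: tuple[str, str],
                         attempts: int = 3, delay_seconds: float = 2.0) -> requests.Response:
    """Try the primary resource a few times; on consistent failure, use the secondary."""
    for endpoint, key in (primary, secondary):
        for _ in range(attempts):
            try:
                resp = submit_prediction_job(endpoint, key, body)
                if resp.ok:
                    return resp
            except requests.RequestException:
                pass  # transient network error; retry
            time.sleep(delay_seconds)
    raise RuntimeError("Prediction request failed on both primary and secondary resources.")
```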
## Check if your projects are out of sync
-It's important to maintain the freshness of both projects. You need to frequently check if any updates were made to your primary project, so that you move them over to your secondary project. This way if your primary region fails and you move into the secondary region, you can expect similar model performance since it already contains the latest updates. It's important to determining the frequency at which you check if your projects are in sync. We recommend that you check daily in order to guarantee the freshness of data in your secondary model.
+Maintaining the freshness of both projects is an important part of the process. You need to frequently check if any updates were made to your primary project so that you can move them over to your secondary project. This way, if your primary region fails and you move to the secondary region, you can expect similar model performance since it already contains the latest updates. Deciding how frequently to check if your projects are in sync is an important choice. We recommend that you do this check daily in order to guarantee the freshness of data in your secondary model.
### Get project details
-Use the following url to get your project details, one of the keys returned in the body
-
-Use the following **GET** request to get your project details. You can use the URL you received from the previous step, or replace the placeholder values below with your own values.
-
-```rest
-{YOUR-PRIMARY-ENDPOINT}/language/analyze-text/projects/{PROJECT-NAME}?api-version=2021-11-01-preview
-```
-
-|Placeholder |Value | Example |
-||||
-|`{YOUR-PRIMARY-ENDPOINT}` | The endpoint for authenticating your API request. This is the endpoint for your primary resource. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
-|`{PROJECT-NAME}` | The name for your project. This value is case-sensitive. | `myProject` |
-
-#### Headers
-
-Use the following header to authenticate your request.
+Use the following URL to get your project details; one of the keys returned in the body indicates the last modified date of the project.
+Repeat the following step twice, once for your primary project and once for your secondary project, and compare the timestamps returned for both to check whether they are out of sync.
-|Key|Description|Value|
-|--|--|--|
-|`Ocp-Apim-Subscription-Key`| The key to your primary resource. Used for authenticating your API requests.| `{YOUR-PRIMARY-RESOURCE-KEY}` |
+ [!INCLUDE [get project details](./includes/rest-api/get-project-details.md)]
-#### Response body
-```json
- {
- "createdDateTime": "2021-10-19T23:24:41.572Z",
- "lastModifiedDateTime": "2021-10-19T23:24:41.572Z",
- "lastTrainedDateTime": "2021-10-19T23:24:41.572Z",
- "lastDeployedDateTime": "2021-10-19T23:24:41.572Z",
- "modelType": "Extraction",
- "storageInputContainerName": "YOUR-CONTAINER-NAME",
- "name": "myProject",
- "multiLingual": true,
- "description": "string",
- "language": "en-us",
- "settings": {}
- }
-```
+Repeat the same steps for your replicated project using `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}`. Compare the returned `lastModifiedDateTime` from both projects (a small comparison sketch follows). If your primary project was modified more recently than your secondary one, you need to repeat the steps of [exporting](#export-your-primary-project-assets), [importing](#import-to-a-new-project), [training](#train-your-model) and [deploying](#deploy-your-model).
-Repeat the same steps for your replicated project using your secondary endpoint and resource key. Compare the returned `lastModifiedDateTime` from both projects. If your primary project was modified later than your secondary one, you need to repeat the process of:
-1. [Exporting your primary project information](#export-your-primary-project-assets)
-2. [Importing the project information to a secondary project](#import-to-a-new-project)
-3. [Training your model](#train-your-model)
-4. [Deploying your model](#deploy-your-model)
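The comparison itself can be scripted. Here is a small sketch reusing the project details route shown in the older example above; the API version is assumed, and ISO 8601 timestamps in the same format compare correctly as strings.

```python
# Sketch: compare lastModifiedDateTime between the primary and secondary projects.
import requests

API_VERSION = "2021-11-01-preview"  # assumed from the examples above
PROJECT_NAME = "<project-name>"

def last_modified(endpoint: str, key: str) -> str:
    resp = requests.get(
        f"{endpoint}/language/analyze-text/projects/{PROJECT_NAME}",
        params={"api-version": API_VERSION},
        headers={"Ocp-Apim-Subscription-Key": key},
    )
    resp.raise_for_status()
    return resp.json()["lastModifiedDateTime"]

primary = last_modified("https://<primary-custom-subdomain>.cognitiveservices.azure.com", "<primary-key>")
secondary = last_modified("https://<secondary-custom-subdomain>.cognitiveservices.azure.com", "<secondary-key>")

if primary > secondary:  # same-format ISO 8601 timestamps compare lexicographically
    print("Projects are out of sync: re-export, import, train, and deploy.")
```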
## Next steps In this article, you have learned how to use the export and import APIs to replicate your project to a secondary Language resource in other region. Next, explore the API reference docs to see what else you can do with authoring APIs.
-* [Authoring REST API reference ](https://westus.dev.cognitive.microsoft.com/docs/services/language-authoring-clu-apis-2022-03-01-preview/operations/Projects_TriggerImportProjectJob)
+* [Authoring REST API reference](https://aka.ms/ct-authoring-swagger)
-* [Runtime prediction REST API reference ](https://aka.ms/ct-runtime-swagger)
+* [Runtime prediction REST API reference](https://aka.ms/ct-runtime-swagger)
cognitive-services Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/faq.md
Previously updated : 04/05/2022 Last updated : 05/09/2022 -+
See the [data selection and schema design](how-to/design-schema.md) article for
* View the model [confusion matrix](how-to/view-model-evaluation.md). If you notice that a certain entity type is frequently not predicted correctly, consider adding more tagged instances for this class. If you notice that two entity types are frequently predicted as each other, this means the schema is ambiguous and you should consider merging them both into one entity type for better performance.
-* [Examine the data distribution](how-to/improve-model.md#examine-data-distribution). If one of the entity types has a lot more tagged instances than the others, your model may be biased towards this type. Add more data to the other entity types or remove examples from the dominating type.
+* [Review test set predictions](how-to/improve-model.md#review-test-set-predictions). If one of the entity types has a lot more tagged instances than the others, your model may be biased towards this type. Add more data to the other entity types or remove examples from the dominating type.
* Learn more about [data selection and schema design](how-to/design-schema.md).
See the [data selection and schema design](how-to/design-schema.md) article for
## How do I get predictions in different languages?
-First, you need to enable the multilingual option when [creating your project](how-to/create-project.md) or you can enable it later from the project settings page. After you train and deploy your model, you can start querying it in [multiple languages](language-support.md#multiple-language-support). You may get varied results for different languages. To improve the accuracy of any language, add more tagged instances to your project in that language to introduce the trained model to more syntax of that language.
+First, you need to enable the multilingual option when [creating your project](how-to/create-project.md) or you can enable it later from the project settings page. After you train and deploy your model, you can start querying it in [multiple languages](language-support.md#multi-lingual-option). You may get varied results for different languages. To improve the accuracy of any language, add more tagged instances to your project in that language to introduce the trained model to more syntax of that language.
## I trained my model, but I can't test it
cognitive-services Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/glossary.md
Previously updated : 11/02/2021 Last updated : 05/06/2022 -+
-# Custom Named Entity Recognition (NER) definitions and terms
+# Custom named entity recognition definitions and terms
Use this article to learn about some of the definitions and terms you may encounter when using custom NER.
-## Project
+## Entity
-A project is a work area for building your custom AI models based on your data. Your project can only be accessed by you and other people who have contributor access to the Azure resource you are using.
-As a prerequisite to creating a custom entity extraction project, you have to connect your resource to a storage account with your dataset when you [create a new project](how-to/create-project.md). Your project automatically includes all the `.txt` files available in your container.
+An entity is a span of text that indicates a certain type of information. The text span can consist of one or more words. In the scope of custom NER, entities represent the information that the user wants to extract from the text. Developers label the needed entities within their data before passing it to the model for training. For example, "Invoice number", "Start date", "Shipment number", "Birthplace", "Origin city", "Supplier name" or "Client address".
-Within your project you can do the following:
+For example, in the sentence "*John borrowed 25,000 USD from Fred.*" the entities might be:
-* **Tag your data**: The process of tagging your data so that when you train your model it learns what you want to extract.
-* **Build and train your model**: The core step of your project, where your model starts learning from your tagged data.
-* **View model evaluation details**: Review your model performance to decide if there is room for improvement, or you are satisfied with the results.
-* **Improve model**: When you know what went wrong with your model, and how to improve performance.
-* **Deploy model**: Make your model available for use.
-* **Test model**: Test your model on a dataset.
+| Entity name/type | Entity |
+|--|--|
+| Borrower Name | *John* |
+| Lender Name | *Fred* |
+| Loan Amount | *25,000 USD* |
+
+## F1 score
+The F1 score is a function of Precision and Recall. It's needed when you seek a balance between [precision](#precision) and [recall](#recall).
## Model
-A model is an object that has been trained to do a certain task, in this case custom named entity recognition.
+A model is an object that is trained to do a certain task, in this case custom entity recognition. Models are trained by providing labeled data to learn from so they can later be used for recognition tasks.
-* **Model training** is the process of teaching your model what to extract based on your tagged data.
+* **Model training** is the process of teaching your model what to extract based on your labeled data.
* **Model evaluation** is the process that happens right after training to determine how well your model performs.
-* **Model deployment** is the process of making it available for use.
+* **Deployment** is the process of assigning your model to a deployment to make it available for use via the [prediction API](https://aka.ms/ct-runtime-swagger).
-## Entity
+## Precision
+Measures how precise/accurate your model is. It's the ratio between the correctly identified positives (true positives) and all identified positives. The precision metric reveals how many of the predicted classes are correctly labeled.
-An entity is a span of text that indicates a certain type of information. The text span can consist of one or more words (or tokens). In Custom Named Entity Recognition (NER), entities represent the information that you want to extract from the text.
+## Project
-For example, in the sentence "*John borrowed 25,000 USD from Fred.*" the entities might be:
+A project is a work area for building your custom ML models based on your data. Your project can only be accessed by you and others who have access to the Azure resource being used.
+As a prerequisite to creating a custom entity extraction project, you have to connect your resource to a storage account with your dataset when you [create a new project](how-to/create-project.md). Your project automatically includes all the `.txt` files available in your container.
-| Entity name/type | Entity |
-| -- | -- |
-| Borrower Name | *John* |
-| Lender Name | *Fred* |
-| Loan Amount | *25,000 USD* |
+Within your project you can do the following actions:
+
+* **Label your data**: The process of labeling your data so that when you train your model it learns what you want to extract.
+* **Build and train your model**: The core step of your project, where your model starts learning from your labeled data.
+* **View model evaluation details**: Review your model performance to decide if there is room for improvement, or you are satisfied with the results.
+* **Improve model**: When you know what went wrong with your model, and how to improve performance.
+* **Deployment**: After you have reviewed the model's performance and decided it can be used in your environment, you need to assign it to a deployment to use it. Assigning the model to a deployment makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger).
+* **Test model**: After deploying your model, test your deployment in [Language Studio](https://aka.ms/LanguageStudio) to see how it would perform in production.
+
+## Recall
+Measures the model's ability to predict actual positive classes. It's the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the tagged classes are correctly predicted.
## Next steps
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/call-api.md
Title: Submit a Custom Named Entity Recognition (NER) task
+ Title: Send a Named Entity Recognition (NER) request to your custom model
+description: Learn how to send a request for custom NER.
-description: Learn about sending a request for Custom Named Entity Recognition (NER).
Previously updated : 04/06/2022 Last updated : 05/24/2022 -
+ms.devlang: csharp, python
+
-# Deploy a model and extract entities from text using the runtime API.
+# Query deployment to extract entities
-Once you are satisfied with how your model performs, it is ready to be deployed, and used to recognize entities in text. You can only send entity recognition tasks through the API, not from Language Studio.
+After the deployment is added successfully, you can query the deployment to extract entities from your text based on the model you assigned to the deployment.
+You can query the deployment programmatically using the [Prediction API](https://aka.ms/ct-runtime-api) or through the [Client libraries (Azure SDK)](#get-task-results).
-## Prerequisites
+## Test deployed model
-* A successfully [created project](create-project.md) with a configured Azure blob storage account
- * Text data that [has been uploaded](create-project.md#prepare-training-data) to your storage account.
-* [Tagged data](tag-data.md)
-* A [successfully trained model](train-model.md)
-* Reviewed the [model evaluation details](view-model-evaluation.md) to determine how your model is performing.
- * (optional) [Made improvements](improve-model.md) to your model if its performance isn't satisfactory.
+You can use the Language Studio to submit the custom entity recognition task and visualize the results.
-See the [application development lifecycle](../overview.md#application-development-lifecycle) for more information.
-## Deploy your model
-
-Deploying a model hosts it, and makes it available for predictions through an endpoint.
-
-When a model is deployed, you will be able to test the model directly in the portal or by calling the API associated with it.
-
-> [!NOTE]
-> You can only have ten deployment names
-
-
-### Delete deployment
-
-To delete a deployment, select the deployment you want to delete and select **Delete deployment**
-
-> [!TIP]
-> You can test your model in Language Studio by sending samples of text for it to classify.
-> 1. Select **Test model** from the menu on the left side of your project in Language Studio.
-> 2. Select the model you want to test.
-> 3. Add your text to the textbox, you can also upload a `.txt` file.
-> 4. Click on **Run the test**.
-> 5. In the **Result** tab, you can see the extracted entities from your text. You can also view the JSON response under the **JSON** tab.
+ ## Send an entity recognition request to your model
-# [Using Language Studio](#tab/language-studio)
-
-### Using Language studio
+# [Language Studio](#tab/language-studio)
-1. After the deployment is completed, select the model you want to use and from the top menu click on **Get prediction URL** and copy the URL and body.
- :::image type="content" source="../../custom-classification/media/get-prediction-url-1.png" alt-text="run-inference" lightbox="../../custom-classification/media/get-prediction-url-1.png":::
-
-2. In the window that appears, under the **Submit** pivot, copy the sample request into your command line
-
-3. Replace `<YOUR_DOCUMENT_HERE>` with the actual text you want to classify.
-
- :::image type="content" source="../../custom-classification/media/get-prediction-url-2.png" alt-text="run-inference-2" lightbox="../../custom-classification/media/get-prediction-url-2.png":::
-
-4. Submit the request
-
-5. In the response header you receive extract `jobId` from `operation-location`, which has the format: `{YOUR-ENDPOINT}/text/analytics/v3.2-preview.2/analyze/jobs/<jobId}>`
-
-6. Copy the retrieve request and replace `jobId` and submit the request.
-
- :::image type="content" source="../../custom-classification/media/get-prediction-url-3.png" alt-text="run-inference-3" lightbox="../../custom-classification/media/get-prediction-url-3.png":::
-
- ## Retrieve the results of your job
+# [REST API](#tab/rest-api)
-1. Select **Retrieve** from the same window you got the example request you got earlier and copy the sample request into a text editor.
+First you will need to get your resource key and endpoint:
- :::image type="content" source="../media/get-prediction-retrieval-url.png" alt-text="Screenshot showing the prediction retrieval request and URL" lightbox="../media/get-prediction-retrieval-url.png":::
-2. Replace `<OPERATION_ID>` with the `jobId` from the previous step.
+### Submit a custom NER task
-3. Submit the `GET` cURL request in your terminal or command prompt. You'll receive a 202 response with the API results if the request was successful.
+### Get task results
-# [Using the REST API](#tab/rest-api)
-## Use the REST API
+# [Client libraries (Azure SDK)](#tab/client)
-First you will need to get your resource key and endpoint
+First you will need to get your resource key and endpoint:
-1. Go to your resource overview page in the [Azure portal](https://portal.azure.com/#home)
-
-2. From the menu on the left side, select **Keys and Endpoint**. Use endpoint for the API requests and you will need the key for `Ocp-Apim-Subscription-Key` header.
-
- :::image type="content" source="../../custom-classification/media/get-endpoint-azure.png" alt-text="Get the Azure endpoint" lightbox="../../custom-classification/media/get-endpoint-azure.png":::
-
-### Submit custom NER task
--
-### Get the task results
--
-# [Using the client libraries (Azure SDK)](#tab/client)
-
-## Use the client libraries
-
-1. Go to your resource overview page in the [Azure portal](https://portal.azure.com/#home)
-
-2. From the menu on the left side, select **Keys and Endpoint**. Use endpoint for the API requests and you will need the key for `Ocp-Apim-Subscription-Key` header.
-
- :::image type="content" source="../../custom-classification/media/get-endpoint-azure.png" alt-text="Get the Azure endpoint" lightbox="../../custom-classification/media/get-endpoint-azure.png":::
3. Download and install the client library package for your language of choice:
First you will need to get your resource key and endpoint
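For example, with the Python client library, querying the deployment might look like the following sketch. It assumes the `azure-ai-textanalytics` package, version 5.2.0 or later; check the SDK reference for the exact method and property names in your version.

```python
# Sketch: query a custom NER deployment with the Azure SDK for Python.
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

client = TextAnalyticsClient(
    endpoint="https://<your-custom-subdomain>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<your-resource-key>"),
)

documents = ["<text you want to extract entities from>"]

poller = client.begin_recognize_custom_entities(
    documents,
    project_name="<project-name>",
    deployment_name="<deployment-name>",
)

for result in poller.result():
    if result.is_error:
        print(f"Error {result.error.code}: {result.error.message}")
        continue
    for entity in result.entities:
        print(f"{entity.category}: '{entity.text}' (confidence {entity.confidence_score})")
```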
+## Next steps
+
+* [Enrich a Cognitive Search index tutorial](../tutorials/cognitive-search.md)
+++
cognitive-services Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/create-project.md
Previously updated : 03/21/2022 Last updated : 05/24/2022 -+
-# How to create custom NER projects
+# How to create a custom NER project
-Before you start using custom NER, you will need several things:
-
-* An Azure Language resource
-* An Azure storage account where you will upload your `.txt` files that will be used to train an AI model to classify text
-
-Use this article to learn how to prepare the requirements for using custom NER.
+Use this article to learn how to set up the requirements for starting with custom NER and create a project.
## Prerequisites
-* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services).
-You should have an idea of the [project schema](design-schema.md) you will use for your data.
-
-## Azure resources
-
-Before you start using custom NER, you will need an Azure Language resource. We recommend the steps in the [quickstart](../quickstart.md) for creating one in the Azure portal. Creating a resource in the Azure portal lets you create an Azure storage account at the same time, with all of the required permissions pre-configured. You can also read further in the article to learn how to use a pre-existing resource, and configure it to work with custom NER.
-
-# [Azure portal](#tab/portal)
--
-# [Language Studio](#tab/studio)
-
-### Create a new resource from Language Studio
-
-If it's your first time logging in, you'll see a window in [Language Studio](https://aka.ms/languageStudio) that will let you choose a language resource or create a new one. You can also create a resource by clicking the settings icon in the top-right corner, selecting **Resources**, then clicking **Create a new resource**.
-
-> [!IMPORTANT]
-> * To use Custom NER, you'll need a Language resource in **West US 2** or **West Europe** with the Standard (**S**) pricing tier.
-> * Be sure to to select **Managed Identity** when you create a resource.
-
+Before you start using custom NER, you will need:
-To use custom NER, you'll need to [create an Azure storage account](../../../../storage/common/storage-account-create.md) if you don't have one already.
-
-Next you'll need to assign the [correct roles](#required-roles-for-your-storage-account) for the storage account to connect it to your Language resource.
-
-# [Azure PowerShell](#tab/powershell)
-
-### Create a new resource with the Azure PowerShell
+* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services).
-You can create a new resource and a storage account using the following CLI [template](https://github.com/Azure-Samples/cognitive-services-sample-data-files) and [parameters](https://github.com/Azure-Samples/cognitive-services-sample-data-files) files, which are hosted on GitHub.
+## Create a Language resource
-Edit the following values in the parameters file:
+Before you start using custom NER, you will need an Azure Language resource. It is recommended to create your Language resource and connect a storage account to it in the Azure portal. Creating a resource in the Azure portal lets you create an Azure storage account at the same time, with all of the required permissions pre-configured. You can also read further in the article to learn how to use a pre-existing resource, and configure it to work with custom named entity recognition.
-| Parameter name | Value description |
-|--|--|
-|`name`| Name of your Language resource|
-|`location`| Region in which your resource is hosted. Custom NER is only available in **West US 2** and **West Europe**.|
-|`sku`| Pricing tier of your resource. This feature only works with **S** tier|
-|`storageResourceName`| Name of your storage account|
-|`storageLocation`| Region in which your storage account is hosted.|
-|`storageSkuType`| SKU of your [storage account](/rest/api/storagerp/srp_sku_types).|
-|`storageResourceGroupName`| Resource group of your storage account|
-<!-- |builtInRoleType| Set this role to **"Contributor"**| -->
+You will also need an Azure storage account where you will upload your `.txt` files that will be used to train a model to extract entities.
-Use the following PowerShell command to deploy the Azure Resource Manager (ARM) template with the files you edited.
+> [!NOTE]
+> * You need to have an **owner** role assigned on the resource group to create a Language resource.
+> * If you connect a pre-existing storage account, you should have an owner role assigned to it.
-```powershell
-New-AzResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName ExampleResourceGroup `
- -TemplateFile <path-to-arm-template> `
- -TemplateParameterFile <path-to-parameters-file>
-```
+## Create Language resource and connect storage account
-See the ARM template documentation for information on [deploying templates](../../../../azure-resource-manager/templates/deploy-powershell.md#parameter-files) and [parameter files](../../../../azure-resource-manager/templates/parameter-files.md?tabs=json).
+You can create a resource in the following ways:
-
+* The Azure portal
+* Language Studio
+* PowerShell (see the sketch after this list)
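If you prefer scripting, the following is a minimal PowerShell sketch, assuming the Az PowerShell module is installed and you're signed in with `Connect-AzAccount`. The resource names and region are placeholders, and the `TextAnalytics` kind is an assumption for Language resources; unlike the portal flow, this sketch only provisions the two resources and doesn't configure roles or managed identity for you.

```powershell
# Minimal sketch, assuming the Az PowerShell module; names and region are placeholders.
# Unlike the portal flow, this does not set up role assignments or managed identity.
$rg       = "my-language-rg"
$location = "westus2"   # placeholder region

New-AzResourceGroup -Name $rg -Location $location

# Language resources are assumed here to use the "TextAnalytics" kind; custom features
# require the Standard (S) pricing tier at the time of writing.
New-AzCognitiveServicesAccount -ResourceGroupName $rg -Name "my-language-resource" `
    -Type "TextAnalytics" -SkuName "S" -Location $location

# Storage account that will hold the .txt training documents.
New-AzStorageAccount -ResourceGroupName $rg -Name "mylanguagestorage001" `
    -Location $location -SkuName "Standard_LRS" -Kind "StorageV2"
```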
-## Using a pre-existing Azure resource
-You can use an existing Language resource to get started with custom NER as long as this resource meets the below requirements:
-|Requirement |Description |
-|||
-|Regions | Make sure your existing resource is provisioned in one of the two supported regions, **West US 2** or **West Europe**. If not, you will need to create a new resource in these regions. |
-|Pricing tier | Make sure your existing resource is in the Standard (**S**) pricing tier. Only this pricing tier is supported. If your resource doesn't use this pricing tier, you will need to create a new resource. |
-|Managed identity | Make sure that the resource-managed identity setting is enabled. Otherwise, read the next section. |
-To use custom NER, you'll need to [create an Azure storage account](../../../../storage/common/storage-account-create.md) if you don't have one already, and assign the [correct roles](#required-roles-for-your-storage-account) to connect it to your Language resource.
> [!NOTE]
-> Custom NER currently does not currently support Data Lake Storage Gen 2.
+> * The process of connecting a storage account to your Language resource is irreversible; it cannot be disconnected later.
+> * You can only connect your Language resource to one storage account.
-## Required roles for Azure Language resources
+## Using a pre-existing Language resource
-To access and use custom NER projects, your account must have one of the following roles in your Language resource. If you have contributors who need access to your projects, they will also need one of these roles to access the Language resource's managed identity:
-* *owner*
-* *contributor*
-### Enable managed identities for your Language resource
+## Create a custom named entity recognition project
-Your Language resource must have identity management, which can be enabled either using the Azure portal or from Language Studio. To enable it using [Language Studio](https://aka.ms/languageStudio):
-1. Click the settings icon in the top right corner of the screen
-2. Select **Resources**
-3. Select **Managed Identity** for your Azure resource.
+Once your resource and storage container are configured, create a new custom NER project. A project is a work area for building your custom AI models based on your data. Your project can only be accessed by you and others who have access to the Azure resource being used. If you have labeled data, you can use it to get started by [importing a project](#import-project).
-### Add roles to your Language resource
+### [Language Studio](#tab/language-studio)
-After you've enabled managed identities for your resource, add the appropriate owner or contributor role assignments for your account, and your contributors' Azure accounts:
-1. Go to your Language resource in the [Azure portal](https://portal.azure.com/).
-2. Select **Access Control (IAM)** in the left navigation menu.
-3. Select **Add** then **Add Role Assignments**, and choose the **Owner** or **Contributor** role. You can search for user names in the **Select** field.
+### [Rest APIs](#tab/rest-api)
-## Required roles for your storage account
-Your Language resource must have the below roles assigned within your Azure blob storage account:
+
-* *owner* or *contributor*, and
-* *storage blob data owner* or *storage blob data contributor*, and
-* *reader*
+## Import project
-### Add roles to your storage account
+If you already have labeled data, you can use it to get started with the service. Make sure that your labeled data follows the [accepted data formats](../concepts/data-formats.md).
-To set proper roles on your storage account:
+### [Language Studio](#tab/language-studio)
-1. Go to your storage account page in the [Azure portal](https://portal.azure.com/).
-2. Select **Access Control (IAM)** in the left navigation menu.
-3. Select **Add** to **Add Role Assignments**, and choose the appropriate role for your Language resource.
-4. Select **Managed identity** under **Assign access to**.
-5. Select **Members** and find your resource. In the window that appears, select your subscription, and **Language** as the managed identity. You can search for user names in the **Select** field. Repeat this for all roles.
+### [Rest APIs](#tab/rest-api)
-For information on authorizing access to your Azure blob storage account and data, see [Authorize access to data in Azure storage](../../../../storage/common/authorize-data-access.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json).
-### Enable CORS for your storage account
+
-Make sure to allow (**GET, PUT, DELETE**) methods when enabling Cross-Origin Resource Sharing (CORS). Then, add an asterisk (`*`) to the fields and add the recommended value of 500 for the maximum age.
+## Get project details
+### [Language Studio](#tab/language-studio)
-## Prepare training data
-* As a prerequisite for creating a custom NER project, your training data needs to be uploaded to a blob container in your storage account. You can create and upload training files from Azure directly or through using the Azure Storage Explorer tool. Using Azure Storage Explorer tool allows you to upload more data in less time.
+### [Rest APIs](#tab/rest-api)
- * [Create and upload files from Azure](../../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container)
- * [Create and upload files using Azure Storage Explorer](../../../../vs-azure-tools-storage-explorer-blobs.md)
-* You can only use `.txt`. files for custom NER. If your data is in other format, you can use [Cognitive Services Language Utilities tool](https://aka.ms/CognitiveServicesLanguageUtilities) to parse your file to `.txt` format.
+
-* You can either upload tagged data, or you can tag your data in Language Studio. Tagged data must follow the [tags file format](../concepts/data-formats.md).
+## Delete project
->[!TIP]
-> Review [Prepare data and define a schema](../how-to/design-schema.md) for information on data selection and preparation.
+### [Language Studio](#tab/language-studio)
-## Create a custom named entity recognition project
-Once your resource and storage container are configured, create a new custom NER project. A project is a work area for building your custom AI models based on your data. Your project can only be accessed by you and others who have contributor access to the Azure resource being used.
+### [Rest APIs](#tab/rest-api)
-Review the data you entered and select **Create Project**.
+ ## Next steps
-After your project is created, you can start [tagging your data](tag-data.md), which will inform your entity extraction model how to interpret text, and is used for training and evaluation.
+* You should have an idea of the [project schema](design-schema.md) you will use to label your data.
+
+* After your project is created, you can start [labeling your data](tag-data.md), which will inform your entity extraction model how to interpret text, and is used for training and evaluation.
cognitive-services Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/deploy-model.md
+
+ Title: Submit a Custom Named Entity Recognition (NER) task
+
+description: Learn about sending a request for Custom Named Entity Recognition (NER).
++++++ Last updated : 05/09/2022++++
+# Deploy a model and extract entities from text using the runtime API
+
+Once you are satisfied with how your model performs, it is ready to be deployed and used to recognize entities in text. Deploying a model makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger).
+
+## Prerequisites
+
+* A successfully [created project](create-project.md) with a configured Azure storage account.
+* Text data that has [been uploaded](design-schema.md#data-preparation) to your storage account.
+* [Labeled data](tag-data.md) and successfully [trained model](train-model.md)
+* Reviewed the [model evaluation details](view-model-evaluation.md) to determine how your model is performing.
+* (optional) [Made improvements](improve-model.md) to your model if its performance isn't satisfactory.
+
+See [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
+
+## Deploy model
+
+After you've reviewed your model's performance and decided it can be used in your environment, you need to assign it to a deployment. Assigning the model to a deployment makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger). It is recommended to create a deployment named *production* to which you assign the best model you have built so far and use it in your system. You can create another deployment called *staging* to which you can assign the model you're currently working on so that you can test it. You can have a maximum of 10 deployments in your project.
+
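As a rough illustration, a deployment job can also be submitted over REST from PowerShell. In the sketch below, the endpoint URL, project name, resource key, and model name are placeholders, and the path and `api-version` are assumptions based on the authoring REST reference; prefer the exact request shown in the REST APIs tab.

```powershell
# Hedged sketch of submitting a deployment job; the URL shape, api-version, and body
# are assumptions. Use the request in the REST APIs tab as the authoritative reference.
$endpoint    = "https://<your-language-resource>.cognitiveservices.azure.com"  # placeholder
$projectName = "<your-project-name>"                                           # placeholder
$deployment  = "production"
$headers     = @{ "Ocp-Apim-Subscription-Key" = "<your-resource-key>" }        # placeholder
$body        = @{ trainedModelLabel = "<your-model-name>" } | ConvertTo-Json   # placeholder model

$uri = "$endpoint/language/authoring/analyze-text/projects/$projectName/deployments/$deployment" +
       "?api-version=2022-05-01"

Invoke-RestMethod -Method Put -Uri $uri -Headers $headers -ContentType "application/json" -Body $body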
+# [Language Studio](#tab/language-studio)
+
+
+# [REST APIs](#tab/rest-api)
+
+### Submit deployment job
++
+### Get deployment job status
++++
+## Swap deployments
+
+After you are done testing a model assigned to one deployment and you want to assign this model to another deployment, you can swap these two deployments. Swapping deployments takes the model assigned to the first deployment and assigns it to the second deployment, and takes the model assigned to the second deployment and assigns it to the first deployment. You can use this process to swap your *production* and *staging* deployments when you want to take the model assigned to *staging* and assign it to *production*.
+
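For orientation only, the sketch below shows how such a swap might be issued from PowerShell. The `deployments:swap` path, the `api-version`, and the body property names are assumptions, and the endpoint, project name, and key are placeholders; the REST APIs tab has the authoritative request.

```powershell
# Hedged sketch of swapping two deployments; the ":swap" path and property names are assumptions.
$endpoint    = "https://<your-language-resource>.cognitiveservices.azure.com"  # placeholder
$projectName = "<your-project-name>"                                           # placeholder
$headers     = @{ "Ocp-Apim-Subscription-Key" = "<your-resource-key>" }        # placeholder
$body        = @{
    firstDeploymentName  = "production"
    secondDeploymentName = "staging"
} | ConvertTo-Json

$uri = "$endpoint/language/authoring/analyze-text/projects/$projectName/deployments:swap" +
       "?api-version=2022-05-01"

Invoke-RestMethod -Method Post -Uri $uri -Headers $headers -ContentType "application/json" -Body $body
```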
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
+++++
+## Delete deployment
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++
+## Next steps
+
+After you have a deployment, you can use it to [extract entities](call-api.md) from text.
cognitive-services Design Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/design-schema.md
Previously updated : 11/02/2021 Last updated : 05/09/2022 -+ # How to prepare data and define a schema for custom NER
-In order to create a custom NER model, you will need quality data to train it. This article covers how you should approach selecting and preparing your data, along with defining a schema. The schema defines the entity types/categories that you need your model to extract from text at runtime, and is the first step of [developing an entity extraction model](../overview.md#application-development-lifecycle).
+In order to create a custom NER model, you will need quality data to train it. This article covers how you should select and prepare your data, along with defining a schema. Defining the schema is the first step in the [project development lifecycle](../overview.md#project-development-lifecycle), and it defines the entity types/categories that you need your model to extract from the text at runtime.
## Schema design
-The schema defines the entity types/categories that you need your model to extract from text at runtime.
+The schema defines the entity types/categories that you need your model to extract from text at runtime.
-* Review files in your dataset to be familiar with their format and structure.
+* Review documents in your dataset to be familiar with their format and structure.
* Identify the [entities](../glossary.md#entity) you want to extract from the data.
- For example, if you are extracting entities from support emails, you might need to extract "Customer name", "Product name", "Customer's problem", "Request date", and "Contact information".
+ For example, if you are extracting entities from support emails, you might need to extract "Customer name", "Product name", "Request date", and "Contact information".
* Avoid entity types ambiguity.
- **Ambiguity** happens when the types you select are similar to each other. The more ambiguous your schema the more tagged data you will train your model.
+ **Ambiguity** happens when the entity types you select are similar to each other. The more ambiguous your schema is, the more labeled data you will need to differentiate between different entity types.
- For example, if you are extracting data from a legal contract, to extract "Name of first party" and "Name of second party" you will need to add more examples to overcome ambiguity since the names of both parties look similar. Avoiding ambiguity saves time, effort, and yields better results.
+ For example, if you are extracting data from a legal contract, to extract "Name of first party" and "Name of second party" you will need to add more examples to overcome ambiguity since the names of both parties look similar. Avoid ambiguity as it saves time, effort, and yields better results.
* Avoid complex entities. Complex entities can be difficult to pick out precisely from text; consider breaking them down into multiple entities.
- For example, the model would have a hard time extracting "Address" if it was not broken down into smaller entities. There are so many variations of how addresses appear, it would take large number of tagged entities to teach the model to extract an address, as a whole, without breaking it down. However, if you replace "Address" with "Street Name", "PO Box", "City", "State" and "Zip", the model will require fewer tags per entity.
+ For example, extracting "Address" would be challenging if it's not broken down into smaller entities. There are so many variations of how addresses appear that it would take a large number of labeled entities to teach the model to extract an address, as a whole, without breaking it down. However, if you replace "Address" with "Street Name", "PO Box", "City", "State" and "Zip", the model will require fewer labels per entity.
## Data selection
The quality of data you train your model with affects model performance greatly.
* Use diverse data whenever possible to avoid overfitting your model. Less diversity in training data may lead to your model learning spurious correlations that may not exist in real-life data.
-* Avoid duplicate files in your data. Duplicate data has a negative effect on the training process, model metrics, and model performance.
+* Avoid duplicate documents in your data. Duplicate data has a negative effect on the training process, model metrics, and model performance.
* Consider where your data comes from. If you are collecting data from one person, department, or part of your scenario, you are likely missing diversity that may be important for your model to learn about.

> [!NOTE]
-> If your files are in multiple languages, select the **multiple languages** option during [project creation](../quickstart.md) and set the **language** option to the language of the majority of your files.
+> If your documents are in multiple languages, select the **enable multi-lingual** option during [project creation](../quickstart.md) and set the **language** option to the language of the majority of your documents.
## Data preparation
-As a prerequisite for creating a project, your training data needs to be uploaded to a blob container in your storage account. You can create and upload training files from Azure directly, or through using the Azure Storage Explorer tool. Using the Azure Storage Explorer tool allows you to upload more data quickly.
+As a prerequisite for creating a project, your training data needs to be uploaded to a blob container in your storage account. You can create and upload training documents from Azure directly, or by using the Azure Storage Explorer tool. Using the Azure Storage Explorer tool allows you to upload more data quickly.
-* [Create and upload files from Azure](../../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container)
-* [Create and upload files using Azure Storage Explorer](../../../../vs-azure-tools-storage-explorer-blobs.md)
+* [Create and upload documents from Azure](../../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container)
+* [Create and upload documents using Azure Storage Explorer](../../../../vs-azure-tools-storage-explorer-blobs.md)
-You can only use `.txt` files. If your data is in other format, you can use [CLUtils parse command](https://github.com/microsoft/CognitiveServicesLanguageUtilities/blob/main/CustomTextAnalytics.CLUtils/Solution/CogSLanguageUtilities.ViewLayer.CliCommands/Commands/ParseCommand/README.md) to change your file format.
+You can only use `.txt` documents. If your data is in another format, you can use the [CLUtils parse command](https://github.com/microsoft/CognitiveServicesLanguageUtilities/blob/main/CustomTextAnalytics.CLUtils/Solution/CogSLanguageUtilities.ViewLayer.CliCommands/Commands/ParseCommand/README.md) to change your document format.
- You can upload an annotated dataset, or you can upload an unannotated one and [tag your data](../how-to/tag-data.md) in Language studio.
+You can upload an annotated dataset, or you can upload an unannotated one and [label your data](../how-to/tag-data.md) in Language studio.
+## Test set
+
+When defining the testing set, make sure to include example documents that are not present in the training set. Defining the testing set is an important step to calculate the [model performance](view-model-evaluation.md#model-details). Also, make sure that the testing set includes documents that represent all entities used in your project.
+ ## Next steps
-If you haven't already, create a custom NER project. If it's your first time using custom NER, consider following the [quickstart](../quickstart.md) to create an example project. You can also see the [how-to article](../how-to/create-project.md) for more details on what you need to create a project.
+If you haven't already, create a custom NER project. If it's your first time using custom NER, consider following the [quickstart](../quickstart.md) to create an example project. You can also see the [how-to article](../how-to/create-project.md) for more details on what you need to create a project.
cognitive-services Improve Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/improve-model.md
Previously updated : 04/05/2022 Last updated : 05/06/2022 -+
-# Improve the performance of Custom Named Entity Recognition (NER) models
+# Improve model performance
-After you've trained your model you reviewed its evaluation details, you can start to improve model performance. In this article, you will review inconsistencies between the predicted classes and classes tagged by the model, and examine data distribution.
+In some cases, the model may extract entities that are inconsistent with the entities you labeled. In this page you can observe these inconsistencies and decide on the changes needed to improve your model performance.
## Prerequisites

* A successfully [created project](create-project.md) with a configured Azure blob storage account
- * Text data that [has been uploaded](create-project.md#prepare-training-data) to your storage account.
-* [Tagged data](tag-data.md)
+ * Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account.
+* [Labeled data](tag-data.md)
* A [successfully trained model](train-model.md)
* Reviewed the [model evaluation details](view-model-evaluation.md) to determine how your model is performing.
- * Familiarized yourself with the [evaluation metrics](../concepts/evaluation-metrics.md) used for evaluation.
+* Familiarized yourself with the [evaluation metrics](../concepts/evaluation-metrics.md).
-See the [application development lifecycle](../overview.md#application-development-lifecycle) for more information.
+See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
-## Improve model
+## Review test set predictions
-After you have reviewed your [model's evaluation](view-model-evaluation.md), you'll have formed an idea on what's wrong with your model's prediction.
+After you have viewed your [model's evaluation](view-model-evaluation.md), you'll have formed an idea of your model's performance. In this page, you can view how your model performs versus how it's expected to perform. You can view predicted and labeled entities side by side for each document in your test set, and review entities that were extracted differently than they were originally labeled.
-> [!NOTE]
-> This guide focuses on data from the [validation set](train-model.md) that was created during training.
-### Review test set
+To review inconsistent predictions in the [test set](train-model.md) from within the [Language Studio](https://aka.ms/LanguageStudio):
-Using Language Studio, you can review how your model performs against how you expected it to perform. You can review predicted and tagged classes for each model you have trained.
+1. Select **Improve model** from the left side menu.
-1. Go to your project page in [Language Studio](https://aka.ms/languageStudio).
- 1. Look for the section in Language Studio labeled **Classify text**.
- 2. Select **Custom text classification**.
+2. Choose your trained model from **Model** drop-down menu.
-2. Select **Improve model** from the left side menu.
+3. For easier analysis, you can toggle **Show incorrect predictions only** to view only the entities that were incorrectly predicted. You should then see all documents that include incorrectly predicted entities.
-3. Select **Review test set**.
+4. You can expand each document to see more details about predicted and labeled entities.
-4. Choose your trained model from **Model** drop-down menu.
+ Use the following information to help guide model improvements.
+
+ * If entity `X` is constantly identified as entity `Y`, it means that there is ambiguity between these entity types and you need to reconsider your schema. Learn more about [data selection and schema design](design-schema.md#schema-design). Another solution is to consider labeling more instances of these entities, to help the model improve and differentiate between them.
+
+ * If a complex entity is repeatedly not predicted, consider [breaking it down to simpler entities](design-schema.md#schema-design) for easier extraction.
+
+ * If an entity is predicted but was not labeled in your data, you need to review your labels. Be sure that all instances of an entity are properly labeled in all documents.
+
+
+ :::image type="content" source="../media/review-predictions.png" alt-text="A screenshot showing model predictions in Language Studio." lightbox="../media/review-predictions.png":::
-5. For easier analysis, you can toggle **Show incorrect predictions only** to view mistakes only.
-Use the following information to help guide model improvements.
-* If entity `X` is constantly identified as entity `Y`, it means that there is ambiguity between these entity types and you need to reconsider your schema.
-
-* If a complex entity is repeatedly not extracted, consider breaking it down to simpler entities for easier extraction.
-
-* If an entity is predicted while it was not tagged in your data, this means to you need to review your tags. Be sure that all instances of an entity are properly tagged in all files.
-
-### Examine data distribution
-
-By examining data distribution in your files, you can decide if any entity is underrepresented. Data imbalance happens when tags are not distributed equally among your entities, and is a risk to your model's performance. For example, if *entity 1* has 50 tags while *entity 2* has 10 tags only, this is an example of data imbalance where *entity 1* is over represented, and *entity 2* is underrepresented. The model is biased towards extracting *entity 1* and might overlook *entity 2*. More complex issues may come from data imbalance if the schema is ambiguous. If the two entities are some how similar and *entity 2* is underrepresented the model most likely will extract it as *entity 1*.
-
-In [model evaluation](view-model-evaluation.md), entities that are over represented tend to have a higher recall than other entities, while under represented entities have lower recall.
-
-To examine data distribution in your dataset:
-
-1. Go to your project page in [Language Studio](https://aka.ms/languageStudio).
- 1. Look for the section in Language Studio labeled **Extract information**.
- 2. Select **Custom named entity extraction**.
-2. Select **Improve model** from the left side menu.
-
-3. Select **Examine data distribution**.
## Next steps
-Once you're satisfied with how your model performs, you can start [sending entity extraction requests](call-api.md) using the runtime API.
+Once you're satisfied with how your model performs, you can [deploy your model](call-api.md).
cognitive-services Tag Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/tag-data.md
Title: How to tag your data for Custom Named Entity Recognition (NER)
+ Title: How to label your data for Custom Named Entity Recognition (NER)
-description: Learn how to tag your data for use with Custom Named Entity Recognition (NER).
+description: Learn how to label your data for use with Custom Named Entity Recognition (NER).
Previously updated : 04/05/2022 Last updated : 05/24/2022 -+
-# Tag your data for Custom Named Entity Recognition (NER) in language studio
+# Label your data in Language Studio
-Before building your custom entity extraction models, you need to have tagged data. If your data is not tagged already, you can tag it in the language studio. To tag your data, you must have [created a project](../quickstart.md).
+Before training your model, you need to label your documents with the custom entities you want to extract. Data labeling is a crucial step in the development lifecycle. In this step, you create the entity types you want to extract from your data and label these entities within your documents. This data will be used in the next step when training your model, so that your model can learn from the labeled data. If you already have labeled data, you can directly [import](create-project.md#import-project) it into your project, but you need to make sure that your data follows the [accepted data format](../concepts/data-formats.md). See [create project](create-project.md#import-project) to learn more about importing labeled data into your project.
+
+Before creating a custom NER model, you need labeled data. If your data isn't labeled already, you can label it in the [Language Studio](https://aka.ms/languageStudio). Labeled data informs the model how to interpret text, and is used for training and evaluation.
## Prerequisites
-Before you can tag data, you need:
+Before you can label your data, you need:
* A successfully [created project](create-project.md) with a configured Azure blob storage account
- * Text data that [has been uploaded](create-project.md#prepare-training-data) to your storage account.
+* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account.
-See the [application development lifecycle](../overview.md#application-development-lifecycle) for more information.
+See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
-## Tag your data
+## Data labeling guidelines
-After training data is uploaded to your Azure storage account, you will need to tag it, so your model knows which words will be associated with the classes you need. When you tag data in Language Studio (or manually tag your data), these tags will be stored in [the JSON format](../concepts/data-formats.md) that your model will use during training.
+After [preparing your data, designing your schema](design-schema.md) and [creating your project](create-project.md), you will need to label your data. Labeling your data is important so your model knows which words will be associated with the entity types you need to extract. When you label your data in [Language Studio](https://aka.ms/languageStudio) (or import labeled data), these labels will be stored in a JSON document in the storage container that you have connected to this project.
-As you tag your data, remember the following:
+As you label your data, keep in mind:
-* **Tag precisely**: Tag each entity to its right type always. Only include what you want extracted, avoid unnecessary data in your tag.
-* **Tag consistently**: The same entity should have the same tag across all the files.
-* **Tag completely**: Tag all the instances of the entity in all your files.
+* In general, more labeled data leads to better results, provided the data is labeled accurately.
-The precision, consistency and completeness of your tagged data are key factors to determining model performance. To tag your data:
+* The precision, consistency and completeness of your labeled data are key factors to determining model performance.
-1. Go to the projects page in [Language Studio](https://aka.ms/custom-extraction) and select your project.
+ * **Label precisely**: Label each entity to its right type always. Only include what you want extracted, avoid unnecessary data in your labels.
+ * **Label consistently**: The same entity should have the same label across all the documents.
+ * **Label completely**: Label all the instances of the entity in all your documents. You can use the [auto-labeling feature](use-autotagging.md) to ensure complete labeling.
-2. From the left side menu, select **Tag data**
+ > [!NOTE]
+ > There is no fixed number of labels that can guarantee your model will perform the best. Model performance is dependent on possible ambiguity in your [schema](design-schema.md), and the quality of your labeled data. Nevertheless, we recommend having around 50 labeled instances per entity type.
-3. You can find a list of all `.txt` files available in your projects to the left. You can select the file you want to start tagging or you can use the **Back** and **Next** button from the bottom of the page to navigate.
+## Label your data
-4. To start tagging, click **Add entities** in the top-right corner. You can either view all files or only tagged files by changing the view from the **Viewing** drop down filter.
+Use the following steps to label your data:
- :::image type="content" source="../media/tagging-screen.png" alt-text="A screenshot showing the Language Studio screen for tagging data." lightbox="../media/tagging-screen.png":::
+1. Go to your project page in [Language Studio](https://aka.ms/languageStudio).
- In the image above:
-
- * *Section 1*: is where the content of the text file is displayed and tagging takes place. You have [two options for tagging](#tagging-options) your files.
-
- * *Section 2*: includes your project's entities and distribution across your files and tags.
- If you click **Distribution**, you can view your tag distribution across:
-
- * Files: View the distribution of files across one single entity.
- * Tags: view the distribution of tags across all files.
-
- :::image type="content" source="../media/distribution-ner.png" alt-text="A screenshot showing the distribution section." lightbox="../media/distribution-ner.png":::
-
-
- * *Section 3*: This is the split project data toggle. You can choose to add a selected text file to your training set or the testing set. By default, the toggle is off, and all text files are added to your training set.
-
-To add a text file to a training or testing set, simply choose from the radio buttons to which set it belongs.
+2. From the left side menu, select **Data labeling**. You can find a list of all documents in your storage container.
->[!TIP]
->It is recommended to define your testing set.
+ <!--:::image type="content" source="../media/tagging-files-view.png" alt-text="A screenshot showing the Language Studio screen for labeling data." lightbox="../media/tagging-files-view.png":::-->
-If you enabled multiple languages for your project, you will find a **Language** dropdown, which lets you select the language of each document.
+ >[!TIP]
+ > You can use the filters in top menu to view the unlabeled documents so that you can start labeling them.
+ > You can also use the filters to view the documents that are labeled with a specific entity type.
-While tagging, your changes will be synced periodically, if they have not been saved yet you will find a warning at the top of your page. If you want to save manually, click on **Save tags** button at the top of the page.
+3. Change to a single document view from the left side in the top menu or select a specific document to start labeling. You can find a list of all `.txt` documents available in your project to the left. You can use the **Back** and **Next** button from the bottom of the page to navigate through your documents.
-## Tagging options
+ > [!NOTE]
+ > If you enabled multiple languages for your project, you will find a **Language** dropdown in the top menu, which lets you select the language of each document.
-You have two options to tag your document:
+4. In the right side pane, select **Add entity type** to add entity types to your project so you can start labeling your data with them.
+ <!--:::image type="content" source="../media/tag-1.png" alt-text="A screenshot showing complete data labeling." lightbox="../media/tag-1.png":::-->
-|Option |Description |
-|||
-|Tag using a brush | Select the brush icon next to an entity in the top-right corner of the screen, then highlight words in the document you want to associate with the entity |
-|Tag using a menu | Highlight the word you want to tag as an entity, and a menu will appear. Select the tag you want to assign for this entity. |
-
-The below screenshot shows tagging using a brush.
+5. You have two options to label your document:
+
+ |Option |Description |
+ |||
+ |Label using a brush | Select the brush icon next to an entity type in the right pane, then highlight the text in the document you want to annotate with this entity type. |
+ |Label using a menu | Highlight the word you want to label as an entity, and a menu will appear. Select the entity type you want to assign for this entity. |
+
+ The below screenshot shows labeling using a brush.
+
+ :::image type="content" source="../media/tag-options.png" alt-text="A screenshot showing the labeling options offered in Custom NER." lightbox="../media/tag-options.png":::
+
+6. In the right side pane, under the **Labels** pivot, you can find all the entity types in your project and the count of labeled instances for each.
+7. In the bottom section of the right side pane, you can add the current document you are viewing to the training set or the testing set. By default, all documents are added to your training set. Learn more about [training and testing sets](train-model.md#data-splitting) and how they are used for model training and evaluation.
-## Remove tags
+ > [!TIP]
+ > If you are planning on using **Automatic** data splitting, use the default option of assigning all the documents into your training set.
-To remove a tag
+8. Under the **Distribution** pivot, you can view the distribution across training and testing sets. You have two options for viewing:
+ * *Total instances*, where you can view the count of all labeled instances of a specific entity type.
+ * *Documents with at least one label*, where each document is counted if it contains at least one labeled instance of that entity.
+
+9. When you're labeling, your changes will be synced periodically. If they have not been saved yet, you will see a warning at the top of your page. If you want to save manually, select the **Save labels** button at the bottom of the page.
-1. Select the entity you want to remove a tag from.
-2. Scroll through the menu that appears, and select **Remove Tag**.
+## Remove labels
-## Delete or rename entities
+To remove a label:
-To delete or rename an entity:
+1. Select the entity you want to remove a label from.
+2. Scroll through the menu that appears, and select **Remove label**.
-1. Select the entity you want to edit in the top-right corner of the menu.
-2. Click on the three dots next to the entity, and select the option you want from the drop-down menu.
+## Delete entities
+To delete an entity, select the delete icon next to the entity you want to remove. Deleting an entity will remove all its labeled instances from your dataset.
-## Next Steps
+## Next steps
-After you've tagged your data, you can begin [training a model](train-model.md) that will learn based on your data.
+After you've labeled your data, you can begin [training a model](train-model.md) that will learn based on your data.
cognitive-services Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/train-model.md
Previously updated : 11/02/2021 Last updated : 05/06/2022 -+
-# Train your Custom Named Entity Recognition (NER) model
+# Train your custom named entity recognition model
-Training is the process where the model learns from your [tagged data](tag-data.md). After training is completed, you will be able to [use the model evaluation metrics](../how-to/view-model-evaluation.md) to determine if you need to [improve your model](../how-to/improve-model.md).
+Training is the process where the model learns from your [labeled data](tag-data.md). After training is completed, you'll be able to [view model performance](view-model-evaluation.md) to determine if you need to [improve your model](improve-model.md).
+
+To train a model, you start a training job and only successfully completed jobs create a model. Training jobs expire after seven days, which means you won't be able to retrieve the job details after this time. If your training job completed successfully and a model was created, the model won't be affected. You can only have one training job running at a time, and you can't start other jobs in the same project.
+
+The training times can be anywhere from a few minutes when dealing with a few documents, up to several hours depending on the dataset size and the complexity of your schema.
-The time to train a model varies on the dataset, and may take up to several hours. You can only train one model at a time, and you cannot create or train other models if one is already training in the same project.
## Prerequisites

* A successfully [created project](create-project.md) with a configured Azure blob storage account
- * Text data that [has been uploaded](create-project.md#prepare-training-data) to your storage account.
-* [Tagged data](tag-data.md)
+* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account.
+* [Labeled data](tag-data.md)
+
+See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
+
+## Data splitting
-See the [application development lifecycle](../overview.md#application-development-lifecycle) for more information.
+Before you start the training process, labeled documents in your project are divided into a training set and a testing set. Each one of them serves a different function.
+The **training set** is used in training the model; this is the set from which the model learns the labeled entities and what spans of text are to be extracted as entities.
+The **testing set** is a blind set that is not introduced to the model during training but only during evaluation.
+After model training is completed successfully, the model is used to make predictions from the documents in the testing set, and based on these predictions the [evaluation metrics](../concepts/evaluation-metrics.md) are calculated.
+It's recommended to make sure that all your entities are adequately represented in both the training and testing set.
-## Train model in Language studio
+Custom NER supports two methods for data splitting:
-1. Go to your project page in [Language Studio](https://aka.ms/LanguageStudio).
+* **Automatically splitting the testing set from training data**: The system will split your labeled data between the training and testing sets, according to the percentages you choose. The recommended percentage split is 80% for training and 20% for testing (see the sketch after this list).
-2. Select **Train** from the left side menu.
+ > [!NOTE]
+ > If you choose the **Automatically splitting the testing set from training data** option, only the data assigned to training set will be split according to the percentages provided.
-3. Select **Start a training job** from the top menu.
+* **Use a manual split of training and testing data**: This method enables users to define which labeled documents should belong to which set. This step is only enabled if you have added documents to your testing set during [data labeling](tag-data.md).
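The sketch below is not part of the service; it's just a minimal PowerShell illustration of the 80/20 idea behind the automatic split: shuffle the labeled documents and hold out roughly 20% as a blind test set. The folder path is a placeholder.

```powershell
# Minimal illustration of an 80/20 split (not how the service implements it):
# shuffle the labeled documents and reserve roughly 20% as a blind test set.
$allDocs  = Get-ChildItem -Path .\labeled-docs -Filter *.txt      # placeholder path
$shuffled = $allDocs | Get-Random -Count $allDocs.Count
$testSize = [math]::Ceiling($shuffled.Count * 0.2)

$testSet  = $shuffled | Select-Object -First $testSize
$trainSet = $shuffled | Select-Object -Skip $testSize

Write-Output "Training: $($trainSet.Count) documents, Testing: $($testSet.Count) documents"
```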
-4. To train a new model, select **Train a new model** and type in the model name in the text box below. You can **overwrite an existing model** by selecting this option and select the model you want from the dropdown below.
+## Train model
- :::image type="content" source="../media/train-model.png" alt-text="Create a new training job" lightbox="../media/train-model.png":::
-
-If you have enabled [your project data to be split manually](tag-data.md) when you were tagging your data, you will see two training options:
+# [Language studio](#tab/Language-studio)
-* **Automatic split the testing**: The data will be randomly split for each class between training and testing sets, according to the percentages you choose. The default value is 80% for training and 20% for testing. To change these values, choose which set you want to change and write the new value.
-* **Use a manual split**: Assign each document to either the training or testing set, this required first adding files in the test dataset.
+# [REST APIs](#tab/REST-APIs)
-5. Click on the **Train** button.
+### Start training job
-6. You can check the status of the training job in the same page. Only successfully completed training jobs will generate models.
-You can only have one training job running at a time. You cannot create or start other tasks in the same project.
+### Get training job status
+
+Training could take some time depending on the size of your training data and the complexity of your schema. You can use the following request to keep polling the status of the training job until it's successfully completed.
+
+ [!INCLUDE [get training model status](../includes/rest-api/get-training-status.md)]
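For orientation only, a polling loop in PowerShell might look like the sketch below. The job status URL, the resource key, and the status values are assumptions (the status URL is typically returned when you submit the training job); rely on the request in the include above for the exact shape.

```powershell
# Hedged polling sketch; the job status URL and the status values are assumptions.
$jobStatusUri = "<training-job-status-url>"                               # placeholder, returned when the job is submitted
$headers      = @{ "Ocp-Apim-Subscription-Key" = "<your-resource-key>" }  # placeholder

do {
    Start-Sleep -Seconds 30
    $job = Invoke-RestMethod -Method Get -Uri $jobStatusUri -Headers $headers
    Write-Output "Training job status: $($job.status)"
} while ($job.status -in @("notStarted", "running"))
```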
+++
+### Cancel training job
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
+++ ## Next steps
-After training is completed, you will be able to use the [model evaluation metrics](view-model-evaluation.md) to optionally [improve your model](improve-model.md). Once you're satisfied with your model, you can deploy it, making it available to use for [extracting entities](call-api.md) from text.
+After training is completed, you'll be able to [view model performance](view-model-evaluation.md) to optionally [improve your model](improve-model.md) if needed. Once you're satisfied with your model, you can deploy it, making it available to use for [extracting entities](call-api.md) from text.
cognitive-services Use Autotagging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/use-autotagging.md
+
+ Title: How to use autotagging in custom named entity recognition
+
+description: Learn how to use autotagging in custom named entity recognition.
+++++++ Last updated : 05/09/2022+++
+# How to use auto-labeling
+
+The [labeling process](tag-data.md) is an important part of preparing your dataset. Since this process requires a lot of time and effort, you can use the auto-labeling feature to automatically label your entities. With auto-labeling, you can start labeling a few of your documents, train a model, then create an auto-labeling job to label entities on your behalf, automatically. This feature can save you the time and effort of manually labeling your entities.
+
+## Prerequisites
+
+Before you can use auto-labeling, you must have a [trained model](train-model.md).
++
+## Trigger an auto-labeling job
+
+When you trigger an auto-labeling job, there's a limit of 5,000 text records per month, per resource. This means the same limit applies to all projects within the same resource.
+
+> [!TIP]
+> A text record is calculated as the ceiling of (Number of characters in a document / 1,000). For example, if a document has 8921 characters, the number of text records is:
+>
+> `ceil(8921/1000) = ceil(8.921)`, which is 9 text records.
+
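If you want a quick estimate of usage against that limit before triggering a job, the following PowerShell sketch applies the `ceil(characters / 1,000)` rule from the tip above to a local folder of documents; the folder path is a placeholder.

```powershell
# Estimate billable text records for a folder of .txt documents using the
# ceil(characters / 1,000) rule described above; the path is a placeholder.
$totalRecords = 0
foreach ($doc in Get-ChildItem -Path .\my-documents -Filter *.txt) {
    $chars = (Get-Content -Path $doc.FullName -Raw).Length
    $totalRecords += [math]::Ceiling($chars / 1000)
}
Write-Output "Estimated text records: $totalRecords"
```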
+1. From the left navigation menu, select **Data auto-labeling**.
+2. Select **Trigger Auto-label** to start an auto-labeling job.
++
+ :::image type="content" source="../media/trigger-autotag.png" alt-text="A screenshot showing how to trigger an autotag job." lightbox="../media/trigger-autotag.png":::
+
+3. Choose a trained model. It's recommended to check the model performance before using it for auto-labeling.
+
+ :::image type="content" source="../media/choose-model.png" alt-text="A screenshot showing how to choose trained model for autotagging." lightbox="../media/choose-model.png":::
++
+4. Choose the entities you want to be included in the auto-labeling job. By default, all entities are selected. You can see the total labels, precision and recall of each entity. It's recommended to include entities that perform well to ensure the quality of the automatically labeled entities.
+
+ :::image type="content" source="../media/choose-entities.png" alt-text="A screenshot showing which entities to be included in autotag job." lightbox="../media/choose-entities.png":::
+
+5. Choose the documents you want to be automatically labeled. You'll see the number of text records for each document. When you select one or more documents, you should see the number of text records selected. It's recommended to use the filter to choose the unlabeled documents.
+
+ > [!NOTE]
+ > * If an entity was automatically labeled, but has a user defined label, only the user defined label will be used and be visible.
+ > * You can view the documents by clicking on the document name.
+
+ :::image type="content" source="../media/choose-files.png" alt-text="A screenshot showing which documents to be included in the autotag job." lightbox="../media/choose-files.png":::
+
+6. Select **Autolabel** to trigger the auto-labeling job.
+You should see the model used, the number of documents included in the auto-labeling job, the number of text records, and the entities to be automatically labeled. Auto-labeling jobs can take anywhere from a few seconds to a few minutes, depending on the number of documents you included.
++
+ :::image type="content" source="../media/review-autotag.png" alt-text="A screenshot showing the review screen for an autotag job." lightbox="../media/review-autotag.png":::
+
+## Review the auto labeled documents
+
+When the auto-labeling job is complete, you can see the output documents in the **Data labeling** page of Language Studio. Select **Review documents with autolabels** to view the documents with the **Auto labeled** filter applied.
++
+Entities that have been automatically labeled will appear with a dotted line. These entities will have two selectors (a checkmark and an "X") that will let you accept or reject the automatic label.
+
+Once an entity is accepted, the dotted line will change to a solid line, and this label will be included in any further model training and become a user-defined label.
+
+Alternatively, you can accept or reject all automatically labeled entities within the document, using **Accept all** or **Reject all** in the top right corner of the screen.
+
+After you accept or reject the labeled entities, select **Save labels** to apply the changes.
+
+> [!NOTE]
+> * We recommend validating automatically labeled entities before accepting them.
+> * All labels that were not accepted will be deleted when you train your model.
++
+## Next steps
+
+* Learn more about [labeling your data](tag-data.md).
cognitive-services View Model Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/view-model-evaluation.md
Previously updated : 04/05/2022 Last updated : 05/24/2022 -+
-# View the model's evaluation and details
+# View the custom NER model's evaluation and details
-After your model has finished training, you can view the model details and see how well does it perform against the test set, which contains 10% of your data at random, which is created during [training](train-model.md). The test set consists of data that was not introduced to the model during the training process. For the evaluation process to complete there must be at least 10 files in your dataset. You must also have a [custom NER project](../quickstart.md) with a [trained model](train-model.md).
+After your model has finished training, you can view the model performance and see the extracted entities for the documents in the test set.
+
+> [!NOTE]
+> Using the **Automatically split the testing set from training data** option may result in different model evaluation results every time you [train a new model](train-model.md), as the test set is selected randomly from the data. To make sure that the evaluation is calculated on the same test set every time you train a model, make sure to use the **Use a manual split of training and testing data** option when starting a training job and define your **Test** documents when [labeling data](tag-data.md).
## Prerequisites
-* A successfully [created project](create-project.md) with a configured Azure blob storage account
- * Text data that [has been uploaded](create-project.md#prepare-training-data) to your storage account.
-* [Tagged data](tag-data.md)
+Before viewing model evaluation, you need:
+
+* A successfully [created project](create-project.md) with a configured Azure blob storage account.
+* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account.
+* [Labeled data](tag-data.md)
* A [successfully trained model](train-model.md)
-See the [application development lifecycle](../overview.md#application-development-lifecycle) for more information.
+See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
+
+## Model details
+
+### [Language studio](#tab/language-studio)
-## View the model's evaluation details
+### [REST APIs](#tab/rest-api)
+++
+## Delete model
+
+### [Language studio](#tab/language-studio)
++
+### [REST APIs](#tab/rest-api)
+++ ## Next steps
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/language-support.md
Title: Language and region support for Custom Named Entity Recognition (NER)
+ Title: Language and region support for custom named entity recognition
-description: Learn about the languages and regions supported by Custom Named Entity Recognition (NER).
+description: Learn about the languages and regions supported by custom named entity recognition.
Previously updated : 03/14/2022- Last updated : 05/06/2022+
-# Language support for Custom Named Entity Recognition (NER)
+# Language support for custom named entity recognition
-Use this article to learn about the languages and regions currently supported by Custom Named Entity Recognition (NER).
+Use this article to learn about the languages currently supported by the custom named entity recognition feature.
-## Multiple language support
+## Multi-lingual option
-With custom NER, you can train a model in one language and test in another language. This feature is very powerful because it helps you save time and effort, instead of building separate projects for every language, you can handle multi-lingual dataset in one project. Your dataset doesn't have to be entirely in the same language but you have to specify this option at project creation. If you notice your model performing poorly in certain languages during the evaluation process, consider adding more data in this language to your training set.
+With custom NER, you can train a model in one language and use it to extract entities from documents in another language. This feature is powerful because it helps save time and effort. Instead of building separate projects for every language, you can handle a multi-lingual dataset in one project. Your dataset doesn't have to be entirely in the same language, but you should enable the multi-lingual option for your project when you create it, or later in the project settings. If you notice your model performing poorly in certain languages during the evaluation process, consider adding more data in these languages to your training set.
-> [!NOTE]
-> To enable support for multiple languages, you need to enable this option when [creating your project](how-to/create-project.md) or you can enable it later form the project settings page.
+
+You can train your project entirely with English documents, and query it in: French, German, Mandarin, Japanese, Korean, and others. Custom named entity recognition
+makes it easy for you to scale your projects to multiple languages by using multilingual technology to train your models.
+
+Whenever you identify that a particular language is not performing as well as other languages, you can add more documents for that language in your project. In the [data labeling](how-to/tag-data.md) page in Language Studio, you can select the language of the document you're adding. When you introduce more documents for that language to the model, it is introduced to more of the syntax of that language, and learns to predict it better.
+
+You aren't expected to add the same number of documents for every language. You should build the majority of your project in one language, and only add a few documents in languages you observe aren't performing well. If you create a project that is primarily in English, and start testing it in French, German, and Spanish, you might observe that German doesn't perform as well as the other two languages. In that case, consider adding 5% of your original English documents in German, train a new model and test in German again. You should see better results for German queries. The more labeled documents you add, the more likely the results are going to get better.
+
+When you add data in another language, you shouldn't expect it to negatively affect other languages.
## Language support
Custom NER supports `.txt` files in the following languages:
## Next steps
-[Custom NER overview](overview.md)
+* [Custom NER overview](overview.md)
+* [Service limits](service-limits.md)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/overview.md
Title: What is Custom Named Entity Recognition (NER) in Azure Cognitive Service for Language (preview)
+ Title: What is custom named entity recognition in Azure Cognitive Service for Language (preview)
-description: Learn how use Custom Named Entity Recognition (NER).
+description: Learn how to use custom named entity recognition.
Previously updated : 11/02/2021 Last updated : 05/06/2022 -+
-# What is custom named entity recognition (NER) (preview)?
+# What is custom named entity recognition?
-Custom NER is one of the features offered by [Azure Cognitive Service for Language](../overview.md). It is a cloud-based API service that applies machine-learning intelligence to enable you to build custom models for text custom NER tasks.
+Custom NER is one of the custom features offered by [Azure Cognitive Service for Language](../overview.md). It is a cloud-based API service that applies machine-learning intelligence to enable you to build custom models for custom named entity recognition tasks.
-Custom NER is offered as part of the custom features within [Azure Cognitive Service for Language](../overview.md). This feature enables its users to build custom AI models to extract domain-specific entities from unstructured text, such as contracts or financial documents. By creating a Custom NER project, developers can iteratively tag data, train, evaluate, and improve model performance before making it available for consumption. The quality of the tagged data greatly impacts model performance. To simplify building and customizing your model, the service offers a custom web portal that can be accessed through the [Language studio](https://aka.ms/languageStudio). You can easily get started with the service by following the steps in this [quickstart](quickstart.md).
+Custom NER enables users to build custom AI models to extract domain-specific entities from unstructured text, such as contracts or financial documents. By creating a Custom NER project, developers can iteratively label data, train, evaluate, and improve model performance before making it available for consumption. The quality of the labeled data greatly impacts model performance. To simplify building and customizing your model, the service offers a custom web portal that can be accessed through the [Language studio](https://aka.ms/languageStudio). You can easily get started with the service by following the steps in this [quickstart](quickstart.md).
This documentation contains the following article types:
## Example usage scenarios
+Custom named entity recognition can be used in multiple scenarios across a variety of industries:
+ ### Information extraction
-Many financial and legal organizations extract and normalize data from thousands of complex unstructured text, such as bank statements, legal agreements, or bank forms on a daily basis. Instead of manually processing these forms, custom NER can help automate this process and save cost, time, and effort..
+Many financial and legal organizations extract and normalize data from thousands of complex, unstructured text sources on a daily basis. Such sources include bank statements, legal agreements, or bank forms. For example, manually extracting data from mortgage applications can take human reviewers several days. Automating these steps by building a custom NER model simplifies the process and saves cost, time, and effort.
### Knowledge mining to enhance/enrich semantic search
-Search is foundational to any app that surfaces text content to users, with common scenarios including catalog or document search, retail product search, or knowledge mining for data science. Many enterprises across various industries are looking into building a rich search experience over private, heterogeneous content, which includes both structured and unstructured documents. As a part of their pipeline, developers can use Custom NER for extracting entities from the text that are relevant to their industry. These entities could be used to enrich the indexing of the file for a more customized search experience.
+Search is foundational to any app that surfaces text content to users. Common scenarios include catalog or document search, retail product search, or knowledge mining for data science. Many enterprises across various industries want to build a rich search experience over private, heterogeneous content, which includes both structured and unstructured documents. As a part of their pipeline, developers can use custom NER for extracting entities from the text that are relevant to their industry. These entities can be used to enrich the indexing of the file for a more customized search experience.
### Audit and compliance
-Instead of manually reviewing significantly long text files to audit and apply policies, IT departments in financial or legal enterprises can use custom NER to build automated solutions. These solutions help enforce compliance policies, and set up necessary business rules based on knowledge mining pipelines that process structured and unstructured contents.
+Instead of manually reviewing long text files to audit and apply policies, IT departments in financial or legal enterprises can use custom NER to build automated solutions. These solutions help enforce compliance policies, and set up necessary business rules based on knowledge mining pipelines that process structured and unstructured content.
-## Application development lifecycle
+## Project development lifecycle
-Using Custom NER typically involves several different steps.
+Using custom NER typically involves several different steps.
-1. **Define your schema**: Know your data and identify the entities you want extracted. Avoid ambiguity.
+1. **Define your schema**: Know your data and identify the [entities](glossary.md#entity) you want extracted. Avoid ambiguity.
-2. **Tag your data**: Tagging data is a key factor in determining model performance. Tag precisely, consistently and completely.
- 1. **Tag precisely**: Tag each entity to its right type always. Only include what you want extracted, avoid unnecessary data in your tag.
- 2. **Tag consistently**: The same entity should have the same tag across all the files.
- 3. **Tag completely**: Tag all the instances of the entity in all your files.
+2. **Label your data**: Labeling data is a key factor in determining model performance. Label precisely, consistently, and completely.
+ 1. **Label precisely**: Always label each entity with its correct type. Only include what you want extracted; avoid unnecessary data in your labels.
+ 2. **Label consistently**: The same entity should have the same label across all the files.
+ 3. **Label completely**: Label all the instances of the entity in all your files.
-3. **Train model**: Your model starts learning from you tagged data.
+3. **Train model**: Your model starts learning from your labeled data.
4. **View the model evaluation details**: After training is completed, view the model's evaluation details and its performance.

5. **Improve the model**: After reviewing the model's evaluation details, learn how you can improve the model.
-6. **Deploy the model**: Deploying a model is to make it available for use.
+6. **Deploy model**: Deploying a model makes it available for use via the [Analyze API](https://aka.ms/ct-runtime-swagger).
+
+7. **Extract entities**: Use your custom models for entity extraction tasks, as sketched below.
+
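+As a rough illustration of the last step, the sketch below submits an entity extraction job to the Analyze API with Python's `requests` library. The URL path, API version, and payload shape are assumptions for illustration only; consult the Analyze API reference for the exact contract.
+
+```python
+# Illustrative sketch of calling the runtime (Analyze) API with the requests library.
+# The URL path, API version, and payload shape below are assumptions; check the
+# Analyze API reference linked above for the exact contract.
+import requests
+
+endpoint = "https://<your-resource>.cognitiveservices.azure.com"   # placeholder
+key = "<your-resource-key>"                                        # placeholder
+
+body = {
+    "displayName": "Extract entities",
+    "analysisInput": {"documents": [
+        {"id": "1", "language": "en-us", "text": "This loan agreement is between Contoso Bank and ..."}
+    ]},
+    "tasks": [{
+        "kind": "CustomEntityRecognition",
+        "parameters": {"projectName": "<project-name>", "deploymentName": "<deployment-name>"}
+    }],
+}
+
+response = requests.post(
+    f"{endpoint}/language/analyze-text/jobs",
+    params={"api-version": "2022-05-01"},                           # assumed version
+    headers={"Ocp-Apim-Subscription-Key": key},
+    json=body,
+)
+response.raise_for_status()
+print("Job submitted:", response.headers.get("operation-location"))
+```
+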
+## Reference documentation and code samples
+
+As you use custom NER, see the following reference documentation and samples for Azure Cognitive Services for Language:
+
+|Development option / language |Reference documentation |Samples |
+||||
+|REST APIs (Authoring) | [REST API documentation](https://aka.ms/ct-authoring-swagger) | |
+|REST APIs (Runtime) | [REST API documentation](https://aka.ms/ct-runtime-swagger) | |
+|C# | [C# documentation](/dotnet/api/azure.ai.textanalytics?view=azure-dotnet-preview&preserve-view=true) | [C# samples](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/samples/Sample9_RecognizeCustomEntities.md) |
+| Java | [Java documentation](/java/api/overview/azure/ai-textanalytics-readme?view=azure-java-preview&preserve-view=true) | [Java Samples](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/textanalytics/azure-ai-textanalytics/src/samples/java/com/azure/ai/textanalytics/lro/RecognizeCustomEntities.java) |
+|JavaScript | [JavaScript documentation](/javascript/api/overview/azure/ai-text-analytics-readme?view=azure-node-preview&preserve-view=true) | [JavaScript samples](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/textanalytics/ai-text-analytics/samples/v5/javascript/customText.js) |
+|Python | [Python documentation](/python/api/azure-ai-textanalytics/azure.ai.textanalytics?view=azure-python-preview&preserve-view=true) | [Python samples](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_recognize_custom_entities.py) |
+
+## Responsible AI
+
+An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the [transparency note for custom NER]() to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
-7. **Extract entities**: Use your custom models for entity extraction tasks.
## Next steps

* Use the [quickstart article](quickstart.md) to start using custom named entity recognition.
-* As you go through the application development lifecycle, review the [glossary](glossary.md) to learn more about the terms used throughout the documentation for this feature.
+* As you go through the project development lifecycle, review the [glossary](glossary.md) to learn more about the terms used throughout the documentation for this feature.
* Remember to view the [service limits](service-limits.md) for information such as [regional availability](service-limits.md#regional-availability).
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/quickstart.md
Title: Quickstart - Custom Named Entity Recognition (NER)
+ Title: Quickstart - Custom named entity recognition (NER)
description: Use this article to quickly get started using Custom Named Entity Recognition (NER) with Language Studio
Previously updated : 01/24/2022 Last updated : 04/25/2022 -+ zone_pivot_groups: usage-custom-language-features
-# Quickstart: Custom Named Entity Recognition (preview)
+# Quickstart: Custom named entity recognition (preview)
+
+Use this article to get started with creating a custom NER project where you can train custom models for custom entity recognition. A model is an object that's trained to do a certain task. For this system, the models extract named entities. Models are trained by learning from tagged data.
In this article, we use the Language studio to demonstrate key concepts of custom Named Entity Recognition (NER). As an example, we'll build a custom NER model to extract relevant entities from loan agreements.
cognitive-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/service-limits.md
Previously updated : 04/05/2022 Last updated : 05/06/2022 -+
-# Custom Named Entity Recognition (NER) service limits
+# Custom named entity recognition service limits
-Use this article to learn about the data and service limits when using Custom NER.
+Use this article to learn about the data and service limits when using custom NER.
-## File limits
+## Language resource limits
-* You can only use `.txt`. files. If your data is in another format, you can use the [CLUtils parse command](https://github.com/microsoft/CognitiveServicesLanguageUtilities/blob/main/CustomTextAnalytics.CLUtils/Solution/CogSLanguageUtilities.ViewLayer.CliCommands/Commands/ParseCommand/README.md) to open your document and extract the text.
-
-* All files uploaded in your container must contain data. Empty files are not allowed for training.
-
-* All files should be available at the root of your container.
-
-* Maximum allowed length for your file is 128,000 characters, which is approximately 28,000 words or 56 pages.
-
-* Your [training dataset](how-to/train-model.md) should include at least 10 files and not more than 100,000 files.
-
-## APIs limits
-
-* The Authoring API has a maximum of 10 POST requests and 100 GET requests per minute.
-
-* The Analyze API has a maximum of 20 GET or POST requests per minute.
-
-* The maximum file size per request is 125,000 characters. You can send up to 25 files as long as they collectively do not exceed 125,000 characters.
+* Your Language resource has to be created in one of the [supported regions](#regional-availability).
-> [!NOTE]
-> If you need to send larger files than the limit allows, you can break the text into smaller chunks of text before sending them to the API. You can use the [Chunk command from CLUtils](https://github.com/microsoft/CognitiveServicesLanguageUtilities/blob/main/CustomTextAnalytics.CLUtils/Solution/CogSLanguageUtilities.ViewLayer.CliCommands/Commands/ChunkCommand/README.md) for this process.
-
-## Azure resource limits
-
-* You can only connect 1 storage account per resource. This process is irreversible. If you connect a storage account to your resource, you cannot unlink it later.
+* Your resource must be one of the supported pricing tiers:
+
+ |Tier|Description|Limit|
+ |--|--|--|
+ |F0 |Free tier|You are only allowed one F0 tier Language resource per subscription.|
+ |S |Paid tier|You can have unlimited Language S tier resources per subscription. |
+
+
+* You can only connect 1 storage account per resource. This process is irreversible. If you connect a storage account to your resource, you cannot unlink it later. Learn more about [connecting a storage account](how-to/create-project.md#create-language-resource-and-connect-storage-account).
* You can have up to 500 projects per resource.
-* Project names have to be unique within the same resource across both custom NER and [custom text classification](../custom-classification/overview.md).
+* Project names have to be unique within the same resource across all custom features.
## Regional availability
-Custom text classification is only available select Azure regions. When you create an [Azure resource](how-to/create-project.md), it must be deployed into one of the following regions:
-* **West US 2**
-* **West Europe**
-
-## Project limits
-
-* You can only connect 1 storage account for each project. This process is irreversible. If you connect a storage account to your project, you cannot disconnect it later.
+Custom named entity recognition is only available in some Azure regions. To use custom named entity recognition, you must choose a Language resource in one of the following regions:
-* You can only have 1 [tags file](how-to/tag-data.md) per project. You cannot change to a different tags file later. You can only update the tags within your project.
+* West US 2
+* East US
+* East US 2
+* West US 3
+* South Central US
+* West Europe
+* North Europe
+* UK South
+* Southeast Asia
+* Australia East
+* Sweden Central
-* You cannot rename your project after creation.
-* Your project name must only contain alphanumeric characters (letters and numbers). Spaces and special characters are not allowed. Project names can have a maximum of 50 characters.
+## API limits
-* You must have minimum of 10 tagged files in your project and a maximum of 100,000 files.
+|Item|Request type| Maximum limit|
+|:-|:-|:-|
+|Authoring API|POST|10 per minute|
+|Authoring API|GET|100 per minute|
+|Prediction API|GET/POST|1,000 per minute|
+|Document size|--|125,000 characters. You can send up to 25 documents as long as they collectively do not exceed 125,000 characters|
-* You can have up to 50 trained models per project.
+> [!TIP]
+> If you need to send larger files than the limit allows, you can break the text into smaller chunks before sending them to the API. You can use the [chunk command from CLUtils](https://github.com/microsoft/CognitiveServicesLanguageUtilities/blob/main/CustomTextAnalytics.CLUtils/Solution/CogSLanguageUtilities.ViewLayer.CliCommands/Commands/ChunkCommand/README.md) for this process.
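+
+For example, a minimal chunking helper in Python might look like the following. This is an illustrative alternative to the CLUtils command, assuming plain-text input and the 125,000-character request limit above:
+
+```python
+# Minimal sketch of client-side chunking before calling the API. Splits on paragraph
+# boundaries and keeps each chunk under the documented 125,000-character limit.
+MAX_CHARS = 125_000
+
+def chunk_text(text: str, max_chars: int = MAX_CHARS) -> list[str]:
+    chunks, current = [], ""
+    for paragraph in text.split("\n\n"):
+        candidate = f"{current}\n\n{paragraph}" if current else paragraph
+        if len(candidate) <= max_chars:
+            current = candidate
+        else:
+            if current:
+                chunks.append(current)
+            # A single paragraph longer than the limit is split hard.
+            while len(paragraph) > max_chars:
+                chunks.append(paragraph[:max_chars])
+                paragraph = paragraph[max_chars:]
+            current = paragraph
+    if current:
+        chunks.append(current)
+    return chunks
+```
+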
-* Model names have to be unique within the same project.
+## Quota limits
-* Model names must only contain alphanumeric characters, only letters and numbers, no spaces or special characters are allowed). Model name must have a maximum of 50 characters.
+|Pricing tier |Item |Limit |
+|--|--|--|
+|F|Training time| 1 hour per month |
+|S|Training time| Unlimited, Pay as you go |
+|F|Prediction Calls| 5,000 text records per month |
+|S|Prediction Calls| Unlimited, Pay as you go |
-* You cannot rename your model after creation.
+## Document limits
-* You can only train one model at a time per project.
+* You can only use `.txt` files. If your data is in another format, you can use the [CLUtils parse command](https://github.com/microsoft/CognitiveServicesLanguageUtilities/blob/main/CustomTextAnalytics.CLUtils/Solution/CogSLanguageUtilities.ViewLayer.CliCommands/Commands/ParseCommand/README.md) to open your document and extract the text.
-## Entity limits
+* All files uploaded in your container must contain data. Empty files are not allowed for training.
-* Your tagged entity is recommended not to exceed 10 words but the maximum allowed is 100 characters.
+* All files should be available at the root of your container.
-* You must have at least 1 entity type in your project and the maximum is 200 entity types.
+## Data limits
-* It is recommended to have around 200 tagged instances per entity and you must have a minimum of 10 of tagged instances per entity.
+The following limits are observed for custom named entity recognition. A hypothetical pre-upload check follows the table.
-* Entity names must have a maximum of 50 characters.
+|Item|Lower Limit| Upper Limit |
+|--|--|--|
+|Documents count | 10 | 100,000 |
+|Document length in characters | 1 | 128,000 characters; approximately 28,000 words or 56 pages. |
+|Count of entity types | 1 | 200 |
+|Entity length in characters | 1 | 500 |
+|Count of trained models per project| 0 | 50 |
+|Count of deployments per project| 0 | 10 |
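+
+The sketch below is a hypothetical pre-upload check against these document and data limits; the folder name is a placeholder and the thresholds are taken from the tables above:
+
+```python
+# Illustrative pre-upload check against the document and data limits listed above.
+# File names and folder layout are placeholders; adjust to your own dataset.
+from pathlib import Path
+
+MAX_DOC_CHARS = 128_000
+MIN_DOCS, MAX_DOCS = 10, 100_000
+
+dataset = Path("dataset")                      # hypothetical local folder
+docs = sorted(dataset.glob("*.txt"))
+
+if not MIN_DOCS <= len(docs) <= MAX_DOCS:
+    print(f"Document count {len(docs)} is outside the {MIN_DOCS}-{MAX_DOCS} range.")
+
+for doc in docs:
+    text = doc.read_text(encoding="utf-8")
+    if not text.strip():
+        print(f"{doc.name}: empty files are not allowed for training.")
+    elif len(text) > MAX_DOC_CHARS:
+        print(f"{doc.name}: {len(text)} characters exceeds the {MAX_DOC_CHARS} limit.")
+```
+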
## Naming limits
-| Attribute | Limits |
+| Item | Limits |
|--|--|
| Project name | You can only use letters `(a-z, A-Z)`, and numbers `(0-9)` with no spaces. Maximum length allowed is 50 characters. |
-| Model name | You can only use letters `(a-z, A-Z)`, and numbers `(0-9)` with no spaces. Maximum length allowed is 50 characters. |
+| Model name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `@ # _ . , ^ \ [ ]`. Maximum allowed length is 50 characters. |
+| Deployment name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `@ # _ . , ^ \ [ ]`. Maximum allowed length is 50 characters. |
| Entity name | You can only use letters `(a-z, A-Z)`, and numbers `(0-9)` and symbols `@ # _ . , ^ \ [ ]`. Maximum length allowed is 50 characters. |
-| File name | You can only use letters `(a-z, A-Z)`, and numbers `(0-9)` with no spaces. |
+| Document name | You can only use letters `(a-z, A-Z)`, and numbers `(0-9)` with no spaces. |
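+
+One way to enforce these rules client-side is with a few regular expressions. The patterns below are an interpretation of the character sets in the table above, not an official validation routine:
+
+```python
+# Illustrative validation of names against the limits in the table above.
+import re
+
+PATTERNS = {
+    "project": re.compile(r"[a-zA-Z0-9]{1,50}"),
+    "model": re.compile(r"[a-zA-Z0-9@#_.,^\\\[\]]{1,50}"),
+    "deployment": re.compile(r"[a-zA-Z0-9@#_.,^\\\[\]]{1,50}"),
+    "entity": re.compile(r"[a-zA-Z0-9@#_.,^\\\[\]]{1,50}"),
+    "document": re.compile(r"[a-zA-Z0-9]+"),
+}
+
+def is_valid(kind: str, name: str) -> bool:
+    # fullmatch ensures the whole name, not just a prefix, satisfies the pattern
+    return bool(PATTERNS[kind].fullmatch(name))
+
+print(is_valid("project", "LoanAgreements"))   # True
+print(is_valid("model", "model v1"))           # False: spaces are not allowed
+```
+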
## Next steps
-[Custom NER overview](../overview.md)
+* [Custom NER overview](overview.md)
cognitive-services Cognitive Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/tutorials/cognitive-search.md
Previously updated : 02/04/2022 Last updated : 04/26/2022 -+ # Tutorial: Enrich a Cognitive Search index with custom entities from your data
In this tutorial, you learn how to:
* [An Azure Cognitive Search service](../../../../search/search-create-service-portal.md) in your current subscription
    * You can use any tier, and any region for this service.
* An [Azure function app](../../../../azure-functions/functions-create-function-app-portal.md)
-* Download this [sample data](https://go.microsoft.com/fwlink/?linkid=2175226).
-## Create a custom NER project through Language studio
+## Upload sample data to blob container
-Select the container where you've uploaded your data. For this tutorial we'll use the tags file you downloaded from the sample data. Review the data you entered and select **Create Project**.
+# [Language Studio](#tab/Language-studio)
+
+## Create a custom named entity recognition project
+
+Once your resource and storage account are configured, create a new custom NER project. A project is a work area for building your custom ML models based on your data. Your project can only be accessed by you and others who have access to the Language resource being used.
+ ## Train your model
+Typically after you create a project, you go ahead and start [tagging the documents](../how-to/tag-data.md) you have in the container connected to your project. For this tutorial, you have imported a sample tagged dataset and initialized your project with the sample JSON tags file.
+ ## Deploy your model
+Generally after training a model you would review its [evaluation details](../how-to/view-model-evaluation.md) and [make improvements](../how-to/improve-model.md) if necessary. In this tutorial, you will just deploy your model, and make it available to try in Language Studio, or you can call the [prediction API](https://aka.ms/ct-runtime-swagger).
++
+# [REST APIs](#tab/REST-APIs)
+
+### Get your resource keys and endpoint
++
+## Create a custom NER project
+
+Once your resource and storage account are configured, create a new custom NER project. A project is a work area for building your custom ML models based on your data. Your project can only be accessed by you and others who have access to the Language resource being used.
+
+Use the tags file you downloaded from the [sample data](https://github.com/Azure-Samples/cognitive-services-sample-data-files) in the previous step and add it to the body of the following request.
+
+### Trigger import project job
++
+### Get import job status
+
+ [!INCLUDE [get import project status](../includes/rest-api/get-import-status.md)]
+
+## Train your model
-If you deploy your model through Language Studio, your `deployment-name` will be `prod`.
+Typically after you create a project, you go ahead and start [tagging the documents](../how-to/tag-data.md) you have in the container connected to your project. For this tutorial, you have imported a sample tagged dataset and initialized your project with the sample JSON tags file.
+
+### Start training job
+
+After your project has been imported, you can start training your model.
++
+### Get training job status
+
+Training could take between 10 and 30 minutes for this sample dataset. You can use the following request to keep polling the status of the training job until it's successfully completed.
+
+ [!INCLUDE [get training model status](../includes/rest-api/get-training-status.md)]
+
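+A polling loop for this step might look like the following sketch. The placeholders and the response fields are assumptions; the INCLUDE above documents the actual request:
+
+```python
+# Minimal polling loop for a long-running authoring job (illustrative only; the job
+# status URL comes from the "operation-location" header of the request that started
+# the job, and the exact response fields may differ from this sketch).
+import time
+import requests
+
+key = "<your-resource-key>"                     # placeholder
+job_status_url = "<operation-location-url>"     # placeholder returned by the train request
+
+while True:
+    response = requests.get(job_status_url, headers={"Ocp-Apim-Subscription-Key": key})
+    response.raise_for_status()
+    status = response.json().get("status")
+    print("Training status:", status)
+    if status in ("succeeded", "failed", "cancelled"):
+        break
+    time.sleep(60)                              # the sample dataset may take 10-30 minutes
+```
+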
+## Deploy your model
+
+Generally after training a model you would review its [evaluation details](../how-to/view-model-evaluation.md) and [make improvements](../how-to/improve-model.md) if necessary. In this tutorial, you will just deploy your model, and make it available for you to try in Language Studio, or you can call the [prediction API](https://aka.ms/ct-runtime-swagger).
+
+### Start deployment job
++
+### Get deployment job status
+++ ## Use CogSvc language utilities tool for Cognitive search integration
If you deploy your model through Language Studio, your `deployment-name` will be
5. Get your resource keys endpoint
- 1. Navigate to your resource in the [Azure portal](https://portal.azure.com/#home).
- 2. From the menu on the left side, select **Keys and Endpoint**. You'll need the endpoint and one of the keys for the API requests.
-
- :::image type="content" source="../../media/azure-portal-resource-credentials.png" alt-text="A screenshot showing the key and endpoint screen in the Azure portal" lightbox="../../media/azure-portal-resource-credentials.png":::
+ [!INCLUDE [Get resource keys and endpoint](../includes/get-keys-endpoint-azure.md)]
6. Get your custom NER project secrets
- 1. You'll need your **project-name**, project names are case-sensitive.
+ 1. You will need your **project-name**; project names are case-sensitive. Project names can be found in the **project settings** page.
- 2. You'll also need the **deployment-name**.
- * If you've deployed your model via Language Studio, your deployment name will be `prod` by default.
- * If you've deployed your model programmatically, using the API, this is the deployment name you assigned in your request.
+ 2. You will also need the **deployment-name**. Deployment names can be found in the **Deploying a model** page.
### Run the indexer command
cognitive-services Data Formats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/concepts/data-formats.md
+
+ Title: Custom text classification data formats
+
+description: Learn about the data formats accepted by custom text classification.
++++++ Last updated : 05/24/2022++++
+# Accepted data formats
+
+If you're trying to import your data into custom text classification, it has to follow a specific format. If you don't have data to import you can [create your project](../how-to/create-project.md) and use Language Studio to [label your documents](../how-to/tag-data.md).
+
+## Labels file format
+
+Your Labels file should be in the `json` format below. This will enable you to [import](../how-to/create-project.md#import-a-custom-text-classification-project) your labels into a project.
+
+# [Multi label classification](#tab/multi-classification)
+
+```json
+{
+ "projectFileVersion": "2022-05-01",
+ "stringIndexType": "Utf16CodeUnit",
+ "metadata": {
+ "projectKind": "CustomMultiLabelClassification",
+ "storageInputContainerName": "{CONTAINER-NAME}",
+ "projectName": "{PROJECT-NAME}",
+ "multilingual": false,
+ "description": "Project-description",
+ "language": "en-us"
+ },
+ "assets": {
+ "projectKind": "CustomMultiLabelClassification",
+ "classes": [
+ {
+ "category": "Class1"
+ },
+ {
+ "category": "Class2"
+ }
+ ],
+ "documents": [
+ {
+ "location": "{DOCUMENT-NAME}",
+ "language": "{LANGUAGE-CODE}",
+ "dataset": "{DATASET}",
+ "classes": [
+ {
+ "category": "Class1"
+ },
+ {
+ "category": "Class2"
+ }
+ ]
+ }
+ ]
+ }
+}
+```
+
+|Key |Placeholder |Value | Example |
+|||-|--|
+| multilingual | `true`| A boolean value that enables you to have documents in multiple languages in your dataset and when your model is deployed you can query the model in any supported language (not necessarily included in your training documents). See [language support](../language-support.md#multi-lingual-option) to learn more about multilingual support. | `true`|
+|projectName|`{PROJECT-NAME}`|Project name|myproject|
+| storageInputContainerName|`{CONTAINER-NAME}`|Container name|`mycontainer`|
+| classes | [] | Array containing all the classes you have in the project. These are the classes you want to classify your documents into.| [] |
+| documents | [] | Array containing all the documents in your project and the classes labeled for this document. | [] |
+| location | `{DOCUMENT-NAME}` | The location of the documents in the storage container. Since all the documents are in the root of the container, this value should be the document name.|`doc1.txt`|
+| dataset | `{DATASET}` | The dataset this document will be assigned to when your data is split before training. See [How to train a model](../how-to/train-model.md#data-splitting) for more information. Possible values for this field are `Train` and `Test`. |`Train`|
++
+# [Single label classification](#tab/single-classification)
+
+```json
+{
+
+ "projectFileVersion": "2022-05-01",
+ "stringIndexType": "Utf16CodeUnit",
+ "metadata": {
+ "projectKind": "CustomSingleLabelClassification",
+ "storageInputContainerName": "{CONTAINER-NAME}",
+ "settings": {},
+ "projectName": "{PROJECT-NAME}",
+ "multilingual": false,
+ "description": "Project-description",
+ "language": "en-us"
+ },
+ "assets": {
+ "projectKind": "CustomSingleLabelClassification",
+ "classes": [
+ {
+ "category": "Class1"
+ },
+ {
+ "category": "Class2"
+ }
+ ],
+ "documents": [
+ {
+ "location": "{DOCUMENT-NAME}",
+ "language": "{LANGUAGE-CODE}",
+ "dataset": "{DATASET}",
+ "class": {
+ "category": "Class2"
+ }
+ },
+ {
+ "location": "{DOCUMENT-NAME}",
+ "language": "{LANGUAGE-CODE}",
+ "dataset": "{DATASET}",
+ "class": {
+ "category": "Class1"
+ }
+ }
+ ]
+ }
+}
+```
+|Key |Placeholder |Value | Example |
+|||-|--|
+|projectName|`{PROJECT-NAME}`|Project name|myproject|
+| storageInputContainerName|`{CONTAINER-NAME}`|Container name|`mycontainer`|
+| multilingual | `true`| A boolean value that enables you to have documents in multiple languages in your dataset and when your model is deployed you can query the model in any supported language (not necessarily included in your training documents). See [language support](../language-support.md#multi-lingual-option) to learn more about multilingual support. | `true`|
+| classes | [] | Array containing all the classes you have in the project. These are the classes you want to classify your documents into.| [] |
+| documents | [] | Array containing all the documents in your project and which class this document belongs to. | [] |
+| location | `{DOCUMENT-NAME}` | The location of the documents in the storage container. Since all the documents are in the root of the container, this value should be the document name.|`doc1.txt`|
+| dataset | `{DATASET}` | The dataset this document will be assigned to when your data is split before training. See [How to train a model](../how-to/train-model.md#data-splitting) for more information. Possible values for this field are `Train` and `Test`. |`Train`|
++++
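+
+The labels file formats above can also be generated programmatically. The sketch below writes a minimal multi label classification file using the placeholder values from the example; file, container, and project names are illustrative:
+
+```python
+# Illustrative helper that writes a multi label classification labels file in the
+# format shown above. Container, project, and document names are placeholders.
+import json
+
+labels = {
+    "projectFileVersion": "2022-05-01",
+    "stringIndexType": "Utf16CodeUnit",
+    "metadata": {
+        "projectKind": "CustomMultiLabelClassification",
+        "storageInputContainerName": "mycontainer",
+        "projectName": "myproject",
+        "multilingual": False,
+        "description": "Project-description",
+        "language": "en-us",
+    },
+    "assets": {
+        "projectKind": "CustomMultiLabelClassification",
+        "classes": [{"category": "Class1"}, {"category": "Class2"}],
+        "documents": [
+            {
+                "location": "doc1.txt",
+                "language": "en-us",
+                "dataset": "Train",
+                "classes": [{"category": "Class1"}, {"category": "Class2"}],
+            }
+        ],
+    },
+}
+
+with open("labels.json", "w", encoding="utf-8") as f:
+    json.dump(labels, f, indent=2)
+```
+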
+## Next steps
+
+* You can import your labeled data into your project directly. See [How to create a project](../how-to/create-project.md#import-a-custom-text-classification-project) to learn more about importing projects.
+* See the [how-to article](../how-to/tag-data.md) for more information about labeling your data. When you're done labeling your data, you can [train your model](../how-to/train-model.md).
cognitive-services Evaluation Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/concepts/evaluation-metrics.md
+
+ Title: Custom text classification evaluation metrics
+
+description: Learn about evaluation metrics in custom text classification.
++++++ Last updated : 05/06/2022++++
+# Evaluation metrics
+
+Your [dataset is split](../how-to/train-model.md#data-splitting) into two parts: a set for training, and a set for testing. The training set is used to train the model, while the testing set is used as a test for the model after training, to calculate the model's performance and evaluation. The testing set isn't introduced to the model through the training process, to make sure that the model is tested on new data.
+
+Model evaluation is triggered automatically after training is completed successfully. The evaluation process starts by using the trained model to predict user defined classes for documents in the test set, and compares them with the provided data tags (which establishes a baseline of truth). The results are returned so you can review the modelΓÇÖs performance. For evaluation, custom text classification uses the following metrics:
+
+* **Precision**: Measures how precise/accurate your model is. It's the ratio between the correctly identified positives (true positives) and all identified positives. The precision metric reveals how many of the predicted classes are correctly labeled.
+
+ `Precision = #True_Positive / (#True_Positive + #False_Positive)`
+
+* **Recall**: Measures the model's ability to predict actual positive classes. It's the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the actual classes are correctly predicted.
+
+ `Recall = #True_Positive / (#True_Positive + #False_Negatives)`
+
+* **F1 score**: The F1 score is a function of Precision and Recall. It's needed when you seek a balance between Precision and Recall.
+
+ `F1 Score = 2 * Precision * Recall / (Precision + Recall)` <br>
+
+>[!NOTE]
+> Precision, recall and F1 score are calculated for each class separately (*class-level* evaluation) and for the model collectively (*model-level* evaluation).
+## Model-level and Class-level evaluation metrics
+
+The definitions of precision, recall, and evaluation are the same for both class-level and model-level evaluations. However, the counts of *true positives*, *false positives*, and *false negatives* differ, as shown in the following example.
+
+The below sections use the following example dataset:
+
+| Document | Actual classes | Predicted classes |
+|--|--|--|
+| 1 | action, comedy | comedy|
+| 2 | action | action |
+| 3 | romance | romance |
+| 4 | romance, comedy | romance |
+| 5 | comedy | action |
+
+### Class-level evaluation for the *action* class
+
+| Key | Count | Explanation |
+|--|--|--|
+| True Positive | 1 | Document 2 was correctly classified as *action*. |
+| False Positive | 1 | Document 5 was mistakenly classified as *action*. |
+| False Negative | 1 | Document 1 was not classified as *action* though it should have been. |
+
+**Precision** = `#True_Positive / (#True_Positive + #False_Positive) = 1 / (1 + 1) = 0.5`
+
+**Recall** = `#True_Positive / (#True_Positive + #False_Negatives) = 1 / (1 + 1) = 0.5`
+
+**F1 Score** = `2 * Precision * Recall / (Precision + Recall) = (2 * 0.5 * 0.5) / (0.5 + 0.5) = 0.5`
+
+### Class-level evaluation for the *comedy* class
+
+| Key | Count | Explanation |
+|--|--|--|
+| True positive | 1 | Document 1 was correctly classified as *comedy*. |
+| False positive | 0 | No documents were mistakenly classified as *comedy*. |
+| False negative | 2 | Documents 4 and 5 were not classified as *comedy* though they should have been. |
+
+**Precision** = `#True_Positive / (#True_Positive + #False_Positive) = 1 / (1 + 0) = 1`
+
+**Recall** = `#True_Positive / (#True_Positive + #False_Negatives) = 1 / (1 + 2) = 0.33`
+
+**F1 Score** = `2 * Precision * Recall / (Precision + Recall) = (2 * 1 * 0.33) / (1 + 0.33) = 0.5`
+
+### Model-level evaluation for the collective model
+
+| Key | Count | Explanation |
+|--|--|--|
+| True Positive | 4 | Documents 1, 2, 3 and 4 were given correct classes at prediction. |
+| False Positive | 1 | Document 5 was given a wrong class at prediction. |
+| False Negative | 2 | Documents 1 and 4 were not given all their correct classes at prediction. |
+
+**Precision** = `#True_Positive / (#True_Positive + #False_Positive) = 4 / (4 + 1) = 0.8`
+
+**Recall** = `#True_Positive / (#True_Positive + #False_Negatives) = 4 / (4 + 2) = 0.67`
+
+**F1 Score** = `2 * Precision * Recall / (Precision + Recall) = (2 * 0.8 * 0.67) / (0.8 + 0.67) = 0.73`
+
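+The same numbers can be reproduced with a few lines of Python; this is just a sketch of the formulas above, using the counts from the example dataset:
+
+```python
+# The calculations above, expressed as a small helper so the class-level and
+# model-level numbers from the example dataset can be reproduced.
+def metrics(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
+    precision = tp / (tp + fp)
+    recall = tp / (tp + fn)
+    f1 = 2 * precision * recall / (precision + recall)
+    return precision, recall, f1
+
+print("action:", metrics(tp=1, fp=1, fn=1))   # (0.5, 0.5, 0.5)
+print("comedy:", metrics(tp=1, fp=0, fn=2))   # ≈ (1.0, 0.33, 0.5)
+print("model:",  metrics(tp=4, fp=1, fn=2))   # ≈ (0.8, 0.67, 0.73)
+```
+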
+> [!NOTE]
+> For single-label classification models, the counts of false negatives and false positives are always equal. Custom single-label classification models always predict one class for each document. If the prediction is not correct, the FP count of the predicted class increases by one and the FN count of the actual class increases by one, so the overall counts of FP and FN for the model are always equal. This is not the case for multi-label classification, because failing to predict one of the classes of a document is counted as a false negative.
+## Interpreting class-level evaluation metrics
+
+So what does it actually mean to have a high precision or a high recall for a certain class?
+
+| Recall | Precision | Interpretation |
+|--|--|--|
+| High | High | This class is perfectly handled by the model. |
+| Low | High | The model can't always predict this class, but when it does, it predicts it with high confidence. This may be because this class is underrepresented in the dataset, so consider balancing your data distribution.|
+| High | Low | The model predicts this class well, but with low confidence. This may be because this class is overrepresented in the dataset, so consider balancing your data distribution. |
+| Low | Low | This class is poorly handled by the model: it is not usually predicted, and when it is, it is not predicted with high confidence. |
+
+Custom text classification models are expected to experience both false negatives and false positives. You need to consider how each will affect the overall system, and carefully think through scenarios where the model will ignore correct predictions, and recognize incorrect predictions. Depending on your scenario, either *precision* or *recall* could be more suitable for evaluating your model's performance.
+
+For example, if your scenario involves processing technical support tickets, predicting the wrong class could cause it to be forwarded to the wrong department/team. In this example, you should consider making your system more sensitive to false positives, and precision would be a more relevant metric for evaluation.
+
+As another example, if your scenario involves categorizing email as "*important*" or "*spam*", an incorrect prediction could cause you to miss a useful email if it's labeled "*spam*". However, if a spam email is labeled *important* you can disregard it. In this example, you should consider making your system more sensitive to false negatives, and recall would be a more relevant metric for evaluation.
+
+If you want to optimize for general purpose scenarios or when precision and recall are both important, you can utilize the F1 score. Evaluation scores are subjective depending on your scenario and acceptance criteria. There is no absolute metric that works for every scenario.
+
+## Confusion matrix
+
+> [!Important]
+> Confusion matrix is not available for multi-label classification projects.
+
+A confusion matrix is an N x N matrix used for model performance evaluation, where N is the number of classes.
+The matrix compares the expected labels with the ones predicted by the model.
+This gives a holistic view of how well the model is performing and what kinds of errors it is making.
+
+You can use the Confusion matrix to identify classes that are too close to each other and often get mistaken (ambiguity). In this case consider merging these classes together. If that isn't possible, consider labeling more documents with both classes to help the model differentiate between them.
+
+All correct predictions are located in the diagonal of the table, so it is easy to visually inspect the table for prediction errors, as they will be represented by values outside the diagonal.
++
+You can calculate the class-level and model-level evaluation metrics from the confusion matrix:
+
+* The values in the diagonal are the *true positive* values of each class.
+* The sum of the values in a class's row (excluding the diagonal) is the *false positive* count of that class.
+* The sum of the values in a class's column (excluding the diagonal) is the *false negative* count of that class.
+
+Similarly,
+
+* The *true positive* count of the model is the sum of *true positives* for all classes.
+* The *false positive* count of the model is the sum of *false positives* for all classes.
+* The *false negative* count of the model is the sum of *false negatives* for all classes.
++
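+
+A small sketch of that calculation follows; the matrix values are invented for illustration, and the row/column reading follows the bullets above:
+
+```python
+# Sketch of deriving class-level counts from a confusion matrix, following the
+# description above (diagonal = true positives; the rest of a class's row and
+# column give its false positives and false negatives, respectively).
+import numpy as np
+
+classes = ["action", "comedy", "romance"]
+matrix = np.array([
+    [10, 2, 1],
+    [3, 12, 0],
+    [0, 1, 9],
+])
+
+for i, name in enumerate(classes):
+    tp = matrix[i, i]
+    fp = matrix[i, :].sum() - tp   # rest of the class's row
+    fn = matrix[:, i].sum() - tp   # rest of the class's column
+    print(f"{name}: TP={tp}, FP={fp}, FN={fn}")
+```
+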
+## Next steps
+
+* [View a model's performance in Language Studio](../how-to/view-model-evaluation.md)
+* [Train a model](../how-to/train-model.md)
cognitive-services Fail Over https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/fail-over.md
+
+ Title: Back up and recover your custom text classification models
+
+description: Learn how to save and recover your custom text classification models.
++++++ Last updated : 04/22/2022++++
+# Back up and recover your custom text classification models
+
+When you create a Language resource, you specify a region for it to be created in. From then on, your resource and all of the operations related to it take place in the specified Azure server region. It's rare, but not impossible, to encounter a network issue that affects an entire region. If your solution needs to always be available, you should design it to fail over into another region. This requires two Azure Language resources in different regions and the ability to sync custom models across regions.
+
+If your app or business depends on the use of a custom text classification model, we recommend that you create a replica of your project in another supported region. That way, if a regional outage occurs, you can access your model in the fail-over region where you replicated your project.
+
+Replicating a project means that you export your project metadata and assets and import them into a new project. This only makes a copy of your project settings and tagged data. You still need to [train](./how-to/train-model.md) and [deploy](how-to/deploy-model.md) the models to be available for use with [prediction APIs](https://aka.ms/ct-runtime-swagger).
+
+In this article, you will learn how to use the export and import APIs to replicate your project from one resource to another in a different supported geographical region, how to keep your projects in sync, and the changes you need to make to your runtime consumption.
+
+## Prerequisites
+
+* Two Azure Language resources in different Azure regions. [Create a Language resource](./how-to/create-project.md#create-a-language-resource) and connect them to an Azure storage account. It's recommended that you connect both of your Language resources to the same storage account, though this might introduce slightly higher latency when importing your project, and training a model.
+
+## Get your resource keys and endpoint
+
+Use the following steps to get the keys and endpoint of your primary and secondary resources. These will be used in the following steps.
++
+> [!TIP]
+> Keep a note of keys and endpoints for both primary and secondary resources. Use these values to replace the following placeholders:
+`{PRIMARY-ENDPOINT}`, `{PRIMARY-RESOURCE-KEY}`, `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}`.
+> Also take note of your project name, your model name and your deployment name. Use these values to replace the following placeholders: `{PROJECT-NAME}`, `{MODEL-NAME}` and `{DEPLOYMENT-NAME}`.
+
+## Export your primary project assets
+
+Start by exporting the project assets from the project in your primary resource.
+
+### Submit export job
+
+Replace the placeholders in the following request with your `{PRIMARY-ENDPOINT}` and `{PRIMARY-RESOURCE-KEY}` that you obtained in the first step.
++
+### Get export job status
+
+Replace the placeholders in the following request with your `{PRIMARY-ENDPOINT}` and `{PRIMARY-RESOURCE-KEY}` that you obtained in the first step.
++
+Copy the response body as you will use it as the body for the next import job.
+
+## Import to a new project
+
+Now go ahead and import the exported project assets in your new project in the secondary region so you can replicate it.
+
+### Submit import job
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
++
+### Get import job status
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
+++
+## Train your model
+
+After importing your project, you have only copied the project's assets and metadata. You still need to train your model, which will incur usage on your account.
+
+### Submit training job
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
+++
+### Get training job status
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
++
+## Deploy your model
+
+This is the step where you make your trained model available for consumption via the [runtime prediction API](https://aka.ms/ct-runtime-swagger).
+
+> [!TIP]
+> Use the same deployment name as your primary project for easier maintenance and minimal changes to your system to handle redirecting your traffic.
+
+### Submit deployment job
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
++
+### Get the deployment status
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
++
+## Changes in calling the runtime
+
+Within your system, at the step where you call the [runtime prediction API](https://aka.ms/ct-runtime-swagger), check the response code returned from the submit task API. If you observe a **consistent** failure in submitting the request, this could indicate an outage in your primary region. A single failure doesn't mean an outage; it may be a transient issue. Retry submitting the job through the secondary resource you have created. For the second request use your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}`; if you have followed the steps above, `{PROJECT-NAME}` and `{DEPLOYMENT-NAME}` would be the same, so no changes are required to the request body.
+
+If you revert to using your secondary resource, you will observe a slight increase in latency because of the difference in regions where your model is deployed.
+
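+A failover wrapper along these lines might look like the following sketch. The endpoints, the API path and version, and the retry policy are placeholders and assumptions, not a prescribed implementation:
+
+```python
+# Illustrative failover wrapper: submit the runtime request to the primary resource
+# and fall back to the secondary resource on consistent failures.
+import time
+import requests
+
+PRIMARY = {"endpoint": "<primary-endpoint>", "key": "<primary-resource-key>"}
+SECONDARY = {"endpoint": "<secondary-endpoint>", "key": "<secondary-resource-key>"}
+
+def submit_job(resource: dict, body: dict) -> requests.Response:
+    return requests.post(
+        f"{resource['endpoint']}/language/analyze-text/jobs",
+        params={"api-version": "2022-05-01"},          # assumed version
+        headers={"Ocp-Apim-Subscription-Key": resource["key"]},
+        json=body,
+        timeout=30,
+    )
+
+def submit_with_failover(body: dict, retries: int = 3) -> requests.Response:
+    for attempt in range(retries):
+        try:
+            response = submit_job(PRIMARY, body)
+            if response.status_code < 500:
+                return response                        # only retry on server-side errors
+        except requests.RequestException:
+            pass
+        time.sleep(2)
+    # Consistent failure on the primary region: the same project and deployment
+    # names are assumed on the secondary resource, so the body is unchanged.
+    return submit_job(SECONDARY, body)
+```
+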
+## Check if your projects are out of sync
+
+Maintaining the freshness of both projects is an important part of the process. You need to frequently check if any updates were made to your primary project so that you can move them over to your secondary project. This way, if your primary region fails and you move to the secondary region, you should expect similar model performance since it already contains the latest updates. Setting the frequency for checking if your projects are in sync is an important choice. We recommend that you do this check daily in order to guarantee the freshness of data in your secondary model.
+
+### Get project details
+
+Use the following URL to get your project details; one of the keys returned in the body indicates the last modified date of the project.
+Repeat the following step twice, once for your primary project and once for your secondary project, and compare the timestamps returned for both to check if they are out of sync.
++
+Repeat the same steps for your replicated project using `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}`. Compare the returned `lastModifiedDateTime` from both projects. If your primary project was modified more recently than your secondary one, you need to repeat the steps of [exporting](#export-your-primary-project-assets), [importing](#import-to-a-new-project), [training](#train-your-model) and [deploying](#deploy-your-model) your model.
++
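+
+The comparison could be scripted, for example as below. The URL path, API version, and placeholders are assumptions; the INCLUDE above shows the actual request:
+
+```python
+# Sketch of the sync check described above: fetch project details from both
+# resources and compare lastModifiedDateTime.
+import requests
+
+def last_modified(endpoint: str, key: str, project: str) -> str:
+    response = requests.get(
+        f"{endpoint}/language/authoring/analyze-text/projects/{project}",
+        params={"api-version": "2022-05-01"},          # assumed version
+        headers={"Ocp-Apim-Subscription-Key": key},
+    )
+    response.raise_for_status()
+    return response.json()["lastModifiedDateTime"]
+
+primary = last_modified("<primary-endpoint>", "<primary-resource-key>", "<project-name>")
+secondary = last_modified("<secondary-endpoint>", "<secondary-resource-key>", "<project-name>")
+
+if primary > secondary:   # ISO 8601 timestamps in the same format compare lexicographically
+    print("Projects are out of sync: export, import, train, and deploy again.")
+```
+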
+## Next steps
+
+In this article, you have learned how to use the export and import APIs to replicate your project to a secondary Language resource in another region. Next, explore the API reference docs to see what else you can do with the authoring APIs.
+
+* [Authoring REST API reference](https://aka.ms/ct-authoring-swagger)
+* [Runtime prediction REST API reference](https://aka.ms/ct-runtime-swagger)
cognitive-services Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/faq.md
+
+ Title: Custom text classification FAQ
+
+description: Learn about frequently asked questions when using the custom text classification API.
++++++ Last updated : 04/22/2022+++++
+# Frequently asked questions
+
+Find answers to commonly asked questions about concepts and scenarios related to custom text classification in Azure Cognitive Service for Language.
+
+## How do I get started with the service?
+
+See the [quickstart](./quickstart.md) to quickly create your first project, or view [how to create projects](how-to/create-project.md) for more details.
+
+## What are the service limits?
+
+See the [service limits article](service-limits.md).
+
+## Which languages are supported in this feature?
+
+See the [language support](./language-support.md) article.
+
+## How many tagged files are needed?
+
+Generally, diverse and representative [tagged data](how-to/tag-data.md) leads to better results, given that the tagging is done precisely, consistently and completely. There's no set number of tagged classes that will make every model perform well. Performance is highly dependent on your schema and the ambiguity of your schema. Ambiguous classes need more tags. Performance also depends on the quality of your tagging. The recommended number of tagged instances per class is 50.
+
+## Training is taking a long time, is this expected?
+
+The training process can take some time. As a rough estimate, the expected training time for files with a combined length of 12,800,000 characters is 6 hours.
+
+## How do I build my custom model programmatically?
+
+You can use the [REST APIs](https://westus.dev.cognitive.microsoft.com/docs/services/language-authoring-clu-apis-2022-03-01-preview/operations/Projects_TriggerImportProjectJob) to build your custom models. Follow this [quickstart](quickstart.md?pivots=rest-api) to get started with creating a project and creating a model through APIs for examples of how to call the Authoring API.
+
+When you're ready to start [using your model to make predictions](#how-do-i-use-my-trained-model-to-make-predictions), you can use the REST API, or the client library.
+
+## What is the recommended CI/CD process?
+
+You can train multiple models on the same dataset within the same project. After you have trained your model successfully, you can [view its evaluation](how-to/view-model-evaluation.md). You can [deploy and test](quickstart.md#deploy-your-model) your model within [Language studio](https://aka.ms/languageStudio). You can add or remove tags from your data, train a **new** model, and test it as well. View the [service limits](service-limits.md) to learn about the maximum number of trained models within the same project. When you [tag your data](how-to/tag-data.md#label-your-data), you can determine how your dataset is split into training and testing sets.
+
+## Does a low or high model score guarantee bad or good performance in production?
+
+Model evaluation may not always be comprehensive, depending on:
+* If the **test set** is too small, the good/bad scores are not representative of the model's actual performance. Also, if a specific class is missing or under-represented in your test set, it will affect model performance.
+* **Data diversity**: if your data only covers a few scenarios/examples of the text you expect in production, your model will not be exposed to all possible scenarios and might perform poorly on the scenarios it hasn't been trained on.
+* **Data representation**: if the dataset used to train the model is not representative of the data that would be introduced to the model in production, model performance will be affected greatly.
+
+See the [data selection and schema design](how-to/design-schema.md) article for more information.
+
+## How do I improve model performance?
+
+* View the model's [confusion matrix](how-to/view-model-evaluation.md). If you notice that a certain class is frequently classified incorrectly, consider adding more tagged instances for this class. If you notice that two classes are frequently classified as each other, the schema is ambiguous; consider merging them into one class for better performance.
+
+* [Examine data distribution](concepts/evaluation-metrics.md). If one of the classes has many more tagged instances than the others, your model may be biased towards this class. Add more data to the other classes, or remove most of the examples from the dominating class.
+
+* Review the [data selection and schema design](how-to/design-schema.md) article for more information.
+
+* [Review your test set](how-to/improve-model.md) to see predicted and tagged classes side-by-side so you can get a better idea of your model performance, and decide if any changes in the schema or the tags are necessary.
+
+## When I retrain my model I get different results, why is this?
+
+* When you [tag your data](how-to/tag-data.md#label-your-data), you can determine how your dataset is split into training and testing sets. You can also have your data split randomly into training and testing sets. In that case, there is no guarantee that the reflected model evaluation is on the same test set, so the results are not comparable.
+
+* If you are retraining the same model, your test set will be the same, but you might notice a slight change in predictions made by the model. This is because the trained model is not robust enough, which is a factor of how representative and distinct your data is, and the quality of your tagged data.
+
+## How do I get predictions in different languages?
+
+First, you need to enable the multilingual option when [creating your project](how-to/create-project.md) or you can enable it later from the project settings page. After you train and deploy your model, you can start querying it in multiple languages. You may get varied results for different languages. To improve the accuracy of any language, add more tagged instances to your project in that language to introduce the trained model to more syntax of that language. See [language support](language-support.md#multi-lingual-option) for more information.
+
+## I trained my model, but I can't test it
+
+You need to [deploy your model](quickstart.md#deploy-your-model) before you can test it.
+
+## How do I use my trained model to make predictions?
+
+After deploying your model, you [call the prediction API](how-to/call-api.md), using either the [REST API](how-to/call-api.md?tabs=rest-api) or [client libraries](how-to/call-api.md?tabs=client).
+
+## Data privacy and security
+
+Custom text classification is a data processor for General Data Protection Regulation (GDPR) purposes. In compliance with GDPR policies, custom text classification users have full control to view, export, or delete any user content either through the [Language Studio](https://aka.ms/languageStudio) or programmatically by using [REST APIs](https://westus.dev.cognitive.microsoft.com/docs/services/language-authoring-clu-apis-2022-03-01-preview/operations/Projects_TriggerImportProjectJob).
+
+Your data is only stored in your Azure Storage account. Custom text classification only has access to read from it during training.
+
+## How do I clone my project?
+
+To clone your project, you need to use the export API to export the project assets, and then import them into a new project. See the [REST APIs](https://westus.dev.cognitive.microsoft.com/docs/services/language-authoring-clu-apis-2022-03-01-preview/operations/Projects_TriggerImportProjectJob) reference for both operations.
+
+## Next steps
+
+* [Custom text classification overview](overview.md)
+* [Quickstart](quickstart.md)
cognitive-services Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/glossary.md
+
+ Title: Definitions used in custom text classification
+
+description: Learn about definitions used in custom text classification.
++++++ Last updated : 04/14/2022++++
+# Terms and definitions used in custom text classification
+
+Use this article to learn about some of the definitions and terms you may encounter when using custom text classification.
+
+## Class
+
+A class is a user-defined category that indicates the overall classification of the text. Developers label their data with their classes before they pass it to the model for training.
+
+## F1 score
+The F1 score is a function of precision and recall. It's needed when you seek a balance between [precision](#precision) and [recall](#recall).
+
+## Model
+
+A model is an object that's trained to do a certain task, in this case text classification tasks. Models are trained by providing labeled data to learn from so they can later be used for classification tasks.
+
+* **Model training** is the process of teaching your model how to classify documents based on your labeled data.
+* **Model evaluation** is the process that happens right after training to determine how well your model performs.
+* **Deployment** is the process of assigning your model to a deployment to make it available for use via the [prediction API](https://aka.ms/ct-runtime-swagger).
+
+## Precision
+Measures how precise/accurate your model is. It's the ratio between the correctly identified positives (true positives) and all identified positives. The precision metric reveals how many of the predicted classes are correctly labeled.
+
+## Project
+
+A project is a work area for building your custom ML models based on your data. Your project can only be accessed by you and others who have access to the Azure resource being used.
+As a prerequisite to creating a custom text classification project, you have to connect your resource to a storage account with your dataset when you [create a new project](how-to/create-project.md). Your project automatically includes all the `.txt` files available in your container.
+
+Within your project, you can do the following:
+
+* **Label your data**: The process of labeling your data so that when you train your model, it learns which classes to assign to your documents.
+* **Build and train your model**: The core step of your project, where your model starts learning from your labeled data.
+* **View model evaluation details**: Review your model's performance to decide whether there is room for improvement, or you are satisfied with the results.
+* **Improve model**: Identify what went wrong with your model's predictions, and how to improve its performance.
+* **Deployment**: After you have reviewed your model's performance and decided it's fit to be used in your environment, assign it to a deployment to be able to query it. Assigning the model to a deployment makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger).
+* **Test model**: After deploying your model, you can use this operation in [Language Studio](https://aka.ms/LanguageStudio) to try out your deployment and see how it would perform in production.
+
+### Project types
+
+Custom text classification supports two types of projects:
+
+* **Single label classification** - you can assign a single class for each document in your dataset. For example, a movie script could only be classified as "Romance" or "Comedy".
+* **Multi label classification** - you can assign multiple classes for each document in your dataset. For example, a movie script could be classified as "Comedy" or "Romance" and "Comedy".
+
+## Recall
+Recall measures the model's ability to predict actual positive classes. It's the ratio between the predicted true positives and what was actually labeled. The recall metric reveals how many of the actual classes are correctly predicted.
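+
+In terms of true positives (TP) and false negatives (FN):
+
+$$ \text{recall} = \frac{TP}{TP + FN} $$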
++
+## Next steps
+
+* [Data and service limits](service-limits.md).
+* [Custom text classification overview](../overview.md).
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/how-to/call-api.md
+
+ Title: Send a text classification request to your model
+description: Learn how to send a request for custom text classification.
+++++++ Last updated : 03/15/2022+
+ms.devlang: csharp, python
+++
+# Query deployment to classify text
+
+After the deployment is added successfully, you can query the deployment to classify text based on the model you assigned to the deployment.
+You can query the deployment programmatically by using the [Prediction API](https://aka.ms/ct-runtime-api) or through the [client libraries (Azure SDK)](#get-task-results).
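+
+For example, a minimal sketch of calling a deployed single label classification model with the Python client library might look like the following. The endpoint, key, project name, and deployment name are placeholders, and the exact method name (`begin_single_label_classify` in the GA `azure-ai-textanalytics` package, `begin_single_category_classify` in earlier previews) depends on your SDK version, so check the linked samples for the version you install.
+
+```python
+from azure.core.credentials import AzureKeyCredential
+from azure.ai.textanalytics import TextAnalyticsClient
+
+# Placeholder values - replace with your Language resource endpoint and key,
+# plus the project and deployment names you created.
+endpoint = "https://<your-language-resource>.cognitiveservices.azure.com/"
+key = "<your-resource-key>"
+
+client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))
+
+documents = ["The plot follows two rival chefs who fall in love during a cooking contest."]
+
+# Start the long-running classification job against your deployed model.
+poller = client.begin_single_label_classify(
+    documents,
+    project_name="<your-project-name>",
+    deployment_name="<your-deployment-name>",
+)
+
+for result in poller.result():
+    if not result.is_error:
+        top = result.classifications[0]
+        print(f"Predicted class: {top.category} (confidence: {top.confidence_score:.2f})")
+```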
+
+## Test deployed model
+
+You can use the Language Studio to submit the custom text classification task and visualize the results.
++++
+## Send a text classification request to your model
+
+> [!TIP]
+> You can [test your model in Language Studio](../quickstart.md?pivots=language-studio#test-your-model) by sending sample text to classify it.
+
+# [Language Studio](#tab/language-studio)
++
+# [REST API](#tab/rest-api)
+
+First, you will need to get your resource key and endpoint:
++
+### Submit a custom text classification task
+++
+### Get task results
++
+# [Client libraries (Azure SDK)](#tab/client-libraries)
+
+First, you will need to get your resource key and endpoint:
++
+3. Download and install the client library package for your language of choice:
+
+ |Language |Package version |
+ |||
+ |.NET | [5.2.0-beta.2](https://www.nuget.org/packages/Azure.AI.TextAnalytics/5.2.0-beta.2) |
+ |Java | [5.2.0-beta.2](https://mvnrepository.com/artifact/com.azure/azure-ai-textanalytics/5.2.0-beta.2) |
+ |JavaScript | [5.2.0-beta.2](https://www.npmjs.com/package/@azure/ai-text-analytics/v/5.2.0-beta.2) |
+ |Python | [5.2.0b2](https://pypi.org/project/azure-ai-textanalytics/5.2.0b2/) |
+
+4. After you've installed the client library, use the following samples on GitHub to start calling the API.
+
+ Single label classification:
+ * [C#](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/samples/Sample10_SingleCategoryClassify.md)
+ * [Java](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/textanalytics/azure-ai-textanalytics/src/samples/java/com/azure/ai/textanalytics/lro/ClassifyDocumentSingleCategory.java)
+ * [JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/textanalytics/ai-text-analytics/samples/v5/javascript/customText.js)
+ * [Python](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_single_category_classify.py)
+
+ Multi label classification:
+ * [C#](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/samples/Sample11_MultiCategoryClassify.md)
+ * [Java](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/textanalytics/azure-ai-textanalytics/src/samples/java/com/azure/ai/textanalytics/lro/ClassifyDocumentMultiCategory.java)
+ * [JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/textanalytics/ai-text-analytics/samples/v5/javascript/customText.js)
+ * [Python](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_multi_category_classify.py)
+
+5. See the following reference documentation for more information on the client, and return object:
+
+ * [C#](/dotnet/api/azure.ai.textanalytics?view=azure-dotnet-preview&preserve-view=true)
+ * [Java](/java/api/overview/azure/ai-textanalytics-readme?view=azure-java-preview&preserve-view=true)
+ * [JavaScript](/javascript/api/overview/azure/ai-text-analytics-readme?view=azure-node-preview&preserve-view=true)
+ * [Python](/python/api/azure-ai-textanalytics/azure.ai.textanalytics?view=azure-python-preview&preserve-view=true)
++
+## Next steps
+
+* [Custom text classification overview](../overview.md)
cognitive-services Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/how-to/create-project.md
+
+ Title: How to create custom text classification projects
+
+description: Learn about the steps for using Azure resources with custom text classification.
++++++ Last updated : 05/06/2022++++
+# How to create a custom text classification project
+
+Use this article to learn how to set up the requirements for starting with custom text classification and create a project.
+
+## Prerequisites
+
+Before you start using custom text classification, you will need:
+
+* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services).
+
+## Create a Language resource
+
+Before you start using custom text classification, you will need an Azure Language resource. It is recommended to create your Language resource and connect a storage account to it in the Azure portal. Creating a resource in the Azure portal lets you create an Azure storage account at the same time, with all of the required permissions pre-configured. You can also read further in the article to learn how to use a pre-existing resource, and configure it to work with custom text classification.
+
+You will also need an Azure storage account, where you will upload your `.txt` documents that will be used to train a model to classify text.
+
+> [!NOTE]
+> * You need to have an **owner** role assigned on the resource group to create a Language resource.
+> * If you will connect a pre-existing storage account, you should have an **owner** role assigned to it.
+
+## Create Language resource and connect storage account
+
+### [Using the Azure portal](#tab/azure-portal)
++
+### [Using Language Studio](#tab/language-studio)
++
+### [Using Azure PowerShell](#tab/azure-powershell)
++++
+> [!NOTE]
+> * The process of connecting a storage account to your Language resource is irreversible; it cannot be disconnected later.
+> * You can only connect your language resource to one storage account.
+
+## Using a pre-existing Language resource
+++
+## Create a custom text classification project
+
+Once your resource and storage container are configured, create a new custom text classification project. A project is a work area for building your custom AI models based on your data. Your project can only be accessed by you and others who have access to the Azure resource being used. If you have labeled data, you can [import it](#import-a-custom-text-classification-project) to get started.
+
+### [Language Studio](#tab/studio)
+++
+### [Rest APIs](#tab/apis)
++++
+## Import a custom text classification project
+
+If you have already labeled data, you can use it to get started with the service. Make sure that your labeled data follows the [accepted data formats](../concepts/data-formats.md).
+
+### [Language Studio](#tab/studio)
++
+### [Rest APIs](#tab/apis)
++++
+## Get project details
+
+### [Language Studio](#tab/studio)
++
+### [Rest APIs](#tab/apis)
++++
+## Delete project
+
+### [Language Studio](#tab/studio)
++
+### [Rest APIs](#tab/apis)
++++
+## Next steps
+
+* You should have an idea of the [project schema](design-schema.md) you will use to label your data.
+
+* After your project is created, you can start [labeling your data](tag-data.md), which will inform your text classification model how to interpret text, and is used for training and evaluation.
cognitive-services Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/how-to/deploy-model.md
+
+ Title: How to submit custom text classification tasks
+
+description: Learn how to deploy a model for custom text classification.
++++++ Last updated : 05/04/2022++++
+# Deploy a model and classify text using the runtime API
+
+Once you are satisfied with how your model performs, it's ready to be deployed and used to classify text. Deploying a model makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger).
+
+## Prerequisites
+
+* [A custom text classification project](create-project.md) with a configured Azure storage account.
+* Text data that has [been uploaded](design-schema.md#data-preparation) to your storage account.
+* [Labeled data](tag-data.md) and a successfully [trained model](train-model.md).
+* Reviewed the [model evaluation details](view-model-evaluation.md) to determine how your model is performing.
+* (optional) [Made improvements](improve-model.md) to your model if its performance isn't satisfactory.
+
+See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
+
+## Deploy model
+
+After you have reviewed your model's performance and decided it can be used in your environment, you need to assign it to a deployment to be able to query it. Assigning the model to a deployment makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger). It is recommended to create a deployment named `production`, to which you assign the best model you have built so far and use it in your system. You can create another deployment called `staging`, to which you can assign the model you're currently working on to be able to test it. You can have a maximum of 10 deployments in your project.
+
+# [Language Studio](#tab/language-studio)
+
+
+# [REST APIs](#tab/rest-api)
+
+### Submit deployment job
++
+### Get deployment job status
++++
+## Swap deployments
+
+You can swap deployments after you've tested a model assigned to one deployment and want to assign it to another. Swapping deployments involves taking the model assigned to the first deployment and assigning it to the second deployment, then taking the model assigned to the second deployment and assigning it to the first. This could be used to swap your `production` and `staging` deployments when you want to take the model assigned to `staging` and assign it to `production`.
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
+++++
+## Delete deployment
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
+++
cognitive-services Design Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/how-to/design-schema.md
+
+ Title: How to prepare data and define a custom classification schema
+
+description: Learn about data selection, preparation, and creating a schema for custom text classification projects.
++++++ Last updated : 05/05/2022++++
+# How to prepare data and define a text classification schema
+
+In order to create a custom text classification model, you will need quality data to train it. This article covers how you should select and prepare your data, along with defining a schema. Defining the schema is the first step in the [project development lifecycle](../overview.md#project-development-lifecycle), and it defines the classes that you need your model to classify your text into at runtime.
+
+## Schema design
+
+The schema defines the classes that you need your model to classify your text into at runtime.
+
+* **Review and identify**: Review documents in your dataset to be familiar with their structure and content, then identify how you want to classify your data.
+
+ For example, if you are classifying support tickets, you might need the following classes: *login issue*, *hardware issue*, *connectivity issue*, and *new equipment request*.
+
+* **Avoid ambiguity in classes**: Ambiguity arises when the classes you specify share similar meaning to one another. The more ambiguous your schema is, the more labeled data you may need to differentiate between different classes.
+
+ For example, if you are classifying food recipes, they may be similar to an extent. To differentiate between *dessert recipe* and *main dish recipe*, you may need to label more examples to help your model distinguish between the two classes. Avoiding ambiguity saves time and yields better results.
+
+* **Out of scope data**: When using your model in production, consider adding an *out of scope* class to your schema if you expect documents that don't belong to any of your classes. Then add a few documents to your dataset to be labeled as *out of scope*. The model can learn to recognize irrelevant documents, and predict their labels accordingly.
++
+## Data selection
+
+The quality of data you train your model with affects model performance greatly.
+
+* Use real-life data that reflects your domain's problem space to effectively train your model. You can use synthetic data to accelerate the initial model training process, but it will likely differ from your real-life data and make your model less effective when used.
+
+* Balance your data distribution as much as possible without deviating far from the distribution in real life.
+
+* Use diverse data whenever possible to avoid overfitting your model. Less diversity in training data may lead to your model learning spurious correlations that may not exist in real-life data.
+
+* Avoid duplicate documents in your data. Duplicate data has a negative effect on the training process, model metrics, and model performance.
+
+* Consider where your data comes from. If you are collecting data from one person, department, or part of your scenario, you are likely missing diversity that may be important for your model to learn about.
+
+> [!NOTE]
+> If your documents are in multiple languages, select the **multiple languages** option during [project creation](../quickstart.md) and set the **language** option to the language of the majority of your documents.
+
+## Data preparation
+
+As a prerequisite for creating a custom text classification project, your training data needs to be uploaded to a blob container in your storage account. You can create and upload training documents from Azure directly, or by using the Azure Storage Explorer tool. Using the Azure Storage Explorer tool allows you to upload more data quickly.
+
+* [Create and upload documents from Azure](../../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container)
+* [Create and upload documents using Azure Storage Explorer](../../../../vs-azure-tools-storage-explorer-blobs.md)
+
+You can only use `.txt` documents for custom text classification. If your data is in another format, you can use the [CLUtils parse command](https://github.com/microsoft/CognitiveServicesLanguageUtilities/blob/main/CustomTextAnalytics.CLUtils/Solution/CogSLanguageUtilities.ViewLayer.CliCommands/Commands/ParseCommand/README.md) to change your file format.
+
+You can upload an annotated dataset, or you can upload an unannotated one and [label your data](../how-to/tag-data.md) in Language Studio.
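+
+If you prefer to script the upload instead of using the portal or Storage Explorer, a minimal sketch with the `azure-storage-blob` Python package might look like the following. The connection string, container name, and local folder are placeholders you would replace with your own values.
+
+```python
+import os
+from azure.storage.blob import BlobServiceClient
+
+# Placeholder values - replace with your storage connection string, the container
+# connected to your Language resource, and your local folder of .txt documents.
+connection_string = "<your-storage-connection-string>"
+container_name = "<your-container-name>"
+local_folder = "training-documents"
+
+container = BlobServiceClient.from_connection_string(connection_string) \
+    .get_container_client(container_name)
+
+# Upload every .txt file to the root of the container (files must be at the root).
+for file_name in os.listdir(local_folder):
+    if file_name.endswith(".txt"):
+        with open(os.path.join(local_folder, file_name), "rb") as data:
+            container.upload_blob(name=file_name, data=data, overwrite=True)
+            print(f"Uploaded {file_name}")
+```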
+
+## Test set
+
+When defining the testing set, make sure to include example documents that are not present in the training set. Defining the testing set is an important step in calculating the [model performance](view-model-evaluation.md#model-details). Also, make sure that the testing set includes documents that represent all classes used in your project.
+
+## Next steps
+
+If you haven't already, create a custom text classification project. If it's your first time using custom text classification, consider following the [quickstart](../quickstart.md) to create an example project. You can also see the [project requirements](../how-to/create-project.md) for more details on what you need to create a project.
cognitive-services Improve Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/how-to/improve-model.md
+
+ Title: How to improve custom text classification model performance
+
+description: Learn about improving a model for Custom Text Classification.
++++++ Last updated : 05/05/2022++++
+# Improve custom text classification model performance
+
+In some cases, the model is expected to make predictions that are inconsistent with your labeled classes. Use this article to learn how to observe these inconsistencies and decide on the changes needed to improve your model's performance.
++
+## Prerequisites
+
+To improve a model, you'll need to have:
+
+* [A custom text classification project](create-project.md) with a configured Azure blob storage account.
+* Text data that has [been uploaded](design-schema.md#data-preparation) to your storage account.
+* [Labeled data](tag-data.md) to successfully [train a model](train-model.md).
+* Reviewed the [model evaluation details](view-model-evaluation.md) to determine how your model is performing.
+* Familiarized yourself with the [evaluation metrics](../concepts/evaluation-metrics.md).
+
+See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
+
+## Review test set predictions
+
+After you have viewed your [model's evaluation](view-model-evaluation.md), you'll have formed an idea of your model's performance. On this page, you can view how your model performs compared to how it's expected to perform. You can view predicted and labeled classes side by side for each document in your test set. You can review documents that were predicted differently than they were originally labeled.
++
+To review inconsistent predictions in the [test set](train-model.md#data-splitting) from within the [Language Studio](https://aka.ms/LanguageStudio):
+
+1. Select **Improve model** from the left side menu.
+
+2. Choose your trained model from **Model** drop-down menu.
+
+3. For easier analysis, you can toggle **Show incorrect predictions only** to view only the documents that were predicted incorrectly.
+
+Use the following information to help guide model improvements.
+
+* If a file that should belong to class `X` is consistently classified as class `Y`, it means that there is ambiguity between these classes and you need to reconsider your schema. Learn more about [data selection and schema design](design-schema.md#schema-design).
+
+* Another solution is to add more data to these classes, to help the model improve and differentiate between them.
+
+ :::image type="content" source="../media/review-validation-set.png" alt-text="A screenshot showing model predictions in Language Studio." lightbox="../media/review-validation-set.png":::
++
+## Next steps
+
+* Once you're satisfied with how your model performs, you can [deploy your model](call-api.md).
cognitive-services Tag Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/how-to/tag-data.md
+
+ Title: How to label your data for custom classification - Azure Cognitive Services
+
+description: Learn about how to label your data for use with the custom text classification.
++++++ Last updated : 05/05/2022++++
+# Label text data for training your model
+
+Before training your model, you need to label your documents with the classes you want to categorize them into. Data labeling is a crucial step in the development lifecycle; in this step, you create the classes you want to categorize your data into and label your documents with these classes. This data will be used in the next step when training your model, so that your model can learn from the labeled data. If you already have labeled data, you can directly [import](create-project.md) it into your project, but you need to make sure that your data follows the [accepted data format](../concepts/data-formats.md).
+
+Before creating a custom text classification model, you need to have labeled data first. If your data isn't labeled already, you can label it in the [Language Studio](https://aka.ms/languageStudio). Labeled data informs the model how to interpret text, and is used for training and evaluation.
+
+## Prerequisites
+
+Before you can label data, you need:
+
+* [A successfully created project](create-project.md) with a configured Azure blob storage account.
+* Text data that has [been uploaded](design-schema.md#data-preparation) to your storage account.
+
+See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
+
+## Data labeling guidelines
+
+After [preparing your data, designing your schema](design-schema.md) and [creating your project](create-project.md), you will need to label your data. Labeling your data is important so your model knows which documents will be associated with the classes you need. When you label your data in [Language Studio](https://aka.ms/languageStudio) (or import labeled data), these labels will be stored in the JSON file in your storage container that you've connected to this project.
+
+As you label your data, keep in mind:
+
+* In general, more labeled data leads to better results, provided the data is labeled accurately.
+
+* There is no fixed number of labels that can guarantee your model will perform its best. Model performance depends on possible ambiguity in your [schema](design-schema.md) and on the quality of your labeled data. Nevertheless, we recommend 50 labeled documents per class.
+
+## Label your data
+
+Use the following steps to label your data:
+
+1. Go to your project page in [Language Studio](https://aka.ms/languageStudio).
+
+2. From the left side menu, select **Data labeling**. You can find a list of all documents in your storage container. See the image below.
+
+ >[!TIP]
+    > You can use the filters in the top menu to view the unlabeled files so that you can start labeling them.
+ > You can also use the filters to view the documents that are labeled with a specific class.
+
+3. Change to a single file view from the left side in the top menu or select a specific file to start labeling. You can find a list of all `.txt` files available in your projects to the left. You can use the **Back** and **Next** button from the bottom of the page to navigate through your documents.
+
+ > [!NOTE]
+ > If you enabled multiple languages for your project, you will find a **Language** dropdown in the top menu, which lets you select the language of each document.
++
+4. In the right side pane, select **Add class** to add classes to your project so you can start labeling your data with them.
+
+5. Start labeling your files.
+
+ # [Multi label classification](#tab/multi-classification)
+
+    **Multi label classification**: your file can be labeled with multiple classes. You can do so by selecting all applicable check boxes next to the classes you want to label this document with.
+
+ :::image type="content" source="../media/multiple.png" alt-text="A screenshot showing the multiple label classification tag page." lightbox="../media/multiple.png":::
+
+ # [Single label classification](#tab/single-classification)
+
+ **Single label classification**: your file can only be labeled with one class; you can do so by selecting one of the buttons next to the class you want to label the document with.
+
+ :::image type="content" source="../media/single.png" alt-text="A screenshot showing the single label classification tag page" lightbox="../media/single.png":::
+
+
+
+6. In the right side pane, under the **Labels** pivot, you can find all the classes in your project and the count of labeled instances for each.
+
+7. In the bottom section of the right side pane, you can add the current file you are viewing to the training set or the testing set. By default, all documents are added to your training set. Learn more about [training and testing sets](train-model.md#data-splitting) and how they are used for model training and evaluation.
+
+ > [!TIP]
+    > If you are planning on using **Automatic** data splitting, use the default option of assigning all the documents into your training set.
+
+8. Under the **Distribution** pivot, you can view the distribution across training and testing sets. You have two options for viewing:
+    * *Total instances*, where you can view the count of all labeled instances of a specific class.
+    * *Documents with at least one label*, where each document is counted if it contains at least one labeled instance of this class.
+
+9. While you're labeling, your changes are synced periodically. If they haven't been saved yet, you will find a warning at the top of your page. If you want to save manually, select the **Save labels** button at the bottom of the page.
+
+## Remove labels
+
+If you want to remove a label, uncheck the button next to the class.
+
+## Delete classes
+
+To delete a class, click on the delete icon next to the class you want to remove. Deleting a class will remove all its labeled instances from your dataset.
+
+## Next steps
+
+After you've labeled your data, you can begin [training a model](train-model.md) that will learn based on your data.
cognitive-services Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/how-to/train-model.md
+
+ Title: How to train your custom text classification model - Azure Cognitive Services
+
+description: Learn about how to train your model for custom text classification.
++++++ Last updated : 05/05/2022++++
+# How to train a custom text classification model
+
+Training is the process where the model learns from your [labeled data](tag-data.md). After training is completed, you will be able to [view the model's performance](view-model-evaluation.md) to determine if you need to [improve your model](improve-model.md).
+
+To train a model, start a training job. Only successfully completed jobs create a usable model. Training jobs expire after seven days. After this period, you won't be able to retrieve the job details. If your training job completed successfully and a model was created, it won't be affected by the job expiration. You can only have one training job running at a time, and you can't start other jobs in the same project.
+
+Training can take anywhere from a few minutes when dealing with a small number of documents, up to several hours, depending on the dataset size and the complexity of your schema.
+++
+## Prerequisites
+
+Before you train your model, you need:
+
+* [A successfully created project](create-project.md) with a configured Azure blob storage account.
+* Text data that has [been uploaded](design-schema.md#data-preparation) to your storage account.
+* [Labeled data](tag-data.md)
+
+See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
+
+## Data splitting
+
+Before you start the training process, labeled documents in your project are divided into a training set and a testing set. Each one of them serves a different function.
+The **training set** is used in training the model, this is the set from which the model learns the class/classes assigned to each document.
+The **testing set** is a blind set that is not introduced to the model during training but only during evaluation.
+After the model is trained successfully, it is used to make predictions from the documents in the testing set. Based on these predictions, the model's [evaluation metrics](../concepts/evaluation-metrics.md) will be calculated.
+It is recommended to make sure that all your classes are adequately represented in both the training and testing set.
+
+Custom text classification supports two methods for data splitting:
+
+* **Automatically splitting the testing set from training data**: The system will split your labeled data between the training and testing sets, according to the percentages you choose. The recommended percentage split is 80% for training and 20% for testing.
+
+ > [!NOTE]
+    > If you choose the **Automatically splitting the testing set from training data** option, only the data assigned to the training set will be split according to the percentages provided.
+
+* **Use a manual split of training and testing data**: This method enables users to define which labeled documents should belong to which set. This step is only enabled if you have added documents to your testing set during [data labeling](tag-data.md).
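+
+For example, if you want to decide a manual split yourself before assigning documents to sets during [data labeling](tag-data.md), a generic sketch of a reproducible 80/20 split over your document names might look like this (the file names are hypothetical):
+
+```python
+import random
+
+# Hypothetical list of labeled document names from your storage container.
+documents = [f"ticket_{i}.txt" for i in range(100)]
+
+random.seed(42)      # keep the split reproducible between runs
+random.shuffle(documents)
+
+split_point = int(len(documents) * 0.8)   # recommended 80% training / 20% testing
+training_set = documents[:split_point]
+testing_set = documents[split_point:]
+
+print(f"Training: {len(training_set)} documents, testing: {len(testing_set)} documents")
+```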
+
+## Train model
+
+# [Language studio](#tab/Language-studio)
++
+# [REST APIs](#tab/REST-APIs)
+
+### Start training job
++
+### Get training job status
+
+Training could take some time depending on the size of your training data and the complexity of your schema. You can use the following request to keep polling the status of the training job until it is successfully completed.
+
+ [!INCLUDE [get training model status](../includes/rest-api/get-training-status.md)]
+++
+### Cancel training job
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++
+## Next steps
+
+After training is completed, you will be able to [view the model's performance](view-model-evaluation.md) to optionally [improve your model](improve-model.md) if needed. Once you're satisfied with your model, you can deploy it, making it available to use for [classifying text](call-api.md).
cognitive-services View Model Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/how-to/view-model-evaluation.md
+
+ Title: View a custom text classification model evaluation - Azure Cognitive Services
+
+description: Learn how to view the evaluation scores for a custom text classification model
++++++ Last updated : 05/24/2022++++
+# View your text classification model's evaluation and details
+
+After your model has finished training, you can view the model performance and see the predicted classes for the documents in the test set.
+
+> [!NOTE]
+> Using the **Automatically split the testing set from training data** option may result in different model evaluation results every time you [train a new model](train-model.md), as the test set is selected randomly from the data. To make sure that the evaluation is calculated on the same test set every time you train a model, make sure to use the **Use a manual split of training and testing data** option when starting a training job, and define your **Test** documents when [labeling data](tag-data.md).
+
+## Prerequisites
+
+Before viewing model evaluation you need:
+
+* [A custom text classification project](create-project.md) with a configured Azure blob storage account.
+* Text data that has [been uploaded](design-schema.md#data-preparation) to your storage account.
+* [Labeled data](tag-data.md)
+* A successfully [trained model](train-model.md)
+
+See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
+
+## Model details
+
+### [Language studio](#tab/language-studio)
++
+### [REST APIs](#tab/rest-api)
+
+### Single label classification
+
+### Multi label classification
++++
+## Delete model
+
+### [Language studio](#tab/language-studio)
+++
+### [REST APIs](#tab/rest-api)
+++++
+## Next steps
+
+As you review how your model performs, learn about the [evaluation metrics](../concepts/evaluation-metrics.md) that are used. Once you know whether your model performance needs to improve, you can begin [improving the model](improve-model.md).
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/language-support.md
+
+ Title: Language support in custom text classification
+
+description: Learn about which languages are supported by custom text classification.
++++++ Last updated : 05/06/2022++++
+# Language support for custom text classification
+
+Use this article to learn about the languages currently supported by the custom text classification feature.
+
+## Multi-lingual option
+
+With custom text classification, you can train a model in one language and use it to classify documents in another language. This feature is useful because it helps save time and effort. Instead of building separate projects for every language, you can handle a multi-lingual dataset in one project. Your dataset doesn't have to be entirely in the same language, but you should enable the multi-lingual option for your project while creating it, or later in project settings. If you notice your model performing poorly in certain languages during the evaluation process, consider adding more data in these languages to your training set.
+
+You can train your project entirely with English documents, and query it in French, German, Mandarin, Japanese, Korean, and other languages. Custom text classification makes it easy for you to scale your projects to multiple languages by using multilingual technology to train your models.
+
+Whenever you identify that a particular language is not performing as well as other languages, you can add more documents for that language in your project. In the [data labeling](how-to/tag-data.md) page in Language Studio, you can select the language of the document you're adding. When you introduce more documents for that language to the model, it is introduced to more of the syntax of that language, and learns to predict it better.
+
+You aren't expected to add the same number of documents for every language. You should build the majority of your project in one language, and only add a few documents in languages you observe aren't performing well. If you create a project that is primarily in English, and start testing it in French, German, and Spanish, you might observe that German doesn't perform as well as the other two languages. In that case, consider adding 5% of your original English documents in German, training a new model, and testing in German again. You should see better results for German queries. The more labeled documents you add, the more likely the results are to improve.
+
+When you add data in another language, you shouldn't expect it to negatively affect other languages.
+
+## Languages supported by custom text classification
+
+Custom text classification supports `.txt` files in the following languages:
+
+| Language | Language Code |
+| | |
+| Afrikaans | `af` |
+| Amharic | `am` |
+| Arabic | `ar` |
+| Assamese | `as` |
+| Azerbaijani | `az` |
+| Belarusian | `be` |
+| Bulgarian | `bg` |
+| Bengali | `bn` |
+| Breton | `br` |
+| Bosnian | `bs` |
+| Catalan | `ca` |
+| Czech | `cs` |
+| Welsh | `cy` |
+| Danish | `da` |
+| German | `de` |
+| Greek | `el` |
+| English (US) | `en-us` |
+| Esperanto | `eo` |
+| Spanish | `es` |
+| Estonian | `et` |
+| Basque | `eu` |
+| Persian (Farsi) | `fa` |
+| Finnish | `fi` |
+| French | `fr` |
+| Western Frisian | `fy` |
+| Irish | `ga` |
+| Scottish Gaelic | `gd` |
+| Galician | `gl` |
+| Gujarati | `gu` |
+| Hausa | `ha` |
+| Hebrew | `he` |
+| Hindi | `hi` |
+| Croatian | `hr` |
+| Hungarian | `hu` |
+| Armenian | `hy` |
+| Indonesian | `id` |
+| Italian | `it` |
+| Japanese | `ja` |
+| Javanese | `jv` |
+| Georgian | `ka` |
+| Kazakh | `kk` |
+| Khmer | `km` |
+| Kannada | `kn` |
+| Korean | `ko` |
+| Kurdish (Kurmanji) | `ku` |
+| Kyrgyz | `ky` |
+| Latin | `la` |
+| Lao | `lo` |
+| Lithuanian | `lt` |
+| Latvian | `lv` |
+| Malagasy | `mg` |
+| Macedonian | `mk` |
+| Malayalam | `ml` |
+| Mongolian | `mn` |
+| Marathi | `mr` |
+| Malay | `ms` |
+| Burmese | `my` |
+| Nepali | `ne` |
+| Dutch | `nl` |
+| Norwegian (Bokmal) | `nb` |
+| Oriya | `or` |
+| Punjabi | `pa` |
+| Polish | `pl` |
+| Pashto | `ps` |
+| Portuguese (Brazil) | `pt-br` |
+| Portuguese (Portugal) | `pt-pt` |
+| Romanian | `ro` |
+| Russian | `ru` |
+| Sanskrit | `sa` |
+| Sindhi | `sd` |
+| Sinhala | `si` |
+| Slovak | `sk` |
+| Slovenian | `sl` |
+| Somali | `so` |
+| Albanian | `sq` |
+| Serbian | `sr` |
+| Sundanese | `su` |
+| Swedish | `sv` |
+| Swahili | `sw` |
+| Tamil | `ta` |
+| Telugu | `te` |
+| Thai | `th` |
+| Filipino | `tl` |
+| Turkish | `tr` |
+| Uyghur | `ug` |
+| Ukrainian | `uk` |
+| Urdu | `ur` |
+| Uzbek | `uz` |
+| Vietnamese | `vi` |
+| Xhosa | `xh` |
+| Yiddish | `yi` |
+| Chinese (Simplified) | `zh-hans` |
+| Zulu | `zu` |
+
+## Next steps
+
+* [Custom text classification overview](overview.md)
+* [Service limits](service-limits.md)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/overview.md
+
+ Title: What is custom text classification (preview) in Azure Cognitive Services for Language?
+
+description: Learn how to use custom text classification.
++++++ Last updated : 05/24/2022++++
+# What is custom text classification?
+
+Custom text classification is one of the custom features offered by [Azure Cognitive Service for Language](../overview.md). It is a cloud-based API service that applies machine-learning intelligence to enable you to build custom models for text classification tasks.
+
+Custom text classification enables users to build custom AI models to classify text into custom classes pre-defined by the user. By creating a custom text classification project, developers can iteratively label data, train, evaluate, and improve model performance before making it available for consumption. The quality of the labeled data greatly impacts model performance. To simplify building and customizing your model, the service offers a custom web portal that can be accessed through the [Language studio](https://aka.ms/languageStudio). You can easily get started with the service by following the steps in this [quickstart](quickstart.md).
+
+Custom text classification supports two types of projects:
+
+* **Single label classification** - you can assign a single class for each document in your dataset. For example, a movie script could only be classified as "Romance" or "Comedy".
+* **Multi label classification** - you can assign multiple classes for each document in your dataset. For example, a movie script could be classified as "Comedy" or "Romance" and "Comedy".
+
+This documentation contains the following article types:
+
+* [Quickstarts](quickstart.md) are getting-started instructions to guide you through making requests to the service.
+* [Concepts](concepts/evaluation-metrics.md) provide explanations of the service functionality and features.
+* [How-to guides](how-to/tag-data.md) contain instructions for using the service in more specific or customized ways.
+
+## Example usage scenarios
+
+Custom text classification can be used in multiple scenarios across a variety of industries:
+
+### Automatic emails or ticket triage
+
+Support centers of all types receive a high volume of emails or tickets containing unstructured, freeform text and attachments. Timely review, acknowledgment, and routing to subject matter experts within internal teams is critical. Email triage at this scale requires people to review and route to the right departments, which takes time and resources. Custom text classification can be used to analyze incoming text, and triage and categorize the content to be automatically routed to the relevant departments for further action.
+
+### Knowledge mining to enhance/enrich semantic search
+
+Search is foundational to any app that surfaces text content to users. Common scenarios include catalog or document searches, retail product searches, or knowledge mining for data science. Many enterprises across various industries are seeking to build a rich search experience over private, heterogeneous content, which includes both structured and unstructured documents. As a part of their pipeline, developers can use custom text classification to categorize their text into classes that are relevant to their industry. The predicted classes can be used to enrich the indexing of the file for a more customized search experience.
+
+## Project development lifecycle
+
+Creating a custom text classification project typically involves several different steps.
++
+Follow these steps to get the most out of your model:
+
+1. **Define schema**: Know your data and identify the [classes](glossary.md#class) you want to differentiate between, and avoid ambiguity.
+
+2. **Label data**: The quality of data labeling is a key factor in determining model performance. Label documents that belong to the same class consistently. If you have a document that can fall into two classes, use a **multi label classification** project. Avoid class ambiguity; make sure that your classes are clearly separable from each other, especially with single label classification projects.
+
+3. **Train model**: Your model starts learning from your labeled data.
+
+4. **View model evaluation details**: View the evaluation details for your model to determine how well it performs when introduced to new data.
+
+5. **Improve model**: Work on improving your model performance by examining the incorrect model predictions and examining data distribution.
+
+6. **Deploy model**: Deploying a model makes it available for use via the [Analyze API](https://aka.ms/ct-runtime-swagger).
+
+7. **Classify text**: Use your custom model for custom text classification tasks.
+
+## Reference documentation and code samples
+
+As you use custom text classification, see the following reference documentation and samples for Azure Cognitive Services for Language:
+
+|Development option / language |Reference documentation |Samples |
+||||
+|REST APIs (Authoring) | [REST API documentation](https://aka.ms/ct-authoring-swagger) | |
+|REST APIs (Runtime) | [REST API documentation](https://aka.ms/ct-runtime-swagger) | |
+|C# | [C# documentation](/dotnet/api/azure.ai.textanalytics?view=azure-dotnet-preview&preserve-view=true) | [C# samples - Single label classification](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/samples/Sample10_SingleCategoryClassify.md) [C# samples - Multi label classification](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/samples/Sample11_MultiCategoryClassify.md) |
+| Java | [Java documentation](/java/api/overview/azure/ai-textanalytics-readme?view=azure-java-preview&preserve-view=true) | [Java Samples - Single label classification](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/textanalytics/azure-ai-textanalytics/src/samples/java/com/azure/ai/textanalytics/lro/ClassifyDocumentSingleCategory.java) [Java Samples - Multi label classification](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/textanalytics/azure-ai-textanalytics/src/samples/java/com/azure/ai/textanalytics/lro/ClassifyDocumentMultiCategory.java) |
+|JavaScript | [JavaScript documentation](/javascript/api/overview/azure/ai-text-analytics-readme?view=azure-node-preview&preserve-view=true) | [JavaScript samples - Single label classification](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/textanalytics/ai-text-analytics/samples/v5/javascript/customText.js) [JavaScript samples - Multi label classification](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/textanalytics/ai-text-analytics/samples/v5/javascript/customText.js) |
+|Python | [Python documentation](/python/api/azure-ai-textanalytics/azure.ai.textanalytics?view=azure-python-preview&preserve-view=true) | [Python samples - Single label classification](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_single_category_classify.py) [Python samples - Multi label classification](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_multi_category_classify.py) |
+
+## Responsible AI
+
+An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the [transparency note for custom text classification]() to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
++
+## Next steps
+
+* Use the [quickstart article](quickstart.md) to start using custom text classification.
+
+* As you go through the project development lifecycle, review the [glossary](glossary.md) to learn more about the terms used throughout the documentation for this feature.
+
+* Remember to view the [service limits](service-limits.md) for information such as regional availability.
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/quickstart.md
+
+ Title: "Quickstart: Custom text classification"
+
+description: Use this quickstart to start using the custom text classification feature.
++++++ Last updated : 05/06/2022++
+zone_pivot_groups: usage-custom-language-features
++
+# Quickstart: Custom text classification (preview)
+
+Use this article to get started with creating a custom text classification project where you can train custom models for text classification. A model is an object that's trained to do a certain task. For this system, the models classify text. Models are trained by learning from tagged data.
+
+Custom text classification supports two types of projects:
+
+* **Single label classification** - you can assign a single class for each document in your dataset. For example, a movie script could only be classified as "Romance" or "Comedy".
+* **Multi label classification** - you can assign multiple classes for each document in your dataset. For example, a movie script could be classified as "Comedy" or "Romance" and "Comedy".
+
+In this quickstart, you can use the sample datasets provided to build a multi label classification project, where you can classify movie scripts into one or more categories, or you can use the single label classification dataset, where you can classify abstracts of scientific papers into one of the defined domains.
++++++++
+## Next steps
+
+After you've created a custom text classification model, you can:
+* [Use the runtime API to classify text](how-to/call-api.md)
+
+When you start to create your own custom text classification projects, use the how-to articles to learn more about developing your model in greater detail:
+
+* [Data selection and schema design](how-to/design-schema.md)
+* [Tag data](how-to/tag-data.md)
+* [Train a model](how-to/train-model.md)
+* [View model evaluation](how-to/view-model-evaluation.md)
+* [Improve a model](how-to/improve-model.md)
cognitive-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/service-limits.md
+
+ Title: Custom text classification limits
+
+description: Learn about the data and rate limits when using custom text classification.
+++ Last updated : 01/25/2022+++++++
+# Custom text classification limits
+
+Use this article to learn about the data and service limits when using custom text classification.
+
+## Language resource limits
+
+* Your Language resource has to be created in one of the supported regions and pricing tiers listed below.
+
+* You can only connect one storage account per resource. This process is irreversible. If you connect a storage account to your resource, you cannot unlink it later. Learn more about [connecting a storage account](how-to/create-project.md#create-language-resource-and-connect-storage-account).
+
+* You can have up to 500 projects per resource.
+
+* Project names have to be unique within the same resource across all custom features.
+
+## Pricing tiers
+
+Custom text classification is available with the following pricing tiers:
+
+|Tier|Description|Limit|
+|--|--|--|
+|F0 |Free tier|You are only allowed one F0 tier Language resource per subscription.|
+|S |Paid tier|You can have unlimited Language S tier resources per subscription. |
++
+See [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/language-service/) for more information.
+
+## Regional availability
+
+Custom text classification is only available in some Azure regions. To use custom text classification, you must choose a Language resource in one of the following regions:
+
+* West US 2
+* East US
+* East US 2
+* West US 3
+* South Central US
+* West Europe
+* North Europe
+* UK South
+* Southeast Asia
+* Australia East
+* Sweden Central
+
+## API limits
+
+|Item|Request type| Maximum limit|
+|:-|:-|:-|
+|Authoring API|POST|10 per minute|
+|Authoring API|GET|100 per minute|
+|Prediction API|GET/POST|1,000 per minute|
+|Document size|--|125,000 characters. You can send up to 25 documents as long as they collectively do not exceed 125,000 characters|
+
+> [!TIP]
+> If you need to send larger files than the limit allows, you can break the text into smaller chunks of text before sending them to the API. You can use the [chunk command from CLUtils](https://github.com/microsoft/CognitiveServicesLanguageUtilities/blob/main/CustomTextAnalytics.CLUtils/Solution/CogSLanguageUtilities.ViewLayer.CliCommands/Commands/ChunkCommand/README.md) for this process.
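+
+If you'd rather chunk text in your own code instead of using CLUtils, a minimal sketch that splits a long string into pieces below the 125,000-character limit might look like this:
+
+```python
+def chunk_text(text: str, max_chars: int = 125_000) -> list:
+    """Split text into chunks that each stay within the character limit."""
+    chunks = []
+    while text:
+        if len(text) <= max_chars:
+            chunks.append(text)
+            break
+        # Prefer a whitespace boundary so words aren't cut in half.
+        split_at = text.rfind(" ", 0, max_chars)
+        if split_at == -1:
+            split_at = max_chars
+        chunks.append(text[:split_at])
+        text = text[split_at:].lstrip()
+    return chunks
+
+long_document = "..."  # your long text here
+for i, chunk in enumerate(chunk_text(long_document)):
+    print(f"Chunk {i}: {len(chunk)} characters")
+```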
+
+## Quota limits
+
+|Pricing tier |Item |Limit |
+| | | |
+|F|Training time| 1 hour per month|
+|S|Training time| Unlimited, Pay as you go |
+|F|Prediction Calls| 5,000 text records per month |
+|S|Prediction Calls| Unlimited, Pay as you go |
+
+## Document limits
+
+* You can only use `.txt` files. If your data is in another format, you can use the [CLUtils parse command](https://github.com/microsoft/CognitiveServicesLanguageUtilities/blob/main/CustomTextAnalytics.CLUtils/Solution/CogSLanguageUtilities.ViewLayer.CliCommands/Commands/ParseCommand/README.md) to open your document and extract the text.
+
+* All files uploaded in your container must contain data. Empty files are not allowed for training.
+
+* All files should be available at the root of your container.
+
+## Data limits
+
+The following limits are observed for custom text classification.
+
+|Item|Lower Limit| Upper Limit |
+| | | |
+|Documents count | 10 | 100,000 |
+|Document length in characters | 1 | 128,000 characters; approximately 28,000 words or 56 pages. |
+|Count of classes | 1 | 200 |
+|Count of trained models per project| 0 | 50 |
+|Count of deployments per project| 0 | 10 |
+
+## Naming limits
+
+| Item | Limits |
+|--|--|
+| Project name | You can only use letters `(a-z, A-Z)`, and numbers `(0-9)` with no spaces. Maximum allowed length is 50 characters. |
+| Model name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `@ # _ . , ^ \ [ ]`. Maximum allowed length is 50 characters. |
+| Deployment name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `@ # _ . , ^ \ [ ]`. Maximum allowed length is 50 characters. |
+| Class name| You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `@ # _ . , ^ \ [ ]`. Maximum allowed length is 50 characters. |
+| Document name | You can only use letters `(a-z, A-Z)`, and numbers `(0-9)` with no spaces. |
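+
+As an illustration, a quick client-side check of a project name against these rules (letters and numbers only, no spaces, at most 50 characters) could look like this:
+
+```python
+import re
+
+def is_valid_project_name(name: str) -> bool:
+    # Letters (a-z, A-Z) and numbers (0-9) only, no spaces, 1-50 characters.
+    return bool(re.fullmatch(r"[A-Za-z0-9]{1,50}", name))
+
+print(is_valid_project_name("SupportTicketClassifier"))  # True
+print(is_valid_project_name("my project"))               # False - contains a space
+```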
+
+## Next steps
+
+* [Custom text classification overview](overview.md)
cognitive-services Cognitive Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/tutorials/cognitive-search.md
+
+ Title: Enrich a Cognitive Search index with custom classes
+
+description: Improve your cognitive search indices using custom text classification
++++++ Last updated : 05/24/2022++++
+# Tutorial: Enrich Cognitive Search index with custom classes from your data
+
+With the abundance of electronic documents within the enterprise, searching through them becomes a tiring and expensive task. [Azure Cognitive Search](../../../../search/search-create-service-portal.md) helps you search through your files based on their indices. Custom text classification helps to enrich the indexing of these files by classifying them into your custom classes.
+
+In this tutorial, you will learn how to:
+
+* Create a custom text classification project.
+* Publish an Azure function.
+* Add an index to your Azure Cognitive Search service.
+
+## Prerequisites
+
+* [An Azure Language resource connected to an Azure blob storage account](../how-to/create-project.md).
+ * We recommend following the instructions for creating a resource using the Azure portal, for easier setup.
+* [An Azure Cognitive Search service](../../../../search/search-create-service-portal.md) in your current subscription
+ * You can use any tier, and any region for this service.
+* An [Azure function app](../../../../azure-functions/functions-create-function-app-portal.md)
+
+## Upload sample data to blob container
++
+# [Language studio](#tab/Language-studio)
+
+## Create a custom text classification project
+
+Once your resource and storage container are configured, create a new custom text classification project. A project is a work area for building your custom ML models based on your data. Your project can only be accessed by you and others who have access to the Language resource being used.
++
+## Train your model
+
+Typically after you create a project, you go ahead and start [tagging the documents](../how-to/tag-data.md) you have in the container connected to your project. For this tutorial, you have imported a sample tagged dataset and initialized your project with the sample JSON tags file.
++
+## Deploy your model
+
+Generally after training a model, you would review its [evaluation details](../how-to/view-model-evaluation.md) and [make improvements](../how-to/improve-model.md) if necessary. In this tutorial, you will just deploy your model and make it available to try in Language Studio, or you can call the [prediction API](https://aka.ms/ct-runtime-swagger).
+++
+# [REST APIs](#tab/REST-APIs)
+
+### Get your resource keys and endpoint
++
+## Create a custom text classification project
+
+Once your resource and storage container are configured, create a new custom text classification project. A project is a work area for building your custom ML models based on your data. Your project can only be accessed by you and others who have access to the Language resource being used.
+
+### Trigger import project job
++
+### Get import job Status
+
+ [!INCLUDE [get import project status](../includes/rest-api/get-import-status.md)]
+
+## Train your model
+
+Typically after you create a project, you go ahead and start [tagging the documents](../how-to/tag-data.md) you have in the container connected to your project. For this tutorial, you have imported a sample tagged dataset and initialized your project with the sample JSON tags file.
+
+### Start training your model
+
+After your project has been imported, you can start training your model.
++
+### Get training job status
+
+Training could take some time, between 10 and 30 minutes for this sample dataset. You can use the following request to keep polling the status of the training job until it is successfully completed.
+
+ [!INCLUDE [get training model status](../includes/rest-api/get-training-status.md)]
+
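+If you're scripting this step, the general pattern is to keep polling the status request shown above until the job reaches a terminal state. The sketch below is illustrative only: the status URL comes from the request above, and the terminal status values are assumptions to verify against the REST API reference.
+
+```python
+import time
+import requests
+
+# Illustrative polling loop; status_url and the terminal status values are
+# assumptions to check against the actual status request shown above.
+def wait_for_job(status_url, key, interval_seconds=10):
+    while True:
+        job = requests.get(
+            status_url,
+            headers={"Ocp-Apim-Subscription-Key": key},
+            timeout=10,
+        ).json()
+        if job.get("status") in ("succeeded", "failed", "cancelled"):
+            return job
+        time.sleep(interval_seconds)
+```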
+## Deploy your model
+
+Generally, after training a model you would review its [evaluation details](../how-to/view-model-evaluation.md) and [make improvements](../how-to/improve-model.md) if necessary. In this tutorial, you will just deploy your model and make it available to try in Language Studio, or you can call the [prediction API](https://aka.ms/ct-runtime-swagger).
+
+### Submit deployment job
++
+### Get deployment job status
+++++
+## Use the CogSvc language utilities tool for Cognitive Search integration
+
+### Publish your Azure Function
+
+1. Download and use the [provided sample function](https://aka.ms/CustomTextAzureFunction).
+
+2. After you download the sample function, open the *program.cs* file in Visual Studio and [publish the function to Azure](../../../../azure-functions/functions-develop-vs.md?tabs=in-process#publish-to-azure).
+
+### Prepare configuration file
+
+1. Download [sample configuration file](https://aka.ms/CognitiveSearchIntegrationToolAssets) and open it in a text editor.
+
+2. Get your storage account connection string by:
+
+ 1. Navigating to your storage account overview page in the [Azure portal](https://portal.azure.com/#home).
+ 2. In the **Access Keys** section in the menu to the left of the screen, copy your **Connection string** to the `connectionString` field in the configuration file, under `blobStorage`.
+ 3. Go to the container where you have the files you want to index and copy container name to the `containerName` field in the configuration file, under `blobStorage`.
+
+3. Get your cognitive search endpoint and keys by:
+
+ 1. Navigating to your resource overview page in the [Azure portal](https://portal.azure.com/#home).
+ 2. Copy the **Url** at the top-right section of the page to the `endpointUrl` field within `cognitiveSearch`.
+ 3. Go to the **Keys** section in the menu to the left of the screen. Copy your **Primary admin key** to the `apiKey` field within `cognitiveSearch`.
+
+4. Get Azure Function endpoint and keys
+
+ 1. To get your Azure Function endpoint and keys, go to your function overview page in the [Azure portal](https://portal.azure.com/#home).
+ 2. Go to **Functions** menu on the left of the screen, and click on the function you created.
+ 3. From the top menu, click **Get Function Url**. The URL will be formatted like this: `YOUR-ENDPOINT-URL?code=YOUR-API-KEY`.
+ 4. Copy `YOUR-ENDPOINT-URL` to the `endpointUrl` field in the configuration file, under `azureFunction`.
+ 5. Copy `YOUR-API-KEY` to the `apiKey` field in the configuration file, under `azureFunction`.
+
+5. Get your resource keys and endpoint
+
+ [!INCLUDE [Get keys and endpoint Azure Portal](../includes/get-keys-endpoint-azure.md)]
+
+6. Get your custom text classification project secrets
+
+    1. You will need your **project-name**. Project names are case-sensitive and can be found on the **project settings** page.
+
+    2. You will also need the **deployment-name**. Deployment names can be found on the **Deploying a model** page.
+
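+If you prefer to script these edits instead of editing the file by hand, the following sketch fills in the fields described in the steps above. It is a minimal sketch: only the section and field names mentioned above (`blobStorage`, `cognitiveSearch`, `azureFunction` and their keys) are taken from this article, `fill_config` is a hypothetical helper name, and the downloaded sample file remains the authoritative source for the full schema.
+
+```python
+import json
+
+# Hypothetical helper: fills in the documented fields of the downloaded
+# sample configuration file. Any structure beyond these fields comes from
+# the sample file itself.
+def fill_config(path, storage_conn, container, search_url, search_key,
+                function_url, function_key):
+    with open(path, encoding="utf-8") as f:
+        config = json.load(f)
+
+    config["blobStorage"]["connectionString"] = storage_conn
+    config["blobStorage"]["containerName"] = container
+    config["cognitiveSearch"]["endpointUrl"] = search_url
+    config["cognitiveSearch"]["apiKey"] = search_key
+    config["azureFunction"]["endpointUrl"] = function_url
+    config["azureFunction"]["apiKey"] = function_key
+
+    with open(path, "w", encoding="utf-8") as f:
+        json.dump(config, f, indent=2)
+```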
+### Run the indexer command
+
+After you've published your Azure function and prepared your configuration file, you can run the indexer command.
+```cli
+ indexer index --index-name <name-your-index-here> --configs <absolute-path-to-configs-file>
+```
+
+Replace `name-your-index-here` with the index name that appears in your Cognitive Search instance.
+
+## Next steps
+
+* [Search your app with the Cognitive Search SDK](../../../../search/search-howto-dotnet-sdk.md#run-queries)
cognitive-services Data Formats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/concepts/data-formats.md
+
+ Title: Orchestration workflow data formats
+
+description: Learn about the data formats accepted by orchestration workflow.
++++++ Last updated : 05/19/2022++++
+# Data formats accepted by orchestration workflow
+
+When your model learns from your data, it expects the data to be in a specific format. When you tag your data in Language Studio, it gets converted to the JSON format described in this article. You can also manually tag your files.
++
+## JSON file format
+
+If you upload a tags file, it should follow this format.
+
+```json
+{
+ "projectFileVersion": "{API-VERSION}",
+ "stringIndexType": "Utf16CodeUnit",
+ "metadata": {
+ "projectKind": "Orchestration",
+ "projectName": "{PROJECT-NAME}",
+ "multilingual": false,
+ "description": "This is a description",
+ "language": "{LANGUAGE-CODE}"
+ },
+ "assets": {
+ "projectKind": "Orchestration",
+ "intents": [
+ {
+ "category": "{INTENT1}",
+ "orchestration": {
+ "targetProjectKind": "Luis|Conversation|QuestionAnswering",
+ "luisOrchestration": {
+ "appId": "{APP-ID}",
+ "appVersion": "0.1",
+ "slotName": "production"
+ },
+ "conversationOrchestration": {
+ "projectName": "{PROJECT-NAME}",
+ "deploymentName": "{DEPLOYMENT-NAME}"
+ },
+ "questionAnsweringOrchestration": {
+ "projectName": "{PROJECT-NAME}"
+ }
+ }
+ }
+ ],
+ "utterances": [
+ {
+ "text": "utterance 1",
+ "language": "{LANGUAGE-CODE}",
+ "dataset": "{DATASET}",
+ "intent": "intent1"
+ }
+ ]
+ }
+}
+```
+
+|Key |Placeholder |Value | Example |
+|||-|--|
+| `api-version` | `{API-VERSION}` | The version of the API you are calling. The value referenced here is for the latest released [model version](../../concepts/model-lifecycle.md#choose-the-model-version-used-on-your-data). | `2022-03-01-preview` |
+|`confidenceThreshold`|`{CONFIDENCE-THRESHOLD}`|This is the threshold score below which the intent will be predicted as [none intent](none-intent.md)|`0.7`|
+| `projectName` | `{PROJECT-NAME}` | The name of your project. This value is case-sensitive. | `EmailApp` |
+| `multilingual` | `false`| Orchestration doesn't support the multilingual feature | `false`|
+| `language` | `{LANGUAGE-CODE}` | A string specifying the language code for the utterances used in your project. See [Language support](../language-support.md) for more information about supported language codes. |`en-us`|
+| `intents` | `[]` | Array containing all the intent types you have in the project. These are the intents used in the orchestration project.| `[]` |
++
+## Utterance format
+
+```json
+[
+ {
+ "intent": "intent1",
+ "language": "{LANGUAGE-CODE}",
+        "text": "{Utterance-Text}"
+ },
+ {
+ "intent": "intent2",
+ "language": "{LANGUAGE-CODE}",
+        "text": "{Utterance-Text}"
+ }
+]
+
+```
+++
+## Next steps
+* You can import your labeled data into your project directly. Learn how to [import a project](../how-to/create-project.md)
+* See the [how-to article](../how-to/tag-utterances.md) for more information about labeling your data. When you're done labeling your data, you can [train your model](../how-to/train-model.md).
cognitive-services Evaluation Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/concepts/evaluation-metrics.md
- Previously updated : 03/17/2022+ Last updated : 05/19/2022 # Evaluation metrics for orchestration workflow models
-Model evaluation in orchestration workflow uses the following metrics:
+Your dataset is split into two parts: a set for training, and a set for testing. The training set is used to train the model, while the testing set is used after training to evaluate the model and calculate its performance. The testing set isn't introduced to the model through the training process, to make sure that the model is tested on new data. <!--See [data splitting](../how-to/train-model.md#data-splitting) for more information-->
+
+Model evaluation is triggered automatically after training is completed successfully. The evaluation process starts by using the trained model to predict user-defined intents for utterances in the test set, and compares them with the provided tags (which establishes a baseline of truth). The results are returned so you can review the model's performance. For evaluation, orchestration workflow uses the following metrics:
+
+* **Precision**: Measures how precise/accurate your model is. It's the ratio between the correctly identified positives (true positives) and all identified positives. The precision metric reveals how many of the predicted classes are correctly labeled.
+
+ `Precision = #True_Positive / (#True_Positive + #False_Positive)`
+
+* **Recall**: Measures the model's ability to predict actual positive classes. It's the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the actual positive classes are correctly predicted.
+
+ `Recall = #True_Positive / (#True_Positive + #False_Negatives)`
+
+* **F1 score**: The F1 score is a function of Precision and Recall. It's needed when you seek a balance between Precision and Recall.
+
+ `F1 Score = 2 * Precision * Recall / (Precision + Recall)`
++
+Precision, recall, and F1 score are calculated for:
+* Each intent separately (intent-level evaluation)
+* The model collectively (model-level evaluation).
+
+The definitions of precision, recall, and F1 score are the same for intent-level and model-level evaluations. However, the counts for *True Positives*, *False Positives*, and *False Negatives* can differ. For example, consider the following text.
+
+### Example
+
+* Make a response with thank you very much
+* Call my friend
+* Hello
+* Good morning
+
+These are the intents used: *CLUEmail* and *Greeting*.
+
+The model could make the following predictions:
+
+| Utterance | Predicted intent | Actual intent |
+|--|--|--|
+|Make a response with thank you very much|CLUEmail|CLUEmail|
+|Call my friend|Greeting|CLUEmail|
+|Hello|CLUEmail|Greeting|
+|Good morning| Greeting|Greeting|
+
+### Intent level evaluation for *CLUEmail* intent
+
+| Key | Count | Explanation |
+|--|--|--|
+| True Positive | 1 | Utterance 1 was correctly predicted as *CLUEmail*. |
+| False Positive | 1 |Utterance 3 was mistakenly predicted as *CLUEmail*. |
+| False Negative | 1 | Utterance 2 was mistakenly predicted as *Greeting*. |
+
+**Precision** = `#True_Positive / (#True_Positive + #False_Positive) = 1 / (1 + 1) = 0.5`
+
+**Recall** = `#True_Positive / (#True_Positive + #False_Negatives) = 1 / (1 + 1) = 0.5`
+
+**F1 Score** = `2 * Precision * Recall / (Precision + Recall) = (2 * 0.5 * 0.5) / (0.5 + 0.5) = 0.5`
+
+### Intent level evaluation for *Greeting* intent
+
+| Key | Count | Explanation |
+|--|--|--|
+| True Positive | 1 | Utterance 4 was correctly predicted as *Greeting*. |
+| False Positive | 1 |Utterance 2 was mistakenly predicted as *Greeting*. |
+| False Negative | 1 | Utterance 3 was mistakenly predicted as *CLUEmail*. |
+
+**Precision** = `#True_Positive / (#True_Positive + #False_Positive) = 1 / (1 + 1) = 0.5`
+
+**Recall** = `#True_Positive / (#True_Positive + #False_Negatives) = 1 / (1 + 1) = 0.5`
+
+**F1 Score** = `2 * Precision * Recall / (Precision + Recall) = (2 * 0.5 * 0.5) / (0.5 + 0.5) = 0.5`
++
+### Model-level evaluation for the collective model
+
+| Key | Count | Explanation |
+|--|--|--|
+| True Positive | 2 | Sum of TP for all intents |
+| False Positive | 2| Sum of FP for all intents |
+| False Negative | 2 | Sum of FN for all intents |
+
+**Precision** = `#True_Positive / (#True_Positive + #False_Positive) = 2 / (2 + 2) = 0.5`
+
+**Recall** = `#True_Positive / (#True_Positive + #False_Negatives) = 2 / (2 + 2) = 0.5`
+
+**F1 Score** = `2 * Precision * Recall / (Precision + Recall) = (2 * 0.5 * 0.5) / (0.5 + 0.5) = 0.5`
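+
+As a quick sanity check, the worked example above can be reproduced with a short script. This is a minimal sketch; the predictions are taken directly from the table above.
+
+```python
+# Recompute the worked example: per-intent and model-level precision,
+# recall, and F1 from (predicted intent, actual intent) pairs.
+predictions = [
+    ("CLUEmail", "CLUEmail"),   # Make a response with thank you very much
+    ("Greeting", "CLUEmail"),   # Call my friend
+    ("CLUEmail", "Greeting"),   # Hello
+    ("Greeting", "Greeting"),   # Good morning
+]
+
+def metrics(tp, fp, fn):
+    precision = tp / (tp + fp)
+    recall = tp / (tp + fn)
+    f1 = 2 * precision * recall / (precision + recall)
+    return precision, recall, f1
+
+for intent in ("CLUEmail", "Greeting"):
+    tp = sum(1 for p, a in predictions if p == intent and a == intent)
+    fp = sum(1 for p, a in predictions if p == intent and a != intent)
+    fn = sum(1 for p, a in predictions if p != intent and a == intent)
+    print(intent, metrics(tp, fp, fn))   # (0.5, 0.5, 0.5) for both intents
+
+# Model-level counts are the sums of the per-intent counts.
+tp = sum(1 for p, a in predictions if p == a)
+fp = sum(1 for p, a in predictions if p != a)  # each error is an FP for the predicted intent
+fn = fp                                        # ...and an FN for the actual intent
+print("model", metrics(tp, fp, fn))            # (0.5, 0.5, 0.5)
+```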
-|Metric |Description |Calculation |
-||||
-|Precision | The ratio of successful recognitions to all attempted recognitions. This shows how many times the model's entity recognition is truly a good recognition. | `Precision = #True_Positive / (#True_Positive + #False_Positive)` |
-|Recall | The ratio of successful recognitions to the actual number of entities present. | `Recall = #True_Positive / (#True_Positive + #False_Negatives)` |
-|F1 score | The combination of precision and recall. | `F1 Score = 2 * Precision * Recall / (Precision + Recall)` |
## Confusion matrix
You can calculate the model-level evaluation metrics from the confusion matrix:
## Next steps
-[Train a model in Language Studio](../how-to/train-model.md)
+[Train a model in Language Studio](../how-to/train-model.md)
cognitive-services Fail Over https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/concepts/fail-over.md
+
+ Title: Save and recover orchestration workflow models
+
+description: Learn how to save and recover your orchestration workflow models.
++++++ Last updated : 05/19/2022++++
+# Back up and recover your orchestration workflow models
+
+When you create a Language resource in the Azure portal, you specify a region for it to be created in. From then on, your resource and all of the operations related to it take place in the specified Azure server region. It's rare, but not impossible, to encounter a network issue that hits an entire region. If your solution needs to always be available, then you should design it to fail over into another region. This requires two Azure Language resources in different regions and the ability to sync your orchestration workflow models across regions.
+
+If your app or business depends on the use of an orchestration workflow model, we recommend that you create a replica of your project in another supported region, so that if a regional outage occurs, you can access your model in the fail-over region where you replicated your project.
+
+Replicating a project means that you export your project metadata and assets and import them into a new project. This only makes a copy of your project settings, intents and utterances. You still need to [train](../how-to/train-model.md) and [deploy](../how-to/deploy-model.md) the models before you can [query them](../how-to/call-api.md) with the [runtime APIs](https://aka.ms/clu-apis).
++
+In this article, you will learn how to use the export and import APIs to replicate your project from one resource to another in a different supported region, along with guidance on keeping your projects in sync and the changes needed to your runtime consumption.
+
+## Prerequisites
+
+* Two Azure Language resources, each in a different Azure region.
+
+## Get your resource keys and endpoint
+
+Use the following steps to get the keys and endpoint of your primary and secondary resources. These will be used in the following steps.
++
+> [!TIP]
+> Keep a note of keys and endpoints for both primary and secondary resources. Use these values to replace the following placeholders:
+`{PRIMARY-ENDPOINT}`, `{PRIMARY-RESOURCE-KEY}`, `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}`.
+> Also take note of your project name, your model name and your deployment name. Use these values to replace the following placeholders: `{PROJECT-NAME}`, `{MODEL-NAME}` and `{DEPLOYMENT-NAME}`.
+
+## Export your primary project assets
+
+Start by exporting the project assets from the project in your primary resource.
+
+### Submit export job
+
+Replace the placeholders in the following request with your `{PRIMARY-ENDPOINT}` and `{PRIMARY-RESOURCE-KEY}` that you obtained in the first step.
++
+### Get export job status
+
+Replace the placeholders in the following request with your `{PRIMARY-ENDPOINT}` and `{PRIMARY-RESOURCE-KEY}` that you obtained in the first step.
++
+Copy the response body as you will use it as the body for the next import job.
+
+## Import to a new project
+
+Now go ahead and import the exported project assets in your new project in the secondary region so you can replicate it.
+
+### Submit import job
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
++
+### Get import job status
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
+++
+## Train your model
+
+After importing your project, you have only copied the project's metadata and assets. You still need to train your model, which will incur usage on your account.
+
+### Submit training job
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
+++
+### Get Training Status
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
++
+## Deploy your model
+
+This is the step where you make your trained model available for consumption via the [runtime prediction API](https://aka.ms/ct-runtime-swagger).
+
+> [!TIP]
+> Use the same deployment name as your primary project for easier maintenance and minimal changes to your system to handle redirecting your traffic.
+
+### Submit deployment job
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
++
+### Get the deployment status
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
++
+## Changes in calling the runtime
+
+Within your system, at the step where you call the [runtime API](https://aka.ms/clu-apis), check the response code returned from the submit task API. If you observe a **consistent** failure in submitting the request, this could indicate an outage in your primary region. A single failure doesn't mean an outage; it may be a transient issue. Retry submitting the job through the secondary resource you created. For the second request, use your `{YOUR-SECONDARY-ENDPOINT}` and secondary key. If you have followed the steps above, `{PROJECT-NAME}` and `{DEPLOYMENT-NAME}` will be the same, so no changes are required to the request body.
+
+If you revert to using your secondary resource, you may observe a slight increase in latency because of the difference in regions where your model is deployed.
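+
+The following is a minimal sketch of this failover logic, assuming you call the prediction endpoint with an HTTP library such as `requests`. The `build_request` function is a hypothetical placeholder for however your system constructs the request URL and body; take the exact format from the runtime API reference.
+
+```python
+import requests
+
+# primary and secondary are (endpoint, resource key) pairs. Try the primary
+# resource first; after repeated failures, retry the same request against
+# the secondary resource. Project and deployment names are identical on
+# both resources, so only the endpoint and key change.
+def query_with_failover(build_request, primary, secondary, max_retries=3):
+    for endpoint, key in (primary, secondary):
+        for _ in range(max_retries):
+            url, body = build_request(endpoint)
+            response = requests.post(
+                url,
+                json=body,
+                headers={"Ocp-Apim-Subscription-Key": key},
+                timeout=10,
+            )
+            if response.ok:
+                return response.json()
+        # consistent failures on this resource: fall back to the next one
+    raise RuntimeError("Both primary and secondary resources failed")
+```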
+
+## Check if your projects are out of sync
+
+Maintaining the freshness of both projects is an important part of the process. You need to frequently check whether any updates were made to your primary project so that you can move them over to your secondary project. This way, if your primary region fails and you move to the secondary region, you can expect similar model performance, since it already contains the latest updates. Setting the frequency of these checks is an important choice; we recommend doing this check daily to guarantee the freshness of data in your secondary model.
+
+### Get project details
+
+Use the following URL to get your project details; one of the keys returned in the body indicates the last modified date of the project.
+Repeat the following step twice, once for your primary project and once for your secondary project, and compare the timestamps returned for both of them to check whether they are out of sync.
++
+Repeat the same steps for your replicated project using `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}`. Compare the returned `lastModifiedDateTime` from both projects. If your primary project was modified more recently than your secondary one, you need to repeat the steps of [exporting](#export-your-primary-project-assets), [importing](#import-to-a-new-project), [training](#train-your-model) and [deploying](#deploy-your-model) your model.
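+
+A sketch of this comparison is shown below, assuming the project details request above returns a `lastModifiedDateTime` field. The request path is an illustrative placeholder; take the exact path and `api-version` from the authoring API reference.
+
+```python
+import requests
+
+PRIMARY_ENDPOINT, PRIMARY_KEY = "{PRIMARY-ENDPOINT}", "{PRIMARY-RESOURCE-KEY}"
+SECONDARY_ENDPOINT, SECONDARY_KEY = "{SECONDARY-ENDPOINT}", "{SECONDARY-RESOURCE-KEY}"
+PROJECT_DETAILS_PATH = "<project-details-path-from-the-authoring-api-reference>"
+
+def last_modified(endpoint, key):
+    response = requests.get(
+        endpoint + PROJECT_DETAILS_PATH,
+        headers={"Ocp-Apim-Subscription-Key": key},
+        timeout=10,
+    )
+    response.raise_for_status()
+    return response.json()["lastModifiedDateTime"]
+
+# ISO 8601 timestamps in the same format compare correctly as strings;
+# parse them with datetime if your responses differ in format.
+if last_modified(PRIMARY_ENDPOINT, PRIMARY_KEY) > last_modified(SECONDARY_ENDPOINT, SECONDARY_KEY):
+    print("Projects are out of sync: re-export, import, train, and deploy.")
+```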
+
+## Next steps
+
+In this article, you have learned how to use the export and import APIs to replicate your project to a secondary Language resource in another region. Next, explore the API reference docs to see what else you can do with the authoring APIs.
+
+* [Authoring REST API reference](https://aka.ms/clu-authoring-apis)
+
+* [Runtime prediction REST API reference](https://aka.ms/clu-apis)
cognitive-services None Intent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/concepts/none-intent.md
+
+ Title: Orchestration workflow none intent
+
+description: Learn about the default None intent in orchestration workflow.
++++++ Last updated : 05/19/2022+++++
+# The "None" intent in orchestration workflow
+
+Every project in orchestration workflow includes a default None intent. The None intent is a required intent and can't be deleted or renamed. The intent is meant to categorize any utterances that do not belong to any of your other custom intents.
+
+An utterance can be predicted as the None intent if the top scoring intent's score is **lower** than the None score threshold. It can also be predicted if the utterance is similar to examples added to the None intent.
+
+## None score threshold
+
+You can go to the **project settings** of any project and set the **None score threshold**. The threshold is a decimal score from **0.0** to **1.0**.
+
+For any query and utterance, if the highest scoring intent ends up **lower** than the threshold score, the top intent will be automatically replaced with the None intent. The scores of all the other intents remain unchanged.
+
+The score should be set according to your own observations of prediction scores, as they may vary by project. A higher threshold score forces the utterances to be more similar to the examples you have in your training data.
+
+When you export a project's JSON file, the None score threshold is defined in the _**"settings"**_ parameter of the JSON as the _**"confidenceThreshold"**_, which accepts a decimal value between 0.0 and 1.0.
+
+The default None score threshold for orchestration workflow projects is set at **0.5** when you create a new project in Language Studio.
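+
+The following sketch illustrates the replacement rule. It is only an illustration; the score dictionary is not an actual API response shape.
+
+```python
+# If the top intent's confidence is below the threshold, the top intent is
+# replaced by "None"; all other scores stay unchanged.
+def apply_none_threshold(intent_scores, threshold=0.5):
+    top_intent = max(intent_scores, key=intent_scores.get)
+    if intent_scores[top_intent] < threshold:
+        return "None", intent_scores
+    return top_intent, intent_scores
+
+top, scores = apply_none_threshold({"BookFlight": 0.42, "Greeting": 0.31}, threshold=0.5)
+print(top)   # "None", because the top score 0.42 is below the 0.5 threshold
+```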
+
+> [!NOTE]
+> During model evaluation of your test set, the None score threshold is not applied.
+
+## Adding examples to the None intent
+
+The None intent is also treated like any other intent in your project. If there are utterances that you want predicted as None, consider adding similar examples to the None intent in your training data. For example, if you would like to categorize utterances that are not important to your project as None, then add those utterances to the None intent.
+
+## Next steps
+
+[Orchestration workflow overview](../overview.md)
cognitive-services Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/faq.md
Previously updated : 01/10/2022 Last updated : 05/23/2022
See [How to create projects and build schemas](./how-to/create-project.md) for i
LUIS applications that use the Language resource as their authoring resource will be available for connection. You can only connect to LUIS applications that are owned by the same resource. This option will only be available for resources in West Europe, as it's the only common available region between LUIS and CLU.
+## Which question answering project can I connect to in orchestration workflow projects?
+
+Question answering projects that use the Language resource will be available for connection. You can only connect to question answering projects that are in the same Language resource.
+ ## Training is taking a long time, is this expected? For orchestration projects, long training times are expected. Based on the number of examples you have, your training times may vary from 5 minutes to 1 hour or more.
Yes, only for predictions, and [samples are available](https://aka.ms/cluSampleC
## Are there APIs for this feature?
-Yes, all the APIs [are available](https://aka.ms/clu-apis).
+Yes, all the APIs are available.
+* [Authoring APIs](https://aka.ms/clu-authoring-apis)
+* [Prediction API](https://aka.ms/clu-runtime-api)
## Next steps
cognitive-services Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/glossary.md
+
+ Title: Definitions used in orchestration workflow
+
+description: Learn about definitions used in orchestration workflow.
++++++ Last updated : 05/19/2022++++
+# Terms and definitions used in orchestration workflow
+Use this article to learn about some of the definitions and terms you may encounter when using orchestration workflow.
+
+## F1 score
+The F1 score is a function of Precision and Recall. It's needed when you seek a balance between [precision](#precision) and [recall](#recall).
+
+## Intent
+An intent represents a task or action the user wants to perform. It is a purpose or goal expressed in a user's input, such as booking a flight, or paying a bill.
+
+## Model
+A model is an object that's trained to do a certain task, in this case conversation understanding tasks. Models are trained by providing labeled data to learn from so they can later be used to understand utterances.
+
+* **Model evaluation** is the process that happens right after training to determine how well your model performs.
+* **Deployment** is the process of assigning your model to a deployment to make it available for use via the [prediction API](https://aka.ms/ct-runtime-swagger).
+
+## Overfitting
+Overfitting happens when the model is fixated on the specific examples and is not able to generalize well.
+
+## Precision
+Measures how precise/accurate your model is. It's the ratio between the correctly identified positives (true positives) and all identified positives. The precision metric reveals how many of the predicted classes are correctly labeled.
+
+## Project
+A project is a work area for building your custom ML models based on your data. Your project can only be accessed by you and others who have access to the Azure resource being used.
+
+## Recall
+Measures the model's ability to predict actual positive classes. It's the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
+
+## Schema
+Schema is defined as the combination of intents within your project. Schema design is a crucial part of your project's success. When creating a schema, you want to think about which intents should be included in your project.
+
+## Training data
+Training data is the set of information that is needed to train a model.
+
+## Utterance
+
+An utterance is user input that is short text representative of a sentence in a conversation. It is a natural language phrase such as "book 2 tickets to Seattle next Tuesday". Example utterances are added to train the model, and the model predicts on new utterances at runtime.
++
+## Next steps
+
+* [Data and service limits](service-limits.md).
+* [Orchestration workflow overview](../overview.md).
cognitive-services Build Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/how-to/build-schema.md
+
+ Title: How to build an orchestration project schema
+description: Learn how to define intents for your orchestration workflow project.
+++++++ Last updated : 05/20/2022++++
+# How to build your project schema for orchestration workflow
+
+In orchestration workflow projects, the *schema* is defined as the combination of intents within your project. Schema design is a crucial part of your project's success. When creating a schema, you want to think about which intents should be included in your project.
+
+## Guidelines and recommendations
+
+Consider the following guidelines and recommendations for your project:
+
+* Build orchestration projects when you need to manage the NLU for a multi-faceted virtual assistant or chatbot, where the intents, entities, and utterances would begin to be far more difficult to maintain over time in one project.
+* Orchestrate between different domains. A domain is a collection of intents and entities that serve the same purpose, such as Email commands vs. Restaurant commands.
+* If there is an overlap of similar intents between domains, create the common intents in a separate domain and remove them from the others for the best accuracy.
+* For intents that are general across domains, such as "Greeting", "Confirm", and "Reject", you can either add them in a separate domain or as direct intents in the orchestration project.
+* Orchestrate to a custom question answering knowledge base when a domain has FAQ-type questions with static answers. Ensure that the vocabulary and language used to ask questions is distinct from that used in the other conversational language understanding projects and LUIS applications.
+* If an utterance is being misclassified and routed to an incorrect intent, then add similar utterances to the intent to influence its results. If the intent is connected to a project, then add utterances to the connected project itself. After you retrain your orchestration project, the new utterances in the connected project will influence predictions.
+* Add test data to your orchestration projects to validate there isn't confusion between linked projects and other intents.
++
+## Add intents
+
+To build a project schema within [Language Studio](https://aka.ms/languageStudio):
+
+1. Select **Schema definition** from the left side menu.
+
+2. To create an intent, select **Add** from the top menu. You will be prompted to type in a name for the intent.
+
+3. To connect your intent to other existing projects, select **Yes, I want to connect it to an existing project** option. You can alternatively create a non-connected intent by selecting the **No, I don't want to connect to a project** option.
+
+4. If you choose to create a connected intent, choose the service you are connecting to from **Connected service**, then choose the **project name**. You can connect your intent to only one project.
+
+ :::image type="content" source="../media/build-schema-page.png" alt-text="A screenshot showing the schema creation page in Language Studio." lightbox="../media/build-schema-page.png":::
+
+> [!TIP]
+> Use connected intents to connect to other projects (conversational language understanding, LUIS, and question answering)
+
+5. Click on **Add intent** to add your intent.
+
+## Next steps
+
+* [Add utterances](tag-utterances.md)
+
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/how-to/call-api.md
+
+ Title: How to send a Conversational Language Understanding job
+
+description: Learn about sending a request for Conversational Language Understanding.
++++++ Last updated : 05/20/2022+
+ms.devlang: csharp, python
+++
+# Query deployment for intent predictions
+
+After the deployment is added successfully, you can query the deployment for intent and entity predictions from your utterance based on the model you assigned to the deployment.
+You can query the deployment programmatically through the [Prediction API](https://aka.ms/ct-runtime-swagger) or through the [Client libraries (Azure SDK)](#send-an-orchestration-workflow-request).
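+
+As a rough sketch of the programmatic path, the request below shows the general pattern of calling the prediction endpoint with an HTTP library. The path, `api-version`, and payload shape are assumptions to verify against the [Prediction API](https://aka.ms/ct-runtime-swagger) reference; the project, deployment, and resource values are placeholders.
+
+```python
+import requests
+
+endpoint = "https://<your-resource>.cognitiveservices.azure.com"
+key = "<your-resource-key>"
+
+# Illustrative request body; confirm the exact shape in the API reference.
+body = {
+    "kind": "Conversation",
+    "analysisInput": {
+        "conversationItem": {"id": "1", "participantId": "1", "text": "Send an email to Carol"}
+    },
+    "parameters": {"projectName": "<project-name>", "deploymentName": "<deployment-name>"},
+}
+
+response = requests.post(
+    f"{endpoint}/language/:analyze-conversations",
+    params={"api-version": "2022-05-01"},  # check the reference for the current version
+    headers={"Ocp-Apim-Subscription-Key": key},
+    json=body,
+    timeout=10,
+)
+print(response.json())
+```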
+
+## Test deployed model
+
+You can use the Language Studio to submit an utterance, get predictions and visualize the results.
++++
+## Send an orchestration workflow request
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/REST-APIs)
+
+First you will need to get your resource key and endpoint:
++
+### Query your model
++
+# [Client libraries (Azure SDK)](#tab/azure-sdk)
+
+First you will need to get your resource key and endpoint:
++
+### Use the client libraries (Azure SDK)
+
+You can also use the client libraries provided by the Azure SDK to send requests to your model.
+
+> [!NOTE]
+> The client library for conversational language understanding is only available for:
+> * .NET
+> * Python
+
+1. Go to your resource overview page in the [Azure portal](https://portal.azure.com/#home)
+
+2. From the menu on the left side, select **Keys and Endpoint**. Use the endpoint for the API requests, and the key for the `Ocp-Apim-Subscription-Key` header.
+
+ :::image type="content" source="../../custom-text-classification/media/get-endpoint-azure.png" alt-text="Screenshot showing how to get the Azure endpoint." lightbox="../../custom-text-classification/media/get-endpoint-azure.png":::
++
+3. Download and install the client library package for your language of choice:
+
+ |Language |Package version |
+ |||
+ |.NET | [5.2.0-beta.2](https://www.nuget.org/packages/Azure.AI.TextAnalytics/5.2.0-beta.2) |
+ |Python | [5.2.0b2](https://pypi.org/project/azure-ai-textanalytics/5.2.0b2/) |
+
+4. After you've installed the client library, use the following samples on GitHub to start calling the API.
+
+ * [C#](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/cognitivelanguage/Azure.AI.Language.Conversations/samples)
+ * [Python](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cognitivelanguage/azure-ai-language-conversations/samples)
+
+5. See the following reference documentation for more information:
+
+ * [C#](/dotnet/api/azure.ai.language.conversations?view=azure-dotnet-preview&preserve-view=true)
+ * [Python](/python/api/azure-ai-language-conversations/azure.ai.language.conversations?view=azure-python-preview&preserve-view=true)
+
++
+## Next steps
+
+* [Orchestration workflow overview](../overview.md)
cognitive-services Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/how-to/create-project.md
Previously updated : 11/02/2021 Last updated : 05/20/2022
Orchestration workflow allows you to create projects that connect your applicati
* Custom Language Understanding * Question Answering * LUIS
-* QnA maker
-## Sign in to Language Studio
+## Prerequisites
-To get started, you have to first sign in to [Language Studio](https://aka.ms/languageStudio) and create a Language resource. Select **Done** once selection is complete.
+Before you start using orchestration workflow, you will need several things:
-In language studio, find the **Understand questions and conversational language** section, and select **Orchestration workflow**.
+* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services).
+* An Azure Language resource
-You will see the orchestration workflow projects page.
+### Create a Language resource
-<!--:::image type="content" source="../media/projects-page.png" alt-text="A screenshot showing the Conversational Language Understanding projects page." lightbox="../media/projects-page.png":::-->
+Before you start using orchestration workflow, you will need an Azure Language resource.
-## Create an orchestration workflow project
+> [!NOTE]
+> * You need to have an **owner** role assigned on the resource group to create a Language resource.
-Select **Create new project**. When creating your workflow project, you need to provide the following details:
-- Name: Project name-- Description: Optional project description-- Utterances primary language: The primary language of your utterances.
-## Building schema and adding intents
-Once you're done creating a project, you can connect it to the other projects and services you want to orchestrate to. Each connection is represented by its type and relevant data.
+## Sign in to Language Studio
-To create a new intent, click on *+Add* button and start by giving your intent a **name**. You will see two options, to connect to a project or not. You can connect to (LUIS, question answering (QnA), or Conversational Language Understanding) projects, or choose the **no** option.
+To create a new intent, click on the *+Add* button and start by giving your intent a **name**. You will see two options: to connect to a project or not. You can connect to LUIS, question answering, or conversational language understanding projects, or choose the **no** option.
-> [!NOTE]
-> The list of projects you can connect to are only projects that are owned by the same Language resource you are using to create the orchestration project.
+## Create an orchestration workflow project
+
+Once you have a Language resource created, create an orchestration workflow project.
+
+### [Language Studio](#tab/language-studio)
++
+### [REST APIs](#tab/rest-api)
+ +
+## Import an orchestration workflow project
-In Orchestration Workflow projects, the data used to train connected intents isn't provided within the project. Instead, the project pulls the data from the connected service (such as connected LUIS applications, Conversational Language Understanding projects, or Custom Question Answering knowledge bases) during training. However, if you create intents that are not connected to any service, you still need to add utterances to those intents.
+### [Language Studio](#tab/language-studio)
-## Export and import a project
+You can export an orchestration workflow project as a JSON file at any time by going to the orchestration workflow projects page, selecting a project, and from the top menu, clicking on **Export**.
-You can export an orchestration workflow project as a JSON file at any time by going to the projects page, selecting a project, and pressing **Export**.
That project can be reimported as a new project. If you import a project with the exact same name, it replaces the project's data with the newly imported project's data.
-To import a project, select the arrow button on the projects page next to **Create a new project** and select **Import**. Then select the orchestration workflow JSON file.
+To import a project, click on the arrow button next to **Create a new project** and select **Import**, then select the JSON file.
++
+### [REST APIs](#tab/rest-api)
+
+You can import an orchestration workflow JSON file into the service.
++++
+## Export project
+
+### [Language Studio](#tab/language-studio)
+
+You can export an orchestration workflow project as a JSON file at any time by going to the orchestration workflow projects page, selecting a project, and pressing **Export**.
+
+### [REST APIs](#tab/rest-api)
+
+You can export an orchestration workflow project as a JSON file at any time.
+++
+## Get orchestration project details
+
+### [Language Studio](#tab/language-studio)
++
+### [Rest APIs](#tab/rest-api)
++++
+## Delete resources
+
+### [Language Studio](#tab/language-studio)
++
+### [REST APIs](#tab/rest-api)
+
+When you don't need your project anymore, you can delete your project using the APIs.
++++++ ## Next Steps
cognitive-services Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/how-to/deploy-model.md
+
+ Title: How to deploy an orchestration workflow project
+
+description: Learn about deploying orchestration workflow projects.
++++++ Last updated : 05/20/2022++++
+# Deploy an orchestration workflow model
+
+Once you are satisfied with how your model performs, it's ready to be deployed and queried for predictions from utterances. Deploying a model makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger).
+
+## Prerequisites
+
+* A successfully [created project](create-project.md)
+* [Labeled utterances](tag-utterances.md) and successfully [trained model](train-model.md)
+<!--* Reviewed the [model evaluation details](view-model-evaluation.md) to determine how your model is performing.-->
+
+See [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
+
+## Deploy model
+
+After you have reviewed the model's performance and decided it's fit to be used in your environment, you need to assign it to a deployment to be able to query it. Assigning the model to a deployment makes it available for use through the [prediction API](https://aka.ms/clu-apis). It is recommended to create a deployment named `production` to which you assign the best model you have built so far and use it in your system. You can create another deployment called `staging` to which you can assign the model you're currently working on to be able to test it. You can have a maximum of 10 deployments in your project.
+
+# [Language Studio](#tab/language-studio)
+
+
+# [REST APIs](#tab/rest-api)
+
+### Submit deployment job
++
+### Get deployment job status
++++
+## Swap deployments
+
+After you are done testing a model assigned to one deployment, you might want to assign it to another deployment. Swapping deployments involves:
+* Taking the model assigned to the first deployment and assigning it to the second deployment.
+* Taking the model assigned to the second deployment and assigning it to the first deployment.
+
+This can be used to swap your `production` and `staging` deployments when you want to take the model assigned to `staging` and assign it to `production`.
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++
+## Delete deployment
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++
+## Next steps
+
+Use [prediction API to query your model](call-api.md)
cognitive-services Deploy Query Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/how-to/deploy-query-model.md
- Title: How to send a API requests to an orchestration workflow project-
-description: Learn about sending a request to orchestration workflow projects.
------ Previously updated : 03/03/2022----
-# Deploy and test model
-
-After you have [trained a model](./train-model.md) on your dataset, you're ready to deploy it. After deploying your model, you'll be able to query it for predictions.
-
-> [!Tip]
-> Before deploying a model, make sure to view the model details to make sure that the model is performing as expected.
-> You can only have ten deployment names.
-
-## Orchestration workflow model deployments
-
-Deploying a model hosts and makes it available for predictions through an endpoint.
-
-When a model is deployed, you will be able to test the model directly in the portal or by calling the API associated to it.
-
-1. From the left side, click on **Deploy model**.
-
-2. Click on **Add deployment** to submit a new deployment job.
-
- In the window that appears, you can create a new deployment name by giving the deployment a name or override an existing deployment name. Then, you can add a trained model to this deployment name and press next.
-
-3. If you're connecting one or more LUIS applications or conversational language understanding projects, you have to specify the deployment name.
-
- No configurations are required for custom question answering or unlinked intents.
-
- LUIS projects **must be published** to the slot configured during the Orchestration deployment, and custom question answering KBs must also be published to their Production slots.
-
-1. Click on **Add deployment** to submit a new deployment job.
-
- Like conversation projects, In the window that appears, you can create a new deployment name by giving the deployment a name or override an existing deployment name. Then, you can add a trained model to this deployment name and press next.
-
- <!--:::image type="content" source="../media/create-deployment-job-orch.png" alt-text="A screenshot showing deployment job creation in Language Studio." lightbox="../media/create-deployment-job-orch.png":::-->
-
-2. If you're connecting one or more LUIS applications or conversational language understanding projects, specify the deployment name.
-
- No configurations are required for custom question answering or unlinked intents.
-
- LUIS projects **must be published** to the slot configured during the Orchestration deployment, and custom question answering KBs must also be published to their Production slots.
-
- :::image type="content" source="../media/deploy-connected-services.png" alt-text="A screenshot showing the deployment screen for orchestration workflow projects." lightbox="../media/deploy-connected-services.png":::
-
-## Send a request to your model
-
-Once your model is deployed, you can begin using the deployed model for predictions. Outside of the test model page, you can begin calling your deployed model via API requests to your provided custom endpoint. This endpoint request obtains the intent and entity predictions defined within the model.
-
-You can get the full URL for your endpoint by going to the **Deploy model** page, selecting your deployed model, and clicking on "Get prediction URL".
--
-Add your key to the `Ocp-Apim-Subscription-Key` header value, and replace the query and language parameters.
-
-> [!TIP]
-> As you construct your requests, see the [quickstart](../quickstart.md?pivots=rest-api#query-model) and REST API [reference documentation](https://aka.ms/clu-apis) for more information.
-
-### Use the client libraries (Azure SDK)
-
-You can also use the client libraries provided by the Azure SDK to send requests to your model.
-
-> [!NOTE]
-> The client library for Orchestration workflow is only available for:
-> * .NET
-> * Python
-
-1. Go to your resource overview page in the [Azure portal](https://portal.azure.com/#home)
-
-2. From the menu on the left side, select **Keys and Endpoint**. Use endpoint for the API requests and you will need the key for `Ocp-Apim-Subscription-Key` header.
-
- :::image type="content" source="../../custom-classification/media/get-endpoint-azure.png" alt-text="Get the Azure endpoint" lightbox="../../custom-classification/media/get-endpoint-azure.png":::
-
-3. Download and install the client library package for your language of choice:
-
- |Language |Package version |
- |||
- |.NET | [5.2.0-beta.2](https://www.nuget.org/packages/Azure.AI.TextAnalytics/5.2.0-beta.2) |
- |Python | [5.2.0b2](https://pypi.org/project/azure-ai-textanalytics/5.2.0b2/) |
-
-4. After you've installed the client library, use the following samples on GitHub to start calling the API.
-
- * [C#](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/cognitivelanguage/Azure.AI.Language.Conversations/samples)
- * [Python](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cognitivelanguage/azure-ai-language-conversations/samples)
-
-5. See the following reference documentation for more information:
-
- * [C#](/dotnet/api/azure.ai.language.conversations?view=azure-dotnet-preview&preserve-view=true)
- * [Python](/python/api/azure-ai-language-conversations/azure.ai.language.conversations?view=azure-python-preview&preserve-view=true)
-
-## API response for an orchestration workflow project
-
-Orchestration workflow projects return with the response of the top scoring intent, and the response of the service it is connected to.
-- Within the intent, the *targetKind* parameter lets you determine the type of response that was returned by the orchestrator's top intent (conversation, LUIS, or QnA Maker).-- You will get the response of the connected service in the *result* parameter. -
-Within the request, you can specify additional parameters for each connected service, in the event that the orchestrator routes to that service.
-- Within the project parameters, you can optionally specify a different query to the connected service. If you don't specify a different query, the original query will be used.-- The *direct target* parameter allows you to bypass the orchestrator's routing decision and directly target a specific connected intent to force a response for it.-
-## Next steps
-
-* [Orchestration project overview](../overview.md)
cognitive-services Tag Utterances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/how-to/tag-utterances.md
Previously updated : 03/03/2022 Last updated : 05/20/2022
-# How to tag utterances in orchestration workflow projects
+# Add utterances in Language Studio
-Once you have [built a schema](create-project.md) for your project, you should add training utterances to your project. The utterances should be similar to what your users will use when interacting with the project. When you add an utterance you have to assign which intent it belongs to. You can only add utterances to the created intents within the project and not the connected intents.
+Once you have [built a schema](build-schema.md), you should add training and testing utterances to your project. The utterances should be similar to what your users will use when interacting with the project. When you add an utterance, you have to assign which intent it belongs to.
-## Filter Utterances
+Adding utterances is a crucial step in the project development lifecycle; this data will be used in the next step when training your model so that your model can learn from the added data. If you already have utterances, you can directly [import them into your project](create-project.md#import-an-orchestration-workflow-project), but you need to make sure that your data follows the [accepted data format](../concepts/data-formats.md). Labeled data informs the model how to interpret text, and is used for training and evaluation.
-Clicking on **Filter** lets you view only the utterances associated to the intents you select in the filter pane.
-When clicking on an intent in the [build schema](./create-project.md) page then you'll be moved to the **Tag Utterances** page, with that intent filtered automatically.
+## Prerequisites
+
+* A successfully [created project](create-project.md).
+
+See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
++
+## How to add utterances
+
+Use the following steps to add utterances:
+
+1. Go to your project page in [Language Studio](https://aka.ms/languageStudio).
+
+2. From the left side menu, select **Add utterances**.
+
+3. From the top pivots, you can change the view to be **training set** or **testing set**. Learn more about [training and testing sets](train-model.md#data-splitting) and how they're used for model training and evaluation.
+
+4. From the **Select intent** dropdown menu, select one of the intents. Type in your utterance, and press the enter key in the utterance's text box to add the utterance. You can also upload your utterances directly by clicking on **Upload utterance file** from the top menu; make sure it follows the [accepted format](../concepts/data-formats.md#utterance-format).
+
+ > [!Note]
+ > If you are planning on using **Automatically split the testing set from training data** splitting, add all your utterances to the training set.
+ > You can add training utterances to **non-connected** intents only.
+
+ :::image type="content" source="../media/tag-utterances.png" alt-text="A screenshot of the page for tagging utterances in Language Studio." lightbox="../media/tag-utterances.png":::
+
+5. Under **Distribution** you can view the distribution across training and testing sets. You can **view utterances per intent**:
+
+* Utterances per non-connected intent
+* Utterances per connected intent
## Next Steps
-* [Train and Evaluate Model](./train-model.md)
+* [Train Model](./train-model.md)
cognitive-services Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/how-to/train-model.md
Title: How to train and evaluate models in orchestration workflow projects
+ Title: How to train and evaluate models in orchestration workflow
+description: Learn how to train a model for orchestration workflow projects.
-description: Use this article to train an orchestration model and view its evaluation details to make improvements.
Previously updated : 03/21/2022 Last updated : 05/20/2022
-# Train and evaluate orchestration workflow models
+# Train your orchestration workflow model
-After you have completed [tagging your utterances](./tag-utterances.md), you can train your model. Training is the act of converting the current state of your project's training data to build a model that can be used for predictions. Every time you train, you have to name your training instance.
+Training is the process where the model learns from your [labeled utterances](tag-utterances.md). After training is completed, you will be able to [view model performance](view-model-evaluation.md).
-You can create and train multiple models within the same project. However, if you re-train a specific model it overwrites the last state.
+To train a model, start a training job. Only successfully completed jobs create a model. Training jobs expire after seven days; after this time, you will no longer be able to retrieve the job details. If your training job completed successfully and a model was created, it won't be affected by the job expiring. You can only have one training job running at a time, and you can't start other jobs in the same project.
-The training times can be anywhere from a few seconds, up to a couple of hours when you reach high numbers of utterances.
+The training times can be anywhere from a few seconds when dealing with simple projects, up to a couple of hours when you reach the [maximum limit](../service-limits.md) of utterances.
-## Train model
+Model evaluation is triggered automatically after training is completed successfully. The evaluation process starts by using the trained model to run predictions on the utterances in the testing set, and compares the predicted results with the provided labels (which establishes a baseline of truth). The results are returned so you can review the [model's performance](view-model-evaluation.md).
-Select **Train model** on the left of the screen. Select **Start a training job** from the top menu.
+## Prerequisites
-Enter a new model name or select an existing model from the **Model Name** dropdown.
+* A successfully [created project](create-project.md) with a configured Azure blob storage account
+See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
-Click the **Train** button and wait for training to complete. You will see the training status of your model in the view model details page. Only successfully completed jobs will generate models.
+## Data splitting
-## Evaluate model
+Before you start the training process, labeled utterances in your project are divided into a training set and a testing set. Each one of them serves a different function.
+The **training set** is used in training the model; this is the set from which the model learns the labeled utterances.
+The **testing set** is a blind set that isn't introduced to the model during training but only during evaluation.
-After model training is completed, you can view your model details and see how well it performs against the test set in the training step. Observing how well your model performed is called evaluation. The test set is composed of 20% of your utterances, and this split is done at random before training. The test set consists of data that was not introduced to the model during the training process. For the evaluation process to complete there must be at least 10 utterances in your training set.
+After the model is trained successfully, the model can be used to make predictions from the utterances in the testing set. These predictions are used to calculate [evaluation metrics](../concepts/evaluation-metrics.md).
-In the **view model details** page, you'll be able to see all your models, with their score. Scores are only available if you have enabled evaluation before hand.
+It is recommended to make sure that all your intents are adequately represented in both the training and testing set.
-* Click on the model name for more details. A model name is only clickable if you've enabled evaluation before hand.
-* In the **Overview** section you can find the macro precision, recall and F1 score for the collective intents.
-* Under the **Intents** tab you can find the micro precision, recall and F1 score for each intent separately.
+Orchestration workflow supports two methods for data splitting:
-> [!NOTE]
-> If you don't see any of the intents you have in your model displayed here, it is because they weren't in any of the utterances that were used for the test set.
+* **Automatically splitting the testing set from training data**: The system will split your tagged data between the training and testing sets, according to the percentages you choose. The recommended percentage split is 80% for training and 20% for testing.
-You can view the [confusion matrix](../concepts/evaluation-metrics.md) for intents by clicking on the **Test set confusion matrix** tab at the top fo the screen.
+ > [!NOTE]
+ > If you choose the **Automatically splitting the testing set from training data** option, only the data assigned to the training set will be split according to the percentages provided.
+
+* **Use a manual split of training and testing data**: This method enables users to define which utterances should belong to which set. This step is only enabled if you have added utterances to your testing set during [labeling](tag-utterances.md).
+
+> [!Note]
+> You can only add utterances to the training dataset for non-connected intents.
++
+## Train model
+
+### Start training job
+
+#### [Language Studio](#tab/language-studio)
++
+#### [REST APIs](#tab/rest-api)
++++
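+The full REST request for this step is provided in the tab above. As an additional minimal sketch of how the data splitting options described earlier might appear in a training request body, see the following; the endpoint path, api-version, and field names here are assumptions modeled on the authoring API pattern, so verify them against the authoring REST API reference before use.
+
+```bash
+# Hypothetical sketch only: start a training job with an automatic 80/20 split.
+# <ENDPOINT>, <PROJECT-NAME>, <YOUR-KEY>, and the api-version are placeholders/assumptions.
+curl -i -X POST "https://<ENDPOINT>/language/authoring/analyze-conversations/projects/<PROJECT-NAME>/:train?api-version=2022-05-01" \
+-H "Content-Type: application/json" \
+-H "Ocp-Apim-Subscription-Key: <YOUR-KEY>" \
+-d '
+{
+    "modelLabel": "v1",
+    "trainingMode": "standard",
+    "evaluationOptions": {
+        "kind": "percentage",
+        "trainingSplitPercentage": 80,
+        "testingSplitPercentage": 20
+    }
+}'
+```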
+### Get training job status
+
+#### [Language Studio](#tab/language-studio)
+
+Click on the training job ID in the list. A side pane will appear where you can check the **Training progress**, **Job status**, and other details for this job.
+
+<!--:::image type="content" source="../../../media/train-pane.png" alt-text="A screenshot showing the training job details." lightbox="../../../media/train-pane.png":::-->
++
+#### [REST APIs](#tab/rest-api)
+
+Training could take some time depending on the size of your training data and the complexity of your schema. You can use the following request to keep polling the status of the training job until it's successfully completed.
+++
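+A minimal, hypothetical sketch of such a polling call is shown below; the path and api-version are assumptions, and in practice you would use the job URL returned in the `operation-location` response header of the training request.
+
+```bash
+# Hypothetical sketch only: poll the training job status until it reports "succeeded".
+# <ENDPOINT>, <PROJECT-NAME>, <JOB-ID>, <YOUR-KEY>, and the api-version are placeholders/assumptions.
+curl -X GET "https://<ENDPOINT>/language/authoring/analyze-conversations/projects/<PROJECT-NAME>/train/jobs/<JOB-ID>?api-version=2022-05-01" \
+-H "Ocp-Apim-Subscription-Key: <YOUR-KEY>"
+```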
+### Cancel training job
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
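+A minimal, hypothetical sketch of a cancellation request is shown below; the `:cancel` action, path, and api-version are assumptions, so verify them against the authoring REST API reference.
+
+```bash
+# Hypothetical sketch only: cancel a running training job.
+# <ENDPOINT>, <PROJECT-NAME>, <JOB-ID>, <YOUR-KEY>, and the api-version are placeholders/assumptions.
+curl -X POST "https://<ENDPOINT>/language/authoring/analyze-conversations/projects/<PROJECT-NAME>/train/jobs/<JOB-ID>/:cancel?api-version=2022-05-01" \
+-H "Ocp-Apim-Subscription-Key: <YOUR-KEY>"
+```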
+++ ## Next steps * [Model evaluation metrics](../concepts/evaluation-metrics.md)
-* [Deploy and query the model](./deploy-query-model.md)
+* [Deploy and query the model](./deploy-model.md)
cognitive-services View Model Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/how-to/view-model-evaluation.md
+
+ Title: How to view orchestration workflow model details
+description: Learn how to view details for your model and evaluate its performance.
+++++++ Last updated : 04/26/2022++++
+# View orchestration workflow model details
+
+After model training is completed, you can view your model details and see how well it performs against the test set. Observing how well your model performed is called evaluation. The test set consists of data that wasn't introduced to the model during the training process.
+
+> [!NOTE]
+> Using the **Automatically split the testing set from training data** option may result in different model evaluation results every time you [train a new model](train-model.md), as the test set is selected randomly from your utterances. To make sure that the evaluation is calculated on the same test set every time you train a model, make sure to use the **Use a manual split of training and testing data** option when starting a training job and define your **Testing set** when you [add your utterances](tag-utterances.md).
++
+## Prerequisites
+
+Before viewing a model's evaluation, you need:
+
+* [An orchestration workflow project](create-project.md).
+* A successfully [trained model](train-model.md)
+
+See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
+
+## Model details
+
+### [Language studio](#tab/Language-studio)
+
+In the **view model details** page, you'll be able to see all your models, with their current training status, and the date they were last trained.
++
+### [REST APIs](#tab/REST-APIs)
++++
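+A minimal, hypothetical sketch of retrieving model details through the REST API is shown below; the path and api-version are assumptions, so see the authoring REST API reference for the exact operation.
+
+```bash
+# Hypothetical sketch only: list the trained models in a project and their details.
+# <ENDPOINT>, <PROJECT-NAME>, <YOUR-KEY>, and the api-version are placeholders/assumptions.
+curl -X GET "https://<ENDPOINT>/language/authoring/analyze-conversations/projects/<PROJECT-NAME>/models?api-version=2022-05-01" \
+-H "Ocp-Apim-Subscription-Key: <YOUR-KEY>"
+```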
+## Delete model
+
+### [Language studio](#tab/Language-studio)
+++
+### [REST APIs](#tab/REST-APIs)
++++++
+## Next steps
+
+* As you review how your model performs, learn about the [evaluation metrics](../concepts/evaluation-metrics.md) that are used.
+* If you're happy with your model performance, you can [deploy your model](deploy-model.md).
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/language-support.md
+
+ Title: Language support for orchestration workflow
+
+description: Learn about the languages supported by orchestration workflow.
++++++ Last updated : 05/17/2022++++
+# Language support for orchestration workflow projects
+
+Use this article to learn about the languages currently supported by orchestration workflow projects.
+
+## Multilingual options
+
+Orchestration workflow projects do not support the multi-lingual option.
++
+## Language support
+
+Orchestration workflow projects support the following languages:
+
+| Language | Language code |
+| | |
+| German | `de` |
+| English | `en-us` |
+| Spanish | `es` |
+| French | `fr` |
+| Italian | `it` |
+| Portuguese (Brazil) | `pt-br` |
++
+## Next steps
+
+* [Orchestration workflow overview](overview.md)
+* [Service limits](service-limits.md)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/overview.md
Previously updated : 03/18/2022 Last updated : 04/14/2022
-# What are orchestration workflows?
+# What is orchestration workflow?
-Orchestration workflow is a cloud-based service that enables you to train language models to connect your applications in:
-* Custom Language Understanding
-* Question Answering
-* LUIS
-* QnA maker
+Orchestration workflow is one of the features offered by [Azure Cognitive Service for Language](../overview.md). It is a cloud-based API service that applies machine-learning intelligence to enable you to build orchestration models to connect [Conversational Language Understanding (CLU)](../conversational-language-understanding/overview.md), [Question Answering](../question-answering/overview.md) projects and [LUIS](../../luis/what-is-luis.md) applications.
+By creating an orchestration workflow, developers can iteratively tag utterances, train and evaluate model performance before making it available for consumption.
+To simplify building and customizing your model, the service offers a custom web portal that can be accessed through the [Language studio](https://aka.ms/languageStudio). You can easily get started with the service by following the steps in this [quickstart](quickstart.md).
-The API is a part of [Azure Cognitive Services](../../index.yml), a collection of machine learning and AI algorithms in the cloud for your development projects. You can use these features with the REST API, or the client libraries.
-## Features
+This documentation contains the following article types:
-* Advanced natural language understanding technology using advanced neural networks.
-* Orchestration project types that allow you to connect services including other Conversational Language Understanding projects, custom question answering knowledge bases, and LUIS applications.
+* [Quickstarts](quickstart.md) are getting-started instructions to guide you through making requests to the service.
+* [Concepts](concepts/evaluation-metrics.md) provide explanations of the service functionality and features.
+* [How-to guides](how-to/tag-utterances.md) contain instructions for using the service in more specific or customized ways.
++
+## Example usage scenarios
+
+Orchestration workflow can be used in multiple scenarios across a variety of industries. Some examples are:
+
+### Enterprise chat bot
+
+In a large corporation, an enterprise chat bot may handle a variety of employee affairs. It may be able to handle frequently asked questions served by a custom question answering knowledge base, a calendar-specific skill served by conversational language understanding, and an interview feedback skill served by LUIS. The bot needs to be able to appropriately route incoming requests to the correct service. Orchestration workflow allows you to connect those skills to one project that handles the routing of incoming requests appropriately to power the enterprise bot.
+
+## Project development lifecycle
+
+Creating an orchestration workflow project typically involves several different steps.
++
+Follow these steps to get the most out of your model:
+
+1. **Build schema**: Know your data and define the actions and relevant information that need to be recognized from users' input utterances. Create the [intents](glossary.md#intent) that you want to assign to users' utterances and the projects you want to connect to your orchestration project.
+
+2. **Tag data**: The quality of data tagging is a key factor in determining model performance.
+
+3. **Train model**: Your model starts learning from your tagged data.
+
+4. **View model evaluation details**: View the evaluation details for your model to determine how well it performs when introduced to new data.
+
+5. **Deploy model**: Deploying a model makes it available for use via the [prediction API](https://aka.ms/clu-runtime-api).
+
+6. **Predict intents**: Use your custom model to predict intents from users' utterances.
## Reference documentation and code samples
-As you use orchestration workflow in your applications, see the following reference documentation and samples for Azure Cognitive Services for Language:
+As you use orchestration workflow, see the following reference documentation and samples for Azure Cognitive Services for Language:
|Development option / language |Reference documentation |Samples | ||||
-|REST API | [REST API documentation](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-2-Preview-2/operations/Analyze) | |
-|C# | [C# documentation](/dotnet/api/azure.ai.textanalytics?view=azure-dotnet-preview&preserve-view=true) | [C# samples](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/textanalytics/Azure.AI.TextAnalytics/samples) |
-| Java | [Java documentation](/java/api/overview/azure/ai-textanalytics-readme?view=azure-java-preview&preserve-view=true) | [Java Samples](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/textanalytics/azure-ai-textanalytics/src/samples) |
-|JavaScript | [JavaScript documentation](/javascript/api/overview/azure/ai-text-analytics-readme?view=azure-node-preview&preserve-view=true) | [JavaScript samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/textanalytics/ai-text-analytics/samples/v5) |
-|Python | [Python documentation](/python/api/overview/azure/ai-textanalytics-readme?view=azure-python-preview&preserve-view=true) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/textanalytics/azure-ai-textanalytics/samples) |
+|REST APIs (Authoring) | [REST API documentation](https://aka.ms/clu-authoring-apis) | |
+|REST APIs (Prediction) | [REST API documentation](https://aka.ms/clu-runtime-api) | |
+|C# | [C# documentation](/dotnet/api/azure.ai.textanalytics?view=azure-dotnet-preview&preserve-view=true) | [C# samples](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/cognitivelanguage/Azure.AI.Language.Conversations/samples) |
+|Python | [Python documentation](/python/api/overview/azure/ai-textanalytics-readme?view=azure-python-preview&preserve-view=true) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cognitivelanguage/azure-ai-language-conversations/samples) |
## Responsible AI
-An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which itΓÇÖs deployed. Read the following articles for more information:
+An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the [transparency note for CLU and orchestration workflow]() to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
[!INCLUDE [Responsible AI links](../includes/overview-responsible-ai-links.md)]+
+## Next steps
+
+* Use the [quickstart article](quickstart.md) to start using orchestration workflow.
+
+* As you go through the project development lifecycle, review the [glossary](glossary.md) to learn more about the terms used throughout the documentation for this feature.
+
+* Remember to view the [service limits](service-limits.md) for information such as regional availability.
cognitive-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/service-limits.md
+
+ Title: Orchestration workflow limits
+
+description: Learn about the data, region, and throughput limits for Orchestration workflow
++++++ Last updated : 05/18/2022++++
+# Orchestration workflow limits
+
+Use this article to learn about the data and service limits when using orchestration workflow.
+
+## Language resource limits
+
+* Your Language resource has to be created in one of the [supported regions](#regional-support).
+
+* Pricing tiers
+
+ |Tier|Description|Limit|
+ |--|--|--|
+ |F0 |Free tier|You are only allowed one Language resource with the F0 tier per subscription.|
+ |S |Paid tier|You can have up to 100 Language resources in the S tier per region.|
++
+See [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/language-service/) for more information.
+
+* You can have up to **500** projects per resource.
+
+* Project names have to be unique within the same resource across all custom features.
+
+## Regional support
+
+Orchestration workflow is only available in some Azure regions. To use orchestration workflow, you must choose a Language resource in one of the following regions:
+
+* West US 2
+* East US
+* East US 2
+* West US 3
+* South Central US
+* West Europe
+* North Europe
+* UK South
+* Australia East
+
+## API limits
+
+|Item|Request type| Maximum limit|
+|:-|:-|:-|
+|Authoring API|POST|10 per minute|
+|Authoring API|GET|100 per minute|
+|Prediction API|GET/POST|1,000 per minute|
+
+## Quota limits
+
+|Pricing tier |Item |Limit |
+| | | |
+|F|Training time| 1 hour per month|
+|S|Training time| Unlimited, Pay as you go |
+|F|Prediction Calls| 5,000 requests per month |
+|S|Prediction Calls| Unlimited, Pay as you go |
+
+## Data limits
+
+The following limits are observed for orchestration workflow.
+
+|Item|Lower Limit| Upper Limit |
+| | | |
+|Count of utterances per project | 1 | 15,000|
+|Utterance length in characters | 1 | 500 |
+|Count of intents per project | 1 | 500|
+|Count of trained models per project| 0 | 10 |
+|Count of deployments per project| 0 | 10 |
+
+## Naming limits
+
+| Attribute | Limits |
+|--|--|
+| Project name | You can only use letters `(a-z, A-Z)`, and numbers `(0-9)` with no spaces. Maximum allowed length is 50 characters. |
+| Model name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. |
+| Deployment name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. |
+| Intent name| You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. |
++
+## Next steps
+
+* [Orchestration workflow overview](overview.md)
cognitive-services Connect Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/tutorials/connect-services.md
+
+ Title: Integrate custom question answering and conversational language understanding into orchestration workflows
+description: Learn how to connect different projects with orchestration workflow.
+keywords: conversational language understanding, bot framework, bot, language understanding, nlu
+++++++ Last updated : 05/17/2022++
+# Connect different services with orchestration workflow
+
+Orchestration workflow is a feature that allows you to connect different projects from LUIS, conversational language understanding, and custom question answering in one project. You can then use this project for predictions under one endpoint. The orchestration project predicts which connected project should be called, automatically routes the request to that project, and returns its response.
+
+In this tutorial, you will learn how to connect a custom question answering knowledge base with a conversational language understanding project. You will then call the project using the .NET SDK sample for orchestration.
+
+This tutorial will include creating a **chit chat** knowledge base and an **email commands** project. Chit chat will deal with common niceties and greetings with static responses.
++
+## Prerequisites
+
+- Create a [Language resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics) **and select the custom question answering feature** in the Azure portal to get your key and endpoint. After it deploys, click **Go to resource**.
+ - You will need the key and endpoint from the resource you create to connect your bot to the API. You'll paste your key and endpoint into the code below later in the tutorial. Copy them from the **Keys and Endpoint** tab in your resource.
+ - When you enable custom question answering, you must select an Azure search resource to connect to.
+ - Make sure the region of your resource is supported by [conversational language understanding](../../conversational-language-understanding/service-limits.md#regional-availability).
+- Download the **OrchestrationWorkflowSample** sample in [**.NET**](https://aka.ms/orchestration-sample).
+
+## Create a custom question answering knowledge base
+
+1. Sign into the [Language Studio](https://language.cognitive.azure.com/) and select your Language resource.
+2. Find and select the [custom question answering](https://language.cognitive.azure.com/questionAnswering/projects/) card in the homepage.
+3. Click on **Create new project** and add the name **chitchat** with the language _English_ before clicking on **Create project**.
+4. When the project loads, click on **Add source** and select _Chit chat_. Select the professional personality for chit chat.
+
+ :::image type="content" source="../media/chit-chat.png" alt-text="A screenshot of the chit chat popup." lightbox="../media/chit-chat.png":::
+
+5. Go to **Deploy knowledge base** from the left navigation menu, click on **Deploy**, and confirm in the popup that appears.
+
+You are now done with deploying your knowledge base for chit chat. You can explore the type of questions and answers to expect in the **Edit knowledge base** tab.
+
+## Create a conversational language understanding project
+
+1. In Language Studio, go to the [conversational language understanding](https://language.cognitive.azure.com/clu/projects) service.
+2. Download the **EmailProject.json** sample file [here](https://aka.ms/clu-sample-json).
+3. Click on the arrow next to **Create new project** and select **Import**. Browse to the EmailProject.json file you downloaded and press Done.
+
+ :::image type="content" source="../media/import-export.png" alt-text="A screenshot showing where to import a JSON file." lightbox="../media/import-export.png":::
+
+4. Once the project is loaded, click on **Training** on the left. Click on **Start a training job**, provide the model name **v1**, and press Train. All other settings such as **Standard Training** and the evaluation settings can be left as is.
+
+ :::image type="content" source="../media/train-model.png" alt-text="A screenshot of the training page." lightbox="../media/train-model.png":::
+
+5. Once training is complete, click on **Deployments** on the left. Click on **Add deployment** and create a new deployment with the name **Testing**, and assign model **v1** to the deployment.
+
+ :::image type="content" source="../media/deploy-model-tutorial.png" alt-text="A screenshot showing the model deployment page." lightbox="../media/deploy-model-tutorial.png":::
+
+You are now done with deploying a conversational language understanding project for email commands. You can explore the different commands in the **Utterances** page.
+
+## Create an orchestration workflow project
+
+1. In Language Studio, go to the [orchestration workflow](https://language.cognitive.azure.com/orchestration/projects) service.
+2. Click on **Create new project**. Use the name **Orchestrator** and the language _English_ before clicking **Next** and then **Done**.
+3. Once the project is created, click on **Add** in the **Build schema** page.
+4. Select _Yes, I want to connect it to an existing project_. Add the intent name **EmailIntent** and select **Conversational Language Understanding** as the connected service. Select the recently created **EmailProject** project for the project name before clicking on **Add Intent**.
++
+5. Add another intent but now select **Question Answering** as the service and select **chitchat** as the project name.
+6. Similar to conversational language understanding, go to **Training** and start a new training job with the name **v1** and press Train.
+7. Once training is complete, click on **Deployments** on the left. Click on **Add deployment** and create a new deployment with the name **Testing**, assign model **v1** to the deployment, and press Next.
+8. On the next page, select the deployment name **Testing** for the **EmailIntent**. This tells the orchestrator to call the **Testing** deployment in **EmailProject** when it routes to it. Custom question answering projects only have one deployment by default.
++
+Now your orchestration project is ready to be used. Any incoming request will be routed to either **EmailIntent** and the **EmailProject** in conversational language understanding or **ChitChatIntent** and the **chitchat** knowledge base.
+
+## Call the orchestration project with the Conversations SDK
+
+1. In the downloaded **OrchestrationWorkflowSample** solution, make sure to install all the required packages. In Visual Studio, go to _Tools_ > _NuGet Package Manager_ > _Package Manager Console_ and run the following command.
+
+```powershell
+dotnet add package Azure.AI.Language.Conversations
+```
+
+2. In `Program.cs`, replace `{api-key}` and the placeholder endpoint. Use the key and endpoint for the Language resource you created earlier. You can find them in the **Keys and Endpoint** tab in your Language resource in Azure.
+
+```csharp
+Uri endpoint = new Uri("https://myaccount.api.cognitive.microsoft.com");
+AzureKeyCredential credential = new AzureKeyCredential("{api-key}");
+```
+
+3. Replace the `orchestrationProject` parameters with **Orchestrator** and **Testing** as shown below, if they aren't set already.
+
+```csharp
+ConversationsProject orchestrationProject = new ConversationsProject("Orchestrator", "Testing");
+```
+
+4. Run the project or press F5 in Visual Studio.
+5. Input a query such as "read the email from matt" or "hello how are you". You'll observe a different response for each: a conversational language understanding **EmailProject** response for the first query, and an answer from the **chitchat** knowledge base for the second.
+
+**Conversational Language Understanding**:
+
+**Custom Question Answering**:
+
+You can now connect other projects to your orchestrator and begin building complex architectures with various projects.
+
+## Next steps
+
+- Learn more about [conversational language understanding](./../../conversational-language-understanding/overview.md).
+- Learn more about [custom question answering](./../../question-answering/overview.md).
++
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/overview.md
Last updated 02/01/2022 -+ # What is Azure Cognitive Service for Language?
Azure Cognitive Service for Language provides the following features:
> | [Analyze sentiment and opinions](sentiment-opinion-mining/overview.md) | This pre-configured feature provides sentiment labels (such as "*negative*", "*neutral*" and "*positive*") for sentences and documents. This feature can additionally provide granular information about the opinions related to words that appear in the text, such as the attributes of products or services. | * [Language Studio](language-studio.md) <br> * [REST API and client-library](sentiment-opinion-mining/quickstart.md) <br> * [Docker container](sentiment-opinion-mining/how-to/use-containers.md) > |[Language detection](language-detection/overview.md) | This pre-configured feature evaluates text, and determines the language it was written in. It returns a language identifier and a score that indicates the strength of the analysis. | * [Language Studio](language-studio.md) <br> * [REST API and client-library](language-detection/quickstart.md) <br> * [Docker container](language-detection/how-to/use-containers.md) | > |[Custom text classification (preview)](custom-classification/overview.md) | Build an AI model to classify unstructured text into custom classes that you define. | * [Language Studio](custom-classification/quickstart.md?pivots=language-studio)<br> * [REST API](language-detection/quickstart.md?pivots=rest-api) |
-> | [Text Summarization (preview)](text-summarization/overview.md) | This pre-configured feature extracts key sentences that collectively convey the essence of a document. | * [Language Studio](language-studio.md) <br> * [REST API and client-library](text-summarization/quickstart.md) |
+> | [Document summarization (preview)](summarization/overview.md?tabs=document-summarization) | This pre-configured feature extracts key sentences that collectively convey the essence of a document. | * [Language Studio](language-studio.md) <br> * [REST API and client-library](summarization/quickstart.md) |
+> | [Conversation summarization (preview)](summarization/overview.md?tabs=conversation-summarization) | This pre-configured feature summarizes issues and summaries in transcripts of customer-service conversations. | * [Language Studio](language-studio.md) <br> * [REST API](summarization/quickstart.md?tabs=rest-api) |
> | [Conversational language understanding (preview)](conversational-language-understanding/overview.md) | Build an AI model to bring the ability to understand natural language into apps, bots, and IoT devices. | * [Language Studio](conversational-language-understanding/quickstart.md) > | [Question answering](question-answering/overview.md) | This pre-configured feature provides answers to questions extracted from text input, using semi-structured content such as: FAQs, manuals, and documents. | * [Language Studio](language-studio.md) <br> * [REST API and client-library](question-answering/quickstart/sdk.md) | > | [Orchestration workflow](orchestration-workflow/overview.md) | Train language models to connect your applications to question answering, conversational language understanding, and LUIS | * [Language Studio](orchestration-workflow/quickstart.md?pivots=language-studio) <br> * [REST API](orchestration-workflow/quickstart.md?pivots=rest-api) |
cognitive-services Conversations Entity Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/personally-identifiable-information/concepts/conversations-entity-categories.md
+
+ Title: Entity categories recognized by Conversational Personally Identifiable Information (detection) in Azure Cognitive Service for Language
+
+description: Learn about the entities the Conversational PII feature (preview) can recognize from conversation inputs.
++++++ Last updated : 05/15/2022++++
+# Supported customer content (PII) entity categories in conversations
+
+Use this article to find the entity categories that can be returned by the [conversational PII detection feature](../how-to-call-for-conversations.md). This feature runs a predictive model to identify, categorize, and redact sensitive information from an input conversation.
+
+The PII preview feature includes the ability to detect personal (`PII`) information from conversations.
+
+## Entity categories
+
+The following entity categories are returned when you send API requests to the PII feature.
+
+## Category: Name
+
+This category contains the following entity:
+
+ :::column span="":::
+ **Entity**
+
+ Name
+
+ :::column-end:::
+ :::column span="2":::
+ **Details**
+
+ Any first, middle, last, or full name is considered PII, regardless of whether it's the speaker's name, the agent's name, someone else's name, or a different version of the speaker's full name (Chris vs. Christopher).
+
+ To get this entity category, add `Name` to the `pii-categories` parameter. `Name` will be returned in the API response if detected.
+
+ :::column-end:::
+
+ :::column span="":::
+ **Supported document languages**
+
+ `en`
+ :::column-end:::
+
+## Category: PhoneNumber
+
+This category contains the following entity:
+
+ :::column span="":::
+ **Entity**
+
+ PhoneNumber
+
+ :::column-end:::
+ :::column span="2":::
+ **Details**
+
+ All telephone numbers (including toll-free numbers or numbers that may be easily found or considered public knowledge) are considered PII.
+
+ To get this entity category, add `PhoneNumber` to the `pii-categories` parameter. `PhoneNumber` will be returned in the API response if detected.
+
+ :::column-end:::
+
+ :::column span="":::
+ **Supported document languages**
+
+ `en`
+
+ :::column-end:::
++
+## Category: Address
+
+This category contains the following entity:
+
+ :::column span="":::
+ **Entity**
+
+ Address
+
+ :::column-end:::
+ :::column span="2":::
+ **Details**
+
+ Complete or partial addresses are considered PII. All addresses, regardless of what residence or institution the address belongs to (such as a personal residence, business, medical center, or government agency), are covered under this category.
+ Note:
+ * If the information is limited to city and state only, it isn't considered PII.
+ * If the information contains a street, zip code, or house number, all of the information is considered Address PII, including the city and state.
+
+ To get this entity category, add `Address` to the `pii-categories` parameter. `Address` will be returned in the API response if detected.
+
+ :::column-end:::
+
+ :::column span="":::
+ **Supported document languages**
+
+ `en`
+
+ :::column-end:::
++
+## Category: Email
+
+This category contains the following entity:
+
+ :::column span="":::
+ **Entity**
+
+ Email
+
+ :::column-end:::
+ :::column span="2":::
+ **Details**
+
+ All email addresses are considered PII.
+
+ To get this entity category, add `Email` to the `pii-categories` parameter. `Email` will be returned in the API response if detected.
+
+ :::column-end:::
+ :::column span="":::
+ **Supported document languages**
+
+ `en`
+
+ :::column-end:::
+
+## Category: NumericIdentifier
+
+This category contains the following entities:
+
+ :::column span="":::
+ **Entity**
+
+ NumericIdentifier
+
+ :::column-end:::
+ :::column span="2":::
+ **Details**
+
+ Any numeric or alphanumeric identifier that could contain PII.
+ Examples:
+ * Case Number
+ * Member Number
+ * Ticket number
+ * Bank account number
+ * Installation ID
+ * IP Addresses
+ * Product Keys
+ * Serial Numbers (1:1 relationship with a specific item/product)
+ * Shipping tracking numbers, etc.
+
+ To get this entity category, add `NumericIdentifier` to the `pii-categories` parameter. `NumericIdentifier` will be returned in the API response if detected.
+
+ :::column-end:::
+ :::column span="2":::
+ **Supported document languages**
+
+ `en`
+
+ :::column-end:::
+
+## Category: Credit card
+
+This category contains the following entity:
+
+ :::column span="":::
+ **Entity**
+
+ Credit card
+
+ :::column-end:::
+ :::column span="2":::
+ **Details**
+
+ Any credit card number, any security code on the back, or the expiration date is considered PII.
+
+ To get this entity category, add `CreditCard` to the `pii-categories` parameter. `CreditCard` will be returned in the API response if detected.
+
+ :::column-end:::
+ :::column span="2":::
+ **Supported document languages**
+
+ `en`
+
+ :::column-end:::
+
+## Next steps
+
+[How to detect PII in conversations](../how-to-call-for-conversations.md)
cognitive-services How To Call For Conversations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/personally-identifiable-information/how-to-call-for-conversations.md
+
+ Title: How to detect Personally Identifiable Information (PII) in conversations.
+
+description: This article will show you how to extract PII from chat and spoken transcripts and redact identifiable information.
++++++ Last updated : 05/10/2022+++++
+# How to detect and redact Personally Identifying Information (PII) in conversations
+
+The Conversational PII feature can evaluate conversations to extract sensitive information (PII) in the content across several pre-defined categories and redact them. This API operates on both transcribed text (referenced as transcripts) and chats.
+For transcripts, the API also enables redaction of the audio segments that contain PII, by providing the audio timing information for those segments.
+
+## Determine how to process the data (optional)
+
+### Specify the PII detection model
+
+By default, this feature will use the latest available AI model on your input. You can also configure your API requests to use a specific [model version](../concepts/model-lifecycle.md).
+
+### Language support
+
+Currently, the conversational PII preview API only supports the English language.
+
+### Region support
+
+Currently, the conversational PII preview API supports the following regions: East US, North Europe, and UK South.
+
+## Submitting data
+
+You can submit the input to the API as a list of conversation items. Analysis is performed upon receipt of the request. Because the API is asynchronous, there may be a delay between sending an API request and receiving the results. For information on the size and number of requests you can send per minute and second, see the data limits below.
+
+When using the async feature, the API results are available for 24 hours from the time the request was ingested, as indicated in the response. After this time period, the results are purged and are no longer available for retrieval.
+
+When you submit data to conversational PII, you can send one conversation (chat or spoken) per request.
+
+The API will attempt to detect all the [defined entity categories](concepts/conversations-entity-categories.md) for a given conversation input. If you want to specify which entities will be detected and returned, use the optional `piiCategories` parameter with the appropriate entity categories.
+
+For spoken transcripts, the entities detected will be returned based on the `redactionSource` parameter value provided. Currently, the supported values for `redactionSource` are `text`, `lexical`, `itn`, and `maskedItn` (which map to the Microsoft Speech to Text API's `display`\\`displayText`, `lexical`, `itn` and `maskedItn` formats respectively). Additionally, for spoken transcript input, this API will also provide audio timing information to empower audio redaction. To use the audio redaction feature, set the optional `includeAudioRedaction` flag to `true`. The audio redaction is performed based on the lexical input format.
++
+## Getting PII results
+
+When you get results from PII detection, you can stream the results to an application or save the output to a file on the local system. The API response will include [recognized entities](concepts/conversations-entity-categories.md), including their categories and subcategories, and confidence scores. The text string with the PII entities redacted will also be returned.
+
+## Examples
+
+# [Client libraries (Azure SDK)](#tab/client-libraries)
+
+1. Go to your resource overview page in the [Azure portal](https://portal.azure.com/#home)
+
+2. From the menu on the left side, select **Keys and Endpoint**. You will need one of the keys and the endpoint to authenticate your API requests.
+
+3. Download and install the client library package for your language of choice:
+
+ |Language |Package version |
+ |||
+ |.NET | [5.2.0-beta.2](https://www.nuget.org/packages/Azure.AI.TextAnalytics/5.2.0-beta.2) |
+ |Python | [5.2.0b2](https://pypi.org/project/azure-ai-textanalytics/5.2.0b2/) |
+
+4. After you've installed the client library, use the following samples on GitHub to start calling the API.
+
+ * [C#](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/samples/Sample9_RecognizeCustomEntities.md)
+ * [Java](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/textanalytics/azure-ai-textanalytics/src/samples/java/com/azure/ai/textanalytics/lro/RecognizeCustomEntities.java)
+ * [JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/textanalytics/ai-text-analytics/samples/v5/javascript/customText.js)
+ * [Python](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_recognize_custom_entities.py)
+
+5. See the following reference documentation for more information on the client, and return object:
+
+ * [C#](/dotnet/api/azure.ai.textanalytics?view=azure-dotnet-preview&preserve-view=true)
+ * [Java](/java/api/overview/azure/ai-textanalytics-readme?view=azure-java-preview&preserve-view=true)
+ * [JavaScript](/javascript/api/overview/azure/ai-text-analytics-readme?view=azure-node-preview&preserve-view=true)
+ * [Python](/python/api/azure-ai-textanalytics/azure.ai.textanalytics?view=azure-python-preview&preserve-view=true)
+
+# [REST API](#tab/rest-api)
+
+## Submit transcripts using speech-to-text
+
+Use the following example if you have conversations transcribed using the Speech service's [speech-to-text](../../Speech-Service/speech-to-text.md) feature:
+
+```bash
+curl -i -X POST https://your-language-endpoint-here/language/analyze-conversations?api-version=2022-05-15-preview \
+-H "Content-Type: application/json" \
+-H "Ocp-Apim-Subscription-Key: your-key-here" \
+-d \
+'
+{
+ "displayName": "Analyze conversations from xxx",
+ "analysisInput": {
+ "conversations": [
+ {
+ "id": "23611680-c4eb-4705-adef-4aa1c17507b5",
+ "language": "en",
+ "modality": "transcript",
+ "conversationItems": [
+ {
+ "participantId": "agent_1",
+ "id": "8074caf7-97e8-4492-ace3-d284821adacd",
+ "text": "Good morning.",
+ "lexical": "good morning",
+ "itn": "good morning",
+ "maskedItn": "good morning",
+ "audioTimings": [
+ {
+ "word": "good",
+ "offset": 11700000,
+ "duration": 2100000
+ },
+ {
+ "word": "morning",
+ "offset": 13900000,
+ "duration": 3100000
+ }
+ ]
+ },
+ {
+ "participantId": "agent_1",
+ "id": "0d67d52b-693f-4e34-9881-754a14eec887",
+ "text": "Can I have your name?",
+ "lexical": "can i have your name",
+ "itn": "can i have your name",
+ "maskedItn": "can i have your name",
+ "audioTimings": [
+ {
+ "word": "can",
+ "offset": 44200000,
+ "duration": 2200000
+ },
+ {
+ "word": "i",
+ "offset": 46500000,
+ "duration": 800000
+ },
+ {
+ "word": "have",
+ "offset": 47400000,
+ "duration": 1500000
+ },
+ {
+ "word": "your",
+ "offset": 49000000,
+ "duration": 1500000
+ },
+ {
+ "word": "name",
+ "offset": 50600000,
+ "duration": 2100000
+ }
+ ]
+ },
+ {
+ "participantId": "customer_1",
+ "id": "08684a7a-5433-4658-a3f1-c6114fcfed51",
+ "text": "Sure that is John Doe.",
+ "lexical": "sure that is john doe",
+ "itn": "sure that is john doe",
+ "maskedItn": "sure that is john doe",
+ "audioTimings": [
+ {
+ "word": "sure",
+ "offset": 5400000,
+ "duration": 6300000
+ },
+ {
+ "word": "that",
+ "offset": 13600000,
+ "duration": 2300000
+ },
+ {
+ "word": "is",
+ "offset": 16000000,
+ "duration": 1300000
+ },
+ {
+ "word": "john",
+ "offset": 17400000,
+ "duration": 2500000
+ },
+ {
+ "word": "doe",
+ "offset": 20000000,
+ "duration": 2700000
+ }
+ ]
+ }
+ ]
+ }
+ ]
+ },
+ "tasks": [
+ {
+ "taskName": "analyze 1",
+ "kind": "ConversationalPIITask",
+ "parameters": {
+ "modelVersion": "2022-05-15-preview",
+ "redactionSource": "text",
+ "includeAudioRedaction": true,
+ "piiCategories": [
+ "all"
+ ]
+ }
+ }
+ ]
+}
+'
+```
+
+## Submit text chats
+
+Use the following example if you have conversations that originated in text. For example, conversations through a text-based chat client.
+
+```bash
+curl -i -X POST https://your-language-endpoint-here/language/analyze-conversations?api-version=2022-05-15-preview \
+-H "Content-Type: application/json" \
+-H "Ocp-Apim-Subscription-Key: your-key-here" \
+-d \
+'
+{
+ "displayName": "Analyze conversations from xxx",
+ "analysisInput": {
+ "conversations": [
+ {
+ "id": "23611680-c4eb-4705-adef-4aa1c17507b5",
+ "language": "en",
+ "modality": "text",
+ "conversationItems": [
+ {
+ "participantId": "agent_1",
+ "id": "8074caf7-97e8-4492-ace3-d284821adacd",
+ "text": "Good morning."
+ },
+ {
+ "participantId": "agent_1",
+ "id": "0d67d52b-693f-4e34-9881-754a14eec887",
+ "text": "Can I have your name?"
+ },
+ {
+ "participantId": "customer_1",
+ "id": "08684a7a-5433-4658-a3f1-c6114fcfed51",
+ "text": "Sure that is John Doe."
+ }
+ ]
+ }
+ ]
+ },
+ "tasks": [
+ {
+ "taskName": "analyze 1",
+ "kind": "ConversationalPIITask",
+ "parameters": {
+ "modelVersion": "2022-05-15-preview"
+ }
+ }
+ ]
+}
+'
+```
++
+## Get the result
+
+Get the `operation-location` from the response header. The value will look similar to the following URL:
+
+```rest
+https://your-language-endpoint/language/analyze-conversations/jobs/12345678-1234-1234-1234-12345678
+```
+
+To get the results of the request, use the following cURL command. Be sure to replace `my-job-id` with the numerical ID value you received from the previous `operation-location` response header:
+
+```bash
+curl -X GET https://your-language-endpoint/language/analyze-conversations/jobs/my-job-id \
+-H "Content-Type: application/json" \
+-H "Ocp-Apim-Subscription-Key: your-key-here"
+```
+++
+## Service and data limits
++
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/personally-identifiable-information/language-support.md
Use this article to learn which natural languages are supported by the PII featu
> [!NOTE] > * Languages are added as new [model versions](how-to-call.md#specify-the-pii-detection-model) are released.
-> * The current model version for PII is `2021-01-15`.
+
+# [PII for documents](#tab/documents)
## PII language support
Use this article to learn which natural languages are supported by the PII featu
| Portuguese (Portugal) | `pt-PT` | 2021-01-15 | `pt` also accepted | | Spanish | `es` | 2020-04-01 | |
+# [PII for conversations (preview)](#tab/conversations)
+
+## PII language support
+
+| Language | Language code | Starting with v3 model version: | Notes |
+|:-|:-:|:-:|::|
+| English | `en` | 2022-05-15-preview | |
+++ ## Next steps [PII feature overview](overview.md)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/personally-identifiable-information/overview.md
# What is Personally Identifiable Information (PII) detection in Azure Cognitive Service for Language?
-PII detection is one of the features offered by [Azure Cognitive Service for Language](../overview.md), a collection of machine learning and AI algorithms in the cloud for developing intelligent applications that involve written language. The PII detection feature can identify, categorize, and redact sensitive information in unstructured text. For example: phone numbers, email addresses, and forms of identification.
+PII detection is one of the features offered by [Azure Cognitive Service for Language](../overview.md), a collection of machine learning and AI algorithms in the cloud for developing intelligent applications that involve written language. The PII detection feature can identify, categorize, and redact sensitive information in unstructured text. For example: phone numbers, email addresses, and forms of identification. The method for using PII detection in conversations is different from other use cases, and the articles for this use case have been separated.
* [**Quickstarts**](quickstart.md) are getting-started instructions to guide you through making requests to the service. * [**How-to guides**](how-to-call.md) contain instructions for using the service in more specific or customized ways.
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/personally-identifiable-information/quickstart.md
zone_pivot_groups: programming-languages-text-analytics
Use this article to get started detecting and redacting sensitive information in text, using the NER and PII client library and REST API. Follow these steps to try out example code for mining text:
+> [!NOTE]
+> This quickstart only covers PII detection in documents. To learn more about detecting PII in conversations, see [How to detect and redact PII in conversations](how-to-call-for-conversations.md).
+ ::: zone pivot="programming-language-csharp" [!INCLUDE [C# quickstart](includes/quickstarts/csharp-sdk.md)]
cognitive-services Conversation Summarization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/how-to/conversation-summarization.md
+
+ Title: Summarize text with the conversation summarization API
+
+description: This article will show you how to summarize chat logs with the conversation summarization API.
++++++ Last updated : 04/27/2022++++
+# How to use conversation summarization (preview)
+
+> [!IMPORTANT]
+> The conversation summarization feature is a preview capability provided "AS IS" and "WITH ALL FAULTS." As such, Conversation Summarization (preview) should not be implemented or deployed in any production use. The customer is solely responsible for any use of conversation summarization.
+
+Conversation summarization is designed to summarize text chat logs between customers and customer-service agents. This feature is capable of providing both issues and resolutions present in these logs.
+
+The AI models used by the API are provided by the service; you just have to send content for analysis.
+
+## Features
+
+The conversation summarization API uses natural language processing techniques to locate key issues and resolutions in text-based chat logs. Conversation summarization will return the issues and resolutions found in the text input.
+
+There's another feature in Azure Cognitive Service for Language, [document summarization](../overview.md?tabs=document-summarization), that can summarize sentences from large documents. When you're deciding between document summarization and conversation summarization, consider the following points:
+* Extractive summarization returns sentences that collectively represent the most important or relevant information within the original content.
+* Conversation summarization returns summaries based on full chat logs including a reason for the chat (a problem), and the resolution. For example, a chat log between a customer and a customer service agent.
+
+## Submitting data
+
+You submit documents to the API as strings of text. Analysis is performed upon receipt of the request. Because the API is [asynchronous](../../concepts/use-asynchronously.md), there may be a delay between sending an API request and receiving the results. For information on the size and number of requests you can send per minute and second, see the data limits below.
+
+When using this feature, the API results are available for 24 hours from the time the request was ingested, as indicated in the response. After this time period, the results are purged and are no longer available for retrieval.
+
+When you submit data to conversation summarization, we recommend sending one chat log per request, for better latency.
+
+## Getting conversation summarization results
+
+When you get results from conversation summarization, you can stream the results to an application or save the output to a file on the local system.
+
+The following text is an example of content you might submit for summarization. This is only an example; the API can accept much longer input text. See [data limits](../../concepts/data-limits.md) for more information.
+
+**Agent**: "*Hello, how can I help you*?"
+
+**Customer**: "*How can I upgrade my Contoso subscription? I've been trying the entire day.*"
+
+**Agent**: "*Press the upgrade button please. Then sign in and follow the instructions.*"
+
+Summarization is performed upon receipt of the request by creating a job for the API backend. If the job succeeds, the output of the API will be returned. The output will be available for retrieval for 24 hours. After this time, the output is purged. Due to multilingual and emoji support, the response may contain text offsets. See [how to process offsets](../../concepts/multilingual-emoji-support.md) for more information.
+
+Using the above example, the API might return the following summarized sentences:
+
+|Summarized text | Aspect |
+||-|
+| "Customer wants to upgrade their subscription. Customer doesn't know how." | issue |
+| "Customer needs to press upgrade button, and sign in." | resolution |
++
+## See also
+
+* [Summarization overview](../overview.md)
cognitive-services Document Summarization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/how-to/document-summarization.md
+
+ Title: Summarize text with the extractive summarization API
+
+description: This article will show you how to summarize text with the extractive summarization API.
++++++ Last updated : 03/16/2022++++
+# How to use document summarization (preview)
+
+> [!IMPORTANT]
+> The extractive summarization feature is a preview capability provided "AS IS" and "WITH ALL FAULTS." As such, Extractive Summarization (preview) should not be implemented or deployed in any production use. The customer is solely responsible for any use of extractive summarization.
+
+In general, there are two approaches for automatic document summarization: extractive and abstractive. This API provides extractive summarization.
+
+Extractive summarization is a feature that produces a summary by extracting sentences that collectively represent the most important or relevant information within the original content.
+
+This feature is designed to shorten content that users consider too long to read. Extractive summarization condenses articles, papers, or documents to key sentences.
+
+The AI models used by the API are provided by the service; you just have to send content for analysis.
+
+## Features
+
+> [!TIP]
+> If you want to start using this feature, you can follow the [quickstart article](../quickstart.md) to get started. You can also make example requests using [Language Studio](../../language-studio.md) without needing to write code.
+
+The extractive summarization API uses natural language processing techniques to locate key sentences in an unstructured text document. These sentences collectively convey the main idea of the document.
+
+Extractive summarization returns a rank score as a part of the system response along with extracted sentences and their position in the original documents. A rank score is an indicator of how relevant a sentence is determined to be, to the main idea of a document. The model gives a score between 0 and 1 (inclusive) to each sentence and returns the highest scored sentences per request. For example, if you request a three-sentence summary, the service returns the three highest scored sentences.
+
+There is another feature in Azure Cognitive Service for Language, [key phrase extraction](./../../key-phrase-extraction/how-to/call-api.md), that can extract key information. When deciding between key phrase extraction and extractive summarization, consider the following:
+* Key phrase extraction returns phrases, while extractive summarization returns sentences.
+* Extractive summarization returns sentences together with a rank score, and the top-ranked sentences are returned per request.
+* Extractive summarization also returns the following positional information:
+ * Offset: the start position of each extracted sentence.
+ * Length: the length of each extracted sentence.
++
+## Determine how to process the data (optional)
+
+### Specify the document summarization model
+
+By default, document summarization will use the latest available AI model on your text. You can also configure your API requests to use a specific [model version](../../concepts/model-lifecycle.md).
+
+### Input languages
+
+When you submit documents to be processed by document summarization, you can specify which of [the supported languages](../language-support.md) they're written in. If you don't specify a language, document summarization will default to English. The API may return offsets in the response to support different [multilingual and emoji encodings](../../concepts/multilingual-emoji-support.md).
+
+## Submitting data
+
+You submit documents to the API as strings of text. Analysis is performed upon receipt of the request. Because the API is [asynchronous](../../concepts/use-asynchronously.md), there may be a delay between sending an API request and receiving the results.
+
+When using this feature, the API results are available for 24 hours from the time the request was ingested, as indicated in the response. After this time period, the results are purged and are no longer available for retrieval.
+
+You can use the `sentenceCount` parameter to specify how many sentences will be returned, with `3` being the default. The range is from 1 to 20.
+
+You can also use the `sortby` parameter to specify in what order the extracted sentences will be returned - either `Offset` or `Rank`, with `Offset` being the default.
++
+|parameter value |Description |
+|||
+|Rank | Order sentences according to their relevance to the input document, as decided by the service. |
+|Offset | Keeps the original order in which the sentences appear in the input document. |
+
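+As a minimal, hypothetical sketch of how these parameters might be supplied in a request, see the following. The endpoint path, api-version, task kind, and parameter casing are assumptions based on the service's asynchronous analyze pattern; see the quickstart for the exact request format.
+
+```bash
+# Hypothetical sketch only: request a three-sentence extractive summary ordered by rank.
+# The endpoint path, api-version, and task schema are assumptions.
+curl -i -X POST "https://<your-language-endpoint>/language/analyze-text/jobs?api-version=2022-05-15-preview" \
+-H "Content-Type: application/json" \
+-H "Ocp-Apim-Subscription-Key: <your-key>" \
+-d '
+{
+  "displayName": "Document summarization example",
+  "analysisInput": {
+    "documents": [
+      { "id": "1", "language": "en", "text": "<your document text>" }
+    ]
+  },
+  "tasks": [
+    {
+      "taskName": "summarize 1",
+      "kind": "ExtractiveSummarization",
+      "parameters": { "sentenceCount": 3, "sortBy": "Rank" }
+    }
+  ]
+}'
+```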
+## Getting document summarization results
+
+When you get results from document summarization, you can stream the results to an application or save the output to a file on the local system.
+
+The following is an example of content you might submit for summarization, which is extracted from the Microsoft blog article [A holistic representation toward integrative AI](https://www.microsoft.com/research/blog/a-holistic-representation-toward-integrative-ai/). This article is only an example; the API can accept much longer input text. See the data limits section for more information.
+
+*"At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic, human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI Cognitive Services, I have been working with a team of amazing scientists and engineers to turn this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship among three attributes of human cognition: monolingual text (X), audio or visual sensory signals, (Y) and multilingual (Z). At the intersection of all three, there's magic—what we call XYZ-code as illustrated in Figure 1—a joint representation to create more powerful AI that can speak, hear, see, and understand humans better. We believe XYZ-code will enable us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have pre-trained models that can jointly learn representations to support a broad range of downstream AI tasks, much in the way humans do today. Over the past five years, we have achieved human performance on benchmarks in conversational speech recognition, machine translation, conversational question answering, machine reading comprehension, and image captioning. These five breakthroughs provided us with strong signals toward our more ambitious aspiration to produce a leap in AI capabilities, achieving multi-sensory and multilingual learning that is closer in line with how humans learn and understand. I believe the joint XYZ-code is a foundational component of this aspiration, if grounded with external knowledge sources in the downstream AI tasks."*
+
+The extractive summarization API is performed upon receipt of the request by creating a job for the API backend. If the job succeeded, the output of the API will be returned. The output will be available for retrieval for 24 hours. After this time, the output is purged. Due to multilingual and emoji support, the response may contain text offsets. See [how to process offsets](../../concepts/multilingual-emoji-support.md) for more information.
+
+Using the above example, the API might return the following summarized sentences:
+
+*"At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic, human-centric approach to learning and understanding."*
+
+*"In my role, I enjoy a unique perspective in viewing the relationship among three attributes of human cognition: monolingual text (X), audio or visual sensory signals, (Y) and multilingual (Z)."*
+
+*"At the intersection of all three, thereΓÇÖs magicΓÇöwhat we call XYZ-code as illustrated in Figure 1ΓÇöa joint representation to create more powerful AI that can speak, hear, see, and understand humans better."*
+
+## Service and data limits
++
+## See also
+
+* [Document summarization overview](../overview.md)
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/language-support.md
+
+ Title: Document summarization language support
+
+description: Learn about which languages are supported by document summarization.
++++++ Last updated : 05/11/2022++++
+# Summarization language support
+
+Use this article to learn which natural languages are supported by document and conversation summarization.
+
+# [Document summarization](#tab/document-summarization)
+
+## Languages supported by document summarization
+
+Document summarization supports the following languages:
+
+| Language | Language code | Starting with v3 model version | Notes |
+|:-|:-:|:-:|::|
+| Chinese-Simplified | `zh-hans` | 2021-08-01 | `zh` also accepted |
+| English | `en` | 2021-08-01 | |
+| French | `fr` | 2021-08-01 | |
+| German | `de` | 2021-08-01 | |
+| Italian | `it` | 2021-08-01 | |
+| Japanese | `ja` | 2021-08-01 | |
+| Korean | `ko` | 2021-08-01 | |
+| Spanish | `es` | 2021-08-01 | |
+| Portuguese (Brazil) | `pt-BR` | 2021-08-01 | |
+| Portuguese (Portugal) | `pt-PT` | 2021-08-01 | `pt` also accepted |
+
+# [Conversation summarization](#tab/conversation-summarization)
+
+## Languages supported by conversation summarization
+
+Conversation summarization supports the following languages:
+
+| Language | Language code | Starting with model version | Notes |
+|:-|:-:|:-:|::|
+| English | `en` | `2022-05-15` | |
+++
+## Next steps
+
+[Document summarization overview](overview.md)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/overview.md
+
+ Title: What is document and conversation summarization (preview)?
+
+description: Learn about summarizing text.
++++++ Last updated : 05/06/2022++++
+# What is document and conversation summarization (preview)?
+
+Document summarization is one of the features offered by [Azure Cognitive Service for Language](../overview.md), a collection of machine learning and AI algorithms in the cloud for developing intelligent applications that involve written language. Use this article to learn more about this feature, and how to use it in your applications.
+
+# [Document summarization](#tab/document-summarization)
+
+This documentation contains the following article types:
+
+* [**Quickstarts**](quickstart.md?pivots=rest-api&tabs=document-summarization) are getting-started instructions to guide you through making requests to the service.
+* [**How-to guides**](how-to/document-summarization.md) contain instructions for using the service in more specific or customized ways.
+
+Text summarization is a broad topic, consisting of several approaches to represent relevant information in text. The document summarization feature described in this documentation enables you to use extractive text summarization to produce a summary of a document. It extracts sentences that collectively represent the most important or relevant information within the original content. This feature is designed to shorten content that could be considered too long to read. For example, it can condense articles, papers, or documents to key sentences.
+
+As an example, consider the following paragraph of text:
+
+*"WeΓÇÖre delighted to announce that Cognitive Service for Language service now supports extractive summarization! In general, there are two approaches for automatic document summarization: extractive and abstractive. This feature provides extractive summarization. Document summarization is a feature that produces a text summary by extracting sentences that collectively represent the most important or relevant information within the original content. This feature is designed to shorten content that could be considered too long to read. Extractive summarization condenses articles, papers, or documents to key sentences."*
+
+The document summarization feature would simplify the text into the following key sentences:
++
+## Key features
+
+Document summarization supports the following features:
+
+* **Extracted sentences**: These sentences collectively convey the main idea of the document. They're original sentences extracted from the input document's content.
+* **Rank score**: The rank score indicates how relevant a sentence is to a document's main topic. Document summarization ranks extracted sentences, and you can determine whether they're returned in the order they appear, or according to their rank.
+* **Maximum sentences**: Determine the maximum number of sentences to be returned. For example, if you request a three-sentence summary, document summarization will return the three highest scored sentences.
+* **Positional information**: The start position and length of extracted sentences. An illustrative response sketch follows this list.
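+
+For illustration only, a single extracted sentence in the response might carry fields like the ones below. The property names and values are assumptions; check the REST reference for the authoritative schema.
+
+```javascript
+// Hypothetical shape of one extracted sentence in the API response.
+const extractedSentence = {
+  text: 'At Microsoft, we have been on a quest to advance AI beyond existing techniques...',
+  rankScore: 0.93, // relevance to the document's main topic, between 0 and 1
+  offset: 0,       // start position of the sentence in the input document
+  length: 83       // length of the extracted sentence
+}
+```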
+
+# [Conversation summarization](#tab/conversation-summarization)
+
+This documentation contains the following article types:
+
+* [**Quickstarts**](quickstart.md?pivots=rest-api&tabs=conversation-summarization) are getting-started instructions to guide you through making requests to the service.
+* [**How-to guides**](how-to/document-summarization.md) contain instructions for using the service in more specific or customized ways.
+
+Conversation summarization is a broad topic, consisting of several approaches to represent relevant information in text. The conversation summarization feature described in this documentation enables you to use abstractive text summarization to produce a summary of issues and resolutions in transcripts of web chats and service calls between customer-service agents and your customers.
++
+## When to use conversation summarization
+
+* When there are predefined aspects of an "issue" and "resolution", such as:
+ * The reason for a service chat/call (the issue).
+ * The resolution for the issue.
+* When you only want a summary that focuses on related information about issues and resolutions.
+* When there are two participants in the conversation, and you want to summarize what each has said.
+
+As an example, consider the following example conversation:
+
+**Agent**: "*Hello, youΓÇÖre chatting with Rene. How may I help you?*"
+
+**Customer**: "*Hi, I tried to set up wifi connection for Smart Brew 300 espresso machine, but it didnΓÇÖt work.*"
+
+**Agent**: "*IΓÇÖm sorry to hear that. LetΓÇÖs see what we can do to fix this issue. Could you push the wifi connection button, hold for 3 seconds, then let me know if the power light is slowly blinking?*"
+
+**Customer**: "*Yes, I pushed the wifi connection button, and now the power light is slowly blinking.*"
+
+**Agent**: "*Great. Thank you! Now, please check in your Contoso Coffee app. Does it prompt to ask you to connect with the machine?*"
+
+**Customer**: "*No. Nothing happened.*"
+
+**Agent**: "*I see. Thanks. LetΓÇÖs try if a factory reset can solve the issue. Could you please press and hold the center button for 5 seconds to start the factory reset.*"
+
+**Customer**: *"IΓÇÖve tried the factory reset and followed the above steps again, but it still didnΓÇÖt work."*
+
+**Agent**: "*IΓÇÖm very sorry to hear that. Let me see if thereΓÇÖs another way to fix the issue. Please hold on for a minute.*"
+
+The conversation summarization feature would simplify the text into the following:
+
+|Example summary | Format | Conversation aspect |
+||-|-|
+| Customer wants to use the wifi connection on their Smart Brew 300. They can't connect it using the Contoso Coffee app. | One or two sentences | issue |
+| Checked if the power light is blinking slowly. Tried to do a factory reset. | One or more sentences, generated from multiple lines of the transcript. | resolution |
+++
+## Get started with text summarization
+
+# [Document summarization](#tab/document-summarization)
+
+To use this feature, you submit raw unstructured text for analysis and handle the API output in your application. Analysis is performed as-is, with no additional customization to the model used on your data. There are two ways to use document summarization:
++
+|Development option |Description | Links |
+||||
+| Language Studio | A web-based platform that enables you to try document summarization without needing to write code. | • [Language Studio website](https://language.cognitive.azure.com/tryout/summarization) <br> • [Quickstart: Use the Language Studio](../language-studio.md) |
+| REST API or Client library (Azure SDK) | Integrate document summarization into your applications using the REST API, or the client library available in a variety of languages. | • [Quickstart: Use document summarization](quickstart.md) |
++
+# [Conversation summarization](#tab/conversation-summarization)
+
+To use this feature, you submit raw text for analysis and handle the API output in your application. Analysis is performed as-is, with no additional customization to the model used on your data. You can use conversation summarization through the REST API:
++
+|Development option |Description | Links |
+||||
+| REST API | Integrate conversation summarization into your applications using the REST API. | [Quickstart: Use conversation summarization](quickstart.md) |
+++
+## Input requirements and service limits
+
+# [Document summarization](#tab/document-summarization)
+
+* Document summarization takes raw unstructured text for analysis. See [Data and service limits](../concepts/data-limits.md) in the how-to guide for more information.
+* Document summarization works with a variety of written languages. See [language support](language-support.md) for more information.
++
+# [Conversation summarization](#tab/conversation-summarization)
+
+* Conversation summarization takes structured text for analysis. See the [data and service limits](../concepts/data-limits.md) for more information.
+* Conversation summarization accepts text in English. See [language support](language-support.md) for more information.
+++
+## Reference documentation and code samples
+
+As you use document summarization in your applications, see the following reference documentation and samples for Azure Cognitive Service for Language:
+
+|Development option / language |Reference documentation |Samples |
+||||
+|REST API | [REST API documentation](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-2-Preview-2/operations/Analyze) | |
+|C# | [C# documentation](/dotnet/api/azure.ai.textanalytics?view=azure-dotnet-preview&preserve-view=true) | [C# samples](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/textanalytics/Azure.AI.TextAnalytics/samples) |
+| Java | [Java documentation](/java/api/overview/azure/ai-textanalytics-readme?view=azure-java-preview&preserve-view=true) | [Java Samples](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/textanalytics/azure-ai-textanalytics/src/samples) |
+|JavaScript | [JavaScript documentation](/javascript/api/overview/azure/ai-text-analytics-readme?view=azure-node-preview&preserve-view=true) | [JavaScript samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/textanalytics/ai-text-analytics/samples/v5) |
+|Python | [Python documentation](/python/api/overview/azure/ai-textanalytics-readme?view=azure-python-preview&preserve-view=true) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/textanalytics/azure-ai-textanalytics/samples) |
+
+## Responsible AI
+
+An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it's deployed. Read the [transparency note for document summarization](/legal/cognitive-services/language-service/transparency-note-extractive-summarization?context=/azure/cognitive-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
+
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/quickstart.md
+
+ Title: "Quickstart: Use Document Summarization (preview)"
+
+description: Use this quickstart to start using Document Summarization.
++++++ Last updated : 11/02/2021+
+ms.devlang: csharp, java, javascript, python
+
+zone_pivot_groups: programming-languages-text-analytics
++
+# Quickstart: using document summarization and conversation summarization (preview)
+
+Use this article to get started with document summarization and conversation summarization using the client library and REST API. Follow these steps to try out example code for summarizing text:
++++++++++++++++
+## Clean up resources
+
+If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
+
+* [Portal](../../cognitive-services-apis-create-account.md#clean-up-resources)
+* [Azure CLI](../../cognitive-services-apis-create-account-cli.md#clean-up-resources)
+
+## Next steps
+
+* [Summarization overview](overview.md)
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-summarization/how-to/call-api.md
- Title: Summarize text with the extractive summarization API-
-description: This article will show you how to summarize text with the extractive summarization API.
------ Previously updated : 03/16/2022----
-# How to use text summarization (preview)
-
-> [!IMPORTANT]
-> The extractive summarization feature is a preview capability provided ΓÇ£AS ISΓÇ¥ and ΓÇ£WITH ALL FAULTS.ΓÇ¥ As such, Extractive Summarization (preview) should not be implemented or deployed in any production use. The customer is solely responsible for any use of extractive summarization.
-
-In general, there are two approaches for automatic text summarization: extractive and abstractive. This API provides extractive summarization.
-
-Extractive summarization is a feature that produces a summary by extracting sentences that collectively represent the most important or relevant information within the original content.
-
-This feature is designed to shorten content that users consider too long to read. Extractive summarization condenses articles, papers, or documents to key sentences.
-
-The AI models used by the API are provided by the service, you just have to send content for analysis.
-
-## Features
-
-> [!TIP]
-> If you want to start using this feature, you can follow the [quickstart article](../quickstart.md) to get started. You can also make example requests using [Language Studio](../../language-studio.md) without needing to write code.
-
-The extractive summarization API uses natural language processing techniques to locate key sentences in an unstructured text document. These sentences collectively convey the main idea of the document.
-
-Extractive summarization returns a rank score as a part of the system response along with extracted sentences and their position in the original documents. A rank score is an indicator of how relevant a sentence is determined to be, to the main idea of a document. The model gives a score between 0 and 1 (inclusive) to each sentence and returns the highest scored sentences per request. For example, if you request a three-sentence summary, the service returns the three highest scored sentences.
-
-There is another feature in Azure Cognitive Service for Language, [key phrases extraction](./../../key-phrase-extraction/how-to/call-api.md), that can extract key information. When deciding between key phrase extraction and extractive summarization, consider the following:
-* key phrase extraction returns phrases while extractive summarization returns sentences
-* extractive summarization returns sentences together with a rank score, and. Top ranked sentences will be returned per request
-* extractive summarization also returns the following positional information:
- * offset: The start position of each extracted sentence, and
- * Length: is the length of each extracted sentence.
--
-## Determine how to process the data (optional)
-
-### Specify the text summarization model
-
-By default, text summarization will use the latest available AI model on your text. You can also configure your API requests to use a specific [model version](../../concepts/model-lifecycle.md).
-
-### Input languages
-
-When you submit documents to be processed by key phrase extraction, you can specify which of [the supported languages](../language-support.md) they're written in. if you don't specify a language, key phrase extraction will default to English. The API may return offsets in the response to support different [multilingual and emoji encodings](../../concepts/multilingual-emoji-support.md).
-
-## Submitting data
-
-You submit documents to the API as strings of text. Analysis is performed upon receipt of the request. Because the API is [asynchronous](../../concepts/use-asynchronously.md), there may be a delay between sending an API request, and receiving the results.
-
-When using this feature, the API results are available for 24 hours from the time the request was ingested, and is indicated in the response. After this time period, the results are purged and are no longer available for retrieval.
-
-You can use the `sentenceCount` parameter to specify how many sentences will be returned, with `3` being the default. The range is from 1 to 20.
-
-You can also use the `sortby` parameter to specify in what order the extracted sentences will be returned - either `Offset` or `Rank`, with `Offset` being the default.
--
-|parameter value |Description |
-|||
-|Rank | Order sentences according to their relevance to the input document, as decided by the service. |
-|Offset | Keeps the original order in which the sentences appear in the input document. |
-
-## Getting text summarization results
-
-When you get results from language detection, you can stream the results to an application or save the output to a file on the local system.
-
-The following is an example of content you might submit for summarization, which is extracted using the Microsoft blog article [A holistic representation toward integrative AI](https://www.microsoft.com/research/blog/a-holistic-representation-toward-integrative-ai/). This article is only an example, the API can accept much longer input text. See the data limits section for more information.
-
-*"At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic, human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI Cognitive Services, I have been working with a team of amazing scientists and engineers to turn this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship among three attributes of human cognition: monolingual text (X), audio or visual sensory signals, (Y) and multilingual (Z). At the intersection of all three, thereΓÇÖs magicΓÇöwhat we call XYZ-code as illustrated in Figure 1ΓÇöa joint representation to create more powerful AI that can speak, hear, see, and understand humans better. We believe XYZ-code will enable us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have pretrained models that can jointly learn representations to support a broad range of downstream AI tasks, much in the way humans do today. Over the past five years, we have achieved human performance on benchmarks in conversational speech recognition, machine translation, conversational question answering, machine reading comprehension, and image captioning. These five breakthroughs provided us with strong signals toward our more ambitious aspiration to produce a leap in AI capabilities, achieving multisensory and multilingual learning that is closer in line with how humans learn and understand. I believe the joint XYZ-code is a foundational component of this aspiration, if grounded with external knowledge sources in the downstream AI tasks."*
-
-The extractive summarization API is performed upon receipt of the request by creating a job for the API backend. If the job succeeded, the output of the API will be returned. The output will be available for retrieval for 24 hours. After this time, the output is purged. Due to multilingual and emoji support, the response may contain text offsets. See [how to process offsets](../../concepts/multilingual-emoji-support.md) for more information.
-
-Using the above example, the API might return the following summarized sentences:
-
-*"At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic, human-centric approach to learning and understanding."*
-
-*"In my role, I enjoy a unique perspective in viewing the relationship among three attributes of human cognition: monolingual text (X), audio or visual sensory signals, (Y) and multilingual (Z)."*
-
-*"At the intersection of all three, thereΓÇÖs magicΓÇöwhat we call XYZ-code as illustrated in Figure 1ΓÇöa joint representation to create more powerful AI that can speak, hear, see, and understand humans better."*
-
-## Service and data limits
--
-## See also
-
-* [Text Summarization overview](../overview.md)
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-summarization/language-support.md
- Title: Text summarization language support-
-description: Learn about which languages are supported by text summarization.
------ Previously updated : 11/02/2021----
-# Text summarization language support
-
-Use this article to learn which natural languages are supported by text summarization feature.
-
-## Languages supported by text summarization
-
-Text summarization supports the following languages:
-
-| Language | Language code | Starting with v3 model version | Notes |
-|:-|:-:|:-:|::|
-| Chinese-Simplified | `zh-hans` | 2021-08-01 | `zh` also accepted |
-| English | `en` | 2021-08-01 | |
-| French | `fr` | 2021-08-01 | |
-| German | `de` | 2021-08-01 | |
-| Italian | `it` | 2021-08-01 | |
-| Japanese | `ja` | 2021-08-01 | |
-| Korean | `ko` | 2021-08-01 | |
-| Spanish | `es` | 2021-08-01 | |
-| Portuguese (Brazil) | `pt-BR` | 2021-08-01 | |
-| Portuguese (Portugal) | `pt-PT` | 2021-08-01 | `pt` also accepted |
-
-## Next steps
-
-[Text summarization overview](overview.md)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-summarization/overview.md
- Title: What is text summarization in Azure Cognitive Service for Language (preview)?-
-description: Learn about summarizing text.
------ Previously updated : 03/16/2022----
-# What is text summarization (preview) in Azure Cognitive Service for Language?
-
-Text summarization is one of the features offered by [Azure Cognitive Service for Language](../overview.md), a collection of machine learning and AI algorithms in the cloud for developing intelligent applications that involve written language. Use this article to learn more about this feature, and how to use it in your applications.
-
-This documentation contains the following article types:
-
-* [**Quickstarts**](quickstart.md) are getting-started instructions to guide you through making requests to the service.
-* [**How-to guides**](how-to/call-api.md) contain instructions for using the service in more specific or customized ways.
-## Text summarization feature
-
-Text summarization uses extractive text summarization to produce a summary of a document. It extracts sentences that collectively represent the most important or relevant information within the original content. This feature is designed to shorten content that could be considered too long to read. For example, it can condense articles, papers, or documents to key sentences.
-
-As an example, consider the following paragraph of text:
-
-*"WeΓÇÖre delighted to announce that Cognitive Service for Language service now supports extractive summarization! In general, there are two approaches for automatic text summarization: extractive and abstractive. This feature provides extractive summarization. Text summarization is a feature that produces a text summary by extracting sentences that collectively represent the most important or relevant information within the original content. This feature is designed to shorten content that could be considered too long to read. Extractive summarization condenses articles, papers, or documents to key sentences."*
-
-The text summarization feature would simplify the text into the following key sentences:
--
-## Key features
-
-Text summarization supports the following features:
-
-* **Extracted sentences**: These sentences collectively convey the main idea of the document. TheyΓÇÖre original sentences extracted from the input documentΓÇÖs content.
-* **Rank score**: The rank score indicates how relevant a sentence is to a document's main topic. Text summarization ranks extracted sentences, and you can determine whether they're returned in the order they appear, or according to their rank.
-* **Maximum sentences**: Determine the maximum number of sentences to be returned. For example, if you request a three-sentence summary Text summarization will return the three highest scored sentences.
-* **Positional information**: The start position and length of extracted sentences.
-
-## Get started with text summarization
-
-To use this feature, you submit raw unstructured text for analysis and handle the API output in your application. Analysis is performed as-is, with no additional customization to the model used on your data. There are two ways to use text summarization:
--
-|Development option |Description | Links |
-||||
-| Language Studio | A web-based platform that enables you to try text summarization without needing writing code. | ΓÇó [Language Studio website](https://language.cognitive.azure.com/tryout/summarization) <br> ΓÇó [Quickstart: Use the Language studio](../language-studio.md) |
-| REST API or Client library (Azure SDK) | Integrate text summarization into your applications using the REST API, or the client library available in a variety of languages. | ΓÇó [Quickstart: Use text summarization](quickstart.md) |
-
-## Input requirements and service limits
-
-* Text summarization takes raw unstructured text for analysis. See [Data and service limits](../concepts/data-limits.md) in the how-to guide for more information.
-* Text summarization works with a variety of written languages. See [language support](language-support.md) for more information.
-
-## Reference documentation and code samples
-
-As you use text summarization in your applications, see the following reference documentation and samples for Azure Cognitive Services for Language:
-
-|Development option / language |Reference documentation |Samples |
-||||
-|REST API | [REST API documentation](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-2-Preview-2/operations/Analyze) | |
-|C# | [C# documentation](/dotnet/api/azure.ai.textanalytics?view=azure-dotnet-preview&preserve-view=true) | [C# samples](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/textanalytics/Azure.AI.TextAnalytics/samples) |
-| Java | [Java documentation](/java/api/overview/azure/ai-textanalytics-readme?view=azure-java-preview&preserve-view=true) | [Java Samples](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/textanalytics/azure-ai-textanalytics/src/samples) |
-|JavaScript | [JavaScript documentation](/javascript/api/overview/azure/ai-text-analytics-readme?view=azure-node-preview&preserve-view=true) | [JavaScript samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/textanalytics/ai-text-analytics/samples/v5) |
-|Python | [Python documentation](/python/api/overview/azure/ai-textanalytics-readme?view=azure-python-preview&preserve-view=true) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/textanalytics/azure-ai-textanalytics/samples) |
-
-## Responsible AI
-
-An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which itΓÇÖs deployed. Read the [transparency note for text summarization](/legal/cognitive-services/language-service/transparency-note-extractive-summarization?context=/azure/cognitive-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
-
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-summarization/quickstart.md
- Title: "Quickstart: Use Text Summarization (preview)"-
-description: Use this quickstart to start using Text Summarization.
------ Previously updated : 11/02/2021--
-zone_pivot_groups: programming-languages-text-analytics
--
-# Quickstart: using the Text Summarization client library and REST API (preview)
-
-Use this article to get started with Text Summarization using the client library and REST API. Follow these steps to try out examples code for mining text:
----------------
-## Clean up resources
-
-If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
-
-* [Portal](../../cognitive-services-apis-create-account.md#clean-up-resources)
-* [Azure CLI](../../cognitive-services-apis-create-account-cli.md#clean-up-resources)
-
-## Next steps
-
-* [Text Summarization overview](overview.md)
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/whats-new.md
Previously updated : 04/20/2022 Last updated : 05/23/2022 -+ # What's new in Azure Cognitive Service for Language? Azure Cognitive Service for Language is updated on an ongoing basis. To stay up-to-date with recent developments, this article provides you with information about new releases and features.
+## May 2022
+
+* PII detection for conversations.
+* Rebranded Text Summarization to Document summarization.
+* Conversation summarization is now available in public preview.
+
+* The following features are now Generally Available (GA):
+ * Custom text classification
+ * Custom Named Entity Recognition (NER)
+ * Conversational language understanding
+ * Orchestration workflow
+
+* The following updates are available for custom text classification, custom Named Entity Recognition (NER), conversational language understanding, and orchestration workflow:
+ * Data splitting controls.
+ * Ability to cancel training jobs.
+ * Custom deployments can be named. You can have up to 10 deployments.
+ * Ability to swap deployments.
+ * Auto labeling (preview) for custom named entity recognition
+ * Enterprise readiness support
+ * Training modes for conversational language understanding
+ * Updated service limits
+ * Ability to use free (F0) tier for Language resources
+ * Expanded regional availability
+ * Updated model life cycle to add training configuration versions
+++ ## April 2022 * Fast Healthcare Interoperability Resources (FHIR) support is available in the [Language REST API preview](text-analytics-for-health/quickstart.md?pivots=rest-api&tabs=language) for Text Analytics for health. - ## March 2022 * Expanded language support for:
Azure Cognitive Service for Language is updated on an ongoing basis. To stay up-
## February 2022
-* Model improvements for latest model-version for [text summarization](text-summarization/overview.md)
+* Model improvements for latest model-version for [text summarization](summarization/overview.md)
* Model 2021-10-01 is Generally Available (GA) for [Sentiment Analysis and Opinion Mining](sentiment-opinion-mining/overview.md), featuring enhanced modeling for emojis and better accuracy across all supported languages.
Azure Cognitive Service for Language is updated on an ongoing basis. To stay up-
* [Named Entity Recognition (NER), Personally Identifying Information (PII)](named-entity-recognition/overview.md) * [Language Detection](language-detection/overview.md) * [Text Analytics for health](text-analytics-for-health/overview.md)
- * [Text summarization preview](text-summarization/overview.md)
+ * [Text summarization preview](summarization/overview.md)
* [Custom Named Entity Recognition (Custom NER) preview](custom-named-entity-recognition/overview.md) * [Custom Text Classification preview](custom-classification/overview.md) * [Conversational Language Understanding preview](conversational-language-understanding/overview.md)
Azure Cognitive Service for Language is updated on an ongoing basis. To stay up-
* SDK support for sending requests to custom models:
- * [Custom Named Entity Recognition](custom-named-entity-recognition/how-to/call-api.md?tabs=client#use-the-client-libraries)
- * [Custom text classification](custom-classification/how-to/call-api.md?tabs=api#use-the-client-libraries)
- * [Custom language understanding](conversational-language-understanding/how-to/deploy-query-model.md#use-the-client-libraries-azure-sdk)
+ * Custom Named Entity Recognition
+ * Custom text classification
+ * Custom language understanding
## Next steps
cognitive-services Engines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/engines.md
+
+ Title: Azure OpenAI Engines
+
+description: Learn about the different AI models or engines that are available.
++ Last updated : 05/24/2022++
+keywords:
++
+# Azure OpenAI Engines
+
+The service provides access to many different models. Engines describe a family of models and are broken out as follows:
+
+|Models | Description|
+|--| -- |
+| GPT-3 series | A set of GPT-3 models that can understand and generate natural language |
+| Codex Series | A set of models that can understand and generate code, including translating natural language to code |
+| Embeddings Series | An embedding is a special format of data representation that can be easily utilized by machine learning models and algorithms. The embedding is an information dense representation of the semantic meaning of a piece of text. Currently we offer three families of embedding models for different functionalities: text search, text similarity, and code search |
+
+## GPT-3 Series
+
+The GPT-3 models can understand and generate natural language. The service offers four model types with different levels of capabilities suitable for different tasks. Davinci is the most capable model, and Ada is the fastest.
+
+While Davinci is the most capable, the other models provide significant speed advantages. Our recommendation is for users to start with Davinci while experimenting, since it will produce the best results and validate the value our service can provide. Once you have a prototype working, you can then optimize your model choice for the best latency-performance tradeoff for your application.
+
+### Davinci
+
+Davinci is the most capable engine and can perform any task the other models can perform, often with less instruction. For applications requiring a deep understanding of the content, like summarization for a specific audience and creative content generation, Davinci is going to produce the best results. These increased capabilities require more compute resources, so Davinci costs more and isn't as fast as the other engines.
+
+Another area where Davinci excels is in understanding the intent of text. Davinci is excellent at solving many kinds of logic problems and explaining the motives of characters. Davinci has been able to solve some of the most challenging AI problems involving cause and effect.
+
+**Use for**: Complex intent, cause and effect, summarization for audience
+
+### Curie
+
+Curie is extremely powerful, yet very fast. While Davinci is stronger when it comes to analyzing complicated text, Curie is quite capable for many nuanced tasks like sentiment classification and summarization. Curie is also quite good at answering questions, performing Q&A, and serving as a general service chatbot.
+
+**Use for**: Language translation, complex classification, text sentiment, summarization
+
+### Babbage
+
+Babbage can perform straightforward tasks like simple classification. It's also quite capable at semantic search, ranking how well documents match up with search queries.
+
+**Use for**: Moderate classification, semantic search classification
+
+### Ada
+
+Ada is usually the fastest model and can perform tasks like parsing text, address correction, and certain kinds of classification tasks that don't require too much nuance. Ada's performance can often be improved by providing more context.
+
+**Use for**: Parsing text, simple classification, address correction, keywords
+
+> [!NOTE]
+> Any task performed by a faster model like Ada can be performed by a more powerful model like Curie or Davinci.
+
+## Codex Series
+
+The Codex models are descendants of the base GPT-3 models that can understand and generate code. Their training data contains both natural language and billions of lines of public code from GitHub.
+
+They're most capable in Python and proficient in over a dozen languages including JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, SQL, and even Shell.
+
+## Embeddings Models
+
+Currently we offer three families of embedding models for different functionalities: text search, text similarity, and code search. Each family includes up to four models across a spectrum of capabilities:
+
+* Ada (1024 dimensions)
+* Babbage (2048 dimensions)
+* Curie (4096 dimensions)
+* Davinci (12,288 dimensions)
+
+Davinci is the most capable, but is slower and more expensive than the other models. Ada is the least capable, but is significantly faster and cheaper.
+
+These embedding models are specifically created to be good at a particular task.
+
+### Similarity embeddings
+
+These models are good at capturing semantic similarity between two or more pieces of text. Similarity models are best for applications such as clustering, regression, anomaly detection, and visualization.
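+
+As a small, self-contained sketch of how similarity embeddings are typically compared, the function below computes the cosine similarity between two embedding vectors. The vectors shown are placeholder values, not real model output; this is generic math rather than a specific Azure OpenAI API call.
+
+```javascript
+// Cosine similarity between two embedding vectors (placeholder values for illustration).
+function cosineSimilarity(a, b) {
+  let dot = 0
+  let normA = 0
+  let normB = 0
+  for (let i = 0; i < a.length; i++) {
+    dot += a[i] * b[i]
+    normA += a[i] * a[i]
+    normB += b[i] * b[i]
+  }
+  return dot / (Math.sqrt(normA) * Math.sqrt(normB))
+}
+
+const embeddingA = [0.12, -0.34, 0.56] // placeholders; real embeddings have 1024+ dimensions
+const embeddingB = [0.10, -0.30, 0.60]
+console.log(cosineSimilarity(embeddingA, embeddingB)) // values closer to 1 indicate more similar text
+```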
+
+### Text search embeddings
+
+These models help measure whether long documents are relevant to a short search query. There are two types: one for embedding the documents to be retrieved, and one for embedding the search query. Text search embeddings models are best for applications such as search, context relevance, and information retrieval.
+
+### Code search embeddings
+
+Similarly to search embeddings, there are two types: one for embedding code snippets to be retrieved and one for embedding natural language search queries. Code search embeddings models are best for applications such as code search and code relevance.
+
+## Finding the right model
+
+We recommend starting with the Davinci model since it will be the best way to understand what the service is capable of. After you have an idea of what you want to accomplish, you can either stay with Davinci if you're not concerned about cost and speed, or move on to Curie or another engine and try to optimize around its capabilities.
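+
+Once you have chosen an engine, a typical first experiment is a single completion call against a model deployment. The sketch below is only illustrative: the resource endpoint, deployment name, and API version string are placeholders and assumptions, so confirm the exact route and parameters against the service reference before use.
+
+```javascript
+// Illustrative completion request; endpoint, deployment name, and api-version are placeholders.
+const endpoint = 'https://YOUR-RESOURCE-NAME.openai.azure.com'
+const deployment = 'my-davinci-deployment' // hypothetical deployment name
+const apiVersion = 'YOUR-API-VERSION'      // placeholder; use the version documented for your resource
+
+async function complete(prompt) {
+  const response = await fetch(
+    `${endpoint}/openai/deployments/${deployment}/completions?api-version=${apiVersion}`,
+    {
+      method: 'POST',
+      headers: { 'api-key': process.env.AZURE_OPENAI_KEY, 'Content-Type': 'application/json' },
+      body: JSON.stringify({ prompt, max_tokens: 60 })
+    }
+  )
+  const data = await response.json()
+  return data.choices[0].text
+}
+```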
+
+## Next steps
+
+[Learn more about Azure OpenAI](../overview.md).
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/overview.md
+
+ Title: What is Azure OpenAI?
+
+description: Apply advanced language models to a variety of use cases with the Azure OpenAI service
++++ Last updated : 5/24/2021+
+keywords:
++
+# What is Azure OpenAI?
+
+The Azure OpenAI service provides REST API access to OpenAI's powerful language models including the GPT-3, Codex, and Embeddings model series. These models can be easily adapted to your specific task, including but not limited to content generation, summarization, semantic search, and natural language to code translation. Users can access the service through REST APIs, the Python SDK, or our web-based interface in the Azure OpenAI Studio.
+
+### Features overview
+
+| Feature | Azure OpenAI |
+| | |
+| Models available | GPT-3 base series <br> Codex Series <br> Embeddings Series <br> Learn more in our [Engines](./concepts/engines.md) page.|
+| Fine-tuning | Ada, <br>Babbage, <br> Curie,<br>Code-cushman-001 <br> Davinci<br> |
+| Billing Model| Coming Soon |
+| Virtual network support | Yes |
+| Managed Identity| Yes, via Azure Active Directory |
+| UI experience | **Azure Portal** for account & resource management, <br> **Azure OpenAI Service Studio** for model exploration and fine tuning |
+| Regional availability | South Central US, <br> West Europe |
+| Content filtering | Prompts and completions are evaluated against our content policy with automated systems. High severity content will be blocked. |
+
+## Responsible AI
+
+At Microsoft, we're committed to the advancement of AI driven by principles that put people first. Generative models such as the ones available in the Azure OpenAI service have significant potential benefits, but without careful design and thoughtful mitigations, such models have the potential to generate incorrect or even harmful content. Microsoft has made significant investments to help guard against abuse and unintended harm, which includes requiring applicants to show well-defined use cases, incorporating Microsoft's [principles for responsible AI use](https://www.microsoft.com/ai/responsible-ai?activetab=pivot1:primaryr6), building content filters to support customers, and providing responsible AI implementation guidance to onboarded customers.
+
+## How do I get access to Azure OpenAI?
+
+Access is currently limited as we navigate high demand, upcoming product improvements, and [Microsoft's commitment to responsible AI](https://www.microsoft.com/ai/responsible-ai?activetab=pivot1:primaryr6). For now, we're working with customers with an existing partnership with Microsoft, lower risk use cases, and those committed to incorporating mitigations. In addition to applying for initial access, all solutions using the Azure OpenAI service are required to go through a use case review before they can be released for production use.
+
+More specific information is included in the application form. We appreciate your patience as we work to responsibly enable broader access to the Azure OpenAI service.
+
+Apply here for initial access or for a production review:
+
+[Apply now](https://aka.ms/oaiapply)
+
+All solutions using the Azure OpenAI service are also required to go through a use case review before they can be released for production use, and are evaluated on a case-by-case basis. In general, the more sensitive the scenario the more important risk mitigation measures will be for approval.
+
+## Terms of Use
+
+The use of Azure OpenAI service is governed by the terms of service that were agreed to upon onboarding. You may only use this service for the use case provided. You must complete an additional review before using the Azure OpenAI service in a "live" or production scenario, within your company, or with your customers (as compared to use solely for internal evaluation).
+
+## Next steps
+
+Learn more about the [underlying engines/models that power Azure OpenAI](./concepts/engines.md).
communication-services Real Time Inspection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/developer-tools/real-time-inspection.md
Title: Developer Tools - Real-Time Inspection for Azure Communication Services
-description: Conceptual documentation outlining the capabilities provided by the Real-Time Inspection tool.
+ Title: Developer Tools - Azure Communication Services Communication Monitoring
+description: Conceptual documentation outlining the capabilities provided by the Communication Monitoring tool.
-# Real-time Inspection Tool for Azure Communication Services
+# Azure Communication Services communication monitoring
[!INCLUDE [Private Preview Disclaimer](../../includes/private-preview-include-section.md)]
-The Real-time Inspection Tool enables Azure Communication Services developers to inspect the state of the `Call` to debug or monitor their solution. For developers building an Azure Communication Services solution, they might need visibility for debugging into general call information such as the `Call ID` or advanced states, such as did a user facing diagnostic fire. The Real-time Inspection Tool provides developers this information and more. It can be easily added to any JavaScript (Web) solution by downloading the npm package `azure/communication-tools`.
+The Azure Communication Services communication monitoring tool enables developers to inspect the state of the `Call` to debug or monitor their solution. Developers building an Azure Communication Services solution might need visibility for debugging into general call information, such as the `Call ID`, or advanced states, such as whether a user-facing diagnostic fired. The communication monitoring tool provides developers this information and more. It can be easily added to any JavaScript (Web) solution by downloading the npm package `@azure/communication-monitoring`.
>[!NOTE]
->Find the open-source repository for the tool [here](https://github.com/Azure/communication-inspection).
+>Find the open-source repository for the tool [here](https://github.com/Azure/communication-monitoring).
## Capabilities
-The Real-time Inspection Tool provides developers three categories of information that can be used for debugging purposes:
+The Communication Monitoring tool provides developers with three categories of information that can be used for debugging purposes:
| Category | Descriptions | |--|--|
The Real-time Inspection Tool provides developers three categories of informatio
Data collected by the tool is only kept locally and temporarily. It can be downloaded from within the interface.
-Real-time Inspection Tool is compatible with the same browsers as the Calling SDK [here](../voice-video-calling/calling-sdk-features.md?msclkid=f9cf66e6a6de11ec977ae3f6d266ba8d#javascript-calling-sdk-support-by-os-and-browser).
+Communication Monitoring is compatible with the same browsers as the Calling SDK, listed [here](../voice-video-calling/calling-sdk-features.md?msclkid=f9cf66e6a6de11ec977ae3f6d266ba8d#javascript-calling-sdk-support-by-os-and-browser).
-## Get started with Real-time Inspection Tool
+## Get started with Communication Monitoring
-The tool can be accessed through an npm package `azure/communication-inspection`. The package contains the `InspectionTool` object that can be attached to a `Call`. The Call Inspector requires an `HTMLDivElement` as part of its constructor on which it will be rendered. The `HTMLDivElement` will dictate the size of the Call Inspector.
+The tool can be accessed through the npm package `@azure/communication-monitoring`. The package contains the `CommunicationMonitoring` object that can be attached to a `Call`. `CommunicationMonitoring` requires an `HTMLDivElement` as part of its constructor, on which it will be rendered. The `HTMLDivElement` dictates the size of the rendered tool.
-### Installing Real-time Inspection Tool
+### Installing Communication Monitoring
```bash
-npm i @azure/communication-inspection
+npm i @azure/communication-monitoring
```
-### Initialize Real-time Inspection Tool
+### Initialize Communication Monitoring
```javascript
-import { CallClient, CallAgent } from "@azure/communication-calling";
-import { InspectionTool } from "@azure/communication-tools";
+import { CallAgent, CallClient } from '@azure/communication-calling'
+import { CommunicationMonitoring } from '@azure/communication-monitoring'
-const callClient = new callClient();
-const callAgent = await callClient.createCallAgent({INSERT TOKEN CREDENTIAL});
-const call = callAgent.startCall({INSERT CALL INFORMATION});
+interface Options {
+ callClient: CallClient
+ callAgent: CallAgent
+ divElement: HTMLDivElement
+}
-const inspectionTool = new InspectionTool(call, {HTMLDivElement});
+const selectedDiv = document.getElementById('selectedDiv')
+
+const options = {
+  callClient: this.callClient, // the CallClient instance used by your application
+  callAgent: this.callAgent,   // the CallAgent created from that CallClient
+  divElement: selectedDiv,     // the HTMLDivElement the tool renders into
+}
+
+const communicationMonitoring = new CommunicationMonitoring(options)
``` ## Usage
-`start`: enable the `InspectionTool` to start reading data from the call object and storing it locally for visualization.
+`start`: enable the `CommunicationMonitoring` instance to start reading data from the call object and storing it locally for visualization.
```javascript
-inspectionTool.start()
+communicationMonitoring.start()
```
-`stop`: disable the `InspectionTool` from reading data from the call object.
+`stop`: disable the `CommunicationMonitoring` instance from reading data from the call object.
```javascript
-inspectionTool.stop()
+communicationMonitoring.stop()
```
-`open`: Open the `InspectionTool` in the UI.
+`open`: Open the `CommunicationMonitoring` instance in the UI.
```javascript
-inspectionTool.open()
+communicationMonitoring.open()
```
-`close`: Dismiss the `InspectionTool` in the UI.
+`close`: Dismiss the `CommunicationMonitoring` instance in the UI.
```javascript
-inspectionTool.close()
+communicationMonitoring.close()
```
communication-services Email Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/email-pricing.md
+
+ Title: Email pricing
+
+description: Learn about Communication Services Email pricing.
++++ Last updated : 04/15/2022++++
+# Email pricing in Azure Communication Services
++
+Prices for Azure Communication Services are generally based on a pay-as-you-go model and Email offers pay-as-you-go pricing as well. The prices in the following examples are for illustrative purposes and may not reflect the latest Azure pricing.
+
+## Email price
+
+ The price is based on the number of messages sent to recipients and the amount of data transferred to each recipient, which includes headers, message content (including text and images), and attachments. Messages can be sent to one or more recipients.
++
+|Email Send |Data Transferred|
+|||
+|$0.00025/email | $0.00012/MB|
+
+## Pricing example: A user of the Communication Services Virtual Visit Solution sends Appointment Reminders
+
+Alice manages a virtual visit solution for all the patients. Alice schedules the visits and sends email invites to all patients reminding them about their upcoming visit.
+
+Alice sends an email of 1 MB in size to 100 patients every day. The pricing for 30 days would be:
+
+100 emails x 30 days = 3,000 emails; 3,000 x $0.00025 = $0.75 USD
+
+1 MB x 100 emails x 30 days = 3,000 MB; 3,000 MB x $0.00012 = $0.36 USD
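+
+The same arithmetic can be expressed as a short script. This is only an illustration of the calculation above; the unit prices are the example rates from this article and may not reflect the latest Azure pricing.
+
+```javascript
+// Illustrative monthly cost calculation using the example rates above.
+const emailsPerDay = 100
+const days = 30
+const messageSizeMb = 1
+
+const sendPricePerEmail = 0.00025 // USD, example rate
+const dataPricePerMb = 0.00012    // USD, example rate
+
+const totalEmails = emailsPerDay * days                       // 3,000 emails
+const sendCost = totalEmails * sendPricePerEmail              // $0.75
+const dataCost = totalEmails * messageSizeMb * dataPricePerMb // $0.36
+
+console.log(`Total: $${(sendCost + dataCost).toFixed(2)} USD`) // Total: $1.11 USD
+```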
+
+## Next steps
+
+* [What is Email Communication Services](./email/prepare-email-communication-resource.md)
+
+* [What is Email Domains in Email Communication Services](./email/email-domain-and-sender-authentication.md)
+
+* [Get started with creating Email Communication Resource](../quickstarts/email/create-email-communication-resource.md)
+
+* [Get started by connecting Email Communication resource with a Communication Service resource](../quickstarts/email/connect-email-communication-resource.md)
+
+The following documents may be interesting to you:
+
+- Familiarize yourself with the [Email client library](./email/sdk-features.md)
+- How to send emails with custom verified domains? [Add custom domains](../quickstarts/email/add-custom-verified-domains.md)
+- How to send emails with Azure Communication Service managed domains? [Add Azure Managed domains](../quickstarts/email/add-azure-managed-domains.md)
communication-services Email Authentication Best Practice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/email/email-authentication-best-practice.md
+
+ Title: Best practices for sender authentication support in Azure Communication Services Email
+
+description: Learn about the best practices for Sender Authentication Support.
++++ Last updated : 04/15/2022++++
+# Best practices for sender authentication support in Azure Communication Services Email
++
+This article provides the best practices on how to use the sender authentication methods that help prevent attackers from sending messages that look like they come from your domain.
+
+## Email authentication
+Sending an email requires several steps, which include verifying that the sender of the email actually owns the domain, checking the domain reputation, virus scanning, and filtering for spam, phishing attempts, and malware. Configuring proper email authentication is a foundational principle for establishing trust in email and protecting your domain's reputation. If an email passes authentication checks, the receiving domain can apply policy to that email in keeping with the reputation already established for the identities associated with those authentication checks, and the recipient can be assured that those identities are valid.
+
+### SPF (Sender Policy Framework)
+SPF [RFC 7208](https://tools.ietf.org/html/rfc7208) is a mechanism that allows domain owners to publish and maintain, via a standard DNS TXT record, a list of systems authorized to send email on their behalf.
+
+### DKIM (Domain Keys Identified Mail)
+DKIM [RFC 6376](https://tools.ietf.org/html/rfc6376) allows an organization to claim responsibility for transmitting a message in a way that can be validated by the recipient.
+
+### DMARC (Domain-based Message Authentication, Reporting, and Conformance)
+DMARC [RFC 7489](https://tools.ietf.org/html/rfc7489) is a scalable mechanism by which a mail-originating organization can express domain-level policies and preferences for message validation, disposition, and reporting that a mail-receiving organization can use to improve mail handling.
+
+### ARC (Authenticated Received Chain)
+The ARC protocol [RFC 8617](https://tools.ietf.org/html/rfc8617) provides an authenticated chain of custody for a message, allowing each entity that handles the message to identify what entities handled it previously as well as the message's authentication assessment at each hop. ARC is not yet an internet standard, but adoption is increasing.
+
+### How Email authentication works
+Email authentication verifies that email messages from a sender (for example, notification@contoso.com) are legitimate and come from expected sources for that email domain (for example, contoso.com.)
+An email message may contain multiple originator or sender addresses. These addresses are used for different purposes. For example, consider these addresses:
+
+* Mail From address identifies the sender and specifies where to send return notices if any problems occur with the delivery of the message, such as non-delivery notices. This appears in the envelope portion of an email message and is not displayed by your email application. This is sometimes called the 5321.MailFrom address or the reverse-path address.
+
+* From address is the address displayed as the From address by your mail application. This address identifies the author of the email. That is, the mailbox of the person or system responsible for writing the message. This is sometimes called the 5322.From address.
+
+* Sender Policy Framework (SPF) helps validate outbound email sent from your mail from domain (is coming from who it says it is).
+
+* DomainKeys Identified Mail (DKIM) helps to ensure that destination email systems trust messages sent outbound from your mail from domain.
+
+* Domain-based Message Authentication, Reporting, and Conformance (DMARC) works with Sender Policy Framework (SPF) and DomainKeys Identified Mail (DKIM) to authenticate mail senders and ensure that destination email systems trust messages sent from your domain.
+
+### Implementing DMARC
+Implementing DMARC with SPF and DKIM provides rich protection against spoofing and phishing email. SPF uses a DNS TXT record to provide a list of authorized sending IP addresses for a given domain. Normally, SPF checks are only performed against the 5321.MailFrom address. This means that the 5322.From address is not authenticated when you use SPF by itself. This allows for a scenario where a user can receive a message, which passes an SPF check but has a spoofed 5322.From sender address.
+
+Like the DNS records for SPF, the record for DMARC is a DNS text (TXT) record that helps prevent spoofing and phishing. You publish DMARC TXT records in DNS. DMARC TXT records validate the origin of email messages by verifying the IP address of an email's author against the alleged owner of the sending domain. The DMARC TXT record identifies authorized outbound email servers, so destination email systems can verify that the messages they receive originate from those servers. However, when email is sent on your behalf by a sending service, the 5321.MailFrom address often doesn't match the 5322.From address of your domain, and SPF alone isn't enough for DMARC to pass for that email. To avoid this, you need to set up DKIM for your domain.
+
+A DMARC policy record allows a domain to announce that its email uses authentication, provides an email address to gather feedback about the use of the domain, and specifies a requested policy for the handling of messages that don't pass authentication checks. We recommend that:
+- Domains publishing DMARC records use a policy statement of "p=reject" where possible, and "p=quarantine" otherwise.
+- The policy statements "p=none" and "sp=none", and pct values below 100, should be viewed only as transitional states, with the goal of removing them as quickly as possible.
+- Any published DMARC policy record should include, at a minimum, a "rua" tag that points to a mailbox for receiving DMARC aggregate reports, and that mailbox should send no replies back when reports are received, due to privacy concerns.
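+
+Putting these recommendations together, a published DMARC policy record is a TXT record at the `_dmarc` subdomain of the sending domain; a hypothetical example (domain and reporting mailbox are placeholders):
+
+```
+_dmarc.contoso.com.  IN  TXT  "v=DMARC1; p=reject; pct=100; rua=mailto:dmarc-reports@contoso.com"
+```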
+
+## Next steps
+
+* [Best practices for implementing DMARC](https://docs.microsoft.com/microsoft-365/security/office-365-security/use-dmarc-to-validate-email?view=o365-worldwide&preserve-view=true#best-practices-for-implementing-dmarc-in-microsoft-365)
+
+* [Troubleshoot your DMARC implementation](https://docs.microsoft.com/microsoft-365/security/office-365-security/use-dmarc-to-validate-email?view=o365-worldwide&preserve-view=true#troubleshooting-your-dmarc-implementation)
+
+* [Email domains and sender authentication for Azure Communication Services](./email-domain-and-sender-authentication.md)
+
+* [Get started with creating and managing Email Communication Service in Azure Communication Services](../../quickstarts/email/create-email-communication-resource.md)
+
+* [Get started by connecting Email Communication Service with an Azure Communication Services resource](../../quickstarts/email/connect-email-communication-resource.md)
+
+The following documents may be interesting to you:
+
+- Familiarize yourself with the [Email client library](../email/sdk-features.md)
+- How to send emails with custom verified domains? [Add custom domains](../../quickstarts/email/add-custom-verified-domains.md)
+- How to send emails with Azure Managed Domains? [Add Azure Managed Domains](../../quickstarts/email/add-azure-managed-domains.md)
communication-services Email Domain And Sender Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/email/email-domain-and-sender-authentication.md
+
+ Title: Email domains and sender authentication for Azure Communication Services
+
+description: Learn about the Azure Communication Services Email Domains and Sender Authentication.
++++ Last updated : 04/15/2022++++
+# Email domains and sender authentication for Azure Communication Services
++
+An email domain is a unique name that appears after the @ sign in email addresses. It typically takes the form of your organization's name and brand that is recognized in public. Using your domain in email allows users to trust that a message isn't a phishing attempt and that it comes from a trusted source, thereby building credibility for your brand. If you prefer, you can use an email domain offered through Azure Communication Services: we offer an email domain that can be used to send emails on behalf of your organization.
+
+## Email domains and sender authentication
+Email Communication Services allows you to configure email with two types of domains: **Azure Managed Domains** and **Custom Domains**.
+
+### Azure Managed Domains
+Getting an Azure Managed Domain is a one-click setup. You can add a free Azure subdomain to your Email Communication resource, and you'll be able to send emails using MailFrom domains like donotreply@xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx.azurecomm.net. Your Azure Managed Domain will be pre-configured with the required sender authentication support.
+### Custom Domains
+With this option, you add a domain that you already own. You have to add your domain, verify ownership of it to send email, and then configure the required sender authentication support.
+
+### Sender authentication for domains
+Email authentication (also known as email validation) is a group of standards that tries to stop spoofing (email messages from forged senders). Our email pipeline uses these standards to verify the emails that are sent. Trust in email begins with authentication, and Azure Communication Services Email helps senders properly configure the following email authentication protocols for the emails they send.
+
+**SPF (Sender Policy Framework)**
+SPF [RFC 7208](https://tools.ietf.org/html/rfc7208) is a mechanism that allows domain owners to publish and maintain, via a standard DNS TXT record, a list of systems authorized to send email on their behalf. Azure Communication Services allows you to configure the required SPF record that needs to be added to your DNS to verify your custom domains.
+
+**DKIM (Domain Keys Identified Mail)**
+DKIM [RFC 6376](https://tools.ietf.org/html/rfc6376) allows an organization to claim responsibility for transmitting a message in a way that can be validated by the recipient. Azure Communication Services allows you to configure the required DKIM records that need to be added to your DNS to verify your custom domains.
+
+Follow the steps [to set up sender authentication for your domain](../../quickstarts/email/add-custom-verified-domains.md).
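+
+When you follow those steps, the exact record values are generated for your domain and shown in the Azure portal. As a shape-only illustration (all host names and values below are placeholders), the records you add at your DNS host consist of one TXT record for SPF and CNAME records for DKIM:
+
+```
+<your-domain>                                    IN TXT    "<SPF value shown in the portal>"
+<DKIM selector shown in the portal>._domainkey   IN CNAME  <DKIM target shown in the portal>
+<DKIM2 selector shown in the portal>._domainkey  IN CNAME  <DKIM2 target shown in the portal>
+```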
+
+### Choosing the domain type
+You can choose the experience that works best for your business. You can start development by using the Azure Managed Domain and switch to a custom domain when you're ready to launch your applications.
+
+## How to connect a domain to send email
+Email Communication Service resources are designed to keep the domain validation steps as decoupled as possible from application integration. Application integration is linked to an Azure Communication Services resource, and each Communication Services resource can be linked with one of the verified domains from your Email Communication Service. Follow the steps [to connect your verified domains](../../quickstarts/email/connect-email-communication-resource.md). To switch from one verified domain to another, you need to [disconnect the domain and connect a different domain](../../quickstarts/email/connect-email-communication-resource.md).
+
+## Next steps
+
+* [Best practices for sender authentication support in Azure Communication Services Email](./email-authentication-best-practice.md)
+
+* [Get started with creating and managing Email Communication Service in Azure Communication Services](../../quickstarts/email/create-email-communication-resource.md)
+
+* [Get started by connecting Email Communication Service with an Azure Communication Services resource](../../quickstarts/email/connect-email-communication-resource.md)
+
+The following documents may be interesting to you:
+
+- Familiarize yourself with the [Email client library](../email/sdk-features.md)
+- How to send emails with custom verified domains? [Add custom domains](../../quickstarts/email/add-custom-verified-domains.md)
+- How to send emails with Azure Managed Domains? [Add Azure Managed Domains](../../quickstarts/email/add-azure-managed-domains.md)
communication-services Email Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/email/email-overview.md
+
+ Title: Email as service overview in Azure Communication Services
+
+description: Learn about Communication Services Email concepts.
++++ Last updated : 04/15/2022++++
+# Email in Azure Communication Services
++
+Azure Communication Services Email is a new primitive that facilitates high-volume transactional, bulk, and marketing emails on the Azure Communication Services platform and enables Application-to-Person (A2P) use cases. Azure Communication Services Email simplifies the integration of email capabilities into your applications by using production-ready email SDK options. Email combines with SMS and other communication channels to build collaborative applications that help you reach your customers on their preferred communication channel.
+
+The Azure Communication Services Email offering improves your time-to-market with scalable, reliable email capabilities from your own SMTP domains. Like other communication modalities, the Email offering has the benefit of paying only for what you use.
+
+## Key principles of Azure Communication Services Email
+Key principles of Azure Communication Services Email Service include:
+
+- **Easy Onboarding** steps for adding Email capability to your applications.
+- **High Volume Sending** support for A2P (Application to Person) use cases.
+- **Custom Domain** support to enable sending emails from email domains that are verified by your domain providers.
+- **Reliable Delivery** status on emails sent from your application in near real-time.
+- **Email Analytics** to measure the success of delivery, richer breakdown of Engagement Tracking.
+- **Opt-Out** handling support to automatically detect and respect opt-outs managed in our suppression list.
+- **SDKs** to add rich collaboration capabilities to your applications.
+- **Security and Compliance** to honor and respect data handling and privacy requirements that Azure promises to our customers.
+
+## Key features
+Key features include:
+
+- **Azure Managed Domain** - Customers will be able to send mail from the pre-provisioned domain (xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx.azurecomm.net).
+- **Custom Domain** - Customers will be able to send mail from their own verified domain (notify.contoso.com).
+- **Sender Authentication Support** - The platform enables support for SPF (Sender Policy Framework) and DKIM (DomainKeys Identified Mail) settings for both Azure Managed and Custom Domains, with ARC (Authenticated Received Chain) support that preserves the email authentication result when messages are relayed.
+- **Email Spam Protection and Fraud Detection** - The platform performs email hygiene for all messages and offers comprehensive email protection by leveraging Microsoft Defender components, enabling the existing transport rules for detecting malware, URL blocking, and content heuristics.
+- **Email Analytics** - Email analytics through Azure Insights. To meet GDPR requirements, we emit logs at the request level that contain messageId and recipient information for diagnostic and auditing purposes.
+- **Engagement Tracking** - Bounce, blocked, open, and click tracking.
+
+## Next steps
+
+* [What is Email Communication Service](./prepare-email-communication-resource.md)
+
+* [Email domains and sender authentication for Azure Communication Services](./email-domain-and-sender-authentication.md)
+
+* [Get started with creating and managing Email Communication Service in Azure Communication Services](../../quickstarts/email/create-email-communication-resource.md)
+
+* [Get started by connecting Email Communication Service with an Azure Communication Services resource](../../quickstarts/email/connect-email-communication-resource.md)
+
+The following documents may be interesting to you:
+
+- Familiarize yourself with the [Email client library](../email/sdk-features.md)
+- How to send emails with custom verified domains? [Add custom domains](../../quickstarts/email/add-custom-verified-domains.md)
+- How to send emails with Azure Managed Domains? [Add Azure Managed Domains](../../quickstarts/email/add-azure-managed-domains.md)
communication-services Prepare Email Communication Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/email/prepare-email-communication-resource.md
+
+ Title: Prepare Email Communication Resource for Azure Communication Service
+
+description: Learn about the Azure Communication Services Email Communication Resources and Domains.
++++ Last updated : 04/15/2022++++
+# Prepare Email Communication resource for Azure Communication Service
++
+Similar to the Chat, VoIP, and SMS modalities in Azure Communication Services, you'll be able to send email by using your Azure Communication Services resource. However, sending an email requires certain pre-configuration steps, and you have to rely on your organization's admins to help set that up. The administrator of your organization needs to:
+- Approve the domain that your organization allows you to send mail from
+- Define the sender domain they'll use as the P1 sender email address (also known as the MailFrom email address) that shows up on the envelope of the email [RFC 5321](https://tools.ietf.org/html/rfc5321)
+- Define the P2 sender email address that most email recipients will see on their email client [RFC 5322](https://tools.ietf.org/html/rfc5322)
+- Set up and verify the sender domain by adding the necessary DNS records for sender verification to succeed
+
+Once the sender domain is configured and verified, you'll be able to link the verified domains with your Azure Communication Services resource and start sending emails.
+
+One of the key principles of Azure Communication Services is to have a simplified developer experience. Our email platform simplifies the experience for developers by easing this back-and-forth with organization administrators: admin developers configure the necessary sender authentication and other compliance-related steps for sending email, letting you focus on building the required payload.
+
+Your Azure administrators will create a new resource of type "Email Communication Services" and add the allowed email sender domains under this resource. The domains added under this resource type contain all the sender authentication and engagement tracking configurations that must be completed before you start sending emails. Once the sender domain is configured and verified, you'll be able to link these domains with your Azure Communication Services resource, select which of the verified domains is suitable for your application, and connect them to send emails from your application.
+
+## Organization admins / admin developers' responsibilities
+
+- Plan all the required Email Domains for the applications in the organization
+- Create the new resource of type "Email Communication Services"
+- Add Custom Domains or get an Azure Managed Domain.
+- Perform the sender verification steps for Custom Domains
+- Set up DMARC Policy for the verified Sender Domains.
+
+## Developers' responsibilities
+- Connect the preferred domain to Azure Communication Services resources.
+- Generate the email payload and define the required:
+  - Email headers
+  - Body of the email
+  - Recipient list
+  - Attachments, if any
+- Submit it to the Communication Services Email API.
+- Verify the status of email delivery.
+
+## Next steps
+
+* [Email domains and sender authentication for Azure Communication Services](./email-domain-and-sender-authentication.md)
+
+* [Get started with creating and managing Email Communication Service in Azure Communication Services](../../quickstarts/email/create-email-communication-resource.md)
+
+* [Get started by connecting Email Communication Service with an Azure Communication Services resource](../../quickstarts/email/connect-email-communication-resource.md)
+
+The following documents may be interesting to you:
+
+- Familiarize yourself with the [Email client library](../email/sdk-features.md)
+- How to send emails with custom verified domains? [Add custom domains](../../quickstarts/email/add-custom-verified-domains.md)
+- How to send emails with Azure Managed Domains? [Add Azure Managed Domains](../../quickstarts/email/add-azure-managed-domains.md)
communication-services Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/email/sdk-features.md
+
+ Title: Email client library overview for Azure Communication Services
+
+description: Learn about the Azure Communication Services Email client library.
++++ Last updated : 04/15/2022++++
+# Email client library overview for Azure Communication Services
++
+Azure Communication Services Email client libraries can be used to add transactional Email support to your applications.
+
+## Client libraries
+| Assembly | Protocols | Open vs. Closed Source | Namespaces | Capabilities |
+| --- | --- | --- | --- | --- |
+| Azure Resource Manager | REST | Open | Azure.ResourceManager.Communication | Provision and manage Email Communication Services resources |
+| Email | REST | Open | Azure.Communication.Email | Send and get status on Email messages |
+
+### Azure Email Communication Resource
+The Azure Resource Manager library for Email Communication Services is meant for email domain administration.
+
+| Area | JavaScript | .NET | Python | Java SE | iOS | Android | Other |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| Azure Resource Manager | - | [NuGet](https://www.nuget.org/packages/Azure.ResourceManager.Communication) | - | - | - | - | [Go via GitHub](https://github.com/Azure/azure-sdk-for-go/releases/tag/v46.3.0) |
+
+## Email client library capabilities
+The following list presents the set of features that are currently available in the Communication Services Email client libraries.
+
+| Feature | Capability | JS | Java | .NET | Python |
+| --- | --- | --- | --- | --- | --- |
+| Sendmail | Send Email messages </br> *Attachments are supported* | ✔️ | ❌ | ✔️ | ❌ |
+| Get Status | Receive Delivery Reports for messages sent | ✔️ | ❌ | ✔️ | ❌ |
++
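+To give a sense of the send-and-get-status flow listed above, here's a minimal JavaScript sketch. It assumes the preview `@azure/communication-email` package; method and property names may differ between SDK versions, so treat [How to send an email](../../quickstarts/email/send-email.md) as the authoritative sample.
+
+```javascript
+const { EmailClient } = require("@azure/communication-email");
+
+// Connection string from your Azure Communication Services resource (placeholder value).
+const client = new EmailClient("<connection-string>");
+
+async function main() {
+  const message = {
+    // MailFrom address on a verified domain connected to the resource (placeholder value).
+    sender: "donotreply@<your-verified-domain>",
+    content: {
+      subject: "Appointment reminder",
+      plainText: "Your appointment is tomorrow at 10:00."
+    },
+    recipients: {
+      to: [{ email: "customer@contoso.com" }]
+    }
+  };
+
+  // Send the message, then check the delivery status using the returned message ID.
+  const { messageId } = await client.send(message);
+  const status = await client.getSendStatus(messageId);
+  console.log(`Message ${messageId} status: ${status.status}`);
+}
+
+main().catch(console.error);
+```
+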
+## API Throttling and Timeouts
+
+Your Azure account has a set of limitations on the number of email messages that you can send. For all developers, email sending is limited to 10 emails per minute, 25 emails per hour, and 100 emails per day. This sandbox setup helps developers start building the application, and you can request an increase in sending volume as soon as the application is ready to go live. Submit a support request to raise your sending limit.
+
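+As a minimal client-side illustration (not part of the SDK), a batch job can stay under the per-minute sandbox limit by pacing its own send calls:
+
+```javascript
+// Space out sends so a batch stays under the 10-messages-per-minute sandbox limit quoted above.
+const MAX_PER_MINUTE = 10;
+const INTERVAL_MS = Math.ceil(60000 / MAX_PER_MINUTE); // one send every 6 seconds
+
+async function sendPaced(messages, sendOne) {
+  for (const message of messages) {
+    await sendOne(message); // for example, a function that calls the Email client library's send operation
+    await new Promise((resolve) => setTimeout(resolve, INTERVAL_MS));
+  }
+}
+```
+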
+## Next steps
+
+* [Get started with creating and managing Email Communication Service in Azure Communication Services](../../quickstarts/email/create-email-communication-resource.md)
+
+* [Get started by connecting Email Communication Service with an Azure Communication Services resource](../../quickstarts/email/connect-email-communication-resource.md)
+
+The following documents may be interesting to you:
+
+- How to send emails with custom verified domains? [Add custom domains](../../quickstarts/email/add-custom-verified-domains.md)
+- How to send emails with Azure Managed Domains? [Add Azure Managed Domains](../../quickstarts/email/add-azure-managed-domains.md)
+- How to send emails with Azure Communication Services using the Email client library? [How to send an email](../../quickstarts/email/send-email.md)
communication-services Custom Teams Endpoint Firewall Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/custom-teams-endpoint-firewall-configuration.md
Azure Communication Services provides the ability to leverage Communication Serv
The following articles might be of interest to you: - Learn more about [Azure Communication Services firewall configuration](../voice-video-calling/network-requirements.md).-- Learn about [Microsoft Teams firewall configuration](/microsoft-365/enterprise/urls-and-ip-address-ranges?view=o365-worldwide#skype-for-business-online-and-microsoft-teams).
+- Learn about [Microsoft Teams firewall configuration](/microsoft-365/enterprise/urls-and-ip-address-ranges?view=o365-worldwide&preserve-view=true#skype-for-business-online-and-microsoft-teams).
communication-services Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/known-issues.md
A number of specific Android devices fail to start, accept calls, and meetings.
* Sometimes when incoming PSTN is received the tab with the call or meeting will hang. Related webkit bugs [here](https://bugs.webkit.org/show_bug.cgi?id=233707) and [here](https://bugs.webkit.org/show_bug.cgi?id=233708#c0).
-### Device mutes and incoming video stops rendering when certain interruptions occur on iOS Safari.
+### Local microphone/camera mutes when certain interruptions occur on iOS Safari and Android Chrome.
This problem can occur if another application or the operating system takes over the control of the microphone or camera. Here are a few examples that might happen while a user is in the call:
This problem can occur if another application or the operating system takes over
- A user plays a YouTube video, for example, or starts a FaceTime call. Switching to another native application can capture access to the microphone or camera. - A user enables Siri, which will capture access to the microphone.
-To recover from all these cases, the user must go back to the application to unmute. In the case of video, the user must start the video in order to have the audio and video start flowing after the interruption.
+On iOS, for example, if a PSTN call comes in while the user is on an ACS call, a microphoneMutedUnexpectedly bad UFD is raised, audio stops flowing in the ACS call, and the call is marked as muted. Once the PSTN call is over, the user has to unmute the ACS call for audio to start flowing again. On Android Chrome, when a PSTN call comes in, audio stops flowing in the ACS call, but the ACS call isn't marked as muted. Once the PSTN call is finished, Android Chrome regains audio automatically and audio starts flowing normally again in the ACS call.
+
+If the camera is on and an interruption occurs, the ACS call may or may not lose the camera. If it's lost, the camera is marked as off, and the user has to turn it back on after the interruption has released the camera.
Occasionally, microphone or camera devices won't be released on time, and that can cause issues with the original call. For example, if the user tries to unmute while watching a YouTube video, or if a PSTN call is active simultaneously.
communication-services Number Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/number-types.md
Azure Communication Services allows you to use phone numbers to make voice calls
## Available options + Azure Communication Services offers three types of Numbers: Toll-Free, Local, and Short Codes. - **To send or receive an SMS**, choose a Toll-Free Number or a Short Code
The table below summarizes these number types with supported capabilities:
| Type | Example | Send SMS | Receive SMS | Make Calls | Receive Calls | Typical Use Case | Restrictions | | :-- | :- | :: | :: | :--: | :--: | :- | :- |
-| [Toll-Free](../../quickstarts/telephony/get-phone-number.md) | +1 (8AB) XYZ PQRS | Yes | Yes | Yes | Yes | Receive calls on IVR bots, SMS Notifications | SMS in US only |
+| [Toll-Free](../../quickstarts/telephony/get-phone-number.md) | +1 (8AB) XYZ PQRS | Yes | Yes | Yes | Yes | Receive calls on IVR bots, SMS Notifications | SMS in US and CA only |
| [Local (Geographic)](../../quickstarts/telephony/get-phone-number.md) | +1 (ABC) XYZ PQRS | No | No | Yes | Yes | Geography Specific Number | Calling Only | | [Short-Codes](../../quickstarts/sms/apply-for-short-code.md) | ABC-XYZ | Yes | Yes | No | No | High-velocity SMS | SMS only |
communication-services Sub Eligibility Number Capability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/sub-eligibility-number-capability.md
Last updated 03/04/2022 + # Subscription eligibility and number capabilities
To acquire a phone number, you need to be on a paid Azure subscription. Phone nu
Additional details on eligible subscription types are as follows:
-| Number Type | Eligible Azure Agreement Type |
-| :- | :- |
-| Toll-Free and Local (Geographic) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement |
-| Short-Codes | Modern Customer Agreement (Field Led) and Enterprise Agreement Only |
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement* |
+| Short-Codes | Modern Customer Agreement (Field Led) and Enterprise Agreement Only |
+
+\* Allowing the purchase of Italian phone numbers for CSP and LSP customers is planned only for General Availability launch.
## Number capabilities
The tables below summarize current availability:
| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls | | :- | :- | :- | :- | :- | : | | USA & Puerto Rico | Toll-Free | General Availability | General Availability | General Availability | General Availability\* |
-| USA & Puerto Rico | Local | Not Available | Not Available | General Availability | General Availability\* |
-| USA | Short-Codes | Public Preview | Public Preview\* | Not Available | Not Available |
+| USA & Puerto Rico | Local | - | - | General Availability | General Availability\* |
+| USA | Short-Codes | Public Preview | Public Preview\* | - | - |
\* Available through Azure Bot Framework and Dynamics only ## Customers with UK Azure billing addresses
+| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :-- | :- | :- | :- | : | : |
+| UK | Toll-Free | - | - | Public Preview | Public Preview\* |
+| UK | Local | - | - | Public Preview | Public Preview\* |
+| USA & Puerto Rico | Toll-Free | General Availability | General Availability | Public Preview | Public Preview\* |
+| USA & Puerto Rico | Local | - | - | Public Preview | Public Preview\* |
+| Canada | Toll-Free | Public Preview | Public Preview | Public Preview | Public Preview\* |
+| Canada | Local | - | - | Public Preview | Public Preview\* |
+
+\* Available through Azure Bot Framework and Dynamics only
+
+## Customers with Ireland Azure billing addresses
+
+| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :-- | :- | :- | :- | : |
+| Ireland | Toll-Free | - | - | Public Preview | Public Preview\* |
+| Ireland | Local | - | - | Public Preview | Public Preview\* |
+| USA & Puerto Rico | Toll-Free | General Availability | General Availability | Public Preview | Public Preview\* |
+| USA & Puerto Rico | Local | - | - | Public Preview | Public Preview\* |
+| Canada | Toll-Free | Public Preview | Public Preview | Public Preview | Public Preview\* |
+| Canada | Local | - | - | Public Preview | Public Preview\* |
+| UK | Toll-Free | - | - | Public Preview | Public Preview\* |
+| UK | Local | - | - | Public Preview | Public Preview\* |
++
+\* Available through Azure Bot Framework and Dynamics only
+
+## Customers with Denmark Azure billing addresses
+ | Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls | | :- | :-- | :- | :- | :- | : |
-| UK | Toll-Free | Not Available | Not Available | Public Preview | Public Preview\* |
-| UK | Local | Not Available | Not Available | Public Preview | Public Preview\* |
+| Denmark | Toll-Free | - | - | Public Preview | Public Preview\* |
+| Denmark | Local | - | - | Public Preview | Public Preview\* |
| USA & Puerto Rico | Toll-Free | General Availability | General Availability | Public Preview | Public Preview\* |
-| USA & Puerto Rico | Local | Not Available | Not Available | Public Preview | Public Preview\* |
+| USA & Puerto Rico | Local | - | - | Public Preview | Public Preview\* |
+| Canada | Toll-Free | Public Preview | Public Preview | Public Preview | Public Preview\* |
+| Canada | Local | - | - | Public Preview | Public Preview\* |
+| UK | Toll-Free | - | - | Public Preview | Public Preview\* |
+| UK | Local | - | - | Public Preview | Public Preview\* |
\* Available through Azure Bot Framework and Dynamics only
-## Customers with Ireland Azure billing addresses
+## Customers with Canada Azure billing addresses
+
+| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :-- | :- | :- | :- | : |
+| Canada | Toll-Free | Public Preview | Public Preview | Public Preview | Public Preview\* |
+| Canada | Local | - | - | Public Preview | Public Preview\* |
+| USA & Puerto Rico | Toll-Free | General Availability | General Availability | Public Preview | Public Preview\* |
+| USA & Puerto Rico | Local | - | - | Public Preview | Public Preview\* |
+| UK | Toll-Free | - | - | Public Preview | Public Preview\* |
+| UK | Local | - | - | Public Preview | Public Preview\* |
-| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
-| :- | :-- | :- | :- | :- | : |
-| USA & Puerto Rico | Toll-Free | General Availability | General Availability | General Availability | General Availability\* |
-| USA & Puerto Rico | Local | Not Available | Not Available | General Availability | General Availability\* |
\* Available through Azure Bot Framework and Dynamics only
-## Customers with Denmark Azure Billing Addresses
+## Customers with Italy Azure billing addresses
| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls | | : | :-- | : | : | :- | : |
-| Denmark | Toll-Free | Not Available | Not Available | Public Preview | Public Preview\* |
-| Denmark | Local | Not Available | Not Available | Public Preview | Public Preview\* |
+| Italy | Toll-Free** | - | - | Public Preview | Public Preview\* |
+| Italy | Local** | - | - | Public Preview | Public Preview\* |
+
+\* Available through Azure Bot Framework and Dynamics only
+
+\** Allowing the purchase of Italian phone numbers for CSP and LSP customers is planned only for General Availability launch.
+
+## Customers with Sweden Azure billing addresses
+
+| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :-- | :- | :- | :- | : |
+| Sweden | Toll-Free | - | - | Public Preview | Public Preview\* |
+| Sweden | Local | - | - | Public Preview | Public Preview\* |
+| Canada | Toll-Free | Public Preview | Public Preview | Public Preview | Public Preview\* |
+| Canada | Local | - | - | Public Preview | Public Preview\* |
| USA & Puerto Rico | Toll-Free | General Availability | General Availability | Public Preview | Public Preview\* |
-| USA & Puerto Rico | Local | Not Available | Not Available | Public Preview | Public Preview\* |
+| USA & Puerto Rico | Local | - | - | Public Preview | Public Preview\* |
\* Available through Azure Bot Framework and Dynamics only
communication-services Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/pricing.md
Rose sees the messages and starts chatting. In the meanwhile Casey gets a call a
- Number of messages sent (20 + 30 + 18 + 30 + 25 + 35) x $0.0008 = $0.1264
-## SMS (Short Messaging Service) and Telephony
+## SMS (Short Messaging Service)
-Please refer to the following links for details on SMS and Telephony pricing
+Azure Communication Services allows for adding SMS messaging capabilities to your applications. You can embed the experience into your applications using JavaScript, Java, Python, or .NET SDKs. Refer to our [full list of available SDKs](./sdk-options.md).
-- [SMS Pricing Details](./sms-pricing.md)-- [PSTN Pricing Details](./pstn-pricing.md)
+### Pricing
+
+The SMS usage price is a per-message segment charge based on the destination of the message. The carrier surcharge is calculated based on the destination of the message for sent messages and based on the sender of the message for received messages. Please refer to the [SMS Pricing Page](./sms-pricing.md) for pricing details.
+
+### Pricing example: 1:1 SMS sending
+
+Contoso is a healthcare company with clinics in the US and Canada. Contoso has a Patient Appointment Reminder application that sends SMS appointment reminders to patients about upcoming appointments.
+
+- The application sends appointment reminders to 20 US patients and 30 Canadian patients using a US toll-free number.
+- The reminder message is 150 characters, which fits within one message segment*. Hence, the total sent messages are 20 message segments for US and 30 message segments for CA.
+
+**Cost calculations**
+
+- US - 20 message segments x $0.0075 per sent message segment + 20 message segments x $0.0025 carrier surcharge per sent message segment = $0.20
+- CA - 30 message segments x $0.0075 per sent message segment + 30 message segments x $0.0085 carrier surcharge per sent message segment = $0.48
+
+**Total cost for the appointment reminders for 20 US patients and 30 CA patients**: $0.20 + $0.48 = $0.68
+
+### Pricing example: 1:1 SMS receiving
+
+Contoso is a healthcare company with clinics in the US and Canada. Contoso has a Patient Appointment Reminder application that sends SMS appointment reminders to patients about upcoming appointments. Patients can respond to the messages with "Reschedule" and include their date/time preference to reschedule their appointments.
+
+- The application sends appointment reminders to 20 US patients and 30 Canadian patients using a CA toll-free number.
+- 6 US patients and 4 CA patients respond to reschedule their appointments. Contoso receives 10 SMS responses in total.
+- The reschedule messages are shorter than one message segment*. Hence, the total messages received are 6 message segments for US and 4 message segments for CA.
+
+**Cost calculations**
+
+- US - 6 message segments x $0.0075 per received message segment + 6 message segments x $0.0010 carrier surcharge per received message segment = $0.051
+- CA - 4 message segments x $0.0075 per received message segment = $0.03
+
+**Total cost for receiving patient responses from 6 US patients and 4 CA patients**: $0.051 + $0.03 = $0.081
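+
+The same arithmetic, expressed as a small JavaScript sketch with the example rates quoted above (see the [SMS Pricing Page](./sms-pricing.md) for current rates):
+
+```javascript
+// Rates from the two examples above, in USD per message segment.
+const sentRate = 0.0075, receivedRate = 0.0075;
+const usSentSurcharge = 0.0025, caSentSurcharge = 0.0085, usReceivedSurcharge = 0.0010;
+
+// Sending example: 20 US + 30 CA single-segment reminders.
+const sendCost = 20 * (sentRate + usSentSurcharge) + 30 * (sentRate + caSentSurcharge);
+console.log(sendCost.toFixed(2)); // "0.68"
+
+// Receiving example: 6 US + 4 CA single-segment replies (no CA received surcharge in this example).
+const receiveCost = 6 * (receivedRate + usReceivedSurcharge) + 4 * receivedRate;
+console.log(receiveCost.toFixed(3)); // "0.081"
+```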
+
+## Telephony
+Please refer to the following links for details on Telephony pricing
+
+- [PSTN Pricing Details](./pstn-pricing.md)
## Next Steps Get started with Azure Communication
communication-services Pstn Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/pstn-pricing.md
Last updated 1/28/2022
-# Telephony (PSTN) Pricing
+# Telephony (PSTN) pricing
> [!IMPORTANT] > Number Retention and Portability: Phone numbers that are assigned to you during any preview program may need to be returned to Microsoft if you do not meet regulatory requirements before General Availability. During private preview and public preview, telephone numbers are not eligible for porting. [Details on offers in Public Preview / GA](../concepts/numbers/sub-eligibility-number-capability.md)
In most cases, customers with Azure subscriptions locations that match the count
All prices shown below are in USD.
-## United States Telephony Offers
+## United States telephony offers
-### Phone Number Leasing Charges
+### Phone number leasing charges
|Number type |Monthly fee | |--|--| |Geographic |USD 1.00/mo | |Toll-Free |USD 2.00/mo |
-### Usage Charges
+### Usage charges
|Number type |To make calls* |To receive calls| |--|--|| |Geographic |Starting at USD 0.0130/min |USD 0.0085/min |
All prices shown below are in USD.
\* For destination-specific pricing for making outbound calls, please refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
-## United Kingdom Telephony Offers
+## United Kingdom telephony offers
-### Phone Number Leasing Charges
+### Phone number leasing charges
|Number type |Monthly fee | |--|--| |Geographic |USD 1.00/mo | |Toll-Free |USD 2.00/mo |
-### Usage Charges
+### Usage charges
|Number type |To make calls* |To receive calls| |--|--|| |Geographic |Starting at USD 0.0150/min |USD 0.0090/min |
All prices shown below are in USD.
\* For destination-specific pricing for making outbound calls, please refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
-## Denmark Telephony Offers
+## Denmark telephony offers
-### Phone Number Leasing Charges
+### Phone number leasing charges
|Number type |Monthly fee | |--|--| |Geographic |USD 0.82/mo | |Toll-Free |USD 25.00/mo |
-### Usage Charges
+### Usage charges
|Number type |To make calls* |To receive calls| |--|--|| |Geographic |Starting at USD 0.0190/min |USD 0.0100/min |
All prices shown below are in USD.
\* For destination-specific pricing for making outbound calls, please refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
+## Canada telephony offers
+
+### Phone number leasing charges
+|Number type |Monthly fee |
+|--|--|
+|Geographic |USD 1.00/mo |
+|Toll-Free |USD 2.00/mo |
+
+### Usage charges
+|Number type |To make calls* |To receive calls|
+|--|--||
+|Geographic |Starting at USD 0.0130/min |USD 0.0085/min |
+|Toll-free |Starting at USD 0.0130/min |USD 0.0220/min |
+
+\* For destination-specific pricing for making outbound calls, please refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
+
+## Ireland telephony offers
+
+### Phone number leasing charges
+|Number type |Monthly fee |
+|--|--|
+|Geographic |USD 1.50/mo |
+|Toll-Free |USD 19.88/mo |
+
+### Usage charges
+|Number type |To make calls* |To receive calls|
+|--|--||
+|Geographic |Starting at USD 0.0160/min |USD 0.0100/min |
+|Toll-free |Starting at USD 0.0160/min |Starting at USD 0.0448/min |
+
+\* For destination-specific pricing for making outbound calls, please refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
+
+## Italy telephony offers
+
+### Phone number leasing charges
+|Number type |Monthly fee |
+|--|--|
+|Geographic |USD 2.92/mo |
+|Toll-Free |USD 23.39/mo |
+
+### Usage charges
+|Number type |To make calls* |To receive calls|
+|--|--||
+|Geographic |Starting at USD 0.0160/min |USD 0.0100/min |
+|Toll-free |Starting at USD 0.0160/min |USD 0.3415/min |
+
+\* For destination-specific pricing for making outbound calls, please refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
+
+## Sweden telephony offers
+
+### Phone number leasing charges
+|Number type |Monthly fee |
+|--|--|
+|Geographic |USD 1.00/mo |
+|Toll-Free |USD 21.05/mo |
+
+### Usage charges
+|Number type |To make calls* |To receive calls|
+|--|--||
+|Geographic |Starting at USD 0.0160/min |USD 0.0080/min |
+|Toll-free |Starting at USD 0.0160/min |USD 0.1138/min |
+
+\* For destination-specific pricing for making outbound calls, please refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
+ *** Note: Pricing for all countries is subject to change as pricing is market-based and depends on third-party suppliers of telephony services. Additionally, pricing may include requisite taxes and fees.
communication-services Sms Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sms-pricing.md
zone_pivot_groups: acs-tollfree-shortcode
# SMS Pricing > [!IMPORTANT]
-> SMS messages can be sent to and received from United States phone numbers. Phone numbers located in other geographies are not yet supported by Communication Services SMS.
+> SMS messages can be sent to and received from United States and Canada phone numbers. Phone numbers located in other geographies are not yet supported by Communication Services SMS.
::: zone pivot="tollfree" [!INCLUDE [Toll-Free](./includes/sms-tollfree-pricing.md)]
In this quickstart, you learned how to send SMS messages using Azure Communicati
> [Learn more about SMS](../concepts/sms/concepts.md) The following documents may be interesting to you:-- Familiarize yourself with the [SMS SDK](../concepts/sms/sdk-features.md)
+- Familiarize yourself with one of the [SMS SDKs](../concepts/sms/sdk-features.md)
- Get an SMS capable [phone number](../quickstarts/telephony/get-phone-number.md) - Get a [short code](../quickstarts/sms/apply-for-short-code.md) - [Phone number types in Azure Communication Services](../concepts/telephony/plan-solution.md)
communication-services Sms Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sms/sms-faq.md
Rate Limits for SMS:
## Carrier Fees ### What are the carrier fees for SMS?
-In July 2021, US carriers started charging an added fee for SMS messages sent and/or received from toll-free numbers and short codes. Carrier fees for SMS are charged per message segment based on the destination. Azure Communication Services charges a standard carrier fee per message segment. Carrier fees are subject to change by mobile carriers. Please refer to [SMS pricing](../sms-pricing.md) for more details.
+US and CA carriers charge an added fee for SMS messages sent and/or received from toll-free numbers and short codes. The carrier surcharge is calculated based on the destination of the message for sent messages and based on the sender of the message for received messages. Azure Communication Services charges a standard carrier fee per message segment. Carrier fees are subject to change by mobile carriers. Please refer to [SMS pricing](../sms-pricing.md) for more details.
### When will we come to know of changes to these surcharges? As with similar Azure services, customers will be notified at least 30 days prior to the implementation of any price changes. These charges will be reflected on our SMS pricing page along with the effective dates.
communication-services Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/network-requirements.md
Communication Services connections require internet connectivity to specific por
| Category | IP ranges or FQDN | Ports | | :-- | :-- | :-- |
-| Media traffic | [Range of Azure public cloud IP addresses](https://www.microsoft.com/download/confirmation.aspx?id=56519) | UDP 3478 through 3481, TCP ports 443 |
+| Media traffic | Range of Azure public cloud IP addresses: 20.202.0.0/16. This range covers the IP addresses of the Media Processor and the ACS TURN service. | UDP 3478 through 3481, TCP port 443 |
| Signaling, telemetry, registration| *.skype.com, *.microsoft.com, *.azure.net, *.azure.com, *.azureedge.net, *.office.com, *.trouter.io | TCP 443, 80 | ## Network optimization
communication-services Data Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/ui-library-sdk/data-model.md
+
+ Title: Custom Data Model Injection over the UI Library
+
+description: Use Azure Communication Services Mobile UI library to set up Custom Data Model Injection
++++ Last updated : 05/24/2022+
+zone_pivot_groups: acs-plat-web-ios-android
+
+#Customer intent: As a developer, I want to set up the Custom Data Model Injection in my application
++
+# Custom Data Model Injection
+
+Azure Communication Services uses an identity-agnostic model where developers can [bring their own identities](../../concepts/identity-model.md). Contoso can take its own data model and link it to Azure Communication Services identities. A developer's data model for a user most likely includes information such as a display name, profile picture or avatar, and other details, which developers use to power their applications and platforms.
+
+The UI Library makes it simple for developers to inject that user data model into the UI Components. When rendered, the components show the user-provided information rather than the generic information that Azure Communication Services has.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- A deployed Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md).
+- A `User Access Token` to enable the call client. For more information, see [how to get a `User Access Token`](../../quickstarts/access-tokens.md).
+- Optional: Complete the quickstart for [getting started with the UI Library composites](../../quickstarts/ui-library/get-started-composites.md)
++++
+## Next steps
+
+- [Learn more about UI Library](../../quickstarts/ui-library/get-started-composites.md)
communication-services Localization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/ui-library-sdk/localization.md
Title: Localization over the UI Library
-description: Use Azure Communication Services Mobile UI library to setup localization
+description: Use Azure Communication Services Mobile UI library to set up localization
Last updated 04/03/2022
zone_pivot_groups: acs-plat-web-ios-android
-#Customer intent: As a developer, I want to setup the localization of my application
+#Customer intent: As a developer, I want to set up the localization of my application
# Localization
communication-services Theming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/ui-library-sdk/theming.md
+
+ Title: Theming over the UI Library
+
+description: Use Azure Communication Services Mobile UI library to set up Theming
++++ Last updated : 05/24/2022+
+zone_pivot_groups: acs-plat-web-ios-android
+
+#Customer intent: As a developer, I want to set up the Theming of my application
++
+# Theming
+
+The ACS UI Library uses components and icons from [Fluent UI](https://developer.microsoft.com/fluentui), the cross-platform design system used by Microsoft. As a result, the components are built with usability, accessibility, and localization in mind.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- A deployed Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md).
+- A `User Access Token` to enable the call client. For more information, see [how to get a `User Access Token`](../../quickstarts/access-tokens.md).
+- Optional: Complete the quickstart for [getting started with the UI Library composites](../../quickstarts/ui-library/get-started-composites.md)
++++
+## Next steps
+
+- [Learn more about UI Library](../../quickstarts/ui-library/get-started-composites.md)
+- [Learn more about UI Library Design Kit](../../quickstarts/ui-library/get-started-ui-kit.md)
communication-services Add Azure Managed Domains https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/add-azure-managed-domains.md
+
+ Title: How to add Azure Managed Domains to Email Communication Service
+
+description: Learn about adding Azure Managed domains for Email Communication Services.
++++ Last updated : 04/15/2022++++
+# Quickstart: How to add Azure Managed Domains to Email Communication Service
++
+In this quickstart, you'll learn how to provision an Azure Managed Domain in Azure Communication Services to send email.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet/).
+- An Azure Email Communication Services resource created and ready to provision domains. [Get started with creating an Email Communication resource](../../quickstarts/email/create-email-communication-resource.md)
+
+## Provision Azure Managed Domain
+
+1. Go to the overview page of the Email Communication Services resource that you created earlier.
+2. Create the Azure Managed Domain.
+ - (Option 1) Click the **1-click add** button under **Add a free Azure subdomain**. Move to the next step.
+
+ :::image type="content" source="./media/email-add-azure-domain.png" alt-text="Screenshot that highlights the adding a free Azure Managed Domain.":::
+
+ - (Option 2) Click **Provision Domains** on the left navigation panel.
+
+ :::image type="content" source="./media/email-add-azure-domain-navigation.png" alt-text="Screenshot that shows the Provision Domains navigation page.":::
+
+ - Click **Add domain** on the upper navigation bar.
+ - Select **Azure domain** from the dropdown.
+3. Wait for the deployment to complete.
+
+ :::image type="content" source="./media/email-add-azure-domain-progress.png" alt-text="Screenshot that shows the Deployment Progress." lightbox="media/email-add-azure-domain-progress-expanded.png":::
+
+4. After domain creation is completed, you'll see a list view with the created domain.
+
+ :::image type="content" source="./media/email-add-azure-domain-created.png" alt-text="Screenshot that shows the list of provisioned email domains." lightbox="media/email-add-azure-domain-created-expanded.png":::
+
+5. Click the name of the provisioned domain. This will navigate you to the overview page for the domain resource type.
+
+ :::image type="content" source="./media/email-azure-domain-overview.png" alt-text="Screenshot that shows Azure Managed Domain overview page." lightbox="media/email-azure-domain-overview-expanded.png":::
+
+## Sender authentication for Azure Managed Domain
+Azure Communication Services Email automatically configures the required email authentication protocols to set proper authentication for the email, as detailed in [Email authentication best practices](../../concepts/email/email-authentication-best-practice.md).
+
+## Changing MailFrom and FROM display name for Azure Managed Domain
+When an Azure Managed Domain is provisioned to send mail, it has a default MailFrom address of donotreply@xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx.azurecomm.net, and the FROM display name is the same. You'll be able to configure and change the MailFrom address and FROM display name to more user-friendly values.
+
+1. Go to the overview page of the Email Communication Services resource that you created earlier.
+2. Click **Provision Domains** on the left navigation panel. You'll see the list of provisioned domains.
+3. Click the Azure Managed Domain link.
+
+ :::image type="content" source="./media/email-provisioned-domains.png" alt-text="Screenshot that shows Azure Managed Domain link in list of provisioned email domains." lightbox="media/email-provisioned-domains-expanded.png":::
+4. The navigation lands on the Azure Managed Domain overview page, where you'll be able to see the MailFrom and From attributes.
+
+ :::image type="content" source="./media/email-provisioned-domains-overview.png" alt-text="Screenshot that shows the overview page of provisioned email domain." lightbox="media/email-provisioned-domains-overview-expanded.png":::
+
+5. Click the edit link for MailFrom.
+
+ :::image type="content" source="./media/email-domains-mailfrom.png" alt-text="Screenshot that explains how to change Mail From address and display name for an email address.":::
+
+6. You'll be able to modify the Display Name and MailFrom address.
+
+ :::image type="content" source="./media/email-domains-mailfrom-change.png" alt-text="Screenshot that shows the submit button to save Mail From address and display name changes.":::
+
+7. Click **Save**. You'll see the updated values in the overview page.
+
+ :::image type="content" source="./media/email-domains-overview-updated.png" alt-text="Screenshot that shows Azure Managed Domain overview page with updated values." lightbox="media/email-provisioned-domains-overview-expanded.png":::
+
+**Your email domain is now ready to send emails.**
+
+## Next steps
+
+* [Get started with creating and managing Email Communication Service in Azure Communication Services](../../quickstarts/email/create-email-communication-resource.md)
+
+* [Get started by connecting Email Communication Service with an Azure Communication Services resource](../../quickstarts/email/connect-email-communication-resource.md)
+
+The following documents may be interesting to you:
+
+- Familiarize yourself with the [Email client library](../../concepts/email/sdk-features.md)
+- How to send emails with custom verified domains? [Add custom domains](../../quickstarts/email/add-custom-verified-domains.md)
communication-services Add Custom Verified Domains https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/add-custom-verified-domains.md
+
+ Title: How to add custom verified domains to Email Communication Service
+
+description: Learn about adding Custom domains for Email Communication Services.
++++ Last updated : 04/15/2022++++
+# Quickstart: How to add custom verified domains to Email Communication Service
++
+In this quickstart, you'll learn how to add and verify a custom domain in Azure Communication Services to send email.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet/).
+- An Azure Email Communication Services resource created and ready to provision domains. [Get started with creating an Email Communication resource](../../quickstarts/email/create-email-communication-resource.md)
+
+## Provision custom domain
+To provision a custom domain, you need to:
+
+* Verify the custom domain ownership by adding a TXT record in your DNS.
+* Configure the sender authentication by adding SPF and DKIM records.
+
+### Verify custom domain
+
+1. Go to the overview page of the Email Communication Services resource that you created earlier.
+2. Set up a custom domain.
+ - (Option 1) Click the **Setup** button under **Setup a custom domain**. Move to the next step.
++
+ :::image type="content" source="./media/email-domains-custom.png" alt-text="Screenshot that shows how to setup a custom domain.":::
+
+ - (Option 2) Click **Provision Domains** on the left navigation panel.
+
+ :::image type="content" source="./media/email-domains-custom-navigation.png" alt-text="Screenshot that shows the navigation link to Provision Domains page.":::
+
+ - Click **Add domain** on the upper navigation bar.
+ - Select **Custom domain** from the dropdown.
+3. You'll be navigated to "Add a custom Domain".
+4. Enter your "Domain Name" and re-enter the domain name.
+5. Click **Confirm**.
+
+ :::image type="content" source="./media/email-domains-custom-add.png" alt-text="Screenshot that shows where to enter the custom domain value.":::
+6. Ensure that the domain name isn't misspelled, or click edit to correct the domain name, and then confirm.
+7. Click **Add**.
+
+ :::image type="content" source="./media/email-domains-custom-add-confirm.png" alt-text="Screenshot that shows how to add a custom domain of your choice.":::
+
+8. This creates a custom domain configuration for your domain.
+
+ :::image type="content" source="./media/email-domains-custom-add-progress.png" alt-text="Screenshot that shows the progress of custom domain Deployment.":::
+
+9. You can verify the ownership of the domain by clicking **Verify Domain**.
+
+ :::image type="content" source="./media/email-domains-custom-added.png" alt-text="Screenshot that shows that the custom domain is successfully added for verification.":::
+
+10. If you would like to resume the verification later, you can click **Close** and resume the verification from **Provision Domains** by clicking **Configure**.
+
+ :::image type="content" source="./media/email-domains-custom-configure.png" alt-text="Screenshot that shows the added domain ready for verification in the list of provisioned domains." lightbox="media/email-domains-custom-configure-expanded.png":::
+11. Clicking **Verify Domain** or **Configure** navigates to the "Verify Domain via TXT record" steps to follow.
+
+ :::image type="content" source="./media/email-domains-custom-verify.png" alt-text="Screenshot that shows the Configure link that you need to click to verify domain ownership." lightbox="media/email-domains-custom-verify-expanded.png":::
+
+12. You need to add the above TXT record to your domain's registrar or DNS hosting provider. Click **Next** once you've completed this step.
+
+13. Verify that the TXT record is created successfully in your DNS, and click **Done**.
+14. DNS changes can take 15 to 30 minutes to propagate. Click **Close**.
+
+ :::image type="content" source="./media/email-domains-custom-verify-progress.png" alt-text="Screenshot that shows the domain verification is in progress.":::
+15. Once your domain is verified, you can add your SPF and DKIM records to authenticate your domains.
+
+   :::image type="content" source="./media/email-domains-custom-verified.png" alt-text="Screenshot that shows that the custom domain is verified." lightbox="media/email-domains-custom-verified-expanded.png":::
++
+### Configure sender authentication for custom domain
+1. Navigate to **Provision Domains** and confirm that **Domain Status** is in the "Verified" state.
+2. You can add SPF and DKIM by clicking **Configure**. You need to add the following TXT and CNAME records to your domain's registrar or DNS hosting provider. Click **Next** once you've completed this step.
+
+ :::image type="content" source="./media/email-domains-custom-spf.png" alt-text="Screenshot that shows the D N S records that you need to add for S P F validation for your verified domains.":::
+
+ :::image type="content" source="./media/email-domains-custom-dkim-1.png" alt-text="Screenshot that shows the D N S records that you need to add for D K I M.":::
+
+ :::image type="content" source="./media/email-domains-custom-dkim-2.png" alt-text="Screenshot that shows the D N S records that you need to add for additional D K I M records.":::
+
+3. Verify that the TXT and CNAME records were created successfully in your DNS, and click **Done**.
+
+ :::image type="content" source="./media/email-domains-custom-spf-dkim-verify.png" alt-text="Screenshot that shows the DNS records that you need to add for S P F and D K I M.":::
+
+4. DNS changes can take 15 to 30 minutes to propagate. Click **Close**.
+
+ :::image type="content" source="./media/email-domains-custom-spf-dkim-verify-progress.png" alt-text="Screenshot that shows that the sender authentication verification is in progress.":::
+
+5. Wait for the verification to complete. You can check the verification status from the **Provision Domains** page.
+
+ :::image type="content" source="./media/email-domains-custom-verification-status.png" alt-text="Screenshot that shows that the sender authentication verification is done." lightbox="media/email-domains-custom-verification-status-expanded.png":::
+
+6. Once your sender authentication configurations are successfully verified, your email domain is ready to send email using your custom domain.
+
+ :::image type="content" source="./media/email-domains-custom-ready.png" alt-text="Screenshot that shows that your verified custom domain is ready to send Email." lightbox="media/email-domains-custom-ready-expanded.png":::
+
+## Changing MailFrom and FROM display name for custom domains
+
+When an Azure Managed Domain is provisioned to send mail, it has a default MailFrom address of donotreply@xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx.azurecomm.net, and the FROM display name is the same. You can configure and change the MailFrom address and FROM display name to more user-friendly values.
+
+1. Go to the overview page of the Email Communications Service resource that you created earlier.
+2. Click **Provision Domains** on the left navigation panel. You'll see a list of provisioned domains.
+3. Click the custom domain name that you would like to update.
+
+ :::image type="content" source="./media/email-domains-custom-provision-domains.png" alt-text="Screenshot that shows how to get to overview page for verified Custom Domain from provisioned domains list.":::
+
+4. You'll land on the domain overview page, where you can see the MailFrom and FROM attributes.
+
+ :::image type="content" source="./media/email-domains-custom-overview.png" alt-text="Screenshot that shows the overview page of the verified custom domain." lightbox="media/email-domains-custom-overview-expanded.png":::
+
+5. Click the edit link next to MailFrom.
+
+ :::image type="content" source="./media/email-domains-custom-mailfrom.png" alt-text="Screenshot that shows how to edit Mail From and display name for custom domain email address.":::
+
+6. You can modify the display name and MailFrom address.
+
+   :::image type="content" source="./media/email-domains-custom-mailfrom-change.png" alt-text="Screenshot that shows how to modify the Mail From and display name values.":::
+
+7. Click **Save**. You'll see the updated values in the overview page.
+
+   :::image type="content" source="./media/email-domains-overview-updated.png" alt-text="Screenshot that shows how to save the modified values of Mail From and display name." lightbox="media/email-domains-custom-overview-expanded.png":::
+
+**Your email domain is now ready to send emails.**
+
+## Next steps
+
+* [Get started with creating and managing an Email Communication Services resource in Azure Communication Services](../../quickstarts/email/create-email-communication-resource.md)
+
+* [Get started by connecting an Email Communication Services resource with an Azure Communication Services resource](../../quickstarts/email/connect-email-communication-resource.md)
++
+The following documents may be interesting to you:
+
+- Familiarize yourself with the [Email client library](../../concepts/email/sdk-features.md)
communication-services Connect Email Communication Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/connect-email-communication-resource.md
+
+ Title: How to connect a verified email domain with Azure Communication Service resource
+
+description: Learn about how to connect verified email domains with Azure Communication Services Resource.
++++ Last updated : 04/15/2022++++
+# Quickstart: How to connect a verified email domain with Azure Communication Service resource
++
+In this quickstart, you'll learn how to connect a verified domain in Azure Communication Services to send email.
+
+## Connect an email domain to a Communication Service Resource
+
+1. [Create a Communication Services resource](../create-communication-resource.md) to connect to a verified domain.
+2. In the Azure Communication Services resource overview page, click **Domains** in the left navigation panel under **Email**.
+
+ :::image type="content" source="./media/email-domains.png" alt-text="Screenshot that shows the left navigation panel for linking Email Domains." lightbox="media/email-domains-expanded.png":::
+
+3. Select one of the options below:
+ - Click **Connect domain** in the upper navigation bar.
+ - Click **Connect domain** in the splash screen.
+
+ :::image type="content" source="./media/email-domains-connect.png" alt-text="Screenshot that shows how to connect one of your verified email domains." lightbox="media/email-domains-connect-expanded.png":::
+4. Select one of the verified domains by filtering on:
+ - Subscription
+ - Resource Group
+ - Email Service
+ - Verified Domain
+
+ :::image type="content" source="./media/email-domains-connect-select.png" alt-text="Screenshot that shows how to filter and select one of the verified email domains to connect." lightbox="media/email-domains-connect-select-expanded.png":::
+5. Click **Connect**.
+
+ :::image type="content" source="./media/email-domains-connected.png" alt-text="Screenshot that shows one of the verified email domain is now connected." lightbox="media/email-domains-connected-expanded.png":::
+
+## Disconnect an email domain from the Communication Service Resource
+
+1. In the Azure Communication Services resource overview page, click **Domains** in the left navigation panel under **Email**.
+2. Select the connected domain, click the **...** menu, and then click **Disconnect**.
+
+ :::image type="content" source="./media/email-domains-connect-disconnect.png" alt-text="Screenshot that shows how to disconnect the connected domain." lightbox="media/email-domains-connect-disconnect-expanded.png":::
++
+## Next steps
+
+* [How to send an Email](../../quickstarts/email/send-email.md)
+
+* [What is Email Communication Resource for Azure Communication Service](../../concepts/email/prepare-email-communication-resource.md)
++
+The following documents may be interesting to you:
+
+- Familiarize yourself with the [Email client library](../../concepts/email/sdk-features.md)
communication-services Create Email Communication Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/create-email-communication-resource.md
+
+ Title: Quickstart - Create and manage Email Communication Service resource in Azure Communication Service
+
+description: In this quickstart, you'll learn how to create and manage your first Azure Email Communication Services resource.
++++ Last updated : 04/15/2022++++
+# Quickstart - Create and manage Email Communication Service resource in Azure Communication Service
++
+
+Get started with Email by provisioning your first Email Communication Services resource. Communication services resources can be provisioned through the [Azure portal](https://portal.azure.com) or with the .NET management client library. The management client library and the Azure portal allow you to create, configure, update and delete your resources and interface with [Azure Resource Manager](../../../azure-resource-manager/management/overview.md), Azure's deployment and management service. All functionality available in the client libraries is available in the Azure portal.
+
+## Create the Email Communications Service resource using portal
+
+1. Navigate to the [Azure portal](https://portal.azure.com/) to create a new resource.
+2. Search for **Email Communication Services** and press Enter. Select **Email Communication Services** and press **Create**.
+
+   :::image type="content" source="./media/email-communication-search.png" alt-text="Screenshot that shows how to search for Email Communication Services in the marketplace.":::
+
+ :::image type="content" source="./media/email-communication-create.png" alt-text="Screenshot that shows Create link to create Email Communication Service.":::
+
+3. Complete the required information on the basics tab:
+ - Select an existing Azure subscription.
+ - Select an existing resource group, or create a new one by clicking the **Create new** link.
+ - Provide a valid name for the resource.
+ - Select **United States** as the data location.
+ - If you would like to add tags, click **Next: Tags**
+ - Add any name/value pairs.
+ - Click **Next: Review + create**.
+
+   :::image type="content" source="./media/email-communication-create-review.png" alt-text="Screenshot that shows the summary used to review and create the Email Communication Services resource.":::
+
+4. Wait for the validation to pass, and then click **Create**.
+5. Wait for the deployment to complete, and then click **Go to resource** to land on the Email Communication Services overview page.
+
+ :::image type="content" source="./media/email-communication-overview.png" alt-text="Screenshot that shows the overview of Email Communication Service resource.":::
+
+## Next steps
+
+* [Email domains and sender authentication for Azure Communication Services](../../concepts/email/email-domain-and-sender-authentication.md)
+
+* [Get started by connecting an Email Communication Services resource with an Azure Communication Services resource](../../quickstarts/email/connect-email-communication-resource.md)
+
+The following documents may be interesting to you:
+
+- Familiarize yourself with the [Email client library](../../concepts/email/sdk-features.md)
+- How to send emails with custom verified domains? [Add custom domains](../../quickstarts/email/add-custom-verified-domains.md)
+- How to send emails with Azure Managed Domains? [Add Azure Managed domains](../../quickstarts/email/add-azure-managed-domains.md)
communication-services Send Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/send-email.md
+
+ Title: Quickstart - How to send an email using Azure Communication Service
+
+description: Learn how to send an email message using Azure Communication Services.
++++ Last updated : 04/15/2022+++
+zone_pivot_groups: acs-js-csharp
++
+# Quickstart: How to send an email using Azure Communication Service
++
+In this quickstart, you'll learn how to send email using our Email SDKs.
+++
+## Troubleshooting
+
+To troubleshoot issues related to email delivery, you can request the delivery status of a sent message to capture delivery details, as shown in the sketch below.
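+
+As a rough illustration, the sketch below sends a message and then polls its delivery status with the JavaScript Email SDK. This is a minimal sketch assuming the preview `EmailClient` surface (`send` and `getSendStatus`); the exact message shape may differ between SDK versions, and the connection-string variable, sender address, and recipient are placeholders.
+
+```javascript
+import { EmailClient } from "@azure/communication-email";
+
+// Assumption: this environment variable holds your resource connection string.
+const connectionString = process.env["COMMUNICATION_SERVICES_CONNECTION_STRING"];
+const emailClient = new EmailClient(connectionString);
+
+async function sendAndTrack() {
+  // The message shape below follows the preview Email SDK and may differ in later versions.
+  const message = {
+    sender: "donotreply@<your-verified-domain>",
+    content: {
+      subject: "Welcome to Azure Communication Services Email",
+      plainText: "This email was sent with the Email SDK."
+    },
+    recipients: {
+      to: [{ email: "customer@contoso.com" }]
+    }
+  };
+
+  const response = await emailClient.send(message);
+
+  // Poll the delivery status using the returned message ID to capture delivery details.
+  const status = await emailClient.getSendStatus(response.messageId);
+  console.log(`Message ${response.messageId} status: ${status.status}`);
+}
+
+sendAndTrack().catch(console.error);
+```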
+
+## Clean up resources
+
+If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../create-communication-resource.md#clean-up-resources).
+
+## Next steps
+
+In this quickstart, you learned how to send email using Azure Communication Services.
+
+You may also want to:
+
+ - Learn about [Email concepts](../../concepts/email/email-overview.md)
+ - Familiarize yourself with [email client library](../../concepts/email/sdk-features.md)
communication-services Events Playbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/events-playbook.md
+
+ Title: Build a custom event management platform with Microsoft Teams, Graph and Azure Communication Services
+
+description: Learn how to use Microsoft Teams, Graph and Azure Communication Services to build a custom event management platform.
+++++ Last updated : 03/31/2022+++++
+# Build a custom event management platform with Microsoft Teams, Graph and Azure Communication Services
+
+The goal of this document is to reduce the time it takes for event management platforms to apply the power of Microsoft Teams webinars through integration with Graph APIs and the Azure Communication Services UI Library. The target audience is developers and decision makers. To achieve that goal, this document provides two things: 1) an aid to help event management platforms quickly decide what level of integration is right for them, and 2) a step-by-step, end-to-end quickstart to speed up implementation.
+
+## What are virtual events and event management platforms?
+
+Microsoft empowers event platforms to integrate event capabilities using [Microsoft Teams](https://docs.microsoft.com/microsoftteams/quick-start-meetings-live-events), [Graph](https://docs.microsoft.com/graph/api/application-post-onlinemeetings?view=graph-rest-beta&tabs=http) and [Azure Communication Services](https://docs.microsoft.com/azure/communication-services/overview). Virtual events are a communication modality where event organizers schedule and configure a virtual environment for event presenters and participants to engage with content through voice, video, and chat. Event management platforms enable users to configure events, and attendees to participate in those events, within their platform, applying in-platform capabilities and gamification. Learn more about [Teams Meetings, Webinars and Live Events](https://docs.microsoft.com/microsoftteams/quick-start-meetings-live-events), which are used throughout this article to enable virtual event scenarios.
+
+## What are the building blocks of an event management platform?
+
+Event platforms require three core building blocks to deliver a virtual event experience.
+
+### 1. Event Scheduling and Management
+
+To get started, event organizers must schedule and configure the event. This process creates the virtual container that event attendees and presenters will enter to interact. As part of configuration, organizers might choose to add registration requirements for the event. Microsoft provides two patterns for organizers to create events:
+
+- Teams Client (Web or Desktop): Organizers can directly create events using their Teams client where they can choose a time and place, configure registration, and send to a list of attendees.
+
+- Microsoft Graph: Programmatically, event platforms can schedule and configure a Teams event on behalf of a user by using their Microsoft 365 license.
+
+### 2. Attendee experience
+
+Event attendees are presented with an experience that enables them to attend, participate, and engage with an event's content. This experience might include capabilities like watching content, sharing their camera stream, asking questions, responding to polls, and more. Microsoft provides two options for attendees to consume events powered by Teams and Azure Communication Services:
+
+- Teams Client (Web or Desktop): Attendees can directly join events using a Teams Client by using a provided join link. They get access to the full Teams experience.
+
+- Azure Communication Services: Attendees can join events through a custom client embedded in the event platform and built with Azure Communication Services.
+
+### 3. Host & Organizer experience
+
+Event hosts and organizers require the ability to present content, manage attendees (mute, change roles, etc.) and manage the event (start, end, etc.).
+
+- Teams Client (Web or Desktop): Presenters can join using the fully fledged Teams client for web or mobile. The Teams client provides presenters a full set of capabilities to deliver their content. Learn more about [presenter capabilities for Teams](https://support.microsoft.com/office/present-in-a-live-event-in-teams-d58fc9db-ff5b-4633-afb3-b4b2ddef6c0a).
+
+## Building a custom solution for event management with Azure Communication Services and Microsoft Graph
+
+Throughout the rest of this tutorial, we will focus on how to use Azure Communication Services and Microsoft Graph to build a custom event management platform. We will be using the sample architecture below. Based on that architecture, we will focus on setting up scheduling and registration flows and on embedding the attendee experience right on the event platform to join the event.
++
+## Leveraging Microsoft Graph to schedule events and register attendees
+
+Microsoft Graph enables event management platforms to empower organizers to schedule and manage their events directly through the event management platform. For attendees, event management platforms can build custom registration flows right on their platform that registers the attendee for the event and generates unique credentials for them to join the Teams hosted event.
+
+>[!NOTE]
+>Each required Graph API has different required scopes; ensure that your application has the correct scopes to access the data.
+
+### Scheduling registration-enabled events with Microsoft Graph
+
+1. Authorize the application to use Graph APIs on behalf of a service account. This authorization is required so the application can use the service account's credentials to interact with your tenant to schedule events and register attendees.
+
+    1. Create an account that will own the meetings and is branded appropriately. This is the account that will create the events and receive notifications for them. We recommend not using a personal production account, given the overhead it might incur in the form of reminders.
+
+    1. As part of the application setup, the service account is used to sign in to the solution once. With this permission, the application can retrieve and store an access token on behalf of the service account that will own the meetings. Your application will need to store the tokens generated from the sign-in in a secure location such as a key vault. The application will need to store both the access token and the refresh token. Learn more about [auth tokens](https://docs.microsoft.com/azure/active-directory/develop/access-tokens) and [refresh tokens](https://docs.microsoft.com/azure/active-directory/develop/refresh-tokens).
+
+    1. The application will require "on behalf of" permissions with the [offline scope](https://docs.microsoft.com/azure/active-directory/develop/v2-permissions-and-consent#offline_access) to act on behalf of the service account for the purpose of creating meetings. Individual Graph APIs require different scopes; learn more in the links detailed below as we introduce the required APIs.
+
+    1. Refresh tokens can be revoked in the event of a breach or account termination.
+
+ >[!NOTE]
+ >Authorization is required by both developers for testing and organizers who will be using your event platform to set up their events.
+
+2. The organizer signs in to the Contoso platform to create an event and generate a registration URL. To enable these capabilities, developers should use:
+
+    1. The [Create Calendar Event API](https://docs.microsoft.com/graph/api/user-post-events?view=graph-rest-1.0&tabs=http) to POST the new event to be created. The Event object returned will contain the join URL required for the next step. You need to set the following parameters: `isOnlineMeeting: true` and `onlineMeetingProvider: "teamsForBusiness"`. Set a time zone for the event using the `Prefer` header (see the sketch after this list).
+
+ 1. Next, use the [Create Online Meeting API](https://docs.microsoft.com/graph/api/application-post-onlinemeetings?view=graph-rest-beta&tabs=http) to `GET` the online meeting information using the join URL generated from the step above. The `OnlineMeeting` object will contain the `meetingId` required for the registration steps.
+
+    1. By using these APIs, developers are creating a calendar event to show up in the Organizer's calendar and the Teams online meeting where attendees will join.
+
+>[!NOTE]
+>There is a known issue with double calendar entries for organizers when using the Calendar and Online Meeting APIs.
+
+3. To enable registration for an event, Contoso can use the [External Meeting Registration API](https://docs.microsoft.com/graph/api/resources/externalmeetingregistration?view=graph-rest-beta) to POST the registration details. The API requires Contoso to pass in the `meetingId` of the `OnlineMeeting` created above. Registration is optional; you can set options for who can register.
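+
+As referenced in step 2.1 above, the following is a minimal sketch of creating a registration-ready calendar event with the Create Calendar Event API. It assumes you already hold a delegated Graph access token for the service account (see the authorization steps above); the subject, times, time zone, and token value are illustrative placeholders.
+
+```javascript
+// Placeholder: obtain this token through your app's delegated auth flow for the service account.
+const graphToken = "<GRAPH_ACCESS_TOKEN>";
+
+async function createWebinarEvent() {
+  const response = await fetch("https://graph.microsoft.com/v1.0/me/events", {
+    method: "POST",
+    headers: {
+      "Authorization": `Bearer ${graphToken}`,
+      "Content-Type": "application/json",
+      // Return date/time values in the organizer's preferred time zone.
+      "Prefer": 'outlook.timezone="Pacific Standard Time"'
+    },
+    body: JSON.stringify({
+      subject: "Contoso product launch",
+      start: { dateTime: "2022-06-15T17:00:00", timeZone: "Pacific Standard Time" },
+      end: { dateTime: "2022-06-15T18:00:00", timeZone: "Pacific Standard Time" },
+      isOnlineMeeting: true,
+      onlineMeetingProvider: "teamsForBusiness"
+    })
+  });
+
+  const event = await response.json();
+  // The join URL is used to look up the OnlineMeeting (and its meetingId) in the next step.
+  return event.onlineMeeting.joinUrl;
+}
+```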
+
+### Register attendees with Microsoft Graph
+
+Event management platforms can use a custom registration flow to register attendees. This flow is powered by the [External Meeting Registrant API](https://docs.microsoft.com/graph/api/externalmeetingregistrant-post?view=graph-rest-beta&tabs=http). By using the API, Contoso will receive a unique `Teams Join URL` for each attendee. This URL will be used as part of the attendee experience, either through Teams or Azure Communication Services, to have the attendee join the meeting.
+
+### Communicate with your attendees using Azure Communication Services
+
+Through Azure Communication Services, developers can use SMS and email capabilities to send reminders to attendees for the events they have registered for. Communication can also include event confirmations as well as information for joining and participating. A minimal sketch of sending an SMS reminder follows the list below.
+- [SMS capabilities](https://docs.microsoft.com/azure/communication-services/quickstarts/sms/send) enable you to send text messages to your attendees.
+- [Email capabilities](https://docs.microsoft.com/azure/communication-services/quickstarts/email/send-email) support direct communication to your attendees using custom domains.
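+
+As referenced above, here is a minimal sketch of sending an SMS reminder with the JavaScript SMS SDK. The connection string, phone numbers, and message text are placeholders; sending requires an SMS-enabled phone number provisioned in your Azure Communication Services resource.
+
+```javascript
+import { SmsClient } from "@azure/communication-sms";
+
+// Placeholder: your Azure Communication Services connection string.
+const smsClient = new SmsClient("<ACS_CONNECTION_STRING>");
+
+async function sendReminder() {
+  const results = await smsClient.send({
+    from: "+18005551234",   // An SMS-enabled number acquired in your ACS resource (placeholder).
+    to: ["+14255550123"],   // The registered attendee's phone number (placeholder).
+    message: "Reminder: your Contoso event starts in 1 hour."
+  });
+
+  for (const result of results) {
+    console.log(`Send to ${result.to}: ${result.successful ? "succeeded" : result.errorMessage}`);
+  }
+}
+
+sendReminder().catch(console.error);
+```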
+
+### Leverage Azure Communication Services to build a custom attendee experience
+
+>[!NOTE]
+> There are limitations when using Azure Communication Services as part of a Teams Webinar experience. Please visit our [documentation](https://docs.microsoft.com/azure/communication-services/concepts/join-teams-meeting#limitations-and-known-issues) for more details.
+
+Attendee experience can be directly embedded into an application or platform using [Azure Communication Services](https://docs.microsoft.com/azure/communication-services/overview) so that your attendees never need to leave your platform. It provides low-level calling and chat SDKs which support [interoperability with Teams Events](https://docs.microsoft.com/azure/communication-services/concepts/teams-interop), as well as a turn-key UI Library which can be used to reduce development time and easily embed communications. Azure Communication Services enables developers to have flexibility with the type of solution they need. Review [limitations](https://docs.microsoft.com/azure/communication-services/concepts/join-teams-meeting#limitations-and-known-issues) of using Azure Communication Services for webinar scenarios.
+
+1. To start, developers can leverage Microsoft Graph APIs to retrieve the join URL. This URL is provided uniquely per attendee during [registration](https://docs.microsoft.com/graph/api/externalmeetingregistrant-post?view=graph-rest-beta&tabs=http). Alternatively, it can be [requested for a given meeting](https://docs.microsoft.com/graph/api/onlinemeeting-get?view=graph-rest-beta&tabs=http).
+
+2. Before developers dive into using [Azure Communication Services](https://docs.microsoft.com/azure/communication-services/overview), they must [create a resource](https://docs.microsoft.com/azure/communication-services/quickstarts/create-communication-resource?tabs=windows&pivots=platform-azp).
+
+3. Once a resource is created, developers must [generate access tokens](https://docs.microsoft.com/azure/communication-services/quickstarts/access-tokens?pivots=programming-language-javascript) for attendees to access Azure Communication Services. We recommend using a [trusted service architecture](https://docs.microsoft.com/azure/communication-services/concepts/client-and-server-architecture). A minimal token sketch follows the table below.
+
+4. Developers can leverage [headless SDKs](https://docs.microsoft.com/azure/communication-services/concepts/teams-interop) or [UI Library](https://azure.github.io/communication-ui-library/) using the join link URL to join the Teams meeting through [Teams Interoperability](https://docs.microsoft.com/azure/communication-services/concepts/teams-interop). Details below:
+
+|Headless SDKs | UI Library |
+|---|---|
+| Developers can leverage the [calling](https://docs.microsoft.com/azure/communication-services/quickstarts/voice-video-calling/get-started-teams-interop?pivots=platform-javascript) and [chat](https://docs.microsoft.com/azure/communication-services/quickstarts/chat/meeting-interop?pivots=platform-javascript) SDKs to join a Teams meeting with your custom client | Developers can choose between the [call + chat](https://azure.github.io/communication-ui-library/?path=/docs/composites-meeting-basicexample--basic-example) or pure [call](https://azure.github.io/communication-ui-library/?path=/docs/composites-call-basicexample--basic-example) and [chat](https://azure.github.io/communication-ui-library/?path=/docs/composites-chat-basicexample--basic-example) composites to build their experience. Alternatively, developers can leverage [composable components](https://azure.github.io/communication-ui-library/?path=/docs/quickstarts-uicomponents--page) to build a custom Teams interop experience.|
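+
+As referenced in step 3 above, here is a minimal sketch of issuing attendee access tokens from a trusted service with the JavaScript Identity SDK; the connection string is a placeholder, and in a real platform you would persist the identity alongside the attendee's registration.
+
+```javascript
+import { CommunicationIdentityClient } from "@azure/communication-identity";
+
+// Placeholder: your Azure Communication Services connection string. Run this server-side only.
+const identityClient = new CommunicationIdentityClient("<ACS_CONNECTION_STRING>");
+
+async function issueAttendeeToken() {
+  // Create an ACS identity for the attendee (or look up one you stored at registration time).
+  const user = await identityClient.createUser();
+
+  // Issue a short-lived token scoped to calling and chat, which the attendee's client
+  // uses to join the Teams meeting through interoperability.
+  const { token, expiresOn } = await identityClient.getToken(user, ["voip", "chat"]);
+
+  return { userId: user.communicationUserId, token, expiresOn };
+}
+```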
++
+>[!NOTE]
+>Azure Communication Services is a consumption-based service billed through Azure. For more information on pricing visit our resources.
++
communication-services File Sharing Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/file-sharing-tutorial.md
+
+ Title: Enable file sharing using UI Library and Azure Blob Storage
+
+description: Learn how to use Azure Communication Services with the UI Library to enable file sharing through chat leveraging Azure Blob Storage.
+++++ Last updated : 04/04/2022+++++
+# Enable file sharing using UI Library and Azure Blob Storage
+
+In this tutorial, we'll be configuring the Azure Communication Services UI Library Chat Composite to enable file sharing. The UI Library Chat Composite provides a set of rich components and UI controls that can be used to enable file sharing. We will be leveraging Azure Blob Storage to enable the storage of the files that are shared through the chat thread.
+
+>[!IMPORTANT]
+>Azure Communication Services doesn't provide a file storage service. You will need to use your own file storage service for sharing files. For the purpose of this tutorial, we will be using Azure Blob Storage.
+
+## Download code
+
+Access the full code for this tutorial on [GitHub](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/tree/main/ui-library-filesharing-chat-composite). If you want to leverage file sharing using UI Components, reference [this sample](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/tree/main/ui-library-filesharing-ui-components).
+
+## Prerequisites
+
+- An Azure account with an active subscription. For details, see [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- [Visual Studio Code](https://code.visualstudio.com/) on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms).
+- [Node.js](https://nodejs.org/), Active LTS and Maintenance LTS versions (10.14.1 recommended). Use the `node --version` command to check your version.
+- An active Communication Services resource and connection string. [Create a Communication Services resource](../quickstarts/create-communication-resource.md).
+
+This tutorial assumes that you already know how to set up and run a Chat Composite. You can follow the [Chat Composite tutorial](https://azure.github.io/communication-ui-library/?path=/docs/quickstarts-composites--page) to learn how to set up and run a Chat Composite.
+
+## Overview
+
+The UI Library Chat Composite supports file sharing by enabling developers to pass the URL to a hosted file that is sent through the Azure Communication Services chat service. The UI Library renders the attached file and supports multiple extensions to configure the look and feel of the file sent. More specifically, it supports the following features:
+
+1. Attach file button for picking files through the OS File Picker
+2. Configure allowed file extensions.
+3. Enable/disable multiple uploads.
+4. File Icons for a wide variety of file types.
+5. File upload/download cards with progress indicators.
+6. Ability to dynamically validate each file upload and display errors on the UI.
+7. Ability to cancel an upload and remove an uploaded file before it is sent.
+8. View uploaded files in the MessageThread and download them. Asynchronous downloads are supported.
+
+The diagram below shows a typical flow of a file sharing scenario for both upload and download. The section marked as `Client Managed` shows the building blocks that need to be implemented by developers.
+
+![Filesharing typical flow](./media/filesharing-typical-flow.png "Diagram that shows the typical file sharing flow.")
+
+## Setup File Storage using Azure Blob
+
+You can follow the tutorial [Upload file to Azure Blob Storage with an Azure Function](https://docs.microsoft.com/azure/developer/javascript/how-to/with-web-app/azure-function-file-upload) to write the backend code required for file sharing.
+
+Once implemented, you can call this Azure Function inside the `uploadHandler` function to upload files to Azure Blob Storage. For the remainder of the tutorial, we will assume you have generated the function using the tutorial for Azure Blob Storage linked above.
+
+The UI Library requires a React environment to be set up, which we will do next. If you already have a React app, you can skip this section.
+
+### Set Up React App
+
+We'll use the create-react-app template for this quickstart. For more information, see: [Get Started with React](https://reactjs.org/docs/create-a-new-react-app.html)
+
+```bash
+
+npx create-react-app ui-library-quickstart-composites --template typescript
+
+cd ui-library-quickstart-composites
+
+```
+
+At the end of this process, you should have a full application inside of the folder `ui-library-quickstart-composites`.
+For this quickstart, we'll be modifying files inside of the `src` folder.
+
+### Install the Package
+
+Use the `npm install` command to install the Azure Communication Services UI Library for JavaScript.
+
+```bash
+
+npm install @azure/communication-react
+
+```
+
+`@azure/communication-react` specifies the core Azure Communication Services libraries as `peerDependencies` so that
+you can consistently use the API from the core libraries in your application. You need to install those libraries as well:
+
+```bash
+
+npm install @azure/communication-calling@1.4.4
+npm install @azure/communication-chat@1.2.0
+
+```
+
+### Run Create React App
+
+Let's test the Create React App installation by running:
+
+```bash
+
+npm run start
+
+```
+
+## Configuring Chat Composite to enable File Sharing
+
+You will need to replace the placeholder values for the common variables required to initialize the chat composite.
+
+`App.tsx`
+
+```javascript
+import { AzureCommunicationTokenCredential, CommunicationUserIdentifier } from '@azure/communication-common';
+import {
+  ChatComposite,
+  FileDownloadHandler,
+  FileUploadHandler,
+  FileUploadManager,
+  fromFlatCommunicationIdentifier,
+  useAzureCommunicationChatAdapter
+} from '@azure/communication-react';
+import { initializeFileTypeIcons } from '@fluentui/react-file-type-icons';
+import React, { useMemo } from 'react';
+
+// A simple full-height container for the composite.
+const containerStyle = { height: '100vh' };
+
+initializeFileTypeIcons();
+
+function App(): JSX.Element {
+ // Common variables
+ const endpointUrl = 'INSERT_ENDPOINT_URL';
+ const userId = ' INSERT_USER_ID';
+ const displayName = 'INSERT_DISPLAY_NAME';
+ const token = 'INSERT_ACCESS_TOKEN';
+ const threadId = 'INSERT_THREAD_ID';
+
+  // We can't initialize the Chat adapter without a well-formed token.
+ const credential = useMemo(() => {
+ try {
+ return new AzureCommunicationTokenCredential(token);
+ } catch {
+ console.error('Failed to construct token credential');
+ return undefined;
+ }
+ }, [token]);
+
+ // Memoize arguments to `useAzureCommunicationChatAdapter` so that
+ // a new adapter is only created when an argument changes.
+ const chatAdapterArgs = useMemo(
+ () => ({
+ endpoint: endpointUrl,
+ userId: fromFlatCommunicationIdentifier(userId) as CommunicationUserIdentifier,
+ displayName,
+ credential,
+ threadId
+ }),
+ [userId, displayName, credential, threadId]
+ );
+ const chatAdapter = useAzureCommunicationChatAdapter(chatAdapterArgs);
+
+ if (!!chatAdapter) {
+ return (
+ <>
+ <div style={containerStyle}>
+ <ChatComposite
+ adapter={chatAdapter}
+ options={{
+ fileSharing: {
+ uploadHandler: fileUploadHandler,
+                // If `fileDownloadHandler` is not provided, the file URL is opened in a new tab.
+ downloadHandler: fileDownloadHandler,
+ accept: 'image/png, image/jpeg, text/plain, .docx',
+ multiple: true
+ }
+ }} />
+ </div>
+ </>
+ );
+ }
+ if (credential === undefined) {
+ return <h3>Failed to construct credential. Provided token is malformed.</h3>;
+ }
+ return <h3>Initializing...</h3>;
+}
+
+const fileUploadHandler: FileUploadHandler = async (userId, fileUploads) => {
+ for (const fileUpload of fileUploads) {
+ try {
+ const { name, url, extension } = await uploadFileToAzureBlob(fileUpload);
+ fileUpload.notifyUploadCompleted({ name, extension, url });
+ } catch (error) {
+ if (error instanceof Error) {
+ fileUpload.notifyUploadFailed(error.message);
+ }
+ }
+ }
+}
+
+const uploadFileToAzureBlob = async (fileUpload: FileUploadManager) => {
+  // You need to handle the file upload here and upload it to Azure Blob Storage.
+  // Below you can find snippets for how to configure the upload.
+  // Optionally, you can also update the file upload progress.
+  fileUpload.notifyUploadProgressChanged(0.2);
+  return {
+    name: 'SampleFile.jpg', // File name displayed during download
+    url: 'https://sample.com/sample.jpg', // Download URL of the file.
+    extension: 'jpeg' // File extension used for file icon during download.
+  };
+};
+
+const fileDownloadHandler: FileDownloadHandler = async (userId, fileData) => {
+  return new URL(fileData.url);
+};
+
+```
+
+## Configure upload method to use Azure Blob Storage
+
+To enable Azure Blob Storage upload, we will modify the `uploadFileToAzureBlob` method we declared above with the following code. You will need to replace the Azure Function information below to enable the upload.
+
+`App.tsx`
+
+```javascript
+
+import axios from 'axios';
+
+const uploadFileToAzureBlob = async (fileUpload: FileUploadManager) => {
+ const file = fileUpload.file;
+ if (!file) {
+ throw new Error('fileUpload.file is undefined');
+ }
+
+ const filename = file.name;
+ const fileExtension = file.name.split('.').pop();
+
+ // Following is an example of calling an Azure Function to handle file upload
+ // The https://docs.microsoft.com/en-us/azure/developer/javascript/how-to/with-web-app/azure-function-file-upload
+ // tutorial uses 'username' parameter to specify the storage container name.
+ // Note that the container in the tutorial is private by default. To get default downloads working in
+ // this sample, you need to change the container's access level to Public via Azure Portal.
+ const username = 'ui-library';
+
+ // You can get function url from the Azure Portal:
+ const azFunctionBaseUri='<YOUR_AZURE_FUNCTION_URL>';
+ const uri = `${azFunctionBaseUri}&username=${username}&filename=${filename}`;
+
+ const formData = new FormData();
+ formData.append(file.name, file);
+
+ const response = await axios.request({
+ method: "post",
+ url: uri,
+ data: formData,
+ onUploadProgress: (p) => {
+      // Optionally, you can update the file upload progress.
+ fileUpload.notifyUploadProgressChanged(p.loaded / p.total);
+ },
+ });
+
+ const storageBaseUrl = 'https://<YOUR_STORAGE_ACCOUNT>.blob.core.windows.net';
+
+ return {
+ name: filename,
+ url: `${storageBaseUrl}/${username}/${filename}`,
+ extension: fileExtension
+ };
+}
+
+```
+
+## Error Handling
+
+When an upload fails, the UI Library Chat Composite will display an error message.
+
+![File Upload Error Bar](./media/file-too-big.png "Screenshot that shows the File Upload Error Bar.")
+
+Here is sample code showcasing how you can fail an upload due to a size validation error by changing the `fileUploadHandler` above.
+
+`App.tsx`
+
+```javascript
+import { FileUploadHandler } from '@azure/communication-react';
+
+const fileUploadHandler: FileUploadHandler = async (userId, fileUploads) => {
+ for (const fileUpload of fileUploads) {
+ if (fileUpload.file && fileUpload.file.size > 99 * 1024 * 1024) {
+ // Notify ChatComposite about upload failure.
+ // Allows you to provide a custom error message.
+ fileUpload.notifyUploadFailed('File too big. Select a file under 99 MB.');
+ }
+ }
+}
+```
+
+## File Downloads - Advanced Usage
+
+By default, the file `url` provided through `notifyUploadCompleted` method will be used to trigger a file download. However, if you need to handle a download in a different way, you can provide a custom `downloadHandler` to ChatComposite. Below we will modify the `fileDownloadHandler` that we declared above to check for an authorized user before allowing to download the file.
+
+`App.tsx`
+
+```javascript
+import { FileDownloadHandler } from '@azure/communication-react';
+
+const isUnauthorizedUser = (userId: string): boolean => {
+  // You need to write your own authorization logic here for this example.
+  return false;
+};
+
+const fileDownloadHandler: FileDownloadHandler = async (userId, fileData) => {
+ if (isUnauthorizedUser(userId)) {
+ // Error message will be displayed to the user.
+    return { errorMessage: 'You don’t have permission to download this file.' };
+ } else {
+ // If this function returns a Promise that resolves a URL string,
+ // the URL is opened in a new tab.
+ return new URL(fileData.url);
+ }
+}
+```
+
+Download errors will be displayed to users in an error bar on top of the Chat Composite.
+
+![File Download Error](./media/download-error.png "Screenshot that shows the File Download Error.")
++
+## Clean up resources
+
+If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. You can find out more about [cleaning up Azure Communication Service resources](../quickstarts/create-communication-resource.md#clean-up-resources) and [cleaning Azure Function Resources](../../azure-functions/create-first-function-vs-code-csharp.md#clean-up-resources).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Check the rest of the UI Library](https://azure.github.io/communication-ui-library/)
+
+You may also want to:
+
+- [Add chat to your app](../quickstarts/chat/get-started.md)
+- [Creating user access tokens](../quickstarts/access-tokens.md)
+- [Learn about client and server architecture](../concepts/client-and-server-architecture.md)
+- [Learn about authentication](../concepts/authentication.md)
communication-services Virtual Visits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/virtual-visits.md
+
+ Title: Virtual visits with Azure Communication Services
+description: Learn concepts for virtual visit apps
+++++ Last updated : 01/10/2022+++++
+# Virtual visits
+
+This tutorial describes concepts for virtual visit applications. After completing this tutorial and the associated [Sample Builder](https://aka.ms/acs-sample-builder), you will understand common use cases that a virtual visits application delivers and the Microsoft technologies that can help you build those use cases, and you will have built a sample application integrating Microsoft 365 and Azure that you can use to demo and explore further.
+
+Virtual visits are a communication pattern where a **consumer** and a **business** assemble for a scheduled appointment. The **organizational boundary** between consumer and business, and **scheduled** nature of the interaction, are key attributes of most virtual visits. Many industries operate virtual visits: meetings with a healthcare provider, a loan officer, or a product support technician.
+
+No matter the industry, there are at least three personas involved in a virtual visit and certain tasks they accomplish:
+- **Office Manager.** The office manager configures the business' availability and booking rules for providers and consumers.
+- **Provider.** The provider gets on the call with the consumer. They must be able to view upcoming virtual visits, join the virtual visit, and engage in communication.
+- **Consumer.** The consumer schedules and motivates the visit. They must be able to schedule a visit, receive reminders of the visit (typically through SMS or email), join the virtual visit, and engage in communication.
+
+Azure and Teams are interoperable. This interoperability gives organizations choice in how they deliver virtual visits using Microsoft's cloud. Three examples include:
+
+- **Microsoft 365** provides a zero-code suite for virtual visits using Microsoft [Teams](https://www.microsoft.com/microsoft-teams/group-chat-software/) and [Bookings](https://www.microsoft.com/microsoft-365/business/scheduling-and-booking-app). This is the easiest option but customization is limited. [Check out this video for an introduction.](https://www.youtube.com/watch?v=zqfGrwW2lEw)
+- **Microsoft 365 + Azure hybrid.** Combine Microsoft 365 Teams and Bookings with a custom Azure application for the consumer experience. Organizations take advantage of Microsoft 365's employee familiarity but customize and embed the consumer visit experience in their own application.
+- **Azure custom.** Build the entire solution on Azure primitives: the business experience, the consumer experience, and scheduling systems.
+
+![Diagram of virtual visit implementation options](./media/sample-builder/virtual-visit-options.svg)
+
+These three **implementation options** are columns in the table below, while each row provides a **use case** and the **enabling technologies**.
+
+|*Persona* | **Use Case** | **Microsoft 365** | **Microsoft 365 + Azure hybrid** | **Azure Custom** |
+|---|---|---|---|---|
+| *Manager* | Configure Business Availability | Bookings | Bookings | Custom |
+| *Provider* | Managing upcoming visits | Outlook & Teams | Outlook & Teams | Custom |
+| *Provider* | Join the visit | Teams | Teams | ACS Calling & Chat |
+| *Consumer* | Schedule a visit | Bookings | Bookings | ACS Rooms |
+| *Consumer*| Be reminded of a visit | Bookings | Bookings | ACS SMS |
+| *Consumer*| Join the visit | Teams or Virtual Visits | ACS Calling & Chat | ACS Calling & Chat |
+
+There are other ways to customize and combine Microsoft tools to deliver a virtual visits experience:
+- **Replace Bookings with a custom scheduling experience with Graph.** You can build your own consumer-facing scheduling experience that controls Microsoft 365 meetings with Graph APIs.
+- **Replace Teams' provider experience with Azure.** You can still use Microsoft 365 and Bookings to manage meetings but have the business user launch a custom Azure application to join the Teams meeting. This might be useful where you want to split or customize virtual visit interactions from day-to-day employee Teams activity.
+
+## Extend Microsoft 365 with Azure
+The rest of this tutorial focuses on Microsoft 365 and Azure hybrid solutions. These hybrid configurations are popular because they combine employee familiarity with Microsoft 365 and the ability to customize the consumer experience. They're also a good launching point for understanding more complex and customized architectures. The diagram below shows the user steps for a virtual visit:
+
+![High-level architecture of a hybrid virtual visits solution](./media/sample-builder/virtual-visit-arch.svg)
+1. Consumer schedules the visit using Microsoft 365 Bookings.
+2. Consumer gets a visit reminder through SMS and Email.
+3. Provider joins the visit using Microsoft Teams.
+4. Consumer uses a link from the Bookings reminders to launch the Contoso consumer app and join the underlying Teams meeting.
+5. The users communicate with each other using voice, video, and text chat in a meeting.
+
+## Building a virtual visit sample
+In this section we're going to use a Sample Builder tool to deploy a Microsoft 365 + Azure hybrid virtual visits application to an Azure subscription. This application will be a desktop- and mobile-friendly browser experience, with code that you can use to explore and productionize.
+
+### Step 1 - Configure bookings
+
+This sample takes advantage of the Microsoft 365 Bookings app to power the consumer scheduling experience and create meetings for providers. Thus the first step is creating a Bookings calendar and getting the Booking page URL from https://outlook.office.com/bookings/calendar.
+
+![Booking configuration experience](./media/sample-builder/bookings-url.png)
+
+### Step 2 - Sample Builder
+Use the Sample Builder to customize the consumer experience. You can reach the Sample Builder using this [link](https://aka.ms/acs-sample-builder), or by navigating to the page within the Azure Communication Services resource in the Azure portal. Step through the Sample Builder wizard and configure whether Chat or Screen Sharing should be enabled. Change themes and text to match your application. You can preview your configuration live from the page in both Desktop and Mobile browser form-factors.
+
+[ ![Sample builder start page](./media/sample-builder/sample-builder-start.png)](./media/sample-builder/sample-builder-start.png#lightbox)
+
+### Step 3 - Deploy
+At the end of the Sample Builder wizard, you can **Deploy to Azure** or download the code as a zip. The sample builder code is publicly available on [GitHub](https://github.com/Azure-Samples/communication-services-virtual-visits-js).
+
+[ ![Sample builder deployment page](./media/sample-builder/sample-builder-landing.png)](./media/sample-builder/sample-builder-landing.png#lightbox)
+
+The deployment launches an Azure Resource Manager (ARM) template that deploys the themed application you configured.
+
+![Sample builder arm template](./media/sample-builder/sample-builder-arm.png)
+
+After walking through the ARM template, you can select **Go to resource group**.
+
+![Screenshot of a completed Azure Resource Manager Template](./media/sample-builder/azure-complete-deployment.png)
+
+### Step 4 - Test
+The Sample Builder creates three resources in the selected Azure subscription. The **App Service** is the consumer front end, powered by Azure Communication Services.
+
+![produced azure resources in azure portal](./media/sample-builder/azure-resources.png)
+
+Opening the App Service's URL and navigating to `https://<YOUR URL>/VISITS` allows you to try out the consumer experience and join a Teams meeting. `https://<YOUR URL>/BOOK` embeds the Booking experience for consumer scheduling.
+
+![final view of azure app service](./media/sample-builder/azure-resource-final.png)
+
+## Going to production
+The Sample Builder gives you the basics of a Microsoft 365 and Azure virtual visit: consumer scheduling via Bookings, consumer joins via custom app, and the provider joins via Teams. However, there are several things to consider as you take this scenario to production.
+
+### Launching patterns
+Consumers want to jump directly to the virtual visit from the scheduling reminders they receive from Bookings. In Bookings, you can provide a URL prefix that will be used in reminders. If your prefix is `https://<YOUR URL>/VISITS`, Bookings will point users to `https://<YOUR URL>/VISITS?=<TEAMID>`.
+
+### Integrate into your existing app
+The app service generated by the Sample Builder is a stand-alone artifact, designed for desktop and mobile browsers. However, you may already have a website or mobile application and need to migrate these experiences to that existing codebase. The code generated by the Sample Builder should help, but you can also use:
+- **UI SDKs -** [Production Ready Web and Mobile](../concepts/ui-library/ui-library-overview.md) components to build graphical applications (see the sketch after this list).
+- **Core SDKs -** The underlying [Call](../quickstarts/voice-video-calling/get-started-teams-interop.md) and [Chat](../quickstarts/chat/meeting-interop.md) services can be accessed, and you can build any kind of user experience.
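+
+As referenced in the UI SDKs bullet above, here is a minimal sketch of embedding the visit join experience with the UI Library `CallComposite`, assuming your backend already supplies an Azure Communication Services identity, an access token, and the Teams meeting join link retrieved through Graph. All values are placeholders.
+
+```javascript
+import { AzureCommunicationTokenCredential, CommunicationUserIdentifier } from '@azure/communication-common';
+import { CallComposite, fromFlatCommunicationIdentifier, useAzureCommunicationCallAdapter } from '@azure/communication-react';
+import React, { useMemo } from 'react';
+
+// Placeholders supplied by your own trusted backend.
+const USER_ID = '<ACS_USER_ID>';
+const TOKEN = '<ACS_ACCESS_TOKEN>';
+const TEAMS_MEETING_LINK = '<TEAMS_MEETING_JOIN_URL>';
+
+function VisitPage(): JSX.Element {
+  const credential = useMemo(() => new AzureCommunicationTokenCredential(TOKEN), []);
+
+  // Joining an existing Teams meeting only requires its join link as the locator.
+  const callAdapterArgs = useMemo(
+    () => ({
+      userId: fromFlatCommunicationIdentifier(USER_ID) as CommunicationUserIdentifier,
+      displayName: 'Visit attendee',
+      credential,
+      locator: { meetingLink: TEAMS_MEETING_LINK }
+    }),
+    [credential]
+  );
+  const adapter = useAzureCommunicationCallAdapter(callAdapterArgs);
+
+  if (!adapter) {
+    return <h3>Joining your visit...</h3>;
+  }
+  return <CallComposite adapter={adapter} />;
+}
+```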
+
+### Identity & security
+The Sample Builder's consumer experience does not authenticate the end user, but provides [Azure Communication Services user access tokens](../quickstarts/access-tokens.md) to any random visitor. That isn't realistic for most scenarios, and you will want to implement an authentication scheme.
container-apps Application Lifecycle Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/application-lifecycle-management.md
Title: Application lifecycle management in Azure Container Apps Preview
-description: Learn about the full application lifecycle in Azure Container Apps Preview
+ Title: Application lifecycle management in Azure Container Apps
+description: Learn about the full application lifecycle in Azure Container Apps
Last updated 11/02/2021 -+
-# Application lifecycle management in Azure Container Apps Preview
+# Application lifecycle management in Azure Container Apps
The Azure Container Apps application lifecycle revolves around [revisions](revisions.md).
container-apps Authentication Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/authentication-azure-active-directory.md
Title: Enable authentication and authorization in Azure Container Apps Preview with Azure Active Directory
+ Title: Enable authentication and authorization in Azure Container Apps with Azure Active Directory
description: Learn to use the built-in Azure Active Directory authentication provider in Azure Container Apps. + Last updated 04/20/2022
-# Enable authentication and authorization in Azure Container Apps Preview with Azure Active Directory
+# Enable authentication and authorization in Azure Container Apps with Azure Active Directory
This article shows you how to configure authentication for Azure Container Apps so that your app signs in users with the [Microsoft identity platform](../active-directory/develop/v2-overview.md) (Azure AD) as the authentication provider.
container-apps Authentication Facebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/authentication-facebook.md
Title: Enable authentication and authorization in Azure Container Apps Preview with Facebook
+ Title: Enable authentication and authorization in Azure Container Apps with Facebook
description: Learn to use the built-in Facebook authentication provider in Azure Container Apps. + Last updated 04/06/2022
-# Enable authentication and authorization in Azure Container Apps Preview with Facebook
+# Enable authentication and authorization in Azure Container Apps with Facebook
This article shows how to configure Azure Container Apps to use Facebook as an authentication provider.
container-apps Authentication Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/authentication-github.md
Title: Enable authentication and authorization in Azure Container Apps Preview with GitHub
+ Title: Enable authentication and authorization in Azure Container Apps with GitHub
description: Learn to use the built-in GitHub authentication provider in Azure Container Apps. + Last updated 04/20/2022
-# Enable authentication and authorization in Azure Container Apps Preview with GitHub
+# Enable authentication and authorization in Azure Container Apps with GitHub
This article shows how to configure Azure Container Apps to use GitHub as an authentication provider.
container-apps Authentication Google https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/authentication-google.md
Title: Enable authentication and authorization in Azure Container Apps Preview with Google
+ Title: Enable authentication and authorization in Azure Container Apps with Google
description: Learn to use the built-in Google authentication provider in Azure Container Apps. + Last updated 04/20/2022
-# Enable authentication and authorization in Azure Container Apps Preview with Google
+# Enable authentication and authorization in Azure Container Apps with Google
This article shows you how to configure Azure Container Apps to use Google as an authentication provider.
container-apps Authentication Openid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/authentication-openid.md
Title: Enable authentication and authorization in Azure Container Apps Preview with a Custom OpenID Connect provider
+ Title: Enable authentication and authorization in Azure Container Apps with a Custom OpenID Connect provider
description: Learn to use the built-in Custom OpenID Connect authentication provider in Azure Container Apps. + Last updated 04/20/2022
-# Enable authentication and authorization in Azure Container Apps Preview with a Custom OpenID Connect provider
+# Enable authentication and authorization in Azure Container Apps with a Custom OpenID Connect provider
This article shows you how to configure Azure Container Apps to use a custom authentication provider that adheres to the [OpenID Connect specification](https://openid.net/connect/). OpenID Connect (OIDC) is an industry standard used by many identity providers (IDPs). You don't need to understand the details of the specification in order to configure your app to use an adherent IDP.
container-apps Authentication Twitter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/authentication-twitter.md
Title: Enable authentication and authorization in Azure Container Apps Preview with Twitter
+ Title: Enable authentication and authorization in Azure Container Apps with Twitter
description: Learn to use the built-in Twitter authentication provider in Azure Container Apps. + Last updated 04/20/2022
-# Enable authentication and authorization in Azure Container Apps Preview with Twitter
+# Enable authentication and authorization in Azure Container Apps with Twitter
This article shows how to configure Azure Container Apps to use Twitter as an authentication provider.
Use the following guides for details on working with authenticated users.
> [Authentication and authorization overview](authentication.md) <!-- URLs. -->
-[Azure portal]: https://portal.azure.com/
+[Azure portal]: https://portal.azure.com/
container-apps Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/authentication.md
Title: Authentication and authorization in Azure Container Apps Preview
+ Title: Authentication and authorization in Azure Container Apps
description: Use built-in authentication in Azure Container Apps. + Last updated 04/20/2022
-# Authentication and authorization in Azure Container Apps Preview
+# Authentication and authorization in Azure Container Apps
Azure Container Apps provides built-in authentication and authorization features (sometimes referred to as "Easy Auth"), to secure your external ingress-enabled container app with minimal or no code.
container-apps Azure Resource Manager Api Spec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-resource-manager-api-spec.md
Title: Container Apps Preview ARM template API specification
+ Title: Container Apps ARM template API specification
description: Explore the available properties in the Container Apps ARM template.
Last updated 05/13/2022 -+
-# Container Apps Preview ARM template API specification
+# Container Apps ARM template API specification
Azure Container Apps deployments are powered by an Azure Resource Manager (ARM) template. Some Container Apps CLI commands also support using a YAML template to specify a resource.
properties:
maxReplicas: 3 ``` -+
container-apps Background Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/background-processing.md
Title: 'Tutorial: Deploy a background processing application with Azure Container Apps Preview'
+ Title: 'Tutorial: Deploy a background processing application with Azure Container Apps'
description: Learn to create an application that continuously runs in the background with Azure Container Apps
Last updated 11/02/2021 -+
-# Tutorial: Deploy a background processing application with Azure Container Apps Preview
+# Tutorial: Deploy a background processing application with Azure Container Apps
Using Azure Container Apps allows you to deploy applications without requiring the exposure of public endpoints. By using Container Apps scale rules, the application can scale up and down based on the Azure Storage queue length. When there are no messages on the queue, the container app scales down to zero.
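As a sketch of what such a background worker might look like, the following hypothetical C# loop drains an Azure Storage queue while a Container Apps scale rule watches the queue length. The queue name, connection string variable, and polling interval are illustrative assumptions, not taken from the tutorial's sample.

```csharp
// Hypothetical queue reader: the Container Apps scale rule handles scaling,
// while the app itself simply receives and deletes messages.
using Azure.Storage.Queues;
using Azure.Storage.Queues.Models;

string connectionString = Environment.GetEnvironmentVariable("QUEUE_CONNECTION_STRING")!;
QueueClient queue = new QueueClient(connectionString, "myqueue");

while (true)
{
    QueueMessage[] messages = await queue.ReceiveMessagesAsync(maxMessages: 10);

    foreach (QueueMessage message in messages)
    {
        Console.WriteLine($"Processing: {message.MessageText}");
        await queue.DeleteMessageAsync(message.MessageId, message.PopReceipt);
    }

    if (messages.Length == 0)
    {
        // Nothing to do; when the queue stays empty, the app can scale to zero.
        await Task.Delay(TimeSpan.FromSeconds(5));
    }
}
```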
Create a file named *queue.json* and paste the following configuration code into
{ "name": "queuereader", "type": "Microsoft.App/containerApps",
- "apiVersion": "2022-01-01-preview",
+ "apiVersion": "2022-03-01",
"kind": "containerapp", "location": "[parameters('location')]", "properties": {
container-apps Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/billing.md
Title: Billing in Azure Container Apps preview
-description: Learn how billing is calculated in Azure Container Apps preview
+ Title: Billing in Azure Container Apps
+description: Learn how billing is calculated in Azure Container Apps
+ Last updated 03/09/2022
-# Billing in Azure Container Apps preview
+# Billing in Azure Container Apps
Azure Container Apps billing consists of two types of charges:
container-apps Compare Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/compare-options.md
Last updated 11/03/2021 -+ # Comparing Container Apps with other Azure container options
There are many options for teams to build and deploy cloud native and containeri
There's no perfect solution for every use case and every team. The following explanation provides general guidance and recommendations as a starting point to help find the best fit for your team and your requirements.
-> [!IMPORTANT]
-> Azure Container Apps is currently in public preview while these other options are generally available (GA).
-- ## Container option comparisons ### Azure Container Apps
container-apps Connect Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/connect-apps.md
Title: Connect applications in Azure Container Apps Preview
+ Title: Connect applications in Azure Container Apps
description: Learn to deploy multiple applications that communicate together in Azure Container Apps.
Last updated 11/02/2021 -+
-# Connect applications in Azure Container Apps Preview
+# Connect applications in Azure Container Apps
Azure Container Apps exposes each container app through a domain name if [ingress](ingress.md) is enabled. Ingress endpoints can be exposed either publicly to the world or internally and only available to other container apps in the same [environment](environment.md).
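For example, one app can call another app in the same environment over its ingress FQDN. The following sketch assumes the target app's FQDN is passed in through an environment variable; the variable name, FQDN, and route are hypothetical.

```csharp
// Sketch: calling a peer container app over its ingress FQDN.
using System.Net.Http;

string targetFqdn = Environment.GetEnvironmentVariable("ORDER_SERVICE_FQDN")
    ?? "orderservice.internal.example.azurecontainerapps.io"; // placeholder

using HttpClient client = new HttpClient { BaseAddress = new Uri($"https://{targetFqdn}") };
string response = await client.GetStringAsync("/api/orders");
Console.WriteLine(response);
```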
container-apps Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/containers.md
Title: Containers in Azure Container Apps Preview
+ Title: Containers in Azure Container Apps
description: Learn how containers are managed and configured in Azure Container Apps
Last updated 05/12/2022 -+
-# Containers in Azure Container Apps Preview
+# Containers in Azure Container Apps
Azure Container Apps manages the details of Kubernetes and container orchestration for you. Containers in Azure Container Apps can use any runtime, programming language, or development stack of your choice.
container-apps Dapr Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-overview.md
description: Learn more about using Dapr on your Azure Container App service to
+ Last updated 05/10/2022
The following Pub/sub example demonstrates how Dapr works alongside your contain
| Label | Dapr settings | Description | | -- | - | -- |
-| 1 | Container Apps with Dapr enabled | Dapr is enabled at the container app level by configuring Dapr settings. Dapr settings exist at the app-level, meaning they apply across revisions. |
+| 1 | Container Apps with Dapr enabled | Dapr is enabled at the container app level by configuring Dapr settings. Dapr settings apply across all revisions of a given container app. |
| 2 | Dapr sidecar | Fully managed Dapr APIs are exposed to your container app via the Dapr sidecar. These APIs are available through HTTP and gRPC protocols. By default, the sidecar runs on port 3500 in Container Apps (see the sketch after this table). |
| 3 | Dapr component | Dapr components can be shared by multiple container apps. Using scopes, the Dapr sidecar will determine which components to load for a given container app at runtime. |
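As a sketch of how an application talks to the sidecar described above, the following C# snippet calls the Dapr state HTTP API on the default port 3500. The state store name `statestore` and the key used are illustrative assumptions.

```csharp
// Sketch: saving and reading state through the Dapr sidecar's HTTP API
// (default port 3500 in Container Apps). Component name and key are examples.
using System.Net.Http;
using System.Text;

using HttpClient dapr = new HttpClient { BaseAddress = new Uri("http://localhost:3500") };

// Save state: POST /v1.0/state/<store-name> with an array of key/value pairs.
var body = new StringContent(
    "[{\"key\":\"order_1\",\"value\":{\"status\":\"created\"}}]",
    Encoding.UTF8,
    "application/json");
await dapr.PostAsync("/v1.0/state/statestore", body);

// Read it back: GET /v1.0/state/<store-name>/<key>.
string saved = await dapr.GetStringAsync("/v1.0/state/statestore/order_1");
Console.WriteLine(saved);
```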
This resource defines a Dapr component called `dapr-pubsub` via Bicep. The Dapr
The `dapr-pubsub` component is scoped to the Dapr-enabled container apps with app ids `publisher-app` and `subscriber-app`: ```bicep
-resource daprComponent 'daprComponents@2022-01-01-preview' = {
+resource daprComponent 'daprComponents@2022-03-01' = {
name: 'dapr-pubsub' properties: { componentType: 'pubsub.azure.servicebus'
scopes:
- subscriber-app ```
+## Current supported Dapr version
+
+Azure Container Apps supports Dapr version 1.7.3.
+
+Version upgrades are handled transparently by Azure Container Apps. You can find the current version via the Azure portal and the CLI.
+
## Limitations
### Unsupported Dapr capabilities
- **Dapr Secrets Management API**: Use [Container Apps secret mechanism][aca-secrets] as an alternative.
- **Custom configuration for Dapr Observability**: Instrument your environment with Application Insights to visualize distributed tracing.
-- **Dapr Configuration spec**: Any capabilities that require use of the Dapr configuration spec, which includes preview features.
+- **Dapr Configuration spec**: Any capabilities that require use of the Dapr configuration spec.
- **Advanced Dapr sidecar configurations**: Container Apps allows you to specify sidecar settings including `app-protocol`, `app-port`, and `app-id`. For a list of unsupported configuration options, see [the Dapr documentation](https://docs.dapr.io/reference/arguments-annotations-overview/).
-- **Dapr APIs in Preview state**
### Known limitations
container-apps Deploy Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/deploy-visual-studio.md
Last updated 3/04/2022-+ # Tutorial: Deploy to Azure Container Apps using Visual Studio
-Azure Container Apps Preview enables you to run microservices and containerized applications on a serverless platform. With Container Apps, you enjoy the benefits of running containers while leaving behind the concerns of manually configuring cloud infrastructure and complex container orchestrators.
+Azure Container Apps enables you to run microservices and containerized applications on a serverless platform. With Container Apps, you enjoy the benefits of running containers while leaving behind the concerns of manually configuring cloud infrastructure and complex container orchestrators.
In this tutorial, you'll deploy a containerized ASP.NET Core 6.0 application to Azure Container Apps using Visual Studio. The steps below also apply to earlier versions of ASP.NET Core. ## Prerequisites - An Azure account with an active subscription is required. If you don't already have one, you can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- Visual Studio 2022 Preview 3 or higher, available as a [free download](https://visualstudio.microsoft.com/vs/preview/).
+- Visual Studio 2022 version 17.2 or higher, available as a [free download](https://visualstudio.microsoft.com).
- [Docker Desktop](https://hub.docker.com/editions/community/docker-ce-desktop-windows) for Windows. Visual Studio uses Docker Desktop for various containerization features. ## Create the project
The Visual Studio publish dialogs will help you choose existing Azure resources,
:::image type="content" source="media/visual-studio/container-apps-deploy-azure.png" alt-text="A screenshot showing to publish to Azure.":::
-3) On the **Specific target** screen, choose **Azure Container Apps Preview (Linux)**, and then select **Next** again.
+3) On the **Specific target** screen, choose **Azure Container Apps (Linux)**, and then select **Next** again.
:::image type="content" source="media/visual-studio/container-apps-publish-azure.png" alt-text="A screenshot showing Container Apps selected.":::
container-apps Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/environment.md
Title: Azure Container Apps environments Preview
+ Title: Azure Container Apps environments
description: Learn how environments are managed in Azure Container Apps.
Last updated 12/05/2021 -+
-# Azure Container Apps Preview environments
+# Azure Container Apps environments
Individual container apps are deployed to a single Container Apps environment, which acts as a secure boundary around groups of container apps. Container Apps in the same environment are deployed in the same virtual network and write logs to the same Log Analytics workspace. You may provide an [existing virtual network](vnet-custom.md) when you create an environment.
Billing is relevant only to individual container apps and their resource usage.
## Next steps > [!div class="nextstepaction"]
-> [Containers](containers.md)
+> [Containers](containers.md)
container-apps Firewall Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/firewall-integration.md
Title: Securing a custom VNET in Azure Container Apps Preview
-description: Firewall settings to secure a custom VNET in Azure Container Apps Preview
+ Title: Securing a custom VNET in Azure Container Apps
+description: Firewall settings to secure a custom VNET in Azure Container Apps
+ Last updated 4/15/2022
container-apps Get Started Existing Container Image Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/get-started-existing-container-image-portal.md
Title: 'Quickstart: Deploy an existing container image in the Azure portal'
-description: Deploy an existing container image to Azure Container Apps Preview using the Azure portal.
+description: Deploy an existing container image to Azure Container Apps using the Azure portal.
+ Last updated 12/13/2021
zone_pivot_groups: container-apps-registry-types
# Quickstart: Deploy an existing container image in the Azure portal
-Azure Container Apps Preview enables you to run microservices and containerized applications on a serverless platform. With Container Apps, you enjoy the benefits of running containers while leaving behind the concerns of manually configuring cloud infrastructure and complex container orchestrators.
+Azure Container Apps enables you to run microservices and containerized applications on a serverless platform. With Container Apps, you enjoy the benefits of running containers while leaving behind the concerns of manually configuring cloud infrastructure and complex container orchestrators.
This article demonstrates how to deploy an existing container to Azure Container Apps using the Azure portal.
container-apps Get Started Existing Container Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/get-started-existing-container-image.md
Title: 'Quickstart: Deploy an existing container image with the Azure CLI'
-description: Deploy an existing container image to Azure Container Apps Preview with the Azure CLI.
+description: Deploy an existing container image to Azure Container Apps with the Azure CLI.
+ Last updated 03/21/2022
zone_pivot_groups: container-apps-registry-types
# Quickstart: Deploy an existing container image with the Azure CLI
-The Azure Container Apps Preview service enables you to run microservices and containerized applications on a serverless platform. With Container Apps, you enjoy the benefits of running containers while you leave behind the concerns of manual cloud infrastructure configuration and complex container orchestrators.
+The Azure Container Apps service enables you to run microservices and containerized applications on a serverless platform. With Container Apps, you enjoy the benefits of running containers while you leave behind the concerns of manual cloud infrastructure configuration and complex container orchestrators.
This article demonstrates how to deploy an existing container to Azure Container Apps.
This article demonstrates how to deploy an existing container to Azure Container
- An Azure account with an active subscription. - If you don't have one, you [can create one for free](https://azure.microsoft.com/free/). - Install the [Azure CLI](/cli/azure/install-azure-cli).
+- Access to a public or private container registry.
[!INCLUDE [container-apps-create-cli-steps.md](../../includes/container-apps-create-cli-steps.md)]
For details on how to provide values for any of these parameters to the `create`
::: zone pivot="container-apps-private-registry"
+If you're using Azure Container Registry (ACR), you can sign in to your registry instead of supplying the `--registry-username` and `--registry-password` parameters in the `az containerapp create` command, which also removes the need to set the REGISTRY_USERNAME and REGISTRY_PASSWORD variables.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az acr login --name <REGISTRY_NAME>
+```
+
+# [PowerShell](#tab/powershell)
+
+```powershell
+az acr login --name <REGISTRY_NAME>
+```
+++ # [Bash](#tab/bash) ```bash
REGISTRY_USERNAME=<REGISTRY_USERNAME>
REGISTRY_PASSWORD=<REGISTRY_PASSWORD> ```
-As you define these variables, replace the placeholders surrounded by `<>` with your values.
+(Replace the \<placeholders\> with your values.)
+
+If you have logged in to ACR, you can omit the `--registry-username` and `--registry-password` parameters in the `az containerapp create` command.
```azurecli az containerapp create \
$REGISTRY_USERNAME=<REGISTRY_USERNAME>
$REGISTRY_PASSWORD=<REGISTRY_PASSWORD> ```
-As you define these variables, replace the placeholders surrounded by `<>` with your values.
+(Replace the \<placeholders\> with your values.)
+
+If you have logged in to ACR, you can omit the `--registry-username` and `--registry-password` parameters in the `az containerapp create` command.
```powershell az containerapp create `
az monitor log-analytics query \
```powershell $LOG_ANALYTICS_WORKSPACE_CLIENT_ID=(az containerapp env show --name $CONTAINERAPPS_ENVIRONMENT --resource-group $RESOURCE_GROUP --query properties.appLogsConfiguration.logAnalyticsConfiguration.customerId --out tsv)
-$queryResults = Invoke-AzOperationalInsightsQuery -WorkspaceId $LOG_ANALYTICS_WORKSPACE_CLIENT_ID -Query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'nodeapp' and (Log_s contains 'persisted' or Log_s contains 'order') | project ContainerAppName_s, Log_s, TimeGenerated | take 5"
-$queryResults.Results
+az monitor log-analytics query `
+ --workspace $LOG_ANALYTICS_WORKSPACE_CLIENT_ID `
+ --analytics-query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'my-container-app' | project ContainerAppName_s, Log_s, TimeGenerated" `
--out table ```
az group delete \
# [PowerShell](#tab/powershell) ```powershell
-Remove-AzResourceGroup -Name $RESOURCE_GROUP -Force
+az group delete `
+ --name $RESOURCE_GROUP
```
container-apps Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/get-started.md
Title: 'Quickstart: Deploy your first container app'
-description: Deploy your first application to Azure Container Apps Preview.
+description: Deploy your first application to Azure Container Apps.
Last updated 03/21/2022 -+ ms.devlang: azurecli # Quickstart: Deploy your first container app
-The Azure Container Apps Preview service enables you to run microservices and containerized applications on a serverless platform. With Container Apps, you enjoy the benefits of running containers while you leave behind the concerns of manually configuring cloud infrastructure and complex container orchestrators.
+The Azure Container Apps service enables you to run microservices and containerized applications on a serverless platform. With Container Apps, you enjoy the benefits of running containers while you leave behind the concerns of manually configuring cloud infrastructure and complex container orchestrators.
In this quickstart, you create a secure Container Apps environment and deploy your first container app.
container-apps Github Actions Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/github-actions-cli.md
Title: Publish revisions with GitHub Actions in Azure Container Apps Preview
-description: Learn to automatically create new revisions using GitHub Actions in Azure Container Apps Preview
+ Title: Publish revisions with GitHub Actions in Azure Container Apps
+description: Learn to automatically create new revisions using GitHub Actions in Azure Container Apps
+ Last updated 12/30/2021
-# Publish revisions with GitHub Actions in Azure Container Apps Preview
+# Publish revisions with GitHub Actions in Azure Container Apps
Azure Container Apps allows you to use GitHub Actions to publish [revisions](revisions.md) to your container app. As commits are pushed to your GitHub repository, a GitHub Actions workflow is triggered, which updates the [container](containers.md) image in the container registry. Once the container is updated in the registry, Azure Container Apps creates a new revision based on the updated container image.
container-apps Health Probes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/health-probes.md
description: Check startup, liveness, and readiness with Azure Container Apps he
+ Last updated 03/30/2022
TCP probes wait for a connection to be established with the server to indicate s
The following code listing shows how you can define health probes for your containers.
-The `...` placeholders denote omitted code. Refer to [Container Apps Preview ARM template API specification](./azure-resource-manager-api-spec.md) for full ARM template details.
+The `...` placeholders denote omitted code. Refer to [Container Apps ARM template API specification](./azure-resource-manager-api-spec.md) for full ARM template details.
# [ARM template](#tab/arm-template)
container-apps Ingress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/ingress.md
Title: Set up HTTPS ingress in Azure Container Apps Preview
+ Title: Set up HTTPS ingress in Azure Container Apps
description: Enable public and private endpoints in your app with Azure Container Apps
Last updated 11/02/2021 -+
-# Set up HTTPS ingress in Azure Container Apps Preview
+# Set up HTTPS ingress in Azure Container Apps
Azure Container Apps allows you to expose your container app to the public web by enabling ingress. When you enable ingress, you do not need to create an Azure Load Balancer, public IP address, or any other Azure resources to enable incoming HTTPS requests.
container-apps Manage Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/manage-secrets.md
Title: Manage secrets in Azure Container Apps Preview
+ Title: Manage secrets in Azure Container Apps
description: Learn to store and consume sensitive configuration values in Azure Container Apps.
Last updated 11/02/2021 -+
-# Manage secrets in Azure Container Apps Preview
+# Manage secrets in Azure Container Apps
Azure Container Apps allows your application to securely store sensitive configuration values. Once defined at the application level, secured values are available to containers, inside scale rules, and via Dapr.
container-apps Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/managed-identity.md
description: Using managed identities in Container Apps
+ Last updated 04/11/2022
-# Managed identities in Azure Container Apps Preview
+# Managed identities in Azure Container Apps
A managed identity from Azure Active Directory (Azure AD) allows your container app to access other Azure AD-protected resources. For more about managed identities in Azure AD, see [Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md).
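Once an identity is assigned to the container app and granted a role on the target resource, application code can use it through the Azure.Identity library. The following is a sketch, assuming a system-assigned identity with Storage Blob Data Reader access; the storage account name is a placeholder.

```csharp
// Sketch: using the container app's managed identity to list blob containers.
using Azure.Identity;
using Azure.Storage.Blobs;

// DefaultAzureCredential picks up the managed identity when running in Azure.
var credential = new DefaultAzureCredential();

var blobService = new BlobServiceClient(
    new Uri("https://<storage-account-name>.blob.core.windows.net"),
    credential);

await foreach (var container in blobService.GetBlobContainersAsync())
{
    Console.WriteLine(container.Name);
}
```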
container-apps Microservices Dapr Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr-azure-resource-manager.md
Last updated 01/31/2022 -+ zone_pivot_groups: container-apps
In this tutorial, you deploy the same applications from the Dapr [Hello World](h
The application consists of: -- A client (Python) container app to generates messages.
+- A client (Python) container app to generate messages.
- A service (Node) container app to consume and persist those messages in a state store The following architecture diagram illustrates the components that make up this tutorial:
Save the following file as _hello-world.json_:
"resources": [ { "type": "Microsoft.OperationalInsights/workspaces",
- "apiVersion": "2020-03-01-preview",
+ "apiVersion": "2021-06-01",
"name": "[variables('logAnalyticsWorkspaceName')]", "location": "[parameters('location')]", "properties": {
Save the following file as _hello-world.json_:
}, { "type": "Microsoft.App/managedEnvironments",
- "apiVersion": "2022-01-01-preview",
+ "apiVersion": "2022-03-01",
"name": "[parameters('environment_name')]", "location": "[parameters('location')]", "dependsOn": [
Save the following file as _hello-world.json_:
"appLogsConfiguration": { "destination": "log-analytics", "logAnalyticsConfiguration": {
- "customerId": "[reference(resourceId('Microsoft.OperationalInsights/workspaces/', variables('logAnalyticsWorkspaceName')), '2020-03-01-preview').customerId]",
- "sharedKey": "[listKeys(resourceId('Microsoft.OperationalInsights/workspaces/', variables('logAnalyticsWorkspaceName')), '2020-03-01-preview').primarySharedKey]"
+ "customerId": "[reference(resourceId('Microsoft.OperationalInsights/workspaces/', variables('logAnalyticsWorkspaceName')), '2021-06-01').customerId]",
+ "sharedKey": "[listKeys(resourceId('Microsoft.OperationalInsights/workspaces/', variables('logAnalyticsWorkspaceName')), '2021-06-01').primarySharedKey]"
} } },
Save the following file as _hello-world.json_:
{ "type": "daprComponents", "name": "statestore",
- "apiVersion": "2022-01-01-preview",
+ "apiVersion": "2022-03-01",
"dependsOn": [ "[resourceId('Microsoft.App/managedEnvironments/', parameters('environment_name'))]" ],
Save the following file as _hello-world.json_:
}, { "type": "Microsoft.App/containerApps",
- "apiVersion": "2022-01-01-preview",
+ "apiVersion": "2022-03-01",
"name": "nodeapp", "location": "[parameters('location')]", "dependsOn": [
Save the following file as _hello-world.json_:
}, { "type": "Microsoft.App/containerApps",
- "apiVersion": "2022-01-01-preview",
+ "apiVersion": "2022-03-01",
"name": "pythonapp", "location": "[parameters('location')]", "dependsOn": [
param storage_container_name string
var logAnalyticsWorkspaceName = 'logs-${environment_name}' var appInsightsName = 'appins-${environment_name}'
-resource logAnalyticsWorkspace'Microsoft.OperationalInsights/workspaces@2020-03-01-preview' = {
+resource logAnalyticsWorkspace'Microsoft.OperationalInsights/workspaces@2021-06-01' = {
name: logAnalyticsWorkspaceName location: location properties: any({
resource appInsights 'Microsoft.Insights/components@2020-02-02' = {
} }
-resource environment 'Microsoft.App/managedEnvironments@2022-01-01-preview' = {
+resource environment 'Microsoft.App/managedEnvironments@2022-03-01' = {
name: environment_name location: location properties: {
resource environment 'Microsoft.App/managedEnvironments@2022-01-01-preview' = {
appLogsConfiguration: { destination: 'log-analytics' logAnalyticsConfiguration: {
- customerId: reference(logAnalyticsWorkspace.id, '2020-03-01-preview').customerId
- sharedKey: listKeys(logAnalyticsWorkspace.id, '2020-03-01-preview').primarySharedKey
+ customerId: reference(logAnalyticsWorkspace.id, '2021-06-01').customerId
+ sharedKey: listKeys(logAnalyticsWorkspace.id, '2021-06-01').primarySharedKey
} } }
- resource daprComponent 'daprComponents@2022-01-01-preview' = {
+ resource daprComponent 'daprComponents@2022-03-01' = {
name: 'statestore' properties: { componentType: 'state.azure.blobstorage'
resource environment 'Microsoft.App/managedEnvironments@2022-01-01-preview' = {
} }
-resource nodeapp 'Microsoft.App/containerApps@2022-01-01-preview' = {
+resource nodeapp 'Microsoft.App/containerApps@2022-03-01' = {
name: 'nodeapp' location: location properties: {
resource nodeapp 'Microsoft.App/containerApps@2022-01-01-preview' = {
} }
-resource pythonapp 'Microsoft.App/containerApps@2022-01-01-preview' = {
+resource pythonapp 'Microsoft.App/containerApps@2022-03-01' = {
name: 'pythonapp' location: location properties: {
container-apps Microservices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices.md
Title: Microservices with Azure Containers Apps Preview
+ Title: Microservices with Azure Container Apps
description: Build a microservice in Azure Container Apps.
Last updated 11/02/2021 -+
-# Microservices with Azure Containers Apps Preview
+# Microservices with Azure Container Apps
[Microservice architectures](https://azure.microsoft.com/solutions/microservice-applications/#overview) allow you to independently develop, upgrade, version, and scale core areas of functionality in an overall system. Azure Container Apps provides the foundation for deploying microservices featuring:
container-apps Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/monitor.md
Title: Monitor an app in Azure Container Apps Preview
+ Title: Monitor an app in Azure Container Apps
description: Learn how applications are monitored and logged in Azure Container Apps.
Last updated 11/02/2021 -+
-# Monitor an app in Azure Container Apps Preview
+# Monitor an app in Azure Container Apps
Azure Container Apps gathers a broad set of data about your container app and stores it using [Log Analytics](../azure-monitor/logs/log-analytics-tutorial.md). This article describes the available logs, and how to write and view logs.
container-apps Observability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/observability.md
Title: Observability in Azure Container Apps Preview
-description: Monitor your running app in Azure Container Apps Preview
+ Title: Observability in Azure Container Apps
+description: Monitor your running app in Azure Container Apps
+ Last updated 05/02/2022
-# Observability in Azure Container Apps Preview
+# Observability in Azure Container Apps
Azure Container Apps provides several built-in observability features that give you a holistic view of your container app's health throughout its application lifecycle. These features help you monitor and diagnose the state of your app to improve performance and respond to critical problems.
Container Apps provides these metrics.
|Network in bytes|Network received bytes|RxBytes|bytes|
|Network out bytes|Network transmitted bytes|TxBytes|bytes|
|Requests|Requests processed|Requests|n/a|
+|Replica count| Number of active replicas| Replicas | n/a |
+|Replica Restart Count| Number of replica restarts | RestartCount | n/a |
The metrics namespace is `microsoft.app/containerapps`.
You can filter your metrics by revision or replica. For example, to filter by a
:::image type="content" source="media/observability/metrics-add-filter.png" alt-text="Screenshot of the metrics explorer showing the chart filter options.":::
-You can split the information in your chart by revision or replica. For example, to split by revision, select **Apply splitting** and select **Revision** from the **Values** drop-down list. Splitting is only available when the chart contains a single metric.
+You can split the metric information in your chart by revision or replica (except for Replica count, which can only be split by revision). The Requests metric can also be split by status code and status code category. For example, to split by revision, select **Apply splitting** and then select **Revision** from the **Values** drop-down list. Splitting is only available when the chart contains a single metric.
:::image type="content" source="media/observability/metrics-apply-splitting.png" alt-text="Screenshot of the metrics explorer that shows a chart with metrics split by revision.":::
Container Apps manages updates to your container app by creating [revisions](rev
## Next steps -- [Monitor an app in Azure Container Apps Preview](monitor.md)
+- [Monitor an app in Azure Container Apps](monitor.md)
- [Health probes in Azure Container Apps](health-probes.md)
container-apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/overview.md
Title: Azure Container Apps Preview overview
+ Title: Azure Container Apps overview
description: Learn about common scenarios and uses for Azure Container Apps
Last updated 11/02/2021 -+
-# Azure Container Apps Preview overview
+# Azure Container Apps overview
Azure Container Apps enables you to run microservices and containerized applications on a serverless platform. Common uses of Azure Container Apps include:
container-apps Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/quickstart-portal.md
Title: 'Quickstart: Deploy your first container app using the Azure portal'
-description: Deploy your first application to Azure Container Apps Preview using the Azure portal.
+description: Deploy your first application to Azure Container Apps using the Azure portal.
Last updated 12/13/2021 -+ # Quickstart: Deploy your first container app using the Azure portal
-Azure Container Apps Preview enables you to run microservices and containerized applications on a serverless platform. With Container Apps, you enjoy the benefits of running containers while leaving behind the concerns of manually configuring cloud infrastructure and complex container orchestrators.
+Azure Container Apps enables you to run microservices and containerized applications on a serverless platform. With Container Apps, you enjoy the benefits of running containers while leaving behind the concerns of manually configuring cloud infrastructure and complex container orchestrators.
In this quickstart, you create a secure Container Apps environment and deploy your first container app using the Azure portal.
container-apps Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/quotas.md
Title: Quotas for Azure Container Apps Preview
+ Title: Quotas for Azure Container Apps
description: Learn about quotas for Azure Container Apps. + Last updated 05/03/2022
-# Quotas for Azure Container Apps Preview
+# Quotas for Azure Container Apps
The following quotas are on a per subscription basis for Azure Container Apps.
container-apps Revisions Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/revisions-manage.md
Title: Manage revisions in Azure Container Apps Preview
+ Title: Manage revisions in Azure Container Apps
description: Manage revisions and traffic splitting in Azure Container Apps.
Last updated 11/02/2021 -+
-# Manage revisions Azure Container Apps Preview
+# Manage revisions in Azure Container Apps
Supporting multiple revisions in Azure Container Apps allows you to manage the versioning and amount of [traffic sent to each revision](#traffic-splitting). Use the following commands to control how your container app manages revisions.
container-apps Revisions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/revisions.md
Title: Revisions in Azure Container Apps Preview
-description: Learn about revisions in Azure Container Apps
+ Title: Revisions in Azure Container Apps
+description: Learn how revisions are created in Azure Container Apps
Last updated 05/11/2022 -+
-# Revisions in Azure Container Apps Preview
+# Revisions in Azure Container Apps
Azure Container Apps implements container app versioning by creating revisions. A revision is an immutable snapshot of a container app version.
You aren't charged for the inactive revisions. You can have a maximum of 100 rev
> [!div class="nextstepaction"] > [Application lifecycle management](application-lifecycle-management.md)-
container-apps Vnet Custom Internal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom-internal.md
Title: Provide an internal virtual network to an Azure Container Apps Preview environment
+ Title: Provide an internal virtual network to an Azure Container Apps environment
description: Learn how to provide an internal VNET to an Azure Container Apps environment. + Last updated 5/16/2022 zone_pivot_groups: azure-cli-or-portal
-# Provide a virtual network to an internal Azure Container Apps (Preview) environment
+# Provide a virtual network to an internal Azure Container Apps environment
The following example shows you how to create a Container Apps environment in an existing virtual network.
container-apps Vnet Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom.md
Title: Provide an external virtual network to an Azure Container Apps Preview environment
+ Title: Provide an external virtual network to an Azure Container Apps environment
description: Learn how to provide an external VNET to an Azure Container Apps environment. + Last updated 05/16/2022 zone_pivot_groups: azure-cli-or-portal
-# Provide a virtual network to an external Azure Container Apps (Preview) environment
+# Provide a virtual network to an external Azure Container Apps environment
The following example shows you how to create a Container Apps environment in an existing virtual network.
container-instances Container Instances Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-github-action.md
In the GitHub workflow, you need to supply Azure credentials to authenticate to
First, get the resource ID of your resource group. Substitute the name of your group in the following [az group show][az-group-show] command: ```azurecli
-$groupId=$(az group show \
+groupId=$(az group show \
--name <resource-group-name> \ --query id --output tsv) ```
Update the Azure service principal credentials to allow push and pull access to
Get the resource ID of your container registry. Substitute the name of your registry in the following [az acr show][az-acr-show] command: ```azurecli
-$registryId=$(az acr show \
+registryId=$(az acr show \
--name <registry-name> \ --query id --output tsv) ```
cosmos-db Burst Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/burst-capacity.md
+
+ Title: Burst capacity in Azure Cosmos DB (preview)
+description: Learn more about burst capacity in Azure Cosmos DB
++++++ Last updated : 05/09/2022++
+# Burst capacity in Azure Cosmos DB (preview)
+
+Azure Cosmos DB burst capacity (preview) allows you to take advantage of your database or container's idle throughput capacity to handle spikes of traffic. With burst capacity, each physical partition can accumulate up to 5 minutes of idle capacity, which can be consumed at a rate of up to 3000 RU/s. Requests that would otherwise have been rate limited can be served from this accumulated capacity while it lasts.
+
+Burst capacity applies only to Azure Cosmos DB accounts using provisioned throughput (manual and autoscale) and doesn't apply to serverless containers. The feature is configured at the Azure Cosmos DB account level and will automatically apply to all databases and containers in the account that have physical partitions with less than 3000 RU/s of provisioned throughput. Resources that have greater than or equal to 3000 RU/s per physical partition won't benefit from or be able to use burst capacity.
+
+## How burst capacity works
+
+> [!NOTE]
+> The current implementation of burst capacity is subject to change in the future. Usage of burst capacity is subject to system resource availability and is not guaranteed. Azure Cosmos DB may also use burst capacity for background maintenance tasks. If your workload requires consistent throughput beyond what you have provisioned, it's recommended to provision your RU/s accordingly without relying on burst capacity.
+
+Let's take an example of a physical partition that has 100 RU/s of provisioned throughput and is idle for 5 minutes. With burst capacity, it can accumulate a maximum of 100 RU/s * 300 seconds = 30,000 RU of burst capacity. The capacity can be consumed at a maximum rate of 3000 RU/s, so if there's a sudden spike in request volume, the partition can burst up to 3000 RU/s for up to 30,000 RU / 3000 RU/s = 10 seconds. Without burst capacity, any requests consumed beyond the provisioned 100 RU/s would have been rate limited (429).
+
+After the 10 seconds is over, the burst capacity has been used up. If the workload continues to exceed the provisioned 100 RU/s, any requests that are consumed beyond the provisioned 100 RU/s would now be rate limited (429). The maximum amount of burst capacity a physical partition can accumulate at any point in time is equal to 300 seconds * the provisioned RU/s of the physical partition.
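+
+The arithmetic above can be expressed directly. The following is a small illustrative sketch (not from the source article) that computes the accumulated burst and how long it lasts at the maximum drain rate:
+
+```csharp
+// Worked example of the burst capacity math: accumulation is capped at
+// 300 seconds of idle provisioned throughput, and drains at up to 3000 RU/s.
+double provisionedRuPerSecond = 100;
+double idleSeconds = 300;            // capped at 5 minutes
+double maxBurstRatePerSecond = 3000;
+
+double accumulatedBurstRu = provisionedRuPerSecond * Math.Min(idleSeconds, 300);
+double burstDurationSeconds = accumulatedBurstRu / maxBurstRatePerSecond;
+
+Console.WriteLine($"Accumulated burst: {accumulatedBurstRu} RU");            // 30000 RU
+Console.WriteLine($"Burst duration at 3000 RU/s: {burstDurationSeconds} s"); // 10 s
+```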
+
+## Getting started
+
+To get started using burst capacity, enroll in the preview by submitting a request for the **Azure Cosmos DB Burst Capacity** feature via the [**Preview Features** page](../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page.
+- Before submitting your request, verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#preview-eligibility-criteria).
+- The Azure Cosmos DB team will review your request and contact you via email to confirm which account(s) in the subscription you want to enroll in the preview.
+
+## Limitations
+
+### Preview eligibility criteria
+To enroll in the preview, your Cosmos account must meet all the following criteria:
+ - Your Cosmos account is using provisioned throughput (manual or autoscale). Burst capacity doesn't apply to serverless accounts.
+ - If you're using SQL API, your application must use the Azure Cosmos DB .NET V3 SDK, version 3.27.0 or higher. When burst capacity is enabled on your account, requests sent from non-.NET SDKs or older .NET SDK versions won't be accepted.
+ - There are no SDK or driver requirements to use the feature with Cassandra API, Gremlin API, Table API, or API for MongoDB.
+ - Your Cosmos account isn't using any unsupported connectors
+ - Azure Data Factory
+ - Azure Stream Analytics
+ - Logic Apps
+ - Azure Functions
+ - Azure Search
+
+### SDK requirements (SQL and Table API only)
+#### SQL API
+For SQL API accounts, burst capacity is supported only in the latest version of the .NET v3 SDK. When the feature is enabled on your account, you must only use the supported SDK. Requests sent from other SDKs or earlier versions won't be accepted. There are no driver or SDK requirements to use burst capacity with Gremlin API, Cassandra API, or API for MongoDB.
+
+Find the latest version of the supported SDK:
+
+| SDK | Supported versions | Package manager link |
+| | | |
+| **.NET SDK v3** | *>= 3.27.0* | <https://www.nuget.org/packages/Microsoft.Azure.Cosmos/> |
+
+Support for other SQL API SDKs is planned for the future.
+
+> [!TIP]
+> You should ensure that your application has been updated to use a compatible SDK version prior to enrolling in the preview. If you're using the legacy .NET V2 SDK, follow the [.NET SDK v3 migration guide](sql/migrate-dotnet-v3.md).
+
+#### Table API
+For Table API accounts, burst capacity is supported only when using the latest version of the Tables SDK. When the feature is enabled on your account, you must only use the supported SDK. Requests sent from other SDKs or earlier versions won't be accepted. The legacy SDK with namespace `Microsoft.Azure.CosmosDB.Table` isn't supported. Follow the [migration guide](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/tables/Azure.Data.Tables/MigrationGuide.md) to upgrade to the latest SDK.
+
+| SDK | Supported versions | Package manager link |
+| | | |
+| **Azure Tables client library for .NET** | *>= 12.0.0* | <https://www.nuget.org/packages/Azure.Data.Tables/> |
+| **Azure Tables client library for Java** | *>= 12.0.0* | <https://mvnrepository.com/artifact/com.azure/azure-data-tables> |
+| **Azure Tables client library for JavaScript** | *>= 12.0.0* | <https://www.npmjs.com/package/@azure/data-tables> |
+| **Azure Tables client library for Python** | *>= 12.0.0* | <https://pypi.org/project/azure-data-tables/> |
+
+### Unsupported connectors
+
+If you enroll in the preview, the following connectors will fail.
+
+* Azure Data Factory
+* Azure Stream Analytics
+* Logic Apps
+* Azure Functions
+* Azure Search
+
+Support for these connectors is planned for the future.
+
+## Next steps
+
+* See the FAQ on [burst capacity.](burst-capacity-faq.yml)
+* Learn more about [provisioned throughput.](set-throughput.md)
+* Learn more about [request units.](request-units.md)
+* Trying to decide between provisioned throughput and serverless? See [choose between provisioned throughput and serverless.](throughput-serverless.md)
+* Want to learn the best practices? See [best practices for scaling provisioned throughput.](scaling-provisioned-throughput-best-practices.md)
cosmos-db Cosmosdb Migrationchoices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cosmosdb-migrationchoices.md
For APIs other than the SQL API, Mongo API and the Cassandra API, there are vari
**Table API** * [Data Migration Tool](table/table-import.md#data-migration-tool)
-* [AzCopy](table/table-import.md#migrate-data-by-using-azcopy)
**Gremlin API**
cosmos-db Hierarchical Partition Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/hierarchical-partition-keys.md
+
+ Title: Hierarchical partition keys in Azure Cosmos DB (preview)
+description: Learn about subpartitioning in Azure Cosmos DB, how to use the feature, and how to manage logical partitions
++++++ Last updated : 05/09/2022++
+# Hierarchical partition keys in Azure Cosmos DB (preview)
+
+Azure Cosmos DB distributes your data across logical and physical partitions based on your partition key to enable horizontal scaling. With hierarchical partition keys, or subpartitioning, you can now configure up to a three-level hierarchy of partition keys to further optimize data distribution and enable higher scale.
+
+If you use synthetic keys today or have scenarios where partition keys can exceed 20 GB of data, subpartitioning can help. With this feature, logical partition key prefixes can exceed 20 GB and 10,000 RU/s, and queries by prefix are efficiently routed to the subset of partitions with the data.
+
+## Example use case
+
+Suppose you have a multi-tenant scenario where you store event information for users in each tenant. This event information could include occurrences such as sign-in, clickstream, or payment events.
+
+In a real-world scenario, some tenants can grow large with thousands of users, while many other tenants are smaller with only a few users. Partitioning by **/TenantId** may lead to exceeding Cosmos DB's 20-GB storage limit on a single logical partition, while partitioning by **/UserId** will make all queries on a tenant cross-partition. Both approaches have significant downsides.
+
+Using a synthetic partition key that combines **TenantId** and **UserId** adds complexity to the application. Additionally, the synthetic partition key queries for a tenant will still be cross-partition, unless all users are known and specified in advance.
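+
+For contrast, a synthetic-key approach typically looks like the following sketch, where the application itself must build and maintain the combined value (the property name and separator are illustrative, not from the source article):
+
+```csharp
+// Hypothetical synthetic partition key: the container is partitioned on
+// /PartitionKey and the application must compose the value itself.
+public class UserEvent
+{
+    public string id { get; set; } = Guid.NewGuid().ToString();
+    public string TenantId { get; set; } = "";
+    public string UserId { get; set; } = "";
+
+    // Combined value used as the partition key; queries must supply it in full.
+    public string PartitionKey => $"{TenantId}_{UserId}";
+}
+```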
+
+With hierarchical partition keys, we can partition first on **TenantId**, and then **UserId**. We can even partition further down to another level, such as **SessionId**, as long as the overall depth doesn't exceed three levels. When a physical partition exceeds 50 GB of storage, Cosmos DB will automatically split the physical partition so that roughly half of the data will be on one physical partition, and half on the other. Effectively, subpartitioning means that a single TenantId can exceed 20 GB of data, and it's possible for a TenantId's data to span multiple physical partitions.
+
+Queries that specify either the **TenantId**, or both **TenantId** and **UserId** will be efficiently routed to only the subset of physical partitions that contain the relevant data. Specifying the full or prefix subpartitioned partition key path effectively avoids a full fan-out query. For example, if the container had 1000 physical partitions, but a particular **TenantId** was only on five of them, the query would only be routed to the much smaller number of relevant physical partitions.
+
+## Getting started
+
+> [!IMPORTANT]
+> Working with containers that use hierarchical partition keys is supported only in the preview versions of the .NET v3 and Java v4 SDKs. You must use the supported SDK to create new containers with hierarchical partition keys and to perform CRUD/query operations on the data.
+
+Find the latest preview version of each supported SDK:
+
+| SDK | Supported versions | Package manager link |
+| | | |
+| **.NET SDK v3** | *>= 3.17.0-preview* | <https://www.nuget.org/packages/Microsoft.Azure.Cosmos/> |
+| **Java SDK v4** | *>= 4.16.0-beta* | <https://mvnrepository.com/artifact/com.azure/azure-cosmos> |
+
+## Sample code
+
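+The samples below reference `UserSession` and `PaymentEvent` item types that aren't defined in the snippets. A minimal hypothetical C# shape for such a type, with properties matching the partition key paths, might look like this:
+
+```csharp
+// Hypothetical item type used by the samples; property names match the
+// hierarchical partition key paths /TenantId, /UserId, and /SessionId.
+public class UserSession
+{
+    public string id { get; set; } = "";
+    public string TenantId { get; set; } = "";
+    public string UserId { get; set; } = "";
+    public string SessionId { get; set; } = "";
+}
+```
+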
+### Create new container with hierarchical partition keys
+
+When creating a new container using the SDK, define a list of subpartitioning key paths up to three levels of depth. Use the list of subpartition keys when configuring the properties of the new container.
+
+#### [.NET SDK v3](#tab/net-v3)
+
+```csharp
+// List of partition keys, in hierarchical order. You can have up to three levels of keys.
+List<string> subpartitionKeyPaths = new List<string> {
+ "/TenantId",
+ "/UserId",
+ "/SessionId"
+};
+
+// Create container properties object
+ContainerProperties containerProperties = new ContainerProperties(
+ id: "<container-name>",
+ partitionKeyPaths: subpartitionKeyPaths
+);
+
+// Create container - subpartitioned by TenantId -> UserId -> SessionId
+Container container = await database.CreateContainerIfNotExistsAsync(containerProperties, throughput: 400);
+```
+
+#### [Java SDK v4](#tab/java-v4)
+
+```java
+// List of partition keys, in hierarchical order. You can have up to three levels of keys.
+List<String> subpartitionKeyPaths = new ArrayList<String>();
+subpartitionKeyPaths.add("/TenantId");
+subpartitionKeyPaths.add("/UserId");
+subpartitionKeyPaths.add("/SessionId");
+
+//Create a partition key definition object with Kind("MultiHash") and Version V2
+PartitionKeyDefinition subpartitionKeyDefinition = new PartitionKeyDefinition();
+subpartitionKeyDefinition.setPaths(subpartitionKeyPaths);
+subpartitionKeyDefinition.setKind(PartitionKind.MULTI_HASH);
+subpartitionKeyDefinition.setVersion(PartitionKeyDefinitionVersion.V2);
+
+// Create container properties object
+CosmosContainerProperties containerProperties = new CosmosContainerProperties("<container-name>", subpartitionKeyDefinition);
+
+// Create throughput properties object
+ThroughputProperties throughputProperties = ThroughputProperties.createManualThroughput(400);
+
+// Create container - subpartitioned by TenantId -> UserId -> SessionId
+Mono<CosmosContainerResponse> container = database.createContainerIfNotExists(containerProperties, throughputProperties);
+```
+++
+### Add an item to a container
+
+There are two options to add a new item to a container with hierarchical partition keys enabled.
+
+#### Automatic extraction
+
+If you pass in an object with the partition key value set, the SDK can automatically extract the full partition key path.
+
+##### [.NET SDK v3](#tab/net-v3)
+
+```csharp
+// Create new item
+UserSession item = new UserSession()
+{
+ id = "f7da01b0-090b-41d2-8416-dacae09fbb4a",
+ TenantId = "Microsoft",
+ UserId = "8411f20f-be3e-416a-a3e7-dcd5a3c1f28b",
+ SessionId = "0000-11-0000-1111"
+};
+
+// Pass in the object and the SDK will automatically extract the full partition key path
+ItemResponse<UserSession> createResponse = await container.CreateItemAsync(item);
+```
+
+##### [Java SDK v4](#tab/java-v4)
+
+```java
+// Create new item
+UserSession item = new UserSession();
+item.setId("f7da01b0-090b-41d2-8416-dacae09fbb4a");
+item.setTenantId("Microsoft");
+item.setUserId("8411f20f-be3e-416a-a3e7-dcd5a3c1f28b");
+item.setSessionId("0000-11-0000-1111");
+
+// Pass in the object and the SDK will automatically extract the full partition key path
+Mono<CosmosItemResponse<UserSession>> createResponse = container.createItem(item);
+```
+++
+#### Manually specify path
+
+The ``PartitionKeyBuilder`` class in the SDK can construct a value for a previously defined hierarchical partition key path. Use this class when adding a new item to a container that has subpartitioning enabled.
+
+> [!TIP]
+> At scale, it is often more performant to specify the full partition key path even if the SDK can extract the path from the object.
+
+##### [.NET SDK v3](#tab/net-v3)
+
+```csharp
+// Create new item object
+PaymentEvent item = new PaymentEvent()
+{
+ id = Guid.NewGuid().ToString(),
+ TenantId = "Microsoft",
+ UserId = "8411f20f-be3e-416a-a3e7-dcd5a3c1f28b",
+ SessionId = "0000-11-0000-1111"
+};
+
+// Specify the full partition key path when creating the item
+PartitionKey partitionKey = new PartitionKeyBuilder()
+ .Add(item.TenantId)
+ .Add(item.UserId)
+ .Build();
+
+// Create the item in the container
+ItemResponse<PaymentEvent> createResponse = await container.CreateItemAsync(item, partitionKey);
+```
+
+##### [Java SDK v4](#tab/java-v4)
+
+```java
+// Create new item object
+UserSession item = new UserSession();
+item.setTenantId("Microsoft");
+item.setUserId("8411f20f-be3e-416a-a3e7-dcd5a3c1f28b");
+item.setSessionId("0000-11-0000-1111");
+item.setId(UUID.randomUUID().toString());
+
+// Specify the full partition key path when creating the item
+PartitionKey partitionKey = new PartitionKeyBuilder()
+ .add(item.getTenantId())
+ .add(item.getUserId())
+ .add(item.getSessionId())
+ .build();
+
+// Create the item in the container
+Mono<CosmosItemResponse<UserSession>> createResponse = container.createItem(item, partitionKey);
+```
+++
+### Perform a key/value lookup (point read) of an item
+
+Key/value lookups (point reads) are performed in a manner similar to a non-subpartitioned container. For example, assume we have a hierarchical partition key composed of **TenantId -> UserId -> SessionId**. The unique identifier for the item is a GUID, represented as a string, that serves as the document transaction identifier. To perform a point read on a single item, pass in the ``id`` property of the item and the full value for the partition key, including all three components of the path.
+
+#### [.NET SDK v3](#tab/net-v3)
+
+```csharp
+// Store the unique identifier
+string id = "f7da01b0-090b-41d2-8416-dacae09fbb4a";
+
+// Build the full partition key path
+PartitionKey partitionKey = new PartitionKeyBuilder()
+ .Add("Microsoft") //TenantId
+ .Add("8411f20f-be3e-416a-a3e7-dcd5a3c1f28b") //UserId
+ .Add("0000-11-0000-1111") //SessionId
+ .Build();
+
+// Perform a point read
+ItemResponse<UserSession> readResponse = await container.ReadItemAsync<UserSession>(
+ id,
+ partitionKey
+);
+```
+
+#### [Java SDK v4](#tab/java-v4)
+
+```java
+// Store the unique identifier
+String id = "f7da01b0-090b-41d2-8416-dacae09fbb4a";
+
+// Build the full partition key path
+PartitionKey partitionKey = new PartitionKeyBuilder()
+ .add("Microsoft") //TenantId
+ .add("8411f20f-be3e-416a-a3e7-dcd5a3c1f28b") //UserId
+ .add("0000-11-0000-1111") //SessionId
+ .build();
+
+// Perform a point read
+Mono<CosmosItemResponse<UserSession>> readResponse = container.readItem(id, partitionKey, UserSession.class);
+```
+++
+### Run a query
+
+The SDK code to run a query on a subpartitioned container is identical to running a query on a non-subpartitioned container.
+
+When the query specifies all values of the partition keys in the ``WHERE`` filter or a prefix of the key hierarchy, the SDK automatically routes the query to the corresponding physical partitions. Queries that provide only the "middle" of the hierarchy will be cross-partition queries.
+
+For example, assume we have a hierarchical partition key composed of **TenantId -> UserId -> SessionId**. The components of the query's filter will determine if the query is a single-partition query, a targeted cross-partition query, or a fan-out query.
+
+| Query | Routing |
+| | |
+| ``SELECT * FROM c WHERE c.TenantId = 'Microsoft' AND c.UserId = '8411f20f-be3e-416a-a3e7-dcd5a3c1f28b' AND c.SessionId = '0000-11-0000-1111'`` | Routed to the **single logical and physical partition** that contains the data for the specified values of ``TenantId``, ``UserId`` and ``SessionId``. |
+| ``SELECT * FROM c WHERE c.TenantId = 'Microsoft' AND c.UserId = '8411f20f-be3e-416a-a3e7-dcd5a3c1f28b'`` | Routed to only the **targeted subset of logical and physical partition(s)** that contain data for the specified values of ``TenantId`` and ``UserId``. This query is a targeted cross-partition query that returns data for a specific user in the tenant. |
+| ``SELECT * FROM c WHERE c.TenantId = 'Microsoft'`` | Routed to only the **targeted subset of logical and physical partition(s)** that contain data for the specified value of ``TenantId``. This query is a targeted cross-partition query that returns data for all users in a tenant. |
+| ``SELECT * FROM c WHERE c.UserId = '8411f20f-be3e-416a-a3e7-dcd5a3c1f28b'`` | Routed to **all physical partitions**, resulting in a fan-out cross-partition query. |
+| ``SELECT * FROM c WHERE c.SessionId = '0000-11-0000-1111'`` | Routed to **all physical partitions**, resulting in a fan-out cross-partition query. |
+
+#### Single-partition query on a subpartitioned container
+
+##### [.NET SDK v3](#tab/net-v3)
+
+```csharp
+// Define a single-partition query that specifies the full partition key path
+QueryDefinition query = new QueryDefinition(
+ "SELECT * FROM c WHERE c.TenantId = @tenant-id AND c.UserId = @user-id AND c.SessionId = @session-id")
+ .WithParameter("@tenant-id", "Microsoft")
+ .WithParameter("@user-id", "8411f20f-be3e-416a-a3e7-dcd5a3c1f28b")
+ .WithParameter("@session-id", "0000-11-0000-1111");
+
+// Retrieve an iterator for the result set
+using FeedIterator<UserSession> results = container.GetItemQueryIterator<UserSession>(query);
+
+while (results.HasMoreResults)
+{
+ FeedResponse<UserSession> resultsPage = await results.ReadNextAsync();
+ foreach(UserSession result in resultsPage)
+ {
+ // Process result
+ }
+}
+```
+
+##### [Java SDK v4](#tab/java-v4)
+
+```java
+// Define a single-partition query that specifies the full partition key path
+String query = String.format(
+ "SELECT * FROM c WHERE c.TenantId = '%s' AND c.UserId = '%s' AND c.SessionId = '%s'",
+ "Microsoft",
+ "8411f20f-be3e-416a-a3e7-dcd5a3c1f28b",
+ "0000-11-0000-1111"
+);
+
+// Retrieve an iterator for the result set
+CosmosQueryRequestOptions options = new CosmosQueryRequestOptions();
+CosmosPagedFlux<UserSession> pagedResponse = container.queryItems(
+ query, options, UserSession.class);
+
+pagedResponse.byPage().flatMap(fluxResponse -> {
+ for (UserSession result : fluxResponse.getResults()) {
+ // Process result
+ }
+ return Flux.empty();
+}).blockLast();
+```
+++
+#### Targeted multi-partition query on a subpartitioned container
+
+##### [.NET SDK v3](#tab/net-v3)
+
+```csharp
+// Define a targeted cross-partition query specifying prefix path[s]
+QueryDefinition query = new QueryDefinition(
+ "SELECT * FROM c WHERE c.TenantId = @tenant-id")
+ .WithParameter("@tenant-id", "Microsoft")
+
+// Retrieve an iterator for the result set
+using FeedIterator<UserSession> results = container.GetItemQueryIterator<UserSession>(query);
+
+while (results.HasMoreResults)
+{
+ FeedResponse<UserSession> resultsPage = await results.ReadNextAsync();
+ foreach(UserSession result in resultsPage)
+ {
+ // Process result
+ }
+}
+```
+
+##### [Java SDK v4](#tab/java-v4)
+
+```java
+// Define a targeted cross-partition query specifying prefix path[s]
+String query = String.format(
+ "SELECT * FROM c WHERE c.TenantId = '%s'",
+ "Microsoft"
+);
+
+// Set the query options and retrieve an iterator for the result set
+CosmosQueryRequestOptions options = new CosmosQueryRequestOptions();
+CosmosPagedFlux<UserSession> pagedResponse = container.queryItems(
+    query, options, UserSession.class);
+
+pagedResponse.byPage().flatMap(fluxResponse -> {
+    for (UserSession result : fluxResponse.getResults()) {
+        // Process result
+    }
+    return Flux.empty();
+}).blockLast();
+```
+++
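+
+Point reads and writes that supply a value for every level of the hierarchical partition key are also routed to a single logical partition. The following is a minimal sketch using the `PartitionKeyBuilder` type from the preview .NET v3 SDK; the item ID value and the `UserSession` type are placeholders for illustration:
+
+```csharp
+// Build the full hierarchical partition key: TenantId -> UserId -> SessionId
+PartitionKey partitionKey = new PartitionKeyBuilder()
+    .Add("Microsoft")
+    .Add("8411f20f-be3e-416a-a3e7-dcd5a3c1f28b")
+    .Add("0000-11-0000-1111")
+    .Build();
+
+// Point read of a single item by ID and full partition key (a single-partition operation)
+ItemResponse<UserSession> readResponse = await container.ReadItemAsync<UserSession>(
+    "e5a1f1c2-0000-0000-0000-000000000000", // placeholder item ID
+    partitionKey);
+```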
+## Using Azure Resource Manager templates
+
+The Azure Resource Manager template for a subpartitioned container is nearly identical to the template for a standard container; the only key difference is the value of the ``properties/partitionKey`` object. For more information about creating an Azure Resource Manager template for an Azure Cosmos DB resource, see [the Azure Resource Manager template reference for Azure Cosmos DB](/azure/templates/microsoft.documentdb/databaseaccounts).
+
+Configure the ``partitionKey`` object with the following values to create a subpartitioned container.
+
+| Path | Value |
+| | |
+| **paths** | List of hierarchical partition keys (max three levels of depth) |
+| **kind** | ``MultiHash`` |
+| **version** | ``2`` |
+
+### Example partition key definition
+
+For example, assume we have a hierarchical partition key composed of **TenantId -> UserId -> SessionId**. The ``partitionKey`` object would be configured to include all three paths in the **paths** property (each prefixed with ``/``), a **kind** value of ``MultiHash``, and a **version** value of ``2``.
+
+#### [Bicep](#tab/bicep)
+
+```bicep
+partitionKey: {
+  paths: [
+    '/TenantId'
+    '/UserId'
+    '/SessionId'
+  ]
+  kind: 'MultiHash'
+  version: 2
+}
+```
+
+#### [JSON](#tab/arm-json)
+
+```json
+"partitionKey": {
+ "paths": [
+ "TenantId",
+ "UserId",
+ "SessionId"
+ ],
+ "kind": "MultiHash",
+ "version": 2
+}
+```
+++
+For more information about the ``partitionKey`` object, see [ContainerPartitionKey specification](/azure/templates/microsoft.documentdb/databaseaccounts/sqldatabases/containers#containerpartitionkey).
+
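+For comparison, the same three-level partition key can also be defined from application code. The following is a minimal sketch with the preview .NET v3 SDK, assuming its support for passing a list of partition key paths to ``ContainerProperties``; the database and container names are placeholders:
+
+```csharp
+// Assumes an existing CosmosClient named "client" built with the preview SDK
+Database database = await client.CreateDatabaseIfNotExistsAsync("PaymentsDb");
+
+// Define the hierarchical partition key: TenantId -> UserId -> SessionId
+List<string> partitionKeyPaths = new List<string> { "/TenantId", "/UserId", "/SessionId" };
+ContainerProperties containerProperties = new ContainerProperties("UserSessions", partitionKeyPaths);
+
+Container container = await database.CreateContainerIfNotExistsAsync(containerProperties, throughput: 400);
+```
+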
+## Using the Azure Cosmos DB emulator
+
+You can test the subpartitioning feature using the latest version of the local emulator for Azure Cosmos DB. To enable subpartitioning on the emulator, start the emulator from the installation directory with the ``/EnablePreview`` flag.
+
+```powershell
+.\CosmosDB.Emulator.exe /EnablePreview
+```
+
+For more information, see [Azure Cosmos DB emulator](/azure/cosmos-db/local-emulator).
+
+## Limitations and known issues
+
+* Working with containers that use hierarchical partition keys is supported only in the preview versions of the .NET v3 and Java v4 SDKs. You must use a supported SDK to create new containers with hierarchical partition keys and to perform CRUD/query operations on the data. Support for other SDK languages (Python, JavaScript) is planned and not yet available.
+* Passing in a partition key in ``QueryRequestOptions`` isn't currently supported when issuing queries from the SDK. You must specify the partition key paths in the query text itself.
+* Azure portal support is planned and not yet available.
+* Support for automation platforms (Azure PowerShell, Azure CLI) is planned and not yet available.
+* In the Data Explorer in the portal, you currently can't view documents in a container with hierarchical partition keys. You can read or edit these documents with the supported .NET v3 or Java v4 SDK versions.
+* You can only specify hierarchical partition keys up to three layers in depth.
+* Hierarchical partition keys can currently only be enabled on new containers. The desired partition key paths must be specified at the time of container creation and can't be changed later.
+* Hierarchical partition keys are currently supported only for SQL API accounts (API for MongoDB and Cassandra API aren't currently supported).
+
+## Next steps
+
+* See the FAQ on [hierarchical partition keys.](hierarchical-partition-keys-faq.yml)
+* Learn more about [partitioning in Azure Cosmos DB.](partitioning-overview.md)
+* Learn more about [using Azure Resource Manager templates with Azure Cosmos DB.](/azure/templates/microsoft.documentdb/databaseaccounts)
cosmos-db How To Container Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-container-copy.md
+
+ Title: Create and manage intra-account container copy jobs in Azure Cosmos DB
+description: Learn how to create, monitor, and manage container copy jobs within an Azure Cosmos DB account using CLI commands.
+++ Last updated : 04/18/2022+++
+# Create and manage intra-account container copy jobs in Azure Cosmos DB (Preview)
+
+[Container copy jobs](intra-account-container-copy.md) create offline copies of collections within an Azure Cosmos DB account.
+
+This article describes how to create, monitor, and manage intra-account container copy jobs using Azure CLI commands.
+
+## Set shell variables
+
+First, set all of the variables that each individual script will use.
+
+```azurecli-interactive
+accountName="<cosmos-account-name>"
+resourceGroup="<resource-group-name>"
+jobName="<job-name>"
+sourceDatabase="<source-database-name>"
+sourceContainer="<source-container-name>"
+destinationDatabase="<destination-database-name>"
+destinationContainer="<destination-container-name>"
+sourceKeySpace="<source-keyspace-name>"
+sourceTable="<source-table-name>"
+destinationKeySpace="<destination-keyspace-name>"
+destinationTable="<destination-table-name>"
+```
+
+## Create an intra-account container copy job for a SQL API account
+
+Create a job to copy a container within an Azure Cosmos DB SQL API account:
+
+```azurecli-interactive
+az cosmosdb dts copy \
+ --resource-group $resourceGroup \
+ --job-name $jobName \
+ --account-name $accountName \
+ --source-sql-container database=$sourceDatabase container=$sourceContainer \
+ --dest-sql-container database=$destinationDatabase container=$destinationContainer
+```
+
+## Create an intra-account container copy job for a Cassandra API account
+
+Create a job to copy a container within an Azure Cosmos DB Cassandra API account:
+
+```azurecli-interactive
+az cosmosdb dts copy \
+ --resource-group $resourceGroup \
+ --job-name $jobName \
+ --account-name $accountName \
+ --source-cassandra-table keyspace=$sourceKeySpace table=$sourceTable \
+ --dest-cassandra-table keyspace=$destinationKeySpace table=$destinationTable
+```
+
+## Monitor the progress of a container copy job
+
+View the progress and status of a copy job:
+
+```azurecli-interactive
+az cosmosdb dts show \
+ --account-name $accountName \
+ --resource-group $resourceGroup \
+ --job-name $jobName
+```
+
+## List all the container copy jobs created in an account
+
+To list all the container copy jobs created in an account:
+
+```azurecli-interactive
+az cosmosdb dts list \
+ --account-name $accountName \
+ --resource-group $resourceGroup
+```
+
+## Pause a container copy job
+
+To pause an ongoing container copy job, use the following command:
+
+```azurecli-interactive
+az cosmosdb dts pause \
+ --account-name $accountName \
+ --resource-group $resourceGroup \
+ --job-name $jobName
+```
+
+## Resume a container copy job
+
+To resume a paused container copy job, use the following command:
+
+```azurecli-interactive
+az cosmosdb dts resume \
+ --account-name $accountName \
+ --resource-group $resourceGroup \
+ --job-name $jobName
+```
+
+## Next steps
+
+- For more information about intra-account container copy jobs, see [Container copy jobs](intra-account-container-copy.md).
cosmos-db Import Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/import-data.md
This tutorial provides instructions on using the Azure Cosmos DB Data Migration
> The Azure Cosmos DB Data Migration tool is an open source tool designed for small migrations. For larger migrations, view our [guide for ingesting data](cosmosdb-migrationchoices.md). * **[SQL API](./introduction.md)** - You can use any of the source options provided in the Data Migration tool to import data at a small scale. [Learn about migration options for importing data at a large scale](cosmosdb-migrationchoices.md).
-* **[Table API](table/introduction.md)** - You can use the Data Migration tool or [AzCopy](table/table-import.md#migrate-data-by-using-azcopy) to import data. For more information, see [Import data for use with the Azure Cosmos DB Table API](table/table-import.md).
+* **[Table API](table/introduction.md)** - You can use the Data Migration tool to import data. For more information, see [Import data for use with the Azure Cosmos DB Table API](table/table-import.md).
* **[Azure Cosmos DB's API for MongoDB](mongodb/mongodb-introduction.md)** - The Data Migration tool doesn't support Azure Cosmos DB's API for MongoDB either as a source or as a target. If you want to migrate the data in or out of collections in Azure Cosmos DB, refer to [How to migrate MongoDB data to a Cosmos database with Azure Cosmos DB's API for MongoDB](../dms/tutorial-mongodb-cosmos-db.md?toc=%2fazure%2fcosmos-db%2ftoc.json%253ftoc%253d%2fazure%2fcosmos-db%2ftoc.json) for instructions. You can still use the Data Migration tool to export data from MongoDB to Azure Cosmos DB SQL API collections for use with the SQL API. * **[Cassandra API](graph-introduction.md)** - The Data Migration tool isn't a supported import tool for Cassandra API accounts. [Learn about migration options for importing data into Cassandra API](cosmosdb-migrationchoices.md#azure-cosmos-db-cassandra-api) * **[Gremlin API](graph-introduction.md)** - The Data Migration tool isn't a supported import tool for Gremlin API accounts at this time. [Learn about migration options for importing data into Gremlin API](cosmosdb-migrationchoices.md#other-apis)
While the import tool includes a graphical user interface (dtui.exe), it can als
### Build from source
- The migration tool source code is available on GitHub in [this repository](https://github.com/azure/azure-documentdb-datamigrationtool). You can download and compile the solution locally then run either:
+ The migration tool source code is available on GitHub in [this repository](https://github.com/Azure/azure-documentdb-datamigrationtool/tree/archive). You can download and compile the solution locally then run either:
* **Dtui.exe**: Graphical interface version of the tool * **Dt.exe**: Command-line version of the tool
cosmos-db Intra Account Container Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/intra-account-container-copy.md
+
+ Title: Intra-account container copy jobs in Azure Cosmos DB
+description: Learn about container data copy capability within an Azure Cosmos DB account.
+++ Last updated : 04/18/2022+++
+# Intra-account container copy jobs in Azure Cosmos DB (Preview)
+
+You can perform offline container copy within an Azure Cosmos DB account using container copy jobs.
+
+You may need to copy data within your Azure Cosmos DB account if you want to achieve any of these scenarios:
+
+* Copy all items from one container to another.
+* Change the [granularity at which throughput is provisioned - from database to container](set-throughput.md) and vice-versa.
+* Change the [partition key](partitioning-overview.md#choose-partitionkey) of a container.
+* Update the [unique keys](unique-keys.md) for a container.
+* Rename a container/database.
+* Adopt new features that are only supported on new containers.
+
+Intra-account container copy jobs can currently be [created and managed by using CLI commands](how-to-container-copy.md).
+
+## Get started
+
+To get started using container copy jobs, enroll in the preview by filing a support ticket in the [Azure portal](https://portal.azure.com).
+
+## How does intra-account container copy work?
+
+Intra-account container copy jobs perform offline data copy using the source container's incremental change feed log.
+
+* Within the platform, we allocate two 4-vCPU 16-GB memory server-side compute instances per Azure Cosmos DB account by default.
+* The instances are allocated when one or more container copy jobs are created within the account.
+* The container copy jobs run on these instances.
+* The instances are shared by all the container copy jobs running within the same account.
+* The platform may deallocate the instances if they're idle for longer than 15 minutes.
+
+> [!NOTE]
+> We currently support only offline container copy. We strongly recommend that you stop performing any operations on the source container before beginning the container copy.
+> Item deletions and updates made on the source container after the copy job begins may not be captured. Continuing to perform operations on the source container while the copy job is in progress may result in missing data on the target container.
+
+## Overview of steps needed to do a container copy
+
+1. Stop the operations on the source container by pausing the application instances or any clients connecting to it.
+2. [Create the container copy job](how-to-container-copy.md).
+3. [Monitor the progress of the container copy job](how-to-container-copy.md#monitor-the-progress-of-a-container-copy-job) and wait until it's completed.
+4. Resume the operations by pointing the application or client to the source or the target container, as intended.
+
+## Factors affecting the rate of container copy job
+
+The rate of container copy job progress is determined by these factors:
+
+* Source container/database throughput setting.
+
+* Target container/database throughput setting.
+
+* Server-side compute instances allocated to the Azure Cosmos DB account for performing the data transfer.
+
+ > [!IMPORTANT]
+ > The default SKU offers two 4-vCPU 16-GB server-side instances per account. You may opt to sign up for [larger SKUs](#large-skus-preview) in preview.
+
+## FAQs
+
+### Is there an SLA for the container copy jobs?
+
+Container copy jobs are currently supported on a best-effort basis. We don't provide any SLA guarantees for the time taken to complete these jobs.
+
+### Can I create multiple container copy jobs within an account?
+
+Yes, you can create multiple jobs within the same account. The jobs will run consecutively. You can [list all the jobs](how-to-container-copy.md#list-all-the-container-copy-jobs-created-in-an-account) created within an account and monitor their progress.
+
+### Can I copy an entire database within the Azure Cosmos DB account?
+
+To copy an entire database, you'll have to create a container copy job for each collection in the database.
+
+### I have an Azure Cosmos DB account with multiple regions. In which region will the container copy job run?
+
+The container copy job runs in the write region. If the account is configured with multi-region writes, the job runs in one of the write regions.
+
+### What happens to the container copy jobs when the account's write region changes?
+
+The account's write region may change in the rare scenario of a region outage or due to a manual failover. In such a scenario, incomplete container copy jobs created within the account will fail. You would need to recreate those jobs; the recreated jobs will then run against the new (current) write region.
+
+## Large SKUs preview
+
+If you want to run the container copy jobs faster, you can do so by adjusting one of the [factors that affect the rate of the copy job](#factors-affecting-the-rate-of-container-copy-job). To adjust the configuration of the server-side compute instances, sign up for the "Large SKU support for container copy" preview.
+
+This preview allows you to choose a larger SKU size for the server-side instances. Larger SKU sizes are billed at a higher rate. You can also choose a node count of up to five for these instances.
+
+## Next steps
+
+- Learn [how to create, monitor, and manage container copy jobs within an Azure Cosmos DB account by using CLI commands](how-to-container-copy.md).
cosmos-db Linux Emulator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/linux-emulator.md
Title: Run the Azure Cosmos DB Emulator on Docker for Linux description: Learn how to run and use the Azure Cosmos DB Linux Emulator on Linux, and macOS. Using the emulator you can develop and test your application locally for free, without an Azure subscription. + Previously updated : 06/04/2021+ Last updated : 05/09/2022 # Run the emulator on Docker for Linux (Preview)
-The Azure Cosmos DB Linux Emulator provides a local environment that emulates the Azure Cosmos DB service for development purposes. Currently, the Linux emulator only supports SQL API. Using the Azure Cosmos DB Emulator, you can develop and test your application locally, without creating an Azure subscription or incurring any costs. When you're satisfied with how your application is working in the Azure Cosmos DB Linux Emulator, you can switch to using an Azure Cosmos DB account in the cloud. This article describes how to install and use the emulator on macOS and Linux environments.
+The Azure Cosmos DB Linux Emulator provides a local environment that emulates the Azure Cosmos DB service for development purposes. Currently, the Linux emulator only supports SQL API and MongoDB API. Using the Azure Cosmos DB Emulator, you can develop and test your application locally, without creating an Azure subscription or incurring any costs. When you're satisfied with how your application is working in the Azure Cosmos DB Linux Emulator, you can switch to using an Azure Cosmos DB account in the cloud. This article describes how to install and use the emulator on macOS and Linux environments.
> [!NOTE]
-> The Cosmos DB Linux Emulator is currently in preview mode and supports only the SQL API. Users may experience slight performance degradations in terms of the number of requests per second processed by the emulator when compared to the Windows version. The default number of physical partitions which directly impacts the number of containers that can be provisioned is 10.
+> The Cosmos DB Linux Emulator is currently in preview mode and supports only the SQL and MongoDB APIs. Users may experience slight performance degradations in terms of the number of requests per second processed by the emulator when compared to the Windows version. The default number of physical partitions which directly impacts the number of containers that can be provisioned is 10.
> > We do not recommend use of the emulator (Preview) in production. For heavier workloads, use our [Windows emulator](local-emulator.md). ## How does the emulator work?
-The Azure Cosmos DB Linux Emulator provides a high-fidelity emulation of the Azure Cosmos DB service. It supports equivalent functionality as the Azure Cosmos DB, which includes creating data, querying data, provisioning and scaling containers, and executing stored procedures and triggers. You can develop and test applications using the Azure Cosmos DB Linux Emulator, and deploy them to Azure at global scale by updating the Azure Cosmos DB connection endpoint.
+The Azure Cosmos DB Linux Emulator provides a high-fidelity emulation of the Azure Cosmos DB service. It supports functionality equivalent to the cloud service, including creating data, querying data, provisioning and scaling containers, and executing stored procedures and triggers. You can develop and test applications using the Azure Cosmos DB Linux Emulator, and then deploy them to Azure at global scale by updating the connection endpoint from the emulator to a live account.
-Functionality that relies on the Azure infrastructure like global replication, single-digit millisecond latency for reads/writes, and tunable consistency levels are not applicable when you use the emulator.
+Functionality that relies on the Azure infrastructure like global replication, single-digit millisecond latency for reads/writes, and tunable consistency levels aren't applicable when you use the emulator.
## Differences between the Linux Emulator and the cloud service+ Since the Azure Cosmos DB Emulator provides an emulated environment that runs on the local developer workstation, there are some differences in functionality between the emulator and an Azure Cosmos account in the cloud: -- Currently, the **Data Explorer** pane in the emulator fully supports SQL API clients only.
+- Currently, the **Data Explorer** pane in the emulator fully supports SQL and MongoDB API clients only.
- With the Linux emulator, you can create an Azure Cosmos account in [provisioned throughput](set-throughput.md) mode only; currently it doesn't support [serverless](serverless.md) mode. -- The Linux emulator is not a scalable service and it doesn't support a large number of containers. When using the Azure Cosmos DB Emulator, by default, you can create up to 10 fixed size containers at 400 RU/s (only supported using Azure Cosmos DB SDKs), or 5 unlimited containers. For more information on how to change this value, see [Set the PartitionCount value](emulator-command-line-parameters.md#set-partitioncount) article.
+- The Linux emulator isn't a scalable service and it doesn't support a large number of containers. When using the Azure Cosmos DB Emulator, by default, you can create up to 10 fixed size containers at 400 RU/s (only supported using Azure Cosmos DB SDKs), or 5 unlimited containers. For more information on how to change this value, see [Set the PartitionCount value](emulator-command-line-parameters.md#set-partitioncount) article.
- While [consistency levels](consistency-levels.md) can be adjusted using command-line arguments for testing scenarios only (default setting is Session), a user might not expect the same behavior as in the cloud service. For instance, Strong and Bounded staleness consistency has no effect on the emulator, other than signaling to the Cosmos DB SDK the default consistency of the account. -- The Linux emulator does not offer [multi-region replication](distribute-data-globally.md).
+- The Linux emulator doesn't offer [multi-region replication](distribute-data-globally.md).
-- Because the copy of your Azure Cosmos DB Linux Emulator might not always be up to date with the most recent changes in the Azure Cosmos DB service, you should always refer to the [Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md) to accurately estimate the throughput (RUs) needs of your application.
+- Your Azure Cosmos DB Linux Emulator might not always be up to date with the most recent changes in the Azure Cosmos DB service. You should always refer to the [Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md) to accurately estimate the throughput (RUs) needs of your application.
- The Linux emulator supports a maximum ID property size of 254 characters.
-## <a id="run-on-macos"></a>Run the Linux Emulator on macOS
+## Run the Linux Emulator on macOS
> [!NOTE] > The emulator only supports MacBooks with Intel processors.
To get started, visit the Docker Hub and install [Docker Desktop for macOS](http
[!INCLUDE[linux-emulator-instructions](includes/linux-emulator-instructions.md)]
-## <a id="install-certificate"></a>Install the certificate
+## Install the certificate
1. After the emulator is running, using a different terminal, load the IP address of your local machine into a variable.
To get started, visit the Docker Hub and install [Docker Desktop for macOS](http
```
-## <a id="consume-endpoint-ui"></a>Consume the endpoint via UI
+## Consume the endpoint via UI
The emulator is using a self-signed certificate to secure the connectivity to its endpoint and needs to be manually trusted. Use the following steps to consume the endpoint via the UI using your desired web browser:
The emulator is using a self-signed certificate to secure the connectivity to it
1. You can now browse to `https://localhost:8081/_explorer/https://docsupdatetracker.net/index.html` or `https://{your_local_ip}:8081/_explorer/https://docsupdatetracker.net/index.html` and retrieve the connection string to the emulator.
-Optionally, you can disable SSL validation on your application. This is only recommended for development purposes and should not be done when running in a production environment.
+Optionally, you can disable TLS/SSL validation on your application. Disabling validation is only recommended for development purposes and shouldn't be done when running in a production environment.
-## <a id="run-on-linux"></a>Run the Linux Emulator on Linux OS
+## Run the Linux Emulator on Linux OS
To get started, use the `apt` package and install the latest version of Docker.
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io ```
-If you are using Windows Subsystem for Linux (WSL), run the following command to get `ifconfig`:
+If you're using Windows Subsystem for Linux (WSL), run the following command to get `ifconfig`:
```bash sudo apt-get install net-tools
Use the following steps to run the emulator on Linux:
curl -k https://$ipaddr:8081/_explorer/emulator.pem > ~/emulatorcert.crt ```
-6. Copy the CRT file to the folder that contains custom certificates in your Linux distribution. Commonly on Debian distributions, it is located on `/usr/local/share/ca-certificates/`.
+6. Copy the CRT file to the folder that contains custom certificates in your Linux distribution. Commonly on Debian distributions, it's located on `/usr/local/share/ca-certificates/`.
```bash cp ~/emulatorcert.crt /usr/local/share/ca-certificates/
Use the following steps to run the emulator on Linux:
java -ea -Djavax.net.ssl.trustStore=~/cacerts -Djavax.net.ssl.trustStorePassword="changeit" $APPLICATION_ARGUMENTS ```
-## <a id="config-options"></a>Configuration options
+## Configuration options
|Name |Default |Description | |||| | Ports: `-p` | | Currently, only ports 8081 and 10251-10255 are needed by the emulator endpoint. | | `AZURE_COSMOS_EMULATOR_PARTITION_COUNT` | 10 | Controls the total number of physical partitions, which in return controls the number of containers that can be created and can exist at a given point in time. We recommend starting small to improve the emulator start up time, i.e 3. | | Memory: `-m` | | On memory, 3 GB or more is required. |
-| Cores: `--cpus` | | Make sure to provision enough memory and CPU cores; while the emulator might run with as little as 0.5 cores (very slow though) at least 2 cores are recommended. |
+| Cores: `--cpus` | | Make sure to allocate enough memory and CPU cores; while the emulator can run with as little as 0.5 cores, at least two cores are recommended. |
|`AZURE_COSMOS_EMULATOR_ENABLE_DATA_PERSISTENCE` | false | This setting used by itself will help persist the data between container restarts. |
+|`AZURE_COSMOS_EMULATOR_ENABLE_MONGODB_ENDPOINT` | | This setting enables the MongoDB API endpoint for the emulator and configures the MongoDB server version. (Valid server version values include ``3.2``, ``3.6``, and ``4.0``) |
-## <a id="troubleshoot-issues"></a>Troubleshoot issues
+## Troubleshoot issues
This section provides tips to troubleshoot errors when using the Linux emulator. ### Connectivity issues
-#### My app can't connect to emulator endpoint ("The SSL connection could not be established") or I can't start the Data Explorer
+#### My app can't connect to emulator endpoint ("The TLS/SSL connection couldn't be established") or I can't start the Data Explorer
- Ensure the emulator is running with the following command:
This section provides tips to troubleshoot errors when using the Linux emulator.
- Try to access the endpoint and port for the emulator using the Docker container's IP address instead of "localhost". -- Make sure that the emulator self-signed certificate has been properly added to [KeyChain](#consume-endpoint-ui).
+- Make sure that the emulator self-signed certificate has been properly added to [KeyChain](#consume-the-endpoint-via-ui).
-- For Java applications, make sure you imported the certificate to the [Java Certificates Store section](#run-on-linux).
+- For Java applications, make sure you imported the certificate to the [Java Certificates Store section](#run-the-linux-emulator-on-linux-os).
-- For .NET applications you can disable SSL validation:
+- For .NET applications you can disable TLS/SSL validation:
# [.NET Standard 2.1+](#tab/ssl-netstd21)
-For any application running in a framework compatible with .NET Standard 2.1 or later, we can leverage the `CosmosClientOptions.HttpClientFactory`:
+For any application running in a framework compatible with .NET Standard 2.1 or later, we can use `CosmosClientOptions.HttpClientFactory`:
[!code-csharp[Main](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/HttpClientFactory/Program.cs?name=DisableSSLNETStandard21)] # [.NET Standard 2.0](#tab/ssl-netstd20)
-For any application running in a framework compatible with .NET Standard 2.0, we can leverage the `CosmosClientOptions.HttpClientFactory`:
+For any application running in a framework compatible with .NET Standard 2.0, we can use `CosmosClientOptions.HttpClientFactory`:
[!code-csharp[Main](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/HttpClientFactory/Program.cs?name=DisableSSLNETStandard20)]
The emulator errors out with the following message:
3. Sysadmin deliberately sets the system to run on legacy VA layout mode by adjusting a sysctl knob vm.legacy_va_layout. ```
-This error is likely because the current Docker Host processor type is incompatible with our Docker image; that is, the computer is a MacBook with a M1 chipset.
+This error is likely because the current Docker host processor type is incompatible with our Docker image. For example, the computer might use a chipset or processor architecture that the image doesn't support.
#### My app received too many connectivity-related timeouts -- The Docker container is not provisioned with enough resources [(cores or memory)](#config-options). We recommend increasing the number of cores and alternatively, reduce the number of physical partitions provisioned upon startup.
+- The Docker container isn't provisioned with enough resources [(cores or memory)](#configuration-options). We recommend increasing the number of cores and alternatively, reduce the number of physical partitions provisioned upon startup.
-- Ensure the number of TCP connections does not exceed your current OS settings.
+- Ensure the number of TCP connections doesn't exceed your current OS settings.
- Try reducing the size of the documents in your application.
-#### My app could not provision databases/containers
+#### My app couldn't create databases or containers
-The number of physical partitions provisioned on the emulator is too low. Either delete your unused databases/collections or start the emulator with a [larger number of physical partitions](#config-options).
+The number of physical partitions provisioned on the emulator is too low. Either delete your unused databases/collections or start the emulator with a [larger number of physical partitions](#configuration-options).
### Reliability and crashes - The emulator fails to start:
- - Make sure you are [running the latest image of the Cosmos DB emulator for Linux](#refresh-linux-container). Otherwise, see the section above regarding connectivity-related issues.
+ - Make sure you're [running the latest image of the Cosmos DB emulator for Linux](#refresh-linux-container). Otherwise, see the section above regarding connectivity-related issues.
- If the Cosmos DB emulator data folder is "volume mounted", ensure that the volume has enough space and is read/write.
- - Confirm that creating a container with the recommended settings works. If yes, most likely the cause of failure was the additional settings passed via the respective Docker command upon starting the container.
+ - Confirm that creating a container with the recommended settings works. If yes, most likely the cause of failure was the extra settings passed via the respective Docker command upon starting the container.
- If the emulator fails to start with the following error:
The number of physical partitions provisioned on the emulator is too low. Either
"Failed loading Emulator secrets certificate. Error: 0x8009000f or similar, a new policy might have been added to your host that prevents an application such as Azure Cosmos DB Emulator from creating and adding self signed certificate files into your certificate store." ```
- This can be the case even when you run in Administrator context, since the specific policy usually added by your IT department takes priority over the local Administrator. Using a Docker image for the emulator instead might help in this case, as long as you still have the permission to add the self-signed emulator SSL certificate into your host machine context (this is required by Java and .NET Cosmos SDK client application).
+ This failure can occur even when you run in Administrator context, since the specific policy usually added by your IT department takes priority over the local Administrator. Using a Docker image for the emulator instead might help in this case. The image can help as long as you still have the permission to add the self-signed emulator TLS/SSL certificate into your host machine context. The self-signed certificate is required by Java and .NET Cosmos SDK client applications.
- The emulator is crashing:
- - Confirm that creating a container with the [recommended settings](#run-on-linux) works. If yes, most likely the cause of failure is the additional settings passed via the respective Docker command upon starting the container.
+ - Confirm that creating a container with the [recommended settings](#run-the-linux-emulator-on-linux-os) works. If yes, most likely the cause of failure is the extra settings passed via the respective Docker command upon starting the container.
- Start the emulator's Docker container in an attached mode (see `docker start -it`).
The number of physical partitions provisioned on the emulator is too low. Either
Number of requests per second is low, latency of the requests is high: -- The Docker container is not provisioned with enough resources [(cores or memory)](#config-options). We recommend increasing the number of cores and alternatively, reduce the number of physical partitions provisioned upon startup.
+- The Docker container isn't provisioned with enough resources [(cores or memory)](#configuration-options). We recommend increasing the number of cores and alternatively, reduce the number of physical partitions provisioned upon startup.
-## <a id="refresh-linux-container"></a>Refresh Linux container
+## Refresh Linux container
Use the following steps to refresh the Linux container:
Use the following steps to refresh the Linux container:
docker pull mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator ```
-1. To start a stopped container run the following:
+1. To start a stopped container, run the following command:
```bash docker start -ai ID_OF_CONTAINER
cosmos-db Merge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/merge.md
+
+ Title: Merge partitions in Azure Cosmos DB (preview)
+description: Learn more about the merge partitions capability in Azure Cosmos DB
++++++ Last updated : 05/09/2022++
+# Merge partitions in Azure Cosmos DB (preview)
+
+Merging partitions in Azure Cosmos DB (preview) allows you to reduce the number of physical partitions used for your container. With merge, containers that are fragmented in throughput (have low RU/s per partition) or storage (have low storage per partition) can have their physical partitions reworked. If a container's throughput has been scaled up and needs to be scaled back down, merge can help resolve throughput fragmentation issues. For the same amount of provisioned RU/s, having fewer physical partitions means each physical partition gets more of the overall RU/s. Minimizing partitions reduces the chance of rate limiting if a large quantity of data is removed from a container. Merge can help clear out unused or empty partitions, effectively resolving storage fragmentation problems.
+
+## Getting started
+
+To get started using merge, enroll in the preview by submitting a request for the **Azure Cosmos DB Partition Merge** feature via the [**Preview Features** page](../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page.
+- Before submitting your request, verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#preview-eligibility-criteria).
+- The Azure Cosmos DB team will review your request and contact you via email to confirm which account(s) in the subscription you want to enroll in the preview.
+
+### Merging physical partitions
+
+In PowerShell, when the flag `-WhatIf` is passed in, Azure Cosmos DB will run a simulation and return the expected result of the merge, but won't run the merge itself. When the flag isn't passed in, the merge will execute against the resource. When finished, the command will output the current amount of storage in KB per physical partition post-merge.
+> [!TIP]
+> Before running a merge, it's recommended to set your provisioned RU/s (either manual RU/s or autoscale max RU/s) as close as possible to your desired steady state RU/s post-merge, to help ensure the system calculates an efficient partition layout.
+
+#### [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+# Install the preview version of the Az.CosmosDB module
+Install-Module -Name Az.CosmosDB -AllowPrerelease -Force
+
+# SQL API
+Invoke-AzCosmosDBSqlContainerMerge `
+ -ResourceGroupName "<resource-group-name>" `
+ -AccountName "<cosmos-account-name>" `
+ -DatabaseName "<cosmos-database-name>" `
+ -Name "<cosmos-container-name>" `
+ -WhatIf
+
+# API for MongoDB
+Invoke-AzCosmosDBMongoDBCollectionMerge `
+ -ResourceGroupName "<resource-group-name>" `
+ -AccountName "<cosmos-account-name>" `
+ -DatabaseName "<cosmos-database-name>" `
+ -Name "<cosmos-collection-name>" `
+ -WhatIf
+```
+
+#### [Azure CLI](#tab/azure-cli)
+
+```azurecli
+# Add the preview extension
+az extension add --name cosmosdb-preview
+
+# SQL API
+az cosmosdb sql container merge \
+ --resource-group '<resource-group-name>' \
+ --account-name '<cosmos-account-name>' \
+ --database-name '<cosmos-database-name>' \
+ --name '<cosmos-container-name>'
+
+# API for MongoDB
+az cosmosdb mongodb collection merge \
+ --resource-group '<resource-group-name>' \
+ --account-name '<cosmos-account-name>' \
+ --database-name '<cosmos-database-name>' \
+ --name '<cosmos-collection-name>'
+```
+++
+### Monitor merge operations
+Partition merge is a long-running operation and there's no SLA on how long it takes to complete. The time depends on the amount of data in the container and the number of physical partitions. It's recommended to allow at least 5-6 hours for merge to complete.
+
+While partition merge is running on your container, it isn't possible to change the throughput or any container settings (TTL, indexing policy, unique keys, etc.). Wait until the merge operation completes before changing your container settings.
+
+You can track whether merge is still in progress by checking the **Activity Log** and filtering for the events **Merge the physical partitions of a MongoDB collection** or **Merge the physical partitions of a SQL container**.
+
+## Limitations
+
+### Preview eligibility criteria
+To enroll in the preview, your Cosmos account must meet all the following criteria:
+* Your Cosmos account uses SQL API or API for MongoDB with version >=3.6.
+* Your Cosmos account is using provisioned throughput (manual or autoscale). Merge doesn't apply to serverless accounts.
+ * Currently, merge isn't supported for shared throughput databases. You may enroll an account that has both shared throughput databases and containers with dedicated throughput (manual or autoscale).
+ * However, only the containers with dedicated throughput will be able to be merged.
+* Your Cosmos account is a single-write region account (merge isn't currently supported for multi-region write accounts).
+* Your Cosmos account doesn't use any of the following features:
+ * [Point-in-time restore](continuous-backup-restore-introduction.md)
+ * [Customer-managed keys](how-to-setup-cmk.md)
+ * [Analytical store](analytical-store-introduction.md)
+* Your Cosmos account uses bounded staleness, session, consistent prefix, or eventual consistency (merge isn't currently supported for strong consistency).
+* If you're using SQL API, your application must use the Azure Cosmos DB .NET V3 SDK, version 3.27.0 or higher. When the merge preview is enabled on your account, requests sent from non-.NET SDKs or older .NET SDK versions won't be accepted.
+ * There are no SDK or driver requirements to use the feature with API for MongoDB.
+* Your Cosmos account doesn't use any currently unsupported connectors:
+ * Azure Data Factory
+ * Azure Stream Analytics
+ * Logic Apps
+ * Azure Functions
+ * Azure Search
+
+### Account resources and configuration
+* Merge is only available for SQL API and API for MongoDB accounts. For API for MongoDB accounts, the MongoDB account version must be 3.6 or greater.
+* Merge is only available for single-region write accounts. Multi-region write account support isn't available.
+* Accounts using merge functionality can't also use these features (if these features are added to a merge-enabled account, resources in the account can no longer be merged):
+ * [Point-in-time restore](continuous-backup-restore-introduction.md)
+ * [Customer-managed keys](how-to-setup-cmk.md)
+ * [Analytical store](analytical-store-introduction.md)
+* Containers using merge functionality must have their throughput provisioned at the container level. Database-shared throughput support isn't available.
+* Merge is only available for accounts using bounded staleness, session, consistent prefix, or eventual consistency. It isn't currently supported for strong consistency.
+* After a container has been merged, it isn't possible to read the change feed with start time. Support for this feature is planned for the future.
+
+### SDK requirements (SQL API only)
+
+Accounts with the merge feature enabled are supported only when you use the latest version of the .NET v3 SDK. When the feature is enabled on your account (regardless of whether you run the merge), you must use only the supported SDK with the account. Requests sent from other SDKs or earlier versions won't be accepted. As long as you're using the supported SDK, your application can continue to run while a merge is ongoing.
+
+Find the latest version of the supported SDK:
+
+| SDK | Supported versions | Package manager link |
+| | | |
+| **.NET SDK v3** | *>= 3.27.0* | <https://www.nuget.org/packages/Microsoft.Azure.Cosmos/> |
+
+Support for other SDKs is planned for the future.
+
+> [!TIP]
+> You should ensure that your application has been updated to use a compatible SDK version prior to enrolling in the preview. If you're using the legacy .NET V2 SDK, follow the [.NET SDK v3 migration guide](sql/migrate-dotnet-v3.md).
+
+### Unsupported connectors
+
+If you enroll in the preview, the following connectors will fail.
+
+* Azure Data Factory
+* Azure Stream Analytics
+* Logic Apps
+* Azure Functions
+* Azure Search
+
+Support for these connectors is planned for the future.
+
+## Next steps
+
+* Learn more about [using Azure CLI with Azure Cosmos DB.](/cli/azure/azure-cli-reference-for-cosmos-db.md)
+* Learn more about [using Azure PowerShell with Azure Cosmos DB.](/powershell/module/az.cosmosdb/)
+* Learn more about [partitioning in Azure Cosmos DB.](partitioning-overview.md)
cosmos-db Manage With Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/manage-with-bicep.md
Previously updated : 09/13/2021 Last updated : 05/23/2022
In this article, you learn how to use Bicep to deploy and manage your Azure Cosmos DB accounts for MongoDB API, databases, and collections.
-This article shows Bicep samples for Gremlin API accounts. You can also find Bicep samples for [SQL](../sql/manage-with-bicep.md), [Cassandra](../cassandr) APIs.
+This article shows Bicep samples for MongoDB API accounts. You can also find Bicep samples for [SQL](../sql/manage-with-bicep.md), [Cassandra](../cassandr) APIs.
> [!IMPORTANT] >
To create any of the Azure Cosmos DB resources below, copy the following example
## MongoDB API with autoscale provisioned throughput
-This template will create an Azure Cosmos account for MongoDB API (3.2, 3.6 or 4.0) with two collections that share autoscale throughput at the database level.
+This template will create an Azure Cosmos account for MongoDB API (3.2, 3.6, 4.0, or 4.2) with two collections that share autoscale throughput at the database level.
:::code language="bicep" source="~/quickstart-templates/quickstarts/microsoft.documentdb/cosmosdb-mongodb-autoscale/main.bicep":::
This template will create an Azure Cosmos account for MongoDB API (3.2, 3.6 or 4
## MongoDB API with standard provisioned throughput
-Create an Azure Cosmos account for MongoDB API (3.2, 3.6 or 4.0) with two collections that share 400 RU/s standard (manual) throughput at the database level.
+Create an Azure Cosmos account for MongoDB API (3.2, 3.6, 4.0, or 4.2) with two collections that share 400 RU/s standard (manual) throughput at the database level.
:::code language="bicep" source="~/quickstart-templates/quickstarts/microsoft.documentdb/cosmosdb-mongodb/main.bicep":::
cosmos-db Resource Manager Template Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/resource-manager-template-samples.md
Previously updated : 08/26/2021 Last updated : 05/23/2022 # Manage Azure Cosmos DB MongoDB API resources using Azure Resource Manager templates+ [!INCLUDE[appliesto-mongodb-api](../includes/appliesto-mongodb-api.md)] In this article, you learn how to use Azure Resource Manager templates to help deploy and manage your Azure Cosmos DB accounts for MongoDB API, databases, and collections.
To create any of the Azure Cosmos DB resources below, copy the following example
## Azure Cosmos account for MongoDB with autoscale provisioned throughput
-This template will create an Azure Cosmos account for MongoDB API (3.2 or 3.6) with two collections that share autoscale throughput at the database level. This template is also available for one-click deploy from Azure Quickstart Templates Gallery.
+This template will create an Azure Cosmos DB account for MongoDB API (3.2, 3.6, 4.0 and 4.2) with two collections that share autoscale throughput at the database level. This template is also available for one-click deploy from Azure Quickstart Templates Gallery.
[:::image type="content" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.documentdb%2Fcosmosdb-mongodb-autoscale%2Fazuredeploy.json)
This template will create an Azure Cosmos account for MongoDB API (3.2 or 3.6) w
## Azure Cosmos account for MongoDB with standard provisioned throughput
-This template will create an Azure Cosmos account for MongoDB API (3.2 or 3.6) with two collections that share 400 RU/s standard (manual) throughput at the database level. This template is also available for one-click deploy from Azure Quickstart Templates Gallery.
+This template will create an Azure Cosmos DB account for MongoDB API (3.2, 3.6, 4.0 and 4.2) with two collections that share 400 RU/s standard (manual) throughput at the database level. This template is also available for one-click deploy from Azure Quickstart Templates Gallery.
[:::image type="content" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.documentdb%2Fcosmosdb-mongodb%2Fazuredeploy.json)
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/serverless.md
Title: Consumption-based serverless offer in Azure Cosmos DB description: Learn more about Azure Cosmos DB's consumption-based serverless offer.--++ + Previously updated : 05/25/2021 Last updated : 05/09/2022+ # Azure Cosmos DB serverless [!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)]
-Azure Cosmos DB serverless lets you use your Azure Cosmos account in a consumption-based fashion where you are only charged for the Request Units consumed by your database operations and the storage consumed by your data. Serverless containers can serve thousands of requests per second with no minimum charge and no capacity planning required.
+The Azure Cosmos DB serverless offering lets you use your Azure Cosmos account in a consumption-based fashion. With serverless, you're only charged for the Request Units (RUs) consumed by your database operations and the storage consumed by your data. Serverless containers can serve thousands of requests per second with no minimum charge and no capacity planning required.
> [!IMPORTANT] > Do you have any feedback about serverless? We want to hear it! Feel free to drop a message to the Azure Cosmos DB serverless team: [azurecosmosdbserverless@service.microsoft.com](mailto:azurecosmosdbserverless@service.microsoft.com).
-When using Azure Cosmos DB, every database operation has a cost expressed in [Request Units](request-units.md). How you are charged for this cost depends on the type of Azure Cosmos account you are using:
+Every database operation in Azure Cosmos DB has a cost expressed in [Request Units (RUs)](request-units.md). How you're charged for this cost depends on the type of Azure Cosmos account you're using:
-- In [provisioned throughput](set-throughput.md) mode, you have to commit to a certain amount of throughput (expressed in Request Units per second) that is provisioned on your databases and containers. The cost of your database operations is then deducted from the number of Request Units available every second. At the end of your billing period, you get billed for the amount of throughput you have provisioned.-- In serverless mode, you don't have to provision any throughput when creating containers in your Azure Cosmos account. At the end of your billing period, you get billed for the number of Request Units that were consumed by your database operations.
+- In [provisioned throughput](set-throughput.md) mode, you have to commit to a certain amount of throughput (expressed in Request Units per second or RU/s) that is provisioned on your databases and containers. The cost of your database operations is then deducted from the number of Request Units available every second. At the end of your billing period, you get billed for the amount of throughput you've provisioned.
+- In serverless mode, you don't have to configure provisioned throughput when creating containers in your Azure Cosmos account. At the end of your billing period, you get billed for the number of Request Units that were consumed by your database operations.
## Use-cases
Azure Cosmos DB serverless best fits scenarios where you expect **intermittent a
- Developing, testing, prototyping and running in production new applications where the traffic pattern is unknown - Integrating with serverless compute services like [Azure Functions](../azure-functions/functions-overview.md)
-See the [how to choose between provisioned throughput and serverless](throughput-serverless.md) article for more guidance on how to choose the offer that best fits your use-case.
+For more information, see [choosing between provisioned throughput and serverless](throughput-serverless.md).
## Using serverless resources
-Serverless is a new Azure Cosmos account type, which means that you have to choose between **provisioned throughput** and **serverless** when creating a new account. You must create a new serverless account to get started with serverless. Migrating existing accounts to/from serverless mode is not currently supported.
+Serverless is a new Azure Cosmos account type, which means that you have to choose between **provisioned throughput** and **serverless** when creating a new account. You must create a new serverless account to get started with serverless. Migrating existing accounts to/from serverless mode isn't currently supported.
Any container that is created in a serverless account is a serverless container. Serverless containers expose the same capabilities as containers created in provisioned throughput mode, so you read, write and query your data the exact same way. However serverless accounts and containers also have specific characteristics: -- A serverless account can only run in a single Azure region. It is not possible to add additional Azure regions to a serverless account after you create it.-- Provisioning throughput is not required on serverless containers, so the following statements are applicable:
+- A serverless account can only run in a single Azure region. It isn't possible to add more Azure regions to a serverless account after you create it.
+- Provisioning throughput isn't required on serverless containers, so the following statements are applicable:
- You can't pass any throughput when creating a serverless container and doing so returns an error. - You can't read or update the throughput on a serverless container and doing so returns an error. - You can't create a shared throughput database in a serverless account and doing so returns an error. - Serverless containers can store a maximum of 50 GB of data and indexes.
+> [!NOTE]
+> Serverless containers up to 1 TB are currently in preview with Azure Cosmos DB. To try the new feature, register the *"Azure Cosmos DB Serverless 1 TB Container Preview"* [preview feature in your Azure subscription](/azure/azure-resource-manager/management/preview-features).
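+
+As an illustration of the throughput rules in the list above, here's a minimal sketch of creating a serverless container with the .NET SDK; the connection string, database, container, and partition key path are placeholders. No throughput value is passed:
+
+```csharp
+CosmosClient client = new CosmosClient("<serverless-account-connection-string>");
+Database database = await client.CreateDatabaseIfNotExistsAsync("AppDb");
+
+// No throughput argument: passing one on a serverless account returns an error
+Container container = await database.CreateContainerIfNotExistsAsync("Items", "/pk");
+```
+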
+ ## Monitoring your consumption
-If you have used Azure Cosmos DB in provisioned throughput mode before, you will find that serverless is more cost-effective when your traffic doesn't justify provisioned capacity. The trade-off is that your costs will become less predictable because you are billed based on the number of requests your database has processed. Because of that, it's important to keep an eye on your current consumption.
+If you have used Azure Cosmos DB in provisioned throughput mode before, you'll find serverless is more cost-effective when your traffic doesn't justify provisioned capacity. The trade-off is that your costs will become less predictable because you're billed based on the number of requests your database has processed. Because of the lack of predictability, it's important to keep an eye on your current consumption.
-When browsing the **Metrics** pane of your account, you will find a chart named **Request Units consumed** under the **Overview** tab. This chart shows how many Request Units your account has consumed:
+When browsing the **Metrics** pane of your account, you'll find a chart named **Request Units consumed** under the **Overview** tab. This chart shows how many Request Units your account has consumed:
:::image type="content" source="./media/serverless/request-units-consumed.png" alt-text="Chart showing the consumed Request Units" border="false":::
-You can find the same chart when using Azure Monitor, as described [here](monitor-request-unit-usage.md). Note that Azure Monitor lets you setup [alerts](../azure-monitor/alerts/alerts-metric-overview.md), which can be used to notify you when your Request Unit consumption has passed a certain threshold.
+You can find the same chart when using Azure Monitor, as described [here](monitor-request-unit-usage.md). Azure Monitor also lets you configure [alerts](../azure-monitor/alerts/alerts-metric-overview.md), which can notify you when your Request Unit consumption has passed a certain threshold.
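+
+Each SDK response also exposes the request charge of the individual operation, which is useful for tracking consumption from application code. The following is a minimal sketch with the .NET SDK; the item ID, partition key value, and item type are placeholders:
+
+```csharp
+ItemResponse<dynamic> readResponse = await container.ReadItemAsync<dynamic>(
+    "item-id", new PartitionKey("pk-value"));
+
+// Request Units consumed by this single point read
+Console.WriteLine($"Point read consumed {readResponse.RequestCharge} RUs");
+```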
+
+## Performance
-## <a id="performance"></a>Performance
+Serverless resources yield specific performance characteristics that are different from what provisioned throughput resources deliver. Serverless containers don't offer predictable throughput or latency guarantees. If your containers require these guarantees, use provisioned throughput.
-Serverless resources yield specific performance characteristics that are different from what provisioned throughput resources deliver. The latency of serverless containers are covered by a Service Level Objective (SLO) of 10 milliseconds or less for point-reads and 30 milliseconds or less for writes. A point-read operation consists in fetching a single item by its ID and partition key value.
+For more information, see [provisioned throughput](set-throughput.md).
## Next steps
cosmos-db Bulk Executor Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/bulk-executor-java.md
# Perform bulk operations on Azure Cosmos DB data [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
-This tutorial provides instructions on performing bulk operations in the [Azure Cosmos DB Java V4 SDK](sql-api-sdk-java-v4.md). This version of the SDK comes with the bulk executor library built-in. If you are using an older version of Java SDK, it's recommended to [migrate to the latest version](migrate-java-v4-sdk.md). Azure Cosmos DB Java V4 SDK is the current recommended solution for Java bulk support.
+This tutorial provides instructions on performing bulk operations in the [Azure Cosmos DB Java V4 SDK](sql-api-sdk-java-v4.md). This version of the SDK comes with the bulk executor library built-in. If you're using an older version of Java SDK, it's recommended to [migrate to the latest version](migrate-java-v4-sdk.md). Azure Cosmos DB Java V4 SDK is the current recommended solution for Java bulk support.
Currently, the bulk executor library is supported only by Azure Cosmos DB SQL API and Gremlin API accounts. To learn about using bulk executor .NET library with Gremlin API, see [perform bulk operations in Azure Cosmos DB Gremlin API](../graph/bulk-executor-graph-dotnet.md).
com.azure.cosmos.examples.bulk.async.SampleBulkQuickStartAsync
2. The `CosmosAsyncClient` object is initialized by using the following statements:
- ```java
- client = new CosmosClientBuilder().endpoint(AccountSettings.HOST).key(AccountSettings.MASTER_KEY)
- .preferredRegions(preferredRegions).contentResponseOnWriteEnabled(true)
- .consistencyLevel(ConsistencyLevel.SESSION).buildAsyncClient();
- ```
+ [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/bulk/async/SampleBulkQuickStartAsync.java?name=CreateAsyncClient)]
3. The sample creates an async database and container. It then creates multiple documents on which bulk operations will be executed. It adds these documents to a `Flux<Family>` reactive stream object:
- ```java
- createDatabaseIfNotExists();
- createContainerIfNotExists();
+ [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/bulk/async/SampleBulkQuickStartAsync.java?name=AddDocsToStream)]
- Family andersenFamilyItem = Families.getAndersenFamilyItem();
- Family wakefieldFamilyItem = Families.getWakefieldFamilyItem();
- Family johnsonFamilyItem = Families.getJohnsonFamilyItem();
- Family smithFamilyItem = Families.getSmithFamilyItem();
- // Setup family items to create
- Flux<Family> families = Flux.just(andersenFamilyItem, wakefieldFamilyItem, johnsonFamilyItem, smithFamilyItem);
- ```
+4. The sample contains methods for bulk create, upsert, replace, and delete. In each method, we map the families documents in the BulkWriter `Flux<Family>` stream to multiple method calls in `CosmosBulkOperations`. These operations are added to another reactive stream object, `Flux<CosmosItemOperation>`. The stream is then passed to the `executeBulkOperations` method of the async `container` we created at the beginning, and the operations are executed in bulk. See the bulk create method below as an example:
+ [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/bulk/async/SampleBulkQuickStartAsync.java?name=BulkCreateItems)]
-4. The sample contains methods for bulk create, upsert, replace, and delete. In each method we map the families documents in the BulkWriter `Flux<Family>` stream to multiple method calls in `CosmosBulkOperations`. These operations are added to another reactive stream object `Flux<CosmosItemOperation>`. The stream is then passed to the `executeBulkOperations` method of the async `container` we created at the beginning, and operations are executed in bulk. See the `bulkCreateItems` method below as an example:
- ```java
- private void bulkCreateItems(Flux<Family> families) {
- Flux<CosmosItemOperation> cosmosItemOperations =
- families.map(family -> CosmosBulkOperations.getCreateItemOperation(family,
- new PartitionKey(family.getLastName())));
- container.executeBulkOperations(cosmosItemOperations).blockLast();
- }
- ```
+5. There's also a class `BulkWriter.java` in the same directory as the sample application. This class demonstrates how to handle the rate limiting (429) and timeout (408) errors that may occur during bulk execution, and how to retry those operations effectively. It's implemented in the methods below, which also show how to implement local and global throughput control.
+
+ [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/bulk/async/SampleBulkQuickStartAsync.java?name=BulkWriterAbstraction)]
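   As a rough sketch of the throughput control piece (not the sample's actual code), a local throughput control group can be enabled on the container and then referenced when executing bulk operations. The group name and target RU/s below are hypothetical, and the `setThroughputControlGroupName` option on `CosmosBulkExecutionOptions` is an assumption to verify against your SDK version:

   ```java
   // Minimal sketch, assuming "container" is the CosmosAsyncContainer created earlier
   // and "cosmosItemOperations" is a Flux<CosmosItemOperation> built as in the bulk create method above.
   ThroughputControlGroupConfig groupConfig = new ThroughputControlGroupConfigBuilder()
           .groupName("bulk-control-group")   // hypothetical group name
           .targetThroughput(400)             // cap bulk work at roughly 400 RU/s
           .build();
   container.enableLocalThroughputControlGroup(groupConfig);

   // Assumption: the bulk execution options can reference the control group by name.
   CosmosBulkExecutionOptions options = new CosmosBulkExecutionOptions();
   options.setThroughputControlGroupName("bulk-control-group");
   container.executeBulkOperations(cosmosItemOperations, options).blockLast();
   ```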
++
+6. Additionally, there are bulk create methods in the sample, which illustrate how to add response processing, and set execution options:
+
+ [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/bulk/async/SampleBulkQuickStartAsync.java?name=BulkCreateItemsWithResponseProcessingAndExecutionOptions)]
-5. There is also a class `BulkWriter.java` in the same directory as the sample application. This class demonstrates how to handle rate limiting (429) and timeout (408) errors that may occur during bulk execution, and retrying those operations effectively. It is implemented in the `bulkCreateItemsSimple()` method in the application.
-
- ```java
- private void bulkCreateItemsSimple() {
- Family andersenFamilyItem = Families.getAndersenFamilyItem();
- Family wakefieldFamilyItem = Families.getWakefieldFamilyItem();
- CosmosItemOperation andersonItemOperation = CosmosBulkOperations.getCreateItemOperation(andersenFamilyItem, new PartitionKey(andersenFamilyItem.getLastName()));
- CosmosItemOperation wakeFieldItemOperation = CosmosBulkOperations.getCreateItemOperation(wakefieldFamilyItem, new PartitionKey(wakefieldFamilyItem.getLastName()));
- BulkWriter bulkWriter = new BulkWriter(container);
- bulkWriter.scheduleWrites(andersonItemOperation);
- bulkWriter.scheduleWrites(wakeFieldItemOperation);
- bulkWriter.execute().blockLast();
- }
- ```
-
-6. Additionally, there are bulk create methods in the sample which illustrate how to add response processing, and set execution options:
-
- ```java
- private void bulkCreateItemsWithResponseProcessing(Flux<Family> families) {
- Flux<CosmosItemOperation> cosmosItemOperations =
- families.map(family -> CosmosBulkOperations.getCreateItemOperation(family,
- new PartitionKey(family.getLastName())));
- container.executeBulkOperations(cosmosItemOperations).flatMap(cosmosBulkOperationResponse -> {
- CosmosBulkItemResponse cosmosBulkItemResponse = cosmosBulkOperationResponse.getResponse();
- CosmosItemOperation cosmosItemOperation = cosmosBulkOperationResponse.getOperation();
-
- if (cosmosBulkOperationResponse.getException() != null) {
- logger.error("Bulk operation failed", cosmosBulkOperationResponse.getException());
- } else if (cosmosBulkOperationResponse.getResponse() == null || !cosmosBulkOperationResponse.getResponse().isSuccessStatusCode()) {
- logger.error("The operation for Item ID: [{}] Item PartitionKey Value: [{}] did not complete successfully with " +
- "a" + " {} response code.", cosmosItemOperation.<Family>getItem().getId(),
- cosmosItemOperation.<Family>getItem().getLastName(), cosmosBulkItemResponse.getStatusCode());
- } else {
- logger.info("Item ID: [{}] Item PartitionKey Value: [{}]", cosmosItemOperation.<Family>getItem().getId(),
- cosmosItemOperation.<Family>getItem().getLastName());
- logger.info("Status Code: {}", String.valueOf(cosmosBulkItemResponse.getStatusCode()));
- logger.info("Request Charge: {}", String.valueOf(cosmosBulkItemResponse.getRequestCharge()));
- }
- return Mono.just(cosmosBulkItemResponse);
- }).blockLast();
- }
-
- private void bulkCreateItemsWithExecutionOptions(Flux<Family> families) {
- CosmosBulkExecutionOptions bulkExecutionOptions = new CosmosBulkExecutionOptions();
- ImplementationBridgeHelpers
- .CosmosBulkExecutionOptionsHelper
- .getCosmosBulkExecutionOptionsAccessor()
- .setMaxMicroBatchSize(bulkExecutionOptions, 10);
- Flux<CosmosItemOperation> cosmosItemOperations =
- families.map(family -> CosmosBulkOperations.getCreateItemOperation(family,
- new PartitionKey(family.getLastName())));
- container.executeBulkOperations(cosmosItemOperations, bulkExecutionOptions).blockLast();
- }
- ```
<!-- The importAll method accepts the following parameters:
Consider the following points for better performance when using bulk executor li
* For achieving higher throughput: * Set the JVM's heap size to a large enough number to avoid any memory issue in handling large number of documents. Suggested heap size: max(3 GB, 3 * sizeof(all documents passed to bulk import API in one batch)).
- * There is a preprocessing time, due to which you will get higher throughput when performing bulk operations with a large number of documents. So, if you want to import 10,000,000 documents, running bulk import 10 times on 10 bulk of documents each of size 1,000,000 is preferable than running bulk import 100 times on 100 bulk of documents each of size 100,000 documents.
+ * There's a fixed preprocessing overhead, so you'll get higher throughput when performing bulk operations with a large number of documents. For example, if you want to import 10,000,000 documents, running bulk import 10 times with batches of 1,000,000 documents each is preferable to running bulk import 100 times with batches of 100,000 documents each.
* It is recommended to instantiate a single CosmosAsyncClient object for the entire application within a single virtual machine that corresponds to a specific Azure Cosmos container.
Consider the following points for better performance when using bulk executor li
## Next steps
-* To learn about maven package details and release notes of bulk executor Java library, see[bulk executor SDK details](sql-api-sdk-bulk-executor-java.md).
+* For an overview of bulk executor functionality, see [bulk executor overview](../bulk-executor-overview.md).
cosmos-db Distribute Throughput Across Partitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/distribute-throughput-across-partitions.md
+
+ Title: Redistribute throughput across partitions (preview) in Azure Cosmos DB
+description: Learn how to redistribute throughput across partitions (preview)
++++++ Last updated : 05/09/2022++
+# Redistribute throughput across partitions (preview)
+
+By default, Azure Cosmos DB distributes the provisioned throughput of a database or container equally across all physical partitions. However, scenarios may arise where due to a skew in the workload or choice of partition key, certain logical (and thus physical) partitions need more throughput than others. For these scenarios, Azure Cosmos DB gives you the ability to redistribute your provisioned throughput across physical partitions. Redistributing throughput across partitions helps you achieve better performance without having to configure your overall throughput based on the hottest partition.
+
+The throughput redistribution feature applies to databases and containers using provisioned throughput (manual and autoscale) and doesn't apply to serverless containers. You can change the throughput per physical partition using the Azure Cosmos DB PowerShell commands.
+
+## When to use this feature
+
+In general, usage of this feature is recommended for scenarios when both the following are true:
+
+- You're consistently seeing an overall rate of 429 responses greater than 1-5%
+- You have a consistent, predictable hot partition
+
+If you aren't seeing 429 responses and your end-to-end latency is acceptable, then no action to reconfigure RU/s per partition is required. If you have a workload that has consistent traffic with occasional unpredictable spikes across *all your partitions*, it's recommended to use [autoscale](../provision-throughput-autoscale.md) and [burst capacity (preview)](../burst-capacity.md). Autoscale and burst capacity will ensure you can meet your throughput requirements.
+
+## Getting started
+
+To get started with redistributing throughput across partitions, enroll in the preview by submitting a request for the **Azure Cosmos DB Throughput Redistribution Across Partitions** feature via the [**Preview Features** page](../../azure-resource-manager/management/preview-features.md) of your Azure subscription.
+- Before submitting your request, verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#preview-eligibility-criteria).
+- The Azure Cosmos DB team will review your request and contact you via email to confirm which account(s) in the subscription you want to enroll in the preview.
++
+## Example scenario
+
+Suppose we have a workload that keeps track of transactions that take place in retail stores. Because most of our queries are by `StoreId`, we partition by `StoreId`. However, over time, we see that some stores have more activity than others and require more throughput to serve their workloads. We're seeing rate limiting (429) for requests against those StoreIds, and our [overall rate of 429 responses is greater than 1-5%](troubleshoot-request-rate-too-large.md#recommended-solution). Meanwhile, other stores are less active and require less throughput. Let's see how we can redistribute our throughput for better performance.
+
+## Step 1: Identify which physical partitions need more throughput
+
+There are two ways to identify if there's a hot partition.
+
+### Option 1: Use Azure Monitor metrics
+
+To verify if there's a hot partition, navigate to **Insights** > **Throughput** > **Normalized RU Consumption (%) By PartitionKeyRangeID**. Filter to a specific database and container.
+
+Each PartitionKeyRangeId maps to one physical partition. Look for one PartitionKeyRangeId that consistently has a higher normalized RU consumption than others. For example, one value is consistently at 100%, but others are at 30% or less. A pattern such as this can indicate a hot partition.
++
+### Option 2: Use Diagnostic Logs
+
+We can use the information from **CDBPartitionKeyRUConsumption** in Diagnostic Logs to get more information about the logical partition keys (and corresponding physical partitions) that are consuming the most RU/s at a second level granularity. Note the sample queries use 24 hours for illustrative purposes only - it's recommended to use at least seven days of history to understand the pattern.
+
+#### Find the physical partition (PartitionKeyRangeId) that is consuming the most RU/s over time
+
+```Kusto
+CDBPartitionKeyRUConsumption
+| where TimeGenerated >= ago(24hr)
+| where DatabaseName == "MyDB" and CollectionName == "MyCollection" // Replace with database and collection name
+| where isnotempty(PartitionKey) and isnotempty(PartitionKeyRangeId)
+| summarize sum(RequestCharge) by bin(TimeGenerated, 1s), PartitionKeyRangeId
+| render timechart
+```
+
+#### For a given physical partition, find the top 10 logical partition keys that are consuming the most RU/s over each hour
+
+```Kusto
+CDBPartitionKeyRUConsumption
+| where TimeGenerated >= ago(24hour)
+| where DatabaseName == "MyDB" and CollectionName == "MyCollection" // Replace with database and collection name
+| where isnotempty(PartitionKey) and isnotempty(PartitionKeyRangeId)
+| where PartitionKeyRangeId == 0 // Replace with PartitionKeyRangeId
+| summarize sum(RequestCharge) by bin(TimeGenerated, 1hour), PartitionKey
+| order by sum_RequestCharge desc | take 10
+```
+
+## Step 2: Determine the target RU/s for each physical partition
+
+### Determine current RU/s for each physical partition
+
+First, let's determine the current RU/s for each physical partition. You can use the new Azure Monitor metric **PhysicalPartitionThroughput** and split by the dimension **PhysicalPartitionId** to see how many RU/s you have per physical partition.
+
+Alternatively, if you haven't changed your throughput per partition before, you can use the formula:
+``Current RU/s per partition = Total RU/s / Number of physical partitions``
+
+Follow the guidance in the article [Best practices for scaling provisioned throughput (RU/s)](../scaling-provisioned-throughput-best-practices.md#step-1-find-the-current-number-of-physical-partitions) to determine the number of physical partitions.
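For example (illustrative numbers only): a container provisioned with 6000 RU/s that spans three physical partitions starts at 6000 / 3 = 2000 RU/s per partition, assuming per-partition throughput hasn't been changed before. The same numbers are used in the scenario later in this article.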
+
+You can also use the PowerShell `Get-AzCosmosDBSqlContainerPerPartitionThroughput` and `Get-AzCosmosDBMongoDBCollectionPerPartitionThroughput` commands to read the current RU/s on each physical partition.
+
+```powershell
+# SQL API
+$somePartitions = Get-AzCosmosDBSqlContainerPerPartitionThroughput `
+ -ResourceGroupName "<resource-group-name>" `
+ -AccountName "<cosmos-account-name>" `
+ -DatabaseName "<cosmos-database-name>" `
+ -Name "<cosmos-container-name>" `
+ -PhysicalPartitionIds ("<PartitionId>", "<PartitionId>")
+
+$allPartitions = Get-AzCosmosDBSqlContainerPerPartitionThroughput `
+ -ResourceGroupName "<resource-group-name>" `
+ -AccountName "<cosmos-account-name>" `
+ -DatabaseName "<cosmos-database-name>" `
+ -Name "<cosmos-container-name>" `
+ -AllPartitions
+
+# API for MongoDB
+$somePartitions = Get-AzCosmosDBMongoDBCollectionPerPartitionThroughput `
+ -ResourceGroupName "<resource-group-name>" `
+ -AccountName "<cosmos-account-name>" `
+ -DatabaseName "<cosmos-database-name>" `
+ -Name "<cosmos-collection-name>" `
+ -PhysicalPartitionIds ("<PartitionId>", "<PartitionId>", ...)
+
+$allPartitions = Get-AzCosmosDBMongoDBCollectionPerPartitionThroughput `
+ -ResourceGroupName "<resource-group-name>" `
+ -AccountName "<cosmos-account-name>" `
+ -DatabaseName "<cosmos-database-name>" `
+ -Name "<cosmos-collection-name>" `
+ -AllPartitions
+```
+### Determine RU/s for target partition
+
+Next, let's decide how many RU/s we want to give to our hottest physical partition(s). Let's call this set our target partition(s). The maximum RU/s any single physical partition can have is 10,000 RU/s.
+
+The right approach depends on your workload requirements. General approaches include:
+- Increasing the RU/s by a percentage, measuring the rate of 429 responses, and repeating until the desired throughput is achieved.
+    - If you aren't sure of the right percentage, you can start with 10% to be conservative.
+ - If you already know this physical partition requires most of the throughput of the workload, you can start by doubling the RU/s or increasing it to the maximum of 10,000 RU/s, whichever is lower.
+- Increasing the RU/s to `Total consumed RU/s of the physical partition + (Number of 429 responses per second * Average RU charge per request to the partition)`
+ - This approach tries to estimate what the "real" RU/s consumption would have been if the requests hadn't been rate limited.
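As an illustrative example of the second approach: if the hot physical partition is consuming 2000 RU/s, is rejecting 200 requests per second with 429 responses, and the average charge per request to that partition is 10 RU, the estimated requirement is 2000 + (200 * 10) = 4000 RU/s.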
+
+### Determine RU/s for source partition
+
+Finally, let's decide how many RU/s we want to keep on our other physical partitions. This selection will determine the partitions that the target physical partition takes throughput from.
+
+In the PowerShell APIs, we must specify at least one source partition to redistribute RU/s from. We can also specify a custom minimum throughput each physical partition should have after the redistribution. If not specified, by default, Azure Cosmos DB will ensure that each physical partition has at least 100 RU/s after the redistribution. It's recommended to explicitly specify the minimum throughput.
+
+The right approach depends on your workload requirements. General approaches include:
+- Taking RU/s equally from all source partitions (works best when there are <= 10 partitions)
+    - Calculate the amount we need to offset each source physical partition by. `Offset = (Total desired RU/s of target partition(s) - total current RU/s of target partition(s)) / (Total physical partitions - number of target partitions)`
+ - Assign the minimum throughput for each source partition = `Current RU/s of source partition - offset`
+- Taking RU/s from the least active partition(s)
+ - Use Azure Monitor metrics and Diagnostic Logs to determine which physical partition(s) have the least traffic/request volume
+    - Calculate the amount we need to offset each source physical partition by. `Offset = (Total desired RU/s of target partition(s) - total current RU/s of target partition(s)) / Number of source physical partitions`
+ - Assign the minimum throughput for each source partition = `Current RU/s of source partition - offset`
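As an illustrative example of the first approach: with 6000 RU/s spread over three physical partitions (2000 RU/s each) and a single target partition that needs 4000 RU/s, the offset is (4000 - 2000) / (3 - 1) = 1000 RU/s, so each of the two source partitions is assigned a minimum of 2000 - 1000 = 1000 RU/s. This matches the layout used in the next step.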
+
+## Step 3: Programmatically change the throughput across partitions
+
+You can use the PowerShell command `Update-AzCosmosDBSqlContainerPerPartitionThroughput` to redistribute throughput.
+
+In the following example, we have a container with 6000 RU/s total (either 6000 manual RU/s or autoscale 6000 RU/s) and three physical partitions. Based on our analysis, we want a layout where:
+
+- Physical partition 0: 1000 RU/s
+- Physical partition 1: 4000 RU/s
+- Physical partition 2: 1000 RU/s
+
+We specify partitions 0 and 2 as our source partitions, and specify that after the redistribution, each should have a minimum of 1000 RU/s. Partition 1 is our target partition, which we specify should have 4000 RU/s.
+
+```powershell
+$SourcePhysicalPartitionObjects = @()
+$SourcePhysicalPartitionObjects += New-AzCosmosDBPhysicalPartitionThroughputObject -Id "0" -Throughput 1000
+$SourcePhysicalPartitionObjects += New-AzCosmosDBPhysicalPartitionThroughputObject -Id "2" -Throughput 1000
+
+$TargetPhysicalPartitionObjects = @()
+$TargetPhysicalPartitionObjects += New-AzCosmosDBPhysicalPartitionThroughputObject -Id "1" -Throughput 4000
+
+# SQL API
+Update-AzCosmosDBSqlContainerPerPartitionThroughput `
+ -ResourceGroupName "<resource-group-name>" `
+ -AccountName "<cosmos-account-name>" `
+ -DatabaseName "<cosmos-database-name>" `
+ -Name "<cosmos-container-name>" `
+ -SourcePhysicalPartitionThroughputObject $SourcePhysicalPartitionObjects `
+ -TargetPhysicalPartitionThroughputObject $TargetPhysicalPartitionObjects
+
+# API for MongoDB
+Update-AzCosmosDBMongoDBCollectionPerPartitionThroughput `
+ -ResourceGroupName "<resource-group-name>" `
+ -AccountName "<cosmos-account-name>" `
+ -DatabaseName "<cosmos-database-name>" `
+ -Name "<cosmos-collection-name>" `
+ -SourcePhysicalPartitionThroughputObject $SourcePhysicalPartitionObjects `
+ -TargetPhysicalPartitionThroughputObject $TargetPhysicalPartitionObjects
+```
+
+
+If necessary, you can also reset the RU/s per physical partition so that the RU/s of your container are evenly distributed across all physical partitions.
+
+```powershell
+# SQL API
+$resetPartitions = Update-AzCosmosDBSqlContainerPerPartitionThroughput `
+ -ResourceGroupName "<resource-group-name>" `
+ -AccountName "<cosmos-account-name>" `
+ -DatabaseName "<cosmos-database-name>" `
+ -Name "<cosmos-container-name>" `
+ -EqualDistributionPolicy
+
+# API for MongoDB
+$resetPartitions = Update-AzCosmosDBMongoDBCollectionPerPartitionThroughput `
+ -ResourceGroupName "<resource-group-name>" `
+ -AccountName "<cosmos-account-name>" `
+ -DatabaseName "<cosmos-database-name>" `
+ -Name "<cosmos-collection-name>" `
+ -EqualDistributionPolicy
+```
+
+## Step 4: Verify and monitor your RU/s consumption
+
+After you've completed the redistribution, you can verify the change by viewing the **PhysicalPartitionThroughput** metric in Azure Monitor. Split by the dimension **PhysicalPartitionId** to see how many RU/s you have per physical partition.
+
+It's recommended to monitor your overall rate of 429 responses and RU/s consumption. For more information, review [Step 1](#step-1-identify-which-physical-partitions-need-more-throughput) to validate you've achieved the performance you expect.
+
+After the changes, assuming your overall workload hasn't changed, you'll likely see that both the target and source physical partitions have higher [Normalized RU consumption](../monitor-normalized-request-units.md) than previously. Higher normalized RU consumption is expected behavior. Essentially, you have allocated RU/s closer to what each partition actually needs to consume, so higher normalized RU consumption means that each partition is fully utilizing its allocated RU/s. You should also expect to see a lower overall rate of 429 exceptions, as the hot partitions now have more RU/s to serve requests.
+
+## Limitations
+
+### Preview eligibility criteria
+To enroll in the preview, your Cosmos account must meet all the following criteria:
+ - Your Cosmos account is using SQL API or API for MongoDB.
+ - If you're using API for MongoDB, the version must be >= 3.6.
+ - Your Cosmos account is using provisioned throughput (manual or autoscale). Distribution of throughput across partitions doesn't apply to serverless accounts.
+    - If you're using SQL API, your application must use the Azure Cosmos DB .NET V3 SDK, version 3.27.0 or higher. When the ability to redistribute throughput across partitions is enabled on your account, all requests sent from non-.NET SDKs or older .NET SDK versions won't be accepted.
+ - Your Cosmos account isn't using any unsupported connectors:
+ - Azure Data Factory
+ - Azure Stream Analytics
+ - Logic Apps
+ - Azure Functions
+ - Azure Search
+
+### SDK requirements (SQL API only)
+
+Throughput redistribution across partitions is supported only with the latest version of the .NET v3 SDK. When the feature is enabled on your account, you must only use the supported SDK. Requests sent from other SDKs or earlier versions won't be accepted. There are no driver or SDK requirements to use this feature for API for MongoDB accounts.
+
+Find the latest preview version of the supported SDK:
+
+| SDK | Supported versions | Package manager link |
+| | | |
+| **.NET SDK v3** | *>= 3.27.0* | <https://www.nuget.org/packages/Microsoft.Azure.Cosmos/> |
+
+Support for other SDKs is planned for the future.
+
+> [!TIP]
+> You should ensure that your application has been updated to use a compatible SDK version prior to enrolling in the preview. If you're using the legacy .NET V2 SDK, follow the [.NET SDK v3 migration guide](migrate-dotnet-v3.md).
+
+### Unsupported connectors
+
+If you enroll in the preview, the following connectors will fail.
+
+* Azure Data Factory
+* Azure Stream Analytics
+* Logic Apps
+* Azure Functions
+* Azure Search
+
+Support for these connectors is planned for the future.
+
+## Next steps
+
+Learn about how to use provisioned throughput with the following articles:
+
+* Learn more about [provisioned throughput.](../set-throughput.md)
+* Learn more about [request units.](../request-units.md)
+* Need to monitor for hot partitions? See [monitoring request units.](../monitor-normalized-request-units.md#how-to-monitor-for-hot-partitions)
+* Want to learn the best practices? See [best practices for scaling provisioned throughput.](../scaling-provisioned-throughput-best-practices.md)
cosmos-db How To Migrate From Bulk Executor Library Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-migrate-from-bulk-executor-library-java.md
+
+ Title: Migrate from the bulk executor library to the bulk support in Azure Cosmos DB Java V4 SDK
+description: Learn how to migrate your application from using the bulk executor library to the bulk support in Azure Cosmos DB Java V4 SDK
++++ Last updated : 05/13/2022+
+ms.devlang: java
++++
+# Migrate from the bulk executor library to the bulk support in Azure Cosmos DB Java V4 SDK
+
+This article describes the required steps to migrate an existing application's code that uses the [Java bulk executor library](sql-api-sdk-bulk-executor-java.md) to the [bulk support](bulk-executor-java.md) feature in the latest version of the Java SDK.
+
+## Enable bulk support
+
+To use bulk support in the Java SDK, include the import below:
+
+ [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/bulk/async/SampleBulkQuickStartAsync.java?name=CosmosBulkOperationsImport)]
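If you can't view the referenced snippet, the import it pulls in is along the lines of the following sketch, assuming the Java SDK v4 package layout where the bulk helpers live under `com.azure.cosmos.models`:

```java
import com.azure.cosmos.models.CosmosBulkOperations;
```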
+
+## Add documents to a reactive stream
+
+Bulk support in the Java V4 SDK works by adding documents to a reactive stream object. For example, you can add each document individually:
+
+ [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/bulk/async/SampleBulkQuickStartAsync.java?name=AddDocsToStream)]
+
+Or you can add the documents to the stream from a list, using `fromIterable`:
+
+```java
+class SampleDoc {
+ public SampleDoc() {
+ }
+ public String getId() {
+ return id;
+ }
+ public void setId(String id) {
+ this.id = id;
+ }
+ private String id="";
+}
+List<SampleDoc> docList = new ArrayList<>();
+for (int i = 1; i <= 5; i++){
+ SampleDoc doc = new SampleDoc();
+ String id = "id-"+i;
+ doc.setId(id);
+ docList.add(doc);
+}
+
+Flux<SampleDoc> docs = Flux.fromIterable(docList);
+```
+
+If you want to bulk create or upsert items (similar to using [DocumentBulkExecutor.importAll](/java/api/com.microsoft.azure.documentdb.bulkexecutor.documentbulkexecutor.importall)), you need to pass the reactive stream to a method like the following:
+
+ [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/bulk/async/SampleBulkQuickStartAsync.java?name=BulkUpsertItems)]
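As a minimal sketch of what such a method can look like (not the sample's exact code), assuming `container` is an initialized `CosmosAsyncContainer` and `Family` documents are partitioned by last name:

```java
import com.azure.cosmos.models.CosmosBulkOperations;
import com.azure.cosmos.models.CosmosItemOperation;
import com.azure.cosmos.models.PartitionKey;
import reactor.core.publisher.Flux;

private void bulkUpsertItems(Flux<Family> families) {
    // Map each document to an upsert operation keyed by its partition key value.
    Flux<CosmosItemOperation> operations = families.map(family ->
            CosmosBulkOperations.getUpsertItemOperation(
                    family, new PartitionKey(family.getLastName())));
    // Execute the operations in bulk and block until the stream completes.
    container.executeBulkOperations(operations).blockLast();
}
```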
+
+You can also use a method like the one below, but it can only be used for creating items:
+
+ [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/bulk/async/SampleBulkQuickStartAsync.java?name=BulkCreateItems)]
++
+The [DocumentBulkExecutor.importAll](/java/api/com.microsoft.azure.documentdb.bulkexecutor.documentbulkexecutor.importall) method in the old BulkExecutor library was also used to bulk *patch* items, and the old [DocumentBulkExecutor.mergeAll](/java/api/com.microsoft.azure.documentdb.bulkexecutor.documentbulkexecutor.mergeall) method was used for patch as well, but only for the `set` patch operation type. To do bulk patch operations in the V4 SDK, you first need to create the patch operations:
+
+ [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/bulk/async/SampleBulkQuickStartAsync.java?name=PatchOperations)]
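For illustration (the paths and values here are hypothetical, not taken from the sample), patch operations are built with `CosmosPatchOperations`:

```java
import com.azure.cosmos.models.CosmosPatchOperations;

// Add a new property and set an existing one on each matched document.
CosmosPatchOperations patchOps = CosmosPatchOperations.create()
        .add("/country", "United States")   // hypothetical path and value
        .set("/registered", true);          // hypothetical path and value
```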
+
+Then you can pass the operations, along with the reactive stream of documents, to a method like the one below. In this example, we apply both the `add` and `set` patch operation types. The full set of supported patch operation types is listed in the [supported operations](../partial-document-update.md#supported-operations) section of our overview of [partial document update in Azure Cosmos DB](../partial-document-update.md).
+
+ [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/bulk/async/SampleBulkQuickStartAsync.java?name=BulkPatchItems)]
+
+> [!NOTE]
+> In the above example, we apply `add` and `set` to patch elements whose root parent exists. However, you can't do this where the root parent does **not** exist, because Azure Cosmos DB partial document update is [inspired by JSON Patch RFC 6902](../partial-document-update-faq.yml#is-this-an-implementation-of-json-patch-rfc-6902-). If you're patching paths whose root parent doesn't exist, first read back the full documents, and then use a method like the one below to replace them:
+> [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/bulk/async/SampleBulkQuickStartAsync.java?name=BulkReplaceItems)]
++
+And if you want to do bulk *delete* (similar to using [DocumentBulkExecutor.deleteAll](/java/api/com.microsoft.azure.documentdb.bulkexecutor.documentbulkexecutor.deleteall)), you need to use bulk delete:
+
+ [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/bulk/async/SampleBulkQuickStartAsync.java?name=BulkDeleteItems)]
++
+## Retries, timeouts, and throughput control
+
+The bulk support in the Java V4 SDK doesn't handle retries and timeouts natively. Refer to the guidance in [Bulk Executor - Java Library](bulk-executor-java.md), which includes a [sample](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/jav#should-my-application-retry-on-errors), for more information on the different kinds of errors that can occur and best practices for handling retries.
++
+## Next steps
+
+* [Bulk samples on GitHub](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/tree/main/src/main/java/com/azure/cosmos/examples/bulk/async)
+* Trying to do capacity planning for a migration to Azure Cosmos DB?
+ * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+ * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Powerbi Visualize https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/powerbi-visualize.md
To build a Power BI report/dashboard:
9. Click and expand the database where the data for the report comes from. Then select a collection that contains the data to retrieve.
- The Preview pane shows a list of **Record** items. A Document is represented as a **Record** type in Power BI. Similarly, a nested JSON block inside a document is also a **Record**.
+ The Preview pane shows a list of **Record** items. A Document is represented as a **Record** type in Power BI. Similarly, a nested JSON block inside a document is also a **Record**. To view the properties of the documents as columns, select the grey button with two arrows pointing in opposite directions, which expands the record. It's located to the right of the container's name, in the same Preview pane.
10. Power BI Desktop Report view is where you can start creating reports to visualize data. You can create reports by dragging and dropping fields into the **Report** canvas.
cosmos-db Sql Api Sdk Async Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-async-java.md
# Azure Cosmos DB Async Java SDK for SQL API (legacy): Release notes and resources [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
-> [!div class="op_single_selector"]
-> * [.NET SDK v3](sql-api-sdk-dotnet-standard.md)
-> * [.NET SDK v2](sql-api-sdk-dotnet.md)
-> * [.NET Core SDK v2](sql-api-sdk-dotnet-core.md)
-> * [.NET Change Feed SDK v2](sql-api-sdk-dotnet-changefeed.md)
-> * [Node.js](sql-api-sdk-node.md)
-> * [Java SDK v4](sql-api-sdk-java-v4.md)
-> * [Async Java SDK v2 (legacy)](sql-api-sdk-async-java.md)
-> * [Sync Java SDK v2 (legacy)](sql-api-sdk-java.md)
-> * [Spring Data v2 (legacy)](sql-api-sdk-java-spring-v2.md)
-> * [Spring Data v3](sql-api-sdk-java-spring-v3.md)
-> * [Spark 3 OLTP Connector](sql-api-sdk-java-spark-v3.md)
-> * [Spark 2 OLTP Connector](sql-api-sdk-java-spark.md)
-> * [Python](sql-api-sdk-python.md)
-> * [REST](/rest/api
-> * [REST Resource Provider](/azure/azure-resource-manager/management/azure-services-resource-providers)
-> * [SQL](sql-query-getting-started.md)
-> * [Bulk executor - .NET v2](sql-api-sdk-bulk-executor-dot-net.md)
-> * [Bulk executor - Java](sql-api-sdk-bulk-executor-java.md)
The SQL API Async Java SDK differs from the SQL API Java SDK by providing asynchronous operations with support of the [Netty library](https://netty.io/). The pre-existing [SQL API Java SDK](sql-api-sdk-java.md) does not support asynchronous operations.
cosmos-db Sql Api Sdk Bulk Executor Dot Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-bulk-executor-dot-net.md
# .NET bulk executor library: Download information (Legacy) [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
-> [!div class="op_single_selector"]
-> * [.NET SDK v3](sql-api-sdk-dotnet-standard.md)
-> * [.NET SDK v2](sql-api-sdk-dotnet.md)
-> * [.NET Core SDK v2](sql-api-sdk-dotnet-core.md)
-> * [.NET Change Feed SDK v2](sql-api-sdk-dotnet-changefeed.md)
-> * [Node.js](sql-api-sdk-node.md)
-> * [Java SDK v4](sql-api-sdk-java-v4.md)
-> * [Async Java SDK v2](sql-api-sdk-async-java.md)
-> * [Sync Java SDK v2](sql-api-sdk-java.md)
-> * [Spring Data v2](sql-api-sdk-java-spring-v2.md)
-> * [Spring Data v3](sql-api-sdk-java-spring-v3.md)
-> * [Spark 3 OLTP Connector](sql-api-sdk-java-spark-v3.md)
-> * [Spark 2 OLTP Connector](sql-api-sdk-java-spark.md)
-> * [Python](sql-api-sdk-python.md)
-> * [REST](/rest/api/cosmos-db/)
-> * [REST Resource Provider](/rest/api/cosmos-db-resource-provider/)
-> * [SQL](./sql-query-getting-started.md)
-> * [Bulk executor - .NET v2](sql-api-sdk-bulk-executor-dot-net.md)
-> * [Bulk executor - Java](sql-api-sdk-bulk-executor-java.md)
| | Link/notes | |||
cosmos-db Sql Api Sdk Bulk Executor Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-bulk-executor-java.md
# Java bulk executor library: Download information [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
-> [!div class="op_single_selector"]
-> * [.NET SDK v3](sql-api-sdk-dotnet-standard.md)
-> * [.NET SDK v2](sql-api-sdk-dotnet.md)
-> * [.NET Core SDK v2](sql-api-sdk-dotnet-core.md)
-> * [.NET Change Feed SDK v2](sql-api-sdk-dotnet-changefeed.md)
-> * [Node.js](sql-api-sdk-node.md)
-> * [Java SDK v4](sql-api-sdk-java-v4.md)
-> * [Async Java SDK v2](sql-api-sdk-async-java.md)
-> * [Sync Java SDK v2](sql-api-sdk-java.md)
-> * [Spring Data v2](sql-api-sdk-java-spring-v2.md)
-> * [Spring Data v3](sql-api-sdk-java-spring-v3.md)
-> * [Spark 3 OLTP Connector](sql-api-sdk-java-spark-v3.md)
-> * [Spark 2 OLTP Connector](sql-api-sdk-java-spark.md)
-> * [Python](sql-api-sdk-python.md)
-> * [REST](/rest/api/cosmos-db/)
-> * [REST Resource Provider](/rest/api/cosmos-db-resource-provider/)
-> * [SQL](sql-query-getting-started.md)
-> * [Bulk executor - .NET v2](sql-api-sdk-bulk-executor-dot-net.md)
-> * [Bulk executor - Java](sql-api-sdk-bulk-executor-java.md)
> [!IMPORTANT] > This is *not* the latest Java Bulk Executor for Azure Cosmos DB! Consider using [Azure Cosmos DB Java SDK v4](bulk-executor-java.md) for performing bulk operations. To upgrade, follow the instructions in the [Migrate to Azure Cosmos DB Java SDK v4](migrate-java-v4-sdk.md) guide and the [Reactor vs RxJava](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/reactor-rxjava-guide.md) guide.
cosmos-db Sql Api Sdk Dotnet Changefeed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-dotnet-changefeed.md
# .NET Change Feed Processor SDK: Download and release notes (Legacy) [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
-> [!div class="op_single_selector"]
->
-> * [.NET SDK v3](sql-api-sdk-dotnet-standard.md)
-> * [.NET SDK v2](sql-api-sdk-dotnet.md)
-> * [.NET Core SDK v2](sql-api-sdk-dotnet-core.md)
-> * [.NET Change Feed SDK v2](sql-api-sdk-dotnet-changefeed.md)
-> * [Node.js](sql-api-sdk-node.md)
-> * [Java SDK v4](sql-api-sdk-java-v4.md)
-> * [Async Java SDK v2](sql-api-sdk-async-java.md)
-> * [Sync Java SDK v2](sql-api-sdk-java.md)
-> * [Spring Data v2](sql-api-sdk-java-spring-v2.md)
-> * [Spring Data v3](sql-api-sdk-java-spring-v3.md)
-> * [Spark 3 OLTP Connector](sql-api-sdk-java-spark-v3.md)
-> * [Spark 2 OLTP Connector](sql-api-sdk-java-spark.md)
-> * [Python](sql-api-sdk-python.md)
-> * [REST](/rest/api
-> * [REST Resource Provider](/rest/api
-> * [SQL](sql-query-getting-started.md)
-> * [Bulk executor - .NET v2](sql-api-sdk-bulk-executor-dot-net.md)
-> * [Bulk executor - Java](sql-api-sdk-bulk-executor-java.md)
| | Links | |||
cosmos-db Sql Api Sdk Dotnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-dotnet-core.md
# Azure Cosmos DB .NET Core SDK v2 for SQL API: Release notes and resources (Legacy) [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
-> [!div class="op_single_selector"]
-> * [.NET SDK v3](sql-api-sdk-dotnet-standard.md)
-> * [.NET SDK v2](sql-api-sdk-dotnet.md)
-> * [.NET Core SDK v2](sql-api-sdk-dotnet-core.md)
-> * [.NET Change Feed SDK v2](sql-api-sdk-dotnet-changefeed.md)
-> * [Node.js](sql-api-sdk-node.md)
-> * [Java SDK v4](sql-api-sdk-java-v4.md)
-> * [Async Java SDK v2](sql-api-sdk-async-java.md)
-> * [Sync Java SDK v2](sql-api-sdk-java.md)
-> * [Spring Data v2](sql-api-sdk-java-spring-v2.md)
-> * [Spring Data v3](sql-api-sdk-java-spring-v3.md)
-> * [Spark 3 OLTP Connector](sql-api-sdk-java-spark-v3.md)
-> * [Spark 2 OLTP Connector](sql-api-sdk-java-spark.md)
-> * [Python](sql-api-sdk-python.md)
-> * [REST](/rest/api
-> * [REST Resource Provider](/azure/azure-resource-manager/management/azure-services-resource-providers)
-> * [SQL](sql-query-getting-started.md)
-> * [Bulk executor - .NET v2](sql-api-sdk-bulk-executor-dot-net.md)
-> * [Bulk executor - Java](sql-api-sdk-bulk-executor-java.md)
+ | | Links | |||
cosmos-db Sql Api Sdk Dotnet Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-dotnet-standard.md
# Azure Cosmos DB .NET SDK v3 for SQL API: Download and release notes [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
-> [!div class="op_single_selector"]
-> * [.NET SDK v3](sql-api-sdk-dotnet-standard.md)
-> * [.NET SDK v2](sql-api-sdk-dotnet.md)
-> * [.NET Core SDK v2](sql-api-sdk-dotnet-core.md)
-> * [.NET Change Feed SDK v2](sql-api-sdk-dotnet-changefeed.md)
-> * [Node.js](sql-api-sdk-node.md)
-> * [Java SDK v4](sql-api-sdk-java-v4.md)
-> * [Async Java SDK v2](sql-api-sdk-async-java.md)
-> * [Sync Java SDK v2](sql-api-sdk-java.md)
-> * [Spring Data v2](sql-api-sdk-java-spring-v2.md)
-> * [Spring Data v3](sql-api-sdk-java-spring-v3.md)
-> * [Spark 3 OLTP Connector](sql-api-sdk-java-spark-v3.md)
-> * [Spark 2 OLTP Connector](sql-api-sdk-java-spark.md)
-> * [Python](sql-api-sdk-python.md)
-> * [REST](/rest/api/cosmos-db/)
-> * [REST Resource Provider](/rest/api/cosmos-db-resource-provider/)
-> * [SQL](sql-query-getting-started.md)
-> * [Bulk executor - .NET v2](sql-api-sdk-bulk-executor-dot-net.md)
-> * [Bulk executor - Java](sql-api-sdk-bulk-executor-java.md)
| | Links | |||
cosmos-db Sql Api Sdk Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-dotnet.md
# Azure Cosmos DB .NET SDK v2 for SQL API: Download and release notes (Legacy) [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
-> [!div class="op_single_selector"]
-> * [.NET SDK v3](sql-api-sdk-dotnet-standard.md)
-> * [.NET SDK v2](sql-api-sdk-dotnet.md)
-> * [.NET Core SDK v2](sql-api-sdk-dotnet-core.md)
-> * [.NET Change Feed SDK v2](sql-api-sdk-dotnet-changefeed.md)
-> * [Node.js](sql-api-sdk-node.md)
-> * [Java SDK v4](sql-api-sdk-java-v4.md)
-> * [Async Java SDK v2](sql-api-sdk-async-java.md)
-> * [Sync Java SDK v2](sql-api-sdk-java.md)
-> * [Spring Data v2](sql-api-sdk-java-spring-v2.md)
-> * [Spring Data v3](sql-api-sdk-java-spring-v3.md)
-> * [Spark 3 OLTP Connector](sql-api-sdk-java-spark-v3.md)
-> * [Spark 2 OLTP Connector](sql-api-sdk-java-spark.md)
-> * [Python](sql-api-sdk-python.md)
-> * [REST](/rest/api/cosmos-db/)
-> * [REST Resource Provider](/rest/api/cosmos-db-resource-provider/)
-> * [SQL](sql-query-getting-started.md)
-> * [Bulk executor - .NET v2](sql-api-sdk-bulk-executor-dot-net.md)
-> * [Bulk executor - Java](sql-api-sdk-bulk-executor-java.md)
| | Links | |||
cosmos-db Sql Api Sdk Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-go.md
+
+ Title: 'Azure Cosmos DB: SQL Go, SDK & resources'
+description: Learn all about the SQL API and Go SDK including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB Go SDK.
+++
+ms.devlang: golang
+ Last updated : 03/22/2022+++++
+# Azure Cosmos DB Go SDK for SQL API: Download and release notes
+++
+| | Links |
+|||
+|**Release notes**|[Release notes](https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/dat)|
+|**SDK download**|[Go pkg](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos)|
+|**API documentation**|[API reference documentation](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos#pkg-types)|
+|**Samples**|[Code samples](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos#pkg-overview)|
+|**Get started**|[Get started with the Azure Cosmos DB Go SDK](create-sql-api-go.md)|
+
+> [!IMPORTANT]
+> The Go SDK for Azure Cosmos DB is currently in beta. This beta is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities.
+>
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Release history
+
+Release history is maintained in the Azure Cosmos DB Go SDK source repo. For a detailed list of feature releases and bugs fixed in each release, see the [SDK changelog documentation](https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/dat).
+
+## FAQ
++
+## See also
+
+To learn more about Cosmos DB, see the [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service page.
cosmos-db Sql Api Sdk Java Spark V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-java-spark-v3.md
# Azure Cosmos DB Apache Spark 3 OLTP Connector for Core (SQL) API: Release notes and resources [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
-> [!div class="op_single_selector"]
-> * [.NET SDK v3](sql-api-sdk-dotnet-standard.md)
-> * [.NET SDK v2](sql-api-sdk-dotnet.md)
-> * [.NET Core SDK v2](sql-api-sdk-dotnet-core.md)
-> * [.NET Change Feed SDK v2](sql-api-sdk-dotnet-changefeed.md)
-> * [Node.js](sql-api-sdk-node.md)
-> * [Java SDK v4](sql-api-sdk-java-v4.md)
-> * [Async Java SDK v2](sql-api-sdk-async-java.md)
-> * [Sync Java SDK v2](sql-api-sdk-java.md)
-> * [Spring Data v2](sql-api-sdk-java-spring-v2.md)
-> * [Spring Data v3](sql-api-sdk-java-spring-v3.md)
-> * [Spark 3 OLTP Connector](sql-api-sdk-java-spark-v3.md)
-> * [Spark 2 OLTP Connector](sql-api-sdk-java-spark.md)
-> * [Python](sql-api-sdk-python.md)
-> * [REST](/rest/api/cosmos-db/)
-> * [REST Resource Provider](/rest/api/cosmos-db-resource-provider/)
-> * [SQL](sql-query-getting-started.md)
-> * [Bulk executor - .NET v2](sql-api-sdk-bulk-executor-dot-net.md)
-> * [Bulk executor - Java](sql-api-sdk-bulk-executor-java.md)
**Azure Cosmos DB OLTP Spark connector** provides Apache Spark support for Azure Cosmos DB using the SQL API. Azure Cosmos DB is a globally-distributed database service which allows developers to work with data using a variety of standard APIs, such as SQL, MongoDB, Cassandra, Graph, and Table.
cosmos-db Sql Api Sdk Java Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-java-spark.md
# Azure Cosmos DB Apache Spark 2 OLTP Connector for Core (SQL) API: Release notes and resources [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
-> [!div class="op_single_selector"]
-> * [.NET SDK v3](sql-api-sdk-dotnet-standard.md)
-> * [.NET SDK v2](sql-api-sdk-dotnet.md)
-> * [.NET Core SDK v2](sql-api-sdk-dotnet-core.md)
-> * [.NET Change Feed SDK v2](sql-api-sdk-dotnet-changefeed.md)
-> * [Node.js](sql-api-sdk-node.md)
-> * [Java SDK v4](sql-api-sdk-java-v4.md)
-> * [Async Java SDK v2](sql-api-sdk-async-java.md)
-> * [Sync Java SDK v2](sql-api-sdk-java.md)
-> * [Spring Data v2](sql-api-sdk-java-spring-v2.md)
-> * [Spring Data v3](sql-api-sdk-java-spring-v3.md)
-> * [Spark 3 OLTP Connector](sql-api-sdk-java-spark-v3.md)
-> * [Spark 2 OLTP Connector](sql-api-sdk-java-spark.md)
-> * [Python](sql-api-sdk-python.md)
-> * [REST](/rest/api/cosmos-db/)
-> * [REST Resource Provider](/rest/api/cosmos-db-resource-provider/)
-> * [SQL](sql-query-getting-started.md)
-> * [Bulk executor - .NET v2](sql-api-sdk-bulk-executor-dot-net.md)
-> * [Bulk executor - Java](sql-api-sdk-bulk-executor-java.md)
You can accelerate big data analytics by using the Azure Cosmos DB Apache Spark 2 OLTP Connector for Core (SQL). The Spark Connector allows you to run [Spark](https://spark.apache.org/) jobs on data stored in Azure Cosmos DB. Batch and stream processing are supported.
cosmos-db Sql Api Sdk Java Spring V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-java-spring-v2.md
# Spring Data Azure Cosmos DB v2 for Core (SQL) API (legacy): Release notes and resources [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
-> [!div class="op_single_selector"]
-> * [.NET SDK v3](sql-api-sdk-dotnet-standard.md)
-> * [.NET SDK v2](sql-api-sdk-dotnet.md)
-> * [.NET Core SDK v2](sql-api-sdk-dotnet-core.md)
-> * [.NET Change Feed SDK v2](sql-api-sdk-dotnet-changefeed.md)
-> * [Node.js](sql-api-sdk-node.md)
-> * [Java SDK v4](sql-api-sdk-java-v4.md)
-> * [Async Java SDK v2 (legacy)](sql-api-sdk-async-java.md)
-> * [Sync Java SDK v2 (legacy)](sql-api-sdk-java.md)
-> * [Spring Data v2 (legacy)](sql-api-sdk-java-spring-v2.md)
-> * [Spring Data v3](sql-api-sdk-java-spring-v3.md)
-> * [Spark 3 OLTP Connector](sql-api-sdk-java-spark-v3.md)
-> * [Spark 2 OLTP Connector](sql-api-sdk-java-spark.md)
-> * [Python](sql-api-sdk-python.md)
-> * [REST](/rest/api/cosmos-db/)
-> * [REST Resource Provider](/rest/api/cosmos-db-resource-provider/)
-> * [SQL](sql-query-getting-started.md)
-> * [Bulk executor - .NET v2](sql-api-sdk-bulk-executor-dot-net.md)
-> * [Bulk executor - Java](sql-api-sdk-bulk-executor-java.md)
Spring Data Azure Cosmos DB version 2 for Core (SQL) allows developers to use Azure Cosmos DB in Spring applications. Spring Data Azure Cosmos DB exposes the Spring Data interface for manipulating databases and collections, working with documents, and issuing queries. Both Sync and Async (Reactive) APIs are supported in the same Maven artifact.
cosmos-db Sql Api Sdk Java Spring V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-java-spring-v3.md
# Spring Data Azure Cosmos DB v3 for Core (SQL) API: Release notes and resources [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
-> [!div class="op_single_selector"]
-> * [.NET SDK v3](sql-api-sdk-dotnet-standard.md)
-> * [.NET SDK v2](sql-api-sdk-dotnet.md)
-> * [.NET Core SDK v2](sql-api-sdk-dotnet-core.md)
-> * [.NET Change Feed SDK v2](sql-api-sdk-dotnet-changefeed.md)
-> * [Node.js](sql-api-sdk-node.md)
-> * [Java SDK v4](sql-api-sdk-java-v4.md)
-> * [Async Java SDK v2](sql-api-sdk-async-java.md)
-> * [Sync Java SDK v2](sql-api-sdk-java.md)
-> * [Spring Data v2](sql-api-sdk-java-spring-v2.md)
-> * [Spring Data v3](sql-api-sdk-java-spring-v3.md)
-> * [Spark 3 OLTP Connector](sql-api-sdk-java-spark-v3.md)
-> * [Spark 2 OLTP Connector](sql-api-sdk-java-spark.md)
-> * [Python](sql-api-sdk-python.md)
-> * [REST](/rest/api/cosmos-db/)
-> * [REST Resource Provider](/rest/api/cosmos-db-resource-provider/)
-> * [SQL](sql-query-getting-started.md)
-> * [Bulk executor - .NET v2](sql-api-sdk-bulk-executor-dot-net.md)
-> * [Bulk executor - Java](sql-api-sdk-bulk-executor-java.md)
The Spring Data Azure Cosmos DB version 3 for Core (SQL) allows developers to use Azure Cosmos DB in Spring applications. Spring Data Azure Cosmos DB exposes the Spring Data interface for manipulating databases and collections, working with documents, and issuing queries. Both Sync and Async (Reactive) APIs are supported in the same Maven artifact.
cosmos-db Sql Api Sdk Java V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-java-v4.md
# Azure Cosmos DB Java SDK v4 for Core (SQL) API: release notes and resources [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
-> [!div class="op_single_selector"]
-> * [.NET SDK v3](sql-api-sdk-dotnet-standard.md)
-> * [.NET SDK v2](sql-api-sdk-dotnet.md)
-> * [.NET Core SDK v2](sql-api-sdk-dotnet-core.md)
-> * [.NET Change Feed SDK v2](sql-api-sdk-dotnet-changefeed.md)
-> * [Node.js](sql-api-sdk-node.md)
-> * [Java SDK v4](sql-api-sdk-java-v4.md)
-> * [Async Java SDK v2](sql-api-sdk-async-java.md)
-> * [Sync Java SDK v2](sql-api-sdk-java.md)
-> * [Spring Data v2](sql-api-sdk-java-spring-v2.md)
-> * [Spring Data v3](sql-api-sdk-java-spring-v3.md)
-> * [Spark 3 OLTP Connector](sql-api-sdk-java-spark-v3.md)
-> * [Spark 2 OLTP Connector](sql-api-sdk-java-spark.md)
-> * [Python](sql-api-sdk-python.md)
-> * [REST](/rest/api/cosmos-db/)
-> * [REST Resource Provider](/rest/api/cosmos-db-resource-provider/)
-> * [SQL](sql-query-getting-started.md)
-> * [Bulk executor - .NET v2](sql-api-sdk-bulk-executor-dot-net.md)
-> * [Bulk executor - Java](sql-api-sdk-bulk-executor-java.md)
+ The Azure Cosmos DB Java SDK v4 for Core (SQL) combines an Async API and a Sync API into one Maven artifact. The v4 SDK brings enhanced performance, new API features, and Async support based on Project Reactor and the [Netty library](https://netty.io/). Users can expect improved performance with Azure Cosmos DB Java SDK v4 versus the [Azure Cosmos DB Async Java SDK v2](sql-api-sdk-async-java.md) and the [Azure Cosmos DB Sync Java SDK v2](sql-api-sdk-java.md).
cosmos-db Sql Api Sdk Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-java.md
# Azure Cosmos DB Java SDK for SQL API (legacy): Release notes and resources [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
-> [!div class="op_single_selector"]
-> * [.NET SDK v3](sql-api-sdk-dotnet-standard.md)
-> * [.NET SDK v2](sql-api-sdk-dotnet.md)
-> * [.NET Core SDK v2](sql-api-sdk-dotnet-core.md)
-> * [.NET Change Feed SDK v2](sql-api-sdk-dotnet-changefeed.md)
-> * [Node.js](sql-api-sdk-node.md)
-> * [Java SDK v4](sql-api-sdk-java-v4.md)
-> * [Async Java SDK v2 (legacy)](sql-api-sdk-async-java.md)
-> * [Sync Java SDK v2 (legacy)](sql-api-sdk-java.md)
-> * [Spring Data v2 (legacy)](sql-api-sdk-java-spring-v2.md)
-> * [Spring Data v3](sql-api-sdk-java-spring-v3.md)
-> * [Spark 3 OLTP Connector](sql-api-sdk-java-spark-v3.md)
-> * [Spark 2 OLTP Connector](sql-api-sdk-java-spark.md)
-> * [Python](sql-api-sdk-python.md)
-> * [REST](/rest/api/cosmos-db/)
-> * [REST Resource Provider](/rest/api/cosmos-db-resource-provider/)
-> * [SQL](sql-query-getting-started.md)
-> * [Bulk executor - .NET v2](sql-api-sdk-bulk-executor-dot-net.md)
-> * [Bulk executor - Java](sql-api-sdk-bulk-executor-java.md)
+ This is the original Azure Cosmos DB Sync Java SDK v2 for SQL API which supports synchronous operations.
cosmos-db Sql Api Sdk Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-node.md
# Azure Cosmos DB Node.js SDK for SQL API: Release notes and resources [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
-> [!div class="op_single_selector"]
-> * [.NET SDK v3](sql-api-sdk-dotnet-standard.md)
-> * [.NET SDK v2](sql-api-sdk-dotnet.md)
-> * [.NET Core SDK v2](sql-api-sdk-dotnet-core.md)
-> * [.NET Change Feed SDK v2](sql-api-sdk-dotnet-changefeed.md)
-> * [Node.js](sql-api-sdk-node.md)
-> * [Java SDK v4](sql-api-sdk-java-v4.md)
-> * [Async Java SDK v2](sql-api-sdk-async-java.md)
-> * [Sync Java SDK v2](sql-api-sdk-java.md)
-> * [Spring Data v2](sql-api-sdk-java-spring-v2.md)
-> * [Spring Data v3](sql-api-sdk-java-spring-v3.md)
-> * [Spark 3 OLTP Connector](sql-api-sdk-java-spark-v3.md)
-> * [Spark 2 OLTP Connector](sql-api-sdk-java-spark.md)
-> * [Python](sql-api-sdk-python.md)
-> * [REST](/rest/api/cosmos-db/)
-> * [REST Resource Provider](/rest/api/cosmos-db-resource-provider/)
-> * [SQL](sql-query-getting-started.md)
-> * [Bulk executor - .NET v2](sql-api-sdk-bulk-executor-dot-net.md)
-> * [Bulk executor - Java](sql-api-sdk-bulk-executor-java.md)
+ |Resource |Link | |||
cosmos-db Sql Api Sdk Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-python.md
# Azure Cosmos DB Python SDK for SQL API: Release notes and resources [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
-> [!div class="op_single_selector"]
-> * [.NET SDK v3](sql-api-sdk-dotnet-standard.md)
-> * [.NET SDK v2](sql-api-sdk-dotnet.md)
-> * [.NET Core SDK v2](sql-api-sdk-dotnet-core.md)
-> * [.NET Change Feed SDK v2](sql-api-sdk-dotnet-changefeed.md)
-> * [Node.js](sql-api-sdk-node.md)
-> * [Java SDK v4](sql-api-sdk-java-v4.md)
-> * [Async Java SDK v2](sql-api-sdk-async-java.md)
-> * [Sync Java SDK v2](sql-api-sdk-java.md)
-> * [Spring Data v2](sql-api-sdk-java-spring-v2.md)
-> * [Spring Data v3](sql-api-sdk-java-spring-v3.md)
-> * [Spark 3 OLTP Connector](sql-api-sdk-java-spark-v3.md)
-> * [Spark 2 OLTP Connector](sql-api-sdk-java-spark.md)
-> * [Python](sql-api-sdk-python.md)
-> * [REST](/rest/api/cosmos-db/)
-> * [REST Resource Provider](/rest/api/cosmos-db-resource-provider/)
-> * [SQL](sql-query-getting-started.md)
-> * [Bulk executor - .NET v2](sql-api-sdk-bulk-executor-dot-net.md)
-> * [Bulk executor - Java](sql-api-sdk-bulk-executor-java.md)
| Page| Link | |||
cosmos-db Sql Query Join https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-join.md
The FROM source of the JOIN clause is an iterator. So, the flow in the preceding
The first item, `AndersenFamily`, contains only one `children` element, so the result set contains only a single object. The second item, `WakefieldFamily`, contains two `children`, so the cross product produces two objects, one for each `children` element. The root fields in both these items are the same, just as you would expect in a cross product.
+The preceding example returns just the `id` property for the result of the query. If we want to return the entire document (all the fields) for each child, we can alter the SELECT portion of the query:
+
+```sql
+ SELECT VALUE f
+ FROM Families f
+ JOIN c IN f.children
+ WHERE f.id = 'WakefieldFamily'
+ ORDER BY f.address.city ASC
+```
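If you want to run the preceding query programmatically, the snippet below is a minimal sketch using the azure-cosmos Python SDK. The SDK choice, account endpoint, key, database name, and container name are placeholders added for illustration and aren't part of the article.

```python
# Minimal sketch: run the JOIN query above with the azure-cosmos Python SDK.
# Endpoint, key, database, and container names are placeholders.
from azure.cosmos import CosmosClient

client = CosmosClient("https://<account-name>.documents.azure.com:443/", credential="<account-key>")
container = client.get_database_client("FamilyDatabase").get_container_client("Families")

query = """
SELECT VALUE f
FROM Families f
JOIN c IN f.children
WHERE f.id = 'WakefieldFamily'
ORDER BY f.address.city ASC
"""

# One result object is produced per (family, child) pair in the cross product.
for family in container.query_items(query=query, enable_cross_partition_query=True):
    print(family["id"])
```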
+ The real utility of the JOIN clause is to form tuples from the cross product in a shape that's otherwise difficult to project. The example below filters on the combination of a tuple that lets the user choose a condition satisfied by the tuples overall. ```sql
cosmos-db Sql Sdk Connection Modes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-sdk-connection-modes.md
The two available connectivity modes are:
* Direct mode
- Direct mode supports connectivity through TCP protocol and offers better performance because there are fewer network hops. The application connects directly to the backend replicas. Direct mode is currently only supported on .NET and Java SDK platforms.
+ Direct mode supports connectivity through the TCP protocol, using TLS for initial authentication and traffic encryption, and offers better performance because there are fewer network hops. The application connects directly to the backend replicas. Direct mode is currently only supported on .NET and Java SDK platforms.
:::image type="content" source="./media/performance-tips/connection-policy.png" alt-text="The Azure Cosmos DB connectivity modes" border="false":::
The following table shows a summary of the connectivity modes available for vari
|Connection mode |Supported protocol |Supported SDKs |API/Service port | ||||| |Gateway | HTTPS | All SDKs | SQL (443), MongoDB (10250, 10255, 10256), Table (443), Cassandra (10350), Graph (443) <br> The port 10250 maps to a default Azure Cosmos DB API for MongoDB instance without geo-replication. Whereas the ports 10255 and 10256 map to the instance that has geo-replication. |
-|Direct | TCP | .NET SDK Java SDK | When using public/service endpoints: ports in the 10000 through 20000 range<br>When using private endpoints: ports in the 0 through 65535 range |
+|Direct | TCP (Encrypted via TLS) | .NET SDK Java SDK | When using public/service endpoints: ports in the 10000 through 20000 range<br>When using private endpoints: ports in the 0 through 65535 range |
## <a id="direct-mode"></a> Direct mode connection architecture
For specific SDK platform performance optimizations:
* Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning. * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
- * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+ * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Table Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/table-import.md
# Migrate your data to an Azure Cosmos DB Table API account [!INCLUDE[appliesto-table-api](../includes/appliesto-table-api.md)]
-This tutorial provides instructions on importing data for use with the Azure Cosmos DB [Table API](introduction.md). If you have data stored in Azure Table Storage, you can use either the data migration tool or AzCopy to import your data to the Azure Cosmos DB Table API.
+This tutorial provides instructions on importing data for use with the Azure Cosmos DB [Table API](introduction.md). If you have data stored in Azure Table Storage, you can use the **Data migration tool** to import your data to the Azure Cosmos DB Table API.
-This tutorial covers the following tasks:
-
-> [!div class="checklist"]
-> * Importing data with the data migration tool
-> * Importing data with AzCopy
## Prerequisites
You can use the command-line data migration tool (dt.exe) in Azure Cosmos DB to
To migrate table data:
-1. Download the migration tool from [GitHub](https://github.com/azure/azure-documentdb-datamigrationtool).
+1. Download the migration tool from [GitHub](https://github.com/azure/azure-documentdb-datamigrationtool/tree/archive).
2. Run `dt.exe` by using the command-line arguments for your scenario. `dt.exe` takes a command in the following format: ```bash
Here's a command-line sample showing how to import from Table Storage to the Tab
```bash dt /s:AzureTable /s.ConnectionString:DefaultEndpointsProtocol=https;AccountName=<Azure Table storage account name>;AccountKey=<Account Key>;EndpointSuffix=core.windows.net /s.Table:<Table name> /t:TableAPIBulk /t.ConnectionString:DefaultEndpointsProtocol=https;AccountName=<Azure Cosmos DB account name>;AccountKey=<Azure Cosmos DB account key>;TableEndpoint=https://<Account name>.table.cosmos.azure.com:443 /t.TableName:<Table name> /t.Overwrite ```-
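As an illustration only, the following Python sketch assembles the same arguments as the sample above and runs the tool with `subprocess`. It assumes `dt.exe` is installed locally and on the PATH (Windows only); the connection strings and table names are placeholders.

```python
# Sketch: run the data migration tool with the same flags as the sample command.
# Assumes dt.exe (the data migration tool) is available on PATH; placeholders must be replaced.
import subprocess

source_conn = "DefaultEndpointsProtocol=https;AccountName=<storage-account>;AccountKey=<key>;EndpointSuffix=core.windows.net"
target_conn = ("DefaultEndpointsProtocol=https;AccountName=<cosmos-account>;AccountKey=<key>;"
               "TableEndpoint=https://<cosmos-account>.table.cosmos.azure.com:443")

args = [
    "dt",
    "/s:AzureTable", f"/s.ConnectionString:{source_conn}", "/s.Table:<table-name>",
    "/t:TableAPIBulk", f"/t.ConnectionString:{target_conn}", "/t.TableName:<table-name>",
    "/t.Overwrite",
]
subprocess.run(args, check=True)  # raises CalledProcessError if the migration fails
```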
-## Migrate data by using AzCopy
-
-You can also use the AzCopy command-line utility to migrate data from Table Storage to the Azure Cosmos DB Table API. To use AzCopy, you first export your data as described in [Export data from Table Storage](/previous-versions/azure/storage/storage-use-azcopy#export-data-from-table-storage). Then, you import the data to Azure Cosmos DB Table API with the following command. You can also import into [Azure Table storage](/previous-versions/azure/storage/storage-use-azcopy#import-data-into-table-storage).
-
-Refer to the following sample when you're importing into Azure Cosmos DB. Note that the `/Dest` value uses `cosmosdb`, not `core`.
-
-Example import command:
-
-```bash
-AzCopy /Source:C:\myfolder\ /Dest:https://myaccount.table.cosmosdb.windows.net/mytable1/ /DestKey:key /Manifest:"myaccount_mytable_20140103T112020.manifest" /EntityOperation:InsertOrReplace
-```
- ## Next steps Learn how to query data by using the Azure Cosmos DB Table API.
cosmos-db Throughput Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/throughput-serverless.md
Title: How to choose between provisioned throughput and serverless on Azure Cosmos DB description: Learn about how to choose between provisioned throughput and serverless for your workload. --++ Previously updated : 03/24/2022-- Last updated : 05/09/2022++ # How to choose between provisioned throughput and serverless
Azure Cosmos DB is available in two different capacity modes: [provisioned throu
| Criteria | Provisioned throughput | Serverless | | | | | | Best suited for | Workloads with sustained traffic requiring predictable performance | Workloads with intermittent or unpredictable traffic and low average-to-peak traffic ratio |
-| How it works | For each of your containers, you provision some amount of throughput expressed in [Request Units](request-units.md) per second. Every second, this amount of Request Units is available for your database operations. Provisioned throughput can be updated manually or adjusted automatically with [autoscale](provision-throughput-autoscale.md). | You run your database operations against your containers without having to provision any capacity. |
-| Geo-distribution | Available (unlimited number of Azure regions) | Unavailable (serverless accounts can only run in 1 Azure region) |
-| Maximum storage per container | Unlimited | 50 GB |
-| Performance | < 10 ms latency for point-reads and writes covered by SLA | < 10 ms latency for point-reads and < 30 ms for writes covered by SLO |
-| Billing model | Billing is done on a per-hour basis for the RU/s provisioned, regardless of how many RUs were consumed. | Billing is done on a per-hour basis for the amount of RUs consumed by your database operations. |
+| How it works | For each of your containers, you configure some amount of provisioned throughput expressed in [Request Units (RUs)](request-units.md) per second. Every second, this quantity of Request Units is available for your database operations. Provisioned throughput can be updated manually or adjusted automatically with [autoscale](provision-throughput-autoscale.md). | You run your database operations against your containers without having to configure any previously provisioned capacity. |
+| Geo-distribution | Available (unlimited number of Azure regions) | Unavailable (serverless accounts can only run in a single Azure region) |
+| Maximum storage per container | Unlimited | 50 GB<sup>1</sup> |
+| Performance | < 10-ms latency for point-reads and writes covered by SLA | < 10-ms latency for point-reads and < 30 ms for writes covered by SLO |
+| Billing model | Billing is done on a per-hour basis for the RU/s provisioned, regardless of how many RUs were consumed. | Billing is done on a per-hour basis for the number of RUs consumed by your database operations. |
+
+<sup>1</sup> Serverless containers up to 1 TB are currently in preview with Azure Cosmos DB. To try the new feature, register the *"Azure Cosmos DB Serverless 1 TB Container Preview"* [preview feature in your Azure subscription](/azure/azure-resource-manager/management/preview-features).
## Estimating your expected consumption
-In some situations, it may be unclear whether provisioned throughput or serverless should be chosen for a given workload. To help with this decision, you can estimate your overall **expected consumption**, that is what's the total number of RUs you may consume over a month (you can estimate this with the help of the table shown [here](plan-manage-costs.md#estimating-serverless-costs))
+In some situations, it may be unclear whether provisioned throughput or serverless should be chosen for a given workload. To help with this decision, you can estimate your overall **expected consumption**, or the total number of RUs you may consume over a month.
+
+For more information, see [estimating serverless costs](plan-manage-costs.md#estimating-serverless-costs).
**Example 1**: a workload is expected to burst to a maximum of 500 RU/s and consume a total of 20,000,000 RUs over a month. -- In provisioned throughput mode, you would provision a container with 500 RU/s for a monthly cost of: $0.008 * 5 * 730 = **$29.20**
+- In provisioned throughput mode, you would configure a container with provisioned throughput at a quantity of 500 RU/s for a monthly cost of: $0.008 * 5 * 730 = **$29.20**
- In serverless mode, you would pay for the consumed RUs: $0.25 * 20 = **$5.00** **Example 2**: a workload is expected to burst to a maximum of 500 RU/s and consume a total of 250,000,000 RUs over a month. -- In provisioned throughput mode, you would provision a container with 500 RU/s for a monthly cost of: $0.008 * 5 * 730 = **$29.20**
+- In provisioned throughput mode, you would configure a container with provisioned throughput at a quantity of 500 RU/s for a monthly cost of: $0.008 * 5 * 730 = **$29.20**
- In serverless mode, you would pay for the consumed RUs: $0.25 * 250 = **$62.50**
-(These examples are not accounting for the storage cost, which is the same between the two modes.)
+(These examples aren't accounting for the storage cost, which is the same between the two modes.)
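To make the arithmetic in these examples explicit, here's a small Python sketch that reproduces both calculations from the illustrative rates used above (the rates are examples only, not current pricing).

```python
# Reproduces the cost arithmetic from the two examples above.
# Rates are the illustrative values used in the examples, not authoritative pricing.
PROVISIONED_RATE_PER_100_RUS_HOUR = 0.008   # $ per 100 RU/s per hour
SERVERLESS_RATE_PER_MILLION_RUS = 0.25      # $ per 1,000,000 consumed RUs
HOURS_PER_MONTH = 730

def provisioned_monthly_cost(provisioned_rus: int) -> float:
    return PROVISIONED_RATE_PER_100_RUS_HOUR * (provisioned_rus / 100) * HOURS_PER_MONTH

def serverless_monthly_cost(consumed_rus: int) -> float:
    return SERVERLESS_RATE_PER_MILLION_RUS * (consumed_rus / 1_000_000)

print(provisioned_monthly_cost(500))          # 29.2  (both examples)
print(serverless_monthly_cost(20_000_000))    # 5.0   (example 1)
print(serverless_monthly_cost(250_000_000))   # 62.5  (example 2)
```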
> [!NOTE] > The costs shown in the previous example are for demonstration purposes only. See the [pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/) for the latest pricing information.
cosmos-db Try Free https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/try-free.md
+
+ Title: Try Azure Cosmos DB free
+description: Try Azure Cosmos DB free of charge. No sign-up or credit card required. It's easy to test your apps, deploy, and run small workloads free for 30 days. Upgrade your account at any time during your trial.
+++++ Last updated : 08/26/2021++
+# Try Azure Cosmos DB free
+
+[Try Azure Cosmos DB](https://aka.ms/trycosmosdb) makes it easy to try out Azure Cosmos DB for free before you commit. There's no credit card required to get started. Your account is free for 30 days; after expiration, a new sandbox account can be created. You can also extend the trial beyond 30 days by an additional 24 hours. You can upgrade your active Try Azure Cosmos DB account at any time during the 30-day trial period. If you're using the SQL API, you can migrate your Try Azure Cosmos DB data to your upgraded account.
+
+This article walks you through how to create your account, the limits that apply, and how to upgrade your account. It also explains how to migrate your data from your Try Azure Cosmos DB sandbox to your own account using the SQL API.
+
+## Try Azure Cosmos DB limits
+
+The following table lists the limits for the [Try Azure Cosmos DB](https://aka.ms/trycosmosdb) free trial.
+
+| Resource | Limit |
+| | |
+| Duration of the trial | 30 days (a new trial can be requested after expiration). After expiration, the information stored is deleted. Before expiration, you can upgrade your account and migrate the information stored. |
+| Maximum containers per subscription (SQL, Gremlin, Table API) | 1 |
+| Maximum containers per subscription (MongoDB API) | 3 |
+| Maximum throughput per container | 5,000 RU/s |
+| Maximum throughput per shared-throughput database | 20,000 RU/s |
+| Maximum total storage per account | 10 GB |
+
+Try Azure Cosmos DB supports global distribution in only the Central US, North Europe, and Southeast Asia regions. Azure support tickets can't be created for Try Azure Cosmos DB accounts. However, support is provided for subscribers with existing support plans.
+
+## Create your Try Azure Cosmos DB account
+
+From the [Try Azure Cosmos DB home page](https://aka.ms/trycosmosdb), select an API. Azure Cosmos DB provides five APIs: Core (SQL) and MongoDB for document data, Gremlin for graph data, Azure Table, and Cassandra.
+
+> [!NOTE]
+> Not sure which API will best meet your needs? To learn more about the APIs for Azure Cosmos DB, see [Choose an API in Azure Cosmos DB](choose-api.md).
++
+## Launch a Quick Start
+
+Launch the Quickstart in Data Explorer in the Azure portal to start using Azure Cosmos DB, or get started with our documentation.
+
+* [Core (SQL) API Quickstart](sql/create-cosmosdb-resources-portal.md#add-a-database-and-a-container)
+* [MongoDB API Quickstart](mongodb/create-mongodb-python.md#learn-the-object-model)
+* [Apache Cassandra API](cassandr)
+* [Gremlin (Graph) API](graph/create-graph-console.md#add-a-graph)
+* [Azure Table API](table/create-table-dotnet.md)
+
+You can also get started with one of the learning resources in Data Explorer.
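As a minimal illustration of the Core (SQL) API path, the sketch below uses the azure-cosmos Python SDK to create a database, a container, and an item. The endpoint and key come from your account's **Keys** page; the database, container, and item values shown here are placeholders, not part of the article.

```python
# Minimal Core (SQL) API sketch with the azure-cosmos Python SDK.
# Replace the endpoint and key with the values from your account's Keys page.
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("https://<account-name>.documents.azure.com:443/", credential="<account-key>")
database = client.create_database_if_not_exists(id="ToDoList")
container = database.create_container_if_not_exists(
    id="Items", partition_key=PartitionKey(path="/category")
)
container.upsert_item({"id": "1", "category": "personal", "description": "Try Azure Cosmos DB"})
```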
++
+## Upgrade your account
+
+Your account is free for 30 days. After expiration, a new sandbox account can be created. You can upgrade your active Try Azure Cosmos DB account at any time during the 30-day trial period. Here are the steps to start an upgrade.
+
+1. Select the option to upgrade your current account in the Dashboard page or from the [Try Azure Cosmos DB](https://aka.ms/trycosmosdb) page.
+
+ :::image type="content" source="media/try-free/upgrade-account.png" lightbox="media/try-free/upgrade-account.png" alt-text="Confirmation page for the account upgrade experience.":::
+
+1. Select **Sign up for Azure Account** & create an Azure Cosmos DB account.
+
+If you're using the SQL API, you can migrate your database from Try Azure Cosmos DB to your new Azure account after you've signed up for an Azure account. Here are the steps to migrate.
+
+### Create an Azure Cosmos DB account
++
+Navigate back to the **Upgrade** page and select **Next** to move on to the third step and move your data.
+
+> [!NOTE]
+> You can have up to one free tier Azure Cosmos DB account per Azure subscription and must opt-in when creating the account. If you do not see the option to apply the free tier discount, this means another account in the subscription has already been enabled with free tier.
++
+## Migrate your Try Azure Cosmos DB data
+
+If you're using the SQL API, you can migrate your Try Azure Cosmos DB data to your upgraded account. Here's how to migrate your Try Azure Cosmos DB database to your new Azure Cosmos DB Core (SQL) API account.
+
+### Prerequisites
+
+* Must be using the Azure Cosmos DB Core (SQL) API.
+* Must have an active Try Azure Cosmos DB account and Azure account.
+* Must have an Azure Cosmos DB account using the Core (SQL) API in your Azure account.
+
+### Migrate your data
+
+1. Locate your **Primary Connection string** for the Azure Cosmos DB account you created for your data.
+
+ 1. Go to your Azure Cosmos DB Account in the Azure portal.
+
+ 1. Find the connection string of your new Cosmos DB account within the **Keys** page of your new account.
+
+ :::image type="content" source="media/try-free/migrate-data.png" lightbox="media/try-free/migrate-data.png" alt-text="Screenshot of the Keys page for an Azure Cosmos DB account.":::
+
+1. Insert the connection string of the new Cosmos DB account in the **Upgrade your account** page.
+
+1. Select **Next** to move the data to your account.
+
+1. Provide your email address to be notified by email once the migration has been completed.
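Optionally, before pasting the connection string into the **Upgrade your account** page, you can confirm that it's valid. The sketch below is an assumption-based illustration using the azure-cosmos Python SDK and the primary connection string copied from the **Keys** page; it isn't part of the migration steps.

```python
# Optional check: verify the primary connection string of the new account
# before using it, with the azure-cosmos Python SDK.
from azure.cosmos import CosmosClient

conn_str = "AccountEndpoint=https://<account-name>.documents.azure.com:443/;AccountKey=<account-key>;"
client = CosmosClient.from_connection_string(conn_str)

# Listing databases succeeds only if the endpoint and key are valid.
print([db["id"] for db in client.list_databases()])
```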
+
+## Delete your account
+
+There can only be one free Try Azure Cosmos DB account per Microsoft account. If you want to delete your account, or if you want to try different APIs, you'll have to create a new account. Here's how to delete your account.
+
+1. Go to the [Try Azure Cosmos DB](https://aka.ms/trycosmosdb) page.
+
+1. Select **Delete my account**.
+
+ :::image type="content" source="media/try-free/upgrade-account.png" lightbox="media/try-free/upgrade-account.png" alt-text="Confirmation page for the account upgrade experience.":::
+
+## Next steps
+
+After you create a Try Azure Cosmos DB sandbox account, you can start building apps with Azure Cosmos DB with the following articles:
+
+* Use [SQL API to build a console app using .NET](sql/sql-api-get-started.md) to manage data in Azure Cosmos DB.
+* Use [MongoDB API to build a sample app using Python](mongodb/create-mongodb-python.md) to manage data in Azure Cosmos DB.
+* [Download a notebook from the gallery](publish-notebook-gallery.md#download-a-notebook-from-the-gallery) and analyze your data.
+* Learn more about [understanding your Azure Cosmos DB bill](understand-your-bill.md)
+* Get started with Azure Cosmos DB with one of our quickstarts:
+ * [Get started with Azure Cosmos DB SQL API](sql/create-cosmosdb-resources-portal.md#add-a-database-and-a-container)
+ * [Get started with Azure Cosmos DB API for MongoDB](mongodb/create-mongodb-python.md#learn-the-object-model)
+ * [Get started with Azure Cosmos DB Cassandra API](cassandr)
+ * [Get started with Azure Cosmos DB Gremlin API](graph/create-graph-console.md#add-a-graph)
+ * [Get started with Azure Cosmos DB Table API](table/create-table-dotnet.md)
+* Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for [capacity planning](sql/estimate-ru-with-capacity-planner.md).
+* If all you know is the number of vCores and servers in your existing database cluster, see [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md).
+* If you know typical request rates for your current database workload, see [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md).
cost-management-billing Cancel Azure Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/cancel-azure-subscription.md
After you cancel, your services are disabled. That means your virtual machines a
After your subscription is canceled, Microsoft waits 30 - 90 days before permanently deleting your data in case you need to access it or you change your mind. We don't charge you for keeping the data. To learn more, see [Microsoft Trust Center - How we manage your data](https://go.microsoft.com/fwLink/p/?LinkID=822930&clcid=0x409).
-## Delete free trial or pay-as-you-go subscriptions
+## Delete subscriptions
If you have a free trial or pay-as-you-go subscription, you don't have to wait 90 days for the subscription to automatically delete. You can delete your subscription *three days* after you cancel it. The **Delete subscription** option isn't available until three days after you cancel your subscription.
cost-management-billing Ea Transfers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-transfers.md
Previously updated : 02/24/2022 Last updated : 05/23/2022
Keep the following points in mind when you transfer an enterprise account to a n
- Only the accounts specified in the request are transferred. If all accounts are chosen, then they're all transferred. - The source enrollment keeps its status as active or extended. You can continue using the enrollment until it expires.
+- You can't change account ownership during a transfer. After the account transfer is complete, the current account owner can change account ownership in the EA portal. Keep in mind that an EA administrator can't change account ownership.
### Prerequisites
-When you request an account transfer, provide the following information:
+When you request an account transfer with a support request, provide the following information:
- The number of the target enrollment, account name, and account owner email of account to transfer - The enrollment number and account to transfer for the source enrollment Other points to keep in mind before an account transfer: -- Approval from an EA Administrator is required for the target and source enrollment.
+- Approval from a full EA Administrator, not a read-only EA administrator, is required for the target and source enrollment.
+ - If you have only UPN (User Principal Name) entities configured as full EA administrators without access to e-mail, you must perform one of the following actions:
+ - Create a temporary full EA administrator account in the EA portal
+ &mdash; Or &mdash;
+ - Provide EA portal screenshot evidence of a user account associated with the UPN account
- You should consider an enrollment transfer if an account transfer doesn't meet your requirements. - Your account transfer moves all services and subscriptions related to the specific accounts. - Your transferred account appears inactive under the source enrollment and appears active under the target enrollment when the transfer is complete.
When you request to transfer an entire enterprise enrollment to an enrollment, t
- Usage transferred may take up to 72 hours to be reflected in the new enrollment. - If department administrator (DA) or account owner (AO) view charges were enabled on the transferred enrollment, they must be enabled on the new enrollment. - If you're using API reports or Power BI, generate a new API key under your new enrollment.
+ - For reporting, all APIs use either the old enrollment or the new one, not both. If you need reporting from APIs for the old and new enrollments, you must create your own reports.
- All Azure services, subscriptions, accounts, departments, and the entire enrollment structure, including all EA department administrators, transfer to a new target enrollment. - The enrollment status is set to _Transferred_. The transferred enrollment is available for historic usage reporting purposes only. - You can't add roles or subscriptions to a transferred enrollment. Transferred status prevents more usage against the enrollment. - Any remaining Azure Prepayment balance in the agreement is lost, including future terms.-- If the enrollment you're transferring from has reservation purchases, the reservation purchasing fee will remain in the source enrollment. However, all reservation benefits will be transferred across for use in the new enrollment.-- The marketplace one-time purchase fee and any monthly fixed fees already incurred on the old enrollment aren't transferred to the new enrollment. Consumption-based marketplace charges will be transferred.
+- If the enrollment you're transferring from has reservation purchases, the historic (past) reservation purchasing fee will remain in the source enrollment. All future purchasing fees transfer to the new enrollment. Additionally, all reservation benefits will be transferred across for use in the new enrollment.
+- The historic marketplace one-time purchase fee and any monthly fixed fees already incurred on the old enrollment aren't transferred to the new enrollment. Consumption-based marketplace charges will be transferred.
### Effective transfer date
-The effective transfer day can be on or after the start date of the target enrollment. Transfers can only be backdated till the first day of the month in which request is made.
+The effective transfer day can be on or after the start date of the target enrollment. Transfers can only be backdated to the first day of the month in which the request is made. Additionally, if individual subscriptions are deleted or transferred in the current month, the deletion or transfer date becomes the new earliest possible effective transfer date.
The source enrollment usage is charged against Azure Prepayment or as overage. Usage that occurs after the effective transfer date is transferred to the new enrollment and charged.
Other points to keep in mind before an enrollment transfer:
- Any API keys used in the source enrollment must be regenerated for the target enrollment. - If the source and destination enrollments are on different cloud instances, the transfer will fail. Support personnel can transfer only within the same cloud instance. - For reservations (reserved instances):
- - The enrollment or account transfer between different currencies affects monthly reservation purchases.
+ - The enrollment or account transfer between different currencies affects monthly reservation purchases. The following image illustrates the effects.
+ :::image type="content" source="./media/ea-transfers/cross-currency-reservation-transfer-effects.png" alt-text="Diagram illustrating the effects of cross currency reservation transfers." border="false" lightbox="./media/ea-transfers/cross-currency-reservation-transfer-effects.png":::
 - Whenever there is a currency change during or after an enrollment transfer, reservations paid for monthly are canceled for the source enrollment at the time of the next monthly payment for an individual reservation. This cancellation is intentional and affects only the monthly reservation purchases.
- - You may have to repurchase the canceled monthly reservations from the source enrollment using the new enrollment in the local or new currency.
+ - You may have to repurchase the canceled monthly reservations from the source enrollment using the new enrollment in the local or new currency. If you repurchase a reservation, the purchase term (one or three years) is reset. The repurchase doesn't continue under the previous term.
### Auto enrollment transfer
cost-management-billing Elevate Access Global Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/elevate-access-global-admin.md
+
+ Title: Elevate access to manage billing accounts
+
+description: Describes how to elevate access for a Global Administrator to manage billing accounts using the Azure portal or REST API.
++
+tags: billing
+++ Last updated : 5/18/2022+++
+# Elevate access to manage billing accounts
+
+As a Global Administrator in Azure Active Directory (Azure AD), you might not have access to all billing accounts in your directory. This article describes the ways that you can elevate your access to all billing accounts.
+
+Elevating your access to manage all billing accounts gives you the ability to view and manage cost and billing for your accounts. You can view invoices, charges, products that are purchased, and the users that have access to the billing accounts. If you want to elevate your access to manage subscriptions, management groups, and resources, see [Elevate access to manage all Azure subscriptions and management groups](../../role-based-access-control/elevate-access-global-admin.md#elevate-access-to-manage-all-azure-subscriptions-and-management-groups).
+
+> [!NOTE]
+> Elevated access only works for Microsoft Customer Agreement (MCA) and Microsoft Partner Agreement (MPA) billing account types. As a Global Administrator you can't elevate your access to manage billing accounts for Enterprise Agreement (EA) and Microsoft Online Service Program (MOSP) types. To learn more about billing accounts, see [Billing accounts and scopes in the Azure portal](view-all-accounts.md).
+
+## Why elevate your access?
+
+If you're a Global Administrator, there might be times when you want to do the following actions:
+
+- See all users who have created individual billing accounts in your organization.
+- View invoices and charges for all individual billing accounts created in your organization.
+- Regain access to a billing account when a user has lost access.
+- Perform billing administration for an account when other administrators aren't available.
+
+## How does elevated access work?
+
+All Global Administrators in Azure Active Directory (Azure AD) get read-only access to all Microsoft Customer Agreement (MCA) and Microsoft Partner Agreement (MPA) billing accounts in their Azure Active Directory. They can view all billing accounts and the corresponding cost and billing information. Along with a read-only view, they get permission to manage role assignments on the billing accounts. They can add themselves as owners of the billing accounts to elevate themselves.
+
+## Elevate access to manage billing accounts
+
+### [Azure portal](#tab/portal)
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+2. Search for **Cost Management + Billing**.
+ ![Screenshot that shows Search in the Azure portal for cost management + billing.](./media/elevate-access-global-admin/billing-search-cost-management-billing.png)
+3. Select **Billing scopes** on the left side of the page.
+4. On the Billing scopes page, select the box to view all billing accounts.
+ :::image type="content" source="./media/elevate-access-global-admin/global-admin-view-all-accounts.png" alt-text="Screenshot that shows global admins selecting the box to view all accounts." lightbox="./media/elevate-access-global-admin/global-admin-view-all-accounts.png" :::
+ > [!NOTE]
+ > The billing scopes page shows only 200 scopes. However, you can use the search box in the page to search for accounts that are not part of the list.
+5. Select a Microsoft Customer Agreement or a Microsoft Partner Agreement billing account from the list. As a Global administrator, you have read access to the billing account. To elevate yourself to perform write operations, follow steps 6 to 8.
+6. Select **Access Control (IAM)** on the left side of the page.
+7. Select **Add** at the top of the page.
+ :::image type="content" source="./media/elevate-access-global-admin/role-assignment-list.png" alt-text="Screenshot showing global admins selecting Add." lightbox="./media/elevate-access-global-admin/role-assignment-list.png" :::
+8. In the Add permission window, in the **Role** list, select **Billing account owner**. Under the **Select** area, select your user name, and then select **Save** at the bottom of the window.
+ :::image type="content" source="./media/elevate-access-global-admin/role-assignment-add.png" alt-text="Screenshot showing a global admin adding themself as an owner." lightbox="./media/elevate-access-global-admin/role-assignment-add.png" :::
+
+### [REST API](#tab/rest)
+
+You can use the [Azure Billing](/rest/api/billing/) APIs to programmatically elevate yourself to manage all billing accounts in your directory.
+
+### Find all billing accounts in your directory
+
+```json
+GET https://management.azure.com/providers/Microsoft.Billing/billingAccounts?includeAllOrgs=true&api-version=2020-05-01
+```
+
+The API response returns a list of billing accounts in your directory.
+
+```json
+{
+ "value": [
+ {
+ "id": "/providers/Microsoft.Billing/billingAccounts/6e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx_xxxx-xx-xx",
+ "name": "6e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx_xxxx-xx-xx",
+ "properties": {
+ "accountStatus": "Active",
+ "accountType": "Individual",
+ "agreementType": "MicrosoftCustomerAgreement",
+ "billingProfiles": {
+ "hasMoreResults": false
+ },
+ "displayName": "Connie Wilson",
+ "hasReadAccess": true
+ },
+ "type": "Microsoft.Billing/billingAccounts"
+ },
+ {
+ "id": "/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx_xxxx-xx-xx",
+ "name": "5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx_xxxx-xx-xx",
+ "properties": {
+ "accountStatus": "Active",
+ "accountType": "Enterprise",
+ "agreementType": "MicrosoftCustomerAgreement",
+ "billingProfiles": {
+ "hasMoreResults": false
+ },
+ "displayName": "Contoso",
+ "hasReadAccess": true
+ },
+ "type": "Microsoft.Billing/billingAccounts"
+ },
+ {
+ "id": "/providers/Microsoft.Billing/billingAccounts/4e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx_xxxx-xx-xx",
+ "name": "4e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx_xxxx-xx-xx",
+ "properties": {
+ "accountStatus": "Active",
+ "accountType": "Individual",
+ "agreementType": "MicrosoftCustomerAgreement",
+ "billingProfiles": {
+ "hasMoreResults": false
+ },
+ "displayName": "Tomas Wilson",
+ "hasReadAccess": true
+ },
+ "type": "Microsoft.Billing/billingAccounts"
+ }
+ ]
+}
+```
+
+Use the `displayName` property of the billing account to identify the billing account for which you want to elevate your access. Copy the `name` of the billing account. For example, if you want to elevate yourself as owner on the **Connie Wilson** billing account, you'd copy `6e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx_xxxx-xx-xx`. Paste the value somewhere so that you can use it in the next step.
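If you prefer to script these calls, the following sketch shows one way to do it in Python. It assumes the `azure-identity` and `requests` packages, which aren't required by this article, and uses the **Connie Wilson** account from the sample response as the target.

```python
# Sketch (assumes azure-identity and requests): call the billingAccounts API
# and pick out the account to elevate by its displayName.
import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
resp = requests.get(
    "https://management.azure.com/providers/Microsoft.Billing/billingAccounts",
    params={"includeAllOrgs": "true", "api-version": "2020-05-01"},
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()

target = next(a for a in resp.json()["value"]
              if a["properties"]["displayName"] == "Connie Wilson")
print(target["name"])  # the billing account name used in the next steps
```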
+
+### Get definitions of roles available for your billing account
+
+Make the following request, replacing `<billingAccountName>` with the `name` copied in the first step (`6e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx_xxxx-xx-xx`).
+
+```json
+GET https://management.azure.com/providers/Microsoft.Billing/billingAccounts/<billingAccountName>/billingRoleDefinitions?api-version=2020-05-01
+```
+
+The API response returns a list of roles available to your billing account.
+
+```json
+{
+ "value": [
+ {
+ "id": "/providers/Microsoft.Billing/billingAccounts/6e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx_xxxx-xx-xx/billingRoleDefinitions/50000000-aaaa-bbbb-cccc-100000000000",
+ "name": "50000000-aaaa-bbbb-cccc-100000000000",
+ "properties": {
+ "description": "The Owner role gives the user all permissions including access management on a billing account.",
+ "permissions": [
+ {
+ "actions": [
+ "50000000-aaaa-bbbb-cccc-200000000000",
+ "50000000-aaaa-bbbb-cccc-200000000001",
+ "50000000-aaaa-bbbb-cccc-200000000002",
+ "50000000-aaaa-bbbb-cccc-200000000003"
+ ]
+ }
+ ],
+ "roleName": "Billing account owner"
+ },
+ "type": "Microsoft.Billing/billingAccounts/billingRoleDefinitions"
+ },
+ {
+ "id": "/providers/Microsoft.Billing/billingAccounts/6e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx_xxxx-xx-xx/billingRoleDefinitions/50000000-aaaa-bbbb-cccc-100000000001",
+ "name": "50000000-aaaa-bbbb-cccc-100000000001",
+ "properties": {
+ "description": "The Contributor role gives the user all permissions except access management on a billing account.",
+ "permissions": [
+ {
+ "actions": [
+ "50000000-aaaa-bbbb-cccc-200000000001",
+ "50000000-aaaa-bbbb-cccc-200000000002",
+ "50000000-aaaa-bbbb-cccc-200000000003"
+ ]
+ }
+ ],
+ "roleName": "Billing account contributor"
+ },
+ "type": "Microsoft.Billing/billingAccounts/billingRoleDefinitions"
+ },
+ {
+ "id": "/providers/Microsoft.Billing/billingAccounts/6e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx_xxxx-xx-xx/billingRoleDefinitions/50000000-aaaa-bbbb-cccc-100000000002",
+ "name": "50000000-aaaa-bbbb-cccc-100000000002",
+ "properties": {
+ "description": "The Reader role gives the user read permissions to a billing account.",
+ "permissions": [
+ {
+ "actions": [
+ "50000000-aaaa-bbbb-cccc-200000000001",
+ "50000000-aaaa-bbbb-cccc-200000000006",
+ "50000000-aaaa-bbbb-cccc-200000000007"
+ ]
+ }
+ ],
+ "roleName": "Billing account reader"
+ },
+ "type": "Microsoft.Billing/billingAccounts/billingRoleDefinitions"
+ }
+ ]
+}
+```
+
+Use the `roleName` property to identify the owner role definition. Copy the `name` of the role definition. For example, from the above API response, you'd copy `50000000-aaaa-bbbb-cccc-100000000000`. Paste this value somewhere so that you can use it in the next step.
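A similar Python sketch (again assuming the `azure-identity` and `requests` packages, which aren't required by this article) can pick out the owner role definition by its `roleName`. Replace the placeholder with the billing account name copied in the first step.

```python
# Sketch: list role definitions for the billing account and pick the owner role.
import requests
from azure.identity import DefaultAzureCredential

billing_account_name = "<billingAccountName>"  # the name copied in the first step
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

resp = requests.get(
    f"https://management.azure.com/providers/Microsoft.Billing/billingAccounts/{billing_account_name}/billingRoleDefinitions",
    params={"api-version": "2020-05-01"},
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()
owner_role = next(r for r in resp.json()["value"]
                  if r["properties"]["roleName"] == "Billing account owner")
print(owner_role["name"])  # the role definition name to use in the next step
```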
+
+### Add yourself as an owner
+
+Make the following request, replacing `<billingAccountName>` with the `name` copied in the first step (`6e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx_xxxx-xx-xx`).
+
+```json
+PUT https://management.azure.com/providers/Microsoft.Billing/billingAccounts/<billingAccountName>/createBillingRoleAssignment?api-version=2020-05-01
+```
+
+#### Request body
+
+To add yourself as an owner, you need to get your object ID. You can find the object ID either in the Users page of the Azure Active Directory section in the Azure portal, or you can use the [Microsoft Graph API](/graph/api/resources/users?view=graph-rest-1.0&preserve-view=true) to get the object ID.
++
+In the request body, replace `<roleDefinitionName>` with the `name` copied from Step 2. Replace `<principalId>` with the object ID that you got either from the Azure portal or through the Microsoft Graph API.
+
+```json
+{
+ "principalId": "<principalId>",
+ "roleDefinitionId": "<roleDefinitionName>"
+}
+```
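Finally, here's a Python sketch (same `azure-identity` and `requests` assumptions as above) that sends the role assignment request shown in this step, with the placeholders replaced by the values gathered earlier.

```python
# Sketch: create the role assignment that adds your object ID as a billing account owner.
# Replace all placeholders with the values from the previous steps.
import requests
from azure.identity import DefaultAzureCredential

billing_account_name = "<billingAccountName>"
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

resp = requests.put(
    f"https://management.azure.com/providers/Microsoft.Billing/billingAccounts/{billing_account_name}/createBillingRoleAssignment",
    params={"api-version": "2020-05-01"},
    headers={"Authorization": f"Bearer {token}"},
    json={"principalId": "<principalId>", "roleDefinitionId": "<roleDefinitionName>"},
)
resp.raise_for_status()
```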
+++
+## Need help? Contact support
+
+If you need help, [contact support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) to get your issue resolved quickly.
+
+## Next steps
+
+- [Manage billing roles in the Azure portal](understand-mca-roles.md#manage-billing-roles-in-the-azure-portal)
+- [Get billing ownership of Azure subscriptions from users in other billing accounts](mca-request-billing-ownership.md)
cost-management-billing Manage Tax Information https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/manage-tax-information.md
+
+ Title: Update tax details for an Azure billing account
+description: This article describes how to update your Azure billing account tax details.
++
+tags: billing
+++ Last updated : 05/23/2022++++
+# Update tax details for an Azure billing account
+
+When you buy Azure products and services, the taxes that you pay are determined by one of two things: your sold-to address, or your ship-to/service usage address, if it's different.
+
+This article helps you review and update the sold-to information, ship-to/service usage address, and tax IDs for your Azure billing account. The instructions to update vary by the billing account type. For more information about billing accounts and how to identify your billing account type, see [View billing accounts in Azure portal](view-all-accounts.md). An Azure billing account is separate from your Azure user account and [Microsoft account](https://account.microsoft.com/).
+
+> [!NOTE]
+> When you update the sold to information, ship-to address, and Tax IDs in the Azure portal, the updated values are only used for invoices that are generated in the future. To make changes to an existing invoice, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458).
+
+## Update the sold to address
+
+1. Sign in to the Azure portal using the email address with an owner or a contributor role on the billing account for a Microsoft Customer Agreement (MCA). Or, sign in with an account administrator role for a Microsoft Online Subscription Program (MOSP) billing account. MOSP is also referred to as pay-as-you-go.
+1. Search for **Cost Management + Billing**.
+ ![Screenshot that shows where to search in the Azure portal.](./media/manage-tax-information/search-cmb.png)
+1. In the left menu, select **Properties** and then select **Update sold-to**.
+ :::image type="content" source="./media/manage-tax-information/update-sold-to.png" alt-text="Screenshot showing the properties for an M C A billing account where you modify the sold-to address." lightbox="./media/manage-tax-information/update-sold-to.png" :::
+1. Enter the new address and select **Save**.
+ > [!NOTE]
+ > Some accounts require additional verification before their sold-to can be updated. If your account requires manual approval, you are prompted to contact Azure support.
+
+## Update ship-to address for an MCA billing account
+
+Customers in Canada, Puerto Rico, and the United States can set the ship-to address for their MCA billing accounts. Each billing profile in their account can have its own ship-to address. To use multiple ship-to addresses, create multiple billing profiles, one for each ship-to address.
+
+1. Sign in to the Azure portal using the email address with an owner or a contributor role for the billing account or a billing profile for an MCA.
+1. Search for **Cost Management + Billing**.
+1. In the left menu under **Billing**, select **Billing profiles**.
+1. Select a billing profile to update the ship-to address.
+ :::image type="content" source="./media/manage-tax-information/select-billing-profile.png" alt-text="Screenshot showing the Billing profiles page where you select a billing profile." lightbox="./media/manage-tax-information/select-billing-profile.png" :::
+1. In the left menu under **Settings**, select **Properties**.
+1. Select **Update ship-to/service usage address**.
+ :::image type="content" source="./media/manage-tax-information/update-ship-to-01.png" alt-text="Screenshot showing where to update ship-to/service usage address." lightbox="./media/manage-tax-information/update-ship-to-01.png" :::
+1. Enter the new address and then select **Save**.
+
+## Update ship-to address for a MOSP billing account
+
+Customers with a Microsoft Online Service Program (MOSP) account, also called pay-as-you-go, can set ship-to address for their billing account. Each subscription in their account can have its own ship-to address. To use multiple ship-to addresses, create multiple subscriptions, one for each ship-to address.
+
+1. Sign in to the Azure portal using the email address that has account administrator permission on the account.
+1. Search for **Subscriptions**.
+ :::image type="content" source="./media/manage-tax-information/search-subscriptions.png" alt-text="Screenshot showing where to search for Subscriptions in the Azure portal." lightbox="./media/manage-tax-information/search-subscriptions.png" :::
+1. Select a subscription from the list.
+1. In the left menu under **Settings**, select **Properties**.
+1. Select **Update Address**.
+ :::image type="content" source="./media/manage-tax-information/update-ship-to-02.png" alt-text="Screenshot that shows where to update the address for the MOSP billing account." lightbox="./media/manage-tax-information/update-ship-to-02.png" :::
+1. Enter the new address and then select **Save**.
+
+## Add your tax IDs
+
+In the Azure portal, tax IDs can only be updated for Microsoft Online Service Program (MOSP) or Microsoft Customer Agreement billing accounts that are created through the Azure website.
+
+Customers in the following countries or regions can add their Tax IDs.
+
+|Country/region|Country/region|
+|||
+| Armenia | Australia |
+| Austria | Bahamas |
+| Bahrain | Bangladesh |
+| Belarus | Belgium |
+| Brazil | Bulgaria |
+|Cambodia | Cameroon |
+|Chile | Colombia |
+|Croatia | Cyprus |
+|Czech Republic | Denmark |
+| Estonia | Fiji |
+| Finland | France |
+|Georgia | Germany |
+|Ghana | Greece |
+|Guatemala | Hungary |
+|Iceland | Italy |
+| India <sup>1</sup> | Indonesia |
+|Ireland | Isle of Man |
+|Kenya | Korea |
+| Latvia | Liechtenstein |
+|Lithuania | Luxembourg |
+|Malaysia | Malta |
+| Mexico | Moldova |
+| Monaco | Netherlands |
+| New Zealand | Nigeria |
+| Oman | Philippines |
+| Poland | Portugal |
+| Romania | Saudi Arabia |
+| Serbia | Slovakia |
+| Slovenia | South Africa |
+|Spain | Sweden |
+|Switzerland | Taiwan |
+|Tajikistan | Thailand |
+|Turkey | Ukraine |
+|United Arab Emirates | United Kingdom |
+|Uzbekistan | Vietnam |
+|Zimbabwe | |
+
+1. Sign in to the Azure portal using the email address that has an owner or a contributor role on the billing account for an MCA or an account administrator role for a MOSP billing account.
+1. Search for **Cost Management + Billing**.
+ ![Screenshot that shows where to search for Cost Management + Billing.](./media/manage-tax-information/search-cmb.png)
+1. In the left menu under **Settings**, select **Properties**.
+1. Select **Manage Tax IDs**.
+ :::image type="content" source="./media/manage-tax-information/update-tax-id.png" alt-text="Screenshot showing where to update the Tax I D." lightbox="./media/manage-tax-information/update-tax-id.png" :::
+1. Enter new tax IDs and then select **Save**.
+ > [!NOTE]
+ > If you don't see the Tax IDs section, Tax IDs are not yet collected for your region. Or, updating Tax IDs in the Azure portal isn't supported for your account.
+
+<sup>1</sup> Follow the instructions in the next section to add your Goods and Services Taxpayer Identification Number (GSTIN).
+
+## Add your GSTIN for billing accounts in India
+
+1. Sign in to the Azure portal using the email address that has account administrator permission on the account.
+1. Search for **Subscriptions**.
+1. Select a subscription from the list.
+1. In the left menu, select **Properties**.
+1. Select **Update Address**.
+ :::image type="content" source="./media/manage-tax-information/update-address-india.png" alt-text="Screenshot that shows where to update the tax I D." lightbox="./media/manage-tax-information/update-address-india.png" :::
+1. Enter the new GSTIN and then select **Save**.
+ :::image type="content" source="./media/manage-tax-information/update-tax-id-india.png" alt-text="Screenshot that shows where to update the G S T I N." lightbox="./media/manage-tax-information/update-tax-id-india.png" :::
+
+## Need help? Contact us.
+
+If you have questions or need help, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458).
+
+## Next steps
+
+- [View your billing accounts](view-all-accounts.md)
cost-management-billing Pay By Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/pay-by-invoice.md
tags: billing
Previously updated : 04/25/2022 Last updated : 05/24/2022 # Pay for your Azure subscription by check or wire transfer
-This article applies to customers with a Microsoft Customer Agreement (MCA) and to customers who signed up for Azure through the Azure website (for an Microsoft Online Services Program account also called pay-as-you-go account). If you signed up for Azure through a Microsoft representative, then your default payment method is already be set to *check or wire transfer*.
+This article applies to customers with a Microsoft Customer Agreement (MCA) and to customers who signed up for Azure through the Azure website (for a Microsoft Online Services Program account, also called a pay-as-you-go account). If you signed up for Azure through a Microsoft representative, then your default payment method is already set to *check or wire transfer*.
If you switch to pay by check or wire transfer, that means you pay your bill within 30 days of the invoice date by check/wire transfer.
If you're not automatically approved, you can submit a request to Azure support
- Contact Phone: - Contact Email: - Justification about why you want the check or wire transfer payment option instead of a credit card:
+ - File upload: Attach legal documentation showing the legal company name and company address. Your information in the Azure portal should match the legal information registered in the legal document. You can provide one of the following examples:
+ - A certificate of incorporation signed by the company's legal representatives.
+ - Any government-issued document that includes the company name and address. For example, a tax certification.
+ - Company registration form signed and issued by the government.
- For cores increase, provide the following additional information: - (Old quota) Existing Cores: - (New quota) Requested cores:
cost-management-billing Prepare Buy Reservation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/prepare-buy-reservation.md
You can scope a reservation to a subscription or resource groups. Setting the sc
### Reservation scoping options
-You have three options to scope a reservation, depending on your needs:
+You have four options to scope a reservation, depending on your needs:
- **Single resource group scope** ΓÇö Applies the reservation discount to the matching resources in the selected resource group only. - **Single subscription scope** ΓÇö Applies the reservation discount to the matching resources in the selected subscription.
cost-management-billing Understand Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/understand-invoice.md
tags: billing
Previously updated : 09/15/2021 Last updated : 05/18/2022
first page and shows information about your profile and subscription.
| Customer PO No. |An optional purchase order number, assigned by you for tracking | | Invoice No. |A unique, Microsoft generated invoice number used for tracking purposes | | Billing cycle |Date range that this invoice covers |
-| Invoice date |Date that the invoice was generated, typically a day after end of the Billing cycle |
+| Invoice date |Date that the invoice was generated, typically on the same day of the month that the Azure account was created. However, invoices sometimes get generated a day or two later than that day of the month.|
| Payment method |Type of payment used on the account (invoice or credit card) | | Bill to |Billing address that is listed for the account | | Subscription offer ("Pay-As-You-Go") |Type of subscription offer that was purchased (Pay-As-You-Go, BizSpark Plus, Azure Pass, etc.). For more information, see [Azure offer types](https://azure.microsoft.com/support/legal/offer-details/). |
data-factory Concepts Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-integration-runtime.md
Self-hosted | Data movement<br/>Activity dispatch | Data movement<br/>Activity d
Azure-SSIS | SSIS package execution | SSIS package execution | > [!NOTE]
-> Outbound controls vary by service for Azure IR. In Synapse, workspaces have options to limit outbound traffic from the [managed virtual network](../synapse-analytics/security/synapse-workspace-managed-vnet.md) when utilizing Azure IR. In Data Factory, all ports are opened for [outbound communications](managed-virtual-network-private-endpoint.md#outbound-communications-through-public-endpoint-from-adf-managed-virtual-network) when utilizing Azure IR. Azure-SSIS IR can be integrated with your vNET to provide [outbound communications](azure-ssis-integration-runtime-standard-virtual-network-injection.md) controls.
+> Outbound controls vary by service for Azure IR. In Synapse, workspaces have options to limit outbound traffic from the [managed virtual network](../synapse-analytics/security/synapse-workspace-managed-vnet.md) when utilizing Azure IR. In Data Factory, all ports are opened for [outbound communications](managed-virtual-network-private-endpoint.md#outbound-communications-through-public-endpoint-from-a-data-factory-managed-virtual-network) when utilizing Azure IR. Azure-SSIS IR can be integrated with your vNET to provide [outbound communications](azure-ssis-integration-runtime-standard-virtual-network-injection.md) controls.
## Azure integration runtime
An Azure integration runtime can:
### Azure IR network environment
-Azure Integration Runtime supports connecting to data stores and computes services with public accessible endpoints. Enabling Managed Virtual Network, Azure Integration Runtime supports connecting to data stores using private link service in private network environment. In Synapse, workspaces have options to limit outbound traffic from the IR [managed virtual network](../synapse-analytics/security/synapse-workspace-managed-vnet.md). In Data Factory, all ports are opened for [outbound communications](managed-virtual-network-private-endpoint.md#outbound-communications-through-public-endpoint-from-adf-managed-virtual-network). The Azure-SSIS IR can be integrated with your vNET to provide [outbound communications](azure-ssis-integration-runtime-standard-virtual-network-injection.md) controls.
+Azure Integration Runtime supports connecting to data stores and computes services with public accessible endpoints. Enabling Managed Virtual Network, Azure Integration Runtime supports connecting to data stores using private link service in private network environment. In Synapse, workspaces have options to limit outbound traffic from the IR [managed virtual network](../synapse-analytics/security/synapse-workspace-managed-vnet.md). In Data Factory, all ports are opened for [outbound communications](managed-virtual-network-private-endpoint.md#outbound-communications-through-public-endpoint-from-a-data-factory-managed-virtual-network). The Azure-SSIS IR can be integrated with your vNET to provide [outbound communications](azure-ssis-integration-runtime-standard-virtual-network-injection.md) controls.
### Azure IR compute resource and scaling Azure integration runtime provides a fully managed, serverless compute in Azure. You don't have to worry about infrastructure provision, software installation, patching, or capacity scaling. In addition, you only pay for the duration of the actual utilization.
data-factory Connector Azure Database For Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-database-for-mysql.md
The below table lists the properties supported by Azure Database for MySQL sourc
| - | -- | -- | -- | - | | Table | If you select Table as input, data flow fetches all the data from the table specified in the dataset. | No | - |*(for inline dataset only)*<br>tableName | | Query | If you select Query as input, specify a SQL query to fetch data from source, which overrides any table you specify in dataset. Using queries is a great way to reduce rows for testing or lookups.<br><br>**Order By** clause is not supported, but you can set a full SELECT FROM statement. You can also use user-defined table functions. **select * from udfGetData()** is a UDF in SQL that returns a table that you can use in data flow.<br>Query example: `select * from mytable where customerId > 1000 and customerId < 2000` or `select * from "MyTable"`.| No | String | query |
+| Stored procedure | If you select Stored procedure as input, specify a name of the stored procedure to read data from the source table, or select Refresh to ask the service to discover the procedure names.| Yes (if you select Stored procedure as input) | String | procedureName |
+| Procedure parameters | If you select Stored procedure as input, specify any input parameters for the stored procedure in the order set in the procedure, or select Import to import all procedure parameters using the form `@paraName`. | No | Array | inputs |
| Batch size | Specify a batch size to chunk large data into batches. | No | Integer | batchSize | | Isolation Level | Choose one of the following isolation levels:<br>- Read Committed<br>- Read Uncommitted (default)<br>- Repeatable Read<br>- Serializable<br>- None (ignore isolation level) | No | <small>READ_COMMITTED<br/>READ_UNCOMMITTED<br/>REPEATABLE_READ<br/>SERIALIZABLE<br/>NONE</small> |isolationLevel |
data-factory Connector Azure Database For Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-database-for-postgresql.md
Previously updated : 04/22/2022 Last updated : 05/06/2022 # Copy and transform data in Azure Database for PostgreSQL using Azure Data Factory or Synapse Analytics
The below table lists the properties supported by Azure Database for PostgreSQL
| - | -- | -- | -- | - |
| Table | If you select Table as input, data flow fetches all the data from the table specified in the dataset. | No | - | *(for inline dataset only)*<br>tableName |
| Query | If you select Query as input, specify a SQL query to fetch data from source, which overrides any table you specify in dataset. Using queries is a great way to reduce rows for testing or lookups.<br><br>**Order By** clause is not supported, but you can set a full SELECT FROM statement. You can also use user-defined table functions. **select * from udfGetData()** is a UDF in SQL that returns a table that you can use in data flow.<br>Query example: `select * from mytable where customerId > 1000 and customerId < 2000` or `select * from "MyTable"`. Note in PostgreSQL, the entity name is treated as case-insensitive if not quoted.| No | String | query |
+| Schema name | If you select Stored procedure as input, specify a schema name of the stored procedure, or select Refresh to ask the service to discover the schema names.| No | String | schemaName |
+| Stored procedure | If you select Stored procedure as input, specify a name of the stored procedure to read data from the source table, or select Refresh to ask the service to discover the procedure names.| Yes (if you select Stored procedure as input) | String | procedureName |
+| Procedure parameters | If you select Stored procedure as input, specify any input parameters for the stored procedure in the order set in the procedure, or select Import to import all procedure parameters using the form `@paraName`. | No | Array | inputs |
| Batch size | Specify a batch size to chunk large data into batches. | No | Integer | batchSize |
| Isolation Level | Choose one of the following isolation levels:<br>- Read Committed<br>- Read Uncommitted (default)<br>- Repeatable Read<br>- Serializable<br>- None (ignore isolation level) | No | <small>READ_COMMITTED<br/>READ_UNCOMMITTED<br/>REPEATABLE_READ<br/>SERIALIZABLE<br/>NONE</small> | isolationLevel |
data-factory Connector Dynamics Crm Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-dynamics-crm-office-365.md
Refer to the following table of supported authentication types and configuration
>With the [deprecation of regional Discovery Service](/power-platform/important-changes-coming#regional-discovery-service-is-deprecated), the service has upgraded to leverage [global Discovery Service](/powerapps/developer/data-platform/webapi/discover-url-organization-web-api#global-discovery-service) while using Office 365 Authentication. > [!IMPORTANT]
->If your tenant and user is configured in Azure Active Directory for [conditional access](../active-directory/conditional-access/overview.md) and/or Multi-Factor Authentication is required, you will not be able to use Office 365 Authentication type. For those situations, you must use a Azure Active Directory (Azure AD) service principal authentication.
+>If your tenant and user are configured in Azure Active Directory for [conditional access](../active-directory/conditional-access/overview.md) and/or Multi-Factor Authentication is required, you won't be able to use the Office 365 authentication type. For those situations, you must use Azure Active Directory (Azure AD) service principal authentication.
For Dynamics 365 specifically, the following application types are supported: - Dynamics 365 for Sales
data-factory Create Self Hosted Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-self-hosted-integration-runtime.md
Installation of the self-hosted integration runtime on a domain controller isn't
- [Visual C++ 2010 Redistributable](https://download.microsoft.com/download/3/2/2/3224B87F-CFA0-4E70-BDA3-3DE650EFEBA5/vcredist_x64.exe) Package (x64) - Java Runtime (JRE) version 8 from a JRE provider such as [Adopt OpenJDK](https://adoptopenjdk.net/). Ensure that the JAVA_HOME environment variable is set to the JDK folder (and not just the JRE folder). >[!NOTE]
- >It might be necessary to adjust the Java settings if memory errors occur, as described in the [Parquet format] documentation(format-parquet#using-self-hosted-integration-runtime).
+ >It might be necessary to adjust the Java settings if memory errors occur, as described in the [Parquet format](./format-parquet.md#using-self-hosted-integration-runtime) documentation.
>[!NOTE]
data-factory Data Factory Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-factory-private-link.md
Last updated 03/18/2022
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-xxx-md.md)]
-By using Azure private link, you can connect to various platforms as a service (PaaS) deployments in Azure via a private endpoint. A private endpoint is a private IP address within a specific virtual network and subnet. For a list of PaaS deployments that support private link functionality, see [Private Link documentation](../private-link/index.yml).
+By using Azure Private Link, you can connect to various platform as a service (PaaS) deployments in Azure via a private endpoint. A private endpoint is a private IP address within a specific virtual network and subnet. For a list of PaaS deployments that support Private Link functionality, see [Private Link documentation](../private-link/index.yml).
+
+## Secure communication between customer networks and Data Factory
-## Secure communication between customer networks and Azure Data Factory
You can set up an Azure virtual network as a logical representation of your network in the cloud. Doing so provides the following benefits:+ * You help protect your Azure resources from attacks in public networks.
-* You let the networks and data factory securely communicate with each other.
+* You let the networks and data factory securely communicate with each other.
+
+You can also connect an on-premises network to your virtual network. Set up an Internet Protocol security (IPsec) VPN connection, which is a site-to-site connection. Or set up an Azure ExpressRoute connection, which is a private peering connection.
-You can also connect an on-premises network to your virtual network by setting up an Internet Protocol security (IPsec) VPN (site-to-site) connection or an Azure ExpressRoute (private peering) connection.
+You can also install a self-hosted integration runtime (IR) on an on-premises machine or a virtual machine in the virtual network. Doing so lets you:
-You can also install a self-hosted integration runtime on an on-premises machine or a virtual machine in the virtual network. Doing so lets you:
* Run copy activities between a cloud data store and a data store in a private network.
-* Dispatch transform activities against compute resources in an on-premises network or an Azure virtual network.
+* Dispatch transform activities against compute resources in an on-premises network or an Azure virtual network.
-Several communication channels are required between Azure data factory and the customer virtual network, as shown in the following table:
+Several communication channels are required between Azure Data Factory and the customer virtual network, as shown in the following table:
| Domain | Port | Description |
| - | -- | - |
-| `adf.azure.com` | 443 | Azure data factory portal, required by data factory authoring and monitoring. |
-| `*.{region}.datafactory.azure.net` | 443 | Required by the self-hosted integration runtime to connect to the Data Factory service. |
-| `*.servicebus.windows.net` | 443 | Required by the self-hosted integration runtime for interactive authoring. |
-| `download.microsoft.com` | 443 | Required by the self-hosted integration runtime for downloading the updates. |
+| `adf.azure.com` | 443 | The Data Factory portal, required for Data Factory authoring and monitoring. |
+| `*.{region}.datafactory.azure.net` | 443 | Required by the self-hosted IR to connect to Data Factory. |
+| `*.servicebus.windows.net` | 443 | Required by the self-hosted IR for interactive authoring. |
+| `download.microsoft.com` | 443 | Required by the self-hosted IR for downloading the updates. |
> [!NOTE]
-> Disabling public network access is applicable only to the self-hosted integration runtime, not to Azure Integration Runtime and SQL Server Integration Services (SSIS) Integration Runtime.
+> Disabling public network access applies only to the self-hosted IR, not to Azure IR and SQL Server Integration Services IR.
-The communications to Azure data factory service go through private link and help provide secure private connectivity.
+The communications to Data Factory go through Private Link and help provide secure private connectivity.
+
+Enabling Private Link for each of the preceding communication channels offers the following functionality:
-Enabling the private link service for each of the preceding communication channels offers the following functionality:
- **Supported**:
- - You can author and monitor in the data factory portal from your virtual network, even if you block all outbound communications. It should be noted that even if you create a private endpoint for the portal, others can still access the Azure data factory portal through the public network.
- - The command communications between the self-hosted integration runtime and the Azure data factory service can be performed securely in a private network environment. The traffic between the self-hosted integration runtime and the Azure data factory service goes through private link.
+ - You can author and monitor in the Data Factory portal from your virtual network, even if you block all outbound communications. If you create a private endpoint for the portal, others can still access the Data Factory portal through the public network.
+ - The command communications between the self-hosted IR and Data Factory can be performed securely in a private network environment. The traffic between the self-hosted IR and Data Factory goes through Private Link.
- **Not currently supported**:
- - Interactive authoring that uses a self-hosted integration runtime, such as test connection, browse folder list and table list, get schema, and preview data, goes through Private Link.
- - The new version of the self-hosted integration runtime that can be automatically downloaded from Microsoft Download Center if you enable Auto-Update, isn't supported at this time.
-
- > [!NOTE]
- > For functionality that's not currently supported, you still need to configure the previously mentioned domain and port in the virtual network or your corporate firewall.
+ - Interactive authoring that uses a self-hosted IR, such as test connection, browse folder list and table list, get schema, and preview data, goes through Private Link.
+ - The new version of the self-hosted IR that can be automatically downloaded from Microsoft Download Center if you enable auto-update isn't supported at this time.
+
+ For functionality that isn't currently supported, you need to configure the previously mentioned domain and port in the virtual network or your corporate firewall.
- > [!NOTE]
- > Connecting to Azure data factory via private endpoint is only applicable to self-hosted integration runtime in data factory. It is not supported for Azure Synapse.
+ Connecting to Data Factory via private endpoint is only applicable to self-hosted IR in Data Factory. It isn't supported for Azure Synapse Analytics.
> [!WARNING]
-> If you enable private link in Azure data factory and block public access at the same time, it is recommended that you store your credentials in an Azure key vault to ensure they are secure.
+> If you enable Private Link in Data Factory and block public access at the same time, store your credentials in Azure Key Vault to ensure they're secure.
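
For example, an application or a script running on the self-hosted IR host can pull credentials from Key Vault at run time instead of storing them locally. The following minimal sketch assumes the `azure-identity` and `azure-keyvault-secrets` Python packages; the vault URL and the secret name `sql-connection-string` are placeholders.

```python
# Minimal sketch: read a connection secret from Azure Key Vault at run time instead
# of storing it with the application or on the self-hosted IR host.
# Assumes the azure-identity and azure-keyvault-secrets packages; the vault URL and
# secret name ("sql-connection-string") are placeholders.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

vault_url = "https://<your-key-vault-name>.vault.azure.net"

client = SecretClient(vault_url=vault_url, credential=DefaultAzureCredential())
secret = client.get_secret("sql-connection-string")

print(f"Retrieved secret '{secret.name}' (value not printed)")
```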
+
+## Configure private endpoint for communication between self-hosted IR and Data Factory
-## Steps to configure private endpoint for communication between self-hosted integration runtime and Azure data factory
-This section will detail how to configure the private endpoint for communication between self-hosted integration runtime and Azure data factory.
+This section describes how to configure the private endpoint for communication between self-hosted IR and Data Factory.
-**Step 1: Create a private endpoint and set up a private link for Azure data factory.**
-The private endpoint is created in your virtual network for the communication between self-hosted integration runtime and Azure data factory service. Follow the details step in [Set up a private endpoint link for Azure Data Factory](#set-up-a-private-endpoint-link-for-azure-data-factory)
+### Create a private endpoint and set up a private link for Data Factory
-**Step 2: Make sure the DNS configuration is correct.**
-Follow the instructions [DNS changes for private endpoints](#dns-changes-for-private-endpoints) to check or configure your DNS settings.
+The private endpoint is created in your virtual network for the communication between self-hosted IR and Data Factory. Follow the steps in [Set up a private endpoint link for Data Factory](#set-up-a-private-endpoint-link-for-data-factory).
-**Step 3: Put FQDNs of Azure Relay and download center into the allow list of your firewall.**
-If your self-hosted integration runtime is installed on the virtual machine in your virtual network, allow outbound traffic to below FQDNs in the NSG of your virtual network.
+### Make sure the DNS configuration is correct
-If your self-hosted integration runtime is installed on the machine in your on-premises environment, please allow outbound traffic to below FQDNs in the firewall of your on-premises environment and NSG of your virtual network.
+Follow the instructions in [DNS changes for private endpoints](#dns-changes-for-private-endpoints) to check or configure your DNS settings.
+
+### Put FQDNs of Azure Relay and Download Center into the allowed list of your firewall
+
+If your self-hosted IR is installed on a virtual machine in your virtual network, allow outbound traffic to the following FQDNs in the NSG of your virtual network.
+
+If your self-hosted IR is installed on a machine in your on-premises environment, allow outbound traffic to the following FQDNs in the firewall of your on-premises environment and in the NSG of your virtual network.
| Domain | Port | Description |
| - | -- | - |
-| `*.servicebus.windows.net` | 443 | Required by the self-hosted integration runtime for interactive authoring. |
-| `download.microsoft.com` | 443 | Required by the self-hosted integration runtime for downloading the updates. |
+| `*.servicebus.windows.net` | 443 | Required by the self-hosted IR for interactive authoring |
+| `download.microsoft.com` | 443 | Required by the self-hosted IR for downloading the updates |
-> [!NOTE]
-> If you don't allow above outbound traffic in the firewall and NSG, self-hosted integration runtime is shown as limited status. But you can still use it to execute activities. Only interactive authoring and auto-update don't work.
+If you don't allow the preceding outbound traffic in the firewall and NSG, self-hosted IR is shown with a **Limited** status. But you can still use it to execute activities. Only interactive authoring and auto-update don't work.
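
As a quick way to confirm whether the required outbound traffic is allowed from the self-hosted IR host, you can test TCP connectivity to those FQDNs on port 443. The sketch below uses only the Python standard library; `<your-relay-namespace>` is a placeholder that you'd replace with the Azure Relay namespace your IR actually uses.

```python
# Minimal sketch: check outbound TCP 443 reachability to the required FQDNs from
# the self-hosted IR host. "<your-relay-namespace>" is a placeholder; replace it
# with the Azure Relay namespace your IR uses.
import socket

ENDPOINTS = [
    ("download.microsoft.com", 443),                          # auto-update downloads
    ("<your-relay-namespace>.servicebus.windows.net", 443),   # interactive authoring
]

for host, port in ENDPOINTS:
    try:
        with socket.create_connection((host, port), timeout=5):
            print(f"OK: {host}:{port} is reachable")
    except OSError as err:
        print(f"Blocked or unreachable: {host}:{port} ({err})")
```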
> [!NOTE]
-> If one data factory (shared) has a self-hosted integration runtime and the self-hosted integration runtime is shared with other data factories (linked). You only need to create private endpoint for the shared data factory, other linked data factories can leverage this private link for the communications between self-hosted integration runtime and Azure data factory service.
+> If one data factory (shared) has a self-hosted IR and the self-hosted IR is shared with other data factories (linked), you only need to create a private endpoint for the shared data factory. Other linked data factories can leverage this private link for the communications between self-hosted IR and Data Factory.
## DNS changes for private endpoints
-When you create a private endpoint, the DNS CNAME resource record for the data factory is updated to an alias in a subdomain with the prefix 'privatelink'. By default, we also create a [private DNS zone](../dns/private-dns-overview.md), corresponding to the 'privatelink' subdomain, with the DNS A resource records for the private endpoints.
-When you resolve the data factory endpoint URL from outside the virtual network with the private endpoint, it resolves to the public endpoint of the data factory service. When resolved from the virtual network hosting the private endpoint, the storage endpoint URL resolves to the private endpoint's IP address.
+When you create a private endpoint, the DNS CNAME resource record for the data factory is updated to an alias in a subdomain with the prefix *privatelink*. By default, we also create a [private DNS zone](../dns/private-dns-overview.md), corresponding to the *privatelink* subdomain, with the DNS A resource records for the private endpoints.
+
+When you resolve the data factory endpoint URL from outside the virtual network with the private endpoint, it resolves to the public endpoint of Data Factory. When resolved from the virtual network hosting the private endpoint, the storage endpoint URL resolves to the private endpoint's IP address.
-For the illustrated example above, the DNS resource records for the data factory 'DataFactoryA', when resolved from outside the virtual network hosting the private endpoint, will be:
+For the preceding illustrated example, the DNS resource records for the data factory called DataFactoryA, when resolved from outside the virtual network hosting the private endpoint, will be:
| Name | Type | Value |
| - | -- | - |
-| DataFactoryA.{region}.datafactory.azure.net | CNAME | < data factory service public endpoint > |
-| < data factory service public endpoint > | A | < data factory service public IP address > |
+| DataFactoryA.{region}.datafactory.azure.net | CNAME | < Data Factory public endpoint > |
+| < Data Factory public endpoint > | A | < Data Factory public IP address > |
The DNS resource records for DataFactoryA, when resolved in the virtual network hosting the private endpoint, will be:
The DNS resource records for DataFactoryA, when resolved in the virtual network
| DataFactoryA.{region}.datafactory.azure.net | CNAME | DataFactoryA.{region}.privatelink.datafactory.azure.net |
| DataFactoryA.{region}.privatelink.datafactory.azure.net | A | < private endpoint IP address > |
-If you're using a custom DNS server on your network, clients must be able to resolve the FQDN for the data factory endpoint to the private endpoint IP address. You should configure your DNS server to delegate your private link subdomain to the private DNS zone for the virtual network, or configure the A records for ' DataFactoryA.{region}.datafactory.azure.net' with the private endpoint IP address.
+If you're using a custom DNS server on your network, clients must be able to resolve the FQDN for the data factory endpoint to the private endpoint IP address. You should configure your DNS server to delegate your Private Link subdomain to the private DNS zone for the virtual network. Or you can configure the A records for DataFactoryA.{region}.datafactory.azure.net with the private endpoint IP address.
+ - [Name resolution for resources in Azure virtual networks](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server) - [DNS configuration for private endpoints](../private-link/private-endpoint-overview.md#dns-configuration) > [!NOTE]
- > There is currently only one Azure data factory portal endpoint and therefore only one private endpoint for portal in a DNS zone. Attempting to create a second or subsequent portal private endpoint will overwrite the previously created private DNS entry for portal.
+ > Currently, there's only one Data Factory portal endpoint, so there's only one private endpoint for the portal in a DNS zone. Attempting to create a second or subsequent portal private endpoint overwrites the previously created private DNS entry for portal.
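
One way to confirm the DNS behavior described above is to resolve the data factory FQDN from a machine inside the virtual network and from one outside it, and compare the answers. A minimal sketch using the Python standard library; `DataFactoryA` and `westeurope` are placeholders for your factory name and region.

```python
# Minimal sketch: see which IP address the data factory FQDN resolves to.
# Run it from inside and outside the virtual network and compare the answers.
import socket

fqdn = "DataFactoryA.westeurope.datafactory.azure.net"  # placeholder FQDN

try:
    addresses = sorted({info[4][0] for info in socket.getaddrinfo(fqdn, 443)})
    print(f"{fqdn} resolves to: {', '.join(addresses)}")
except socket.gaierror as err:
    print(f"Could not resolve {fqdn}: {err}")
```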
+## Set up a private endpoint link for Data Factory
-## Set up a private endpoint link for Azure Data Factory
+In this section, you'll set up a private endpoint link for Data Factory.
-In this section, you'll set up a private endpoint link for Azure Data Factory.
+You can choose whether to connect your self-hosted IR to Data Factory by selecting **Public endpoint** or **Private endpoint** during the Data Factory creation step, shown here:
-You can choose whether to connect your Self-Hosted Integration Runtime (SHIR) to Azure Data Factory via public endpoint or private endpoint during the data factory creation step, shown here:
+You can change the selection any time after creation from the Data Factory portal page on the **Networking** pane. After you enable **Private endpoint** there, you must also add a private endpoint to the data factory.
-You can change the selection anytime after creation from the data factory portal page on the Networking blade. After you enable private endpoints there, you must also add a private endpoint to the data factory.
+A private endpoint requires a virtual network and subnet for the link. In this example, a virtual machine within the subnet is used to run the self-hosted IR, which connects via the private endpoint link.
-A private endpoint requires a virtual network and subnet for the link. In this example, a virtual machine within the subnet will be used to run the Self-Hosted Integration Runtime (SHIR), connecting via the private endpoint link.
+### Create a virtual network
-### Create the virtual network
-If you don't have an existing virtual network to use with your private endpoint link, you must create a one, and assign a subnet.
+If you don't have an existing virtual network to use with your private endpoint link, you must create one and assign a subnet.
-1. Sign into the Azure portal at https://portal.azure.com.
-2. On the upper-left side of the screen, select **Create a resource > Networking > Virtual network** or search for **Virtual network** in the search box.
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. In the upper-left corner of the screen, select **Create a resource** > **Networking** > **Virtual network** or search for **Virtual network** in the search box.
-3. In **Create virtual network**, enter or select this information in the **Basics** tab:
+1. In **Create virtual network**, enter or select this information on the **Basics** tab:
| **Setting** | **Value** |
| -- | -- |
- | **Project Details** | |
- | Subscription | Select your Azure subscription |
- | Resource Group | Select a resource group for your virtual network |
+ | **Project details** | |
+ | Subscription | Select your Azure subscription. |
+ | Resource group | Select a resource group for your virtual network. |
| **Instance details** | |
- | Name | Enter a name for your virtual network |
- | Region | IMPORTANT: Select the same region your private endpoint will use |
+ | Name | Enter a name for your virtual network. |
+ | Region | *Important:* Select the same region your private endpoint will use. |
-4. Select the **IP Addresses** tab or select the **Next: IP Addresses** button at the bottom of the page.
+1. Select the **IP Addresses** tab or select **Next: IP Addresses** at the bottom of the page.
-5. In the **IP Addresses** tab, enter this information:
+1. On the **IP Addresses** tab, enter this information:
| Setting | Value |
| -- | - |
- | IPv4 address space | Enter **10.1.0.0/16** |
+ | IPv4 address space | Enter **10.1.0.0/16**. |
-6. Under **Subnet name**, select the word **default**.
+1. Under **Subnet name**, select the word **default**.
-7. In **Edit subnet**, enter this information:
+1. In **Edit subnet**, enter this information:
| Setting | Value |
| -- | - |
- | Subnet name | Enter a name for your subnet |
- | Subnet address range | Enter **10.1.0.0/24** |
+ | Subnet name | Enter a name for your subnet. |
+ | Subnet address range | Enter **10.1.0.0/24**. |
+
+1. Select **Save**.
+
+1. Select the **Review + create** tab or select the **Review + create** button.
-8. Select **Save**.
+1. Select **Create**.
-9. Select the **Review + create** tab or select the **Review + create** button.
+### Create a virtual machine for the self-hosted IR
-10. Select **Create**.
+You must also create or assign an existing virtual machine to run the self-hosted IR in the new subnet created in the preceding steps.
-### Create a virtual machine for the Self-Hosted Integration Runtime (SHIR)
-You must also create or assign an existing virtual machine to run the Self-Hosted Integration Runtime in the new subnet created above.
+1. In the upper-left corner of the portal, select **Create a resource** > **Compute** > **Virtual machine** or search for **Virtual machine** in the search box.
-1. On the upper-left side of the portal, select **Create a resource** > **Compute** > **Virtual machine** or search for **Virtual machine** in the search box.
-
-2. In **Create a virtual machine**, type or select the values in the **Basics** tab:
+1. In **Create a virtual machine**, enter or select the values on the **Basics** tab:
| Setting | Value |
| -- | - |
- | **Project Details** | |
- | Subscription | Select your Azure subscription |
- | Resource Group | Select a resource group |
+ | **Project details** | |
+ | Subscription | Select your Azure subscription. |
+ | Resource group | Select a resource group. |
| **Instance details** | |
- | Virtual machine name | Enter a name for the virtual machine |
- | Region | Select the region used above for your virtual network |
- | Availability Options | Select **No infrastructure redundancy required** |
- | Image | Select **Windows Server 2019 Datacenter - Gen1** (or any other Windows image that supports the Self-Hosted Integration Runtime) |
- | Azure Spot instance | Select **No** |
- | Size | Choose VM size or take default setting |
+ | Virtual machine name | Enter a name for the virtual machine. |
+ | Region | Select the region you used for your virtual network. |
+ | Availability options | Select **No infrastructure redundancy required**. |
+ | Image | Select **Windows Server 2019 Datacenter - Gen1**, or any other Windows image that supports the self-hosted IR. |
+ | Azure spot instance | Select **No**. |
+ | Size | Choose the VM size or use the default setting. |
| **Administrator account** | |
- | Username | Enter a username |
- | Password | Enter a password |
- | Confirm password | Reenter password |
+ | Username | Enter a username. |
+ | Password | Enter a password. |
+ | Confirm password | Reenter the password. |
-3. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**.
+1. Select the **Networking** tab, or select **Next: Disks** > **Next: Networking**.
-4. In the Networking tab, select or enter:
+1. On the **Networking** tab, select or enter:
| Setting | Value |
| - | - |
| **Network interface** | |
- | Virtual network | Select the virtual network created above. |
- | Subnet | Select the subnet created above. |
+ | Virtual network | Select the virtual network you created. |
+ | Subnet | Select the subnet you created. |
| Public IP | Select **None**. |
- | NIC network security group | **Basic**|
+ | NIC network security group | **Basic**.|
| Public inbound ports | Select **None**. |
-
-5. Select **Review + create**.
-
-6. Review the settings, and then select **Create**.
+
+1. Select **Review + create**.
+1. Review the settings, and then select **Create**.
[!INCLUDE [ephemeral-ip-note.md](../../includes/ephemeral-ip-note.md)]
-### Create the private endpoint
-Finally, you must create the private endpoint in your data factory.
+### Create a private endpoint
+
+Finally, you must create a private endpoint in your data factory.
-1. On the Azure portal page for your data factory, select the **Networking** blade and the **Private endpoint connections** tab, and then select **+ Private endpoint**.
+1. On the Azure portal page for your data factory, select **Networking** > **Private endpoint connections** and then select **+ Private endpoint**.
+ :::image type="content" source="./media/data-factory-private-link/create-private-endpoint.png" alt-text="Screenshot that shows the Private endpoint connections pane used for creating a private endpoint.":::
-2. In the **Basics** tab of **Create a private endpoint**, enter, or select this information:
+1. On the **Basics** tab of **Create a private endpoint**, enter or select this information:
| Setting | Value |
| - | -- |
| **Project details** | |
- | Subscription | Select your subscription |
- | Resource group | Select a resource group |
+ | Subscription | Select your subscription. |
+ | Resource group | Select a resource group. |
| **Instance details** | |
- | Name | Enter a name for your endpoint |
- | Region | Select the region of the virtual network created above |
+ | Name | Enter a name for your endpoint. |
+ | Region | Select the region of the virtual network you created. |
+
+1. Select the **Resource** tab or the **Next: Resource** button at the bottom of the screen.
-3. Select the **Resource** tab or the **Next: Resource** button at the bottom of the page.
-
-4. In **Resource**, enter or select this information:
+1. In **Resource**, enter or select this information:
| Setting | Value |
| - | -- |
- | Connection method | Select **Connect to an Azure resource in my directory** |
- | Subscription | Select your subscription |
- | Resource type | Select **Microsoft.Datafactory/factories** |
- | Resource | Select your data factory |
- | Target sub-resource | If you want to use the private endpoint for command communications between the self-hosted integration runtime and the Azure Data Factory service, select **datafactory** as **Target sub-resource**. If you want to use the private endpoint for authoring and monitoring the data factory in your virtual network, select **portal** as **Target sub-resource**.|
+ | Connection method | Select **Connect to an Azure resource in my directory**. |
+ | Subscription | Select your subscription. |
+ | Resource type | Select **Microsoft.Datafactory/factories**. |
+ | Resource | Select your data factory. |
+ | Target sub-resource | If you want to use the private endpoint for command communications between the self-hosted IR and Data Factory, select **datafactory** as **Target sub-resource**. If you want to use the private endpoint for authoring and monitoring the data factory in your virtual network, select **portal** as **Target sub-resource**.|
-5. Select the **Configuration** tab or the **Next: Configuration** button at the bottom of the screen.
+1. Select the **Configuration** tab or the **Next: Configuration** button at the bottom of the screen.
-6. In **Configuration**, enter or select this information:
+1. In **Configuration**, enter or select this information:
| Setting | Value |
| - | -- |
| **Networking** | |
- | Virtual network | Select the virtual network created above. |
- | Subnet | Select the subnet created above. |
+ | Virtual network | Select the virtual network you created. |
+ | Subnet | Select the subnet you created. |
| **Private DNS integration** | |
| Integrate with private DNS zone | Leave the default of **Yes**. |
| Subscription | Select your subscription. |
- | Private DNS zones | Leave the default value in both Target sub-resources: 1. datafactory: **(New) privatelink.datafactory.azure.net**. 2. portal: **(New) privatelink.adf.azure.com**.|
-
+ | Private DNS zones | Leave the default value in both **Target sub-resources**: 1. datafactory: **(New) privatelink.datafactory.azure.net**. 2. portal: **(New) privatelink.adf.azure.com**.|
-7. Select **Review + create**.
+1. Select **Review + create**.
-8. Select **Create**.
+1. Select **Create**.
+## Restrict access for Data Factory resources by using Private Link
-## Restrict access for data factory resources using private link
-If you want to restrict access for data factory resources in your subscriptions by private link, please follow [Use portal to create private link for managing Azure resources](../azure-resource-manager/management/create-private-link-access-portal.md?source=docs)
-
+If you want to restrict access for Data Factory resources in your subscriptions by Private Link, follow the steps in [Use portal to create a private link for managing Azure resources](../azure-resource-manager/management/create-private-link-access-portal.md?source=docs).
## Known issue
-You're unable to access each other PaaS Resource when both sides are exposed to private Link and private endpoint. This is a known limitation of private link and private endpoint.
-For example, if A is using a private link to access the portal of data factory A in virtual network A. When data factory A doesn't block public access, B can access the portal of data factory A in virtual network B via public. But when customer B creates a private endpoint against data factory B in virtual network B, then customer B can't access data factory A via public in virtual network B anymore.
+You can't access each other's PaaS resources when both sides are exposed to Private Link and a private endpoint. This issue is a known limitation of Private Link and private endpoints.
+
+For example, customer A uses a private link to access the portal of data factory A in virtual network A. When data factory A doesn't block public access, customer B can access the portal of data factory A from virtual network B over the public network. But after customer B creates a private endpoint against data factory B in virtual network B, customer B can no longer access data factory A over the public network from virtual network B.
## Next steps
data-factory Data Factory Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-factory-troubleshoot-guide.md
Previously updated : 03/03/2022 Last updated : 05/12/2022
The following table applies to U-SQL.
- **Recommendation**: Check Azure Machine Learning for more error logs, then fix the ML pipeline.
+## Azure Synapse Analytics
+
+### Error code: 3250
+
+- **Message**: `There are not enough resources available in the workspace, details: '%errorMessage;'`
+
+- **Cause**: Insufficient resources
+
+- **Recommendation**: Try ending the running jobs in the workspace, reducing the number of vCores requested, increasing the workspace quota, or using another workspace.
+
+### Error code: 3251
+
+- **Message**: `There are not enough resources available in the pool, details: '%errorMessage;'`
+
+- **Cause**: Insufficient resources
+
+- **Recommendation**: Try ending the running jobs in the pool, reducing the number of vCores requested, increasing the pool maximum size, or using another pool.
+
+### Error code: 3252
+
+- **Message**: `There are not enough vcores available for your spark job, details: '%errorMessage;'`
+
+- **Cause**: Insufficient vCores
+
+- **Recommendation**: Try reducing the number of vCores requested or increasing your vCore quota. For more information, see [Apache Spark core concepts](../synapse-analytics/spark/apache-spark-concepts.md).
+
+### Error code: 3253
+
+- **Message**: `There are substantial concurrent MappingDataflow executions which is causing failures due to throttling under the Integration Runtime used for ActivityId: '%activityId;'.`
+
+- **Cause**: Throttling threshold was reached.
+
+- **Recommendation**: Retry the request after a wait period.
+
+### Error code: 3254
+
+- **Message**: `AzureSynapseArtifacts linked service has invalid value for property '%propertyName;'.`
+
+- **Cause**: Bad format or missing definition of property '%propertyName;'.
+
+- **Recommendation**: Check if the linked service has property '%propertyName;' defined with correct data.
+ ## Common ### Error code: 2103
data-factory Data Flow Troubleshoot Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-troubleshoot-errors.md
Previously updated : 04/29/2022 Last updated : 05/12/2022 # Common error codes and messages
This article lists common error codes and messages reported by mapping data flow
- **Cause**: Invalid store configuration is provided. - **Recommendation**: Check the parameter value assignment in the pipeline. A parameter expression may contain invalid characters. -
-## Error code: 4502
-- **Message**: There are substantial concurrent MappingDataflow executions that are causing failures due to throttling under Integration Runtime.-- **Cause**: A large number of Data Flow activity runs are occurring concurrently on the integration runtime. For more information, see [Azure Data Factory limits](../azure-resource-manager/management/azure-subscription-service-limits.md#data-factory-limits).-- **Recommendation**: If you want to run more Data Flow activities in parallel, distribute them across multiple integration runtimes.-
-## Error code: 4510
-- **Message**: Unexpected failure during execution. -- **Cause**: Since debug clusters work differently from job clusters, excessive debug runs could wear the cluster over time, which could cause memory issues and abrupt restarts.-- **Recommendation**: Restart Debug cluster. If you are running multiple dataflows during debug session, use activity runs instead because activity level run creates separate session without taxing main debug cluster.- ## Error code: InvalidTemplate - **Message**: The pipeline expression cannot be evaluated. - **Cause**: The pipeline expression passed in the Data Flow activity isn't being processed correctly because of a syntax error.-- **Recommendation**: Check data flow activity name. Check expressions in activity monitoring to verify the expressions. For example, data flow activity name can not have a space or a hyphen.
+- **Recommendation**: Check data flow activity name. Check expressions in activity monitoring to verify the expressions. For example, data flow activity name can't have a space or a hyphen.
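As an illustrative aside (not part of the product), you can pre-check candidate activity names before referencing them in pipeline expressions. The allowed pattern in this sketch (letters, digits, and underscores) is an assumption that simply avoids the spaces and hyphens called out above.

```python
# Illustrative only: flag data flow activity names that would break pipeline
# expressions. The allowed character set here is an assumption.
import re

VALID_NAME = re.compile(r"^[A-Za-z0-9_]+$")

for name in ["TransformOrders", "Transform Orders", "transform-orders"]:
    verdict = "ok" if VALID_NAME.match(name) else "rename: avoid spaces and hyphens"
    print(f"{name!r}: {verdict}")
```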
## Error code: 2011 - **Message**: The activity was running on Azure Integration Runtime and failed to decrypt the credential of data store or compute connected via a Self-hosted Integration Runtime. Please check the configuration of linked services associated with this activity, and make sure to use the proper integration runtime type.
This article lists common error codes and messages reported by mapping data flow
## Error code: DF-Xml-InvalidReferenceResource - **Message**: Reference resource in xml data file cannot be resolved.-- **Cause**: The reference resource in the XML data file cannot be resolved.
+- **Cause**: The reference resource in the XML data file can't be resolved.
- **Recommendation**: Check the reference resource in the XML data file. ## Error code: DF-Xml-InvalidSchema - **Message**: Schema validation failed. - **Cause**: The invalid schema is provided on the XML source.-- **Recommendation**: Check the schema settings on the XML source to make sure that it is the subset schema of the source data.
+- **Recommendation**: Check the schema settings on the XML source to make sure that it's the subset schema of the source data.
## Error code: DF-Xml-UnsupportedExternalReferenceResource - **Message**: External reference resource in xml data file is not supported.
This article lists common error codes and messages reported by mapping data flow
- **Message**: Partition key path cannot be empty for update and delete operations. - **Cause**: The partition key path is empty for update and delete operations. - **Recommendation**: Use the provided partition key in the Azure Cosmos DB sink settings.- - **Message**: Partition key is not mapped in sink for delete and update operations. - **Cause**: An invalid partition key is provided. - **Recommendation**: In Cosmos DB sink settings, use the right partition key that is the same as your container's partition key.
This article lists common error codes and messages reported by mapping data flow
## Error code: DF-SQLDW-InvalidStorageType - **Message**: Storage type can either be blob or gen2. - **Cause**: An invalid storage type is provided for staging.-- **Recommendation**: Check the storage type of the linked service used for staging and make sure that it is Blob or Gen2.
+- **Recommendation**: Check the storage type of the linked service used for staging and make sure that it's Blob or Gen2.
## Error code: DF-SQLDW-InvalidGen2StagingConfiguration - **Message**: ADLS Gen2 storage staging only support service principal key credential.
This article lists common error codes and messages reported by mapping data flow
## Error code: DF-Cosmos-FailToResetThroughput - **Message**: Cosmos DB throughput scale operation cannot be performed because another scale operation is in progress, please retry after sometime.-- **Cause**: The throughput scale operation of the Azure Cosmos DB cannot be performed because another scale operation is in progress.
+- **Cause**: The throughput scale operation of the Azure Cosmos DB can't be performed because another scale operation is in progress.
- **Recommendation**: Login to Azure Cosmos DB account, and manually change container throughput to be auto scale or add a custom activity after mapping data flows to reset the throughput. ## Error code: DF-Executor-InvalidPath - **Message**: Path does not resolve to any file(s). Please make sure the file/folder exists and is not hidden.-- **Cause**: An invalid file/folder path is provided, which cannot be found or accessed.
+- **Cause**: An invalid file/folder path is provided, which can't be found or accessed.
- **Recommendation**: Please check the file/folder path, and make sure it exists and can be accessed in your storage. ## Error code: DF-Executor-InvalidPartitionFileNames
This article lists common error codes and messages reported by mapping data flow
## Error code: DF-Executor-InvalidInputColumns - **Message**: The column in source configuration cannot be found in source data's schema. - **Cause**: Invalid columns are provided on the source.-- **Recommendation**: Check columns in the source configuration and make sure that it is the subset of the source data's schemas.
+- **Recommendation**: Check columns in the source configuration and make sure that they're a subset of the source data's schema.
## Error code: DF-AdobeIntegration-InvalidMapToFilter - **Message**: Custom resource can only have one Key/Id mapped to filter.
This article lists common error codes and messages reported by mapping data flow
- Option-2: Use larger cluster size (for example, 48 cores) to run your data flow pipelines. You can learn more about cluster size through this document: [Cluster size](./concepts-integration-runtime-performance.md#cluster-size).
- - Option-3: Repartition your input data. For the task running on the data flow spark cluster, one partition is one task and runs on one node. If data in one partition is too large, the related task running on the node needs to consume more memory than the node itself, which causes failure. So you can use repartition to avoid data skew, and ensure that data size in each partition is average while the memory consumption is not too heavy.
+ - Option-3: Repartition your input data. For the task running on the data flow spark cluster, one partition is one task and runs on one node. If data in one partition is too large, the related task running on the node needs to consume more memory than the node itself, which causes failure. So you can use repartition to avoid data skew, and ensure that data size in each partition is average while the memory consumption isn't too heavy.
:::image type="content" source="media/data-flow-troubleshoot-guide/configure-partition.png" alt-text="Screenshot that shows the configuration of partitions.":::
This article lists common error codes and messages reported by mapping data flow
## Error code: DF-Cosmos-InvalidAccountKey - **Message**: The input authorization token can't serve the request. Please check that the expected payload is built as per the protocol, and check the key being used.-- **Cause**: There is no enough permission to read/write Azure Cosmos DB data.
+- **Cause**: There's not enough permission to read/write Azure Cosmos DB data.
- **Recommendation**: Please use the read-write key to access Azure Cosmos DB. ## Error code: DF-Cosmos-ResourceNotFound - **Message**: Resource not found.-- **Cause**: Invalid configuration is provided (for example, the partition key with invalid characters) or the resource does not exist.
+- **Cause**: Invalid configuration is provided (for example, the partition key with invalid characters) or the resource doesn't exist.
- **Recommendation**: To solve this issue, refer to [Diagnose and troubleshoot Azure Cosmos DB not found exceptions](../cosmos-db/troubleshoot-not-found.md). ## Error code: DF-Snowflake-IncompatibleDataType
This article lists common error codes and messages reported by mapping data flow
- **Cause**: Operation times out while reading data. - **Recommendation**: Increase the value in **Timeout** option in source transformation settings.
+## Error code: 4502
+- **Message**: There are substantial concurrent MappingDataflow executions that are causing failures due to throttling under Integration Runtime.
+- **Cause**: A large number of Data Flow activity runs are occurring concurrently on the integration runtime. For more information, see [Azure Data Factory limits](../azure-resource-manager/management/azure-subscription-service-limits.md#data-factory-limits).
+- **Recommendation**: If you want to run more Data Flow activities in parallel, distribute them across multiple integration runtimes.
+
+## Error code: 4503
+- **Message**: There are substantial concurrent MappingDataflow executions which is causing failures due to throttling under subscription '%subscriptionId;', ActivityId: '%activityId;'.
+- **Cause**: Throttling threshold was reached.
+- **Recommendation**: Retry the request after a wait period.
+
+## Error code: 4506
+- **Message**: Failed to provision cluster for '%activityId;' because the request computer exceeds the maximum concurrent count of 200. Integration Runtime '%IRName;'
+- **Cause**: Transient error
+- **Recommendation**: Retry the request after a wait period.
+
+## Error code: 4507
+- **Message**: Unsupported compute type and/or core count value.
+- **Cause**: Unsupported compute type and/or core count value was provided.
+- **Recommendation**: Use one of the supported compute type and/or core count values given on this [document](control-flow-execute-data-flow-activity.md#type-properties).
+
+## Error code: 4508
+- **Message**: Spark cluster not found.
+- **Recommendation**: Restart the debug session.
+
+## Error code: 4509
+- **Message**: Hit unexpected failure while allocating compute resources, please retry. If the problem persists, please contact Azure Support
+- **Cause**: Transient error
+- **Recommendation**: Retry the request after a wait period.
+
+## Error code: 4510
+- **Message**: Unexpected failure during execution.
+- **Cause**: Since debug clusters work differently from job clusters, excessive debug runs could wear the cluster over time, which could cause memory issues and abrupt restarts.
+- **Recommendation**: Restart the debug cluster. If you're running multiple data flows during a debug session, use activity runs instead, because an activity-level run creates a separate session without taxing the main debug cluster.
+
+## Error code: 4511
+- **Message**: java.sql.SQLTransactionRollbackException. Deadlock found when trying to get lock; try restarting transaction. If the problem persists, please contact Azure Support
+- **Cause**: Transient error
+- **Recommendation**: Retry the request after a wait period.
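
Several of the preceding codes (4502, 4503, 4506, 4509, and 4511) are transient or throttling errors whose recommendation is simply to retry after a wait period. The following is a hedged sketch of a generic retry helper with exponential backoff and jitter; `submit_data_flow_run` is a hypothetical placeholder for whatever call (SDK, REST, and so on) produced the error.

```python
# Minimal sketch: retry a transient or throttled operation with exponential backoff.
import random
import time

def retry_with_backoff(operation, max_attempts=5, base_delay=30.0):
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception as err:  # narrow this to the specific transient errors you see
            if attempt == max_attempts:
                raise
            delay = base_delay * (2 ** (attempt - 1)) + random.uniform(0, 5)
            print(f"Attempt {attempt} failed ({err}); retrying in {delay:.0f}s")
            time.sleep(delay)

# Example usage with a hypothetical operation:
# retry_with_backoff(lambda: submit_data_flow_run(pipeline_name="pl_copy_orders"))
```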
++ ## Next steps For more help with troubleshooting, see these resources:
data-factory How To Manage Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-manage-settings.md
+
+ Title: Managing Azure Data Factory settings and preferences
+description: Learn how to manage Azure Data Factory settings and preferences.
+++++++ Last updated : 05/24/2022++
+# Manage Azure Data Factory settings and preferences
++
+You can change the default settings of your Azure Data Factory to meet your own preferences.
+Azure Data Factory settings are available in the Settings menu in the top right section of the global page header as indicated in the screenshot below.
++
+Clicking the **Settings** gear button will open a flyout.
++
+Here you can find the settings and preferences that you can set for your data factory.
+
+## Language and Region
+
+Choose your language and the regional format that will influence how data such as dates and currency will appear in your data factory.
+
+### Language
+
+Use the drop-down list to select from the list of available languages. This setting controls the language you see for text throughout your data factory. There are 18 languages supported in addition to English.
++
+To apply changes, select a language and make sure to hit the **Apply** button. Your page will refresh and reflect the changes made.
++
+> [!NOTE]
+> Applying language changes will discard any unsaved changes in your data factory.
+
+### Regional Format
+
+Use the drop-down list to select from the list of available regional formats. This setting controls the way dates, time, numbers, and currency are shown in your data factory.
+
+The default shown in **Regional format** will automatically change based on the option you selected for **Language**. You can still use the drop-down list to select a different format.
++
+For example, if you select **English** as your language and select **English (United States)** as the regional format, currency will be shown in U.S. (United States) dollars. If you select **English** as your language and select **English (Europe)** as the regional format, currency will be shown in euros.
+
+To apply changes, select a **Regional format** and make sure to hit the **Apply** button. Your page will refresh and reflect the changes made.
++
+> [!NOTE]
+> Applying regional format changes will discard any unsaved changes in your data factory.
+
+## Next steps
+- [Introduction to Azure Data Factory](introduction.md)
+- [Build a pipeline with a copy activity](quickstart-create-data-factory-powershell.md)
+- [Build a pipeline with a data transformation activity](tutorial-transform-data-spark-powershell.md)
++
data-factory How To Sqldb To Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-sqldb-to-cosmosdb.md
Previously updated : 01/27/2022 Last updated : 05/19/2022 # Migrate normalized database schema from Azure SQL Database to Azure Cosmos DB denormalized container This guide will explain how to take an existing normalized database schema in Azure SQL Database and convert it into an Azure Cosmos DB denormalized schema for loading into Azure Cosmos DB.
-SQL schemas are typically modeled using third normal form, resulting in normalized schemas that provide high levels of data integrity and fewer duplicate data values. Queries can join entities together across tables for reading. CosmosDB is optimized for super-quick transactions and querying within a collection or container via denormalized schemas with data self-contained inside a document.
+SQL schemas are typically modeled using third normal form, resulting in normalized schemas that provide high levels of data integrity and fewer duplicate data values. Queries can join entities together across tables for reading. Cosmos DB is optimized for super-quick transactions and querying within a collection or container via denormalized schemas with data self-contained inside a document.
Using Azure Data Factory, we'll build a pipeline that uses a single Mapping Data Flow to read from two Azure SQL Database normalized tables that contain primary and foreign keys as the entity relationship. ADF will join those tables into a single stream using the data flow Spark engine, collect joined rows into arrays and produce individual cleansed documents for insert into a new Azure Cosmos DB container.
-This guide will build a new container on the fly called "orders" that will use the ```SalesOrderHeader``` and ```SalesOrderDetail``` tables from the standard SQL Server AdventureWorks sample database. Those tables represent sales transactions joined by ```SalesOrderID```. Each unique detail records has its own primary key of ```SalesOrderDetailID```. The relationship between header and detail is ```1:M```. We'll join on ```SalesOrderID``` in ADF and then roll each related detail record into an array called "detail".
+This guide will build a new container on the fly called "orders" that will use the ```SalesOrderHeader``` and ```SalesOrderDetail``` tables from the standard SQL Server [Adventure Works sample database](https://docs.microsoft.com/sql/samples/adventureworks-install-configure?view=sql-server-ver15&tabs=ssms). Those tables represent sales transactions joined by ```SalesOrderID```. Each unique detail record has its own primary key of ```SalesOrderDetailID```. The relationship between header and detail is ```1:M```. We'll join on ```SalesOrderID``` in ADF and then roll each related detail record into an array called "details".
The representative SQL query for this guide is:
The representative SQL query for this guide is:
FROM SalesLT.SalesOrderHeader o; ```
-The resulting CosmosDB container will embed the inner query into a single document and look like this:
+The resulting Cosmos DB container will embed the inner query into a single document and look like this:
:::image type="content" source="media/data-flow/cosmosb3.png" alt-text="Collection":::
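
If the screenshot is hard to read, the following hypothetical Python dictionary sketches the shape of one denormalized order document: header columns at the top level and the related detail rows collected into the `details` array. The values are invented; only the column names used in this guide are meaningful.

```python
import json

# Hypothetical shape of one denormalized "orders" document. Field names follow the
# SalesOrderHeader/SalesOrderDetail columns referenced in this guide; values are invented.
order_document = {
    "SalesOrderID": 71774,
    "TotalDue": 972.78,                      # cast to double earlier in the flow
    "details": [                             # built by collect(orderdetailsstruct)
        {"SalesOrderDetailID": 110562, "OrderQty": 1, "UnitPrice": 356.90},
        {"SalesOrderDetailID": 110563, "OrderQty": 2, "UnitPrice": 356.90},
    ],
}

print(json.dumps(order_document, indent=2))
```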
The resulting CosmosDB container will embed the inner query into a single docume
4. We will construct this data flow graph below
+ :::image type="content" source="media/data-flow/cosmosb1.png" alt-text="Data Flow Graph":::
5. Define the source for "SourceOrderDetails". For dataset, create a new Azure SQL Database dataset that points to the ```SalesOrderDetail``` table. 6. Define the source for "SourceOrderHeader". For dataset, create a new Azure SQL Database dataset that points to the ```SalesOrderHeader``` table.
-7. On the top source, add a Derived Column transformation after "SourceOrderDetails". Call the new transformation "TypeCast". We need to round the ```UnitPrice``` column and cast it to a double data type for CosmosDB. Set the formula to: ```toDouble(round(UnitPrice,2))```.
+7. On the top source, add a Derived Column transformation after "SourceOrderDetails". Call the new transformation "TypeCast". We need to round the ```UnitPrice``` column and cast it to a double data type for Cosmos DB. Set the formula to: ```toDouble(round(UnitPrice,2))```.
8. Add another derived column and call it "MakeStruct". This is where we will create a hierarchical structure to hold the values from the details table. Remember, details is a ```M:1``` relation to header. Name the new structure ```orderdetailsstruct``` and create the hierarchy in this way, setting each subcolumn to the incoming column name:
+ :::image type="content" source="media/data-flow/cosmosdb-9.png" alt-text="Create Structure":::
9. Now, let's go to the sales header source. Add a Join transformation. For the right-side select "MakeStruct". Leave it set to inner join and choose ```SalesOrderID``` for both sides of the join condition. 10. Click on the Data Preview tab in the new join that you added so that you can see your results up to this point. You should see all of the header rows joined with the detail rows. This is the result of the join being formed from the ```SalesOrderID```. Next, we'll combine the details from the common rows into the details struct and aggregate the common rows.
+ :::image type="content" source="media/data-flow/cosmosb4.png" alt-text="Join":::
-11. Before we can create the arrays to denormalize these rows, we first need to remove unwanted columns and make sure the data values will match CosmosDB data types.
+11. Before we can create the arrays to denormalize these rows, we first need to remove unwanted columns and make sure the data values will match Cosmos DB data types.
12. Add a Select transformation next and set the field mapping to look like this:
+ :::image type="content" source="media/data-flow/cosmosb5.png" alt-text="Column scrubber":::
13. Now let's again cast a currency column, this time ```TotalDue```. Like we did above in step 7, set the formula to: ```toDouble(round(TotalDue,2))```.
The resulting CosmosDB container will embed the inner query into a single docume
15. In the aggregate formula, add a new column called "details" and use this formula to collect the values in the structure that we created earlier called ```orderdetailsstruct```: ```collect(orderdetailsstruct)```.
-16. The aggregate transformation will only output columns that are part of aggregate or group by formulas. So, we need to include the columns from the sales header as well. To do that, add a column pattern in that same aggregate transformation. This pattern will include all other columns in the output:
+16. The aggregate transformation will only output columns that are part of aggregate or group by formulas. So, we need to include the columns from the sales header as well. To do that, add a column pattern in that same aggregate transformation. This pattern will include all other columns in the output, excluding the columns listed below (OrderQty, UnitPrice, SalesOrderID):
`instr(name,'OrderQty')==0&&instr(name,'UnitPrice')==0&&instr(name,'SalesOrderID')==0`
-17. Use the "this" syntax in the other properties so that we maintain the same column names and use the ```first()``` function as an aggregate:
+17. Use the "this" syntax ($$) in the other properties so that we maintain the same column names and use the ```first()``` function as an aggregate. This tells ADF to keep the first matching value found:
+ :::image type="content" source="media/data-flow/cosmosb6.png" alt-text="Aggregate":::
-18. We're ready to finish the migration flow by adding a sink transformation. Click "new" next to dataset and add a CosmosDB dataset that points to your CosmosDB database. For the collection, we'll call it "orders" and it will have no schema and no documents because it will be created on the fly.
+18. We're ready to finish the migration flow by adding a sink transformation. Click "new" next to dataset and add a Cosmos DB dataset that points to your Cosmos DB database. For the collection, we'll call it "orders" and it will have no schema and no documents because it will be created on the fly.
-19. In Sink Settings, Partition Key to ```\SalesOrderID``` and collection action to "recreate". Make sure your mapping tab looks like this:
+19. In Sink Settings, set the Partition Key to ```/SalesOrderID``` and the collection action to "recreate". Make sure your mapping tab looks like this:
+ :::image type="content" source="media/data-flow/cosmosb7.png" alt-text="Screenshot shows the Mapping tab.":::
20. Click on data preview to make sure that you are seeing these 32 rows set to insert as new documents into your new container:
+ :::image type="content" source="media/data-flow/cosmosb8.png" alt-text="Screenshot shows the Data preview tab.":::
-If everything looks good, you are now ready to create a new pipeline, add this data flow activity to that pipeline and execute it. You can execute from debug or a triggered run. After a few minutes, you should have a new denormalized container of orders called "orders" in your CosmosDB database.
+If everything looks good, you are now ready to create a new pipeline, add this data flow activity to that pipeline and execute it. You can execute from debug or a triggered run. After a few minutes, you should have a new denormalized container of orders called "orders" in your Cosmos DB database.
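If you'd rather kick off the finished pipeline from a script instead of the portal, a minimal sketch with Az PowerShell might look like the following; the resource group, factory, and pipeline names are placeholders, and it assumes the Az.DataFactory module is installed.

```powershell
# Trigger the pipeline that wraps the data flow activity (names are placeholders)
$runId = Invoke-AzDataFactoryV2Pipeline `
    -ResourceGroupName "my-resource-group" `
    -DataFactoryName "my-data-factory" `
    -PipelineName "DenormalizeOrdersPipeline"

# Check on the triggered run
Get-AzDataFactoryV2PipelineRun `
    -ResourceGroupName "my-resource-group" `
    -DataFactoryName "my-data-factory" `
    -PipelineRunId $runId
```

Once the run succeeds, the denormalized "orders" container should contain one document per ```SalesOrderID```.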
## Next steps
data-factory Industry Sap Connectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/industry-sap-connectors.md
+
+ Title: SAP Connectors
+
+description: Overview of the SAP Connectors
+++++ Last updated : 04/20/2022++
+# SAP connectors overview
++
+Azure Data Factory and Azure Synapse Analytics pipelines provide several SAP connectors to support a wide variety of data extraction scenarios from SAP.
+
+>[!TIP]
+>To learn about overall support for the SAP data integration scenario, see the [SAP data integration whitepaper](https://github.com/Azure/Azure-DataFactory/blob/master/whitepaper/SAP%20Data%20Integration%20using%20Azure%20Data%20Factory.pdf), which provides a detailed introduction to each SAP connector, along with comparisons and guidance.
+
+The following table shows the SAP connectors, the activity scenarios in which they're supported, and notes on supported versions and other considerations.
+
+| Data store | [Copy activity](copy-activity-overview.md) (source/sink) | [Mapping Data Flow](concepts-data-flow-overview.md) (source/sink) | [Lookup Activity](control-flow-lookup-activity.md) | Notes |
+| :-- | :-- | :-- | :-- | |
+|[SAP Business Warehouse Open Hub](connector-sap-business-warehouse-open-hub.md) | ✓/− | | ✓ | SAP Business Warehouse version 7.01 or higher. SAP BW/4HANA isn't supported by this connector. |
+|[SAP Business Warehouse via MDX](connector-sap-business-warehouse.md)| ✓/− | | ✓ | SAP Business Warehouse version 7.x. |
| [SAP Cloud for Customer (C4C)](connector-sap-cloud-for-customer.md) | ✓/✓ | | ✓ | SAP Cloud for Customer including the SAP Cloud for Sales, SAP Cloud for Service, and SAP Cloud for Social Engagement solutions. |
+| [SAP ECC](connector-sap-ecc.md) | ✓/− | | ✓ | SAP ECC on SAP NetWeaver version 7.0 and later. |
| [SAP HANA](connector-sap-hana.md) | ✓/✓ | | ✓ | Any version of SAP HANA database |
+| [SAP Table](connector-sap-table.md) | ✓/− | | ✓ | SAP ERP Central Component (SAP ECC) version 7.01 or later. SAP Business Warehouse (SAP BW) version 7.01 or later. SAP S/4HANA. Other products in SAP Business Suite version 7.01 or later. |
data-factory Industry Sap Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/industry-sap-overview.md
+
+ Title: SAP knowledge center overview
+
+description: Overview of the ADF SAP Knowledge Center and ADF SAP IP
+++++ Last updated : 04/20/2022++
+# SAP knowledge center overview
++
+Azure Data Factory and Azure Synapse Analytics pipelines provide a collection of assets to power your SAP workloads. These assets include SAP connectors and templates, as well as upcoming solution accelerators provided by both Microsoft and partners. The SAP knowledge center is a consolidated location that summarizes the available assets and compares when to use which solution.
+
+## SAP connectors
+
+Azure Data Factory and Synapse pipelines support extracting data from SAP by using the following connectors:
+
+- SAP Business Warehouse Open Hub
+- SAP Business Warehouse via MDX
+- SAP Cloud for Customer
+- SAP ECC
+- SAP HANA
+- SAP Table
+
+ For a more detailed breakdown of each SAP connector along with prerequisites, see [SAP connectors](industry-sap-connectors.md).
+
+## SAP templates
+
+Azure Data Factory and Synapse pipelines provide templates to help accelerate common patterns by creating pipelines and activities for you that just need to be connected to your SAP data sources.
+
+See [pipeline templates](solution-templates-introduction.md) for an overview of pipeline templates.
+
+Templates are offered for the following scenarios:
+- Incrementally copy from SAP BW to ADLS Gen 2
+- Incrementally copy from SAP Table to Blob
+- Dynamically copy multiple tables from SAP ECC to ADLS Gen 2
+- Dynamically copy multiple tables from SAP HANA to ADLS Gen 2
+
+For a summary of the SAP specific templates and how to use them see [SAP templates](industry-sap-templates.md).
++
+## SAP whitepaper
+
+To learn about overall support for the SAP data integration scenario, see the [SAP data integration whitepaper](https://github.com/Azure/Azure-DataFactory/blob/master/whitepaper/SAP%20Data%20Integration%20using%20Azure%20Data%20Factory.pdf), which provides a detailed introduction to each SAP connector, along with comparisons and guidance.
data-factory Industry Sap Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/industry-sap-templates.md
+
+ Title: SAP Templates
+
+description: Overview of the SAP templates
+++++ Last updated : 04/20/2022++
+# SAP templates overview
++
+Azure Data Factory and Azure Synapse Analytics pipelines provide SAP templates to quickly get started with a pattern-based approach for various SAP scenarios.
+
+See [pipeline templates](solution-templates-introduction.md) for an overview of pipeline templates.
+
+## SAP templates summary
+
+The following table summarizes the SAP templates and the scenarios they cover.
+
+| SAP Data Store | Scenario | Description |
+| -- | -- | -- |
+| SAP BW via Open Hub | [Incremental copy to Azure Data Lake Storage Gen 2](load-sap-bw-data.md) | Use this template to incrementally copy SAP BW data via LastRequestID watermark to ADLS Gen 2 |
| SAP ECC | Dynamically copy tables to Azure Data Lake Storage Gen 2 | Use this template to do a full copy of a list of tables from SAP ECC to ADLS Gen 2 |
| SAP HANA | Dynamically copy tables to Azure Data Lake Storage Gen 2 | Use this template to do a full copy of a list of tables from SAP HANA to ADLS Gen 2 |
+| SAP Table | Incremental copy to Azure Blob Storage | Use this template to incrementally copy SAP Table data via a date timestamp watermark to Azure Blob Storage |
data-factory Managed Virtual Network Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/managed-virtual-network-private-endpoint.md
Title: Managed virtual network & managed private endpoints
+ Title: Managed virtual network and managed private endpoints
description: Learn about managed virtual network and managed private endpoints in Azure Data Factory.
Last updated 04/01/2022
-# Azure Data Factory Managed Virtual Network
+# Azure Data Factory managed virtual network
[!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)]
-This article will explain managed virtual network and Managed private endpoints in Azure Data Factory.
-
+This article explains managed virtual networks and managed private endpoints in Azure Data Factory.
## Managed virtual network
-When you create an Azure integration runtime within Azure Data Factory managed virtual network, the integration runtime will be provisioned with the managed virtual network and will use private endpoints to securely connect to supported data stores.
+When you create an Azure integration runtime within a Data Factory managed virtual network, the integration runtime is provisioned with the managed virtual network. It uses private endpoints to securely connect to supported data stores.
-Creating an Azure integration runtime within managed virtual network ensures that data integration process is isolated and secure.
+Creating an integration runtime within a managed virtual network ensures the data integration process is isolated and secure.
-Benefits of using managed virtual network:
+Benefits of using a managed virtual network:
-- With a managed virtual network, you can offload the burden of managing the virtual network to Azure Data Factory. You don't need to create a subnet for Azure Integration Runtime that could eventually use many private IPs from your virtual network and would require prior network infrastructure planning. -- It doesn't require deep Azure networking knowledge to do data integrations securely. Instead getting started with secure ETL is much simplified for data engineers. -- Managed virtual network along with managed private endpoints protects against data exfiltration.
+- With a managed virtual network, you can offload the burden of managing the virtual network to Data Factory. You don't need to create a subnet for an integration runtime that could eventually use many private IPs from your virtual network and would require prior network infrastructure planning.
+- Deep Azure networking knowledge isn't required to do data integrations securely. Instead, getting started with secure ETL is much simpler for data engineers.
+- A managed virtual network along with managed private endpoints protects against data exfiltration.
-> [!IMPORTANT]
->Currently, the managed virtual network is only supported in the same region as Azure Data Factory region.
+Currently, the managed virtual network is only supported in the same region as the Data Factory region.
> [!Note]
->Existing global Azure integration runtime can't switch to Azure integration runtime in Azure Data Factory managed virtual network and vice versa.
-
+> An existing global integration runtime can't switch to an integration runtime in a Data Factory managed virtual network and vice versa.
## Managed private endpoints
-Managed private endpoints are private endpoints created in the Azure Data Factory managed virtual network establishing a private link to Azure resources. Azure Data Factory manages these private endpoints on your behalf.
+Managed private endpoints are private endpoints created in the Data Factory managed virtual network that establish a private link to Azure resources. Data Factory manages these private endpoints on your behalf.
-Azure Data Factory supports private links. Private link enables you to access Azure (PaaS) services (such as Azure Storage, Azure Cosmos DB, Azure Synapse Analytics).
+Data Factory supports private links. You can use Azure Private Link to access Azure platform as a service (PaaS) services like Azure Storage, Azure Cosmos DB, and Azure Synapse Analytics.
When you use a private link, traffic between your data stores and managed virtual network traverses entirely over the Microsoft backbone network. Private Link protects against data exfiltration risks. You establish a private link to a resource by creating a private endpoint.
-Private endpoint uses a private IP address in the managed virtual network to effectively bring the service into it. Private endpoints are mapped to a specific resource in Azure and not the entire service. Customers can limit connectivity to a specific resource approved by their organization. Learn more about [private links and private endpoints](../private-link/index.yml).
+A private endpoint uses a private IP address in the managed virtual network to effectively bring the service into it. Private endpoints are mapped to a specific resource in Azure and not the entire service. Customers can limit connectivity to a specific resource approved by their organization. For more information, see [Private links and private endpoints](../private-link/index.yml).
> [!NOTE]
-> It's recommended that you create managed private endpoints to connect to all your Azure data sources.
+> Create managed private endpoints to connect to all your Azure data sources.
+
+Make sure the resource provider Microsoft.Network is registered to your subscription.
-> [!NOTE]
-> Make sure resource provider Microsoft.Network is registered to your subscription.
-
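As a quick sanity check, you can verify (and, if needed, complete) that registration with Az PowerShell, for example:

```powershell
# Check the registration state of the Microsoft.Network resource provider
Get-AzResourceProvider -ProviderNamespace Microsoft.Network |
    Select-Object ProviderNamespace, RegistrationState -Unique

# Register the provider if it isn't registered yet
Register-AzResourceProvider -ProviderNamespace Microsoft.Network
```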
> [!WARNING]
-> If a PaaS data store (Blob, Azure Data Lake Storage Gen2, Azure Synapse Analytics) has a private endpoint already created against it, and even if it allows access from all networks, Azure Data Factory would only be able to access it using a managed private endpoint. If a private endpoint does not already exist, you must create one in such scenarios.
+> If a PaaS data store like Azure Blob Storage, Azure Data Lake Storage Gen2, and Azure Synapse Analytics has a private endpoint already created against it, even if it allows access from all networks, Data Factory would only be able to access it by using a managed private endpoint. If a private endpoint doesn't already exist, you must create one in such scenarios.
-A private endpoint connection is created in a "Pending" state when you create a managed private endpoint in Azure Data Factory. An approval workflow is initiated. The private link resource owner is responsible to approve or reject the connection.
+A private endpoint connection is created in a **Pending** state when you create a managed private endpoint in Data Factory. An approval workflow is initiated. The private link resource owner is responsible for approving or rejecting the connection.
-If the owner approves the connection, the private link is established. Otherwise, the private link won't be established. In either case, the managed private endpoint will be updated with the status of the connection.
+If the owner approves the connection, the private link is established. Otherwise, the private link won't be established. In either case, the managed private endpoint is updated with the status of the connection.
-Only a managed private endpoint in an approved state can send traffic to a given private link resource.
+Only a managed private endpoint in an approved state can send traffic to a specific private link resource.
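If you also own the target resource, one way to approve the pending connection outside the Azure portal is with Az PowerShell. The following is only a sketch: it assumes the Az.Network module is installed, and the storage account and connection IDs are placeholders.

```powershell
# List private endpoint connections on the target resource (ID is a placeholder)
$storageAccountId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<account-name>"
Get-AzPrivateEndpointConnection -PrivateLinkResourceId $storageAccountId

# Approve a specific pending connection by its resource ID
Approve-AzPrivateEndpointConnection `
    -ResourceId "<private-endpoint-connection-resource-id>" `
    -Description "Approved for the Data Factory managed virtual network"
```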
## Interactive authoring
-Interactive authoring capabilities are used for functionalities like test connection, browse folder list and table list, get schema, and preview data. You can enable interactive authoring when creating or editing an Azure integration runtime, which is in Azure Data Factory managed virtual network. The backend service will pre-allocate compute for interactive authoring functionalities. Otherwise, the compute will be allocated every time any interactive operation is performed which will take more time. The Time-To-Live (TTL) for interactive authoring is 60 minutes, which means it will automatically become disabled after 60 minutes of the last interactive authoring operation.
+Interactive authoring capabilities are used for functionalities like test connection, browse folder list and table list, get schema, and preview data. You can enable interactive authoring when you create or edit an integration runtime in a Data Factory managed virtual network. The back-end service preallocates the compute for interactive authoring functionalities. Otherwise, the compute is allocated every time any interactive operation is performed, which takes more time.
-## Activity execution time using managed virtual network
-By design, Azure integration runtime in managed virtual network takes longer queue time than global Azure integration runtime as we aren't reserving one compute node per data factory, so there's a warm-up for each activity to start, and it occurs primarily on virtual network join rather than Azure integration runtime. For non-copy activities including pipeline activity and external activity, there's a 60 minutes Time-To-Live (TTL) when you trigger them at the first time. Within TTL, the queue time is shorter because the node is already warmed up.
-> [!NOTE]
-> Copy activity doesn't have TTL support yet.
+The Time-To-Live (TTL) for interactive authoring is 60 minutes. This means it will be automatically disabled 60 minutes after the last interactive authoring operation.
++
+## Activity execution time using a managed virtual network
+
+By design, an integration runtime in a managed virtual network takes longer queue time than a global integration runtime. One compute node isn't reserved per data factory, so warm-up is required before each activity starts. Warm-up occurs primarily on the virtual network join rather than the integration runtime.
+
+For non-Copy activities, including pipeline activity and external activity, there's a 60-minute TTL when you trigger them the first time. Within TTL, the queue time is shorter because the node is already warmed up.
+
+The Copy activity doesn't have TTL support yet.
> [!NOTE]
-> 2 DIU for Copy activity is not supported in managed virtual network.
+> The data integration unit (DIU) measure of 2 DIU isn't supported for the Copy activity in a managed virtual network.
+
+## Create a managed virtual network via Azure PowerShell
-## Create managed virtual network via Azure PowerShell
```powershell
$subscriptionId = ""
$resourceGroupName = ""
$apiVersion = ""
# Additional variable definitions, including the private endpoint and integration runtime resource IDs, are abridged in this excerpt.

# Create a managed private endpoint to Blob storage
New-AzResource -ApiVersion "${apiVersion}" -ResourceId "${privateEndpointResourceId}" -Properties @{
    groupId = "blob"
}

# Create integration runtime resource enabled with virtual network
New-AzResource -ApiVersion "${apiVersion}" -ResourceId "${integrationRuntimeResourceId}" -Properties @{
    type = "Managed"
    typeProperties = @{
        # typeProperties are abridged in this excerpt
    }
}
```

> [!Note]
-> For **groupId** of other data sources, you can get them from [private link resource](../private-link/private-endpoint-overview.md#private-link-resource).
+> You can get the **groupId** of other data sources from a [private link resource](../private-link/private-endpoint-overview.md#private-link-resource).
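For example, one way to list the valid **groupId** values (private link sub-resources) that a resource exposes is the `Get-AzPrivateLinkResource` cmdlet; the storage account ID below is a placeholder.

```powershell
# List the private link sub-resources (group IDs) exposed by a storage account (ID is a placeholder)
Get-AzPrivateLinkResource -PrivateLinkResourceId "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<account-name>" |
    Select-Object Name, GroupId
```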
## Limitations and known issues
+This section discusses limitations and known issues.
+ ### Supported data sources and services
-The following data sources and services have native private endpoint support and can be connected through private link from ADF managed virtual network.
-- Azure Blob Storage (not including Storage account V1)-- Azure Functions (Premium plan)+
+The following data sources and services have native private endpoint support. They can be connected through private link from a Data Factory managed virtual network:
+
+- Azure Blob Storage (not including storage account V1)
- Azure Cognitive Search - Azure Cosmos DB MongoDB API - Azure Cosmos DB SQL API
- Azure Database for MariaDB - Azure Database for MySQL - Azure Database for PostgreSQL-- Azure Files (not including Storage account V1)
+- Azure Files (not including storage account V1)
+- Azure Functions (Premium plan)
- Azure Key Vault - Azure Machine Learning-- Azure Private Link Service
+- Azure Private Link
- Microsoft Purview-- Azure SQL Database -- Azure SQL Managed Instance - (public preview)
+- Azure SQL Database
+- Azure SQL Managed Instance (public preview)
- Azure Synapse Analytics-- Azure Table Storage (not including Storage account V1)
+- Azure Table Storage (not including storage account V1)
-> [!Note]
-> You still can access all data sources that are supported by Azure Data Factory through public network.
+You can access all data sources that are supported by Data Factory through a public network.
> [!NOTE]
-> Because Azure SQL Managed Instance native private endpoint in private preview, you can access it from managed virtual network using Private Linked Service and Load Balancer. Please see [How to access SQL Managed Instance from Azure Data Factory managed virtual network using private endpoint](tutorial-managed-virtual-network-sql-managed-instance.md).
+> Because SQL Managed Instance native private endpoint is in private preview, you can access it from a managed virtual network by using Private Link and Azure Load Balancer. For more information, see [Access SQL Managed Instance from a Data Factory managed virtual network using a private endpoint](tutorial-managed-virtual-network-sql-managed-instance.md).
### On-premises data sources
-To access on-premises data sources from managed virtual network using private endpoint, see this tutorial [How to access on-premises SQL Server from Azure Data Factory managed virtual network using private endpoint](tutorial-managed-virtual-network-on-premise-sql-server.md).
+To learn how to access on-premises data sources from a managed virtual network by using a private endpoint, see [Access on-premises SQL Server from a Data Factory managed virtual network using a private endpoint](tutorial-managed-virtual-network-on-premise-sql-server.md).
-### Outbound communications through public endpoint from ADF managed virtual network
-- All ports are opened for outbound communications.
+### Outbound communications through public endpoint from a Data Factory managed virtual network
-### Linked service creation of Azure Key Vault
-- When you create a linked service for Azure Key Vault, there's no Azure integration runtime reference. So you can't create private endpoint during linked service creation of Azure Key Vault. But when you create linked service for data stores, which references Azure Key Vault linked service and this linked service references Azure integration runtime with managed virtual network enabled, then you're able to create a private endpoint for the Azure Key Vault linked service during the creation. -- **Test connection** operation for linked service of Azure Key Vault only validates the URL format, but doesn't do any network operation.-- The column **Using private endpoint** is always shown as blank even if you create private endpoint for Azure Key Vault.
+All ports are opened for outbound communications.
-### Linked service creation of Azure HDI
-- The column **Using private endpoint** is always shown as blank even if you create private endpoint for HDI using private link service and load balancer with port forwarding.
+### Linked service creation for Key Vault
+When you create a linked service for Key Vault, there's no integration runtime reference. So, you can't create private endpoints during linked service creation of Key Vault. But when you create a linked service for a data store that references Key Vault, and that linked service references an integration runtime with a managed virtual network enabled, you can create a private endpoint for Key Vault during creation.
+
+- **Test connection:** This operation for a linked service of Key Vault only validates the URL format but doesn't do any network operation.
+- **Using private endpoint:** This column is always shown as blank even if you create a private endpoint for Key Vault.
+
+### Linked service creation of Azure HDInsight
+
+The column **Using private endpoint** is always shown as blank even if you create a private endpoint for HDInsight by using a private link service and a load balancer with port forwarding.
+ ## Next steps -- Tutorial: [Build a copy pipeline using managed virtual network and private endpoints](tutorial-copy-data-portal-private.md) -- Tutorial: [Build mapping dataflow pipeline using managed virtual network and private endpoints](tutorial-data-flow-private.md)
+See the following tutorials:
+
+- [Build a copy pipeline using managed virtual network and private endpoints](tutorial-copy-data-portal-private.md)
+- [Build mapping dataflow pipeline using managed virtual network and private endpoints](tutorial-data-flow-private.md)
data-factory Monitor Metrics Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-metrics-alerts.md
Here are some of the metrics emitted by Azure Data Factory version 2.
| IntegrationRuntimeCpuPercentage | CPU utilization for integration runtime | Percent | Total | The percentage of CPU utilization for the self-hosted integration runtime within a minute window. |
| IntegrationRuntimeAverageTaskPickupDelay | Queue duration for integration runtime | Seconds | Total | The queue duration for the self-hosted integration runtime within a minute window. |
| IntegrationRuntimeQueueLength | Queue length for integration runtime | Count | Total | The total queue length for the self-hosted integration runtime within a minute window. |
+| Maximum allowed entities count | Maximum number of entities | Count | Total | The maximum number of entities in the Azure Data Factory instance. |
+| Maximum allowed factory size (GB unit) | Maximum size of entities | Gigabyte | Total | The maximum size of entities in the Azure Data Factory instance. |
+| Total entities count | Total number of entities | Count | Total | The total number of entities in the Azure Data Factory instance. |
+| Total factory size (GB unit) | Total size of entities | Gigabyte | Total | The total size of entities in the Azure Data Factory instance. |
+For service limits and quotas, see [quotas and limits](https://docs.microsoft.com/azure/azure-resource-manager/management/azure-subscription-service-limits#azure-data-factory-limits).
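To spot-check that one of these metrics is being emitted before you wire up alerts, a minimal Az PowerShell sketch such as the following can help; the factory resource ID is a placeholder, and the metric name is taken from the table above.

```powershell
# Pull the self-hosted integration runtime CPU metric for a data factory (resource ID is a placeholder)
$factoryId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DataFactory/factories/<factory-name>"
Get-AzMetric -ResourceId $factoryId -MetricName "IntegrationRuntimeCpuPercentage" -TimeGrain 00:01:00
```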
To access the metrics, complete the instructions in [Azure Monitor data platform](../azure-monitor/data-platform.md). > [!NOTE]
data-factory Security And Access Control Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/security-and-access-control-troubleshoot-guide.md
For example: The Azure Blob Storage sink was using Azure IR (public, not Managed
#### Cause
-The service may still use Managed VNet IR, but you could encounter such error because the public endpoint to Azure Blob Storage in Managed VNet is not reliable based on the testing result, and Azure Blob Storage and Azure Data Lake Gen2 are not supported to be connected through public endpoint from the service's Managed Virtual Network according to [Managed virtual network & managed private endpoints](./managed-virtual-network-private-endpoint.md#outbound-communications-through-public-endpoint-from-adf-managed-virtual-network).
+The service may still use the managed virtual network IR, but you could encounter this error because the public endpoint to Azure Blob Storage in a managed virtual network isn't reliable based on the testing result. Azure Blob Storage and Azure Data Lake Storage Gen2 aren't supported for connection through a public endpoint from the service's managed virtual network. For more information, see [Managed virtual network & managed private endpoints](./managed-virtual-network-private-endpoint.md#outbound-communications-through-public-endpoint-from-a-data-factory-managed-virtual-network).
#### Solution
data-factory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/whats-new.md
Title: What's new in Azure Data Factory
-description: This page highlights new features and recent improvements for Azure Data Factory. Azure Data Factory is a managed cloud service that's built for complex hybrid extract-transform-load (ETL), extract-load-transform (ELT), and data integration projects.
+description: This page highlights new features and recent improvements for Azure Data Factory. Data Factory is a managed cloud service that's built for complex hybrid extract-transform-and-load (ETL), extract-load-and-transform (ELT), and data integration projects.
Last updated 01/21/2022
# What's new in Azure Data Factory
-The Azure Data Factory service is improved on an ongoing basis. To stay up to date with the most recent developments, this article provides you with information about:
+Azure Data Factory is improved on an ongoing basis. To stay up to date with the most recent developments, this article provides you with information about:
-- The latest releases-- Known issues-- Bug fixes-- Deprecated functionality-- Plans for changes
+- The latest releases.
+- Known issues.
+- Bug fixes.
+- Deprecated functionality.
+- Plans for changes.
-This page is updated monthly, so revisit it regularly.
+This page is updated monthly, so revisit it regularly.
## April 2022 <br> <table>
-<tr><td><b>Service Category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
+<tr><td><b>Service category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
-<tr><td rowspan=3><b>Data Flow</b></td><td>Data Preview and Debug Improvements in Mapping Data Flows</td><td>Debug sessions using the AutoResolve Azure IR will now startup in under 10 seconds. New updates to the data preview panel in mapping data flows: You can now sort the rows inside the data preview view by clicking on column headers, move columns around interactively, save the data preview results as a CSV using Export CSV.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/data-preview-and-debug-improvements-in-mapping-data-flows/ba-p/3268254">Learn more</a></td></tr>
+<tr><td rowspan=3><b>Data flow</b></td><td>Data preview and debug improvements in mapping data flows</td><td>Debug sessions using the AutoResolve Azure integration runtime (IR) will now start up in under 10 seconds. There are new updates to the data preview panel in mapping data flows. Now you can sort the rows inside the data preview view by selecting column headers. You can move columns around interactively. You can also save the data preview results as a CSV by using Export CSV.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/data-preview-and-debug-improvements-in-mapping-data-flows/ba-p/3268254">Learn more</a></td></tr>
+<tr><td>Dataverse connector is available for mapping data flows</td><td>Dataverse connector is available as source and sink for mapping data flows.<br><a href="connector-dynamics-crm-office-365.md">Learn more</a></td></tr>
+<tr><td>Support for user database schemas for staging with the Azure Synapse Analytics and PostgreSQL connectors in data flow sink</td><td>Data flow sink now supports using a user database schema for staging in both the Azure Synapse Analytics and PostgreSQL connectors.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/data-flow-sink-supports-user-db-schema-for-staging-in-azure/ba-p/3299210">Learn more</a></td></tr>
+
+<tr><td><b>Monitoring</b></td><td>Multiple updates to Data Factory monitoring experiences</td><td>New updates to the monitoring experience in Data Factory include the ability to export results to a CSV, clear all filters, and open a run in a new tab. Column and result caching is also improved.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/adf-monitoring-improvements/ba-p/3295531">Learn more</a></td></tr>
+
+<tr><td><b>User interface</b></td><td>New regional format support</td><td>Choose your language and the regional format that will influence how data such as dates and times appear in the Data Factory Studio monitoring. These language and regional settings affect only the Data Factory Studio user interface and don't change or modify your actual data.</td></tr>
-<tr><td>Dataverse Connector is available for Mapping data flows</td><td>Dataverse Connector is available as source and sink for Mapping data flows.<br><a href="connector-dynamics-crm-office-365.md">Learn more</a></td></tr>
-
-<tr><td>Support for user db schemas for staging with the Azure Synapse and PostgreSQL connectors in data flow sink</td><td>Data flow sink now supports using a user db schema for staging in both the Azure Synapse and PostgreSQL connectors.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/data-flow-sink-supports-user-db-schema-for-staging-in-azure/ba-p/3299210">Learn more</a></td></tr>
-
-<tr><td><b>Monitoring</b></td><td>Multiple updates to ADF monitoring experiences</td><td>New updates to the monitoring experience in Azure Data Factory including the ability to export results to a CSV, clear all filters, open a run in a new tab, and improved caching of columns and results.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/adf-monitoring-improvements/ba-p/3295531">Learn more</a></td></tr>
-
-<tr><td><b>User Interface</b></td><td>New Regional format support</td><td>Choose your language and the regional format that will influence how data such as dates and times appear in the Azure Data Factory Studio monitoring. These language and regional settings affect only the Azure Data Factory Studio user interface and do not change/modify your actual data.</td></tr>
-
</table> ## March 2022 <br> <table>
-<tr><td><b>Service Category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
+<tr><td><b>Service category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
-<tr><td rowspan=5><b>Data Flow</b></td><td>ScriptLines and Parameterized Linked Service support added mapping data flows</td><td>It is now super-easy to detect changes to your data flow script in Git with ScriptLines in your data flow JSON definition. Parameterized Linked Services can now also be used inside your data flows for flexible generic connection patterns.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/adf-mapping-data-flows-adds-scriptlines-and-link-service/ba-p/3249929#M589">Learn more</a></td></tr>
+<tr><td rowspan=5><b>Data flow</b></td><td>ScriptLines and parameterized linked service support added mapping data flows</td><td>It's now easy to detect changes to your data flow script in Git with ScriptLines in your data flow JSON definition. Parameterized linked services can now also be used inside your data flows for flexible generic connection patterns.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/adf-mapping-data-flows-adds-scriptlines-and-link-service/ba-p/3249929#M589">Learn more</a></td></tr>
+<tr><td>Flowlets general availability (GA)</td><td>Flowlets is now generally available to create reusable portions of data flow logic that you can share in other pipelines as inline transformations. Flowlets enable extract-transform-and-load (ETL) jobs to be composed of custom or common logic components.<br><a href="concepts-data-flow-flowlet.md">Learn more</a></td></tr>
-<tr><td>Flowlets General Availability (GA)</td><td>Flowlets is now generally available to create reusable portions of data flow logic that you can share in other pipelines as inline transformations. Flowlets enable ETL jobs to be composed of custom or common logic components.<br><a href="concepts-data-flow-flowlet.md">Learn more</a></td></tr>
-
-<tr><td>Change Feed connectors are available in 5 data flow source transformations</td><td>Change Feed connectors are available in data flow source transformations for Cosmos DB, Blob store, ADLS Gen1, ADLS Gen2, and CDM.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/flowlets-and-change-feed-now-ga-in-azure-data-factory/ba-p/3267450">Learn more</a></td></tr>
-
-<tr><td>Data Preview and Debug Improvements in Mapping Data Flows</td><td>A few new exciting features were added to data preview and the debug experience in Mapping Data Flows.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/data-preview-and-debug-improvements-in-mapping-data-flows/ba-p/3268254">Learn more</a></td></tr>
-
-<tr><td>SFTP connector for Mapping Data Flow</td><td>SFTP connector is available for Mapping Data Flow as both source and sink.<br><a href="connector-sftp.md?tabs=data-factory#mapping-data-flow-properties">Learn more</a></td></tr>
-
-<tr><td><b>Data Movement</b></td><td>Support Always Encrypted for SQL related connectors in Lookup Activity under Managed VNET</td><td>Always Encrypted is supported for SQL Server, Azure SQL DB, Azure SQL MI, Azure Synapse Analytics in Lookup Activity under Managed VNET.<br><a href="control-flow-lookup-activity.md">Learn more</a></td></tr>
-
-<tr><td><b>Integration Runtime</b></td><td>New UI layout in Azure integration runtime creation and edit page</td><td>The UI layout of the integration runtime creation/edit page has been changed to tab style including Settings, Virtual Network and Data flow runtime.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/new-ui-layout-in-azure-integration-runtime-creation-and-edit/ba-p/3248237">Learn more</a></td></tr>
-
-<tr><td rowspan=2><b>Orchestration</b></td><td>Transform data using the Script activity</td><td>You can use a Script activity to invoke a SQL script in Azure SQL Database, Azure Synapse Analytics, SQL Server Database, Oracle, or Snowflake.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/execute-sql-statements-using-the-new-script-activity-in-azure/ba-p/3239969">Learn more</a></td></tr>
-
-<tr><td>Web activity timeout improvement</td><td>You can configure response timeout in a Web activity to prevent it from timing out if the response period is more than 1 minute, especially in the case of synchronous APIs.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/web-activity-response-timeout-improvement/ba-p/3260307">Learn more</a></td></tr>
+<tr><td>Change Feed connectors are available in five data flow source transformations</td><td>Change Feed connectors are available in data flow source transformations for Azure Cosmos DB, Azure Blob Storage, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, and the common data model (CDM).<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/flowlets-and-change-feed-now-ga-in-azure-data-factory/ba-p/3267450">Learn more</a></td></tr>
+<tr><td>Data preview and debug improvements in mapping data flows</td><td>New features were added to data preview and the debug experience in mapping data flows.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/data-preview-and-debug-improvements-in-mapping-data-flows/ba-p/3268254">Learn more</a></td></tr>
+<tr><td>SFTP connector for mapping data flow</td><td>SFTP connector is available for mapping data flow as both source and sink.<br><a href="connector-sftp.md?tabs=data-factory#mapping-data-flow-properties">Learn more</a></td></tr>
+
+<tr><td><b>Data movement</b></td><td>Support Always Encrypted for SQL-related connectors in Lookup activity under Managed virtual network</td><td>Always Encrypted is supported for SQL Server, Azure SQL Database, Azure SQL Managed Instance, and Synapse Analytics in the Lookup activity under managed virtual network.<br><a href="control-flow-lookup-activity.md">Learn more</a></td></tr>
+
+<tr><td><b>Integration runtime</b></td><td>New UI layout in Azure IR creation and edit page</td><td>The UI layout of the IR creation and edit page now uses tab style for Settings, Virtual network, and Data flow runtime.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/new-ui-layout-in-azure-integration-runtime-creation-and-edit/ba-p/3248237">Learn more</a></td></tr>
+
+<tr><td rowspan=2><b>Orchestration</b></td><td>Transform data by using the Script activity</td><td>You can use a Script activity to invoke a SQL script in SQL Database, Azure Synapse Analytics, SQL Server, Oracle, or Snowflake.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/execute-sql-statements-using-the-new-script-activity-in-azure/ba-p/3239969">Learn more</a></td></tr>
+<tr><td>Web activity timeout improvement</td><td>You can configure response timeout in a Web activity to prevent it from timing out if the response period is more than one minute, especially in the case of synchronous APIs.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/web-activity-response-timeout-improvement/ba-p/3260307">Learn more</a></td></tr>
</table> ## February 2022 <br> <table>
-<tr><td><b>Service Category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
-
-<tr><td rowspan=4><b>Data Flow</b></td><td>Parameterized linked services supported in mapping data flows</td><td>You can now use your parameterized linked services in mapping data flows to make your data flow pipelines generic and flexible.<br><a href="parameterize-linked-services.md?tabs=data-factory">Learn more</a></td></tr>
+<tr><td><b>Service category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
-<tr><td>Azure SQL DB incremental source extract available in data flow (Public Preview)</td><td>A new option has been added on mapping data flow Azure SQL DB sources called <i>Enable incremental extract (preview)</i>. Now you can automatically pull only the rows that have changed on your SQL DB sources using data flows.<br><a href="connector-azure-sql-database.md?tabs=data-factory#mapping-data-flow-properties">Learn more</a></td></tr>
-
-<tr><td>Four new connectors available for mapping data flows (Public Preview)</td><td>Azure Data Factory now supports the four following new connectors (Public Preview) for mapping data flows: Quickbase Connector, Smartsheet Connector, TeamDesk Connector, and Zendesk Connector.<br><a href="connector-overview.md?tabs=data-factory">Learn more</a></td></tr>
-
+<tr><td rowspan=4><b>Data flow</b></td><td>Parameterized linked services supported in mapping data flows</td><td>You can now use your parameterized linked services in mapping data flows to make your data flow pipelines generic and flexible.<br><a href="parameterize-linked-services.md?tabs=data-factory">Learn more</a></td></tr>
+<tr><td>SQL Database incremental source extract available in data flow (public preview)</td><td>A new option has been added on mapping data flow SQL Database sources called <i>Enable incremental extract (preview)</i>. Now you can automatically pull only the rows that have changed on your SQL Database sources by using data flows.<br><a href="connector-azure-sql-database.md?tabs=data-factory#mapping-data-flow-properties">Learn more</a></td></tr>
+<tr><td>Four new connectors available for mapping data flows (public preview)</td><td>Data Factory now supports four new connectors (public preview) for mapping data flows: Quickbase connector, Smartsheet connector, TeamDesk connector, and Zendesk connector.<br><a href="connector-overview.md?tabs=data-factory">Learn more</a></td></tr>
<tr><td>Azure Cosmos DB (SQL API) for mapping data flow now supports inline mode</td><td>Azure Cosmos DB (SQL API) for mapping data flow can now use inline datasets.<br><a href="connector-azure-cosmos-db.md?tabs=data-factory#mapping-data-flow-properties">Learn more</a></td></tr>
-
-<tr><td rowspan=2><b>Data Movement</b></td><td>Get metadata driven data ingestion pipelines on ADF Copy Data Tool within 10 minutes (GA)</td><td>You can build large-scale data copy pipelines with metadata-driven approach on copy data tool (GA) within 10 minutes.<br><a href="copy-data-tool-metadata-driven.md">Learn more</a></td></tr>
-<tr><td>Azure Data Factory Google AdWords Connector API Upgrade Available</td><td>The Azure Data Factory Google AdWords connector now supports the new AdWords API version. No action is required for the new connector user as it is enabled by default.<br><a href="connector-troubleshoot-google-adwords.md#migrate-to-the-new-version-of-google-ads-api">Learn more</a></td></tr>
-
-<tr><td><b>Region Expansion</b></td><td>Azure Data Factory is now available in West US3 and Jio India West</td><td>Azure Data Factory is now available in two new regions: West US3 and Jio India West. You can co-locate your ETL workflow in these new regions if you are utilizing these regions for storing and managing your modern data warehouse. You can also use these regions for BCDR purposes in case you need to failover from another region within the geo.<br><a href="https://azure.microsoft.com/global-infrastructure/services/?products=data-factory&regions=all">Learn more</a></td></tr>
+<tr><td rowspan=2><b>Data movement</b></td><td>Get metadata-driven data ingestion pipelines on the Data Factory Copy Data tool within 10 minutes (GA)</td><td>You can build large-scale data copy pipelines with a metadata-driven approach on the Copy Data tool within 10 minutes.<br><a href="copy-data-tool-metadata-driven.md">Learn more</a></td></tr>
+<tr><td>Data Factory Google AdWords connector API upgrade available</td><td>The Data Factory Google AdWords connector now supports the new AdWords API version. No action is required for the new connector user because it's enabled by default.<br><a href="connector-troubleshoot-google-adwords.md#migrate-to-the-new-version-of-google-ads-api">Learn more</a></td></tr>
+
+<tr><td><b>Region expansion</b></td><td>Data Factory is now available in West US3 and Jio India West</td><td>Data Factory is now available in two new regions: West US3 and Jio India West. You can colocate your ETL workflow in these new regions if you're using these regions to store and manage your modern data warehouse. You can also use these regions for business continuity and disaster recovery purposes if you need to fail over from another region within the geo.<br><a href="https://azure.microsoft.com/global-infrastructure/services/?products=data-factory&regions=all">Learn more</a></td></tr>
-<tr><td><b>Security</b></td><td>Connect to an Azure DevOps account in another Azure Active Directory tenant</td><td>You can connect your Azure Data Factory to an Azure DevOps Account in a different Azure Active Directory tenant for source control purposes.<br><a href="cross-tenant-connections-to-azure-devops.md">Learn more</a></td></tr>
+<tr><td><b>Security</b></td><td>Connect to an Azure DevOps account in another Azure Active Directory (Azure AD) tenant</td><td>You can connect your Data Factory instance to an Azure DevOps account in a different Azure AD tenant for source control purposes.<br><a href="cross-tenant-connections-to-azure-devops.md">Learn more</a></td></tr>
</table> - ## January 2022 <br> <table>
-<tr><td><b>Service Category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
+<tr><td><b>Service category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
-<tr><td rowspan=5><b>Data Flow</b></td><td>Quick re-use is now automatic in all Azure IRs that use a TTL</td><td>You will no longer need to manually specify "quick reuse". ADF mapping data flows can now start-up subsequent data flow activities in under 5 seconds once you've set a TTL.<br><a href="concepts-integration-runtime-performance.md#time-to-live">Learn more</a></td></tr>
-
-<tr><td>Retrieve your custom Assert description</td><td>In the Assert transformation, you can define your own dynamic description message. We've added a new function called assertErrorMessage() that you can use to retrieve the row-by-row message and store it in your destination data.<br><a href="data-flow-expressions-usage.md#assertErrorMessages">Learn more</a></td></tr>
-
-<tr><td>Automatic schema detection in Parse transformation</td><td>A new feature has been added to the Parse transformation to make it easy to automatically detect the schema of an embedded complex field inside a string column. Click on the <b>Detect schema</b> button to set your target schema automatically.<br><a href="data-flow-parse.md">Learn more</a></td></tr>
-
-<tr><td>Support Dynamics 365 Connector as both sink and source</td><td>You can now connect directly to Dynamics 365 to transform your Dynamics data at scale using the new mapping data flow connector for Dynamics 365.<br><a href="connector-dynamics-crm-office-365.md?tabs=data-factory#mapping-data-flow-properties">Learn more</a></td></tr>
-
-<tr><td>Always Encrypted SQL connections now available in data flows</td><td>We've added support for Always Encrypted to source transformations in SQL Server, Azure SQL Database, Azure SQL Managed Instance and Azure Synapse Analytics when using data flows.<br><a href="connector-azure-sql-database.md?tabs=data-factory">Learn more</a></td></tr>
-
-<tr><td rowspan=2><b>Data Movement</b></td><td>Azure Data Factory Databricks Delta Lake connector supports new authentication types</td><td>Azure Data Factory Databricks Delta Lake connector now supports two more authentication types: system-assigned managed identity authentication and user-assigned managed identity authentication.<br><a href="connector-azure-databricks-delta-lake.md">Learn more</a></td></tr>
-
-<tr><td>Azure Data Factory Copy Activity Supports Upsert in several additional connectors</td><td>Azure Data Factory copy activity now supports upsert while sinks data to SQL Server, Azure SQL Database, Azure SQL Managed Instance and Azure Synapse Analytics.<br><a href="connector-overview.md">Learn more</a></td></tr>
+<tr><td rowspan=5><b>Data flow</b></td><td>Quick reuse is now automatic in all Azure IRs that use Time to Live (TTL)</td><td>You no longer need to manually specify "quick reuse." Data Factory mapping data flows can now start up subsequent data flow activities in under five seconds after you set a TTL.<br><a href="concepts-integration-runtime-performance.md#time-to-live">Learn more</a></td></tr>
+<tr><td>Retrieve your custom Assert description</td><td>In the Assert transformation, you can define your own dynamic description message. You can use the new function <b>assertErrorMessage()</b> to retrieve the row-by-row message and store it in your destination data.<br><a href="data-flow-expressions-usage.md#assertErrorMessages">Learn more</a></td></tr>
+<tr><td>Automatic schema detection in Parse transformation</td><td>A new feature added to the Parse transformation makes it easy to automatically detect the schema of an embedded complex field inside a string column. Select the <b>Detect schema</b> button to set your target schema automatically.<br><a href="data-flow-parse.md">Learn more</a></td></tr>
+<tr><td>Support Dynamics 365 connector as both sink and source</td><td>You can now connect directly to Dynamics 365 to transform your Dynamics data at scale by using the new mapping data flow connector for Dynamics 365.<br><a href="connector-dynamics-crm-office-365.md?tabs=data-factory#mapping-data-flow-properties">Learn more</a></td></tr>
+<tr><td>Always Encrypted SQL connections now available in data flows</td><td>Support for Always Encrypted has been added to source transformations in SQL Server, SQL Database, SQL Managed Instance, and Azure Synapse when you use data flows.<br><a href="connector-azure-sql-database.md?tabs=data-factory">Learn more</a></td></tr>
+
+<tr><td rowspan=2><b>Data movement</b></td><td>Data Factory Azure Databricks Delta Lake connector supports new authentication types</td><td>Data Factory Databricks Delta Lake connector now supports two more authentication types: system-assigned managed identity authentication and user-assigned managed identity authentication.<br><a href="connector-azure-databricks-delta-lake.md">Learn more</a></td></tr>
+<tr><td>Data Factory Copy activity supports upsert in several more connectors</td><td>Data Factory Copy activity now supports upsert while it sinks data to SQL Server, SQL Database, SQL Managed Instance, and Azure Synapse.<br><a href="connector-overview.md">Learn more</a></td></tr>
</table> ## December 2021 <br> <table>
-<tr><td><b>Service Category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
-
+<tr><td><b>Service category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
-<tr><td rowspan=9><b>Data Flow</b></td><td>Dynamics connector as native source and sink for mapping data flows</td><td>The Dynamics connector is now supported as both a source and sink for mapping data flows.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/mapping-data-flow-gets-new-native-connectors/ba-p/2866754">Learn more</a></td></tr>
-<tr><td>Native change data capture (CDC) now natively supported</td><td>CDC is now natively supported in Azure Data Factory for CosmosDB, Blob Store, Azure Data Lake Storage Gen1 and Gen2, and common data model (CDM).<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/cosmosdb-change-feed-is-supported-in-adf-now/ba-p/3037011">Learn more</a></td></tr>
+<tr><td rowspan=9><b>Data flow</b></td><td>Dynamics connector as native source and sink for mapping data flows</td><td>The Dynamics connector is now supported as source and sink for mapping data flows.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/mapping-data-flow-gets-new-native-connectors/ba-p/2866754">Learn more</a></td></tr>
+<tr><td>Native change data capture (CDC) is now natively supported</td><td>CDC is now natively supported in Data Factory for Azure Cosmos DB, Blob Storage, Data Lake Storage Gen1, Data Lake Storage Gen2, and CDM.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/cosmosdb-change-feed-is-supported-in-adf-now/ba-p/3037011">Learn more</a></td></tr>
<tr><td>Flowlets public preview</td><td>The flowlets public preview allows data flow developers to build reusable components to easily build composable data transformation logic.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/introducing-the-flowlets-preview-for-adf-and-synapse/ba-p/3030699">Learn more</a></td></tr>
-
-<tr><td>Map Data public preview</td><td>The Map Data preview enables business users to define column mapping and transformations to load Synapse Lake Databases<br><a href="../synapse-analytics/database-designer/overview-map-data.md">Learn more</a></td></tr>
-
-<tr><td>Multiple output destinations from Power Query</td><td>You can now map multiple output destinations from Power Query in Azure Data Factory for flexible ETL patterns for citizen data integrators.<br><a href="control-flow-power-query-activity.md#sink">Learn more</a></td></tr>
-
-<tr><td>External Call transformation support</td><td>Extend the functionality of Mapping Data Flows by using the External Call transformation. You can now add your own custom code as a REST endpoint or call a curated third party service row-by-row.<br><a href="data-flow-external-call.md">Learn more</a></td></tr>
-
-<tr><td>Enable quick re-use by Synapse Mapping Data Flows with TTL support</td><td>Synapse Mapping Data Flows now support quick re-use by setting a TTL in the Azure Integration Runtime. This will enable your subsequent data flow activities to execute in under 5 seconds.<br><a href="control-flow-execute-data-flow-activity.md#data-flow-integration-runtime">Learn more</a></td></tr>
-
-<tr><td>Assert transformation</td><td>Easily add data quality, data domain validation, and metadata checks to your Azure Data Factory pipelines by using the Assert transformation in Mapping Data Flows.<br><a href="data-flow-assert.md">Learn more</a></td></tr>
-
-<tr><td>IntelliSense support in expression builder for more productive pipeline authoring experiences</td><td>We have introduced IntelliSense support in expression builder / dynamic content authoring to make Azure Data Factory / Synapse pipeline developers more productive while writing complex expressions in their data pipelines.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/intellisense-support-in-expression-builder-for-more-productive/ba-p/3041459">Learn more</a></td></tr>
+<tr><td>Map Data public preview</td><td>The Map Data preview enables business users to define column mapping and transformations to load Azure Synapse lake databases.<br><a href="../synapse-analytics/database-designer/overview-map-data.md">Learn more</a></td></tr>
+<tr><td>Multiple output destinations from Power Query</td><td>You can now map multiple output destinations from Power Query in Data Factory for flexible ETL patterns for citizen data integrators.<br><a href="control-flow-power-query-activity.md#sink">Learn more</a></td></tr>
+<tr><td>External Call transformation support</td><td>Extend the functionality of mapping data flows by using the External Call transformation. You can now add your own custom code as a REST endpoint or call a curated third-party service row by row.<br><a href="data-flow-external-call.md">Learn more</a></td></tr>
+<tr><td>Enable quick reuse by Azure Synapse mapping data flows with TTL support</td><td>Azure Synapse mapping data flows now support quick reuse by setting a TTL in the Azure IR. This setting enables your subsequent data flow activities to execute in under five seconds.<br><a href="control-flow-execute-data-flow-activity.md#data-flow-integration-runtime">Learn more</a></td></tr>
+<tr><td>Assert transformation</td><td>Easily add data quality, data domain validation, and metadata checks to your Data Factory pipelines by using the Assert transformation in mapping data flows.<br><a href="data-flow-assert.md">Learn more</a></td></tr>
+<tr><td>IntelliSense support in expression builder for more productive pipeline authoring experiences</td><td>IntelliSense support in expression builder and dynamic content authoring makes Data Factory and Azure Synapse pipeline developers more productive while they write complex expressions in their data pipelines.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/intellisense-support-in-expression-builder-for-more-productive/ba-p/3041459">Learn more</a></td></tr>
</table> ## November 2021 <br> <table>
-<tr><td><b>Service Category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
+<tr><td><b>Service category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
<tr>
- <td><b>CI/CD</b></td>
+ <td><b>Continuous integration and continuous delivery (CI/CD)</b></td>
<td>GitHub integration improvements</td>
- <td>Improvements in ADF and GitHub integration removes limits on 1000 data factory resources per resource type (datasets, pipelines, etc.). For large data factories, this helps mitigate the impact of GitHub API rate limit.<br><a href="source-control.md">Learn more</a></td>
+ <td>Improvements in Data Factory and GitHub integration remove limits on 1,000 Data Factory resources per resource type, such as datasets and pipelines. For large data factories, this change helps mitigate the impact of the GitHub API rate limit.<br><a href="source-control.md">Learn more</a></td>
</tr>
-
-<tr><td rowspan=3><b>Data Flow</b></td><td>Set a custom error code and error message with the Fail activity</td><td>Fail Activity enables ETL developers to set the error message and custom error code for an Azure Data Factory pipeline.<br><a href="control-flow-fail-activity.md">Learn more</a></td></tr>
-<tr><td>External call transformation</td><td>Mapping Data Flows External Call transformation enables ETL developers to leverage transformations, and data enrichments provided by REST endpoints or 3rd party API services.<br><a href="data-flow-external-call.md">Learn more</a></td></tr>
-<tr><td>Synapse quick re-use</td><td>When executing Data flow in Synapse Analytics, use the TTL feature. The TTL feature uses the quick re-use feature so that sequential data flows will execute within a few seconds. You can set the TTL when configuring an Azure Integration runtime.<br><a href="control-flow-execute-data-flow-activity.md#data-flow-integration-runtime">Learn more</a></td></tr>
-
-<tr><td rowspan=3><b>Data Movement</b></td><td>Copy activity supports reading data from FTP/SFTP without chunking</td><td>Automatically determining the file length or the relevant offset to be read when copying data from an FTP or SFTP server. With this capability, Azure Data Factory will automatically connect to the FTP/SFTP server to determine the file length. Once this is determined, Azure Data Factory will dive the file into multiple chunks and read them in parallel.<br><a href="connector-ftp.md">Learn more</a></td></tr>
-<tr><td><i>UTF-8 without BOM</i> support in Copy activity</td><td>Copy activity supports writing data with encoding type <i>UTF-8 without BOM</i> for JSON and delimited text datasets.</td></tr>
-<tr><td>Multi-character column delimiter support</td><td>Copy activity supports using multi-character column delimiters (for delimited text datasets).</td></tr>
-
+
+<tr><td rowspan=3><b>Data flow</b></td><td>Set a custom error code and error message with the Fail activity</td><td>Fail activity enables ETL developers to set the error message and custom error code for a Data Factory pipeline.<br><a href="control-flow-fail-activity.md">Learn more</a></td></tr>
+<tr><td>External call transformation</td><td>Mapping data flows External Call transformation enables ETL developers to use transformations and data enrichments provided by REST endpoints or third-party API services.<br><a href="data-flow-external-call.md">Learn more</a></td></tr>
+<tr><td>Synapse quick reuse</td><td>When you execute a data flow in Synapse Analytics, use the TTL feature. The TTL feature uses quick reuse so that sequential data flows execute within a few seconds. You can set the TTL when you configure an Azure IR.<br><a href="control-flow-execute-data-flow-activity.md#data-flow-integration-runtime">Learn more</a></td></tr>
+
+<tr><td rowspan=3><b>Data movement</b></td><td>Copy activity supports reading data from FTP or SFTP without chunking</td><td>Automatically determine the file length or the relevant offset to be read when you copy data from an FTP or SFTP server. With this capability, Data Factory automatically connects to the FTP or SFTP server to determine the file length. After the length is determined, Data Factory divides the file into multiple chunks and reads them in parallel.<br><a href="connector-ftp.md">Learn more</a></td></tr>
+<tr><td><i>UTF-8 without BOM</i> support in Copy activity</td><td>Copy activity supports writing data with the encoding type <i>UTF-8 without BOM</i> for JSON and delimited text datasets.</td></tr>
+<tr><td>Multicharacter column delimiter support</td><td>Copy activity supports using multicharacter column delimiters for delimited text datasets.</td></tr>
+ <tr>
- <td><b>Integration Runtime</b></td>
- <td>Run any process anywhere in 3 easy steps with SSIS in Azure Data Factory</td>
- <td>In this article, you will learn how to use the best of Azure Data Factory and SSIS capabilities in a pipeline. A sample SSIS package (with parameterized properties) is provided to help you jumpstart. Using Azure Data Factory Studio, the SSIS package can be easily dragged & dropped into a pipeline and used as part of an Execute SSIS Package activity.<br><br>This enables you to run the Azure Data Factory pipeline (with SSIS package) on self-hosted/SSIS integration runtimes (SHIR/SSIS IR). By providing run-time parameter values, you can leverage the powerful capabilities of Azure Data Factory and SSIS capabilities together. This article illustrates 3 easy steps to run any process (which can be any executable, such as application/program/utility/batch file) anywhere.
+ <td><b>Integration runtime</b></td>
+ <td>Run any process anywhere in three steps with SQL Server Integration Services (SSIS) in Data Factory</td>
 + <td>Learn how to use the best of Data Factory and SSIS capabilities in a pipeline. A sample SSIS package with parameterized properties helps you get a jump-start. With Data Factory Studio, the SSIS package can be easily dragged and dropped into a pipeline and used as part of an Execute SSIS Package activity.<br><br>This capability enables you to run the Data Factory pipeline with an SSIS package on self-hosted IRs or SSIS IRs. By providing run-time parameter values, you can use the powerful capabilities of Data Factory and SSIS together. This article illustrates three steps to run any process, which can be any executable, such as an application, program, utility, or batch file, anywhere.
<br><a href="https://techcommunity.microsoft.com/t5/sql-server-integration-services/run-any-process-anywhere-in-3-easy-steps-with-ssis-in-azure-data/ba-p/2962609">Learn more</a></td> </tr> </table>
This page is updated monthly, so revisit it regularly.
## October 2021 <br> <table>
-<tr><td><b>Service Category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
-
-<tr><td rowspan=3><b>Data Flow</b></td><td>Azure Data Explorer and Amazon Web Services S3 connectors</td><td>The Microsoft Data Integration team has just released two new connectors for mapping data flows. If you are using Azure Synapse, you can now connect directly to your AWS S3 buckets for data transformations. In both Azure Data Factory and Azure Synapse, you can now natively connect to your Azure Data Explorer clusters in mapping data flows.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/mapping-data-flow-gets-new-native-connectors/ba-p/2866754">Learn more</a></td></tr>
-<tr><td>Power Query activity leaves preview for General Availability (GA)</td><td>Microsoft has released the Azure Data Factory Power Query pipeline activity as Generally Available. This new feature provides scaled-out data prep and data wrangling for citizen integrators inside the ADF browser UI for an integrated experience for data engineers. The Power Query data wrangling feature in ADF provides a powerful easy-to-use pipeline capability to solve your most complex data integration and ETL patterns in a single service.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/data-wrangling-at-scale-with-adf-s-power-query-activity-now/ba-p/2824207">Learn more</a></td></tr>
-<tr><td>New Stringify data transformation in mapping data flows</td><td>Mapping data flows adds a new data transformation called Stringify to make it easy to convert complex data types like structs and arrays into string form that can be sent to structured output destinations.<br><a href="data-flow-stringify.md">Learn more</a></td></tr>
-
+<tr><td><b>Service category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
+
+<tr><td rowspan=3><b>Data flow</b></td><td>Azure Data Explorer and Amazon Web Services (AWS) S3 connectors</td><td>The Microsoft Data Integration team has released two new connectors for mapping data flows. If you're using Azure Synapse, you can now connect directly to your AWS S3 buckets for data transformations. In both Data Factory and Azure Synapse, you can now natively connect to your Azure Data Explorer clusters in mapping data flows.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/mapping-data-flow-gets-new-native-connectors/ba-p/2866754">Learn more</a></td></tr>
+<tr><td>Power Query activity leaves preview for GA</td><td>The Data Factory Power Query pipeline activity is now generally available. This new feature provides scaled-out data prep and data wrangling for citizen integrators inside the Data Factory browser UI for an integrated experience for data engineers. The Power Query data wrangling feature in Data Factory provides a powerful, easy-to-use pipeline capability to solve your most complex data integration and ETL patterns in a single service.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/data-wrangling-at-scale-with-adf-s-power-query-activity-now/ba-p/2824207">Learn more</a></td></tr>
+<tr><td>New Stringify data transformation in mapping data flows</td><td>Mapping data flows adds a new data transformation called Stringify to make it easy to convert complex data types like structs and arrays into string form. These data types can then be sent to structured output destinations.<br><a href="data-flow-stringify.md">Learn more</a></td></tr>
+ <tr>
- <td><b>Integration Runtime</b></td>
- <td>Express VNet injection for SSIS integration runtime (Public Preview)</td>
- <td>The SSIS integration runtime now supports express VNet injection.<br>
+ <td><b>Integration runtime</b></td>
+ <td>Express virtual network injection for SSIS IR (public preview)</td>
+ <td>The SSIS IR now supports express virtual network injection.<br>
Learn more:<br>
- <a href="join-azure-ssis-integration-runtime-virtual-network.md">Overview of VNet injection for SSIS integration runtime</a><br>
- <a href="azure-ssis-integration-runtime-virtual-network-configuration.md">Standard vs. express VNet injection for SSIS integration runtime</a><br>
- <a href="azure-ssis-integration-runtime-express-virtual-network-injection.md">Express VNet injection for SSIS integration runtime</a>
+ <a href="join-azure-ssis-integration-runtime-virtual-network.md">Overview of virtual network injection for SSIS IR</a><br>
+ <a href="azure-ssis-integration-runtime-virtual-network-configuration.md">Standard vs. express virtual network injection for SSIS IR</a><br>
+ <a href="azure-ssis-integration-runtime-express-virtual-network-injection.md">Express virtual network injection for SSIS IR</a>
</td> </tr>
-<tr><td rowspan=2><b>Security</b></td><td>Azure Key Vault integration improvement</td><td>We have improved Azure Key Vault integration by adding user selectable drop-downs to select the secret values in the linked service, increasing productivity and not requiring users to type in the secrets, which could result in human error.</td></tr>
-<tr><td>Support for user-assigned managed identity in Azure Data Factory</td><td>Credential safety is crucial for any enterprise. With that in mind, the Azure Data Factory (ADF) team is committed to making the data engineering process secure yet simple for data engineers. We are excited to announce the support for user-assigned managed identity (Preview) in all connectors/ linked services that support Azure Active Directory (Azure AD) based authentication.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/support-for-user-assigned-managed-identity-in-azure-data-factory/ba-p/2841013">Learn more</a></td></tr>
+<tr><td rowspan=2><b>Security</b></td><td>Azure Key Vault integration improvement</td><td>Key Vault integration now has dropdowns so that users can select the secret values in the linked service. This capability increases productivity because users aren't required to type in the secrets, which could result in human error.</td></tr>
+<tr><td>Support for user-assigned managed identity in Data Factory</td><td>Credential safety is crucial for any enterprise. The Data Factory team is committed to making the data engineering process secure yet simple for data engineers. User-assigned managed identity (preview) is now supported in all connectors and linked services that support Azure AD-based authentication.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/support-for-user-assigned-managed-identity-in-azure-data-factory/ba-p/2841013">Learn more</a></td></tr>
</table> ## September 2021 <br> <table>
-<tr><td><b>Service Category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
- <tr><td><b>Continuous integration and delivery (CI/CD)</b></td><td>Expanded CI/CD capabilities</td><td>You can now create a new Git branch based on any other branch in Azure Data Factory.<br><a href="source-control.md#version-control">Learn more</a></td></tr>
-<tr><td rowspan=3><b>Data Movement</b></td><td>Amazon Relational Database Service (RDS) for Oracle sources</td><td>The Amazon RDS for Oracle sources connector is now available in both Azure Data Factory and Azure Synapse.<br><a href="connector-amazon-rds-for-oracle.md">Learn more</a></td></tr>
-<tr><td>Amazon RDS for SQL Server sources</td><td>The Amazon RDS for SQL Server sources connector is now available in both Azure Data Factory and Azure Synapse.<br><a href="connector-amazon-rds-for-sql-server.md">Learn more</a></td></tr>
+<tr><td><b>Service category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
+
+ <tr><td><b>Continuous integration and continuous delivery</b></td><td>Expanded CI/CD capabilities</td><td>You can now create a new Git branch based on any other branch in Data Factory.<br><a href="source-control.md#version-control">Learn more</a></td></tr>
+
+<tr><td rowspan=3><b>Data movement</b></td><td>Amazon Relational Database Service (RDS) for Oracle sources</td><td>The Amazon RDS for Oracle sources connector is now available in both Data Factory and Azure Synapse.<br><a href="connector-amazon-rds-for-oracle.md">Learn more</a></td></tr>
+<tr><td>Amazon RDS for SQL Server sources</td><td>The Amazon RDS for SQL Server sources connector is now available in both Data Factory and Azure Synapse.<br><a href="connector-amazon-rds-for-sql-server.md">Learn more</a></td></tr>
<tr><td>Support parallel copy from Azure Database for PostgreSQL</td><td>The Azure Database for PostgreSQL connector now supports parallel copy operations.<br><a href="connector-azure-database-for-postgresql.md">Learn more</a></td></tr>
-<tr><td rowspan=3><b>Data Flow</b></td><td>Use Azure Data Lake Storage (ADLS) Gen2 to execute pre- and post-processing commands</td><td>Hadoop Distributed File System (HDFS) pre- and post-processing commands can now be executed using ADLS Gen2 sinks in data flows<br><a href="connector-azure-data-lake-storage.md#pre-processing-and-post-processing-commands">Learn more</a></td></tr>
-<tr><td>Edit data flow properties for existing instances of the Azure Integration Runtime (IR)</td><td>The Azure Integration Runtime (IR) has been updated to allow editing of data flow properties for existing IRs. You can now modify data flow compute properties without needing to create a new Azure IR.<br><a href="concepts-integration-runtime.md">Learn more</a></td></tr>
-<tr><td>TTL setting for Azure Synapse to improve pipeline activities execution startup time</td><td>Azure Synapse Analytics has added TTL to the Azure Integration Runtime to enable your data flow pipeline activities to begin execution in seconds, greatly minimizing the runtime of your data flow pipelines.<br><a href="control-flow-execute-data-flow-activity.md#data-flow-integration-runtime">Learn more</a></td></tr>
-<tr><td><b>Integration Runtime</b></td><td>Azure Data Factory Managed vNet goes GA</td><td>You can now provision the Azure Integration Runtime as part of a managed Virtual Network and leverage Private Endpoints to securely connect to supported data stores. Data traffic goes through Azure Private Links which provide secured connectivity to the data source. In addition, it prevents data exfiltration to the public internet.<br><a href="managed-virtual-network-private-endpoint.md">Learn more</a></td></tr>
-<tr><td rowspan=2><b>Orchestration</b></td><td>Operationalize and Provide SLA for Data Pipelines</td><td>The new Elapsed Time Pipeline Run metric, combined with Data Factory Alerts, empowers data pipeline developers to better deliver SLAs to their customers, and you tell us how long a pipeline should run, and we will notify you proactively when the pipeline runs longer than expected.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/operationalize-and-provide-sla-for-data-pipelines/ba-p/2767768">Learn more</a></td></tr>
-<tr><td>Fail Activity (Public Preview)</td><td>The new Fail activity allows you to throw an error in a pipeline intentionally for any reason. For example, you might use the Fail activity if a Lookup activity returns no matching data or a Custom activity finishes with an internal error.<br><a href="control-flow-fail-activity.md">Learn more</a></td></tr>
+
+<tr><td rowspan=3><b>Data flow</b></td><td>Use Data Lake Storage Gen2 to execute pre- and post-processing commands</td><td>Hadoop Distributed File System pre- and post-processing commands can now be executed by using Data Lake Storage Gen2 sinks in data flows.<br><a href="connector-azure-data-lake-storage.md#pre-processing-and-post-processing-commands">Learn more</a></td></tr>
+<tr><td>Edit data flow properties for existing instances of the Azure IR </td><td>The Azure IR has been updated to allow editing of data flow properties for existing IRs. You can now modify data flow compute properties without needing to create a new Azure IR.<br><a href="concepts-integration-runtime.md">Learn more</a></td></tr>
+<tr><td>TTL setting for Azure Synapse to improve pipeline activities execution startup time</td><td>Azure Synapse has added TTL to the Azure IR to enable your data flow pipeline activities to begin execution in seconds, which greatly minimizes the runtime of your data flow pipelines.<br><a href="control-flow-execute-data-flow-activity.md#data-flow-integration-runtime">Learn more</a></td></tr>
+
+<tr><td><b>Integration runtime</b></td><td>Data Factory managed virtual network GA</td><td>You can now provision the Azure IR as part of a managed virtual network and use private endpoints to securely connect to supported data stores. Data traffic goes through Azure Private Link, which provides secured connectivity to the data source. It also prevents data exfiltration to the public internet.<br><a href="managed-virtual-network-private-endpoint.md">Learn more</a></td></tr>
+
+<tr><td rowspan=2><b>Orchestration</b></td><td>Operationalize and provide SLA for data pipelines</td><td>The new Elapsed Time Pipeline Run metric, combined with Data Factory alerts, empowers data pipeline developers to better deliver SLAs to their customers. Now you can tell us how long a pipeline should run, and we'll notify you proactively when the pipeline runs longer than expected.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/operationalize-and-provide-sla-for-data-pipelines/ba-p/2767768">Learn more</a></td></tr>
+<tr><td>Fail activity (public preview)</td><td>The new Fail activity allows you to throw an error in a pipeline intentionally for any reason. For example, you might use the Fail activity if a Lookup activity returns no matching data or a custom activity finishes with an internal error.<br><a href="control-flow-fail-activity.md">Learn more</a></td></tr>
</table> ## August 2021 <br> <table>
-<tr><td><b>Service Category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
- <tr><td><b>Continuous integration and delivery (CI/CD)</b></td><td>CICD Improvements with GitHub support in Azure Government and Azure China</td><td>We have added support for GitHub in Azure for U.S. Government and Azure China.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/cicd-improvements-with-github-support-in-azure-government-and/ba-p/2686918">Learn more</a></td></tr>
-<tr><td rowspan=2><b>Data Movement</b></td><td>Azure Cosmos DB's API for MongoDB connector supports version 3.6 & 4.0 in Azure Data Factory</td><td>Azure Data Factory Cosmos DBΓÇÖs API for MongoDB connector now supports server version 3.6 & 4.0.<br><a href="connector-azure-cosmos-db-mongodb-api.md">Learn more</a></td></tr>
-<tr><td>Enhance using COPY statement to load data into Azure Synapse Analytics</td><td>The Azure Data Factory Azure Synapse Analytics connector now supports staged copy and copy source with *.* as wildcardFilename for COPY statement.<br><a href="connector-azure-sql-data-warehouse.md#use-copy-statement">Learn more</a></td></tr>
-<tr><td><b>Data Flow</b></td><td>REST endpoints are available as source and sink in Data Flow</td><td>Data flows in Azure Data Factory and Azure Synapse Analytics now support REST endpoints as both a source and sink with full support for both JSON and XML payloads.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/rest-source-and-sink-now-available-for-data-flows/ba-p/2596484">Learn more</a></td></tr>
-<tr><td><b>Integration Runtime</b></td><td>Diagnostic tool is available for self-hosted integration runtime</td><td>A diagnostic tool for self-hosted integration runtime is designed for providing a better user experience and help users to find potential issues. The tool runs a series of test scenarios on the self-hosted integration runtime machine and every scenario has typical health check cases for common issues.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/diagnostic-tool-for-self-hosted-integration-runtime/ba-p/2634905">Learn more</a></td></tr>
-<tr><td><b>Orchestration</b></td><td>Custom Event Trigger with Advanced Filtering Option is GA</td><td>You can now create a trigger that responds to a Custom Topic posted to Event Grid. Additionally, you can leverage Advanced Filtering to get fine-grain control over what events to respond to.<br><a href="how-to-create-custom-event-trigger.md">Learn more</a></td></tr>
+<tr><td><b>Service category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
+
+ <tr><td><b>Continuous integration and continuous delivery</b></td><td>CI/CD improvements with GitHub support in Azure Government and Azure China</td><td>We've added support for GitHub in Azure for US Government and Azure China.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/cicd-improvements-with-github-support-in-azure-government-and/ba-p/2686918">Learn more</a></td></tr>
+
+<tr><td rowspan=2><b>Data movement</b></td><td>The Azure Cosmos DB API for MongoDB connector supports versions 3.6 and 4.0 in Data Factory</td><td>The Data Factory Azure Cosmos DB API for MongoDB connector now supports server versions 3.6 and 4.0.<br><a href="connector-azure-cosmos-db-mongodb-api.md">Learn more</a></td></tr>
+<tr><td>Enhance using COPY statement to load data into Azure Synapse</td><td>The Data Factory Azure Synapse connector now supports staged copy and copy source with *.* as wildcardFilename for the COPY statement.<br><a href="connector-azure-sql-data-warehouse.md#use-copy-statement">Learn more</a></td></tr>
+
+<tr><td><b>Data flow</b></td><td>REST endpoints are available as source and sink in data flow</td><td>Data flows in Data Factory and Azure Synapse now support REST endpoints as both a source and sink with full support for both JSON and XML payloads.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/rest-source-and-sink-now-available-for-data-flows/ba-p/2596484">Learn more</a></td></tr>
+
+<tr><td><b>Integration runtime</b></td><td>Diagnostic tool is available for self-hosted IR</td><td>A diagnostic tool for self-hosted IR is designed to provide a better user experience and help users to find potential issues. The tool runs a series of test scenarios on the self-hosted IR machine. Every scenario has typical health check cases for common issues.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/diagnostic-tool-for-self-hosted-integration-runtime/ba-p/2634905">Learn more</a></td></tr>
+
+<tr><td><b>Orchestration</b></td><td>Custom event trigger with advanced filtering option GA</td><td>You can now create a trigger that responds to a custom topic posted to Azure Event Grid. You can also use advanced filtering to get fine-grain control over what events to respond to.<br><a href="how-to-create-custom-event-trigger.md">Learn more</a></td></tr>
</table> ## July 2021 <br> <table>
-<tr><td><b>Service Category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
-<tr><td><b>Data Movement</b></td><td>Get metadata driven data ingestion pipelines on ADF Copy Data Tool within 10 minutes (Public Preview)</td><td>With this, you can build large-scale data copy pipelines with metadata-driven approach on copy data tool(Public Preview) within 10 minutes.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/get-metadata-driven-data-ingestion-pipelines-on-adf-within-10/ba-p/2528219">Learn more</a></td></tr>
-<tr><td><b>Data Flow</b></td><td>New map functions added in data flow transformation functions</td><td>A new set of data flow transformation functions has been added to enable data engineers to easily generate, read, and update map data types and complex map structures.<br><a href="data-flow-map-functions.md">Learn more</a></td></tr>
-<tr><td><b>Integration Runtime</b></td><td>5 new regions available in Azure Data Factory Managed VNET (Public Preview)</td><td>These 5 new regions(China East2, China North2, US Gov Arizona, US Gov Texas, US Gov Virginia) are available in Azure Data Factory managed virtual network (Public Preview).<br></td></tr>
-<tr><td rowspan=2><b>Developer Productivity</b></td><td>ADF homepage improvements</td><td>The Data Factory home page has been redesigned with better contrast and reflow capabilities. Additionally, a few sections have been introduced on the homepage to help you improve productivity in your data integration journey.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/the-new-and-refreshing-data-factory-home-page/ba-p/2515076">Learn more</a></td></tr>
-<tr><td>New landing page for Azure Data Factory Studio</td><td>The landing page for Data Factory blade in the Azure portal.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/the-new-and-refreshing-data-factory-home-page/ba-p/2515076">Learn more</a></td></tr>
+<tr><td><b>Service category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
+
+<tr><td><b>Data movement</b></td><td>Get metadata-driven data ingestion pipelines on the Data Factory Copy Data tool within 10 minutes (public preview)</td><td>Now you can build large-scale data copy pipelines with a metadata-driven approach on the Copy Data tool (public preview) within 10 minutes.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/get-metadata-driven-data-ingestion-pipelines-on-adf-within-10/ba-p/2528219">Learn more</a></td></tr>
+
+<tr><td><b>Data flow</b></td><td>New map functions added in data flow transformation functions</td><td>A new set of data flow transformation functions enables data engineers to easily generate, read, and update map data types and complex map structures.<br><a href="data-flow-map-functions.md">Learn more</a></td></tr>
+
+<tr><td><b>Integration runtime</b></td><td>Five new regions are available in Data Factory managed virtual network (public preview)</td><td>Five new regions, China East2, China North2, US Government Arizona, US Government Texas, and US Government Virginia, are available in the Data Factory managed virtual network (public preview).<br></td></tr>
+
+<tr><td rowspan=2><b>Developer productivity</b></td><td>Data Factory home page improvements</td><td>The Data Factory home page has been redesigned with better contrast and reflow capabilities. A few sections are introduced on the home page to help you improve productivity in your data integration journey.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/the-new-and-refreshing-data-factory-home-page/ba-p/2515076">Learn more</a></td></tr>
+<tr><td>New landing page for Data Factory Studio</td><td>A new landing page is available for the Data Factory pane in the Azure portal.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/the-new-and-refreshing-data-factory-home-page/ba-p/2515076">Learn more</a></td></tr>
</table> ## June 2021 <br> <table>
-<tr><td><b>Service Category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
-<tr><td rowspan=4 valign="middle"><b>Data Movement</b></td><td>New user experience with Azure Data Factory Copy Data Tool</td><td>Redesigned Copy Data Tool is now available with improved data ingestion experience.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/a-re-designed-copy-data-tool-experience/ba-p/2380634">Learn more</a></td></tr>
-<tr><td>MongoDB and MongoDB Atlas are Supported as both Source and Sink</td><td>This improvement supports copying data between any supported data store and MongoDB or MongoDB Atlas database.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/new-connectors-available-in-adf-mongodb-and-mongodb-atlas-are/ba-p/2441482">Learn more</a></td></tr>
-<tr><td>Always Encrypted is supported for Azure SQL Database, Azure SQL Managed Instance, and SQL Server connectors as both source and sink</td><td>Always Encrypted is available in Azure Data Factory for Azure SQL Database, Azure SQL Managed Instance, and SQL Server connectors for copy activity.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/azure-data-factory-copy-now-supports-always-encrypted-for-both/ba-p/2461346">Learn more</a></td></tr>
-<tr><td>Setting custom metadata is supported in copy activity when sinking to ADLS Gen2 or Azure Blob</td><td>When writing to ADLS Gen2 or Azure Blob, copy activity supports setting custom metadata or storage of the source file's last modified info as metadata.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/support-setting-custom-metadata-when-writing-to-blob-adls-gen2/ba-p/2545506#M490">Learn more</a></td></tr>
-<tr><td rowspan=4 valign="middle"><b>Data Flow</b></td><td>SQL Server is now supported as a source and sink in data flows</td><td>SQL Server is now supported as a source and sink in data flows. Follow the link for instructions on how to configure your networking using the Azure Integration Runtime managed VNET feature to talk to your SQL Server on-premise and cloud VM-based instances.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/new-data-flow-connector-sql-server-as-source-and-sink/ba-p/2406213">Learn more</a></td></tr>
-<tr><td>Dataflow Cluster quick reuse is now enabled by default for all new Azure Integration Runtimes</td><td>ADF is happy to announce the general availability of the popular data flow quick start-up reuse feature. All new Azure Integration Runtimes will now have quick reuse enabled by default.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/how-to-startup-your-data-flows-execution-in-less-than-5-seconds/ba-p/2267365">Learn more</a></td></tr>
-<tr><td>Power Query activity (Public Preview)</td><td>You can now build complex field mappings to your Power Query sink using Azure Data Factory data wrangling. The sink is now configured in the pipeline in the Power Query (Public Preview) activity to accommodate this update.<br><a href="wrangling-tutorial.md">Learn more</a></td></tr>
-<tr><td>Updated data flows monitoring UI in Azure Data Factory</td><td>Azure Data Factory has a new update for the monitoring UI to make it easier to view your data flow ETL job executions and quickly identify areas for performance tuning.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/updated-data-flows-monitoring-ui-in-adf-amp-synapse/ba-p/2432199">Learn more</a></td></tr>
-<tr><td><b>SQL Server Integration Services (SSIS)</b></td><td>Run any SQL anywhere in 3 simple steps with SSIS in Azure Data Factory</td><td>This post provides 3 simple steps to run any SQL statements/scripts anywhere with SSIS in Azure Data Factory.<ol><li>Prepare your Self-Hosted Integration Runtime/SSIS Integration Runtime.</li><li>Prepare an Execute SSIS Package activity in Azure Data Factory pipeline.</li><li>Run the Execute SSIS Package activity on your Self-Hosted Integration Runtime/SSIS Integration Runtime.</li></ol><a href="https://techcommunity.microsoft.com/t5/sql-server-integration-services/run-any-sql-anywhere-in-3-easy-steps-with-ssis-in-azure-data/ba-p/2457244">Learn more</a></td></tr>
+<tr><td><b>Service category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
+
+<tr><td rowspan=4 valign="middle"><b>Data movement</b></td><td>New user experience with Data Factory Copy Data tool</td><td>The redesigned Copy Data tool is now available with improved data ingestion experience.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/a-re-designed-copy-data-tool-experience/ba-p/2380634">Learn more</a></td></tr>
+<tr><td>MongoDB and MongoDB Atlas are supported as both source and sink</td><td>This improvement supports copying data between any supported data store and MongoDB or MongoDB Atlas database.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/new-connectors-available-in-adf-mongodb-and-mongodb-atlas-are/ba-p/2441482">Learn more</a></td></tr>
+<tr><td>Always Encrypted is supported for SQL Database, SQL Managed Instance, and SQL Server connectors as both source and sink</td><td>Always Encrypted is available in Data Factory for SQL Database, SQL Managed Instance, and SQL Server connectors for the Copy activity.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/azure-data-factory-copy-now-supports-always-encrypted-for-both/ba-p/2461346">Learn more</a></td></tr>
+<tr><td>Setting custom metadata is supported in Copy activity when sinking to Data Lake Storage Gen2 or Blob Storage</td><td>When you write to Data Lake Storage Gen2 or Blob Storage, the Copy activity supports setting custom metadata or storage of the source file's last modified information as metadata.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/support-setting-custom-metadata-when-writing-to-blob-adls-gen2/ba-p/2545506#M490">Learn more</a></td></tr>
+
+<tr><td rowspan=4 valign="middle"><b>Data flow</b></td><td>SQL Server is now supported as a source and sink in data flows</td><td>SQL Server is now supported as a source and sink in data flows. Follow the link for instructions on how to configure your networking by using the Azure IR managed virtual network feature to talk to your SQL Server on-premises and cloud VM-based instances.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/new-data-flow-connector-sql-server-as-source-and-sink/ba-p/2406213">Learn more</a></td></tr>
+<tr><td>Data flow cluster quick reuse is now enabled by default for all new Azure IRs</td><td>The popular data flow quick startup reuse feature is now generally available for Data Factory. All new Azure IRs now have quick reuse enabled by default.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/how-to-startup-your-data-flows-execution-in-less-than-5-seconds/ba-p/2267365">Learn more</a></td></tr>
+<tr><td>Power Query (public preview) activity</td><td>You can now build complex field mappings to your Power Query sink by using Data Factory data wrangling. The sink is now configured in the pipeline in the Power Query (public preview) activity to accommodate this update.<br><a href="wrangling-tutorial.md">Learn more</a></td></tr>
+<tr><td>Updated data flows monitoring UI in Data Factory</td><td>Data Factory has a new update for the monitoring UI to make it easier to view your data flow ETL job executions and quickly identify areas for performance tuning.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/updated-data-flows-monitoring-ui-in-adf-amp-synapse/ba-p/2432199">Learn more</a></td></tr>
+
+<tr><td><b>SQL Server Integration Services</b></td><td>Run any SQL statements or scripts anywhere in three steps with SSIS in Data Factory</td><td>This post provides three steps to run any SQL statements or scripts anywhere with SSIS in Data Factory.<ol><li>Prepare your self-hosted IR or SSIS IR.</li><li>Prepare an Execute SSIS Package activity in a Data Factory pipeline.</li><li>Run the Execute SSIS Package activity on your self-hosted IR or SSIS IR.</li></ol><a href="https://techcommunity.microsoft.com/t5/sql-server-integration-services/run-any-sql-anywhere-in-3-easy-steps-with-ssis-in-azure-data/ba-p/2457244">Learn more</a></td></tr>
</table> ## More information
data-lake-analytics Understand Spark Code Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/understand-spark-code-concepts.md
Title: Understand Apache Spark code concepts for Azure Data Lake Analytics U-SQL developers. description: This article describes Apache Spark concepts to help U-SQL developers understand Spark code concepts.- Previously updated : 10/15/2019 Last updated : 05/17/2022 # Understand Apache Spark code for U-SQL developers
Spark is a scale-out framework offering several language bindings in Scala, Java
Thus when translating a U-SQL script to a Spark program, you will have to decide which language you want to use to at least generate the data frame abstraction (which is currently the most frequently used data abstraction) and whether you want to write the declarative dataflow transformations using the DSL or SparkSQL. In some more complex cases, you may need to split your U-SQL script into a sequence of Spark and other steps implemented with Azure Batch or Azure Functions.
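As a rough illustration of that choice, the following is a minimal PySpark sketch that expresses the same filter-and-aggregate logic once with the DataFrame DSL and once with SparkSQL; the file path and the `Region` column are hypothetical placeholders, not part of the original article.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("dsl-vs-sql").getOrCreate()

# Read the input into a DataFrame, the abstraction most U-SQL rowsets map to.
df = spark.read.csv("/data/input.csv", header=True, inferSchema=True)

# Option 1: declarative transformations with the DataFrame DSL.
result_dsl = df.filter(col("Region") == "West").groupBy("Region").count()

# Option 2: the same logic expressed in SparkSQL against a temporary view.
df.createOrReplaceTempView("input")
result_sql = spark.sql(
    "SELECT Region, COUNT(*) AS cnt FROM input WHERE Region = 'West' GROUP BY Region"
)

result_dsl.show()
result_sql.show()
```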
-Furthermore, Azure Data Lake Analytics offers U-SQL in a serverless job service environment, while both Azure Databricks and Azure HDInsight offer Spark in form of a cluster service. When transforming your application, you will have to take into account the implications of now creating, sizing, scaling, and decommissioning the clusters.
+Furthermore, Azure Data Lake Analytics offers U-SQL in a serverless job service environment where resources are allocated for each job, while Azure Synapse Spark, Azure Databricks, and Azure HDInsight offer Spark either in the form of a cluster service or with so-called Spark pool templates. When transforming your application, you will have to take into account the implications of now creating, sizing, scaling, and decommissioning the clusters or pools.
## Transform U-SQL scripts
Spark programs are similar in that you would use Spark connectors to read the da
## Transform .NET code
-U-SQL's expression language is C# and it offers a variety of ways to scale out custom .NET code.
+U-SQL's expression language is C# and it offers a variety of ways to scale out custom .NET code with user-defined functions, user-defined operators and user-defined aggregators.
-Since Spark currently does not natively support executing .NET code, you will have to either rewrite your expressions into an equivalent Spark, Scala, Java, or Python expression or find a way to call into your .NET code. If your script uses .NET libraries, you have the following options:
+Azure Synapse and Azure HDInsight Spark both now natively support executing .NET code with .NET for Apache Spark. This means that you can potentially reuse some or all of your [.NET user-defined functions with Spark](#transform-user-defined-scalar-net-functions-and-user-defined-aggregators). Note though that U-SQL uses the .NET Framework while .NET for Apache Spark is based on .NET Core 3.1 or later.
-- Translate your .NET code into Scala or Python.-- Split your U-SQL script into several steps, where you use Azure Batch processes to apply the .NET transformations (if you can get acceptable scale)-- Use a .NET language binding available in Open Source called Moebius. This project is not in a supported state.
+[U-SQL user-defined operators (UDOs)](#transform-user-defined-operators-udos) use the U-SQL UDO model to provide scaled-out execution of the operator's code. Thus, UDOs will have to be rewritten into user-defined functions to fit into the Spark execution model.
+
+.NET for Apache Spark currently does not support user-defined aggregators. Thus, [U-SQL user-defined aggregators](#transform-user-defined-scalar-net-functions-and-user-defined-aggregators) will have to be translated into Spark user-defined aggregators written in Scala.
+
+If you do not want to take advantage of the .NET for Apache Spark capabilities, you will have to rewrite your expressions into an equivalent Spark, Scala, Java, or Python expression, function, aggregator, or connector.
In any case, if you have a large amount of .NET logic in your U-SQL scripts, please contact us through your Microsoft Account representative for further guidance.
The following details are for the different cases of .NET and C# usages in U-SQL
U-SQL's expression language is C#. Many of the scalar inline U-SQL expressions are implemented natively for improved performance, while more complex expressions may be executed through calling into the .NET framework.
-Spark has its own scalar expression language (either as part of the DSL or in SparkSQL) and allows calling into user-defined functions written in its hosting language.
+Spark has its own scalar expression language (either as part of the DSL or in SparkSQL) and allows calling into user-defined functions written for the JVM, .NET or Python runtime.
-If you have scalar expressions in U-SQL, you should first find the most appropriate natively understood Spark scalar expression to get the most performance, and then map the other expressions into a user-defined function of the Spark hosting language of your choice.
+If you have scalar expressions in U-SQL, you should first find the most appropriate natively understood Spark scalar expression to get the most performance, and then map the other expressions into a user-defined function of the Spark runtime language of your choice.
-Be aware that .NET and C# have different type semantics than the Spark hosting languages and Spark's DSL. See [below](#transform-typed-values) for more details on the type system differences.
+Be aware that .NET and C# have different type semantics than the JVM and Python runtimes and Spark's DSL. See [below](#transform-typed-values) for more details on the type system differences.
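For example, here is a minimal PySpark sketch of that guidance: a native expression is used where one exists, and a user-defined function covers only the custom logic. The DataFrame contents and the `grade` function are illustrative assumptions.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, udf, upper
from pyspark.sql.types import StringType

spark = SparkSession.builder.appName("scalar-expressions").getOrCreate()
df = spark.createDataFrame([("contoso", 3), ("fabrikam", 5)], ["name", "score"])

# Prefer a native Spark expression where one exists; it runs inside the engine.
df = df.withColumn("name_upper", upper(col("name")))

# Fall back to a user-defined function only for logic with no native
# equivalent; rows are shipped to the Python runtime, which is slower.
@udf(returnType=StringType())
def grade(score):
    return "high" if score >= 4 else "low"

df = df.withColumn("grade", grade(col("score")))
df.show()
```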
### Transform user-defined scalar .NET functions and user-defined aggregators
U-SQL provides ways to call arbitrary scalar .NET functions and to call user-def
Spark also offers support for user-defined functions and user-defined aggregators written in most of its hosting languages that can be called from Spark's DSL and SparkSQL.
+As mentioned above, .NET for Apache Spark supports user-defined functions written in .NET, but does not support user-defined aggregators. So for user-defined functions, .NET for Apache Spark can be used, while user-defined aggregators have to be authored in Scala for Spark.
+ ### Transform user-defined operators (UDOs) U-SQL provides several categories of user-defined operators (UDOs) such as extractors, outputters, reducers, processors, appliers, and combiners that can be written in .NET (and - to some extent - in Python and R).
Thus a SparkSQL `SELECT` statement that uses `WHERE column_name = NULL` returns
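A minimal PySpark sketch of that null-comparison behavior, using a hypothetical `people` view:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("null-semantics").getOrCreate()

df = spark.createDataFrame([(1, "Alice"), (2, None)], ["id", "name"])
df.createOrReplaceTempView("people")

# Returns zero rows: in SparkSQL, `= NULL` evaluates to NULL (unknown), never true.
spark.sql("SELECT * FROM people WHERE name = NULL").show()

# Returns the row with the missing name: use IS NULL for null checks.
spark.sql("SELECT * FROM people WHERE name IS NULL").show()
```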
One major difference is that U-SQL Scripts can make use of its catalog objects, many of which have no direct Spark equivalent.
-Spark does provide support for the Hive Meta store concepts, mainly databases, and tables, so you can map U-SQL databases and schemas to Hive databases, and U-SQL tables to Spark tables (see [Moving data stored in U-SQL tables](understand-spark-data-formats.md#move-data-stored-in-u-sql-tables)), but it has no support for views, table-valued functions (TVFs), stored procedures, U-SQL assemblies, external data sources etc.
+Spark does provide support for the Hive metastore concepts, mainly databases, tables, and views, so you can map U-SQL databases and schemas to Hive databases, and U-SQL tables to Spark tables (see [Moving data stored in U-SQL tables](understand-spark-data-formats.md#move-data-stored-in-u-sql-tables)), but it has no support for table-valued functions (TVFs), stored procedures, U-SQL assemblies, external data sources, and so on.
The U-SQL code objects such as views, TVFs, stored procedures, and assemblies can be modeled through code functions and libraries in Spark and referenced using the host language's function and procedural abstraction mechanisms (for example, through importing Python modules or referencing Scala functions).
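To illustrate the catalog mapping described above, here is a minimal PySpark sketch that stores a DataFrame as a Hive metastore table and defines a view over it; the database, table, path, and column names are hypothetical.

```python
from pyspark.sql import SparkSession

# enableHiveSupport() backs the Spark catalog with a Hive metastore, the
# closest counterpart to U-SQL databases, schemas, and tables.
spark = (
    SparkSession.builder.appName("catalog-mapping")
    .enableHiveSupport()
    .getOrCreate()
)

spark.sql("CREATE DATABASE IF NOT EXISTS sales_db")

# Persist a DataFrame as a managed table, analogous to writing a U-SQL table.
df = spark.read.parquet("/data/sales")
df.write.mode("overwrite").saveAsTable("sales_db.orders")

# Views are also supported through the catalog.
spark.sql(
    "CREATE OR REPLACE VIEW sales_db.west_orders AS "
    "SELECT * FROM sales_db.orders WHERE Region = 'West'"
)
```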
databox-online Azure Stack Edge Gpu Connect Virtual Machine Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-connect-virtual-machine-console.md
Title: Connect to the virtual machine console on Azure Stack Edge Pro GPU device
+ Title: Connect to the virtual machine console on Azure Stack Edge Pro GPU device
description: Describes how to connect to the virtual machine console on a VM running on Azure Stack Edge Pro GPU device.
[!INCLUDE [applies-to-GPU-and-pro-r-and-mini-r-skus](../../includes/azure-stack-edge-applies-to-gpu-pro-r-mini-r-sku.md)]
-Azure Stack Edge Pro GPU solution runs non-containerized workloads via the virtual machines. This article describes how to connect to the console of a virtual machine deployed on your device.
+Azure Stack Edge Pro GPU solution runs non-containerized workloads via the virtual machines. This article describes how to connect to the console of a virtual machine deployed on your device.
The virtual machine console allows you to access your VMs with keyboard, mouse, and screen features using the commonly available remote desktop tools. You can access the console and troubleshoot any issues experienced when deploying a virtual machine on your device. You can connect to the virtual machine console even if your VM has failed to provision.
You should have access to an Azure Stack Edge Pro GPU device that is activated.
Make sure that you have access to a client system that: - Can access the PowerShell interface of the device. The client is running a [Supported operating system](azure-stack-edge-gpu-system-requirements.md#supported-os-for-clients-connected-to-device).-- The client is running PowerShell 7.0 or later. This version of PowerShell works for Windows, Mac, and Linux clients. See instructions to [install PowerShell 7.0](/powershell/scripting/whats-new/what-s-new-in-powershell-70?view=powershell-7.1&preserve-view=true).
+- The client is running PowerShell 7.0 or later. This version of PowerShell works for Windows, Mac, and Linux clients. See instructions to [install PowerShell 7](/powershell/scripting/install/installing-powershell).
- Has remote desktop capabilities. Depending on whether you are using Windows, macOS, or Linux, you should install one of these [Remote desktop clients](/windows-server/remote/remote-desktop-services/clients/remote-desktop-clients). This article provides instructions with [Windows Remote Desktop](/windows-server/remote/remote-desktop-services/clients/windowsdesktop#install-the-client) and [FreeRDP](https://www.freerdp.com/). <!--Which version of FreeRDP to use?-->
Follow these steps to connect to the virtual machine console on your device.
### Connect to the PowerShell interface on your device
-The first step is to [Connect to the PowerShell interface](azure-stack-edge-gpu-connect-powershell-interface.md#connect-to-the-powershell-interface) of your device.
+The first step is to [Connect to the PowerShell interface](azure-stack-edge-gpu-connect-powershell-interface.md#connect-to-the-powershell-interface) of your device.
### Enable console access to the VM
-1. In the PowerShell interface, run the following command to enable access to the VM console.
+1. In the PowerShell interface, run the following command to enable access to the VM console.
- ```powershell
- Grant-HcsVMConnectAccess -ResourceGroupName <VM resource group> -VirtualMachineName <VM name>
- ```
+ ```powershell
+ Grant-HcsVMConnectAccess -ResourceGroupName <VM resource group> -VirtualMachineName <VM name>
+ ```
2. In the sample output, make a note of the virtual machine ID. You'll need this for a later step.
- ```powershell
- [10.100.10.10]: PS>Grant-HcsVMConnectAccess -ResourceGroupName mywindowsvm1rg -VirtualMachineName mywindowsvm1
-
- VirtualMachineId : 81462e0a-decb-4cd4-96e9-057094040063
- VirtualMachineHostName : 3V78B03
- ResourceGroupName : mywindowsvm1rg
- VirtualMachineName : mywindowsvm1
- Id : 81462e0a-decb-4cd4-96e9-057094040063
- [10.100.10.10]: PS>
- ```
+ ```powershell
+ [10.100.10.10]: PS>Grant-HcsVMConnectAccess -ResourceGroupName mywindowsvm1rg -VirtualMachineName mywindowsvm1
+
+ VirtualMachineId : 81462e0a-decb-4cd4-96e9-057094040063
+ VirtualMachineHostName : 3V78B03
+ ResourceGroupName : mywindowsvm1rg
+ VirtualMachineName : mywindowsvm1
+ Id : 81462e0a-decb-4cd4-96e9-057094040063
+ [10.100.10.10]: PS>
+ ```
### Connect to the VM
You can now use a Remote Desktop client to connect to the virtual machine consol
    ```
    pcb:s:<VM ID from PowerShell>;EnhancedMode=0
    full address:s:<IP address of the device>
    server port:i:2179
    username:s:EdgeARMUser
    negotiate security layer:i:0
    ```

1. Save the file as a *.rdp* file on your client system. You'll use this profile to connect to the VM.
1. Double-click the profile to connect to the VM. Provide the following credentials:
- - **Username**: Sign in as EdgeARMUser.
- - **Password**: Provide the local Azure Resource Manager password for your device. If you have forgotten the password, [Reset Azure Resource Manager password via the Azure portal](azure-stack-edge-gpu-set-azure-resource-manager-password.md#reset-password-via-the-azure-portal).
+ - **Username**: Sign in as EdgeARMUser.
+ - **Password**: Provide the local Azure Resource Manager password for your device. If you have forgotten the password, [Reset Azure Resource Manager password via the Azure portal](azure-stack-edge-gpu-set-azure-resource-manager-password.md#reset-password-via-the-azure-portal).
#### Use FreeRDP
-If using FreeRDP on your Linux client, run the following command:
+If using FreeRDP on your Linux client, run the following command:
```powershell
./wfreerdp /u:EdgeARMUser /vmconnect:<VM ID from PowerShell> /v:<IP address of the device>
If using FreeRDP on your Linux client, run the following command:
To revoke access to the VM console, return to the PowerShell interface of your device. Run the following command:
-```
+```powershell
Revoke-HcsVMConnectAccess -ResourceGroupName <VM resource group> -VirtualMachineName <VM name>
```

Here is an example output:

```powershell
Id : 81462e0a-decb-4cd4-96e9-057094040063
[10.100.10.10]: PS>
```
-> [!NOTE]
-> We recommend that after you are done using the VM console, you either revoke the access or close the PowerShell window to exit the session.
+
+> [!NOTE]
+> We recommend that after you are done using the VM console, you either revoke the access or close the PowerShell window to exit the session.
## Next steps
databox-online Azure Stack Edge Gpu Deploy Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-install.md
Previously updated : 11/11/2021 Last updated : 05/17/2022 zone_pivot_groups: azure-stack-edge-device-deployment # Customer intent: As an IT admin, I need to understand how to install Azure Stack Edge Pro in datacenter so I can use it to transfer data to Azure.
Before you start cabling your device, you need the following things:
- At least one 1-GbE RJ-45 network cable to connect to the management interface. There are two 1-GbE network interfaces, one management and one data, on the device. - One 25/10-GbE SFP+ copper cable for each data network interface to be configured. At least one data network interface from among PORT 2, PORT 3, PORT 4, PORT 5, or PORT 6 needs to be connected to the Internet (with connectivity to Azure). - Access to two power distribution units (recommended).-- At least one 1-GbE network switch to connect a 1-GbE network interface to the Internet for data. The local web UI will not be accessible if the connected switch is not at least 1 GbE. If using 25/10-GbE interface for data, you will need a 25-GbE or 10-GbE switch.
+- At least one 1-GbE network switch to connect a 1-GbE network interface to the Internet for data. The local web UI won't be accessible if the connected switch isn't at least 1 GbE. If using 25/10-GbE interface for data, you'll need a 25-GbE or 10-GbE switch.
> [!NOTE] > - If you are connecting only one data network interface, we recommend that you use a 25/10-GbE network interface such as PORT 3, PORT 4, PORT 5, or PORT 6 to send data to Azure.
Before you start cabling your device, you need the following things:
Before you start cabling your device, you need the following things: - Both of your Azure Stack Edge physical devices, unpacked, and rack mounted.-- 4 power cables, 2 for each device node. <!-- check w/ PIT team around how the bezel is shipped or attached to the device -->
+- Four power cables, two for each device node. <!-- check w/ PIT team around how the bezel is shipped or attached to the device -->
- At least two 1-GbE RJ-45 network cables to connect Port 1 on each device node for initial configuration. <!-- check with Ernie if is clustered in the factory, only 1 node may be connected to mgmt --> - At least two 1-GbE RJ-45 network cables to connect Port 2 on each device node to the internet (with connectivity to Azure).-- 25/10-GbE SFP+ copper cables for Port 3 and Port 4 to be configured. Additional 25/10-GbR SFP+ copper cables if you will also connect Port 5 and Port 6. Port 5 and Port 6 must be connected if you intend to [Deploy network functions on Azure Stack Edge](../network-function-manager/deploy-functions.md).
+- 25/10-GbE SFP+ copper cables for Port 3 and Port 4 to be configured. Additional 25/10-GbE SFP+ copper cables if you'll also connect Port 5 and Port 6. Port 5 and Port 6 must be connected if you intend to [Deploy network functions on Azure Stack Edge](../network-function-manager/deploy-functions.md).
- 25-GbE or 10-GbE switches if opting for a switched network topology. See [Supported network topologies](azure-stack-edge-gpu-clustering-overview.md). - Access to two power distribution units (recommended).
The backplane of Azure Stack Edge device:
For a full list of supported cables, switches, and transceivers for these network adapter cards, see: - [`Qlogic` Cavium 25G NDC adapter interoperability matrix](https://www.marvell.com/documents/xalflardzafh32cfvi0z/).-- 25 GbE and 10 GbE cables and modules in [Mellanox dual port 25G ConnectX-4 channel network adapter compatible products](https://docs.mellanox.com/display/ConnectX4LxFirmwarev14271016/Firmware+Compatible+Products).
+- 25 GbE and 10 GbE cables and modules in [Mellanox dual port 25G ConnectX-4 channel network adapter compatible products](https://docs.mellanox.com/display/ConnectX4LxFirmwarev14271016/Firmware+Compatible+Products).
+
+> [!NOTE]
+> Using USB ports to connect any external device, including keyboards and monitors, is not supported for Azure Stack Edge devices.
### Power cabling
Use this configuration when you need port level redundancy through teaming.
#### Connect Port 3 via switch
-Use this configuration if you need an extra port for workload traffic and port level redundancy is not required.
+Use this configuration if you need an extra port for workload traffic and port level redundancy isn't required.
![Back plane of clustered device cabled for networking with switches and without NIC teaming](./media/azure-stack-edge-gpu-deploy-install/backplane-clustered-device-networking-switches-without-nic-teaming.png)
databox-online Azure Stack Edge Mini R Deploy Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-mini-r-deploy-install.md
Previously updated : 03/22/2021 Last updated : 05/17/2022 # Customer intent: As an IT admin, I need to understand how to install Azure Stack Edge Mini R device in datacenter so I can use it to transfer data to Azure.
Take the following steps to cable your device for power and network.
- If connecting PORT 2, use the RJ-45 network cable. - For the 10-GbE network interfaces, use the SFP+ copper cables.
+ > [!NOTE]
+ > Using USB ports to connect any external device, including keyboards and monitors, is not supported for Azure Stack Edge devices.
+ ## Next steps In this tutorial, you learned about Azure Stack Edge topics such as how to:
databox-online Azure Stack Edge Pro 2 Deploy Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-install.md
Previously updated : 03/22/2022 Last updated : 05/17/2022 zone_pivot_groups: azure-stack-edge-device-deployment # Customer intent: As an IT admin, I need to understand how to install Azure Stack Edge Pro 2 in datacenter so I can use it to transfer data to Azure.
Follow these steps to cable your device for network:
![Back plane of a cabled device](./media/azure-stack-edge-pro-2-deploy-install/cabled-backplane-1.png)
+ > [!NOTE]
+ > Using USB ports to connect any external device, including keyboards and monitors, is not supported for Azure Stack Edge devices.
+ ::: zone-end ::: zone pivot="two-node"
Cable your device as shown in the following diagram:
1. Connect Port 3 on one device directly (without a switch) to the Port 3 on the other device node. Use a QSFP28 passive direct attached cable (tested in-house) for the connection. 1. Connect Port 4 on one device directly (without a switch) to the Port 4 on the other device node. Use a QSFP28 passive direct attached cable (tested in-house) for the connection.
+ > [!NOTE]
+ > Using USB ports to connect any external device, including keyboards and monitors, is not supported for Azure Stack Edge devices.
#### Using external switches
databox-online Azure Stack Edge Pro R Deploy Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-r-deploy-install.md
Previously updated : 3/22/2022 Last updated : 5/17/2022 # Customer intent: As an IT admin, I need to understand how to install Azure Stack Edge Pro R in datacenter so I can use it to transfer data to Azure.
Take the following steps to cable your device for power and network.
- If connecting PORT 2, use the RJ-45 network cable. - For the 10/25-GbE network interfaces, use the SFP+ copper cables.
+ > [!NOTE]
+ > Using USB ports to connect any external device, including keyboards and monitors, is not supported for Azure Stack Edge devices.
+ ## Next steps In this tutorial, you learned about Azure Stack Edge Pro R topics such as how to:
databox Data Box Troubleshoot Time Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-troubleshoot-time-sync.md
This article describes how to diagnose that your Data Box is out of sync and the
## About device time sync
-Data Box automatically synchronizes time when it is connected to the internet using the default Windows time server `time.windows.com`. However, if Data Box is not connected to the internet, the device time may be out of sync. This situation may affect the data copy from the source data to Data Box specifically if the copy is via the REST API or certain tools that have time constraints.
+Data Box automatically synchronizes time when it is connected to the internet using the default Windows time server `time.windows.com`. However, if Data Box is not connected to the internet, the device time may be out of sync. This situation may affect the data copy from the source data to Data Box specifically if the copy is via the REST API or certain tools that have time constraints.
-If you see any time difference between the time on Data Box and other local devices on your site, you can sync the time on your Data Box by accessing its PowerShell interface. The `Set-Date API` is used to modify the device time. For more information, see [Set-Date API](/powershell/module/microsoft.powershell.utility/set-date?view=powershell-7.1&preserve-view=true).
+If you see any time difference between the time on Data Box and other local devices on your site, you can sync the time on your Data Box by accessing its PowerShell interface. The `Set-Date API` is used to modify the device time. For more information, see [Set-Date cmdlet](/powershell/module/microsoft.powershell.utility/set-date).
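For example, a minimal check-and-adjust sequence from the device's PowerShell interface might look like the following sketch; the two-minute offset is only an illustration:

```powershell
# Check the current device time
Get-Date

# Nudge the device clock forward by two minutes (hours:mins:secs)
Set-Date -Adjust 00:02:00 -DisplayHint Time
```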
## Connect to PowerShell interface
To change the device time, follow these steps.
1. If the device time is out of sync, use the `Set-Date` cmdlet to change the time on your Data Box. - Set the time forward by 2 minutes.
-
+   ```powershell
    Set-Date -Adjust <time change in hours:mins:secs format> -DisplayHint Time
    ```
To change the device time, follow these steps.
    ```powershell
    Set-Date -Adjust -<time change in hours:mins:secs format> -DisplayHint Time
- ```
+ ```
Here is an example output:
-
+   ```powershell
    [by506b4b5d0790.microsoftdatabox.com]: PS>Get-date
    Friday, September 3, 2021 2:22:50 PM
To change the device time, follow these steps.
    Friday, September 3, 2021 2:23:42 PM
    [by506b4b5d0790.microsoftdatabox.com]: PS>
    ```
- For more information, see [Set-Date API](/powershell/module/microsoft.powershell.utility/set-date?view=powershell-7.1&preserve-view=true).
-
+ For more information, see [Set-Date cmdlet](/powershell/module/microsoft.powershell.utility/set-date).
+ ## Next steps To troubleshoot other Data Box issues, see one of the following articles:
ddos-protection Manage Ddos Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection.md
na - Previously updated : 04/13/2022-++ Last updated : 05/04/2022++
Get started with Azure DDoS Protection Standard by using the Azure portal.
-A DDoS protection plan defines a set of virtual networks that have DDoS protection standard enabled, across subscriptions. You can configure one DDoS protection plan for your organization and link virtual networks from multiple subscriptions to the same plan.
+A DDoS protection plan defines a set of virtual networks that have DDoS Protection Standard enabled, across subscriptions. You can configure one DDoS protection plan for your organization and link virtual networks from multiple subscriptions under a single AAD tenant to the same plan.
In this quickstart, you'll create a DDoS protection plan and link it to a virtual network.
In this quickstart, you'll create a DDoS protection plan and link it to a virtua
1. Create a DDoS protection plan by completing the steps in [Create a DDoS protection plan](#create-a-ddos-protection-plan), if you don't have an existing DDoS protection plan. 1. Enter the name of the virtual network that you want to enable DDoS Protection Standard for in the **Search resources, services, and docs box** at the top of the Azure portal. When the name of the virtual network appears in the search results, select it.
-1. Select **DDoS protection**, under **SETTINGS**.
-1. Select **Standard**. Under **DDoS protection plan**, select an existing DDoS protection plan, or the plan you created in step 1, and then select **Save**. The plan you select can be in the same, or different subscription than the virtual network, but both subscriptions must be associated to the same Azure Active Directory tenant.
+1. Select **DDoS protection**, under **Settings**.
+1. Select **Enable**. Under **DDoS protection plan**, select an existing DDoS protection plan, or the plan you created in step 1, and then select **Save**. The plan you select can be in the same or a different subscription than the virtual network, but both subscriptions must be associated with the same Azure Active Directory tenant.
+
+You can also enable DDoS Protection Standard for an existing virtual network from the DDoS protection plan itself, rather than from the virtual network:
+1. Search for "DDoS protection plans" in the **Search resources, services, and docs box** at the top of the Azure portal. When **DDoS protection plans** appears in the search results, select it.
+1. Select the desired DDoS protection plan you want to enable for your virtual network.
+1. Select **Protected resources** under **Settings**.
+1. Select **+Add**, then choose the subscription, resource group, and virtual network name. Select **Add** again. (A CLI sketch of the same association follows these steps.)
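If you prefer scripting the association, a rough Azure CLI equivalent is sketched below. The plan, resource group, and virtual network names are placeholders, and the parameter set should be confirmed against the current `az network` reference:

```azurecli
# Create a DDoS protection plan (skip if you already have one)
az network ddos-protection create --resource-group MyResourceGroup --name MyDdosPlan

# Enable DDoS Protection Standard on an existing virtual network and link it to the plan
az network vnet update --resource-group MyResourceGroup --name MyVNet --ddos-protection true --ddos-protection-plan MyDdosPlan
```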
## Configure an Azure DDoS Protection Plan using Azure Firewall Manager (preview)
To disable DDoS protection for a virtual network:
1. Enter the name of the virtual network you want to disable DDoS protection standard for in the **Search resources, services, and docs box** at the top of the portal. When the name of the virtual network appears in the search results, select it. 1. Under **DDoS Protection Standard**, select **Disable**.
-If you want to delete a DDoS protection plan, you must first dissociate all virtual networks from it.
+> [!NOTE]
+> If you want to delete a DDoS protection plan, you must first dissociate all virtual networks from it.
## Next steps
defender-for-cloud Adaptive Application Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/adaptive-application-controls.md
When you move a machine from one group to another, the application control polic
To manage your adaptive application controls programmatically, use our REST API.
-The relevant API documentation is available in [the Adaptive Application Controls section of Defender for Cloud's API docs](/rest/api/securitycenter/adaptiveapplicationcontrols).
+The relevant API documentation is available in [the Adaptive application Controls section of Defender for Cloud's API docs](/rest/api/securitycenter/adaptiveapplicationcontrols).
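As an illustration only, a list call could be issued with `az rest`. The resource path and API version below are assumptions and must be verified against the linked API reference before use:

```azurecli
# List adaptive application control groups for a subscription
# (resource path and api-version are assumptions; confirm them in the linked API docs)
az rest --method get --url "https://management.azure.com/subscriptions/<subscription-id>/providers/Microsoft.Security/applicationWhitelistings?api-version=2020-01-01"
```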
Some of the functions that are available from the REST API:
defender-for-cloud Adaptive Network Hardening https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/adaptive-network-hardening.md
For example, let's say the existing NSG rule is to allow traffic from 140.20.30.
* **Not enough data is available**: In order to generate accurate traffic hardening recommendations, Defender for Cloud requires at least 30 days of traffic data. * **VM is not protected by Microsoft Defender for Servers**: Only VMs protected with [Microsoft Defender for Servers](defender-for-servers-introduction.md) are eligible for this feature.
- :::image type="content" source="./media/adaptive-network-hardening/recommendation-details-page.png" alt-text="Details page of the recommendation Adaptive Network Hardening recommendations should be applied on internet facing virtual machines.":::
+ :::image type="content" source="./media/adaptive-network-hardening/recommendation-details-page.png" alt-text="Details page of the recommendation Adaptive network hardening recommendations should be applied on internet facing virtual machines.":::
1. From the **Unhealthy resources** tab, select a VM to view its alerts and the recommended hardening rules to apply.
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
Microsoft Defender for Containers provides security alerts on the cluster level
| **Detected suspicious use of the useradd command (Preview)**<br>(K8S.NODE_SuspectUserAddition) | Analysis of processes running within a container detected suspicious use of the useradd command. | Persistence | Medium | | **Digital currency mining container detected**<br>(K8S_MaliciousContainerImage) <sup>[2](#footnote2)</sup> | Kubernetes audit log analysis detected a container that has an image associated with a digital currency mining tool. | Execution | High | | **Digital currency mining related behavior detected (Preview)**<br>(K8S.NODE_DigitalCurrencyMining) | Analysis of host data detected the execution of a process or command normally associated with digital currency mining. | Execution | High |
-| **Docker build operation detected on a Kubernetes node (Preview)**<br>(K8S.NODE_ImageBuildOnNode) | Analysis of processes running within a container indicates a build operation of a container image on a Kubernetes node. While this behavior might be legitimate, attackers might build their malicious images locally to avoid detection. | DefenseEvasion | Low |
+| **Docker build operation detected on a Kubernetes node (Preview)**<br>(K8S.NODE_ImageBuildOnNode) | Analysis of processes running within a container or directly on a Kubernetes node has detected a build operation of a container image on a Kubernetes node. While this behavior might be legitimate, attackers might build their malicious images locally to avoid detection. | DefenseEvasion | Low |
| **Excessive role permissions assigned in Kubernetes cluster (Preview)**<br>(K8S_ServiceAcountPermissionAnomaly) <sup>[3](#footnote3)</sup> | Analysis of the Kubernetes audit logs detected an excessive permissions role assignment to your cluster. The listed permissions for the assigned roles are uncommon to the specific service account. This detection considers previous role assignments to the same service account across clusters monitored by Azure, volume per permission, and the impact of the specific permission. The anomaly detection model used for this alert takes into account how this permission is used across all clusters monitored by Microsoft Defender for Cloud. | Privilege Escalation | Low | | **Executable found running from a suspicious location (Preview)**<br>(K8S.NODE_SuspectExecutablePath) | Analysis of host data detected an executable file that is running from a location associated with known suspicious files. This executable could either be legitimate activity, or an indication of a compromised host. | Execution | Medium | | **Execution of hidden file (Preview)**<br>(K8S.NODE_ExecuteHiddenFile) | Analysis of host data indicates that a hidden file was executed by the specified user account. | Persistence, DefenseEvasion | Informational |
defender-for-cloud Alerts Suppression Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-suppression-rules.md
There are a few ways you can create rules to suppress unwanted security alerts:
- To suppress alerts at the subscription level, you can use the Azure portal or the REST API as explained below > [!NOTE]
-> Suppression rules don't work retroactively - they'll only suppress alerts triggered _after_ the rule is created. Also, if a specific alert type has never been generated on a specific subscription, future alerts of that type wonn't be suppressed. For a rule to suppress an alert on a specific subscription, that alert type has to have been triggered at leaast once before the rule is created.
+> Suppression rules don't work retroactively - they'll only suppress alerts triggered _after_ the rule is created. Also, if a specific alert type has never been generated on a specific subscription, future alerts of that type won't be suppressed. For a rule to suppress an alert on a specific subscription, that alert type has to have been triggered at least once before the rule is created.
To create a rule directly in the Azure portal:
defender-for-cloud Cross Tenant Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/cross-tenant-management.md
The views and actions are basically the same. Here are some examples:
- **Remediate recommendations**: Monitor and remediate a [recommendation](review-security-recommendations.md) for many resources from various tenants at one time. You can then immediately tackle the vulnerabilities that present the highest risk across all tenants. - **Manage Alerts**: Detect [alerts](alerts-overview.md) throughout the different tenants. Take action on resources that are out of compliance with actionable [remediation steps](managing-and-responding-alerts.md). -- **Manage advanced cloud defense features and more**: Manage the various threat protection services, such as [just-in-time (JIT) VM access](just-in-time-access-usage.md), [Adaptive Network Hardening](adaptive-network-hardening.md), [adaptive application controls](adaptive-application-controls.md), and more.
+- **Manage advanced cloud defense features and more**: Manage the various threat protection services, such as [just-in-time (JIT) VM access](just-in-time-access-usage.md), [Adaptive network hardening](adaptive-network-hardening.md), [adaptive application controls](adaptive-application-controls.md), and more.
## Next steps This article explains how cross-tenant management works in Defender for Cloud. To discover how Azure Lighthouse can simplify cross-tenant management within an enterprise which uses multiple Azure AD tenants, see [Azure Lighthouse in enterprise scenarios](../lighthouse/concepts/enterprise.md).
defender-for-cloud Defender For Cloud Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-cloud-introduction.md
Title: What is Microsoft Defender for Cloud?
-description: Use Microsoft Defender for Cloud to protect your Azure, hybrid, and multi-cloud resources and workloads.
+description: Use Microsoft Defender for Cloud to protect your Azure, hybrid, and multicloud resources and workloads.
Last updated 05/11/2022
# What is Microsoft Defender for Cloud?
-Microsoft Defender for Cloud is a Cloud Workload Protection Platform (CWPP) that also delivers Cloud Security Posture Management (CSPM) for all of your Azure, on-premises, and multi-cloud (Amazon AWS and Google GCP) resources.
+Microsoft Defender for Cloud is a Cloud Workload Protection Platform (CWPP) that also delivers Cloud Security Posture Management (CSPM) for all of your Azure, on-premises, and multicloud (Amazon AWS and Google GCP) resources.
- [**Defender for Cloud recommendations**](security-policy-concept.md) identify cloud workloads that require security actions and provide you with steps to protect your workloads from security risks. - [**Defender for Cloud secure score**](secure-score-security-controls.md) gives you a clear view of your security posture based on the implementation of the security recommendations so you can track new security opportunities and precisely report on the progress of your security efforts.
When you open Defender for Cloud for the first time, it will meet the visibility
1. **Generate a secure score** for your subscriptions based on an assessment of your connected resources compared with the guidance in [Azure Security Benchmark](/security/benchmark/azure/overview). Use the score to understand your security posture, and the compliance dashboard to review your compliance with the built-in benchmark. When you've enabled the enhanced security features, you can customize the standards used to assess your compliance, and add other regulations (such as NIST and Azure CIS) or organization-specific security requirements. You can also apply recommendations, and score based on the AWS Foundational Security Best practices standards.
-1. **Provide hardening recommendations** based on any identified security misconfigurations and weaknesses. Use these security recommendations to strengthen the security posture of your organization's Azure, hybrid, and multi-cloud resources.
+1. **Provide hardening recommendations** based on any identified security misconfigurations and weaknesses. Use these security recommendations to strengthen the security posture of your organization's Azure, hybrid, and multicloud resources.
[Learn more about secure score](secure-score-security-controls.md).
When you open Defender for Cloud for the first time, it will meet the visibility
Defender for Cloud offers security alerts that are powered by [Microsoft Threat Intelligence](https://go.microsoft.com/fwlink/?linkid=2128684). It also includes a range of advanced, intelligent, protections for your workloads. The workload protections are provided through Microsoft Defender plans specific to the types of resources in your subscriptions. For example, you can enable **Microsoft Defender for Storage** to get alerted about suspicious activities related to your Azure Storage accounts.
-## Azure, hybrid, and multi-cloud protections
+## Azure, hybrid, and multicloud protections
Because Defender for Cloud is an Azure-native service, many Azure services are monitored and protected without needing any deployment.
-When necessary, Defender for Cloud can automatically deploy a Log Analytics agent to gather security-related data. For Azure machines, deployment is handled directly. For hybrid and multi-cloud environments, Microsoft Defender plans are extended to non Azure machines with the help of [Azure Arc](https://azure.microsoft.com/services/azure-arc/). CSPM features are extended to multi-cloud machines without the need for any agents (see [Defend resources running on other clouds](#defend-resources-running-on-other-clouds)).
+When necessary, Defender for Cloud can automatically deploy a Log Analytics agent to gather security-related data. For Azure machines, deployment is handled directly. For hybrid and multicloud environments, Microsoft Defender plans are extended to non-Azure machines with the help of [Azure Arc](https://azure.microsoft.com/services/azure-arc/). CSPM features are extended to multicloud machines without the need for any agents (see [Defend resources running on other clouds](#defend-resources-running-on-other-clouds)).
### Azure-native protections
Defender for Cloud can protect resources in other clouds (such as AWS and GCP).
For example, if you've [connected an Amazon Web Services (AWS) account](quickstart-onboard-aws.md) to an Azure subscription, you can enable any of these protections: -- **Defender for Cloud's CSPM features** extend to your AWS resources. This agentless plan assesses your AWS resources according to AWS-specific security recommendations and these are included in your secure score. The resources will also be assessed for compliance with built-in standards specific to AWS (AWS CIS, AWS PCI DSS, and AWS Foundational Security Best Practices). Defender for Cloud's [asset inventory page](asset-inventory.md) is a multi-cloud enabled feature helping you manage your AWS resources alongside your Azure resources.
+- **Defender for Cloud's CSPM features** extend to your AWS resources. This agentless plan assesses your AWS resources according to AWS-specific security recommendations and these are included in your secure score. The resources will also be assessed for compliance with built-in standards specific to AWS (AWS CIS, AWS PCI DSS, and AWS Foundational Security Best Practices). Defender for Cloud's [asset inventory page](asset-inventory.md) is a multicloud enabled feature helping you manage your AWS resources alongside your Azure resources.
- **Microsoft Defender for Kubernetes** extends its container threat detection and advanced defenses to your **Amazon EKS Linux clusters**. - **Microsoft Defender for Servers** brings threat detection and advanced defenses to your Windows and Linux EC2 instances. This plan includes the integrated license for Microsoft Defender for Endpoint, security baselines and OS level assessments, vulnerability assessment scanning, adaptive application controls (AAC), file integrity monitoring (FIM), and more.
defender-for-cloud Defender For Containers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-introduction.md
Defender for Cloud provides real-time threat protection for your containerized e
Threat protection at the cluster level is provided by the Defender profile and analysis of the Kubernetes audit logs. Examples of events at this level include exposed Kubernetes dashboards, creation of high-privileged roles, and the creation of sensitive mounts.
-In addition, our threat detection goes beyond the Kubernetes management layer. Defender for Containers includes **host-level threat detection** with over 60 Kubernetes-aware analytics, AI, and anomaly detections based on your runtime workload. Our global team of security researchers constantly monitor the threat landscape. They add container-specific alerts and vulnerabilities as they're discovered. Together, this solution monitors the growing attack surface of multi-cloud Kubernetes deployments and tracks the [MITRE ATT&CK® matrix for Containers](https://www.microsoft.com/security/blog/2021/04/29/center-for-threat-informed-defense-teams-up-with-microsoft-partners-to-build-the-attck-for-containers-matrix/), a framework that was developed by the [Center for Threat-Informed Defense](https://mitre-engenuity.org/ctid/) in close partnership with Microsoft and others.
+In addition, our threat detection goes beyond the Kubernetes management layer. Defender for Containers includes **host-level threat detection** with over 60 Kubernetes-aware analytics, AI, and anomaly detections based on your runtime workload. Our global team of security researchers constantly monitor the threat landscape. They add container-specific alerts and vulnerabilities as they're discovered. Together, this solution monitors the growing attack surface of multicloud Kubernetes deployments and tracks the [MITRE ATT&CK® matrix for Containers](https://www.microsoft.com/security/blog/2021/04/29/center-for-threat-informed-defense-teams-up-with-microsoft-partners-to-build-the-attck-for-containers-matrix/), a framework that was developed by the [Center for Threat-Informed Defense](https://mitre-engenuity.org/ctid/) in close partnership with Microsoft and others.
The full list of available alerts can be found in the [Reference table of alerts](alerts-reference.md#alerts-k8scluster). ## Architecture overview
defender-for-cloud Defender For Servers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-servers-introduction.md
Microsoft Defender for Servers is one of the enhanced security features of Microsoft Defender for Cloud. Use it to add threat detection and advanced defenses to your Windows and Linux machines whether they're running in Azure, AWS, GCP, and on-premises environment.
-To protect machines in hybrid and multi-cloud environments, Defender for Cloud uses [Azure Arc](../azure-arc/index.yml). Connect your hybrid and multi-cloud machines as explained in the relevant quickstart:
+To protect machines in hybrid and multicloud environments, Defender for Cloud uses [Azure Arc](../azure-arc/index.yml). Connect your hybrid and multicloud machines as explained in the relevant quickstart:
- [Connect your non-Azure machines to Microsoft Defender for Cloud](quickstart-onboard-machines.md) - [Connect your AWS accounts to Microsoft Defender for Cloud](quickstart-onboard-aws.md)
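Under the hood, connecting a non-Azure machine means installing the Azure Connected Machine agent and running `azcmagent connect`, roughly as sketched below. The resource group, tenant, and subscription values are placeholders, and the command prompts for sign-in (or can use a service principal) as covered in the Azure Arc onboarding docs:

```powershell
# Run on the non-Azure machine after installing the Connected Machine agent (placeholder values)
azcmagent connect --resource-group "Arc-Servers-RG" --tenant-id "<tenant-id>" --location "eastus" --subscription-id "<subscription-id>"
```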
The following table describes what's included in each plan at a high level.
| Adaptive application controls | | :::image type="icon" source="./media/icons/yes-icon.png"::: | | File integrity monitoring | | :::image type="icon" source="./media/icons/yes-icon.png"::: | | Just-in time VM access | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
-| Adaptive Network Hardening | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
+| Adaptive network hardening | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
<!-- | Future - TVM P2 | | :::image type="icon" source="./media/icons/yes-icon.png"::: | | Future - disk scanning insights | | :::image type="icon" source="./media/icons/yes-icon.png"::: | -->
The threat detection and protection capabilities provided with Microsoft Defende
- **Adaptive network hardening (ANH)** - Applying network security groups (NSG) to filter traffic to and from resources, improves your network security posture. However, there can still be some cases in which the actual traffic flowing through the NSG is a subset of the NSG rules defined. In these cases, further improving the security posture can be achieved by hardening the NSG rules, based on the actual traffic patterns.
- Adaptive Network Hardening provides recommendations to further harden the NSG rules. It uses a machine learning algorithm that factors in actual traffic, known trusted configuration, threat intelligence, and other indicators of compromise. ANH then provides recommendations to allow traffic only from specific IP and port tuples. For more information, see [Improve your network security posture with adaptive network hardening](adaptive-network-hardening.md).
+ Adaptive network hardening provides recommendations to further harden the NSG rules. It uses a machine learning algorithm that factors in actual traffic, known trusted configuration, threat intelligence, and other indicators of compromise. ANH then provides recommendations to allow traffic only from specific IP and port tuples. For more information, see [Improve your network security posture with adaptive network hardening](adaptive-network-hardening.md).
- **Docker host hardening** - Microsoft Defender for Cloud identifies unmanaged containers hosted on IaaS Linux VMs, or other Linux machines running Docker containers. Defender for Cloud continuously assesses the configurations of these containers. It then compares them with the Center for Internet Security (CIS) Docker Benchmark. Defender for Cloud includes the entire ruleset of the CIS Docker Benchmark and alerts you if your containers don't satisfy any of the controls. For more information, see [Harden your Docker hosts](harden-docker-hosts.md).
For Windows, Microsoft Defender for Cloud integrates with Azure services to moni
For Linux, Defender for Cloud collects audit records from Linux machines by using auditd, one of the most common Linux auditing frameworks.
-For hybrid and multi-cloud scenarios, Defender for Cloud integrates with [Azure Arc](../azure-arc/index.yml) to ensure these non-Azure machines are seen as Azure resources.
+For hybrid and multicloud scenarios, Defender for Cloud integrates with [Azure Arc](../azure-arc/index.yml) to ensure these non-Azure machines are seen as Azure resources.
## Simulating alerts
defender-for-cloud Defender For Sql Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-sql-introduction.md
Microsoft Defender for SQL includes two Microsoft Defender plans that extend Mic
When you enable either of these plans, all supported resources that exist within the subscription are protected. Future resources created on the same subscription will also be protected.
+> [!NOTE]
+> Microsoft Defender for SQL database currently works for read-write replicas only.
+ ## What are the benefits of Microsoft Defender for SQL? These two plans include functionality for identifying and mitigating potential database vulnerabilities and detecting anomalous activities that could indicate threats to your databases.
defender-for-cloud Deploy Vulnerability Assessment Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-vm.md
Title: Defender for Cloud's integrated vulnerability assessment solution for Azure, hybrid, and multi-cloud machines
+ Title: Defender for Cloud's integrated vulnerability assessment solution for Azure, hybrid, and multicloud machines
description: Install a vulnerability assessment solution on your Azure machines to get recommendations in Microsoft Defender for Cloud that can help you protect your Azure and hybrid machines Last updated 04/13/2022
Deploy the vulnerability assessment solution that best meets your needs and bud
- **Integrated vulnerability assessment solution (powered by Qualys)** - Defender for Cloud includes vulnerability scanning for your machines at no extra cost. You don't need a Qualys license or even a Qualys account - everything's handled seamlessly inside Defender for Cloud. This page provides details of this scanner and instructions for how to deploy it. > [!TIP]
- > The integrated vulnerability assessment solution supports both Azure virtual machines and hybrid machines. To deploy the vulnerability assessment scanner to your on-premises and multi-cloud machines, connect them to Azure first with Azure Arc as described in [Connect your non-Azure machines to Defender for Cloud](quickstart-onboard-machines.md).
+ > The integrated vulnerability assessment solution supports both Azure virtual machines and hybrid machines. To deploy the vulnerability assessment scanner to your on-premises and multicloud machines, connect them to Azure first with Azure Arc as described in [Connect your non-Azure machines to Defender for Cloud](quickstart-onboard-machines.md).
> > Defender for Cloud's integrated vulnerability assessment solution works seamlessly with Azure Arc. When you've deployed Azure Arc, your machines will appear in Defender for Cloud and no Log Analytics agent is required.
The vulnerability scanner extension works as follows:
:::image type="content" source="./media/deploy-vulnerability-assessment-vm/recommendation-page-machine-groupings.png" alt-text="The groupings of the machines in the recommendation page." lightbox="./media/deploy-vulnerability-assessment-vm/recommendation-page-machine-groupings.png"::: > [!TIP]
- > The machine "server16-test" above, is an Azure Arc-enabled machine. To deploy the vulnerability assessment scanner to your on-premises and multi-cloud machines, see [Connect your non-Azure machines to Defender for Cloud](quickstart-onboard-machines.md).
+ > The machine "server16-test" above, is an Azure Arc-enabled machine. To deploy the vulnerability assessment scanner to your on-premises and multicloud machines, see [Connect your non-Azure machines to Defender for Cloud](quickstart-onboard-machines.md).
> > Defender for Cloud works seamlessly with Azure Arc. When you've deployed Azure Arc, your machines will appear in Defender for Cloud and no Log Analytics agent is required.
defender-for-cloud Enable Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-data-collection.md
Here is a complete breakdown of the Security and App Locker event IDs for each s
> [!NOTE] > - If you are using Group Policy Object (GPO), it is recommended that you enable audit policies Process Creation Event 4688 and the *CommandLine* field inside event 4688. For more information about Process Creation Event 4688, see Defender for Cloud's [FAQ](./faq-data-collection-agents.yml#what-happens-when-data-collection-is-enabled-). For more information about these audit policies, see [Audit Policy Recommendations](/windows-server/identity/ad-ds/plan/security-best-practices/audit-policy-recommendations).
-> - To enable data collection for [Adaptive Application Controls](adaptive-application-controls.md), Defender for Cloud configures a local AppLocker policy in Audit mode to allow all applications. This will cause AppLocker to generate events which are then collected and leveraged by Defender for Cloud. It is important to note that this policy will not be configured on any machines on which there is already a configured AppLocker policy.
+> - To enable data collection for [Adaptive application controls](adaptive-application-controls.md), Defender for Cloud configures a local AppLocker policy in Audit mode to allow all applications. This will cause AppLocker to generate events which are then collected and leveraged by Defender for Cloud. It is important to note that this policy will not be configured on any machines on which there is already a configured AppLocker policy.
> - To collect Windows Filtering Platform [Event ID 5156](https://www.ultimatewindowssecurity.com/securitylog/encyclopedia/event.aspx?eventID=5156), you need to enable [Audit Filtering Platform Connection](/windows/security/threat-protection/auditing/audit-filtering-platform-connection) (Auditpol /set /subcategory:"Filtering Platform Connection" /Success:Enable) >
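As a rough illustration of the auditing prerequisites called out above, the commands below enable process-creation auditing (event 4688), include the command line in those events, and turn on Filtering Platform Connection auditing. Run them from an elevated prompt; the registry value name is the commonly documented one and should be confirmed for your Windows version:

```powershell
# Audit process creation (Security event 4688)
auditpol /set /subcategory:"Process Creation" /success:enable

# Include the command line in 4688 events (value name assumed from public documentation)
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\Audit" /v ProcessCreationIncludeCmdLine_Enabled /t REG_DWORD /d 1 /f

# Audit Windows Filtering Platform connections (Security event 5156)
auditpol /set /subcategory:"Filtering Platform Connection" /success:enable
```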
defender-for-cloud Enable Enhanced Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-enhanced-security.md
Title: Enable Microsoft Defender for Cloud's integrated workload protections
-description: Learn how to enable enhanced security features to extend the protections of Microsoft Defender for Cloud to your hybrid and multi-cloud resources
+description: Learn how to enable enhanced security features to extend the protections of Microsoft Defender for Cloud to your hybrid and multicloud resources
defender-for-cloud Enhanced Security Features Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enhanced-security-features-overview.md
Defender for Cloud is offered in two modes:
- **Microsoft Defender for Endpoint** - Microsoft Defender for Servers includes [Microsoft Defender for Endpoint](https://www.microsoft.com/microsoft-365/security/endpoint-defender) for comprehensive endpoint detection and response (EDR). Learn more about the benefits of using Microsoft Defender for Endpoint together with Defender for Cloud in [Use Defender for Cloud's integrated EDR solution](integration-defender-for-endpoint.md). - **Vulnerability assessment for virtual machines, container registries, and SQL resources** - Easily enable vulnerability assessment solutions to discover, manage, and resolve vulnerabilities. View, investigate, and remediate the findings directly from within Defender for Cloud.
- - **Multi-cloud security** - Connect your accounts from Amazon Web Services (AWS) and Google Cloud Platform (GCP) to protect resources and workloads on those platforms with a range of Microsoft Defender for Cloud security features.
+ - **Multicloud security** - Connect your accounts from Amazon Web Services (AWS) and Google Cloud Platform (GCP) to protect resources and workloads on those platforms with a range of Microsoft Defender for Cloud security features.
- **Hybrid security** - Get a unified view of security across all of your on-premises and cloud workloads. Apply security policies and continuously assess the security of your hybrid cloud workloads to ensure compliance with security standards. Collect, search, and analyze security data from multiple sources, including firewalls and other partner solutions. - **Threat protection alerts** - Advanced behavioral analytics and the Microsoft Intelligent Security Graph provide an edge over evolving cyber-attacks. Built-in behavioral analytics and machine learning can identify attacks and zero-day exploits. Monitor networks, machines, data stores (SQL servers hosted inside and outside Azure, Azure SQL databases, Azure SQL Managed Instance, and Azure Storage) and cloud services for incoming attacks and post-breach activity. Streamline investigation with interactive tools and contextual threat intelligence. - **Track compliance with a range of standards** - Defender for Cloud continuously assesses your hybrid cloud environment to analyze the risk factors according to the controls and best practices in [Azure Security Benchmark](/security/benchmark/azure/introduction). When you enable the enhanced security features, you can apply a range of other industry standards, regulatory standards, and benchmarks according to your organization's needs. Add standards and track your compliance with them from the [regulatory compliance dashboard](update-regulatory-compliance-packages.md).
If the user's info isn't listed in the **Event initiated by** column, explore th
### What are the plans offered by Defender for Cloud?
-The free offering from Microsoft Defender for Cloud offers the secure score and related tools. Enabling enhanced security turns on all of the Microsoft Defender plans to provide a range of security benefits for all your resources in Azure, hybrid, and multi-cloud environments.
+The free offering from Microsoft Defender for Cloud offers the secure score and related tools. Enabling enhanced security turns on all of the Microsoft Defender plans to provide a range of security benefits for all your resources in Azure, hybrid, and multicloud environments.
### How do I enable Defender for Cloud's enhanced security for my subscription? You can use any of the following ways to enable enhanced security for your subscription:
defender-for-cloud Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/get-started.md
# Quickstart: Set up Microsoft Defender for Cloud
-Defender for Cloud provides unified security management and threat protection across your hybrid and multi-cloud workloads. While the free features offer limited security for your Azure resources only, enabling enhanced security features extends these capabilities to on-premises and other clouds. Defender for Cloud helps you find and fix security vulnerabilities, apply access and application controls to block malicious activity, detect threats using analytics and intelligence, and respond quickly when under attack. You can try the enhanced security features at no cost. To learn more, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
+Defender for Cloud provides unified security management and threat protection across your hybrid and multicloud workloads. While the free features offer limited security for your Azure resources only, enabling enhanced security features extends these capabilities to on-premises and other clouds. Defender for Cloud helps you find and fix security vulnerabilities, apply access and application controls to block malicious activity, detect threats using analytics and intelligence, and respond quickly when under attack. You can try the enhanced security features at no cost. To learn more, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
This quickstart section will walk you through all the recommended steps to enable Microsoft Defender for Cloud and the enhanced security features. When you've completed all the quickstart steps, you'll have:
This quickstart section will walk you through all the recommended steps to enabl
> * [Enhanced security features](enhanced-security-features-overview.md) enabled on your Azure subscriptions > * Automatic data collection set up > * [Email notifications set up](configure-email-notifications.md) for security alerts
-> * Your hybrid and multi-cloud machines connected to Azure
+> * Your hybrid and multicloud machines connected to Azure
## Prerequisites To get started with Defender for Cloud, you must have a subscription to Microsoft Azure. If you don't have a subscription, you can sign up for a [free account](https://azure.microsoft.com/pricing/free-trial/).
defender-for-cloud Information Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/information-protection.md
Last updated 11/09/2021
# Prioritize security actions by data sensitivity
-[Microsoft Purview](../purview/overview.md), Microsoft's data governance service, provides rich insights into the *sensitivity of your data*. With automated data discovery, sensitive data classification, and end-to-end data lineage, Microsoft Purview helps organizations manage and govern data in hybrid and multi-cloud environments.
+[Microsoft Purview](../purview/overview.md), Microsoft's data governance service, provides rich insights into the *sensitivity of your data*. With automated data discovery, sensitive data classification, and end-to-end data lineage, Microsoft Purview helps organizations manage and govern data in hybrid and multicloud environments.
Microsoft Defender for Cloud customers using Microsoft Purview can benefit from an additional vital layer of metadata in alerts and recommendations: information about any potentially sensitive data involved. This knowledge helps solve the triage challenge and ensures security professionals can focus their attention on threats to sensitive data.
defender-for-cloud Integration Defender For Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/integration-defender-for-endpoint.md
Title: Using Microsoft Defender for Endpoint in Microsoft Defender for Cloud to protect native, on-premises, and AWS machines.
-description: Learn about deploying Microsoft Defender for Endpoint from Microsoft Defender for Cloud to protect Azure, hybrid, and multi-cloud machines.
+description: Learn about deploying Microsoft Defender for Endpoint from Microsoft Defender for Cloud to protect Azure, hybrid, and multicloud machines.
defender-for-cloud Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/permissions.md
Title: Permissions in Microsoft Defender for Cloud | Microsoft Docs description: This article explains how Microsoft Defender for Cloud uses role-based access control to assign permissions to users and identify the permitted actions for each role. Previously updated : 01/27/2022 Last updated : 05/22/2022 # Permissions in Microsoft Defender for Cloud
Defender for Cloud assesses the configuration of your resources to identify secu
In addition to the built-in roles, there are two roles specific to Defender for Cloud: * **Security Reader**: A user that belongs to this role has viewing rights to Defender for Cloud. The user can view recommendations, alerts, a security policy, and security states, but cannot make changes.
-* **Security Admin**: A user that belongs to this role has the same rights as the Security Reader and can also update the security policy and dismiss alerts and recommendations.
+* **Security Admin**: A user that belongs to this role has the same rights as the Security Reader and can also update the security policy, dismiss alerts and recommendations, and apply recommendations.
> [!NOTE] > The security roles, Security Reader and Security Admin, have access only in Defender for Cloud. The security roles do not have access to other Azure services such as Storage, Web & Mobile, or Internet of Things.
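If you assign these Defender for Cloud roles from the command line, a typical assignment looks roughly like the sketch below; the user, role, and subscription ID are placeholders:

```azurecli
# Grant a user the Security Reader role at subscription scope (placeholder values)
az role assignment create --assignee "user@contoso.com" --role "Security Reader" --scope "/subscriptions/<subscription-id>"
```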
The following table displays roles and allowed actions in Defender for Cloud.
| Edit security policy | - | ✔ | - | ✔ | ✔ |
| Enable / disable Microsoft Defender plans | - | ✔ | - | ✔ | ✔ |
| Dismiss alerts | - | ✔ | - | ✔ | ✔ |
-| Apply security recommendations for a resource</br> (and use [Fix](implement-security-recommendations.md#fix-button)) | - | - | ✔ | ✔ | ✔ |
+| Apply security recommendations for a resource</br> (and use [Fix](implement-security-recommendations.md#fix-button)) | - | ✔ | ✔ | ✔ | ✔ |
| View alerts and recommendations | ✔ | ✔ | ✔ | ✔ | ✔ |
defender-for-cloud Quickstart Automation Alert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-automation-alert.md
Title: Create a security automation for specific security alerts by using an Azure Resource Manager template (ARM template)
-description: Learn how to create a Microsoft Defender for Cloud automation to trigger a logic app, which will be triggered by specific Defender for Cloud alerts by using an Azure Resource Manager template (ARM template).
+ Title: Create a security automation for specific security alerts by using an Azure Resource Manager template (ARM template) or Bicep
+description: Learn how to create a Microsoft Defender for Cloud automation to trigger a logic app, which will be triggered by specific Defender for Cloud alerts by using an Azure Resource Manager template (ARM template) or Bicep.
Previously updated : 11/09/2021 Last updated : 05/16/2022
-# Quickstart: Create an automatic response to a specific security alert using an ARM template
+# Quickstart: Create an automatic response to a specific security alert using an ARM template or Bicep
-This quickstart describes how to use an Azure Resource Manager template (ARM template) to create a workflow automation that triggers a logic app when specific security alerts are received by Microsoft Defender for Cloud.
--
-If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
-
-[![Deploy to Azure.](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2fquickstarts%2fmicrosoft.security%2fsecuritycenter-create-automation-for-alertnamecontains%2fazuredeploy.json)
+This quickstart describes how to use an Azure Resource Manager template (ARM template) or a Bicep file to create a workflow automation that triggers a logic app when specific security alerts are received by Microsoft Defender for Cloud.
## Prerequisites
If you don't have an Azure subscription, create a [free account](https://azure.m
For a list of the roles and permissions required to work with Microsoft Defender for Cloud's workflow automation feature, see [workflow automation](workflow-automation.md).
-## Review the template
+## ARM template tutorial
++
+If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
+
+[![Deploy to Azure.](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2fquickstarts%2fmicrosoft.security%2fsecuritycenter-create-automation-for-alertnamecontains%2fazuredeploy.json)
+
+### Review the template
The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/securitycenter-create-automation-for-alertnamecontains/).
The template used in this quickstart is from [Azure Quickstart Templates](https:
For other Defender for Cloud quickstart templates, see these [community contributed templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Security&pageNumber=1&sort=Popular).
-## Deploy the template
+### Deploy the template
- **PowerShell**:
For other Defender for Cloud quickstart templates, see these [community contribu
To find more information about this deployment option, see [Use a deployment button to deploy templates from GitHub repository](../azure-resource-manager/templates/deploy-to-azure-button.md).
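As an alternative to the **Deploy to Azure** button, the same quickstart template can be deployed from the command line, roughly as sketched below. The template URI is the one linked in this quickstart; the parameter names are taken from the Bicep section of this article and the exact set should be confirmed in the template itself:

```azurecli
az group create --name exampleRG --location eastus
az deployment group create --resource-group exampleRG --template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.security/securitycenter-create-automation-for-alertnamecontains/azuredeploy.json" --parameters automationName=<automation-name> logicAppName=<logic-name> logicAppResourceGroupName=<group-name> alertSettings=<alert-settings>
```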
-## Review deployed resources
+### Review deployed resources
Use the Azure portal to check the workflow automation has been deployed. 1. From the [Azure portal](https://portal.azure.com), open **Microsoft Defender for Cloud**.+ 1. From the top menu bar, select the filter icon, and select the specific subscription on which you deployed the new workflow automation.+ 1. From Microsoft Defender for Cloud's menu, open **workflow automation** and check for your new automation. :::image type="content" source="./media/quickstart-automation-alert/validating-template-run.png" alt-text="List of configured automations." lightbox="./media/quickstart-automation-alert/validating-template-run.png":::+ >[!TIP] > If you have many workflow automations on your subscription, use the **filter by name** option.
-## Clean up resources
+### Clean up resources
When no longer needed, delete the workflow automation using the Azure portal. 1. From the [Azure portal](https://portal.azure.com), open **Microsoft Defender for Cloud**.+ 1. From the top menu bar, select the filter icon, and select the specific subscription on which you deployed the new workflow automation.+ 1. From Microsoft Defender for Cloud's menu, open **workflow automation** and find the automation to be deleted. :::image type="content" source="./media/quickstart-automation-alert/deleting-workflow-automation.png" alt-text="Steps for removing a workflow automation." lightbox="./media/quickstart-automation-alert/deleting-workflow-automation.png":::+ 1. Select the checkbox for the item to be deleted.+ 1. From the toolbar, select **Delete**.
+## Bicep tutorial
++
+### Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/securitycenter-create-automation-for-alertnamecontains/).
++
+### Relevant resources
+
+- [**Microsoft.Security/automations**](/azure/templates/microsoft.security/automations): The automation that will trigger the logic app, upon receiving a Microsoft Defender for Cloud alert that contains a specific string.
+- [**Microsoft.Logic/workflows**](/azure/templates/microsoft.logic/workflows): An empty triggerable Logic App.
+
+For other Defender for Cloud quickstart templates, see these [community contributed templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Security&pageNumber=1&sort=Popular).
+
+### Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters automationName=<automation-name> logicAppName=<logic-name> logicAppResourceGroupName=<group-name> alertSettings={alert-settings}
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -automationName "<automation-name>" -logicAppName "<logic-name>" -logicAppResourceGroupName "<group-name>" -alertSettings "{alert-settings}"
+ ```
+
+
+
+ You're required to enter the following parameters:
+
+ - **automationName**: Replace **\<automation-name\>** with the name of the automation. It has a minimum length of 3 characters and a maximum length of 24 characters.
+ - **logicAppName**: Replace **\<logic-name\>** with the name of the logic app. It has a minimum length of 3 characters.
+ - **logicAppResourceGroupName**: Replace **\<group-name\>** with the name of the resource group in which the resources are located. It has a minimum length of 3 characters.
+ - **alertSettings**: Replace **\{alert-settings\}** with the alert settings object used for deploying the automation.
+
+ > [!NOTE]
+ > When the deployment finishes, you should see a message indicating the deployment succeeded.
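For illustration only, here's how the CLI call might look with concrete placeholder values, plus an optional validation pass and a post-deployment check. The names used (`exampleRG`, `exampleAutomation`, `exampleLogicApp`) and the contents of the `alertSettings` object are illustrative assumptions; use the schema that the quickstart template defines for `alertSettings`.

```azurecli
# Optionally validate the template and parameters before deploying.
az deployment group validate \
  --resource-group exampleRG \
  --template-file main.bicep \
  --parameters automationName=exampleAutomation \
               logicAppName=exampleLogicApp \
               logicAppResourceGroupName=exampleRG \
               alertSettings='{"exampleSetting":"exampleValue"}'

# After deployment, confirm the provisioning state. The deployment name defaults to
# the template file name ("main") when --name isn't passed to the create command.
az deployment group show \
  --resource-group exampleRG \
  --name main \
  --query properties.provisioningState
```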
+
+### Review deployed resources
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+### Clean up resources
+
+When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and all of its resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++ ## Next steps
-For a step-by-step tutorial that guides you through the process of creating a template, see:
+For step-by-step tutorials that guide you through the process of creating an ARM template or a Bicep file, see:
> [!div class="nextstepaction"] > [Tutorial: Create and deploy your first ARM template](../azure-resource-manager/templates/template-tutorial-create-first-template.md)
+> [Quickstart: Create Bicep files with Visual Studio Code](../azure-resource-manager/bicep/quickstart-create-bicep-use-visual-studio-code.md)
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
description: Defend your AWS resources with Microsoft Defender for Cloud
Previously updated : 05/03/2022 Last updated : 05/17/2022 zone_pivot_groups: connect-aws-accounts
Microsoft Defender for Cloud protects workloads in Azure, Amazon Web Services (A
To protect your AWS-based resources, you can connect an account with one of two mechanisms: -- **Classic cloud connectors experience** - As part of the initial multi-cloud offering, we introduced these cloud connectors as a way to connect your AWS and GCP projects. If you've already configured an AWS connector through the classic cloud connectors experience, we recommend deleting these connectors (as explained in [Remove classic connectors](#remove-classic-connectors)), and connecting the account again using the newer mechanism. If you don't do this before creating the new connector through the environment settings page, do so afterwards to avoid seeing duplicate recommendations.
+- **Classic cloud connectors experience** - As part of the initial multicloud offering, we introduced these cloud connectors as a way to connect your AWS and GCP projects. If you've already configured an AWS connector through the classic cloud connectors experience, we recommend deleting these connectors (as explained in [Remove classic connectors](#remove-classic-connectors)), and connecting the account again using the newer mechanism. If you don't do this before creating the new connector through the environment settings page, do so afterwards to avoid seeing duplicate recommendations.
-- **Environment settings page (in preview)** (recommended) - This preview page provides a greatly improved, simpler, onboarding experience (including auto provisioning). This mechanism also extends Defender for Cloud's enhanced security features to your AWS resources:
+- **Environment settings page** (recommended) - This page provides a greatly improved, simpler onboarding experience (including auto provisioning). This mechanism also extends Defender for Cloud's enhanced security features to your AWS resources:
- - **Defender for Cloud's CSPM features** extend to your AWS resources. This agentless plan assesses your AWS resources according to AWS-specific security recommendations and these are included in your secure score. The resources will also be assessed for compliance with built-in standards specific to AWS (AWS CIS, AWS PCI DSS, and AWS Foundational Security Best Practices). Defender for Cloud's [asset inventory page](asset-inventory.md) is a multi-cloud enabled feature helping you manage your AWS resources alongside your Azure resources.
+ - **Defender for Cloud's CSPM features** extend to your AWS resources. This agentless plan assesses your AWS resources according to AWS-specific security recommendations and these are included in your secure score. The resources will also be assessed for compliance with built-in standards specific to AWS (AWS CIS, AWS PCI DSS, and AWS Foundational Security Best Practices). Defender for Cloud's [asset inventory page](asset-inventory.md) is a multicloud enabled feature helping you manage your AWS resources alongside your Azure resources.
- **Microsoft Defender for Containers** brings threat detection and advanced defenses to your Amazon EKS clusters. This plan includes Kubernetes threat protection, behavioral analytics, Kubernetes best practices, admission control recommendations and more. You can view the full list of available features in [Defender for Containers feature availability](supported-machines-endpoint-solutions-clouds-containers.md).
- - **Microsoft Defender for Servers** brings threat detection and advanced defenses to your Windows and Linux EC2 instances. This plan includes the integrated license for Microsoft Defender for Endpoint, security baselines and OS level assessments, vulnerability assessment scanning, adaptive application controls (AAC), file integrity monitoring (FIM), and more. You can view the full list of available features in the [feature availability table](supported-machines-endpoint-solutions-clouds-servers.md?tabs=tab/features-multi-cloud).
+ - **Microsoft Defender for Servers** brings threat detection and advanced defenses to your Windows and Linux EC2 instances. This plan includes the integrated license for Microsoft Defender for Endpoint, security baselines and OS level assessments, vulnerability assessment scanning, adaptive application controls (AAC), file integrity monitoring (FIM), and more. You can view the full list of available features in the [feature availability table](supported-machines-endpoint-solutions-clouds-servers.md?tabs=tab/features-multicloud).
For a reference list of all the recommendations Defender for Cloud can provide for AWS resources, see [Security recommendations for AWS resources - a reference guide](recommendations-reference-aws.md).
This screenshot shows AWS accounts displayed in Defender for Cloud's [overview d
|Aspect|Details| |-|:-|
-|Release state:|Preview.<br>[!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)]|
-|Pricing:|The **CSPM plan** is free.<br>The **[Defender for Containers](defender-for-containers-introduction.md)** plan is free during the preview. After which, it will be billed for AWS at the same price as for Azure resources.<br>For every AWS machine connected to Azure with [Azure Arc-enabled servers](../azure-arc/servers/overview.md), the **Defender for Servers** plan is billed at the same price as the [Microsoft Defender for Servers](defender-for-servers-introduction.md) plan for Azure machines. If an AWS EC2 doesn't have the Azure Arc agent deployed, you won't be charged for that machine.|
-|Required roles and permissions:|**Contributor** permission for the relevant Azure subscription.|
+|Release state:|General Availability (GA)|
+|Pricing:| The **CSPM plan** is free.<br>The **[Defender for Containers](defender-for-containers-introduction.md)** plan for AWS is billed at the same price as for Azure resources. <br>For every AWS machine connected to Azure with [Azure Arc-enabled servers](../azure-arc/servers/overview.md), the **Defender for Servers** plan is billed at the same price as the [Microsoft Defender for Servers](defender-for-servers-introduction.md) plan for Azure machines. If an AWS EC2 doesn't have the Azure Arc agent deployed, you won't be charged for that machine.|
+|Required roles and permissions:|**Contributor** permission for the relevant Azure subscription. <br> **Administrator** on the AWS account.|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)|
This screenshot shows AWS accounts displayed in Defender for Cloud's [overview d
- VA solution (TVM/ Qualys) - Log Analytics (LA) agent on Arc machines. Ensure the selected workspace has security solution installed.
- The LA agent is currently configured in the subscription level, such that all the multi-cloud accounts and projects (from both AWS and GCP) under the same subscription will inherit the subscription settings with regards to the LA agent.
+ The LA agent is currently configured in the subscription level, such that all the multicloud accounts and projects (from both AWS and GCP) under the same subscription will inherit the subscription settings with regards to the LA agent.
Learn how to [configure auto-provisioning on your subscription](enable-data-collection.md#configure-auto-provisioning-for-agents-and-extensions-from-microsoft-defender-for-cloud).
If you have any existing connectors created with the classic cloud connectors ex
1. Enter the details of the AWS account, including the location where you'll store the connector resource. :::image type="content" source="media/quickstart-onboard-aws/add-aws-account-details.png" alt-text="Step 1 of the add AWS account wizard: Enter the account details.":::
+
+ (Optional) Select **Management account** to create a connector to a management account. Connectors will be created for each member account discovered under the provided management account. Auto-provisioning will be enabled for all of the newly onboarded accounts.
1. Select **Next: Select plans**.
If you have any existing connectors created with the classic cloud connectors ex
1. Download the CloudFormation template.
-1. Using the downloaded CloudFormation template, create the stack in AWS as instructed on screen.
+1. Using the downloaded CloudFormation template, create the stack in AWS as instructed on screen. If you're onboarding a management account, you'll need to run the CloudFormation template both as a Stack and as a StackSet. Connectors for the member accounts will be created up to 24 hours after onboarding.
1. Select **Next: Review and generate**.
AWS Systems Manager is required for automating tasks across your AWS resources.
:::image type="content" source="media/quickstart-onboard-gcp/classic-connectors-experience.png" alt-text="Switching back to the classic cloud connectors experience in Defender for Cloud."::: 1. Select **Add AWS account**.
- :::image type="content" source="./media/quickstart-onboard-aws/add-aws-account.png" alt-text="Add AWS account button on Defender for Cloud's multi-cloud connectors page":::
+ :::image type="content" source="./media/quickstart-onboard-aws/add-aws-account.png" alt-text="Add AWS account button on Defender for Cloud's multicloud connectors page":::
1. Configure the options in the **AWS authentication** tab: 1. Enter a **Display name** for the connector. 1. Confirm that the subscription is correct. It's the subscription that will include the connector and AWS Security Hub recommendations.
When the connector is successfully created, and AWS Security Hub has been config
## Monitoring your AWS resources
-As you can see in the previous screenshot, Defender for Cloud's security recommendations page displays your AWS resources. You can use the environments filter to enjoy Defender for Cloud's multi-cloud capabilities: view the recommendations for Azure, AWS, and GCP resources together.
+As you can see in the previous screenshot, Defender for Cloud's security recommendations page displays your AWS resources. You can use the environments filter to enjoy Defender for Cloud's multicloud capabilities: view the recommendations for Azure, AWS, and GCP resources together.
To view all the active recommendations for your resources by resource type, use Defender for Cloud's asset inventory page and filter to the AWS resource type in which you're interested:
For other operating systems, the SSM Agent should be installed manually using th
## Next steps
-Connecting your AWS account is part of the multi-cloud experience available in Microsoft Defender for Cloud. For related information, see the following page:
+Connecting your AWS account is part of the multicloud experience available in Microsoft Defender for Cloud. For related information, see the following page:
- [Security recommendations for AWS resources - a reference guide](recommendations-reference-aws.md). - [Connect your GCP projects to Microsoft Defender for Cloud](quickstart-onboard-gcp.md)
defender-for-cloud Quickstart Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md
description: Monitoring your GCP resources from Microsoft Defender for Cloud
Previously updated : 03/27/2022 Last updated : 05/17/2022 zone_pivot_groups: connect-gcp-accounts
Microsoft Defender for Cloud protects workloads in Azure, Amazon Web Services (A
To protect your GCP-based resources, you can connect an account in two different ways: -- **Classic cloud connectors experience** - As part of the initial multi-cloud offering, we introduced these cloud connectors as a way to connect your AWS and GCP projects.
+- **Classic cloud connectors experience** - As part of the initial multicloud offering, we introduced these cloud connectors as a way to connect your AWS and GCP projects.
- **Environment settings page** (Recommended) - This page provides the onboarding experience (including auto provisioning). This mechanism also extends Defender for Cloud's enhanced security features to your GCP resources:
- - **Defender for Cloud's CSPM features** extends to your GCP resources. This agentless plan assesses your GCP resources according to GCP-specific security recommendations and these are included in your secure score. The resources will also be assessed for compliance with built-in standards specific to GCP. Defender for Cloud's [asset inventory page](asset-inventory.md) is a multi-cloud enabled feature helping you manage your GCP resources alongside your Azure resources.
+    - **Defender for Cloud's CSPM features** extend to your GCP resources. This agentless plan assesses your GCP resources according to GCP-specific security recommendations and these are included in your secure score. The resources will also be assessed for compliance with built-in standards specific to GCP. Defender for Cloud's [asset inventory page](asset-inventory.md) is a multicloud enabled feature helping you manage your GCP resources alongside your Azure resources.
- **Microsoft Defender for Servers** brings threat detection and advanced defenses to your GCP VM instances. This plan includes the integrated license for Microsoft Defender for Endpoint, security baselines and OS level assessments, vulnerability assessment scanning, adaptive application controls (AAC), file integrity monitoring (FIM), and more. You can view the full list of available features in the [Supported features for virtual machines and servers table](supported-machines-endpoint-solutions-clouds-servers.md) - **Microsoft Defender for Containers** - Microsoft Defender for Containers brings threat detection and advanced defenses to your Google's Kubernetes Engine (GKE) Standard clusters. This plan includes Kubernetes threat protection, behavioral analytics, Kubernetes best practices, admission control recommendations and more. You can view the full list of available features in [Defender for Containers feature availability](supported-machines-endpoint-solutions-clouds-containers.md).
To protect your GCP-based resources, you can connect an account in two different
|-|:-| | Release state: | Preview <br> The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to the Azure features that are in beta, preview, or otherwise not yet released into general availability. | |Pricing:|The **CSPM plan** is free.<br> The **Defender for Servers** plan is billed at the same price as the [Microsoft Defender for Servers](defender-for-servers-introduction.md) plan for Azure machines. If a GCP VM instance doesn't have the Azure Arc agent deployed, you won't be charged for that machine. <br>The **[Defender for Containers](defender-for-containers-introduction.md)** plan is free during the preview. After which, it will be billed for GCP at the same price as for Azure resources.|
-|Required roles and permissions:| **Contributor** on the relevant Azure Subscription|
+|Required roles and permissions:| **Contributor** on the relevant Azure Subscription <br> **Owner** on the GCP organization or project|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet, Other Gov)| - ## Remove 'classic' connectors If you have any existing connectors created with the classic cloud connectors experience, remove them first:
Follow the steps below to create your GCP cloud connector.
:::image type="content" source="media/quickstart-onboard-gcp/create-connector.png" alt-text="Screenshot of the Create GCP connector page where you need to enter all relevant information.":::
+ (Optional) If you select **Organization (Preview)**, a management project and an organization custom role will be created on your GCP project for the onboarding process. Auto-provisioning will be enabled for the onboarding of new projects.
+ 1. Select the **Next: Select Plans**. 1. Toggle the plans you want to connect to **On**. By default all necessary prerequisites and components will be provisioned. (Optional) Learn how to [configure each plan](#optional-configure-selected-plans).
Follow the steps below to create your GCP cloud connector.
:::image type="content" source="media/quickstart-onboard-gcp/copy-button.png" alt-text="Screenshot showing the location of the copy button.":::
+ > [!NOTE]
+    > To discover GCP resources and to complete the authentication process, the following APIs must be enabled: iam.googleapis.com, sts.googleapis.com, cloudresourcemanager.googleapis.com, iamcredentials.googleapis.com, compute.googleapis.com. If these APIs aren't enabled, we'll enable them during the onboarding process by running the gcloud script.
+ 1. Select the **GCP Cloud Shell >**. 1. The GCP Cloud Shell will open.
To have full visibility to Microsoft Defender for Servers security content, ensu
- VA solution (TVM/ Qualys) - Log Analytics (LA) agent on Arc machines. Ensure the selected workspace has security solution installed.
- The LA agent is currently configured in the subscription level, such that all the multi-cloud accounts and projects (from both AWS and GCP) under the same subscription will inherit the subscription settings with regards to the LA agent.
+ The LA agent is currently configured in the subscription level, such that all the multicloud accounts and projects (from both AWS and GCP) under the same subscription will inherit the subscription settings with regards to the LA agent.
Learn how to [configure auto-provisioning on your subscription](enable-data-collection.md#configure-auto-provisioning-for-agents-and-extensions-from-microsoft-defender-for-cloud).
When the connector is successfully created and GCP Security Command Center has b
## Monitor your GCP resources
-As shown above, Microsoft Defender for Cloud's security recommendations page displays your GCP resources together with your Azure and AWS resources for a true multi-cloud view.
+As shown above, Microsoft Defender for Cloud's security recommendations page displays your GCP resources together with your Azure and AWS resources for a true multicloud view.
To view all the active recommendations for your resources by resource type, use Defender for Cloud's asset inventory page and filter to the GCP resource type in which you're interested:
Yes. To create, edit, or delete Defender for Cloud cloud connectors with a REST
## Next steps
-Connecting your GCP project is part of the multi-cloud experience available in Microsoft Defender for Cloud. For related information, see the following page:
+Connecting your GCP project is part of the multicloud experience available in Microsoft Defender for Cloud. For related information, see the following page:
- [Connect your AWS accounts to Microsoft Defender for Cloud](quickstart-onboard-aws.md) - [Google Cloud resource hierarchy](https://cloud.google.com/resource-manager/docs/cloud-platform-resource-hierarchy)--Learn about the Google Cloud resource hierarchy in Google's online docs
defender-for-cloud Recommendations Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/recommendations-reference.md
Title: Reference table for all Microsoft Defender for Cloud recommendations description: This article lists Microsoft Defender for Cloud's security recommendations that help you harden and protect your resources.-+ Previously updated : 03/13/2022- Last updated : 05/19/2022+ # Security recommendations - a reference guide
impact on your secure score.
|Install Azure Security Center for IoT security module to get more visibility into your IoT devices|Install Azure Security Center for IoT security module to get more visibility into your IoT devices.|Low| |Your machines should be restarted to apply system updates|Restart your machines to apply the system updates and secure the machine from vulnerabilities. (Related policy: System updates should be installed on your machines)|Medium| |Monitoring agent should be installed on your machines|This action installs a monitoring agent on the selected virtual machines. Select a workspace for the agent to report to. (No related policy)|High|-
+||||
## Next steps
defender-for-cloud Regulatory Compliance Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/regulatory-compliance-dashboard.md
In this tutorial, you learned about using Defender for Cloud's regulatory comp
> * View and monitor your compliance posture regarding the standards and regulations that are important to you. > * Improve your compliance status by resolving relevant recommendations and watching the compliance score improve.
-The regulatory compliance dashboard can greatly simplify the compliance process, and significantly cut the time required for gathering compliance evidence for your Azure, hybrid, and multi-cloud environment.
+The regulatory compliance dashboard can greatly simplify the compliance process, and significantly cut the time required for gathering compliance evidence for your Azure, hybrid, and multicloud environment.
To learn more, see these related pages:
defender-for-cloud Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-archive.md
Other changes in November include:
### Azure Security Center and Azure Defender become Microsoft Defender for Cloud
-According to the [2021 State of the Cloud report](https://info.flexera.com/CM-REPORT-State-of-the-Cloud#download), 92% of organizations now have a multi-cloud strategy. At Microsoft, our goal is to centralize security across these environments and help security teams work more effectively.
+According to the [2021 State of the Cloud report](https://info.flexera.com/CM-REPORT-State-of-the-Cloud#download), 92% of organizations now have a multicloud strategy. At Microsoft, our goal is to centralize security across these environments and help security teams work more effectively.
-**Microsoft Defender for Cloud** (formerly known as Azure Security Center and Azure Defender) is a Cloud Security Posture Management (CSPM) and cloud workload protection (CWP) solution that discovers weaknesses across your cloud configuration, helps strengthen the overall security posture of your environment, and protects workloads across multi-cloud and hybrid environments.
+**Microsoft Defender for Cloud** (formerly known as Azure Security Center and Azure Defender) is a Cloud Security Posture Management (CSPM) and cloud workload protection (CWP) solution that discovers weaknesses across your cloud configuration, helps strengthen the overall security posture of your environment, and protects workloads across multicloud and hybrid environments.
At Ignite 2019, we shared our vision to create the most complete approach for securing your digital estate and integrating XDR technologies under the Microsoft Defender brand. Unifying Azure Security Center and Azure Defender under the new name **Microsoft Defender for Cloud**, reflects the integrated capabilities of our security offering and our ability to support any cloud platform.
A new **environment settings** page provides greater visibility and control over
When you've added your AWS accounts, Defender for Cloud protects your AWS resources with any or all of the following plans: -- **Defender for Cloud's CSPM features** extend to your AWS resources. This agentless plan assesses your AWS resources according to AWS-specific security recommendations and these are included in your secure score. The resources will also be assessed for compliance with built-in standards specific to AWS (AWS CIS, AWS PCI DSS, and AWS Foundational Security Best Practices). Defender for Cloud's [asset inventory page](asset-inventory.md) is a multi-cloud enabled feature helping you manage your AWS resources alongside your Azure resources.
+- **Defender for Cloud's CSPM features** extend to your AWS resources. This agentless plan assesses your AWS resources according to AWS-specific security recommendations and these are included in your secure score. The resources will also be assessed for compliance with built-in standards specific to AWS (AWS CIS, AWS PCI DSS, and AWS Foundational Security Best Practices). Defender for Cloud's [asset inventory page](asset-inventory.md) is a multicloud enabled feature helping you manage your AWS resources alongside your Azure resources.
- **Microsoft Defender for Kubernetes** extends its container threat detection and advanced defenses to your **Amazon EKS Linux clusters**. - **Microsoft Defender for Servers** brings threat detection and advanced defenses to your Windows and Linux EC2 instances. This plan includes the integrated license for Microsoft Defender for Endpoint, security baselines and OS level assessments, vulnerability assessment scanning, adaptive application controls (AAC), file integrity monitoring (FIM), and more.
Learn more about [connecting your AWS accounts to Microsoft Defender for Cloud](
### Prioritize security actions by data sensitivity (powered by Microsoft Purview) (in preview) Data resources remain a popular target for threat actors. So it's crucial for security teams to identify, prioritize, and secure sensitive data resources across their cloud environments.
-To address this challenge, Microsoft Defender for Cloud now integrates sensitivity information from [Microsoft Purview](../purview/overview.md). Microsoft Purview is a unified data governance service that provides rich insights into the sensitivity of your data within multi-cloud, and on-premises workloads.
+To address this challenge, Microsoft Defender for Cloud now integrates sensitivity information from [Microsoft Purview](../purview/overview.md). Microsoft Purview is a unified data governance service that provides rich insights into the sensitivity of your data within multicloud, and on-premises workloads.
The integration with Microsoft Purview extends your security visibility in Defender for Cloud from the infrastructure level down to the data, enabling an entirely new way to prioritize resources and security activities for your security teams.
This change is reflected in the names of the recommendation with a new prefix, *
### Prefix for Kubernetes alerts changed from "AKS_" to "K8S_"
-Azure Defender for Kubernetes recently expanded to protect Kubernetes clusters hosted on-premises and in multi-cloud environments. Learn more in [Use Azure Defender for Kubernetes to protect hybrid and multi-cloud Kubernetes deployments (in preview)](release-notes-archive.md#use-azure-defender-for-kubernetes-to-protect-hybrid-and-multi-cloud-kubernetes-deployments-in-preview).
+Azure Defender for Kubernetes recently expanded to protect Kubernetes clusters hosted on-premises and in multicloud environments. Learn more in [Use Azure Defender for Kubernetes to protect hybrid and multicloud Kubernetes deployments (in preview)](release-notes-archive.md#use-azure-defender-for-kubernetes-to-protect-hybrid-and-multicloud-kubernetes-deployments-in-preview).
To reflect the fact that the security alerts provided by Azure Defender for Kubernetes are no longer restricted to clusters on Azure Kubernetes Service, we've changed the prefix for the alert types from "AKS_" to "K8S_". Where necessary, the names and descriptions were updated too. For example, this alert:
To simplify the process of enabling these plans, use the recommendations:
Azure Security Center expands its offer for SQL protection with a new bundle to cover your open-source relational databases: - **Azure Defender for Azure SQL database servers** - defends your Azure-native SQL Servers-- **Azure Defender for SQL servers on machines** - extends the same protections to your SQL servers in hybrid, multi-cloud, and on-premises environments
+- **Azure Defender for SQL servers on machines** - extends the same protections to your SQL servers in hybrid, multicloud, and on-premises environments
- **Azure Defender for open-source relational databases** - defends your Azure Databases for MySQL, PostgreSQL, and MariaDB single servers Azure Defender for open-source relational databases constantly monitors your servers for security threats and detects anomalous database activities indicating potential threats to Azure Database for MySQL, PostgreSQL, and MariaDB. Some examples are:
Learn more about the [Assessments REST API](/rest/api/securitycenter/assessments
Security Center's asset inventory page offers many filters to quickly refine the list of resources displayed. Learn more in [Explore and manage your resources with asset inventory](asset-inventory.md).
-A new filter offers the option to refine the list according to the cloud accounts you've connected with Security Center's multi-cloud features:
+A new filter offers the option to refine the list according to the cloud accounts you've connected with Security Center's multicloud features:
:::image type="content" source="media/asset-inventory/filter-environment.png" alt-text="Inventory's environment filter":::
-Learn more about the multi-cloud capabilities:
+Learn more about the multicloud capabilities:
- [Connect your AWS accounts to Azure Security Center](quickstart-onboard-aws.md) - [Connect your GCP projects to Azure Security Center](quickstart-onboard-gcp.md)
Learn more about the multi-cloud capabilities:
Updates in April include: - [Refreshed resource health page (in preview)](#refreshed-resource-health-page-in-preview) - [Container registry images that have been recently pulled are now rescanned weekly (released for general availability (GA))](#container-registry-images-that-have-been-recently-pulled-are-now-rescanned-weekly-released-for-general-availability-ga)-- [Use Azure Defender for Kubernetes to protect hybrid and multi-cloud Kubernetes deployments (in preview)](#use-azure-defender-for-kubernetes-to-protect-hybrid-and-multi-cloud-kubernetes-deployments-in-preview)
+- [Use Azure Defender for Kubernetes to protect hybrid and multicloud Kubernetes deployments (in preview)](#use-azure-defender-for-kubernetes-to-protect-hybrid-and-multicloud-kubernetes-deployments-in-preview)
- [Microsoft Defender for Endpoint integration with Azure Defender now supports Windows Server 2019 and Windows 10 on Windows Virtual Desktop released for general availability (GA)](#microsoft-defender-for-endpoint-integration-with-azure-defender-now-supports-windows-server-2019-and-windows-10-on-windows-virtual-desktop-released-for-general-availability-ga) - [Recommendations to enable Azure Defender for DNS and Resource Manager (in preview)](#recommendations-to-enable-azure-defender-for-dns-and-resource-manager-in-preview) - [Three regulatory compliance standards added: Azure CIS 1.3.0, CMMC Level 3, and New Zealand ISM Restricted](#three-regulatory-compliance-standards-added-azure-cis-130-cmmc-level-3-and-new-zealand-ism-restricted)
Scanning is charged on a per image basis, so there's no additional charge for th
Learn more about this scanner in [Use Azure Defender for container registries to scan your images for vulnerabilities](defender-for-containers-usage.md).
-### Use Azure Defender for Kubernetes to protect hybrid and multi-cloud Kubernetes deployments (in preview)
+### Use Azure Defender for Kubernetes to protect hybrid and multicloud Kubernetes deployments (in preview)
Azure Defender for Kubernetes is expanding its threat protection capabilities to defend your clusters wherever they're deployed. This has been enabled by integrating with [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and its new [extensions capabilities](../azure-arc/kubernetes/extensions.md).
This integration between Azure Security Center, Azure Defender, and Azure Arc-en
- Identified security threats from Azure Defender are reported in the new Security page of the Azure Arc Portal - Azure Arc-enabled Kubernetes clusters are integrated into the Azure Security Center platform and experience
-Learn more in [Use Azure Defender for Kubernetes with your on-premises and multi-cloud Kubernetes clusters](defender-for-kubernetes-azure-arc.md).
+Learn more in [Use Azure Defender for Kubernetes with your on-premises and multicloud Kubernetes clusters](defender-for-kubernetes-azure-arc.md).
:::image type="content" source="media/defender-for-kubernetes-azure-arc/extension-recommendation.png" alt-text="Azure Security Center's recommendation for deploying the Azure Defender extension for Azure Arc-enabled Kubernetes clusters." lightbox="media/defender-for-kubernetes-azure-arc/extension-recommendation.png":::
Learn more about how to [Explore and manage your resources with asset inventory]
Updates in January include: - [Azure Security Benchmark is now the default policy initiative for Azure Security Center](#azure-security-benchmark-is-now-the-default-policy-initiative-for-azure-security-center)-- [Vulnerability assessment for on-premise and multi-cloud machines is released for general availability (GA)](#vulnerability-assessment-for-on-premise-and-multi-cloud-machines-is-released-for-general-availability-ga)
+- [Vulnerability assessment for on-premise and multicloud machines is released for general availability (GA)](#vulnerability-assessment-for-on-premise-and-multicloud-machines-is-released-for-general-availability-ga)
- [Secure score for management groups is now available in preview](#secure-score-for-management-groups-is-now-available-in-preview) - [Secure score API is released for general availability (GA)](#secure-score-api-is-released-for-general-availability-ga) - [Dangling DNS protections added to Azure Defender for App Service](#dangling-dns-protections-added-to-azure-defender-for-app-service)-- [Multi-cloud connectors are released for general availability (GA)](#multi-cloud-connectors-are-released-for-general-availability-ga)
+- [Multicloud connectors are released for general availability (GA)](#multicloud-connectors-are-released-for-general-availability-ga)
- [Exempt entire recommendations from your secure score for subscriptions and management groups](#exempt-entire-recommendations-from-your-secure-score-for-subscriptions-and-management-groups) - [Users can now request tenant-wide visibility from their global administrator](#users-can-now-request-tenant-wide-visibility-from-their-global-administrator) - [35 preview recommendations added to increase coverage of Azure Security Benchmark](#35-preview-recommendations-added-to-increase-coverage-of-azure-security-benchmark)
To learn more, see the following pages:
- [Learn more about Azure Security Benchmark](/security/benchmark/azure/introduction) - [Customize the set of standards in your regulatory compliance dashboard](update-regulatory-compliance-packages.md)
-### Vulnerability assessment for on-premise and multi-cloud machines is released for general availability (GA)
+### Vulnerability assessment for on-premise and multicloud machines is released for general availability (GA)
In October, we announced a preview for scanning Azure Arc-enabled servers with [Azure Defender for Servers](defender-for-servers-introduction.md)' integrated vulnerability assessment scanner (powered by Qualys).
Learn more:
- [Introduction to Azure Defender for App Service](defender-for-app-service-introduction.md)
-### Multi-cloud connectors are released for general availability (GA)
+### Multicloud connectors are released for general availability (GA)
With cloud workloads commonly spanning multiple cloud platforms, cloud security services must do the same.
This capability means that Security Center provides visibility and protection ac
- Incorporate all of your resources into Security Center's secure score calculations - Regulatory compliance assessments of your AWS and GCP resources
-From Defender for Cloud's menu, select **Multi-cloud connectors** and you'll see the options for creating new connectors:
+From Defender for Cloud's menu, select **Multicloud connectors** and you'll see the options for creating new connectors:
Learn more in: - [Connect your AWS accounts to Azure Security Center](quickstart-onboard-aws.md)
Updates in December include:
Azure Security Center offers two Azure Defender plans for SQL Servers: - **Azure Defender for Azure SQL database servers** - defends your Azure-native SQL Servers -- **Azure Defender for SQL servers on machines** - extends the same protections to your SQL servers in hybrid, multi-cloud, and on-premises environments
+- **Azure Defender for SQL servers on machines** - extends the same protections to your SQL servers in hybrid, multicloud, and on-premises environments
With this announcement, **Azure Defender for SQL** now protects your databases and their data wherever they're located.
You can now see whether or not your subscriptions have the default Security Cent
## October 2020 Updates in October include:-- [Vulnerability assessment for on-premise and multi-cloud machines (preview)](#vulnerability-assessment-for-on-premise-and-multi-cloud-machines-preview)
+- [Vulnerability assessment for on-premise and multicloud machines (preview)](#vulnerability-assessment-for-on-premise-and-multicloud-machines-preview)
- [Azure Firewall recommendation added (preview)](#azure-firewall-recommendation-added-preview) - [Authorized IP ranges should be defined on Kubernetes Services recommendation updated with quick fix](#authorized-ip-ranges-should-be-defined-on-kubernetes-services-recommendation-updated-with-quick-fix) - [Regulatory compliance dashboard now includes option to remove standards](#regulatory-compliance-dashboard-now-includes-option-to-remove-standards) - [Microsoft.Security/securityStatuses table removed from Azure Resource Graph (ARG)](#microsoftsecuritysecuritystatuses-table-removed-from-azure-resource-graph-arg)
-### Vulnerability assessment for on-premise and multi-cloud machines (preview)
+### Vulnerability assessment for on-premise and multicloud machines (preview)
[Azure Defender for Servers](defender-for-servers-introduction.md)' integrated vulnerability assessment scanner (powered by Qualys) now scans Azure Arc-enabled servers.
Updates in September include:
- [Asset inventory tools are now generally available](#asset-inventory-tools-are-now-generally-available) - [Disable a specific vulnerability finding for scans of container registries and virtual machines](#disable-a-specific-vulnerability-finding-for-scans-of-container-registries-and-virtual-machines) - [Exempt a resource from a recommendation](#exempt-a-resource-from-a-recommendation)-- [AWS and GCP connectors in Security Center bring a multi-cloud experience](#aws-and-gcp-connectors-in-security-center-bring-a-multi-cloud-experience)
+- [AWS and GCP connectors in Security Center bring a multicloud experience](#aws-and-gcp-connectors-in-security-center-bring-a-multicloud-experience)
- [Kubernetes workload protection recommendation bundle](#kubernetes-workload-protection-recommendation-bundle) - [Vulnerability assessment findings are now available in continuous export](#vulnerability-assessment-findings-are-now-available-in-continuous-export) - [Prevent security misconfigurations by enforcing recommendations when creating new resources](#prevent-security-misconfigurations-by-enforcing-recommendations-when-creating-new-resources)
In such cases, you can create an exemption rule and ensure that resource isn't l
Learn more in [Exempt a resource from recommendations and secure score](exempt-resource.md).
-### AWS and GCP connectors in Security Center bring a multi-cloud experience
+### AWS and GCP connectors in Security Center bring a multicloud experience
With cloud workloads commonly spanning multiple cloud platforms, cloud security services must do the same.
Azure Security Center (ASC) has launched new networking recommendations and impr
## June 2019
-### Adaptive Network Hardening - generally available
+### Adaptive network hardening - generally available
One of the biggest attack surfaces for workloads running in the public cloud is connections to and from the public Internet. Our customers find it hard to know which Network Security Group (NSG) rules should be in place to make sure that Azure workloads are only available to required source ranges. With this feature, Security Center learns the network traffic and connectivity patterns of Azure workloads and provides NSG rule recommendations for Internet-facing virtual machines. This helps our customers better configure their network access policies and limit their exposure to attacks.
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
To learn about *planned* changes that are coming soon to Defender for Cloud, see
Updates in May include: -- [Multi-cloud settings of Servers plan are now available in connector level](#multi-cloud-settings-of-servers-plan-are-now-available-in-connector-level)
+- [Multicloud settings of Servers plan are now available in connector level](#multicloud-settings-of-servers-plan-are-now-available-in-connector-level)
- [JIT (Just-in-time) access for VMs is now available for AWS EC2 instances (Preview)](#jit-just-in-time-access-for-vms-is-now-available-for-aws-ec2-instances-preview)
-### Multi-cloud settings of Servers plan are now available in connector level
+### Multicloud settings of Servers plan are now available in connector level
-There are now connector-level settings for Defender for Servers in multi-cloud.
+There are now connector-level settings for Defender for Servers in multicloud.
The new connector-level settings provide granularity for pricing and auto-provisioning configuration per connector, independently of the subscription.
All auto-provisioning components available in the connector level (Azure Arc, MD
Updates in the UI include a reflection of the selected pricing tier and the required components configured. +
+### Changes to vulnerability assessment
+
+Defender for Containers now displays vulnerabilities that have medium and low severities that are not patchable.
+
+As part of this update, vulnerabilities that have medium and low severities are now shown, whether or not patches are available. This update provides maximum visibility, but still allows you to filter out undesired vulnerabilities by using the provided Disable rule.
++
+Learn more about [vulnerability management](deploy-vulnerability-assessment-tvm.md)
### JIT (Just-in-time) access for VMs is now available for AWS EC2 instances (Preview)
All Microsoft Defenders for IoT device alerts are no longer visible in Microsoft
### Posture management and threat protection for AWS and GCP released for general availability (GA) -- **Defender for Cloud's CSPM features** extend to your AWS and GCP resources. This agentless plan assesses your multi cloud resources according to cloud-specific security recommendations that are included in your secure score. The resources are assessed for compliance using the built-in standards. Defender for Cloud's asset inventory page is a multi-cloud enabled feature that allows you to manage your AWS resources alongside your Azure resources.
+- **Defender for Cloud's CSPM features** extend to your AWS and GCP resources. This agentless plan assesses your multicloud resources according to cloud-specific security recommendations that are included in your secure score. The resources are assessed for compliance using the built-in standards. Defender for Cloud's asset inventory page is a multicloud enabled feature that allows you to manage your AWS resources alongside your Azure resources.
- **Microsoft Defender for Servers** brings threat detection and advanced defenses to your compute instances in AWS and GCP. The Defender for Servers plan includes an integrated license for Microsoft Defender for Endpoint, vulnerability assessment scanning, and more. Learn about all of the [supported features for virtual machines and servers](supported-machines-endpoint-solutions-clouds-servers.md). Automatic onboarding capabilities allow you to easily connect any existing or new compute instances discovered in your environment.
Learn how to [set up your Kubernetes workload protection](kubernetes-workload-pr
The new automated onboarding of GCP environments allows you to protect GCP workloads with Microsoft Defender for Cloud. Defender for Cloud protects your resources with the following plans: -- **Defender for Cloud's CSPM** features extend to your GCP resources. This agentless plan assesses your GCP resources according to the GCP-specific security recommendations, which are provided with Defender for Cloud. GCP recommendations are included in your secure score, and the resources will be assessed for compliance with the built-in GCP CIS standard. Defender for Cloud's asset inventory page is a multi-cloud enabled feature helping you manage your resources across Azure, AWS, and GCP.
+- **Defender for Cloud's CSPM** features extend to your GCP resources. This agentless plan assesses your GCP resources according to the GCP-specific security recommendations, which are provided with Defender for Cloud. GCP recommendations are included in your secure score, and the resources will be assessed for compliance with the built-in GCP CIS standard. Defender for Cloud's asset inventory page is a multicloud enabled feature helping you manage your resources across Azure, AWS, and GCP.
- **Microsoft Defender for Servers** brings threat detection and advanced defenses to your GCP compute instances. This plan includes the integrated license for Microsoft Defender for Endpoint, vulnerability assessment scanning, and more.
With the release of [Microsoft Defender for Containers](defender-for-containers-
The new plan: - **Combines the features of the two existing plans** - threat detection for Kubernetes clusters and vulnerability assessment for images stored in container registries-- **Brings new and improved features** - including multi-cloud support, host level threat detection with over **sixty** new Kubernetes-aware analytics, and vulnerability assessment for running images
+- **Brings new and improved features** - including multicloud support, host level threat detection with over **sixty** new Kubernetes-aware analytics, and vulnerability assessment for running images
- **Introduces Kubernetes-native at-scale onboarding** - by default, when you enable the plan all relevant components are configured to be deployed automatically With this release, the availability and presentation of Defender for Kubernetes and Defender for container registries has changed as follows:
defender-for-cloud Review Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/review-security-recommendations.md
Last updated 04/03/2022
# Review your security recommendations
-This article explains how to view and understand the recommendations in Microsoft Defender for Cloud to help you protect your multi-cloud resources.
+This article explains how to view and understand the recommendations in Microsoft Defender for Cloud to help you protect your multicloud resources.
## View your recommendations <a name="monitor-recommendations"></a>
defender-for-cloud Security Center Readiness Roadmap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/security-center-readiness-roadmap.md
Use the following resources to learn how to use these capabilities in Defender f
Videos * [Defender for Cloud – Just-in-time VM Access](https://youtu.be/UOQb2FcdQnU)
-* [Defender for Cloud - Adaptive Application Controls](https://youtu.be/wWWekI1Y9ck)
+* [Defender for Cloud - Adaptive application controls](https://youtu.be/wWWekI1Y9ck)
Articles * [Manage virtual machine access using just-in-time](./just-in-time-access-usage.md)
-* [Adaptive Application Controls in Defender for Cloud](./adaptive-application-controls.md)
+* [Adaptive application controls in Defender for Cloud](./adaptive-application-controls.md)
## Hands-on activities
defender-for-cloud Supported Machines Endpoint Solutions Clouds Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-servers.md
The **tabs** below show the features of Microsoft Defender for Cloud that are av
-### [**Multi-cloud machines**](#tab/features-multi-cloud)
+### [**Multicloud machines**](#tab/features-multicloud)
| **Feature** | **Availability in AWS** | **Availability in GCP** | |--|:-:|:-:|
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
If you're looking for the latest release notes, you'll find them in the [What's
| Planned change | Estimated date for change | |--|--| | [Changes to recommendations for managing endpoint protection solutions](#changes-to-recommendations-for-managing-endpoint-protection-solutions) | May 2022 |
-| [Changes to vulnerability assessment](#changes-to-vulnerability-assessment) | May 2022 |
| [Key Vault recommendations changed to "audit"](#key-vault-recommendations-changed-to-audit) | May 2022 | | [Multiple changes to identity recommendations](#multiple-changes-to-identity-recommendations) | June 2022 | | [Deprecating three VM alerts](#deprecating-three-vm-alerts) | June 2022|
+| [Deprecating the "API App should only be accessible over HTTPS" policy](#deprecating-the-api-app-should-only-be-accessible-over-https-policy)|June 2022|
### Changes to recommendations for managing endpoint protection solutions
Learn more:
- [Defender for Cloud's supported endpoint protection solutions](supported-machines-endpoint-solutions-clouds-servers.md#endpoint-supported) - [How these recommendations assess the status of your deployed solutions](endpoint-protection-recommendations-technical.md)
-### Changes to vulnerability assessment
-
-**Estimated date for change:** May 2022
-
-Currently, Defender for Containers doesn't show vulnerabilities that have medium and low level severities that are not patchable.
-
-As part of this update, vulnerabilities that have medium and low severities, that don't have patches will be shown. This update will provide maximum visibility, while still allowing you to filter undesired vulnerabilities by using the provided Disable rule.
--
-Learn more about [vulnerability management](deploy-vulnerability-assessment-tvm.md)
- ### Key Vault recommendations changed to "audit" The Key Vault recommendations listed here are currently disabled so that they don't impact your secure score. We will change their effect to "audit".
The following table lists the alerts that will be deprecated during June 2022.
These alerts are used to notify a user about suspicious activity connected to a Kubernetes cluster. The alerts will be replaced with matching alerts that are part of the Microsoft Defender for Cloud Container alerts (`K8S.NODE_ImageBuildOnNode`, `K8S.NODE_ KubernetesAPI` and `K8S.NODE_ ContainerSSH`) which will provide improved fidelity and comprehensive context to investigate and act on the alerts. Learn more about alerts for [Kubernetes Clusters](alerts-reference.md).
+### Deprecating the "API App should only be accessible over HTTPS" policy
+
+**Estimated date for change:** June 2022
+
+The policy `API App should only be accessible over HTTPS` is set to be deprecated. This policy will be replaced with `Web Application should only be accessible over HTTPS`, which will be renamed to `App Service apps should only be accessible over HTTPS`.
+
+To learn more about policy definitions for Azure App Service, see [Azure Policy built-in definitions for Azure App Service](../azure-app-configuration/policy-reference.md)
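As a rough sketch (assuming the display names quoted above), the following Azure CLI queries can help you locate the affected definitions and check which assignments reference them before the change takes effect:

```azurecli
# Find policy definitions whose display name mentions HTTPS-only access,
# to locate the deprecated and replacement definitions by name.
az policy definition list \
  --query "[?displayName && contains(displayName, 'accessible over HTTPS')].{name:name, displayName:displayName}" \
  --output table

# List policy assignments in the current scope to see whether any reference the old definition.
az policy assignment list \
  --query "[].{name:name, policy:policyDefinitionId}" \
  --output table
```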
+ ## Next steps For all recent changes to Defender for Cloud, see [What's new in Microsoft Defender for Cloud?](release-notes.md)
defender-for-iot Concept Event Aggregation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/concept-event-aggregation.md
Triggered based collectors are collectors that are triggered in a scheduled mann
Defender for IoT agents aggregate events for the interval period, or time window. Once the interval period has passed, the agent sends the aggregated events to the Azure cloud for further analysis. The aggregated events are stored in memory until being sent to the Azure cloud.
-The agent collects identical events to the ones that are already stored in memory. This collection causes the agent increases the hit count of this specific event to reduce the memory footprint of the agent. When the aggregation time window passes, the agent sends the hit count of each type of event that occurred. Event aggregation is simply the aggregation of the hit counts of each collected type of event.
+When the agent collects an event that's identical to one already stored in memory, it increases the hit count of that event instead of storing a duplicate, which reduces the agent's memory footprint. When the aggregation time window passes, the agent sends the hit count of each type of event that occurred. Event aggregation is simply the aggregation of the hit counts of each collected type of event.
## Process events (event based)
defender-for-iot Alert Engine Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/alert-engine-messages.md
Policy engine alerts describe detected deviations from learned baseline behavior
| Title | Description | Severity | |--|--|--|
-| Beckhoff Software Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major |
+| Beckhoff Software Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major |
| Database Login Failed | A failed sign-in attempt was detected from a source device to a destination server. This might be the result of human error, but could also indicate a malicious attempt to compromise the server or data on it. | Major | | Emerson ROC Firmware Version Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | | External address within the network communicated with Internet | A source device defined as part of your network is communicating with Internet addresses. The source isn't authorized to communicate with Internet addresses. | Critical |
Policy engine alerts describe detected deviations from learned baseline behavior
| Firmware Change Detected | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | | Firmware Version Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | | Foxboro I/A Unauthorized Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| FTP Login Failed | A failed sign-in attempt was detected from a source device to a destination server. This might be the result of human error, but could also indicate a malicious attempt to compromise the server or data on it. | Major |
+| FTP Login Failed | A failed sign-in attempt was detected from a source device to a destination server. This alert might be the result of human error, but could also indicate a malicious attempt to compromise the server or data on it. | Major |
| Function Code Raised Unauthorized Exception | A source device (secondary) returned an exception to a destination device (primary). | Major | | GOOSE Message Type Settings | Message (identified by protocol ID) settings were changed on a source device. | Warning | | Honeywell Firmware Version Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major |
Anomaly engine alerts describe detected anomalies in network activity.
| Title | Description | Severity | |--|--|--|
-| Abnormal Exception Pattern in Slave | An excessive number of errors were detected on a source device. This may be the result of an operational issue. | Minor |
-| * Abnormal HTTP Header Length | The source device sent an abnormal message. This may indicate an attempt to attack the destination device. | Critical |
-| * Abnormal Number of Parameters in HTTP Header | The source device sent an abnormal message. This may indicate an attempt to attack the destination device. | Critical |
+| Abnormal Exception Pattern in Slave | An excessive number of errors were detected on a source device. This alert may be the result of an operational issue. | Minor |
+| * Abnormal HTTP Header Length | The source device sent an abnormal message. This alert may indicate an attempt to attack the destination device. | Critical |
+| * Abnormal Number of Parameters in HTTP Header | The source device sent an abnormal message. This alert may indicate an attempt to attack the destination device. | Critical |
| Abnormal Periodic Behavior In Communication Channel | A change in the frequency of communication between the source and destination devices was detected. | Minor |
-| Abnormal Termination of Applications | An excessive number of stop commands were detected on a source device. This may be the result of an operational issue or an attempt to manipulate the device. | Major |
+| Abnormal Termination of Applications | An excessive number of stop commands were detected on a source device. This alert may be the result of an operational issue or an attempt to manipulate the device. | Major |
| Abnormal Traffic Bandwidth | Abnormal bandwidth was detected on a channel. Bandwidth appears to be lower/higher than previously detected. For details, work with the Total Bandwidth widget. | Warning | | Abnormal Traffic Bandwidth Between Devices | Abnormal bandwidth was detected on a channel. Bandwidth appears to be lower/higher than previously detected. For details, work with the Total Bandwidth widget. | Warning | | Address Scan Detected | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. | Critical | | ARP Address Scan Detected | A source device was detected scanning network devices using Address Resolution Protocol (ARP). This device address hasn't been authorized as valid ARP scanning address. | Critical | | ARP Address Scan Detected | A source device was detected scanning network devices using Address Resolution Protocol (ARP). This device address hasn't been authorized as valid ARP scanning address. | Critical |
-| ARP Spoofing | An abnormal quantity of packets was detected in the network. This could indicate an attack, for example, an ARP spoofing or ICMP flooding attack. | Warning |
-| Excessive Login Attempts | A source device was seen performing excessive sign-in attempts to a destination server. This may be a brute force attack. The server may be compromised by a malicious actor. | Critical |
-| Excessive Number of Sessions | A source device was seen performing excessive sign-in attempts to a destination server. This may be a brute force attack. The server may be compromised by a malicious actor. | Critical |
-| Excessive Restart Rate of an Outstation | An excessive number of restart commands were detected on a source device. This may be the result of an operational issue or an attempt to manipulate the device. | Major |
-| Excessive SMB login attempts | A source device was seen performing excessive sign-in attempts to a destination server. This may be a brute force attack. The server may be compromised by a malicious actor. | Critical |
-| ICMP Flooding | An abnormal quantity of packets was detected in the network. This could indicate an attack, for example, an ARP spoofing or ICMP flooding attack. | Warning |
+| ARP Spoofing | An abnormal quantity of packets was detected in the network. This alert could indicate an attack, for example, an ARP spoofing or ICMP flooding attack. | Warning |
+| Excessive Login Attempts | A source device was seen performing excessive sign-in attempts to a destination server. This alert may indicate a brute force attack. The server may be compromised by a malicious actor. | Critical |
+| Excessive Number of Sessions | A source device was seen performing excessive sign-in attempts to a destination server. This may indicate a brute force attack. The server may be compromised by a malicious actor. | Critical |
+| Excessive Restart Rate of an Outstation | An excessive number of restart commands were detected on a source device. These alerts may be the result of an operational issue or an attempt to manipulate the device. | Major |
+| Excessive SMB login attempts | A source device was seen performing excessive sign-in attempts to a destination server. This may indicate a brute force attack. The server may be compromised by a malicious actor. | Critical |
+| ICMP Flooding | An abnormal quantity of packets was detected in the network. This alert could indicate an attack, for example, an ARP spoofing or ICMP flooding attack. | Warning |
|* Illegal HTTP Header Content | The source device initiated an invalid request. | Critical |
-| Inactive Communication Channel | A communication channel between two devices was inactive during a period in which activity is usually seen. This might indicate that the program generating this traffic was changed, or the program might be unavailable. It's recommended to review the configuration of installed program and verify that it's configured properly. | Warning |
+| Inactive Communication Channel | A communication channel between two devices was inactive during a period in which activity is usually observed. This might indicate that the program generating this traffic was changed, or the program might be unavailable. It's recommended to review the configuration of the installed program and verify that it's configured properly. | Warning |
| Long Duration Address Scan Detected | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. | Critical |
-| Password Guessing Attempt Detected | A source device was seen performing excessive sign-in attempts to a destination server. This may be a brute force attack. The server may be compromised by a malicious actor. | Critical |
+| Password Guessing Attempt Detected | A source device was seen performing excessive sign-in attempts to a destination server. This may indicate a brute force attack. The server may be compromised by a malicious actor. | Critical |
| PLC Scan Detected | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. | Critical | | Port Scan Detected | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. | Critical |
-| Unexpected message length | The source device sent an abnormal message. This may indicate an attempt to attack the destination device. | Critical |
+| Unexpected message length | The source device sent an abnormal message. This alert may indicate an attempt to attack the destination device. | Critical |
| Unexpected Traffic for Standard Port | Traffic was detected on a device using a port reserved for another protocol. | Major | ## Protocol violation engine alerts
Protocol engine alerts describe detected deviations in the packet structure, or
| Title | Description | Severity | |--|--|--|
-| Excessive Malformed Packets In a Single Session | An abnormal number of malformed packets sent from the source device to the destination device. This might indicate erroneous communications, or an attempt to manipulate the targeted device. | Major |
+| Excessive Malformed Packets In a Single Session | An abnormal number of malformed packets sent from the source device to the destination device. This alert might indicate erroneous communications, or an attempt to manipulate the targeted device. | Major |
| Firmware Update | A source device sent a command to update firmware on a destination device. Verify that recent programming, configuration and firmware upgrades made to the destination device are valid. | Warning | | Function Code Not Supported by Outstation | The destination device received an invalid request. | Major | | Illegal BACNet message | The source device initiated an invalid request. | Major |
Malware engine alerts describe detected malicious network activity.
| Connection Attempt to Known Malicious IP | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | | Invalid SMB Message (DoublePulsar Backdoor Implant) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | | Malicious Domain Name Request | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major |
-| Malware Test File Detected - EICAR AV Success | An EICAR AV test file was detected in traffic between two devices (over any transport - TCP or UDP). The file isn't malware. It's used to confirm that the antivirus software is installed correctly; demonstrate what happens when a virus is found, and check internal procedures and reactions when a virus is found. Antivirus software should detect EICAR as if it were a real virus. | Major |
+| Malware Test File Detected - EICAR AV Success | An EICAR AV test file was detected in traffic between two devices (over any transport - TCP or UDP). The file isn't malware. It's used to confirm that the antivirus software is installed correctly, to demonstrate what happens when a virus is found, and to check internal procedures and reactions when a virus is found. Antivirus software should detect EICAR as if it were a real virus. | Major |
| Suspicion of Conficker Malware | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major |
-| Suspicion of Denial Of Service Attack | A source device attempted to initiate an excessive number of new connections to a destination device. This may be a Denial Of Service (DOS) attack against the destination device, and might interrupt device functionality, affect performance and service availability, or cause unrecoverable errors. | Critical |
+| Suspicion of Denial Of Service Attack | A source device attempted to initiate an excessive number of new connections to a destination device. This may indicate a Denial Of Service (DOS) attack against the destination device, and might interrupt device functionality, affect performance and service availability, or cause unrecoverable errors. | Critical |
| Suspicion of Malicious Activity | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | | Suspicion of Malicious Activity (BlackEnergy) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | | Suspicion of Malicious Activity (DarkComet) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical |
Operational engine alerts describe detected operational incidents, or malfunctio
| Title | Description | Severity | |--|--|--| | An S7 Stop PLC Command was Sent | The source device sent a stop command to a destination controller. The controller will stop operating until a start command is sent. | Warning |
-| BACNet Operation Failed | A server returned an error code. This indicates a server error or an invalid request by a client. | Major |
+| BACNet Operation Failed | A server returned an error code. This alert indicates a server error or an invalid request by a client. | Major |
| Bad MMS Device State | An MMS Virtual Manufacturing Device (VMD) sent a status message. The message indicates that the server may not be configured correctly, partially operational, or not operational at all. | Major | | Change of Device Configuration | A configuration change was detected on a source device. | Minor | | Continuous Event Buffer Overflow at Outstation | A buffer overflow event was detected on a source device. The event may cause data corruption, program crashes, or execution of malicious code. | Major |
Operational engine alerts describe detected operational incidents, or malfunctio
| EtherNet/IP CIP Service Request Failed | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | | EtherNet/IP Encapsulation Protocol Command Failed | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | | Event Buffer Overflow in Outstation | A buffer overflow event was detected on a source device. The event may cause data corruption, program crashes, or execution of malicious code. | Major |
-| Expected Backup Operation Did Not Occur | Expected backup/file transfer activity didn't occur between two devices. This may indicate errors in the backup / file transfer process. | Major |
-| GE SRTP Command Failure | A server returned an error code. This indicates a server error or an invalid request by a client. | Major |
+| Expected Backup Operation Did Not Occur | Expected backup/file transfer activity didn't occur between two devices. This alert may indicate errors in the backup / file transfer process. | Major |
+| GE SRTP Command Failure | A server returned an error code. This alert indicates a server error or an invalid request by a client. | Major |
| GE SRTP Stop PLC Command was Sent | The source device sent a stop command to a destination controller. The controller will stop operating until a start command is sent. | Warning |
-| GOOSE Control Block Requires Further Configuration | A source device sent a GOOSE message indicating that the device needs commissioning. This means the GOOSE control block requires further configuration and GOOSE messages are partially or completely non-operational. | Major |
+| GOOSE Control Block Requires Further Configuration | A source device sent a GOOSE message indicating that the device needs commissioning. This means that the GOOSE control block requires further configuration and GOOSE messages are partially or completely non-operational. | Major |
| GOOSE Dataset Configuration was Changed | A message (identified by protocol ID) dataset was changed on a source device. This means the device will report a different dataset for this message. | Warning | | Honeywell Controller Unexpected Status | A Honeywell Controller sent an unexpected diagnostic message indicating a status change. | Warning | |* HTTP Client Error | The source device initiated an invalid request. | Warning |
Operational engine alerts describe detected operational incidents, or malfunctio
| Outstation's Corrupted Configuration Detected | This DNP3 source device (outstation) reported a corrupted configuration. | Major | | Profinet DCP Command Failed | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | | Profinet Device Factory Reset | A source device sent a factory reset command to a Profinet destination device. The reset command clears Profinet device configurations and stops its operation. | Warning |
-| * RPC Operation Failed | A server returned an error code. This indicates a server error or an invalid request by a client. | Major |
+| * RPC Operation Failed | A server returned an error code. This alert indicates a server error or an invalid request by a client. | Major |
| Sampled Values Message Dataset Configuration was Changed | A message (identified by protocol ID) dataset was changed on a source device. This means the device will report a different dataset for this message. | Warning | | Slave Device Unrecoverable Failure | An unrecoverable condition error was detected on a source device. This kind of error usually indicates a hardware failure or failure to perform a specific command. | Major | | Suspicion of Hardware Problems in Outstation | An unrecoverable condition error was detected on a source device. This kind of error usually indicates a hardware failure or failure to perform a specific command. | Major |
Operational engine alerts describe detected operational incidents, or malfunctio
## Next steps You can [Manage alert events](how-to-manage-the-alert-event.md).
-Learn how to [Forward alert information](how-to-forward-alert-information-to-partners.md).
+Learn how to [Forward alert information](how-to-forward-alert-information-to-partners.md).
defender-for-iot Dell Poweredge R340 Xl Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/dell-poweredge-r340-xl-legacy.md
The installation process takes about 20 minutes. After the installation, the sys
1. Verify that the version media is mounted to the appliance in one of the following ways:
- - Connect an external CD or disk-on-key that contains the sensor software you downloaded from the Azure portal.
+ - Connect an external CD or disk-on-key that contains the sensor software you downloaded from the Azure portal.
- Mount the ISO image by using iDRAC. After signing in to iDRAC, select the virtual console, and then select **Virtual Media**.
defender-for-iot Hpe Edgeline El300 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-edgeline-el300.md
A default administrative user is provided. We recommend that you change the pass
> Installation procedures are only relevant if you need to re-install software on a preconfigured device, or if you buy your own hardware and configure the appliance yourself. > - ### Enable remote access 1. Enter the iSM IP Address into your web browser.
defender-for-iot Virtual Management Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/virtual-management-hyper-v.md
This article describes an on-premises management console deployment on a virtual
Before you begin the installation, make sure you have the following items: -- Microsoft Hyper-V hypervisor (Windows 10 Pro or Enterprise) installed and operational
+- Microsoft Hyper-V hypervisor (Windows 10 Pro or Enterprise) installed and operational. For more information, see [Introduction to Hyper-V on Windows 10](/virtualization/hyper-v-on-windows/about).
- Available hardware resources for the virtual machine. For more information, see [OT monitoring with virtual appliances](../ot-virtual-appliances.md).
This procedure describes how to create a virtual machine for your on-premises ma
The VM will start from the ISO image, and the language selection screen will appear.
-1. Continue with the [generic procedure for installing on-premises management console software](../how-to-install-software.md#install-on-premises-management-console-software).
+1. Continue with the [generic procedure for installing on-premises management console software](../how-to-install-software.md#install-ot-monitoring-software).
## Next steps
defender-for-iot Virtual Management Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/virtual-management-vmware.md
This procedure describes how to create a virtual machine for your on-premises ma
The VM will start from the ISO image, and the language selection screen will appear.
-1. Continue with the [generic procedure for installing on-premises management console software](../how-to-install-software.md#install-on-premises-management-console-software).
+1. Continue with the [generic procedure for installing on-premises management console software](../how-to-install-software.md#install-ot-monitoring-software).
## Next steps
defender-for-iot Virtual Sensor Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/virtual-sensor-hyper-v.md
This article describes an OT sensor deployment on a virtual appliance using Micr
The on-premises management console supports both VMware and Hyper-V deployment options. Before you begin the installation, make sure you have the following items: -- Hyper-V hypervisor (Windows 10 Pro or Enterprise) installed and operational
+- Microsoft Hyper-V hypervisor (Windows 10 Pro or Enterprise) installed and operational. For more information, see [Introduction to Hyper-V on Windows 10](/virtualization/hyper-v-on-windows/about).
- Available hardware resources for the virtual machine. For more information, see [OT monitoring with virtual appliances](../ot-virtual-appliances.md).
This procedure describes how to create a virtual machine by using Hyper-V.
The VM will start from the ISO image, and the language selection screen will appear.
-1. Continue with the [generic procedure for installing sensor software](../how-to-install-software.md#install-ot-sensor-software).
+1. Continue with the [generic procedure for installing sensor software](../how-to-install-software.md#install-ot-monitoring-software).
## Configure a monitoring interface (SPAN)
For more information, see [Purdue reference model and Defender for IoT](../plan-
Before you start: -- Ensure that there is no instance of a virtual appliance running.
+- Ensure that there's no instance of a virtual appliance running.
- Ensure that SPAN is enabled on the data port, and not the management port.
defender-for-iot Virtual Sensor Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/virtual-sensor-vmware.md
This procedure describes how to create a virtual machine by using ESXi.
The VM will start from the ISO image, and the language selection screen will appear.
-1. Continue with the [generic procedure for installing sensor software](../how-to-install-software.md#install-ot-sensor-software).
+1. Continue with the [generic procedure for installing sensor software](../how-to-install-software.md#install-ot-monitoring-software).
## Configure a monitoring interface (SPAN)
For more information, see [Purdue reference model and Defender for IoT](../plan-
1. Select **OK**, and then select **Close** to close the vSwitch properties.
-1. Open the **XSense VM** properties.
+1. Open the **OT Sensor VM** properties.
1. For **Network Adapter 2**, select the **SPAN** network.
defender-for-iot Architecture Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/architecture-connections.md
For more information, see [Choose a sensor connection method](connect-sensors.md
## Proxy connections with an Azure proxy
-The following image shows how you can connect your sensors to the Defender for IoT portal in Azure through a proxy in the Azure VNET, ensuring confidentiality for all communications between your sensor and Azure.
+The following image shows how you can connect your sensors to the Defender for IoT portal in Azure through a proxy in the Azure VNET. This configuration ensures confidentiality for all communications between your sensor and Azure.
:::image type="content" source="media/architecture-connections/proxy.png" alt-text="Diagram of a proxy connection using an Azure proxy." border="false":::
The following image shows how you can connect your sensors to the Defender for I
:::image type="content" source="media/architecture-connections/direct.png" alt-text="Diagram of a direct connection to Azure.":::
-With direct connections:
+With direct connections:
- Any sensors connected to Azure data centers directly over the internet have a secure and encrypted connection to the Azure data centers. Transport Layer Security (TLS) provides *always-on* communication between the sensor and Azure resources.
For more information, see [Update a standalone sensor version](how-to-manage-ind
## Next steps For more information, see [Connect your sensors to Microsoft Defender for IoT](connect-sensors.md).+
defender-for-iot Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/architecture.md
Specifically for OT networks, OT network sensors also provide the following anal
- **Anomaly detection engine**. Detects unusual machine-to-machine (M2M) communications and behaviors. By modeling ICS networks as deterministic sequences of states and transitions, the platform requires a shorter learning period than generic mathematical approaches or analytics originally developed for IT rather than OT. It also detects anomalies faster, with minimal false positives. Anomaly detection engine alerts include Excessive SMB sign-in attempts, and PLC Scan Detected alerts. -- **Operational incident detection**. Detects operational issues such as intermittent connectivity that can indicate early signs of equipment failure. For example, the device is thought to be disconnected (unresponsive), and Siemens S7 stop PLC command was sent alerts.
+- **Operational incident detection**. Detects operational issues such as intermittent connectivity that can indicate early signs of equipment failure. Examples include alerts for a device that's thought to be disconnected (unresponsive), and the Siemens S7 stop PLC command was sent alert.
## Management options
defender-for-iot Concept Sentinel Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/concept-sentinel-integration.md
Without OT telemetry, context and integration with existing SOC tools and workfl
Microsoft Sentinel is a scalable cloud solution for security information event management (SIEM) security orchestration automated response (SOAR). SOC teams can use Microsoft Sentinel to collect data across networks, detect and investigate threats, and respond to incidents.
-The Defender for IoT and Microsoft Sentinel integration delivers out-of-the-box capabilities to SOC teams to help them efficiently and effectively view, analyze, and respond to OT security alerts, and the incidents they generate in a broader organizational threat context.
+The Defender for IoT and Microsoft Sentinel integration delivers out-of-the-box capabilities to SOC teams. These capabilities help them efficiently and effectively view, analyze, and respond to OT security alerts, and to the incidents they generate, in a broader organizational threat context.
Bring Defender for IoT's rich telemetry into Microsoft Sentinel to bridge the gap between OT and SOC teams with the Microsoft Sentinel data connector for Defender for IoT and the **IoT OT Threat Monitoring with Defender for IoT** solution.
Playbooks are collections of automated remediation actions that can be run from
For example, use SOAR playbooks to: -- Open an asset ticket in ServiceNow when a new asset is detected, such as a new engineering workstation. This can either be an unauthorized device that can be used by adversaries to reprogram PLCs.
+- Open an asset ticket in ServiceNow when a new asset is detected, such as a new engineering workstation. Such an asset might be an unauthorized device that adversaries can use to reprogram PLCs.
- Send an email to relevant stakeholders when suspicious activity is detected, for example unplanned PLC reprogramming. The mail may be sent to OT personnel, such as a control engineer responsible for the related production line.
defender-for-iot Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/getting-started.md
If you're setting up network monitoring for enterprise IoT systems, you can skip
- To deploy Defender for IoT, you'll need network switches that support traffic monitoring via a SPAN port and hardware appliances for NTA sensors.
- For on-premises machines, including network sensors and on-premises management consoles for air-gapped environments, you'll need administrative user permissions for activities such as activation, managing SSL/TLS certificates, managing passwords, and so on.
For on-premises machines, including network sensors and on-premises management consoles for air-gapped environments, you'll need administrative user permissions for several activities. These include activation, managing SSL/TLS certificates, managing passwords, and so on.
- Research your own network architecture and monitor bandwidth. Check requirements for creating certificates and other network details, and clarify the sensor appliances you'll need for your own network load.
defender-for-iot How To Accelerate Alert Incident Response https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-accelerate-alert-incident-response.md
# Accelerate alert workflows
-This article describes how to accelerate alert workflows by using alert comments, alert groups, and custom alert rules for standard protocols and proprietary protocols in Microsoft Defender for IoT. These tools help you:
+This article describes how to accelerate alert workflows by using alert comments, alert groups, and custom alert rules for standard protocols and proprietary protocols in Microsoft Defender for IoT. These tools help you:
- Analyze and manage the large volume of alert events detected in your network.
Add custom alert rule to pinpoint specific activity as needed for your organizat
For example, you might want to define an alert for an environment running MODBUS to detect any write commands to a memory register, on a specific IP address and ethernet destination. Another example would be an alert for any access to a specific IP address.
-Use custom alert rule actions to instruct Defender for IT to take specific action when the alert is triggered, such as allowing users to access PCAP files from the alert, assigning alert severity, or generating an event that shows in the event timeline. Alert messages indicate that the alert was generated from a custom alert rule.
+Use custom alert rule actions to instruct Defender for IoT to take specific action when the alert is triggered, such as allowing users to access PCAP files from the alert, assigning alert severity, or generating an event that shows in the event timeline. Alert messages indicate that the alert was generated from a custom alert rule.
**To create a custom alert rule**:
defender-for-iot How To Connect Sensor By Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-connect-sensor-by-proxy.md
Last updated 02/06/2022
# Connect Microsoft Defender for IoT sensors without direct internet access by using a proxy (legacy)
-This article describes how to connect Microsoft Defender for IoT sensors to Defender for IoT via a proxy, with no direct internet access, and is only relevant if you are using a legacy connection method via your own IoT Hub.
+This article describes how to connect Microsoft Defender for IoT sensors to Defender for IoT via a proxy, with no direct internet access. This article is only relevant if you are using a legacy connection method via your own IoT Hub.
Starting with sensor software versions 22.1.x, updated connection methods are supported that don't require customers to have their own IoT Hub. For more information, see [Sensor connection methods](architecture-connections.md) and [Connect your sensors to Microsoft Defender for IoT](connect-sensors.md).
defender-for-iot How To Create And Manage Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-create-and-manage-users.md
This section describes how to define users. Cyberx, support, and administrator u
- **Role**: Define the user's role. For more information, see [Role-based permissions](#role-based-permissions). - **Access Group**: If you're creating a user for the on-premises management console, define the user's access group. For more information, see [Define global access control](how-to-define-global-user-access-control.md). - **Password**: Select the user type as follows:
- - **Local User**: Define a password for the user of a sensor or an on-premises management console. Password must have at least eight characters and contain lowercase and uppercase- alphabetic characters, numbers, and symbols.
- - **Active Directory User**: You can allow users to sign in to the sensor or management console by using Azure Active Directory credentials. Defined Azure Active Directory groups can be associated with specific permission levels. For example, configure a specific Azure Active Directory group and assign all users in the group to the Read-only user type.
+ - **Local User**: Define a password for the user of a sensor or an on-premises management console. Password must have at least eight characters and contain lowercase and uppercase alphabetic characters, numbers, and symbols.
+ - **Active Directory User**: You can allow users to sign in to the sensor or management console by using Active Directory credentials. Defined Active Directory groups can be associated with specific permission levels. For example, configure a specific Active Directory group and assign all users in the group to the Read-only user type.
## User session timeout
If users aren't active at the keyboard or mouse for a specific time, they're sig
When users haven't worked with their console mouse or keyboard for 30 minutes, a session sign out is forced.
-This feature is enabled by default and on upgrade but can be disabled. In addition, session counting times can be updated. Session times are defined in seconds. Definitions are applied per sensor and on-premises management console.
+This feature is enabled by default and on upgrade, but can be disabled. In addition, session counting times can be updated. Session times are defined in seconds. Definitions are applied per sensor and on-premises management console.
A session timeout message appears at the console when the inactivity timeout has passed.
You can track user activity in the event timeline on each sensor. The timeline d
1. Use the filters or Ctrl F option to find the information of interest to you.
-## Integrate with Active Directory servers
-
-Configure the sensor or on-premises management console to work with Active Directory. This allows Active Directory users to access the Defender for IoT consoles by using their Active Directory credentials.
-
-> [!Note]
-> LDAP v3 is supported.
-
-Two types of LDAP-based authentication are supported:
--- **Full authentication**: User details are retrieved from the LDAP server. Examples are the first name, last name, email, and user permissions.--- **Trusted user**: Only the user password is retrieved. Other user details that are retrieved are based on users defined in the sensor.-
-### Azure Active Directory and Defender for IoT permissions
-
-You can associate Azure Active Directory groups defined here with specific permission levels. For example, configure a specific Azure Active Directory group and assign Read-only permissions to all users in the group.
-
-### Azure Active Directory configuration guidelines
--- You must define the LDAP parameters here exactly as they appear in Azure Active Directory.-- For all the Azure Active Directory parameters, use lowercase only. Use lowercase even when the configurations in Azure Active Directory use uppercase.-- You can't configure both LDAP and LDAPS for the same domain. You can, however, use both for different domains at the same time.-
-**To configure Azure Active Directory**:
-
-1. From the left pane, select **System Settings**.
-1. Select **Integrations** and then select **Active Directory**.
-
-1. Enable the **Active Directory Integration Enabled** toggle.
-
-1. Set the Active Directory server parameters, as follows:
-
- | Server parameter | Description |
- |--|--|
- | Domain controller FQDN | Set the fully qualified domain name (FQDN) exactly as it appears on your LDAP server. For example, enter `host1.subdomain.domain.com`. |
- | Domain controller port | Define the port on which your LDAP is configured. |
- | Primary domain | Set the domain name (for example, `subdomain.domain.com`) and the connection type according to your LDAP configuration. |
- | Azure Active Directory groups | Enter the group names that are defined in your Azure Active Directory configuration on the LDAP server. You can enter a group name that you'll associate with Admin, Security Analyst and Read-only permission levels. Use these groups when creating new sensor users.|
- | Trusted domains | To add a trusted domain, add the domain name and the connection type of a trusted domain. <br />You can configure trusted domains only for users who were defined under users. |
-
-#### Azure Active Directory groups for the On-premises management console
-
-If you're creating Azure Active Directory groups for on-premises management console users, you must create an Access Group rule for each Azure Active Directory group. On-premises management console Azure Active Directory credentials won't work if an Access Group rule doesn't exist for the Azure Active Directory user group. For more information, see [Define global access control](how-to-define-global-user-access-control.md).
-
-1. Select **Save**.
-
-1. To add a trusted server, select **Add Server** and configure another server.
## Change a user's password
User passwords can be changed for users created with a local password.
**Administrator users**
-The Administrator can change the password for the Security Analyst, and Read-only role. The Administrator role user can't change their own password and must contact a higher-level role.
+The Administrator can change the password for the Security Analyst and Read-only roles. The Administrator role user can't change their own password and must contact a user with a higher role level.
**Security Analyst and Read-only users**
-The Security Analyst and Read-only roles can't reset their or any other role's passwords. The Security Analyst and Read-only roles need to contact a user with a higher role level to have their passwords reset.
+The Security Analyst and Read-only roles can't reset any passwords. The Security Analyst and Read-only roles need to contact a user with a higher role level to have their passwords reset.
**CyberX and Support users**
CyberX role can change the password for all user roles. The Support role can cha
:::image type="content" source="media/how-to-create-and-manage-users/change-password.png" alt-text="Screenshot of the Change password dialog for local sensor users.":::
-1. Enter and confirm the new password in **Change Password** section.
+1. Enter and confirm the new password in the **Change Password** section.
> [!NOTE] > Passwords must be at least 16 characters, contain lowercase and uppercase alphabetic characters, numbers and one of the following symbols: #%*+,-./:=?@[]^_{}~
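For reference, here's a minimal, purely illustrative shell check of these requirements. The password value is hypothetical, and the `[` and `]` symbols are left out of the symbol class for simplicity:

```bash
# Illustrative only: verify that a candidate password meets the documented rules
# (at least 16 characters, lowercase, uppercase, a digit, and one of the listed symbols).
pw='Example%Passw0rd2022'
if [[ ${#pw} -ge 16 && $pw == *[a-z]* && $pw == *[A-Z]* && $pw == *[0-9]* \
      && $pw == *[#%*+,./:=?@^_{}~-]* ]]; then
  echo "password meets the documented requirements"
else
  echo "password does not meet the documented requirements"
fi
```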
CyberX role can change the password for all user roles. The Support role can cha
1. Locate your user and select the edit icon :::image type="icon" source="media/password-recovery-images/edit-icon.png" border="false"::: .
-1. Enter the new password in the **New Password**, and **Confirm New Password** fields.
+1. Enter the new password in the **New Password** and **Confirm New Password** fields.
> [!NOTE]
- > Passwords must be at least 16 characters, contain lowercase and uppercase alphabetic characters, numbers and one of the symbols: #%*+,-./:=?@[]^_{}~
+ > Passwords must be at least 16 characters, contain lowercase and uppercase alphabetic characters, numbers and one of the following symbols: #%*+,-./:=?@[]^_{}~
1. Select **Update**. ## Recover the password for the on-premises management console, or the sensor
-You can recover the password for the on-premises management console, or the sensor with the Password recovery feature. Only the CyberX, and Support user have access to the Password recovery feature.
+You can recover the password for the on-premises management console or the sensor with the Password recovery feature. Only the CyberX and Support users have access to the Password recovery feature.
**To recover the password for the on-premises management console, or the sensor**:
-1. On the sign in screen of either the on-premises management console, or the sensor, select **Password recovery**. The **Password recovery** screen opens.
+1. On the sign in screen of either the on-premises management console or the sensor, select **Password recovery**. The **Password recovery** screen opens.
:::image type="content" source="media/how-to-create-and-manage-users/password-recovery.png" alt-text="Screenshot of the Select Password recovery from the sign in screen of either the on-premises management console, or the sensor.":::
-1. Select either **CyberX**, or **Support** from the drop-down menu, and copy the unique identifier code.
+1. Select either **CyberX** or **Support** from the drop-down menu, and copy the unique identifier code.
:::image type="content" source="media/how-to-create-and-manage-users/password-recovery-screen.png" alt-text="Screenshot of selecting either the Defender for IoT user or the support user.":::
You can recover the password for the on-premises management console, or the sens
> [!NOTE] > An error message may appear indicating the file is invalid. To fix this error message, ensure you selected the right subscription before downloading the `password_recovery.zip` and download it again.
-1. Select **Next**, and your user, and system-generated password for your management console will then appear.
+1. Select **Next**, and your user and a system-generated password for your management console will then appear.
## Next steps
You can recover the password for the on-premises management console, or the sens
- [Activate and set up your on-premises management console](how-to-activate-and-set-up-your-on-premises-management-console.md) - [Track sensor activity](how-to-track-sensor-activity.md)+
+- [Integrate with Active Directory servers](integrate-with-active-directory.md)
defender-for-iot How To Deploy Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-deploy-certificates.md
This section describes how to convert existing certificates files to supported f
|**Description** | **CLI command** | |--|--|
-| Convert .crt file to .pem file | `openssl x509 -inform PEM -in <full path>/<pem-file-name>.pem -out <fullpath>/<crt-file-name>.crt` |
+| Convert .crt file to .pem file | `openssl x509 -inform PEM -in <full path>/<crt-file-name>.crt -out <full path>/<pem-file-name>.pem` |
| Convert .pem file to .crt file | `openssl x509 -inform PEM -in <full path>/<pem-file-name>.pem -out <fullpath>/<crt-file-name>.crt` |
-| Convert a PKCS#12 file (.pfx .p12) containing a private key and certificates to .pem | `openssl pkcs12 -in keyStore.pfx -out keyStore.pem -nodes`. You can add -nocerts to only output the private key, or add -nokeys to only output the certificates. |
+| Convert a PKCS#12 file (.pfx .p12) containing a private key and certificates to .pem | `openssl pkcs12 -in keyStore.pfx -out keyStore.pem -nodes`. You can add -nocerts to only output the private key, or add -nokeys to only output the certificates. |
+| Convert .cer file to .crt file | `openssl x509 -inform PEM -in <filepath>/certificate.cer -out certificate.crt` <br> Make sure to specify the full path. <br><br>**Note**: Other options are available for the -inform flag. The value is usually `DER` or `PEM` but might also be `P12` or another value. For more information, see [`openssl-format-options`]( https://www.openssl.org/docs/manmaster/man1/openssl-format-options.html) and [openssl-x509]( https://www.openssl.org/docs/manmaster/man1/openssl-x509.html). |
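As a concrete example of the conversions above (the file names and paths here are placeholders, not values from this article), you might run:

```bash
# Convert a PEM-encoded .crt file to .pem; substitute your own file names and paths.
openssl x509 -inform PEM -in ./sensor-cert.crt -out ./sensor-cert.pem
# Sanity-check the converted file by printing its subject and expiration date.
openssl x509 -in ./sensor-cert.pem -noout -subject -enddate
```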
-## Troubleshooting
+## Troubleshooting
This section covers various issues that may occur during certificate upload and validation, and steps to take to resolve the issues.
defender-for-iot How To Forward Alert Information To Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-forward-alert-information-to-partners.md
The following Forwarding rules allow encryption and certificate validation:
- Select the severity level. This is the minimum incident to forward, in terms of severity level. For example, if you select **Minor**, minor alerts and any alert above this severity level will be forwarded. Levels are predefined. - Select a protocol(s) that should be detected.
- Information will forwarding if the traffic detected was running selected protocols.
+ Information will be forwarded if the traffic detected was running selected protocols.
- Select which engines the rule should apply to. Alert information detected from selected engines will be forwarded
Enter the following parameters:
| Syslog text message output fields | Description | |--|--| | Date and time | Date and time that the syslog server machine received the information. |
-| Priority | User.Alert |
+| Priority | User.Alert |
| Hostname | Sensor IP address | | Protocol | TCP or UDP | | Message | Sensor: The sensor name.<br /> Alert: The title of the alert.<br /> Type: The type of the alert. Can be **Protocol Violation**, **Policy Violation**, **Malware**, **Anomaly**, or **Operational**.<br /> Severity: The severity of the alert. Can be **Warning**, **Minor**, **Major**, or **Critical**.<br /> Source: The source device name.<br /> Source IP: The source device IP address.<br /> Destination: The destination device name.<br /> Destination IP: The IP address of the destination device.<br /> Message: The message of the alert.<br /> Alert group: The alert group associated with the alert. |
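To confirm that the syslog server defined in your forwarding rule is reachable and accepting messages, one option (not a Defender for IoT feature) is to send a test message from any Linux host with the util-linux `logger` tool. The server address, port, and transport below are placeholders:

```bash
# Hypothetical connectivity check: send a test message at priority user.alert
# to the syslog server configured in the forwarding rule (replace address/port/transport).
logger --server 192.0.2.10 --port 514 --udp --priority user.alert "Defender for IoT forwarding rule test"
```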
Once the Webhook Extended forwarding rule has been configured, you can test the
:::image type="content" source="media/how-to-forward-alert-information-to-partners/run-button.png" alt-text="Select the run button to test your forwarding rule.":::
-You will know the forwarding rule is working if you see the Success notification.
+You'll know the forwarding rule is working if you see the Success notification.
### NetWitness action
Test the connection between the sensor and the partner server that's defined in
1. Select **Delete** and confirm. 1. Select **Save**.
-## Forwarding rules and alert exclusion rules
+## Forwarding rules and alert exclusion rules
The administrator might have defined alert exclusion rules. These rules help administrators achieve more granular control over alert triggering by instructing the sensor to ignore alert events based on various parameters. These parameters might include device addresses, alert names, or specific sensors.
defender-for-iot How To Install Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-install-software.md
Mount the ISO file using one of the following options:
- **Virtual mount** ΓÇô use iLO for HPE appliances, or iDRAC for Dell appliances to boot the ISO file.
-## Install OT sensor software
+## Install OT monitoring software
+
+This section provides generic procedures for installing OT monitoring software on sensors or an on-premises management console.
+
+Select one of the following tabs, depending on which type of software you're installing.
+
+# [OT sensor](#tab/sensor)
This procedure describes how to install OT sensor software on a physical or virtual appliance.
This procedure describes how to install OT sensor software on a physical or virt
:::image type="content" source="media/tutorial-install-components/login-information.png" alt-text="Screenshot of the final screen of the installation with usernames, and passwords.":::
-## Install on-premises management console software
+# [On-premises management console](#tab/on-prem)
+ This procedure describes how to install on-premises management console software on a physical or virtual appliance.
sudo ethtool -p <port value> <time-in-seconds>
This command will cause the light on the port to flash for the specified time period. For example, entering `sudo ethtool -p eno1 120`, will have port eno1 flash for 2 minutes, allowing you to find the port on the back of your appliance. + ## Post-installation validation
-To validate the installation of a physical appliance, you need to perform many tests. The same validation process applies to all the appliance types.
+After you've finished installing OT monitoring software on your appliance, test your system to make sure that processes are running correctly. The same validation process applies to all appliance types.
-Perform the validation by using the GUI or the CLI. The validation is available to both the **Support** and **CyberX** users.
+System health validations are supported via the sensor or on-premises management console UI or CLI, and are available for both the **Support** and **CyberX** users.
-Post-installation validation must include the following tests:
+After installing OT monitoring software, make sure to run the following tests:
- **Sanity test**: Verify that the system is running.
Post-installation validation must include the following tests:
- **ifconfig**: Verify that all the input interfaces configured during the installation process are running.
-### Check system health
-
-Check your system health from the sensor or on-premises management console. For example:
--
-#### Sanity
--- **Appliance**: Runs the appliance sanity check. You can perform the same check by using the CLI command `system-sanity`.--- **Version**: Displays the appliance version.--- **Network Properties**: Displays the sensor network parameters.-
-#### Redis
--- **Memory**: Provides the overall picture of memory usage, such as how much memory was used and how much remained.--- **Longest Key**: Displays the longest keys that might cause extensive memory usage.-
-#### System
--- **Core Log**: Provides the last 500 rows of the core log, so that you can view the recent log rows without exporting the entire system log.--- **Task Manager**: Translates the tasks that appear in the table of processes to the following layers:
-
- - Persistent layer (Redis)
-
- - Cash layer (SQL)
--- **Network Statistics**: Displays your network statistics.--- **TOP**: Shows the table of processes. It's a Linux command that provides a dynamic real-time view of the running system.--- **Backup Memory Check**: Provides the status of the backup memory, checking the following:-
- - The location of the backup folder
-
- - The size of the backup folder
-
- - The limitations of the backup folder
-
- - When the last backup happened
-
- - How much space there are for the extra backup files
--- **ifconfig**: Displays the parameters for the appliance's physical interfaces.--- **CyberX nload**: Displays network traffic and bandwidth by using the six-second tests.--- **Errors from Core, log**: Displays errors from the core log file.-
-**To access the tool**:
-
-1. Sign in to the sensor with the **Support** user credentials.
-
-1. Select **System Statistics** from the **System Settings** window.
-
- :::image type="icon" source="media/tutorial-install-components/system-statistics-icon.png" border="false":::
-
-### Check system health by using the CLI
-
-Verify that the system is up and running prior to testing the system's sanity.
-
-**To test the system's sanity**:
-
-1. Connect to the CLI with the Linux terminal (for example, PuTTY) and the user **Support**.
-
-1. Enter `system sanity`.
-
-1. Check that all the services are green (running).
-
- :::image type="content" source="media/tutorial-install-components/support-screen.png" alt-text="Screenshot that shows running services.":::
-
-1. Verify that **System is UP! (prod)** appears at the bottom.
-
-Verify that the correct version is used:
-
-**To check the system's version**:
-
-1. Connect to the CLI with the Linux terminal (for example, PuTTY) and the user **Support**.
-
-1. Enter `system version`.
-
-1. Check that the correct version appears.
-
-Verify that all the input interfaces configured during the installation process are running:
-
-**To validate the system's network status**:
-
-1. Connect to the CLI with the Linux terminal (for example, PuTTY) and the **Support** user.
-
-1. Enter `network list` (the equivalent of the Linux command `ifconfig`).
-
-1. Validate that the required input interfaces appear. For example, if two quad Copper NICs are installed, there should be 10 interfaces in the list.
-
- :::image type="content" source="media/tutorial-install-components/interface-list-screen.png" alt-text="Screenshot that shows the list of interfaces.":::
-
-Verify that you can access the console web GUI:
-
-**To check that management has access to the UI**:
-
-1. Connect a laptop with an Ethernet cable to the management port (**Gb1**).
-
-1. Define the laptop NIC address to be in the same range as the appliance.
-
- :::image type="content" source="media/tutorial-install-components/access-to-ui.png" alt-text="Screenshot that shows management access to the UI.":::
-
-1. Ping the appliance's IP address from the laptop to verify connectivity (default: 10.100.10.1).
-
-1. Open the Chrome browser in the laptop and enter the appliance's IP address.
-
-1. In the **Your connection is not private** window, select **Advanced** and proceed.
-
-1. The test is successful when the Defender for IoT sign-in screen appears.
-
- :::image type="content" source="media/tutorial-install-components/defender-for-iot-sign-in-screen.png" alt-text="Screenshot that shows access to management console.":::
-
-## Troubleshooting
-
-### You can't connect by using a web interface
-
-1. Verify that the computer that you're trying to connect is on the same network as the appliance.
-
-1. Verify that the GUI network is connected to the management port.
-
-1. Ping the appliance's IP address. If there is no ping:
-
- 1. Connect a monitor and a keyboard to the appliance.
-
- 1. Use the **Support** user and password to sign in.
-
- 1. Use the command `network list` to see the current IP address.
-
- :::image type="content" source="media/tutorial-install-components/network-list.png" alt-text="Screenshot that shows the network list.":::
-
-1. If the network parameters are misconfigured, use the following procedure to change them:
-
- 1. Use the command `network edit-settings`.
-
- 1. To change the management network IP address, select **Y**.
-
- 1. To change the subnet mask, select **Y**.
-
- 1. To change the DNS, select **Y**.
-
- 1. To change the default gateway IP address, select **Y**.
-
- 1. For the input interface change (sensor only), select **N**.
-
- 1. To apply the settings, select **Y**.
-
-1. After restart, connect with the **Support** user credentials and use the `network list` command to verify that the parameters were changed.
-
-1. Try to ping and connect from the GUI again.
-
-### The appliance isn't responding
-
-1. Connect a monitor and keyboard to the appliance, or use PuTTY to connect remotely to the CLI.
-
-1. Use the **Support** user credentials to sign in.
-
-1. Use the `system sanity` command and check that all processes are running.
-
- :::image type="content" source="media/tutorial-install-components/system-sanity-screen.png" alt-text="Screenshot that shows the system sanity command.":::
-
-For any other issues, contact [Microsoft Support](https://support.microsoft.com/en-us/supportforbusiness/productselection?sapId=82c88f35-1b8e-f274-ec11-c6efdd6dd099).
-
+For more information, see [Check system health](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md#check-system-health) in our sensor and on-premises management console troubleshooting article.
## Access sensors from the on-premises management console
You can enhance system security by preventing direct user access to the sensor.
## Next steps
-For more information, see [Set up your network](how-to-set-up-your-network.md).
+For more information, see:
+
+- [Prepare your OT network for Microsoft Defender for IoT](how-to-set-up-your-network.md)
+- [Troubleshoot the sensor and on-premises management console](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md)
defender-for-iot How To Troubleshoot The Sensor And On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-troubleshoot-the-sensor-and-on-premises-management-console.md
Title: Troubleshoot the sensor and on-premises management console description: Troubleshoot your sensor and on-premises management console to eliminate any problems you might be having. Previously updated : 02/10/2022 Last updated : 05/22/2022 # Troubleshoot the sensor and on-premises management console
This article describes basic troubleshooting tools for the sensor and the on-pre
- **SNMP**: Sensor health is monitored through SNMP. Microsoft Defender for IoT responds to SNMP queries sent from an authorized monitoring server.
- **System notifications**: When a management console controls the sensor, you can forward alerts about failed sensor backups and disconnected sensors.
+## Check system health
+
+Check your system health from the sensor or on-premises management console.
+
+**To access the system health tool**:
+
+1. Sign in to the sensor or on-premises management console with the **Support** user credentials.
+
+1. Select **System Statistics** from the **System Settings** window.
+
+ :::image type="icon" source="media/tutorial-install-components/system-statistics-icon.png" border="false":::
+
+1. System health data appears. Select an item on the left to view more details in the box. For example:
+
+ :::image type="content" source="media/tutorial-install-components/system-health-check-screen.png" alt-text="Screenshot that shows the system health check.":::
+
+System health checks include the following:
+
+|Name |Description |
+|--|--|
+|**Sanity** | |
+|- Appliance | Runs the appliance sanity check. You can perform the same check by using the CLI command `system-sanity`. |
+|- Version | Displays the appliance version. |
+|- Network Properties | Displays the sensor network parameters. |
+|**Redis** | |
+|- Memory | Provides the overall picture of memory usage, such as how much memory is used and how much remains. |
+|- Longest Key | Displays the longest keys that might cause extensive memory usage. |
+|**System** | |
+|- Core Log | Provides the last 500 rows of the core log, so that you can view the recent log rows without exporting the entire system log. |
+|- Task Manager | Translates the tasks that appear in the table of processes to the following layers: <br><br> - Persistent layer (Redis)<br> - Cache layer (SQL) |
+|- Network Statistics | Displays your network statistics. |
+|- TOP | Shows the table of processes. It's a Linux command that provides a dynamic real-time view of the running system. |
+|- Backup Memory Check | Provides the status of the backup memory, checking the following:<br><br> - The location of the backup folder<br> - The size of the backup folder<br> - The limitations of the backup folder<br> - When the last backup happened<br> - How much space there is for additional backup files |
+|- ifconfig | Displays the parameters for the appliance's physical interfaces. |
+|- CyberX nload | Displays network traffic and bandwidth by using the six-second tests. |
+|- Errors from Core, log | Displays errors from the core log file. |
+
+### Check system health by using the CLI
+
+Verify that the system is up and running prior to testing the system's sanity.
+
+**To test the system's sanity**:
+
+1. Connect to the CLI with the Linux terminal (for example, PuTTY) and the user **Support**.
+
+1. Enter `system sanity`.
+
+1. Check that all the services are green (running).
+
+ :::image type="content" source="media/tutorial-install-components/support-screen.png" alt-text="Screenshot that shows running services.":::
+
+1. Verify that **System is UP! (prod)** appears at the bottom.
+
+Verify that the correct version is used:
+
+**To check the system's version**:
+
+1. Connect to the CLI with the Linux terminal (for example, PuTTY) and the user **Support**.
+
+1. Enter `system version`.
+
+1. Check that the correct version appears.
+
+Verify that all the input interfaces configured during the installation process are running:
+
+**To validate the system's network status**:
+
+1. Connect to the CLI with the Linux terminal (for example, PuTTY) and the **Support** user.
+
+1. Enter `network list` (the equivalent of the Linux command `ifconfig`).
+
+1. Validate that the required input interfaces appear. For example, if two quad Copper NICs are installed, there should be 10 interfaces in the list.
+
+ :::image type="content" source="media/tutorial-install-components/interface-list-screen.png" alt-text="Screenshot that shows the list of interfaces.":::
+
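+As a quick reference, here are the three CLI health checks described above, run back to back from a single **Support** session on the appliance (this is the appliance shell, not Azure CLI):
+
+```
+system sanity
+system version
+network list
+```
+
+If any of these checks fail, continue with the troubleshooting steps later in this article.
+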
+Verify that you can access the console web GUI:
+
+**To check that management has access to the UI**:
+
+1. Connect a laptop with an Ethernet cable to the management port (**Gb1**).
+
+1. Define the laptop NIC address to be in the same range as the appliance.
+
+ :::image type="content" source="media/tutorial-install-components/access-to-ui.png" alt-text="Screenshot that shows management access to the UI.":::
+
+1. Ping the appliance's IP address from the laptop to verify connectivity (default: 10.100.10.1).
+
+1. Open the Chrome browser in the laptop and enter the appliance's IP address.
+
+1. In the **Your connection is not private** window, select **Advanced** and proceed.
+
+1. The test is successful when the Defender for IoT sign-in screen appears.
+
+ :::image type="content" source="media/tutorial-install-components/defender-for-iot-sign-in-screen.png" alt-text="Screenshot that shows access to management console.":::
+ ## Troubleshoot sensors
+### You can't connect by using a web interface
+
+1. Verify that the computer you're trying to connect from is on the same network as the appliance.
+
+1. Verify that the GUI network is connected to the management port.
+
+1. Ping the appliance's IP address. If there is no ping:
+
+ 1. Connect a monitor and a keyboard to the appliance.
+
+ 1. Use the **Support** user and password to sign in.
+
+ 1. Use the command `network list` to see the current IP address.
+
+ :::image type="content" source="media/tutorial-install-components/network-list.png" alt-text="Screenshot that shows the network list.":::
+
+1. If the network parameters are misconfigured, use the following procedure to change them:
+
+ 1. Use the command `network edit-settings`.
+
+ 1. To change the management network IP address, select **Y**.
+
+ 1. To change the subnet mask, select **Y**.
+
+ 1. To change the DNS, select **Y**.
+
+ 1. To change the default gateway IP address, select **Y**.
+
+ 1. For the input interface change (sensor only), select **N**.
+
+ 1. To apply the settings, select **Y**.
+
+1. After restart, connect with the **Support** user credentials and use the `network list` command to verify that the parameters were changed.
+
+1. Try to ping and connect from the GUI again.
+
+### The appliance isn't responding
+
+1. Connect a monitor and keyboard to the appliance, or use PuTTY to connect remotely to the CLI.
+
+1. Use the **Support** user credentials to sign in.
+
+1. Use the `system sanity` command and check that all processes are running.
+
+ :::image type="content" source="media/tutorial-install-components/system-sanity-screen.png" alt-text="Screenshot that shows the system sanity command.":::
+
+For any other issues, contact [Microsoft Support](https://support.microsoft.com/en-us/supportforbusiness/productselection?sapId=82c88f35-1b8e-f274-ec11-c6efdd6dd099).
+++
### Investigate password failure at initial sign in

When signing into a preconfigured sensor for the first time, you'll need to perform password recovery as follows:
If the **Alerts** window doesn't show an alert that you expected, verify the following:

1. Check if the same alert already appears in the **Alerts** window as a reaction to a different security instance. If yes, and this alert has not been handled yet, the sensor console does not show a new alert.
1. Make sure you did not exclude this alert by using the **Alert Exclusion** rules in the management console.
-### Investigate dashboard that show no data
+### Investigate dashboard that shows no data
When the dashboards in the **Trends & Statistics** window show no data, do the following:

1. [Check system performance](#check-system-performance).
Sometimes ICS devices are configured with external IP addresses. These ICS devic
1. Generate a new data-mining report for internet connections. 1. In the data-mining report, enter the administrator mode and delete the IP addresses of your ICS devices.
-## On-premises management console troubleshooting tools
+### Clearing sensor data to factory default
+
+In cases where the sensor needs to be relocated or erased, the sensor can be reset to factory default data.
+
+> [!NOTE]
+> Network settings such as IP/DNS/GATEWAY will not be changed by clearing system data.
+
+**To clear system data**:
+1. Sign in to the sensor as the **cyberx** user.
+1. Select **Support** > **Clear system data**, and confirm that you do want to reset the sensor to factory default data.
+
+ :::image type="content" source="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/warning-screenshot.png" alt-text="Screenshot of warning message.":::
+
+All allowlists, policies, and configuration settings are cleared, and the sensor is restarted.
+++
+## Troubleshoot an on-premises management console
### Investigate a lack of expected alerts on the management console

If an expected alert is not shown in the **Alerts** window, verify the following:
To limit the number of alerts, use the `notifications.max_number_to_report` prop
-## Export audit log from the management console
+### Export audit logs from the management console
Audit logs record key information at the time of occurrence. Audit logs are useful when you're trying to figure out what changes were made, and by whom. Audit logs can be exported from the management console, and contain the following information:
Audit logs record key information at the time of occurrence. Audit logs are usef
The exported log is added to the **Archived Logs** list. Select the :::image type="icon" source="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/eye-icon.png" border="false"::: button to view the OTP. Send the OTP string to the support team in a separate message from the exported logs. The support team will be able to extract exported logs only by using the unique OTP that's used to encrypt the logs.
-## Clearing sensor data to factory default
-
-In cases where the sensor needs to be relocated or erased, the sensor can be reset to factory default data.
-
-> [!NOTE]
-> Network settings such as IP/DNS/GATEWAY will not be changed by clearing system data.
-
-**To clear system data**:
-1. Sign in to the sensor as the **cyberx** user.
-1. Select **Support** > **Clear system data**, and confirm that you do want to reset the sensor to factory default data.
-
- :::image type="content" source="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/warning-screenshot.png" alt-text="Screenshot of warning message.":::
-
-All allowlists, policies, and configuration settings are cleared, and the sensor is restarted.
-- ## Next steps - [View alerts](how-to-view-alerts.md)
defender-for-iot Integrate With Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/integrate-with-active-directory.md
+
+ Title: Integrate with Active Directory - Microsoft Defender for IoT
+description: Configure the sensor or on-premises management console to work with Active Directory.
Last updated : 05/17/2022+++
+# Integrate with Active Directory servers
+
+Configure the sensor or on-premises management console to work with Active Directory. This allows Active Directory users to access the Microsoft Defender for IoT consoles by using their Active Directory credentials.
+
+> [!Note]
+> LDAP v3 is supported.
+
+Two types of LDAP-based authentication are supported:
+
+- **Full authentication**: User details are retrieved from the LDAP server. Examples are the first name, last name, email, and user permissions.
+
+- **Trusted user**: Only the user password is retrieved. Other user details are based on users defined in the sensor.
+
+For more information, see [networking requirements](how-to-set-up-your-network.md#other-firewall-rules-for-external-services-optional).
+
+## Active Directory and Defender for IoT permissions
+
+You can associate Active Directory groups defined here with specific permission levels. For example, configure a specific Active Directory group and assign Read-only permissions to all users in the group.
+
+## Active Directory configuration guidelines
+
+- You must define the LDAP parameters here exactly as they appear in Active Directory.
+- For all the Active Directory parameters, use lowercase only. Use lowercase even when the configurations in Active Directory use uppercase.
+- You can't configure both LDAP and LDAPS for the same domain. You can, however, use both for different domains at the same time.
+
+**To configure Active Directory**:
+
+1. From the left pane, select **System Settings**.
+1. Select **Integrations** and then select **Active Directory**.
+
+1. Enable the **Active Directory Integration Enabled** toggle.
+
+1. Set the Active Directory server parameters, as follows:
+
+ | Server parameter | Description |
+ |--|--|
+ | Domain controller FQDN | Set the fully qualified domain name (FQDN) exactly as it appears on your LDAP server. For example, enter `host1.subdomain.domain.com`. |
+ | Domain controller port | Define the port on which your LDAP is configured. |
+ | Primary domain | Set the domain name (for example, `subdomain.domain.com`) and the connection type according to your LDAP configuration. |
+ | Active Directory groups | Enter the group names that are defined in your Active Directory configuration on the LDAP server. You can enter a group name that you'll associate with Admin, Security Analyst, and Read-only permission levels. Use these groups when creating new sensor users.|
+ | Trusted domains | To add a trusted domain, add the domain name and the connection type of a trusted domain. <br />You can configure trusted domains only for users who were defined under users. |
+
+ ### Active Directory groups for the on-premises management console
+
+ If you're creating Active Directory groups for on-premises management console users, you must create an Access Group rule for each Active Directory group. On-premises management console Active Directory credentials won't work if an Access Group rule doesn't exist for the Active Directory user group. For more information, see [Define global access control](how-to-define-global-user-access-control.md).
+
+1. Select **Save**.
+
+1. To add a trusted server, select **Add Server** and configure another server.
++
+## Next steps
+
+For more information, see [how to create and manage users](/azure/defender-for-iot/organizations/how-to-create-and-manage-users).
defender-for-iot Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/overview.md
Enterprise IoT network protection extends agentless features beyond operational
When you expand Microsoft Defender for IoT into the enterprise network, you can apply Microsoft 365 Defender's features for asset discovery and use Microsoft Defender for Endpoint for a single, integrated package that can secure all of your IoT/OT infrastructure.
-Use Microsoft Defender for IoT's sensors as extra data sources, providing visibility in areas of your organization's network where Microsoft Defender for Endpoint isn't deployed, and when employees are accessing information remotely. Microsoft Defender for IoT's sensors provide visibility into both the IoT-to-IoT and the IoT-to-internet communications. Integrating Defender for IoT and Defender for Endpoint synchronizes any devices discovered on the network by either service.
+Use Microsoft Defender for IoT's sensors as extra data sources, providing visibility in areas of your organization's network where Microsoft Defender for Endpoint isn't deployed, and when employees are accessing information remotely. Microsoft Defender for IoT's sensors provide visibility into both the IoT-to-IoT and the IoT-to-internet communications. Integrating Defender for IoT and Defender for Endpoint synchronizes any enterprise IoT devices discovered on the network by either service.
For more information, see the [Microsoft 365 Defender](/microsoft-365/security/defender/microsoft-365-defender) and [Microsoft Defender for Endpoint documentation](/microsoft-365/security/defender-endpoint).
defender-for-iot Resources Frequently Asked Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/resources-frequently-asked-questions.md
You can work with CLI [commands](references-work-with-defender-for-iot-cli-comma
## How do I check the sanity of my deployment
-After installing the software for your sensor, or on-premises management console, you will want to perform the [Post-installation validation](how-to-install-software.md#post-installation-validation). There you will learn how to [Check system health by using the CLI](how-to-install-software.md#check-system-health-by-using-the-cli), perform a [Sanity](how-to-install-software.md#sanity) check, and review your overall [System statistics](how-to-install-software.md#system).
+After installing the software for your sensor or on-premises management console, you will want to perform the [Post-installation validation](how-to-install-software.md#post-installation-validation).
-You can follow these links, if [The appliance isn't responding](how-to-install-software.md#the-appliance-isnt-responding) or [You can't connect by using a web interface](how-to-install-software.md#you-cant-connect-by-using-a-web-interface).
+You can also use our [UI and CLI tools](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md#check-system-health) to check system health and review your overall system statistics.
+
+For more information, see [Troubleshoot the sensor and on-premises management console](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md).
## Next steps
defender-for-iot Tutorial Clearpass https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-clearpass.md
CPPM runs on hardware appliances with pre-installed software or as a Virtual Mac
- Defender for IoT version 2.5.1 or higher. -- An Azure account. If you do not already have an Azure account, you can [create your Azure free account today](https://azure.microsoft.com/free/).
+- An Azure account. If you don't already have an Azure account, you can [create your Azure free account today](https://azure.microsoft.com/free/).
## Create a ClearPass API user
Once the sync has started, endpoint data is populated directly into the Policy M
:::image type="content" source="media/tutorial-clearpass/last-sync.png" alt-text="Screenshot of the view the time and date of your last sync.":::
-If Sync is not working, or shows an error, then, itΓÇÖs likely youΓÇÖve missed capturing some of the information. Recheck the data recorded, additionally you can view the API calls between Defender for IoT and ClearPass from **Guest** > **Administration** > **Support** > **Application Log**.
+If Sync is not working, or shows an error, then it's likely you've missed capturing some of the information. Recheck the data recorded. Additionally, you can view the API calls between Defender for IoT and ClearPass from **Guest** > **Administration** > **Support** > **Application Log**.
Below is an example of API logs between Defender for IoT and ClearPass.
There are no resources to clean up.
## Next steps
-In this tutorial, you learned how to get started with the ClearPass integration. Continue on to learn about our CyberArk.
+In this article, you learned how to get started with the ClearPass integration. Continue on to learn about our [CyberArk integration](./tutorial-cyberark.md).
+
-> [!div class="nextstepaction"]
-> [Next steps button](./tutorial-cyberark.md)
defender-for-iot Tutorial Cyberark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-cyberark.md
In this tutorial, you learn how to:
- Verify that you have CLI access to all Defender for IoT appliances in your enterprise. -- An Azure account. If you do not already have an Azure account, you can [create your Azure free account today](https://azure.microsoft.com/free/).
+- An Azure account. If you don't already have an Azure account, you can [create your Azure free account today](https://azure.microsoft.com/free/).
## Configure PSM CyberArk
In order to enable the integration, Syslog Server will need to be enabled in the
The integration between Microsoft Defender for IoT, and CyberArk PSM is performed via syslog messages. These messages are sent by the PSM solution to Defender for IoT, notifying Defender for IoT of any remote sessions, or verification failures.
-Once the Defender for IoT platform receives these messages from PSM, it correlates them with the data it sees in the network, thus validating that any remote access connections to the network were generated by the PSM solution, and not by an unauthorized user.
+Once the Defender for IoT platform receives these messages from PSM, it correlates them with the data it sees in the network, validating that any remote access connections to the network were generated by the PSM solution and not by an unauthorized user.
### View alerts
-Whenever the Defender for IoT platform identifies remote sessions that have not been authorized by PSM, it will issue an `Unauthorized Remote Session`. To facilitate immediate investigation, the alert also shows the IP addresses and names of the source and destination devices.
+Whenever the Defender for IoT platform identifies remote sessions that haven't been authorized by PSM, it will issue an `Unauthorized Remote Session` alert. To facilitate immediate investigation, the alert also shows the IP addresses and names of the source and destination devices.
**To view alerts**:
There are no resources to clean up.
## Next steps
-In this tutorial, you learned how to get started with the CyberArk integration. Continue on to learn about our Forescout integration.
+In this article, you learned how to get started with the CyberArk integration. Continue on to learn about our [Forescout integration](./tutorial-forescout.md).
+
-> [!div class="nextstepaction"]
-> [Next steps button](./tutorial-forescout.md)
defender-for-iot Tutorial Forescout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-forescout.md
Title: Integrate Forescout with Microsoft Defender for IoT
-description: In this tutorial, you will learn how to integrate Microsoft Defender for IoT with Forescout.
+description: In this tutorial, you'll learn how to integrate Microsoft Defender for IoT with Forescout.
Last updated 02/08/2022
In this tutorial, you learn how to:
> - View device attributes in Forescout > - Create Microsoft Defender for IoT policies in Forescout
-If you do not already have an Azure account, you can [create your Azure free account today](https://azure.microsoft.com/free/).
+If you don't already have an Azure account, you can [create your Azure free account today](https://azure.microsoft.com/free/).
## Prerequisites -- Microsoft Defender for IoT version 2.4 or above
+- Microsoft Defender for IoT version 2.4 or above
- Forescout version 8.0 or above
To make the Forescout platform, communicate with a different sensor, the configu
## Verify communication
-Once the connection has been configured, you will need to confirm that the two platforms are communicating.
+Once the connection has been configured, you'll need to confirm that the two platforms are communicating.
**To confirm the two platforms are communicating**:
There are no resources to clean up.
## Next steps
-In this tutorial, you learned how to get started with the Forescout integration. Continue on to learn about our [Palo Alto integration](./tutorial-palo-alto.md).
+In this article, you learned how to get started with the Forescout integration. Continue on to learn about our [Palo Alto integration](./tutorial-palo-alto.md).
+
-> [!div class="nextstepaction"]
-> [Next steps button](./tutorial-palo-alto.md)
defender-for-iot Tutorial Fortinet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-fortinet.md
Title: Integrate Fortinet with Microsoft Defender for IoT
-description: In this tutorial, you will learn how to integrate Microsoft Defender for IoT with Fortinet.
+description: In this article, you'll learn how to integrate Microsoft Defender for IoT with Fortinet.
Last updated 11/09/2021
The FortiGate firewall can be used to block suspicious traffic.
1. To configure the FortiGate forwarding rule, set the following parameters:
- :::image type="content" source="media/tutorial-fortinet/configure.png" alt-text="Screenshot of the configure the Create Forwarding Rule window.":::
+ :::image type="content" source="media/tutorial-fortinet/configure.png" alt-text="Screenshot of the configure the Create Forwarding Rule window.":::
| Parameter | Description | |--|--|
There are no resources to clean up.
## Next steps
-In this tutorial, you learned how to get started with the Fortinet integration. Continue on to learn about our Palo Alto integration.
+In this article, you learned how to get started with the Fortinet integration. Continue on to learn about our [Palo Alto integration](./tutorial-palo-alto.md).
+
-> [!div class="nextstepaction"]
-> [Next steps button](./tutorial-palo-alto.md)
defender-for-iot Tutorial Palo Alto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-palo-alto.md
This table shows which incidents this integration is intended for:
| Incident type | Description | |--|--|
-|**Unauthorized PLC changes** | An update to the ladder logic, or firmware of a device. This can represent legitimate activity, or an attempt to compromise the device. For example, malicious code, such as a Remote Access Trojan (RAT), or parameters that cause the physical process, such as a spinning turbine, to operate in an unsafe manner. |
-|**Protocol Violation** | A packet structure, or field value that violates the protocol specification. This can represent a misconfigured application, or a malicious attempt to compromise the device. For example, causing a buffer overflow condition in the target device. |
+|**Unauthorized PLC changes** | An update to the ladder logic or firmware of a device. This alert can represent legitimate activity, or an attempt to compromise the device. For example, malicious code, such as a Remote Access Trojan (RAT), or parameters that cause the physical process, such as a spinning turbine, to operate in an unsafe manner. |
+|**Protocol Violation** | A packet structure or field value that violates the protocol specification. This alert can represent a misconfigured application, or a malicious attempt to compromise the device. For example, causing a buffer overflow condition in the target device. |
|**PLC Stop** | A command that causes the device to stop functioning, thereby risking the physical process that is being controlled by the PLC. | |**Industrial malware found in the ICS network** | Malware that manipulates ICS devices using their native protocols, such as TRITON and Industroyer. Defender for IoT also detects IT malware that has moved laterally into the ICS, and SCADA environment. For example, Conficker, WannaCry, and NotPetya. | |**Scanning malware** | Reconnaissance tools that collect data about system configuration in a pre-attack phase. For example, the Havex Trojan scans industrial networks for devices using OPC, which is a standard protocol used by Windows-based SCADA systems to communicate with ICS devices. |
There are no resources to clean up.
## Next step
-In this tutorial, you learned how to get started with the Palo Alto integration.
+In this article, you learned how to get started with the Palo Alto integration. Continue on to learn about our [Splunk integration](./tutorial-splunk.md).
-> [!div class="nextstepaction"]
-> [Next steps button](tutorial-splunk.md)
defender-for-iot Tutorial Servicenow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-servicenow.md
Access to ServiceNow and Defender for IoT
## Download the Defender for IoT application in ServiceNow
-To access the Defender for IoT application within ServiceNow, you will need to download the application form the ServiceNow application store.
+To access the Defender for IoT application within ServiceNow, you will need to download the application from the ServiceNow application store.
**To access the Defender for IoT application in ServiceNow**:
Defender for IoT alerts will now appear as incidents in ServiceNow.
A token is needed in order to allow ServiceNow to communicate with Defender for IoT.
-You will need the `Client ID` and `Client Secret` that you entered when creating the Defender for IoT Forwarding rules. The Forwarding rules forward alert information to ServiceNow, and when configuring Defender for IoT to push device attributes to ServiceNow tables.
+You'll need the `Client ID` and `Client Secret` that you entered when creating the Defender for IoT Forwarding rules, which forward alert information to ServiceNow, and when configuring Defender for IoT to push device attributes to ServiceNow tables.
## Send Defender for IoT device attributes to ServiceNow
Verify that the on-premises management console is connected to the ServiceNow in
## Set up the integrations using an HTTPS proxy
-When setting up the Defender for IoT and ServiceNow integration, the on-premises management console and the ServiceNow server communicate using port 443. If the ServiceNow server is behind a proxy, the default port cannot be used.
+When setting up the Defender for IoT and ServiceNow integration, the on-premises management console and the ServiceNow server communicate using port 443. If the ServiceNow server is behind a proxy, the default port can't be used.
Defender for IoT supports an HTTPS proxy in the ServiceNow integration by enabling the change of the default port used for integration.
There are no resources to clean up.
## Next steps
-In this tutorial, you learned how to get started with the ServiceNow integration. Continue on to learn about our [Cisco integration](./tutorial-forescout.md).
+In this article, you learned how to get started with the ServiceNow integration. Continue on to learn about our [Cisco integration](./tutorial-forescout.md).
+
-> [!div class="nextstepaction"]
-> [Next steps button](./tutorial-forescout.md)
digital-twins Concepts 3D Scenes Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-3d-scenes-studio.md
+
+# Mandatory fields.
+ Title: 3D Scenes Studio (preview) for Azure Digital Twins
+
+description: Learn about 3D Scenes Studio (preview) for Azure Digital Twins.
++ Last updated : 05/04/2022++++
+# Optional fields. Don't forget to remove # if you need a field.
+#
+#
+#
++
+# 3D Scenes Studio (preview) for Azure Digital Twins
+
+Azure Digital Twins [3D Scenes Studio (preview)](https://explorer.digitaltwins.azure.net/3dscenes) is an immersive 3D environment, where end users can monitor, diagnose, and investigate operational data with the visual context of 3D assets. 3D Scenes Studio empowers organizations to enrich existing 3D models with visualizations powered by Azure Digital Twins data, without the need for 3D expertise. The visualizations can be easily consumed from web browsers.
+
+With a digital twin graph and curated 3D model, subject matter experts can leverage the studio's low-code builder to map the 3D elements to the digital twin, and define UI interactivity and business logic for a 3D visualization of a business environment. The 3D scenes can then be consumed in the hosted [Azure Digital Twins Explorer 3D Scenes Studio](concepts-azure-digital-twins-explorer.md), or in a custom application that leverages the embeddable 3D viewer component.
+
+This article gives an overview of 3D Scenes Studio and its key features. For comprehensive, step-by-step instructions on how to use each feature, see [Use 3D Scenes Studio (preview)](how-to-use-3d-scenes-studio.md).
+
+## Studio overview
+
+Work in 3D Scenes Studio is built around the concept of *scenes*. A scene is a view of a single business environment, and consists of 3D content, custom business logic, and references to an Azure Digital Twins instance. You can have multiple scenes for a single digital twin instance.
+
+Scenes are configured in the [builder](#builder) inside the 3D Scenes Studio. Then, you can view your finished scenes in the studio's [built-in view experience](#viewer), or [embedded in custom web applications](#embeddable-viewer-component). You can extend the built-in viewer or create your own viewers that access the 3D Scenes files and your Azure Digital Twins graph.
+
+### Environment and storage
+
+From an Azure resource perspective, a *3D Scenes Studio environment* is formed from a unique pairing of an **Azure Digital Twins instance** and an **Azure storage container**. You'll create these Azure resources separately, and connect 3D Scenes Studio to both of them to set up a unique 3D Scenes Studio environment. You can then start building scenes in this environment.
+
+Each 3D scene relies on two files, which will be stored inside your storage container:
+* A 3D file, which contains scenario data and meshes for your visualization. You import this file into 3D Scenes Studio.
+* A configuration file, which is automatically created for you when you create a 3D Scenes Studio environment. This file contains the mapping definition between 3D content and Azure Digital Twins, as well as all of the user-defined business logic.
+
+>[!NOTE]
+>Because you manage the storage container in your Azure account, you'll be able to modify any of the stored scene files directly. However, it's **not recommended** to manually edit the configuration file, as this creates a risk of inconsistencies in the file that might not be handled correctly in the viewer experience.
+
+Once you've created a 3D Scenes Studio environment with an Azure Digital Twins instance and an Azure storage container, it's possible to switch out either of these resources for a different instance or container to change the environment. Here are the results of these actions:
+* Switching to a new Azure Digital Twins instance will switch the underlying digital twin data for the scene. This is **not recommended**, because it may result in broken digital twin references in your scene.
+* Switching to a new storage container means switching to a new configuration file, which will change the set of scenes that are showing in the studio.
+
+To share your scenes with someone else, the recipient will need at least *Reader*-level access to both the Azure Digital Twins instance and the storage container in the environment, as well as URL information about these resources. For detailed instructions on how to share your environment with someone else, see [Share your environment](how-to-use-3d-scenes-studio.md#share-your-environment).
+
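+For example, granting a recipient read-level access to both resources might look like the following Azure CLI sketch. The instance name, storage scope, and recipient identity are placeholders that you'd replace with your own values:
+
+```azurecli
+# Read access to the digital twin data in the Azure Digital Twins instance.
+az dt role-assignment create --dt-name <your-adt-instance> --assignee <recipient-id-or-email> --role "Azure Digital Twins Data Reader"
+
+# Read access to the scene files in the storage container.
+az role assignment create --role "Storage Blob Data Reader" --assignee <recipient-id-or-email> --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/blobServices/default/containers/<container>"
+```
+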
+## Set up
+
+To work with 3D Scenes Studio, you'll need the following required resources:
+* An [Azure Digital Twins instance](how-to-set-up-instance-cli.md)
+ * You'll need *Azure Digital Twins Data Owner* or *Azure Digital Twins Data Reader* access to the instance
+ * The instance should be populated with [models](concepts-models.md) and [twins](concepts-twins-graph.md)
+
+* An [Azure storage account](/azure/storage/common/storage-account-create?tabs=azure-portal), and a [private container](/azure/storage/blobs/storage-quickstart-blobs-portal#create-a-container) in the storage account
+ * To **view** 3D scenes, you'll need at least *Storage Blob Data Reader* access to these storage resources. To **build** 3D scenes, you'll need *Storage Blob Data Contributor* or *Storage Blob Data Owner* access.
+
+ You can grant required roles at either the storage account level or the container level. For more information about Azure storage permissions, see [Assign an Azure role](/azure/storage/blobs/assign-azure-role-data-access?tabs=portal#assign-an-azure-role).
+ * You should also configure [CORS](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) for your storage account, so that 3D Scenes Studio will be able to access your storage container. For complete CORS setting information, see [Use 3D Scenes Studio (preview)](how-to-use-3d-scenes-studio.md#prerequisites).
+
+Then, you can access 3D Scenes Studio at this link: [3D Scenes Studio](https://explorer.digitaltwins.azure.net/3dscenes).
+
+Once there, you'll link your 3D environment to your storage resources, and configure your first scene. For detailed instructions on how to perform these actions, see [Initialize your 3D Scenes Studio environment](how-to-use-3d-scenes-studio.md#initialize-your-3d-scenes-studio-environment) and [Create, edit, and view scenes](how-to-use-3d-scenes-studio.md#create-edit-and-view-scenes).
+
+## Builder
+
+The *builder* in 3D Scenes Studio is the primary interface for configuring your scenes. It is a low-code, visual experience.
+
+Here's what the builder looks like:
++
+In the builder, you'll create *elements* and *behaviors* for your scene. The following sections explain these features in more detail.
+
+### Elements
+
+*Elements* are user-defined 3D meshes that are linked to digital twins, mapping the visualization pieces to relevant twin data.
+
+When creating an element in the builder, you'll define the following components:
+
+* **Primary twin**: Each element is connected to a primary digital twin counterpart. You connect the element to a twin in your Azure Digital Twins instance so that the element can represent your twin and its data within the 3D visualization.
+* **Name**: Each element needs a name. You might want to make it match the `$dtId` of its primary twin.
+* **Meshes**: Identify which components of the 3D model represent this element.
+* **Behaviors**: [Behaviors](#behaviors) describe how elements appear in the visualization. You can assign behaviors to this element here.
+* **Other twins**: If you want, you can add secondary digital twin data sources for an element. You should only add other twins when there are additional twins with data beyond your primary twin that you want to leverage in your behaviors. After configuring another twin, you'll be able to use properties from that twin when defining behaviors for that element.
+
+### Behaviors
+
+*Behaviors* are business logic rules that use digital twin data to drive visuals in the scene.
+
+When creating a behavior for an element, you'll define the following components:
+
+* **Elements**: Behaviors describe the visuals that are applied to each [element](#elements) in the visualization. You can choose which elements this behavior applies to.
+* **Twins**: Identify the set of twins whose data is available to this behavior. This includes the targeted elements' primary twins, and any other twins.
+* **Status**: States are data-driven overlays on your elements to indicate the health or status of the element.
+* **Alerts**: Alerts are conditional notifications to help you quickly see when an element requires attention.
+* **Widgets**: Widgets are data-driven visuals that provide additional data to help you diagnose and investigate the scenario that the behavior represents. Configuring widgets will help you make sure the right data is discoverable when an alert or status is active.
+
+You can also create **layers** in your scene to help organize your behaviors. Layers act like tags on the behaviors, enabling you to define which behaviors need to be seen together, thus creating custom views of your scene for different roles or tasks.
+
+## Viewer
+
+3D Scenes Studio also contains a *viewer*, which end users (like operators) can use to explore the 3D scene.
+
+Here's what the viewer looks like:
++
+You can use the **Elements** list to explore all the elements and active alerts in your scene, or you can click elements directly in the visualization to explore their details.
+
+## Embeddable viewer component
+
+3D Scenes Studio is extensible to support additional viewing needs. The [viewer component](#viewer) can be embedded into custom applications, and can work in conjunction with 3rd party components.
+
+Here's an example of what the embedded viewer might look like in an independent application:
++
+The 3D visualization components are available in GitHub, in the [iot-cardboard-js](https://github.com/microsoft/iot-cardboard-js) repository. For instructions on how to use the components to embed 3D experiences into custom applications, see the repository's wiki, [Embedding 3D Scenes](https://github.com/microsoft/iot-cardboard-js/wiki/Embedding-3D-Scenes).
+
+## Recommended limits
+
+When working with 3D Scenes Studio, it's recommended to stay within the following limits.
+
+| Capability | Recommended limit |
+| | |
+| Number of elements | 50 |
+| Size of 3D file | 100 MB |
+
+If you exceed these recommended limits, you may experience degraded performance or unintended application behavior.
+
+These limits are recommended because 3D Scenes Studio leverages the standard [Azure Digital Twins APIs](concepts-apis-sdks.md), and therefore is subject to the published [API rate limits](reference-service-limits.md#rate-limits). 3D Scenes Studio requests all relevant digital twins data every **10 seconds**. As the number of digital twins linked to the scenes increases, so does the amount of data that is pulled on this cadence. This means that you will see these additional API calls reflected in billing meters and operation throughput.
+
+## Next steps
+
+Try out 3D Scenes Studio with a sample scenario in [Get started with 3D Scenes Studio](quickstart-3d-scenes-studio.md).
+
+Or, learn how to use the studio's full feature set in [Use 3D Scenes Studio](how-to-use-3d-scenes-studio.md).
digital-twins How To Use 3D Scenes Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-3d-scenes-studio.md
+
+# Mandatory fields.
+ Title: Use 3D Scenes Studio (preview)
+
+description: Learn how to use all the features of 3D Scenes Studio (preview) for Azure Digital Twins.
++ Last updated : 05/03/2022++++
+# Optional fields. Don't forget to remove # if you need a field.
+#
+#
+#
++
+# Build 3D scenes with 3D Scenes Studio (preview) for Azure Digital Twins
+
+Azure Digital Twins [3D Scenes Studio (preview)](https://explorer.digitaltwins.azure.net/3dscenes) is an immersive 3D environment, where business and front-line workers can consume and investigate operational data from their Azure Digital Twins solutions with visual context.
+
+## Prerequisites
+
+To use 3D Scenes Studio, you'll need the following resources:
+* An Azure Digital Twins instance. For instructions, see [Set up an instance and authentication](how-to-set-up-instance-cli.md).
+ * Obtain *Azure Digital Twins Data Owner* or *Azure Digital Twins Data Reader* access to the instance. For instructions, see [Set up user access permissions](how-to-set-up-instance-cli.md#set-up-user-access-permissions).
+ * Take note of the *host name* of your instance to use later.
+* An Azure storage account. For instructions, see [Create a storage account](/azure/storage/common/storage-account-create?tabs=azure-portal).
+* A private container in the storage account. For instructions, see [Create a container](/azure/storage/blobs/storage-quickstart-blobs-portal#create-a-container).
+ * Take note of the *URL* of your storage container to use later.
+* *Storage Blob Data Owner* or *Storage Blob Data Contributor* access to your storage resources. You can grant required roles at either the storage account level or the container level. For instructions and more information about permissions to Azure storage, see [Assign an Azure role](/azure/storage/blobs/assign-azure-role-data-access?tabs=portal#assign-an-azure-role).
+
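+If you prefer the command line, you could create the storage resources and look up the identifiers you'll need with Azure CLI. A minimal sketch, with placeholder names for the instance, resource group, account, and container:
+
+```azurecli
+# Look up the host name of your Azure Digital Twins instance.
+az dt show --dt-name <your-adt-instance> --query hostName --output tsv
+
+# Create a storage account and a private container for the 3D scene files.
+az storage account create --name <your-storage-account> --resource-group <your-resource-group> --location <region>
+az storage container create --account-name <your-storage-account> --name <your-container> --public-access off --auth-mode login
+
+# The container URL to note for later has the form:
+# https://<your-storage-account>.blob.core.windows.net/<your-container>
+```
+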
+You should also configure [CORS](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) for your storage account, so that 3D Scenes Studio will be able to access your storage container. You can use the following [Azure CLI](/cli/azure/what-is-azure-cli) command to set the minimum required methods, origins, and headers. The command contains one placeholder for the name of your storage account.
+
+```azurecli
+az storage cors add --services b --methods GET OPTIONS POST PUT --origins https://explorer.digitaltwins.azure.net --allowed-headers Authorization x-ms-version x-ms-blob-type --account-name <your-storage-account>
+```
+
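+To confirm that the rule was applied, you can list the CORS rules on the blob service, using the same account-name placeholder:
+
+```azurecli
+az storage cors list --services b --account-name <your-storage-account>
+```
+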
+Now you have all the necessary resources to work with scenes in 3D Scenes Studio.
+
+## Initialize your 3D Scenes Studio environment
+
+In this section, you'll set the environment in *3D Scenes Studio* and customize your scene for the sample graph that's in your Azure Digital Twins instance.
+
+1. Navigate to the [3D Scenes Studio](https://explorer.digitaltwins.azure.net/3dscenes). The studio will open, connected to the Azure Digital Twins instance that you accessed last in the Azure Digital Twins Explorer.
+1. Select the **Edit** icon next to the instance name to configure the instance and storage container details.
+
+ :::image type="content" source="media/how-to-use-3d-scenes-studio/studio-edit-environment-1.png" alt-text="Screenshot of 3D Scenes Studio highlighting the edit environment icon, which looks like a pencil." lightbox="media/how-to-use-3d-scenes-studio/studio-edit-environment-1.png":::
+
+ 1. The **Azure Digital Twins instance URL** should start with *https://*, followed by the *host name* of your instance from the [Prerequisites](#prerequisites) section.
+
+ 1. For the **Azure storage container URL**, enter the URL of your storage container from the [Prerequisites](#prerequisites) section.
+
+ 1. Select **Save**.
+
+ :::image type="content" source="media/how-to-use-3d-scenes-studio/studio-edit-environment-2.png" alt-text="Screenshot of 3D Scenes Studio highlighting the Save button for the environment." lightbox="media/how-to-use-3d-scenes-studio/studio-edit-environment-2.png":::
+
+## Create, edit, and view scenes
+
+The 3D representation of an environment in 3D Scenes Studio is called a *scene*. A scene consists of a 3D file and a configuration file that's created for you automatically.
+
+To create a scene, start with a segmented 3D file in *.GLTF* or *.GLB* format. You can download and view a sample 3D file using this link: [Download RobotArms.glb](https://cardboardresources.blob.core.windows.net/public/RobotArms.glb).
+
+>[!TIP]
+>3D Scenes Studio supports animation. If you use a 3D model file that contains animations, they will play in the scene.
+
+You can use 3D Scenes Studio with a 3D file that's already present in your storage container, or you can upload the file directly to 3D Scenes Studio, which will add it to the container automatically. Here are the steps to use a 3D file to create a new scene.
+
+1. From the home page of 3D Scenes Studio, select the **Add 3D scene** button to start creating a new scene.
+
+1. Enter a **Name** and **Description** for the scene.
+1. If you want the scene to show up in [globe view](#view-scenes-in-globe-view), toggle **Show on globe** to **On**. Enter **Latitude** and **Longitude** values for the scene.
+1. Select one of the following tabs in the **Link 3D file** section:
+ 1. **Choose file** to enter the URL of a 3D file that's already in your storage container
+ 1. **Upload file** to upload a 3D file from your computer
+
+ :::image type="content" source="media/how-to-use-3d-scenes-studio/add-scene.png" alt-text="Screenshot of 3D Scenes Studio, Create new scene dialog." lightbox="media/how-to-use-3d-scenes-studio/add-scene.png":::
+1. Select **Create**.
+
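+If you'd rather stage the 3D file in your storage container yourself before creating the scene (instead of using the **Upload file** tab), a minimal Azure CLI sketch looks like this. The file path and resource names are placeholders:
+
+```azurecli
+# Download the sample model, then upload it to your scene container.
+curl -O https://cardboardresources.blob.core.windows.net/public/RobotArms.glb
+az storage blob upload --account-name <your-storage-account> --container-name <your-container> --name RobotArms.glb --file RobotArms.glb --auth-mode login
+```
+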
+### Edit scenes
+
+To edit or delete a scene after it's been created, use the **Actions** icons next to the scene in the 3D Scenes Studio home page.
++
+Editing a scene will reopen all of the scene properties you set while creating it, allowing you to change them and update the scene.
+
+### View scenes in globe view
+
+The home page of 3D Scenes Studio shows a **List view** of your scenes.
+
+You can also select **Globe view** to see your scenes placed visually on a globe.
++
+The resulting globe view looks like this:
++
+### View scenes individually
+
+You can select an individual scene from the home page to open it in **Build** mode. Here, you can see the 3D mesh for the scene and edit its [elements](#add-elements) and [behaviors](#add-behaviors).
++
+You can switch to **View** mode to enable filtering on specific elements and visualization of element behaviors that you've created.
++
+### Embed scenes in custom applications
+
+The viewer component can also be embedded into custom applications outside of 3D Scenes Studio, and can work in conjunction with 3rd party components.
+
+Here's an example of what the embedded viewer might look like in an independent application:
++
+The 3D visualization components are available in GitHub, in the [iot-cardboard-js](https://github.com/microsoft/iot-cardboard-js) repository. For instructions on how to use the components to embed 3D experiences into custom applications, see the repository's wiki, [Embedding 3D Scenes](https://github.com/microsoft/iot-cardboard-js/wiki/Embedding-3D-Scenes).
+
+## Add elements
+
+An *element* is a self-defined set of 3D meshes that is linked to data on one or more underlying digital twins.
+
+One way to create a new element is to select **New element** from the **Elements** tab in the **Build** view for a scene.
++
+Alternatively, you can select a mesh component directly from the visualization and create a new element that is connected to it already.
++
+This will open the **New element** panel where you can fill in element information.
++
+### Name and primary twin
+
+A *primary twin* is the main digital twin counterpart for an element. You connect the element to a twin in your Azure Digital Twins instance so that the element can represent your twin and its data within the 3D visualization.
+
+In the **New element** panel, the **Primary twin** dropdown list contains names of all the twins in the connected Azure Digital Twins instance.
++
+Select a twin to link to this element. This will automatically apply the digital twin ID (`$dtId`) as the element **Name**. You can rename the element if you want, to make it understandable for both builders and consumers of the 3D scene.
+
+>[!TIP]
+>[Azure Digital Twins Explorer](concepts-azure-digital-twins-explorer.md) can help you find the right twin to link to an element, by showing you a visual graph of your twins and letting you query for specific twin conditions.
+
+### Meshes
+
+The **Meshes** tab is where you specify which components of the visual 3D mesh represent this element.
+
+If you started element creation by selecting a mesh in the visualization, that mesh will already be filled in here. You can select meshes in the visualization now to add them to the element.
++
+### Behaviors
+
+A *behavior* is a scenario for your scene. Select **Add behavior** on this tab. From there, you can either select an existing behavior to add it to this element, or select **New behavior** to enter the flow for creating a new behavior.
++
+For more details on creating new behaviors, see [Add behaviors](#add-behaviors).
+
+### Other twins
+
+On the **other twins** tab, you can add secondary digital twin data sources for an element. You can add other twins to an element if the data on the primary twin won't be enough to define all the behaviors you want for the element, so you need access to the data of additional twins.
++
+You can't add other twins during new element creation. For instructions on adding other twins, see [Twins](#twins) as a behavior option.
+
+Once there are other twins added to the element, you'll be able to view and modify them on this tab.
+
+## Add behaviors
+
+A *behavior* is a scenario for your scene that will leverage particular data on the related element's digital twin to drive viewer visualizations.
+
+One way to create a new behavior is to select **New behavior** from the **Behaviors** tab of the **Build** view for a scene.
++
+Alternatively, you can select an element from the **Elements** tab, and create a new behavior from [that element's Behaviors tab](#behaviors).
+
+This will open the **New behavior** panel where you can fill in behavior information.
++
+### Name and scene layers
+
+Start by choosing a **Display name** for the behavior.
+
+>[!TIP]
+>Choose a name that will be clear to end users that will be viewing the scene, because this behavior name will be displayed as part of the scene visualization.
+
+For the **Scene layers** dropdown menu, you can add this behavior to an existing layer or create a new layer to help organize this behavior. For more information on layers, see [Manage layers](#manage-layers).
+
+### Elements
+
+In the **Elements** tab, select which elements this behavior should target.
+
+If you started the behavior creation process from a specific element, that element will already be selected here. Otherwise, you can choose elements here for the first time.
++
+### Twins
+
+On the **Twins** tab, you can modify the set of twins whose data is available to this behavior. This includes the targeted elements' primary twins, and any additional twins.
+
+You can add secondary digital twin data sources for an element. After configuring other twins, you'll be able to use properties from those twins in your behavior expressions for this element. You should only add other twins when there are additional twins with data beyond your primary twin that you want to leverage in your [status](#status), [alerts](#alerts), and [widgets](#widgets) for this behavior.
+
+To add a new twin data source, select **Add twin** and **Create twin**.
++
+This will open a **New twin** panel where you can name the additional twin and select a twin from your Azure Digital Twins instance to map.
++
+>[!TIP]
+>[Azure Digital Twins Explorer](concepts-azure-digital-twins-explorer.md) can help you see twins that might be related to the primary twin for this element. You can query your graph using `SELECT * FROM digitaltwins WHERE $dtId="<primary-twin-id>"`, and then use the [double-click expansion feature](how-to-use-azure-digital-twins-explorer.md#control-twin-graph-expansion) to explore related twins.
+
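+If you prefer the command line, a similar query could be run with the Azure CLI IoT extension. A sketch, with the instance name and twin ID as placeholders (the backslash keeps the shell from expanding `$dtId`):
+
+```azurecli
+az dt twin query --dt-name <your-adt-instance> --query-command "SELECT * FROM digitaltwins WHERE \$dtId = '<primary-twin-id>'"
+```
+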
+### Status
+
+In the **Status** tab, you can define states for your element. *States* are data-driven overlays on your elements to indicate the health or status of the element.
+
+To create a state, first choose whether the state is dependent on a **Single property** or a **Custom (advanced)** property expression. For a **Single property**, you'll get a dropdown list of numeric properties on the primary twin. For **Custom (advanced)**, you'll get a text box where you can write a custom JavaScript expression using one or more properties. The expression should have a numeric outcome. For more information about writing custom expressions, see [Use custom (advanced) expressions](#use-custom-advanced-expressions).
+
+Once you've defined your property expression, set value ranges to create state boundaries, and choose colors to represent each state in the visualization. The min of each value range is inclusive, and the max is exclusive.
++
+### Alerts
+
+In the **Alerts** tab, you can set conditional notifications to help you quickly see when an element requires your attention.
+
+First, enter a **Trigger expression**. This is a JavaScript expression involving one or more properties of *PrimaryTwin* that yields a boolean result. This expression will generate an alert badge in the visualization when it evaluates to true. For more information about writing custom expressions, see [Use custom (advanced) expressions](#use-custom-advanced-expressions).
+
+Then, customize your alert badge with an **Icon** and **Color**, and a string for **Scenario Description**.
++
+Notification text can also include calculation expressions with this syntax: `${<calculation-expression>}`. Expressions will be computed and displayed dynamically in the [viewer](#view-scenes-individually).
+
+For an example of notification text with an expression, consider a behavior for a pasteurization tank, whose twin has double properties for `InFlow` and `OutFlow`. To display the difference between the tank's inflow and outflow in the notification, you could use this notification text: `Too much flow (InFlow is ${PrimaryTwin.InFlow - PrimaryTwin.OutFlow} greater than OutFlow)`. The computed result of the expression will be shown in the alert text in the viewer.
++
+### Widgets
+
+Widgets are managed on the **Widgets** tab. *Widgets* are data-driven visuals that provide additional context and data, to help you understand the scenario that the behavior represents. Configuring widgets will help you make sure the right data is discoverable when an alert or status is active.
+
+Select **Add widget** to bring up the **Widget library**, where you can select from the available widget types.
++
+Here are the types of widgets that you can create:
+
+* **Gauge**: For representing numerical data points visually
+
+ Enter a **Display name** and **Unit of measure**, then choose whether the gauge reflects a **Single property** or a **Custom (advanced)** property expression. For a **Single property**, you'll get a dropdown list of numeric properties on the primary twin. For **Custom (advanced)**, you'll get a text box where you can write a custom JavaScript expression using one or more properties. The expression should have a numeric outcome. For more information about writing custom expressions, see [Use custom (advanced) expressions](#use-custom-advanced-expressions).
+
+ Once you've defined your property expression, set value ranges to appear in certain colors on the gauge. The min of each value range is inclusive, and the max is exclusive.
+
+ :::image type="content" source="media/how-to-use-3d-scenes-studio/new-behavior-widgets-gauge.png" alt-text="Screenshot of creating a new gauge-type widget in 3D Scenes Studio." lightbox="media/how-to-use-3d-scenes-studio/new-behavior-widgets-gauge.png":::
+
+* **Link**: For including externally-referenced content via a linked URL
+
+ Enter a **Label** and destination **URL**.
+
+ :::image type="content" source="media/how-to-use-3d-scenes-studio/new-behavior-widgets-link.png" alt-text="Screenshot of creating a new link-type widget in 3D Scenes Studio." lightbox="media/how-to-use-3d-scenes-studio/new-behavior-widgets-link.png":::
+
+ Link URLs can also include calculation expressions with this syntax: `${<calculation-expression>}`. The screenshot above contains an expression for accessing a property of the primary twin. Expressions will be computed and displayed dynamically in the [viewer](#view-scenes-individually).
+
+* **Value**: For directly displaying twin property values
+
+ Enter a **Display name** and select a **Property expression** that you want to display. This can be a **Single property** of the primary twin, or a **Custom (advanced)** property expression. Custom expressions should be JavaScript expressions using one or more properties of the twin, and you'll select which outcome type the expression will produce. For more information about writing custom expressions, see [Use custom (advanced) expressions](#use-custom-advanced-expressions).
+
+ :::image type="content" source="media/how-to-use-3d-scenes-studio/new-behavior-widgets-value.png" alt-text="Screenshot of creating a new value-type widget in 3D Scenes Studio." lightbox="media/how-to-use-3d-scenes-studio/new-behavior-widgets-value.png":::
+
+ If your custom property expression outputs a string, you can also use JavaScript's template literal syntax to include a dynamic expression in the string output. Format the dynamic expression with this syntax: `${<calculation-expression>}`. Then, wrap the whole string output with backticks (`` ` ``).
+
+ Below is an example of a value widget that checks if the `InFlow` value of the primary twin exceeds 99. If so, it outputs a string with an expression containing the twin's `$dtId`. Otherwise, there will be no expression in the output, so no backticks are required.
+
+ Here's the value expression: `` PrimaryTwin.InFlow > 99 ? `${PrimaryTwin.$dtId} has an InFlow problem` : 'Everything looks good' ``. The computed result of the expression (the `$dtId`) will be shown in the widget in the viewer.
+
+ :::image type="content" source="media/how-to-use-3d-scenes-studio/new-behavior-widgets-value-expression.png" alt-text="Screenshots showing the notification text being entered on the value widget dialog, and how the widget appears in the Viewer." lightbox="media/how-to-use-3d-scenes-studio/new-behavior-widgets-value-expression.png":::
+
+### Use custom (advanced) expressions
+
+While defining [status](#status), [alerts](#alerts), and [widgets](#widgets) in your behaviors, you may want to use custom expressions to define a property condition.
++
+These expressions use the JavaScript language, and allow you to use one or more properties of associated twins to define custom logic.
+
+The following table indicates which JavaScript operators are supported in 3D Scenes Studio.
+
+| Operator type | Supported? |
+| - | - |
+| Assignment operators | No |
+| Comparison operators | Yes |
+| Arithmetic operators | Yes |
+| Bitwise operators | Yes |
+| Logical operators | Yes |
+| String operators | Yes |
+| Conditional (ternary) operator | Yes |
+| Comma operator | No |
+| Unary operators | No |
+| Relational operators | No |
+
+## Manage layers
+
+You can create *layers* in your scene to help organize your [behaviors](#add-behaviors). Layers act like tags on the behaviors, enabling you to define which behaviors need to be seen together, thus creating custom views of your scene for different roles or tasks.
+
+One way to create layers is to use the **Scene layers** button in the **Build** view for a scene.
++
+Selecting **New layer** will prompt you to enter a name for the new layer you want to create.
+
+Alternatively, you can create layers while [creating or modifying a behavior](#name-and-scene-layers). The behavior pane is also where you can add the behavior to a layer you've already created.
++
+When looking at your scene in the viewer, you can use the **Select layers** button to choose which layers show up in the visualization. Behaviors that aren't part of any layer are grouped under **Default layer**.
++
+## Modify theme
+
+In either the builder or viewer for a scene, select the **Theme** icon to change the style, object colors, and background color of the display.
++
+## Share your environment
+
+A *3D Scenes Studio environment* is formed from a unique pairing of an **Azure Digital Twins instance** and an **Azure storage container**. You can share your entire environment with someone, including all of your scenes, or share a specific scene.
+
+To share your environment with someone else, start by giving them the following permissions to your resources (sample Azure CLI commands for these role assignments follow the list):
+* *Azure Digital Twins Data Reader* access (or greater) on the Azure Digital Twins instance
+* *Storage Blob Data Reader* access (or greater) to the storage container
+ * *Storage Blob Data Reader* will allow them to view your scenes.
+ * *Storage Blob Data Owner* or *Storage Blob Data Contributor* will allow them to edit your scenes.
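+
+If you prefer to grant these permissions from the command line, a minimal Azure CLI sketch is shown below. The role names match the list above; the assignee and scope values are placeholders that you'd replace with the recipient's identity and your own resource IDs.
+
+```azurecli
+# Read access to the Azure Digital Twins instance (scope is the instance's resource ID)
+az role assignment create --role "Azure Digital Twins Data Reader" --assignee <recipient-email-or-object-id> --scope <Azure-Digital-Twins-instance-resource-ID>
+
+# Read access to the storage container (use "Storage Blob Data Contributor" or "Storage Blob Data Owner" instead to allow editing)
+az role assignment create --role "Storage Blob Data Reader" --assignee <recipient-email-or-object-id> --scope <storage-account-or-container-resource-ID>
+```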
+
+Then, follow the instructions in the rest of this section to share either your [entire environment](#share-general-environment) or a [specific scene](#share-a-specific-scene).
+
+### Share general environment
+
+Once someone has the required permissions, there are two ways to give them access to your entire environment. You can do either of the following things:
+* Use the Share button on the 3D Scenes Studio homepage to copy the **URL of your 3D Scenes Studio environment**. (The URL includes the URLs of both your Azure Digital Twins instance and your storage container.)
+ :::image type="content" source="media/how-to-use-3d-scenes-studio/copy-url.png" alt-text="Screenshot of the Share environment button in 3D Scenes Studio." lightbox="media/how-to-use-3d-scenes-studio/copy-url.png":::
+
+ Share it with the recipient, who can paste this URL directly into their browser to connect to your environment.
+* Share the **URL of your Azure Digital Twins instance** and the **URL of your Azure storage container** that you used when [initializing your 3D Scenes Studio environment](#initialize-your-3d-scenes-studio-environment). The recipient can access [3D Scenes Studio](https://dev.explorer.azuredigitaltwins-test.net/3dscenes) and initialize it with these same URL values to connect to your same environment.
+
+After this, the recipient can view and interact with your scenes in the studio.
+
+### Share a specific scene
+
+You can also share your environment with a link directly to a specific scene. To share a specific scene, open the scene in **View** mode.
+
+Use the **Share scene** icon to generate a link to your scene. You can choose whether you want the link to preserve your current layer and element selections.
++
+When the recipient pastes this URL into their browser, the specified scene will open in the viewer, with any chosen layers or elements selected.
+
+>[!NOTE]
+>When a scene is shared with someone in this way, the recipient will also be able to leave this scene and view other scenes in your environment if they choose.
+
+## Next steps
+
+Try out 3D Scenes Studio with a sample scenario in [Get started with 3D Scenes Studio](quickstart-3d-scenes-studio.md).
+
+Or, visualize your Azure Digital Twins graph differently using [Azure Digital Twins Explorer](how-to-use-azure-digital-twins-explorer.md).
digital-twins How To Use Azure Digital Twins Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-azure-digital-twins-explorer.md
# Mandatory fields. Title: Use Azure Digital Twins Explorer
-description: Learn how to use the features of Azure Digital Twins Explorer
+description: Learn how to use all the features of Azure Digital Twins Explorer
Last updated 02/24/2022 + # Optional fields. Don't forget to remove # if you need a field. #
The panel positions will be reset upon refresh of the browser window.
Learn about writing queries for the Azure Digital Twins twin graph: * [Query language](concepts-query-language.md)
-* [Query the twin graph](how-to-query-graph.md)
+* [Query the twin graph](how-to-query-graph.md)
digital-twins How To Use Data History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-data-history.md
Last updated 03/23/2022 + # Optional fields. Don't forget to remove # if you need a field. #
This article shows how to set up a working data history connection between Azure
It also contains a sample twin graph that you can use to see the historized twin property updates in Azure Data Explorer. >[!TIP]
->Although this article uses the Azure portal, you can also work with data history using the [2021-06-30-preview](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/digitaltwins/data-plane/Microsoft.DigitalTwins/preview/2021-06-30-preview) version of the rest APIs.
+>Although this article uses the Azure portal, you can also work with data history using the [2021-06-30-preview](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/digitaltwins/data-plane/Microsoft.DigitalTwins/preview/2021-06-30-preview) version of the REST APIs.
## Prerequisites
digital-twins Quickstart 3D Scenes Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/quickstart-3d-scenes-studio.md
+
+# Mandatory fields.
+ Title: Quickstart - Get started with 3D Scenes Studio (preview)
+
+description: Learn how to use 3D Scenes Studio (preview) for Azure Digital Twins by following this demo, where you'll create a sample scene with elements and behaviors.
++ Last updated : 05/04/2022++++
+# Optional fields. Don't forget to remove # if you need a field.
+#
+#
+#
++
+# Quickstart - Get started with 3D Scenes Studio (preview) for Azure Digital Twins
+
+Azure Digital Twins *3D Scenes Studio (preview)* is an immersive 3D environment, where business and front-line workers can consume and investigate operational data from their Azure Digital Twins solutions with visual context.
+
+In this article, you'll set up all the required resources for using 3D Scenes Studio, including an Azure Digital Twins instance with sample data, and Azure storage resources. Then, you'll create a scene in the studio that's connected to the sample Azure Digital Twins environment.
+
+The sample scene used in this quickstart monitors the carrying efficiency of robotic arms in a factory. The robotic arms pick up a certain number of boxes each hour, while video cameras monitor each of the arms to detect if the arm failed to pick up a box. Each arm has an associated digital twin in Azure Digital Twins, and the digital twins are updated with data whenever an arm misses a box. Given this scenario, this quickstart walks through setting up a 3D scene to visualize the arms in the factory, along with visual alerts each time a box is missed.
+
+The scene will look like this:
++
+## Prerequisites
+
+You'll need an Azure subscription to complete this quickstart. If you don't have one already, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) now.
+
+You'll also need to download a sample 3D file to use for the scene in this quickstart. [Select this link to download RobotArms.glb](https://cardboardresources.blob.core.windows.net/public/RobotArms.glb).
+
+## Set up Azure Digital Twins and sample data
+
+The first step in working with Azure Digital Twins is to create an Azure Digital Twins instance. After you create an instance of the service, you can link the instance to a 3D Scenes Studio visualization later in the quickstart.
+
+The rest of this section walks you through the instance creation. If you already have an Azure Digital Twins instance set up from an earlier quickstart, you can skip to the [next section](#generate-sample-models-and-twins).
++
+### Collect host name
+
+After deployment completes, use the **Go to resource** button to navigate to the instance's Overview page in the portal.
++
+Next, take note of the instance's **host name** value to use later.
++
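+
+If you prefer the command line, the host name can also be retrieved with the Azure CLI. This is a minimal sketch that assumes you've installed the Azure IoT extension for the CLI; replace the placeholder with your own instance name.
+
+```azurecli
+# Requires the Azure IoT CLI extension: az extension add --name azure-iot
+az dt show --dt-name <your-instance-name> --query hostName --output tsv
+```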
+### Generate sample models and twins
+
+In this section, you'll use the *Azure Digital Twins data simulator* tool to generate sample models and twins to populate your instance. Then, you'll use the simulator to stream sample data to the twins in the graph.
+
+>[!NOTE]
+>Models, twins, and simulated data are provided for you in this quickstart to simplify the process of creating an environment that you can view in 3D Scenes Studio. When designing your own complete Azure Digital Twins solution, you'll create [models](concepts-models.md) and [twins](concepts-twins-graph.md) yourself to describe your own environment in detail, and [set up your own data flows](concepts-data-ingress-egress.md) accordingly.
+
+This sample scenario represents a package distribution center that contains six robotic arms. Each arm has a digital twin with properties to track how many boxes the arm fails to pick up, along with the IDs of the missed boxes.
+
+1. Navigate to the [data simulator](https://explorer.digitaltwins.azure.net/tools/data-pusher).
+1. In the **Instance URL** space, enter the *host name* of your Azure Digital Twins instance from the [previous section](#collect-host-name). Set the **Simulation Type** to *Robot Arms*.
+1. Use the **Generate environment** button to create a sample environment with models and twins. (If you already have models and twins in your instance, this won't delete them; it will just add more.)
+
+ :::image type="content" source="media/quickstart-3d-scenes-studio/data-simulator.png" alt-text="Screenshot of the Azure Digital Twins Data simulator. The Generate environment button is highlighted." lightbox="media/quickstart-3d-scenes-studio/data-simulator.png":::
+1. Select **Start simulation** to start sending simulated data to your Azure Digital Twins instance. The simulation will only run while this window is open and the **Start simulation** option is active.
+
+You can view the models and graph that have been created by using the Azure Digital Twins Explorer **Graph** tool. To switch to that tool, select the **Graph** icon from the left menu.
++
+Then, use the **Run Query** button to query for all the twins and relationships that have been created in the instance.
++
+You can select each twin to view it in more detail.
+
+To see the models that have been uploaded and how they relate to each other, select **Model graph**.
++
+>[!TIP]
+>For an introduction to Azure Digital Twins Explorer, see the quickstart [Get started with Azure Digital Twins Explorer](quickstart-azure-digital-twins-explorer.md).
+
+## Create storage resources
+
+Next, create a new storage account and a container in the storage account. 3D Scenes Studio will use this storage container to store your 3D file and configuration information.
+
+You'll also set up read and write permissions to the storage account. In order to set these backing resources up quickly, this section uses the [Azure Cloud Shell](/azure/cloud-shell/overview).
+
+1. Navigate to the [Cloud Shell](https://shell.azure.com) in your browser.
+
+ Run the following command to set the CLI context to your subscription for this session.
+
+ ```azurecli
+ az account set --subscription "<your-Azure-subscription-ID>"
+ ```
+1. Run the following command to create a storage account in your subscription. The command contains placeholders for you to enter a name and choose a region for your storage account, as well as a placeholder for your resource group.
+
+ ```azurecli
+ az storage account create --resource-group <your-resource-group> --name <name-for-your-storage-account> --location <region> --sku Standard_RAGRS
+ ```
+
+ When the command completes successfully, you'll see details of your new storage account in the output. Look for the `ID` value in the output and copy it to use in the next command.
+
+ :::image type="content" source="media/quickstart-3d-scenes-studio/storage-account-id.png" alt-text="Screenshot of Cloud Shell output. The I D of the storage account is highlighted." lightbox="media/quickstart-3d-scenes-studio/storage-account-id.png":::
+
+1. Run the following command to grant yourself the *Storage Blob Data Owner* role on the storage account. This level of access will allow you to perform both read and write operations in 3D Scenes Studio. The command contains placeholders for your Azure account and the ID of your storage account from the previous step.
+
+ ```azurecli
+ az role assignment create --role "Storage Blob Data Owner" --assignee <your-Azure-account> --scope <ID-of-your-storage-account>
+ ```
+
+ When the command completes successfully, you'll see details of the role assignment in the output.
+
+1. Run the following command to configure CORS for your storage account. This will be necessary for 3D Scenes Studio to access your storage container. The command contains a placeholder for the name of your storage account.
+
+ ```azurecli
+ az storage cors add --services b --methods GET OPTIONS POST PUT --origins https://explorer.digitaltwins.azure.net --allowed-headers Authorization x-ms-version x-ms-blob-type --account-name <your-storage-account>
+ ```
+
+ This command doesn't have any output.
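+
+    Optionally, you can confirm that the rule was applied by listing the CORS rules for the Blob service. This check is a suggestion rather than part of the official steps; the account name placeholder is the same one you used above, and the CLI will look up the account key automatically if your account has permission to read it.
+
+    ```azurecli
+    az storage cors list --services b --account-name <your-storage-account>
+    ```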
+
+1. Run the following command to create a private container in the storage account. Your 3D Scenes Studio files will be stored here. The command contains a placeholder for you to enter a name for your storage container, and a placeholder for the name of your storage account.
+
+ ```azurecli
+ az storage container create --name <name-for-your-container> --public-access off --account-name <your-storage-account>
+ ```
+
+ When the command completes successfully, the output will show `"created": true`.
+
+## Initialize your 3D Scenes Studio environment
+
+Now that all your resources are set up, you can use them to create an environment in *3D Scenes Studio*. In this section, you'll create a scene and customize it for the sample graph that's in your Azure Digital Twins instance.
+
+1. Navigate to the [3D Scenes Studio](https://explorer.digitaltwins.azure.net/3dscenes). The studio will open, connected to the Azure Digital Twins instance that you accessed last in the Azure Digital Twins Explorer. Dismiss the welcome demo.
+
+ :::image type="content" source="media/quickstart-3d-scenes-studio/studio-dismiss-demo.png" alt-text="Screenshot of 3D Scenes Studio with welcome demo." lightbox="media/quickstart-3d-scenes-studio/studio-dismiss-demo.png":::
+
+1. Select the **Edit** icon next to the instance name to configure the instance and storage container details.
+
+ :::image type="content" source="media/quickstart-3d-scenes-studio/studio-edit-environment-1.png" alt-text="Screenshot of 3D Scenes Studio highlighting the edit environment icon, which looks like a pencil." lightbox="media/quickstart-3d-scenes-studio/studio-edit-environment-1.png":::
+
+ 1. For the **Azure Digital Twins instance URL**, fill the *host name* of your instance from the [Collect host name](#collect-host-name) step into this URL: `https://<your-instance-host-name>`.
+
+    1. For the **Azure Storage container URL**, fill the names of your storage account and container from the [Create storage resources](#create-storage-resources) step into this URL: `https://<your-storage-account>.blob.core.windows.net/<your-container>`. (If you want to double-check the blob endpoint portion of this URL, see the CLI command after these steps.)
+
+ 1. Select **Save**.
+
+ :::image type="content" source="media/quickstart-3d-scenes-studio/studio-edit-environment-2.png" alt-text="Screenshot of 3D Scenes Studio highlighting the Save button for the environment." lightbox="media/quickstart-3d-scenes-studio/studio-edit-environment-2.png":::
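+
+    If you'd rather not type the storage container URL by hand, one option (not part of the official steps) is to look up your account's blob endpoint with the Azure CLI and append your container name to the returned value. The placeholder is the storage account name you chose earlier.
+
+    ```azurecli
+    az storage account show --name <your-storage-account> --query primaryEndpoints.blob --output tsv
+    ```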
+
+### Add a new 3D scene
+
+In this section you'll create a new 3D scene, using the *RobotArms.glb* 3D model file you downloaded earlier in [Prerequisites](#prerequisites). A *scene* consists of a 3D model file, and a configuration file that's created for you automatically.
+
+This sample scene contains a visualization of the distribution center and its arms. You'll connect this visualization to the sample twins you created in the [Generate sample models and twins](#generate-sample-models-and-twins) step, and customize the data-driven view in later steps.
+
+1. Select the **Add 3D scene** button to start creating a new scene. Enter a **Name** and **Description** for your scene, and select **Upload file**.
+
+ :::image type="content" source="media/quickstart-3d-scenes-studio/add-scene-upload-file.png" alt-text="Screenshot of the Create new scene process in 3D Scenes Studio." lightbox="media/quickstart-3d-scenes-studio/add-scene-upload-file.png":::
+1. Browse for the *RobotArms.glb* file on your computer and open it. Select **Create**.
+
+ :::image type="content" source="media/quickstart-3d-scenes-studio/add-scene-create.png" alt-text="Screenshot of creating a new scene in 3D Scenes Studio. The robot arms file has been uploaded and the Create button is highlighted." lightbox="media/quickstart-3d-scenes-studio/add-scene-create.png":::
+
+ Once the file is uploaded, you'll see it listed back on the main screen of 3D Scenes Studio.
+1. Select the scene to open and view it. The scene will open in **Build** mode.
+
+ :::image type="content" source="media/quickstart-3d-scenes-studio/distribution-scene.png" alt-text="Screenshot of the factory scene in 3D Scenes Studio." lightbox="media/quickstart-3d-scenes-studio/distribution-scene.png":::
+
+## Create a scene element
+
+Next, you'll define an *element* in the 3D visualization and link it to a twin in the Azure Digital Twins graph you set up earlier.
+
+1. Select any robotic arm in the scene visualization. This will bring up the possible element actions. Select **+ Create new element**.
+
+ :::image type="content" source="media/quickstart-3d-scenes-studio/new-arm-element.png" alt-text="Screenshot of the factory scene in 3D Scenes Studio. A robotic arm is highlighted with an option to create a new element." lightbox="media/quickstart-3d-scenes-studio/new-arm-element.png":::
+1. In the **New element** panel, the **Primary twin** dropdown list contains names of all the twins in the connected Azure Digital Twins instance.
+
+ 1. Select *Arm1*. This will automatically apply the digital twin ID (`$dtId`) as the element name.
+
+ 1. Select **Create element**.
+
+ :::image type="content" source="media/quickstart-3d-scenes-studio/new-element-details.png" alt-text="Screenshot of the New element options in 3D Scenes Studio." lightbox="media/quickstart-3d-scenes-studio/new-element-details.png":::
+
+The element will now show up in the list of elements for the scene.
+
+## Create a behavior
+
+Next, you'll create a *behavior* for the element. These behaviors allow you to customize the element's data visuals and the associated business logic. Then, you can explore these data visuals to understand the state of the physical environment.
+
+1. Switch to the **Behaviors** list and select **New behavior**.
+
+ :::image type="content" source="media/quickstart-3d-scenes-studio/new-behavior.png" alt-text="Screenshot of the New behavior button in 3D Scenes Studio." lightbox="media/quickstart-3d-scenes-studio/new-behavior.png":::
+
+1. For **Display name**, enter *Packing Line Efficiency*. Under **Elements**, select *Arm1*.
+
+ :::image type="content" source="media/quickstart-3d-scenes-studio/new-behavior-elements.png" alt-text="Screenshot of the New behavior options in 3D Scenes Studio, showing the Elements options." lightbox="media/quickstart-3d-scenes-studio/new-behavior-elements.png":::
+
+1. Skip the **Twins** tab, which isn't used in this quickstart. Switch to the **Status** tab. *States* are data-driven overlays on your elements to indicate the health or status of the element. Here, you'll set value ranges for a property on the element and associate certain colors with each range.
+
+ 1. Keep the **Property expression** on **Single property** and open the property dropdown list. It contains names of all the properties on the primary twin for the *Arm1* element. Select *FailedPickupsLastHr*.
+
+ 1. In this sample scenario, you want to flag that an arm that misses three or more pickups in an hour requires maintenance, and an arm that misses one or two pickups may require maintenance in the future. Set two value ranges so that values *1-3* appear in one color, and values *3-Infinity* appear in another (the min range value is inclusive, and the max value is exclusive).
+
+ :::image type="content" source="media/quickstart-3d-scenes-studio/new-behavior-status.png" alt-text="Screenshot of the New behavior options in 3D Scenes Studio, showing the Status options." lightbox="media/quickstart-3d-scenes-studio/new-behavior-status.png":::
+
+1. Switch to the **Alerts** tab. *Alerts* are conditional badges that draw your attention to an active situation on the associated element. Here, you'll create an alert badge that will appear when an arm fails to pick up a package. The primary arm twin has a *PickupFailedAlert* property that you'll use to create a visual alert in the scene.
+
+    1. For the **Trigger expression**, enter *PrimaryTwin.PickupFailedAlert*. `PickupFailedAlert` is a property on the primary twin that's set to True when a pickup fails. Using it as the trigger expression means this alert will appear whenever the property value is True.
+
+ 1. Set the badge **Icon** and **Color**. For **Scenario description**, enter *${PrimaryTwin.PickupFailedBoxID} was missed, please track down this box and remediate.* This will use the primary twin's property `PickupFailedBoxID` to display a message about which box the arm failed to pick up.
+
+ :::image type="content" source="media/quickstart-3d-scenes-studio/new-behavior-alerts.png" alt-text="Screenshot of the New behavior options in 3D Scenes Studio, showing the Alerts options." lightbox="media/quickstart-3d-scenes-studio/new-behavior-alerts.png":::
+
+1. Switch to the **Widgets** tab. Widgets are data-driven visuals that provide additional context and data, to help you understand the scenario that the behavior represents. Here, you'll add two visual widgets to display property information for the arm element.
+
+ 1. First, create a widget to display a gauge of the arm's hydraulic pressure value.
+ 1. Select **Add widget**.
+
+ :::image type="content" source="media/quickstart-3d-scenes-studio/new-behavior-widgets.png" alt-text="Screenshot of the New behavior options in 3D Scenes Studio, showing the Widgets options." lightbox="media/quickstart-3d-scenes-studio/new-behavior-widgets.png":::
+
+ From the **Widget library**, select the **Gauge** widget and then **Add widget**.
+
+ 1. In the **New widget** options, add a **Display name** of *Hydraulic Pressure*, a **Unit of measure** of *m/s*, and a single-property **Property expression** of *PrimaryTwin.HydraulicPressure*.
+
+        Set three value ranges so that values *0-40* appear in one color, *40-80* appear in a second color, and *80-Infinity* appear in a third color (remember that the min range value is inclusive, and the max value is exclusive).
+
+ 1. Select **Create widget**.
+
+ :::image type="content" source="media/quickstart-3d-scenes-studio/new-widget-gauge.png" alt-text="Screenshot of the New widget options in 3D Scenes Studio for the gauge widget." lightbox="media/quickstart-3d-scenes-studio/new-widget-gauge.png":::
+
+ 1. Next, create a widget with a link to a live camera stream of the arm.
+
+ 1. Select **Add widget**. From the **Widget library**, select the **Link** widget and then **Add widget**.
+
+ 1. In the **New widget** options, enter a **Label** of *Live arm camera*. For the **URL**, you can use the example URL *http://contoso.aws.armstreams.com/${PrimaryTwin.$dtId}*. (There's no live camera hosted at the URL for this sample, but the link represents where the video feed might be hosted in a real scenario.)
+
+ 1. Select **Create widget**.
+
+ :::image type="content" source="media/quickstart-3d-scenes-studio/new-widget-link.png" alt-text="Screenshot of the New widget options in 3D Scenes Studio for a link widget." lightbox="media/quickstart-3d-scenes-studio/new-widget-link.png":::
+
+1. The behavior options are now complete. Save the behavior by selecting **Create behavior**.
+
+ :::image type="content" source="media/quickstart-3d-scenes-studio/new-behavior-create.png" alt-text="Screenshot of the New behavior options in 3D Scenes Studio, highlighting Create behavior." lightbox="media/quickstart-3d-scenes-studio/new-behavior-create.png":::
+
+The *Packing Line Efficiency* behavior will now show up in the list of behaviors for the scene.
+
+## View scene
+
+So far, you've been working with 3D Scenes Studio in **Build** mode. Now, switch the mode to **View**.
++
+From the list of **Elements**, select the **Arm1** element that you created. The visualization will zoom in to show the visual element and display the behaviors you set up for it.
++
+## Apply behavior to additional elements
+
+Sometimes, an environment might contain multiple similar elements, which should all display similarly in the visualization (like the six different robot arms in this example). Now that you've created a behavior for one arm and confirmed what it looks like in the viewer, this section will show you how to quickly add the behavior to other arms so that they all display the same type of information in the viewer.
+
+1. Return to **Build** mode. Like you did in [Create a scene element](#create-a-scene-element), select a different arm in the visualization, and select **Create new element**.
+ :::image type="content" source="media/quickstart-3d-scenes-studio/new-arm-element-2.png" alt-text="Screenshot of the factory scene in 3D Scenes Studio. A different arm is highlighted with an option to create a new element." lightbox="media/quickstart-3d-scenes-studio/new-arm-element-2.png":::
+
+1. Select a **Primary twin** for the new element, then switch to the **Behaviors** tab.
+ :::image type="content" source="media/quickstart-3d-scenes-studio/new-element-details-2.png" alt-text="Screenshot of the New element options for Arm2 in 3D Scenes Studio." lightbox="media/quickstart-3d-scenes-studio/new-element-details-2.png":::
+
+1. Select **Add behavior**. Choose the **Packing Line Efficiency** behavior that you created in this quickstart.
+ :::image type="content" source="media/quickstart-3d-scenes-studio/new-element-behaviors.png" alt-text="Screenshot of the New element behavior options for Arm2 in 3D Scenes Studio." lightbox="media/quickstart-3d-scenes-studio/new-element-behaviors.png":::
+
+1. Select **Create element** to finish creating the new arm element.
+
+Switch to the **View** tab to see the behavior working on the new arm element. All the information you selected when [creating the behavior](#create-a-behavior) is now available for both of the arm elements in the scene.
++
+>[!TIP]
+>If you'd like, you can repeat the steps in this section to create elements for the remaining four arms, and apply the behavior to all of them to make the visualization complete.
+
+## Review and contextualize learnings
+
+This quickstart shows how to create an immersive dashboard for Azure Digital Twins data, to share with end users and increase access to important insights about your real world environment.
+
+In the quickstart, you created a sample 3D scene to represent a package distribution center with robotic arms that pick up packages. This visualization was connected to a digital twin graph, and you linked an arm in the visualization to its own specific digital twin that supplied backing data. You also created a visual behavior to display key information about that arm when viewing the full scene, including which box pickups the arm has missed in the last hour.
+
+In this quickstart, the sample models and twins for the factory scenario were quickly created for you, using the [Azure Digital Twins Data simulator](#generate-sample-models-and-twins). When using Azure Digital Twins with your own environment, you'll create your own [models](concepts-models.md) and [twins](concepts-twins-graph.md) to accurately describe the elements of your environment in detail. This quickstart also used the data simulator to simulate "live" data driving digital twin property updates when packages were missed. When using Azure Digital Twins with your own environment, [ingesting live data](concepts-data-ingress-egress.md) is a process you'll set up yourself according to your own environment sensors.
+
+## Clean up resources
+
+To clean up after this quickstart, choose which Azure Digital Twins resources you want to remove, based on what you want to do next.
+
+* If you plan to continue to the Azure Digital Twins tutorials, you can reuse the instance in this quickstart for those articles, and you don't need to remove it.
+
+
+* If you don't need your Azure Digital Twins instance anymore, you can delete it using the [Azure portal](https://portal.azure.com).
+
+ Navigate back to the instance's **Overview** page in the portal. (If you've already closed that tab, you can find the instance again by searching for its name in the Azure portal search bar and selecting it from the search results.)
+
+ Select **Delete** to delete the instance, including all of its models and twins.
+
+ :::image type="content" source="media/quickstart-azure-digital-twins-explorer/delete-instance.png" alt-text="Screenshot of the Overview page for an Azure Digital Twins instance in the Azure portal. The Delete button is highlighted.":::
+
+You can delete your storage resources by navigating to your storage account's **Overview** page in the [Azure portal](https://portal.azure.com), and selecting **Delete**. This will delete the storage account and the container inside it, along with the 3D scene files that were in the container.
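+
+If you prefer to clean up the storage resources from the command line, a minimal Azure CLI sketch (using placeholders for your own resource names) is:
+
+```azurecli
+# Deletes the storage account, its container, and the 3D scene files inside it (you'll be asked to confirm)
+az storage account delete --name <your-storage-account> --resource-group <your-resource-group>
+```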
++
+You may also want to delete the downloaded sample 3D file from your local machine.
+
+## Next steps
+
+Next, continue on to the Azure Digital Twins tutorials to build out your own Azure Digital Twins environment.
+
+> [!div class="nextstepaction"]
+> [Code a client app](tutorial-code.md)
digital-twins Quickstart Azure Digital Twins Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/quickstart-azure-digital-twins-explorer.md
Title: Quickstart - Get started with Azure Digital Twins Explorer
-description: Learn how to use the Azure Digital Twins Explorer by following this demo, where you'll be using models to instantiate twins and interacting with the twin graph.
+description: Learn how to use Azure Digital Twins Explorer by following this demo, where you'll use models to instantiate twins and interact with the twin graph.
Last updated 02/25/2022 -+ # Quickstart - Get started with a sample scenario in Azure Digital Twins Explorer
The first step in working with Azure Digital Twins is to create an Azure Digital
The rest of this section walks you through the instance creation.
-### Create an Azure Digital Twins instance
--
-3. Fill in the fields on the **Basics** tab of setup, including your Subscription, Resource group, a Resource name for your new instance, and Region. Check the **Assign Azure Digital Twins Data Owner Role** box to give yourself permissions to manage data in the instance.
-
- :::image type="content" source="media/quickstart-azure-digital-twins-explorer/create-azure-digital-twins-basics.png" alt-text="Screenshot of the Create Resource process for Azure Digital Twins in the Azure portal. The described values are filled in.":::
-
- >[!NOTE]
- > If the Assign Azure Digital Twins Data Owner Role box is greyed out, it means you don't have permissions in your Azure subscription to manage user access to resources. You can continue creating the instance in this section, and then should have someone with the necessary permissions [assign you this role on the instance](how-to-set-up-instance-portal.md#assign-the-role-using-azure-identity-management-iam) before completing the rest of this quickstart.
- >
- > Common roles that meet this requirement are **Owner**, **Account admin**, or the combination of **User Access Administrator** and **Contributor**.
-
-4. Select **Review + Create** to finish creating your instance.
-
-5. You will see a summary page showing the details you've entered. Confirm and create the instance by selecting **Create**.
-
-This will take you to an Overview page tracking the deployment status of the instance.
--
-Wait for the page to say that your deployment is complete.
### Open instance in Azure Digital Twins Explorer
In this quickstart, you made the temperature update manually. It's common in Azu
To clean up after this quickstart, choose which Azure Digital Twins resources you want to remove, based on what you want to do next.
-* If you plan to continue to the Azure Digital Twins tutorials, you can reuse the instance in this quickstart for those articles, and you don't need to remove it.
+* If you plan to continue through the Azure Digital Twins quickstarts and tutorials, you can reuse the instance in this quickstart for those articles, and you don't need to remove it.
[!INCLUDE [digital-twins-cleanup-clear-instance.md](../../includes/digital-twins-cleanup-clear-instance.md)]
You may also want to delete the sample project folder from your local machine.
## Next steps
-Next, continue on to the Azure Digital Twins tutorials to build out your own Azure Digital Twins scenario and interaction tools.
+Move on to the next quickstart to visualize an Azure Digital Twins scenario in a 3D environment.
> [!div class="nextstepaction"]
-> [Code a client app](tutorial-code.md)
+> [Get started with 3D Scenes Studio](quickstart-3d-scenes-studio.md)
dms Known Issues Mongo Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-mongo-cosmos-db.md
-+
+- "seo-lt-2019"
+- kr2b-contr-experiment
Previously updated : 02/27/2020 Last updated : 05/18/2022
-# Known issues/migration limitations with migrations from MongoDB to Azure Cosmos DB's API for MongoDB
+# Known issues with migrations from MongoDB to Azure Cosmos DB's API for MongoDB
-Known issues and limitations associated with migrations from MongoDB to Cosmos DB's API for MongoDB are described in the following sections.
+The following sections describe known issues and limitations associated with migrations from MongoDB to Cosmos DB's API for MongoDB.
-## Migration fails as a result of using the incorrect SSL Cert
+## Migration fails as a result of using the incorrect TLS/SSL Cert
-* **Symptom**: This issue is apparent when a user cannot connect to the MongoDB source server. Despite having all firewall ports open, the user still can't connect.
+This issue is apparent when a user can't connect to the MongoDB source server. Despite having all firewall ports open, the user still can't connect.
| Cause | Resolution | | - | - |
-| Using a self-signed certificate in Azure Database Migration Service may lead to the migration failing because of the incorrect SSL Cert. The Error message may include "The remote certificate is invalid according to the validation procedure." | Use a genuine certificate from CA. Self-signed certs are generally only used in internal tests. When you install a genuine cert from a CA authority, you can then use SSL in Azure Database Migration Service without issue (connections to Cosmos DB use SSL over Mongo API).<br><br> |
+| Using a self-signed certificate in Azure Database Migration Service might lead to the migration failing because of the incorrect TLS/SSL certificate. The error message might include "The remote certificate is invalid according to the validation procedure." | Use a genuine certificate from a CA. Connections to Cosmos DB use TLS over the Mongo API. Self-signed certs are generally only used in internal tests. When you install a genuine cert from a CA, you can then use TLS in Azure Database Migration Service without issue. |
## Unable to get the list of databases to map in DMS
-* **Symptom**: Unable to get DB list on the **Database setting** blade when using **Data from Azure Storage** mode on the **Select source** blade.
+Unable to get the database list in the **Database setting** area when using **Data from Azure Storage** mode in the **Select source** area.
| Cause | Resolution | | - | - |
-| The storage account connection string is missing the SAS info and thus cannot be authenticated. | Create the SAS on the blob container in Storage Explorer and use the URL with container SAS info as the source detail connection string.<br><br> |
+| The storage account connection string is missing the shared access signature (SAS) information and can't be authenticated. | Create the SAS on the blob container in Storage Explorer and use the URL with container SAS info as the source detail connection string. |
## Using an unsupported version of the database
-* **Symptom**: The migration fails.
+The migration fails.
| Cause | Resolution | | - | - |
-| You attempt to migrate to Azure Cosmos DB from an unsupported version of MongoDB. | As new versions of MongoDB are released, they are tested to ensure compatibility with Azure Database Migration Service, and the service is being updated periodically to accept the latest version(s). If there is an immediate need to migrate, as a workaround you can export the databases/collections to Azure Storage and then point the source to the resulting dump. Create the SAS on the blob container in Storage Explorer, and then use the URL with container SAS info as the source detail connection string.<br><br> |
+| You attempt to migrate to Azure Cosmos DB from an unsupported version of MongoDB. | As new versions of MongoDB are released, they're tested to ensure compatibility with Azure Database Migration Service. The service is being updated periodically to accept the latest versions. If there's an immediate need to migrate, as a workaround you can export the databases or collections to Azure Storage and then point the source to the resulting dump. Create the SAS on the blob container in Storage Explorer, and then use the URL with container SAS information as the source detail connection string. |
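+
+If you'd rather generate the container SAS from the command line instead of Storage Explorer, one possible Azure CLI approach is sketched below. The account, container, and expiry values are placeholders to replace with your own.
+
+```azurecli
+# Generates a read/list SAS token for the blob container; append it to the container URL as a query string
+az storage container generate-sas --account-name <your-storage-account> --name <your-container> --permissions rl --expiry 2022-12-31T00:00Z --https-only --output tsv
+```
+
+The resulting source connection string would then look like `https://<your-storage-account>.blob.core.windows.net/<your-container>?<sas-token>`.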
## Next steps
dms Migrate Mysql To Azure Mysql Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/migrate-mysql-to-azure-mysql-powershell.md
Title: "PowerShell: Run offline migration from MySQL database to Azure Database for MySQL using DMS"
+ Title: "PowerShell: Run offline migration from MySQL database to Azure Database for MySQL using DMS"
description: Learn to migrate an on-premise MySQL database to Azure Database for MySQL by using Azure Database Migration Service through PowerShell script.
In this article, you migrate a MySQL database restored to an on-premises instanc
> [!NOTE]
-> Currently it is not possible to run complete database migration using the Az.DataMigration module. In the meantime, the sample PowerShell script is provided "as-is" that uses the [DMS REST API](/rest/api/datamigration/tasks/get) and allows you to automate migration. This script will be modified or deprecated, once official support is added in the Az.DataMigration module and Azure CLI.
+> Currently it is not possible to run complete database migration using the Az.DataMigration module. In the meantime, the sample PowerShell script is provided "as-is" that uses the [DMS REST API](/rest/api/datamigration/tasks/get) and allows you to automate migration. This script will be modified or deprecated, once official support is added in the Az.DataMigration module and Azure CLI.
> [!NOTE] > Amazon Relational Database Service (RDS) for MySQL and Amazon Aurora (MySQL-based) are also supported as sources for migration.
To complete these steps, you need:
* Have an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free). * Have an on-premises MySQL database with version 5.6 or above. If not, then download and install [MySQL community edition](https://dev.mysql.com/downloads/mysql/) 5.6 or above.
-* [Create an instance in Azure Database for MySQL](../mysql/quickstart-create-mysql-server-database-using-azure-portal.md). Refer to the article [Use MySQL Workbench to connect and query data](../mysql/connect-workbench.md) for details about how to connect and create a database using the Workbench application. The Azure Database for MySQL version should be equal to or higher than the on-premises MySQL version . For example, MySQL 5.7 can migrate to Azure Database for MySQL 5.7 or upgraded to 8.
+* [Create an instance in Azure Database for MySQL](../mysql/quickstart-create-mysql-server-database-using-azure-portal.md). Refer to the article [Use MySQL Workbench to connect and query data](../mysql/connect-workbench.md) for details about how to connect and create a database using the Workbench application. The Azure Database for MySQL version should be equal to or higher than the on-premises MySQL version. For example, MySQL 5.7 can migrate to Azure Database for MySQL 5.7 or be upgraded to 8.
* Create a Microsoft Azure Virtual Network for Azure Database Migration Service by using Azure Resource Manager deployment model, which provides site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md). For more information about creating a virtual network, see the [Virtual Network Documentation](../virtual-network/index.yml), and especially the quickstart articles with step-by-step details. > [!NOTE]
To complete these steps, you need:
* Azure Database for MySQL supports only InnoDB tables. To convert MyISAM tables to InnoDB, see the article [Converting Tables from MyISAM to InnoDB](https://dev.mysql.com/doc/refman/5.7/en/converting-tables-to-innodb.html) * The user must have the privileges to read data on the source database.
-* The guide uses PowerShell v7.1 with PSEdition Core which can be installed as per the [installation guide](/powershell/scripting/install/installing-powershell?view=powershell-7.1&preserve-view=true)
+* The guide uses PowerShell v7.2, which can be installed as per the [installation guide](/powershell/scripting/install/installing-powershell)
* Download and install following modules from the PowerShell Gallery by using [Install-Module PowerShell cmdlet](/powershell/module/powershellget/Install-Module); be sure to open the PowerShell command window using run as an Administrator: * Az.Resources * Az.Network
Execute the following script in MySQL Workbench on the target database to extrac
```sql SELECT
- SchemaName,
+ SchemaName,
GROUP_CONCAT(DropQuery SEPARATOR ';\n') as DropQuery, Concat('DELIMITER $$ \n\n', GROUP_CONCAT(AddQuery SEPARATOR '$$\n'), '$$\n\nDELIMITER ;') as AddQuery FROM (
-SELECT
- TRIGGER_SCHEMA as SchemaName,
- Concat('DROP TRIGGER `', TRIGGER_NAME, "`") as DropQuery,
- Concat('CREATE TRIGGER `', TRIGGER_NAME, '` ', ACTION_TIMING, ' ', EVENT_MANIPULATION,
- '\nON `', EVENT_OBJECT_TABLE, '`\n' , 'FOR EACH ', ACTION_ORIENTATION, ' ',
+SELECT
+ TRIGGER_SCHEMA as SchemaName,
+ Concat('DROP TRIGGER `', TRIGGER_NAME, "`") as DropQuery,
+ Concat('CREATE TRIGGER `', TRIGGER_NAME, '` ', ACTION_TIMING, ' ', EVENT_MANIPULATION,
+ '\nON `', EVENT_OBJECT_TABLE, '`\n' , 'FOR EACH ', ACTION_ORIENTATION, ' ',
ACTION_STATEMENT) as AddQuery
-FROM
- INFORMATION_SCHEMA.TRIGGERS
+FROM
+ INFORMATION_SCHEMA.TRIGGERS
ORDER BY EVENT_OBJECT_SCHEMA, EVENT_OBJECT_TABLE, ACTION_TIMING, EVENT_MANIPULATION, ACTION_ORDER ASC ) AS Queries GROUP BY SchemaName
function LogMessage([string] $Message, [bool] $IsProcessing = $false) {
} else { Write-Host "$(Get-Date -Format "yyyy-MM-dd HH:mm:ss"): $Message" -ForegroundColor Green
- }
+ }
} ```
else { LogMessage -Message "Resource group $ResourceGroupName exists." }
You can create new instance of Azure Database Migration Service by using the [New-AzDataMigrationService](/powershell/module/az.datamigration/new-azdatamigrationservice) command. This command expects the following required parameters: * *Azure Resource Group name*. You can use [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) command to create Azure Resource group as previously shown and provide its name as a parameter.
-* *Service name*. String that corresponds to the desired unique service name for Azure Database Migration Service
+* *Service name*. String that corresponds to the desired unique service name for Azure Database Migration Service
* *Location*. Specifies the location of the service. Specify an Azure data center location, such as West US or Southeast Asia * *Sku*. This parameter corresponds to DMS Sku name. The currently supported Sku name are *Standard_1vCore*, *Standard_2vCores*, *Standard_4vCores*, *Premium_4vCores*.
-* *Virtual Subnet Identifier*. You can use [Get-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/get-azvirtualnetworksubnetconfig) command to get the information of a subnet.
+* *Virtual Subnet Identifier*. You can use [Get-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/get-azvirtualnetworksubnetconfig) command to get the information of a subnet.
The following script expects that the *myVirtualNetwork* virtual network exists with a subnet named *default* and then creates a Database Migration Service with the name *myDmService* under the resource group created in **Step 3** and in the same region.
$dmsService = Get-AzResource -ResourceId $dmsServiceResourceId -ErrorAction Sile
# Create Azure DMS service if not existing # Possible values for SKU currently are Standard_1vCore,Standard_2vCores,Standard_4vCores,Premium_4vCores
-if (-not($dmsService)) {
+if (-not($dmsService)) {
$virtualNetwork = Get-AzVirtualNetwork -ResourceGroupName $ResourceGroupName -Name $VirtualNetworkName if (-not ($virtualNetwork)) { throw "ERROR: Virtual Network $VirtualNetworkName does not exists" }
if (-not($dmsService)) {
-Location $resourceGroup.Location ` -Sku Premium_4vCores ` -VirtualSubnetId $Subnet.Id
-
+ $dmsService = Get-AzResource -ResourceId $dmsServiceResourceId LogMessage -Message "Created Azure Data Migration Service - $($dmsService.ResourceId)." }
else { LogMessage -Message "Azure DMS project $projectName exists." }
After creating the migration project, you will create the database connection information. This connection information will be used to connect to the source and target servers during the migration process.
-The following script takes the server name, user name and password for the source and target MySQL instances and creates the connection information objects. The script prompts the user to enter the password for the source and target MySQL instances. For silent scripts, the credentials can be fetched from Azure Key Vault.
+The following script takes the server name, user name and password for the source and target MySQL instances and creates the connection information objects. The script prompts the user to enter the password for the source and target MySQL instances. For silent scripts, the credentials can be fetched from Azure Key Vault.
```powershell # Initialize the source and target database server connections
function InitConnection(
"encryptConnection" = $true; "trustServerCertificate" = $true; "additionalSettings" = "";
- "type" = "MySqlConnectionInfo"
+ "type" = "MySqlConnectionInfo"
} $connectionInfo.dataSource = $ServerName;
LogMessage -Message "Source and target connection object initialization complete
## Extract the list of table names from the target database
-Database table list can be extracted using a migration task and connection information. The table list will be extracted from both the source database and target database so that proper mapping and validation can be done.
+Database table list can be extracted using a migration task and connection information. The table list will be extracted from both the source database and target database so that proper mapping and validation can be done.
-The following script takes the names of the source and target databases and then extracts the table list from the databases using the *GetUserTablesMySql* migration task.
+The following script takes the names of the source and target databases and then extracts the table list from the databases using the *GetUserTablesMySql* migration task.
```powershell # Run scenario to get the tables from the target database to build
The following script takes the names of the source and target databases and then
[string] $TargetDatabaseName = "migtargetdb" [string] $SourceDatabaseName = "migsourcedb"
-function RunScenario([object] $MigrationService,
- [object] $MigrationProject,
- [string] $ScenarioTaskName,
- [object] $TaskProperties,
+function RunScenario([object] $MigrationService,
+ [object] $MigrationProject,
+ [string] $ScenarioTaskName,
+ [object] $TaskProperties,
[bool] $WaitForScenario = $true) { # Check if the scenario task already exists, if so remove it LogMessage -Message "Removing scenario if already exists..." -IsProcessing $true
function RunScenario([object] $MigrationService,
-ResourceId "/subscriptions/$($global:currentSubscriptionId)/resourceGroups/$($MigrationService.ResourceGroupName)/providers/Microsoft.DataMigration/services/$($MigrationService.Name)/projects/$($MigrationProject.Name)/tasks/$($ScenarioTaskName)" ` -Properties $TaskProperties ` -Force | Out-Null;
-
+ LogMessage -Message "Waiting for $ScenarioTaskName scenario to complete..." -IsProcessing $true if ($WaitForScenario) { $progressCounter = 0;
function RunScenario([object] $MigrationService,
Write-Progress -Activity "Scenario Run $ScenarioTaskName (Marquee Progress Bar)" ` -Status $scenarioTask.ProjectTask.Properties.State ` -PercentComplete $progressCounter
-
+ $progressCounter += 10; if ($progressCounter -gt 100) { $progressCounter = 10 } }
function RunScenario([object] $MigrationService,
Write-Progress -Activity "Scenario Run $ScenarioTaskName" ` -Status $scenarioTask.ProjectTask.Properties.State ` -Completed
-
- # Now get it using REST APIs so we can expand the output
- LogMessage -Message "Getting expanded task results ..." -IsProcessing $true
+
+ # Now get it using REST APIs so we can expand the output
+ LogMessage -Message "Getting expanded task results ..." -IsProcessing $true
$psToken = (Get-AzAccessToken -ResourceUrl https://management.azure.com).Token; $token = ConvertTo-SecureString -String $psToken -AsPlainText -Force; $taskResource = Invoke-RestMethod `
function RunScenario([object] $MigrationService,
-ContentType "application/json" ` -Authentication Bearer ` -Token $token;
-
+ $taskResource.properties; }
-# create the get table task properties by initializing the connection and
+# create the get table task properties by initializing the connection and
# database name $getTablesTaskProperties = @{ "input" = @{
LogMessage -Message "List of tables from the source database acquired."
As part of configuring the migration task, you will create a mapping between the source and target tables. The mapping is at the table name level but the assumption is that the table structure (column count, column names, data types etc.) of the mapped tables is exactly the same.
-The following script creates a mapping based on the target and source table list extracted in **Step 7**. For partial data load, the user can provide a list of table to filter out the tables. If no user input is provided, then all target tables are mapped. The script also checks if a table with the same name exists in the source or not. If table name does not exists in the source, then the target table is ignored for migration.
+The following script creates a mapping based on the target and source table list extracted in **Step 7**. For a partial data load, the user can provide a list of tables to filter which tables are migrated. If no user input is provided, then all target tables are mapped. The script also checks whether a table with the same name exists in the source. If the table name doesn't exist in the source, the target table is ignored for migration.
```powershell # Create the source to target table map
foreach ($srcTable in $sourceTables) {
$tableMappingError = $true Write-Host "TABLE MAPPING ERROR: $($targetTables.name) does not exist in target." -ForegroundColor Red continue;
- }
+ }
$tableMap.Add("$($SourceDatabaseName).$tableName", "$($TargetDatabaseName).$tableName");
- }
+ }
} # In case of any table mapping errors identified, throw an error and stop the process
$migrationResult = RunScenario -MigrationService $dmsService `
LogMessage -Message "Migration completed with status - $($migrationResult.state)" #Checking for any errors or warnings captured by the task during migration
-$dbLevelResult = $migrationResult.output | Where-Object { $_.resultType -eq "DatabaseLevelOutput" }
+$dbLevelResult = $migrationResult.output | Where-Object { $_.resultType -eq "DatabaseLevelOutput" }
$migrationLevelResult = $migrationResult.output | Where-Object { $_.resultType -eq "MigrationLevelOutput" } if ($dbLevelResult.exceptionsAndWarnings) { Write-Host "Following database errors were captured: $($dbLevelResult.exceptionsAndWarnings)" -ForegroundColor Red
dns Private Dns Getstarted Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-getstarted-cli.md
Previously updated : 10/20/2020 Last updated : 05/23/2022 #Customer intent: As an experienced network administrator, I want to create an Azure private DNS zone, so I can resolve host names on my private virtual networks.
az vm create \
--nsg NSG01 \ --nsg-rule RDP \ --image win2016datacenter
+```
+```azurecli
az vm create \ -n myVM02 \ --admin-username AzureAdmin \
az vm create \
--image win2016datacenter ```
-This will take a few minutes to complete.
+Creating a virtual machine will take a few minutes to complete.
## Create an additional DNS record
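The change summary above elides the actual commands in this section. As a non-authoritative sketch, the **db** A record used later in this quickstart could be added to the private zone with the Azure CLI; the resource group, zone, and record names come from this quickstart, while the IP address is an assumed example:

```azurecli
# Sketch: add an A record named "db" to the private zone (IP address is an assumed example)
az network private-dns record-set a add-record \
  --resource-group MyAzureResourceGroup \
  --zone-name private.contoso.com \
  --record-set-name db \
  --ipv4-address 10.2.0.4
```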
Repeat for myVM02.
ping myVM01.private.contoso.com ```
- You should see output that looks similar to this:
+ You should see output similar to the following:
```output PS C:\> ping myvm01.private.contoso.com
Repeat for myVM02.
ping db.private.contoso.com ```
- You should see output that looks similar to this:
+ You should see output similar to the following:
```output PS C:\> ping db.private.contoso.com
dns Private Dns Getstarted Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-getstarted-portal.md
description: In this quickstart, you create and test a private DNS zone and reco
Previously updated : 10/20/2020 Last updated : 05/18/2022
The following example creates a DNS zone called **private.contoso.com** in a res
A DNS zone contains the DNS entries for a domain. To start hosting your domain in Azure DNS, you create a DNS zone for that domain name.
-![Private DNS zones search](media/private-dns-portal/search-private-dns.png)
1. On the portal search bar, type **private dns zones** in the search text box and press **Enter**. 1. Select **Private DNS zone**.
-2. Select **Create private dns zone**.
+1. Select **Create private dns zone**.
1. On the **Create Private DNS zone** page, type or select the following values:
In this section you'll need to replace the following parameters in the steps wit
| **\<resource-group-name>** | MyAzureResourceGroup (Select existing resource group) | | **\<virtual-network-name>** | MyAzureVNet | | **\<region-name>** | West Central US |
-| **\<IPv4-address-space>** | 10.2.0.0\16 |
+| **\<IPv4-address-space>** | 10.2.0.0/16 |
| **\<subnet-name>** | MyAzureSubnet |
-| **\<subnet-address-range>** | 10.2.0.0\24 |
+| **\<subnet-address-range>** | 10.2.0.0/24 |
[!INCLUDE [virtual-networks-create-new](../../includes/virtual-networks-create-new.md)]
In this section you'll need to replace the following parameters in the steps wit
To link the private DNS zone to a virtual network, you create a virtual network link.
-![Add virtual network link](media/private-dns-portal/dns-add-virtual-network-link.png)
+ 1. Open the **MyAzureResourceGroup** resource group and select the **private.contoso.com** private zone.
-2. On the left pane, select **Virtual network links**.
-3. Select **Add**.
-4. Type **myLink** for the **Link name**.
-5. For **Virtual network**, select **myAzureVNet**.
-6. Select the **Enable auto registration** check box.
-7. Select **OK**.
+1. On the left pane, select **Virtual network links**.
+1. Select **Add**.
+1. Type **myLink** for the **Link name**.
+1. For **Virtual network**, select **myAzureVNet**.
+1. Select the **Enable auto registration** check box.
+1. Select **OK**.
## Create the test virtual machines
Now, create two virtual machines so you can test your private DNS zone:
1. Type **myVM01** for the name of the virtual machine. 1. Select **West Central US** for the **Region**. 1. Enter a name for the administrator user name.
-2. Enter a password and confirm the password.
-5. For **Public inbound ports**, select **Allow selected ports**, and then select **RDP (3389)** for **Select inbound ports**.
-10. Accept the other defaults for the page and then click **Next: Disks >**.
-11. Accept the defaults on the **Disks** page, then click **Next: Networking >**.
+1. Enter a password and confirm the password.
+1. For **Public inbound ports**, select **Allow selected ports**, and then select **RDP (3389)** for **Select inbound ports**.
+1. Accept the other defaults for the page and then click **Next: Disks >**.
+1. Accept the defaults on the **Disks** page, then click **Next: Networking >**.
1. Make sure that **myAzureVNet** is selected for the virtual network. 1. Accept the other defaults for the page, and then click **Next: Management >**.
-2. For **Boot diagnostics**, select **Off**, accept the other defaults, and then select **Review + create**.
+1. For **Boot diagnostics**, select **Disable**, accept the other defaults, and then select **Review + create**.
1. Review the settings and then click **Create**. Repeat these steps and create another virtual machine named **myVM02**.
It will take a few minutes for both virtual machines to complete.
The following example creates a record with the relative name **db** in the DNS Zone **private.contoso.com**, in resource group **MyAzureResourceGroup**. The fully qualified name of the record set is **db.private.contoso.com**. The record type is "A", with the IP address of **myVM01**. 1. Open the **MyAzureResourceGroup** resource group and select the **private.contoso.com** private zone.
-2. Select **+ Record set**.
-3. For **Name**, type **db**.
-4. For **IP Address**, type the IP address you see for **myVM01**. This should be auto registered when the virtual machine started.
-5. Select **OK**.
+1. Select **+ Record set**.
+1. For **Name**, type **db**.
+1. For **IP Address**, type the IP address you see for **myVM01**. This address should have been auto-registered when the virtual machine started.
+1. Select **OK**.
+ ## Test the private zone
Now you can test the name resolution for your **private.contoso.com** private zo
You can use the ping command to test name resolution. So, configure the firewall on both virtual machines to allow inbound ICMP packets. 1. Connect to myVM01, and open a Windows PowerShell window with administrator privileges.
-2. Run the following command:
+1. Run the following command:
```powershell New-NetFirewallRule -DisplayName "Allow ICMPv4-In" -Protocol ICMPv4
Repeat for myVM02.
Minimum = 0ms, Maximum = 1ms, Average = 0ms PS C:\> ```
-2. Now ping the **db** name you created previously:
+1. Now ping the **db** name you created previously:
``` ping db.private.contoso.com ```
Repeat for myVM02.
When no longer needed, delete the **MyAzureResourceGroup** resource group to delete the resources created in this quickstart. - ## Next steps > [!div class="nextstepaction"]
dns Private Dns Getstarted Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-getstarted-powershell.md
description: In this quickstart, you learn how to create and manage your first p
Previously updated : 10/20/2020 Last updated : 05/23/2022
If you prefer, you can complete this quickstart using [Azure CLI](private-dns-ge
First, create a resource group to contain the DNS zone:
-```azurepowershell
+```azurepowershell-interactive
New-AzResourceGroup -name MyAzureResourceGroup -location "eastus" ```
A DNS zone is created by using the `New-AzPrivateDnsZone` cmdlet.
The following example creates a virtual network named **myAzureVNet**. Then it creates a DNS zone named **private.contoso.com** in the **MyAzureResourceGroup** resource group, links the DNS zone to the **MyAzureVnet** virtual network, and enables automatic registration.
-```azurepowershell
+```azurepowershell-interactive
Install-Module -Name Az.PrivateDns -force $backendSubnet = New-AzVirtualNetworkSubnetConfig -Name backendSubnet -AddressPrefix "10.2.0.0/24"
If you want to create a zone just for name resolution (no automatic hostname reg
By omitting the zone name from `Get-AzPrivateDnsZone`, you can enumerate all zones in a resource group. This operation returns an array of zone objects.
-```azurepowershell
+```azurepowershell-interactive
$zones = Get-AzPrivateDnsZone -ResourceGroupName MyAzureResourceGroup $zones ``` By omitting both the zone name and the resource group name from `Get-AzPrivateDnsZone`, you can enumerate all zones in the Azure subscription.
-```azurepowershell
+```azurepowershell-interactive
$zones = Get-AzPrivateDnsZone $zones ```
$zones
Now, create two virtual machines so you can test your private DNS zone:
-```azurepowershell
+```azurepowershell-interactive
New-AzVm ` -ResourceGroupName "myAzureResourceGroup" ` -Name "myVM01" `
New-AzVm `
-OpenPorts 3389 ```
-This will take a few minutes to complete.
+Creating a virtual machine will take a few minutes to complete.
## Create an additional DNS record You create record sets by using the `New-AzPrivateDnsRecordSet` cmdlet. The following example creates a record with the relative name **db** in the DNS Zone **private.contoso.com**, in resource group **MyAzureResourceGroup**. The fully qualified name of the record set is **db.private.contoso.com**. The record type is "A", with IP address "10.2.0.4", and the TTL is 3600 seconds.
-```azurepowershell
+```azurepowershell-interactive
New-AzPrivateDnsRecordSet -Name db -RecordType A -ZoneName private.contoso.com ` -ResourceGroupName MyAzureResourceGroup -Ttl 3600 ` -PrivateDnsRecords (New-AzPrivateDnsRecordConfig -IPv4Address "10.2.0.4")
New-AzPrivateDnsRecordSet -Name db -RecordType A -ZoneName private.contoso.com `
To list the DNS records in your zone, run:
-```azurepowershell
+```azurepowershell-interactive
Get-AzPrivateDnsRecordSet -ZoneName private.contoso.com -ResourceGroupName MyAzureResourceGroup ```
Now you can test the name resolution for your **private.contoso.com** private zo
You can use the ping command to test name resolution. So, configure the firewall on both virtual machines to allow inbound ICMP packets.
-1. Connect to myVM01, and open a Windows PowerShell window with administrator privileges.
-2. Run the following command:
+1. Connect to myVM01 using the username and password you used when creating the VM.
+1. Open a Windows PowerShell window with administrator privileges.
+1. Run the following command:
```powershell New-NetFirewallRule -DisplayName "Allow ICMPv4-In" -Protocol ICMPv4
Repeat for myVM02.
ping myVM01.private.contoso.com ```
- You should see output that looks similar to this:
+ You should see output similar to the following:
``` PS C:\> ping myvm01.private.contoso.com
Repeat for myVM02.
ping db.private.contoso.com ```
- You should see output that looks similar to this:
+ You should see output similar to the following:
``` PS C:\> ping db.private.contoso.com
Repeat for myVM02.
When no longer needed, delete the **MyAzureResourceGroup** resource group to delete the resources created in this article.
-```azurepowershell
+```azurepowershell-interactive
Remove-AzResourceGroup -Name MyAzureResourceGroup ```
event-grid Communication Services Telephony Sms Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/communication-services-telephony-sms-events.md
Title: Azure Communication Services - Telephony and SMS events description: This article describes how to use Azure Communication Services as an Event Grid event source for telephony and SMS Events. Previously updated : 10/15/2021 Last updated : 05/19/2022
This section contains an example of what that data would look like for each even
"eventTime": "2020-09-18T00:22:20Z" }] ```+
+> [!NOTE]
+> Possible values for `DeliveryStatus` are `Delivered` and `Failed`.
+ ### Microsoft.Communication.SMSReceived event ```json
event-grid Consume Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/consume-private-endpoints.md
Title: Deliver events using private link service description: This article describes how to work around the limitation of not able to deliver events using private link service. Previously updated : 07/01/2021 Last updated : 05/17/2022 # Deliver events using private link service
To deliver events to Service Bus queues or topics in your Service Bus namespace
1. [Enable the **Allow trusted Microsoft services to bypass this firewall** setting on your Service Bus namespace](../service-bus-messaging/service-bus-service-endpoints.md#trusted-microsoft-services). 1. [Configure the event subscription](managed-service-identity.md) that uses a Service Bus queue or topic as an endpoint to use the system-assigned identity.
-## Deliver events to Storage
+## Deliver events to Storage using managed identity
To deliver events to Storage queues using managed identity, follow these steps: 1. Enable system-assigned identity: [system topics](enable-identity-system-topics.md), [custom topics, and domains](enable-identity-custom-topics-domains.md).
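As a rough, non-authoritative sketch of the final step, an event subscription that delivers to a Storage queue with the system-assigned identity might look like the following; all resource IDs are placeholders, and the `--delivery-identity*` parameter names are assumptions that should be verified with `az eventgrid event-subscription create --help`:

```azurecli
# Sketch: subscribe a topic to a Storage queue, delivering events with the system-assigned identity
az eventgrid event-subscription create \
  --name deliver-to-queue \
  --source-resource-id "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.EventGrid/topics/<topic>" \
  --delivery-identity systemassigned \
  --delivery-identity-endpoint-type storagequeue \
  --delivery-identity-endpoint "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>/queueservices/default/queues/<queue>"
```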
event-grid Handler Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/handler-functions.md
Title: Use a function in Azure as an event handler for Azure Event Grid events description: Describes how you can use functions created in and hosted by Azure Functions as event handlers for Event Grid events. Previously updated : 03/15/2021 Last updated : 05/23/2022 # Use a function as an event handler for Event Grid events
We recommend that you use the first approach (Event Grid trigger) as it has the
- Event Grid automatically validates Event Grid triggers. With generic HTTP triggers, you must implement the [validation response](webhook-event-delivery.md) yourself. - Event Grid automatically adjusts the rate at which events are delivered to a function triggered by an Event Grid event based on the perceived rate at which the function can process events. This rate match feature averts delivery errors that stem from the inability of a function to process events as the function's event processing rate can vary over time. To improve efficiency at high throughput, enable batching on the event subscription. For more information, see [Enable batching](#enable-batching).
+> [!NOTE]
+> - When you add an event subscription using an Azure function, Event Grid fetches the access key for the target function using the Event Grid service principal's credentials. The permissions are granted to Event Grid when you register the Event Grid resource provider in your Azure subscription.
+> - If you protect your Azure function with an **Azure Active Directory** application, you'll have to take the generic webhook approach using the HTTP trigger. Use the Azure function endpoint as a webhook URL when adding the subscription.
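For illustration only (the resource names and IDs below are placeholders, not values from this article), a subscription that targets a function exposed through the Event Grid trigger can be created with the Azure CLI by passing the function's resource ID as the endpoint:

```azurecli
# Sketch: create an event subscription that delivers to an Azure function (Event Grid trigger)
az eventgrid event-subscription create \
  --name function-subscription \
  --source-resource-id "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>" \
  --endpoint-type azurefunction \
  --endpoint "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Web/sites/<function-app>/functions/<function-name>"
```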
+++ ## Tutorials |Title |Description |
event-grid Security Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/security-authentication.md
Managed System Identity <br/>&<br/> Role-based access control | <p>Event Hubs</p
|Bearer token authentication with Azure AD protected webhook | Webhook | See the [Authenticate event delivery to webhook endpoints](#authenticate-event-delivery-to-webhook-endpoints) section for details. | Client secret as a query parameter | Webhook | See the [Using client secret as a query parameter](#using-client-secret-as-a-query-parameter) section for details. |
+> [!NOTE]
+> If you protect your Azure function with an Azure Active Directory app, you'll have to take the generic webhook approach using the HTTP trigger. Use the Azure function endpoint as a webhook URL when adding the subscription.
+ ## Use system-assigned identities for event delivery You can enable a system-assigned managed identity for a topic or domain and use the identity to forward events to supported destinations such as Service Bus queues and topics, event hubs, and storage accounts.
event-grid Troubleshoot Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/troubleshoot-issues.md
Title: Troubleshoot Event Grid issues description: This article provides different ways of troubleshooting Azure Event Grid issues Previously updated : 06/10/2021 Last updated : 05/17/2022 # Troubleshoot Azure Event Grid issues
az eventgrid event-subscription create --name <event-grid-subscription-name> \
--delivery-attribute-mapping Diagnostic-Id dynamic traceparent ```
+Azure Functions supports [distributed tracing with Azure Monitor](../azure-monitor/app/azure-functions-supported-features.md), which includes built-in tracing of executions and bindings, performance monitoring, and more.
+
+[Microsoft.Azure.WebJobs.Extensions.EventGrid](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.EventGrid/) package version 3.1.0 or later enables correlation for CloudEvents between producer calls and Functions Event Grid trigger executions. For more information, see [Distributed tracing with Azure Functions and Event Grid triggers](https://devblogs.microsoft.com/azure-sdk/distributed-tracing-with-azure-functions-event-grid-triggers/).
+ ### Sample See the [Line Counter sample](/samples/azure/azure-sdk-for-net/line-counter/). This sample app illustrates using Storage, Event Hubs, and Event Grid clients along with ASP.NET Core integration, distributed tracing, and hosted services. It allows users to upload a file to a blob, which triggers an Event Hubs event containing the file name. The Event Hubs Processor receives the event, and then the app downloads the blob and counts the number of lines in the file. The app displays a link to a page containing the line count. When the link is clicked, a CloudEvent containing the name of the file is published using Event Grid.
event-hubs Event Hubs Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-features.md
Title: Overview of features - Azure Event Hubs | Microsoft Docs description: This article provides details about features and terminology of Azure Event Hubs. Previously updated : 01/24/2022+ Last updated : 05/11/2022 # Features and terminology in Azure Event Hubs
Event data:
It's your responsibility to manage the offset.
+## Application groups
+An application group is a collection of client applications that connect to an Event Hubs namespace and share a uniquely identifying condition, such as the security context: a shared access policy or an Azure Active Directory (Azure AD) application ID.
+
+Azure Event Hubs enables you to define resource access policies, such as throttling policies, for a given application group, and to control event streaming (publishing or consuming) between client applications and Event Hubs.
+
+For more information, see [Resource governance for client applications with application groups](resource-governance-overview.md).
+ ## Next steps For more information about Event Hubs, visit the following links:
event-hubs Resource Governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/resource-governance-overview.md
+
+ Title: Resource governance with application groups
+description: This article describes how to enable resource governance using application groups.
+ Last updated : 05/24/2022+++
+# Resource governance with application groups (preview)
+
+Azure Event Hubs enables you to govern event streaming workloads of client applications that connect to Event Hubs. You can create logical groups known as *application groups* where each group is a collection of client applications, and then apply quota and access management policies for an application group (group of client applications).
+
+> [!NOTE]
+> Application groups are available only in **premium** and **dedicated** tiers.
+
+## Application groups
+
+An application group is a collection of one or more client applications that interact with the Event Hubs data plane. Each application group is scoped to a single Event Hubs namespace and should use a uniquely identifying condition of the client application, such as the security context: a shared access signature (SAS) policy or an Azure Active Directory (Azure AD) application ID.
+
+Event Hubs currently supports using security contexts for creating application groups. Therefore, each application group must have a unique SAS policy or Azure AD application ID associated with it.
+
+Application groups are logical entities that are created at the namespace level. Therefore, client applications interacting with event hubs don't need to be aware of the existence of an application group. Event Hubs can associate any client application with an application group by using the identifying condition.
+
+As illustrated below, you can create application groups based on the security context that each client application uses. Therefore, application groups can span across multiple client applications using the same security context.
++
+Application groups have no direct association with a consumer group. Depending on the application group identifier (such as the security context), one consumer group can have one or more application groups associated with it, or one application group can span multiple consumer groups.
++
+These are the key attributes of an application group:
+
+| Parameter | Description |
+| - | -- |
+| name | Unique name of an application group. |
+| clientAppGroupIdentifier | Associates an application group with a uniquely identifying condition (for example, a security context such as a SAS policy or an Azure AD application ID). |
+| policies | List of policies, such as throttling policies, that control event streaming between client applications and the Event Hubs namespace. |
+| isEnabled | Determines whether the client applications of an application group can access the Event Hubs namespace. |
++
+## Application group policies
+Each application group can contain zero or more policies that control the data plane access of the client applications that are part of the application group. Application groups currently support throttling policies.
+
+### Throttling policies
+You can specify throttling policies using different ingress and egress metrics. Application groups support the following metrics for throttling the ingress or egress workloads of client applications.
+
+| Parameter | Description |
+| - | -- |
+| IncomingBytes | Publisher throughput in bytes per second. |
+| OutgoingBytes | Consumer throughput in bytes per second. |
+| IncomingMessages | Number of events published per second. |
+| OutgoingMessages | Number of events consumed per second. |
+
+When policies for application groups are applied, the client application workload may slow down or encounter server busy exceptions.
+
+### Disabling application groups
+An application group is enabled by default, which means that its client applications can access the Event Hubs namespace for publishing and consuming events, subject to the application group policies.
+
+When an application group is disabled, client applications of that application group won't be able to connect to the Event Hubs namespace, and any existing connections already established from those client applications are terminated.
+
+## Next steps
+For instructions on how to create and manage application groups, see [Resource governance for client applications using Azure portal](resource-governance-with-app-groups.md).
event-hubs Resource Governance With App Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/resource-governance-with-app-groups.md
+
+ Title: Govern resources for client applications with application groups
+description: Learn how to use application groups to govern resources for client applications that connect with Event Hubs.
++ Last updated : 05/24/2022++
+# Govern resources for client applications with application groups
+Azure Event Hubs enables you to govern event streaming workloads for client applications that connect to Event Hubs by using **application groups**. For more information, see [Resource governance with application groups](resource-governance-overview.md).
+
+This article shows you how to perform the following tasks:
+
+- Create an application group.
+- Enable or disable an application group
+- Apply throttling policies to an application group
+
+> [!NOTE]
+> Application groups are available only in **premium** and **dedicated** tiers.
+
+## Create an application group
+
+You can create an application group using the Azure portal, as illustrated below. When you create the application group, associate it with either a shared access signature (SAS) policy or an Azure Active Directory (Azure AD) application ID that's used by the client applications.
++
+For example, you can create the application group `contosoAppGroup` and associate it with the SAS policy `contososaspolicy`.
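If you prefer scripting over the portal, a hedged sketch of the same operation uses `az rest` against the application group resource type; the API version and property names mirror the ARM template later in this article, while the subscription, resource group, and namespace values are placeholders:

```azurecli
# Sketch: create contosoAppGroup tied to the contososaspolicy SAS policy through the ARM REST API
az rest --method put \
  --url "https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.EventHub/namespaces/<namespace>/applicationGroups/contosoAppGroup?api-version=2022-01-01-preview" \
  --body '{
    "properties": {
      "ClientAppGroupIdentifier": "SASKeyName=contososaspolicy",
      "isEnabled": true,
      "policies": []
    }
  }'
```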
+
+## Apply throttling policies
+You can add zero or more policies when you create an application group, or add them later to an existing application group.
+
+For example, you can add throttling policies related to `IncomingMessages`, `IncomingBytes`, or `OutgoingBytes` to the `contosoAppGroup`. These policies are applied to the event streaming workloads of client applications that use the SAS policy `contososaspolicy`.
+
+## Publish or consume events
+Once you successfully add throttling policies to the application group, you can test the throttling behavior by either publishing or consuming events using client applications that are part of the `contosoAppGroup` application group. For that, you can use either an [AMQP client](event-hubs-dotnet-standard-getstarted-send.md) or a [Kafka client](event-hubs-quickstart-kafka-enabled-event-hubs.md) application and the same SAS policy name or Azure AD application ID that was used to create the application group.
+
+> [!NOTE]
+> When your client applications are throttled, you'll notice slower publishing or consuming of data.
+
+## Enable or disable application groups
+You can prevent client applications from accessing your Event Hubs namespace by disabling the application group that contains those applications. When the application group is disabled, client applications won't be able to publish or consume data. Any established connections from client applications of that application group will also be terminated.
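A minimal sketch of doing the same from a script, reusing the `az rest` call and the `isEnabled` property shown in the ARM template below; note that a PUT replaces the whole resource, so the identifier and any existing policies must be included (placeholders as before):

```azurecli
# Sketch: disable contosoAppGroup by setting isEnabled to false (PUT replaces the resource,
# so keep the existing identifier and policies in the body)
az rest --method put \
  --url "https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.EventHub/namespaces/<namespace>/applicationGroups/contosoAppGroup?api-version=2022-01-01-preview" \
  --body '{
    "properties": {
      "ClientAppGroupIdentifier": "SASKeyName=contososaspolicy",
      "isEnabled": false,
      "policies": []
    }
  }'
```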
++
+## Create application groups using Resource Manager templates
+You can also create an application group using the Azure Resource Manager (ARM) templates.
+
+The following example shows how to create an application group using an ARM template. In this example, the application group is associated with an existing SAS policy named `contososaspolicy` by setting the `ClientAppGroupIdentifier` to `SASKeyName=contososaspolicy`. The application group policies are also defined in the ARM template.
++
+```json
+{
+ "type": "ApplicationGroups",
+ "apiVersion": "2022-01-01-preview",
+ "name": "[parameters('applicationGroupName')]",
+ "dependsOn": [
+ "[resourceId('Microsoft.EventHub/namespaces/', parameters('eventHubNamespaceName'))]",
+ "[resourceId('Microsoft.EventHub/namespaces/authorizationRules', parameters('eventHubNamespaceName'),parameters('namespaceAuthorizationRuleName'))]"
+ ],
+ "properties": {
+ "ClientAppGroupIdentifier": "SASKeyName=contososaspolicy",
+ "policies": [{
+ "Type": "ThrottlingPolicy",
+ "Name": "ThrottlingPolicy1",
+ "metricId": "IncomingMessages",
+ "rateLimitThreshold": 10
+ },
+ {
+ "Type": "ThrottlingPolicy",
+ "Name": "ThrottlingPolicy2",
+ "metricId": "IncomingBytes",
+ "rateLimitThreshold": 3951729
+ }
+ ],
+ "isEnabled": true
+ }
+}
+```
+
+## Next steps
+For conceptual information on application groups, see [Resource governance with application groups](resource-governance-overview.md).
event-hubs Transport Layer Security Configure Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/transport-layer-security-configure-minimum-version.md
To check the minimum required TLS version for your Event Hubs namespace, you can
.\ARMClient.exe token <your-subscription-id> ```
-Once you have your bearer token, you can use the script below in combination with something like [Rest Client](https://marketplace.visualstudio.com/items?itemName=humao.rest-client) to query the API.
+Once you have your bearer token, you can use the script below in combination with something like [REST Client](https://marketplace.visualstudio.com/items?itemName=humao.rest-client) to query the API.
```http @token = Bearer <Token received from ARMClient>
expressroute Expressroute Howto Set Global Reach Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-set-global-reach-portal.md
After the operation is complete, you no longer have connectivity between your on
1. To update the configuration for a Global Reach connection, select the connection name.
+ :::image type="content" source="./media/expressroute-howto-set-global-reach-portal/select-configuration.png" alt-text="Screenshot of Global Reach connection name.":::
1. Update the configuration on the **Edit Global Reach** page, and then select **Save**.
+ :::image type="content" source="./media/expressroute-howto-set-global-reach-portal/edit-configuration.png" alt-text="Screenshot of the edit Global Reach configuration page.":::
1. Select **Save** on the main overview page to apply the configuration to the circuit.
+ :::image type="content" source="./media/expressroute-howto-set-global-reach-portal/save-edit-configuration.png" alt-text="Screenshot of the save button after editing Global Reach configuration.":::
## Next steps - [Learn more about ExpressRoute Global Reach](expressroute-global-reach.md)
expressroute How To Configure Custom Bgp Communities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/how-to-configure-custom-bgp-communities.md
BGP communities are groupings of IP prefixes tagged with a community value. This
* Review the [prerequisites](expressroute-prerequisites.md), [routing requirements](expressroute-routing.md), and [workflows](expressroute-workflows.md) before you begin configuration.
-* You must have an active ExpressRoute circuit.
+* You must have an active ExpressRoute circuit in a **non-vWAN environment**. This feature is not supported for ExpressRoute with vWAN.
* Follow the instructions to [create an ExpressRoute circuit](expressroute-howto-circuit-arm.md) and have the circuit enabled by your connectivity provider. * Ensure that you have Azure private peering configured for your circuit. See the [configure routing](expressroute-howto-routing-arm.md) article for routing instructions. * Ensure that Azure private peering gets configured and establishes BGP peering between your network and Microsoft for end-to-end connectivity.
firewall Premium Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/premium-certificates.md
Previously updated : 03/07/2022 Last updated : 05/23/2022
Write-Host "================"
```
-## Certificate auto-generation (preview)
+## Certificate auto-generation
-For non-production deployments, you can use the Azure Firewall Premium Certification Auto-Generation mechanism, which automatically creates the following three resources for you:
+For non-production deployments, you can use the Azure Firewall Premium certificate auto-generation mechanism, which automatically creates the following three resources for you:
- Managed Identity - Key Vault - Self-signed Root CA certificate
-Just choose the new preview managed identity, and it ties the three resources together in your Premium policy and sets up TLS inspection.
+Just choose the new managed identity, and it ties the three resources together in your Premium policy and sets up TLS inspection.
## Troubleshooting
frontdoor How To Add Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-add-custom-domain.md
When you use Azure Front Door for application delivery, a custom domain is neces
After you create an Azure Front Door Standard/Premium profile, the default frontend host will have a subdomain of `azurefd.net`. This subdomain gets included in the URL when Azure Front Door Standard/Premium delivers content from your backend by default. For example, `https://contoso-frontend.azurefd.net/activeusers.htm`. For your convenience, Azure Front Door provides the option of associating a custom domain with the default host. With this option, you deliver your content with a custom domain in your URL instead of an Azure Front Door owned domain name. For example, `https://www.contoso.com/photo.png`. ## Prerequisites+ * Before you can complete the steps in this tutorial, you must first create a Front Door. For more information, see [Quickstart: Create a Front Door Standard/Premium](create-front-door-portal.md). * If you don't already have a custom domain, you must first purchase one with a domain provider. For example, see [Buy a custom domain name](../../app-service/manage-custom-dns-buy-domain.md).
After you create an Azure Front Door Standard/Premium profile, the default front
## Add a new custom domain > [!NOTE]
+> * Creating apex domains by using Azure DNS isn't currently supported on Azure Front Door. Other DNS providers support CNAME flattening or DNS chasing, which allows apex domains to be used with Azure Front Door Standard/Premium.
+> * When using Azure DNS, creating Apex domains isn't supported on Azure Front Door currently. There are other DNS providers that support CNAME flattening or DNS chasing that will allow APEX domains to be used for Azure Front Door Standard/Premium.
> * If a custom domain is validated in one of the Azure Front Door Standard, Premium, classic or classic Microsoft CDN profiles, then it can't be added to another profile.
->
A custom domain is managed in the Domains section of the portal. A custom domain can be created and validated before it's associated with an endpoint. A custom domain and its subdomains can be associated with only a single endpoint at a time. However, you can use different subdomains from the same custom domain for different Front Doors. You can also map custom domains with different subdomains to the same Front Door endpoint.
A custom domain is managed by Domains section in the portal. A custom domain can
1. The **Add a domain** page will appear where you can enter information about the custom domain. You can choose Azure-managed DNS, which is recommended, or you can choose to use your own DNS provider. If you choose Azure-managed DNS, select an existing DNS zone and then select a custom subdomain or create a new one. If you're using another DNS provider, manually enter the custom domain name. Select **Add** to add your custom domain.
- > [!NOTE]
- > Azure Front Door supports both Azure managed certificate and customer-managed certificates. If you want to use customer-managed certificate, see [Configure HTTPS on a custom domain](how-to-configure-https-custom-domain.md).
- >
+ > [!NOTE]
+ > Azure Front Door supports both Azure managed certificate and customer-managed certificates. If you want to use customer-managed certificate, see [Configure HTTPS on a custom domain](how-to-configure-https-custom-domain.md).
:::image type="content" source="../media/how-to-add-custom-domain/add-domain-page.png" alt-text="Screenshot of add a domain page.":::
After you've validated your custom domain, you can then add it to your Azure Fro
1. Once the CNAME record is created and the association of the custom domain with the Azure Front Door endpoint completes, traffic will start flowing.
- > [!NOTE]
- > If HTTPS is enabled, certificate provisioning and propagation may take a few minutes because propagation is being done to all edge locations.
+ > [!NOTE]
+ > If HTTPS is enabled, certificate provisioning and propagation may take a few minutes because propagation is being done to all edge locations.
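If the custom domain's zone is hosted in Azure DNS, the CNAME record from the association step can also be created from the command line. This is a sketch with assumed zone, record, and endpoint host names rather than values taken from this article:

```azurecli
# Sketch: point www.contoso.com at the Front Door endpoint host with a CNAME record
az network dns record-set cname set-record \
  --resource-group <resource-group> \
  --zone-name contoso.com \
  --record-set-name www \
  --cname contoso-frontend.azurefd.net
```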
## Verify the custom domain
frontdoor How To Configure Https Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-configure-https-custom-domain.md
If you want to change the secret version from 'Latest' to a specified versio
> [!NOTE] > * It may take up to an hour for the new certificate to be deployed when you switch between certificate types.
- > * If your domain state is Approved, switching the certificate type between BYOC and managed certificate won't have any downtime. Whhen switching to managed certificate, unless the domain ownership is re-validated and the domain state becomes Approved, you will continue to be served by the previous certificate.
+ > * If your domain state is Approved, switching the certificate type between BYOC and managed certificate won't have any downtime. When switching to managed certificate, unless the domain ownership is re-validated and the domain state becomes Approved, you will continue to be served by the previous certificate.
> * If you switch from BYOC to managed certificate, domain re-validation is required. If you switch from managed certificate to BYOC, you're not required to re-validate the domain. >
governance Exemption Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/exemption-structure.md
Title: Details of the policy exemption structure description: Describes the policy exemption definition used by Azure Policy to exempt resources from evaluation of initiatives or definitions. Previously updated : 08/17/2021 Last updated : 05/18/2022 ++ # Azure Policy exemption structure
two of the policy definitions in the initiative, the `customOrgPolicy` custom po
```json { "id": "/subscriptions/{subId}/resourceGroups/ExemptRG/providers/Microsoft.Authorization/policyExemptions/resourceIsNotApplicable",
+ "apiVersion": "2020-07-01-preview",
"name": "resourceIsNotApplicable", "type": "Microsoft.Authorization/policyExemptions", "properties": {
assignment.
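An exemption like the one above can also be created with tooling. As a rough, non-authoritative sketch with the Azure CLI, where the assignment ID and the exemption category are placeholders rather than values from this article:

```azurecli
# Sketch: create a policy exemption named resourceIsNotApplicable scoped to the ExemptRG resource group
az policy exemption create \
  --name resourceIsNotApplicable \
  --resource-group ExemptRG \
  --policy-assignment "/subscriptions/<sub-id>/providers/Microsoft.Authorization/policyAssignments/<assignment-name>" \
  --exemption-category Waiver \
  --display-name "Exempt resources that are not applicable"
```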
## Next steps
+- Study the [Microsoft.Authorization policyExemptions resource type](https://docs.microsoft.com/azure/templates/microsoft.authorization/policyexemptions?tabs=json).
- Learn about the [policy definition structure](./definition-structure.md). - Understand how to [programmatically create policies](../how-to/programmatically-create.md). - Learn how to [get compliance data](../how-to/get-compliance-data.md).
governance Shared Query Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/shared-query-bicep.md
+
+ Title: 'Quickstart: Create a shared query with Bicep'
+description: In this quickstart, you use Bicep to create a Resource Graph shared query that counts virtual machines by OS.
++ Last updated : 05/17/2022+++
+# Quickstart: Create a shared query using Bicep
+
+[Azure Resource Graph](../../governance/resource-graph/overview.md) is an Azure service designed to extend Azure Resource Management by providing efficient and performant resource exploration with the ability to query at scale across a given set of subscriptions so you can effectively govern your environment. With Resource Graph queries, you can:
+
+- Query resources with complex filtering, grouping, and sorting by resource properties.
+- Explore resources iteratively based on governance requirements.
+- Assess the impact of applying policies in a vast cloud environment.
+- [Query changes made to resource properties](./how-to/get-resource-changes.md) (preview).
+
+Resource Graph queries can be saved as a _private query_ or a _shared query_. A private query is saved to the individual's Azure portal profile and isn't visible to others. A shared query is a Resource Manager object that can be shared with others through permissions and role-based access. A shared query provides common and consistent execution of resource discovery. This quickstart uses Bicep to create a shared query.
++
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
+
+## Review the Bicep file
+
+In this quickstart, you create a shared query called _Count VMs by OS_. To try this query in SDK or in portal with Resource Graph Explorer, see [Samples - Count virtual machines by OS type](./samples/starter.md#count-os).
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/resourcegraph-sharedquery-countos/).
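To preview what the shared query returns before deploying it, you can run the query ad hoc. This sketch assumes the `resource-graph` Azure CLI extension is installed; the query text follows the linked starter sample:

```azurecli
# Sketch: run the "Count VMs by OS" query ad hoc with the Resource Graph CLI extension
az graph query -q "Resources | where type =~ 'Microsoft.Compute/virtualmachines' | summarize count() by tostring(properties.storageProfile.osDisk.osType)"
```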
++
+The resource defined in the Bicep file is:
+
+- [Microsoft.ResourceGraph/queries](/azure/templates/microsoft.resourcegraph/queries)
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+
+ > [!NOTE]
+ > The Bicep file isn't required to be named **main.bicep**. If you save the file with a different name, you must change the name of
+ > the template file in the deployment step below.
+
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep
+ ```
+
+
+
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+Some other resources:
+
+- To see the template reference, go to [Azure template reference](/azure/templates/microsoft.resourcegraph/allversions).
+- To learn how to create Bicep files, see [Quickstart: Create Bicep files with Visual Studio Code](../../azure-resource-manager/bicep/quickstart-create-bicep-use-visual-studio-code.md).
+
+## Review deployed resources
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+## Clean up resources
+
+When you no longer need the resource that you created, delete the resource group using Azure CLI or Azure PowerShell.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+In this quickstart, you created a Resource Graph shared query using Bicep.
+
+To learn more about shared queries, continue to the tutorial for:
+
+> [!div class="nextstepaction"]
+> [Manage queries in Azure portal](./tutorials/create-share-query.md)
hdinsight Apache Hbase Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/apache-hbase-replication.md
Some of the hard-coded values in the template:
| Gateway SKU | Basic | | Gateway IP | vnet1gwip |
+Alternatively, follow the steps below to set up the two virtual networks and VMs manually:
+1. [Create two virtual networks (VNets)](../../virtual-network/quick-create-portal.md) in different regions.
+2. Enable [peering between the two VNets](../../virtual-network/virtual-network-peering-overview.md). Go to the **Virtual network** created in the step above, select **Peerings**, and add a peering link to the virtual network in the other region. Repeat for both virtual networks (see the CLI sketch after this list).
+3. [Create a VM with the latest Ubuntu version](../../virtual-machines/linux/quick-create-portal.md#create-virtual-machine) in each VNet.
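For step 2, a minimal Azure CLI sketch of creating one direction of the peering; the VNet and peering names are assumptions, and the equivalent command must also be run from the second virtual network:

```azurecli
# Sketch: peer VNet1 to VNet2 (repeat in the opposite direction from VNet2)
az network vnet peering create \
  --name VNet1ToVNet2 \
  --resource-group <resource-group> \
  --vnet-name VNet1 \
  --remote-vnet "/subscriptions/<sub-id>/resourceGroups/<resource-group-2>/providers/Microsoft.Network/virtualNetworks/VNet2" \
  --allow-vnet-access
```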
+ ## Setup DNS In the last section, the template creates an Ubuntu virtual machine in each of the two virtual networks. In this section, you install Bind on the two DNS virtual machines, and then configure the DNS forwarding on the two virtual machines.
In this article, you learned how to set up Apache HBase replication within a vir
* [Get started with Apache HBase in HDInsight](./apache-hbase-tutorial-get-started-linux.md) * [HDInsight Apache HBase overview](./apache-hbase-overview.md)
-* [Create Apache HBase clusters in Azure Virtual Network](./apache-hbase-provision-vnet.md)
+* [Create Apache HBase clusters in Azure Virtual Network](./apache-hbase-provision-vnet.md)
hdinsight Apache Hbase Tutorial Get Started Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/apache-hbase-tutorial-get-started-linux.md
The HBase REST API is secured via [basic authentication](https://en.wikipedia.or
echo "Applying mitigation; starting REST Server" sudo python /usr/lib/python2.7/dist-packages/hdinsight_hbrest/HbaseRestAgent.py else
- echo "Rest server already running"
+ echo "REST server already running"
exit 0 fi ```
hdinsight Troubleshoot Data Retention Issues Expired Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/troubleshoot-data-retention-issues-expired-data.md
Despite setting TTL, you may notice sometimes that you don't obtain the desired
## Prerequisites To prepare to follow the steps and commands below, open two ssh connections to HBase cluster:+ * In one of the ssh sessions keep the default bash shell.+ * In the second ssh session launch HBase shell by running the command below.
- ```
- hbase shell
- ```
+ ```
+ hbase shell
+ ```
### Check if desired TTL is configured and if expired data is removed from query result Follow the steps below to understand where is the issue. Start by checking if the behavior occurs for a specific table or for all the tables. If you're unsure whether the issue impacts all the tables or a specific table, just consider as example a specific table name for the start. 1. Check first that TTL has been configured for ColumnFamily for the target tables. Run the command below in the ssh session where you launched HBase shell and observe example and output below. One column family has TTL set to 50 seconds, the other ColumnFamily has no value configured for TTL, thus it appears as "FOREVER" (data in this column family isn't configured to expire).
-
+ ``` describe 'table_name' ```
-1. If not configured, default TTL is set to 'FOREVER'. There are two possibilities why data is not expired as expected and removed from query result.
- 1. If TTL has any other value then 'FOREVER', observe the value for column family and note down the value in seconds(pay special attention to value correlated with the unit measure as cell TTL is in ms, but column family TTL is in seconds) to confirm if it is the expected one. If the observed value isn't correct, fix that first.
- 1. If TTL value is 'FOREVER' for all column families, configure TTL as first step and afterwards monitor if data is expired as expected.
+ ![Screenshot showing describe table name command.](media/troubleshoot-data-retention-issues-expired-data/image-1.png)
+
+1. If not configured, the default TTL is set to 'FOREVER'. There are two possible reasons why data isn't expired as expected and removed from the query result.
+
+ 1. If TTL has any value other than 'FOREVER', observe the value for the column family and note it down in seconds (pay special attention to the unit of measure, because cell TTL is in milliseconds, but column family TTL is in seconds) to confirm it's the expected one. If the observed value isn't correct, fix that first.
+ 1. If the TTL value is 'FOREVER' for all column families, configure TTL as the first step, and afterwards monitor whether data expires as expected.
1. If you establish that TTL is configured and has the correct value for the ColumnFamily, the next step is to confirm that the expired data no longer shows up when doing table scans. When data expires, it should be removed and not show up in the scan table results. Run the command below in HBase shell to check.+ ``` scan 'table_name' ```+ ### Check the number and size of StoreFiles per table per region to observe if any changes are visible after the compaction operation 1. Before moving to the next step, from the ssh session with the bash shell, run the following command to check the current number of StoreFiles and the size of each StoreFile currently showing up for the ColumnFamily for which the TTL has been configured. Note first the table and ColumnFamily for which you'll be doing the check, then run the following command in the ssh session (bash).
-
+ ``` hdfs dfs -ls -R /hbase/data/default/table_name/ | grep "column_family_name" ```+
+ ![Screenshot showing check size of store file command.](media/troubleshoot-data-retention-issues-expired-data/image-2.png)
+ 1. Likely, there will be more results shown in the output, one result for each region ID that is part of the table and between 0 and more results for StoreFiles present under each region name, for the selected ColumnFamily. To count the overall number of rows in the result output above, run the following command.
-
+ ``` hdfs dfs -ls -R /hbase/data/default/table_name/ | grep "column_family_name" | wc -l ``` ### Check the number and size of StoreFiles per table per region after flush
-
+ 1. Based on the TTL configured for each ColumnFamily and how much data is written in the table for the target ColumnFamily, part of the data may still exist in MemStore and isn't written as StoreFile to storage. Thus, to make sure that the data is written to storage as StoreFile, before the maximum configured MemStore size is reached, you can run the following command in HBase shell to write data from MemStore to StoreFile immediately. + ``` flush 'table_name' ```
Follow the steps below to understand where is the issue. Start by checking if th
``` hdfs dfs -ls -R /hbase/data/default/table_name/ | grep "column_family_name" ```
-
+ 1. An additional store file is created compared to previous result output for each region where data is modified, the StoreFile will include current content of MemStore for that region.
+ ![Screenshot showing memory store for the region.](media/troubleshoot-data-retention-issues-expired-data/image-3.png)
+ ### Check the number and size of StoreFiles per table per region after major compaction
-
-1. At this point, the data from MemStore has been written to StoreFile, in storage, but expired data may still exist in one or more of the current StoreFiles. Although minor compactions can help delete some of the expired entries, it isn't guaranteed that it will remove all of them as minor compaction will usually not select all the StoreFiles for compaction, while major compaction will select all the StoreFiles for compaction in that region.
- Also, there's another situation when minor compaction may not remove cells with TTL expired. There's a property named MIN_VERSIONS and it defaults to 0 only (see in the above output from describe 'table_name' the property MIN_VERSIONS=>'0'). If this property is set to 0, the minor compaction will remove the cells with TTL expired. If this value is greater than 0, minor compaction may not remove the cells with TTL expired even if it touches the corresponding file as part of compaction. This property configures the min number of versions of a cell to keep, even if those versions have TTL expired.
+1. At this point, the data from MemStore has been written to a StoreFile in storage, but expired data may still exist in one or more of the current StoreFiles. Although minor compactions can help delete some of the expired entries, they aren't guaranteed to remove all of them, because a minor compaction usually doesn't select all the StoreFiles for compaction, while a major compaction selects all the StoreFiles for compaction in that region.
+
+ Also, there's another situation where minor compaction may not remove cells with expired TTL. There's a property named MIN_VERSIONS, and it defaults to 0 (see the property MIN_VERSIONS=>'0' in the output from describe 'table_name' above). If this property is set to 0, minor compaction removes the cells with expired TTL. If this value is greater than 0, minor compaction may not remove cells with expired TTL even if it touches the corresponding file as part of compaction. This property configures the minimum number of versions of a cell to keep, even if those versions have expired TTL.
1. To make sure expired data is also deleted from storage, we need to run a major compaction operation. The major compaction operation, when completed, will leave behind a single StoreFile per region. In HBase shell, run the command to execute a major compaction operation on the table:+ ``` major_compact 'table_name' ``` 1. Depending on the table size, major compaction operation can take some time. Use the command below in HBase shell to monitor progress. If the compaction is still running when you execute the command below, you'll see the output "MAJOR", but if the compaction is completed, you will see the output "NONE".+ ``` compaction_state 'table_name' ``` 1. When the compaction status appears as "NONE" in hbase shell, if you switch quickly to bash and run command+ ``` hdfs dfs -ls -R /hbase/data/default/table_name/ | grep "column_family_name" ```+ 1. You will notice that an extra StoreFile has been created in addition to previous ones per region per ColumnFamily and after several moments only the last created StoreFile is kept per region per column family.
+ ![Screenshot showing store file as column family.](media/troubleshoot-data-retention-issues-expired-data/image-4.png)
For the example region above, once a few more moments elapse, we can see that a single StoreFile remains and the size occupied by this file on the storage is reduced, because major compaction occurred. At this point, any expired data that wasn't deleted before (by another major compaction) will be deleted after running the current major compaction operation.
+![Screenshot showing expired data not deleted.](media/troubleshoot-data-retention-issues-expired-data/image-5.png)
+ > [!NOTE] > For this troubleshooting exercise we triggered the major compaction manually. But in practice, doing that manually for many tables might be time consuming. By default, major compaction is disabled on HDInsight cluster. The main reason for keeping major compaction disabled by default is because the performance of the table operations is impacted when a major compaction is in progress. However, you can enable major compaction by configuring the value for the property hbase.hregion.majorcompaction in ms or can use a cron tab job or another external system to schedule compaction at a time convenient for you, with lower workload.
hdinsight Hdinsight For Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-for-vscode.md
With Spark & Hive Tools for Visual Studio Code, you can submit interactive Hive
- **MESSAGES** panel: When you select a **Line** number, it jumps to the first line of the running script.
-## Submit interactive PySpark queries (Not supported Synapse PySpark interactive anymore)
+## Submit interactive PySpark queries
-Users can perform PySpark interactive in the following ways:
+Users can run PySpark interactively in the following ways. Note that Jupyter Extension (ms-jupyter) v2022.1.1001614873, Python Extension (ms-python) v2021.12.1559732655, and Python 3.6.x and 3.7.x apply only to HDInsight interactive PySpark queries.
### Using the PySpark interactive command in a PY file

To submit queries with the PySpark interactive command, follow these steps:
hdinsight Hdinsight Hadoop Port Settings For Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-port-settings-for-services.md
The following are available for specific cluster types:
| Livy |443 |HTTPS |Spark |Spark REST API. See [Submit Apache Spark jobs remotely using Apache Livy](spark/apache-spark-livy-rest-interface.md) |
| Spark Thrift server |443 |HTTPS |Spark |Spark Thrift server used to submit Hive queries. See [Use Beeline with Apache Hive on HDInsight](hadoop/apache-hadoop-use-hive-beeline.md) |
| Storm |443 |HTTPS |Storm |Storm web UI. See [Deploy and manage Apache Storm topologies on HDInsight](storm/apache-storm-deploy-monitor-topology-linux.md) |
-| Kafka Rest proxy |443 |HTTPS |Kafka |Kafka REST API. See [Interact with Apache Kafka clusters in Azure HDInsight using a REST proxy](kafk) |
+| Kafka REST proxy |443 |HTTPS |Kafka |Kafka REST API. See [Interact with Apache Kafka clusters in Azure HDInsight using a REST proxy](kafk) |
### Authentication
hdinsight Llap Schedule Based Autoscale Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/llap-schedule-based-autoscale-best-practices.md
+
+ Title: HDInsight Interactive Query Autoscale(Schedule-Based) Guide and Best Practices
+description: LLAP Autoscale Guide and Best Practices
+++++ Last updated : 05/25/2022++
+# Azure HDInsight Interactive Query Cluster (Hive LLAP) Schedule Based Autoscale
+
+This document provides the onboarding steps to enable schedule-based autoscale for Interactive Query (LLAP) Cluster type in Azure HDInsight. It includes some of the best practices to operate Autoscale in Hive-LLAP.
+
+## **Supportability**
+
+- Autoscale isn't supported in HDI 3.6 Interactive Query (LLAP) clusters.
+- HDI 4.0 Interactive Query clusters support only Schedule-Based Autoscale.
+
+Feature supportability with HDInsight 4.0 Interactive Query (LLAP) Autoscale:
+
+| Feature | Schedule-Based Autoscale |
+|:---:|:---:|
+| Workload Management | No |
+| Hive Warehouse Connector | Yes |
+| Manually installed LLAP | No |
+
+> [!WARNING]
+> The behavior of scheduled autoscale isn't deterministic if other services that use YARN resources are installed on the HDI Interactive Query cluster.
+
+### **Interactive Query Cluster setup for Autoscale**
+
+1. [Create an HDInsight Interactive Query Cluster.](/azure/hdinsight/hdinsight-hadoop-provision-linux-clusters)
+2. After the cluster is created successfully, go to the **Azure portal** and apply the recommended Script Action.
+
+```
+- Script Action: https://hdiconfigactions2.blob.core.windows.net/update-ambari-configs-for-llap-autoscale/update_ambari_configs.sh
+- Required Parameters: <MAX CONCURRENT QUERIES> <TEZ Queue Capacity Percent>
+    - <MAX CONCURRENT QUERIES> sets the maximum number of concurrent queries to run. It should be set to the largest worker node count across the configured schedules.
+    - <TEZ Queue Capacity Percent> The example configuration below is calculated based on the D14v2 worker node SKU (100 GB per YARN node); that is, we allocate 6% (6 GB) per node to launch at least one Tez AM of 4 GB. If you use smaller SKU worker nodes, tune these configs proportionately. Allocate enough capacity for at least one Tez AM to run on each node. For more details, see the HDInsight Interactive Query Cluster (LLAP) sizing guide.
+
+- Details:
+    The above script action updates the Interactive Query cluster as follows:
+    1. Configures a separate Tez queue to launch Tez sessions. If no arguments are passed, the default max concurrency is set to 16 and the Tez queue capacity to 6% of the overall cluster capacity.
+    2. Tunes Hive configs for autoscale.
+    3. Sets the maximum number of concurrent queries that can run in parallel.
+- Example: https://hdiconfigactions2.blob.core.windows.net/update-ambari-configs-for-llap-autoscale/update_ambari_configs.sh 21 6
+
+```
+
+3. [Enable and Configure Schedule-Based Autoscale](/azure/hdinsight/hdinsight-autoscale-clusters#create-a-cluster-with-schedule-based-autoscaling)
++
+> [!NOTE]
+> It's recommended to leave a sufficient gap between two schedules so that the data cache is used efficiently; that is, schedule scale-ups for peak usage and scale-downs for when there's no usage.
+
+### **Interactive Query Autoscale FAQs**
+
+<b>1. What happens to the running jobs during the scale-down operation as per the configured schedule?</b>
+
+If there are running jobs when scale-down is triggered, then we can expect one of the following outcomes:
+- Query fails due to the Tez AM getting killed.
+- Query slows down due to the reduced capacity but completes successfully.
+- Query completes successfully without any impact.
+
+
+> [!NOTE]
+> It's recommended to plan appropriate downtime with users for the scale-down schedules.
++
+<b>2. What happens to the running Spark jobs when using Hive Warehouse Connector to execute queries in the LLAP Cluster with Auto scale enabled?</b>
+
+If there are running jobs (triggered from the Spark cluster) when scale-down is triggered, then we can expect one of the following outcomes:
+- The Spark job fails because JDBC calls from the Spark driver fail when Tez AMs or containers are lost.
+- The Spark job slows down due to the reduced capacity but completes successfully.
+- The Spark job completes successfully without any impact.
+
+<b>3. Why is my query running slow even after scale-up?</b>
+
+Because the autoscale smart probe adds and removes worker nodes as part of autoscale, the LLAP data cache on newly added worker nodes needs warming up after a scale-up. The first query on a given dataset might be slow due to cache misses, but subsequent queries run fast. Optionally, it's recommended to run some queries on performance-critical tables after scaling to warm up the data cache.
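+
+A minimal warm-up sketch, assuming Beeline is run from a cluster head node; the JDBC URL and table name below are placeholders for your own values:
+
+```
+# Scan a performance-critical table once after scale-up to warm the LLAP cache.
+beeline -u '<llap-jdbc-url>' -e "SELECT COUNT(*) FROM critical_db.critical_table;"
+```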
++
+<b>4. Does schedule based autoscale support Workload Management in LLAP?</b>
+
+Workload Management (WLM) in LLAP isn't currently supported with Schedule-Based Autoscale. However, you can schedule a custom cron job to disable WLM before a scaling action and re-enable it once the scaling action is done.
+WLM should be disabled before the scheduled scaling event and re-enabled about one hour after the scaling event. The user or admin should also come up with a different WLM resource plan that suits the cluster size after the scale.
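+
+A rough sketch of such a cron-driven toggle, assuming Beeline connectivity from a head node, a scale-up scheduled at 09:00, and a hypothetical resource plan named post_scale_plan sized for the post-scale cluster; the times and JDBC URL are placeholders:
+
+```
+# Example crontab entries (add with `crontab -e`).
+# Disable WLM shortly before the scheduled scaling event.
+45 8 * * * beeline -u '<llap-jdbc-url>' -e "DISABLE WORKLOAD MANAGEMENT;"
+# Re-enable WLM about an hour after the scaling event by activating a plan
+# that suits the new cluster size.
+0 10 * * * beeline -u '<llap-jdbc-url>' -e "ALTER RESOURCE PLAN post_scale_plan ENABLE ACTIVATE;"
+```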
++
+<b>5. Why do we observe stale hive configs in the Ambari UI after the scaling has happened?</b>
+
+Each time the Interactive Query cluster scales, the autoscale smart probe silently updates the number of LLAP daemons and the concurrency in Ambari, because these configurations are static.
+These configs are updated so that, if autoscale is disabled or the LLAP service restarts for some reason, the service uses all the worker nodes present after the resize. An explicit restart of services to handle these stale config changes isn't required.
+
+### **Next Steps**
+If the above guidelines didn't resolve your issue, visit one of the following resources.
+
+* Get answers from Azure experts through [Azure Community Support](https://azure.microsoft.com/support/community/).
+
+* Connect with [@AzureSupport](https://twitter.com/azuresupport) - the official Microsoft Azure account for improving customer experience by connecting the Azure community to the right resources: answers, support, and experts.
+
+* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/).
+
+## **Other References:**
+ * [Interactive Query in Azure HDInsight](/azure/hdinsight/interactive-query/apache-interactive-query-get-started)
+ * [Create a cluster with Schedule-based Autoscaling](/azure/hdinsight/hdinsight-autoscale-clusters#create-a-cluster-with-schedule-based-autoscaling)
+ * [Azure HDInsight Interactive Query Cluster (Hive LLAP) sizing guide](/azure/hdinsight/interactive-query/hive-llap-sizing-guide)
+ * [Hive Warehouse Connector in Azure HDInsight](/azure/hdinsight/interactive-query/apache-hive-warehouse-connector)
hdinsight Kafka Mirrormaker 2 0 Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/kafka-mirrormaker-2-0-guide.md
+
+ Title: Apache Kafka MirrorMaker 2.0 guide - Azure HDInsight
+description: How to use Kafka MirrorMaker 2.0 in data migration/replication and the use-cases.
+++ Last updated : 05/20/2022++
+# How to use Kafka MirrorMaker 2.0 in data migration, replication and the use-cases
+
+MirrorMaker 2.0 (MM2) is designed to make it easier to mirror or replicate topics from one Kafka cluster to another. It uses the Kafka Connect framework to simplify configuration and scaling. It dynamically detects changes to topics and ensures source and target topic properties are synchronized, including offsets and partitions.
+
+In this article, you'll learn how to use Kafka MirrorMaker 2.0 for data migration and replication, and about its use cases.
+
+## Prerequisites
+
+* Environment with at least two HDI Kafka clusters.
+* Kafka version higher than 2.4 (HDI 4.0)
+* The source cluster should have data points and topics to test various features of the MirrorMaker 2.0 replication process
+
+## Use case
+
+Simulate MirrorMaker 2.0 replicating data points and offsets between two Kafka clusters in HDInsight. The same approach can be used for scenarios that require data replication between two or more Kafka clusters, such as disaster recovery, cloud adoption, geo-replication, data isolation, and data aggregation.
+
+## Offset replication with MirrorMaker 2.0
+
+### MM2 Internals
+
+The MirrorMaker 2.0 tool is composed of different connectors. These connectors are standard Kafka Connect connectors, which can be used directly with Kafka Connect in standalone or distributed mode.
+
+A summary of what each connector does is as follows:
+
+**MirrorSourceConnector:**
+
+ 1. Replicates remote topics, topic ACLs & configs of a single source cluster.
+ 1. Emits offset-syncs to an internal topic.
+
+**MirrorSinkConnector:**
+
+ 1. Consumes from the primary cluster and replicates topics to a single target cluster.
+
+**MirrorCheckpointConnector:**
+
+ 1. Consumes offset-syncs.
+ 1. Emits checkpoints to enable failover points.
+
+**MirrorHeartbeatConnector:**
+
+ 1. Emits heartbeats to remote clusters, enabling monitoring of the replication process.
+
+### Deployment
+
+1. The connect-mirror-maker.sh script bundled with the Kafka library implements a distributed MM2 cluster, which manages the Connect workers internally based on a config file. Internally, the MirrorMaker driver creates and handles pairs of each connector: MirrorSourceConnector, MirrorSinkConnector, MirrorCheckpointConnector, and MirrorHeartbeatConnector.
+1. Start MirrorMaker 2.0.
+
+```
+./bin/connect-mirror-maker.sh ./config/mirror-maker.properties
+```
+
+> [!NOTE]
+> For Kerberos-enabled clusters, the JAAS configuration must be exported in the KAFKA_OPTS environment variable or specified in the MM2 config file.
+
+```
+export KAFKA_OPTS="-Djava.security.auth.login.config=<path-to-jaas.conf>"
+```
+### Sample MirrorMaker 2.0 Configuration file
+
+```
+ # specify any number of cluster aliases
+ clusters = source, destination
+
+ # connection information for each cluster
+ # This is a comma separated host:port pairs for each cluster
+ # for example. "A_host1:9092, A_host2:9092, A_host3:9092"
+ source.bootstrap.servers = wn0-src-kafka.azurehdinsight.net:9092,wn1-src-kafka.azurehdinsight.net:9092,wn2-src-kafka.azurehdinsight.net:9092
+ destination.bootstrap.servers = wn0-dest-kafka.azurehdinsight.net:9092,wn1-dest-kafka.azurehdinsight.net:9092,wn2-dest-kafka.azurehdinsight.net:9092
+
+ # enable and configure individual replication flows
+ source->destination.enabled = true
+
+ # regex that defines which topics get replicated, for example "foo-.*"
+ source->destination.topics = toa.evehicles-latest-dev
+ groups=.*
+ topics.blacklist="*.internal,__.*"
+
+ # Setting replication factor of newly created remote topics
+ replication.factor=3
+
+ checkpoints.topic.replication.factor=1
+ heartbeats.topic.replication.factor=1
+ offset-syncs.topic.replication.factor=1
+
+ offset.storage.replication.factor=1
+ status.storage.replication.factor=1
+ config.storage.replication.factor=1
+```
+
+### SSL configuration
+
+If the setup requires SSL, add configuration like the following:
+```
+destination.security.protocol=SASL_SSL
+destination.ssl.truststore.password=<password>
+destination.ssl.truststore.location=/path/to/kafka.server.truststore.jks
+#keystore location in case client.auth is set to required
+destination.ssl.keystore.password=<password>
+destination.ssl.keystore.location=/path/to/kafka.server.keystore.jks
+destination.sasl.mechanism=GSSAPI
+```
++
+### Global configurations
+
+|Property |Default value |Description |
+||||
+|name|required|name of the connector, For Example, "us-west->us-east"|
+|topics|empty string|regex of topics to replicate, for example "topic1, topic2, topic3". Comma-separated lists are also supported.|
+|topics.blacklist|".*\.internal, .*\.replica, __consumer_offsets" or similar|topics to exclude from replication|
+|groups|empty string|regex of groups to replicate, For Example, ".*"|
+|groups.blacklist|empty string|groups to exclude from replication|
+|source.cluster.alias|required|name of the cluster being replicated|
+|target.cluster.alias|required|name of the downstream Kafka cluster|
+|source.cluster.bootstrap.servers|required|upstream cluster to replicate|
+|target.cluster.bootstrap.servers|required|downstream cluster|
+|sync.topic.configs.enabled|true|whether or not to monitor source cluster for configuration changes|
+|sync.topic.acls.enabled|true|whether to monitor source cluster ACLs for changes|
+|emit.heartbeats.enabled|true|connector should periodically emit heartbeats|
+|emit.heartbeats.interval.seconds|5 (seconds)|frequency of heartbeats|
+|emit.checkpoints.enabled|true|connector should periodically emit consumer offset information|
+|emit.checkpoints.interval.seconds|5 (seconds)|frequency of checkpoints|
+|refresh.topics.enabled|true|connector should periodically check for new topics|
+|refresh.topics.interval.seconds|5 (seconds)|frequency to check source cluster for new topics|
+|refresh.groups.enabled|true|connector should periodically check for new consumer groups|
+|refresh.groups.interval.seconds|5 (seconds)|frequency to check source cluster for new consumer groups|
+|readahead.queue.capacity|500 (records)|number of records to let consumer get ahead of producer|
+|replication.policy.class|org.apache.kafka.connect.mirror.DefaultReplicationPolicy|use LegacyReplicationPolicy to mimic legacy MirrorMaker|
+|heartbeats.topic.retention.ms|one day|used when creating heartbeat topics for the first time|
+|checkpoints.topic.retention.ms|one day|used when creating checkpoint topics for the first time|
+|offset.syncs.topic.retention.ms|max long|used when creating offset sync topic for the first time|
+|replication.factor|two|used when creating the remote topics|
+
+### Frequently asked questions
+
+**Why do we see a difference in the last offset on source and destination cluster post replication of a topic?**
+
+ It's possible that the source topic's data points have been purged, in which case the actual record count would be less than the last offset value. This results in a difference between the last offset on the source and destination clusters post replication, because replication always starts from offset 0 on the destination cluster.
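+
+ To compare the latest offsets yourself, here's a minimal sketch that assumes the Kafka CLI tools are available on a broker or head node; the broker host and topic name are placeholders, and remember that the replicated topic on the destination carries the source-alias prefix:
+
+```
+# Print the latest offset (-1) for each partition of a topic.
+# Run once against the source brokers and once against the destination brokers, then compare.
+bin/kafka-run-class.sh kafka.tools.GetOffsetShell \
+  --broker-list <broker-host>:9092 --topic <topic-name> --time -1
+```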
+
+**How will consumers behave on migration, given that the destination cluster may have a different offset mapping for data points?**
+
+ The MirrorMaker 2.0 MirrorCheckpointConnector automatically stores consumer group offset checkpoints for consumer groups on the source cluster. Each checkpoint contains a mapping of the last committed offset for each group in the source cluster to the equivalent offset in the destination cluster. So on migration, consumers that start consuming from the same topic on the destination cluster are able to resume receiving messages from the last offset they committed on the source cluster.
+
+**How can we retain the exact topic name in the destination cluster, given that the source alias is prefixed to all replicated topics?**
+
+ This is the default behavior in MirrorMaker 2.0, and it avoids data being overridden in complex mirroring topologies. Customizing it needs to be done carefully in terms of replication flow design and topic management to avoid data loss. You can do this by specifying a custom replication policy class in "replication.policy.class".
+
+**Why do we see new internal topics created in my source and destination Kafka?**
+
+ MirrorMaker 2.0 internal topics are created by the Connectors to keep track of the replication process, monitoring, offset mapping and checkpointing.
+
+**Why does MirrorMaker create only two replicas of the topic in the destination cluster while the source has more?**
+
+ MirrorMaker 2 doesn't replicate the replication factor of topics to target clusters. This can be controlled from the MM2 config by specifying the required "replication.factor". The default value is 2.
+
+**How do I use a custom replication policy in MirrorMaker 2.0?**
+
+ A custom replication policy can be created by implementing the interface below.
+
+```
+ /** Defines which topics are "remote topics", e.g. "us-west.topic1". */
+ public interface ReplicationPolicy {
+
+ /** How to rename remote topics; generally should be like us-west.topic1. */
+ String formatRemoteTopic(String sourceClusterAlias, String topic);
+
+ /** Source cluster alias of given remote topic, e.g. "us-west" for "us-west.topic1".
+ Returns null if not a remote topic.
+ */
+ String topicSource(String topic);
+
+ /** Name of topic on the source cluster, e.g. "topic1" for "us-west.topic1".
+ Topics may be replicated multiple hops, so the immediately upstream topic
+ may itself be a remote topic.
+ Returns null if not a remote topic.
+ */
+ String upstreamTopic(String topic);
+
+ /** The name of the original source-topic, which may have been replicated multiple hops.
+ Returns the topic if it is not a remote topic.
+ */
+ String originalTopic(String topic);
+
+ /** Internal topics are never replicated. */
+ boolean isInternalTopic(String topic);
+}
+```
+
+The implementation needs to be added to the Kafka classpath so that the class can be referenced by replication.policy.class in the MM2 properties.
+
+## Next steps
+
+[What is Apache Kafka on HDInsight?](apache-kafka-introduction.md)
+
+## References
+
+[MirrorMaker 2.0 Changes Apache Doc](https://cwiki.apache.org/confluence/display/KAFKA/KIP-382%3A+MirrorMaker+2.0)
+
+[Client certificates setup for HDI Kafka](apache-kafka-ssl-encryption-authentication.md)
+
+[HDInsight Kafka](./apache-kafka-introduction.md)
+
+[Apache Kafka 2.4 Documentation](https://kafka.apache.org/24/documentation.html)
+
+[Connect an on-premises network to Azure](/azure/architecture/reference-architectures/hybrid-networking.md)
hdinsight Apache Spark Create Standalone Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-create-standalone-application.md
Do the following steps to install the Scala plugin:
1. Open IntelliJ IDEA.
-2. On the welcome screen, navigate to **Configure** > **Plugins** to open the **Plugins** window.
+1. On the welcome screen, navigate to **Configure** > **Plugins** to open the **Plugins** window.
- :::image type="content" source="./media/apache-spark-create-standalone-application/enable-scala-plugin1.png" alt-text="`IntelliJ IDEA enable scala plugin`" border="true":::
+ ![Screenshot showing IntelliJ Welcome Screen.](media/apache-spark-create-standalone-application/spark-1.png)
-3. Select **Install** for the Scala plugin that is featured in the new window.
+1. Select **Install** for Azure Toolkit for IntelliJ.
- :::image type="content" source="./media/apache-spark-create-standalone-application/install-scala-plugin.png" alt-text="`IntelliJ IDEA install scala plugin`" border="true":::
+ ![Screenshot showing IntelliJ Azure Tool Kit.](media/apache-spark-create-standalone-application/spark-2.png)
-4. After the plugin installs successfully, you must restart the IDE.
+1. Select **Install** for the Scala plugin that is featured in the new window.
+
+ ![Screenshot showing IntelliJ Scala Plugin.](media/apache-spark-create-standalone-application/spark-3.png)
+
+1. After the plugin installs successfully, you must restart the IDE.
## Use IntelliJ to create application 1. Start IntelliJ IDEA, and select **Create New Project** to open the **New Project** window.
-2. Select **Apache Spark/HDInsight** from the left pane.
+1. Select **Apache Spark/HDInsight** from the left pane.
-3. Select **Spark Project (Scala)** from the main window.
+1. Select **Spark Project (Scala)** from the main window.
-4. From the **Build tool** drop-down list, select one of the following values:
- * **Maven** for Scala project-creation wizard support.
- * **SBT** for managing the dependencies and building for the Scala project.
+1. From the **Build tool** drop-down list, select one of the following values:
- :::image type="content" source="./media/apache-spark-create-standalone-application/intellij-project-apache-spark.png" alt-text="IntelliJ The New Project dialog box" border="true":::
+ * **Maven** for Scala project-creation wizard support.
+ * **SBT** for managing the dependencies and building for the Scala project.
-5. Select **Next**.
+ ![Screenshot showing create application.](media/apache-spark-create-standalone-application/spark-4.png)
-6. In the **New Project** window, provide the following information:
+1. Select **Next**.
+
+1. In the **New Project** window, provide the following information:
| Property | Description |
- | -- | -- |
- |Project name| Enter a name.|
- |Project&nbsp;location| Enter the location to save your project.|
- |Project SDK| This field will be blank on your first use of IDEA. Select **New...** and navigate to your JDK.|
- |Spark Version|The creation wizard integrates the proper version for Spark SDK and Scala SDK. If the Spark cluster version is earlier than 2.0, select **Spark 1.x**. Otherwise, select **Spark2.x**. This example uses **Spark 2.3.0 (Scala 2.11.8)**.|
+ | -- | -- |
+ |Project name| Enter a name.|
+ |Project&nbsp;location| Enter the location to save your project.|
+ |Project SDK| This field will be blank on your first use of IDEA. Select **New...** and navigate to your JDK.|
+ |Spark Version|The creation wizard integrates the proper version for Spark SDK and Scala SDK. If the Spark cluster version is earlier than 2.0, select **Spark 1.x**. Otherwise, select **Spark2.x**. This example uses **Spark 2.3.0 (Scala 2.11.8)**.|
:::image type="content" source="./media/apache-spark-create-standalone-application/hdi-scala-new-project.png" alt-text="IntelliJ IDEA Selecting the Spark SDK" border="true":::
-7. Select **Finish**.
+1. Select **Finish**.
## Create a standalone Scala project
-1. Start IntelliJ IDEA, and select **Create New Project** to open the **New Project** window.
+ 1. Start IntelliJ IDEA, and select **Create New Project** to open the **New Project** window.
-2. Select **Maven** from the left pane.
+ 1. Select **Maven** from the left pane.
-3. Specify a **Project SDK**. If blank, select **New...** and navigate to the Java installation directory.
+ 1. Specify a **Project SDK**. If blank, select **New...** and navigate to the Java installation directory.
-4. Select the **Create from archetype** checkbox.
+ 1. Select the **Create from archetype** checkbox.
-5. From the list of archetypes, select **`org.scala-tools.archetypes:scala-archetype-simple`**. This archetype creates the right directory structure and downloads the required default dependencies to write Scala program.
+ 1. From the list of archetypes, select **`org.scala-tools.archetypes:scala-archetype-simple`**. This archetype creates the right directory structure and downloads the required default dependencies to write Scala program.
- :::image type="content" source="./media/apache-spark-create-standalone-application/intellij-project-create-maven.png" alt-text="Screenshot shows the selected archetype in the New Project window." border="true":::
+ :::image type="content" source="./media/apache-spark-create-standalone-application/intellij-project-create-maven.png" alt-text="Screenshot shows the selected archetype in the New Project window." border="true":::
-6. Select **Next**.
+ 1. Select **Next**.
-7. Expand **Artifact Coordinates**. Provide relevant values for **GroupId**, and **ArtifactId**. **Name**, and **Location** will autopopulate. The following values are used in this tutorial:
+ 1. Expand **Artifact Coordinates**. Provide relevant values for **GroupId**, and **ArtifactId**. **Name**, and **Location** will autopopulate. The following values are used in this tutorial:
- **GroupId:** com.microsoft.spark.example - **ArtifactId:** SparkSimpleApp
- :::image type="content" source="./media/apache-spark-create-standalone-application/intellij-artifact-coordinates.png" alt-text="Screenshot shows the Artifact Coordinates option in the New Project window." border="true":::
+ :::image type="content" source="./media/apache-spark-create-standalone-application/intellij-artifact-coordinates.png" alt-text="Screenshot shows the Artifact Coordinates option in the New Project window." border="true":::
-8. Select **Next**.
+ 1. Select **Next**.
-9. Verify the settings and then select **Next**.
+ 1. Verify the settings and then select **Next**.
-10. Verify the project name and location, and then select **Finish**. The project will take a few minutes to import.
+1. Verify the project name and location, and then select **Finish**. The project will take a few minutes to import.
-11. Once the project has imported, from the left pane navigate to **SparkSimpleApp** > **src** > **test** > **scala** > **com** > **microsoft** > **spark** > **example**. Right-click **MySpec**, and then select **Delete...**. You don't need this file for the application. Select **OK** in the dialog box.
-
-12. In the later steps, you update the **pom.xml** to define the dependencies for the Spark Scala application. For those dependencies to be downloaded and resolved automatically, you must configure Maven.
+1. Once the project has imported, from the left pane navigate to **SparkSimpleApp** > **src** > **test** > **scala** > **com** > **microsoft** > **spark** > **example**. Right-click **MySpec**, and then select **Delete...**. You don't need this file for the application. Select **OK** in the dialog box.
-13. From the **File** menu, select **Settings** to open the **Settings** window.
+1. In the later steps, you update the **pom.xml** to define the dependencies for the Spark Scala application. For those dependencies to be downloaded and resolved automatically, you must configure Maven.
-14. From the **Settings** window, navigate to **Build, Execution, Deployment** > **Build Tools** > **Maven** > **Importing**.
+1. From the **File** menu, select **Settings** to open the **Settings** window.
-15. Select the **Import Maven projects automatically** checkbox.
+1. From the **Settings** window, navigate to **Build, Execution, Deployment** > **Build Tools** > **Maven** > **Importing**.
-16. Select **Apply**, and then select **OK**. You'll then be returned to the project window.
+1. Select the **Import Maven projects automatically** checkbox.
+1. Select **Apply**, and then select **OK**. You'll then be returned to the project window.
+
    :::image type="content" source="./media/apache-spark-create-standalone-application/configure-maven-download.png" alt-text="Configure Maven for automatic downloads" border="true":::
-17. From the left pane, navigate to **src** > **main** > **scala** > **com.microsoft.spark.example**, and then double-click **App** to open App.scala.
+1. From the left pane, navigate to **src** > **main** > **scala** > **com.microsoft.spark.example**, and then double-click **App** to open App.scala.
-18. Replace the existing sample code with the following code and save the changes. This code reads the data from the HVAC.csv (available on all HDInsight Spark clusters). Retrieves the rows that only have one digit in the sixth column. And writes the output to **/HVACOut** under the default storage container for the cluster.
+1. Replace the existing sample code with the following code and save the changes. This code reads the data from the HVAC.csv (available on all HDInsight Spark clusters). Retrieves the rows that only have one digit in the sixth column. And writes the output to **/HVACOut** under the default storage container for the cluster.
    ```scala
    package com.microsoft.spark.example

    import org.apache.spark.SparkConf
    import org.apache.spark.SparkContext

    /**
      * Test IO to wasb
      */
Do the following steps to install the Scala plugin:
} ```
-19. In the left pane, double-click **pom.xml**.
+1. In the left pane, double-click **pom.xml**.
-20. Within `<project>\<properties>` add the following segments:
+1. Within `<project>\<properties>` add the following segments:
```xml <scala.version>2.11.8</scala.version>
Do the following steps to install the Scala plugin:
<scala.binary.version>2.11</scala.binary.version> ```
-21. Within `<project>\<dependencies>` add the following segments:
+1. Within `<project>\<dependencies>` add the following segments:
```xml <dependency>
Do the following steps to install the Scala plugin:
</dependency> ```
    Save changes to pom.xml.
-22. Create the .jar file. IntelliJ IDEA enables creation of JAR as an artifact of a project. Do the following steps.
+1. Create the .jar file. IntelliJ IDEA enables creation of JAR as an artifact of a project. Do the following steps.
1. From the **File** menu, select **Project Structure...**.
- 2. From the **Project Structure** window, navigate to **Artifacts** > **the plus symbol +** > **JAR** > **From modules with dependencies...**.
+ 1. From the **Project Structure** window, navigate to **Artifacts** > **the plus symbol +** > **JAR** > **From modules with dependencies...**.
:::image type="content" source="./media/apache-spark-create-standalone-application/hdinsight-create-jar1.png" alt-text="`IntelliJ IDEA project structure add jar`" border="true":::
- 3. In the **Create JAR from Modules** window, select the folder icon in the **Main Class** text box.
+ 1. In the **Create JAR from Modules** window, select the folder icon in the **Main Class** text box.
- 4. In the **Select Main Class** window, select the class that appears by default and then select **OK**.
+ 1. In the **Select Main Class** window, select the class that appears by default and then select **OK**.
:::image type="content" source="./media/apache-spark-create-standalone-application/hdinsight-create-jar2.png" alt-text="`IntelliJ IDEA project structure select class`" border="true":::
- 5. In the **Create JAR from Modules** window, ensure the **extract to the target JAR** option is selected, and then select **OK**. This setting creates a single JAR with all dependencies.
+ 1. In the **Create JAR from Modules** window, ensure the **extract to the target JAR** option is selected, and then select **OK**. This setting creates a single JAR with all dependencies.
:::image type="content" source="./media/apache-spark-create-standalone-application/hdinsight-create-jar3.png" alt-text="IntelliJ IDEA project structure jar from module" border="true":::
- 6. The **Output Layout** tab lists all the jars that are included as part of the Maven project. You can select and delete the ones on which the Scala application has no direct dependency. For the application, you're creating here, you can remove all but the last one (**SparkSimpleApp compile output**). Select the jars to delete and then select the negative symbol **-**.
+ 1. The **Output Layout** tab lists all the jars that are included as part of the Maven project. You can select and delete the ones on which the Scala application has no direct dependency. For the application, you're creating here, you can remove all but the last one (**SparkSimpleApp compile output**). Select the jars to delete and then select the negative symbol **-**.
    :::image type="content" source="./media/apache-spark-create-standalone-application/hdi-delete-output-jars.png" alt-text="`IntelliJ IDEA project structure delete output`" border="true"::: Make sure the **Include in project build** checkbox is selected. This option ensures that the jar is created every time the project is built or updated. Select **Apply** and then **OK**.
- 7. To create the jar, navigate to **Build** > **Build Artifacts** > **Build**. The project will compile in about 30 seconds. The output jar is created under **\out\artifacts**.
+ 1. To create the jar, navigate to **Build** > **Build Artifacts** > **Build**. The project will compile in about 30 seconds. The output jar is created under **\out\artifacts**.
:::image type="content" source="./media/apache-spark-create-standalone-application/hdi-artifact-output-jar.png" alt-text="IntelliJ IDEA project artifact output" border="true":::
If you're not going to continue to use this application, delete the cluster that
## Next step In this article, you learned how to create an Apache Spark Scala application. Advance to the next article to learn how to run this application on an HDInsight Spark cluster using Livy.- > [!div class="nextstepaction"]
->[Run jobs remotely on an Apache Spark cluster using Apache Livy](./apache-spark-livy-rest-interface.md)
+> [Run jobs remotely on an Apache Spark cluster using Apache Livy](./apache-spark-livy-rest-interface.md)
+
hdinsight Apache Spark Machine Learning Mllib Ipython https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-machine-learning-mllib-ipython.md
description: Learn how to use Spark MLlib to create a machine learning app that
Previously updated : 04/27/2020 Last updated : 05/19/2022 # Use Apache Spark MLlib to build a machine learning application and analyze a dataset
hdinsight Apache Spark Troubleshoot Outofmemory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-troubleshoot-outofmemory.md
Delete all entries using steps detailed below.
1. Wait for the above command to complete and the cursor to return the prompt and then restart Livy service from Ambari, which should succeed. > [!NOTE]
-> `DELETE` the livy session once it is completed its execution. The Livy batch sessions will not be deleted automatically as soon as the spark app completes, which is by design. A Livy session is an entity created by a POST request against Livy Rest server. A `DELETE` call is needed to delete that entity. Or we should wait for the GC to kick in.
+> `DELETE` the livy session once it is completed its execution. The Livy batch sessions will not be deleted automatically as soon as the spark app completes, which is by design. A Livy session is an entity created by a POST request against Livy REST server. A `DELETE` call is needed to delete that entity. Or we should wait for the GC to kick in.
healthcare-apis Access Healthcare Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/access-healthcare-apis.md
Title: Access Azure Health Data Services description: This article describes the different ways to access Azure Health Data Services in your applications using tools and programming languages. -+ Previously updated : 03/22/2022- Last updated : 05/03/2022+ # Access Azure Health Data Services
healthcare-apis Authentication Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/authentication-authorization.md
You can use online tools such as [https://jwt.ms](https://jwt.ms/) to view the t
**The access token is valid for one hour by default. You can obtain a new token or renew it using the refresh token before it expires.**
-To obtain an access token, you can use tools such as Postman, the Rest Client extension in Visual Studio Code, PowerShell, CLI, curl, and the [Azure AD authentication libraries](../active-directory/develop/reference-v2-libraries.md).
+To obtain an access token, you can use tools such as Postman, the REST Client extension in Visual Studio Code, PowerShell, CLI, curl, and the [Azure AD authentication libraries](../active-directory/develop/reference-v2-libraries.md).
## Encryption
healthcare-apis Autoscale Azure Api Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/autoscale-azure-api-fhir.md
Title: Autoscale for Azure API for FHIR description: This article describes the autoscale feature for Azure API for FHIR.-+ Previously updated : 02/15/2022- Last updated : 05/03/2022+ # Autoscale for Azure API for FHIR
healthcare-apis Azure Api Fhir Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/azure-api-fhir-resource-manager-template.md
Title: 'Quickstart: Deploy Azure API for FHIR using an ARM template' description: In this quickstart, learn how to deploy Azure API for Fast Healthcare Interoperability Resources (FHIR®), by using an Azure Resource Manager template (ARM template).-+ - Previously updated : 02/15/2022+ Last updated : 05/03/2022 # Quickstart: Use an ARM template to deploy Azure API for FHIR
healthcare-apis Configure Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-azure-rbac.md
Title: Configure Azure role-based access control (Azure RBAC) for Azure API for FHIR description: This article describes how to configure Azure RBAC for the Azure API for FHIR data plane-+ Previously updated : 02/15/2022- Last updated : 05/03/2022+ + # Configure Azure RBAC for FHIR In this article, you'll learn how to use [Azure role-based access control (Azure RBAC)](../../role-based-access-control/index.yml) to assign access to the Azure API for FHIR data plane. Azure RBAC is the preferred methods for assigning data plane access when data plane users are managed in the Azure Active Directory tenant associated with your Azure subscription. If you're using an external Azure Active Directory tenant, refer to the [local RBAC assignment reference](configure-local-rbac.md).
healthcare-apis Configure Cross Origin Resource Sharing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-cross-origin-resource-sharing.md
Title: Configure cross-origin resource sharing in Azure API for FHIR description: This article describes how to configure cross-origin resource sharing in Azure API for FHIR.-- Previously updated : 02/15/2022++ Last updated : 05/03/2022
healthcare-apis Configure Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-database.md
Title: Configure database settings in Azure API for FHIR description: This article describes how to configure Database settings in Azure API for FHIR-+ Previously updated : 02/15/2022- Last updated : 05/03/2022+ # Configure database settings
healthcare-apis Configure Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-export-data.md
Last updated 02/15/2022-+ # Configure export settings in Azure API for FHIR and set up a storage account
healthcare-apis Configure Local Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-local-rbac.md
Title: Configure local role-based access control (local RBAC) for Azure API for FHIR description: This article describes how to configure the Azure API for FHIR to use a secondary Azure AD tenant for data plane-+ Previously updated : 02/15/2022- Last updated : 05/03/2022+ ms.devlang: azurecli
healthcare-apis Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-private-link.md
Title: Private link for Azure API for FHIR description: This article describes how to set up a private endpoint for Azure API for FHIR services -+ Previously updated : 02/15/2022- Last updated : 05/03/2022+ # Configure private link
healthcare-apis Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/disaster-recovery.md
Title: Disaster recovery for Azure API for FHIR description: In this article, you'll learn how to enable disaster recovery features for Azure API for FHIR.-+ Previously updated : 02/15/2022- Last updated : 05/03/2022+ # Disaster recovery for Azure API for FHIR
healthcare-apis Fhir Paas Cli Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-paas-cli-quickstart.md
Title: 'Quickstart: Deploy Azure API for FHIR using Azure CLI' description: In this quickstart, you'll learn how to deploy Azure API for FHIR in Azure using the Azure CLI. -+ Previously updated : 03/21/2022- Last updated : 05/03/2022+
healthcare-apis Fhir Paas Portal Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-paas-portal-quickstart.md
Title: 'Quickstart: Deploy Azure API for FHIR using Azure portal' description: In this quickstart, you'll learn how to deploy Azure API for FHIR and configure settings using the Azure portal. -+ Last updated 03/21/2022-+
healthcare-apis Fhir Paas Powershell Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-paas-powershell-quickstart.md
Title: 'Quickstart: Deploy Azure API for FHIR using PowerShell' description: In this quickstart, you'll learn how to deploy Azure API for FHIR using PowerShell. -+ Last updated 02/15/2022-+
healthcare-apis Find Identity Object Ids https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/find-identity-object-ids.md
Title: Find identity object IDs for authentication - Azure API for FHIR description: This article explains how to locate the identity object IDs needed to configure authentication for Azure API for FHIR -+ Previously updated : 02/15/2022- Last updated : 05/03/2022+ # Find identity object IDs for authentication configuration for Azure API for FHIR
healthcare-apis Get Healthcare Apis Access Token Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/get-healthcare-apis-access-token-cli.md
Title: Get access token using Azure CLI - Azure API for FHIR description: This article explains how to obtain an access token for Azure API for FHIR using the Azure CLI. -+ Previously updated : 02/15/2022- Last updated : 05/03/2022+ # Get access token for Azure API for FHIR using Azure CLI
healthcare-apis Get Started With Azure Api Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/get-started-with-azure-api-fhir.md
+
+ Title: Get started with Azure API for FHIR
+description: This document describes how to get started with Azure API for FHIR.
++++ Last updated : 05/17/2022+++
+# Get started with Azure API for FHIR
+
+This article outlines the basic steps to get started with Azure API for FHIR. Azure API for FHIR is a managed, standards-based, compliant API for clinical health data that enables solutions for actionable analytics and machine learning.
+
+As a prerequisite, you'll need an Azure subscription and have been granted proper permissions to create Azure resource groups and deploy Azure resources. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+[![Screenshot of Azure API for FHIR flow diagram.](media/get-started/get-started-azure-api-fhir-diagram.png)](media/get-started/get-started-azure-api-fhir-diagram.png#lightbox)
+
+## Create Azure resource
+
+To get started with Azure API for FHIR, you must [create a resource](https://ms.portal.azure.com/#create/hub) in the Azure portal. Enter *Azure API for FHIR* in the **Search services and marketplace** box.
+
+
+[![Screenshot of the Azure search services and marketplace text box.](media/get-started/search-services-marketplace.png)](media/get-started/search-services-marketplace.png#lightbox)
+
+After you've located the Azure API for FHIR resource, select **Create**.
+
+[![Screenshot of the create Azure API for FHIR resource button.](media/get-started/create-azure-api-for-fhir-resource.png)](media/get-started/create-azure-api-for-fhir-resource.png#lightbox)
+
+## Deploy Azure API for FHIR
+
+Refer to the steps in the [Quickstart guide](fhir-paas-portal-quickstart.md) for deploying an instance of Azure API for FHIR using the Azure portal. You can also deploy an instance of Azure API for FHIR using [PowerShell](fhir-paas-powershell-quickstart.md), [CLI](fhir-paas-cli-quickstart.md), and an [ARM template](azure-api-fhir-resource-manager-template.md).
+
+## Accessing Azure API for FHIR
+
+When you're working with healthcare data, it's important to ensure that the data is secure, and it can't be accessed by unauthorized users or applications. FHIR servers use [OAuth 2.0](https://oauth.net/2/) to ensure this data security. Azure API for FHIR is secured using [Azure Active Directory (Azure AD)](https://docs.microsoft.com/azure/active-directory/), which is an example of an OAuth 2.0 identity provider. [Azure AD identity configuration for Azure API for FHIR](././../azure-api-for-fhir/azure-active-directory-identity-configuration.md) provides an overview of FHIR server authorization, and the steps needed to obtain a token to access a FHIR server. While these steps apply to any FHIR server and any identity provider, this article will walk you through Azure API for FHIR as the FHIR server and Azure AD as our identity provider. For more information about accessing Azure API for FHIR, see [Access control overview](././../azure-api-for-fhir/azure-active-directory-identity-configuration.md#access-control-overview).
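+
+As a minimal illustration (not the only way to do this), you can obtain such a token with the Azure CLI after signing in with `az login`; the resource URL below is a placeholder for your own Azure API for FHIR endpoint:
+
+```
+# Request an Azure AD access token for the FHIR server audience.
+# Replace <your-fhir-account> with the name of your Azure API for FHIR instance.
+az account get-access-token --resource=https://<your-fhir-account>.azurehealthcareapis.com
+```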
+
+### Access token validation
+
+How Azure API for FHIR validates the access token will depend on implementation and configuration. The article [Azure API for FHIR access token validation](azure-api-fhir-access-token-validation.md) will guide you through the validation steps, which can be helpful when troubleshooting access issues.
+
+### Register a client application
+
+For an application to interact with Azure AD, it needs to be registered. In the context of the FHIR server, there are two kinds of application registrations:
+
+- Resource application registrations
+- Client application registrations
+
+For more information about the two kinds of application registrations, see [Register the Azure Active Directory apps for Azure API for FHIR](fhir-app-registration.md).
+
+## Configure Azure RBAC for FHIR
+
+The article [Configure Azure RBAC for FHIR](configure-azure-rbac.md) describes how to use [Azure role-based access control (Azure RBAC)](https://docs.microsoft.com/azure/role-based-access-control/) to assign access to the Azure API for FHIR data plane. Azure RBAC is the preferred method for assigning data plane access when data plane users are managed in the Azure AD tenant associated with your Azure subscription. If you're using an external Azure AD tenant, refer to the [local RBAC assignment reference](configure-local-rbac.md).
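+
+As a hedged sketch of what a data plane role assignment can look like with the Azure CLI (the object ID, role, and scope below are placeholders; see the linked articles for the exact roles and scopes that apply to your setup):
+
+```
+# Assign the FHIR Data Contributor role on the Azure API for FHIR resource.
+az role assignment create \
+  --assignee "<user-or-app-object-id>" \
+  --role "FHIR Data Contributor" \
+  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.HealthcareApis/services/<fhir-account>"
+```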
+
+## Next steps
+
+This article described the basic steps to get started using Azure API for FHIR. For more information about Azure API for FHIR, see
+
+>[!div class="nextstepaction"]
+>[What is Azure API for FHIR?](overview.md)
+
+>[!div class="nextstepaction"]
+>[Frequently asked questions about Azure API for FHIR](fhir-faq.yml)
+++
healthcare-apis Move Fhir Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/move-fhir-service.md
Title: Move Azure API for FHIR instance to a different subscription or resource group description: This article describes how to move Azure an API for FHIR instance -+ Previously updated : 02/15/2022- Last updated : 05/03/2022+ # Move Azure API for FHIR to a different subscription or resource group
healthcare-apis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/policy-reference.md
Title: Built-in policy definitions for Azure API for FHIR description: Lists Azure Policy built-in policy definitions for Azure API for FHIR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 05/11/2022-- Last updated : 05/03/2022++
healthcare-apis Use Custom Headers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/use-custom-headers.md
-- Previously updated : 02/15/2022++ Last updated : 05/03/2022 # Add data to audit logs by using custom HTTP headers in Azure API for FHIR
healthcare-apis Use Smart On Fhir Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/use-smart-on-fhir-proxy.md
-- Previously updated : 02/15/2022++ Last updated : 05/03/2022 # Tutorial: Azure Active Directory SMART on FHIR proxy
healthcare-apis Configure Azure Rbac Using Scripts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/configure-azure-rbac-using-scripts.md
Title: Grant permissions to users and client applications using CLI and REST API - Azure Health Data Services description: This article describes how to grant permissions to users and client applications using CLI and REST API. -+ Previously updated : 03/21/2022- Last updated : 05/03/2022+ # Configure Azure RBAC role using Azure CLI and REST API
healthcare-apis Configure Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/configure-azure-rbac.md
Title: Configure Azure RBAC role for FHIR service - Azure Health Data Services description: This article describes how to configure Azure RBAC role for FHIR.-+ Previously updated : 02/15/2022- Last updated : 05/03/2022+ # Configure Azure RBAC role for Azure Health Data Services
healthcare-apis Deploy Healthcare Apis Using Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/deploy-healthcare-apis-using-bicep.md
Title: How to create Azure Health Data Services, workspaces, FHIR and DICOM service, and MedTech service using Azure Bicep description: This document describes how to deploy Azure Health Data Services using Azure Bicep.-+ Previously updated : 03/24/2022- Last updated : 05/03/2022+
healthcare-apis Deploy Dicom Services In Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/deploy-dicom-services-in-azure.md
Previously updated : 03/22/2022 Last updated : 05/03/2022
healthcare-apis Get Started With Dicom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/get-started-with-dicom.md
Title: Get started with the DICOM service - Azure Health Data Services description: This document describes how to get started with the DICOM service in Azure Health Data Services.-+ Previously updated : 03/22/2022- Last updated : 05/03/2022+
healthcare-apis Configure Cross Origin Resource Sharing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-cross-origin-resource-sharing.md
Title: Configure cross-origin resource sharing in FHIR service description: This article describes how to configure cross-origin resource sharing in FHIR service--++ Last updated 03/02/2022
healthcare-apis Fhir Portal Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-portal-quickstart.md
description: This article teaches users how to deploy a FHIR service in the Azur
Previously updated : 03/01/2022 Last updated : 05/03/2022
Before getting started, you should have already deployed Azure Health Data Servi
## Create a new FHIR service
-From the workspace, select **Deploy FHIR Services**.
+From the workspace, select **Deploy FHIR service**.
[ ![Deploy FHIR service](media/fhir-service/deploy-fhir-services.png) ](media/fhir-service/deploy-fhir-services.png#lightbox)
To validate that the new FHIR API account is provisioned, fetch a capability sta
## Next steps
-In this article, you learned how to deploy FHIR service within Azure Health Data Services using the Azure portal. For more information about accessing FHIR service using Postman, see
+In this article, you learned how to deploy FHIR service within Azure Health Data Services using the Azure portal. For more information about accessing FHIR service using Postman, see
>[!div class="nextstepaction"] >[Access FHIR service using Postman](../fhir/use-postman.md)
healthcare-apis Fhir Service Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-service-autoscale.md
Title: Autoscale feature for Azure Health Data Services FHIR service description: This article describes the Autoscale feature for Azure Health Data Services FHIR service.-+ Previously updated : 03/01/2022- Last updated : 05/03/2022+ # FHIR service autoscale
healthcare-apis Fhir Service Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-service-diagnostic-logs.md
Title: View and enable diagnostic settings in FHIR service - Azure Health Data Services description: This article describes how to enable diagnostic settings in FHIR service and review some sample queries for audit logs. -+ Previously updated : 03/01/2022- Last updated : 05/03/2022+ # View and enable diagnostic settings in the FHIR service
healthcare-apis Fhir Service Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-service-resource-manager-template.md
Title: Deploy Azure Health Data Services FHIR service using ARM template description: Learn how to deploy FHIR service by using an Azure Resource Manager template (ARM template)-+ - Previously updated : 03/01/2022+ Last updated : 05/03/2022 # Deploy a FHIR service within Azure Health Data Services - using ARM template
healthcare-apis Get Started With Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/get-started-with-fhir.md
Title: Get started with FHIR service - Azure Health Data Services description: This document describes how to get started with FHIR service in Azure Health Data Services.-+ Previously updated : 03/22/2022- Last updated : 05/03/2022+
healthcare-apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/overview.md
Previously updated : 03/01/2022 Last updated : 05/16/2022
Exchange of data via the FHIR service provides audit logs and access controls th
FHIR capabilities from Microsoft are available in three configurations:
-* The FHIR service in Azure Health Data Services is a platform as a service (PaaS) offering in Azure that's easily provisioned in the Azure portal and managed by Microsoft. Includes the ability to provision other datasets, such as DICOM in the same workspace. This is available in Public Preview.
+* The FHIR service in Azure Health Data Services is a platform as a service (PaaS) offering in Azure that's easily provisioned in the Azure portal and managed by Microsoft. Includes the ability to provision other datasets, such as DICOM in the same workspace.
* Azure API for FHIR - A PaaS offering in Azure, easily provisioned in the Azure portal and managed by Microsoft. This implementation only includes FHIR data and is a GA product. * FHIR Server for Azure ΓÇô an open-source project that can be deployed into your Azure subscription, available on GitHub at https://github.com/Microsoft/fhir-server.
healthcare-apis Use Postman https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/use-postman.md
Title: Access the Azure Health Data Services FHIR service using Postman description: This article describes how to access Azure Health Data Services FHIR service with Postman. -+ Previously updated : 03/01/2022- Last updated : 05/03/2022+ # Access using Postman
healthcare-apis Using Curl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/using-curl.md
In this article, you'll learn how to access Azure Health Data Services with cURL
* An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/). * If you want to run the code locally, install [PowerShell](/powershell/module/powershellget/) and [Azure Az PowerShell](/powershell/azure/install-az-ps).
-* Optionally, you can run the scripts in Visual Studio Code with the Rest Client extension. For more information, see [Make a link to the Rest Client doc](using-rest-client.md).
+* Optionally, you can run the scripts in Visual Studio Code with the REST Client extension. For more information, see [Accessing Azure Health Data Services using the REST Client extension in Visual Studio Code](using-rest-client.md).
* Download and install [cURL](https://curl.se/download.html). ### CLI
In this article, you'll learn how to access Azure Health Data Services with cURL
* An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/). * If you want to run the code locally, install [Azure CLI](/cli/azure/install-azure-cli). * Optionally, install a Bash shell, such as Git Bash, which is included in [Git for Windows](https://gitforwindows.org/).
-* Optionally, run the scripts in Visual Studio Code with the Rest Client extension. For more information, see [Make a link to the Rest Client doc](using-rest-client.md).
+* Optionally, run the scripts in Visual Studio Code with the REST Client extension. For more information, see [Accessing Azure Health Data Services using the REST Client extension in Visual Studio Code](using-rest-client.md).
* Download and install [cURL](https://curl.se/download.html). ## Obtain Azure Access Token
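
As a minimal sketch of this step (the FHIR service URL below is a placeholder; use the URL shown on your FHIR service's overview page), you can request a token with the Azure CLI and keep it in a shell variable for the cURL calls that follow:

```azurecli-interactive
# Placeholder URL; replace <workspace-name> and <fhir-service-name> with your own values.
fhirurl="https://<workspace-name>-<fhir-service-name>.fhir.azurehealthcareapis.com"

# Request a token for the signed-in identity, scoped to the FHIR service.
token=$(az account get-access-token --resource=$fhirurl --query accessToken --output tsv)

# Use the token in a cURL call, for example to read the capability statement
# (the metadata endpoint also works without a token).
curl -X GET "$fhirurl/metadata" -H "Authorization: Bearer $token"
```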
healthcare-apis Get Access Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/get-access-token.md
Title: Get access token using Azure CLI or Azure PowerShell description: This article explains how to obtain an access token for Azure Health Data Services using the Azure CLI or Azure PowerShell. -+ Previously updated : 03/21/2022- Last updated : 05/03/2022+ ms.devlang: azurecli
In this article, you learned how to obtain an access token for the FHIR service
>[Access FHIR service using Postman](./fhir/use-postman.md) >[!div class="nextstepaction"]
->[Access FHIR service using Rest Client](./fhir/using-rest-client.md)
+>[Access FHIR service using REST Client](./fhir/using-rest-client.md)
>[!div class="nextstepaction"] >[Access DICOM service using cURL](dicom/dicomweb-standard-apis-curl.md)
healthcare-apis Get Started With Health Data Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/get-started-with-health-data-services.md
+
+ Title: Get started with Azure Health Data Services
+description: This document describes how to get started with Azure Health Data Services.
++++ Last updated : 05/17/2022+++
+# Get started with Azure Health Data Services
+
+This article outlines the basic steps to get started with Azure Health Data Services. Azure Health Data Services is a set of managed API services based on open standards and frameworks that enable workflows to improve healthcare and offer scalable and secure healthcare solutions.
+
+To get started with Azure Health Data Services, you'll need to create a workspace in the Azure portal.
+
+The workspace is a logical container for all your healthcare service instances such as Fast Healthcare Interoperability Resources (FHIR®) service, Digital Imaging and Communications in Medicine (DICOM®) service, and MedTech service. The workspace also creates a compliance boundary (HIPAA, HITRUST) within which protected health information can travel.
+
+Before you can create a workspace in the Azure portal, you must have an Azure account subscription. If you don't have an Azure subscription, see [Create your free Azure account today](https://azure.microsoft.com/free/search/?OCID=AID2100131_SEM_c4b0772dc7df1f075552174a854fd4bc:G:s&ef_id=c4b0772dc7df1f075552174a854fd4bc:G:s&msclkid=c4b0772dc7df1f075552174a854fd4bc).
+
+[![Screenshot of Azure Health Data Services flow diagram.](media/get-started-azure-health-data-services-diagram.png)](media/get-started-azure-health-data-services-diagram.png#lightbox)
+
+## Deploy Azure Health Data Services
+
+To get started with Azure Health Data Services, you must [create a resource](https://ms.portal.azure.com/#create/hub) in the Azure portal. Enter *Azure Health Data Services* in the **Search services and marketplace** box.
+
+[![Screenshot of the Azure search services and marketplace text box.](media/search-services-marketplace.png)](media/search-services-marketplace.png#lightbox)
+
+After you've located the Azure Health Data Services resource, select **Create**.
+
+[![Screenshot of the create Azure Health Data Services resource button.](media/create-azure-health-data-services-resource.png)](media/create-azure-health-data-services-resource.png#lightbox)
+
+## Create workspace
+
+After the Azure Health Data Services resource group is deployed, you can enter the workspace subscription and instance details.
+
+To be guided through these steps, see [Deploy Azure Health Data Services workspace using Azure portal](healthcare-apis-quickstart.md).
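
If you'd rather script this step, here's a minimal sketch using the Azure CLI `healthcareapis` extension (the resource group name, workspace name, and region are placeholder values, and the exact parameter names are worth confirming with `az healthcareapis workspace create --help`):

```azurecli-interactive
# Requires the healthcareapis extension: az extension add --name healthcareapis
az group create --name my-health-data-rg --location eastus

# Workspace names must be 3 to 24 lowercase alphanumeric characters.
az healthcareapis workspace create \
  --resource-group my-health-data-rg \
  --name myhealthworkspace \
  --location eastus
```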
+
+> [!Note]
+> You can provision multiple data services within a workspace, and by design, they work seamlessly with one another. With the workspace, you can organize all your Azure Health Data Services instances and manage certain configuration settings that are shared among all the underlying datasets and services, where applicable.
+
+[![Screenshot of the Azure Health Data Services workspace.](media/health-data-services-workspace.png)](media/health-data-services-workspace.png#lightbox)
+
+## User access and permissions
+
+Azure Health Data Services is a collection of secured managed services that use Azure Active Directory (Azure AD). For Azure Health Data Services to access Azure resources, such as storage accounts and event hubs, you must enable the system-assigned managed identity and grant it the proper permissions. Client applications are registered in Azure AD and can be used to access Azure Health Data Services. User data access controls are implemented in the applications or services that contain the business logic.
+
+Authenticated users and client applications of Azure Health Data Services must be granted the proper [application roles](./../healthcare-apis/authentication-authorization.md#application-roles). After they're granted the proper application roles, the [authenticated users and client applications](./../healthcare-apis/authentication-authorization.md#authorization) can access Azure Health Data Services by obtaining a valid [access token](./../healthcare-apis/authentication-authorization.md#access-token) issued by Azure AD, and can perform the specific operations defined by those roles. For more information, see [Authentication and Authorization for Azure Health Data Services](authentication-authorization.md).
+
+To access Azure Health Data Services, you also [register a client application](register-application.md) in Azure AD. During registration, you obtain the [application (client) ID](./../healthcare-apis/register-application.md#application-id-client-id) and configure the [authentication setting](./../healthcare-apis/register-application.md#authentication-setting-confidential-vs-public) to allow public client flows or to treat the registration as a confidential client application.
+
+As a requirement for the DICOM service (and optionally for the FHIR service), you configure the user access [API permissions](./../healthcare-apis/register-application.md#api-permissions) or role assignments for Azure Health Data Services, which are managed through [Azure role-based access control (Azure RBAC)](configure-azure-rbac.md).
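
For example, a minimal sketch of a data-plane role assignment with the Azure CLI (the object ID and resource ID below are placeholders for your own values) might look like this:

```azurecli-interactive
# Placeholder object ID of the user or client application, and placeholder
# resource ID of the DICOM service instance.
objectId="<user-or-client-application-object-id>"
dicomScope="/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.HealthcareApis/workspaces/<workspace-name>/dicomservices/<dicom-service-name>"

# Grant data-plane access to the DICOM service.
az role assignment create --assignee $objectId --role "DICOM Data Owner" --scope $dicomScope

# For the FHIR service, assign a FHIR data-plane role instead,
# for example "FHIR Data Contributor", scoped to the FHIR service resource ID.
```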
+
+## FHIR service
+
+FHIR service in Azure Health Data Services enables rapid exchange of data through FHIR APIs, backed by a managed platform as a service (PaaS) offering in the cloud. It makes it easier for anyone working with health data to ingest, manage, and persist Protected Health Information (PHI) in the cloud.
+
+The FHIR service is secured by Azure AD, which can't be disabled. To access the service API, you must create a client application (also referred to as a service principal) in Azure AD and grant it the right permissions. You can create or register a client application from the [Azure portal](register-application.md), or by using PowerShell and Azure CLI scripts. This client application can be used for one or more FHIR service instances. It can also be used for other services in Azure Health Data Services.
+
+You can also do the following:
+- Grant access permissions
+- Perform create, read (search), update, and delete (CRUD) transactions against the FHIR service in your applications
+- Obtain an access token for the FHIR service
+- Access the FHIR service using tools such as cURL, Postman, and REST Client
+- Load data directly using the POST or PUT method against the FHIR service
+- Export ($export) data to Azure Storage
+- Convert data: convert [HL7 v2](./../healthcare-apis/fhir/convert-data.md) and other format data to FHIR
+- Create Power BI dashboard reports with FHIR data
+
+For more information, see [Get started with FHIR service](./../healthcare-apis/fhir/get-started-with-fhir.md).
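
As an illustration of the load and access scenarios in the list above, here's a minimal sketch that creates and then searches for a Patient resource (the service URL is a placeholder, and the token request assumes the signed-in Azure CLI identity has a FHIR data-plane role):

```azurecli-interactive
fhirurl="https://<workspace-name>-<fhir-service-name>.fhir.azurehealthcareapis.com"
token=$(az account get-access-token --resource=$fhirurl --query accessToken --output tsv)

# Create (POST) a minimal Patient resource.
curl -X POST "$fhirurl/Patient" \
  -H "Authorization: Bearer $token" \
  -H "Content-Type: application/fhir+json" \
  -d '{"resourceType":"Patient","name":[{"family":"Kirk","given":["James"]}]}'

# Search (GET) for the Patient you just created.
curl -X GET "$fhirurl/Patient?family=Kirk" -H "Authorization: Bearer $token"
```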
+
+## DICOM service
+
+DICOM service is a managed service within Azure Health Data Services that ingests and persists DICOM objects at many thousands of images per second. It facilitates communication and transmission of imaging data with any DICOMweb™-enabled systems or applications via DICOMweb Standard APIs such as [Store (STOW-RS)](./../healthcare-apis/dicom/dicom-services-conformance-statement.md#store-stow-rs), [Search (QIDO-RS)](./../healthcare-apis/dicom/dicom-services-conformance-statement.md#search-qido-rs), and [Retrieve (WADO-RS)](./../healthcare-apis/dicom/dicom-services-conformance-statement.md#retrieve-wado-rs).
+
+DICOM service is secured by Azure AD, which can't be disabled. To access the service API, you must create a client application (also referred to as a service principal) in Azure AD and grant it the right permissions. You can create or register a client application from the [Azure portal](register-application.md), or by using PowerShell and Azure CLI scripts. This client application can be used for one or more DICOM service instances. It can also be used for other services in Azure Health Data Services.
+
+You can also do the following:
+- Grant access permissions or assign roles from the [Azure portal](./../healthcare-apis/configure-azure-rbac.md), or using PowerShell and Azure CLI scripts.
+- Perform create, read (search), update, and delete (CRUD) transactions against the DICOM service in your applications or by using tools such as Postman, REST Client, cURL, and Python
+- Obtain an Azure AD access token using PowerShell, Azure CLI, REST CLI, or .NET SDK
+- Access the DICOM service using tools such as .NET C#, cURL, Python, Postman, and REST Client
+
+For more information, see [Get started with the DICOM service](./../healthcare-apis/dicom/get-started-with-dicom.md).
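
For instance, a minimal QIDO-RS search with cURL might look like the sketch below (the service URL, token audience, and `v1` API path are assumptions to verify against the DICOMweb conformance statement for your instance):

```azurecli-interactive
dicomurl="https://<workspace-name>-<dicom-service-name>.dicom.azurehealthcareapis.com"
token=$(az account get-access-token --resource=https://dicom.healthcareapis.azure.com --query accessToken --output tsv)

# QIDO-RS: search for all studies in the service.
curl -X GET "$dicomurl/v1/studies" \
  -H "Authorization: Bearer $token" \
  -H "Accept: application/dicom+json"
```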
+
+## MedTech service
+
+MedTech service transforms device data into FHIR-based Observation resources and then persists the transformed messages into the Azure Health Data Services FHIR service. This allows for a unified approach to health data access, standardization, and trend capture, enabling the discovery of operational and clinical insights, the connection of new device applications, and support for new research projects.
+
+To ensure that your MedTech service works properly, it must be granted access permissions to the Azure event hub and the FHIR service. The Azure Event Hubs Data Receiver role allows the MedTech service to receive data from that event hub. For more information about application roles, see [Authentication & Authorization for Azure Health Data Services](./../healthcare-apis/authentication-authorization.md).
+
+You can also do the following:
+- Create a new FHIR service or use an existing one in the same or different workspace
+- Create a new Event Hub or use an existing one
+- Assign roles to allow the MedTech service to access [Event Hub](./../healthcare-apis/iot/deploy-iot-connector-in-azure.md#granting-medtech-service-access) and [FHIR service](./../healthcare-apis/iot/deploy-iot-connector-in-azure.md#accessing-the-medtech-service-from-the-fhir-service)
+- Send data to the Event Hub, which is associated with the MedTech service
+
+For more information, see [Get started with the MedTech service](./../healthcare-apis/iot/get-started-with-iot.md).
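
A minimal sketch of those role assignments with the Azure CLI (the principal ID and resource IDs are placeholders for your MedTech service's system-assigned identity and your own resources) could look like this:

```azurecli-interactive
# Placeholder principal ID of the MedTech service's system-assigned managed identity.
medtechPrincipalId="<medtech-service-principal-id>"

# Allow the MedTech service to read from the event hub.
az role assignment create --assignee $medtechPrincipalId \
  --role "Azure Event Hubs Data Receiver" --scope "<event-hub-resource-id>"

# Allow the MedTech service to write transformed Observation resources to the FHIR service.
az role assignment create --assignee $medtechPrincipalId \
  --role "FHIR Data Writer" --scope "<fhir-service-resource-id>"
```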
+
+## Next steps
+
+This article described the basic steps to get started using Azure Health Data Services. For more information about Azure Health Data Services, see
+
+>[!div class="nextstepaction"]
+>[Authentication and Authorization for Azure Health Data Services](authentication-authorization.md)
+
+>[!div class="nextstepaction"]
+>[What is Azure Health Data Services?](healthcare-apis-overview.md)
+
+>[!div class="nextstepaction"]
+>[Frequently asked questions about Azure Health Data Services](healthcare-apis-faqs.md)
+
healthcare-apis Healthcare Apis Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/healthcare-apis-configure-private-link.md
Title: Private Link for Azure Health Data Services description: This article describes how to set up a private endpoint for Azure Health Data Services -+ Previously updated : 03/14/2022- Last updated : 05/03/2022+ # Configure Private Link for Azure Health Data Services
healthcare-apis Healthcare Apis Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/healthcare-apis-quickstart.md
Previously updated : 03/24/2022 Last updated : 05/03/2022
don't have an Azure subscription, see [Create your free Azure account today](h
In the Azure portal, select **Create a resource**.
-[ ![Create resource](media/create-resource.png) ](media/create-resource.png#lightbox)
+[ ![Screenshot of Create resource.](media/create-resource.png) ](media/create-resource.png#lightbox)
## Search for Azure Health Data Services In the search box, enter **Azure Health Data Services**.
-[ ![Search for HAzure Health Data Services](media/search-for-healthcare-apis.png) ](media/search-for-healthcare-apis.png#lightbox)
+[ ![Screenshot of Search for Azure Health Data Services](media/search-services-marketplace.png) ](media/search-services-marketplace.png#lightbox)
## Create Azure Health Data Services account Select **Create** to create a new Azure Health Data Services account.
- [ ![Create workspace](media/create-workspace-preview.png) ](media/create-workspace-preview.png#lightbox)
+ [ ![Screenshot of create new account button.](media/create-azure-health-data-services-resource.png) ](media/create-azure-health-data-services-resource.png#lightbox)
-## Enter Subscription and instance details
+## Enter subscription and workspace details
-1. Select a **Subscription** and **Resource group** from the drop-down lists or select **Create new**.
+1. Under the **Project details** section of the **Basics** tab, select a **Subscription** and **Resource group** from their drop-down lists. Select **Create new** to create a new resource group.
- [ ![Create workspace new](media/create-healthcare-api-workspace-new.png) ](media/create-healthcare-api-workspace-new.png#lightbox)
+ [ ![Screenshot of create health data services workspace basics tab.](media/create-health-data-services-workspace-basics-tab.png) ](media/create-health-data-services-workspace-basics-tab.png#lightbox)
2. Enter a **Name** for the workspace, and then select a **Region**. The name must be 3 to 24 alphanumeric characters, all in lowercase. Don't use a hyphen "-" as it's an invalid character for the name. For information about regions and availability zones, see [Regions and Availability Zones in Azure](../availability-zones/az-overview.md).
-3. (**Optional**) Select **Next: Tags >**. Enter a **Name** and **Value**, and then select **Next: Review + create**.
+3. Select **Next: Networking >**. It's here that you can connect a workspace publicly with the default **Public endpoint (all networks)** option selected. You may also connect a workspace using a private endpoint by selecting the **Private endpoint** option. For more information about accessing Azure Health Data Services over a private endpoint, see [Configure Private Link for Azure Health Data Services](healthcare-apis-configure-private-link.md).
- [ ![Tags](media/tags-new.png) ](media/tags-new.png#lightbox)
+ [ ![Screenshot of create health data services workspace networking tab.](media/create-workspace-networking-tab.png) ](media/create-workspace-networking-tab.png#lightbox)
- Tags are name/value pairs used for categorizing resources. For more information about tags, see [Use tags to organize your Azure resources and management hierarchy](.././azure-resource-manager/management/tag-resources.md).
+4. Select **Next: Tags >** if you want to include name and value pairs to categorize resources and view consolidated billing by applying the same tag to multiple resources and resource groups. Enter a **Name** and **Value** for the workspace, and then select **Review + create** or **Next: Review + create**. For more information about tags, see [Use tags to organize your Azure resources and management hierarchy](.././azure-resource-manager/management/tag-resources.md).
-4. Select **Create**.
+ [ ![Screenshot of the health data services workspace tags tab.](media/tags-new.png) ](media/tags-new.png#lightbox)
-[ ![Workspace terms](media/workspace-terms.png) ](media/workspace-terms.png)
+5. Select **Create** if you don't need to make any changes to the workspace project and instance details. If you need to make changes, select **Previous**.
+ [ ![Screenshot of the health data services workspace instance details.](media/workspace-review-create-tab.png) ](media/workspace-review-create-tab.png#lightbox)
**Optional**: You may select **Download a template for automation** of your newly created workspace.
+6. After the workspace deployment process is complete, select **Go to resource**.
+
+ [ ![Screenshot of the workspace and the go to resource button.](media/workspace-deployment-details.png) ](media/workspace-deployment-details.png#lightbox)
+
+ You now can create a FHIR service, DICOM service, and MedTech service from the newly deployed Azure Health Data Services workspace.
+
+[ ![Screenshot of the newly deployed Azure Health Data Services workspace.](media/deploy-health-data-services-workspace.png) ](media/deploy-health-data-services-workspace.png#lightbox)
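
If you want to double-check the deployment from the command line as well, a quick sketch with the core Azure CLI (the resource group name is a placeholder) is:

```azurecli-interactive
# List the Azure Health Data Services workspaces in your resource group.
az resource list \
  --resource-group "<your-resource-group>" \
  --resource-type Microsoft.HealthcareApis/workspaces \
  --output table
```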
+ ## Next steps
-Now that the workspace is created, you can:
+Now that the workspace is created, you can do the following:
+
+>[!div class="nextstepaction"]
+>[Deploy FHIR service](./../healthcare-apis/fhir/fhir-portal-quickstart.md)
-* [Deploy FHIR service](./../healthcare-apis/fhir/fhir-portal-quickstart.md)
-* [Deploy DICOM service](./../healthcare-apis/dicom/deploy-dicom-services-in-azure.md)
-* [Deploy a MedTech service and ingest data to your FHIR service](./../healthcare-apis/iot/deploy-iot-connector-in-azure.md)
-* [Convert your data to FHIR](./../healthcare-apis/fhir/convert-data.md)
+>[!div class="nextstepaction"]
+>[Deploy DICOM service](./../healthcare-apis/dicom/deploy-dicom-services-in-azure.md)
-[ ![Deploy different services](media/healthcare-apis-deploy-services.png) ](media/healthcare-apis-deploy-services.png)
+>[!div class="nextstepaction"]
+>[Deploy a MedTech service and ingest data to your FHIR service](./../healthcare-apis/iot/deploy-iot-connector-in-azure.md)
+
+>[!div class="nextstepaction"]
+>[Convert your data to FHIR](./../healthcare-apis/fhir/convert-data.md)
For more information about Azure Health Data Services workspace, see
healthcare-apis Get Started With Iot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/get-started-with-iot.md
Title: Get started with the MedTech service - Azure Health Data Services description: This document describes how to get started with the MedTech service in Azure Health Data Services.-+ Previously updated : 03/21/2022- Last updated : 05/03/2022+
healthcare-apis Register Application Cli Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/register-application-cli-rest.md
Title: Register a client application in Azure AD using CLI and REST API - Azure Health Data Services description: This article describes how to register a client application Azure AD using CLI and REST API. -+ Previously updated : 02/15/2022- Last updated : 05/03/2022+ # Register a client application using CLI and REST API
healthcare-apis Register Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/register-application.md
Title: Register a client application in Azure Active Directory for the Azure Health Data Services description: How to register a client application in the Azure AD and how to add a secret and API permissions to the Azure Health Data Services -+ Previously updated : 03/21/2022- Last updated : 05/03/2022+ # Register a client application in Azure Active Directory
The following steps are required for the DICOM service, but optional for the FHI
[ ![Select permissions scopes.](dicom/media/dicom-select-scopes.png) ](dicom/media/dicom-select-scopes.png#lightbox) >[!NOTE]
->Use grant_type of client_credentials when trying to otain an access token for the FHIR service using tools such as Postman or Rest Client. For more details, visit [Access using Postman](./fhir/use-postman.md) and [Accessing Azure Health Data Services using the REST Client Extension in Visual Studio Code](./fhir/using-rest-client.md).
+>Use grant_type of client_credentials when trying to obtain an access token for the FHIR service using tools such as Postman or REST Client. For more details, visit [Access using Postman](./fhir/use-postman.md) and [Accessing Azure Health Data Services using the REST Client Extension in Visual Studio Code](./fhir/using-rest-client.md).
>>Use grant_type of client_credentials or authorization_code when trying to obtain an access token for the DICOM service. For more details, visit [Using DICOM with cURL](dicom/dicomweb-standard-apis-curl.md). Your application registration is now complete.
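
As a minimal sketch of the client_credentials flow (all values shown are placeholders from your app registration, and the scope assumes the FHIR service URL format), a raw token request looks like this:

```azurecli-interactive
tenantId="<tenant-id>"
clientId="<client-id>"
clientSecret="<client-secret>"
fhirurl="https://<workspace-name>-<fhir-service-name>.fhir.azurehealthcareapis.com"

# Request a token from Azure AD using the client credentials grant.
curl -X POST "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "grant_type=client_credentials&client_id=$clientId&client_secret=$clientSecret&scope=$fhirurl/.default"
```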
hpc-cache Hpc Cache Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/hpc-cache-prerequisites.md
Title: Azure HPC Cache prerequisites description: Prerequisites for using Azure HPC Cache-+ Previously updated : 02/24/2022- Last updated : 05/16/2022+ # Prerequisites for Azure HPC Cache
More information is included in [Troubleshoot NAS configuration and NFS storage
* If your storage has any exports that are subdirectories of another export, make sure the cache has root access to the lowest segment of the path. Read [Root access on directory paths](troubleshoot-nas.md#allow-root-access-on-directory-paths) in the NFS storage target troubleshooting article for details.
-* NFS back-end storage must be a compatible hardware/software platform. Contact the Azure HPC Cache team for details.
+* NFS back-end storage must be a compatible hardware/software platform. The storage must support NFS Version 3 (NFSv3). Contact the Azure HPC Cache team for more details.
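
If you're not sure whether your storage platform exposes NFSv3, one quick check from a client machine (the hostname is a placeholder) is to query its registered RPC services and exports:

```bash
# Look for an "nfs" entry with version 3 in the output.
rpcinfo -p <storage-system-address>

# List the exports the storage system advertises.
showmount -e <storage-system-address>
```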
### NFS-mounted blob (ADLS-NFS) storage requirements
industrial-iot Tutorial Deploy Industrial Iot Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industrial-iot/tutorial-deploy-industrial-iot-platform.md
Administrator, or Cloud Application Administrator rights to provide tenant-wide
The Azure Industrial IoT Platform is a Microsoft suite of modules (OPC Publisher, OPC Twin, Discovery) and services that are deployed on Azure. The cloud microservices (Registry, OPC Twin, OPC Publisher, Edge Telemetry Processor, Registry Onboarding Processor, Edge Event Processor, Registry Synchronization) are implemented as ASP.NET microservices with a REST interface and run on managed Azure Kubernetes Services or stand-alone on Azure App Service. The deployment can deploy the platform, an entire simulation environment, and a Web UI (Industrial IoT Engineering Tool). The deployment script allows you to select which set of components to deploy.-- Minimum dependencies:
+- Minimum dependencies:
- [IoT Hub](https://azure.microsoft.com/services/iot-hub/) to communicate with the edge and ingress raw OPC UA telemetry data - [Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) to persist state that is not persisted in IoT Hub - [Service Bus](https://azure.microsoft.com/services/service-bus/) as integration event bus
The deployment script allows to select which set of components to deploy.
- App Service Plan (shared with microservices), [App Service](https://azure.microsoft.com/services/app-service/) for hosting the Industrial IoT Engineering Tool cloud application - Simulation: - [Virtual machine](https://azure.microsoft.com/services/virtual-machines/), Virtual network, IoT Edge used for a factory simulation to show the capabilities of the platform and to generate sample telemetry-- [Azure Kubernetes Service](https://github.com/Azure/Industrial-IoT/blob/master/docs/deploy/howto-deploy-aks.md) should be used to host the cloud microservices
+- [Azure Kubernetes Service](https://github.com/Azure/Industrial-IoT/blob/main/docs/deploy/howto-deploy-aks.md) should be used to host the cloud microservices
## Deploy Azure IIoT Platform using the deployment script
The deployment script allows to select which set of components to deploy.
Other hosting and deployment methods: -- For production deployments that require staging, rollback, scaling, and resilience, the platform can be deployed into [Azure Kubernetes Service (AKS)](https://github.com/Azure/Industrial-IoT/blob/master/docs/deploy/howto-deploy-aks.md)-- Deploying Azure Industrial IoT Platform microservices into an existing Kubernetes cluster using [Helm](https://github.com/Azure/Industrial-IoT/blob/master/docs/deploy/howto-deploy-helm.md).-- Deploying [Azure Kubernetes Service (AKS) cluster on top of Azure Industrial IoT Platform created by deployment script and adding Azure Industrial IoT components into the cluster](https://github.com/Azure/Industrial-IoT/blob/master/docs/deploy/howto-add-aks-to-ps1.md).
+- For production deployments that require staging, rollback, scaling, and resilience, the platform can be deployed into [Azure Kubernetes Service (AKS)](https://github.com/Azure/Industrial-IoT/blob/main/docs/deploy/howto-deploy-aks.md)
+- Deploying Azure Industrial IoT Platform microservices into an existing Kubernetes cluster using [Helm](https://github.com/Azure/Industrial-IoT/blob/main/docs/deploy/howto-deploy-helm.md).
+- Deploying [Azure Kubernetes Service (AKS) cluster on top of Azure Industrial IoT Platform created by deployment script and adding Azure Industrial IoT components into the cluster](https://github.com/Azure/Industrial-IoT/blob/main/docs/deploy/howto-add-aks-to-ps1.md).
References:-- [Deploying Azure Industrial IoT Platform](https://github.com/Azure/Industrial-IoT/tree/master/docs/deploy)-- [How to deploy all-in-one](https://github.com/Azure/Industrial-IoT/blob/master/docs/deploy/howto-deploy-all-in-one.md)-- [How to deploy platform into AKS](https://github.com/Azure/Industrial-IoT/blob/master/docs/deploy/howto-deploy-aks.md)
+- [Deploying Azure Industrial IoT Platform](https://github.com/Azure/Industrial-IoT/tree/main/docs/deploy)
+- [How to deploy all-in-one](https://github.com/Azure/Industrial-IoT/blob/main/docs/deploy/howto-deploy-all-in-one.md)
+- [How to deploy platform into AKS](https://github.com/Azure/Industrial-IoT/blob/main/docs/deploy/howto-deploy-aks.md)
## Next steps Now that you have deployed the IIoT Platform, you can learn how to customize configuration of the components: > [!div class="nextstepaction"]
-> [Customize the configuration of the components](tutorial-configure-industrial-iot-components.md)
+> [Customize the configuration of the components](tutorial-configure-industrial-iot-components.md)
iot-central Concepts Device Implementation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-device-implementation.md
An IoT Central device template includes a _model_ that specifies the behaviors a
Each model has a unique _device twin model identifier_ (DTMI), such as `dtmi:com:example:Thermostat;1`. When a device connects to IoT Central, it sends the DTMI of the model it implements. IoT Central can then assign the correct device template to the device.
-[IoT Plug and Play](../../iot-develop/overview-iot-plug-and-play.md) defines a set of [conventions](../../iot-develop/concepts-convention.md) that a device should follow when it implements a DTDL model.
+[IoT Plug and Play](../../iot-develop/overview-iot-plug-and-play.md) defines a set of [conventions](../../iot-develop/concepts-convention.md) that a device should follow when it implements a Digital Twins Definition Language (DTDL) model.
The [Azure IoT device SDKs](#device-sdks) include support for the IoT Plug and Play conventions.
A DTDL model can be a _no-component_ or a _multi-component_ model:
- Multi-component model. A more complex model that includes two or more components. These components include a single root component, and one or more nested components. For an example, see the [Temperature Controller](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/samples/TemperatureController.json) model. > [!TIP]
-> You can export the model from an IoT Central device template as a [Digital Twins Definition Language (DTDL) v2](https://github.com/Azure/opendigitaltwins-dtdl) JSON file.
+> You can [export a device model](howto-set-up-template.md#interfaces-and-components) from an IoT Central device template as a DTDL v2 file.
-To learn more, see [IoT Plug and Play modeling guide](../../iot-develop/concepts-modeling-guide.md)
+To learn more about device models, see the [IoT Plug and Play modeling guide](../../iot-develop/concepts-modeling-guide.md)
### Conventions
iot-central Howto Export To Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-export-to-blob-storage.md
This article shows how to create a managed identity in the Azure portal. You can
### Create an Azure Blob Storage destination
-If you don't have an existing Azure storage account to export to, follow these steps:
+If you don't have an existing Azure storage account to export to, run the following script in the Azure Cloud Shell bash environment. The script creates a resource group, Azure Storage account, and blob container. It then prints the connection string to use when you configure the data export in IoT Central:
-1. Create a [new storage account in the Azure portal](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You can learn more about creating new [Azure Blob Storage accounts](../../storage/blobs/storage-quickstart-blobs-portal.md) or [Azure Data Lake Storage v2 storage accounts](../../storage/common/storage-account-create.md). Data export can only write data to storage accounts that support block blobs. The following list shows the known compatible storage account types:
+```azurecli-interactive
+# Replace the storage account name with your own unique value
+SA=yourstorageaccount$RANDOM
+CN=exportdata
+RG=centralexportresources
+LOCATION=eastus
- |Performance Tier|Account Type|
- |-|-|
- |Standard|General Purpose V2|
- |Standard|General Purpose V1|
- |Standard|Blob storage|
- |Premium|Block Blob storage|
+az group create -n $RG --location $LOCATION
+az storage account create --name $SA --resource-group $RG --location $LOCATION --sku Standard_LRS
+az storage container create --account-name $SA --resource-group $RG --name $CN
-1. To create a container in your storage account, go to your storage account. Under **Blob Service**, select **Browse Blobs**. Select **+ Container** at the top to create a new container.
+CS=$(az storage account show-connection-string --resource-group $RG --name $SA --query "connectionString" --output tsv)
-1. Generate a connection string for your storage account by going to **Settings > Access keys**. Copy one of the two connection strings.
+echo "Storage connection string: $CS"
+```
+
+You can learn more about creating new [Azure Blob Storage accounts](../../storage/blobs/storage-quickstart-blobs-portal.md) or [Azure Data Lake Storage v2 storage accounts](../../storage/common/storage-account-create.md). Data export can only write data to storage accounts that support block blobs. The following table shows the known compatible storage account types:
+
+|Performance Tier|Account Type|
+|-|-|
+|Standard|General Purpose V2|
+|Standard|General Purpose V1|
+|Standard|Blob storage|
+|Premium|Block Blob storage|
To create the Blob Storage destination in IoT Central on the **Data export** page:
To create the Blob Storage destination in IoT Central on the **Data export** pag
### Create an Azure Blob Storage destination
-If you don't have an existing Azure storage account to export to, follow these steps:
+If you don't have an existing Azure storage account to export to, run the following script in the Azure Cloud Shell bash environment. The script creates a resource group, Azure Storage account, and blob container. The script then enables the managed identity for your IoT Central application and assigns the role it needs to access your storage account:
-1. Create a [new storage account in the Azure portal](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You can learn more about creating new [Azure Blob Storage accounts](../../storage/blobs/storage-quickstart-blobs-portal.md) or [Azure Data Lake Storage v2 storage accounts](../../storage/common/storage-account-create.md). Data export can only write data to storage accounts that support block blobs. The following list shows the known compatible storage account types:
+```azurecli-interactive
+# Replace the storage account name with your own unique value.
+SA=yourstorageaccount$RANDOM
- |Performance Tier|Account Type|
- |-|-|
- |Standard|General Purpose V2|
- |Standard|General Purpose V1|
- |Standard|Blob storage|
- |Premium|Block Blob storage|
+# Replace the IoT Central app name with the name of your
+# IoT Central application.
+CA=your-iot-central-app
-1. To create a container in your storage account, go to your storage account. Under **Blob Service**, select **Browse Blobs**. Select **+ Container** at the top to create a new container.
+CN=exportdata
+RG=centralexportresources
+LOCATION=eastus
+az group create -n $RG --location $LOCATION
+SAID=$(az storage account create --name $SA --resource-group $RG --location $LOCATION --sku Standard_LRS --query "id" --output tsv)
+az storage container create --account-name $SA --resource-group $RG --name $CN
-To configure the permissions:
+# This assumes your IoT Central application is in the
+# default `IOTC` resource group.
+az iot central app identity assign --name $CA --resource-group IOTC --system-assigned
+PI=$(az iot central app identity show --name $CA --resource-group IOTC --query "principalId" --output tsv)
-1. On the **Add role assignment** page, select the subscription you want to use and **Storage** as the scope. Then select your storage account as the resource.
+az role assignment create --assignee $PI --role "Storage Blob Data Contributor" --scope $SAID
-1. Select **Storage Blob Data Contributor** as the **Role**.
+az role assignment list --assignee $PI --all -o table
+
+echo "Endpoint URI: https://$SA.blob.core.windows.net/"
+echo "Container: $CN"
+```
-1. Select **Save**. The managed identity for your IoT Central application is now configured.
+You can learn more about creating new [Azure Blob Storage accounts](../../storage/blobs/storage-quickstart-blobs-portal.md) or [Azure Data Lake Storage v2 storage accounts](../../storage/common/storage-account-create.md). Data export can only write data to storage accounts that support block blobs. The following table shows the known compatible storage account types:
- > [!TIP]
- > This role assignment isn't visible in the list on the **Azure role assignments** page.
+|Performance Tier|Account Type|
+|-|-|
+|Standard|General Purpose V2|
+|Standard|General Purpose V1|
+|Standard|Blob storage|
+|Premium|Block Blob storage|
To further secure your blob container and only allow access from trusted services with managed identities, see [Export data to a secure destination on an Azure Virtual Network](howto-connect-secure-vnet.md).
iot-central Howto Export To Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-export-to-event-hubs.md
Event Hubs destinations let you configure the connection with a *connection stri
This article shows how to create a managed identity in the Azure portal. You can also use the Azure CLI to create a managed identity. To learn more, see [Assign a managed identity access to a resource using Azure CLI](../../active-directory/managed-identities-azure-resources/howto-assign-access-cli.md). - # [Connection string](#tab/connection-string) ### Create an Event Hubs destination
-If you don't have an existing Event Hubs namespace to export to, follow these steps:
+If you don't have an existing Event Hubs namespace to export to, run the following script in the Azure Cloud Shell bash environment. The script creates a resource group, Event Hubs namespace, and event hub. It then prints the connection string to use when you configure the data export in IoT Central:
-1. Create a [new Event Hubs namespace in the Azure portal](https://portal.azure.com/#create/Microsoft.EventHub). You can learn more in [Azure Event Hubs docs](../../event-hubs/event-hubs-create.md).
+```azurecli-interactive
+# Replace the Event Hubs namespace name with your own unique value
+EHNS=your-event-hubs-namespace-$RANDOM
+EH=exportdata
+RG=centralexportresources
+LOCATION=eastus
-1. Create an event hub in your Event Hubs namespace. Go to your namespace, and select **+ Event Hub** at the top to create an event hub instance.
+az group create -n $RG --location $LOCATION
+az eventhubs namespace create --name $EHNS --resource-group $RG -l $LOCATION
+az eventhubs eventhub create --name $EH --resource-group $RG --namespace-name $EHNS
+az eventhubs eventhub authorization-rule create --eventhub-name $EH --resource-group $RG --namespace-name $EHNS --name SendRule --rights Send
-1. Generate a key to use when you to set up your data export in IoT Central:
+CS=$(az eventhubs eventhub authorization-rule keys list --eventhub-name $EH --resource-group $RG --namespace-name $EHNS --name SendRule --query "primaryConnectionString" -o tsv)
- - Select the event hub instance you created.
- - Select **Settings > Shared access policies**.
- - Create a new key or choose an existing key that has **Send** permissions.
- - Copy either the primary or secondary connection string. You use this connection string to set up a new destination in IoT Central.
- - Alternatively, you can generate a connection string for the entire Event Hubs namespace:
- 1. Go to your Event Hubs namespace in the Azure portal.
- 2. Under **Settings**, select **Shared Access Policies**.
- 3. Create a new key or choose an existing key that has **Send** permissions.
- 4. Copy either the primary or secondary connection string.
+echo "Event hub connection string: $CS"
+```
To create the Event Hubs destination in IoT Central on the **Data export** page:
To create the Event Hubs destination in IoT Central on the **Data export** page:
### Create an Event Hubs destination
-If you don't have an existing Event Hubs namespace to export to, follow these steps:
+If you don't have an existing Event Hubs namespace to export to, run the following script in the Azure Cloud Shell bash environment. The script creates a resource group, Event Hubs namespace, and event hub. The script then enables the managed identity for your IoT Central application and assigns the role it needs to access your event hub:
-1. Create a [new Event Hubs namespace in the Azure portal](https://portal.azure.com/#create/Microsoft.EventHub). You can learn more in [Azure Event Hubs docs](../../event-hubs/event-hubs-create.md).
+```azurecli-interactive
+# Replace the Event Hubs namespace name with your own unique value
+EHNS=your-event-hubs-namespace-$RANDOM
-1. Create an event hub in your Event Hubs namespace. Go to your namespace, and select **+ Event Hub** at the top to create an event hub instance.
+# Replace the IoT Central app name with the name of your
+# IoT Central application.
+CA=your-iot-central-app
+EH=exportdata
+RG=centralexportresources
+LOCATION=eastus
-To configure the permissions:
+RGID=$(az group create -n $RG --location $LOCATION --query "id" --output tsv)
+az eventhubs namespace create --name $EHNS --resource-group $RG -l $LOCATION
+az eventhubs eventhub create --name $EH --resource-group $RG --namespace-name $EHNS
-1. On the **Add role assignment** page, select the scope and subscription you want to use.
+# This assumes your IoT Central application is in the
+# default `IOTC` resource group.
+az iot central app identity assign --name $CA --resource-group IOTC --system-assigned
+PI=$(az iot central app identity show --name $CA --resource-group IOTC --query "principalId" --output tsv)
- > [!TIP]
- > If your IoT Central application and event hub are in the same resource group, you can choose **Resource group** as the scope and then select the resource group.
+az role assignment create --assignee $PI --role "Azure Event Hubs Data Sender" --scope $RGID
-1. Select **Azure Event Hubs Data Sender** as the **Role**.
+az role assignment list --assignee $PI --all -o table
-1. Select **Save**. The managed identity for your IoT Central application is now configured.
+echo "Host name: $EHNS.servicebus.windows.net"
+echo "Event Hub: $CN"
+```
To further secure your event hub and only allow access from trusted services with managed identities, see [Export data to a secure destination on an Azure Virtual Network](howto-connect-secure-vnet.md).
iot-central Howto Export To Service Bus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-export-to-service-bus.md
This article shows how to create a managed identity in the Azure portal. You can
### Create a Service Bus queue or topic destination
-If you don't have an existing Service Bus namespace to export to, follow these steps:
+If you don't have an existing Service Bus namespace to export to, run the following script in the Azure Cloud Shell bash environment. The script creates a resource group, Service Bus namespace, and queue. It then prints the connection string to use when you configure the data export in IoT Central:
-1. Create a [new Service Bus namespace in the Azure portal](https://portal.azure.com/#create/Microsoft.ServiceBus.1.0.5). You can learn more in [Azure Service Bus docs](../../service-bus-messaging/service-bus-create-namespace-portal.md).
+```azurecli-interactive
+# Replace the Service Bus namespace name with your own unique value
+SBNS=your-service-bus-namespace-$RANDOM
+SBQ=exportdata
+RG=centralexportresources
+LOCATION=eastus
-1. To create a queue or topic to export to, go to your Service Bus namespace, and select **+ Queue** or **+ Topic**.
+az group create -n $RG --location $LOCATION
+az servicebus namespace create --name $SBNS --resource-group $RG -l $LOCATION
-1. Generate a key to use when you to set up your data export in IoT Central:
+# This example uses a Service Bus queue. You can use a Service Bus topic.
+az servicebus queue create --name $SBQ --resource-group $RG --namespace-name $SBNS
+az servicebus queue authorization-rule create --queue-name $SBQ --resource-group $RG --namespace-name $SBNS --name SendRule --rights Send
- - Select the queue or topic you created.
- - Select **Settings/Shared access policies**.
- - Create a new key or choose an existing key that has **Send** permissions.
- - Copy either the primary or secondary connection string. You use this connection string to set up a new destination in IoT Central.
- - Alternatively, you can generate a connection string for the entire Service Bus namespace:
- 1. Go to your Service Bus namespace in the Azure portal.
- 2. Under **Settings**, select **Shared Access Policies**.
- 3. Create a new key or choose an existing key that has **Send** permissions.
- 4. Copy either the primary or secondary connection string.
+CS=$(az servicebus queue authorization-rule keys list --queue-name $SBQ --resource-group $RG --namespace-name $SBNS --name SendRule --query "primaryConnectionString" -o tsv)
+
+echo "Service bus connection string: $CS"
+```
To create the Service Bus destination in IoT Central on the **Data export** page:
To create the Service Bus destination in IoT Central on the **Data export** page
### Create a Service Bus queue or topic destination
-If you don't have an existing Service Bus namespace to export to, follow these steps:
+If you don't have an existing Service Bus namespace to export to, run the following script in the Azure Cloud Shell bash environment. The script creates a resource group, Service Bus namespace, and queue. The script then enables the managed identity for your IoT Central application and assigns the role it needs to access your Service Bus queue:
-1. Create a [new Service Bus namespace in the Azure portal](https://portal.azure.com/#create/Microsoft.ServiceBus.1.0.5). You can learn more in [Azure Service Bus docs](../../service-bus-messaging/service-bus-create-namespace-portal.md).
+```azurecli-interactive
+# Replace the Service Bus namespace name with your own unique value
+SBNS=your-service-bus-namespace-$RANDOM
-1. To create a queue or topic to export to, go to your Service Bus namespace, and select **+ Queue** or **+ Topic**.
+# Replace the IoT Central app name with the name of your
+# IoT Central application.
+CA=your-iot-central-app
+SBQ=exportdata
+RG=centralexportresources
+LOCATION=eastus
-To configure the permissions:
+RGID=$(az group create -n $RG --location $LOCATION --query "id" --output tsv)
+az servicebus namespace create --name $SBNS --resource-group $RG -l $LOCATION
+az servicebus queue create --name $SBQ --resource-group $RG --namespace-name $SBNS
-1. On the **Add role assignment** page, select the scope and subscription you want to use.
+# This assumes your IoT Central application is in the
+# default `IOTC` resource group.
+az iot central app identity assign --name $CA --resource-group IOTC --system-assigned
+PI=$(az iot central app identity show --name $CA --resource-group IOTC --query "principalId" --output tsv)
- > [!TIP]
- > If your IoT Central application and queue or topic are in the same resource group, you can choose **Resource group** as the scope and then select the resource group.
+az role assignment create --assignee $PI --role "Azure Service Bus Data Sender" --scope $RGID
-1. Select **Azure Service Bus Data Sender** as the **Role**.
+az role assignment list --assignee $PI --all -o table
-1. Select **Save**. The managed identity for your IoT Central application is now configured.
+echo "Host name: $SBNS.servicebus.windows.net"
+echo "Queue: $CN"
+```
To further secure your queue or topic and only allow access from trusted services with managed identities, see [Export data to a secure destination on an Azure Virtual Network](howto-connect-secure-vnet.md).
iot-central Howto Manage Device Templates With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-device-templates-with-rest-api.md
The request body has some required fields:
* `contents`: lists the properties, telemetry, and commands that make up your device. The capabilities may be defined in multiple interfaces. * `capabilityModel` : Every device template has a capability model. A relationship is established between each module capability model and a device model. A capability model implements one or more module interfaces.
+> [!TIP]
+> The device template JSON is not a standard DTDL document. It includes IoT Central-specific data such as cloud property definitions, customizations, and display units. You can use the device template JSON format to import and export device templates in IoT Central by using the REST API and the CLI.
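
For example, a minimal sketch with the Azure CLI (requires the `azure-iot` extension; the app ID and template ID are placeholders, and the exact parameter names are worth confirming with `az iot central device-template --help`):

```azurecli-interactive
appId="<iot-central-app-id>"
templateId="<device-template-id>"

# Export the device template JSON for a template.
az iot central device-template show --app-id $appId --device-template-id $templateId > template.json

# Import the exported JSON into another IoT Central application.
az iot central device-template create --app-id "<other-app-id>" \
  --device-template-id $templateId --content template.json
```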
+ There are some optional fields you can use to add more details to the capability model, such as display name and description. Each entry in the list of interfaces in the implements section has a:
iot-central Howto Manage Iot Central From Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-iot-central-from-cli.md
Remove-AzIotCentralApp -ResourceGroupName "MyIoTCentralResourceGroup" `
An IoT Central application can use a system assigned [managed identity](../../active-directory/managed-identities-azure-resources/overview.md) to secure the connection to a [data export destination](howto-export-to-blob-storage.md#connection-options).
-To enable the managed identity, use either the [Azure portal - Configure a managed identity](howto-manage-iot-central-from-portal.md#configure-a-managed-identity) or the [REST API](howto-manage-iot-central-with-rest-api.md):
+To enable the managed identity, use either the [Azure portal - Configure a managed identity](howto-manage-iot-central-from-portal.md#configure-a-managed-identity) or the CLI. You can enable the managed identity when you create an IoT Central application:
+```azurecli-interactive
+# Create an IoT Central application with a managed identity
+az iot central app create \
+ --resource-group "MyIoTCentralResourceGroup" \
+ --name "myiotcentralapp" --subdomain "mysubdomain" \
+ --sku ST1 --template "iotc-pnp-preview" \
+ --display-name "My Custom Display Name" \
+ --mi-system-assigned
+```
+
+Alternatively, you can enable a managed identity on an existing IoT Central application:
+
+```azurecli-interactive
+# Enable a system-assigned managed identity
+az iot central app identity assign --name "myiotcentralapp" \
+ --resource-group "MyIoTCentralResourceGroup" \
+ --system-assigned
+```
After you enable the managed identity, you can use the CLI to configure the role assignments. Use the [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create) command to create a role assignment. For example, the following commands first retrieve the principal ID of the managed identity. The second command assigns the `Azure Event Hubs Data Sender` role to the principal ID in the scope of the `MyIoTCentralResourceGroup` resource group: ```azurecli-interactive
-spID=$(az resource list -n myiotcentralapp --query [*].identity.principalId --out tsv)
+scope=$(az group show -n "MyIoTCentralResourceGroup" --query "id" --output tsv)
+spID=$(az iot central app identity show \
+ --name "myiotcentralapp" \
+ --resource-group "MyIoTCentralResourceGroup" \
+ --query "principalId" --output tsv)
az role assignment create --assignee $spID --role "Azure Event Hubs Data Sender" \
- --scope /subscriptions/<your subscription id>/resourceGroups/MyIoTCentralResourceGroup
+ --scope $scope
``` To learn more about the role assignments, see:
iot-central Overview Iot Central Solution Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-solution-builder.md
When you use IoT Central to create an IoT solution, tasks include:
- Configure data transformations to make it easier to extract business value from your data. - Configure dashboards and views in the IoT Central web UI. - Use the built-in rules and analytics tools to derive business insights from the connected devices.-- Use the data export, rules capabilities, and APIs to integrate IoT Central with other services and applications.
+- Use the data export feature, rules capabilities, and APIs to integrate IoT Central with other services and applications.
+
+## Data export
+
+Many integration scenarios build on the IoT Central data export feature. An IoT Central application can continuously export filtered and enriched IoT data. Data export pushes changes in near real time to other parts of your cloud solution for warm-path insights, analytics, and storage.
+
+For example, you can:
+
+- Continuously export telemetry, property changes, device connectivity, device lifecycle, and device template lifecycle data in JSON format in near real time.
+- Filter the data streams to export data that matches custom conditions.
+- Enrich the data streams with custom values and property values from the device.
+- [Transform the data](howto-transform-data-internally.md) streams to modify their shape and content.
+
+Currently, IoT Central can export data to:
+
+- [Azure Data Explorer](howto-export-to-azure-data-explorer.md)
+- [Blob Storage](howto-export-to-blob-storage.md)
+- [Event Hubs](howto-export-to-event-hubs.md)
+- [Service Bus](howto-export-to-service-bus.md)
+- [Webhook](howto-export-to-webhook.md)
## Transform data at ingress
Scenarios that process IoT data outside of IoT Central to extract business value
- Compute, enrich, and transform:
- IoT Central lets you capture, transform, manage, and visualize IoT data. Sometimes, it's useful to enrich or transform you IoT data using external data sources. You can then feed the enriched data back into IoT Central.
+ IoT Central lets you capture, transform, manage, and visualize IoT data. Sometimes, it's useful to enrich or transform your IoT data using external data sources. You can then feed the enriched data back into IoT Central.
For example, use the IoT Central continuous data export feature to trigger an Azure function. The function enriches captured device telemetry and pushes the enriched data back into IoT Central while preserving timestamps.
iot-develop Concepts Model Repository https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-model-repository.md
There are more samples available within the source code in the Azure SDK GitHub
1. From your fork, create a pull request that targets the `main` branch. See [Creating an issue or pull request](https://docs.github.com/free-pro-team@latest/desktop/contributing-and-collaborating-using-github-desktop/creating-an-issue-or-pull-request) docs. 1. Review the [pull request requirements](https://github.com/Azure/iot-plugandplay-models/blob/main/pr-reqs.md).
-The pull request triggers a set of GitHub actions that validate the submitted interfaces, and makes sure your pull request satisfies all the requirements.
+The pull request triggers a set of GitHub Actions that validate the submitted interfaces, and makes sure your pull request satisfies all the requirements.
Microsoft will respond to a pull request with all checks in three business days.
iot-dps How To Use Custom Allocation Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-use-custom-allocation-policies.md
In this section, you create an Azure function that implements your custom alloca
} else {
- string[] hubs = data?.linkedHubs.ToObject<string[]>();
+ string[] hubs = data?.linkedHubs?.ToObject<string[]>();
// Must have hubs selected on the enrollment if (hubs == null)
To delete the resource group by name:
## Next steps * To learn more about reprovisioning, see [IoT Hub Device reprovisioning concepts](concepts-device-reprovision.md)
-* To learn more Deprovisioning, see [How to deprovision devices that were previously autoprovisioned](how-to-unprovision-devices.md)
+* To learn more about deprovisioning, see [How to deprovision devices that were previously autoprovisioned](how-to-unprovision-devices.md)
iot-dps Tutorial Net Provision Device To Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/tutorial-net-provision-device-to-hub.md
- Title: Tutorial - Provision device using Azure IoT Hub Device Provisioning Service (.NET)
-description: This tutorial shows how you can provision your device to a single IoT hub using the Azure IoT Hub Device Provisioning Service (DPS) using .NET.
-- Previously updated : 11/12/2019-------
-# Tutorial: Enroll the device to an IoT hub using the Azure IoT Hub Provisioning Service Client (.NET)
-
-In the previous tutorial, you learned how to set up a device to connect to your Device Provisioning Service. In this tutorial, you learn how to use this service to provision your device to a single IoT hub, using both **_Individual Enrollment_** and **_Enrollment Groups_**. This tutorial shows you how to:
-
-> [!div class="checklist"]
-> * Enroll the device
-> * Start the device
-> * Verify the device is registered
-
-## Prerequisites
-
-* Visual Studio
-
-> [!NOTE]
-> Visual Studio is not required. The installation of [.NET](https://dotnet.microsoft.com) is sufficient and developers can use their preferred editor on Windows or Linux.
-
-This tutorial simulates the period during or right after the hardware manufacturing process, when device information is added to the provisioning service. This code is usually run on a PC or a factory device that can run .NET code and should not be added to the devices themselves.
--
-## Enroll the device
-
-This step involves adding the device's unique security artifacts to the Device Provisioning Service. These security artifacts are as follows:
--- For TPM-based devices:
- - The *Endorsement Key* that is unique to each TPM chip or simulation. Read the [Understand TPM Endorsement Key](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc770443(v=ws.11)) for more information.
- - The *Registration ID* that is used to uniquely identify a device in the namespace/scope. This may or may not be the same as the device ID. The registration ID is mandatory for every device. The registration ID is a case-insensitive string (up to 128 characters long) of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`). For TPM-based devices, the registration ID may be derived from the TPM itself, for example, an SHA-256 hash of the TPM Endorsement Key. A minimal hashing sketch follows this list.
--- For X.509 based devices:
- - The [X.509 certificate issued to the device](/windows/win32/seccertenroll/about-x-509-public-key-certificates), in the form of either a *.pem* or a *.cer* file. For individual enrollment, you need to use the *leaf certificate* for your X.509 system, while for enrollment groups, you need to use the *root certificate* or an equivalent *signer certificate*.
- - The *Registration ID* that is used to uniquely identify a device in the namespace/scope. This may or may not be the same as the device ID. The registration ID is mandatory for every device. The registration ID is a case-insensitive string (up to 128 characters long) of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`). For X.509 based devices, the registration ID is derived from the certificate's common name (CN), so the common name must adhere to the registration ID string format. For further information on these requirements see [DPS terminology](./concepts-service.md).
-
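The TPM bullet above notes that the registration ID may be derived from the TPM itself, for example as a SHA-256 hash of the endorsement key. A minimal C# sketch of that derivation, assuming a base64-encoded endorsement key, could look like this:

```csharp
using System;
using System.Security.Cryptography;

public static class RegistrationIdHelper
{
    // Derive a registration ID by hashing the endorsement key and formatting the hash as
    // lowercase hex, which satisfies the registration ID character rules described above.
    public static string FromEndorsementKey(string endorsementKeyBase64)
    {
        byte[] endorsementKey = Convert.FromBase64String(endorsementKeyBase64);
        using (var sha256 = SHA256.Create())
        {
            byte[] hash = sha256.ComputeHash(endorsementKey);
            return BitConverter.ToString(hash).Replace("-", string.Empty).ToLowerInvariant();
        }
    }
}
```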
-There are two ways to enroll the device to the Device Provisioning Service:
--- **Individual Enrollments**
- This represents an entry for a single device that may register with the Device Provisioning Service. Individual enrollments may use either X.509 certificates or SAS tokens (in a real or virtual TPM) as attestation mechanisms. We recommend using individual enrollments for devices, which require unique initial configurations, or for devices that can only use SAS tokens via TPM as the attestation mechanism. Individual enrollments may have the desired IoT hub device ID specified.
--- **Enrollment Groups**
- This represents a group of devices that share a specific attestation mechanism. We recommend using an enrollment group for a large number of devices, which share a desired initial configuration, or for devices all going to the same tenant. Enrollment groups are X.509 only and all share a signing certificate in their X.509 certificate chain.
-
-### Enroll the device using Individual Enrollments
-
-1. In Visual Studio, create a Visual C# Console Application project by using the **Console App** project template. Name the project **DeviceProvisioning**.
-
-1. In Solution Explorer, right-click the **DeviceProvisioning** project, and then click **Manage NuGet Packages...**.
-
-1. In the **NuGet Package Manager** window, select **Browse** and search for **microsoft.azure.devices.provisioning.service**. Select the entry and click **Install** to install the **Microsoft.Azure.Devices.Provisioning.Service** package, and accept the terms of use. This procedure downloads, installs, and adds a reference to the [Azure IoT Device Provisioning Service SDK](https://www.nuget.org/packages/Microsoft.Azure.Devices.Provisioning.Service/) NuGet package and its dependencies.
-
-1. Add the following `using` statements at the top of the **Program.cs** file:
-
- ```csharp
- using Microsoft.Azure.Devices.Provisioning.Service;
- ```
-
-1. Add the following fields to the **Program** class. Replace the placeholder value with the Device Provisioning Service connection string noted in the previous section.
-
- ```csharp
- static readonly string ServiceConnectionString = "{Device Provisioning Service connection string}";
-
- private const string SampleRegistrationId = "sample-individual-csharp";
- private const string SampleTpmEndorsementKey =
- "AToAAQALAAMAsgAgg3GXZ0SEs/gakMyNRqXXJP1S124GUgtk8qHaGzMUaaoABgCAAEMAEAgAAAAAAAEAxsj2gUS" +
- "cTk1UjuioeTlfGYZrrimExB+bScH75adUMRIi2UOMxG1kw4y+9RW/IVoMl4e620VxZad0ARX2gUqVjYO7KPVt3d" +
- "yKhZS3dkcvfBisBhP1XH9B33VqHG9SHnbnQXdBUaCgKAfxome8UmBKfe+naTsE5fkvjb/do3/dD6l4sGBwFCnKR" +
- "dln4XpM03zLpoHFao8zOwt8l/uP3qUIxmCYv9A7m69Ms+5/pCkTu/rK4mRDsfhZ0QLfbzVI6zQFOKF/rwsfBtFe" +
- "WlWtcuJMKlXdD8TXWElTzgh7JS4qhFzreL0c1mI0GCj+Aws0usZh7dLIVPnlgZcBhgy1SSDQMQ==";
- private const string OptionalDeviceId = "myCSharpDevice";
- private const ProvisioningStatus OptionalProvisioningStatus = ProvisioningStatus.Enabled;
- ```
-
-1. Add the following to implement the enrollment for the device:
-
- ```csharp
- static async Task SetRegistrationDataAsync()
- {
- Console.WriteLine("Starting SetRegistrationData");
-
- Attestation attestation = new TpmAttestation(SampleTpmEndorsementKey);
-
- IndividualEnrollment individualEnrollment = new IndividualEnrollment(SampleRegistrationId, attestation);
-
- individualEnrollment.DeviceId = OptionalDeviceId;
- individualEnrollment.ProvisioningStatus = OptionalProvisioningStatus;
-
- Console.WriteLine("\nAdding new individualEnrollment...");
- var serviceClient = ProvisioningServiceClient.CreateFromConnectionString(ServiceConnectionString);
-
- IndividualEnrollment individualEnrollmentResult =
- await serviceClient.CreateOrUpdateIndividualEnrollmentAsync(individualEnrollment).ConfigureAwait(false);
-
- Console.WriteLine("\nIndividualEnrollment created with success.");
- Console.WriteLine(individualEnrollmentResult);
- }
- ```
-
-1. Finally, add the following code to the **Main** method to open the connection to your IoT hub and begin the enrollment:
-
- ```csharp
- try
- {
- Console.WriteLine("IoT Device Provisioning example");
-
- SetRegistrationDataAsync().GetAwaiter().GetResult();
-
- Console.WriteLine("Done, hit enter to exit.");
- }
- catch (Exception ex)
- {
- Console.WriteLine();
- Console.WriteLine("Error in sample: {0}", ex.Message);
- }
- Console.ReadLine();
- ```
-
-1. In the Visual Studio Solution Explorer, right-click your solution, and then click **Set StartUp Projects...**. Select **Single startup project**, and then select the **DeviceProvisioning** project in the dropdown menu.
-
-1. Run the .NET device app **DeviceProvisioning**. It should set up provisioning for the device:
-
- ![Individual registration run](./media/tutorial-net-provision-device-to-hub/individual.png)
-
-When the device is successfully enrolled, you should see it displayed in the portal as follows:
-
- ![Successful enrollment in the portal](./media/tutorial-net-provision-device-to-hub/individual-portal.png)
-
-### Enroll the device using Enrollment Groups
-
-> [!NOTE]
-> The enrollment group sample requires an X.509 certificate.
-
-1. In the Visual Studio Solution Explorer, open the **DeviceProvisioning** project created above.
-
-1. Add the following `using` statements at the top of the **Program.cs** file:
-
- ```csharp
- using System.Security.Cryptography.X509Certificates;
- ```
-
-1. Add the following fields to the **Program** class. Replace the placeholder value with the X509 certificate location.
-
- ```csharp
- private const string X509RootCertPathVar = "{X509 Certificate Location}";
- private const string SampleEnrollmentGroupId = "sample-group-csharp";
- ```
-
-1. Add the following to **Program.cs** to implement the enrollment for the group:
-
- ```csharp
- public static async Task SetGroupRegistrationDataAsync()
- {
- Console.WriteLine("Starting SetGroupRegistrationData");
-
- using (ProvisioningServiceClient provisioningServiceClient =
- ProvisioningServiceClient.CreateFromConnectionString(ServiceConnectionString))
- {
- Console.WriteLine("\nCreating a new enrollmentGroup...");
-
- var certificate = new X509Certificate2(X509RootCertPathVar);
-
- Attestation attestation = X509Attestation.CreateFromRootCertificates(certificate);
-
- EnrollmentGroup enrollmentGroup = new EnrollmentGroup(SampleEnrollmentGroupId, attestation);
-
- Console.WriteLine(enrollmentGroup);
- Console.WriteLine("\nAdding new enrollmentGroup...");
-
- EnrollmentGroup enrollmentGroupResult =
- await provisioningServiceClient.CreateOrUpdateEnrollmentGroupAsync(enrollmentGroup).ConfigureAwait(false);
-
- Console.WriteLine("\nEnrollmentGroup created with success.");
- Console.WriteLine(enrollmentGroupResult);
- }
- }
- ```
-
-1. Finally, replace the code in the **Main** method with the following to open the connection to your IoT hub and begin the group enrollment:
-
- ```csharp
- try
- {
- Console.WriteLine("IoT Device Group Provisioning example");
-
- SetGroupRegistrationDataAsync().GetAwaiter().GetResult();
-
- Console.WriteLine("Done, hit enter to exit.");
- Console.ReadLine();
- }
- catch (Exception ex)
- {
- Console.WriteLine();
- Console.WriteLine("Error in sample: {0}", ex.Message);
- }
- ```
-
-1. Run the .NET device app **DeviceProvisioning**. It should set up group provisioning for the device:
-
- ![Group registration run](./media/tutorial-net-provision-device-to-hub/group.png)
-
 When the device group is successfully enrolled, you should see it displayed in the portal as follows:
-
- ![Successful group enrollment in the portal](./media/tutorial-net-provision-device-to-hub/group-portal.png)
--
-## Start the device
-
-At this point, the following setup is ready for device registration:
-
-1. Your device or group of devices are enrolled to your Device Provisioning Service, and
-2. Your device is ready with the security configured and accessible through the application using the Device Provisioning Service client SDK.
-
-Start the device to allow your client application to start the registration with your Device Provisioning Service.
--
-## Verify the device is registered
-
-Once your device boots, the following actions should take place. See the [Provisioning Device Client Sample](https://github.com/Azure-Samples/azure-iot-samples-csharp/tree/main/provisioning/Samples/device) for more details.
-
-1. The device sends a registration request to your Device Provisioning Service.
-2. For TPM devices, the Device Provisioning Service sends back a registration challenge to which your device responds.
-3. On successful registration, the Device Provisioning Service sends the IoT hub URI, device ID, and the encrypted key back to the device.
-4. The IoT Hub client application on the device then connects to your hub.
-5. On successful connection to the hub, you should see the device appear in the IoT hub's **Device Explorer**.
-
- ![Successful connection to hub in the portal](./media/tutorial-net-provision-device-to-hub/hub-connect-success.png)
-
-## Next steps
-In this tutorial, you learned how to:
-
-> [!div class="checklist"]
-> * Enroll the device
-> * Start the device
-> * Verify the device is registered
-
-Advance to the next tutorial to learn how to provision multiple devices across load-balanced hubs.
-
-> [!div class="nextstepaction"]
-> [Provision devices across load-balanced IoT hubs](./tutorial-provision-multiple-hubs.md)
iot-dps Tutorial Provision Device To Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/tutorial-provision-device-to-hub.md
- Title: Tutorial - Provision device using Azure IoT Hub Device Provisioning Service
-description: This tutorial shows how you can provision your device to a single IoT hub using the Azure IoT Hub Device Provisioning Service (DPS)
-- Previously updated : 11/12/2019
-# Tutorial: Provision the device to an IoT hub using the Azure IoT Hub Device Provisioning Service
-
-In the previous tutorial, you learned how to set up a device to connect to your Device Provisioning Service. In this tutorial, you learn how to use this service to provision your device to a single IoT hub, using auto-provisioning and **_enrollment lists_**. This tutorial shows you how to:
-
-> [!div class="checklist"]
-> * Enroll the device
-> * Start the device
-> * Verify the device is registered
-
-## Prerequisites
-
-Before you proceed, make sure to configure your device as discussed in the tutorial [Setup a device to provision using Azure IoT Hub Device Provisioning Service](./tutorial-set-up-device.md).
-
-If you're unfamiliar with the process of auto-provisioning, review the [provisioning](about-iot-dps.md#provisioning-process) overview before continuing.
-
-<a id="enrolldevice"></a>
-## Enroll the device
-
-This step involves adding the device's unique security artifacts to the Device Provisioning Service. These security artifacts are based on the device's [Attestation mechanism](concepts-service.md#attestation-mechanism) as follows:
--- For TPM-based devices you need:
- - The *Endorsement Key* that is unique to each TPM chip or simulation, which is obtained from the TPM chip manufacturer. Read the [Understand TPM Endorsement Key](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc770443(v=ws.11)) for more information.
- - The *Registration ID* that is used to uniquely identify a device in the namespace/scope. This ID may or may not be the same as the device ID. The ID is mandatory for every device. For TPM-based devices, the registration ID may be derived from the TPM itself, for example, an SHA-256 hash of the TPM Endorsement Key.
-
- [![Enrollment information for TPM in the portal](./media/tutorial-provision-device-to-hub/tpm-device-enrollment.png)](./media/tutorial-provision-device-to-hub/tpm-device-enrollment.png#lightbox)
--- For X.509 based devices you need:
- - The [certificate issued to the X.509](/windows/win32/seccertenroll/about-x-509-public-key-certificates) chip or simulation, in the form of either a *.pem* or a *.cer* file. For individual enrollment, you need to use the per-device *signed certificate* for your X.509 system, while for enrollment groups, you need to use the *root certificate*.
-
- [![Add individual enrollment for X.509 attestation in the portal](./media/tutorial-provision-device-to-hub/individual-enrollment.png)](./media/tutorial-provision-device-to-hub/individual-enrollment.png#lightbox)
-
-There are two ways to enroll the device to the Device Provisioning Service:
--- **Enrollment Groups**
- This represents a group of devices that share a specific attestation mechanism. We recommend using an enrollment group for a large number of devices, which share a desired initial configuration, or for devices all going to the same tenant. For more information on Identity attestation for enrollment groups, see [Security](concepts-x509-attestation.md#controlling-device-access-to-the-provisioning-service-with-x509-certificates).
-
- [![Add group enrollment for X.509 attestation in the portal](./media/tutorial-provision-device-to-hub/group-enrollment.png)](./media/tutorial-provision-device-to-hub/group-enrollment.png#lightbox)
--- **Individual Enrollments**
- This represents an entry for a single device that may register with the Device Provisioning Service. Individual enrollments may use either x509 certificates or SAS tokens (in a real or virtual TPM) as attestation mechanisms. We recommend using individual enrollments for devices that require unique initial configurations, and devices that can only use SAS tokens via TPM or virtual TPM as the attestation mechanism. Individual enrollments may have the desired IoT hub device ID specified.
-
-Now you enroll the device with your Device Provisioning Service instance, using the required security artifacts based on the device's attestation mechanism:
-
-1. Sign in to the Azure portal, click on the **All resources** button on the left-hand menu and open your Device Provisioning Service.
-
-2. On the Device Provisioning Service summary blade, select **Manage enrollments**. Select either **Individual Enrollments** tab or the **Enrollment Groups** tab as per your device setup. Click the **Add** button at the top. Select **TPM** or **X.509** as the identity attestation *Mechanism*, and enter the appropriate security artifacts as discussed previously. You may enter a new **IoT Hub device ID**. Once complete, click the **Save** button.
-
-3. When the device is successfully enrolled, you should see it displayed in the portal as follows:
-
- ![Successful TPM enrollment in the portal](./media/tutorial-provision-device-to-hub/tpm-enrollment-success.png)
-
-After enrollment, the provisioning service then waits for the device to boot and connect with it at any later point in time. When your device boots for the first time, the client SDK library interacts with your chip to extract the security artifacts from the device, and verifies registration with your Device Provisioning Service.
-
-## Start the IoT device
-
-Your IoT device can be a real device, or a simulated device. Since the IoT device has now been enrolled with a Device Provisioning Service instance, the device can now boot up, and call the provisioning service to be recognized using the attestation mechanism. Once the provisioning service has recognized the device, it will be assigned to an IoT hub.
-
-Simulated device examples, using both TPM and X.509 attestation, are included for C, Java, C#, Node.js, and Python. To see an example of a device using TPM attestation, see [Quickstart: Provision a simulated TPM device](quick-create-simulated-device-tpm.md). For an example of a device using X.509 attestation, see [Quickstart: Provision a simulated X.509 device](quick-create-simulated-device-x509.md#prepare-and-run-the-device-provisioning-code).
-
-Start the device to allow your device's client application to start the registration with your Device Provisioning Service.
-
-## Verify the device is registered
-
-Once your device boots, the following actions should take place:
-
-1. The device sends a registration request to your Device Provisioning Service.
-2. For TPM devices, the Device Provisioning Service sends back a registration challenge to which your device responds.
-3. On successful registration, the Device Provisioning Service sends the IoT hub URI, device ID, and the encrypted key back to the device.
-4. The IoT Hub client application on the device then connects to your hub.
-5. On successful connection to the hub, you should see the device appear in the IoT hub's **IoT Devices** explorer.
-
- ![Successful connection to hub in the portal](./media/tutorial-provision-device-to-hub/hub-connect-success.png)
-
-For more information, see the provisioning device client sample, [prov_dev_client_sample.c](https://github.com/Azure/azure-iot-sdk-c/blob/master/provisioning_client/samples/prov_dev_client_sample/prov_dev_client_sample.c). The sample demonstrates provisioning a simulated device using TPM, X.509 certificates and symmetric keys. Refer back to the [TPM](./quick-create-simulated-device-tpm.md), [X.509](./quick-create-simulated-device-x509.md), and [Symmetric key](./quick-create-simulated-device-symm-key.md) attestation quickstarts for step-by-step instructions on using the sample.
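For comparison with the C sample referenced above, a device-side registration flow written against the .NET provisioning device SDK (`Microsoft.Azure.Devices.Provisioning.Client`) might look roughly like the sketch below. It uses symmetric key attestation purely for brevity rather than the TPM or X.509 flows this tutorial focuses on, and the ID scope, registration ID, and key are placeholders.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;
using Microsoft.Azure.Devices.Provisioning.Client;
using Microsoft.Azure.Devices.Provisioning.Client.Transport;
using Microsoft.Azure.Devices.Shared;

class RegisterDevice
{
    // Placeholder values; use your DPS ID scope and enrollment credentials.
    const string GlobalDeviceEndpoint = "global.azure-devices-provisioning.net";
    const string IdScope = "<your-id-scope>";
    const string RegistrationId = "<your-registration-id>";
    const string PrimaryKey = "<enrollment-primary-key>";

    static async Task Main()
    {
        using var security = new SecurityProviderSymmetricKey(RegistrationId, PrimaryKey, null);
        using var transport = new ProvisioningTransportHandlerMqtt();

        var provClient = ProvisioningDeviceClient.Create(GlobalDeviceEndpoint, IdScope, security, transport);

        // Step 1: the device asks DPS to register it.
        DeviceRegistrationResult result = await provClient.RegisterAsync();
        Console.WriteLine($"Status: {result.Status}, assigned hub: {result.AssignedHub}, device ID: {result.DeviceId}");

        if (result.Status != ProvisioningRegistrationStatusType.Assigned)
        {
            return;
        }

        // Steps 3-4: DPS returned the hub assignment; connect the device client to that hub.
        var auth = new DeviceAuthenticationWithRegistrySymmetricKey(result.DeviceId, security.GetPrimaryKey());
        using var deviceClient = DeviceClient.Create(result.AssignedHub, auth, TransportType.Mqtt);
        await deviceClient.OpenAsync();
        Console.WriteLine("Connected to IoT Hub.");
    }
}
```

The same `RegisterAsync` call drives the numbered steps above; only the security provider passed to `ProvisioningDeviceClient.Create` changes per attestation mechanism.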
-
-## Next steps
-In this tutorial, you learned how to:
-
-> [!div class="checklist"]
-> * Enroll the device
-> * Start the device
-> * Verify the device is registered
-
-Advance to the next tutorial to learn how to provision multiple devices across load-balanced hubs
-
-> [!div class="nextstepaction"]
-> [Provision devices across load-balanced IoT hubs](./tutorial-provision-multiple-hubs.md)
iot-dps Tutorial Set Up Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/tutorial-set-up-cloud.md
- Title: Tutorial - Set up cloud for Azure IoT Hub Device Provisioning Service in portal
-description: This tutorial shows how you can set up the cloud resources for device provisioning in the [Azure portal](https://portal.azure.com) using the IoT Hub Device Provisioning Service (DPS)
-- Previously updated : 11/12/2019
-# Tutorial: Configure cloud resources for device provisioning with the IoT Hub Device Provisioning Service
-
-This tutorial shows how to set up the cloud for automatic device provisioning using the IoT Hub Device Provisioning Service. In this tutorial, you learn how to:
-
-> [!div class="checklist"]
-> * Use the Azure portal to create an IoT Hub Device Provisioning Service and get the ID scope
-> * Create an IoT hub
-> * Link the IoT hub to the Device Provisioning Service
-> * Set the allocation policy on the Device Provisioning Service
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
-
-## Prerequisites
-
-Sign in to the [Azure portal](https://portal.azure.com/).
-
-## Create a Device Provisioning Service instance and get the ID scope
-
-Follow these steps to create a new Device Provisioning Service instance.
-
-1. In the upper left-hand corner of the Azure portal, click **Create a resource**.
-
-2. In the Search box, type **device provisioning**.
-
-3. Click **IoT Hub Device Provisioning Service**.
-
-4. Fill out the **IoT Hub Device Provisioning Service** form with the following information:
-
- | Setting | Suggested value | Description |
- | ------- | --------------- | ----------- |
- | **Name** | Any unique name | -- |
- | **Subscription** | Your subscription | For details about your subscriptions, see [Subscriptions](https://account.windowsazure.com/Subscriptions). |
- | **Resource group** | myResourceGroup | For valid resource group names, see [Naming rules and restrictions](/azure/architecture/best-practices/resource-naming). |
- | **Location** | Any valid location | For information about regions, see [Azure Regions](https://azure.microsoft.com/regions/). For resiliency and reliability, we recommend deploying to one of the regions that support [Availability Zones](iot-dps-ha-dr.md). |
-
- ![Enter basic information about your Device Provisioning Service in the portal](./media/tutorial-set-up-cloud/create-iot-dps-portal.png)
-
-5. Click **Create**. After a few moments, the Device Provisioning Service instance is created and the **Overview** page is displayed.
-
-6. On the **Overview** page for the new service instance, copy the value for the **ID scope** for use later. That value is used to identify registration IDs, and provides a guarantee that the registration ID is unique.
-
-7. Also, copy the **Service endpoint** value for later use.
-
-## Create an IoT hub
--
-### Retrieve connection string for IoT hub
--
-You have now created your IoT hub, and you have the host name and IoT Hub connection string that you need to complete the rest of this tutorial.
-
-## Link the Device Provisioning Service to an IoT hub
-
-The next step is to link the Device Provisioning Service and IoT hub so that the IoT Hub Device Provisioning Service can register devices to that hub. The service can only provision devices to IoT hubs that have been linked to the Device Provisioning Service. Follow these steps.
-
-1. In the **All resources** page, click the Device Provisioning Service instance you created previously.
-
-2. In the Device Provisioning Service page, click **Linked IoT hubs**.
-
-3. Click **Add**.
-
-4. In the **Add link to IoT hub** page, provide the following information, and click **Save**:
-
 * **Subscription:** Make sure the subscription that contains the IoT hub is selected. You can link to an IoT hub that resides in a different subscription.
-
- * **IoT hub:** Choose the name of the IoT hub that you want to link with this Device Provisioning Service instance.
-
- * **Access Policy:** Select **iothubowner** as the credentials to use for establishing the link to the IoT hub.
-
- ![Link the hub name to link to the Device Provisioning Service in the portal](./media/tutorial-set-up-cloud/link-iot-hub-to-dps-portal.png)
-
-## Set the allocation policy on the Device Provisioning Service
-
-The allocation policy is an IoT Hub Device Provisioning Service setting that determines how devices are assigned to an IoT hub. There are three supported allocation policies: 
-
-1. **Lowest latency**: Devices are provisioned to an IoT hub based on the hub with the lowest latency to the device.
-
-2. **Evenly weighted distribution** (default): Linked IoT hubs are equally likely to have devices provisioned to them. This setting is the default. If you are provisioning devices to only one IoT hub, you can keep this setting. 
-
-3. **Static configuration via the enrollment list**: Specification of the desired IoT hub in the enrollment list takes priority over the Device Provisioning Service-level allocation policy.
-
-To set the allocation policy, in the Device Provisioning Service page click **Manage allocation policy**. Make sure the allocation policy is set to **Evenly weighted distribution** (the default). If you make any changes, click **Save** when you are done.
-
-![Manage allocation policy](./media/tutorial-set-up-cloud/iot-dps-manage-allocation.png)
-
-## Clean up resources
-
-Other tutorials in this collection build upon this tutorial. If you plan to continue on to work with subsequent quick starts or with the tutorials, do not clean up the resources created in this tutorial. If you do not plan to continue, use the following steps to delete all resources created by this tutorial in the Azure portal.
-
-1. From the left-hand menu in the Azure portal, click **All resources** and then select your IoT Hub Device Provisioning Service instance. At the top of the **All resources** page, click **Delete**.
-
-2. From the left-hand menu in the Azure portal, click **All resources** and then select your IoT hub. At the top of the **All resources** page, click **Delete**.
-
-## Next steps
-
-In this tutorial, you learned how to:
-
-> [!div class="checklist"]
-> * Use the Azure portal to create an IoT Hub Device Provisioning Service and get the ID scope
-> * Create an IoT hub
-> * Link the IoT hub to the Device Provisioning Service
-> * Set the allocation policy on the Device Provisioning Service
-
-Advance to the next tutorial to learn how to set up your device for provisioning
-
-> [!div class="nextstepaction"]
-> [Set up device for provisioning](tutorial-set-up-device.md)
iot-dps Tutorial Set Up Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/tutorial-set-up-device.md
- Title: 'Tutorial - Set up device for the Azure IoT Hub Device Provisioning Service'
-description: 'This tutorial shows how you can set up device to provision via the IoT Hub Device Provisioning Service (DPS) during the device manufacturing process'
-- Previously updated : 11/12/2019
-# Tutorial: Set up a device to provision using the Azure IoT Hub Device Provisioning Service
-
-In the previous tutorial, you learned how to set up the Azure IoT Hub Device Provisioning Service to automatically provision your devices to your IoT hub. This tutorial shows you how to set up your device during the manufacturing process, enabling it to be auto-provisioned with IoT Hub. Your device is provisioned based on its [Attestation mechanism](concepts-service.md#attestation-mechanism), upon first boot and connection to the provisioning service. This tutorial covers the following tasks:
-
-> [!div class="checklist"]
-> * Build platform-specific Device Provisioning Services Client SDK
-> * Extract the security artifacts
-> * Create the device registration software
-
-This tutorial expects that you have already created your Device Provisioning Service instance and an IoT hub, using the instructions in the previous [Set up cloud resources](tutorial-set-up-cloud.md) tutorial.
-
-This tutorial uses the [Azure IoT SDKs and libraries for C repository](https://github.com/Azure/azure-iot-sdk-c), which contains the Device Provisioning Service Client SDK for C. The SDK currently provides TPM and X.509 support for devices running on Windows or Ubuntu implementations. This tutorial is based on use of a Windows development client, which also assumes basic proficiency with Visual Studio.
-
-If you're unfamiliar with the process of auto-provisioning, review the [provisioning](about-iot-dps.md#provisioning-process) overview before continuing.
---
-## Prerequisites
-
-The following prerequisites are for a Windows development environment. For Linux or macOS, see the appropriate section in [Prepare your development environment](https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/devbox_setup.md) in the SDK documentation.
-
-* [Visual Studio](https://visualstudio.microsoft.com/vs/) 2019 with the ['Desktop development with C++'](/cpp/ide/using-the-visual-studio-ide-for-cpp-desktop-development) workload enabled. Visual Studio 2015 and Visual Studio 2017 are also supported.
-
-* Latest version of [Git](https://git-scm.com/download/) installed.
-
-## Build a platform-specific version of the SDK
-
-The Device Provisioning Service Client SDK helps you implement your device registration software. But before you can use it, you need to build a version of the SDK specific to your development client platform and attestation mechanism. In this tutorial, you build an SDK that uses Visual Studio on a Windows development platform, for a supported type of attestation:
-
-1. Download the [CMake build system](https://cmake.org/download/).
-
- It is important that the Visual Studio prerequisites (Visual Studio and the 'Desktop development with C++' workload) are installed on your machine, **before** starting the `CMake` installation. Once the prerequisites are in place, and the download is verified, install the CMake build system.
-
-2. Find the tag name for the [latest release](https://github.com/Azure/azure-iot-sdk-c/releases/latest) of the SDK.
-
-3. Open a command prompt or Git Bash shell. Run the following commands to clone the latest release of the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) GitHub repository. Use the tag you found in the previous step as the value for the `-b` parameter:
-
- ```cmd/sh
- git clone -b <release-tag> https://github.com/Azure/azure-iot-sdk-c.git
- cd azure-iot-sdk-c
- git submodule update --init
- ```
-
- You should expect this operation to take several minutes to complete.
-
-4. Create a `cmake` subdirectory in the root directory of the git repository, and navigate to that folder. Run the following commands from the `azure-iot-sdk-c` directory:
-
- ```cmd/sh
- mkdir cmake
- cd cmake
- ```
-
-5. Build the SDK for your development platform based on the attestation mechanisms you will be using. Use one of the following commands (also note the two trailing period characters for each command). Upon completion, CMake builds out the `/cmake` subdirectory with content specific to your device:
-
- - For devices that use the TPM simulator for attestation:
-
- ```cmd/sh
- cmake -Duse_prov_client:BOOL=ON -Duse_tpm_simulator:BOOL=ON ..
- ```
-
- - For any other device (physical TPM/HSM/X.509, or a simulated X.509 certificate):
-
- ```cmd/sh
- cmake -Duse_prov_client:BOOL=ON ..
- ```
--
-Now you're ready to use the SDK to build your device registration code.
-
-<a id="extractsecurity"></a>
-
-## Extract the security artifacts
-
-The next step is to extract the security artifacts for the attestation mechanism used by your device.
-
-### Physical devices
-
-Depending on whether you built the SDK to use attestation for a physical TPM/HSM or using X.509 certificates, gathering the security artifacts is as follows:
-- For a TPM device, you need to determine the **Endorsement Key** associated with it from the TPM chip manufacturer. You can derive a unique **Registration ID** for your TPM device by hashing the endorsement key.
-- For an X.509 device, you need to obtain the certificates issued to your device(s). The provisioning service exposes two types of enrollment entries that control access for devices using the X.509 attestation mechanism. The certificates needed depend on the enrollment types you will be using.
- - Individual enrollments: Enrollment for a specific single device. This type of enrollment entry requires [end-entity, "leaf", certificates](concepts-x509-attestation.md#end-entity-leaf-certificate).
-
- - Enrollment groups: This type of enrollment entry requires intermediate or root certificates. For more information, see [Controlling device access to the provisioning service with X.509 certificates](concepts-x509-attestation.md#controlling-device-access-to-the-provisioning-service-with-x509-certificates).
-
-### Simulated devices
-
-Depending on whether you built the SDK to use attestation for a simulated device using TPM or X.509 certificates, gathering the security artifacts is as follows:
--- For a simulated TPM device:-
- 1. Open a Windows Command Prompt, navigate to the `azure-iot-sdk-c` subdirectory, and run the TPM simulator. It listens over a socket on ports 2321 and 2322. Do not close this command window; you will need to keep this simulator running until the end of the following Quickstart.
-
- From the `azure-iot-sdk-c` subdirectory, run the following command to start the simulator:
-
- ```cmd/sh
- .\provisioning_client\deps\utpm\tools\tpm_simulator\Simulator.exe
- ```
-
- > [!NOTE]
- > If you use the Git Bash command prompt for this step, you'll need to change the backslashes to forward slashes, for example: `./provisioning_client/deps/utpm/tools/tpm_simulator/Simulator.exe`.
-
- 1. Using Visual Studio, open the solution generated in the *cmake* folder named `azure_iot_sdks.sln`, and build it using the "Build solution" command on the "Build" menu.
-
- 1. In the *Solution Explorer* pane in Visual Studio, navigate to the folder **Provision\_Tools**. Right-click the **tpm_device_provision** project and select **Set as Startup Project**.
-
- 1. Run the solution using either of the "Start" commands on the "Debug" menu. The output window displays the TPM simulator's **_Registration ID_** and the **_Endorsement Key_**, needed for device enrollment and registration. Copy these values for use later. You can close this window (with Registration ID and Endorsement Key), but leave the TPM simulator window running that you started in step #1.
--- For a simulated X.509 device:-
- 1. Using Visual Studio, open the solution generated in the *cmake* folder named `azure_iot_sdks.sln`, and build it using the "Build solution" command on the "Build" menu.
-
- 1. In the *Solution Explorer* pane in Visual Studio, navigate to the folder **Provision\_Tools**. Right-click the **dice\_device\_enrollment** project and select **Set as Startup Project**.
-
- 1. Run the solution using either of the "Start" commands on the "Debug" menu. In the output window, enter **i** for individual enrollment when prompted. The output window displays a locally generated X.509 certificate for your simulated device. Copy to clipboard the output starting from *--BEGIN CERTIFICATE--* and ending at the first *--END CERTIFICATE--*, making sure to include both of these lines as well. You only need the first certificate from the output window.
-
- 1. Create a file named **_X509testcert.pem_**, open it in a text editor of your choice, and copy the clipboard contents to this file. Save the file as you will use it later for device enrollment. When your registration software runs, it uses the same certificate during auto-provisioning.
-
-These security artifacts are required when you enroll your device to the Device Provisioning Service. The provisioning service waits for the device to boot and connect with it at any later point in time. When your device boots for the first time, the client SDK logic interacts with your chip (or simulator) to extract the security artifacts from the device, and verifies registration with your Device Provisioning Service.
-
-## Create the device registration software
-
-The last step is to write a registration application that uses the Device Provisioning Service client SDK to register the device with the IoT Hub service.
-
-> [!NOTE]
-> For this step we will assume the use of a simulated device, accomplished by running an SDK sample registration application from your workstation. However, the same concepts apply if you are building a registration application for deployment to a physical device.
-
-1. In the Azure portal, select the **Overview** blade for your Device Provisioning Service and copy the **_ID Scope_** value. The *ID Scope* is generated by the service and guarantees uniqueness. It is immutable and used to uniquely identify the registration IDs.
-
- ![Extract Device Provisioning Service endpoint information from the portal blade](./media/tutorial-set-up-device/extract-dps-endpoints.png)
-
-1. In the Visual Studio *Solution Explorer* on your machine, navigate to the folder **Provision\_Samples**. Select the sample project named **prov\_dev\_client\_sample** and open the source file **prov\_dev\_client\_sample.c**.
-
-1. Assign the _ID Scope_ value obtained in step #1, to the `id_scope` variable (removing the left/`[` and right/`]` brackets):
-
- ```c
- static const char* global_prov_uri = "global.azure-devices-provisioning.net";
- static const char* id_scope = "[ID Scope]";
- ```
-
- For reference, the `global_prov_uri` variable allows the IoT Hub client registration API `IoTHubClient_LL_CreateFromDeviceAuth` to connect to the designated Device Provisioning Service instance.
-
-1. In the **main()** function in the same file, comment/uncomment the `hsm_type` variable that matches the attestation mechanism being used by your device's registration software (TPM or X.509):
-
- ```c
- hsm_type = SECURE_DEVICE_TYPE_TPM;
- //hsm_type = SECURE_DEVICE_TYPE_X509;
- ```
-
-1. Save your changes and rebuild the **prov\_dev\_client\_sample** sample by selecting "Build solution" from the "Build" menu.
-
-1. Right-click the **prov\_dev\_client\_sample** project under the **Provision\_Samples** folder, and select **Set as Startup Project**. DO NOT run the sample application yet.
-
-> [!IMPORTANT]
-> Do not run/start the device yet! You need to finish the process by enrolling the device with the Device Provisioning Service first, before starting the device. The Next steps section below will guide you to the next article.
-
-### SDK APIs used during registration (for reference only)
-
-For reference, the SDK provides the following APIs for your application to use during registration. These APIs help your device connect and register with the Device Provisioning Service when it boots up. In return, your device receives the information required to establish a connection to your IoT Hub instance:
-
-```C
-// Creates a Provisioning Client for communications with the Device Provisioning Client Service.
-PROV_DEVICE_LL_HANDLE Prov_Device_LL_Create(const char* uri, const char* scope_id, PROV_DEVICE_TRANSPORT_PROVIDER_FUNCTION protocol)
-
-// Disposes of resources allocated by the provisioning Client.
-void Prov_Device_LL_Destroy(PROV_DEVICE_LL_HANDLE handle)
-
-// Asynchronous call initiates the registration of a device.
-PROV_DEVICE_RESULT Prov_Device_LL_Register_Device(PROV_DEVICE_LL_HANDLE handle, PROV_DEVICE_CLIENT_REGISTER_DEVICE_CALLBACK register_callback, void* user_context, PROV_DEVICE_CLIENT_REGISTER_STATUS_CALLBACK reg_status_cb, void* status_user_ctext)
-
-// Api to be called by user when work (registering device) can be done
-void Prov_Device_LL_DoWork(PROV_DEVICE_LL_HANDLE handle)
-
-// API sets a runtime option identified by parameter optionName to a value pointed to by value
-PROV_DEVICE_RESULT Prov_Device_LL_SetOption(PROV_DEVICE_LL_HANDLE handle, const char* optionName, const void* value)
-```
-
-You may also find that you need to refine your Device Provisioning Service client registration application, using a simulated device at first, and a test service setup. Once your application is working in the test environment, you can build it for your specific device and copy the executable to your device image.
-
-## Clean up resources
-
-At this point, you might have the Device Provisioning and IoT Hub services running in the portal. If you wish to abandon the device provisioning setup, and/or delay completion of this tutorial series, we recommend shutting them down to avoid incurring unnecessary costs.
-
-1. From the left-hand menu in the Azure portal, click **All resources** and then select your Device Provisioning Service. At the top of the **All resources** blade, click **Delete**.
-1. From the left-hand menu in the Azure portal, click **All resources** and then select your IoT hub. At the top of the **All resources** blade, click **Delete**.
-
-## Next steps
-In this tutorial, you learned how to:
-
-> [!div class="checklist"]
-> * Build platform-specific Device Provisioning Service Client SDK
-> * Extract the security artifacts
-> * Create the device registration software
-
-Advance to the next tutorial to learn how to provision the device to your IoT hub by enrolling it to the Azure IoT Hub Device Provisioning Service for auto-provisioning.
-
-> [!div class="nextstepaction"]
-> [Provision the device to your IoT hub](tutorial-provision-device-to-hub.md)
iot-edge How To Collect And Transport Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-collect-and-transport-metrics.md
This option does require [extra setup](how-to-collect-and-transport-metrics.md#s
### Sample cloud workflow
-A cloud workflow that delivers metrics messages from IoT Hub to Log Analytics is available as part of the [IoT Edge logging and monitoring sample](https://github.com/Azure-Samples/iotedge-logging-and-monitoring-solution#monitoring-architecture-reference). The sample can be deployed on to existing cloud resources or serve as a production deployment reference.
+A cloud workflow that delivers metrics messages from IoT Hub to Log Analytics is available as part of the [IoT Edge logging and monitoring sample](https://github.com/Azure-Samples/iotedge-logging-and-monitoring-solution/blob/main/docs/CloudWorkflow.md). The sample can be deployed on to existing cloud resources or serve as a production deployment reference.
# [IoT Central](#tab/iotcentral)
iot-edge How To Publish Subscribe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-publish-subscribe.md
monikerRange: ">=iotedge-2020-11"
You can use the Azure IoT Edge MQTT broker to publish and subscribe to messages. This article shows you how to connect to this broker, publish and subscribe to messages over user-defined topics, and use IoT Hub messaging primitives. The IoT Edge MQTT broker is built into the IoT Edge hub. For more information, see [the brokering capabilities of the IoT Edge hub](iot-edge-runtime.md).
> [!NOTE]
-> IoT Edge MQTT broker is currently in public preview.
+> IoT Edge MQTT broker (currently in preview) will not move to general availability and will be removed from the future version of IoT Edge Hub. We appreciate the feedback we received on the preview, and we are continuing to refine our plans for an MQTT broker. In the meantime, if you need a standards-compliant MQTT broker on IoT Edge, consider deploying an open-source broker like [Mosquitto](https://mosquitto.org/) as an IoT Edge module.
## Prerequisites
iot-edge Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/version-history.md
The IoT Edge documentation on this site is available for two different versions
* **IoT Edge 1.2** contains content for new features and capabilities that are in the latest stable release. This version of the documentation also contains content for the IoT Edge for Linux on Windows (EFLOW) continuous release version, which is based on IoT Edge 1.2 and contains the latest features and capabilities. IoT Edge 1.2 is now bundled with the [Microsoft Defender for IoT micro-agent for Edge](../defender-for-iot/device-builders/overview.md).
* **IoT Edge 1.1 (LTS)** is the first long-term support (LTS) version of IoT Edge. The documentation for this version covers all features and capabilities from all previous versions through 1.1. This version of the documentation also contains content for the IoT Edge for Linux on Windows long-term support version, which is based on IoT Edge 1.1 LTS.
- * This documentation version will be stable through the supported lifetime of version 1.1, and won't reflect new features released in later versions. IoT Edge 1.1 LTS will be supported until December 3, 2022 to match the [.NET Core 3.1 release lifecycle](https://dotnet.microsoft.com/platform/support/policy/dotnet-core).
+ * This documentation version will be stable through the supported lifetime of version 1.1, and won't reflect new features released in later versions. IoT Edge 1.1 LTS will be supported until December 13, 2022 to match the [.NET Core 3.1 release lifecycle](https://dotnet.microsoft.com/platform/support/policy/dotnet-core).
For more information about IoT Edge releases, see [Azure IoT Edge supported systems](support.md).
This table provides recent version history for IoT Edge package releases, and hi
* [View all Azure IoT Edge releases](https://github.com/Azure/azure-iotedge/releases)
-* [Make or review feature requests in the feedback forum](https://feedback.azure.com/d365community/forum/0e2fff5d-f524-ec11-b6e6-000d3a4f0da0)
+* [Make or review feature requests in the feedback forum](https://feedback.azure.com/d365community/forum/0e2fff5d-f524-ec11-b6e6-000d3a4f0da0)
iot-hub-device-update Device Update Plug And Play https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-plug-and-play.md
# Device Update for IoT Hub and IoT Plug and Play
-Device Update for IoT Hub uses [IoT Plug and Play](../iot-develop/index.yml) to discover and manage devices that are over-the-air update capable. The Device Update service sends and receives properties and messages to and from devices using IoT Plug and Play interfaces. Device Update for IoT Hub requires IoT devices to implement the following interfaces and model-id.
+Device Update for IoT Hub uses [IoT Plug and Play](../iot-develop/index.yml) to discover and manage devices that are over-the-air update capable. The Device Update service sends and receives properties and messages to and from devices using IoT Plug and Play interfaces. Device Update for IoT Hub requires IoT devices to implement the following interfaces and model id.
Concepts: * Understand the [IoT Plug and Play device client](../iot-develop/concepts-developer-guide-device.md?pivots=programming-language-csharp).
IoT Hub Device Twin sample
"deviceProperties": { "manufacturer": "contoso", "model": "virtual-vacuum-v1",
- "interfaceId": "dtmi:azure:iot:deviceUpdate;1",
+ "interfaceId": "dtmi:azure:iot:deviceUpdateModel;1",
"aduVer": "DU;agent/0.8.0-rc1-public-preview", "doVer": "DU;lib/v0.6.0+20211001.174458.c8c4051,DU;agent/v0.6.0+20211001.174418.c8c4051" },
iot-hub Iot Hub Devguide Direct Methods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-direct-methods.md
Direct methods are implemented on the device and may require zero or more inputs
> [!NOTE] > When you invoke a direct method on a device, property names and values can only contain US-ASCII printable alphanumeric, except any in the following set: ``{'$', '(', ')', '<', '>', '@', ',', ';', ':', '\', '"', '/', '[', ']', '?', '=', '{', '}', SP, HT}``
->
+>
Direct methods are synchronous and either succeed or fail after the timeout period (default: 30 seconds, settable between 5 and 300 seconds). Direct methods are useful in interactive scenarios where you want a device to act if and only if the device is online and receiving commands. For example, turning on a light from a phone. In these scenarios, you want to see an immediate success or failure so the cloud service can act on the result as soon as possible. The device may return some message body as a result of the method, but it isn't required for the method to do so. There is no guarantee on ordering or any concurrency semantics on method calls.
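As a point of reference, invoking a direct method from a back-end application with the .NET service SDK (`Microsoft.Azure.Devices`) follows the pattern sketched below; the connection string, device ID, method name, and payload are placeholders.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Devices;

class InvokeMethod
{
    static async Task Main()
    {
        // Placeholder connection string with service connect permissions.
        string connectionString = "<iot-hub-service-connection-string>";
        using var serviceClient = ServiceClient.CreateFromConnectionString(connectionString);

        var method = new CloudToDeviceMethod("reboot")      // example method name
        {
            ResponseTimeout = TimeSpan.FromSeconds(30)      // fails if the device doesn't answer in time
        };
        method.SetPayloadJson("{\"delayInSeconds\": 5}");

        CloudToDeviceMethodResult result = await serviceClient.InvokeDeviceMethodAsync("myDeviceId", method);
        Console.WriteLine($"Status: {result.Status}, payload: {result.GetPayloadAsJson()}");

        // To target an IoT Edge module instead, use the InvokeDeviceMethodAsync(deviceId, moduleId, method) overload.
    }
}
```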
Direct method invocations on a device are HTTPS calls that are made up of the fo
* The *request URI* specific to the device along with the [API version](/rest/api/iothub/service/devices/invokemethod): ```http
- https://fully-qualified-iothubname.azure-devices.net/twins/{deviceId}/methods?api-version=2018-06-30
+ https://fully-qualified-iothubname.azure-devices.net/twins/{deviceId}/methods?api-version=2021-04-12
``` * The POST *method*
To begin, use the [Microsoft Azure IoT extension for Azure CLI](https://github.c
az iot hub generate-sas-token -n <iothubName> --du <duration> ```
-Next, replace the Authorization header with your newly generated SharedAccessSignature, then modify the `iothubName`, `deviceId`, `methodName` and `payload` parameters to match your implementation in the example `curl` command below.
+Next, replace the Authorization header with your newly generated SharedAccessSignature, then modify the `iothubName`, `deviceId`, `methodName` and `payload` parameters to match your implementation in the example `curl` command below.
```bash
curl -X POST \
- https://<iothubName>.azure-devices.net/twins/<deviceId>/methods?api-version=2018-06-30 \
+ https://<iothubName>.azure-devices.net/twins/<deviceId>/methods?api-version=2021-04-12\
  -H 'Authorization: SharedAccessSignature sr=iothubname.azure-devices.net&sig=x&se=x&skn=iothubowner' \
  -H 'Content-Type: application/json' \
  -d '{
Execute the modified command to invoke the specified Direct Method. Successful r
> The above example demonstrates invoking a Direct Method on a device. If you wish to invoke a Direct Method in an IoT Edge Module, you would need to modify the URL request as shown below:
```bash
-https://<iothubName>.azure-devices.net/twins/<deviceId>/modules/<moduleName>/methods?api-version=2018-06-30
+https://<iothubName>.azure-devices.net/twins/<deviceId>/modules/<moduleName>/methods?api-version=2021-04-12
``` ### Response
iot-hub Iot Hub Devguide Identity Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-identity-registry.md
An IoT Hub identity registry:
> [!IMPORTANT] > Only use the identity registry for device management and provisioning operations. High throughput operations at run time should not depend on performing operations in the identity registry. For example, checking the connection state of a device before sending a command is not a supported pattern. Make sure to check the [throttling rates](iot-hub-devguide-quotas-throttling.md) for the identity registry, and the [device heartbeat](iot-hub-devguide-identity-registry.md#device-heartbeat) pattern.
+> [!NOTE]
+> It can take a few seconds for a device or module identity to be available for retrieval after creation. If a `get` operation on a newly created device or module identity fails, retry it.
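A minimal sketch of that retry pattern with the .NET service SDK, assuming a placeholder connection string and device ID and a simple backoff:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Devices;

class CreateAndGetDevice
{
    static async Task Main()
    {
        var registryManager = RegistryManager.CreateFromConnectionString("<iot-hub-connection-string>");
        await registryManager.AddDeviceAsync(new Device("myNewDevice"));

        // The new identity may not be immediately readable; retry the get a few times.
        Device device = null;
        for (int attempt = 1; attempt <= 5 && device == null; attempt++)
        {
            device = await registryManager.GetDeviceAsync("myNewDevice");
            if (device == null)
            {
                await Task.Delay(TimeSpan.FromSeconds(2 * attempt)); // simple backoff
            }
        }

        Console.WriteLine(device != null
            ? $"Device ready: {device.Id}"
            : "Device not yet visible; retry later.");
    }
}
```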
+ ## Disable devices You can disable devices by updating the **status** property of an identity in the identity registry. Typically, you use this property in two scenarios:
To try out some of the concepts described in this article, see the following IoT
To explore using the IoT Hub Device Provisioning Service to enable zero-touch, just-in-time provisioning, see:
-* [Azure IoT Hub Device Provisioning Service](https://azure.microsoft.com/documentation/services/iot-dps)
+* [Azure IoT Hub Device Provisioning Service](https://azure.microsoft.com/documentation/services/iot-dps)
iot-hub Iot Hub Devguide Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-jobs.md
Consider using jobs when you need to schedule and track progress any of the foll
## Job lifecycle
-Jobs are initiated by the solution back end and maintained by IoT Hub. You can initiate a job through a service-facing URI (`PUT https://<iot hub>/jobs/v2/<jobID>?api-version=2018-06-30`) and query for progress on an executing job through a service-facing URI (`GET https://<iot hub>/jobs/v2/<jobID?api-version=2018-06-30`). To refresh the status of running jobs once a job is initiated, run a job query.
+Jobs are initiated by the solution back end and maintained by IoT Hub. You can initiate a job through a service-facing URI (`PUT https://<iot hub>/jobs/v2/<jobID>?api-version=2021-04-12`) and query for progress on an executing job through a service-facing URI (`GET https://<iot hub>/jobs/v2/<jobID?api-version=2021-04-12`). To refresh the status of running jobs once a job is initiated, run a job query.
> [!NOTE] > When you initiate a job, property names and values can only contain US-ASCII printable alphanumeric, except any in the following set: `$ ( ) < > @ , ; : \ " / [ ] ? = { } SP HT`
Jobs are initiated by the solution back end and maintained by IoT Hub. You can i
The following snippet shows the HTTPS 1.1 request details for executing a [direct method](iot-hub-devguide-direct-methods.md) on a set of devices using a job: ```
-PUT /jobs/v2/<jobId>?api-version=2018-06-30
+PUT /jobs/v2/<jobId>?api-version=2021-04-12
Authorization: <config.sharedAccessSignature> Content-Type: application/json; charset=utf-8
The query condition can also be on a single device ID or on a list of device IDs
The following snippet shows the request and response for a job scheduled to call a direct method named testMethod on all devices on contoso-hub-1: ```
-PUT https://contoso-hub-1.azure-devices.net/jobs/v2/job01?api-version=2018-06-30 HTTP/1.1
+PUT https://contoso-hub-1.azure-devices.net/jobs/v2/job01?api-version=2021-04-12 HTTP/1.1
Authorization: SharedAccessSignature sr=contoso-hub-1.azure-devices.net&sig=68ivv8Hxalg%3D&se=1556849884&skn=iothubowner Content-Type: application/json; charset=utf-8 Host: contoso-hub-1.azure-devices.net
Content-Length: 317
"payload": {}, "responseTimeoutInSeconds": 30 },
- "queryCondition": "*",
+ "queryCondition": "*",
"startTime": "2019-05-04T15:53:00.077Z", "maxExecutionTimeInSeconds": 20 }
Date: Fri, 03 May 2019 01:46:18 GMT
The following snippet shows the HTTPS 1.1 request details for updating device twin properties using a job: ```
-PUT /jobs/v2/<jobId>?api-version=2018-06-30
+PUT /jobs/v2/<jobId>?api-version=2021-04-12
Authorization: <config.sharedAccessSignature> Content-Type: application/json; charset=utf-8
Content-Type: application/json; charset=utf-8
The following snippet shows the request and response for a job scheduled to update device twin properties for test-device on contoso-hub-1: ```
-PUT https://contoso-hub-1.azure-devices.net/jobs/v2/job02?api-version=2018-06-30 HTTP/1.1
+PUT https://contoso-hub-1.azure-devices.net/jobs/v2/job02?api-version=2021-04-12 HTTP/1.1
Authorization: SharedAccessSignature sr=contoso-hub-1.azure-devices.net&sig=BN0U-RuA%3D&se=1556925787&skn=iothubowner Content-Type: application/json; charset=utf-8 Host: contoso-hub-1.azure-devices.net
Date: Fri, 03 May 2019 22:45:13 GMT
The following snippet shows the HTTPS 1.1 request details for querying for jobs: ```
-GET /jobs/v2/query?api-version=2018-06-30[&jobType=<jobType>][&jobStatus=<jobStatus>][&pageSize=<pageSize>][&continuationToken=<continuationToken>]
+GET /jobs/v2/query?api-version=2021-04-12[&jobType=<jobType>][&jobStatus=<jobStatus>][&pageSize=<pageSize>][&continuationToken=<continuationToken>]
Authorization: <config.sharedAccessSignature> Content-Type: application/json; charset=utf-8
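For reference, the same schedule-and-monitor flow can be driven from the .NET service SDK's `JobClient` instead of raw HTTPS calls. The sketch below mirrors the direct-method job shown earlier (query condition `*`, 30-second response timeout, 20-second execution window); the connection string and method name are placeholders.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Devices;

class ScheduleJob
{
    static async Task Main()
    {
        var jobClient = JobClient.CreateFromConnectionString("<iot-hub-connection-string>");

        var method = new CloudToDeviceMethod("testMethod") { ResponseTimeout = TimeSpan.FromSeconds(30) };
        string jobId = Guid.NewGuid().ToString();

        // Schedule the method on all devices now, with a 20-second execution window.
        JobResponse job = await jobClient.ScheduleDeviceMethodAsync(jobId, "*", method, DateTime.UtcNow, 20);

        // Poll the job status until it completes or fails.
        while (job.Status != JobStatus.Completed && job.Status != JobStatus.Failed)
        {
            await Task.Delay(TimeSpan.FromSeconds(2));
            job = await jobClient.GetJobAsync(jobId);
            Console.WriteLine($"Job {jobId}: {job.Status}");
        }
    }
}
```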
iot-hub Iot Hub Devguide Messages Read Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-messages-read-custom.md
When you use routing and custom endpoints, messages are only delivered to the bu
> [!NOTE] > * IoT Hub only supports writing data to Azure Storage containers as blobs. > * Service Bus queues and topics with **Sessions** or **Duplicate Detection** enabled are not supported as custom endpoints.
-> * In the Azure portal, you can create custom routing endpoints only to Azure resources that are in the same subscription as your IoT hub. You can create custom endpoints for resources in other subscriptions by using either the [Azure CLI](./tutorial-routing-config-message-routing-CLI.md) or [Azure Resource Manager](./tutorial-routing-config-message-routing-RM-template.md).
+> * In the Azure portal, you can create custom routing endpoints only to Azure resources that are in the same subscription as your IoT hub. You can create custom endpoints for resources in other subscriptions by using either the [Azure CLI](./tutorial-routing.md) or Azure Resource Manager.
For more information about creating custom endpoints in IoT Hub, see [IoT Hub endpoints](iot-hub-devguide-endpoints.md).
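As the note above says, the portal only creates custom endpoints for resources in the hub's own subscription, but the Azure CLI is not limited that way. A minimal sketch, assuming hypothetical resource names and a Service Bus queue connection string already stored in `$sbqConnectionString`:

```azurecli
# Sketch: add a Service Bus queue from another subscription as a custom routing endpoint.
az iot hub routing-endpoint create \
    --hub-name contoso-hub-1 \
    --resource-group ContosoResources \
    --endpoint-name ContosoSBQueueEndpoint \
    --endpoint-type servicebusqueue \
    --endpoint-resource-group OtherSubResourceGroup \
    --endpoint-subscription-id <other-subscription-id> \
    --connection-string "$sbqConnectionString"
```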
iot-hub Iot Hub Devguide Query Language https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-query-language.md
The query object exposes multiple **Next** values, depending on the deserializat
> [!IMPORTANT] > Query results can have a few minutes of delay with respect to the latest values in device twins. If querying individual device twins by ID, use the [get twin REST API](/jav#azure-iot-hub-service-sdks).
+Query expressions can have a maximum length of 8192 characters.
+ Currently, comparisons are supported only between primitive types (not objects). For example, `... WHERE properties.desired.config = properties.reported.config` is supported only if those properties have primitive values. ## Get started with jobs queries
SELECT * FROM devices.jobs
### Limitations
+Query expressions can have a maximum length of 8192 characters.
+ Currently, queries on **devices.jobs** do not support: * Projections, therefore only `SELECT *` is possible.
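To make the limitations concrete, here's a hedged sketch of running a jobs query over REST. The query text follows the `SELECT *` rule and stays well under the 8192-character limit; the `/devices/query` path and request shape come from the IoT Hub service REST API rather than this article, so treat them as assumptions.

```bash
# Sketch: run a jobs query (projections aren't supported, so SELECT * is required).
curl -X POST \
  "https://contoso-hub-1.azure-devices.net/devices/query?api-version=2021-04-12" \
  -H "Authorization: SharedAccessSignature <sas-token>" \
  -H "Content-Type: application/json" \
  -d "{ \"query\": \"SELECT * FROM devices.jobs WHERE devices.jobs.deviceId = 'test-device'\" }"
```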
iot-hub Iot Hub Devguide Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-sdks.md
There are two categories of software development kits (SDKs) for working with IoT Hub:
-* [**IoT Hub Service SDKs**](#azure-iot-hub-service-sdks) enable you to build backend applications to manage your IoT hub, and optionally send messages, schedule jobs, invoke direct methods, or send desired property updates to your IoT devices or modules.
+* [**IoT Hub service SDKs**](#azure-iot-hub-service-sdks) enable you to build backend applications to manage your IoT hub, and optionally send messages, schedule jobs, invoke direct methods, or send desired property updates to your IoT devices or modules.
-* [**IoT Hub Device SDKs**](../iot-develop/about-iot-sdks.md) enable you to build apps that run on your IoT devices using device client or module client. These apps send telemetry to your IoT hub, and optionally receive messages, job, method, or twin updates from your IoT hub. You can use these SDKs to build device apps that use [Azure IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) conventions and models to advertise their capabilities to IoT Plug and Play-enabled applications. You can also use module client to author [modules](../iot-edge/iot-edge-modules.md) for [Azure IoT Edge runtime](../iot-edge/about-iot-edge.md).
+* [**IoT Hub device SDKs**](../iot-develop/about-iot-sdks.md) enable you to build apps that run on your IoT devices using device client or module client. These apps send telemetry to your IoT hub, and optionally receive messages, job, method, or twin updates from your IoT hub. You can use these SDKs to build device apps that use [Azure IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) conventions and models to advertise their capabilities to IoT Plug and Play-enabled applications. You can also use module client to author [modules](../iot-edge/iot-edge-modules.md) for [Azure IoT Edge runtime](../iot-edge/about-iot-edge.md).
In addition, we also provide a set of SDKs for working with the [Device Provisioning Service](../iot-dps/about-iot-dps.md).
-* **Provisioning Device SDKs** enable you to build apps that run on your IoT devices to communicate with the Device Provisioning Service.
+* **Provisioning device SDKs** enable you to build apps that run on your IoT devices to communicate with the Device Provisioning Service.
-* **Provisioning Service SDKs** enable you to build backend applications to manage your enrollments in the Device Provisioning Service.
+* **Provisioning service SDKs** enable you to build backend applications to manage your enrollments in the Device Provisioning Service.
Learn about the [benefits of developing using Azure IoT SDKs](https://azure.microsoft.com/blog/benefits-of-using-the-azure-iot-sdks-in-your-azure-iot-solution/).
-## Azure IoT Hub Service SDKs
+## Azure IoT Hub service SDKs
The Azure IoT service SDKs contain code to facilitate building applications that interact directly with IoT Hub to manage devices and security.
The Azure IoT service SDKs contain code to facilitate building applications that
| Node | [npm](https://www.npmjs.com/package/azure-iothub) | [GitHub](https://github.com/Azure/azure-iot-sdk-node) | [Samples](https://github.com/Azure/azure-iot-sdk-node/tree/main/service/samples) | [Reference](/javascript/api/azure-iothub/) | | Python | [pip](https://pypi.org/project/azure-iot-hub) | [GitHub](https://github.com/Azure/azure-iot-sdk-python) | [Samples](https://github.com/Azure/azure-iot-sdk-python/tree/main/azure-iot-hub/samples) | [Reference](/python/api/azure-iot-hub) |
-## Microsoft Azure Provisioning SDKs
+## Microsoft Azure provisioning SDKs
-The **Microsoft Azure Provisioning SDKs** enable you to provision devices to your IoT Hub using the [Device Provisioning Service](../iot-dps/about-iot-dps.md).
+The **Microsoft Azure provisioning SDKs** enable you to provision devices to your IoT Hub using the [Device Provisioning Service](../iot-dps/about-iot-dps.md). To learn more about the provisioning SDKs, see [Microsoft SDKs for Device Provisioning Service](../iot-dps/libraries-sdks.md).
-| Platform | Package | Source code | Reference |
-| --|--|--|--|
-| .NET|[Device SDK](https://www.nuget.org/packages/Microsoft.Azure.Devices.Provisioning.Client/), [Service SDK](https://www.nuget.org/packages/Microsoft.Azure.Devices.Provisioning.Service/) |[GitHub](https://github.com/Azure/azure-iot-sdk-csharp/)|[Reference](/dotnet/api/microsoft.azure.devices.provisioning.client) |
-| C|[Device SDK](https://github.com/Azure/azure-iot-sdk-c/blob/master/readme.md#packages-and-libraries)|[GitHub](https://github.com/Azure/azure-iot-sdk-c/blob/master/provisioning\_client)|[Reference](/azure/iot-hub/iot-c-sdk-ref/) |
-| Java|[Maven](https://github.com/Azure/azure-iot-sdk-jav#for-the-service-sdk)|[GitHub](https://github.com/Azure/azure-iot-sdk-java/blob/main/provisioning)|[Reference](/java/api/com.microsoft.azure.sdk.iot.provisioning.device) |
-| Node.js|[Device SDK](https://badge.fury.io/js/azure-iot-provisioning-device), [Service SDK](https://badge.fury.io/js/azure-iot-provisioning-service) |[GitHub](https://github.com/Azure/azure-iot-sdk-node/tree/main/provisioning)|[Reference](/javascript/api/overview/azure/iothubdeviceprovisioning) |
-| Python|[Device SDK](https://pypi.org/project/azure-iot-device/), [Service SDK](https://pypi.org/project/azure-iothub-provisioningserviceclient/)|[GitHub](https://github.com/Azure/azure-iot-sdk-python)|[Device Reference](/python/api/azure-iot-device/azure.iot.device.provisioningdeviceclient), [Service Reference](/python/api/azure-mgmt-iothubprovisioningservices) |
-
-## Azure IoT Hub Device SDKs
+## Azure IoT Hub device SDKs
The Microsoft Azure IoT device SDKs contain code that facilitates building applications that connect to and are managed by Azure IoT Hub services.
-Learn more about the IoT Hub Device SDKS in the [IoT Device Development Documentation](../iot-develop/about-iot-sdks.md).
+Learn more about the IoT Hub device SDKs in the [IoT Device Development Documentation](../iot-develop/about-iot-sdks.md).
## SDK and hardware compatibility
-For more information about choosing a device SDK, see [Overview of Azure IoT Device SDKs](../iot-develop/about-iot-sdks.md).
- For more information about SDK compatibility with specific hardware devices, see the [Azure Certified for IoT device catalog](https://devicecatalog.azure.com/) or individual repository. [!INCLUDE [iot-hub-basic](../../includes/iot-hub-basic-partial.md)] ## Next steps
-Relevant docs related to development using the Azure IoT SDKs:
-
-* Learn about [how to manage connectivity and reliable messaging](iot-hub-reliability-features-in-sdks.md) using the IoT Hub SDKs.
-* Learn about how to [develop for mobile platforms](iot-hub-how-to-develop-for-mobile-devices.md) such as iOS and Android.
-* [IoT Device Development Documentation](../iot-develop/about-iot-sdks.md)
-
-Other reference topics in this IoT Hub developer guide include:
-
-* [IoT Hub endpoints](iot-hub-devguide-endpoints.md)
-* [IoT Hub query language for device twins, jobs, and message routing](iot-hub-devguide-query-language.md)
-* [Quotas and throttling](iot-hub-devguide-quotas-throttling.md)
-* [IoT Hub MQTT support](iot-hub-mqtt-support.md)
-* [IoT Hub REST API reference](/rest/api/iothub/)
+* Learn how to [manage connectivity and reliable messaging](iot-hub-reliability-features-in-sdks.md) using the IoT Hub SDKs.
+* Learn how to [develop for mobile platforms](iot-hub-how-to-develop-for-mobile-devices.md) such as iOS and Android.
+* Learn how to [develop without an SDK](iot-hub-devguide-no-sdk.md).
iot-hub Iot Hub How To Clone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-how-to-clone.md
This article explores ways to clone an IoT Hub and provides some questions you need to answer before you start. Here are several reasons you might want to clone an IoT hub:
-* You are moving your company from one region to another, such as from Europe to North America (or vice versa), and you want your resources and data to be geographically close to your new location, so you need to move your hub.
+* You're moving your company from one region to another, such as from Europe to North America (or vice versa), and you want your resources and data to be geographically close to your new location, so you need to move your hub.
-* You are setting up a hub for a development versus production environment.
+* You're setting up a hub for a development versus production environment.
* You want to do a custom implementation of multi-hub high availability. For more information, see the [How to achieve cross region HA section of IoT Hub high availability and disaster recovery](iot-hub-ha-dr.md#achieve-cross-region-ha).
-* You want to increase the number of [partitions](iot-hub-scaling.md#partitions) configured for your hub. This is set when you first create your hub, and can't be changed. You can use the information in this article to clone your hub and when the clone is created, increase the number of partitions.
+* You want to increase the number of [partitions](iot-hub-scaling.md#partitions) configured for your hub. This number is set when you first create your hub, and can't be changed. You can use the information in this article to clone your hub and when the clone is created, increase the number of partitions.
To clone a hub, you need a subscription with administrative access to the original hub. You can put the new hub in a new resource group and region, in the same subscription as the original hub, or even in a new subscription. You just can't use the same name because the hub name has to be globally unique. > [!NOTE]
-> At this time, there's no feature available for cloning an IoT hub automatically. It's primarily a manual process, and thus is fairly error-prone. The complexity of cloning a hub is directly proportional to the complexity of the hub. For example, cloning an IoT hub with no message routing is fairly simple. If you add message routing as just one complexity, cloning the hub becomes at least an order of magnitude more complicated. If you also move the resources used for routing endpoints, it's another order of magniture more complicated.
+> At this time, there's no feature available for cloning an IoT hub automatically. It's primarily a manual process, and thus is fairly error-prone. The complexity of cloning a hub is directly proportional to the complexity of the hub. For example, cloning an IoT hub with no message routing is fairly simple. If you add message routing as just one complexity, cloning the hub becomes at least an order of magnitude more complicated. If you also move the resources used for routing endpoints, it's another order of magnitude more complicated.
## Things to consider
There are several things to consider before cloning an IoT hub.
* Make sure that all of the features available in the original location are also available in the new location. Some services are in preview, and not all features are available everywhere.
-* Do not remove the original resources before creating and verifying the cloned version. Once you remove a hub, it's gone forever, and there is no way to recover it to check the settings or data to make sure the hub is replicated correctly.
+* Don't remove the original resources before creating and verifying the cloned version. Once you remove a hub, it's gone forever, and there's no way to recover it to check the settings or data to make sure the hub is replicated correctly.
* Many resources require globally unique names, so you must use different names for the cloned versions. You also should use a different name for the resource group to which the cloned hub belongs.
-* Data for the original IoT hub is not migrated. This includes telemetry messages, cloud-to-device (C2D) commands, and job-related information such as schedules and history. Metrics and logging results are also not migrated.
+* Data for the original IoT hub isn't migrated. This data includes device messages, cloud-to-device (C2D) commands, and job-related information such as schedules and history. Metrics and logging results are also not migrated.
* For data or messages routed to Azure Storage, you can leave the data in the original storage account, transfer that data to a new storage account in the new region, or leave the old data in place and create a new storage account in the new location for the new data. For more information on moving data in Blob storage, see [Get started with AzCopy](../storage/common/storage-use-azcopy-v10.md).
-* Data for Event Hubs and for Service Bus Topics and Queues can't be migrated. This is point-in-time data and is not stored after the messages are processed.
+* Data for Event Hubs and for Service Bus Topics and Queues can't be migrated. This data is point-in-time data and isn't stored after the messages are processed.
-* You need to schedule downtime for the migration. Cloning the devices to the new hub takes time. If you are using the Import/Export method, benchmark testing has revealed that it could take around two hours to move 500,000 devices, and four hours to move a million devices.
+* You need to schedule downtime for the migration. Cloning the devices to the new hub takes time. If you use the Import/Export method, benchmark testing has revealed that it could take around two hours to move 500,000 devices, and four hours to move a million devices.
* You can copy the devices to the new hub without shutting down or changing the devices.
There are several things to consider before cloning an IoT hub.
* Otherwise, you have to use the Import/Export method to move the devices, and then the devices have to be modified to use the new hub. For example, you can set up your device to consume the IoT Hub host name from the twin desired properties. The device then takes that IoT Hub host name, disconnects from the old hub, and reconnects to the new one.
-* You need to update any certificates you are using so you can use them with the new resources. Also, you probably have the hub defined in a DNS table somewhere ΓÇö you will need to update that DNS information.
+* You need to update any certificates so you can use them with the new resources. Also, you probably have the hub defined in a DNS table somewhere and need to update that DNS information.
## Methodology
-This is the general method we recommend for moving an IoT hub from one region to another. For message routing, this assumes the resources are not being moved to the new region. For more information, see the [section on Message Routing](#how-to-handle-message-routing).
+This is the general method we recommend for moving an IoT hub from one region to another. For message routing, this assumes the resources aren't being moved to the new region. For more information, see the [section on Message Routing](#how-to-handle-message-routing).
+
+ 1. Export the hub and its settings to a Resource Manager template.
- 1. Export the hub and its settings to a Resource Manager template.
-
1. Make the necessary changes to the template, such as updating all occurrences of the name and the location for the cloned hub. For any resources in the template used for message routing endpoints, update the key in the template for that resource.
-
- 1. Import the template into a new resource group in the new location. This creates the clone.
- 1. Debug as needed.
-
- 1. Add anything that wasn't exported to the template.
-
- For example, consumer groups are not exported to the template. You need to add the consumer groups to the template manually or use the [Azure portal](https://portal.azure.com) after the hub is created. There is an example of adding one consumer group to a template in the article [Use an Azure Resource Manager template to configure IoT Hub message routing](tutorial-routing-config-message-routing-rm-template.md).
-
- 1. Copy the devices from the original hub to the clone. This is covered in the section [Managing the devices registered to the IoT hub](#managing-the-devices-registered-to-the-iot-hub).
+ 1. Import the template into a new resource group in the new location. This step creates the clone. (A CLI sketch of the export-and-import flow follows this list.)
+
+ 1. Debug as needed.
+
+ 1. Add anything that wasn't exported to the template.
+
+ For example, consumer groups aren't exported to the template. You need to add the consumer groups to the template manually or use the [Azure portal](https://portal.azure.com) after the hub is created.
+
+ 1. Copy the devices from the original hub to the clone. This process is covered in the section [Managing the devices registered to the IoT hub](#managing-the-devices-registered-to-the-iot-hub).
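A minimal Azure CLI sketch of the export-and-import flow above. The article's steps use the portal export; `az group export` and `az deployment group create` are CLI equivalents, and the resource group names are hypothetical.

```azurecli
# Step 1: export the original resource group (including the hub) to a Resource Manager template.
az group export --name ContosoResources > template.json

# Step 2: edit template.json -- new hub name, new location, keys for any routing endpoints.

# Step 3: create a resource group in the new region and deploy the edited template there.
az group create --name ContosoResourcesEast --location eastus
az deployment group create \
    --resource-group ContosoResourcesEast \
    --template-file template.json
```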
## How to handle message routing
-If your hub uses [custom routing](iot-hub-devguide-messages-read-custom.md), exporting the template for the hub includes the routing configuration, but it does not include the resources themselves. You must choose whether to move the routing resources to the new location or to leave them in place and continue to use them "as is".
+If your hub uses [custom routing](iot-hub-devguide-messages-read-custom.md), exporting the template for the hub includes the routing configuration, but it doesn't include the resources themselves. You must choose whether to move the routing resources to the new location or to leave them in place and continue to use them "as is".
For example, say you have a hub in West US that is routing messages to a storage account (also in West US), and you want to move the hub to East US. You can move the hub and have it still route messages to the storage account in West US, or you can move the hub and also move the storage account. There may be a small performance hit from routing messages to endpoint resources in a different region.
-You can move a hub that uses message routing pretty easily if you do not also move the resources used for the routing endpoints.
+You can move a hub that uses message routing easily if you don't also move the resources used for the routing endpoints.
If the hub uses message routing, you have two choices. 1. Move the resources used for the routing endpoints to the new location.
- * You must create the new resources yourself either manually in the [Azure portal](https://portal.azure.com) or through the use of Resource Manager templates.
+ * You must create the new resources yourself either manually in the [Azure portal](https://portal.azure.com) or by using Resource Manager templates.
* You must rename all of the resources when you create them in the new location, as they have globally unique names.
If the hub uses message routing, you have two choices.
1. Don't move the resources used for the routing endpoints. Use them "in place".
- * In the step where you edit the template, you will need to retrieve the keys for each routing resource and put them in the template before you create the new hub.
+ * In the step where you edit the template, you need to retrieve the keys for each routing resource and put them in the template before you create the new hub.
* The hub still references the original routing resources and routes messages to them as configured.
- * You will have a small performance hit because the hub and the routing endpoint resources are not in the same location.
+ * You'll have a small performance hit because the hub and the routing endpoint resources aren't in the same location.
## Prepare to migrate the hub to another region
This section provides specific instructions for migrating the hub.
1. Go to the Downloads folder (or to whichever folder you used when you exported the template) and find the zip file. Extract the zip file and find the file called `template.json`. Select and copy it. Go to a different folder and paste the template file (Ctrl+V). Now you can edit it.
- The following example is for a generic hub with no routing configuration. It is an S1 tier hub (with 1 unit) called **ContosoHub** in region **westus**. Here is the exported template.
+ The following example is for a generic hub with no routing configuration. It's an S1 tier hub (with 1 unit) called **ContosoHub** in region **westus**:
``` json {
You have to make some changes before you can use the template to create the new
#### Edit the hub name and location
-1. Remove the container name parameter section at the top. **ContosoHub** does not have an associated container.
+1. Remove the container name parameter section at the top. **ContosoHub** doesn't have an associated container.
``` json "parameters": {
You have to make some changes before you can use the template to create the new
``` json "location": "eastus", ```
-#### Update the keys for the routing resources that are not being moved
+#### Update the keys for the routing resources that aren't being moved
-When you export the Resource Manager template for a hub that has routing configured, you will see that the keys for those resources are not provided in the exported template -- their placement is denoted by asterisks. You must fill them in by going to those resources in the portal and retrieving the keys **before** you import the new hub's template and create the hub.
+When you export the Resource Manager template for a hub that has routing configured, you will see that the keys for those resources aren't provided in the exported template. Their placement is denoted by asterisks. You must fill them in by going to those resources in the portal and retrieving the keys **before** you import the new hub's template and create the hub.
1. Retrieve the keys required for any of the routing resources and put them in the template. You can retrieve the key(s) from the resource in the [Azure portal](https://portal.azure.com).
When you export the Resource Manager template for a hub that has routing configu
1. After you retrieve the account key for the storage account, put it in the template in the clause `AccountKey=****` in the place of the asterisks.
-1. For service bus queues, get the Shared Access Key matching the SharedAccessKeyName. Here is the key and the `SharedAccessKeyName` in the json:
+1. For service bus queues, get the Shared Access Key matching the SharedAccessKeyName. Here's the key and the `SharedAccessKeyName` in the json:
```json "connectionString": "Endpoint=sb://fabrikamsbnamespace1234.servicebus.windows.net:5671/;
When you export the Resource Manager template for a hub that has routing configu
EntityPath=fabrikamsbqueue1234", ```
-1. The same applies for the Service Bus Topics and Event Hub connections.
+1. The same applies for the Service Bus Topics and Event Hubs connections. A CLI sketch for retrieving these keys follows this list.
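Here's a hedged Azure CLI sketch for retrieving those two values before the import; all resource names are hypothetical.

```azurecli
# Sketch: get the storage account key to paste over AccountKey=**** in the template.
az storage account keys list \
    --resource-group ContosoResources \
    --account-name contosostorage1234 \
    --query "[0].value" -o tsv

# Sketch: get the Service Bus queue connection string (contains the SharedAccessKey).
az servicebus queue authorization-rule keys list \
    --name sbauthrule \
    --namespace-name ContosoSBNamespace1234 \
    --queue-name ContosoSBQueue1234 \
    --resource-group ContosoResources \
    --query primaryConnectionString -o tsv
```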
#### Create the new routing resources in the new location
If you want to move the routing resources, you must manually set up the resource
Now you have a template that will create a new hub that looks almost exactly like the old hub, depending on how you decided to handle the routing.
-## Move -- create the new hub in the new region by loading the template
+## Create the new hub in the new region by loading the template
Create the new hub in the new location using the template. If you have routing resources that are going to move, the resources should be set up in the new location and the references in the template updated to match. If you are not moving the routing resources, they should be in the template with the updated keys.
Create the new hub in the new location using the template. If you have routing r
Now that you have your clone up and running, you need to copy all of the devices from the original hub to the clone.
-There are multiple ways to accomplish this. You either originally used [Device Provisioning Service (DPS)](../iot-dps/about-iot-dps.md)to provision the devices, or you didn't. If you did, this is not difficult. If you did not, this can be very complicated.
+There are multiple ways to copy the devices. You either originally used [Device Provisioning Service (DPS)](../iot-dps/about-iot-dps.md) to provision the devices, or you didn't. If you did, this process isn't difficult. If you didn't, this process can be complicated.
-If you did not use DPS to provision your devices, you can skip the next section and start with [Using Import/Export to move the devices to the new hub](#using-import-export-to-move-the-devices-to-the-new-hub).
+If you didn't use DPS to provision your devices, you can skip the next section and start with [Using Import/Export to move the devices to the new hub](#using-import-export-to-move-the-devices-to-the-new-hub).
## Using DPS to re-provision the devices in the new hub
The application targets .NET Core, so you can run it on either Windows or Linux.
Here are the five options you specify when you run the application. We'll put these on the command line in a minute.
-* **addDevices** (argument 1) -- set this to true if you want to add virtual devices that are generated for you. These are added to the source hub. Also, set **numToAdd** (argument 2) to specify how many devices you want to add. The maximum number of devices you can register to a hub is one million.The purpose of this option is for testing -- you can generate a specific number of devices, and then copy them to another hub.
+* **addDevices** (argument 1) -- set this to true if you want to add virtual devices that are generated for you. These are added to the source hub. Also, set **numToAdd** (argument 2) to specify how many devices you want to add. The maximum number of devices you can register to a hub is one million. The purpose of this option is for testing. You can generate a specific number of devices, and then copy them to another hub.
* **copyDevices** (argument 3) -- set this to true to copy the devices from one hub to another.
-* **deleteSourceDevices** (argument 4) -- set this to true to delete all of the devices registered to the source hub. We recommending waiting until you are certain all of the devices have been transferred before you run this. Once you delete the devices, you can't get them back.
+* **deleteSourceDevices** (argument 4) -- set this to true to delete all of the devices registered to the source hub. We recommend waiting until you are certain all of the devices have been transferred before you run this. Once you delete the devices, you can't get them back.
* **deleteDestDevices** (argument 5) -- set this to true to delete all of the devices registered to the destination hub (the clone). You might want to do this if you want to copy the devices more than once.
-The basic command will be *dotnet run* -- this tells .NET to build the local csproj file and then run it. You add your command-line arguments to the end before you run it.
+The basic command is *dotnet run*, which tells .NET to build the local csproj file and then run it. You add your command-line arguments to the end before you run it.
Your command-line will look like these examples:
Your command-line will look like these examples:
1. To get the connection string values, sign in to the [Azure portal](https://portal.azure.com).
-1. Put the connection strings somewhere you can retrieve them, such as NotePad. If you copy the following, you can paste the connection strings in directly where they go. Don't add spaces around the equal sign, or it changes the variable name. Also, you do not need double-quotes around the connection strings. If you put quotes around the storage account connection string, it won't work.
+1. Put the connection strings somewhere you can retrieve them, such as Notepad. If you copy the following, you can paste the connection strings in directly where they go. Don't add spaces around the equal sign, or it changes the variable name. Also, you don't need double-quotes around the connection strings. If you put quotes around the storage account connection string, it won't work.
- For Windows, this is how you set the environment variables:
+ Set the environment variables in Windows:
``` console SET IOTHUB_CONN_STRING=<put connection string to original IoT Hub here> SET DEST_IOTHUB_CONN_STRING=<put connection string to destination or clone IoT Hub here> SET STORAGE_ACCT_CONN_STRING=<put connection string to the storage account here> ```
-
- For Linux, this is how you define the environment variables:
+
+ Set the environment variables in Linux:
``` console export IOTHUB_CONN_STRING="<put connection string to original IoT Hub here>"
Now you have the environment variables in a file with the SET commands, and you
You can view the devices in the [Azure portal](https://portal.azure.com) and verify they are in the new location.
-1. Go to the new hub using the [Azure portal](https://portal.azure.com). Select your hub, then select **IoT Devices**. You see the devices you just copied from the old hub to the cloned hub. You can also view the properties for the cloned hub.
+1. Go to the new hub using the [Azure portal](https://portal.azure.com). Select your hub, then select **IoT Devices**. You see the devices you copied from the old hub to the cloned hub. You can also view the properties for the cloned hub.
1. Check for import/export errors by going to the Azure storage account in the [Azure portal](https://portal.azure.com) and looking in the `devicefiles` container for the `ImportErrors.log`. If this file is empty (the size is 0), there were no errors. If you try to import the same device more than once, it rejects the device the second time and adds an error message to the log file.
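If you prefer the command line for that check, here's a hedged sketch that downloads the log, assuming a hypothetical storage account name:

```azurecli
# Sketch: download the import error log; a 0-byte file means there were no errors.
az storage blob download \
    --account-name contosostorage1234 \
    --container-name devicefiles \
    --name ImportErrors.log \
    --file ImportErrors.log \
    --auth-mode login
```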
-### Committing the changes
+### Commit the changes
At this point, you have copied your hub to the new location and migrated the devices to the new clone. Now you need to make changes so the devices work with the cloned hub.
If you have implemented routing, test and make sure your messages are routed to
## Clean-up
-Don't clean up until you are really certain the new hub is up and running and the devices are working correctly. Also be sure to test the routing if you are using that feature. When you're ready, clean up the old resources by performing these steps:
+Don't clean up until you are certain the new hub is up and running and the devices are working correctly. Also be sure to test the routing if you are using that feature. When you're ready, clean up the old resources by performing these steps:
* If you haven't already, delete the old hub. This removes all of the active devices from the hub.
Don't clean up until you are really certain the new hub is up and running and th
You have cloned an IoT hub into a new hub in a new region, complete with the devices. For more information about performing bulk operations against the identity registry in an IoT Hub, see [Import and export IoT Hub device identities in bulk](iot-hub-bulk-identity-mgmt.md).
-For more information about IoT Hub and development for the hub, please see the following articles.
+For more information about IoT Hub and development for the hub, see the following articles:
* [IoT Hub developer's guide](iot-hub-devguide.md)
For more information about IoT Hub and development for the hub, please see the f
* [IoT Hub device management overview](iot-hub-device-management-overview.md)
-* If you want to deploy the sample application, please see [.NET Core application deployment](/dotnet/core/deploying/index).
+If you want to deploy the sample application, see [.NET Core application deployment](/dotnet/core/deploying/index).
iot-hub Iot Hub Rm Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-rm-rest.md
To complete this tutorial, you need the following:
static string rgName = "{Resource group name}"; static string iotHubName = "{IoT Hub name including your initials}"; ```
-
+ [!INCLUDE [iot-hub-pii-note-naming-hub](../../includes/iot-hub-pii-note-naming-hub.md)] [!INCLUDE [iot-hub-get-access-token](../../includes/iot-hub-get-access-token.md)]
Use the [IoT Hub resource provider REST API](/rest/api/iothub/iothubresource) to
```csharp var content = new StringContent(JsonConvert.SerializeObject(description), Encoding.UTF8, "application/json");
- var requestUri = string.Format("https://management.azure.com/subscriptions/{0}/resourcegroups/{1}/providers/Microsoft.devices/IotHubs/{2}?api-version=2016-02-03", subscriptionId, rgName, iotHubName);
+ var requestUri = string.Format("https://management.azure.com/subscriptions/{0}/resourcegroups/{1}/providers/Microsoft.devices/IotHubs/{2}?api-version=2021-04-12", subscriptionId, rgName, iotHubName);
var result = client.PutAsync(requestUri, content).Result; if (!result.IsSuccessStatusCode)
Use the [IoT Hub resource provider REST API](/rest/api/iothub/iothubresource) to
6. Add the following code to the end of the **CreateIoTHub** method. This code retrieves the keys of the IoT hub you created and prints them to the console: ```csharp
- var listKeysUri = string.Format("https://management.azure.com/subscriptions/{0}/resourceGroups/{1}/providers/Microsoft.Devices/IotHubs/{2}/IoTHubKeys/listkeys?api-version=2016-02-03", subscriptionId, rgName, iotHubName);
+ var listKeysUri = string.Format("https://management.azure.com/subscriptions/{0}/resourceGroups/{1}/providers/Microsoft.Devices/IotHubs/{2}/IoTHubKeys/listkeys?api-version=2021-04-12", subscriptionId, rgName, iotHubName);
var keysresults = client.PostAsync(listKeysUri, null).Result; Console.WriteLine("Keys: {0}", keysresults.Content.ReadAsStringAsync().Result);
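The same management call works outside C#; here's a hedged curl sketch using the listkeys URI from the snippet above, with the token, subscription, and resource names as placeholders:

```bash
# Sketch: list the keys of an IoT hub through the resource provider REST API.
curl -X POST \
  "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Devices/IotHubs/<iot-hub-name>/IoTHubKeys/listkeys?api-version=2021-04-12" \
  -H "Authorization: Bearer <access-token>" \
  -d ''
```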
iot-hub Tutorial Routing Config Message Routing CLI https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-routing-config-message-routing-CLI.md
- Title: Tutorial - Configure message routing for Azure IoT Hub using the Azure CLI
-description: Tutorial - Configure message routing for Azure IoT Hub using the Azure CLI. Depending on properties in the message, route to either a storage account or a Service Bus queue.
---- Previously updated : 8/20/2021--
-#Customer intent: As a developer, I want to be able to route messages sent to my IoT hub to different destinations based on properties stored in the message. I want to be able to set up the resource and the routing using the Azure CLI.
--
-# Tutorial: Use the Azure CLI to configure IoT Hub message routing
---
-## Download the script (optional)
-
-For the second part of this tutorial, you download and run a Visual Studio application to send messages to the IoT Hub. There is a folder in the download that contains the Azure Resource Manager template and parameters file, as well as the Azure CLI and PowerShell scripts.
-
-If you want to view the finished script, download the [Azure IoT C# Samples](https://github.com/Azure-Samples/azure-iot-samples-csharp/archive/main.zip). Unzip the main.zip file. The Azure CLI script is in /iot-hub/Tutorials/Routing/SimulatedDevice/resources/ as **iothub_routing_cli.azcli**.
-
-## Use the Azure CLI to create your resources
-
-Copy and paste the script below into Cloud Shell and press Enter. It runs the script one line at a time. This first section of the script will create the base resources for this tutorial, including the storage account, IoT Hub, Service Bus Namespace, and Service Bus queue. As you go through the rest of the tutorial, copy each block of script and paste it into Cloud Shell to run it.
-
-> [!TIP]
-> A tip about debugging: this script uses the continuation symbol (the backslash `\`) to make the script more readable. If you have a problem running the script, make sure your Cloud Shell session is running `bash` and that there are no spaces after any of the backslashes.
->
-
-There are several resource names that must be globally unique, such as the IoT Hub name and the storage account name. To make this easier, those resource names are appended with a random alphanumeric value called *randomValue*. The randomValue is generated once at the top of the script and appended to the resource names as needed throughout the script. If you don't want it to be random, you can set it to an empty string or to a specific value.
-
-> [!IMPORTANT]
-> The variables set in the initial script are also used by the routing script, so run all of the script in the same Cloud Shell session. If you open a new session to run the script for setting up the routing, several of the variables will be missing values.
->
-
-```azurecli-interactive
-# This command retrieves the subscription id of the current Azure account.
-# This field is used when setting up the routing queries.
-subscriptionID=$(az account show --query id -o tsv)
-
-# Concatenate this number onto the resources that have to be globally unique.
-# You can set this to "" or to a specific value if you don't want it to be random.
-# This retrieves a random value.
-randomValue=$RANDOM
-
-# This command installs the IOT Extension for Azure CLI.
-# You only need to install this the first time.
-# You need it to create the device identity.
-az extension add --name azure-iot
-
-# Set the values for the resource names that
-# don't have to be globally unique.
-location=westus
-resourceGroup=ContosoResources
-iotHubConsumerGroup=ContosoConsumers
-containerName=contosoresults
-iotDeviceName=Contoso-Test-Device
-
-# Create the resource group to be used
-# for all the resources for this tutorial.
-az group create --name $resourceGroup \
- --location $location
-
-# The IoT hub name must be globally unique,
-# so add a random value to the end.
-iotHubName=ContosoTestHub$randomValue
-echo "IoT hub name = " $iotHubName
-
-# Create the IoT hub.
-az iot hub create --name $iotHubName \
- --resource-group $resourceGroup \
- --sku S1 --location $location
-
-# Add a consumer group to the IoT hub for the 'events' endpoint.
-az iot hub consumer-group create --hub-name $iotHubName \
- --name $iotHubConsumerGroup
-
-# The storage account name must be globally unique,
-# so add a random value to the end.
-storageAccountName=contosostorage$randomValue
-echo "Storage account name = " $storageAccountName
-
-# Create the storage account to be used as a routing destination.
-az storage account create --name $storageAccountName \
- --resource-group $resourceGroup \
- --location $location \
- --sku Standard_LRS
-
-# Get the primary storage account key.
-# You need this to create the container.
-storageAccountKey=$(az storage account keys list \
- --resource-group $resourceGroup \
- --account-name $storageAccountName \
- --query "[0].value" | tr -d '"')
-
-# See the value of the storage account key.
-echo "storage account key = " $storageAccountKey
-
-# Create the container in the storage account.
-az storage container create --name $containerName \
- --account-name $storageAccountName \
- --account-key $storageAccountKey \
- --public-access off
-
-# The Service Bus namespace must be globally unique,
-# so add a random value to the end.
-sbNamespace=ContosoSBNamespace$randomValue
-echo "Service Bus namespace = " $sbNamespace
-
-# Create the Service Bus namespace.
-az servicebus namespace create --resource-group $resourceGroup \
- --name $sbNamespace \
- --location $location
-
-# The Service Bus queue name must be globally unique,
-# so add a random value to the end.
-sbQueueName=ContosoSBQueue$randomValue
-echo "Service Bus queue name = " $sbQueueName
-
-# Create the Service Bus queue to be used as a routing destination.
-az servicebus queue create --name $sbQueueName \
- --namespace-name $sbNamespace \
- --resource-group $resourceGroup
-
-# Create the IoT device identity to be used for testing.
-az iot hub device-identity create --device-id $iotDeviceName \
- --hub-name $iotHubName
-
-# Retrieve the information about the device identity, then copy the primary key to
-# Notepad. You need this to run the device simulation during the testing phase.
-az iot hub device-identity show --device-id $iotDeviceName \
- --hub-name $iotHubName
-```
-
-Now that the base resources are set up, you can configure the message routing.
-
-## Set up message routing
--
-To create a routing endpoint, use [az iot hub routing-endpoint create](/cli/azure/iot/hub/routing-endpoint#az-iot-hub-routing-endpoint-create). To create the message route for the endpoint, use [az iot hub route create](/cli/azure/iot/hub/route#az-iot-hub-route-create).
-
-### Route to a storage account
--
-First, set up the endpoint for the storage account, then set up the route.
-
-These are the variables used by the script that must be set within your Cloud Shell session:
-
-**storageConnectionString**: This value is retrieved from the storage account set up in the previous script. It is used by the message routing to access the storage account.
-
- **resourceGroup**: There are two occurrences of resource group -- set them to your resource group.
-
-**endpoint subscriptionID**: This field is set to the Azure subscriptionID for the endpoint.
-
-**endpointType**: This field is the type of endpoint. This value must be set to `azurestoragecontainer`, `eventhub`, `servicebusqueue`, or `servicebustopic`. For your purposes here, set it to `azurestoragecontainer`.
-
-**iotHubName**: This field is the name of the hub that will do the routing.
-
-**containerName**: This field is the name of the container in the storage account to which data will be written.
-
-**encoding**: This field will be either `avro` or `json`. This denotes the format of the stored data.
-
-**routeName**: This field is the name of the route you are setting up.
-
-**endpointName**: This field is the name identifying the endpoint.
-
-**enabled**: This field defaults to `true`, indicating that the message route should be enabled after being created.
-
-**condition**: This field is the query used to filter for the messages sent to this endpoint. The query condition for the messages being routed to storage is `level="storage"`.
-
-Copy this script and paste it into your Cloud Shell window and run it.
-
-```azurecli
-##### ROUTING FOR STORAGE #####
-
-endpointName="ContosoStorageEndpoint"
-endpointType="azurestoragecontainer"
-routeName="ContosoStorageRoute"
-condition='level="storage"'
-
-# Get the connection string for the storage account.
-# Adding the "-o tsv" makes it be returned without the default double quotes around it.
-storageConnectionString=$(az storage account show-connection-string \
- --name $storageAccountName --query connectionString -o tsv)
-```
-
-The next step is to create the routing endpoint for the storage account. You also specify the container in which the results will be stored. The container was created previously when the storage account was created.
-
-```azurecli
-# Create the routing endpoint for storage.
-az iot hub routing-endpoint create \
- --connection-string $storageConnectionString \
- --endpoint-name $endpointName \
- --endpoint-resource-group $resourceGroup \
- --endpoint-subscription-id $subscriptionID \
- --endpoint-type $endpointType \
- --hub-name $iotHubName \
- --container $containerName \
- --resource-group $resourceGroup \
- --encoding avro
-```
-
-Next, create the route for the storage endpoint. The message route designates where to send the messages that meet the query specification.
-
-```azurecli
-# Create the route for the storage endpoint.
-az iot hub route create \
- --name $routeName \
- --hub-name $iotHubName \
- --source devicemessages \
- --resource-group $resourceGroup \
- --endpoint-name $endpointName \
- --enabled \
- --condition $condition
-```
-
-### Route to a Service Bus queue
-
-Now set up the routing for the Service Bus queue. To retrieve the connection string for the Service Bus queue, you must create an authorization rule that has the correct rights defined. The following script creates an authorization rule for the Service Bus queue called `sbauthrule`, and sets the rights to `Listen Manage Send`. Once this authorization rule is defined, you can use it to retrieve the connection string for the queue.
-
-```azurecli
-# Create the authorization rule for the Service Bus queue.
-az servicebus queue authorization-rule create \
- --name "sbauthrule" \
- --namespace-name $sbNamespace \
- --queue-name $sbQueueName \
- --resource-group $resourceGroup \
- --rights Listen Manage Send \
- --subscription $subscriptionID
-```
-
-Now use the authorization rule to retrieve the connection string to the Service Bus queue.
-
-```azurecli
-# Get the Service Bus queue connection string.
-# The "-o tsv" ensures it is returned without the default double-quotes.
-sbqConnectionString=$(az servicebus queue authorization-rule keys list \
- --name "sbauthrule" \
- --namespace-name $sbNamespace \
- --queue-name $sbQueueName \
- --resource-group $resourceGroup \
- --subscription $subscriptionID \
- --query primaryConnectionString -o tsv)
-
-# Show the Service Bus queue connection string.
-echo "service bus queue connection string = " $sbqConnectionString
-```
-
-Now set up the routing endpoint and the message route for the Service Bus queue. These are the variables used by the script that must be set within your Cloud Shell session:
-
-**endpointName**: This field is the name identifying the endpoint.
-
-**endpointType**: This field is the type of endpoint. This value must be set to `azurestoragecontainer`, `eventhub`, `servicebusqueue`, or `servicebustopic`. For your purposes here, set it to `servicebusqueue`.
-
-**routeName**: This field is the name of the route you are setting up.
-
-**condition**: This field is the query used to filter for the messages sent to this endpoint. The query condition for the messages being routed to the Service Bus queue is `level="critical"`.
-
-Here is the Azure CLI for the routing endpoint and the message route for the Service Bus queue.
-
-```azurecli
-endpointName="ContosoSBQueueEndpoint"
-endpointType="ServiceBusQueue"
-routeName="ContosoSBQueueRoute"
-condition='level="critical"'
-
-# Set up the routing endpoint for the Service Bus queue.
-# This uses the Service Bus queue connection string.
-az iot hub routing-endpoint create \
- --connection-string $sbqConnectionString \
- --endpoint-name $endpointName \
- --endpoint-resource-group $resourceGroup \
- --endpoint-subscription-id $subscriptionID \
- --endpoint-type $endpointType \
- --hub-name $iotHubName \
- --resource-group $resourceGroup
-
-# Set up the message route for the Service Bus queue endpoint.
-az iot hub route create --name $routeName \
- --hub-name $iotHubName \
- --source-type devicemessages \
- --resource-group $resourceGroup \
- --endpoint-name $endpointName \
- --enabled \
- --condition $condition
- ```
-
-### View message routing in the portal
--
-## Next steps
-
-Now that you have the resources set up and the message routes configured, advance to the next tutorial to learn how to send messages to the IoT hub and see them be routed to the different destinations.
-
-> [!div class="nextstepaction"]
-> [Part 2 - View the message routing results](tutorial-routing-view-message-routing-results.md)
iot-hub Tutorial Routing Config Message Routing Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-routing-config-message-routing-PowerShell.md
- Title: Tutorial - Configure message routing for Azure IoT Hub with Azure PowerShell
-description: Tutorial - Configure message routing for Azure IoT Hub using Azure PowerShell. Depending on properties in the message, route to either a storage account or a Service Bus queue.
----- Previously updated : 03/25/2019--
-#Customer intent: As a developer, I want to be able to route messages sent to my IoT hub to different destinations based on properties stored in the message. I want to be able to set up the resources and the routing using Azure PowerShell.
--
-# Tutorial: Use Azure PowerShell to configure IoT Hub message routing
---
-## Download the script (optional)
-
-For the second part of this tutorial, you download and run a Visual Studio application to send messages to the IoT Hub. There is a folder in the download that contains the Azure Resource Manager template and parameters file, as well as the Azure CLI and PowerShell scripts.
-
-If you want to view the finished script, download the [Azure IoT C# Samples](https://github.com/Azure-Samples/azure-iot-samples-csharp/archive/main.zip). Unzip the main.zip file. The Azure CLI script is in /iot-hub/Tutorials/Routing/SimulatedDevice/resources/ as **iothub_routing_psh.ps1**.
-
-## Create your resources
-
-Start by creating the resources with PowerShell.
-
-### Use PowerShell to create your base resources
-
-Copy and paste the script below into Cloud Shell and press Enter. It runs the script one line at a time. This first section of the script will create the base resources for this tutorial, including the storage account, IoT Hub, Service Bus Namespace, and Service Bus queue. As you go through the tutorial, copy each block of script and paste it into Cloud Shell to run it.
-
-There are several resource names that must be globally unique, such as the IoT Hub name and the storage account name. To make this easier, those resource names are appended with a random alphanumeric value called *randomValue*. The randomValue is generated once at the top of the script and appended to the resource names as needed throughout the script. If you don't want it to be random, you can set it to an empty string or to a specific value.
-
-> [!IMPORTANT]
-> The variables set in the initial script are also used by the routing script, so run all of the script in the same Cloud Shell session. If you open a new session to run the script for setting up the routing, several of the variables will be missing values.
->
-
-```azurepowershell-interactive
-# This command retrieves the subscription id of the current Azure account.
-# This field is used when setting up the routing queries.
-$subscriptionID = (Get-AzContext).Subscription.Id
-
-# Concatenate this number onto the resources that have to be globally unique.
-# You can set this to "" or to a specific value if you don't want it to be random.
-# This retrieves the first 6 digits of a random value.
-$randomValue = "$(Get-Random)".Substring(0,6)
-
-# Set the values for the resource names that don't have to be globally unique.
-$location = "West US"
-$resourceGroup = "ContosoResources"
-$iotHubConsumerGroup = "ContosoConsumers"
-$containerName = "contosoresults"
-
-# Create the resource group to be used
-# for all resources for this tutorial.
-New-AzResourceGroup -Name $resourceGroup -Location $location
-
-# The IoT hub name must be globally unique,
-# so add a random value to the end.
-$iotHubName = "ContosoTestHub" + $randomValue
-Write-Host "IoT hub name is " $iotHubName
-
-# Create the IoT hub.
-New-AzIotHub -ResourceGroupName $resourceGroup `
- -Name $iotHubName `
- -SkuName "S1" `
- -Location $location `
- -Units 1
-
-# Add a consumer group to the IoT hub.
-Add-AzIotHubEventHubConsumerGroup -ResourceGroupName $resourceGroup `
- -Name $iotHubName `
- -EventHubConsumerGroupName $iotHubConsumerGroup
-
-# The storage account name must be globally unique, so add a random value to the end.
-$storageAccountName = "contosostorage" + $randomValue
-Write-Host "storage account name is " $storageAccountName
-
-# Create the storage account to be used as a routing destination.
-# Save the context for the storage account
-# to be used when creating a container.
-$storageAccount = New-AzStorageAccount -ResourceGroupName $resourceGroup `
- -Name $storageAccountName `
- -Location $location `
- -SkuName Standard_LRS `
- -Kind Storage
-# Retrieve the connection string from the context.
-$storageConnectionString = $storageAccount.Context.ConnectionString
-Write-Host "storage connection string = " $storageConnectionString
-
-# Create the container in the storage account.
-New-AzStorageContainer -Name $containerName `
- -Context $storageAccount.Context
-
-# The Service Bus namespace must be globally unique,
-# so add a random value to the end.
-$serviceBusNamespace = "ContosoSBNamespace" + $randomValue
-Write-Host "Service Bus namespace is " $serviceBusNamespace
-
-# Create the Service Bus namespace.
-New-AzServiceBusNamespace -ResourceGroupName $resourceGroup `
- -Location $location `
- -Name $serviceBusNamespace
-
-# The Service Bus queue name must be globally unique,
-# so add a random value to the end.
-$serviceBusQueueName = "ContosoSBQueue" + $randomValue
-Write-Host "Service Bus queue name is " $serviceBusQueueName
-
-# Create the Service Bus queue to be used as a routing destination.
-New-AzServiceBusQueue -ResourceGroupName $resourceGroup `
- -Namespace $serviceBusNamespace `
- -Name $serviceBusQueueName `
- -EnablePartitioning $False
-```
-
-### Create a simulated device
--
-Now that the base resources are set up, you can configure the message routing.
-
-## Set up message routing
--
-To create a routing endpoint, use [Add-AzIotHubRoutingEndpoint](/powershell/module/az.iothub/Add-AzIotHubRoutingEndpoint). To create the messaging route for the endpoint, use [Add-AzIotHubRoute](/powershell/module/az.iothub/Add-AzIoTHubRoute).
-
-### Route to a storage account
-
-First, set up the endpoint for the storage account, then create the message route.
--
-These are the variables used by the script that must be set within your Cloud Shell session:
-
-**resourceGroup**: There are two occurrences of this field -- set both of them to your resource group.
-
-**name**: This field is the name of the IoT Hub to which the routing will apply.
-
-**endpointName**: This field is the name identifying the endpoint.
-
-**endpointType**: This field is the type of endpoint. This value must be set to `azurestoragecontainer`, `eventhub`, `servicebusqueue`, or `servicebustopic`. For your purposes here, set it to `azurestoragecontainer`.
-
-**subscriptionID**: This field is set to the subscriptionID for your Azure account.
-
-**storageConnectionString**: This value is retrieved from the storage account set up in the previous script. It is used by the routing to access the storage account.
-
-**containerName**: This field is the name of the container in the storage account to which data will be written.
-
-**Encoding**: Set this field to either `AVRO` or `JSON`. This designates the format of the stored data. The default is AVRO.
-
-**routeName**: This field is the name of the route you are setting up.
-
-**condition**: This field is the query used to filter for the messages sent to this endpoint. The query condition for the messages being routed to storage is `level="storage"`.
-
-**enabled**: This field defaults to `true`, indicating that the message route should be enabled after being created.
-
-Copy this script and paste it into your Cloud Shell window.
-
-```powershell
-##### ROUTING FOR STORAGE #####
-
-$endpointName = "ContosoStorageEndpoint"
-$endpointType = "azurestoragecontainer"
-$routeName = "ContosoStorageRoute"
-$condition = 'level="storage"'
-```
-
-The next step is to create the routing endpoint for the storage account. You also specify the container in which the results will be stored. The container was created when the storage account was created.
-
-```powershell
-# Create the routing endpoint for storage.
-# Specify 'AVRO' or 'JSON' for the encoding of the data.
-Add-AzIotHubRoutingEndpoint `
- -ResourceGroupName $resourceGroup `
- -Name $iotHubName `
- -EndpointName $endpointName `
- -EndpointType $endpointType `
- -EndpointResourceGroup $resourceGroup `
- -EndpointSubscriptionId $subscriptionId `
- -ConnectionString $storageConnectionString `
- -ContainerName $containerName `
- -Encoding AVRO
-```
-
-Next, create the message route for the storage endpoint. The message route designates where to send the messages that meet the query specification.
-
-```powershell
-# Create the route for the storage endpoint.
-Add-AzIotHubRoute `
- -ResourceGroupName $resourceGroup `
- -Name $iotHubName `
- -RouteName $routeName `
- -Source DeviceMessages `
- -EndpointName $endpointName `
- -Condition $condition `
- -Enabled
-```
-
-### Route to a Service Bus queue
-
-Now set up the routing for the Service Bus queue. To retrieve the connection string for the Service Bus queue, you must create an authorization rule that has the correct rights defined. The following script creates an authorization rule for the Service Bus queue called `sbauthrule`, and sets the rights to `Listen Manage Send`. Once this authorization rule is set up, you can use it to retrieve the connection string for the queue.
-
-```powershell
-##### ROUTING FOR SERVICE BUS QUEUE #####
-
-# Create the authorization rule for the Service Bus queue.
-New-AzServiceBusAuthorizationRule `
- -ResourceGroupName $resourceGroup `
- -NamespaceName $serviceBusNamespace `
- -Queue $serviceBusQueueName `
- -Name "sbauthrule" `
- -Rights @("Manage","Listen","Send")
-```
-
-Now use the authorization rule to retrieve the Service Bus queue key. The key's primary connection string is used later in the script to create the routing endpoint.
-
-```powershell
-$sbqkey = Get-AzServiceBusKey `
- -ResourceGroupName $resourceGroup `
- -NamespaceName $serviceBusNamespace `
- -Queue $serviceBusQueueName `
- -Name "sbauthrule"
-```
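-
-Optionally, confirm that the authorization rule exists and that the key was retrieved before continuing. This is a quick check using the same Az.ServiceBus cmdlets, not a required step.
-
-```powershell
-# Optional: confirm the authorization rule and inspect the retrieved connection string.
-Get-AzServiceBusAuthorizationRule `
-    -ResourceGroupName $resourceGroup `
-    -NamespaceName $serviceBusNamespace `
-    -Queue $serviceBusQueueName `
-    -Name "sbauthrule"
-
-$sbqkey.PrimaryConnectionString
-```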
-
-Now set up the routing endpoint and the message route for the Service Bus queue. These are the variables used by the script that must be set within your Cloud Shell session:
-
-**endpointName**: This field is the name identifying the endpoint.
-
-**endpointType**: This field is the type of endpoint. This value must be set to `azurestoragecontainer`, `eventhub`, `servicebusqueue`, or `servicebustopic`. For your purposes here, set it to `servicebusqueue`.
-
-**routeName**: This field is the name of the route you are setting up.
-
-**condition**: This field is the query used to filter for the messages sent to this endpoint. The query condition for the messages being routed to the Service Bus queue is `level="critical"`.
-
-Here is the Azure PowerShell script that sets up the message routing for the Service Bus queue.
-
-```powershell
-$endpointName = "ContosoSBQueueEndpoint"
-$endpointType = "servicebusqueue"
-$routeName = "ContosoSBQueueRoute"
-$condition = 'level="critical"'
-
-# Add-AzIotHubRoutingEndpoint sometimes fails with the error
-# "Operation returned an invalid status code 'BadRequest'" when the previous
-# steps haven't finished provisioning yet. Pausing for 90 seconds before
-# adding the endpoint gives those steps time to complete. If you're running
-# the script interactively and it still fails, you can rerun the remaining
-# lines, because the variables are already set at this point.
-# If the problem persists, report it to the IoT team here:
-# https://github.com/Azure/azure-powershell/issues
-
-# Pause for 90 seconds to allow the previous steps to complete.
-Start-Sleep -Seconds 90
-
-# This command adds the routing endpoint, using the connection string property from the key.
-Add-AzIotHubRoutingEndpoint `
- -ResourceGroupName $resourceGroup `
- -Name $iotHubName `
- -EndpointName $endpointName `
- -EndpointType $endpointType `
- -EndpointResourceGroup $resourceGroup `
- -EndpointSubscriptionId $subscriptionId `
- -ConnectionString $sbqkey.PrimaryConnectionString
-
-# Set up the message route for the Service Bus queue endpoint.
-Add-AzIotHubRoute `
- -ResourceGroupName $resourceGroup `
- -Name $iotHubName `
- -RouteName $routeName `
- -Source DeviceMessages `
- -EndpointName $endpointName `
- -Condition $condition `
- -Enabled
-```
-
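-Optionally, before switching to the portal, you can confirm from the same session that both endpoints and both routes were created. This quick check uses standard Az.IotHub cmdlets:
-
-```powershell
-# Optional: list the routing endpoints and routes to confirm they were created.
-Get-AzIotHubRoutingEndpoint -ResourceGroupName $resourceGroup -Name $iotHubName
-Get-AzIotHubRoute -ResourceGroupName $resourceGroup -Name $iotHubName
-```
-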
-### View message routing in the portal
--
-## Next steps
-
-Now that you have the resources set up and the message routes configured, advance to the next tutorial to learn how to send messages to the IoT hub and see them be routed to the different destinations.
-
-> [!div class="nextstepaction"]
-> [Part 2 - View the message routing results](tutorial-routing-view-message-routing-results.md)
iot-hub Tutorial Routing Config Message Routing RM Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-routing-config-message-routing-RM-template.md
- Title: Tutorial - Configure message routing for Azure IoT Hub using an Azure Resource Manager template
-description: Tutorial - Configure message routing for Azure IoT Hub using an Azure Resource Manager template
---- Previously updated : 08/24/2021--
-#Customer intent: As a developer, I want to be able to route messages sent to my IoT hub to different destinations based on properties stored in the message. This step of the tutorial needs to show me how to set up my resources using an Azure Resource Manager template.
--
-# Tutorial: Use an Azure Resource Manager template to configure IoT Hub message routing
---
-## Message routing
--
-## Download the template and parameters file
-
-For the second part of this tutorial, you download and run a Visual Studio application to send messages to the IoT Hub. There is a folder in that download that contains the Azure Resource Manager template and parameters file, as well as the Azure CLI and PowerShell scripts.
-
-Go ahead and download the [Azure IoT C# Samples](https://github.com/Azure-Samples/azure-iot-samples-csharp/archive/main.zip) now. Unzip the main.zip file. The Resource Manager template and the parameters file are in /iot-hub/Tutorials/Routing/SimulatedDevice/resources/ as **template_iothub.json** and **template_iothub_parameters.json**.
-
-## Create your resources
-
-You're going to use an Azure Resource Manager (RM) template to create all of your resources. The Azure CLI and PowerShell scripts can be run a few lines at a time. An RM template is deployed in one step. This article shows you the sections separately to help you understand each one. Then it will show you how to deploy the template, and create the virtual device for testing. After the template is deployed, you can view the message routing configuration in the portal.
-
-There are several resource names that must be globally unique, such as the IoT Hub name and the storage account name. To make naming the resources easier, those resource names are set up to append a random alphanumeric value generated from the current date/time.
-
-If you look at the template, you'll see where variables are set up for these resources that take the parameter passed in and concatenate *randomValue* to the parameter.
-
-The following section explains the parameters used.
-
-### Parameters
-
-Most of these parameters have default values. The ones ending with **_in** are concatenated with *randomValue* to make them globally unique.
-
-**randomValue**: This value is generated from the current date/time when you deploy the template. This field is not in the parameters file, as it is generated in the template itself.
-
-**subscriptionId**: This field is set for you to the subscription into which you are deploying the template. This field is not in the parameters file since it is set for you.
-
-**IoTHubName_in**: This field is the base IoT Hub name, which is concatenated with the randomValue so it is globally unique.
-
-**location**: This field is the Azure region into which you are deploying, such as "westus".
-
-**consumer_group**: This field is the consumer group set for messages coming through the routing endpoint. It's used to filter results in Azure Stream Analytics. For example, you can read the whole stream, or, if data comes through with consumer_group set to **Contoso**, you can set up an Azure Stream Analytics stream (and Power BI report) that shows only those entries. This field is used in part 2 of this tutorial.
-
-**sku_name**: This field is the scaling for the IoT Hub. This value must be S1 or above; a free tier does not work for this tutorial because it does not allow multiple endpoints.
-
-**sku_units**: This field goes with the **sku_name**, and is the number of IoT Hub units that can be used.
-
-**d2c_partitions**: This field is the number of partitions used for the event stream.
-
-**storageAccountName_in**: This field is the name of the storage account to be created. Messages are routed to a container in the storage account. This field is concatenated with the randomValue to make it globally unique.
-
-**storageContainerName**: This field is the name of the container in which the messages routed to the storage account are stored.
-
-**storage_endpoint**: This field is the name for the storage account endpoint used by the message routing.
-
-**service_bus_namespace_in**: This field is the name of the Service Bus namespace to be created. This value is concatenated with the randomValue to make it globally unique.
-
-**service_bus_queue_in**: This field is the name of the Service Bus queue used for routing messages. This value is concatenated with the randomValue to make it globally unique.
-
-**AuthRules_sb_queue**: This field is the name of the authorization rule for the Service Bus queue, used to retrieve the connection string for the queue.
-
-### Variables
-
-These values are used in the template, and are mostly derived from parameters.
-
-**queueAuthorizationRuleResourceId**: This field is the ResourceId for the authorization rule for the Service Bus queue. ResourceId is in turn used to retrieve the connection string for the queue.
-
-**iotHubName**: This field is the name of the IoT Hub after having randomValue concatenated.
-
-**storageAccountName**: This field is the name of the storage account after having randomValue concatenated.
-
-**service_bus_namespace**: This field is the namespace after having randomValue concatenated.
-
-**service_bus_queue**: This field is the Service Bus queue name after having randomValue concatenated.
-
-**sbVersion**: The version of the Service Bus API to use. In this case, it is "2017-04-01".
-
-### Resources: Storage account and container
-
-The first resource created is the storage account, along with the container to which messages are routed. The container is a resource under the storage account. It has a `dependsOn` clause for the storage account, requiring the storage account be created before the container.
-
-Here's what this section looks like:
-
-```json
-{
- "type": "Microsoft.Storage/storageAccounts",
- "name": "[variables('storageAccountName')]",
- "apiVersion": "2018-07-01",
- "location": "[parameters('location')]",
- "sku": {
- "name": "Standard_LRS",
- "tier": "Standard"
- },
- "kind": "Storage",
- "properties": {},
- "resources": [
- {
- "type": "blobServices/containers",
- "apiVersion": "2018-07-01",
- "name": "[concat('default/', parameters('storageContainerName'))]",
- "properties": {
- "publicAccess": "None"
- } ,
- "dependsOn": [
- "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]"
- ]
- }
- ]
-}
-```
-
-### Resources: Service Bus namespace and queue
-
-The second resource created is the Service Bus namespace, along with the Service Bus queue to which messages are routed. The SKU is set to Standard. The API version is retrieved from the variables. The namespace is also set to be activated when this section is deployed (`"status": "Active"`).
-
-```json
-{
- "type": "Microsoft.ServiceBus/namespaces",
- "comments": "The Sku should be 'Standard' for this tutorial.",
- "sku": {
- "name": "Standard",
- "tier": "Standard"
- },
- "name": "[variables('service_bus_namespace')]",
- "apiVersion": "[variables('sbVersion')]",
- "location": "[parameters('location')]",
- "properties": {
- "provisioningState": "Succeeded",
- "metricId": "[concat('a4295411-5eff-4f81-b77e-276ab1ccda12:', variables('service_bus_namespace'))]",
- "serviceBusEndpoint": "[concat('https://', variables('service_bus_namespace'),'.servicebus.windows.net:443/')]",
- "status": "Active"
- },
- "dependsOn": []
-}
-```
-
-This section creates the Service Bus queue. This part of the script has a `dependsOn` clause that ensures the namespace is created before the queue.
-
-```json
-{
- "type": "Microsoft.ServiceBus/namespaces/queues",
- "name": "[concat(variables('service_bus_namespace'), '/', variables('service_bus_queue'))]",
- "apiVersion": "[variables('sbVersion')]",
- "location": "[parameters('location')]",
- "scale": null,
- "properties": {},
- "dependsOn": [
- "[resourceId('Microsoft.ServiceBus/namespaces', variables('service_bus_namespace'))]"
- ]
-}
-```
-
-### Resources: IoT Hub and message routing
-
-Now that the storage account and Service Bus queue have been created, you create the IoT Hub that routes messages to them. The RM template uses `dependsOn` clauses so it doesn't try to create the hub before the Service Bus resources and the storage account have been created.
-
-Here's the first part of the IoT Hub section. This part of the template sets up the dependencies and starts with the properties.
-
-```json
-{
- "apiVersion": "2018-04-01",
- "type": "Microsoft.Devices/IotHubs",
- "name": "[variables('IoTHubName')]",
- "location": "[parameters('location')]",
- "dependsOn": [
- "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]",
- "[resourceId('Microsoft.ServiceBus/namespaces', variables('service_bus_namespace'))]",
- "[resourceId('Microsoft.ServiceBus/namespaces/queues', variables('service_bus_namespace'), variables('service_bus_queue'))]"
- ],
- "properties": {
- "eventHubEndpoints": {}
- "events": {
- "retentionTimeInDays": 1,
- "partitionCount": "[parameters('d2c_partitions')]"
- }
- },
-```
-
-The next section is the message routing configuration for the IoT Hub, starting with the endpoints. This part of the template sets up the routing endpoints for the Service Bus queue and the storage account, including the connection strings.
-
-To create the connection string for the queue, you need the queueAuthorizationRuleResourceId, which is retrieved inline. To create the connection string for the storage account, you retrieve the primary storage key and use it to build the connection string.
-
-The endpoint configuration is also where you set the blob format to `AVRO` or `JSON`.
--
- ```json
-"routing": {
- "endpoints": {
- "serviceBusQueues": [
- {
- "connectionString": "[Concat('Endpoint=sb://',variables('service_bus_namespace'),'.servicebus.windows.net/;SharedAccessKeyName=',parameters('AuthRules_sb_queue'),';SharedAccessKey=',listkeys(variables('queueAuthorizationRuleResourceId'),variables('sbVersion')).primaryKey,';EntityPath=',variables('service_bus_queue'))]",
- "name": "[parameters('service_bus_queue_endpoint')]",
- "subscriptionId": "[parameters('subscriptionId')]",
- "resourceGroup": "[resourceGroup().Name]"
- }
- ],
- "serviceBusTopics": [],
- "eventHubs": [],
- "storageContainers": [
- {
- "connectionString":
- "[Concat('DefaultEndpointsProtocol=https;AccountName=',variables('storageAccountName'),';AccountKey=',listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')), providers('Microsoft.Storage', 'storageAccounts').apiVersions[0]).keys[0].value)]",
- "containerName": "[parameters('storageContainerName')]",
- "fileNameFormat": "{iothub}/{partition}/{YYYY}/{MM}/{DD}/{HH}/{mm}",
- "batchFrequencyInSeconds": 100,
- "maxChunkSizeInBytes": 104857600,
- "encoding": "avro",
- "name": "[parameters('storage_endpoint')]",
- "subscriptionId": "[parameters('subscriptionId')]",
- "resourceGroup": "[resourceGroup().Name]"
- }
- ]
- },
-```
-
-This next section is for the message routes to the endpoints. There is one set up for each endpoint, so there is one for the Service Bus queue and one for the storage account container.
-
-Remember that the query condition for the messages being routed to storage is `level="storage"`, and the query condition for the messages being routed to the Service Bus queue is `level="critical"`.
-
-```json
-"routes": [
- {
- "name": "contosoStorageRoute",
- "source": "DeviceMessages",
- "condition": "level=\"storage\"",
- "endpointNames": [
- "[parameters('storage_endpoint')]"
- ],
- "isEnabled": true
- },
- {
- "name": "contosoSBQueueRoute",
- "source": "DeviceMessages",
- "condition": "level=\"critical\"",
- "endpointNames": [
- "[parameters('service_bus_queue_endpoint')]"
- ],
- "isEnabled": true
- }
-],
-```
-
-This JSON shows the rest of the IoT Hub section, which contains default information and the SKU for the hub.
-
-```json
- "fallbackRoute": {
- "name": "$fallback",
- "source": "DeviceMessages",
- "condition": "true",
- "endpointNames": [
- "events"
- ],
- "isEnabled": true
- }
- },
- "storageEndpoints": {
- "$default": {
- "sasTtlAsIso8601": "PT1H",
- "connectionString": "",
- "containerName": ""
- }
- },
- "messagingEndpoints": {
- "fileNotifications": {
- "lockDurationAsIso8601": "PT1M",
- "ttlAsIso8601": "PT1H",
- "maxDeliveryCount": 10
- }
- },
- "enableFileUploadNotifications": false,
- "cloudToDevice": {
- "maxDeliveryCount": 10,
- "defaultTtlAsIso8601": "PT1H",
- "feedback": {
- "lockDurationAsIso8601": "PT1M",
- "ttlAsIso8601": "PT1H",
- "maxDeliveryCount": 10
- }
- }
- },
- "sku": {
- "name": "[parameters('sku_name')]",
- "capacity": "[parameters('sku_units')]"
- }
-}
-```
-
-### Resources: Service Bus queue authorization rules
-
-The Service Bus queue authorization rule is used to retrieve the connection string for the Service Bus queue. It uses a `dependsOn` clause to ensure it is not created before the Service Bus namespace and the Service Bus queue.
-
-```json
-{
- "type": "Microsoft.ServiceBus/namespaces/queues/authorizationRules",
- "name": "[concat(variables('service_bus_namespace'), '/', variables('service_bus_queue'), '/', parameters('AuthRules_sb_queue'))]",
- "apiVersion": "[variables('sbVersion')]",
- "location": "[parameters('location')]",
- "scale": null,
- "properties": {
- "rights": [
- "Send"
- ]
- },
- "dependsOn": [
- "[resourceId('Microsoft.ServiceBus/namespaces', variables('service_bus_namespace'))]",
- "[resourceId('Microsoft.ServiceBus/namespaces/queues', variables('service_bus_namespace'), variables('service_bus_queue'))]"
- ]
-},
-```
-
-### Resources: Consumer group
-
-In this section, you create a consumer group for the IoT Hub data, to be used by Azure Stream Analytics in the second part of this tutorial.
-
-```json
-{
- "type": "Microsoft.Devices/IotHubs/eventHubEndpoints/ConsumerGroups",
- "name": "[concat(variables('iotHubName'), '/events/',parameters('consumer_group'))]",
- "apiVersion": "2018-04-01",
- "dependsOn": [
- "[concat('Microsoft.Devices/IotHubs/', variables('iotHubName'))]"
- ]
-}
-```
-
-### Resources: Outputs
-
-If you want to send a value back to the deployment script to be displayed, you use an output section. This part of the template returns the connection string for the Service Bus queue. Returning a value isn't required; it's included here as an example of how to return results to the calling script.
-
-```json
-"outputs": {
- "sbq_connectionString": {
- "type": "string",
- "value": "[Concat('Endpoint=sb://',variables('service_bus_namespace'),'.servicebus.windows.net/;SharedAccessKeyName=',parameters('AuthRules_sb_queue'),';SharedAccessKey=',listkeys(variables('queueAuthorizationRuleResourceId'),variables('sbVersion')).primaryKey,';EntityPath=',variables('service_bus_queue'))]"
- }
- }
-```
-
-## Deploy the RM template
-
-To deploy the template to Azure, upload the template and the parameters file to Azure Cloud Shell, and then execute a script to deploy the template. Open Azure Cloud Shell and sign in. This example uses PowerShell.
-
-To upload the files, select the **Upload/Download files** icon in the menu bar, then choose Upload.
-
-![Screenshot that highlights the Upload/Download files icon.](media/tutorial-routing-config-message-routing-RM-template/CloudShell_upload_files.png)
-
-Use the File Explorer that pops up to find the files on your local disk and select them, then choose **Open**.
-
-After the files are uploaded, a results dialog shows something like the following image.
-
-![Cloud Shell menu bar with Upload/Download results highlighted](media/tutorial-routing-config-message-routing-RM-template/CloudShell_upload_results.png)
-
-The files are uploaded to the share used by your Cloud Shell instance.
-
-Run the script to perform the deployment. The last line of this script retrieves the variable that was set up to be returned -- the Service Bus queue connection string.
-
-The script sets and uses these variables:
-
-**$RGName** is the name of the resource group to which to deploy the template. The script creates this resource group before deploying the template.
-
-**$location** is the Azure location to be used for the template, such as "westus".
-
-**deploymentname** is a name you assign to the deployment; it's used to retrieve the output value returned by the template.
-
-Copy the following PowerShell script and paste it into the Cloud Shell window, then press Enter to run it.
-
-```powershell
-$RGName="ContosoResources"
-$location = "westus"
-$deploymentname="contoso-routing"
-
-# Remove the resource group if it already exists.
-#Remove-AzResourceGroup -name $RGName
-# Create the resource group.
-New-AzResourceGroup -name $RGName -Location $location
-
-# Set a path to the parameter file.
-$parameterFile = "$HOME/template_iothub_parameters.json"
-$templateFile = "$HOME/template_iothub.json"
-
-# Deploy the template.
-New-AzResourceGroupDeployment `
- -Name $deploymentname `
- -ResourceGroupName $RGName `
- -TemplateParameterFile $parameterFile `
- -TemplateFile $templateFile `
- -verbose
-
-# Get the returning value of the connection string.
-(Get-AzResourceGroupDeployment -ResourceGroupName $RGName -Name $deploymentname).Outputs.sbq_connectionString.value
-```
-
-If you have script errors, you can edit the script locally, upload it again to the Cloud Shell, and run the script again. After the script finishes running successfully, continue to the next step.
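-
-If you want to double-check the deployment before moving on, you can query its provisioning state. This optional check reuses the variables from the deployment script:
-
-```powershell
-# Optional: confirm the deployment succeeded before continuing.
-(Get-AzResourceGroupDeployment -ResourceGroupName $RGName -Name $deploymentname).ProvisioningState
-```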
-
-## Create simulated device
--
-## View message routing in the portal
--
-## Next steps
-
-Now that you have all of the resources set up and the message routes are configured, advance to the next tutorial to learn how to process and display the information about the routed messages.
-
-> [!div class="nextstepaction"]
-> [Part 2 - View the message routing results](tutorial-routing-view-message-routing-results.md)
iot-hub Tutorial Routing View Message Routing Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-routing-view-message-routing-results.md
- Title: Tutorial - View Azure IoT Hub message routing results (.NET) | Microsoft Docs
-description: Tutorial - After setting up all of the resources using Part 1 of the tutorial, add the ability to route messages to Azure Stream Analytics and view the results in Power BI.
---- Previously updated : 09/21/2021--
-#Customer intent: As a developer, I want to be able to route messages sent to my IoT hub to different destinations based on properties stored in the message.
--
-# Tutorial: Part 2 - View the routed messages
---
-## Rules for routing the messages
-
-The following are the rules for the message routing that were set up in Part 1 of this tutorial, and you see them work in this second part.
-
-|Value |Result|
-|||
-|level="storage" |Write to Azure Storage.|
-|level="critical" |Write to a Service Bus queue. A Logic App retrieves the message from the queue and uses Office 365 to e-mail the message.|
-|default |Display this data using Power BI.|
-
-Now you create the resources to which the messages will be routed, run an app to send messages to the hub, and see the routing in action.
-
-## Create a Logic App
-
-The Service Bus queue is used for receiving messages designated as critical. Set up a Logic App to monitor the Service Bus queue and send an e-mail when a message is added to the queue.
-
-1. In the [Azure portal](https://portal.azure.com), select **+ Create a resource**. Enter **logic app** in the search box and press Enter. From the search results displayed, select **Logic App**, then select **Create** to continue to the **Create logic app** pane. Fill in the fields.
-
- **Subscription**: Select your Azure subscription.
-
- **Resource group**: Select **Create new** under the Resource Group field. Specify **ContosoResources** for the name of the resource group.
-
- **Instance Details**
- **Type**: Select **Consumption** for the instance type.
-
- **Logic App Name**: Specify the name of the logic app. This tutorial uses **ContosoLogicApp**.
-
- **Region**: Use the location of the nearest datacenter. This tutorial uses **West US**.
-
- **Enable Log Analytics**: Leave this toggle turned off. You don't need Log Analytics for this tutorial.
-
- ![The Create Logic App screen](./media/tutorial-routing-view-message-routing-results/create-logic-app.png)
-
- Select **Review + Create**, and then select **Create**. It may take a few minutes for the app to deploy. When it's finished, a screen shows the overview of the deployment.
-
-1. Go to the Logic App. If you're still on the deployment page, you can select **Go To Resource**. Another way to get to the Logic App is to select **Resource groups**, select your resource group (this tutorial uses **ContosoResources**), then select the Logic App from the list of resources.
-
- Scroll down until you see the almost-empty tile that says **Blank Logic App +** and select it. The default tab on the screen is "For You". If this pane is blank, select **All** to see the connectors and triggers available.
-
-1. Select **Service Bus** from the list of connectors.
-
- ![The list of connectors](./media/tutorial-routing-view-message-routing-results/logic-app-connectors.png)
-
-1. This screenshot shows a list of triggers. Select the one that says **When a message is received in a queue (auto-complete)**.
-
- ![The list of triggers](./media/tutorial-routing-view-message-routing-results/logic-app-triggers.png)
-
-1. Fill in the fields on the next screen with the connection information.
-
- **Connection Name**: ContosoConnection
-
- Select the Service Bus Namespace. This tutorial uses **ContosoSBNamespace**. The name of the key (RootManageSharedAccessKey) and the rights (Listen, Manage, Send) are retrieved and loaded. Select **RootManageSharedAccessKey**. The **Create** button changes to blue (active). Select it; it shows the queue selection screen.
-
-1. Next, provide information about the queue.
-
- ![Selecting a queue](./media/tutorial-routing-view-message-routing-results/logic-app-queue-options.png)
-
- **Queue Name:** This field is the name of the queue that the Logic App monitors. Select this dropdown list and select the queue name that was set in the setup steps. This tutorial uses **contososbqueue**.
-
- **Queue Type:** The type of queue. Select **Main** from the dropdown list.
-
- Take the defaults for the other fields. Select **Save** to save the logic apps designer configuration.
-
-1. Select **+New Step**. The **Choose an operation** pane is displayed. Select **Office 365 Outlook**. In the list, find and select **Send an Email (V2)**. Sign in to your Office 365 account.
-
-1. Fill in the fields to be used when sending an e-mail about the message in the queue.
-
- ![Select to send-an-email from one of the Outlook connectors](./media/tutorial-routing-view-message-routing-results/logic-app-send-email.png)
-
- **To:** Put in the e-mail address where the warning is to be sent.
-
- **Subject:** Fill in the subject for the e-mail.
-
- **Body**: Fill in some text for the body. Select **Add dynamic content** to show fields from the queued message that you can include in the e-mail. If you don't see any, select **See More** to see more options. Select **Content** to include the body of the queued message in the e-mail.
-
-1. Select **Save** to save your changes. Close the Logic app Designer.
-
-## Set up Azure Stream Analytics
-
-To see the data in a Power BI visualization, first set up a Stream Analytics job to retrieve the data. Remember that only the messages where the **level** is **normal** are sent to the default endpoint, and will be retrieved by the Stream Analytics job for the Power BI visualization.
-
-### Create the Stream Analytics job
-
-1. Enter **Stream Analytics job** in the [Azure portal](https://portal.azure.com) search box and press Enter. Select **Create** to go to the **Stream Analytics job** screen, and then select **Create** again to open the creation screen.
-
-1. Enter the following information for the job.
-
- **Job name**: The name of the job. The name must be globally unique. This tutorial uses **contosoJob**.
-
- **Subscription**: The Azure subscription you are using for the tutorial.
-
- **Resource group**: Use the same resource group used by your IoT hub. This tutorial uses **ContosoResources**.
-
- **Location**: Use the same location used in the setup script. This tutorial uses **West US**.
-
- ![Create the stream analytics job](./media/tutorial-routing-view-message-routing-results/stream-analytics-create-job.png)
-
-1. Select **Create** to create the job. It may take a few minutes to deploy.
-
- To return to the job, select **Go to resource**. You can also select **Resource groups**, select your resource group (this tutorial uses **ContosoResources**), and then select the Stream Analytics job in the list of resources.
-
-### Add an input to the Stream Analytics job
-
-1. Under **Job Topology**, select **Inputs**.
-
-1. In the **Inputs** pane, select **Add stream input** and select IoT Hub. On the screen that comes up, fill in the following fields:
-
- **Input alias**: This tutorial uses **contosoinputs**.
-
- Select **Select IoT Hub from your subscriptions**, then select your subscription from the dropdown list.
-
- **IoT Hub**: Select the IoT hub. This tutorial uses **ContosoTestHub**.
-
- **Consumer group**: Select the consumer group set up in Part 1 of this tutorial. This tutorial uses **contosoconsumers**.
-
- **Shared access policy name**: Select **service**. The portal fills in the Shared Access Policy Key for you.
-
- **Endpoint**: Select **Messaging**. (If you select Operations Monitoring, you get the telemetry data about the IoT hub rather than the data you're sending through.)
-
- For the rest of the fields, accept the defaults.
-
- ![Set up the inputs for the stream analytics job](./media/tutorial-routing-view-message-routing-results/stream-analytics-job-inputs.png)
-
-1. Select **Save**.
-
-### Add an output to the Stream Analytics job
-
-1. Under **Job Topology**, select **Outputs**.
-
-1. In the **Outputs** pane, select **Add**, and then select **Power BI**. On the screen that comes up, fill in the following fields:
-
- **Output alias**: The unique alias for the output. This tutorial uses **contosooutputs**.
-
- Select **Select Group workspace from your subscriptions**. In **Group workspace**, specify **My workspace**.
-
- **Authentication mode**: Select **User token**.
-
- **Dataset name**: Name of the dataset to be used in Power BI. This tutorial uses **contosodataset**.
-
- **Table name**: Name of the table to be used in Power BI. This tutorial uses **contosotable**.
-
-1. Select **Authorize**, and sign in to your Power BI account. (Signing in may take more than one try).
-
- ![Set up the outputs for the stream analytics job](./media/tutorial-routing-view-message-routing-results/stream-analytics-job-outputs.png)
-
-1. Select **Save**.
-
-### Configure the query of the Stream Analytics job
-
-1. Under **Job Topology**, select **Query**.
-
-1. Replace `[YourInputAlias]` with the input alias of the job. This tutorial uses **contosoinputs**.
-
-1. Replace `[YourOutputAlias]` with the output alias of the job. This tutorial uses **contosooutputs**.
-
- ![Set up the query for the stream analytics job](./media/tutorial-routing-view-message-routing-results/stream-analytics-job-query.png)
-
-1. Select **Save**.
-
-1. Close the Query pane. You return to the view of the resources in the Resource Group. Select the Stream Analytics job. This tutorial calls it **contosoJob**.
-
-### Run the Stream Analytics job
-
-In the Stream Analytics job, select **Start** > **Now** > **Start**. Once the job successfully starts, the job status changes from **Stopped** to **Running**.
-
-To set up the Power BI report, you need data, so you'll set up Power BI after you create the device and run the device simulation application to generate some data.
-
-## Run simulated device app
-
-In Part 1 of this tutorial, you set up a simulated IoT device. In this section, you download and run the .NET console app that simulates that device sending device-to-cloud messages to an IoT hub.
-
-This application sends messages for each of the different message routing methods. There is also a folder in the download that contains the complete Azure Resource Manager template and parameters file, as well as the Azure CLI and PowerShell scripts.
-
-If you didn't download the files from the repository in Part 1 of this tutorial, go ahead and download them now from [IoT Device Simulation](https://github.com/Azure-Samples/azure-iot-samples-csharp/archive/main.zip). Selecting this link downloads a repository with several applications in it; the solution for this tutorial is iot-hub/Tutorials/Routing/IoT_SimulatedDevice.sln.
-
-Double-click on the solution file (IoT_SimulatedDevice.sln) to open the code in Visual Studio, then open Program.cs. Substitute `{your hub name}` with the IoT hub host name. The format of the IoT hub host name is **{iot-hub-name}.azure-devices.net**. For this tutorial, the hub host name is **ContosoTestHub.azure-devices.net**. Next, substitute `{your device key}` with the device key you saved earlier when setting up the simulated device.
-
-```csharp
- static string s_myDeviceId = "Contoso-Test-Device";
- static string s_iotHubUri = "ContosoTestHub.azure-devices.net";
- // This is the primary key for the device. This is in the portal.
- // Find your IoT hub in the portal > IoT devices > select your device > copy the key.
- static string s_deviceKey = "{your device key}";
-```
-
-## Run and test
-
-Run the console application. Wait a few minutes. You can see the messages being sent on the console screen of the application.
-
-The app sends a new device-to-cloud message to the IoT hub every second. The message contains a JSON-serialized object with the device ID, temperature, humidity, and message level, which defaults to `normal`. It randomly assigns a level of `critical` or `storage`, causing the message to be routed to the storage account or to the Service Bus queue (which triggers your Logic App to send an e-mail). The default (`normal`) readings can be displayed in a BI report.
-
-If everything is set up correctly, at this point you should see the following results:
-
-1. You start getting e-mails about critical messages.
-
- ![The resulting emails](./media/tutorial-routing-view-message-routing-results/results-in-email.png)
-
- This result means the following statements are true.
-
- * The routing to the Service Bus queue is working correctly.
- * The Logic App retrieving the message from the Service Bus queue is working correctly.
- * The Logic App connector to Outlook is working correctly.
-
-1. In the [Azure portal](https://portal.azure.com), select **Resource groups** and select your Resource Group. This tutorial uses **ContosoResources**.
-
- Select the storage account, select **Containers**, then select the container that stores your results. This tutorial uses **contosoresults**. You should see a folder, and you can drill down through the directories until you see one or more files. Open one of those files; they contain the entries routed to the storage account.
-
- ![The result files in storage](./media/tutorial-routing-view-message-routing-results/results-in-storage.png)
-
-This result means the following statement is true.
-
-* The routing to the storage account is working correctly.
-
-With the application still running, set up the Power BI visualization to see the messages coming through the default endpoint.
-
-## Set up the Power BI visualizations
-
-1. Sign in to your [Power BI](https://powerbi.microsoft.com/) account.
-
-1. Select **My Workspace**. It shows at least one dataset that was created, called **ContosoDataset** in this tutorial. If there's nothing there, run the **Simulated Device** application for another 5-10 minutes to stream more data. When the dataset appears, select the three vertical dots to the right of the dataset name, and then select **Create report** from the dropdown list.
-
- ![Power BI creating report](./media/tutorial-routing-view-message-routing-results/bi-personal-workspace.png)
-
-1. In the **Visualizations** section on the right-hand side, select **Line chart** to add a line chart to the report page. Drag the chart so it fills the space horizontally. In the **Fields** section on the right, open **ContosoTable**. Select **EventEnqueuedUtcTime**; it's placed on the X-axis. Then select **temperature** and drag it into the **Values** field to add temperature to the chart. You should have something that looks like the following graphic:
-
- ![Power BI graph of temperature](./media/tutorial-routing-view-message-routing-results/bi-temperature-chart.png)
-
-1. Click in the bottom half of the chart area. Select **Line Chart** again. It creates a chart under the first one.
-
-1. In the table, select **EventEnqueuedUtcTime**; it's placed in the Axis field. Drag **humidity** to the **Values** field. Now you see both charts.
-
- ![Power BI graph of both fields](./media/tutorial-routing-view-message-routing-results/bi-chart-temp-humidity.png)
-
- You sent messages from the default endpoint of the IoT Hub to Azure Stream Analytics. Then you added a Power BI report to show the data, with two charts representing the temperature and the humidity.
-
-1. Select **File > Save** to save the report, entering a name for the report when prompted. Save your report in your workspace.
-
-You can see data on both charts. This result means the following statements are true:
-
-* The routing to the default endpoint is working correctly.
-* The Azure Stream Analytics job is streaming correctly.
-* The Power BI Visualization is set up correctly.
-
-You can refresh the charts to see the most recent data by selecting the Refresh button on the top of the Power BI window.
-
-## Clean up resources
-
-If you want to remove all of the Azure resources you've created through both parts of this tutorial, delete the resource group. This action deletes all resources contained within the group. In this case, it removes the IoT hub, the Service Bus namespace and queue, the Logic App, the storage account, and the resource group itself. You can also remove the Power BI resources and clear the emails sent during the tutorial.
-
-### Clean up resources in the Power BI visualization
-
-Sign in to your [Power BI](https://powerbi.microsoft.com/) account. Go to your workspace. This tutorial uses **My Workspace**. To remove the Power BI visualization, go to DataSets and select the trash can icon to delete the dataset. This tutorial uses **contosodataset**. When you remove the dataset, the report is removed as well.
-
-### Use the Azure CLI to clean up resources
-
-To remove the resource group, use the [az group delete](/cli/azure/group#az-group-delete) command. `$resourceGroup` was set to **ContosoResources** back at the beginning of this tutorial.
-
-```azurecli-interactive
-az group delete --name $resourceGroup
-```
-
-### Use PowerShell to clean up resources
-
-To remove the resource group, use the [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) command. `$resourceGroup` was set to **ContosoResources** back at the beginning of this tutorial.
-
-```azurepowershell-interactive
-Remove-AzResourceGroup -Name $resourceGroup
-```
-
-### Clean up test emails
-
-You may also want to delete the emails in your inbox that the Logic App generated while the device application was running.
-
-## Next steps
-
-In this two-part tutorial, you learned how to use message routing to route IoT Hub messages to different destinations by performing the following tasks.
-
-**Part I: Create resources, set up message routing**
-> [!div class="checklist"]
-> * Create the resources--an IoT hub, a storage account, a Service Bus queue, and a simulated device.
-> * Configure the endpoints and message routes in IoT Hub for the storage account and Service Bus queue.
-
-**Part II: Send messages to the hub, view routed results**
-> [!div class="checklist"]
-> * Create a Logic App that is triggered and sends e-mail when a message is added to the Service Bus queue.
-> * Download and run an app that simulates an IoT Device sending messages to the hub for the different routing options.
->
-> * Create a Power BI visualization for data sent to the default endpoint.
->
-> * View the results ...
-> * ...in the Service Bus queue and e-mails.
-> * ...in the storage account.
-> * ...in the Power BI visualization.
-
-Advance to the next tutorial to learn how to manage the state of an IoT device.
-> [!div class="nextstepaction"]
-> [Set up and use metrics and diagnostics with an IoT Hub](tutorial-use-metrics-and-diags.md)
-
iot-hub Tutorial Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-routing.md
Title: Tutorial - Configure message routing for Azure IoT Hub using Azure CLI
-description: Tutorial - Configure message routing for Azure IoT Hub using the Azure CLI and the Azure portal
+ Title: Tutorial - Configure message routing | Azure IoT Hub
+description: Tutorial - Route device messages to an Azure Storage account with message routing for Azure IoT Hub using the Azure CLI and the Azure portal
Previously updated : 08/16/2021 Last updated : 05/24/2022 #Customer intent: As a developer, I want to be able to route messages sent to my IoT hub to different destinations based on properties stored in the message. This step of the tutorial needs to show me how to set up my base resources using CLI and the Azure Portal.
-# Tutorial: Use the Azure CLI and Azure portal to configure IoT Hub message routing
+# Tutorial: Send device data to Azure Storage using IoT Hub message routing
+Use [message routing](iot-hub-devguide-messages-d2c.md) in Azure IoT Hub to send telemetry data from your IoT devices to Azure services such as blob storage, Service Bus Queues, Service Bus Topics, and Event Hubs.
+Every IoT hub has a default built-in endpoint that is compatible with Event Hubs. You can also create custom endpoints and route messages to other Azure services by defining [routing queries](iot-hub-devguide-routing-query-syntax.md). Each message that arrives at the IoT hub is routed to all endpoints whose routing queries it matches. If a message doesn't match any of the defined routing queries, it is routed to the default endpoint.
-## Use the Azure CLI to create the base resources
+In this tutorial, you perform the following tasks:
-This tutorial uses the Azure CLI to create the base resources, then uses the [Azure portal](https://portal.azure.com) to show how to configure message routing and set up the virtual device for testing.
+> [!div class="checklist"]
+>
+> * Create an IoT hub and send device messages to it.
+> * Create a storage account.
+> * Create a custom endpoint for the storage account and route messages to it from the IoT hub.
+> * View device messages in the storage account blob.
-Copy and paste the script below into Cloud Shell and press Enter. It runs the script one line at a time. This will create the base resources for this tutorial, including the storage account, IoT Hub, Service Bus Namespace, and Service Bus queue.
+## Prerequisites
-There are several resource names that must be globally unique, such as the IoT Hub name and the storage account name. To make this easier, those resource names are appended with a random alphanumeric value called *randomValue*. The randomValue is generated once at the top of the script and appended to the resource names as needed throughout the script. If you don't want it to be random, you can set it to an empty string or to a specific value.
+* An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-> [!TIP]
-> A tip about debugging: this script uses the continuation symbol (the backslash `\`) to make the script more readable. If you have a problem running the script, make sure your Cloud Shell session is running `bash` and that there are no spaces after any of the backslashes.
->
+* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](iot-hub-create-through-portal.md).
+
+* This tutorial uses sample code from [Azure IoT samples for C#](https://github.com/Azure-Samples/azure-iot-samples-csharp).
+
+ * Download or clone the samples repo to your development machine.
+ * Have .NET Core 3.0.0 or greater on your development machine. Check your version by running `dotnet --version` and [Download .NET](https://dotnet.microsoft.com/download) if necessary. <!-- TODO: update sample to use .NET 6.0 -->
+
+* Make sure that port 8883 is open in your firewall. The sample in this tutorial uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
+
+* Optionally, install [Azure IoT Explorer](https://github.com/Azure/azure-iot-explorer). This tool helps you observe the messages as they arrive at your IoT hub.
+
+# [Azure portal](#tab/portal)
+
+There are no other prerequisites for the Azure portal.
+
+# [Azure CLI](#tab/cli)
++++
+## Register a device and send messages to IoT Hub
+
+Register a new device in your IoT hub.
+
+# [Azure portal](#tab/portal)
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your IoT hub.
+
+1. Select **Devices** from the **Device management** section of the menu.
+
+1. Select **Add device**.
+
+ ![Add a new device in the Azure portal.](./media/tutorial-routing/add-device.png)
+
+1. Provide a device ID and select **Save**.
+
+1. The new device should be in the list of devices now. If it's not, refresh the page. Select the device ID to open the device details page.
+
+1. Copy one of the device keys and save it. You'll use this value to configure the sample code that generates simulated device telemetry messages.
+
+ ![Copy the primary key from the device details page.](./media/tutorial-routing/copy-device-key.png)
+
+# [Azure CLI](#tab/cli)
+
+>[!TIP]
+>Many of the CLI commands used throughout this tutorial use the same parameters. For your convenience, we have you define local variables that can be called as needed. Be sure to run all the commands in the same session, or else you will have to redefine the variables.
+
+1. Define variables for your IoT hub and device.
+
+ *IOTHUB_NAME*: Replace this placeholder with the name of your IoT hub.
+
+ *DEVICE_NAME*: Replace this placeholder with any name you want to use for the device in this tutorial.
+
+ ```azurecli-interactive
+ hubName=IOTHUB_NAME
+ deviceName=DEVICE_NAME
+ ```
+
+1. Run the [az iot hub device-identity create](/cli/azure/iot/hub/device-identity#az-iot-hub-device-identity-create) command in your CLI shell. This command creates the device identity.
+
+ ```azurecli-interactive
+ az iot hub device-identity create --device-id $deviceName --hub-name $hubName
+ ```
+
+1. From the device-identity output, copy the **primaryKey** value without the surrounding quotation marks and save it. You'll use this value to configure the sample code that generates simulated device telemetry messages.
+++
+Now that you have a device ID and key, use the sample code to start sending device telemetry messages to IoT Hub.
+<!-- TODO: update sample to use environment variables, not inline variables -->
+
+>[!TIP]
+>If you're following the Azure CLI steps for this tutorial, run the sample code in a separate session. That way, you can allow the sample code to continue running while you follow the rest of the CLI steps.
+
+1. If you didn't as part of the prerequisites, download or clone the [Azure IoT samples for C# repo](https://github.com/Azure-Samples/azure-iot-samples-csharp) from GitHub now.
+1. In the sample folder, navigate to the `/iot-hub/Tutorials/Routing/SimulatedDevice/` folder.
+1. In an editor of your choice, open the `Program.cs` file.
+1. Find the variable definitions at the top of the **Program** class. Update the following variables with your own information:
+
+ * **s_myDeviceId**: The device ID that you assigned when registering the device.
+ * **s_iotHubUri**: The hostname of your IoT hub, which takes the format `IOTHUB_NAME.azure-devices.net`.
+ * **s_deviceKey**: The device key that you copied from the device identity information.
+
+1. Save and close the file.
+1. Install the Azure IoT C# SDK and necessary dependencies as specified in the `SimulatedDevice.csproj` file:
+
+ ```console
+ dotnet restore
+ ```
+
+1. Run the sample code:
+
+ ```console
+ dotnet run
+ ```
+
+1. You should start to see messages printed to output as they are sent to IoT Hub. Leave this program running for the duration of the tutorial.
+
+## Configure IoT Explorer to view messages
+
+Configure IoT Explorer to connect to your IoT hub and read messages as they arrive at the built-in endpoint.
+
+First, retrieve the connection string for your IoT hub.
+
+# [Azure portal](#tab/portal)
+
+1. In the Azure portal, navigate to your IoT hub.
+1. Select **Shared access policies** from the **Security settings** section of the menu.
+1. Select the **iothubowner** policy.
+
+ ![Open the iothubowner shared access policy.](./media/tutorial-routing/iothubowner-access-policy.png)
+
+1. Copy the **Primary connection string**.
+
+ ![Copy the iothubowner primary connection string.](./media/tutorial-routing/copy-iothubowner-connection-string.png)
+
+# [Azure CLI](#tab/cli)
-```azurecli-interactive
-# This retrieves the subscription id of the account
-# in which you're logged in.
-# This field is used to set up the routing queries.
-subscriptionID=$(az account show --query id)
-
-# Concatenate this number onto the resources that have to be globally unique.
-# You can set this to "" or to a specific value if you don't want it to be random.
-# This retrieves a random value.
-randomValue=$RANDOM
-
-# Set the values for the resource names that
-# don't have to be globally unique.
-location=westus
-resourceGroup=ContosoResources
-iotHubConsumerGroup=ContosoConsumers
-containerName=contosoresults
-
-# Create the resource group to be used
-# for all the resources for this tutorial.
-az group create --name $resourceGroup \
- --location $location
-
-# The IoT hub name must be globally unique,
-# so add a random value to the end.
-iotHubName=ContosoTestHub$randomValue
-echo "IoT hub name = " $iotHubName
-
-# Create the IoT hub.
-az iot hub create --name $iotHubName \
- --resource-group $resourceGroup \
- --sku S1 --location $location
-
-# Add a consumer group to the IoT hub for the 'events' endpoint.
-az iot hub consumer-group create --hub-name $iotHubName \
- --name $iotHubConsumerGroup
-
-# The storage account name must be globally unique,
-# so add a random value to the end.
-storageAccountName=contosostorage$randomValue
-echo "Storage account name = " $storageAccountName
-
-# Create the storage account to be used as a routing destination.
-az storage account create --name $storageAccountName \
- --resource-group $resourceGroup \
- --location $location \
- --sku Standard_LRS
-
-# Get the primary storage account key.
-# You need this to create the container.
-storageAccountKey=$(az storage account keys list \
- --resource-group $resourceGroup \
- --account-name $storageAccountName \
- --query "[0].value" | tr -d '"')
-
-# See the value of the storage account key.
-echo "storage account key = " $storageAccountKey
-
-# Create the container in the storage account.
-az storage container create --name $containerName \
- --account-name $storageAccountName \
- --account-key $storageAccountKey \
- --public-access off
-
-# The Service Bus namespace must be globally unique,
-# so add a random value to the end.
-sbNamespace=ContosoSBNamespace$randomValue
-echo "Service Bus namespace = " $sbNamespace
-
-# Create the Service Bus namespace.
-az servicebus namespace create --resource-group $resourceGroup \
- --name $sbNamespace \
- --location $location
-
-# The Service Bus queue name must be globally unique,
-# so add a random value to the end.
-sbQueueName=ContosoSBQueue$randomValue
-echo "Service Bus queue name = " $sbQueueName
-
-# Create the Service Bus queue to be used as a routing destination.
-az servicebus queue create --name $sbQueueName \
- --namespace-name $sbNamespace \
- --resource-group $resourceGroup
-
-```
-
-Now that the base resources are set up, you can configure the message routing in the [Azure portal](https://portal.azure.com).
+1. Run the [az iot hub connection-string show](/cli/azure/iot/hub/connection-string#az-iot-hub-connection-string-show) command:
+
+ ```azurecli-interactive
+ az iot hub connection-string show --hub-name $hubName
+ ```
+
+2. Copy the connection string without the surrounding quotation marks.
+++
+Now, use that connection string to configure IoT Explorer for your IoT hub.
+
+1. Open IoT Explorer on your development machine.
+1. Select **Add connection**.
+
+ ![Add IoT hub connection in IoT Explorer.](./media/tutorial-routing/iot-explorer-add-connection.png)
+
+1. Paste your hub's connection string into the text box.
+1. Select **Save**.
+1. Once you connect to your IoT hub, you should see a list of devices. Select the device ID that you created for this tutorial.
+1. Select **Telemetry**.
+1. Select **Start**.
+
+ ![Start monitoring device telemetry in IoT Explorer.](./media/tutorial-routing/iot-explorer-start-monitoring-telemetry.png)
+
+1. You should see the messages arriving from your device, with the most recent displayed at the top.
+
+ ![View messages arriving at IoT hub on the built-in endpoint.](./media/tutorial-routing/iot-explorer-view-messages.png)
+
+Watch the incoming messages for a few moments to verify that you see three different types of messages: normal, storage, and critical.
+
+These messages are all arriving at the default built-in endpoint for your IoT hub. In the next sections, we're going to create a custom endpoint and route some of these messages to storage based on the message properties. Those messages will stop appearing in IoT Explorer because messages only go to the built-in endpoint when they don't match any other routes in IoT hub.
## Set up message routing
+You're going to route messages to different resources based on properties attached to the message by the simulated device. Messages that aren't custom routed are sent to the default endpoint (messages/events).
+
+The sample app for this tutorial assigns a **level** property to each message it sends to IoT hub. Each message is randomly assigned a level of **normal**, **storage**, or **critical**.
+
+The first step is to set up the endpoint to which the data will be routed. The second step is to set up the message route that uses that endpoint. After setting up the routing, you can view endpoints and message routes in the portal.
+
+### Create a storage account
+
+Create an Azure Storage account and a container within that account, which will hold the device messages that are routed to it.
+
+# [Azure portal](#tab/portal)
+
+1. In the Azure portal, search for **Storage accounts**.
+
+1. Select **Create**.
+
+1. Provide the following values for your storage account:
+
+ | Parameter | Value |
+ | | -- |
+ | **Subscription** | Select the same subscription that contains your IoT hub. |
+ | **Resource group** | Select the same resource group that contains your IoT hub. |
+ | **Storage account name** | Provide a globally unique name for your storage account. |
+ | **Performance** | Accept the default **Standard** value. |
+
+ ![Create a storage account.](./media/tutorial-routing/create-storage-account.png)
+
+1. You can accept all the other default values by selecting **Review + create**.
+
+1. After validation completes, select **Create**.
+
+1. After the deployment is complete, select **Go to resource**.
+
+1. In the storage account menu, select **Containers** from the **Data storage** section.
+
+1. Select **Container** to create a new container.
+
+ ![Create a storage container](./media/tutorial-routing/create-storage-container.png)
+
+1. Provide a name for your container and select **Create**.
+
+# [Azure CLI](#tab/cli)
+
+1. Define the variables for your storage account and container.
+
+ *GROUP_NAME*: Replace this placeholder with the name of the resource group that contains your IoT hub.
+
+ *STORAGE_NAME*: Replace this placeholder with a name for your storage account. Storage account names must be lowercase and globally unique.
+
+ *CONTAINER_NAME*: Replace this placeholder with a name for your container.
+
+ ```azurecli-interactive
+ resourceGroup=GROUP_NAME
+ storageName=STORAGE_NAME
+ containerName=CONTAINER_NAME
+ ```
+
+1. Use the [az storage account create](/cli/azure/storage/account#az-storage-account-create) command to create a standard general-purpose v2 storage account.
+
+ ```azurecli-interactive
+ az storage account create --name $storageName --resource-group $resourceGroup
+ ```
+
+1. Use the [az storage container create](/cli/azure/storage/container#az-storage-container-create) command to add a container to your storage account.
+
+ ```azurecli-interactive
+ az storage container create --auth-mode login --account-name $storageName --name $containerName
+ ```
++ ### Route to a storage account
-Now set up the routing for the storage account. You go to the Message Routing pane, then add a route. When adding the route, define a new endpoint for the route. After this routing is set up, messages where the **level** property is set to **storage** are written to a storage account automatically.
+Now set up the routing for the storage account. In this section, you define a new endpoint that points to the storage account you created, and then create a route that filters for messages where the **level** property is set to **storage** and routes them to that endpoint.
[!INCLUDE [iot-hub-include-blob-storage-format](../../includes/iot-hub-include-blob-storage-format.md)]
-Now you set up the configuration for the message routing to Azure Storage.
+# [Azure portal](#tab/portal)
-1. In the [Azure portal](https://portal.azure.com), select **Resource Groups**, then select your resource group. This tutorial uses **ContosoResources**.
+1. In the Azure portal, navigate to your IoT hub.
-2. Select the IoT hub under the list of resources. This tutorial uses **ContosoTestHub**.
+1. Select **Message Routing** from the **Hub settings** section of the menu.
-3. Select **Message Routing** in the middle column that says ***Messaging**. Select +**Add** to see the **Add a Route** pane. Select +**Add endpoint** next to the Endpoint field, then select **Storage**. You see the **Add a storage endpoint** pane.
+1. In the **Routes** tab, select **Add**.
- ![Start adding an endpoint for a route](./media/tutorial-routing/01-add-a-route-to-storage.png)
+ ![Add a new message route.](./media/tutorial-routing/add-route.png)
-4. Enter a name for the endpoint. This tutorial uses **ContosoStorageEndpoint**.
+1. Select **Add endpoint** next to the **Endpoint** field, then select **Storage** from the dropdown menu.
- ![Name the endpoint](./media/tutorial-routing/02-add-a-storage-endpoint.png)
+ ![Add a new endpoint for a route.](./media/tutorial-routing/add-storage-endpoint.png)
-5. Select **Pick a container**. This takes you to a list of your storage accounts. Select the one you set up in the preparation steps; this tutorial uses **contosostorage**. It shows a list of containers in that storage account. **Select** the container you set up in the preparation steps. This tutorial uses **contosoresults**. Then click **Select** at the bottom of the screen. It returns to a different **Add a storage endpoint** pane. You see the URL for the selected container.
+1. Provide the following information for the new storage endpoint:
-6. Set the encoding to AVRO or JSON. For the purpose of this tutorial, use the defaults for the rest of the fields. This field will be greyed out if the region selected does not support JSON encoding. Set the file name format.
+ | Parameter | Value |
+ | | -- |
+ | **Endpoint name** | Create a name for this endpoint. |
+ | **Azure Storage container** | Select **Pick a container**, which takes you to a list of storage accounts. Choose the storage account that you created in the previous section, then choose the container that you created in that account. Select **Select**.|
+ | **Encoding** | Select **JSON**. If this field is greyed out, then your storage account region doesn't support JSON. In that case, continue with the default **AVRO**. |
- > [!NOTE]
- > Set the format of the blob name using the **Blob file name format**. The default is `{iothub}/{partition}/{YYYY}/{MM}/{DD}/{HH}/{mm}`. The format must contain {iothub}, {partition}, {YYYY}, {MM}, {DD}, {HH}, and {mm} in any order.
- >
- > For example, using the default blob file name format, if the hub name is ContosoTestHub, and the date/time is October 30, 2018 at 10:56 a.m., the blob name will look like this: `ContosoTestHub/0/2018/10/30/10/56`.
- >
- > The blobs are written in the AVRO format by default.
- >
+ ![Pick a container.](./media/tutorial-routing/create-storage-endpoint.png)
-7. Select **Create** at the bottom of the page to create the storage endpoint and add it to the route. You are returned to the **Add a Route** pane.
+1. Accept the default values for the rest of the parameters and select **Create**.
-8. Complete the rest of the routing query information. This query specifies the criteria for sending messages to the storage container you just added as an endpoint. Fill in the fields on the screen.
+1. Continue creating the new route, now that you've added the storage endpoint. Provide the following information for the new route:
-9. Fill in the rest of the fields.
+ | Parameter | Value |
+ | -- | -- |
+ | **Name** | Create a name for your route. |
+ | **Data source** | Verify that **Device Telemetry Messages** is selected from the dropdown list. |
+ | **Enable route** | Verify that this field is set to `enabled`. |
+ | **Routing query** | Enter `level="storage"` as the query string. |
- - **Name**: Enter a name for your route. This tutorial uses **ContosoStorageRoute**. Next, specify the endpoint for storage. This tutorial uses ContosoStorageEndpoint.
-
- - Specify **Data source**: Select **Device Telemetry Messages** from the dropdown list.
+ ![Save the routing query information](./media/tutorial-routing/create-storage-route.png)
+
+1. Select **Save**.
- - Select **Enable route**: Be sure this field is set to `enabled`.
+# [Azure CLI](#tab/cli)
- - **Routing query**: Enter `level="storage"` as the query string.
+1. Configure the variables that you need for the endpoint and route commands.
- ![Save the routing query information](./media/tutorial-routing/04-save-storage-route.png)
-
-10. Select **Save**. When it finishes, it returns to the Message Routing pane, where you can see your new routing query for storage. Close the Message Routing pane, which returns you to the Resource group page.
+ *ENDPOINT_NAME*: Provide a name for the endpoint that represents your storage container.
+
+   *ROUTE_NAME*: Provide a name for the route that filters messages for the storage endpoint.
+ ```azurecli-interactive
+ endpointName=ENDPOINT_NAME
+ routeName=ROUTE_NAME
+ ```
-### Route to a Service Bus queue
+1. Use the [az iot hub routing-endpoint create](/cli/azure/iot/hub/routing-endpoint#az-iot-hub-routing-endpoint-create) command to create a custom endpoint that points to the storage container you made in the previous section.
-Now set up the routing for the Service Bus queue. You go to the Message Routing pane, then add a route. When adding the route, define a Service Bus Queue as the endpoint for the route. After this route is set up, messages where the **level** property is set to **critical** are written to the Service Bus queue, which triggers a Logic App, which then sends an e-mail with the information.
+ ```azurecli-interactive
+ az iot hub routing-endpoint create \
+ --connection-string $(az storage account show-connection-string --name $storageName --query connectionString -o tsv) \
+ --endpoint-name $endpointName \
+ --endpoint-resource-group $resourceGroup \
+ --endpoint-subscription-id $(az account show --query id -o tsv) \
+      --endpoint-type azurestoragecontainer \
+ --hub-name $hubName \
+ --container $containerName \
+ --resource-group $resourceGroup \
+ --encoding json
+ ```
-1. On the Resource group page, select your IoT hub, then select **Message Routing**.
+1. Use the [az iot hub route create](/cli/azure/iot/hub/route#az-iot-hub-route-create) command to create a route that passes any message where `level=storage` to the storage container endpoint.
-2. On the **Message Routing** pane, select +**Add**.
+ ```azurecli-interactive
+ az iot hub route create \
+ --name $routeName \
+ --hub-name $hubName \
+ --resource-group $resourceGroup \
+ --source devicemessages \
+ --endpoint-name $endpointName \
+ --enabled true \
+ --condition 'level="storage"'
+ ```
-3. On the **Add a Route** pane, Select +**Add** near **+endpoint**. Select **Service Bus Queue**. You see the **Add Service Bus Endpoint** pane.
+
- ![Adding a 1st service bus endpoint](./media/tutorial-routing/05-setup-sbq-endpoint.png)
+## View routed messages
-4. Fill in the rest of the fields:
+Once the route is created in IoT Hub and enabled, it will immediately start routing messages that meet its query condition to the storage endpoint.
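+
+As an optional sanity check, you can ask IoT Hub to evaluate the route against a sample message by using the Azure CLI. This is a hedged sketch that assumes the `$hubName`, `$resourceGroup`, and `$routeName` variables from the CLI steps above.
+
+```azurecli
+# Test whether a message that carries level=storage would match the new route.
+az iot hub route test \
+    --hub-name $hubName \
+    --resource-group $resourceGroup \
+    --name $routeName \
+    --app-properties '{"level":"storage"}'
+```
+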
- **Endpoint Name**: Enter a name for the endpoint. This tutorial uses **ContosoSBQEndpoint**.
-
- **Service Bus Namespace**: Use the dropdown list to select the service bus namespace you set up in the preparation steps. This tutorial uses **ContosoSBNamespace**.
+### Monitor the built-in endpoint with IoT Explorer
- **Service Bus queue**: Use the dropdown list to select the Service Bus queue. This tutorial uses **contososbqueue**.
+Return to the IoT Explorer session on your development machine. Recall that IoT Explorer monitors the built-in endpoint for your IoT hub, so you should now see only the messages that are *not* being routed by the custom route you created. Watch the incoming messages for a few moments; you should only see messages where `level` is set to `normal` or `critical`.
-5. Select **Create** to add the 1st Service Bus queue endpoint. You return to the **Add a route** pane.
+### View messages in the storage container
- ![Adding 2nd service bus endpoint](./media/tutorial-routing/06-save-sbq-endpoint.png)
+Verify that the messages are arriving in the storage container.
-6. Now complete the rest of the routing query information. This query specifies the criteria for sending messages to the Service Bus queue you just added as an endpoint. Fill in the fields on the screen.
+1. In the [Azure portal](https://portal.azure.com), navigate to your storage account.
- **Name**: Enter a name for your route. This tutorial uses **ContosoSBQueueRoute**.
+1. Select **Containers** from the **Data storage** section of the menu.
- **Endpoint**: This shows the endpoint you just set up.
+1. Select the container that you created for this tutorial.
- **Data source**: Select **Device Telemetry Messages** from the dropdown list.
+1. There should be a folder with the name of your IoT hub. Drill down through the file structure until you get to a **.json** file. (If you prefer the command line, a CLI sketch for listing these blobs follows these steps.)
- **Enable route**: Set this field to `enable`."
+ ![Find routed messages in storage.](./media/tutorial-routing/view-messages-in-storage.png)
- **Routing query**: Enter `level="critical"` as the routing query.
+1. Download the JSON file and confirm that it contains messages from your device that have the `level` property set to `storage`.
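+
+If you'd rather check from the command line, here's a minimal sketch that lists the routed blobs by using the storage variables defined in the earlier CLI steps.
+
+```azurecli
+# List the blobs that message routing has written to the container.
+az storage blob list \
+    --auth-mode login \
+    --account-name $storageName \
+    --container-name $containerName \
+    --output table
+```
+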
- ![Create a routing query for the Service Bus queue](./media/tutorial-routing/07-save-servicebusqueue-route.png)
+## Clean up resources
-7. Select **Save**. When it returns to the Routes pane, you see both of your new routes.
+If you want to remove all of the Azure resources you used for this tutorial, delete the resource group. This action deletes all resources contained within the group. If you don't want to delete the entire resource group, use the Azure portal to locate and delete the individual resources.
- ![The routes you just set up](./media/tutorial-routing/08-show-both-routes.png)
+# [Azure portal](#tab/portal)
-8. You can see the custom endpoints that you set up by selecting the **Custom Endpoints** tab.
+1. In the Azure portal, navigate to the resource group that contains the IoT hub and storage account for this tutorial.
+1. Review all the resources that are in the resource group to determine which ones you want to clean up.
+    * If you want to delete all the resources, select **Delete resource group**.
+    * If you only want to delete certain resources, use the check boxes next to each resource name to select the ones you want to delete. Then select **Delete**.
- ![The custom endpoints you just set up](./media/tutorial-routing/09-show-custom-endpoints.png)
+# [Azure CLI](#tab/cli)
-9. Close the Message Routing pane, which returns you to the Resource group pane.
+1. Use the [az resource list](/cli/azure/resource#az-resource-list) command to view all the resources in your resource group.
-## Create a simulated device
+ ```azurecli-interactive
+ az resource list --resource-group $resourceGroup --output table
+ ```
+1. Review all the resources that are in the resource group to determine which ones you want to clean up.
+
+    * If you want to delete all the resources, use the [az group delete](/cli/azure/group#az-group-delete) command.
+
+ ```azurecli-interactive
+ az group delete --name $resourceGroup
+ ```
+
+ * If you only want to delete certain resources, use the [az resource delete](/cli/azure/resource#az-resource-delete) command. For example:
+
+ ```azurecli-interactive
+    az resource delete --resource-group $resourceGroup --name $storageName --resource-type "Microsoft.Storage/storageAccounts"
+ ```
++ ## Next steps
-Now that you have the resources set up and the message routes configured, advance to the next tutorial to learn how to send messages to the IoT hub and see them be routed to the different destinations.
+In this tutorial, you learned how to create a custom endpoint for an Azure resource and then create a route to send device messages to that endpoint. Continue to the next tutorial to learn how to enrich messages with extra data that can be used to simplify downstream processing.
> [!div class="nextstepaction"]
-> [Part 2 - View the message routing results](tutorial-routing-view-message-routing-results.md)
+> [Use Azure IoT Hub message enrichments](tutorial-message-enrichments.md)
key-vault Key Vault Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/key-vault-recovery.md
For more information about Key Vault, see
## Prerequisites * An Azure subscription - [create one for free](https://azure.microsoft.com/free/dotnet)
-* [PowerShell module](/powershell/azure/install-az-ps).
+* [Azure PowerShell](/powershell/azure/install-az-ps).
* [Azure CLI](/cli/azure/install-azure-cli) * A Key Vault - you can create one using [Azure portal](../general/quick-create-portal.md) [Azure CLI](../general/quick-create-cli.md), or [Azure PowerShell](../general/quick-create-powershell.md) * The user will need the following permissions (at subscription level) to perform operations on soft-deleted vaults:
For more information about soft-delete, see [Azure Key Vault soft-delete overvie
* Verify if a key-vault has soft-delete enabled
- ```powershell
+ ```azurepowershell
Get-AzKeyVault -VaultName "ContosoVault" ``` * Delete key vault
- ```powershell
+ ```azurepowershell
Remove-AzKeyVault -VaultName 'ContosoVault' ``` * List all soft-deleted key vaults
- ```powershell
+ ```azurepowershell
Get-AzKeyVault -InRemovedState ``` * Recover soft-deleted key-vault
- ```powershell
+ ```azurepowershell
Undo-AzKeyVaultRemoval -VaultName ContosoVault -ResourceGroupName ContosoRG -Location westus ``` * Purge soft-deleted key-vault **(WARNING! THIS OPERATION WILL PERMANENTLY DELETE YOUR KEY VAULT)**
- ```powershell
+ ```azurepowershell
Remove-AzKeyVault -VaultName ContosoVault -InRemovedState -Location westus ``` * Enable purge-protection on key-vault
- ```powershell
- ($resource = Get-AzResource -ResourceId (Get-AzKeyVault -VaultName "ContosoVault").ResourceId).Properties | Add-Member -MemberType "NoteProperty" -Name "enablePurgeProtection" -Value "true"
-
- Set-AzResource -resourceid $resource.ResourceId -Properties $resource.Properties
+ ```azurepowershell
+ Update-AzKeyVault -VaultName ContosoVault -ResourceGroupName ContosoRG -EnablePurgeProtection
``` ## Certificates (PowerShell) * Grant permissions to recover and purge certificates
- ```powershell
+ ```azurepowershell
Set-AzKeyVaultAccessPolicy -VaultName ContosoVault -UserPrincipalName user@contoso.com -PermissionsToCertificates recover,purge ``` * Delete a Certificate
- ```powershell
+ ```azurepowershell
Remove-AzKeyVaultCertificate -VaultName ContosoVault -Name 'MyCert' ``` * List all deleted certificates in a key vault
- ```powershell
+ ```azurepowershell
Get-AzKeyVaultCertificate -VaultName ContosoVault -InRemovedState ``` * Recover a certificate in the deleted state
- ```powershell
+ ```azurepowershell
Undo-AzKeyVaultCertificateRemoval -VaultName ContosoVault -Name 'MyCert' ``` * Purge a soft-deleted certificate **(WARNING! THIS OPERATION WILL PERMANENTLY DELETE YOUR CERTIFICATE)**
- ```powershell
+ ```azurepowershell
Remove-AzKeyVaultcertificate -VaultName ContosoVault -Name 'MyCert' -InRemovedState ```
For more information about soft-delete, see [Azure Key Vault soft-delete overvie
* Grant permissions to recover and purge keys
- ```powershell
+ ```azurepowershell
Set-AzKeyVaultAccessPolicy -VaultName ContosoVault -UserPrincipalName user@contoso.com -PermissionsToKeys recover,purge ```
-* Delete a Key
+* Delete a key
- ```powershell
+ ```azurepowershell
Remove-AzKeyVaultKey -VaultName ContosoVault -Name 'MyKey' ```
-* List all deleted certificates in a key vault
+* List all deleted keys in a key vault
- ```powershell
+ ```azurepowershell
Get-AzKeyVaultKey -VaultName ContosoVault -InRemovedState ``` * To recover a soft-deleted key
- ```powershell
+ ```azurepowershell
Undo-AzKeyVaultKeyRemoval -VaultName ContosoVault -Name ContosoFirstKey ``` * Purge a soft-deleted key **(WARNING! THIS OPERATION WILL PERMANENTLY DELETE YOUR KEY)**
- ```powershell
+ ```azurepowershell
Remove-AzKeyVaultKey -VaultName ContosoVault -Name ContosoFirstKey -InRemovedState ```
For more information about soft-delete, see [Azure Key Vault soft-delete overvie
* Grant permissions to recover and purge secrets
- ```powershell
+ ```azurepowershell
Set-AzKeyVaultAccessPolicy -VaultName ContosoVault -UserPrincipalName user@contoso.com -PermissionsToSecrets recover,purge ``` * Delete a secret named SQLPassword
- ```powershell
- Remove-AzKeyVaultSecret -VaultName ContosoVault -name SQLPassword
+ ```azurepowershell
+ Remove-AzKeyVaultSecret -VaultName ContosoVault -Name SQLPassword
``` * List all deleted secrets in a key vault
- ```powershell
+ ```azurepowershell
Get-AzKeyVaultSecret -VaultName ContosoVault -InRemovedState ``` * Recover a secret in the deleted state
- ```powershell
- Undo-AzKeyVaultSecretRemoval -VaultName ContosoVault -Name SQLPAssword
+ ```azurepowershell
+ Undo-AzKeyVaultSecretRemoval -VaultName ContosoVault -Name SQLPassword
``` * Purge a secret in deleted state **(WARNING! THIS OPERATION WILL PERMANENTLY DELETE YOUR KEY)**
- ```powershell
- Remove-AzKeyVaultSecret -VaultName ContosoVault -InRemovedState -name SQLPassword
+ ```azurepowershell
+ Remove-AzKeyVaultSecret -VaultName ContosoVault -Name SQLPassword -InRemovedState
```
For more information about soft-delete, see [Azure Key Vault soft-delete overvie
- [Azure Key Vault backup](backup.md) - [How to enable Key Vault logging](howto-logging.md) - [Azure Key Vault security features](security-features.md)-- [Azure Key Vault developer's guide](developers-guide.md)
+- [Azure Key Vault developer's guide](developers-guide.md)
key-vault Soft Delete Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/soft-delete-overview.md
Permanently deleting, purging, a key vault is possible via a POST operation on t
Exceptions are: - When the Azure subscription has been marked as *undeletable*. In this case, only the service may then perform the actual deletion, and does so as a scheduled process. -- When the `--enable-purge-protection flag` is enabled on the vault itself. In this case, Key Vault will wait for 90 days from when the original secret object was marked for deletion to permanently delete the object.
+- When the `--enable-purge-protection` argument is enabled on the vault itself. In this case, Key Vault will wait for 90 days from when the original secret object was marked for deletion to permanently delete the object.
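+
+  For reference, a minimal Azure CLI sketch of turning this setting on for an existing vault (the vault and resource group names are only example values):
+
+  ```azurecli
+  # Enable purge protection on an existing key vault. This setting can't be turned off later.
+  az keyvault update --name ContosoVault --resource-group ContosoRG --enable-purge-protection true
+  ```
+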
For steps, see [How to use Key Vault soft-delete with CLI: Purging a key vault](./key-vault-recovery.md?tabs=azure-cli#key-vault-cli) or [How to use Key Vault soft-delete with PowerShell: Purging a key vault](./key-vault-recovery.md?tabs=azure-powershell#key-vault-powershell).
Upon deleting a key vault, the service creates a proxy resource under the subscr
### Key vault object recovery
-Upon deleting a key vault object, such as a key, the service will place the object in a deleted state, making it inaccessible to any retrieval operations. While in this state, the key vault object can only be listed, recovered, or forcefully/permanently deleted. To view the objects, use the Azure CLI `az keyvault key list-deleted` command (as documented in [How to use Key Vault soft-delete with CLI](./key-vault-recovery.md)), or the Azure PowerShell `-InRemovedState` parameter (as described in [How to use Key Vault soft-delete with PowerShell](./key-vault-recovery.md?tabs=azure-powershell#key-vault-powershell)).
+Upon deleting a key vault object, such as a key, the service will place the object in a deleted state, making it inaccessible to any retrieval operations. While in this state, the key vault object can only be listed, recovered, or forcefully/permanently deleted. To view the objects, use the Azure CLI `az keyvault key list-deleted` command (as documented in [How to use Key Vault soft-delete with CLI](./key-vault-recovery.md)), or the Azure PowerShell `Get-AzKeyVaultKey -InRemovedState` command (as described in [How to use Key Vault soft-delete with PowerShell](./key-vault-recovery.md?tabs=azure-powershell#key-vault-powershell)).
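+
+For example, a quick hedged check with the CLI command mentioned above (ContosoVault is the example vault name used in the related how-to article):
+
+```azurecli
+# Show the keys that are currently in the soft-deleted state for this vault.
+az keyvault key list-deleted --vault-name ContosoVault
+```
+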
At the same time, Key Vault will schedule the deletion of the underlying data corresponding to the deleted key vault or key vault object for execution after a predetermined retention interval. The DNS record corresponding to the vault is also retained for the duration of the retention interval.
In general, when an object (a key vault or a key or a secret) is in deleted stat
## Next steps
-The following two guides offer the primary usage scenarios for using soft-delete.
+The following three guides offer the primary usage scenarios for using soft-delete.
- [How to use Key Vault soft-delete with Portal](./key-vault-recovery.md?tabs=azure-portal)-- [How to use Key Vault soft-delete with PowerShell](./key-vault-recovery.md) -- [How to use Key Vault soft-delete with CLI](./key-vault-recovery.md)
+- [How to use Key Vault soft-delete with PowerShell](./key-vault-recovery.md?tabs=azure-powershell)
+- [How to use Key Vault soft-delete with CLI](./key-vault-recovery.md?tabs=azure-cli)
key-vault Quick Create Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-net.md
For more information about Key Vault and secrets, see:
* An Azure subscription - [create one for free](https://azure.microsoft.com/free/dotnet) * [.NET Core 3.1 SDK or later](https://dotnet.microsoft.com/download/dotnet-core) * [Azure CLI](/cli/azure/install-azure-cli)
+* [Azure PowerShell](/powershell/azure/install-az-ps)
* A Key Vault - you can create one using [Azure portal](../general/quick-create-portal.md) [Azure CLI](../general/quick-create-cli.md), or [Azure PowerShell](../general/quick-create-powershell.md)
-This quickstart is using `dotnet` and Azure CLI
+This quickstart uses `dotnet` and the Azure CLI or Azure PowerShell.
## Setup
+### [Azure CLI](#tab/azure-cli)
This quickstart uses the Azure Identity library with the Azure CLI to authenticate the user to Azure services. Developers can also use Visual Studio or Visual Studio Code to authenticate their calls. For more information, see [Authenticate the client with Azure Identity client library](/dotnet/api/overview/azure/identity-readme?#authenticate-the-client&preserve-view=true). ### Sign in to Azure
-1. Run the `login` command.
+1. Run the `az login` command.
- ```azurecli-interactive
+ ```azurecli
az login ```
Create an access policy for your key vault that grants secret permissions to you
az keyvault set-policy --name <YourKeyVaultName> --upn user@domain.com --secret-permissions delete get list set purge ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+This quickstart uses the Azure Identity library with Azure PowerShell to authenticate the user to Azure services. Developers can also use Visual Studio or Visual Studio Code to authenticate their calls. For more information, see [Authenticate the client with Azure Identity client library](/dotnet/api/overview/azure/identity-readme?#authenticate-the-client&preserve-view=true).
+
+### Sign in to Azure
+
+1. Run the `Connect-AzAccount` command.
+
+ ```azurepowershell
+ Connect-AzAccount
+ ```
+
+   If PowerShell can open your default browser, it will do so and load an Azure sign-in page.
+
+ Otherwise, open a browser page at [https://aka.ms/devicelogin](https://aka.ms/devicelogin) and enter the
+ authorization code displayed in your terminal.
+
+2. Sign in with your account credentials in the browser.
+
+### Grant access to your key vault
+
+Create an access policy for your key vault that grants secret permissions to your user account
+
+```azurepowershell
+Set-AzKeyVaultAccessPolicy -VaultName "<YourKeyVaultName>" -UserPrincipalName "user@domain.com" -PermissionsToSecrets delete,get,list,set,purge
+```
+++ ### Create new .NET console app 1. In a command shell, run the following command to create a project named `key-vault-console-app`:
key-vault Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-powershell.md
If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)]
-If you choose to install and use PowerShell locally, this tutorial requires Azure PowerShell module version 5.0.0 or later. Type `$PSVersionTable.PSVersion` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this tutorial requires Azure PowerShell module version 5.0.0 or later. Type `Get-Module az -ListAvailable` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
-```azurepowershell-interactive
+```azurepowershell
Connect-AzAccount ```
Now, you have created a Key Vault, stored a secret, and retrieved it.
When no longer needed, you can use the [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) command to remove the resource group, Key Vault, and all related resources. ```azurepowershell-interactive
-Remove-AzResourceGroup -Name ContosoResourceGroup
+Remove-AzResourceGroup -Name myResourceGroup
``` ## Next steps
kinect-dk Body Joints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/body-joints.md
The position and orientation of each joint form its own right-handed joint coord
Legend: | x-axis = red | y-axis = green | z-axis = blue |
+> [!NOTE]
+> The visual output of the `k4abt_simple_3d_viewer.exe` tool is mirrored.
+ ## Joint hierarchy A skeleton includes 32 joints with the joint hierarchy flowing from the center of the body to the extremities. Each connection (bone) links the parent joint with a child joint. The figure illustrates the joint locations and connection relative to the human body.
kinect-dk Body Sdk Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/body-sdk-download.md
This document provides links to install each version of the Azure Kinect Body Tr
Version | Download --|-
+1.1.2 | [msi](https://www.microsoft.com/en-us/download/details.aspx?id=104221) [nuget](https://www.nuget.org/packages/Microsoft.Azure.Kinect.BodyTracking/1.1.2)
1.1.1 | [msi](https://www.microsoft.com/en-us/download/details.aspx?id=104015) [nuget](https://www.nuget.org/packages/Microsoft.Azure.Kinect.BodyTracking/1.1.1) 1.1.0 | [msi](https://www.microsoft.com/en-us/download/details.aspx?id=102901) 1.0.1 | [msi](https://www.microsoft.com/en-us/download/details.aspx?id=100942) [nuget](https://www.nuget.org/packages/Microsoft.Azure.Kinect.BodyTracking/1.0.1)
If the command succeeds, the SDK is ready for use.
## Change log
+### v1.1.2
+* [Feature] Added C# wrapper support for Linux [Link](https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/1207)
+* [Bug Fix] `k4abt_simple_3d_viewer.exe` works with latest NVIDIA drivers [Link](https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/1696)
+ ### v1.1.1 * [Feature] Added cmake support to all body tracking samples * [Feature] NuGet package returns. Developed new NuGet package that includes Microsoft developed body tracking dlls and headers, and ONNX runtime dependencies. The package no longer includes the NVIDIA CUDA and TRT dependencies. These continue to be included in the MSI package.
kinect-dk Get Body Tracking Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/get-body-tracking-results.md
case K4A_WAIT_RESULT_FAILED:
## Enqueue the capture and pop the results
-The tracker internally maintains an input queue and an output queue to asynchronously process the Azure Kinect DK captures more efficiently. Use the [k4abt_tracker_enqueue_capture()](https://microsoft.github.io/Azure-Kinect-Body-Tracking/release/1.x.x/group__btfunctions_ga093becd9bb4a63f5f4d56f58097a7b1e.html#ga093becd9bb4a63f5f4d56f58097a7b1e) function to add a new capture to the input queue. Use the [k4abt_tracker_pop_result()](https://microsoft.github.io/Azure-Kinect-Body-Tracking/release/1.x.x/group__btfunctions_gaaf446fb1579cbbe0b6af824ee0a7458b.html#gaaf446fb1579cbbe0b6af824ee0a7458b) function o pop a result from the output queue. Use of the timeout value is dependent on the application and controls the queuing wait time.
+The tracker internally maintains an input queue and an output queue to asynchronously process the Azure Kinect DK captures more efficiently. Use the [k4abt_tracker_enqueue_capture()](https://microsoft.github.io/Azure-Kinect-Body-Tracking/release/1.x.x/group__btfunctions_ga093becd9bb4a63f5f4d56f58097a7b1e.html#ga093becd9bb4a63f5f4d56f58097a7b1e) function to add a new capture to the input queue. Use the [k4abt_tracker_pop_result()](https://microsoft.github.io/Azure-Kinect-Body-Tracking/release/1.x.x/group__btfunctions_gaaf446fb1579cbbe0b6af824ee0a7458b.html#gaaf446fb1579cbbe0b6af824ee0a7458b) function to pop a result from the output queue. Use of the timeout value is dependent on the application and controls the queuing wait time.
-### Real-time processing
-Use this pattern for single-threaded applications that need real-time results and can accommodate dropped frames. The `simple_3d_viewer` sample located in [GitHub Azure-Kinect-Samples](https://github.com/microsoft/Azure-Kinect-Samples) is an example of real-time processing.
+### No Wait processing
+Use this pattern for single-threaded applications that need immediate results and can accommodate dropped frames (for example, viewing live video from a device). The `simple_3d_viewer` sample located in [GitHub Azure-Kinect-Samples](https://github.com/microsoft/Azure-Kinect-Samples) is an example of no wait processing.
```C k4a_wait_result_t queue_capture_result = k4abt_tracker_enqueue_capture(tracker, sensor_capture, 0);
if (pop_frame_result == K4A_WAIT_RESULT_SUCCEEDED)
} ```
-### Synchronous processing
-Use this pattern for applications that do not need real-time results or cannot accommodate dropped frames.
-
-Processing throughput may be limited.
-
-The `simple_sample.exe` sample located in [GitHub Azure-Kinect-Samples](https://github.com/microsoft/Azure-Kinect-Samples) is an example of synchronous processing.
+### Wait processing
+Use this pattern for applications that don't need results for every frame (for example, processing a video from a file). The `simple_sample.exe` sample located in [GitHub Azure-Kinect-Samples](https://github.com/microsoft/Azure-Kinect-Samples) is an example of wait processing.
```C k4a_wait_result_t queue_capture_result = k4abt_tracker_enqueue_capture(tracker, sensor_capture, K4A_WAIT_INFINITE);
lab-services Quick Create Lab Plan Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/quick-create-lab-plan-template.md
+
+ Title: Azure Lab Services Quickstart - Create a lab plan by using Azure Resource Manager template (ARM template)
+description: In this quickstart, you'll learn how to create an Azure Lab Services lab plan by using Azure Resource Manager template (ARM template).
+++ Last updated : 05/04/2022++
+# Quickstart: Create a lab plan using an ARM template
+
+This quickstart shows you, as the admin, how to use an Azure Resource Manager (ARM) template to create a lab plan. Lab plans are used when creating labs for Azure Lab Services. For an overview of Azure Lab Services, see [An introduction to Azure Lab Services](lab-services-overview.md).
++
+If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
++
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Review the template
+
+The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/lab-plan).
++
+One Azure resource is defined in the template:
+
+- **[Microsoft.LabServices/labplans](/azure/templates/microsoft.labservices/labplans)**: The lab plan serves as a collection of configurations and settings that apply to the labs created from it.
+
+More Azure Lab Services template samples can be found in [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Labservices&pageNumber=1&sort=Popular).
+
+## Deploy the template
+
+1. Select the following link to sign in to Azure and open a template. The template creates a lab plan.
+
+ :::image type="content" source="../media/template-deployments/deploy-to-azure.svg" alt-text="Screenshot of the Deploy to Azure button to deploy resources with a template." link="https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2fquickstarts%2fmicrosoft.labservices%2flab-plan%2fazuredeploy.json":::
+
+1. Optionally, change the name of the lab plan.
+1. Select the **Resource group**.
+1. Select **Review + create**.
+1. Select **Create**.
+
+The Azure portal is used here to deploy the template. You can also use Azure PowerShell, Azure CLI, or the REST API. To learn other deployment methods, see [Deploy resources with ARM templates](../azure-resource-manager/templates/deploy-powershell.md).
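+
+For example, here's a hedged Azure CLI sketch that deploys the same quickstart template from its public URI; the resource group name is a placeholder for an existing resource group.
+
+```azurecli
+# Deploy the lab plan quickstart template into an existing resource group.
+az deployment group create \
+    --resource-group MyResourceGroup \
+    --template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.labservices/lab-plan/azuredeploy.json"
+```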
+
+## Review deployed resources
+
+You can either use the Azure portal to check the lab plan, or use the Azure PowerShell script to list the lab plan created.
+
+To use Azure PowerShell, first verify the Az.LabServices (preview) module is installed. Then use the **Get-AzLabServicesLabPlan** cmdlet.
+
+```azurepowershell-interactive
+Import-Module Az.LabServices
+
+$labplanName = Read-Host -Prompt "Enter your lab plan name"
+Get-AzLabServicesLabPlan -Name $labplanName
+
+Write-Host "Press [ENTER] to continue..."
+```
+
+## Clean up resources
+
+Other Lab Services quickstarts and tutorials build upon this quickstart. If you plan to continue with subsequent quickstarts and tutorials, you might want to leave these resources in place.
+
+When no longer needed, [delete the resource group](../azure-resource-manager/management/delete-resource-group.md?tabs=azure-portal#delete-resource-group), which deletes the lab plan.
+
+```azurepowershell-interactive
+$resourceGroupName = Read-Host -Prompt "Enter the Resource Group name"
+Remove-AzResourceGroup -Name $resourceGroupName
+
+Write-Host "Press [ENTER] to continue..."
+```
+
+## Next steps
+
+For a step-by-step tutorial that guides you through the process of creating a lab, see:
+
+> [!div class="nextstepaction"]
+> [Quickstart: Create a lab using an ARM template](quick-create-lab-template.md)
lab-services Quick Create Lab Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/quick-create-lab-template.md
+
+ Title: Azure Lab Services Quickstart - Create a lab by using Azure Resource Manager template (ARM template)
+description: In this quickstart, you'll learn how to create an Azure Lab Services lab by using Azure Resource Manager template (ARM template).
+++ Last updated : 05/10/2022++
+# Quickstart: Create a lab using an ARM template
+
+This quickstart shows you, as the educator or admin, how to use an Azure Resource Manager (ARM) template to create a lab. This quickstart shows you how to create a lab with Windows 11 Pro image. Once a lab is created, an educator [configures the template](how-to-create-manage-template.md), [adds lab users](how-to-configure-student-usage.md#add-and-manage-lab-users), and [publishes the lab](tutorial-setup-lab.md#publish-a-lab). For an overview of Azure Lab Services, see [An introduction to Azure Lab Services](lab-services-overview.md).
++
+If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
++
+## Prerequisites
+
+To complete this quickstart, make sure that you have:
+
+- An Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/) before you begin.
+- A lab plan. If you haven't created a lab plan, see [Quickstart: Create a lab plan using an ARM template](quick-create-lab-plan-template.md).
+
+## Review the template
+
+The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/lab).
++
+One Azure resource is defined in the template:
+
+- **[Microsoft.LabServices/labs](/azure/templates/microsoft.labservices/labs)**: The lab resource, which holds the settings and virtual machines for a lab.
+
+More Azure Lab Services template samples can be found in [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Labservices&pageNumber=1&sort=Popular). For more information about how to create a lab without a lab plan by using automation, see [Create Azure LabServices lab template](https://azure.microsoft.com/resources/templates/lab/).
+
+## Deploy the template
+
+1. Select the following link to sign in to Azure and open a template. The template creates a lab.
+
+ :::image type="content" source="../media/template-deployments/deploy-to-azure.svg" alt-text="Screenshot of the Deploy to Azure button to deploy resources with a template." link="https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2fquickstarts%2fmicrosoft.labservices%2flab-using-lab-plan%2fazuredeploy.json":::
+
+2. Optionally, change the name of the lab.
+3. Select the **resource group** that contains the lab plan you're going to use.
+4. Enter the required values for the template:
+
+ 1. **adminUser**. The name of the user that will be added as an administrator for the lab VM.
+ 2. **adminPassword**. The password for the administrator user for the lab VM.
+ 3. **labPlanId**. The resource ID for the lab plan to be used. The **Id** is listed in the **Properties** page of the lab plan resource in Azure.
+
+ :::image type="content" source="./media/quick-create-lab-template/lab-plan-properties-id.png" alt-text="Screenshot of properties page for lab plan in Azure Lab Services with I D property highlighted.":::
+
+5. Select **Review + create**.
+6. Select **Create**.
+
+The Azure portal is used here to deploy the template. You can also use Azure PowerShell, Azure CLI, or the REST API. To learn other deployment methods, see [Deploy resources with ARM templates](../azure-resource-manager/templates/deploy-powershell.md).
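+
+For example, a hedged Azure CLI sketch that deploys the same template and supplies the required parameters; the resource group name and parameter values are placeholders.
+
+```azurecli
+# Deploy the lab quickstart template, passing the parameters described above.
+az deployment group create \
+    --resource-group MyResourceGroup \
+    --template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.labservices/lab-using-lab-plan/azuredeploy.json" \
+    --parameters adminUser=labadmin adminPassword='<secure-password>' labPlanId='<lab-plan-resource-id>'
+```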
+
+## Review deployed resources
+
+You can either use the Azure portal to check the lab, or use the Azure PowerShell script to list the lab resource created.
+
+To use Azure PowerShell, first verify the Az.LabServices (preview) module is installed. Then use the **Get-AzLabServicesLab** cmdlet.
+
+```azurepowershell-interactive
+Import-Module Az.LabServices
+
+$lab = Read-Host -Prompt "Enter your lab name"
+Get-AzLabServicesLab -Name $lab
+
+Write-Host "Press [ENTER] to continue..."
+```
+
+To verify educators can use the lab, navigate to the Azure Lab Services website: [https://labs.azure.com](https://labs.azure.com). For more information about managing labs, see [View all labs](how-to-manage-labs.md#view-all-labs).
+
+## Clean up resources
+
+When no longer needed, [delete the resource group](../azure-resource-manager/management/delete-resource-group.md?tabs=azure-portal#delete-resource-group), which deletes the lab and other resources in the same group.
+
+```azurepowershell-interactive
+$resourceGroupName = Read-Host -Prompt "Enter the Resource Group name"
+Remove-AzResourceGroup -Name $resourceGroupName
+
+Write-Host "Press [ENTER] to continue..."
+```
+
+Alternatively, an educator can delete a lab from the Azure Lab Services website: [https://labs.azure.com](https://labs.azure.com). For more information about deleting labs, see [Delete a lab](how-to-manage-labs.md#delete-a-lab).
+
+## Next steps
+
+For a step-by-step tutorial that guides you through the process of creating a template, see:
+
+> [!div class="nextstepaction"]
+> [Tutorial: Create and deploy your first ARM template](/azure/azure-resource-manager/templates/template-tutorial-create-first-template)
load-balancer Backend Pool Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/backend-pool-management.md
Title: Backend Pool Management
description: Get started learning how to configure and manage the backend pool of an Azure Load Balancer -+ Last updated 2/17/2022-+ # Backend pool management
load-balancer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/overview.md
Title: What is Basic Azure Load Balancer? description: Overview of Basic Azure Load Balancer.-+ -+ Last updated 04/14/2022
load-balancer Quickstart Basic Internal Load Balancer Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/quickstart-basic-internal-load-balancer-cli.md
Title: 'Quickstart: Create an internal basic load balancer - Azure CLI' description: This quickstart shows how to create an internal basic load balancer by using the Azure CLI.-+ Last updated 03/24/2022-+ #Customer intent: I want to create a load balancer so that I can load balance internal traffic to VMs.
load-balancer Quickstart Basic Internal Load Balancer Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/quickstart-basic-internal-load-balancer-portal.md
Title: "Quickstart: Create a basic internal load balancer - Azure portal"
description: This quickstart shows how to create a basic internal load balancer by using the Azure portal. -+ Last updated 03/21/2022-+ #Customer intent: I want to create a internal load balancer so that I can load balance internal traffic to VMs.
load-balancer Quickstart Basic Internal Load Balancer Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/quickstart-basic-internal-load-balancer-powershell.md
Title: 'Quickstart: Create an internal basic load balancer - Azure PowerShell' description: This quickstart shows how to create an internal basic load balancer using Azure PowerShell-+ Last updated 03/24/2022-+ #Customer intent: I want to create a load balancer so that I can load balance internal traffic to VMs.
load-balancer Quickstart Basic Public Load Balancer Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/quickstart-basic-public-load-balancer-cli.md
Title: 'Quickstart: Create a basic public load balancer - Azure CLI' description: Learn how to create a public basic SKU Azure Load Balancer in this quickstart using the Azure CLI.--++ Last updated 03/16/2022
Create a network security group rule using [az network nsg rule create](/cli/azu
## Create a bastion host
-In this section, you'll create te resources for Azure Bastion. Azure Bastion is used to securely manage the virtual machines in the backend pool of the load balancer.
+In this section, you'll create the resources for Azure Bastion. Azure Bastion is used to securely manage the virtual machines in the backend pool of the load balancer.
### Create a public IP address
-Use [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create) to create a public ip address for the bastion host. The public IP is used by the bastion host for secure access to the virtual machine resources.
+Use [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create) to create a public IP address for the bastion host. The public IP is used by the bastion host for secure access to the virtual machine resources.
```azurecli az network public-ip create \
load-balancer Quickstart Basic Public Load Balancer Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/quickstart-basic-public-load-balancer-portal.md
Title: 'Quickstart: Create a basic public load balancer - Azure portal' description: Learn how to create a public basic SKU Azure Load Balancer in this quickstart. --++ Last updated 03/15/2022
load-balancer Quickstart Basic Public Load Balancer Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/quickstart-basic-public-load-balancer-powershell.md
Title: 'Quickstart: Create a basic internal load balancer - Azure PowerShell' description: This quickstart shows how to create a basic internal load balancer using Azure PowerShell--++ Last updated 03/22/2022
load-balancer Virtual Network Ipv4 Ipv6 Dual Stack Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/virtual-network-ipv4-ipv6-dual-stack-cli.md
Title: Deploy IPv6 dual stack application - Basic Load Balancer - CLI description: Learn how to deploy a dual stack (IPv4 + IPv6) application with Basic Load Balancer using Azure CLI.-+ Last updated 03/31/2022-+ # Deploy an IPv6 dual stack application using Basic Load Balancer - CLI
load-balancer Virtual Network Ipv4 Ipv6 Dual Stack Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/virtual-network-ipv4-ipv6-dual-stack-powershell.md
Title: Deploy IPv6 dual stack application - Basic Load Balancer - PowerShell description: This article shows how deploy an IPv6 dual stack application in Azure virtual network using Azure PowerShell.-+ Last updated 03/31/2022-+
load-balancer Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/cli-samples.md
description: Azure CLI Samples documentationcenter: load-balancer-+ Last updated 06/14/2018-+ # Azure CLI Samples for Load Balancer
load-balancer Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/components.md
Title: Azure Load Balancer components
description: Overview of Azure Load Balancer components documentationcenter: na-+ na Last updated 12/27/2021-+ # Azure Load Balancer components
load-balancer Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/concepts.md
Title: Azure Load Balancer concepts
description: Overview of Azure Load Balancer concepts documentationcenter: na-+ na Last updated 11/29/2021-+
load-balancer Configure Vm Scale Set Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/configure-vm-scale-set-cli.md
Title: Configure virtual machine scale set with an existing Azure Load Balancer - Azure CLI description: Learn how to configure a virtual machine scale set with an existing Azure Load Balancer by using the Azure CLI.--++ Last updated 03/25/2020
load-balancer Configure Vm Scale Set Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/configure-vm-scale-set-portal.md
Title: Configure virtual machine scale set with an existing Azure Load Balancer - Azure portal description: Learn how to configure a virtual machine scale set with an existing Azure Load Balancer by using the Azure portal.--++ Last updated 03/25/2020
load-balancer Configure Vm Scale Set Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/configure-vm-scale-set-powershell.md
Title: Configure virtual machine scale set with an existing Azure Load Balancer - Azure PowerShell description: Learn how to configure a virtual machine scale set with an existing Azure Load Balancer.--++ Last updated 03/26/2020
load-balancer Cross Region Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/cross-region-overview.md
description: Overview of cross region load balancer tier for Azure Load Balancer. documentationcenter: na-+ na Last updated 09/22/2020-+
load-balancer Distribution Mode Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/distribution-mode-concepts.md
Title: Azure Load Balancer distribution modes description: Get started learning about the different distribution modes of Azure Load Balancer.--++ Previously updated : 12/27/2021 Last updated : 05/24/2022 #Customer intent: As a administrator, I want to learn about the different distribution modes of Azure Load Balancer so that I can configure the distribution mode for my application.
Azure Load Balancer supports the following distribution modes for routing connec
| Azure portal configuration | Session persistence: **None** | Session persistence: **Client IP** | Session persistence: **Client IP and protocol** | | [REST API](/rest/api/load-balancer/load-balancers/create-or-update#loaddistribution) | ```"loadDistribution":"Default"```| ```"loadDistribution":SourceIP``` | ```"loadDistribution":SourceIPProtocol``` |
-There is no downtime when switching from one distribution mode to another on a Load Balancer.
+There's no downtime when switching from one distribution mode to another on a load balancer.
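+
+For instance, switching an existing rule to source IP affinity can be done with a single CLI call. This is a hedged sketch; the resource group, load balancer, and rule names are placeholders.
+
+```azurecli
+# Change the distribution mode (session persistence) of an existing rule to Client IP.
+az network lb rule update \
+    --resource-group MyResourceGroup \
+    --lb-name MyLoadBalancer \
+    --name MyHTTPRule \
+    --load-distribution SourceIP
+```
+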
## Hash based Azure Load Balancer uses a five tuple hash based distribution mode by default.
-The five tuple is consists of:
+The five tuple consists of:
* **Source IP** * **Source port** * **Destination IP**
load-balancer Egress Only https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/egress-only.md
Title: Outbound-only load balancer configuration description: In this article, learn about how to create an internal load balancer with outbound NAT-+ Last updated 08/21/2021-+ # Outbound-only load balancer configuration
load-balancer Gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/gateway-overview.md
Title: Gateway load balancer (Preview)
description: Overview of gateway load balancer SKU for Azure Load Balancer. --++ Last updated 12/28/2021
load-balancer Gateway Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/gateway-partners.md
Title: Azure Gateway Load Balancer partners description: Learn about partners offering their network appliances for use with this service.-+ Last updated 05/11/2022-+ # Gateway Load Balancer partners
load-balancer Howto Load Balancer Imds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/howto-load-balancer-imds.md
Title: Retrieve load balancer metadata using Azure Instance Metadata Service (IM
description: Get started learning how to retrieve load balancer metadata using Azure Instance Metadata Service. -+ Last updated 02/12/2021-+ # Retrieve load balancer metadata using Azure Instance Metadata Service (IMDS)
load-balancer Inbound Nat Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/inbound-nat-rules.md
Title: Inbound NAT rules description: Overview of what is inbound NAT rule, why to use inbound NAT rule, and how to use inbound NAT rule.-+ Last updated 2/17/2022-+ #Customer intent: As a administrator, I want to create an inbound NAT rule so that I can forward a port to a virtual machine in the backend pool of an Azure Load Balancer.
load-balancer Instance Metadata Service Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/instance-metadata-service-load-balancer.md
Title: Retrieve load balancer information by using Azure Instance Metadata Servi
description: Get started learning about using Azure Instance Metadata Service to retrieve load balancer information. -+ Last updated 02/12/2021-+ # Retrieve load balancer information by using Azure Instance Metadata Service
load-balancer Load Balancer Custom Probe Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-custom-probe-overview.md
Title: Azure Load Balancer health probes description: Learn about the different types of health probes and configuration for Azure Load Balancer-+ Last updated 02/10/2022-+ # Azure Load Balancer health probes
Azure Monitor logs aren't available for both public and internal Basic Load Bala
## Next steps - Learn more about [Standard Load Balancer](./load-balancer-overview.md)
+- Learn [how to manage health probes](../load-balancer/manage-probes-how-to.md)
- [Get started creating a public load balancer in Resource Manager by using PowerShell](quickstart-load-balancer-standard-public-powershell.md) - [REST API for health probes](/rest/api/load-balancer/loadbalancerprobes/)
load-balancer Load Balancer Distribution Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-distribution-mode.md
description: In this article, get started configuring the distribution mode for Azure Load Balancer to support source IP affinity. documentationcenter: na-+ na Last updated 02/04/2021-+ # Configure the distribution mode for Azure Load Balancer
load-balancer Load Balancer Floating Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-floating-ip.md
Title: Azure Load Balancer Floating IP configuration
description: Overview of Azure Load Balancer Floating IP documentationcenter: na-+ na Last updated 12/2/2021-+
If you want to reuse the backend port across multiple rules, you must enable Flo
When Floating IP is enabled, Azure changes the IP address mapping to the Frontend IP address of the Load Balancer frontend instead of backend instance's IP.
-Without Floating IP, Azure exposes the VM instances' IP. Enabling Floating IP changes the IP address mapping to the Frontend IP of the load Balancer to allow for additional flexibility. Learn more [here](load-balancer-multivip-overview.md).
+Without Floating IP, Azure exposes the VM instances' IP addresses. Enabling Floating IP changes the IP address mapping to the Frontend IP of the load balancer to allow for more flexibility. Learn more [here](load-balancer-multivip-overview.md).
-Floating IP can be configured on a Load Balancer rule via the Azure portal, REST API, CLI, PowerShell, or other client. In addition to the rule configuration, you must also configure your virtual machine's Guest OS in order to leverage Floating IP.
+Floating IP can be configured on a Load Balancer rule via the Azure portal, REST API, CLI, PowerShell, or other client. In addition to the rule configuration, you must also configure your virtual machine's Guest OS in order to use Floating IP.
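+
+As a hedged example of the rule-side configuration, the following CLI sketch turns on Floating IP for an existing load-balancing rule; the resource group, load balancer, and rule names are placeholders.
+
+```azurecli
+# Enable Floating IP on an existing load-balancing rule.
+az network lb rule update \
+    --resource-group MyResourceGroup \
+    --lb-name MyLoadBalancer \
+    --name MyHTTPRule \
+    --floating-ip true
+```
+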
## Floating IP Guest OS configuration For each VM in the backend pool, run the following commands at a Windows Command Prompt.
netsh interface ipv4 set interface "interfacename" weakhostreceive=enabled
netsh interface ipv4 set interface "interfacename" weakhostsend=enabled ```
-(replace interfacename with the name of this loopback interface)
+(replace **interfacename** with the name of this loopback interface)
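To confirm the weak host settings took effect, you can inspect the interface afterward; **interfacename** is again a placeholder for the loopback interface's name:

```
netsh interface ipv4 show interface "interfacename"
```

The detailed output for that interface should show the weak host send and receive settings as enabled.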
> [!IMPORTANT] > The configuration of the loopback interfaces is performed within the guest OS. This configuration is not performed or managed by Azure. Without this configuration, the rules will not function. ## <a name = "limitations"></a>Limitations -- Floating IP is not currently supported on secondary IP configurations for Load Balancing scenarios. Note that this does not apply to Public load balancers with dual-stack configurations or to architectures that utilize a NAT Gateway for outbound connectivity.
+- Floating IP is not currently supported on secondary IP configurations for Load Balancing scenarios. This does not apply to Public load balancers with dual-stack configurations or to architectures that utilize a NAT Gateway for outbound connectivity.
## Next steps
load-balancer Load Balancer Ha Ports Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-ha-ports-overview.md
Title: High availability ports overview in Azure description: Learn about high availability ports load balancing on an internal load balancer. -+ na Last updated 04/14/2022-+ # High availability ports overview
load-balancer Load Balancer Ipv6 For Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-ipv6-for-linux.md
description: In this article, learn how to configure DHCPv6 for Linux VMs. documentationcenter: na-+ keywords: ipv6, azure load balancer, dual stack, public ip, native ipv6, mobile, iot
na Last updated 03/22/2019-+ # Configure DHCPv6 for Linux VMs
load-balancer Load Balancer Ipv6 Internet Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-ipv6-internet-cli.md
description: With this learning path, get started creating a public load balancer with IPv6 using Azure CLI. documentationcenter: na-+ keywords: ipv6, azure load balancer, dual stack, public ip, native ipv6, mobile, iot
na Last updated 06/25/2018-+ # Create a public load balancer with IPv6 using Azure CLI
load-balancer Load Balancer Ipv6 Internet Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-ipv6-internet-ps.md
description: Learn how to create an Internet facing load balancer with IPv6 using PowerShell for Resource Manager documentationcenter: na-+ keywords: ipv6, azure load balancer, dual stack, public ip, native ipv6, mobile, iot
na Last updated 09/25/2017-+ # Get started creating an Internet facing load balancer with IPv6 using PowerShell for Resource Manager
load-balancer Load Balancer Ipv6 Internet Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-ipv6-internet-template.md
description: Learn how to deploy IPv6 support for Azure Load Balancer and load-balanced VMs using an Azure template. documentationcenter: na-+ keywords: ipv6, azure load balancer, dual stack, public ip, native ipv6, mobile, iot
na Last updated 09/25/2017-+ # Deploy an Internet-facing load-balancer solution with IPv6 using a template
load-balancer Load Balancer Ipv6 Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-ipv6-overview.md
Title: Overview of IPv6 - Azure Load Balancer
description: With this learning path, get started with IPv6 support for Azure Load Balancer and load-balanced VMs. documentationcenter: na-+ keywords: ipv6, azure load balancer, dual stack, public ip, native ipv6, mobile, iot
na Last updated 08/24/2018-+ # Overview of IPv6 for Azure Load Balancer
load-balancer Load Balancer Multiple Ip Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-multiple-ip-cli.md
description: Learn how to assign multiple IP addresses to a virtual machine using Azure CLI. documentationcenter: na-+ na Last updated 06/25/2018-+ # Load balancing on multiple IP configurations using Azure CLI
load-balancer Load Balancer Multiple Ip Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-multiple-ip-powershell.md
description: In this article, learn about load balancing across primary and secondary IP configurations using Azure CLI. documentationcenter: na-+ na Last updated 09/25/2017-+ # Load balancing on multiple IP configurations using PowerShell
load-balancer Load Balancer Multiple Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-multiple-ip.md
Title: 'Tutorial: Load balance multiple IP configurations - Azure portal' description: In this article, learn about load balancing across primary and secondary NIC configurations using the Azure portal.--++ Last updated 08/08/2021
load-balancer Load Balancer Multivip Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-multivip-overview.md
Title: Multiple frontends - Azure Load Balancer
description: With this learning path, get started with an overview of multiple frontends on Azure Load Balancer documentationcenter: na-+ na Last updated 01/26/2022-+ # Multiple frontends for Azure Load Balancer
load-balancer Load Balancer Outbound Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-outbound-connections.md
Title: Source Network Address Translation (SNAT) for outbound connections
description: Learn how Azure Load Balancer is used for outbound internet connectivity (SNAT). -+ Last updated 03/01/2022-+ # Use Source Network Address Translation (SNAT) for outbound connections
load-balancer Load Balancer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-overview.md
description: Overview of Azure Load Balancer features, architecture, and implementation. Learn how the Load Balancer works and how to use it in the cloud. documentationcenter: na-+ # Customer intent: As an IT administrator, I want to learn more about the Azure Load Balancer service and what I can use it for.
na Last updated 1/25/2021-+
load-balancer Load Balancer Query Metrics Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-query-metrics-rest-api.md
Title: Retrieve metrics with the REST API
description: In this article, get started using the Azure REST APIs to collect health and usage metrics for Azure Load Balancer. -+ Last updated 11/19/2019-+ # Get Load Balancer usage metrics using the REST API
load-balancer Load Balancer Standard Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-standard-availability-zones.md
description: With this learning path, get started with Azure Standard Load Balancer and Availability Zones. documentationcenter: na-+ na Last updated 05/07/2020-+ # Load Balancer and Availability Zones
load-balancer Load Balancer Standard Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-standard-diagnostics.md
Title: Diagnostics with metrics, alerts, and resource health description: Use the available metrics, alerts, and resource health information to diagnose your load balancer.-+ Last updated 01/26/2022-+ # Standard load balancer diagnostics with metrics, alerts, and resource health
load-balancer Load Balancer Tcp Idle Timeout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-tcp-idle-timeout.md
description: In this article, learn how to configure Azure Load Balancer TCP idle timeout and reset. documentationcenter: na-+ na Last updated 10/26/2020-+ # Configure TCP reset and idle timeout for Azure Load Balancer
load-balancer Load Balancer Tcp Reset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-tcp-reset.md
description: With this article, learn about Azure Load Balancer with bidirectional TCP RST packets on idle timeout. documentationcenter: na-+ na Last updated 10/07/2020-+ # Load Balancer TCP Reset and Idle Timeout
load-balancer Load Balancer Troubleshoot Backend Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-troubleshoot-backend-traffic.md
Title: Troubleshoot Azure Load Balancer
description: Learn how to troubleshoot known issues with Azure Load Balancer. documentationcenter: na-+
na Last updated 03/02/2022-+ # Troubleshoot Azure Load Balancer backend traffic responses
load-balancer Load Balancer Troubleshoot Health Probe Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-troubleshoot-health-probe-status.md
Title: Troubleshoot Azure Load Balancer health probe status
description: Learn how to troubleshoot known issues with Azure Load Balancer health probe status. documentationcenter: na-+
na Last updated 12/02/2020-+ # Troubleshoot Azure Load Balancer health probe status
load-balancer Load Balancer Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-troubleshoot.md
Title: Troubleshoot common issues Azure Load Balancer
description: Learn how to troubleshoot common issues with Azure Load Balancer. documentationcenter: na-+
na Last updated 01/28/2020-+ # Troubleshoot Azure Load Balancer
load-balancer Manage Inbound Nat Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/manage-inbound-nat-rules.md
Title: Manage inbound NAT rules for Azure Load Balancer description: In this article, you'll learn how to add and remove and inbound NAT rule in the Azure portal.--++ Last updated 03/15/2022
load-balancer Manage Probes How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/manage-probes-how-to.md
Title: Manage health probes for Azure Load Balancer - Azure portal description: In this article, learn how to manage health probes for Azure Load Balancer using the Azure portal--++ Last updated 03/02/2022
load-balancer Manage Rules How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/manage-rules-how-to.md
Title: Manage rules for Azure Load Balancer - Azure portal description: In this article, learn how to manage rules for Azure Load Balancer using the Azure portal--++ Last updated 08/23/2021
load-balancer Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/manage.md
Title: Azure Load Balancer portal settings description: Get started learning about Azure Load Balancer portal settings-+ Last updated 08/16/2021-+ # Azure Load Balancer portal settings
load-balancer Monitor Load Balancer Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/monitor-load-balancer-reference.md
Title: Monitoring Load Balancer data reference description: Important reference material needed when you monitor Load Balancer -+ -+ Last updated 06/29/2021
load-balancer Monitor Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/monitor-load-balancer.md
Title: Monitoring Azure Load Balancer description: Start here to learn how to monitor load balancer.--++
load-balancer Move Across Regions External Load Balancer Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/move-across-regions-external-load-balancer-portal.md
Title: Move an Azure external load balancer to another Azure region by using the Azure portal description: Use an Azure Resource Manager template to move an external load balancer from one Azure region to another by using the Azure portal.-+ Last updated 09/17/2019-+ # Move an external load balancer to another region by using the Azure portal
load-balancer Move Across Regions External Load Balancer Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/move-across-regions-external-load-balancer-powershell.md
Title: Move Azure external Load Balancer to another Azure region using Azure PowerShell description: Use Azure Resource Manager template to move Azure external Load Balancer from one Azure region to another using Azure PowerShell.-+ Last updated 09/17/2019-+
load-balancer Move Across Regions Internal Load Balancer Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/move-across-regions-internal-load-balancer-portal.md
Title: Move Azure internal Load Balancer to another Azure region using the Azure portal description: Use Azure Resource Manager template to move Azure internal Load Balancer from one Azure region to another using the Azure portal-+ Last updated 09/18/2019-+ # Move Azure internal Load Balancer to another region using the Azure portal
load-balancer Move Across Regions Internal Load Balancer Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/move-across-regions-internal-load-balancer-powershell.md
Title: Move Azure internal Load Balancer to another Azure region using Azure PowerShell description: Use Azure Resource Manager template to move Azure internal Load Balancer from one Azure region to another using Azure PowerShell-+ Last updated 09/17/2019-+
load-balancer Outbound Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/outbound-rules.md
Title: Outbound rules Azure Load Balancer description: This article explains how to configure outbound rules to control egress of internet traffic with Azure Load Balancer. -+ Last updated 1/6/2022-+ # <a name="outboundrules"></a>Outbound rules Azure Load Balancer
load-balancer Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/powershell-samples.md
Title: Azure PowerShell Samples - Azure Load Balancer
description: With these samples, load balance traffic to multiple websites on VMs and traffic to VMs for HA with Azure Load Balancer. documentationcenter: load-balancer-+ Last updated 12/10/2018-+ # Azure PowerShell Samples for Load Balancer
load-balancer Python Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/python-samples.md
description: With these samples, load balance traffic to multiple websites. Deploy load balancers in a HA configuration. documentationcenter: load-balancer-+ Last updated 08/20/2021-+ # Python Samples for Azure Load Balancer
load-balancer Quickstart Load Balancer Standard Internal Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-internal-cli.md
Title: 'Quickstart: Create an internal load balancer - Azure CLI' description: This quickstart shows how to create an internal load balancer by using the Azure CLI.-+ Last updated 03/23/2022-+ #Customer intent: I want to create a load balancer so that I can load balance internal traffic to VMs.
load-balancer Quickstart Load Balancer Standard Internal Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-internal-portal.md
Title: "Quickstart: Create an internal load balancer - Azure portal"
description: This quickstart shows how to create an internal load balancer by using the Azure portal. -+ Last updated 03/21/2022-+ #Customer intent: I want to create an internal load balancer so that I can load balance internal traffic to VMs.
load-balancer Quickstart Load Balancer Standard Internal Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-internal-powershell.md
Title: 'Quickstart: Create an internal load balancer - Azure PowerShell' description: This quickstart shows how to create an internal load balancer using Azure PowerShell-+ Last updated 03/24/2022-+ #Customer intent: I want to create a load balancer so that I can load balance internal traffic to VMs.
load-balancer Quickstart Load Balancer Standard Internal Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-internal-template.md
Title: 'Quickstart: Create an internal load balancer by using a template' description: This quickstart shows how to create an internal Azure load balancer by using an Azure Resource Manager template (ARM template). -+ -+ Last updated 09/14/2020
load-balancer Quickstart Load Balancer Standard Public Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-public-cli.md
Title: "Quickstart: Create a public load balancer - Azure CLI" description: This quickstart shows how to create a public load balancer using the Azure CLI-+ Last updated 03/16/2022-+ #Customer intent: I want to create a load balancer so that I can load balance internet traffic to VMs.
load-balancer Quickstart Load Balancer Standard Public Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-public-portal.md
Title: "Quickstart: Create a public load balancer - Azure portal" description: This quickstart shows how to create a load balancer by using the Azure portal.-+ Last updated 03/16/2022-+ #Customer intent: I want to create a load balancer so that I can load balance internet traffic to VMs.
load-balancer Quickstart Load Balancer Standard Public Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-public-powershell.md
Title: 'Quickstart: Create a public load balancer - Azure PowerShell' description: This quickstart shows how to create a load balancer using Azure PowerShell--++ Last updated 03/17/2022
load-balancer Quickstart Load Balancer Standard Public Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-public-template.md
description: This quickstart shows how to create a load balancer by using an Azure Resource Manager template. documentationcenter: na-+ na Last updated 12/09/2020-+ #Customer intent: I want to create a load balancer by using an Azure Resource Manager template so that I can load balance internet traffic to VMs.
load-balancer Load Balancer Linux Cli Load Balance Multiple Websites Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/scripts/load-balancer-linux-cli-load-balance-multiple-websites-vm.md
Title: Load balance multiple websites - Azure CLI - Azure Load Balancer description: This Azure CLI script example shows how to load balance multiple websites to the same virtual machine documentationcenter: load-balancer-+ ms.devlang: azurecli Last updated 03/04/2022-+
load-balancer Load Balancer Linux Cli Sample Nlb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/scripts/load-balancer-linux-cli-sample-nlb.md
Title: Load balance traffic to VMs for HA - Azure CLI - Azure Load Balancer
description: This Azure CLI script example shows how to load balance traffic to VMs for high availability documentationcenter: load-balancer-+ ms.devlang: azurecli Last updated 03/04/2022-+
load-balancer Load Balancer Linux Cli Sample Zonal Frontend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/scripts/load-balancer-linux-cli-sample-zonal-frontend.md
Title: Load balance VMs within a zone - Azure CLI
description: This Azure CLI script example shows how to load balance traffic to VMs within a specific availability zone documentationcenter: load-balancer-+ # Customer intent: As an IT administrator, I want to create a load balancer that load balances incoming internet traffic to virtual machines within a specific zone in a region. ms.assetid:
Last updated 03/04/2022-+
load-balancer Load Balancer Linux Cli Sample Zone Redundant Frontend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/scripts/load-balancer-linux-cli-sample-zone-redundant-frontend.md
Title: Load balance VMs across availability zones - Azure CLI - Azure Load Balancer description: This Azure CLI script example shows how to load balance traffic to VMs across availability zones documentationcenter: load-balancer-+ # Customer intent: As an IT administrator, I want to create a load balancer that load balances incoming internet traffic to virtual machines across availability zones in a region. ms.devlang: azurecli Last updated 06/14/2018-+
load-balancer Load Balancer Windows Powershell Load Balance Multiple Websites Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/scripts/load-balancer-windows-powershell-load-balance-multiple-websites-vm.md
Title: Load balance multiple websites - Azure PowerShell - Azure Load Balancer description: This Azure PowerShell script example shows how to load balance multiple websites to the same virtual machine documentationcenter: load-balancer-+ ms.devlang: powershell Last updated 04/20/2018-+
load-balancer Load Balancer Windows Powershell Sample Nlb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/scripts/load-balancer-windows-powershell-sample-nlb.md
description: This Azure PowerShell Script Example shows how to load balance traffic to VMs for high availability documentationcenter: load-balancer-+ ms.devlang: powershell Last updated 04/20/2018-+
load-balancer Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/skus.md
Title: Azure Load Balancer SKUs
description: Overview of Azure Load Balancer SKUs documentationcenter: na-+ na Last updated 12/22/2021-+ # Azure Load Balancer SKUs
load-balancer Troubleshoot Load Balancer Imds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/troubleshoot-load-balancer-imds.md
Title: Common error codes for Azure Instance Metadata Service (IMDS)
description: Overview of common error codes and corresponding mitigation methods for Azure Instance Metadata Service (IMDS) -+ Last updated 02/12/2021-+ # Error codes: Common error codes when using IMDS to retrieve load balancer information
load-balancer Troubleshoot Outbound Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/troubleshoot-outbound-connection.md
Title: Troubleshoot SNAT exhaustion and connection timeouts
description: Resolutions for common problems with outbound connectivity with Azure Load Balancer. -+ Last updated 04/21/2022-+ # Troubleshoot SNAT exhaustion and connection timeouts
load-balancer Tutorial Add Lb Existing Scale Set Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-add-lb-existing-scale-set-portal.md
Title: 'Tutorial: Add Azure Load Balancer to an existing virtual machine scale set - Azure portal' description: In this tutorial, learn how to add a load balancer to existing virtual machine scale set using the Azure portal. --++ Last updated 4/21/2021
load-balancer Tutorial Cross Region Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-cross-region-cli.md
Title: 'Tutorial: Create a cross-region load balancer using Azure CLI' description: Get started with this tutorial deploying a cross-region Azure Load Balancer using Azure CLI.--++ Last updated 03/04/2021
load-balancer Tutorial Cross Region Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-cross-region-portal.md
Title: 'Tutorial: Create a cross-region load balancer using the Azure portal' description: Get started with this tutorial deploying a cross-region Azure Load Balancer with the Azure portal.--++ Last updated 08/02/2021
load-balancer Tutorial Cross Region Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-cross-region-powershell.md
Title: 'Tutorial: Create a cross-region load balancer using Azure PowerShell' description: Get started with this tutorial deploying a cross-region Azure Load Balancer using Azure PowerShell.--++ Last updated 02/10/2021
load-balancer Tutorial Gateway Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-gateway-cli.md
Title: 'Tutorial: Create a gateway load balancer - Azure CLI' description: Use this tutorial to learn how to create a gateway load balancer using the Azure CLI.--++ Last updated 11/02/2021
load-balancer Tutorial Gateway Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-gateway-portal.md
Title: 'Tutorial: Create a gateway load balancer - Azure portal' description: Use this tutorial to learn how to create a gateway load balancer using the Azure portal.--++ Last updated 12/03/2021
load-balancer Tutorial Gateway Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-gateway-powershell.md
Title: 'Tutorial: Create a gateway load balancer - Azure PowerShell' description: Use this tutorial to learn how to create a gateway load balancer using Azure PowerShell.--++ Last updated 11/17/2021
load-balancer Tutorial Load Balancer Ip Backend Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-load-balancer-ip-backend-portal.md
Title: 'Tutorial: Create a public load balancer with an IP-based backend - Azure portal' description: In this tutorial, learn how to create a public load balancer with an IP based backend pool.--++ Last updated 08/06/2021
load-balancer Tutorial Load Balancer Port Forwarding Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-load-balancer-port-forwarding-portal.md
Title: "Tutorial: Create a single virtual machine inbound NAT rule - Azure portal" description: This tutorial shows how to configure port forwarding using Azure Load Balancer to create a connection to a single virtual machine in an Azure virtual network.--++ Last updated 03/08/2022
load-balancer Tutorial Load Balancer Standard Public Zonal Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-load-balancer-standard-public-zonal-portal.md
Title: "Tutorial: Load balance VMs within an availability zone - Azure portal"
description: This tutorial demonstrates how to create a Standard Load Balancer with zonal frontend to load balance VMs within an availability zone by using Azure portal -+ # Customer intent: As an IT administrator, I want to create a load balancer that load balances incoming internet traffic to virtual machines within a specific zone in a region. Last updated 08/15/2021-+
load-balancer Tutorial Multi Availability Sets Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-multi-availability-sets-portal.md
Title: 'Tutorial: Create a load balancer with more than one availability set in the backend pool - Azure portal' description: In this tutorial, deploy an Azure Load Balancer with more than one availability set in the backend pool.--++ Last updated 05/09/2022
load-balancer Tutorial Nat Rule Multi Instance Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-nat-rule-multi-instance-portal.md
Title: "Tutorial: Create a multiple virtual machines inbound NAT rule - Azure portal" description: This tutorial shows how to configure port forwarding using Azure Load Balancer to create a connection to multiple virtual machines in an Azure virtual network.--++ Last updated 03/10/2022
load-balancer Upgrade Basic Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-basic-standard.md
Title: Upgrade a basic to standard public load balancer
description: This article shows you how to upgrade a public load balancer from basic to standard SKU. -+ Last updated 03/17/2022-+ # Upgrade from a basic public to standard public load balancer
An Azure PowerShell script is available that does the following procedures:
* If the load balancer doesn't have a frontend IP configuration or backend pool, you'll encounter an error running the script. Ensure the load balancer has a frontend IP and backend pool (see the verification sketch after this list).
-* The script cannot migrate Virtual Machine Scale Set from Basic Load Balancer's backend to Standard Load Balancer's backend. We recommend manually creating a Standard Load Balancer and follow [Update or delete a load balancer used by virtual machine scale sets](https://docs.microsoft.com/azure/load-balancer/update-load-balancer-with-vm-scale-set) to complete the migration.
+* The script cannot migrate a Virtual Machine Scale Set from a Basic Load Balancer's backend to a Standard Load Balancer's backend. We recommend manually creating a Standard Load Balancer and following [Update or delete a load balancer used by virtual machine scale sets](update-load-balancer-with-vm-scale-set.md) to complete the migration.
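Before you run the upgrade script, you can confirm that a frontend IP configuration and a backend pool exist. The following Azure PowerShell sketch uses placeholder names (`myBasicLB`, `myResourceGroup`) and isn't part of the upgrade script itself:

```powershell
# Sketch: confirm the Basic load balancer has a frontend IP configuration and a backend pool.
# Replace the placeholder names with your own load balancer and resource group.
$lb = Get-AzLoadBalancer -Name "myBasicLB" -ResourceGroupName "myResourceGroup"

$lb.FrontendIpConfigurations | Select-Object Name
$lb.BackendAddressPools | Select-Object Name

if (-not $lb.FrontendIpConfigurations -or -not $lb.BackendAddressPools) {
    Write-Warning "Add a frontend IP configuration and a backend pool before running the upgrade script."
}
```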
### Change allocation method of the public IP address to static
load-balancer Upgrade Internalbasic To Publicstandard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-internalbasic-to-publicstandard.md
Title: Upgrade an internal basic load balancer - Outbound connections required description: Learn how to upgrade a basic internal load balancer to a standard public load balancer.-+ Last updated 03/17/2022-+ # Upgrade an internal basic load balancer - Outbound connections required
An Azure PowerShell script is available that does the following procedures:
* If the load balancer doesn't have a frontend IP configuration or backend pool, you'll encounter an error running the script. Ensure the load balancer has a frontend IP and backend pool
-* The script cannot migrate Virtual Machine Scale Set from Basic Load Balancer's backend to Standard Load Balancer's backend. We recommend manually creating a Standard Load Balancer and follow [Update or delete a load balancer used by virtual machine scale sets](https://docs.microsoft.com/azure/load-balancer/update-load-balancer-with-vm-scale-set) to complete the migration.
+* The script cannot migrate a Virtual Machine Scale Set from a Basic Load Balancer's backend to a Standard Load Balancer's backend. We recommend manually creating a Standard Load Balancer and following [Update or delete a load balancer used by virtual machine scale sets](update-load-balancer-with-vm-scale-set.md) to complete the migration.
## Download the script
load-testing Quickstart Create And Run Load Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/quickstart-create-and-run-load-test.md
Learn more about the [key concepts for Azure Load Testing](./concept-load-testin
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- Azure RBAC role with permission to create and manage resources in the subscription, such as [Contributor](/azure/role-based-access-control/built-in-roles#contributor) or [Owner](/azure/role-based-access-control/built-in-roles#owner)
## <a name="create_resource"></a> Create an Azure Load Testing resource
load-testing Tutorial Identify Bottlenecks Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/tutorial-identify-bottlenecks-azure-portal.md
In this tutorial, you'll learn how to:
Before you can load test the sample app, you have to get it deployed and running. Use Azure CLI commands, Git commands, and PowerShell commands to make that happen.
-1. Open Windows PowerShell, sign in to Azure, and set the subscription:
+1. Open Windows PowerShell, sign in to Azure, and set the subscription:
```azurecli az login az account set --subscription <your-Azure-Subscription-ID> ```
-
+ 1. Clone the sample application's source repo: ```powershell
- git clone https://github.com/Azure-Samples/nodejs-appsvc-cosmosdb-bottleneck.git
+ git clone https://github.com/Azure-Samples/nodejs-appsvc-cosmosdb-bottleneck.git
``` The sample application is a Node.js app that consists of an Azure App Service web component and an Azure Cosmos DB database. The repo includes a PowerShell script that deploys the sample app to your Azure subscription. It also has an Apache JMeter script that you'll use in later steps.
Before you can load test the sample app, you have to get it deployed and running
1. Go to the Node.js app's directory and deploy the sample app by using this PowerShell script: ```powershell
- cd nodejs-appsvc-cosmosdb-bottleneck
- .\deploymentscript.ps1
+ cd nodejs-appsvc-cosmosdb-bottleneck
+ .\deploymentscript.ps1
```
-
+ > [!TIP]
- > You can install PowerShell Core on [Linux/WSL](/powershell/scripting/install/installing-powershell-core-on-linux?view=powershell-7.1#ubuntu-1804&preserve-view=true) or [macOS](/powershell/scripting/install/installing-powershell-core-on-macos?view=powershell-7.1&preserve-view=true).
+ > You can install PowerShell on [Linux/WSL](/powershell/scripting/install/installing-powershell-on-linux) or [macOS](/powershell/scripting/install/installing-powershell-on-macos).
>
- > After you install it, you can run the previous command as `pwsh ./deploymentscript.ps1`.
+ > After you install it, you can run the previous command as `pwsh ./deploymentscript.ps1`.
1. At the prompt, provide:
Before you can load test the sample app, you have to get it deployed and running
1. After deployment finishes, go to the running sample application by opening `https://<yourappname>.azurewebsites.net` in a browser window.
-1. To see the application's components, sign in to the [Azure portal](https://portal.azure.com) and go to the resource group that you created.
+1. To see the application's components, sign in to the [Azure portal](https://portal.azure.com) and go to the resource group that you created.
:::image type="content" source="./media/tutorial-identify-bottlenecks-azure-portal/resource-group.png" alt-text="Screenshot that shows the list of Azure resource groups.":::
Now that you have the application deployed and running, you can run your first l
In this section, you'll create a load test by using a sample Apache JMeter test script.
-The sample application's source repo includes an Apache JMeter script named *SampleApp.jmx*. This script makes three API calls to the web app on each test iteration:
+The sample application's source repo includes an Apache JMeter script named *SampleApp.jmx*. This script makes three API calls to the web app on each test iteration:
* `add`: Carries out a data insert operation on Azure Cosmos DB for the number of visitors on the web app. * `get`: Carries out a GET operation from Azure Cosmos DB to retrieve the count.
To create a load test in the Load Testing resource for the sample app:
|Setting |Value |Description | |||| |**Engine instances** |**1** |The number of parallel test engines that run the Apache JMeter script. |
-
+ :::image type="content" source="./media/tutorial-identify-bottlenecks-azure-portal/create-new-test-load.png" alt-text="Screenshot that shows the Load tab for creating a test." ::: 1. On the **Monitoring** tab, specify the application components that you want to monitor with the resource metrics. Select **Add/modify** to manage the list of application components.
In this section, you'll use the Azure portal to manually start the load test tha
:::image type="content" source="./media/tutorial-identify-bottlenecks-azure-portal/test-runs-run.png" alt-text="Screenshot that shows selections for running a test." ::: Azure Load Testing begins to monitor and display the application's server metrics on the dashboard.
-
+ You can see the streaming client-side metrics while the test is running. By default, the results refresh automatically every five seconds. :::image type="content" source="./media/tutorial-identify-bottlenecks-azure-portal/aggregated-by-percentile.png" alt-text="Screenshot that shows the dashboard with test results.":::
In this section, you'll analyze the results of the load test to identify perform
1. First, look at the client-side metrics. You'll notice that the 90th percentile for the **Response time** metric for the `add` and `get` API requests is higher than it is for the `lasttimestamp` API. :::image type="content" source="./media/tutorial-identify-bottlenecks-azure-portal/client-side-metrics.png" alt-text="Screenshot that shows the client-side metrics.":::
-
+ You can see a similar pattern for **Errors**, where the `lasttimestamp` API has fewer errors than the other APIs.
-
+ :::image type="content" source="./media/tutorial-identify-bottlenecks-azure-portal/client-side-metrics-errors.png" alt-text="Screenshot that shows the error chart."::: The results of the `add` and `get` APIs are similar, whereas the `lasttimestamp` API behaves differently. The cause might be database related, because both the `add` and `get` APIs involve database access.
In this section, you'll analyze the results of the load test to identify perform
1. Now, look at the Azure Cosmos DB server-side metrics. :::image type="content" source="./media/tutorial-identify-bottlenecks-azure-portal/cosmos-db-metrics.png" alt-text="Screenshot that shows Azure Cosmos DB metrics.":::
-
+ Notice that the **Normalized RU Consumption** metric shows that the database was quickly running at 100% resource utilization. The high resource usage might have caused database throttling errors. It also might have increased response times for the `add` and `get` web APIs.
-
+ You can also see that the **Provisioned Throughput** metric for the Azure Cosmos DB instance has a maximum throughput of 400 RUs. Increasing the provisioned throughput of the database might resolve the performance problem. ## Increase the database throughput
-In this section, you'll allocate more resources to the database, to resolve the performance bottleneck.
+In this section, you'll allocate more resources to the database, to resolve the performance bottleneck.
For Azure Cosmos DB, increase the database RU scale setting:
For Azure Cosmos DB, increase the database RU scale setting:
:::image type="content" source="./media/tutorial-identify-bottlenecks-azure-portal/ru-scaling-for-cosmos-db.png" alt-text="Screenshot that shows Data Explorer tab."::: 1. Select **Scale & Settings**, and update the throughput value to **1200**.
-
+ :::image type="content" source="./media/tutorial-identify-bottlenecks-azure-portal/1200-ru-scaling-for-cosmos-db.png" alt-text="Screenshot that shows the updated Azure Cosmos D B scale settings."::: 1. Select **Save** to confirm the changes.
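If you prefer to script this change rather than use Data Explorer, a rough Azure PowerShell equivalent might look like the following sketch. The resource group, account, and database names are placeholders, and the sketch assumes the throughput is provisioned at the database level, as in the portal steps above:

```powershell
# Sketch: raise the database's provisioned throughput to 1200 RU/s.
# Requires the Az.CosmosDB module; replace the placeholders with the resources
# created by deploymentscript.ps1.
Update-AzCosmosDBSqlDatabaseThroughput `
    -ResourceGroupName "myResourceGroup" `
    -AccountName "mycosmosaccount" `
    -Name "SampleDB" `
    -Throughput 1200
```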
Now that you've increased the database throughput, rerun the load test and verif
1. After the load test finishes, check the **Response time** results and the **Errors** results of the client-side metrics. 1. Check the server-side metrics for Azure Cosmos DB and ensure that the performance has improved.
-
+ :::image type="content" source="./media/tutorial-identify-bottlenecks-azure-portal/cosmos-db-metrics-post-run.png" alt-text="Screenshot that shows the Azure Cosmos D B client-side metrics after update of the scale settings."::: The Azure Cosmos DB **Normalized RU Consumption** value is now well below 100%.
logic-apps Block Connections Connectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/block-connections-connectors.md
ms.suite: integration Previously updated : 07/23/2020 Last updated : 05/18/2022 # Block connections created by connectors in Azure Logic Apps
-If your organization doesn't permit connecting to restricted or unapproved resources by using their connectors in Azure Logic Apps, you can block the capability to create and use those connections in logic app workflows. With [Azure Policy](../governance/policy/overview.md), you can define and enforce [policies](../governance/policy/overview.md#policy-definition) that prevent creating or using connections for connectors that you want to block. For example, for security reasons, you might want to block connections to specific social media platforms or other services and systems.
+If your organization doesn't permit connecting to restricted or unapproved resources using their [managed connectors](../connectors/managed.md) in Azure Logic Apps, you can block the capability to create and use those connections in logic app workflows. With [Azure Policy](../governance/policy/overview.md), you can define and enforce [policies](../governance/policy/overview.md#policy-definition) that prevent creating or using connections for connectors that you want to block. For example, for security reasons, you might want to block connections to specific social media platforms or other services and systems.
-This topic shows how to set up a policy that blocks specific connections by using the Azure portal, but you can create policy definitions in other ways, for example, through the Azure REST API, Azure PowerShell, Azure CLI, and Azure Resource Manager templates. For more information, see [Tutorial: Create and manage policies to enforce compliance](../governance/policy/tutorials/create-and-manage.md).
+This article shows how to set up a policy that blocks specific connections by using the Azure portal, but you can create policy definitions in other ways. For example, you can use the Azure REST API, Azure PowerShell, Azure CLI, and Azure Resource Manager templates. For more information, see [Tutorial: Create and manage policies to enforce compliance](../governance/policy/tutorials/create-and-manage.md).
## Prerequisites
This topic shows how to set up a policy that blocks specific connections by usin
If you already have a logic app with the connection that you want to block, follow the [steps for the Azure portal](#connector-ID-portal). Otherwise, follow these steps:
-1. Visit the [Logic Apps connectors list](/connectors/connector-reference/connector-reference-logicapps-connectors).
+<a name="connector-ID-doc-reference"></a>
+
+### Connector reference doc
+
+1. Review [Connectors for Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors).
1. Find the reference page for the connector that you want to block.
If you already have a logic app with the connection that you want to block, foll
### Azure portal
-1. In the [Azure portal](https://portal.azure.com), find and open your logic app.
-
-1. On the logic app menu, select **Logic app code view** so that you can view your logic app's JSON definition.
+1. In the [Azure portal](https://portal.azure.com), find and open your logic app workflow.
- ![Open "Logic app code view" to find connector ID](./media/block-connections-connectors/code-view-connector-id.png)
-
-1. Find the `parameters` object that contains the `$connections` object, which includes a `{connection-name}` object for each connection in your logic app and specifies information about that connection:
-
- ```json
- {
- "parameters": {
- "$connections": {
- "value" : {
- "{connection-name}": {
- "connectionId": "/subscriptions/{Azure-subscription-ID}/resourceGroups/{Azure-resource-group-name}/providers/Microsoft.Web/connections/{connection-name}",
- "connectionName": "{connection-name}",
- "id": "/subscriptions/{Azure-subscription-ID}/providers/Microsoft.Web/locations/{Azure-region}/managedApis/{connection-name}"
- }
- }
- }
- }
- }
- ```
+1. On the logic app menu, select one of the following options:
+
+ * Consumption logic app: Under **Development Tools**, select **API connections**.
- For example, for the Instagram connector, find the `instagram` object, which identifies an Instagram connection:
+ * Standard logic app: Under **Workflows**, select **Connections**. On the **Connections** pane, select **API Connections** if not already selected.
- ```json
- {
- "parameters": {
- "$connections": {
- "value" : {
- "instagram": {
- "connectionId": "/subscriptions/xxxxxXXXXXxxxxxXXXXXxxxxxXXXXX/resourceGroups/MyLogicApp-RG/providers/Microsoft.Web/connections/instagram",
- "connectionName": "instagram",
- "id": "/subscriptions/xxxxxXXXXXxxxxxXXXXXxxxxxXXXXX/providers/Microsoft.Web/locations/westus/managedApis/instagram"
- }
- }
- }
- }
- }
- ```
+ 1. On the API connections pane, select the connection. When the connection pane opens, in the upper right corner, select **JSON View**.
-1. For the connection that you want to block, find the `id` property and value, which follows this format:
+ 1. Find the `api` object, which contains an `id` property and value that has the following format:
- `"id": "/subscriptions/{Azure-subscription-ID}/providers/Microsoft.Web/locations/{Azure-region}/managedApis/{connection-name}"`
+ `"id": "/subscriptions/{Azure-subscription-ID}/providers/Microsoft.Web/locations/{Azure-region}/managedApis/{connection-name}"`
- For example, here is the `id` property and value for an Instagram connection:
+ The following example shows the `id` property and value for an Instagram connection:
- `"id": "/subscriptions/xxxxxXXXXXxxxxxXXXXXxxxxxXXXXX/providers/Microsoft.Web/locations/westus/managedApis/instagram"`
+ `"id": "/subscriptions/xxxxxXXXXXxxxxxXXXXXxxxxxXXXXX/providers/Microsoft.Web/locations/westus/managedApis/instagram"`
-1. From the `id` property value, copy and save the connector reference ID at the end, for example, `instagram`.
+ 1. From the `id` property value, copy and save the connector reference ID at the end, for example, `instagram`.
- Later, when you create your policy definition, you use this ID in the definition's condition statement, for example:
+ Later, when you create your policy definition, you use this ID in the definition's condition statement, for example:
- `"like": "*managedApis/instagram"`
+ `"like": "*managedApis/instagram"`
<a name="create-policy-connections"></a>
-## Create policy to block creating connections
+## Block creating connections
-To block creating a connection altogether in a logic app, follow these steps:
+To block creating a connection altogether in a logic app workflow, follow these steps:
-1. Sign in to the [Azure portal](https://portal.azure.com). In the portal search box, enter `policy`, and select **Policy**.
+1. In the [Azure portal](https://portal.azure.com) search box, enter **policy**, and select **Policy**.
- ![In Azure portal, find and select "policy"](./media/block-connections-connectors/find-select-azure-policy.png)
+ ![Screenshot showing main Azure portal search box with "policy" entered and "Policy* selected.](./media/block-connections-connectors/find-select-azure-policy.png)
-1. On the **Policy** menu, under **Authoring**, select **Definitions** > **+ Policy definition**.
+1. On the **Policy** menu, under **Authoring**, select **Definitions**. On the **Definitions** pane toolbar, select **Policy definition**.
- ![Select "Definitions" > "+ Policy Definition"](./media/block-connections-connectors/add-new-policy-definition.png)
+ ![Screenshot showing the "Definitions" pane toolbar with "Policy definition" selected.](./media/block-connections-connectors/add-new-policy-definition.png)
-1. Under **Policy definition**, provide the information for your policy definition, based on the properties described under the example:
+1. On the **Policy definition** pane, provide the information for your policy definition, based on the properties described under the example:
- ![Screenshot that shows the "Policy definition" properties.](./media/block-connections-connectors/policy-definition-create-connections-1.png)
+ ![Screenshot showing the policy definition properties.](./media/block-connections-connectors/policy-definition-create-connections-1.png)
| Property | Required | Value | Description | |-|-|-|-|
To block creating a connection altogether in a logic app, follow these steps:
| **Description** | No | <*policy-definition-name*> | A description for the policy definition | | **Category** | Yes | **Logic apps** | The name for an existing category or new category for the policy definition | | **Policy enforcement** | Yes | **Enabled** | This setting specifies whether to enable or disable the policy definition when you save your work. |
- ||||
+ |||||
1. Under **POLICY RULE**, the JSON edit box is pre-populated with a policy definition template. Replace this template with your [policy definition](../governance/policy/concepts/definition-structure.md) based on the properties described in the table below and by following this syntax:
To block creating a connection altogether in a logic app, follow these steps:
| `effect` | `deny` | The `effect` is to block the request, which is to create the specified connection <p><p>For more information, see [Understand Azure Policy effects - Deny](../governance/policy/concepts/effects.md#deny). | ||||
- For example, suppose that you want to block creating connections with the Instagram connector. Here is the policy definition that you can use:
+ For example, suppose that you want to block creating connections with the Instagram connector. Here's the policy definition that you can use:
```json {
To block creating a connection altogether in a logic app, follow these steps:
} ```
- Here is the way that the **POLICY RULE** box appears:
+ Here's the way that the **POLICY RULE** box appears:
- ![Screenshot that shows the "POLICY RULE" box with a policy rule example.](./media/block-connections-connectors/policy-definition-create-connections-2.png)
+ ![Screenshot showing the "POLICY RULE" box with a policy rule example.](./media/block-connections-connectors/policy-definition-create-connections-2.png)
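For reference, a minimal deny rule along these lines might look like the following sketch. It's illustrative only; it pairs the `like` condition shown earlier with the `Microsoft.Web/connections/api.id` field alias:

```json
{
  "mode": "All",
  "policyRule": {
    "if": {
      "field": "Microsoft.Web/connections/api.id",
      "like": "*managedApis/instagram"
    },
    "then": {
      "effect": "deny"
    }
  },
  "parameters": {}
}
```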
For multiple connectors, you can add more conditions, for example:
To block creating a connection altogether in a logic app, follow these steps:
1. When you're done, select **Save**. After you save the policy definition, Azure Policy generates and adds more property values to the policy definition.
-1. Next, to assign the policy definition where you want enforce the policy, [create a policy assignment](#create-policy-assignment).
+1. Next, to assign the policy definition where you want to enforce the policy, [create a policy assignment](#create-policy-assignment).
For more information about Azure Policy definitions, see these topics:
For more information about Azure Policy definitions, see these topics:
<a name="create-policy-connector-usage"></a>
-## Create policy to block using connections
-
-When you create a connection inside a logic app, that connection exists as separate Azure resource. If you delete only the logic app, the connection isn't automatically deleted and continues to exist until deleted. You might have a scenario where the connection already exists or where you have to create the connection for use outside a logic app. You can still block the capability to use an existing connection in a logic app by creating a policy that prevents saving logic apps that have the restricted or unapproved connection.
+## Block associating connections with logic apps
+
+When you create a connection in a logic app workflow, this connection exists as a separate Azure resource. If you delete only the logic app workflow, the connection resource isn't automatically deleted and continues to exist until you delete it. You might have a scenario where the connection resource already exists or where you have to create the connection resource for use outside the logic app. You can still block the capability to associate the connection with a different logic app workflow by creating a policy that prevents saving logic app workflows that try to use the restricted or unapproved connection. This policy affects only logic app workflows that don't already use the connection.
+
+1. In the [Azure portal](https://portal.azure.com) search box, enter **policy**, and select **Policy**.
-1. Sign in to the [Azure portal](https://portal.azure.com). In the portal search box, enter `policy`, and select **Policy**.
+ ![Screenshot showing the Azure portal search box with "policy" entered and "Policy" selected.](./media/block-connections-connectors/find-select-azure-policy.png)
- ![In Azure portal, find and select "policy"](./media/block-connections-connectors/find-select-azure-policy.png)
+1. On the **Policy** menu, under **Authoring**, select **Definitions**. On the **Definitions** pane toolbar, select **Policy definition**.
-1. On the **Policy** menu, under **Authoring**, select **Definitions** > **+ Policy definition**.
-
- ![Select "Definitions" > "+ Policy Definition"](./media/block-connections-connectors/add-new-policy-definition.png)
+ ![Screenshot showing "Definitions" pane toolbar with "Policy definition" selected.](./media/block-connections-connectors/add-new-policy-definition.png)
1. Under **Policy definition**, provide the information for your policy definition, based on the properties described under the example, which continues to use Instagram as the example:
- ![Policy definition properties](./media/block-connections-connectors/policy-definition-using-connections-1.png)
+ ![Screenshot showing policy definition properties.](./media/block-connections-connectors/policy-definition-using-connections-1.png)
| Property | Required | Value | Description | |-|-|-|-|
When you create a connection inside a logic app, that connection exists as separ
| **Description** | No | <*policy-definition-name*> | A description for the policy definition | | **Category** | Yes | **Logic apps** | The name for an existing category or new category for the policy definition | | **Policy enforcement** | Yes | **Enabled** | This setting specifies whether to enable or disable the policy definition when you save your work. |
- ||||
+ |||||
1. Under **POLICY RULE**, the JSON edit box is pre-populated with a policy definition template. Replace this template with your [policy definition](../governance/policy/concepts/definition-structure.md) based on the properties described in the table below and by following this syntax:
When you create a connection inside a logic app, that connection exists as separ
| `if` | `{condition-to-evaluate}` | The condition that determines when to enforce the policy rule <p><p>In this scenario, the `{condition-to-evaluate}` determines whether the string output from `[string(field('Microsoft.Logic/workflows/parameters'))]`, contains the string, `{connector-name}`. <p><p>For more information, see [Policy definition structure - Policy rule](../governance/policy/concepts/definition-structure.md#policy-rule). | | `value` | `[string(field('Microsoft.Logic/workflows/parameters'))]` | The value to compare against the condition <p><p>In this scenario, the `value` is the string output from `[string(field('Microsoft.Logic/workflows/parameters'))]`, which converts the `$connectors` object inside the `Microsoft.Logic/workflows/parameters` object to a string. | | `contains` | `{connector-name}` | The logical operator and value to use for comparing with the `value` property <p><p>In this scenario, the `contains` operator makes sure that the rule works regardless where `{connector-name}` appears, where the string, `{connector-name}`, is the ID for the connector that you want to restrict or block. <p><p>For example, suppose that you want to block using connections to social media platforms or databases: <p><p>- Twitter: `twitter` <br>- Instagram: `instagram` <br>- Facebook: `facebook` <br>- Pinterest: `pinterest` <br>- SQL Server or Azure SQL: `sql` <p><p>To find these connector IDs, see [Find connector reference ID](#connector-reference-ID) earlier in this topic. |
- | `then` | `{effect-to-apply}` | The effect to apply when the `if` condition is met <p><p>In this scenario, the `{effect-to-apply}` is to block and fail a request or operation the doesn't comply with the policy. <p><p>For more information, see [Policy definition structure - Policy rule](../governance/policy/concepts/definition-structure.md#policy-rule). |
+ | `then` | `{effect-to-apply}` | The effect to apply when the `if` condition is met <p><p>In this scenario, the `{effect-to-apply}` is to block and fail a request or operation that doesn't comply with the policy. <p><p>For more information, see [Policy definition structure - Policy rule](../governance/policy/concepts/definition-structure.md#policy-rule). |
| `effect` | `deny` | The `effect` is to `deny` or block the request to save a logic app that uses the specified connection <p><p>For more information, see [Understand Azure Policy effects - Deny](../governance/policy/concepts/effects.md#deny). | ||||
- For example, suppose that you want to block saving logic apps that use Instagram connections. Here is the policy definition that you can use:
+ For example, suppose that you want to block saving logic apps that use Instagram connections. Here's the policy definition that you can use:
```json
{
   "mode": "All",
   "policyRule": {
      "if": {
         "value": "[string(field('Microsoft.Logic/workflows/parameters'))]",
         "contains": "instagram"
      },
      "then": {
         "effect": "deny"
      }
   },
   "parameters": {}
}
```
- Here is the way that the **POLICY RULE** box appears:
+ Here's the way that the **POLICY RULE** box appears:
- ![Rule for policy definition](./media/block-connections-connectors/policy-definition-using-connections-2.png)
+ ![Screenshot showing policy definition rule.](./media/block-connections-connectors/policy-definition-using-connections-2.png)
1. When you're done, select **Save**. After you save the policy definition, Azure Policy generates and adds more property values to the policy definition.
-1. Next, to assign the policy definition where you want enforce the policy, [create a policy assignment](#create-policy-assignment).
+1. Next, to assign the policy definition where you want to enforce the policy, [create a policy assignment](#create-policy-assignment).
For more information about Azure Policy definitions, see these topics:
Next, you need to assign the policy definition where you want to enforce the policy, for example, to a single resource group, multiple resource groups, Azure Active Directory (Azure AD) tenant, or Azure subscription. For this task, follow these steps to create a policy assignment:
-1. If you signed out, sign back in to the [Azure portal](https://portal.azure.com). In the portal search box, enter `policy`, and select **Policy**.
+1. In the [Azure portal](https://portal.azure.com) search box, enter **policy**, and select **Policy**.
- ![In Azure portal, find and select "Policy"](./media/block-connections-connectors/find-select-azure-policy.png)
+ ![Screenshot showing Azure portal search box with "policy" entered and "Policy" selected.](./media/block-connections-connectors/find-select-azure-policy.png)
-1. On the **Policy** menu, under **Authoring**, select **Assignments** > **Assign policy**.
+1. On the **Policy** menu, under **Authoring**, select **Assignments**. On the **Assignments** pane toolbar, select **Assign policy**.
- ![Select "Assignments" > "Assign"](./media/block-connections-connectors/add-new-policy-assignment.png)
+ ![Screenshot showing "Assignments" pane toolbar with "Assign policy" selected.](./media/block-connections-connectors/add-new-policy-assignment.png)
-1. Under **Basics**, provide this information for the policy assignment:
+1. On the **Assign policy** pane, under **Basics**, provide this information for the policy assignment:
| Property | Required | Description |
|-|-|-|
Next, you need to assign the policy definition where you want to enforce the pol
For example, to assign the policy to an Azure resource group by using the Instagram example:
- ![Policy assignment properties](./media/block-connections-connectors/policy-assignment-basics.png)
+ ![Screenshot showing policy assignment properties.](./media/block-connections-connectors/policy-assignment-basics.png)
1. When you're done, select **Review + create**.
For more information, see [Quickstart: Create a policy assignment to identify no
## Test the policy
-To try your policy, start creating a connection by using the now restricted connector in the Logic App Designer. Continuing with the Instagram example, when you sign in to Instagram, you get this error that your logic app failed to create the connection:
+To try your policy, start creating a connection by using the now restricted connector in the workflow designer. Continuing with the Instagram example, when you sign in to Instagram, you get this error that your logic app failed to create the connection:
-![Connection failure due to applied policy](./media/block-connections-connectors/connection-failure-message.png)
+![Screenshot showing connection failure due to applied policy.](./media/block-connections-connectors/connection-failure-message.png)
The message includes this information:
## Next steps
-* Learn more about [Azure Policy](../governance/policy/overview.md)
+* Learn more about [Azure Policy](../governance/policy/overview.md)
logic-apps Logic Apps Control Flow Loops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-control-flow-loops.md
The "Until" loop stops execution based on these properties, so make sure that yo
* **Count**: This value is the highest number of loops that run before the loop exits. For the default and maximum limits on the number of "Until" loops that a logic app run can have, see [Concurrency, looping, and debatching limits](../logic-apps/logic-apps-limits-and-config.md#looping-debatching-limits).
-* **Timeout**: This value is the most amount of time that the loop runs before exiting and is specified in [ISO 8601 format](https://en.wikipedia.org/wiki/ISO_8601). For the default and maximum limits on the **Timeout** value, see [Concurrency, looping, and debatching limits](../logic-apps/logic-apps-limits-and-config.md#looping-debatching-limits).
+* **Timeout**: This value is the maximum amount of time that the "Until" action, including all the loops, runs before exiting and is specified in [ISO 8601 format](https://en.wikipedia.org/wiki/ISO_8601). For the default and maximum limits on the **Timeout** value, see [Concurrency, looping, and debatching limits](../logic-apps/logic-apps-limits-and-config.md#looping-debatching-limits).
The timeout value is evaluated for each loop cycle. If any action in the loop takes longer than the timeout limit, the current cycle doesn't stop. However, the next cycle doesn't start because the limit condition isn't met.
logic-apps Logic Apps Securing A Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-securing-a-logic-app.md
For more information about security in Azure, review these topics:
## Access to logic app operations
-For Consumption logic apps only, before you can create or manage logic apps and their connections, you need specific permissions, which are provided through roles using [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.md). You can also
-you can set up permissions so that only specific users or groups can run specific tasks, such as managing, editing, and viewing logic apps. To control their permissions, you can assign built-in or customized roles to members who have access to your Azure subscription. Azure Logic Apps has the following specific roles:
+For Consumption logic apps only, before you can create or manage logic apps and their connections, you need specific permissions, which are provided through roles using [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.md). You can also set up permissions so that only specific users or groups can run specific tasks, such as managing, editing, and viewing logic apps. To control their permissions, you can assign built-in or customized roles to members who have access to your Azure subscription. Azure Logic Apps has the following specific roles:
* [Logic App Contributor](../role-based-access-control/built-in-roles.md#logic-app-contributor): Lets you manage logic apps, but you can't change access to them.
logic-apps Set Up Zone Redundancy Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/set-up-zone-redundancy-availability-zones.md
During preview, the following considerations apply:
* Australia East
* Brazil South
* Canada Central
+ * Central India
* Central US
+ * East Asia
* East US
* East US 2
* France Central
+ * Germany West Central
* Japan East
+ * Korea Central
+ * Norway East
* South Central US
* UK South
+ * West Europe
+ * West US 3
* Azure Logic Apps currently supports the option to enable availability zones *only for new Consumption logic app workflows* that run in multi-tenant Azure Logic Apps.
machine-learning Azure Machine Learning Release Notes Cli V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/azure-machine-learning-release-notes-cli-v2.md
description: Learn about the latest updates to Azure Machine Learning CLI (v2)
+
Last updated 04/12/2022
# Azure Machine Learning CLI (v2) release notes

[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
In this article, learn about Azure Machine Learning CLI (v2) releases.
__RSS feed__: Get notified when this page is updated by copying and pasting the following URL into your feed reader: `https://docs.microsoft.com/api/search/rss?search=%22Azure+machine+learning+release+notes-v2%22&locale=en-us`
+## 2022-05-24
+
+### Azure Machine Learning CLI (v2) v2.4.0
+
+- The Azure Machine Learning CLI (v2) is now GA.
+- `az ml job`
+ - The command group is marked as GA.
+ - Added AutoML job type in public preview.
+ - Added `schedules` property to pipeline job in public preview.
+ - Added an option to list only archived jobs.
+ - Improved reliability of `az ml job download` command.
+- `az ml data`
+ - The command group is marked as GA.
+ - Added MLTable data type in public preview.
+ - Added an option to list only archived data assets.
+- `az ml environment`
+ - Added an option to list only archived environments.
+- `az ml model`
+ - The command group is marked as GA.
+ - Allow models to be created from job outputs.
+ - Added an option to list only archived models.
+- `az ml online-deployment`
+ - The command group is marked as GA.
+ - Removed timeout waiting for deployment creation.
+ - Improved online deployment list view.
+- `az ml online-endpoint`
+ - The command group is marked as GA.
+ - Added `mirror_traffic` property to online endpoints in public preview.
+ - Improved online endpoint list view.
+- `az ml batch-deployment`
+ - The command group is marked as GA.
+ - Added support for `uri_file` and `uri_folder` as invocation input.
+ - Fixed a bug in batch deployment update.
+ - Fixed a bug in batch deployment list-jobs output.
+- `az ml batch-endpoint`
+ - The command group is marked as GA.
+ - Added support for `uri_file` and `uri_folder` as invocation input.
+ - Fixed a bug in batch endpoint update.
+ - Fixed a bug in batch endpoint list-jobs output.
+- `az ml component`
+ - The command group is marked as GA.
+ - Added an option to list only archived components.
+- `az ml code`
+ - This command group is removed.
+
## 2022-03-14

### Azure Machine Learning CLI (v2) v2.2.1
machine-learning Azure Machine Learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/azure-machine-learning-release-notes.md
description: Learn about the latest updates to Azure Machine Learning Python SDK
+
At the time, of this release, the following browsers are supported: Chrome, Fire
+ **Improved Swagger schema generation experience**<br/> Our previous swagger generation method was error prone and impossible to automate. We have a new in-line way of generating swagger schemas from any Python function via decorators. We have open-sourced this code and our schema generation protocol is not coupled to the Azure ML platform.
-+ **Azure ML CLI is generally available (GA)**<br/> Models can now be deployed with a single CLI command. We got common customer feedback that no one deploys an ML model from a Jupyter notebook. The [**CLI reference documentation**](./reference-azure-machine-learning-cli.md) has been updated.
++ **Azure ML CLI is generally available (GA)**<br/> Models can now be deployed with a single CLI command. We got common customer feedback that no one deploys an ML model from a Jupyter notebook. The [**CLI reference documentation**](./v1/reference-azure-machine-learning-cli.md) has been updated.

## 2019-04-22
machine-learning Convert To Image Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/convert-to-image-directory.md
description: Learn how to use the Convert to Image Directory component to Conver
+
This article describes how to use the Convert to Image Directory component to he
> [!NOTE]
> For inference, the image dataset folder only needs to contain unclassified images.
-1. [Register the image dataset as a file dataset](../how-to-create-register-datasets.md) in your workspace, since the input of Convert to Image Directory component must be a **File dataset**.
+1. [Register the image dataset as a file dataset](../v1/how-to-create-register-datasets.md) in your workspace, since the input of Convert to Image Directory component must be a **File dataset**.
1. Add the registered image dataset to the canvas. You can find your registered dataset in the **Datasets** category in the component list on the left of the canvas. Currently, the designer doesn't support visualizing image datasets.
The output of **Convert to Image Directory** component is in **Image Directory**
## Next steps
-See the [set of components available](component-reference.md) to Azure Machine Learning.
+See the [set of components available](component-reference.md) to Azure Machine Learning.
machine-learning Execute Python Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/execute-python-script.md
-+
if spec is None:
## Access to current workspace and registered datasets
-You can refer to the following sample code to access to the [registered datasets](../how-to-create-register-datasets.md) in your workspace:
+You can refer to the following sample code to access the [registered datasets](../v1/how-to-create-register-datasets.md) in your workspace:
```Python def azureml_main(dataframe1 = None, dataframe2 = None):
machine-learning Execute R Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/execute-r-script.md
description: Learn how to use the Execute R Script component in Azure Machine Le
+
azureml_main <- function(dataframe1, dataframe2){
## Access to registered dataset
-You can refer to the following sample code to access to the [registered datasets](../how-to-create-register-datasets.md) in your workspace:
+You can refer to the following sample code to access the [registered datasets](../v1/how-to-create-register-datasets.md) in your workspace:
```R azureml_main <- function(dataframe1, dataframe2){
machine-learning Import Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/import-data.md
description: Learn how to use the Import Data component in Azure Machine Learnin
+
This article describes a component in Azure Machine Learning designer.
Use this component to load data into a machine learning pipeline from existing cloud data services.

> [!Note]
-> All functionality provided by this component can be done by **datastore** and **datasets** in the worksapce landing page. We recommend you use **datastore** and **dataset** which includes additional features like data monitoring. To learn more, see [How to Access Data](../how-to-access-data.md) and [How to Register Datasets](../how-to-create-register-datasets.md) article.
+> All functionality provided by this component can be done by **datastore** and **datasets** in the workspace landing page. We recommend that you use **datastore** and **dataset**, which include additional features like data monitoring. To learn more, see the [How to Access Data](../v1/how-to-access-data.md) and [How to Register Datasets](../v1/how-to-create-register-datasets.md) articles.
> After you register a dataset, you can find it in the **Datasets** -> **My Datasets** category in the designer interface. This component is reserved for Studio (classic) users to provide a familiar experience.
>
machine-learning Concept Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automated-ml.md
Last updated 03/15/2022

# What is automated machine learning (AutoML)?
The following settings allow you to configure your automated ML experiment.
|**Split data into train/validation sets**| ✓|✓
|**Supports ML tasks: classification, regression, & forecasting**| ✓| ✓
|**Supports computer vision tasks (preview): image classification, object detection & instance segmentation**| ✓|
+|**NLP-Text**| ✓| ✓
|**Optimizes based on primary metric**| ✓| ✓
|**Supports Azure ML compute as compute target** | ✓|✓
|**Configure forecast horizon, target lags & rolling window**|✓|✓
|**Set exit criteria** |✓|✓
|**Set concurrent iterations**| ✓|✓
-|**Drop columns**| ✓|✓
|**Block algorithms**|✓|✓
|**Cross validation** |✓|✓
|**Supports training on Azure Databricks clusters**| ✓|
These settings allow you to review and control your experiment runs and its chil
|**Run summary table**| ✓|✓|
|**Cancel runs & child runs**| ✓|✓|
|**Get guardrails**| ✓|✓|
-|**Pause & resume runs**| ✓| |
## When to use AutoML: classification, regression, forecasting, computer vision & NLP
The following diagram illustrates this process.
You can also inspect the logged run information, which [contains metrics](how-to-understand-automated-ml.md) gathered during the run. The training run produces a Python serialized object (`.pkl` file) that contains the model and data preprocessing.
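For example, after downloading that output, the serialized object can typically be loaded back into Python with `joblib`; this is a minimal sketch that assumes the file is named `model.pkl` and that a compatible environment (the same package versions used for training) is active:

```python
import joblib

# Load the serialized AutoML output: the fitted data preprocessing steps plus the model.
fitted_model = joblib.load("model.pkl")

# The loaded object can then score new data that has the same schema as the training data:
# predictions = fitted_model.predict(new_dataframe)
```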
-While model building is automated, you can also [learn how important or relevant features are](how-to-configure-auto-train.md#explain) to the generated models.
+While model building is automated, you can also [learn how important or relevant features are](./v1/how-to-configure-auto-train-v1.md#explain) to the generated models.
> [!VIDEO https://www.microsoft.com/videoplayer/embed/RE2Xc9t]
Consider these factors when choosing your compute target:
* **Choose a remote ML compute cluster**: If you're training with larger datasets, as in production training that creates models needing longer training runs, remote compute provides much better end-to-end time performance because `AutoML` parallelizes training runs across the cluster's nodes. On a remote compute, the start-up time for the internal infrastructure adds around 1.5 minutes per child run, plus additional minutes for the cluster infrastructure if the VMs aren't yet up and running.

### Pros and cons

Consider these pros and cons when choosing to use local vs. remote.

| | Pros (Advantages) |Cons (Handicaps) |
To help confirm that such bias isn't applied to the final recommended model, aut
Learn how to [configure AutoML experiments to use test data (preview) with the SDK](how-to-configure-cross-validation-data-splits.md#provide-test-data-preview) or with the [Azure Machine Learning studio](how-to-use-automated-ml-for-ml-models.md#create-and-run-experiment).
-You can also [test any existing automated ML model (preview)](how-to-configure-auto-train.md#test-existing-automated-ml-model)), including models from child runs, by providing your own test data or by setting aside a portion of your training data.
+You can also [test any existing automated ML model (preview)](./v1/how-to-configure-auto-train-v1.md#test-existing-automated-ml-model), including models from child runs, by providing your own test data or by setting aside a portion of your training data.
## Feature engineering
Automated machine learning supports ensemble models, which are enabled by defaul
The [Caruana ensemble selection algorithm](http://www.niculescu-mizil.org/papers/shotgun.icml04.revised.rev2.pdf) with sorted ensemble initialization is used to decide which models to use within the ensemble. At a high level, this algorithm initializes the ensemble with up to five models with the best individual scores, and verifies that these models are within 5% threshold of the best score to avoid a poor initial ensemble. Then for each ensemble iteration, a new model is added to the existing ensemble and the resulting score is calculated. If a new model improved the existing ensemble score, the ensemble is updated to include the new model.
-See the [how-to](how-to-configure-auto-train.md#ensemble) for changing default ensemble settings in automated machine learning.
+See the [how-to](./v1/how-to-configure-auto-train-v1.md#ensemble) for changing default ensemble settings in automated machine learning.
<a name="use-with-onnx"></a>
See the [how-to](how-to-configure-auto-train.md#ensemble) for changing default e
With Azure Machine Learning, you can use automated ML to build a Python model and have it converted to the ONNX format. Once the models are in the ONNX format, they can be run on a variety of platforms and devices. Learn more about [accelerating ML models with ONNX](concept-onnx.md).
-See how to convert to ONNX format [in this Jupyter notebook example](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml/classification-bank-marketing-all-features). Learn which [algorithms are supported in ONNX](how-to-configure-auto-train.md#supported-models).
+See how to convert to ONNX format [in this Jupyter notebook example](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml/classification-bank-marketing-all-features). Learn which [algorithms are supported in ONNX](how-to-configure-auto-train.md#supported-algorithms).
The ONNX runtime also supports C#, so you can use the model built automatically in your C# apps without any need for recoding or any of the network latencies that REST endpoints introduce. Learn more about [using an AutoML ONNX model in a .NET application with ML.NET](./how-to-use-automl-onnx-model-dotnet.md) and [inferencing ONNX models with the ONNX runtime C# API](https://onnxruntime.ai/docs/api/csharp-api.html).
machine-learning Concept Azure Machine Learning Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-azure-machine-learning-architecture.md
- Title: 'Architecture & key concepts'-
-description: This article gives you a high-level understanding of the architecture, terms, and concepts that make up Azure Machine Learning.
------ Previously updated : 10/21/2021-
-#Customer intent: As a data scientist, I want to understand the big picture about how Azure Machine Learning works.
--
-# How Azure Machine Learning works: Architecture and concepts
-
-Learn about the architecture and concepts for [Azure Machine Learning](overview-what-is-azure-machine-learning.md). This article gives you a high-level understanding of the components and how they work together to assist in the process of building, deploying, and maintaining machine learning models.
-
-## <a name="workspace"></a> Workspace
-
-A [machine learning workspace](concept-workspace.md) is the top-level resource for Azure Machine Learning.
--
-The workspace is the centralized place to:
-
-* Manage resources you use for training and deployment of models, such as [computes](#compute-instance)
-* Store assets you create when you use Azure Machine Learning, including:
- * [Environments](#environments)
- * [Experiments](#experiments)
- * [Pipelines](#ml-pipelines)
- * [Datasets](#datasets-and-datastores)
- * [Models](#models)
- * [Endpoints](#endpoints)
-
-A workspace includes other Azure resources that are used by the workspace:
-
-+ [Azure Container Registry (ACR)](https://azure.microsoft.com/services/container-registry/): Registers docker containers that you use during training and when you deploy a model. To minimize costs, ACR is only created when deployment images are created.
-+ [Azure Storage account](https://azure.microsoft.com/services/storage/): Is used as the default datastore for the workspace. Jupyter notebooks that are used with your Azure Machine Learning compute instances are stored here as well.
-+ [Azure Application Insights](https://azure.microsoft.com/services/application-insights/): Stores monitoring information about your models.
-+ [Azure Key Vault](https://azure.microsoft.com/services/key-vault/): Stores secrets that are used by compute targets and other sensitive information that's needed by the workspace.
-
-You can share a workspace with others.
-
-## Computes
-
-<a name="compute-targets"></a>
-A [compute target](concept-compute-target.md) is any machine or set of machines you use to run your training script or host your service deployment. You can use your local machine or a remote compute resource as a compute target. With compute targets, you can start training on your local machine and then scale out to the cloud without changing your training script.
-
-Azure Machine Learning introduces two fully managed cloud-based virtual machines (VM) that are configured for machine learning tasks:
-
-* <a name="compute-instance"></a> **Compute instance**: A compute instance is a VM that includes multiple tools and environments installed for machine learning. The primary use of a compute instance is for your development workstation. You can start running sample notebooks with no setup required. A compute instance can also be used as a compute target for training and inferencing jobs.
-
-* **Compute clusters**: Compute clusters are a cluster of VMs with multi-node scaling capabilities. Compute clusters are better suited for compute targets for large jobs and production. The cluster scales up automatically when a job is submitted. Use as a training compute target or for dev/test deployment.
-
-For more information about training compute targets, see [Training compute targets](concept-compute-target.md#train). For more information about deployment compute targets, see [Deployment targets](concept-compute-target.md#deploy).
-
-## Datasets and datastores
-
-[**Azure Machine Learning Datasets**](concept-data.md#datasets) make it easier to access and work with your data. By creating a dataset, you create a reference to the data source location along with a copy of its metadata. Because the data remains in its existing location, you incur no extra storage cost, and don't risk the integrity of your data sources.
-
-For more information, see [Create and register Azure Machine Learning Datasets](how-to-create-register-datasets.md). For more examples using Datasets, see the [sample notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/work-with-data/datasets-tutorial).
-
-Datasets use [datastores](concept-data.md#datastores) to securely connect to your Azure storage services. Datastores store connection information without putting your authentication credentials and the integrity of your original data source at risk. They store connection information, like your subscription ID and token authorization in your Key Vault associated with the workspace, so you can securely access your storage without having to hard code them in your script.
-
-## Environments
-
-[Workspace](#workspace) > **Environments**
-
-An [environment](concept-environments.md) is the encapsulation of the environment where training or scoring of your machine learning model happens. The environment specifies the Python packages, environment variables, and software settings around your training and scoring scripts.
-
-For code samples, see the "Manage environments" section of [How to use environments](how-to-use-environments.md#manage-environments).
-
-## Experiments
-
-[Workspace](#workspace) > **Experiments**
-
-An experiment is a grouping of many runs from a specified script. It always belongs to a workspace. When you submit a run, you provide an experiment name. Information for the run is stored under that experiment. If the name doesn't exist when you submit an experiment, a new experiment is automatically created.
-
-For an example of using an experiment, see [Tutorial: Train your first model](tutorial-1st-experiment-sdk-train.md).
-
-### Runs
-
-[Workspace](#workspace) > [Experiments](#experiments) > **Run**
-
-A run is a single execution of a training script. An experiment will typically contain multiple runs.
-
-Azure Machine Learning records all runs and stores the following information in the experiment:
-
-* Metadata about the run (timestamp, duration, and so on)
-* Metrics that are logged by your script
-* Output files that are autocollected by the experiment or explicitly uploaded by you
-* A snapshot of the directory that contains your scripts, prior to the run
-
-You produce a run when you submit a script to train a model. A run can have zero or more child runs. For example, the top-level run might have two child runs, each of which might have its own child run.
-
-### Run configurations
-
-[Workspace](#workspace) > [Experiments](#experiments) > [Run](#runs) > **Run configuration**
-
-A run configuration defines how a script should be run in a specified compute target. You use the configuration to specify the script, the compute target and Azure ML environment to run on, any distributed job-specific configurations, and some additional properties. For more information on the full set of configurable options for runs, see [ScriptRunConfig](/python/api/azureml-core/azureml.core.scriptrunconfig).
-
-A run configuration can be persisted into a file inside the directory that contains your training script. Or it can be constructed as an in-memory object and used to submit a run.
-
-For example run configurations, see [Configure a training run](how-to-set-up-training-targets.md).
-
-### Snapshots
-
-[Workspace](#workspace) > [Experiments](#experiments) > [Run](#runs) > **Snapshot**
-
-When you submit a run, Azure Machine Learning compresses the directory that contains the script as a zip file and sends it to the compute target. The zip file is then extracted, and the script is run there. Azure Machine Learning also stores the zip file as a snapshot as part of the run record. Anyone with access to the workspace can browse a run record and download the snapshot.
-
-### Logging
-
-Azure Machine Learning automatically logs standard run metrics for you. However, you can also [use the Python SDK to log arbitrary metrics](how-to-log-view-metrics.md).
-
-There are multiple ways to view your logs: monitoring run status in real time, or viewing results after completion. For more information, see [Monitor and view ML run logs](how-to-log-view-metrics.md).
--
-> [!NOTE]
-> [!INCLUDE [amlinclude-info](../../includes/machine-learning-amlignore-gitignore.md)]
-
-### Git tracking and integration
-
-When you start a training run where the source directory is a local Git repository, information about the repository is stored in the run history. This works with runs submitted using a script run configuration or ML pipeline. It also works for runs submitted from the SDK or Machine Learning CLI.
-
-For more information, see [Git integration for Azure Machine Learning](concept-train-model-git-integration.md).
-
-### Training workflow
-
-When you run an experiment to train a model, the following steps happen. These are illustrated in the training workflow diagram below:
-
-* Azure Machine Learning is called with the snapshot ID for the code snapshot saved in the previous section.
-* Azure Machine Learning creates a run ID (optional) and a Machine Learning service token, which is later used by compute targets like Machine Learning Compute/VMs to communicate with the Machine Learning service.
-* You can choose either a managed compute target (like Machine Learning Compute) or an unmanaged compute target (like VMs) to run training jobs. Here are the data flows for both scenarios:
- * VMs/HDInsight, accessed by SSH credentials in a key vault in the Microsoft subscription. Azure Machine Learning runs management code on the compute target that:
-
- 1. Prepares the environment. (Docker is an option for VMs and local computers. See the following steps for Machine Learning Compute to understand how running experiments on Docker containers works.)
- 1. Downloads the code.
- 1. Sets up environment variables and configurations.
- 1. Runs user scripts (the code snapshot mentioned in the previous section).
-
- * Machine Learning Compute, accessed through a workspace-managed identity.
-Because Machine Learning Compute is a managed compute target (that is, it's managed by Microsoft) it runs under your Microsoft subscription.
-
- 1. Remote Docker construction is kicked off, if needed.
- 1. Management code is written to the user's Azure Files share.
- 1. The container is started with an initial command. That is, management code as described in the previous step.
-
-* After the run completes, you can query runs and metrics. In the flow diagram below, this step occurs when the training compute target writes the run metrics back to Azure Machine Learning from storage in the Cosmos DB database. Clients can call Azure Machine Learning. Machine Learning will in turn pull metrics from the Cosmos DB database and return them back to the client.
-
-[![Training workflow](media/concept-azure-machine-learning-architecture/training-and-metrics.png)](media/concept-azure-machine-learning-architecture/training-and-metrics.png#lightbox)
-
-## Models
-
-At its simplest, a model is a piece of code that takes an input and produces output. Creating a machine learning model involves selecting an algorithm, providing it with data, and [tuning hyperparameters](how-to-tune-hyperparameters.md). Training is an iterative process that produces a trained model, which encapsulates what the model learned during the training process.
-
-You can bring a model that was trained outside of Azure Machine Learning. Or you can train a model by submitting a [run](#runs) of an [experiment](#experiments) to a [compute target](#compute-targets) in Azure Machine Learning. Once you have a model, you [register the model](#register-model) in the workspace.
-
-Azure Machine Learning is framework agnostic. When you create a model, you can use any popular machine learning framework, such as Scikit-learn, XGBoost, PyTorch, TensorFlow, and Chainer.
-
-For an example of training a model using Scikit-learn, see [Tutorial: Train an image classification model with Azure Machine Learning](tutorial-train-deploy-notebook.md).
--
-### <a name="register-model"></a> Model registry
-
-[Workspace](#workspace) > **Models**
-
-The **model registry** lets you keep track of all the models in your Azure Machine Learning workspace.
-
-Models are identified by name and version. Each time you register a model with the same name as an existing one, the registry assumes that it's a new version. The version is incremented, and the new model is registered under the same name.
-
-When you register the model, you can provide additional metadata tags and then use the tags when you search for models.
-
-> [!TIP]
-> A registered model is a logical container for one or more files that make up your model. For example, if you have a model that is stored in multiple files, you can register them as a single model in your Azure Machine Learning workspace. After registration, you can then download or deploy the registered model and receive all the files that were registered.
-
-You can't delete a registered model that is being used by an active deployment.
-
-For an example of registering a model, see [Train an image classification model with Azure Machine Learning](tutorial-train-deploy-notebook.md).
-
-## Deployment
-
-You deploy a [registered model](#register-model) as a service endpoint. You need the following components:
-
-* **Environment**. This environment encapsulates the dependencies required to run your model for inference.
-* **Scoring code**. This script accepts requests, scores the requests by using the model, and returns the results.
-* **Inference configuration**. The inference configuration specifies the environment, entry script, and other components needed to run the model as a service.
-
-For more information about these components, see [Deploy models with Azure Machine Learning](how-to-deploy-and-where.md).
-
-### Endpoints
-
-[Workspace](#workspace) > **Endpoints**
-
-An endpoint is an instantiation of your model into a web service that can be hosted in the cloud.
-
-#### Web service endpoint
-
-When deploying a model as a web service, the endpoint can be deployed on Azure Container Instances, Azure Kubernetes Service, or FPGAs. You create the service from your model, script, and associated files. These are placed into a base container image, which contains the execution environment for the model. The image has a load-balanced, HTTP endpoint that receives scoring requests that are sent to the web service.
-
-You can enable Application Insights telemetry or model telemetry to monitor your web service. The telemetry data is accessible only to you. It's stored in your Application Insights and storage account instances. If you've enabled automatic scaling, Azure automatically scales your deployment.
-
-The following diagram shows the inference workflow for a model deployed as a web service endpoint:
-
-Here are the details:
-
-* The user registers a model by using a client like the Azure Machine Learning SDK.
-* The user creates an image by using a model, a score file, and other model dependencies.
-* The Docker image is created and stored in Azure Container Registry.
-* The web service is deployed to the compute target (Container Instances/AKS) using the image created in the previous step.
-* Scoring request details are stored in Application Insights, which is in the user's subscription.
-* Telemetry is also pushed to the Microsoft/Azure subscription.
-
-[![Inference workflow](media/concept-azure-machine-learning-architecture/inferencing.png)](media/concept-azure-machine-learning-architecture/inferencing.png#lightbox)
--
-For an example of deploying a model as a web service, see [Tutorial: Train and deploy a model](tutorial-train-deploy-notebook.md).
-
-#### Real-time endpoints
-
-When you deploy a trained model in the designer, you can [deploy the model as a real-time endpoint](tutorial-designer-automobile-price-deploy.md). A real-time endpoint commonly receives a single request via the REST endpoint and returns a prediction in real-time. This is in contrast to batch processing, which processes multiple values at once and saves the results after completion to a datastore.
-
-#### Pipeline endpoints
-
-Pipeline endpoints let you call your [ML Pipelines](#ml-pipelines) programatically via a REST endpoint. Pipeline endpoints let you automate your pipeline workflows.
-
-A pipeline endpoint is a collection of published pipelines. This logical organization lets you manage and call multiple pipelines using the same endpoint. Each published pipeline in a pipeline endpoint is versioned. You can select a default pipeline for the endpoint, or specify a version in the REST call.
-
--
-## Automation
-
-### Azure Machine Learning CLI
-
-The [Azure Machine Learning CLI](reference-azure-machine-learning-cli.md) is an extension to the Azure CLI, a cross-platform command-line interface for the Azure platform. This extension provides commands to automate your machine learning activities.
-
-### ML Pipelines
-
-You use [machine learning pipelines](concept-ml-pipelines.md) to create and manage workflows that stitch together machine learning phases. For example, a pipeline might include data preparation, model training, model deployment, and inference/scoring phases. Each phase can encompass multiple steps, each of which can run unattended in various compute targets.
-
-Pipeline steps are reusable, and can be run without rerunning the previous steps if the output of those steps hasn't changed. For example, you can retrain a model without rerunning costly data preparation steps if the data hasn't changed. Pipelines also allow data scientists to collaborate while working on separate areas of a machine learning workflow.
-
-## Monitoring and logging
-
-Azure Machine Learning provides the following monitoring and logging capabilities:
-
-* For __Data Scientists__, you can monitor your experiments and log information from your training runs. For more information, see the following articles:
- * [Start, monitor, and cancel training runs](how-to-track-monitor-analyze-runs.md)
- * [Log metrics for training runs](how-to-log-view-metrics.md)
- * [Track experiments with MLflow](how-to-use-mlflow.md)
- * [Visualize runs with TensorBoard](how-to-monitor-tensorboard.md)
-* For __Administrators__, you can monitor information about the workspace, related Azure resources, and events such as resource creation and deletion by using Azure Monitor. For more information, see [How to monitor Azure Machine Learning](monitor-azure-machine-learning.md).
-* For __DevOps__ or __MLOps__, you can monitor information generated by models deployed as web services to identify problems with the deployments and gather data submitted to the service. For more information, see [Collect model data](how-to-enable-data-collection.md) and [Monitor with Application Insights](how-to-enable-app-insights.md).
-
-## Interacting with your workspace
-
-### Studio
-
-[Azure Machine Learning studio](overview-what-is-machine-learning-studio.md) provides a web view of all the artifacts in your workspace. You can view results and details of your datasets, experiments, pipelines, models, and endpoints. You can also manage compute resources and datastores in the studio.
-
-The studio is also where you access the interactive tools that are part of Azure Machine Learning:
-
-+ [Azure Machine Learning designer](concept-designer.md) to perform workflow steps without writing code
-+ Web experience for [automated machine learning](concept-automated-ml.md)
-+ [Azure Machine Learning notebooks](how-to-run-jupyter-notebooks.md) to write and run your own code in integrated Jupyter notebook servers.
-+ Data labeling projects to create, manage, and monitor projects for labeling [images](how-to-create-image-labeling-projects.md) or [text](how-to-create-text-labeling-projects.md).
-
-### Programming tools
-
-> [!IMPORTANT]
-> Tools marked (preview) below are currently in public preview.
-> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-+ Interact with the service in any Python environment with the [Azure Machine Learning SDK for Python](/python/api/overview/azure/ml/intro).
-+ Use [Azure Machine Learning designer](concept-designer.md) to perform the workflow steps without writing code.
-+ Use [Azure Machine Learning CLI](./reference-azure-machine-learning-cli.md) for automation.
-
-## Next steps
-
-To get started with Azure Machine Learning, see:
-
-* [What is Azure Machine Learning?](overview-what-is-azure-machine-learning.md)
-* [Create an Azure Machine Learning workspace](how-to-manage-workspace.md)
-* [Tutorial: Train and deploy a model](tutorial-train-deploy-notebook.md)
machine-learning Concept Azure Machine Learning V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-azure-machine-learning-v2.md
+
+ Title: 'How Azure Machine Learning works (v2)'
+
+description: This article gives you a high-level understanding of the resources and assets that make up Azure Machine Learning (v2).
++++++++ Last updated : 04/29/2022
+#Customer intent: As a data scientist, I want to understand the big picture about how Azure Machine Learning works.
++
+# How Azure Machine Learning works: resources and assets (v2)
++
+This article applies to the second version of the [Azure Machine Learning CLI & Python SDK (v2)](concept-v2.md). For version one (v1), see [How Azure Machine Learning works: Architecture and concepts (v1)](v1/concept-azure-machine-learning-architecture.md).
+
+Azure Machine Learning includes several resources and assets to enable you to perform your machine learning tasks. These resources and assets are needed to run any job.
+
+* **Resources**: setup or infrastructural resources needed to run a machine learning workflow. Resources include:
+ * [Workspace](#workspace)
+ * [Compute](#compute)
+ * [Datastore](#datastore)
+* **Assets**: created using Azure ML commands or as part of a training/scoring run. Assets are versioned and can be registered in the Azure ML workspace. They include:
+ * [Model](#model)
+ * [Environment](#environment)
+ * [Data](#data)
+ * [Component](#component)
+
+This document provides a quick overview of these resources and assets.
+
+## Workspace
+
+The workspace is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. The workspace keeps a history of all jobs, including logs, metrics, output, and a snapshot of your scripts. The workspace stores references to resources like datastores and compute. It also holds all assets like models, environments, components, and data assets.
+
+### Create a workspace
+
+### [CLI](#tab/cli)
+
+To create a workspace using CLI v2, use the following command:
++
+```bash
+az ml workspace create --file my_workspace.yml
+```
+
+For more information, see [workspace YAML schema](reference-yaml-workspace.md).
+
+### [Python SDK](#tab/sdk)
+
+To create a workspace using Python SDK v2, you can use the following code:
++
+```python
+ws_basic = Workspace(
+ name="my-workspace",
+ location="eastus", # Azure region (location) of workspace
+ display_name="Basic workspace-example",
+ description="This example shows how to create a basic workspace"
+)
+ml_client.workspaces.begin_create(ws_basic) # use MLClient to connect to the subscription and resource group and create workspace
+```
+
+This [Jupyter notebook](https://github.com/Azure/azureml-examples/blob/sdk-preview/sdk/resources/workspace/workspace.ipynb) shows more ways to create an Azure ML workspace using SDK v2.
+++
+## Compute
+
+A compute is a designated compute resource where you run your job or host your endpoint. Azure Machine Learning supports the following types of compute:
+
+* **Compute cluster** - a managed-compute infrastructure that allows you to easily create a cluster of CPU or GPU compute nodes in the cloud.
+* **Compute instance** - a fully configured and managed development environment in the cloud. You can use the instance as a training or inference compute for development and testing. It's similar to a virtual machine on the cloud.
+* **Inference cluster** - used to deploy trained machine learning models to Azure Kubernetes Service. You can create an Azure Kubernetes Service (AKS) cluster from your Azure ML workspace, or attach an existing AKS cluster.
+* **Attached compute** - You can attach your own compute resources to your workspace and use them for training and inference.
+
+### [CLI](#tab/cli)
+
+To create a compute using CLI v2, use the following command:
++
+```bash
+az ml compute create --file my_compute.yml
+```
+
+For more information, see [compute YAML schema](reference-yaml-overview.md#compute).
++
+### [Python SDK](#tab/sdk)
+
+To create a compute using Python SDK v2, you can use the following code:
++
+```python
+cluster_basic = AmlCompute(
+ name="basic-example",
+ type="amlcompute",
+ size="STANDARD_DS3_v2",
+ location="westus",
+ min_instances=0,
+ max_instances=2,
+ idle_time_before_scale_down=120,
+)
+ml_client.begin_create_or_update(cluster_basic)
+```
+
+This [Jupyter notebook](https://github.com/Azure/azureml-examples/blob/sdk-preview/sdk/resources/compute/compute.ipynb) shows more ways to create compute using SDK v2.
+++
+## Datastore
+
+Azure Machine Learning datastores securely keep the connection information to your data storage on Azure, so you don't have to code it in your scripts. You can register and create a datastore to easily connect to your storage account, and access the data in your underlying storage service. The CLI v2 and SDK v2 support the following types of cloud-based storage:
+
+* Azure Blob Container
+* Azure File Share
+* Azure Data Lake
+* Azure Data Lake Gen2
+
+### [CLI](#tab/cli)
+
+To create a datastore using CLI v2, use the following command:
++
+```bash
+az ml datastore create --file my_datastore.yml
+```
+For more information, see [datastore YAML schema](reference-yaml-overview.md#datastore).
++
+### [Python SDK](#tab/sdk)
+
+To create a datastore using Python SDK v2, you can use the following code:
++
+```python
+blob_datastore1 = AzureBlobDatastore(
+ name="blob-example",
+ description="Datastore pointing to a blob container.",
+ account_name="mytestblobstore",
+ container_name="data-container",
+ credentials={
+ "account_key": "XXXxxxXXXxXXXXxxXXXXXxXXXXXxXxxXxXXXxXXXxXXxxxXXxxXXXxXxXXXxxXxxXXXXxxxxxXXxxxxxxXXXxXXX"
+ },
+)
+ml_client.create_or_update(blob_datastore1)
+```
+
+This [Jupyter notebook](https://github.com/Azure/azureml-examples/blob/sdk-preview/sdk/resources/datastores/datastore.ipynb) shows more ways to create datastores using SDK v2.
+++
+## Model
+
+Azure Machine Learning models consist of the binary file(s) that represent a machine learning model and any corresponding metadata. Models can be created from a local or remote file or directory. For remote locations, `https`, `wasbs` and `azureml` locations are supported. The created model is tracked in the workspace under the specified name and version. Azure ML supports three types of storage format for models:
+
+* `custom_model`
+* `mlflow_model`
+* `triton_model`
+
+### Creating a model
+
+### [CLI](#tab/cli)
+
+To create a model using CLI v2, use the following command:
++
+```bash
+az ml model create --file my_model.yml
+```
+
+For more information, see [model YAML schema](reference-yaml-model.md).
++
+### [Python SDK](#tab/sdk)
+
+To create a model using Python SDK v2, you can use the following code:
++
+```python
+my_model = Model(
+ path="model.pkl", # the path to where my model file is located
+ type="custom_model", # can be custom_model, mlflow_model or triton_model
+ name="my-model",
+ description="Model created from local file.",
+)
+
+ml_client.models.create_or_update(my_model) # use the MLClient to connect to workspace and create/register the model
+```
+++
+## Environment
+
+Azure Machine Learning environments are an encapsulation of the environment where your machine learning task happens. They specify the software packages, environment variables, and software settings around your training and scoring scripts. The environments are managed and versioned entities within your Machine Learning workspace. Environments enable reproducible, auditable, and portable machine learning workflows across a variety of computes.
+
+### Types of environment
+
+Azure ML supports two types of environments: curated and custom.
+
+Curated environments are provided by Azure Machine Learning and are available in your workspace by default. Intended to be used as is, they contain collections of Python packages and settings to help you get started with various machine learning frameworks. These pre-created environments also allow for faster deployment time. For a full list, see the [curated environments article](resource-curated-environments.md).
+
+In custom environments, you're responsible for setting up your environment and installing packages or any other dependencies that your training or scoring script needs on the compute. Azure ML allows you to create your own environment using:
+
+* A docker image
+* A base docker image with a conda YAML to customize further
+* A docker build context
+
+### Create an Azure ML custom environment
+
+### [CLI](#tab/cli)
+
+To create an environment using CLI v2, use the following command:
++
+```bash
+az ml environment create --file my_environment.yml
+```
+For more information, see [environment YAML schema](reference-yaml-environment.md).
+++
+### [Python SDK](#tab/sdk)
+
+To create an environment using Python SDK v2, you can use the following code:
++
+```python
+my_env = Environment(
+ image="pytorch/pytorch:latest", # base image to use
+ name="docker-image-example", # name of the model
+ description="Environment created from a Docker image.",
+)
+
+ml_client.environments.create_or_update(my_env) # use the MLClient to connect to workspace and create/register the environment
+```
+
+This [Jupyter notebook](https://github.com/Azure/azureml-examples/blob/sdk-preview/sdk/assets/environment/environment.ipynb) shows more ways to create custom environments using SDK v2.
+++
+## Data
+
+Azure Machine Learning allows you to work with different types of data:
+
+* URIs (a location in local/cloud storage)
+ * `uri_folder`
+ * `uri_file`
+* Tables (a tabular data abstraction)
+ * `mltable`
+* Primitives
+ * `string`
+ * `boolean`
+ * `number`
+
+For most scenarios, you'll use URIs (`uri_folder` and `uri_file`) - a location in storage that can be easily mapped to the filesystem of a compute node in a job by either mounting or downloading the storage to the node.
+
+`mltable` is an abstraction for tabular data that is to be used for AutoML Jobs, Parallel Jobs, and some advanced scenarios. If you're just starting to use Azure Machine Learning and aren't using AutoML, we strongly encourage you to begin with URIs.
+
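+As an illustration (a sketch, not taken from this article), a local folder can be registered as a `uri_folder` data asset with the Python SDK v2; the folder path, asset name, and `ml_client` below are assumed placeholders:
+
+```python
+from azure.ai.ml.entities import Data
+
+# Register a local folder as a versioned uri_folder data asset
+my_data = Data(
+    path="./sample_data",  # local folder or cloud storage URI
+    type="uri_folder",     # one of: uri_folder, uri_file, mltable
+    name="my-sample-data",
+    description="Example data asset created from a local folder.",
+)
+
+ml_client.data.create_or_update(my_data)  # use the MLClient to register the data asset in the workspace
+```
+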
+## Component
+
+An Azure Machine Learning [component](concept-component.md) is a self-contained piece of code that does one step in a machine learning pipeline. Components are the building blocks of advanced machine learning pipelines. Components can do tasks such as data processing, model training, model scoring, and so on. A component is analogous to a function - it has a name, parameters, expects input, and returns output.
+
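+As an illustration (a sketch that assumes a component is defined in a local YAML file named `train_component.yml` and that `ml_client` is an authenticated `MLClient`), a component can be loaded and registered with the Python SDK v2:
+
+```python
+from azure.ai.ml import load_component
+
+# Load a component specification from a local YAML file (hypothetical file name)
+train_component = load_component("./train_component.yml")
+
+# Register the component in the workspace so it can be reused across pipelines
+ml_client.components.create_or_update(train_component)
+```
+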
+## Next steps
+
+* [Train models with the CLI (v2)](how-to-train-cli.md)
+* [Train models with the Azure ML Python SDK v2 (preview)](how-to-train-sdk.md)
machine-learning Concept Causal Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-causal-inference.md
+
+ Title: Make data-driven policies and influence decision making
+
+description: Make data-driven decisions and policies with the Responsible AI dashboard's integration of the Causal Analysis tool EconML.
++++++ Last updated : 05/10/2022+++
+# Make data-driven policies and influence decision making (preview)
+
+While machine learning models are powerful in identifying patterns in data and making predictions, they offer little support for estimating how the real-world outcome changes in the presence of an intervention. Practitioners have become increasingly focused on using historical data to inform their future decisions and business interventions. For example, how would revenue be affected if a corporation pursues a new pricing strategy? Would a new medication improve a patient's condition, all else equal?
++
+The Causal Inference component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md) addresses these questions by estimating the effect of a feature on an outcome of interest on average, across a population or a cohort and on an individual level. It also helps to construct promising interventions by simulating different feature responses to various interventions and creating rules to determine which population cohorts would benefit from a particular intervention. Collectively, these functionalities allow decision makers to apply new policies and affect real-world change.
+
+The capabilities of this component are founded on the [EconML](https://github.com/Microsoft/EconML) package, which estimates heterogeneous treatment effects from observational data via the [double machine learning](https://econml.azurewebsites.net/spec/estimation/dml.html) technique.
+
+Use Causal Inference when you need to:
+
+- Identify the features that have the most direct effect on your outcome of interest.
+- Decide what overall treatment policy to take to maximize real-world impact on an outcome of interest.
+- Understand how individuals with certain feature values would respond to a particular treatment policy.
+- The causal effects computed based on the treatment features are purely a data property. Hence, a trained model is optional when computing the causal effects.
+
+## How are causal inference insights generated?
+
+> [!NOTE]
+> Only historic data is required to generate causal insights.
++
+Double Machine Learning is a method for estimating (heterogeneous) treatment effects when all potential confounders/controls (factors that simultaneously had a direct effect on the treatment decision in the collected data and the observed outcome) are observed but are either too many (high-dimensional) for classical statistical approaches to be applicable or their effect on the treatment and outcome can't be satisfactorily modeled by parametric functions (non-parametric). Both latter problems can be addressed via machine learning techniques (for an example, see [Chernozhukov2016](https://econml.azurewebsites.net/spec/references.html#chernozhukov2016)).
+
+The method reduces the problem to first estimating two predictive tasks:
+
+- Predicting the outcome from the controls
+- Predicting the treatment from the controls
+
+Then the method combines these two predictive models in a final stage estimation to create a model of the heterogeneous treatment effect. The approach allows for arbitrary machine learning algorithms to be used for the two predictive tasks, while maintaining many favorable statistical properties related to the final model (for example, small mean squared error, asymptotic normality, construction of confidence intervals).
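+
+For illustration only (this isn't the dashboard's internal code), a minimal sketch of this two-stage approach with the EconML package might look like the following; the synthetic data, column layout, and model choices are assumptions:
+
+```python
+import numpy as np
+from econml.dml import LinearDML
+from sklearn.ensemble import GradientBoostingRegressor
+
+# Synthetic data: Y = outcome, T = treatment, X = effect-modifying features, W = controls/confounders
+rng = np.random.default_rng(0)
+X = rng.normal(size=(1000, 3))
+W = rng.normal(size=(1000, 5))
+T = X[:, 0] + rng.normal(size=1000)
+Y = 2.0 * T * (X[:, 1] > 0) + W[:, 0] + rng.normal(size=1000)
+
+est = LinearDML(
+    model_y=GradientBoostingRegressor(),  # first predictive task: outcome from controls
+    model_t=GradientBoostingRegressor(),  # second predictive task: treatment from controls
+)
+est.fit(Y, T, X=X, W=W)                   # final stage: model the heterogeneous treatment effect
+
+print(est.effect(X[:5]))                  # per-row treatment effect estimates
+print(est.effect_interval(X[:5]))         # confidence intervals
+```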
+
+## What other tools does Microsoft provide for causal inference?
+
+[Project Azua](https://www.microsoft.com/research/project/project_azua/) provides a novel framework focusing on end-to-end causal inference. Azua's technology DECI (deep end-to-end causal inference) is a single model that can simultaneously do causal discovery and causal inference. We only require the user to provide data, and the model can output the causal relationships among all different variables. By itself, this can provide insights into the data and enables metrics such as individual treatment effect (ITE), average treatment effect (ATE) and conditional average treatment effect (CATE) to be calculated, which can then be used to make optimal decisions. The framework is scalable for large data, both in terms of the number of variables and the number of data points; it can also handle missing data entries with mixed statistical types.
+
+[EconML](https://www.microsoft.com/research/project/econml/) (powering the backend of the Responsible AI dashboard) is a Python package that applies the power of machine learning techniques to estimate individualized causal responses from observational or experimental data. The suite of estimation methods provided in EconML represents the latest advances in causal machine learning. By incorporating individual machine learning steps into interpretable causal models, these methods improve the reliability of what-if predictions and make causal analysis quicker and easier for a broad set of users.
+
+[DoWhy](https://microsoft.github.io/dowhy/) is a Python library that aims to spark causal thinking and analysis. DoWhy provides a principled four-step interface for causal inference that focuses on explicitly modeling causal assumptions and validating them as much as possible. The key feature of DoWhy is its state-of-the-art refutation API that can automatically test causal assumptions for any estimation method, thus making inference more robust and accessible to non-experts. DoWhy supports estimation of the average causal effect for backdoor, front-door, instrumental variable and other identification methods, and estimation of the conditional effect (CATE) through an integration with the EconML library.
+
+## Next steps
+
+- Learn how to generate the Responsible AI dashboard via [CLIv2 and SDKv2](how-to-responsible-ai-dashboard-sdk-cli.md) or [studio UI](how-to-responsible-ai-dashboard-ui.md)
+- Learn how to generate a [Responsible AI scorecard](how-to-responsible-ai-scorecard.md) based on the insights observed in the Responsible AI dashboard.
machine-learning Concept Component https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-component.md
- Previously updated : 10/21/2021+ Last updated : 05/10/2022 --+
-# What is an Azure Machine Learning component (preview)?
-
-An Azure Machine Learning component (previously known as a module) is a self-contained piece of code that does one step in a machine learning pipeline. Components are the building blocks of advanced machine learning pipelines (see [Create and run machine learning pipelines with the Azure Machine Learning CLI](how-to-create-component-pipelines-cli.md)). Components can do tasks such as data processing, model training, model scoring, and so on.
-
-A component is analogous to a function - it has a name, parameters, expects input, and returns output. For more information on creating a component, see [create a component](#define-a-component-preview).
-
-## Why should I use a component (preview)?
-
-Components let you manage and reuse common logic across pipelines.
--- **Composable**: Components let developers hide complicated logic behind a simple interface. Component users don't have to worry about the underlying logic, they only need to provide parameters.--- **Share and reuse**: Components are automatically shared with users in the same workspace. You can reuse components across pipelines, environments, workspaces, and subscriptions. Built-in version-tracking lets you keep track of changes and reproduce results.--- **CLI support**: Use components to create pipelines in the CLI (v2).--
-## Define a component (preview)
-
-To define an Azure Machine Learning component, you must provide two files:
--- A component specification in the valid [YAML component specification format](reference-yaml-component-command.md). This file specifies the following information:
- - Metadata: name, display_name, version, type, and so on.
- - Interface: inputs and outputs
- - Command, code, & environment: The command, code, and environment used to run the component
-- A script to provide the actual execution logic.-
-### Component specification
-
-The component specification file defines the metadata and execution parameters for a component. The component spec tells Azure Machine Learning how to run the Python script that you provide.
-
-The following example is a component specification for a training component.
---
-The following table explains the fields in the example. For a full list of available fields, see the [YAML component specification reference page](reference-yaml-component-command.md).
-
-| Name | Type | Required | Description |
-| - | -- | -- | |
-| name | string | Yes | Name of the component. Must be a unique identifier of the component. Must start with number or letter, and only contain letters, numbers, `_`, and `-`. Maximum length is 255 characters.|
-| version | string | Yes | Version of the component. Must be a string. |
-| display_name | string | No | Display name of the component. Defaults to same as `name`. |
-| type | string | No | The type of the component. Currently, this value must be `command`.|
-| description | string | No | Detailed description of the component. |
-| tags | Dictionary&lt;string&gt; | No | A list of key-value pairs to describe different perspectives of the component. Each tag's key and value should be one word or a short phrase, for example, `Product:Office`, `Domain:NLP`, `Scenario:Image Classification`. |
-| is_deterministic | boolean | No | Whether the component will always generate the same result when given the same input data. The default is `True`. Should be set to `False` for components that will load data from external resources, for instance, importing data from a given url, since the data may be updated. |
-| inputs | Dictionary&lt;string, Input&gt; | No | Defines input ports and parameters of the component. The string key is the name of the input, which must be a valid Python variable name. |
-| outputs | Dictionary&lt;string, Output&gt; | No | Defines output ports of the component. The string key is the name of the output, which must be a valid Python variable name. |
-| code | string | No | Path to the source code. |
-| environment | Environment | No | The runtime environment for the component to run. |
-| command | string | No | The command to run the component code. |
-
-### Python script
-
-Your Python script contains the executable logic for your component. Your script tells Azure Machine Learning what you want your component to do.
-
-To run, you must match the arguments for your Python script with the arguments you defined in the YAML specification. The following example is a Python training script that matches the YAML specification from the previous section.
---
+# What is an Azure Machine Learning component?
-## Create a component
-### Create a component using CLI (v2)
+An Azure Machine Learning component is a self-contained piece of code that does one step in a machine learning pipeline. A component is analogous to a function - it has a name, inputs, outputs, and a body. Components are the building blocks of [Azure Machine Learning pipelines](concept-ml-pipelines.md).
+A component consists of three parts:
-After you define your component specification and Python script files, and [install CLI (v2) successfully](how-to-configure-cli.md) successfully, you can create the component in your workspaces using:
+- Metadata: name, display_name, version, type, etc.
+- Interface: input/output specifications (name, type, description, default value, etc.).
+- Command, Code & Environment: command, code and environment required to run the component.
-```azurecli
-az ml component create --file my_component.yml --version 1 --resource-group my-resource-group --workspace-name my-workspace
-```
-Use `az ml component create --help`for more information on the `create` command.
+## Why should I use a component?
-## Use components to build ML pipelines
+It's a good engineering practice to build a machine learning pipeline that splits a complete machine learning task into a multi-step workflow, so that everyone can work on a specific step independently. In Azure Machine Learning, a component represents one reusable step in a pipeline. Components are designed to help improve the productivity of pipeline building. Specifically, components offer:
-You can use the Azure CLI (v2) to create a pipeline job. See [Create and run ML pipelines (CLI)](how-to-create-component-pipelines-cli.md).
+- **Well-defined interface**: Components require a well-defined interface (input and output). The interface allows the user to build steps and connect steps easily. The interface also hides the complex logic of a step and removes the burden of understanding how the step is implemented.
-## Manage components
+- **Share and reuse**: As the building blocks of a pipeline, components can be easily shared and reused across pipelines, workspaces, and subscriptions. Components built by one team can be discovered and used by another team.
-You can check component details and manage the component using CLI (v2). Use `az ml component -h` to get detailed instructions on component command.
+- **Version control**: Components are versioned. The component producers can keep improving components and publish new versions. Consumers can use specific component versions in their pipelines. This gives them compatibility and reproducibility.
-### List components
+- **Unit testable**: A component is a self-contained piece of code, so it's easy to write unit tests for it.
-You can use `az ml component list` to list all components in a workspace.
+## Component and Pipeline
-### Show details for a component
+A machine learning pipeline is the workflow for a full machine learning task. Components are the building blocks of a machine learning pipeline. When you think about a component, think about it in the context of a pipeline.
-You can use `az ml component show --name <COMPONENT_NAME>` to show the details of a component.
+To build components, first define the machine learning pipeline. This requires breaking the full machine learning task down into a multi-step workflow, where each step is a component. For example, for a simple task that uses historical data to train a sales forecasting model, you may want a sequential workflow with data processing, model training, and model evaluation steps. For complex tasks, you may want to break steps down further, for example splitting a single data processing step into data ingestion, data cleaning, data pre-processing, and feature engineering steps.
-### Upgrade a component
+Once the steps in the workflow are defined, the next thing is to specify how the steps are connected in the pipeline. For example, to connect your data processing step and model training step, you may want to define a data processing component that outputs a folder containing the processed data. A training component then takes that folder as input and outputs a folder that contains the trained model. These input and output definitions become part of your component interface definition.
-You can use `az ml component create --file <NEW_VERSION.yaml>` to upgrade a component.
+Now, it's time to develop the code for executing a step. You can use your preferred language (Python, R, and so on). The code must be able to be executed by a shell command. During development, you may want to add a few inputs to control how the step is executed. For example, for a training step, you may want to add learning rate and number of epochs as inputs to control the training. These additional inputs, plus the inputs and outputs required to connect with other steps, make up the interface of the component. The arguments of the shell command are used to pass inputs and outputs to the code. The environment to execute the command and the code also needs to be specified. The environment could be a curated Azure ML environment, a Docker image, or a conda environment.
+Finally, you can package everything, including the code, command, environment, inputs, outputs, and metadata, together into a component. Then connect these components together to build pipelines for your machine learning workflow. One component can be used in multiple pipelines.
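+
+As a hedged illustration (assuming the Azure ML Python SDK v2; the component name, source folder, script, and curated environment below are placeholders, not anything defined in this article), packaging such a training step into a component might look like:
+
+```python
+from azure.ai.ml import MLClient, Input, Output, command
+from azure.identity import DefaultAzureCredential
+
+ml_client = MLClient(
+    DefaultAzureCredential(),
+    subscription_id="<subscription-id>",
+    resource_group_name="<resource-group>",
+    workspace_name="<workspace-name>",
+)
+
+# Metadata, interface (inputs/outputs), and command/code/environment in one definition
+train_step = command(
+    name="train_model",
+    display_name="Train model",
+    inputs={
+        "training_data": Input(type="uri_folder"),
+        "learning_rate": 0.01,
+    },
+    outputs={"model_output": Output(type="uri_folder")},
+    code="./train_src",  # hypothetical folder containing train.py
+    command="python train.py --data ${{inputs.training_data}} "
+            "--lr ${{inputs.learning_rate}} --output ${{outputs.model_output}}",
+    environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest",  # assumed curated environment
+)
+
+# Register the component so it can be shared and reused across pipelines
+ml_client.create_or_update(train_step.component)
+```
+
+A component registered this way can then be referenced from multiple pipelines in the workspace.
+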
-### Delete a component
+To learn more about how to build a component, see:
-You can use `az ml component delete --name <COMPONENT_NAME>` to delete a component.
+- How to [build a component using Azure ML CLI v2](how-to-create-component-pipelines-cli.md).
+- How to [build a component using Azure ML SDK v2](how-to-create-component-pipeline-python.md).
## Next steps -- [Component YAML reference](reference-yaml-component-command.md)-- [Create and run ML pipelines (CLI)](how-to-create-component-pipelines-cli.md)-- [Build machine learning pipelines in the designer](tutorial-designer-automobile-price-train-score.md)
+- [Define component with the Azure ML CLI v2](./how-to-create-component-pipelines-cli.md).
+- [Define component with the Azure ML SDK v2](./how-to-create-component-pipeline-python.md).
+- [Define component with Designer](./how-to-create-component-pipelines-ui.md).
+- [Component CLI v2 YAML reference](./reference-yaml-component-command.md).
+- [What is Azure Machine Learning Pipeline?](concept-ml-pipelines.md).
+- Try out [CLI v2 component example](https://github.com/Azure/azureml-examples/tree/sdk-preview/cli/jobs/pipelines-with-components).
+- Try out [Python SDK v2 component example](https://github.com/Azure/azureml-examples/tree/sdk-preview/sdk/jobs/pipelines).
machine-learning Concept Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-compute-instance.md
description: Learn about the Azure Machine Learning compute instance, a fully ma
+
Compute instances make it easy to get started with Azure Machine Learning develo
Use a compute instance as your fully configured and managed development environment in the cloud for machine learning. They can also be used as a compute target for training and inferencing for development and testing purposes.
-For production grade model training, use an [Azure Machine Learning compute cluster](how-to-create-attach-compute-cluster.md) with multi-node scaling capabilities. For production grade model deployment, use [Azure Kubernetes Service cluster](how-to-deploy-azure-kubernetes-service.md).
- For compute instance Jupyter functionality to work, ensure that web socket communication is not disabled. Please ensure your network allows websocket connections to *.instances.azureml.net and *.instances.azureml.ms. > [!IMPORTANT]
A compute instance is a fully managed cloud-based workstation optimized for your
|Key benefits|Description| |-|-|
-|Productivity|You can build and deploy models using integrated notebooks and the following tools in Azure Machine Learning studio:<br/>- Jupyter<br/>- JupyterLab<br/>- VS Code (preview)<br/>- RStudio (preview)<br/>Compute instance is fully integrated with Azure Machine Learning workspace and studio. You can share notebooks and data with other data scientists in the workspace.<br/>
+|Productivity|You can build and deploy models using integrated notebooks and the following tools in Azure Machine Learning studio:<br/>- Jupyter<br/>- JupyterLab<br/>- VS Code (preview)<br/>Compute instance is fully integrated with Azure Machine Learning workspace and studio. You can share notebooks and data with other data scientists in the workspace.<br/>
|Managed & secure|Reduce your security footprint and add compliance with enterprise security requirements. Compute instances provide robust management policies and secure networking configurations such as:<br/><br/>- Autoprovisioning from Resource Manager templates or Azure Machine Learning SDK<br/>- [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md)<br/>- [Virtual network support](./how-to-secure-training-vnet.md#compute-cluster)<br/> - Azure policy to disable SSH access<br/> - Azure policy to enforce creation in a virtual network <br/> - Auto-shutdown/auto-start based on schedule <br/>- TLS 1.2 enabled | |Preconfigured&nbsp;for&nbsp;ML|Save time on setup tasks with pre-configured and up-to-date ML packages, deep learning frameworks, GPU drivers.| |Fully customizable|Broad support for Azure VM types including GPUs and persisted low-level customization such as installing packages and drivers makes advanced scenarios a breeze. You can also use setup scripts to automate customization | * Secure your compute instance with **[No public IP (preview)](./how-to-secure-training-vnet.md#no-public-ip)** * The compute instance is also a secure training compute target similar to compute clusters, but it is single node.
-* You can [create a compute instance](how-to-create-manage-compute-instance.md?tabs=python#create) yourself, or an administrator can **[create a compute instance on your behalf](how-to-create-manage-compute-instance.md?tabs=python#on-behalf)**.
-* You can also **[use a setup script (preview)](how-to-create-manage-compute-instance.md#setup-script)** for an automated way to customize and configure the compute instance as per your needs.
-* To save on costs, **[create a schedule (preview)](how-to-create-manage-compute-instance.md#schedule)** to automatically start and stop the compute instance.
+* You can [create a compute instance](how-to-create-manage-compute-instance.md?tabs=python#create) yourself, or an administrator can **[create a compute instance on your behalf](how-to-create-manage-compute-instance.md?tabs=python#create-on-behalf-of-preview)**.
+* You can also **[use a setup script (preview)](how-to-customize-compute-instance.md)** for an automated way to customize and configure the compute instance as per your needs.
+* To save on costs, **[create a schedule (preview)](how-to-create-manage-compute-instance.md#schedule-automatic-start-and-stop-preview)** to automatically start and stop the compute instance.
-## <a name="contents"></a>Tools and environments
+## Tools and environments
Azure Machine Learning compute instance enables you to author, train, and deploy models in a fully integrated notebook experience in your workspace.
Following tools and environments are already installed on the compute instance:
|**R** tools & environments|Details| |-|:-:|
-|RStudio Server Open Source Edition (preview)||
|R kernel||
+You can [Add RStudio](how-to-create-manage-compute-instance.md#add-custom-applications-such-as-rstudio-preview) when you create the instance.
+ |**PYTHON** tools & environments|Details| |-|-| |Anaconda Python||
You can also clone the latest Azure Machine Learning samples to your folder unde
Writing small files can be slower on network drives than writing to the compute instance local disk itself. If you are writing many small files, try using a directory directly on the compute instance, such as a `/tmp` directory. Note these files will not be accessible from other compute instances.
-Do not store training data on the notebooks file share. You can use the `/tmp` directory on the compute instance for your temporary data. However, do not write very large files of data on the OS disk of the compute instance. OS disk on compute instance has 128 GB capacity. You can also store temporary training data on temporary disk mounted on /mnt. Temporary disk size is configurable based on the VM size chosen and can store larger amounts of data if a higher size VM is chosen. You can also mount [datastores and datasets](concept-azure-machine-learning-architecture.md#datasets-and-datastores). Any software packages you install are saved on the OS disk of compute instance. Please note customer managed key encryption is currently not supported for OS disk. The OS disk for compute instance is encrypted with Microsoft-managed keys.
+Do not store training data on the notebooks file share. You can use the `/tmp` directory on the compute instance for your temporary data. However, do not write very large files of data on the OS disk of the compute instance. The OS disk on the compute instance has 128 GB capacity. You can also store temporary training data on the temporary disk mounted at /mnt. The temporary disk size is configurable based on the VM size chosen and can store larger amounts of data if a larger VM size is chosen. You can also mount [datastores and datasets](v1/concept-azure-machine-learning-architecture.md#datasets-and-datastores). Any software packages you install are saved on the OS disk of the compute instance. Note that customer-managed key encryption is currently not supported for the OS disk. The OS disk for the compute instance is encrypted with Microsoft-managed keys.
-### <a name="create"></a>Create a compute instance
+### Create
-As an administrator, you can **[create a compute instance for others in the workspace (preview)](how-to-create-manage-compute-instance.md#on-behalf)**.
+As an administrator, you can **[create a compute instance for others in the workspace (preview)](how-to-create-manage-compute-instance.md#create-on-behalf-of-preview)**.
-You can also **[use a setup script (preview)](how-to-create-manage-compute-instance.md#setup-script)** for an automated way to customize and configure the compute instance.
+You can also **[use a setup script (preview)](how-to-customize-compute-instance.md)** for an automated way to customize and configure the compute instance.
To create a compute instance for yourself, use your workspace in Azure Machine Learning studio, [create a new compute instance](how-to-create-manage-compute-instance.md?tabs=azure-studio#create) from either the **Compute** section or in the **Notebooks** section when you are ready to run one of your notebooks.
You can also create an instance
* Directly from the [integrated notebooks experience](tutorial-train-deploy-notebook.md#azure) * In Azure portal * From Azure Resource Manager template. For an example template, see the [create an Azure Machine Learning compute instance template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices/machine-learning-compute-create-computeinstance).
-* With [Azure Machine Learning SDK](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/machine-learning/concept-compute-instance.md)
-* From the [CLI extension for Azure Machine Learning](reference-azure-machine-learning-cli.md#computeinstance)
+* With the [Azure Machine Learning SDK](how-to-create-manage-compute-instance.md?tabs=python#create), as shown in the sketch after this list
+* From the [CLI extension for Azure Machine Learning](how-to-create-manage-compute-instance.md?tabs=azure-cli#create)
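+
+As a sketch only (assuming the Azure ML Python SDK v2; the instance name and VM size below are placeholders), the SDK option above might look like:
+
+```python
+from azure.ai.ml import MLClient
+from azure.ai.ml.entities import ComputeInstance
+from azure.identity import DefaultAzureCredential
+
+ml_client = MLClient(
+    DefaultAzureCredential(),
+    subscription_id="<subscription-id>",
+    resource_group_name="<resource-group>",
+    workspace_name="<workspace-name>",
+)
+
+# Provision a single-node compute instance
+ci = ComputeInstance(name="my-compute-instance", size="STANDARD_DS3_V2")
+ml_client.compute.begin_create_or_update(ci)
+```
+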
The dedicated cores per region per VM family quota and total regional quota, which applies to compute instance creation, is unified and shared with Azure Machine Learning training compute cluster quota. Stopping the compute instance does not release quota to ensure you will be able to restart the compute instance. Please do not stop the compute instance through the OS terminal by doing a sudo shutdown.
Compute instance comes with P10 OS disk. Temp disk type depends on the VM size c
Compute instances can be used as a [training compute target](concept-compute-target.md#train) similar to Azure Machine Learning compute training clusters. A compute instance:+ * Has a job queue. * Runs jobs securely in a virtual network environment, without requiring enterprises to open up SSH port. The job executes in a containerized environment and packages your model dependencies in a Docker container. * Can run multiple small jobs in parallel (preview). One job per core can run in parallel while the rest of the jobs are queued.
machine-learning Concept Compute Target https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-compute-target.md
Last updated 10/21/2021+ #Customer intent: As a data scientist, I want to understand what a compute target is and why I need it.- # What are compute targets in Azure Machine Learning?
You can create Azure Machine Learning compute instances or compute clusters from
* [Compute instance](how-to-create-manage-compute-instance.md). * [Compute cluster](how-to-create-attach-compute-cluster.md). * An Azure Resource Manager template. For an example template, see [Create an Azure Machine Learning compute cluster](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices/machine-learning-compute-create-amlcompute).
-* A machine learning [extension for the Azure CLI](reference-azure-machine-learning-cli.md#resource-management).
When created, these compute resources are automatically part of your workspace, unlike other kinds of compute targets.
machine-learning Concept Counterfactual Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-counterfactual-analysis.md
+
+ Title: Counterfactuals analysis and what-if
+
+description: Generate diverse counterfactual examples with feature perturbations to see minimal changes required to achieve desired prediction with the Responsible AI dashboard's integration of DiceML.
++++++ Last updated : 05/10/2022+++
+# Counterfactuals analysis and what-if (preview)
+
+What-if counterfactuals address the question of "what would the model predict if the action input is changed?" and enable understanding and debugging of a machine learning model in terms of how it reacts to input (feature) changes. Compared with approximating a machine learning model or ranking features by their predictive importance (which standard interpretability techniques do), counterfactual analysis "interrogates" a model to determine what changes to a particular datapoint would flip the model decision. Such an analysis helps in disentangling the impact of different correlated features in isolation, and in acquiring a more nuanced understanding of how much of a feature change is needed to see a model decision flip for classification models and a decision change for regression models.
+
+The Counterfactual Analysis and what-if component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md) consists of two functionalities:
+
+- Generating a set of examples with minimal changes to a given point such that they change the model's prediction (showing the closest datapoints with opposite model predictions)
+- Enabling users to generate their own what-if perturbations to understand how the model reacts to feature changes.
+
+The capabilities of this component are built on the [DiCE](https://github.com/interpretml/DiCE) package, which implements counterfactual explanations that provide this information by showing feature-perturbed versions of the same datapoint that would have received a different model prediction (for example, Taylor would have received the loan if their income was higher by $10,000). The counterfactual analysis component enables you to identify which features to vary, and their permissible ranges, for valid and logical counterfactual examples.
+
+Use What-If Counterfactuals when you need to:
+
+- Examine fairness and reliability criteria as a decision evaluator (by perturbing sensitive attributes such as gender and ethnicity, and observing whether model predictions change).
+- Debug specific input instances in depth.
+- Provide solutions to end users and determine what they can do to get a desirable outcome from the model next time.
+
+## How are counterfactual examples generated?
+
+To generate counterfactuals, DiCE implements a few model-agnostic techniques. These methods apply to any opaque-box classifier or regressor. They're based on sampling nearby points to an input point, while optimizing a loss function based on proximity (and optionally, sparsity, diversity, and feasibility). Currently supported methods are:
+
+- [Randomized Search](http://interpret.ml/DiCE/notebooks/DiCE_model_agnostic_CFs.html#1.-Independent-random-sampling-of-features): Samples points randomly near the given query point and returns counterfactuals as those points whose predicted label is the desired class.
+- [Genetic Search](http://interpret.ml/DiCE/notebooks/DiCE_model_agnostic_CFs.html#2.-Genetic-Algorithm): Samples points using a genetic algorithm, given the combined objective of optimizing proximity to the given query point, changing as few features as possible, and diversity among the counterfactuals generated.
+- [KD Tree Search](http://interpret.ml/DiCE/notebooks/DiCE_model_agnostic_CFs.html#3.-Querying-a-KD-Tree) (For counterfactuals from a given training dataset): This algorithm returns counterfactuals from the training dataset. It constructs a KD tree over the training data points based on a distance function and then returns the closest points to a given query point that yields the desired predicted label.
+
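+As an illustration only (not the dashboard's internal implementation; the dataset, column names, and model below are assumptions), generating counterfactuals directly with the DiCE package might look like:
+
+```python
+import dice_ml
+import pandas as pd
+from sklearn.ensemble import RandomForestClassifier
+
+# Hypothetical tabular data with a binary outcome column named "loan_approved"
+train_df = pd.read_csv("loans.csv")  # placeholder dataset
+features = train_df.drop(columns=["loan_approved"])
+model = RandomForestClassifier().fit(features, train_df["loan_approved"])
+
+data = dice_ml.Data(
+    dataframe=train_df,
+    continuous_features=["income", "loan_amount"],  # assumed column names
+    outcome_name="loan_approved",
+)
+ml_model = dice_ml.Model(model=model, backend="sklearn")
+explainer = dice_ml.Dice(data, ml_model, method="random")  # randomized search
+
+# Generate three counterfactuals that flip the prediction for one query row
+query = features.iloc[[0]]
+cf = explainer.generate_counterfactuals(query, total_CFs=3, desired_class="opposite")
+cf.visualize_as_dataframe(show_only_changes=True)
+```
+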
+## Next steps
+
+- Learn how to generate the Responsible AI dashboard via [CLIv2 and SDKv2](how-to-responsible-ai-dashboard-sdk-cli.md) or [studio UI](how-to-responsible-ai-dashboard-ui.md)
+- Learn how to generate a [Responsible AI scorecard](how-to-responsible-ai-scorecard.md) based on the insights observed in the Responsible AI dashboard.
machine-learning Concept Data Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-data-analysis.md
+
+ Title: Understand your datasets
+
+description: Perform exploratory data analysis to understand feature biases and imbalances with the Responsible AI dashboard's Data Explorer.
++++++ Last updated : 05/10/2022+++
+# Understand your datasets (preview)
+
+Machine learning models "learn" from historical decisions and actions captured in training data. As a result, their performance in real-world scenarios is heavily influenced by the data they're trained on. When feature distribution in a dataset is skewed, it can cause a model to incorrectly predict datapoints belonging to an underrepresented group or to be optimized along an inappropriate metric. For example, consider a housing price prediction model whose training set consisted of 75% newer houses priced below the median. As a result, it was much less successful at identifying more expensive historic houses. The fix was to add older, expensive houses to the training data and to augment the features with insights about the historic value of the house. That data augmentation improved results.
+
+The Data Explorer component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md) helps visualize datasets based on predicted and actual outcomes, error groups, and specific features. This enables you to identify issues of over- and underrepresentation and to see how data is clustered in the dataset. Data visualizations consist of aggregate plots or individual datapoints.
+
+## When to use Data Explorer
+
+Use Data Explorer when you need to:
+
+- Explore your dataset statistics by selecting different filters to slice your data into different dimensions (also known as cohorts).
+- Understand the distribution of your dataset across different cohorts and feature groups.
+- Determine whether your findings related to fairness, error analysis, and causality (derived from other dashboard components) are a result of your dataset's distribution.
+- Decide in which areas to collect more data to mitigate errors arising from representation issues, label noise, feature noise, label bias, etc.
+
+## Next steps
+
+- Learn how to generate the Responsible AI dashboard via [CLIv2 and SDKv2](how-to-responsible-ai-dashboard-sdk-cli.md) or [studio UI](how-to-responsible-ai-dashboard-ui.md)
+- Learn how to generate a [Responsible AI scorecard](how-to-responsible-ai-scorecard.md) based on the insights observed in the Responsible AI dashboard.
machine-learning Concept Data Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-data-encryption.md
description: 'Learn how Azure Machine Learning computes and data stores provides
+
For more information on creating and using a deployment configuration, see the f
* [AciWebservice.deploy_configuration()](/python/api/azureml-core/azureml.core.webservice.aci.aciwebservice#deploy-configuration-cpu-cores-none--memory-gb-none--tags-none--properties-none--description-none--location-none--auth-enabled-none--ssl-enabled-none--enable-app-insights-none--ssl-cert-pem-file-none--ssl-key-pem-file-none--ssl-cname-none--dns-name-label-none--primary-key-none--secondary-key-none--collect-model-data-none--cmk-vault-base-url-none--cmk-key-name-none--cmk-key-version-none-) reference * [Where and how to deploy](how-to-deploy-and-where.md)
-* [Deploy a model to Azure Container Instances](how-to-deploy-azure-container-instance.md)
+ For more information on using a customer-managed key with ACI, see [Encrypt data with a customer-managed key](../container-instances/container-instances-encrypt-data.md#encrypt-data-with-a-customer-managed-key).
Each workspace has an associated system-assigned managed identity that has the s
* [Get data from a datastore](how-to-create-register-datasets.md) * [Connect to data](how-to-connect-data-ui.md) * [Train with datasets](how-to-train-with-datasets.md)
-* [Customer-managed keys](concept-customer-managed-keys.md).
+* [Customer-managed keys](concept-customer-managed-keys.md).
machine-learning Concept Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-data.md
Title: Secure data access in the cloud
+ Title: Data access
-description: Learn how to securely connect to your data storage on Azure with Azure Machine Learning datastores and datasets.
+description: Learn how to connect to your data storage on Azure with Azure Machine Learning.
Previously updated : 10/21/2021--
-# Customer intent: As an experienced Python developer, I need to securely access my data in my Azure storage solutions and use it to accomplish my machine learning tasks.
Last updated : 05/11/2022+
+#Customer intent: As an experienced Python developer, I need to securely access my data in my Azure storage solutions and use it to accomplish my machine learning tasks.
-# Secure data access in Azure Machine Learning
+# Data in Azure Machine Learning
+
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning developer platform you are using:"]
+> * [v1](./v1/concept-data.md)
+> * [v2 (current version)](concept-data.md)
Azure Machine Learning makes it easy to connect to your data in the cloud. It provides an abstraction layer over the underlying storage service, so you can securely access and work with your data without having to write code specific to your storage type. Azure Machine Learning also provides the following data capabilities: * Interoperability with Pandas and Spark DataFrames * Versioning and tracking of data lineage
-* Data labeling
-* Data drift monitoring
-
-## Data workflow
-
-When you're ready to use the data in your cloud-based storage solution, we recommend the following data delivery workflow. This workflow assumes you have an [Azure storage account](../storage/common/storage-account-create.md?tabs=azure-portal) and data in a cloud-based storage service in Azure.
-
-1. Create an [Azure Machine Learning datastore](#datastores) to store connection information to your Azure storage.
-
-2. From that datastore, create an [Azure Machine Learning dataset](#datasets) to point to a specific file(s) in your underlying storage.
-
-3. To use that dataset in your machine learning experiment you can either
- 1. Mount it to your experiment's compute target for model training.
-
- **OR**
+* Data labeling (V1 only for now)
- 1. Consume it directly in Azure Machine Learning solutions like, automated machine learning (automated ML) experiment runs, machine learning pipelines, or the [Azure Machine Learning designer](concept-designer.md).
+You can bring data to Azure Machine Learning:
-4. Create [dataset monitors](#drift) for your model output dataset to detect for data drift.
+* Directly from your local machine and URLs
-5. If data drift is detected, update your input dataset and retrain your model accordingly.
-
-The following diagram provides a visual demonstration of this recommended workflow.
-
-![Diagram shows the Azure Storage Service which flows into a datastore, which flows into a dataset. The dataset flows into model training, which flows into data drift, which flows back to dataset.](./media/concept-data/data-concept-diagram.svg)
+* From a cloud-based storage service in Azure, accessed by using your [Azure storage account](../storage/common/storage-account-create.md?tabs=azure-portal) credentials and an Azure Machine Learning datastore.
<a name="datastores"></a> ## Connect to storage with datastores
-Azure Machine Learning datastores securely keep the connection information to your data storage on Azure, so you don't have to code it in your scripts. [Register and create a datastore](how-to-access-data.md) to easily connect to your storage account, and access the data in your underlying storage service.
-
-Supported cloud-based storage services in Azure that can be registered as datastores:
-
-+ Azure Blob Container
-+ Azure File Share
-+ Azure Data Lake
-+ Azure Data Lake Gen2
-+ Azure SQL Database
-+ Azure Database for PostgreSQL
-+ Databricks File System
-+ Azure Database for MySQL
-
->[!TIP]
-> You can create datastores with credential-based authentication for accessing storage services, like a service principal or shared access signature (SAS) token. These credentials can be accessed by users who have *Reader* access to the workspace. <br><br>If this is a concern, [create a datastore that uses identity-based data access](how-to-identity-based-data-access.md) to connect to storage services.
-
-<a name="datasets"></a>
-## Reference data in storage with datasets
-
-Azure Machine Learning datasets aren't copies of your data. By creating a dataset, you create a reference to the data in its storage service, along with a copy of its metadata.
-
-Because datasets are lazily evaluated, and the data remains in its existing location, you
-
-* Incur no extra storage cost.
-* Don't risk unintentionally changing your original data sources.
-* Improve ML workflow performance speeds.
-
-To interact with your data in storage, [create a dataset](how-to-create-register-datasets.md) to package your data into a consumable object for machine learning tasks. Register the dataset to your workspace to share and reuse it across different experiments without data ingestion complexities.
-
-Datasets can be created from local files, public urls, [Azure Open Datasets](https://azure.microsoft.com/services/open-datasets/), or Azure storage services via datastores.
-
-There are 2 types of datasets:
-
-+ A [FileDataset](/python/api/azureml-core/azureml.data.file_dataset.filedataset) references single or multiple files in your datastores or public URLs. If your data is already cleansed and ready to use in training experiments, you can [download or mount files](how-to-train-with-datasets.md#mount-files-to-remote-compute-targets) referenced by FileDatasets to your compute target.
-
-+ A [TabularDataset](/python/api/azureml-core/azureml.data.tabulardataset) represents data in a tabular format by parsing the provided file or list of files. You can load a TabularDataset into a pandas or Spark DataFrame for further manipulation and cleansing. For a complete list of data formats you can create TabularDatasets from, see the [TabularDatasetFactory class](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory).
-
-Additional datasets capabilities can be found in the following documentation:
-
-+ [Version and track](how-to-version-track-datasets.md) dataset lineage.
-+ [Monitor your dataset](how-to-monitor-datasets.md) to help with data drift detection.
-
-## Work with your data
-
-With datasets, you can accomplish a number of machine learning tasks through seamless integration with Azure Machine Learning features.
+Azure Machine Learning datastores securely keep the connection information to your data storage on Azure, so you don't have to code it in your scripts.
-+ Create a [data labeling project](#label).
-+ Train machine learning models:
- + [automated ML experiments](how-to-use-automated-ml-for-ml-models.md)
- + the [designer](tutorial-designer-automobile-price-train-score.md#import-data)
- + [notebooks](how-to-train-with-datasets.md)
- + [Azure Machine Learning pipelines](./how-to-create-machine-learning-pipelines.md)
-+ Access datasets for scoring with [batch inference](./tutorial-pipeline-batch-scoring-classification.md) in [machine learning pipelines](./how-to-create-machine-learning-pipelines.md).
-+ Set up a dataset monitor for [data drift](#drift) detection.
+You can access your data and create datastores with:
+* [Credential-based data authentication](how-to-access-data.md), like a service principal or shared access signature (SAS) token. These credentials can be accessed by users who have *Reader* access to the workspace.
+* Identity-based data authentication to connect to storage services with your Azure Active Directory ID.
-<a name="label"></a>
+The following table summarizes which cloud-based storage services in Azure can be registered as datastores and what authentication type can be used to access them.
-## Label data with data labeling projects
+Supported storage service | Credential-based authentication | Identity-based authentication
+--|:-:|:-:|
+Azure Blob Container| ✓ | ✓|
+Azure File Share| ✓ | |
+Azure Data Lake Gen1 | ✓ | ✓|
+Azure Data Lake Gen2| ✓ | ✓|
-Labeling large amounts of data has often been a headache in machine learning projects. Those with a computer vision component, such as image classification or object detection, generally require thousands of images and corresponding labels.
-Azure Machine Learning gives you a central location to create, manage, and monitor labeling projects. Labeling projects help coordinate the data, labels, and team members, allowing you to more efficiently manage the labeling tasks. Currently supported tasks are image classification, either multi-label or multi-class, and object identification using bounded boxes.
+## Work with data
-Create an [image labeling project](how-to-create-image-labeling-projects.md) or [text labeling project](how-to-create-text-labeling-projects.md), and output a dataset for use in machine learning experiments.
+You can read in data from a datastore or directly from storage URIs.
-<a name="drift"></a>
+In Azure Machine Learning, there are three data types:
-## Monitor model performance with data drift
+Data type | Description | Example
+--|--|--|
+`uri_file` | Refers to a specific file | `https://<account_name>.blob.core.windows.net/<container_name>/path/file.csv`.
+`uri_folder`| Refers to a specific folder |`https://<account_name>.blob.core.windows.net/<container_name>/path`
+`mltable` |Defines tabular data for use in automated ML and parallel jobs| Schema and subsetting transforms
-In the context of machine learning, data drift is the change in model input data that leads to model performance degradation. It is one of the top reasons model accuracy degrades over time, thus monitoring data drift helps detect model performance issues.
+In the following example, the expectation is that a `uri_folder` is provided: to read the file, the training script creates a path that joins the folder with the file name. If you want to pass in just an individual file rather than the entire folder, you can use the `uri_file` type.
-See the [Create a dataset monitor](how-to-monitor-datasets.md) article, to learn more about how to detect and alert to data drift on new data in a dataset.
+```python
+import os
+import pandas as pd
+
+# args holds the job's parsed command-line inputs (for example, from argparse)
+file_name = os.path.join(args.input_folder, "MY_CSV_FILE.csv")
+df = pd.read_csv(file_name)
+```
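+
+As a hedged sketch (assuming the Azure ML Python SDK v2; the source folder, script, environment, and compute names are placeholders), a `uri_folder` input like the one above might be wired into a command job as follows:
+
+```python
+from azure.ai.ml import Input, command
+
+job = command(
+    code="./src",  # hypothetical folder containing train.py
+    command="python train.py --input_folder ${{inputs.input_folder}}",
+    inputs={
+        "input_folder": Input(
+            type="uri_folder",
+            path="https://<account_name>.blob.core.windows.net/<container_name>/path",
+        )
+    },
+    environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest",  # assumed curated environment
+    compute="cpu-cluster",  # hypothetical compute target
+)
+```
+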
## Next steps
-+ Create a dataset in Azure Machine Learning studio or with the Python SDK [using these steps.](how-to-create-register-datasets.md)
-+ Try out dataset training examples with our [sample notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/work-with-data/).
+* [Work with data using SDK v2](how-to-use-data.md)
machine-learning Concept Datastore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-datastore.md
+
+ Title: Azure Machine Learning datastores
+
+description: Learn how to securely connect to your data storage on Azure with Azure Machine Learning datastores.
+++++++ Last updated : 10/21/2021++
+# Customer intent: As an experienced Python developer, I need to securely access my data in my Azure storage solutions and use it to accomplish my machine learning tasks.
++
+# Azure Machine Learning datastores
+
+Supported cloud-based storage services in Azure Machine Learning include:
+
++ Azure Blob Container
++ Azure File Share
++ Azure Data Lake
++ Azure Data Lake Gen2
+
+Azure Machine Learning allows you to connect to data directly by using a storage URI, for example:
+
+- ```https://storageAccount.blob.core.windows.net/container/path/file.csv``` (Azure Blob Container)
+- ```abfss://container@storageAccount.dfs.core.windows.net/base/path/folder1``` (Azure Data Lake Gen2).
+
+Storage URIs use *identity-based* access that will prompt you for your Azure Active Directory token for data access authentication. This approach allows for data access management at the storage level and keeps credentials confidential.
+
+> [!NOTE]
+> When using Notebooks in Azure Machine Learning Studio, your Azure Active Directory token is automatically passed through to storage for data access authentication.
+
+Although storage URIs provide a convenient mechanism to access data, there may be cases where using an Azure Machine Learning *Datastore* is a better option:
+
+* **You need *credential-based* data access (for example: Service Principals, SAS Tokens, Account Name/Key).** Datastores are helpful because they keep the connection information to your data storage securely in an Azure Key Vault, so you don't have to code it in your scripts.
+* **You want team members to easily discover relevant datastores.** Datastores are registered to an Azure Machine Learning workspace, making it easier for your team members to find and discover them.
+
+ [Register and create a datastore](how-to-datastore.md) to easily connect to your storage account, and access the data in your underlying storage service.
+
+## Credential-based vs identity-based access
+
+Azure Machine Learning Datastores support both credential-based and identity-based access. In *credential-based* access, your authentication credentials are usually kept in a datastore, which is used to ensure you have permission to access the storage service. When these credentials are registered via datastores, any user with the workspace Reader role can retrieve them. That scale of access can be a security concern for some organizations. When you use *identity-based* data access, Azure Machine Learning prompts you for your Azure Active Directory token for data access authentication instead of keeping your credentials in the datastore. That approach allows for data access management at the storage level and keeps credentials confidential.
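+
+As a hedged sketch (assuming the Azure ML Python SDK v2; the datastore name, storage account, container, and key are placeholders), registering a credential-based blob datastore might look like:
+
+```python
+from azure.ai.ml import MLClient
+from azure.ai.ml.entities import AccountKeyConfiguration, AzureBlobDatastore
+from azure.identity import DefaultAzureCredential
+
+ml_client = MLClient(
+    DefaultAzureCredential(),
+    subscription_id="<subscription-id>",
+    resource_group_name="<resource-group>",
+    workspace_name="<workspace-name>",
+)
+
+# Credential-based datastore pointing at an existing blob container
+blob_datastore = AzureBlobDatastore(
+    name="my_blob_datastore",  # hypothetical datastore name
+    account_name="<storage-account>",
+    container_name="<container>",
+    credentials=AccountKeyConfiguration(account_key="<account-key>"),
+)
+ml_client.create_or_update(blob_datastore)
+```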
++
+## Next steps
+++ [How to create a datastore](how-to-datastore.md)
machine-learning Concept Designer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-designer.md
Last updated 10/21/2021-+ # What is Azure Machine Learning designer?
The designer uses your Azure Machine Learning [workspace](concept-workspace.md)
+ [Pipelines](#pipeline) + [Datasets](#datasets) + [Compute resources](#compute)
-+ [Registered models](concept-azure-machine-learning-architecture.md#models)
++ [Registered models](v1/concept-azure-machine-learning-architecture.md#models) + [Published pipelines](#publish) + [Real-time endpoints](#deploy)
Use a visual canvas to build an end-to-end machine learning workflow. Train, tes
+ [Publish](#publish) your pipelines to a REST **pipeline endpoint** to submit a new pipeline that runs with different parameters and datasets. + Publish a **training pipeline** to reuse a single pipeline to train multiple models while changing parameters and datasets. + Publish a **batch inference pipeline** to make predictions on new data by using a previously trained model.
-1. [Deploy](#deploy) a **real-time inference pipeline** to a real-time endpoint to make predictions on new data in real time.
++ [Deploy](#deploy) a **real-time inference pipeline** to an online endpoint to make predictions on new data in real time. ![Workflow diagram for training, batch inference, and real-time inference in the designer](./media/concept-designer/designer-workflow-diagram.png) ## Pipeline
-A [pipeline](concept-azure-machine-learning-architecture.md#ml-pipelines) consists of datasets and analytical components, which you connect. Pipelines have many uses: you can make a pipeline that trains a single model, or one that trains multiple models. You can create a pipeline that makes predictions in real time or in batch, or make a pipeline that only cleans data. Pipelines let you reuse your work and organize your projects.
+A [pipeline](v1/concept-azure-machine-learning-architecture.md#ml-pipelines) consists of datasets and analytical components, which you connect. Pipelines have many uses: you can make a pipeline that trains a single model, or one that trains multiple models. You can create a pipeline that makes predictions in real time or in batch, or make a pipeline that only cleans data. Pipelines let you reuse your work and organize your projects.
### Pipeline draft
When you're ready to run your pipeline draft, you submit a pipeline run.
Each time you run a pipeline, the configuration of the pipeline and its results are stored in your workspace as a **pipeline run**. You can go back to any pipeline run to inspect it for troubleshooting or auditing. **Clone** a pipeline run to create a new pipeline draft for you to edit.
-Pipeline runs are grouped into [experiments](concept-azure-machine-learning-architecture.md#experiments) to organize run history. You can set the experiment for every pipeline run.
+Pipeline runs are grouped into [experiments](v1/concept-azure-machine-learning-architecture.md#experiments) to organize run history. You can set the experiment for every pipeline run.
## Datasets
For some help navigating through the library of machine learning algorithms avai
## <a name="compute"></a> Compute resources
-Use compute resources from your workspace to run your pipeline and host your deployed models as real-time endpoints or pipeline endpoints (for batch inference). The supported compute targets are:
+Use compute resources from your workspace to run your pipeline and host your deployed models as online endpoints or pipeline endpoints (for batch inference). The supported compute targets are:
| Compute target | Training | Deployment | | - |:-:|:-:|
Compute targets are attached to your [Azure Machine Learning workspace](concept-
## Deploy
-To perform real-time inferencing, you must deploy a pipeline as a **real-time endpoint**. The real-time endpoint creates an interface between an external application and your scoring model. A call to a real-time endpoint returns prediction results to the application in real time. To make a call to a real-time endpoint, you pass the API key that was created when you deployed the endpoint. The endpoint is based on REST, a popular architecture choice for web programming projects.
+To perform real-time inferencing, you must deploy a pipeline as an [online endpoint](concept-endpoints.md#what-are-online-endpoints). The online endpoint creates an interface between an external application and your scoring model. A call to an online endpoint returns prediction results to the application in real time. To make a call to an online endpoint, you pass the API key that was created when you deployed the endpoint. The endpoint is based on REST, a popular architecture choice for web programming projects.
-Real-time endpoints must be deployed to an Azure Kubernetes Service cluster.
+Online endpoints must be deployed to an Azure Kubernetes Service cluster.
To learn how to deploy your model, see [Tutorial: Deploy a machine learning model with the designer](tutorial-designer-automobile-price-deploy.md).
To learn how to deploy your model, see [Tutorial: Deploy a machine learning mode
## Publish
-You can also publish a pipeline to a **pipeline endpoint**. Similar to a real-time endpoint, a pipeline endpoint lets you submit new pipeline runs from external applications using REST calls. However, you cannot send or receive data in real time using a pipeline endpoint.
+You can also publish a pipeline to a **pipeline endpoint**. Similar to an online endpoint, a pipeline endpoint lets you submit new pipeline runs from external applications using REST calls. However, you cannot send or receive data in real time using a pipeline endpoint.
Published pipelines are flexible, they can be used to train or retrain models, [perform batch inferencing](how-to-run-batch-predictions-designer.md), process new data, and much more. You can publish multiple pipelines to a single pipeline endpoint and specify which pipeline version to run.
The designer creates the same [PublishedPipeline](/python/api/azureml-pipeline-c
## Next steps * Learn the fundamentals of predictive analytics and machine learning with [Tutorial: Predict automobile price with the designer](tutorial-designer-automobile-price-train-score.md)
-* Learn how to modify existing [designer samples](samples-designer.md) to adapt them to your needs.
+* Learn how to modify existing [designer samples](samples-designer.md) to adapt them to your needs.
machine-learning Concept Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-endpoints.md
Title: What are endpoints (preview)?
+ Title: What are endpoints?
-description: Learn how Azure Machine Learning endpoints (preview) to simplify machine learning deployments.
+description: Learn how Azure Machine Learning endpoints to simplify machine learning deployments.
- Previously updated : 03/31/2022+ Last updated : 05/24/2022 #Customer intent: As an MLOps administrator, I want to understand what a managed endpoint is and why I need it.
-# What are Azure Machine Learning endpoints (preview)?
+# What are Azure Machine Learning endpoints?
+
-Use Azure Machine Learning endpoints (preview) to streamline model deployments for both real-time and batch inference deployments. Endpoints provide a unified interface to invoke and manage model deployments across compute types.
+Use Azure Machine Learning endpoints to streamline model deployments for both real-time and batch inference deployments. Endpoints provide a unified interface to invoke and manage model deployments across compute types.
In this article, you learn about: > [!div class="checklist"]
In this article, you learn about:
> * Kubernetes online endpoints > * Batch inference endpoints
-## What are endpoints and deployments (preview)?
+## What are endpoints and deployments?
-After you train a machine learning model, you need to deploy the model so that others can use it to do inferencing. In Azure Machine Learning, you can use **endpoints** (preview) and **deployments** (preview) to do so.
+After you train a machine learning model, you need to deploy the model so that others can use it to do inferencing. In Azure Machine Learning, you can use **endpoints** and **deployments** to do so.
An **endpoint** is an HTTPS endpoint that clients can call to receive the inferencing (scoring) output of a trained model. It provides: - Authentication using "key & token" based auth
A **deployment** is a set of resources required for hosting the model that does
A single endpoint can contain multiple deployments. Endpoints and deployments are independent Azure Resource Manager resources that appear in the Azure portal.
-Azure Machine Learning uses the concept of endpoints and deployments to implement different types of endpoints: [online endpoints](#what-are-online-endpoints-preview) and [batch endpoints](#what-are-batch-endpoints-preview).
+Azure Machine Learning uses the concept of endpoints and deployments to implement different types of endpoints: [online endpoints](#what-are-online-endpoints) and [batch endpoints](#what-are-batch-endpoints).
### Multiple developer interfaces
Create and manage batch and online endpoints with multiple developer tools:
- Azure portal (IT/Admin) - Support for CI/CD MLOps pipelines using the Azure CLI interface & REST/ARM interfaces
-## What are online endpoints (preview)?
+## What are online endpoints?
-**Online endpoints** (preview) are endpoints that are used for online (real-time) inferencing. Compared to **batch endpoints**, **online endpoints** contain **deployments** that are ready to receive data from clients and can send responses back in real time.
+**Online endpoints** are endpoints that are used for online (real-time) inferencing. Compared to **batch endpoints**, **online endpoints** contain **deployments** that are ready to receive data from clients and can send responses back in real time.
The following diagram shows an online endpoint that has two deployments, 'blue' and 'green'. The blue deployment uses VMs with a CPU SKU, and runs v1 of a model. The green deployment uses VMs with a GPU SKU, and uses v2 of the model. The endpoint is configured to route 90% of incoming traffic to the blue deployment, while green receives the remaining 10%. ### Online deployments requirements
Traffic allocation can be used to do safe rollout blue/green deployments by bala
> [!TIP] > A request can bypass the configured traffic load balancing by including an HTTP header of `azureml-model-deployment`. Set the header value to the name of the deployment you want the request to route to. ++
+Traffic to one deployment can also be mirrored (copied) to another deployment. Mirroring is useful when you want to test for things like response latency or error conditions without impacting live clients. For example, in a blue/green deployment, 100% of the traffic is routed to blue while 10% is mirrored to the green deployment. With mirroring, the results of the traffic to the green deployment aren't returned to the clients, but metrics and logs are collected. Mirror traffic functionality is a __preview__ feature.
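A minimal sketch of that traffic configuration with the `azure-ai-ml` (v2) Python SDK follows; the workspace details and deployment names are assumptions, and the `mirror_traffic` property reflects the preview capability:

```python
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient

# Assumes an existing workspace and an endpoint that already has 'blue' and 'green' deployments.
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

endpoint = ml_client.online_endpoints.get(name="my-endpoint")

# Send all live traffic to 'blue'...
endpoint.traffic = {"blue": 100, "green": 0}
# ...and mirror 10% of it to 'green' for shadow testing (responses aren't returned to clients).
endpoint.mirror_traffic = {"green": 10}

ml_client.online_endpoints.begin_create_or_update(endpoint).result()
```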
+ Learn how to [safely rollout to online endpoints](how-to-safely-rollout-managed-endpoints.md).
Learn how to [safely rollout to online endpoints](how-to-safely-rollout-managed-
All online endpoints integrate with Application Insights to monitor SLAs and diagnose issues.
-However [managed online endpoints](#managed-online-endpoints-vs-kubernetes-online-endpoints-preview) also include out-of-box integration with Azure Logs and Azure Metrics.
+However, [managed online endpoints](#managed-online-endpoints-vs-kubernetes-online-endpoints) also include out-of-box integration with Azure Logs and Azure Metrics.
### Security
However [managed online endpoints](#managed-online-endpoints-vs-kubernetes-onlin
Autoscale automatically runs the right amount of resources to handle the load on your application. Managed endpoints support autoscaling through integration with the [Azure monitor autoscale](../azure-monitor/autoscale/autoscale-overview.md) feature. You can configure metrics-based scaling (for instance, CPU utilization >70%), schedule-based scaling (for example, scaling rules for peak business hours), or a combination.

### Visual Studio Code debugging
Visual Studio Code enables you to interactively debug endpoints.
:::image type="content" source="media/concept-endpoints/visual-studio-code-full.png" alt-text="Screenshot of endpoint debugging in VSCode." lightbox="media/concept-endpoints/visual-studio-code-full.png" :::
-## Managed online endpoints vs Kubernetes online endpoints (preview)
+### Private endpoint support (preview)
+
+Optionally, you can secure communication with a managed online endpoint by using private endpoints. This functionality is currently in preview.
++
+You can configure security for inbound scoring requests and outbound communications with the workspace and other services separately. Inbound communications use the private endpoint of the Azure Machine Learning workspace. Outbound communications use private endpoints created per deployment.
+
+For more information, see [Secure online endpoints](how-to-secure-online-endpoint.md).
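As a sketch, the inbound and outbound settings map to flags on the endpoint and deployment in the v2 Python SDK; the flag names below reflect the preview, and the model reference is a placeholder:

```python
from azure.ai.ml.entities import ManagedOnlineEndpoint, ManagedOnlineDeployment

# Inbound: only accept scoring calls that arrive through the workspace's private endpoint.
endpoint = ManagedOnlineEndpoint(
    name="my-secure-endpoint",
    auth_mode="key",
    public_network_access="disabled",
)

# Outbound: the deployment reaches storage, the container registry, and the workspace
# over private endpoints created per deployment.
deployment = ManagedOnlineDeployment(
    name="blue",
    endpoint_name="my-secure-endpoint",
    model="azureml:my-model:1",          # assumed registered model
    instance_type="Standard_DS3_v2",
    instance_count=1,
    egress_public_network_access="disabled",
)
```

The objects are then created with `ml_client.online_endpoints.begin_create_or_update(endpoint)` and `ml_client.online_deployments.begin_create_or_update(deployment)`, assuming an authenticated `MLClient`.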
-There are two types of online endpoints: **managed online endpoints** (preview) and **Kubernetes online endpoints** (preview). Managed online endpoints help to deploy your ML models in a turnkey manner. Managed online endpoints work with powerful CPU and GPU machines in Azure in a scalable, fully managed way. Managed online endpoints take care of serving, scaling, securing, and monitoring your models, freeing you from the overhead of setting up and managing the underlying infrastructure. The main example in this doc uses managed online endpoints for deployment.
+## Managed online endpoints vs Kubernetes online endpoints
+
+There are two types of online endpoints: **managed online endpoints** and **Kubernetes online endpoints**. Managed online endpoints help to deploy your ML models in a turnkey manner. Managed online endpoints work with powerful CPU and GPU machines in Azure in a scalable, fully managed way. Managed online endpoints take care of serving, scaling, securing, and monitoring your models, freeing you from the overhead of setting up and managing the underlying infrastructure. The main example in this doc uses managed online endpoints for deployment.
The following table highlights the key differences between managed online endpoints and Kubernetes online endpoints.
The following table highlights the key differences between managed online endpoi
| **Out-of-box logging** | [Azure Logs and Log Analytics at endpoint level](how-to-deploy-managed-online-endpoints.md#optional-integrate-with-log-analytics) | Supported |
| **Application Insights** | Supported | Supported |
| **Managed identity** | [Supported](how-to-access-resources-from-endpoints-managed-identities.md) | Supported |
-| **Virtual Network (VNET)** | Not supported yet (we're working on it) | Supported |
+| **Virtual Network (VNET)** | [Supported](how-to-secure-online-endpoint.md) (preview) | Supported |
| **View costs** | [Endpoint and deployment level](how-to-view-online-endpoints-costs.md) | Cluster level |
+| **Mirrored traffic** | [Supported](how-to-safely-rollout-managed-endpoints.md#test-the-deployment-with-mirrored-traffic-preview) | Unsupported |
### Managed online endpoints
Managed online endpoints can help streamline your deployment process. Managed on
- Monitor model availability, performance, and SLA using [native integration with Azure Monitor](how-to-monitor-online-endpoints.md). - Debug deployments using the logs and native integration with Azure Log Analytics.
- :::image type="content" source="media/concept-endpoints/log-analytics-and-azure-monitor.png" alt-text="Screenshot showing Azure Monitor graph of endpoint latency":::
+ :::image type="content" source="media/concept-endpoints/log-analytics-and-azure-monitor.png" alt-text="Screenshot showing Azure Monitor graph of endpoint latency.":::
- View costs - Managed online endpoints let you [monitor cost at the endpoint and deployment level](how-to-view-online-endpoints-costs.md)
- :::image type="content" source="media/concept-endpoints/endpoint-deployment-costs.png" alt-text="Screenshot cost chart of an endpoint and deployment":::
+ :::image type="content" source="media/concept-endpoints/endpoint-deployment-costs.png" alt-text="Screenshot cost chart of an endpoint and deployment.":::
+
+ > [!NOTE]
+ > Managed online endpoints are based on Azure Machine Learning compute. When using a managed online endpoint, you pay for the compute and networking charges. There is no additional surcharge.
+ >
+ > If you use a virtual network and secure outbound (egress) traffic from the managed online endpoint, there is an additional cost. For egress, three private endpoints are created _per deployment_ for the managed online endpoint. These are used to communicate with the default storage account, Azure Container Registry, and workspace. Additional networking charges may apply. For more information on pricing, see the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/).
For a step-by-step tutorial, see [How to deploy online endpoints](how-to-deploy-managed-online-endpoints.md).
-## What are batch endpoints (preview)?
+## What are batch endpoints?
-**Batch endpoints** (preview) are endpoints that are used to do batch inferencing on large volumes of data over a period of time. **Batch endpoints** receive pointers to data and run jobs asynchronously to process the data in parallel on compute clusters. Batch endpoints store outputs to a data store for further analysis.
+**Batch endpoints** are endpoints that are used to do batch inferencing on large volumes of data over a period of time. **Batch endpoints** receive pointers to data and run jobs asynchronously to process the data in parallel on compute clusters. Batch endpoints store outputs to a data store for further analysis.
:::image type="content" source="media/concept-endpoints/batch-endpoint.png" alt-text="Diagram showing that a single batch endpoint may route requests to multiple deployments, one of which is the default.":::
To create a batch deployment, you need to specify the following elements:
- Scoring script - code needed to do the scoring/inferencing
- Environment - a Docker image with Conda dependencies
-If you are deploying [MLFlow models](how-to-train-cli.md#model-tracking-with-mlflow), there's no need to provide a scoring script and execution environment, as both are autogenerated.
+If you're deploying [MLFlow models](how-to-train-cli.md#model-tracking-with-mlflow), there's no need to provide a scoring script and execution environment, as both are autogenerated.
Learn how to [deploy and use batch endpoints with the Azure CLI](how-to-use-batch-endpoint.md) and the [studio web portal](how-to-use-batch-endpoints-studio.md)
You can [override compute resource settings](how-to-use-batch-endpoint.md#config
You can use the following options for input data when invoking a batch endpoint:
-- Azure Machine Learning registered datasets - for more information, see [Create Azure Machine Learning datasets](how-to-train-with-datasets.md)
-- Cloud data - Either a public data URI or data path in datastore. For more information, see [Connect to data with the Azure Machine Learning studio](how-to-connect-data-ui.md)
-- Data stored locally
+- Cloud data - Either a path on an Azure Machine Learning registered datastore, a reference to an Azure Machine Learning registered V2 data asset, or a public URI. For more information, see [Connect to data with the Azure Machine Learning studio](how-to-connect-data-ui.md)
+- Data stored locally - local data is automatically uploaded to the Azure ML registered datastore and passed to the batch endpoint.
+
+> [!NOTE]
+> - If you're using an existing V1 FileDataset for a batch endpoint, we recommend migrating it to a V2 data asset and referring to it directly when invoking the batch endpoint. Currently, only data assets of type `uri_folder` or `uri_file` are supported. Batch endpoints created with the GA CLIv2 (2.4.0 and newer) or the GA REST API (2022-05-01 and newer) won't support V1 Datasets.
+> - You can also extract the URI or datastore path from a V1 FileDataset by using the `az ml dataset show` command with the `--query` parameter, and use that information to invoke the batch endpoint.
+> - While batch endpoints created with earlier APIs will continue to support V1 FileDatasets, we'll add further V2 data asset support with the latest API versions for even more usability and flexibility. For more information on V2 data assets, see [Work with data using SDK v2 (preview)](how-to-use-data.md). For more information on the new V2 experience, see [What is v2](concept-v2.md).
+
+For more information on supported input options, see [Batch scoring with batch endpoint](how-to-use-batch-endpoint.md#invoke-the-batch-endpoint-with-different-input-options).
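For orientation, invoking a batch endpoint with a cloud folder as input might look like the sketch below; `ml_client` is assumed to be an authenticated `MLClient`, the endpoint name and path are placeholders, and the exact `invoke` parameter names can differ between `azure-ai-ml` versions:

```python
from azure.ai.ml import Input

# Score every file under a cloud folder; the path could also point to a registered V2 data asset.
job = ml_client.batch_endpoints.invoke(
    endpoint_name="my-batch-endpoint",
    input=Input(type="uri_folder", path="https://<account>.blob.core.windows.net/<container>/<folder>"),
)

# The call returns a batch scoring job that runs asynchronously on the compute cluster.
ml_client.jobs.stream(job.name)
```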
+
Specify the storage output location to any datastore and path. By default, batch endpoints store their output to the workspace's default blob store, organized by the Job Name (a system-generated GUID).

### Security

- Authentication: Azure Active Directory Tokens
-- SSL by default for endpoint invocation
+- SSL: enabled by default for endpoint invocation
+- VNET support: Batch endpoints support ingress protection. A batch endpoint with ingress protection will accept scoring requests only from hosts inside a virtual network but not from the public internet. A batch endpoint that is created in a private-link enabled workspace will have ingress protection. To create a private-link enabled workspace, see [Create a secure workspace](tutorial-create-secure-workspace.md).
## Next steps

- [How to deploy online endpoints with the Azure CLI](how-to-deploy-managed-online-endpoints.md)
- [How to deploy batch endpoints with the Azure CLI](how-to-use-batch-endpoint.md)
- [How to use online endpoints with the studio](how-to-use-managed-online-endpoint-studio.md)
-- [Deploy models with REST (preview)](how-to-deploy-with-rest.md)
+- [Deploy models with REST](how-to-deploy-with-rest.md)
- [How to monitor managed online endpoints](how-to-monitor-online-endpoints.md)
- [How to view managed online endpoint costs](how-to-view-online-endpoints-costs.md)
-- [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints-preview)
+- [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints)
machine-learning Concept Enterprise Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-enterprise-security.md
description: 'Securely use Azure Machine Learning: authentication, authorization
+
Each workspace has an associated system-assigned [managed identity](../active-di
The system-assigned managed identity is used for internal service-to-service authentication between Azure Machine Learning and other Azure resources. The identity token is not accessible to users and cannot be used by them to gain access to these resources. Users can only access the resources through [Azure Machine Learning control and data plane APIs](how-to-assign-roles.md), if they have sufficient RBAC permissions.
-The managed identity needs Contributor permissions on the resource group containing the workspace in order to provision the associated resources, and to [deploy Azure Container Instances for web service endpoints](how-to-deploy-azure-container-instance.md).
+The managed identity needs Contributor permissions on the resource group containing the workspace in order to provision the associated resources, and to [deploy Azure Container Instances for web service endpoints](v1/how-to-deploy-azure-container-instance.md).
We don't recommend that admins revoke the access of the managed identity to the resources mentioned in the preceding table. You can restore access by using the [resync keys operation](how-to-change-storage-access-key.md).
For more information, see the following documents:
* [Virtual network isolation and privacy overview](how-to-network-security-overview.md) * [Secure workspace resources](how-to-secure-workspace-vnet.md) * [Secure training environment](how-to-secure-training-vnet.md)
-* [Secure inference environment](how-to-secure-inferencing-vnet.md)
+* For securing inference, see the following documents:
+ * If using CLI v1 or SDK v1 - [Secure inference environment](how-to-secure-inferencing-vnet.md)
+ * If using CLI v2 or SDK v2 - [Network isolation for managed online endpoints](how-to-secure-online-endpoint.md)
* [Use studio in a secured virtual network](how-to-enable-studio-virtual-network.md) * [Use custom DNS](how-to-custom-dns.md) * [Configure firewall](how-to-access-azureml-behind-firewall.md)
machine-learning Concept Error Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-error-analysis.md
+
+ Title: Assess errors in ML models
+
+description: Assess model error distributions in different cohorts of your dataset with the Responsible AI dashboard's integration of Error Analysis.
++++++ Last updated : 05/10/2022++
+# Assess errors in ML models (preview)
+
+One of the most apparent challenges with current model debugging practices is using aggregate metrics to score models on a benchmark. Model accuracy may not be uniform across subgroups of data, and there might exist input cohorts for which the model fails more often. The direct consequences of these failures are a lack of reliability and safety, unfairness, and a loss of trust in machine learning altogether.
++
+Error Analysis moves away from aggregate accuracy metrics, exposes the distribution of errors to developers in a transparent way, and enables them to identify & diagnose errors efficiently.
+
+The Error Analysis component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md) provides machine learning practitioners with a deeper understanding of model failure distribution and assists them with quickly identifying erroneous cohorts of data. It contributes to the "identify" stage of the model lifecycle workflow through a decision tree that reveals cohorts with high error rates and a heatmap that visualizes how a few input features impact the error rate across cohorts. Discrepancies in error might occur when the system underperforms for specific demographic groups or infrequently observed input cohorts in the training data.
+
+The capabilities of this component are founded on [Error Analysis](https://erroranalysis.ai/) capabilities for generating model error profiles.
+
+Use Error Analysis when you need to:
+
+- Gain a deep understanding of how model failures are distributed across a given dataset and across several input and feature dimensions.
+- Break down the aggregate performance metrics to automatically discover erroneous cohorts and take targeted mitigation steps.
+
+## How are error analyses generated
+
+Error Analysis identifies the cohorts of data with a higher error rate versus the overall benchmark error rate. The dashboard allows for error exploration by using either a decision tree or a heatmap guided by errors.
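For context, the same error profiles can be generated locally with the open-source `responsibleai` tooling that this component builds on. The sketch below is illustrative only and assumes a fitted scikit-learn-style classifier and pandas DataFrames with a `label` column:

```python
from responsibleai import RAIInsights
from raiwidgets import ResponsibleAIDashboard

# model: any fitted estimator with predict()/predict_proba();
# train_df / test_df: pandas DataFrames that include the target column.
rai_insights = RAIInsights(
    model=model,
    train=train_df,
    test=test_df,
    target_column="label",
    task_type="classification",
)

# Request the error analysis component (decision tree and heat map views).
rai_insights.error_analysis.add()
rai_insights.compute()

# Render the dashboard locally, for example in a notebook.
ResponsibleAIDashboard(rai_insights)
```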
+
+## Error tree
+
+Often, error patterns may be complex and involve more than one or two features. Therefore, it may be difficult for developers to explore all possible combinations of features to discover hidden data pockets with critical failure. To alleviate the burden, the binary tree visualization automatically partitions the benchmark data into interpretable subgroups, which have unexpectedly high or low error rates. In other words, the tree uses the input features to maximally separate model error from success. For each node defining a data subgroup, users can investigate the following information:
+
+- **Error rate**: the portion of instances in the node for which the model is incorrect. This is shown through the intensity of the red color.
+- **Error coverage**: the portion of all errors that fall into the node. This is shown through the fill rate of the node.
+- **Data representation**: the number of instances in the node. This is shown through the thickness of the incoming edge to the node, along with the actual total number of instances in the node.
+
+## Error Heatmap
+
+The view slices the data based on a one- or two-dimensional grid of input features. Users can choose the input features of interest for analysis. The heatmap visualizes cells with higher error in a darker red color to bring the user's attention to regions with high error discrepancy. This is especially beneficial when the error themes differ across partitions, which happens frequently in practice. In this error identification view, the analysis is highly guided by the users and their knowledge or hypotheses of what features might be most important for understanding failure.
+
+## Next steps
+
+- Learn how to generate the Responsible AI dashboard via [CLIv2 and SDKv2](how-to-responsible-ai-dashboard-sdk-cli.md) or [studio UI](how-to-responsible-ai-dashboard-ui.md)
+- Learn how to generate a [Responsible AI scorecard](how-to-responsible-ai-scorecard.md) based on the insights observed in the Responsible AI dashboard.
machine-learning Concept Ml Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-ml-pipelines.md
-- Previously updated : 01/15/2022-++ Last updated : 05/10/2022+ # What are Azure Machine Learning pipelines?
-In this article, you learn how a machine learning pipeline helps you build, optimize, and manage your machine learning workflow.
-<a name="compare"></a>
-## Which Azure pipeline technology should I use?
-The Azure cloud provides several types of pipeline, each with a different purpose. The following table lists the different pipelines and what they're used for:
+An Azure Machine Learning pipeline is an independently executable workflow of a complete machine learning task. An Azure Machine Learning pipeline helps to standardize the best practices of producing a machine learning model, enables the team to execute at scale, and improves the model building efficiency.
-| Scenario | Primary persona | Azure offering | OSS offering | Canonical pipe | Strengths |
-| -- | | -- | | -- | |
-| Model orchestration (Machine learning) | Data scientist | Azure Machine Learning Pipelines | Kubeflow Pipelines | Data -> Model | Distribution, caching, code-first, reuse |
-| Data orchestration (Data prep) | Data engineer | [Azure Data Factory pipelines](../data-factory/concepts-pipelines-activities.md) | Apache Airflow | Data -> Data | Strongly typed movement, data-centric activities |
-| Code & app orchestration (CI/CD) | App Developer / Ops | [Azure Pipelines](https://azure.microsoft.com/services/devops/pipelines/) | Jenkins | Code + Model -> App/Service | Most open and flexible activity support, approval queues, phases with gating |
+## Why are Azure Machine Learning pipelines needed?
+
+The core of a machine learning pipeline is to split a complete machine learning task into a multistep workflow. Each step is a manageable component that can be developed, optimized, configured, and automated individually. Steps are connected through well-defined interfaces. The Azure Machine Learning pipeline service automatically orchestrates all the dependencies between pipeline steps. This modular approach brings two key benefits:
+- [Standardize the machine learning operations (MLOps) practice and support scalable team collaboration](#standardize-the-mlops-practice-and-support-scalable-team-collaboration)
+- [Training efficiency and cost reduction](#training-efficiency-and-cost-reduction)
-## What can machine learning pipelines do?
+### Standardize the MLOps practice and support scalable team collaboration
-An Azure Machine Learning pipeline is an independently executable workflow of a complete machine learning task. Subtasks are encapsulated as a series of steps within the pipeline. An Azure Machine Learning pipeline can be as simple as one that calls a Python script, so _may_ do just about anything. Pipelines _should_ focus on machine learning tasks such as:
+Machine learning operations (MLOps) automates the process of building machine learning models and taking them to production. This is a complex process that usually requires collaboration from different teams with different skills. A well-defined machine learning pipeline can abstract this complex process into a multistep workflow, mapping each step to a specific task so that each team can work independently.
-+ Data preparation
-+ Training configuration
-+ Efficient training and validation
-+ Repeatable deployments
+For example, a typical machine learning project includes the steps of data collection, data preparation, model training, model evaluation, and model deployment. Usually, data engineers concentrate on the data steps, data scientists spend most of their time on model training and evaluation, and machine learning engineers focus on model deployment and automation of the entire workflow. By using a machine learning pipeline, each team only needs to work on building its own steps. The best way to build steps is with [Azure Machine Learning components](concept-component.md): a component is a self-contained piece of code that does one step in a machine learning pipeline. All the steps built by different users are finally integrated into one workflow through the pipeline definition. The pipeline is a collaboration tool for everyone in the project. The process of defining a pipeline and all its steps can be standardized by each company's preferred DevOps practice. The pipeline can be further versioned and automated. If the ML projects are described as pipelines, then the best MLOps practice is already applied.
-Time-consuming steps can be done only when their input changes. A change to the training script may be run without redoing the data loading and preparation steps. Separate steps can use different compute type/sizes for each steps. Independent steps allow multiple data scientists to work on the same pipeline at the same time without over-taxing compute resources.
+### Training efficiency and cost reduction
-## Key advantages
+Besides being the tool to put MLOps into practice, the machine learning pipeline also improves the efficiency of large-model training and reduces cost. Take modern natural language model training as an example: it requires pre-processing large amounts of data and GPU-intensive transformer model training, and each training run takes hours to days. While the model is being built, the data scientist wants to test different training code or hyperparameters and run the training many times to get the best model performance. For most of these runs, there are usually only small changes from one training to the next, so re-running the full workflow from data processing to model training every time is a significant waste. A machine learning pipeline can automatically calculate which steps' results are unchanged and reuse the outputs from previous training. Additionally, the machine learning pipeline supports running each step on different compute resources: the memory-heavy data processing work can run on high-memory CPU machines, while the compute-intensive training can run on expensive GPU machines. By properly choosing which step runs on which type of machine, the training cost can be significantly reduced.
-The key advantages of using pipelines for your machine learning workflows are:
+## Getting started best practices
-|Key advantage|Description|
-|:-:|--|
-|**Unattended&nbsp;runs**|Schedule steps to run in parallel or in sequence in a reliable and unattended manner. Data preparation and modeling can last days or weeks, and pipelines allow you to focus on other tasks while the process is running. |
-|**Heterogenous compute**|Use multiple pipelines that are reliably coordinated across heterogeneous and scalable compute resources and storage locations. Make efficient use of available compute resources by running individual pipeline steps on different compute targets, such as HDInsight, GPU Data Science VMs, and Databricks.|
-|**Reusability**|Create pipeline templates for specific scenarios, such as retraining and batch-scoring. Trigger published pipelines from external systems via simple REST calls.|
-|**Tracking and versioning**|Instead of manually tracking data and result paths as you iterate, use the pipelines SDK to explicitly name and version your data sources, inputs, and outputs. You can also manage scripts and data separately for increased productivity.|
-| **Modularity** | Separating areas of concerns and isolating changes allows software to evolve at a faster rate with higher quality. |
-|**Collaboration**|Pipelines allow data scientists to collaborate across all areas of the machine learning design process, while being able to concurrently work on pipeline steps.|
+Depending on what a machine learning project already has, the starting point of building a machine learning pipeline may vary. There are a few typical approaches to building a pipeline.
-### Analyzing dependencies
+The first approach usually applies to a team that hasn't used pipelines before and wants to take advantage of pipeline benefits such as MLOps. In this situation, data scientists typically have developed some machine learning models in their local environment using their favorite tools. Machine learning engineers need to take the data scientists' output into production. The work involves cleaning up unnecessary code from the original notebook or Python code, changing the training input from local data to parameterized values, splitting the training code into multiple steps as needed, performing unit tests on each step, and finally wrapping all steps into a pipeline.
-The dependency analysis in Azure Machine Learning pipelines is more sophisticated than simple timestamps. Every step may run in a different hardware and software environment. Azure Machine Learning automatically orchestrates all of the dependencies between pipeline steps. This orchestration might include spinning up and down Docker images, attaching and detaching compute resources, and moving data between the steps in a consistent and automatic manner.
+Once teams get familiar with pipelines and want to do more machine learning projects using them, they'll find the first approach hard to scale. The second approach is to set up a few pipeline templates, each of which solves one specific machine learning problem. A template predefines the pipeline structure, including how many steps there are, each step's inputs and outputs, and their connectivity. To start a new machine learning project, the team first forks a template repo. The team leader then assigns each member the step they need to work on. The data scientists and data engineers do their regular work. When they're happy with their results, they structure their code to fit the pre-defined steps. Once the structured code is checked in, the pipeline can be executed or automated. If there's any change, each member only needs to work on their piece of code without touching the rest of the pipeline code.
-### Coordinating the steps involved
+Once a team has built a collection of machine learning pipelines and reusable components, they can start to build new pipelines by cloning a previous pipeline or tying existing reusable components together. At this stage, the team's overall productivity improves significantly.
-When you create and run a `Pipeline` object, the following high-level steps occur:
+Azure Machine Learning offers different methods to build a pipeline. For users who are familiar with DevOps practices, we recommend using the [CLI](how-to-create-component-pipelines-cli.md). For data scientists who are familiar with Python, we recommend writing pipelines with the [Azure ML SDK](how-to-create-machine-learning-pipelines.md). Users who prefer a UI can use the [designer to build pipelines by using registered components](how-to-create-component-pipelines-ui.md).
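As an illustrative sketch, a v2 SDK pipeline wires component definitions into a graph roughly as follows; the component YAML files, compute name, and data path are placeholders, and the `load_component` argument name has varied across `azure-ai-ml` versions:

```python
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient, Input, load_component
from azure.ai.ml.dsl import pipeline

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# Each step is a self-contained component defined in its own YAML file.
prep = load_component(source="prep_data.yml")
train = load_component(source="train_model.yml")

@pipeline(default_compute="cpu-cluster")
def training_pipeline(raw_data):
    prep_step = prep(input_data=raw_data)
    train_step = train(training_data=prep_step.outputs.prepared_data)
    return {"trained_model": train_step.outputs.model_output}

job = training_pipeline(
    raw_data=Input(type="uri_folder", path="azureml://datastores/workspaceblobstore/paths/raw/")
)
ml_client.jobs.create_or_update(job, experiment_name="pipeline-demo")
```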
-+ For each step, the service calculates requirements for:
- + Hardware compute resources
- + OS resources (Docker image(s))
- + Software resources (Conda / virtualenv dependencies)
- + Data inputs
-+ The service determines the dependencies between steps, resulting in a dynamic execution graph
-+ When each node in the execution graph runs:
- + The service configures the necessary hardware and software environment (perhaps reusing existing resources)
- + The step runs, providing logging and monitoring information to its containing `Experiment` object
- + When the step completes, its outputs are prepared as inputs to the next step and/or written to storage
- + Resources that are no longer needed are finalized and detached
+<a name="compare"></a>
+## Which Azure pipeline technology should I use?
+
+The Azure cloud provides several types of pipeline, each with a different purpose. The following table lists the different pipelines and what they're used for:
-![Pipeline steps](./media/concept-ml-pipelines/run_an_experiment_as_a_pipeline.png)
+| Scenario | Primary persona | Azure offering | OSS offering | Canonical pipe | Strengths |
+| -- | | -- | | -- | |
+| Model orchestration (Machine learning) | Data scientist | Azure Machine Learning Pipelines | Kubeflow Pipelines | Data -> Model | Distribution, caching, code-first, reuse |
+| Data orchestration (Data prep) | Data engineer | [Azure Data Factory pipelines](../data-factory/concepts-pipelines-activities.md) | Apache Airflow | Data -> Data | Strongly typed movement, data-centric activities |
+| Code & app orchestration (CI/CD) | App Developer / Ops | [Azure Pipelines](https://azure.microsoft.com/services/devops/pipelines/) | Jenkins | Code + Model -> App/Service | Most open and flexible activity support, approval queues, phases with gating |
## Next steps
-Azure Machine Learning pipelines are a powerful facility that begins delivering value in the early development stages.
+Azure Machine Learning pipelines are a powerful facility that begins delivering value in the early development stages.
-+ [Define pipelines with the Azure CLI](./how-to-train-cli.md#hello-pipelines)
-+ [Define pipelines with the Azure SDK](./how-to-create-machine-learning-pipelines.md)
-+ [Define pipelines with Designer](./tutorial-designer-automobile-price-train-score.md)
-+ See the SDK reference docs for [pipeline core](/python/api/azureml-pipeline-core/) and [pipeline steps](/python/api/azureml-pipeline-steps/).
-+ Try out example Jupyter notebooks showcasing [Azure Machine Learning pipelines](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/machine-learning-pipelines). Learn how to [run notebooks to explore this service](samples-notebooks.md).
++ [Define pipelines with the Azure ML CLI v2](./how-to-create-component-pipelines-cli.md)
++ [Define pipelines with the Azure ML SDK v2](./how-to-create-component-pipeline-python.md)
++ [Define pipelines with Designer](./how-to-create-component-pipelines-ui.md)
++ Try out [CLI v2 pipeline example](https://github.com/Azure/azureml-examples/tree/sdk-preview/cli/jobs/pipelines-with-components)
++ Try out [Python SDK v2 pipeline example](https://github.com/Azure/azureml-examples/tree/sdk-preview/sdk/jobs/pipelines)
machine-learning Concept Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-mlflow.md
Title: MLflow and Azure Machine Learning
-description: Learn about MLflow with Azure Machine Learning to log metrics and artifacts from ML models, and deploy your ML models as a web service.
+description: Learn how Azure Machine Learning uses MLflow to log metrics and artifacts from ML models and to deploy your ML models to an endpoint.
--++ Previously updated : 10/21/2021 Last updated : 04/15/2022 -+ # MLflow and Azure Machine Learning
-[MLflow](https://www.mlflow.org) is an open-source library for managing the life cycle of your machine learning experiments. MLflow's tracking URI and logging API, collectively known as [MLflow Tracking](https://mlflow.org/docs/latest/quickstart.html#using-the-tracking-api) is a component of MLflow that logs and tracks your training run metrics and model artifacts, no matter your experiment's environment--locally on your computer, on a remote compute target, a virtual machine, or an Azure Databricks cluster.
-Together, MLflow Tracking and Azure Machine learning allow you to track an experiment's run metrics and store model artifacts in your Azure Machine Learning workspace. That experiment could've been run locally on your computer, on a remote compute target, a virtual machine, or an [Azure Databricks cluster](how-to-use-mlflow-azure-databricks.md).
-## Compare MLflow and Azure Machine Learning clients
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning developer platform you are using:"]
+> * [v1](v1/concept-mlflow-v1.md)
+> * [v2 (current version)](concept-mlflow.md)
- The following table summarizes the different clients that can use Azure Machine Learning, and their respective function capabilities.
+Azure Machine Learning only uses MLflow Tracking for metric logging and artifact storage for your experiments, whether you created the experiment via the Azure Machine Learning Python SDK, the Azure Machine Learning CLI, or the Azure Machine Learning studio.
- MLflow Tracking offers metric logging and artifact storage functionalities that are only otherwise available via the [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro).
-
-| Capability | MLflow Tracking & Deployment | Azure Machine Learning Python SDK | Azure Machine Learning CLI | Azure Machine Learning studio|
-||||||
-| Manage workspace | | Γ£ô | Γ£ô | Γ£ô |
-| Use data stores | | Γ£ô | Γ£ô | |
-| Log metrics | Γ£ô | Γ£ô | | |
-| Upload artifacts | Γ£ô | Γ£ô | | |
-| View metrics | Γ£ô | Γ£ô | Γ£ô | Γ£ô |
-| Manage compute | | Γ£ô | Γ£ô | Γ£ô |
-| Deploy models | Γ£ô | Γ£ô | Γ£ô | Γ£ô |
-|Monitor model performance||Γ£ô| | |
-| Detect data drift | | Γ£ô | | Γ£ô |
+> [!NOTE]
+> Unlike the Azure Machine Learning SDK v1, there is no logging functionality in the SDK v2 (preview), and it is recommended to use MLflow for logging and tracking.
+[MLflow](https://www.mlflow.org) is an open-source library for managing the lifecycle of your machine learning experiments. MLflow's tracking URI and logging API, collectively known as [MLflow Tracking](https://mlflow.org/docs/latest/quickstart.html#using-the-tracking-api), are a component of MLflow that logs and tracks your training run metrics and model artifacts, no matter your experiment's environment: locally on your computer, on a remote compute target, on a virtual machine, or on an Azure Machine Learning compute instance.
## Track experiments
-With MLflow Tracking you can connect Azure Machine Learning as the backend of your MLflow experiments. By doing so, you can do the following tasks,
+With MLflow Tracking you can connect Azure Machine Learning as the backend of your MLflow experiments. By doing so, you can:
+++ Track and log experiment metrics and artifacts in your [Azure Machine Learning workspace](./concept-azure-machine-learning-v2.md#workspace). If you already use MLflow Tracking for your experiments, the workspace provides a centralized, secure, and scalable location to store training metrics and models. Learn more at [Track ML models with MLflow and Azure Machine Learning CLI v2](how-to-use-mlflow-cli-runs.md).
-+ Track and log experiment metrics and artifacts in your [Azure Machine Learning workspace](./concept-azure-machine-learning-architecture.md#workspace). If you already use MLflow Tracking for your experiments, the workspace provides a centralized, secure, and scalable location to store training metrics and models. Learn more at [Track ML models with MLflow and Azure Machine Learning](how-to-use-mlflow.md).
++ Manage models in MLflow or the Azure Machine Learning model registry.
-+ Track and manage models in MLflow and Azure Machine Learning model registry.
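A small sketch of that setup: point MLflow at the workspace tracking URI and log as usual. The workspace details below are placeholders, and fetching the URI through the v2 `MLClient` is one of several options:

```python
import mlflow
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# Route MLflow runs, metrics, and artifacts to the Azure Machine Learning workspace.
tracking_uri = ml_client.workspaces.get(ml_client.workspace_name).mlflow_tracking_uri
mlflow.set_tracking_uri(tracking_uri)
mlflow.set_experiment("my-experiment")

with mlflow.start_run():
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_metric("accuracy", 0.91)
    mlflow.log_artifact("confusion_matrix.png")  # any local file
```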
+## Deploy MLflow experiments
-+ [Track Azure Databricks training runs](how-to-use-mlflow-azure-databricks.md).
+You can [deploy MLflow models to an online endpoint](how-to-deploy-mlflow-models-online-endpoints.md) so you can apply Azure Machine Learning's model management capabilities and no-code deployment offering.
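A hedged sketch of such a no-code deployment with the v2 SDK follows; it assumes an authenticated `MLClient` and an MLflow-format model already registered in the workspace:

```python
from azure.ai.ml.entities import ManagedOnlineEndpoint, ManagedOnlineDeployment

endpoint = ManagedOnlineEndpoint(name="mlflow-endpoint", auth_mode="key")
ml_client.online_endpoints.begin_create_or_update(endpoint).result()

# No scoring script or environment is supplied: both are inferred from the MLflow model format.
deployment = ManagedOnlineDeployment(
    name="default",
    endpoint_name="mlflow-endpoint",
    model="azureml:my-mlflow-model:1",   # assumed registered MLflow model
    instance_type="Standard_DS3_v2",
    instance_count=1,
)
ml_client.online_deployments.begin_create_or_update(deployment).result()
```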
## Train MLflow projects (preview)
You can use MLflow's tracking URI and logging API, collectively known as MLflow
Learn more at [Train ML models with MLflow projects and Azure Machine Learning (preview)](how-to-train-mlflow-projects.md).
-## Deploy MLflow experiments
-
-You can [deploy your MLflow model as an Azure web service](how-to-deploy-mlflow-models.md), so you can leverage and apply Azure Machine Learning's model management and data drift detection capabilities to your production models.
- ## Next steps
-* [Track ML models with MLflow and Azure Machine Learning](how-to-use-mlflow.md).
-* [Train ML models with MLflow projects and Azure Machine Learning (preview)](how-to-train-mlflow-projects.md).
-* [Track Azure Databricks runs with MLflow](how-to-use-mlflow-azure-databricks.md).
-* [Deploy models with MLflow](how-to-deploy-mlflow-models.md).
--
+* [Track ML models with MLflow and Azure Machine Learning CLI v2](how-to-use-mlflow-cli-runs.md)
+* [Convert your custom model to MLflow model format for no code deployments](how-to-convert-custom-model-to-mlflow.md)
+* [Deploy MLflow models to an online endpoint](how-to-deploy-mlflow-models-online-endpoints.md)
machine-learning Concept Model Management And Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-model-management-and-deployment.md
--- Previously updated : 11/04/2021++++ Last updated : 05/11/2022 # MLOps: Model management, deployment, lineage, and monitoring with Azure Machine Learning
-In this article, you'll learn how to use machine learning operations (MLOps) in Azure Machine Learning to manage the lifecycle of your models. MLOps improves the quality and consistency of your machine learning solutions.
+
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning developer platform you are using:"]
+> * [v1](./v1/concept-model-management-and-deployment.md)
+> * [v2 (current version)](concept-model-management-and-deployment.md)
+
+In this article, learn how to use Machine Learning Operations (MLOps) in Azure Machine Learning to manage the lifecycle of your models. MLOps improves the quality and consistency of your machine learning solutions.
## What is MLOps?
For more information, see the "Register model" section of [Deploy models](how-to
> [!IMPORTANT] > When you use the **Filter by** `Tags` option on the **Models** page of Azure Machine Learning Studio, instead of using `TagName : TagValue`, use `TagName=TagValue` without spaces. -
-Machine Learning can help you understand the CPU and memory requirements of the service that's created when you deploy your model. Profiling tests the service that runs your model and returns information like CPU usage, memory usage, and response latency. It also provides a CPU and memory recommendation based on the resource usage.
-
-For more information, see [Profile your model to determine resource utilization](how-to-deploy-profile-model.md).
### Package and debug models
For more information on ONNX with Machine Learning, see [Create and accelerate m
### Use models
-Trained machine learning models are deployed as web services in the cloud or locally. Deployments use CPU, GPU, or field-programmable gate arrays for inferencing. You can also use models from Power BI.
+Trained machine learning models are deployed as [endpoints](concept-endpoints.md) in the cloud or locally. Deployments use CPU or GPU for inferencing.
-When you use a model as a web service, you provide the following items:
+When deploying a model as an endpoint, you provide the following items:
* The models that are used to score data submitted to the service or device. * An entry script. This script accepts requests, uses the models to score the data, and returns a response. * A Machine Learning environment that describes the pip and conda dependencies required by the models and entry script. * Any other assets such as text and data that are required by the models and entry script.
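The entry script in that list follows a simple contract: `init()` loads the model once, and `run()` handles each scoring request. A minimal sketch (the model file name and input format are assumptions) looks like this:

```python
# score.py - minimal entry script sketch
import json
import os

import joblib

model = None

def init():
    # AZUREML_MODEL_DIR points at the folder where the registered model is mounted.
    global model
    model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model.pkl")  # assumed file name
    model = joblib.load(model_path)

def run(raw_data):
    # raw_data is the JSON body of the scoring request.
    data = json.loads(raw_data)["data"]
    predictions = model.predict(data)
    return predictions.tolist()
```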
-You also provide the configuration of the target deployment platform. Examples include the VM family type, available memory, and the number of cores when you deploy to Azure Kubernetes Service.
-
-When the image is created, components required by Machine Learning are also added. An example is the assets needed to run the web service.
+You also provide the configuration of the target deployment platform, for example, the VM family type, available memory, and number of cores. When the image is created, components required by Azure Machine Learning are also added, for example, the assets needed to run the web service.
#### Batch scoring
-Batch scoring is supported through machine learning pipelines. For more information, see [Batch predictions on big data](./tutorial-pipeline-batch-scoring-classification.md).
+Batch scoring is supported through batch endpoints. For more information, see [endpoints](concept-endpoints.md).
-#### Real-time web services
+#### Online endpoints
-You can use your models in web services with the following compute targets:
+You can use your models with an online endpoint. Online endpoints can use the following compute targets:
-* Azure Container Instances
+* Managed online endpoints
* Azure Kubernetes Service * Local development environment
-To deploy the model as a web service, you must provide the following items:
+To deploy the model to an endpoint, you must provide the following items:
* The model or ensemble of models. * Dependencies required to use the model. Examples are a script that accepts requests and invokes the model and conda dependencies.
For more information, see [Deploy models](how-to-deploy-and-where.md).
#### Controlled rollout
-When you deploy to Azure Kubernetes Service, you can use controlled rollout to enable the following scenarios:
+When deploying to an online endpoint, you can use controlled rollout to enable the following scenarios:
-* Create multiple versions of an endpoint for a deployment.
-* Perform A/B testing by routing traffic to different versions of the endpoint.
-* Switch between endpoint versions by updating the traffic percentage in endpoint configuration.
+* Create multiple versions of an endpoint for a deployment
+* Perform A/B testing by routing traffic to different deployments within the endpoint.
+* Switch between endpoint deployments by updating the traffic percentage in endpoint configuration.
For more information, see [Controlled rollout of machine learning models](./how-to-safely-rollout-managed-endpoints.md).
machine-learning Concept Network Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-network-data-access.md
To secure communication between Azure Machine Learning and Azure SQL Database, t
## Next steps
-For information on enabling studio in a network, see [Use Azure Machine Learning studio in an Azure Virtual Network](how-to-enable-studio-virtual-network.md).
+For information on enabling studio in a network, see [Use Azure Machine Learning studio in an Azure Virtual Network](how-to-enable-studio-virtual-network.md).
machine-learning Concept Open Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-open-source.md
-+ Last updated 11/04/2021
For more information on ONNX and how to consume ONNX models, see the following a
### Package and deploy models as containers
-Container technologies such as Docker are one way to deploy models as web services. Containers provide a platform and resource agnostic way to build and orchestrate reproducible software environments. With these core technologies, you can use [preconfigured environments](./how-to-use-environments.md), [preconfigured container images](./how-to-deploy-custom-container.md) or custom ones to deploy your machine learning models to such as [Kubernetes clusters](./how-to-deploy-azure-kubernetes-service.md?tabs=python). For GPU intensive workflows, you can use tools like NVIDIA Triton Inference server to [make predictions using GPUs](how-to-deploy-with-triton.md?tabs=python).
+Container technologies such as Docker are one way to deploy models as web services. Containers provide a platform and resource agnostic way to build and orchestrate reproducible software environments. With these core technologies, you can use [preconfigured environments](./how-to-use-environments.md), [preconfigured container images](./how-to-deploy-custom-container.md) or custom ones to deploy your machine learning models to such as [Kubernetes clusters](./v1/how-to-deploy-azure-kubernetes-service.md?tabs=python). For GPU intensive workflows, you can use tools like NVIDIA Triton Inference server to [make predictions using GPUs](how-to-deploy-with-triton.md?tabs=python).
### Secure deployments with homomorphic encryption
Securing deployments is an important part of the deployment process. To [deploy
Machine Learning Operations (MLOps), commonly thought of as DevOps for machine learning allows you to build more transparent, resilient, and reproducible machine learning workflows. See the [what is MLOps article](./concept-model-management-and-deployment.md) to learn more about MLOps.
-Using DevOps practices like continuous integration (CI) and continuous deployment (CD), you can automate the end-to-end machine learning lifecycle and capture governance data around it. You can define your [machine learning CI/CD pipeline in GitHub actions](./how-to-github-actions-machine-learning.md) to run Azure Machine Learning training and deployment tasks.
+Using DevOps practices like continuous integration (CI) and continuous deployment (CD), you can automate the end-to-end machine learning lifecycle and capture governance data around it. You can define your [machine learning CI/CD pipeline in GitHub Actions](./how-to-github-actions-machine-learning.md) to run Azure Machine Learning training and deployment tasks.
-Capturing software dependencies, metrics, metadata, data and model versioning are an important part of the MLOps process in order to build transparent, reproducible, and auditable pipelines. For this task, you can [use MLFlow in Azure Machine Learning](how-to-use-mlflow.md) as well as when [training machine learning models in Azure Databricks](./how-to-use-mlflow-azure-databricks.md). You can also [deploy MLflow models as an Azure web service](how-to-deploy-mlflow-models.md).
+Capturing software dependencies, metrics, metadata, data and model versioning are an important part of the MLOps process in order to build transparent, reproducible, and auditable pipelines. For this task, you can [use MLFlow in Azure Machine Learning](how-to-use-mlflow.md) as well as when [training machine learning models in Azure Databricks](./how-to-use-mlflow-azure-databricks.md). You can also [deploy MLflow models as an Azure web service](how-to-deploy-mlflow-models.md).
machine-learning Concept Optimize Data Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-optimize-data-processing.md
Last updated 10/21/2021-
-# Customer intent: As a data scientist I want to optimize data processing speeds at scale
+
+#Customer intent: As a data scientist I want to optimize data processing speeds at scale
# Optimize data processing with Azure Machine Learning
For data larger than 10 GB| Move to a cluster using `Ray`, `Dask`, or `Spark`
## Next steps * [Data ingestion options with Azure Machine Learning](concept-data-ingestion.md).
-* [Create and register datasets](how-to-create-register-datasets.md).
machine-learning Concept Plan Manage Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-plan-manage-cost.md
description: Plan and manage costs for Azure Machine Learning with cost analysis in Azure portal. Learn further cost-saving tips to lower your cost when building ML models. -+
When you create resources for an Azure Machine Learning workspace, resources for
* [Key Vault](https://azure.microsoft.com/pricing/details/key-vault?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) * [Application Insights](https://azure.microsoft.com/pricing/details/monitor?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn)
-When you create a [compute instance](concept-compute-instance.md), the VM stays on so it is available for your work. [Set up a schedule](how-to-create-manage-compute-instance.md#schedule) to automatically start and stop the compute instance (preview) to save cost when you aren't planning to use it.
+When you create a [compute instance](concept-compute-instance.md), the VM stays on so it is available for your work. [Set up a schedule](how-to-create-manage-compute-instance.md#schedule-automatic-start-and-stop-preview) to automatically start and stop the compute instance (preview) to save cost when you aren't planning to use it.
### Costs might accrue before resource deletion
After you delete an Azure Machine Learning workspace in the Azure portal or with
To delete the workspace along with these dependent resources, use the SDK:

```python
ws.delete(delete_dependent_resources=True)
```
machine-learning Concept Responsible Ai Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-responsible-ai-dashboard.md
+
+ Title: Assess AI systems and make data-driven decisions with Azure Machine Learning Responsible AI dashboard
+
+description: The Responsible AI dashboard is a comprehensive UI and set of SDK/YAML components to help data scientists debug their machine learning models and make data-driven decisions.
++++++ Last updated : 05/10/2022+++
+# Assess AI systems and make data-driven decisions with Azure Machine Learning Responsible AI dashboard (preview)
+
+Responsible AI requires rigorous engineering. Rigorous engineering, however, can be tedious, manual, and time-consuming without the right tooling and infrastructure. Data scientists need tools to implement responsible AI in practice effectively and efficiently.
+
+The Responsible AI dashboard provides a single interface that makes responsible machine learning engineering efficient and interoperable across the larger model development and assessment lifecycle. The tool brings together several mature Responsible AI tools in the areas of model statistics assessment, data exploration, [machine learning interpretability](https://interpret.ml/), [unfairness assessment](http://fairlearn.org/), [error analysis](https://erroranalysis.ai/), [causal inference](https://github.com/microsoft/EconML), and [counterfactual analysis](https://github.com/interpretml/DiCE) for a holistic assessment and debugging of models and making informed business decisions. With a single command or simple UI wizard, the dashboard addresses the fragmentation issues of multiple tools and enables you to:
+
+1. Evaluate and debug your machine learning models by identifying model errors, diagnosing why those errors are happening, and informing your mitigation steps.
+2. Boost your data-driven decision-making abilities by addressing questions such as *"What is the minimum change the end user could apply to their features to get a different outcome from the model?" and/or "What is the causal effect of reducing red meat consumption on diabetes progression?"*
+3. Export Responsible AI metadata of your data and models for sharing offline with product and compliance stakeholders.
+
+## Responsible AI dashboard components
+
+The Responsible AI dashboard brings together, in a comprehensive view, various new and pre-existing tools, integrating them with the Azure Machine Learning CLIv2, Python SDKv2 and studio. These tools include:
+
+1. [Data explorer](concept-data-analysis.md) to understand and explore your dataset distributions and statistics.
+2. [Model overview and fairness assessment](concept-fairness-ml.md) to evaluate the performance of your model and assess your model's group fairness issues (how diverse groups of people are impacted by your model's predictions).
+3. [Error Analysis](concept-error-analysis.md) to view and understand the error distributions of your model in a dataset via a decision tree map or a heat map visualization.
+4. [Model interpretability](how-to-machine-learning-interpretability.md) (aggregate/individual feature importance values) to understand your model's predictions and how those overall and individual predictions are made.
+5. [Counterfactual What-If's](concept-counterfactual-analysis.md) to observe how feature perturbations would impact your model predictions and provide you with the closest datapoints with opposing or different model predictions.
+6. [Causal analysis](concept-causal-inference.md) to use historical data to view the causal effects of treatment features on the real-world outcome.
+
+Together, these components will enable you to debug machine learning models, while informing your data-driven and model-driven decisions.
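As a rough illustration of how these components come together programmatically, here is a hedged sketch that uses the open-source `responsibleai` and `raiwidgets` packages behind the dashboard. The fitted model, the pandas DataFrames, and the column names are placeholders, not values from this article.

```python
from responsibleai import RAIInsights
from raiwidgets import ResponsibleAIDashboard

# Placeholder inputs: a fitted scikit-learn-style model plus train/test pandas DataFrames.
rai_insights = RAIInsights(model, train_df, test_df,
                           target_column="target", task_type="classification")

# Register the analyses the dashboard should surface.
rai_insights.explainer.add()                                   # model interpretability
rai_insights.error_analysis.add()                              # error analysis
rai_insights.counterfactual.add(total_CFs=10,
                                desired_class="opposite")      # counterfactual what-if
rai_insights.causal.add(treatment_features=["feature_a"])      # causal analysis

rai_insights.compute()                 # run every analysis that was added
ResponsibleAIDashboard(rai_insights)   # render the combined dashboard locally
```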
++
+### Model debugging
+
+Assessing and debugging machine learning models is critical for model reliability, interpretability, fairness, and compliance. It helps determine how and why AI systems behave the way they do. You can then use this knowledge to improve model performance. Conceptually, model debugging consists of three stages:
+
+- **Identify**, to understand and recognize model errors by addressing the following questions:
+ - *What kinds of errors does my model have?*
+ - *In what areas are errors most prevalent?*
+- **Diagnose**, to explore the reasons behind the identified errors by addressing:
+ - *What are the causes of these errors?*
+ - *Where should I focus my resources to improve my model?*
+- **Mitigate**, to use the identification and diagnosis insights from previous stages to take targeted mitigation steps and address questions such as:
+ - *How can I improve my model?*
+ - *What social or technical solutions exist for these issues?*
++
+Below are the components of the Responsible AI dashboard supporting model debugging:
+
+| Stage | Component | Description |
+|-|--|-|
+| Identify | Error Analysis | The Error Analysis component provides machine learning practitioners with a deeper understanding of model failure distribution and assists you with quickly identifying erroneous cohorts of data. <br><br> The capabilities of this component in the dashboard are founded by [Error Analysis](https://erroranalysis.ai/) capabilities on generating model error profiles.|
+| Identify | Fairness Analysis | The Fairness component assesses how different groups, defined in terms of sensitive attributes such as sex, race, age, etc., are affected by your model predictions and how the observed disparities may be mitigated. It evaluates the performance of your model by exploring the distribution of your prediction values and the values of your model performance metrics across different sensitive subgroups. The capabilities of this component in the dashboard are founded by [Fairlearn](https://fairlearn.org/) capabilities on generating model fairness assessments. |
+| Identify | Model Overview | The Model Statistics component aggregates various model assessment metrics, showing a high-level view of model prediction distribution for better investigation of its performance. It also enables group fairness assessment, highlighting the breakdown of model performance across different sensitive groups. |
+| Diagnose | Data Explorer | The Data Explorer component helps to visualize datasets based on predicted and actual outcomes, error groups, and specific features. This helps to identify issues of over- and underrepresentation and to see how data is clustered in the dataset. |
+| Diagnose | Model Interpretability | The Interpretability component generates human-understandable explanations of the predictions of a machine learning model. It provides multiple views into a model's behavior: global explanations (for example, which features affect the overall behavior of a loan allocation model) and local explanations (for example, why an applicant's loan application was approved or rejected). <br><br> The capabilities of this component in the dashboard are founded by [InterpretML](https://interpret.ml/) capabilities on generating model explanations. |
+| Diagnose | Counterfactual Analysis and What-If | The Counterfactual Analysis and what-if component consists of two functionalities for better error diagnosis: <br> - Generating a set of examples with minimal changes to a given point such that they change the model's prediction (showing the closest datapoints with opposite model predictions). <br> - Enabling interactive and custom what-if perturbations for individual data points to understand how the model reacts to feature changes. <br> <br> The capabilities of this component in the dashboard are founded by the [DiCE](https://github.com/interpretml/DiCE) package, which provides this information by showing feature-perturbed versions of the same datapoint, which would have received a different model prediction (for example, Taylor would have received the loan approval prediction if their yearly income was higher by $10,000). |
+
+Mitigation steps are available via stand-alone tools such as Fairlearn (for unfairness mitigation).
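For example, a minimal unfairness-mitigation sketch with the open-source Fairlearn package might look like the following; the training arrays and the sensitive-feature column are placeholders.

```python
from fairlearn.reductions import ExponentiatedGradient, DemographicParity
from sklearn.linear_model import LogisticRegression

# Train a mitigated model that enforces a demographic-parity constraint
# across a sensitive feature (for example, a column of self-reported sex).
mitigator = ExponentiatedGradient(LogisticRegression(),
                                  constraints=DemographicParity())
mitigator.fit(X_train, y_train, sensitive_features=A_train)

y_pred = mitigator.predict(X_test)
```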
+
+### Responsible decision-making
+
+Decision-making is one of the biggest promises of machine learning. The Responsible AI dashboard helps you inform your model-driven and data-driven business decisions.
+
+- Data-driven insights to further understand heterogeneous treatment effects on an outcome, using historic data only. For example, *"how would a medicine impact a patient's blood pressure?"*. Such insights are provided through the "Causal Inference" component of the dashboard.
+- Model-driven insights, to answer end-users' questions such as *"what can I do to get a different outcome from your AI next time?"* to inform their actions. Such insights are provided to data scientists through the "Counterfactual Analysis and What-If" component described above.
++
+Exploratory data analysis, counterfactual analysis, and causal inference capabilities can help you make informed model-driven and data-driven decisions responsibly.
+
+Below are the components of the Responsible AI dashboard supporting responsible decision making:
+
+- **Data Explorer**
+ - The component could be reused here to understand data distributions and identify over- and underrepresentation. Data exploration is a critical part of decision making as one can conclude that it isn't feasible to make informed decisions about a cohort that is underrepresented within data.
+- **Causal Inference**
+ - The Causal Inference component estimates how a real-world outcome changes in the presence of an intervention. It also helps to construct promising interventions by simulating different feature responses to various interventions and creating rules to determine which population cohorts would benefit from a particular intervention. Collectively, these functionalities allow you to apply new policies and effect real-world change.
+ - The capabilities of this component are founded by the [EconML](https://github.com/Microsoft/EconML) package, which estimates heterogeneous treatment effects from observational data via machine learning.
+- **Counterfactual Analysis**
+ - The Counterfactual Analysis component described above could be reused here to help data scientists generate a set of similar datapoints with opposite prediction outcomes (showing minimum changes applied to a datapoint's features leading to opposite model predictions). Providing counterfactual examples to the end users informs their perspective, educating them on how they can take action to get the desired outcome from the model in the future.
+ - The capabilities of this component are founded by the [DiCE](https://github.com/interpretml/DiCE) package (a minimal sketch follows this list).
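The sketch below shows the same idea with the open-source DiCE package; the DataFrame, feature names, and fitted classifier are placeholders.

```python
import dice_ml

# Wrap the training data and a fitted scikit-learn-style classifier for DiCE.
data = dice_ml.Data(dataframe=train_df,
                    continuous_features=["income", "age"],
                    outcome_name="loan_approved")
model = dice_ml.Model(model=clf, backend="sklearn")
explainer = dice_ml.Dice(data, model)

# Generate counterfactuals that flip the model's prediction for one applicant.
counterfactuals = explainer.generate_counterfactuals(
    query_df, total_CFs=4, desired_class="opposite")
counterfactuals.visualize_as_dataframe(show_only_changes=True)
```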
+
+## Why should you use the Responsible AI dashboard?
+
+While Responsible AI is about rigorous engineering, its operationalization is tedious, manual, and time-consuming without the right tooling and infrastructure. Guidance is minimal, and the few frameworks and tools that are available are disjointed, leaving data scientists without an easy way to explore and evaluate their models holistically.
+
+While progress has been made on individual tools for specific areas of Responsible AI, data scientists often need to use various such tools together to holistically evaluate their models and data. For example, if a data scientist discovers a fairness issue with one tool, they then need to jump to a different tool to understand what data or model factors lie at the root of the issue before taking any mitigation steps. This already challenging process is further complicated for the following reasons. First, there's no central location to discover and learn about the tools, which extends the time it takes to research and learn new techniques. Second, the different tools don't communicate directly with each other; data scientists must wrangle the datasets, models, and other metadata as they pass them between the different tools. Third, the metrics and visualizations aren't easily comparable, and the results are hard to share.
+
+The Responsible AI dashboard is the first comprehensive tool, bringing together fragmented experiences under one roof, enabling you to seamlessly onboard to a single customizable framework for model debugging and data-driven decision making.
+
+## How to customize the Responsible AI dashboard
+
+The Responsible AI dashboard's strength lies in its customizability. It empowers users to design tailored, end-to-end model debugging and decision-making workflows that address their particular needs. Need some inspiration? Here are some examples of how the dashboard's components can be put together to analyze scenarios in diverse ways:
+
+| Responsible AI Dashboard Flow | Use Case |
+|-|-|
+| Model Overview -> Error Analysis -> Data Explorer | To identify model errors and diagnose them by understanding the underlying data distribution |
+| Model Overview -> Fairness Assessment -> Data Explorer | To identify model fairness issues and diagnose them by understanding the underlying data distribution |
+| Model Overview -> Error Analysis -> Counterfactual Analysis and What-If | To diagnose errors in individual instances with counterfactual analysis (minimum change to lead to a different model prediction) |
+| Model Overview -> Data Explorer | To understand the root cause of errors and fairness issues introduced via data imbalances or lack of representation of a particular data cohort |
+| Model Overview -> Interpretability | To diagnose model errors through understanding how the model has made its predictions |
+| Data Explorer -> Causal Inference | To distinguish between correlations and causations in the data or decide the best treatments to apply to see a positive outcome |
+| Interpretability -> Causal Inference | To learn whether the features the model used for decision making have any causal effect on the real-world outcome. |
+| Data Explorer -> Counterfactual Analysis and What-If | To address customer questions about what they can do next time to get a different outcome from an AI. |
+
+## Who should use the Responsible AI dashboard?
+
+The Responsible AI dashboard, and its corresponding [Responsible AI scorecard](how-to-responsible-ai-scorecard.md), can be used by the following personas to build trust in AI systems.
+
+- Machine learning model engineers and data scientists who are interested in debugging and improving their machine learning models pre-deployment.
- Machine learning model engineers and data scientists who are interested in sharing their model health records with product managers and business stakeholders to build trust and receive deployment permissions.
+- Product managers and business stakeholders who are reviewing machine learning models pre-deployment.
+- Risk officers who are reviewing machine learning models for understanding fairness and reliability issues.
+- Providers of solution to end users who would like to explain model decisions to the end users.
+- Business stakeholders who need to review machine learning models with regulators and auditors.
+
+## Supported machine learning models and scenarios
+
+We support scikit-learn models for counterfactual generation and explanations. The scikit-learn models should implement `predict()`/`predict_proba()` methods, or the model should be wrapped in a class that implements `predict()`/`predict_proba()` methods.
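For example, a model that doesn't expose these methods directly could be wrapped as follows; this is a generic sketch, and the wrapped model is a placeholder.

```python
class ModelWrapper:
    """Expose the predict()/predict_proba() interface the dashboard expects."""

    def __init__(self, model):
        self._model = model

    def predict(self, X):
        return self._model.predict(X)

    def predict_proba(self, X):
        return self._model.predict_proba(X)
```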
+
+Currently, we support counterfactual generation and explanations for tabular datasets having numerical and categorical data types. Counterfactual generation and explanations aren't yet supported for free-form text data, images, or historical data.
+
+## Next steps
+
+- Learn how to generate the Responsible AI dashboard via the [CLI v2 and SDK v2](how-to-responsible-ai-dashboard-sdk-cli.md) or the [studio UI](how-to-responsible-ai-dashboard-ui.md).
+- Learn how to generate a [Responsible AI scorecard](how-to-responsible-ai-scorecard.md) based on the insights observed in the Responsible AI dashboard.
machine-learning Concept Responsible Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-responsible-ml.md
-- Previously updated : 10/21/2021-++ Last updated : 05/06/2022+ #Customer intent: As a data scientist, I want to learn what responsible AI is and how I can use it in Azure Machine Learning. # What is responsible AI? (preview)
-In this article, you'll learn what responsible AI is and ways you can put it into practice with Azure Machine Learning.
-## Responsible AI principles
-Throughout the development and use of AI systems, trust must be at the core. Trust in the platform, process, and models. At Microsoft, responsible AI with regard to machine learning encompasses the following values and principles:
+The societal implications of AI and the responsibility of organizations to anticipate and mitigate unintended consequences of AI technology are significant. Organizations are finding the need to create internal policies, practices, and tools to guide their AI efforts, whether they're deploying third-party AI solutions or developing their own. At Microsoft, we've recognized six principles that we believe should guide AI development and use: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. For us, these principles are the cornerstone of a responsible and trustworthy approach to AI, especially as intelligent technology becomes more prevalent in the products and services we use every day. Azure Machine Learning currently supports tools for several of these principles, making it seamless for ML developers and data scientists to implement Responsible AI in practice.
-- Understand machine learning models
- - Interpret and explain model behavior
- - Assess and mitigate model unfairness
-- Protect people and their data
- - Prevent data exposure with differential privacy
- - Work with encrypted data using homomorphic encryption
-- Control the end-to-end machine learning process
- - Document the machine learning lifecycle with datasheets
+## Fairness and inclusiveness
-As artificial intelligence and autonomous systems integrate more into the fabric of society, it's important to proactively make an effort to anticipate and mitigate the unintended consequences of these technologies.
+AI systems should treat everyone fairly and avoid affecting similarly situated groups of people in different ways. For example, when AI systems provide guidance on medical treatment, loan applications, or employment, they should make the same recommendations to everyone with similar symptoms, financial circumstances, or professional qualifications.
-## Interpret and explain model behavior
+**Fairness and inclusiveness in Azure Machine Learning**: Azure Machine Learning's [fairness assessment component](./concept-fairness-ml.md) of the [Responsible AI dashboard](./concept-responsible-ai-dashboard.md) enables data scientists and ML developers to assess model fairness across sensitive groups defined in terms of gender, ethnicity, age, etc.
-Hard to explain or opaque-box systems can be problematic because it makes it hard for stakeholders like system developers, regulators, users, and business decision makers to understand why systems make certain decisions. Some AI systems are more explainable than others and there's sometimes a tradeoff between a system with higher accuracy and one that is more explainable.
+## Reliability and safety
-To build interpretable AI systems, use [InterpretML](https://github.com/interpretml/interpret), an open-source package built by Microsoft. The InterpretML package supports a wide variety of interpretability techniques such as SHapley Additive exPlanations (SHAP), mimic explainer and permutation feature importance (PFI). [InterpretML can be used inside of Azure Machine Learning](how-to-machine-learning-interpretability.md) to [interpret and explain your machine learning models](how-to-machine-learning-interpretability-aml.md), including [automated machine learning models](how-to-machine-learning-interpretability-automl.md).
+To build trust, it's critical that AI systems operate reliably, safely, and consistently under normal circumstances and in unexpected conditions. These systems should be able to operate as they were originally designed, respond safely to unanticipated conditions, and resist harmful manipulation. It's also important to be able to verify that these systems are behaving as intended under actual operating conditions. How they behave and the variety of conditions they can handle reliably and safely largely reflects the range of situations and circumstances that developers anticipate during design and testing.
-## Mitigate fairness in machine learning models
+**Reliability and safety in Azure Machine Learning**: Azure Machine Learning's [Error Analysis](./concept-error-analysis.md) component of the [Responsible AI dashboard](./concept-responsible-ai-dashboard.md) enables data scientists and ML developers to get a deep understanding of how failure is distributed for a model and to identify cohorts of data with a higher error rate than the overall benchmark. These discrepancies might occur when the system or model underperforms for specific demographic groups or infrequently observed input conditions in the training data.
-As AI systems become more involved in the everyday decision-making of society, it's of extreme importance that these systems work well in providing fair outcomes for everyone.
+## Transparency
-Unfairness in AI systems can result in the following unintended consequences:
+When AI systems are used to help inform decisions that have tremendous impacts on people's lives, it's critical that people understand how those decisions were made. For example, a bank might use an AI system to decide whether a person is creditworthy, or a company might use an AI system to determine the most qualified candidates to hire.
-- Withholding opportunities, resources or information from individuals.-- Reinforcing biases and stereotypes.
+A crucial part of transparency is what we refer to as interpretability, or the useful explanation of the behavior of AI systems and their components. Improving interpretability requires that stakeholders comprehend how and why they function so that they can identify potential performance issues, safety and privacy concerns, fairness issues, exclusionary practices, or unintended outcomes.
-Many aspects of fairness can't be captured or represented by metrics. There are tools and practices that can improve fairness in the design and development of AI systems.
+**Transparency in Azure Machine Learning**: Azure Machine Learning's [Model Interpretability](how-to-machine-learning-interpretability.md) and [Counterfactual What-If](./concept-counterfactual-analysis.md) components of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md) enable data scientists and ML developers to generate human-understandable descriptions of the predictions of a model. The Interpretability component provides multiple views into a model's behavior: global explanations (for example, what features affect the overall behavior of a loan allocation model) and local explanations (for example, why a customer's loan application was approved or rejected). One can also observe model explanations for a selected cohort as a subgroup of data points. Moreover, the Counterfactual What-If component enables understanding and debugging a machine learning model in terms of how it reacts to input (feature) changes. Azure Machine Learning also supports a Responsible AI scorecard, a customizable report that machine learning developers can easily configure, download, and share with their technical and non-technical stakeholders to educate them about data and model health and compliance and build trust. This scorecard could also be used in audit reviews to inform the stakeholders about the characteristics of machine learning models.
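To make interpretability concrete, here is a minimal sketch with the open-source InterpretML package (one of the tools behind the dashboard's interpretability capabilities); the training and test data are placeholders.

```python
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

# Train an inherently interpretable (glassbox) model.
ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

show(ebm.explain_global())                        # which features drive the model overall
show(ebm.explain_local(X_test[:5], y_test[:5]))   # why individual predictions were made
```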
-Two key steps in reducing unfairness in AI systems are assessment and mitigation. We recommend [FairLearn](https://github.com/fairlearn/fairlearn), an open-source package that can assess and mitigate the potential unfairness of AI systems. To learn more about fairness and the FairLearn package, see the [Fairness in ML article](./concept-fairness-ml.md).
+## Privacy and Security
-## Prevent data exposure with differential privacy
+As AI becomes more prevalent, protecting privacy and securing important personal and business information is becoming more critical and complex. With AI, privacy and data security issues require especially close attention because access to data is essential for AI systems to make accurate and informed predictions and decisions about people. AI systems must comply with privacy laws that require transparency about the collection, use, and storage of data and mandate that consumers have appropriate controls to choose how their data is used.
-When data is used for analysis, it's important that the data remains private and confidential throughout its use. Differential privacy is a set of systems and practices that help keep the data of individuals safe and private.
+**Privacy and Security in Azure Machine Learning**: Implementing differentially private systems is difficult. [SmartNoise](https://github.com/opendifferentialprivacy/smartnoise-core) is an open-source project (co-developed by Microsoft) that contains different components for building global differentially private systems. To learn more about differential privacy and the SmartNoise project, see the [Preserve data privacy by using differential privacy and SmartNoise](concept-differential-privacy.md) article. Azure Machine Learning also enables administrators, DevOps, and MLOps to [create a secure configuration that is compliant](concept-enterprise-security.md) with your company's policies. With Azure Machine Learning and the Azure platform, you can:
-In traditional scenarios, raw data is stored in files and databases. When users analyze data, they typically use the raw data. This is a concern because it might infringe on an individual's privacy. Differential privacy tries to deal with this problem by adding "noise" or randomness to the data so that users can't identify any individual data points.
+- Restrict access to resources and operations by user account or groups
+- Restrict incoming and outgoing network communications
+- Encrypt data in transit and at rest
+- Scan for vulnerabilities
+- Apply and audit configuration policies
-Implementing differentially private systems is difficult. [SmartNoise](https://github.com/opendifferentialprivacy/smartnoise-core) is an open-source project that contains different components for building global differentially private systems. To learn more about differential privacy and the SmartNoise project, see the [preserve data privacy by using differential privacy and SmartNoise](./concept-differential-privacy.md) article.
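To illustrate the idea, here is a small, library-free sketch of the Laplace mechanism; it is only an illustration of the concept, not the SmartNoise API.

```python
import numpy as np

def laplace_release(true_value, sensitivity, epsilon, rng=None):
    """Return a differentially private release of a numeric query result."""
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon   # smaller epsilon (more privacy) means more noise
    return true_value + rng.laplace(0.0, scale)

# Example: privatize a count query whose sensitivity is 1.
noisy_count = laplace_release(true_value=128, sensitivity=1, epsilon=0.5)
```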
+Besides SmartNoise, Microsoft released [Counterfit](https://github.com/Azure/counterfit/), an open-source project that comprises a command-line tool and generic automation layer to allow developers to simulate cyber-attacks against AI systems. Anyone can download the tool and deploy it through Azure Shell, to run in-browser, or locally in an Anaconda Python environment. It can assess AI models hosted in various cloud environments, on-premises, or at the edge. The tool is agnostic to AI models and supports various data types, including text, images, or generic input.
-## Work on encrypted data with homomorphic encryption
+## Accountability
-In traditional cloud storage and computation solutions, the cloud needs to have unencrypted access to customer data to compute on it. This access exposes the data to cloud operators. Data privacy relies on access control policies implemented by the cloud and trusted by the customer.
+The people who design and deploy AI systems must be accountable for how their systems operate. Organizations should draw upon industry standards to develop accountability norms. These norms can ensure that AI systems aren't the final authority on any decision that impacts people's lives and that humans maintain meaningful control over otherwise highly autonomous AI systems.
-Homomorphic encryption allows for computations to be done on encrypted data without requiring access to a secret (decryption) key. The results of the computations are encrypted and can be revealed only by the owner of the secret key. Using homomorphic encryption, cloud operators will never have unencrypted access to the data they're storing and computing on. Computations are performed directly on encrypted data. Data privacy relies on state-of-the-art cryptography, and the data owner controls all information releases. For more information on homomorphic encryption at Microsoft, see [Microsoft Research](https://www.microsoft.com/research/project/homomorphic-encryption/).
+**Accountability in Azure Machine Learning**: Azure Machine Learning's [Machine Learning Operations (MLOps)](concept-model-management-and-deployment.md) is based on DevOps principles and practices that increase the efficiency of workflows. It specifically supports quality assurance and end-to-end lineage tracking to capture the governance data for the end-to-end ML lifecycle. The logged lineage information can include who is publishing models, why changes were made, and when models were deployed or used in production.
-To get started with homomorphic encryption in Azure Machine Learning, use the [encrypted-inference](https://pypi.org/project/encrypted-inference/) Python bindings for [Microsoft SEAL](https://github.com/microsoft/SEAL). Microsoft SEAL is an open-source homomorphic encryption library that allows additions and multiplications to be performed on encrypted integers or real numbers. To learn more about Microsoft SEAL, see the [Azure Architecture Center](/azure/architecture/solution-ideas/articles/homomorphic-encryption-seal) or the [Microsoft Research project page](https://www.microsoft.com/research/project/microsoft-seal/).
+Azure Machine Learning's [Responsible AI scorecard](./how-to-responsible-ai-scorecard.md) creates accountability by enabling cross-stakeholder communication and by empowering machine learning developers to easily configure, download, and share their model health insights with their technical and non-technical stakeholders to educate them about data and model health and compliance, and to build trust.
-See the following sample to learn [how to deploy an encrypted inferencing web service in Azure Machine Learning](how-to-homomorphic-encryption-seal.md).
+The ML platform also enables decision-making by informing model-driven and data-driven business decisions:
-## Document the machine learning lifecycle with datasheets
+- Data-driven insights to further understand heterogeneous treatment effects on an outcome, using historic data only. For example, "how would a medicine impact a patient's blood pressure?". Such insights are provided through the [Causal Inference](concept-causal-inference.md) component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md) (a minimal sketch with the underlying EconML package follows this list).
+- Model-driven insights, to answer end-users' questions such as "what can I do to get a different outcome from your AI next time?" to inform their actions. Such insights are provided to data scientists through the [Counterfactual What-If](concept-counterfactual-analysis.md) component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md).
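A minimal sketch of estimating such treatment effects with the open-source EconML package, which backs the causal inference component, might look like this; the outcome, treatment, and feature arrays are placeholders.

```python
from econml.dml import LinearDML
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

# Y: outcome (for example, blood pressure), T: treatment (for example, medicine taken),
# X: features that drive effect heterogeneity, W: other controls.
est = LinearDML(model_y=RandomForestRegressor(),
                model_t=RandomForestClassifier(),
                discrete_treatment=True)
est.fit(Y, T, X=X, W=W)

treatment_effects = est.effect(X_test)   # heterogeneous effect per row of X_test
```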
-Documenting the right information in the machine learning process is key to making responsible decisions at each stage. Datasheets are a way to document machine learning assets that are used and created as part of the machine learning lifecycle.
+## Next steps
-Models tend to be thought of as "opaque boxes" and often there's little information about them. Because machine learning systems are becoming more pervasive and are used for decision making, using datasheets is a step towards developing more responsible machine learning systems.
-
-Some model information you might want to document as part of a datasheet:
--- Intended use-- Model architecture-- Training data used-- Evaluation data used-- Training model performance metrics-- Fairness information-
-See the following sample to learn how to use the Azure Machine Learning SDK to implement [datasheets for models](https://github.com/microsoft/MLOps/blob/master/pytorch_with_datasheet/model_with_datasheet.ipynb).
-
-## Additional resources
--- For more information, see the [responsible innovation toolkit](/azure/architecture/guide/responsible-innovation/) to learn about best practices.-- Learn more about the [ABOUT ML](https://www.partnershiponai.org/about-ml/) set of guidelines for machine learning system documentation.
+- For more information on how to implement Responsible AI in Azure Machine Learning, see [Responsible AI dashboard](concept-responsible-ai-dashboard.md).
+- Learn more about the [ABOUT ML](https://www.partnershiponai.org/about-ml/) set of guidelines for machine learning system documentation.
machine-learning Concept Secure Network Traffic Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-secure-network-traffic-flow.md
description: Learn how network traffic flows between components when your Azure
+ Previously updated : 02/08/2022 Last updated : 04/08/2022 # Network traffic flow when using a secured workspace
This article assumes the following configuration:
| [Access workspace from studio](#scenario-access-workspace-from-studio) | NA | <ul><li>Azure Active Directory</li><li>Azure Front Door</li><li>Azure Machine Learning service</li></ul> | You may need to use a custom DNS server. For more information, see [Use your workspace with a custom DNS](how-to-custom-dns.md). | | [Use AutoML, designer, dataset, and datastore from studio](#scenario-use-automl-designer-dataset-and-datastore-from-studio) | NA | NA | <ul><li>Workspace service principal configuration</li><li>Allow access from trusted Azure services</li></ul>For more information, see [How to secure a workspace in a virtual network](how-to-secure-workspace-vnet.md#secure-azure-storage-accounts). | | [Use compute instance and compute cluster](#scenario-use-compute-instance-and-compute-cluster) | <ul><li>Azure Machine Learning service on port 44224</li><li>Azure Batch Management service on ports 29876-29877</li></ul> | <ul><li>Azure Active Directory</li><li>Azure Resource Manager</li><li>Azure Machine Learning service</li><li>Azure Storage Account</li><li>Azure Key Vault</li></ul> | If you use a firewall, create user-defined routes. For more information, see [Configure inbound and outbound traffic](how-to-access-azureml-behind-firewall.md). |
-| [Use Azure Kubernetes Service](#scenario-use-azure-kubernetes-service) | NA | For information on the outbound configuration for AKS, see [How to deploy to Azure Kubernetes Service](how-to-deploy-azure-kubernetes-service.md#understand-connectivity-requirements-for-aks-inferencing-cluster). | Configure the Internal Load Balancer. For more information, see [How to deploy to Azure Kubernetes Service](how-to-deploy-azure-kubernetes-service.md#understand-connectivity-requirements-for-aks-inferencing-cluster). |
+| [Use Azure Kubernetes Service](#scenario-use-azure-kubernetes-service) | NA | For information on the outbound configuration for AKS, see [How to deploy to Azure Kubernetes Service](v1/how-to-deploy-azure-kubernetes-service.md#understand-connectivity-requirements-for-aks-inferencing-cluster). | Configure the Internal Load Balancer. For more information, see [How to deploy to Azure Kubernetes Service](v1/how-to-deploy-azure-kubernetes-service.md#understand-connectivity-requirements-for-aks-inferencing-cluster). |
| [Use Docker images managed by Azure Machine Learning](#scenario-use-docker-images-managed-by-azure-ml) | NA | <ul><li>Microsoft Container Registry</li><li>`viennaglobal.azurecr.io` global container registry</li></ul> | If the Azure Container Registry for your workspace is behind the VNet, configure the workspace to use a compute cluster to build images. For more information, see [How to secure a workspace in a virtual network](how-to-secure-workspace-vnet.md#enable-azure-container-registry-acr). | > [!IMPORTANT]
If you use Visual Studio Code on a compute instance, you must allow other outbou
:::image type="content" source="./media/concept-secure-network-traffic-flow/compute-instance-and-cluster.png" alt-text="Diagram of traffic flow when using compute instance or cluster":::
+## Scenario: Use online endpoints
+
+Securing an online endpoint with a private endpoint is a preview feature.
++
+__Inbound__ communication with the scoring URL of the online endpoint can be secured using the `public_network_access` flag on the endpoint. Setting the flag to `disabled` restricts the online endpoint to receiving traffic only from the virtual network. For secure inbound communications, the Azure Machine Learning workspace's private endpoint is used.
+
+__Outbound__ communication from a deployment can be secured on a per-deployment basis by using the `egress_public_network_access` flag. Outbound communication in this case is from the deployment to Azure Container Registry, storage blob, and workspace. Setting the flag to `disabled` restricts communication with these resources to the virtual network.
+
+> [!NOTE]
+> For secure outbound communication, a private endpoint is created for each deployment where `egress_public_network_access` is set to `disabled`.
+
+Visibility of the endpoint is also governed by the `public_network_access` flag of the Azure Machine Learning workspace. If this flag is `disabled`, then the scoring endpoints can only be accessed from virtual networks that contain a private endpoint for the workspace. If it is `enabled`, then the scoring endpoint can be accessed from the virtual network and public networks.
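As a hedged sketch of how these flags might be set with the Python SDK v2 preview (`azure-ai-ml`), assuming the endpoint and deployment entities accept `public_network_access` and `egress_public_network_access` as shown; names and values are placeholders, and other required deployment settings are omitted.

```python
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient
from azure.ai.ml.entities import ManagedOnlineEndpoint, ManagedOnlineDeployment

ml_client = MLClient(DefaultAzureCredential(),
                     "<subscription-id>", "<resource-group>", "<workspace-name>")

# Secure inbound: the scoring URI only accepts traffic from the virtual network.
endpoint = ManagedOnlineEndpoint(name="my-endpoint", auth_mode="key",
                                 public_network_access="disabled")
ml_client.online_endpoints.begin_create_or_update(endpoint)

# Secure outbound: the deployment reaches Azure Container Registry, blob storage,
# and the workspace over private endpoints only (other settings omitted for brevity).
deployment = ManagedOnlineDeployment(name="blue", endpoint_name="my-endpoint",
                                     model="azureml:my-model:1",
                                     instance_type="Standard_DS3_v2", instance_count=1,
                                     egress_public_network_access="disabled")
ml_client.online_deployments.begin_create_or_update(deployment)
```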
+
+### Supported configurations
+
+| Configuration | Inbound </br> (Endpoint property) | Outbound </br> (Deployment property) | Supported? |
+| -- | -- | | |
+| secure inbound with secure outbound | `public_network_access` is disabled | `egress_public_network_access` is disabled | Yes |
+| secure inbound with public outbound | `public_network_access` is disabled | `egress_public_network_access` is enabled | Yes |
+| public inbound with secure outbound | `public_network_access` is enabled | `egress_public_network_access` is disabled | Yes |
+| public inbound with public outbound | `public_network_access` is enabled | `egress_public_network_access` is enabled | Yes |
+ ## Scenario: Use Azure Kubernetes Service
-For information on the outbound configuration required for Azure Kubernetes Service, see the connectivity requirements section of [How to deploy to Azure Kubernetes Service](how-to-deploy-azure-kubernetes-service.md#understand-connectivity-requirements-for-aks-inferencing-cluster).
+For information on the outbound configuration required for Azure Kubernetes Service, see the connectivity requirements section of [How to deploy to Azure Kubernetes Service](v1/how-to-deploy-azure-kubernetes-service.md#understand-connectivity-requirements-for-aks-inferencing-cluster).
> [!NOTE] > The Azure Kubernetes Service load balancer is not the same as the load balancer created by Azure Machine Learning. If you want to host your model as a secured application, only available on the VNet, use the internal load balancer created by Azure Machine Learning. If you want to allow public access, use the public load balancer created by Azure Machine Learning.
If you provide your own docker images, such as on an Azure Container Registry th
Now that you've learned how network traffic flows in a secured configuration, learn more about securing Azure ML in a virtual network by reading the [Virtual network isolation and privacy overview](how-to-network-security-overview.md) article.
-For information on best practices, see the [Azure Machine Learning best practices for enterprise security](/azure/cloud-adoption-framework/ready/azure-best-practices/ai-machine-learning-enterprise-security) article.
+For information on best practices, see the [Azure Machine Learning best practices for enterprise security](/azure/cloud-adoption-framework/ready/azure-best-practices/ai-machine-learning-enterprise-security) article.
machine-learning Concept Train Machine Learning Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-train-machine-learning-model.md
Last updated 09/23/2021-+ ms.devlang: azurecli
Azure Machine Learning provides several ways to train your models, from code-fir
+ **Azure CLI**: The machine learning CLI provides commands for common tasks with Azure Machine Learning, and is often used for **scripting and automating tasks**. For example, once you've created a training script or pipeline, you might use the Azure CLI to start a training run on a schedule or when the data files used for training are updated. For training models, it provides commands that submit training jobs. It can submit jobs using run configurations or pipelines.
-Each of these training methods can use different types of compute resources for training. Collectively, these resources are referred to as [__compute targets__](concept-azure-machine-learning-architecture.md#compute-targets). A compute target can be a local machine or a cloud resource, such as an Azure Machine Learning Compute, Azure HDInsight, or a remote virtual machine.
+Each of these training methods can use different types of compute resources for training. Collectively, these resources are referred to as [__compute targets__](v1/concept-azure-machine-learning-architecture.md#compute-targets). A compute target can be a local machine or a cloud resource, such as an Azure Machine Learning Compute, Azure HDInsight, or a remote virtual machine.
## Python SDK
A generic training job with Azure Machine Learning can be defined using the [Scr
You may start with a run configuration for your local computer, and then switch to one for a cloud-based compute target as needed. When changing the compute target, you only change the run configuration you use. A run also logs information about the training job, such as the inputs, outputs, and logs.
-* [What is a run configuration?](concept-azure-machine-learning-architecture.md#run-configurations)
+* [What is a run configuration?](v1/concept-azure-machine-learning-architecture.md#run-configurations)
* [Tutorial: Train your first ML model](tutorial-1st-experiment-sdk-train.md) * [Examples: Jupyter Notebook and Python examples of training models](https://github.com/Azure/azureml-examples) * [How to: Configure a training run](how-to-set-up-training-targets.md)
The designer lets you train models using a drag and drop interface in your web b
The machine learning CLI is an extension for the Azure CLI. It provides cross-platform CLI commands for working with Azure Machine Learning. Typically, you use the CLI to automate tasks, such as training a machine learning model.
-* [Use the CLI extension for Azure Machine Learning](reference-azure-machine-learning-cli.md)
+* [Use the CLI extension for Azure Machine Learning](how-to-configure-cli.md)
* [MLOps on Azure](https://github.com/microsoft/MLOps) ## VS Code
machine-learning Concept Train Model Git Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-train-model-git-integration.md
Last updated 04/05/2022-+ # Git integration for Azure Machine Learning
The logged information contains text similar to the following JSON:
After submitting a training run, a [Run](/python/api/azureml-core/azureml.core.run%28class%29) object is returned. The `properties` attribute of this object contains the logged git information. For example, the following code retrieves the commit hash: + ```python run.properties['azureml.git.commit'] ```
machine-learning Concept V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-v2.md
+
+ Title: 'CLI & SDK v2'
+
+description: This article explains the difference between the v1 and v2 versions of the Azure Machine Learning CLI and Python SDK.
+++++++ Last updated : 04/29/2022+
+#Customer intent: As a data scientist, I want to know whether to use v1 or v2 of CLI, SDK.
++
+# What is Azure Machine Learning CLI & Python SDK v2?
++
+Azure Machine Learning CLI v2 and Azure Machine Learning Python SDK v2 (preview) introduce a consistency of features and terminology across the interfaces. In order to create this consistency, the syntax of commands differs, in some cases significantly, from the first versions (v1).
+
+## Azure Machine Learning CLI v2
+
+The Azure Machine Learning CLI v2 (CLI v2) is the latest extension for the [Azure CLI](/cli/azure/what-is-azure-cli). The CLI v2 provides commands in the format *az ml __\<noun\> \<verb\> \<options\>__* to create and maintain Azure ML assets and workflows. The assets or workflows themselves are defined using a YAML file. The YAML file defines the configuration of the asset or workflow: what it is, where it should run, and so on.
+
+A few examples of CLI v2 commands:
+
+* `az ml job create --file my_job_definition.yaml`
+* `az ml environment update --name my-env --file my_updated_env_definition.yaml`
+* `az ml model list`
+* `az ml compute show --name my_compute`
+
+### Use cases for CLI v2
+
+The CLI v2 is useful in the following scenarios:
+
+* Onboard to Azure ML without the need to learn a specific programming language
+
+ The YAML file defines the configuration of the asset or workflow: what it is, where it should run, and so on. Any custom logic or IP, such as data preparation, model training, or model scoring, can remain in script files, which are referenced in the YAML but aren't part of the YAML itself. Azure ML supports script files in Python, R, Java, Julia, or C#. All you need to learn is the YAML format and the command line to use Azure ML; you can stick with script files of your choice.
+
+* Ease of deployment and automation
+
+ Using the command line for execution makes deployment and automation simpler, since workflows can be invoked from any offering or platform that can call the command line.
+
+* Managed inference deployments
+
+ Azure ML offers [endpoints](concept-endpoints.md) to streamline model deployments for both real-time and batch inference deployments. This functionality is available only via CLI v2 and SDK v2 (preview).
+
+* Reusable components in pipelines
+
+ Azure ML introduces [components](concept-component.md) for managing and reusing common logic across pipelines. This functionality is available only via CLI v2 and SDK v2.
++
+## Azure Machine Learning Python SDK v2 (preview)
+
+> [!IMPORTANT]
+> SDK v2 is currently in public preview.
+> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Azure ML Python SDK v2 is an updated Python SDK package, which allows users to:
+
+* Submit training jobs
+* Manage data, models, environments
+* Perform managed inferencing (real time and batch)
+* Stitch together multiple tasks and production workflows using Azure ML pipelines
+
+The SDK v2 is on par with CLI v2 functionality and is consistent in how assets (nouns) and actions (verbs) are used between SDK and CLI. For example, to list an asset, the `list` action can be used in both CLI and SDK. The same `list` action can be used to list a compute, model, environment, and so on.
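For instance, a short, hedged sketch with the SDK v2 preview package (`azure-ai-ml`); the workspace identifiers are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient

ml_client = MLClient(DefaultAzureCredential(),
                     "<subscription-id>", "<resource-group>", "<workspace-name>")

# The same list verb works across asset types.
for model in ml_client.models.list():
    print(model.name)

for environment in ml_client.environments.list():
    print(environment.name)
```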
+
+### Use cases for SDK v2
+
+The SDK v2 is useful in the following scenarios:
+
+* Use Python functions to build a single step or a complex workflow
+
+ SDK v2 allows you to build a single command or a chain of commands like Python functions: the command has a name and parameters, expects input, and returns output.
+
+* Move from simple to complex concepts incrementally
+
+ SDK v2 allows you to:
+ * Construct a single command.
+ * Add a hyperparameter sweep on top of that command.
+ * Add the command with various others into a pipeline one after the other.
+
+ This construction is useful, given the iterative nature of machine learning.
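A hedged sketch of the first two steps of that progression with the SDK v2 preview; the script, environment, and compute names are placeholders.

```python
from azure.ai.ml import command
from azure.ai.ml.sweep import Choice

# 1. A single command job.
job = command(code="./src",
              command="python train.py --learning_rate ${{inputs.learning_rate}}",
              inputs={"learning_rate": 0.01},
              environment="my-environment@latest",   # placeholder registered environment
              compute="cpu-cluster")                  # placeholder compute cluster

# 2. The same command with a hyperparameter sweep on top.
job_for_sweep = job(learning_rate=Choice(values=[0.01, 0.1, 1.0]))
sweep_job = job_for_sweep.sweep(sampling_algorithm="random",
                                primary_metric="accuracy",
                                goal="Maximize")
```

The resulting jobs can then be composed with other steps into a pipeline, which is the third step in the list above.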
+
+* Reusable components in pipelines
+
+ Azure ML introduces [components](concept-component.md) for managing and reusing common logic across pipelines. This functionality is available only via CLI v2 and SDK v2.
+
+* Managed inferencing
+
+ Azure ML offers [endpoints](concept-endpoints.md) to streamline model deployments for both real-time and batch inference deployments. This functionality is available only via CLI v2 and SDK v2.
+
+## Should I use v1 or v2?
+
+### CLI v2
+
+The Azure Machine Learning CLI v1 has been deprecated. We recommend you use CLI v2 if:
+
+* You were a CLI v1 user
+* You want to use new features like reusable components and managed inferencing
+* You don't want to use a Python SDK: CLI v2 allows you to use YAML with scripts in Python, R, Java, Julia, or C#
+* You were previously a user of the R SDK: Azure ML won't support an SDK in `R`; however, CLI v2 supports `R` scripts.
+* You want to use command line based automation/deployments
+
+### SDK v2
+
+The Azure Machine Learning Python SDK v1 doesn't have a planned deprecation date. If you have significant investments in Python SDK v1 and don't need any new features offered by SDK v2, you can continue to use SDK v1. However, you should consider using SDK v2 if:
+
+* You want to use new features like reusable components and managed inferencing
+* You're starting a new workflow or pipeline: all new features and future investments will be introduced in v2
+* You want to take advantage of the improved usability of the Python SDK v2: the ability to compose jobs and pipelines using Python functions, easy evolution from simple to complex tasks, and so on.
+* You don't need features like AutoML in pipelines, Parallel Run Steps, Scheduling Pipelines and Spark Jobs. These features are not yet available in SDK v2.
+
+## Next steps
+
+* Get started with CLI v2
+
+ * [Install and set up CLI (v2)](how-to-configure-cli.md)
+ * [Train models with the CLI (v2)](how-to-train-cli.md)
+ * [Deploy and score models with managed online endpoint](how-to-deploy-managed-online-endpoints.md)
+
+* Get started with SDK v2
+
+ * [Install and set up SDK (v2)](https://aka.ms/sdk-v2-install)
+ * [Train models with the Azure ML Python SDK v2 (preview)](how-to-train-sdk.md)
+ * [Tutorial: Create production ML pipelines with Python SDK v2 (preview) in a Jupyter notebook](tutorial-pipeline-python-sdk.md)
machine-learning Concept Vulnerability Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-vulnerability-management.md
Last updated 12/16/2021
+ # Vulnerability management for Azure Machine Learning
Compute clusters automatically upgrade to the latest VM image. If the cluster is
* Managed Online Endpoints automatically receive OS host image updates that include vulnerability fixes. The update frequency of images is at least once a month. * Compute nodes get automatically upgraded to the latest VM image version once released. There's no action required from you.
-### Azure Arc-enabled Kubernetes clusters
+### Customer managed Kubernetes clusters
-[Azure Arc compute](how-to-attach-arc-kubernetes.md) lets you configure Azure Arc-enabled Kubernetes clusters to train, inference, and manage models in Azure Machine Learning.
-* Because you manage the environment with Azure Arc, both OS VM vulnerabilities and container image vulnerability management is your responsibility.
-* Azure Machine Learning frequently publishes new versions of AMLArc container images into Microsoft Container Registry. It's MicrosoftΓÇÖs responsibility to ensure new image versions are free from vulnerabilities. Vulnerabilities are fixed with [each release](https://github.com/Azure/AML-Kubernetes/blob/master/docs/release-notes.md).
+[Kubernetes compute](how-to-attach-kubernetes-anywhere.md) lets you configure Kubernetes clusters to train, inference, and manage models in Azure Machine Learning.
+* Because you manage the environment with Kubernetes, both OS VM vulnerability management and container image vulnerability management are your responsibility.
+* Azure Machine Learning frequently publishes new versions of AzureML extension container images into Microsoft Container Registry. It's Microsoft's responsibility to ensure new image versions are free from vulnerabilities. Vulnerabilities are fixed with [each release](https://github.com/Azure/AML-Kubernetes/blob/master/docs/release-notes.md).
* When your clusters run jobs without interruption, running jobs may run outdated container image versions. Once you upgrade the AMLArc extension on a running cluster, newly submitted jobs will start to use the latest image version. When upgrading the AMLArc extension to its latest version, clean up the old container image versions from the clusters as required. * To check whether your Azure Arc cluster is running the latest version of AMLArc, use the Azure portal. Under your Arc resource of the type 'Kubernetes - Azure Arc', see 'Extensions' to find the version of the AMLArc extension.
For code-based training experiences, you control which Azure Machine Learning en
* [Azure Machine Learning Base Images Repository](https://github.com/Azure/AzureML-Containers) * [Data Science Virtual Machine release notes](./data-science-virtual-machine/release-notes.md) * [AzureML Python SDK Release Notes](./azure-machine-learning-release-notes.md)
-* [Machine learning enterprise security](/azure/cloud-adoption-framework/ready/azure-best-practices/ai-machine-learning-enterprise-security)
+* [Machine learning enterprise security](/azure/cloud-adoption-framework/ready/azure-best-practices/ai-machine-learning-enterprise-security)
machine-learning Concept Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-workspace.md
description: The workspace is the top-level resource for Azure Machine Learning.
+
The diagram shows the following components of a workspace:
+ A workspace can contain [Azure Machine Learning compute instances](concept-compute-instance.md), cloud resources configured with the Python environment necessary to run Azure Machine Learning. + [User roles](how-to-assign-roles.md) enable you to share your workspace with other users, teams, or projects.
-+ [Compute targets](concept-azure-machine-learning-architecture.md#compute-targets) are used to run your experiments.
++ [Compute targets](v1/concept-azure-machine-learning-architecture.md#compute-targets) are used to run your experiments. + When you create the workspace, [associated resources](#resources) are also created for you.
-+ [Experiments](concept-azure-machine-learning-architecture.md#experiments) are training runs you use to build your models.
-+ [Pipelines](concept-azure-machine-learning-architecture.md#ml-pipelines) are reusable workflows for training and retraining your model.
-+ [Datasets](concept-azure-machine-learning-architecture.md#datasets-and-datastores) aid in management of the data you use for model training and pipeline creation.
++ [Experiments](v1/concept-azure-machine-learning-architecture.md#experiments) are training runs you use to build your models. ++ [Pipelines](v1/concept-azure-machine-learning-architecture.md#ml-pipelines) are reusable workflows for training and retraining your model.++ [Datasets](v1/concept-azure-machine-learning-architecture.md#datasets-and-datastores) aid in management of the data you use for model training and pipeline creation. + Once you have a model you want to deploy, you create a registered model.
-+ Use the registered model and a scoring script to create a [deployment endpoint](concept-azure-machine-learning-architecture.md#endpoints).
++ Use the registered model and a scoring script to create a [deployment endpoint](v1/concept-azure-machine-learning-architecture.md#endpoints). ## Tools for workspace interaction
You can interact with your workspace in the following ways:
+ [Azure Machine Learning studio ](https://ml.azure.com) + [Azure Machine Learning designer](concept-designer.md) + In any Python environment with the [Azure Machine Learning SDK for Python](/python/api/overview/azure/ml/intro).
-+ On the command line using the Azure Machine Learning [CLI extension](./reference-azure-machine-learning-cli.md)
++ On the command line using the Azure Machine Learning [CLI extension](how-to-configure-cli.md) + [Azure Machine Learning VS Code Extension](how-to-manage-resources-vscode.md#workspaces)
There are multiple ways to create a workspace:
* Use the [Azure portal](how-to-manage-workspace.md?tabs=azure-portal#create-a-workspace) for a point-and-click interface to walk you through each step. * Use the [Azure Machine Learning SDK for Python](how-to-manage-workspace.md?tabs=python#create-a-workspace) to create a workspace on the fly from Python scripts or Jupyter notebooks (a minimal sketch follows this list).
-* Use an [Azure Resource Manager template](how-to-create-workspace-template.md) or the [Azure Machine Learning CLI](reference-azure-machine-learning-cli.md) when you need to automate or customize the creation with corporate security standards.
+* Use an [Azure Resource Manager template](how-to-create-workspace-template.md) or the [Azure Machine Learning CLI](how-to-configure-cli.md) when you need to automate or customize the creation with corporate security standards.
* If you work in Visual Studio Code, use the [VS Code extension](how-to-manage-resources-vscode.md#create-a-workspace). > [!NOTE]
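For instance, a minimal sketch of creating a workspace with the Python SDK (v1); the names and region are placeholders.

```python
from azureml.core import Workspace

ws = Workspace.create(name="my-workspace",
                      subscription_id="<subscription-id>",
                      resource_group="my-resource-group",
                      create_resource_group=True,
                      location="eastus2")

# Save a config.json so later scripts can reconnect with Workspace.from_config().
ws.write_config()
```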
machine-learning How To Track Experiments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/how-to-track-experiments.md
description: Learn how to track and log experiments from the Data Science Virtual Machine with Azure Machine Learning and/or MLFlow. -+
You should see that the deployment state goes from __transitioning__ to __health
You can test the endpoint using [Postman](https://www.postman.com/), or you can use the AzureML SDK: + ```python from azureml.core import Webservice import json
machine-learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/release-notes.md
Azure portal users will always find the latest image available for provisioning
See the [list of known issues](reference-known-issues.md) to learn about known bugs and workarounds.
+## May 17, 2022
+[Data Science VM ΓÇô Ubuntu 20.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-2004?tab=Overview)
+
+Version `22.05.11`
+
+Main changes:
+
+- Upgraded `log4j(v2)` to version `2.17.2`
+ ## April 29, 2022 [Data Science VM ΓÇô Ubuntu 18.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-1804?tab=overview) and [Data Science VM ΓÇô Ubuntu 20.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-2004?tab=Overview)
machine-learning How To Access Azureml Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-azureml-behind-firewall.md
Last updated 03/04/2022-+ ms.devlang: azurecli
In this article, learn about the network communication requirements when securin
> * [Virtual network overview](how-to-network-security-overview.md) > * [Secure the workspace resources](how-to-secure-workspace-vnet.md) > * [Secure the training environment](how-to-secure-training-vnet.md)
-> * [Secure the inference environment](how-to-secure-inferencing-vnet.md)
+> * For securing inference, see the following documents:
+> * If using CLI v1 or SDK v1 - [Secure inference environment](how-to-secure-inferencing-vnet.md)
+> * If using CLI v2 or SDK v2 - [Network isolation for managed online endpoints](how-to-secure-online-endpoint.md)
> * [Enable studio functionality](how-to-enable-studio-virtual-network.md) > * [Use custom DNS](how-to-custom-dns.md)
These rule collections are described in more detail in [What are some Azure Fire
For more information on configuring application rules, see [Deploy and configure Azure Firewall](../firewall/tutorial-firewall-deploy-portal.md#configure-an-application-rule).
-1. To restrict outbound traffic for models deployed to Azure Kubernetes Service (AKS), see the [Restrict egress traffic in Azure Kubernetes Service](../aks/limit-egress-traffic.md) and [Deploy ML models to Azure Kubernetes Service](how-to-deploy-azure-kubernetes-service.md#connectivity) articles.
+1. To restrict outbound traffic for models deployed to Azure Kubernetes Service (AKS), see the [Restrict egress traffic in Azure Kubernetes Service](../aks/limit-egress-traffic.md) and [Deploy ML models to Azure Kubernetes Service](v1/how-to-deploy-azure-kubernetes-service.md#connectivity) articles.
### Azure Kubernetes Services
When using Azure Kubernetes Service with Azure Machine Learning, the following t
* General inbound/outbound requirements for AKS as described in the [Restrict egress traffic in Azure Kubernetes Service](../aks/limit-egress-traffic.md) article. * __Outbound__ to mcr.microsoft.com.
-* When deploying a model to an AKS cluster, use the guidance in the [Deploy ML models to Azure Kubernetes Service](how-to-deploy-azure-kubernetes-service.md#connectivity) article.
+* When deploying a model to an AKS cluster, use the guidance in the [Deploy ML models to Azure Kubernetes Service](v1/how-to-deploy-azure-kubernetes-service.md#connectivity) article.
## Other firewalls
This article is part of a series on securing an Azure Machine Learning workflow.
* [Virtual network overview](how-to-network-security-overview.md) * [Secure the workspace resources](how-to-secure-workspace-vnet.md) * [Secure the training environment](how-to-secure-training-vnet.md)
-* [Secure the inference environment](how-to-secure-inferencing-vnet.md)
+* For securing inference, see the following documents:
+ * If using CLI v1 or SDK v1 - [Secure inference environment](how-to-secure-inferencing-vnet.md)
+ * If using CLI v2 or SDK v2 - [Network isolation for managed online endpoints](how-to-secure-online-endpoint.md)
* [Enable studio functionality](how-to-enable-studio-virtual-network.md) * [Use custom DNS](how-to-custom-dns.md)
machine-learning How To Access Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-data.md
- Title: Connect to storage services on Azure-
-description: Learn how to use datastores to securely connect to Azure storage services during training with Azure Machine Learning
------- Previously updated : 01/28/2022---
-# Customer intent: As an experienced Python developer, I need to make my data in Azure storage available to my remote compute to train my machine learning models.
--
-# Connect to storage services on Azure with datastores
-
-In this article, learn how to connect to data storage services on Azure with Azure Machine Learning datastores and the [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro).
-
-Datastores securely connect to your storage service on Azure without putting your authentication credentials and the integrity of your original data source at risk. They store connection information, such as your subscription ID and token authorization, in the [Key Vault](https://azure.microsoft.com/services/key-vault/) associated with the workspace, so you can securely access your storage without having to hard-code credentials in your scripts. You can create datastores that connect to [these Azure storage solutions](#matrix).
-
-To understand where datastores fit in Azure Machine Learning's overall data access workflow, see the [Securely access data](concept-data.md#data-workflow) article.
-
-For a low code experience, see how to use the [Azure Machine Learning studio to create and register datastores](how-to-connect-data-ui.md#create-datastores).
-
->[!TIP]
-> This article assumes you want to connect to your storage service with credential-based authentication credentials, like a service principal or a shared access signature (SAS) token. Keep in mind, if credentials are registered with datastores, all users with workspace *Reader* role are able to retrieve these credentials. [Learn more about workspace *Reader* role.](how-to-assign-roles.md#default-roles) <br><br>If this is a concern, learn how to [Connect to storage services with identity based access](how-to-identity-based-data-access.md).
-
-## Prerequisites
-- An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
-
-- An Azure storage account with a [supported storage type](#matrix).
-
-- The [Azure Machine Learning SDK for Python](/python/api/overview/azure/ml/intro).
-
-- An Azure Machine Learning workspace.
-
- Either [create an Azure Machine Learning workspace](how-to-manage-workspace.md) or use an existing one via the Python SDK.
-
- Import the `Workspace` and `Datastore` class, and load your subscription information from the file `config.json` using the function `from_config()`. This looks for the JSON file in the current directory by default, but you can also specify a path parameter to point to the file using `from_config(path="your/file/path")`.
-
- ```Python
- import os  # used by the os.getenv calls in the registration examples later in this article
- import azureml.core
- from azureml.core import Workspace, Datastore
-
- ws = Workspace.from_config()
- ```
-
- When you create a workspace, an Azure blob container and an Azure file share are automatically registered as datastores to the workspace. They're named `workspaceblobstore` and `workspacefilestore`, respectively. The `workspaceblobstore` is used to store workspace artifacts and your machine learning experiment logs. It's also set as the **default datastore** and can't be deleted from the workspace. The `workspacefilestore` is used to store notebooks and R scripts authorized via [compute instance](./concept-compute-instance.md#accessing-files).
-
- > [!NOTE]
- > Azure Machine Learning designer will create a datastore named **azureml_globaldatasets** automatically when you open a sample in the designer homepage. This datastore only contains sample datasets. Please **do not** use this datastore for any confidential data access.
-
-<a name="matrix"></a>
-
-## Supported data storage service types
-
-Datastores currently support storing connection information to the storage services listed in the following matrix.
-
-> [!TIP]
-> **For unsupported storage solutions**, and to save data egress cost during ML experiments, [move your data](#move) to a supported Azure storage solution.
-
-| Storage&nbsp;type | Authentication&nbsp;type | [Azure&nbsp;Machine&nbsp;Learning studio](https://ml.azure.com/) | [Azure&nbsp;Machine&nbsp;Learning&nbsp; Python SDK](/python/api/overview/azure/ml/intro) | [Azure&nbsp;Machine&nbsp;Learning CLI](reference-azure-machine-learning-cli.md) | [Azure&nbsp;Machine&nbsp;Learning&nbsp; REST API](/rest/api/azureml/) | VS Code
-|---|---|---|---|---|---|---|
-[Azure&nbsp;Blob&nbsp;Storage](../storage/blobs/storage-blobs-overview.md)| Account key <br> SAS token | ✓ | ✓ | ✓ |✓ |✓
-[Azure&nbsp;File&nbsp;Share](../storage/files/storage-files-introduction.md)| Account key <br> SAS token | ✓ | ✓ | ✓ |✓|✓
-[Azure&nbsp;Data Lake&nbsp;Storage Gen&nbsp;1](../data-lake-store/index.yml)| Service principal| ✓ | ✓ | ✓ |✓|
-[Azure&nbsp;Data Lake&nbsp;Storage Gen&nbsp;2](../storage/blobs/data-lake-storage-introduction.md)| Service principal| ✓ | ✓ | ✓ |✓|
-[Azure&nbsp;SQL&nbsp;Database](/azure/azure-sql/database/sql-database-paas-overview)| SQL authentication <br>Service principal| ✓ | ✓ | ✓ |✓|
-[Azure&nbsp;PostgreSQL](../postgresql/overview.md) | SQL authentication| ✓ | ✓ | ✓ |✓|
-[Azure&nbsp;Database&nbsp;for&nbsp;MySQL](../mysql/overview.md) | SQL authentication| | ✓* | ✓* |✓*|
-[Databricks&nbsp;File&nbsp;System](/azure/databricks/data/databricks-file-system)| No authentication | | ✓** | ✓** |✓** |
-
-\* MySQL is only supported for pipeline [DataTransferStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.datatransferstep)<br />
-\*\* Databricks is only supported for pipeline [DatabricksStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.databricks_step.databricksstep)
--
-### Storage guidance
-
-We recommend creating a datastore for an [Azure Blob container](../storage/blobs/storage-blobs-introduction.md). Both standard and premium storage are available for blobs. Although premium storage is more expensive, its faster throughput speeds might improve the speed of your training runs, particularly if you train against a large dataset. For information about the cost of storage accounts, see the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=machine-learning-service).
-
-[Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) is built on top of Azure Blob storage and designed for enterprise big data analytics. A fundamental part of Data Lake Storage Gen2 is the addition of a [hierarchical namespace](../storage/blobs/data-lake-storage-namespace.md) to Blob storage. The hierarchical namespace organizes objects/files into a hierarchy of directories for efficient data access.
-
-## Storage access and permissions
-
-To ensure you securely connect to your Azure storage service, Azure Machine Learning requires that you have permission to access the corresponding data storage container. This access depends on the authentication credentials used to register the datastore.
-
-> [!NOTE]
-> This guidance also applies to [datastores created with identity-based data access](how-to-identity-based-data-access.md).
-
-### Virtual network
-
-Azure Machine Learning requires additional configuration steps to communicate with a storage account that is behind a firewall or within a virtual network. If your storage account is behind a firewall, you can [add your client's IP address to an allowlist](../storage/common/storage-network-security.md#managing-ip-network-rules) via the Azure portal.
-
-Azure Machine Learning can receive requests from clients outside of the virtual network. To ensure that the entity requesting data from the service is safe and to enable data being displayed in your workspace, [use a private endpoint with your workspace](how-to-configure-private-link.md).
-
-**For Python SDK users**, to access your data via your training script on a compute target, the compute target needs to be inside the same virtual network and subnet of the storage. You can [use a compute cluster in the same virtual network](how-to-secure-training-vnet.md?tabs=azure-studio%2Cipaddress#compute-cluster) or [use a compute instance in the same virtual network](how-to-secure-training-vnet.md?tabs=azure-studio%2Cipaddress#compute-instance).
-
-**For Azure Machine Learning studio users**, several features rely on the ability to read data from a dataset, such as dataset previews, profiles, and automated machine learning. For these features to work with storage behind virtual networks, use a [workspace managed identity in the studio](how-to-enable-studio-virtual-network.md) to allow Azure Machine Learning to access the storage account from outside the virtual network.
-
-> [!NOTE]
-> If your data storage is an Azure SQL Database behind a virtual network, be sure to set *Deny public access* to **No** via the [Azure portal](https://portal.azure.com/) to allow Azure Machine Learning to access the storage account.
-
-### Access validation
-
-> [!WARNING]
-> Cross tenant access to storage accounts is not supported. If cross tenant access is needed for your scenario, please reach out to the AzureML Data Support team alias at amldatasupport@microsoft.com for assistance with a custom code solution.
-
-**As part of the initial datastore creation and registration process**, Azure Machine Learning automatically validates that the underlying storage service exists and the user provided principal (username, service principal, or SAS token) has access to the specified storage.
-
-**After datastore creation**, this validation is only performed for methods that require access to the underlying storage container, **not** each time datastore objects are retrieved. For example, validation happens if you want to download files from your datastore; but if you just want to change your default datastore, then validation does not happen.
-
-To authenticate your access to the underlying storage service, you can provide either your account key, shared access signatures (SAS) tokens, or service principal in the corresponding `register_azure_*()` method of the datastore type you want to create. The [storage type matrix](#matrix) lists the supported authentication types that correspond to each datastore type.
-
-You can find account key, SAS token, and service principal information on your [Azure portal](https://portal.azure.com).
-
-* If you plan to use an account key or SAS token for authentication, select **Storage Accounts** on the left pane, and choose the storage account that you want to register.
- * The **Overview** page provides information such as the account name, container, and file share name.
- 1. For account keys, go to **Access keys** on the **Settings** pane.
- 1. For SAS tokens, go to **Shared access signatures** on the **Settings** pane.
-
-* If you plan to use a service principal for authentication, go to your **App registrations** and select which app you want to use.
- * Its corresponding **Overview** page will contain required information like tenant ID and client ID.
-
-> [!IMPORTANT]
-> If you need to change your access keys for an Azure Storage account (account key or SAS token), be sure to sync the new credentials with your workspace and the datastores connected to it. Learn how to [sync your updated credentials](how-to-change-storage-access-key.md).
-
-### Permissions
-
-For Azure blob container and Azure Data Lake Gen 2 storage, make sure your authentication credentials have **Storage Blob Data Reader** access. Learn more about [Storage Blob Data Reader](../role-based-access-control/built-in-roles.md#storage-blob-data-reader). An account SAS token defaults to no permissions.
-* For data **read access**, your authentication credentials must have a minimum of list and read permissions for containers and objects.
-
-* For data **write access**, write and add permissions also are required.
-
-<a name="python"></a>
-
-## Create and register datastores
-
-When you register an Azure storage solution as a datastore, you automatically create and register that datastore to a specific workspace. Review the [storage access & permissions](#storage-access-and-permissions) section for guidance on virtual network scenarios, and where to find required authentication credentials.
-
-Within this section are examples for how to create and register a datastore via the Python SDK for the following storage types. The parameters provided in these examples are the **required parameters** to create and register a datastore.
-
-* [Azure blob container](#azure-blob-container)
-* [Azure file share](#azure-file-share)
-* [Azure Data Lake Storage Generation 2](#azure-data-lake-storage-generation-2)
-
- To create datastores for other supported storage services, see the [reference documentation for the applicable `register_azure_*` methods](/python/api/azureml-core/azureml.core.datastore.datastore#methods).
-
-If you prefer a low code experience, see [Connect to data with Azure Machine Learning studio](how-to-connect-data-ui.md).
->[!IMPORTANT]
-> If you unregister and re-register a datastore with the same name, and it fails, the Azure Key Vault for your workspace may not have soft-delete enabled. By default, soft-delete is enabled for the key vault instance created by your workspace, but it may not be enabled if you used an existing key vault or have a workspace created prior to October 2020. For information on how to enable soft-delete, see [Turn on Soft Delete for an existing key vault](../key-vault/general/soft-delete-change.md#turn-on-soft-delete-for-an-existing-key-vault).
--
-> [!NOTE]
-> Datastore name should only consist of lowercase letters, digits and underscores.
-
-### Azure blob container
-
-To register an Azure blob container as a datastore, use [`register_azure_blob_container()`](/python/api/azureml-core/azureml.core.datastore%28class%29#register-azure-blob-container-workspace--datastore-name--container-name--account-name--sas-token-none--account-key-none--protocol-none--endpoint-none--overwrite-false--create-if-not-exists-false--skip-validation-false--blob-cache-timeout-none--grant-workspace-access-false--subscription-id-none--resource-group-none-).
-
-The following code creates and registers the `blob_datastore_name` datastore to the `ws` workspace. This datastore accesses the `my-container-name` blob container on the `my-account-name` storage account, by using the provided account access key. Review the [storage access & permissions](#storage-access-and-permissions) section for guidance on virtual network scenarios, and where to find required authentication credentials.
-
-```Python
-blob_datastore_name='azblobsdk' # Name of the datastore to workspace
-container_name=os.getenv("BLOB_CONTAINER", "<my-container-name>") # Name of Azure blob container
-account_name=os.getenv("BLOB_ACCOUNTNAME", "<my-account-name>") # Storage account name
-account_key=os.getenv("BLOB_ACCOUNT_KEY", "<my-account-key>") # Storage account access key
-
-blob_datastore = Datastore.register_azure_blob_container(workspace=ws,
- datastore_name=blob_datastore_name,
- container_name=container_name,
- account_name=account_name,
- account_key=account_key)
-```
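If you authenticate with a SAS token instead of an account key, the call is similar. A minimal sketch, reusing `ws`, `container_name`, and `account_name` from the example above and assuming a SAS token with list/read permissions (the token value and datastore name are placeholders):

```python
import os

from azureml.core import Datastore

sas_token = os.getenv("BLOB_SAS_TOKEN", "<my-sas-token>")  # SAS token for the container (placeholder)

blob_datastore_sas = Datastore.register_azure_blob_container(workspace=ws,
                                                             datastore_name='azblobsdk_sas',
                                                             container_name=container_name,
                                                             account_name=account_name,
                                                             sas_token=sas_token)
```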
-
-### Azure file share
-
-To register an Azure file share as a datastore, use [`register_azure_file_share()`](/python/api/azureml-core/azureml.core.datastore%28class%29#register-azure-file-share-workspace--datastore-name--file-share-name--account-name--sas-token-none--account-key-none--protocol-none--endpoint-none--overwrite-false--create-if-not-exists-false--skip-validation-false-).
-
-The following code creates and registers the `file_datastore_name` datastore to the `ws` workspace. This datastore accesses the `my-fileshare-name` file share on the `my-account-name` storage account, by using the provided account access key. Review the [storage access & permissions](#storage-access-and-permissions) section for guidance on virtual network scenarios, and where to find required authentication credentials.
-
-```Python
-file_datastore_name='azfilesharesdk' # Name of the datastore to workspace
-file_share_name=os.getenv("FILE_SHARE_CONTAINER", "<my-fileshare-name>") # Name of Azure file share container
-account_name=os.getenv("FILE_SHARE_ACCOUNTNAME", "<my-account-name>") # Storage account name
-account_key=os.getenv("FILE_SHARE_ACCOUNT_KEY", "<my-account-key>") # Storage account access key
-
-file_datastore = Datastore.register_azure_file_share(workspace=ws,
- datastore_name=file_datastore_name,
- file_share_name=file_share_name,
- account_name=account_name,
- account_key=account_key)
-```
-
-### Azure Data Lake Storage Generation 2
-
-For an Azure Data Lake Storage Generation 2 (ADLS Gen 2) datastore, use [register_azure_data_lake_gen2()](/python/api/azureml-core/azureml.core.datastore.datastore#register-azure-data-lake-gen2-workspace--datastore-name--filesystem--account-name--tenant-id--client-id--client-secret--resource-url-none--authority-url-none--protocol-none--endpoint-none--overwrite-false-) to register a credential datastore connected to an Azure DataLake Gen 2 storage with [service principal permissions](../active-directory/develop/howto-create-service-principal-portal.md).
-
-In order to utilize your service principal, you need to [register your application](../active-directory/develop/app-objects-and-service-principals.md) and grant the service principal data access via either Azure role-based access control (Azure RBAC) or access control lists (ACL). Learn more about [access control set up for ADLS Gen 2](../storage/blobs/data-lake-storage-access-control-model.md).
-
-The following code creates and registers the `adlsgen2_datastore_name` datastore to the `ws` workspace. This datastore accesses the file system `test` in the `account_name` storage account, by using the provided service principal credentials.
-Review the [storage access & permissions](#storage-access-and-permissions) section for guidance on virtual network scenarios, and where to find required authentication credentials.
-
-```python
-adlsgen2_datastore_name = 'adlsgen2datastore'
-
-subscription_id=os.getenv("ADL_SUBSCRIPTION", "<my_subscription_id>") # subscription id of ADLS account
-resource_group=os.getenv("ADL_RESOURCE_GROUP", "<my_resource_group>") # resource group of ADLS account
-
-account_name=os.getenv("ADLSGEN2_ACCOUNTNAME", "<my_account_name>") # ADLS Gen2 account name
-tenant_id=os.getenv("ADLSGEN2_TENANT", "<my_tenant_id>") # tenant id of service principal
-client_id=os.getenv("ADLSGEN2_CLIENTID", "<my_client_id>") # client id of service principal
-client_secret=os.getenv("ADLSGEN2_CLIENT_SECRET", "<my_client_secret>") # the secret of service principal
-
-adlsgen2_datastore = Datastore.register_azure_data_lake_gen2(workspace=ws,
- datastore_name=adlsgen2_datastore_name,
- account_name=account_name, # ADLS Gen2 account name
- filesystem='test', # ADLS Gen2 filesystem
- tenant_id=tenant_id, # tenant id of service principal
- client_id=client_id, # client id of service principal
- client_secret=client_secret) # the secret of service principal
-```
---
-## Create datastores with other Azure tools
-In addition to creating datastores with the Python SDK and the studio, you can also use Azure Resource Manager templates or the Azure Machine Learning VS Code extension.
-
-<a name="arm"></a>
-### Azure Resource Manager
-
-There are a number of templates at [https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices) that can be used to create datastores.
-
-For information on using these templates, see [Use an Azure Resource Manager template to create a workspace for Azure Machine Learning](how-to-create-workspace-template.md).
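As an illustration, deploying one of those quickstart templates with the Azure CLI might look like the following sketch; the resource group, template file, and parameter names are placeholders rather than values from a specific template:

```azurecli
# Deploy a datastore template into an existing resource group (placeholder names and parameters)
az deployment group create \
    --resource-group my-resource-group \
    --template-file azuredeploy.json \
    --parameters workspaceName=my-workspace datastoreName=my-datastore
```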
-
-### VS Code extension
-
-If you prefer to create and manage datastores using the Azure Machine Learning VS Code extension, visit the [VS Code resource management how-to guide](how-to-manage-resources-vscode.md#datastores) to learn more.
-<a name="train"></a>
-## Use data in your datastores
-
-After you create a datastore, [create an Azure Machine Learning dataset](how-to-create-register-datasets.md) to interact with your data. Datasets package your data into a lazily evaluated consumable object for machine learning tasks, like training.
-
-With datasets, you can [download or mount](how-to-train-with-datasets.md#mount-vs-download) files of any format from Azure storage services for model training on a compute target. [Learn more about how to train ML models with datasets](how-to-train-with-datasets.md).
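For example, a minimal sketch that wraps files from a registered datastore in a `FileDataset` and downloads them locally; the datastore name and folder path are placeholders:

```python
from azureml.core import Dataset, Datastore, Workspace

ws = Workspace.from_config()
datastore = Datastore.get(ws, datastore_name='azblobsdk')

# Reference every file under a (placeholder) folder on the datastore
dataset = Dataset.File.from_files(path=(datastore, 'training-data/**'))

# Download the files for local experimentation, or call as_mount() for use on a remote compute target
dataset.download(target_path='./data', overwrite=True)
```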
-
-<a name="get"></a>
-
-## Get datastores from your workspace
-
-To get a specific datastore registered in the current workspace, use the [`get()`](/python/api/azureml-core/azureml.core.datastore%28class%29#get-workspace--datastore-name-) static method on the `Datastore` class:
-
-```Python
-# Get a named datastore from the current workspace
-datastore = Datastore.get(ws, datastore_name='your datastore name')
-```
-To get the list of datastores registered with a given workspace, you can use the [`datastores`](/python/api/azureml-core/azureml.core.workspace%28class%29#datastores) property on a workspace object:
-
-```Python
-# List all datastores registered in the current workspace
-datastores = ws.datastores
-for name, datastore in datastores.items():
- print(name, datastore.datastore_type)
-```
-
-To get the workspace's default datastore, use this line:
-
-```Python
-datastore = ws.get_default_datastore()
-```
-You can also change the default datastore with the following code. This ability is only supported via the SDK.
-
-```Python
- ws.set_default_datastore(new_default_datastore)
-```
-
-## Access data during scoring
-
-Azure Machine Learning provides several ways to use your models for scoring. Some of these methods don't provide access to datastores. Use the following table to understand which methods allow you to access datastores during scoring:
-
-| Method | Datastore access | Description |
-| -- | :--: | -- |
-| [Batch prediction](./tutorial-pipeline-batch-scoring-classification.md) | ✔ | Make predictions on large quantities of data asynchronously. |
-| [Web service](how-to-deploy-and-where.md) | &nbsp; | Deploy models as a web service. |
-
-For situations where the SDK doesn't provide access to datastores, you might be able to create custom code by using the relevant Azure SDK to access the data. For example, the [Azure Storage SDK for Python](https://github.com/Azure/azure-storage-python) is a client library that you can use to access data stored in blobs or files.
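For instance, a minimal sketch that reads a blob with the current `azure-storage-blob` package (a v12-style API rather than the legacy library linked above); the connection string, container, and blob names are placeholders:

```python
from azure.storage.blob import BlobServiceClient

# Connect with a storage account connection string (placeholder value)
blob_service = BlobServiceClient.from_connection_string("<my-connection-string>")

# Download a single blob's contents into memory
blob_client = blob_service.get_blob_client(container="<my-container>", blob="data/scoring-input.csv")
data = blob_client.download_blob().readall()
print(f"Downloaded {len(data)} bytes")
```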
-
-<a name="move"></a>
-
-## Move data to supported Azure storage solutions
-
-Azure Machine Learning supports accessing data from Azure Blob storage, Azure Files, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, Azure SQL Database, and Azure Database for PostgreSQL. If you're using unsupported storage, we recommend that you move your data to supported Azure storage solutions by using [Azure Data Factory and these steps](../data-factory/quickstart-create-data-factory-copy-data-tool.md). Moving data to supported storage can help you save data egress costs during machine learning experiments.
-
-Azure Data Factory provides efficient and resilient data transfer with more than 80 prebuilt connectors at no additional cost. These connectors include Azure data services, on-premises data sources, Amazon S3 and Redshift, and Google BigQuery.
-
-## Next steps
-
-* [Create an Azure machine learning dataset](how-to-create-register-datasets.md)
-* [Train a model](how-to-set-up-training-targets.md)
-* [Deploy a model](how-to-deploy-and-where.md)
machine-learning How To Access Resources From Endpoints Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-resources-from-endpoints-managed-identities.md
Previously updated : 03/31/2022 Last updated : 04/07/2022 ---
-# Customer intent: As a data scientist, I want to securely access Azure resources for my machine learning model deployment with an online endpoint and managed identity.
+
+#Customer intent: As a data scientist, I want to securely access Azure resources for my machine learning model deployment with an online endpoint and managed identity.
-# Access Azure resources from an online endpoint (preview) with a managed identity
+# Access Azure resources from an online endpoint with a managed identity
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]+ Learn how to access Azure resources from your scoring script with an online endpoint and either a system-assigned managed identity or a user-assigned managed identity.
-Managed endpoints (preview) allow Azure Machine Learning to manage the burden of provisioning your compute resource and deploying your machine learning model. Typically your model needs to access Azure resources such as the Azure Container Registry or your blob storage for inferencing; with a managed identity you can access these resources without needing to manage credentials in your code. [Learn more about managed identities](../active-directory/managed-identities-azure-resources/overview.md).
+Managed endpoints allow Azure Machine Learning to manage the burden of provisioning your compute resource and deploying your machine learning model. Typically your model needs to access Azure resources such as the Azure Container Registry or your blob storage for inferencing; with a managed identity you can access these resources without needing to manage credentials in your code. [Learn more about managed identities](../active-directory/managed-identities-azure-resources/overview.md).
This guide assumes you don't have a managed identity, a storage account or an online endpoint. If you already have these components, skip to the [give access permission to the managed identity](#give-access-permission-to-the-managed-identity) section. - ## Prerequisites * To use Azure Machine Learning, you must have an Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
-* Install and configure the Azure CLI and ML (v2) extension. For more information, see [Install, set up, and use the 2.0 CLI (preview)](how-to-configure-cli.md).
+* Install and configure the Azure CLI and ML (v2) extension. For more information, see [Install, set up, and use the 2.0 CLI](how-to-configure-cli.md).
* An Azure Resource group, in which you (or the service principal you use) need to have `User Access Administrator` and `Contributor` access. You'll have such a resource group if you configured your ML extension per the above article.
Check the status of the endpoint with the following.
::: code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-access-resource-sai.sh" id="check_endpoint_Status" :::
-If you encounter any issues, see [Troubleshooting online endpoints deployment and scoring (preview)](how-to-troubleshoot-managed-online-endpoints.md).
+If you encounter any issues, see [Troubleshooting online endpoints deployment and scoring](how-to-troubleshoot-managed-online-endpoints.md).
# [User-assigned managed identity](#tab/user-identity)
Check the status of the endpoint with the following.
::: code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-access-resource-uai.sh" id="check_endpoint_Status" :::
-If you encounter any issues, see [Troubleshooting online endpoints deployment and scoring (preview)](how-to-troubleshoot-managed-online-endpoints.md).
+If you encounter any issues, see [Troubleshooting online endpoints deployment and scoring](how-to-troubleshoot-managed-online-endpoints.md).
If you don't plan to continue using the deployed online endpoint and storage, de
## Next steps
-* [Deploy and score a machine learning model by using a online endpoint (preview)](how-to-deploy-managed-online-endpoints.md).
-* For more on deployment, see [Safe rollout for online endpoints (preview)](how-to-safely-rollout-managed-endpoints.md).
+* [Deploy and score a machine learning model by using an online endpoint](how-to-deploy-managed-online-endpoints.md).
+* For more on deployment, see [Safe rollout for online endpoints](how-to-safely-rollout-managed-endpoints.md).
* For more information on using the CLI, see [Use the CLI extension for Azure Machine Learning](reference-azure-machine-learning-cli.md).
-* To see which compute resources you can use, see [Managed online endpoints SKU list (preview)](reference-managed-online-endpoints-vm-sku-list.md).
-* For more on costs, see [View costs for an Azure Machine Learning managed online endpoint (preview)](how-to-view-online-endpoints-costs.md).
-* For information on monitoring endpoints, see [Monitor managed online endpoints (preview)](how-to-monitor-online-endpoints.md).
-* For limitations for managed endpoints, see [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints-preview).
+* To see which compute resources you can use, see [Managed online endpoints SKU list](reference-managed-online-endpoints-vm-sku-list.md).
+* For more on costs, see [View costs for an Azure Machine Learning managed online endpoint](how-to-view-online-endpoints-costs.md).
+* For information on monitoring endpoints, see [Monitor managed online endpoints](how-to-monitor-online-endpoints.md).
+* For limitations for managed endpoints, see [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints).
machine-learning How To Access Terminal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-terminal.md
+ Last updated 02/05/2021 #Customer intent: As a data scientist, I want to use Git, install packages and add kernels to a compute instance in my workspace in Azure Machine Learning studio.
To access the terminal:
In addition to the steps above, you can also access the terminal from:
-* RStudio: Select the **Terminal** tab on top left.
+* RStudio (see *Create and manage an Azure Machine Learning compute instance* for how to add RStudio): Select the **Terminal** tab on top left.
* Jupyter Lab: Select the **Terminal** tile under the **Other** heading in the Launcher tab. * Jupyter: Select **New>Terminal** on top right in the Files tab. * SSH to the machine, if you enabled SSH access when the compute instance was created.
Learn more about [cloning Git repositories into your workspace file system](conc
Or you can install packages directly in Jupyter Notebook or RStudio:
-* RStudio Use the **Packages** tab on the bottom right, or the **Console** tab on the top left.
+* RStudio (see *Create and manage an Azure Machine Learning compute instance* for how to add RStudio): Use the **Packages** tab on the bottom right, or the **Console** tab on the top left.
* Python: Add install code and execute in a Jupyter Notebook cell. > [!NOTE]
machine-learning How To Administrate Data Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-administrate-data-authentication.md
+
+ Title: How to administrate data authentication
+
+description: Learn how to manage data access and how to authenticate in Azure Machine Learning
+++++++ Last updated : 05/24/2022+
+# Customer intent: As an administrator, I need to administrate data access and set up authentication method for data scientists.
++
+# How to authenticate data access
+Learn how to manage data access and how to authenticate in Azure Machine Learning.
+
+> [!IMPORTANT]
+> The information in this article is intended for Azure administrators who are creating the infrastructure required for an Azure Machine Learning solution.
+
+In general, data access from studio involves the following checks:
+
+* Who is accessing?
+ - There are multiple different types of authentication depending on the storage type. For example, account key, token, service principal, managed identity, and user identity.
+ - If authentication is made using a user identity, then it's important to know *which* user is trying to access storage. Learn more about [identity-based data access](how-to-identity-based-data-access.md).
+* Do they have permission?
+ - Are the credentials correct? If so, does the service principal, managed identity, etc., have the necessary permissions on the storage? Permissions are granted using Azure role-based access controls (Azure RBAC).
+ - [Reader](../role-based-access-control/built-in-roles.md#reader) of the storage account reads metadata of the storage.
+ - [Storage Blob Data Reader](../role-based-access-control/built-in-roles.md#storage-blob-data-reader) reads data within a blob container.
+ - [Contributor](../role-based-access-control/built-in-roles.md#contributor) allows write access to a storage account.
+ - More roles may be required depending on the type of storage.
+* Where is access from?
+ - User: Is the client IP address in the VNet/subnet range?
+ - Workspace: Is the workspace public or does it have a private endpoint in a VNet/subnet?
+ - Storage: Does the storage allow public access, or does it restrict access through a service endpoint or a private endpoint?
+* What operation is being performed?
+ - Create, read, update, and delete (CRUD) operations on a data store/dataset are handled by Azure Machine Learning.
+ - Data Access calls (such as preview or schema) go to the underlying storage and need extra permissions.
+* Where is this operation being run; compute resources in your Azure subscription or resources hosted in a Microsoft subscription?
+ - All calls to dataset and datastore services (except the "Generate Profile" option) use resources hosted in a __Microsoft subscription__ to run the operations.
+ - Jobs, including the "Generate Profile" option for datasets, run on a compute resource in __your subscription__, and access the data from there. So the compute identity needs permission to the storage rather than the identity of the user submitting the job.
+
+The following diagram shows the general flow of a data access call. In this example, a user is trying to make a data access call through a machine learning workspace, without using any compute resource.
++
+## Scenarios and identities
+
+The following table lists what identities should be used for specific scenarios:
+
+| Scenario | Use workspace</br>Managed Service Identity (MSI) | Identity to use |
+|--|--|--|
+| Access from UI | Yes | Workspace MSI |
+| Access from UI | No | User's Identity |
+| Access from Job | Yes/No | Compute MSI |
+| Access from Notebook | Yes/No | User's identity |
++
+Data access is complex and it's important to recognize that there are many pieces to it. For example, accessing data from Azure Machine Learning studio is different than using the SDK. When using the SDK on your local development environment, you're directly accessing data in the cloud. When using studio, you aren't always directly accessing the data store from your client. Studio relies on the workspace to access data on your behalf.
+
+> [!TIP]
+> If you need to access data from outside Azure Machine Learning, such as using Azure Storage Explorer, _user_ identity is probably what is used. Consult the documentation for the tool or service you are using for specific information. For more information on how Azure Machine Learning works with data, see [Identity-based data access to storage services on Azure](how-to-identity-based-data-access.md).
+
+## Azure Storage Account
+
+When using an Azure Storage Account from Azure Machine Learning studio, you must add the managed identity of the workspace to the following Azure RBAC roles for the storage account:
+
+* [Storage Blob Data Reader](../role-based-access-control/built-in-roles.md#storage-blob-data-reader)
+* If the storage account uses a private endpoint to connect to the VNet, you must grant the managed identity the [Reader](../role-based-access-control/built-in-roles.md#reader) role for the storage account private endpoint.
+
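For example, granting the first of those roles to the workspace's system-assigned managed identity with the Azure CLI might look like the following sketch; the principal ID and storage account resource ID are placeholders:

```azurecli
# Assign Storage Blob Data Reader on the storage account to the workspace managed identity (placeholder IDs)
az role assignment create \
    --assignee "<workspace-managed-identity-principal-id>" \
    --role "Storage Blob Data Reader" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
```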
+For more information, see [Use Azure Machine Learning studio in an Azure Virtual Network](how-to-enable-studio-virtual-network.md).
+
+See the following sections for information on limitations when using Azure Storage Account with your workspace in a VNet.
+
+### Secure communication with Azure Storage Account
+
+To secure communication between Azure Machine Learning and Azure Storage Accounts, configure storage to [Grant access to trusted Azure services](../storage/common/storage-network-security.md#grant-access-to-trusted-azure-services).
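A CLI sketch of that configuration, assuming you also want to deny other public traffic by default (account and resource group names are placeholders):

```azurecli
# Allow trusted Microsoft services through the storage firewall and deny other public access by default
az storage account update \
    --name mystorageaccount \
    --resource-group my-resource-group \
    --bypass AzureServices \
    --default-action Deny
```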
+
+### Azure Storage firewall
+
+When an Azure Storage account is behind a virtual network, the storage firewall can normally be used to allow your client to directly connect over the internet. However, when using studio it isn't your client that connects to the storage account; it's the Azure Machine Learning service that makes the request. The IP address of the service isn't documented and changes frequently. __Enabling the storage firewall will not allow studio to access the storage account in a VNet configuration__.
+
+### Azure Storage endpoint type
+
+When the workspace uses a private endpoint and the storage account is also in the VNet, there are extra validation requirements when using studio:
+
+* If the storage account uses a __service endpoint__, the workspace private endpoint and storage service endpoint must be in the same subnet of the VNet.
+* If the storage account uses a __private endpoint__, the workspace private endpoint and storage service endpoint must be in the same VNet. In this case, they can be in different subnets.
+
+## Azure Data Lake Storage Gen1
+
+When using Azure Data Lake Storage Gen1 as a datastore, you can only use POSIX-style access control lists. You can assign the workspace's managed identity access to resources just like any other security principal. For more information, see [Access control in Azure Data Lake Storage Gen1](../data-lake-store/data-lake-store-access-control.md).
+
+## Azure Data Lake Storage Gen2
+
+When using Azure Data Lake Storage Gen2 as a datastore, you can use both Azure RBAC and POSIX-style access control lists (ACLs) to control data access inside of a virtual network.
+
+**To use Azure RBAC**, follow the steps in the [Datastore: Azure Storage Account](how-to-enable-studio-virtual-network.md#datastore-azure-storage-account) section of the 'Use Azure Machine Learning studio in an Azure Virtual Network' article. Data Lake Storage Gen2 is based on Azure Storage, so the same steps apply when using Azure RBAC.
+
+**To use ACLs**, the managed identity of the workspace can be assigned access just like any other security principal. For more information, see [Access control lists on files and directories](../storage/blobs/data-lake-storage-access-control.md#access-control-lists-on-files-and-directories).
++
+## Next steps
+
+For information on enabling studio in a network, see [Use Azure Machine Learning studio in an Azure Virtual Network](how-to-enable-studio-virtual-network.md).
machine-learning How To Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-assign-roles.md
Last updated 03/23/2022--+
The following table is a summary of Azure Machine Learning activities and the pe
| Scoring against a deployed AKS endpoint | Not required | Not required | Owner, contributor, or custom role allowing: `"/workspaces/services/aks/score/action", "/workspaces/services/aks/listkeys/action"` (when you are not using Azure Active Directory auth) OR `"/workspaces/read"` (when you are using token auth) | | Accessing storage using interactive notebooks | Not required | Not required | Owner, contributor, or custom role allowing: `"/workspaces/computes/read", "/workspaces/notebooks/samples/read", "/workspaces/notebooks/storage/*", "/workspaces/listStorageAccountKeys/action", "/workspaces/listNotebookAccessToken/read"`| | Create new custom role | Owner, contributor, or custom role allowing `Microsoft.Authorization/roleDefinitions/write` | Not required | Owner, contributor, or custom role allowing: `/workspaces/computes/write` |
+| Create/manage online endpoints and deployments | Not required | Not required | Owner, contributor, or custom role allowing `Microsoft.MachineLearningServices/workspaces/onlineEndpoints/*` |
+| Retrieve authentication credentials for online endpoints | Not required | Not required | Owner, contributor, or custom role allowing `Microsoft.MachineLearningServices/workspaces/onlineEndpoints/token/action` and `Microsoft.MachineLearningServices/workspaces/onlineEndpoints/listkeys/action`. |
1: If you receive a failure when trying to create a workspace for the first time, make sure that your role allows `Microsoft.MachineLearningServices/register/action`. This action allows you to register the Azure Machine Learning resource provider with your Azure subscription.
Here are a few things to be aware of while you use Azure role-based access contr
- [Enterprise security overview](concept-enterprise-security.md) - [Virtual network isolation and privacy overview](how-to-network-security-overview.md) - [Tutorial: Train and deploy a model](tutorial-train-deploy-notebook.md)-- [Resource provider operations](../role-based-access-control/resource-provider-operations.md#microsoftmachinelearningservices)
+- [Resource provider operations](../role-based-access-control/resource-provider-operations.md#microsoftmachinelearningservices)
machine-learning How To Attach Arc Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-attach-arc-kubernetes.md
- Title: Azure Arc-enabled machine learning (preview)
-description: Configure Azure Kubernetes Service and Azure Arc-enabled Kubernetes clusters to train and inference machine learning models in Azure Machine Learning
----- Previously updated : 11/23/2021----
-# Configure Kubernetes clusters for machine learning (preview)
-
-Learn how to configure Azure Kubernetes Service (AKS) and Azure Arc-enabled Kubernetes clusters for training and inferencing machine learning workloads.
-
-## What is Azure Arc-enabled machine learning?
-
-Azure Arc enables you to run Azure services in any Kubernetes environment, whether it's on-premises, multicloud, or at the edge.
-
-Azure Arc-enabled machine learning lets you configure and use Azure Kubernetes Service or Azure Arc-enabled Kubernetes clusters to train, inference, and manage machine learning models in Azure Machine Learning.
-
-## Machine Learning on Azure Kubernetes Service
-
-To use Azure Kubernetes Service clusters for Azure Machine Learning training and inference workloads, you don't have to connect them to Azure Arc.
-
-Before deploying the Azure Machine Learning extension on Azure Kubernetes Service clusters, you have to:
--- Register the feature in your AKS cluster. For more information, see [Azure Kubernetes Service prerequisites](#aks-prerequisites).-
-To deploy the Azure Machine Learning extension on AKS clusters, see the [Deploy Azure Machine Learning extension](#deploy-azure-machine-learning-extension) section.
-
-## Prerequisites
-
-* An Azure subscription. If you don't have an Azure subscription [create a free account](https://azure.microsoft.com/free) before you begin.
-* Azure Arc-enabled Kubernetes cluster. For more information, see the [Connect an existing Kubernetes cluster to Azure Arc quickstart guide](../azure-arc/kubernetes/quickstart-connect-cluster.md).
-
- > [!NOTE]
- > For AKS clusters, connecting them to Azure Arc is **optional**.
-
-* Clusters running behind an outbound proxy server or firewall need additional network configurations. See [Configure inbound and outbound network traffic](how-to-access-azureml-behind-firewall.md#azure-arc-enabled-kubernetes-).
-* Fulfill [Azure Arc-enabled Kubernetes cluster extensions prerequisites](../azure-arc/kubernetes/extensions.md#prerequisites).
- * Azure CLI version >= 2.24.0
- * Azure CLI k8s-extension extension version >= 1.0.0
-
-* An Azure Machine Learning workspace. [Create a workspace](how-to-manage-workspace.md?tabs=python) before you begin if you don't have one already.
- * Azure Machine Learning Python SDK version >= 1.30
-* Log into Azure using the Azure CLI
-
- ```azurecli
- az login
- az account set --subscription <your-subscription-id>
- ```
-### Azure Kubernetes Service (AKS) <a id="aks-prerequisites"></a>
-
-For AKS clusters, connecting them to Azure Arc is **optional**.
-
-However, you have to register the feature in your cluster. Use the following commands to register the feature:
-
-```azurecli
-az feature register --namespace Microsoft.ContainerService -n AKS-ExtensionManager
-```
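Feature registration isn't instantaneous. A common follow-up, sketched here with standard Azure CLI commands (not specific to this article), is to poll the registration state and then refresh the resource provider:

```azurecli
# Check the registration state; wait until it reports "Registered"
az feature show --namespace Microsoft.ContainerService --name AKS-ExtensionManager --query properties.state -o tsv

# Propagate the change by re-registering the resource provider
az provider register --namespace Microsoft.ContainerService
```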
-> [!NOTE]
-> For more information, see [Deploy and manage cluster extensions for Azure Kubernetes Service (AKS)](../aks/cluster-extensions.md)
-
-### Azure RedHat OpenShift Service (ARO) and OpenShift Container Platform (OCP) only
-
-* An ARO or OCP Kubernetes cluster is up and running. For more information, see [Create ARO Kubernetes cluster](../openshift/tutorial-create-cluster.md) and [Create OCP Kubernetes cluster](https://docs.openshift.com/container-platform/4.6/installing/installing_platform_agnostic/installing-platform-agnostic.html)
-* Grant privileged access to AzureML service accounts.
-
- Run `oc edit scc privileged` and add the following
-
- * ```system:serviceaccount:azure-arc:azure-arc-kube-aad-proxy-sa```
- * ```system:serviceaccount:azureml:{EXTENSION NAME}-kube-state-metrics``` **(Note:** ```{EXTENSION NAME}``` **here must match with the extension name used in** ```az k8s-extension create --name``` **step)**
- * ```system:serviceaccount:azureml:cluster-status-reporter```
- * ```system:serviceaccount:azureml:prom-admission```
- * ```system:serviceaccount:azureml:default```
- * ```system:serviceaccount:azureml:prom-operator```
- * ```system:serviceaccount:azureml:csi-blob-node-sa```
- * ```system:serviceaccount:azureml:csi-blob-controller-sa```
- * ```system:serviceaccount:azureml:load-amlarc-selinux-policy-sa```
- * ```system:serviceaccount:azureml:azureml-fe```
- * ```system:serviceaccount:azureml:prom-prometheus```
- * ```system:serviceaccount:{KUBERNETES-COMPUTE-NAMESPACE}:default```
-
-> [!NOTE]
-> `{KUBERNETES-COMPUTE-NAMESPACE}` is the namespace of the Kubernetes compute cluster specified in compute attach, which defaults to `default`. Skip this setting if the namespace is `default`
-
-## Deploy Azure Machine Learning extension
-
-Azure Arc-enabled Kubernetes has a cluster extension functionality that enables you to install various agents including Azure Policy definitions, monitoring, machine learning, and many others. Azure Machine Learning requires the use of the *Microsoft.AzureML.Kubernetes* cluster extension to deploy the Azure Machine Learning agent on the Kubernetes cluster. Once the Azure Machine Learning extension is installed, you can attach the cluster to an Azure Machine Learning workspace and use it for the following scenarios:
-
-* [Training only](#training)
-* [Real-time inferencing only](#inferencing)
-* [Training and inferencing](#training-inferencing)
-
-> [!TIP]
-> Train only clusters also support batch inferencing as part of Azure Machine Learning Pipelines.
-
-Use the `k8s-extension` Azure CLI extension [`create`](/cli/azure/k8s-extension) command to deploy the Azure Machine Learning extension to your Azure Arc-enabled Kubernetes cluster.
-
-> [!IMPORTANT]
-> Set the `--cluster-type` parameter to `managedClusters` to deploy the Azure Machine Learning extension to AKS clusters.
-
-The following configuration settings are available to be used for different Azure Machine Learning extension deployment scenarios.
-
-You can use ```--config``` or ```--config-protected``` to specify a list of key-value pairs for Azure Machine Learning deployment configurations.
-
-> [!TIP]
-> Set the `openshift` parameter to `True` to deploy the Azure Machine Learning extension to ARO and OCP Kubernetes clusters.
-
-| Configuration Setting Key Name | Description | Training | Inference | Training and Inference |
- |--|--|--|--|--|
- |```enableTraining``` |```True``` or ```False```, default ```False```. **Must** be set to ```True``` for AzureML extension deployment with Machine Learning model training support. | **&check;**| N/A | **&check;** |
- | ```enableInference``` |```True``` or ```False```, default ```False```. **Must** be set to ```True``` for AzureML extension deployment with Machine Learning inference support. |N/A| **&check;** | **&check;** |
- | ```allowInsecureConnections``` |```True``` or ```False```, default False. This **must** be set to ```True``` for AzureML extension deployment with HTTP endpoints support for inference, when ```sslCertPemFile``` and ```sslKeyPemFile``` are not provided. |N/A| Optional | Optional |
- | ```privateEndpointNodeport``` |```True``` or ```False```, default ```False```. **Must** be set to ```True``` for AzureML deployment with Machine Learning inference private endpoints support using serviceType nodePort. | N/A| Optional | Optional |
- | ```privateEndpointILB``` |```True``` or ```False```, default ```False```. **Must** be set to ```True``` for AzureML extension deployment with Machine Learning inference private endpoints support using serviceType internal load balancer | N/A| Optional | Optional |
- |```sslSecret```| The Kubernetes secret under azureml namespace to store `cert.pem` (PEM-encoded SSL cert) and `key.pem` (PEM-encoded SSL key), required for AzureML extension deployment with HTTPS endpoint support for inference, when ``allowInsecureConnections`` is set to ```False```. Use this config or give static cert and key file path in configuration protected settings.|N/A| Optional | Optional |
 |```sslCname``` |An SSL CName to use if enabling SSL validation on the cluster. | N/A | Optional | Optional |
- | ```inferenceLoadBalancerHA``` |```True``` or ```False```, default ```True```. By default, AzureML extension will deploy three ingress controller replicas for high availability, which requires at least three workers in a cluster. Set this config to ```False``` if you have fewer than three workers and want to deploy AzureML extension for development and testing only, in this case it will deploy one ingress controller replica only. | N/A| Optional | Optional |
- |```openshift``` | ```True``` or ```False```, default ```False```. Set to ```True``` if you deploy AzureML extension on ARO or OCP cluster. The deployment process will automatically compile a policy package and load policy package on each node so AzureML services operation can function properly. | Optional| Optional | Optional |
- |```nodeSelector``` | Set the node selector so the extension components and the training/inference workloads will only be deployed to the nodes with all specified selectors. Usage: `nodeSelector.key=value`, support multiple selectors. Example: `nodeSelector.node-purpose=worker nodeSelector.node-region=eastus`| Optional| Optional | Optional |
 |```installNvidiaDevicePlugin``` | ```True``` or ```False```, default ```True```. The Nvidia Device Plugin is required for ML workloads on Nvidia GPU hardware. By default, the AzureML extension deployment installs the Nvidia Device Plugin regardless of whether the Kubernetes cluster has GPU hardware. You can set this configuration setting to ```False``` if the Nvidia Device Plugin installation isn't required (either it's already installed or there's no plan to use GPUs for workloads). | Optional |Optional |Optional |
- |```reuseExistingPromOp```|```True``` or ```False```, default ```False```. AzureML extension needs prometheus operator to manage prometheus. Set to ```True``` to reuse existing prometheus operator. | Optional| Optional | Optional |
- |```logAnalyticsWS``` |```True``` or ```False```, default ```False```. AzureML extension integrates with Azure LogAnalytics Workspace to provide log viewing and analysis capability through LogAnalytics Workspace. This setting must be explicitly set to ```True``` if customer wants to use this capability. LogAnalytics Workspace cost may apply. |Optional |Optional |Optional |
-
- |Configuration Protected Setting Key Name |Description |Training |Inference |Training and Inference
- |--|--|--|--|--|
- | ```sslCertPemFile```, ```sslKeyPemFile``` |Path to SSL certificate and key file (PEM-encoded), required for AzureML extension deployment with HTTPS endpoint support for inference, when ``allowInsecureConnections`` is set to ```False```. | N/A| Optional | Optional |
-
-> [!WARNING]
-> If the Nvidia Device Plugin is already installed in your cluster, reinstalling it may result in an extension installation error. Set `installNvidiaDevicePlugin` to `False` to prevent deployment errors.
->
-> By default, the deployed Kubernetes deployment resources are randomly deployed to **1 or more** nodes on the cluster, and daemonset resources are deployed to **all** nodes. If you want to restrict the extension deployment to specific nodes, use the `nodeSelector` configuration setting.
--
-### Deploy extension for training workloads <a id="training"></a>
-
-Use the following Azure CLI command to deploy the Azure Machine Learning extension and enable training workloads on your Kubernetes cluster:
-
-```azurecli
-az k8s-extension create --name arcml-extension --extension-type Microsoft.AzureML.Kubernetes --config enableTraining=True --cluster-type connectedClusters --cluster-name <your-connected-cluster-name> --resource-group <resource-group> --scope cluster --auto-upgrade-minor-version False
-```
-
-### Deploy extension for real-time inferencing workloads <a id="inferencing"></a>
-
-Depending on your network setup, Kubernetes distribution variant, and where your Kubernetes cluster is hosted (on-premises or the cloud), choose one of following options to deploy the Azure Machine Learning extension and enable inferencing workloads on your Kubernetes cluster.
-
-#### Public endpoints support with public load balancer
-
-* **HTTPS**
-
- ```azurecli
- az k8s-extension create --name arcml-extension --extension-type Microsoft.AzureML.Kubernetes --cluster-type connectedClusters --cluster-name <your-connected-cluster-name> --config enableInference=True sslCname=<cname> --config-protected sslCertPemFile=<path-to-the-SSL-cert-PEM-file> sslKeyPemFile=<path-to-the-SSL-key-PEM-file> --resource-group <resource-group> --scope cluster --auto-upgrade-minor-version False
- ```
-
-* **HTTP**
-
- > [!WARNING]
- > Public HTTP endpoints support with public load balancer is the least secure way of deploying the Azure Machine Learning extension for real-time inferencing scenarios and is therefore **NOT** recommended.
-
- ```azurecli
- az k8s-extension create --name arcml-extension --extension-type Microsoft.AzureML.Kubernetes --cluster-type connectedClusters --cluster-name <your-connected-cluster-name> --configuration-settings enableInference=True allowInsecureConnections=True --resource-group <resource-group> --scope cluster --auto-upgrade-minor-version False
- ```
-
-#### Private endpoints support with internal load balancer
-
-* **HTTPS**
-
- ```azurecli
- az k8s-extension create --name amlarc-compute --extension-type Microsoft.AzureML.Kubernetes --cluster-type connectedClusters --cluster-name <your-connected-cluster-name> --config enableInference=True privateEndpointILB=True sslCname=<cname> --config-protected sslCertPemFile=<path-to-the-SSL-cert-PEM-file> sslKeyPemFile=<path-to-the-SSL-key-PEM-file> --resource-group <resource-group> --scope cluster --auto-upgrade-minor-version False
- ```
-
-* **HTTP**
-
- ```azurecli
- az k8s-extension create --name arcml-extension --extension-type Microsoft.AzureML.Kubernetes --cluster-type connectedClusters --cluster-name <your-connected-cluster-name> --config enableInference=True privateEndpointILB=True allowInsecureConnections=True --resource-group <resource-group> --scope cluster --auto-upgrade-minor-version False
- ```
-
-#### Endpoints support with NodePort
-
-Using a NodePort gives you the freedom to set up your own load-balancing solution, to configure environments that are not fully supported by Kubernetes, or even to expose one or more nodes' IPs directly.
-
-When you deploy with the NodePort service, the scoring URL (or swagger URL) uses one of the node IPs (for example, ```http://<NodeIP>:<NodePort>/<scoring_path>```) and remains unchanged even if that node becomes unavailable. You can replace it with any other node IP.
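-
-For example, once a model endpoint has been deployed behind the NodePort service, a scoring request can be sent to any reachable node. The following is a minimal sketch; the node IP, node port, scoring path, endpoint key, and request payload are all placeholders you would take from your own deployed service:
-
-```bash
-# Placeholders: use any reachable node IP, the NodePort assigned to azureml-fe,
-# the scoring path of your deployed service, and its endpoint key.
-NODE_IP="<node-ip>"
-NODE_PORT="<node-port>"
-ENDPOINT_KEY="<endpoint-key>"
-
-# The request body shown here is only an example; use the input schema of your own model.
-curl -X POST "http://${NODE_IP}:${NODE_PORT}/<scoring_path>" \
-  -H "Authorization: Bearer ${ENDPOINT_KEY}" \
-  -H "Content-Type: application/json" \
-  -d '{"data": [[1, 2, 3, 4]]}'
-```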
-
-* **HTTPS**
-
- ```azurecli
- az k8s-extension create --name arcml-extension --extension-type Microsoft.AzureML.Kubernetes --cluster-type connectedClusters --cluster-name <your-connected-cluster-name> --resource-group <resource-group> --scope cluster --config enableInference=True privateEndpointNodeport=True sslCname=<cname> --config-protected sslCertPemFile=<path-to-the-SSL-cert-PEM-file> sslKeyPemFile=<path-to-the-SSL-key-PEM-file> --auto-upgrade-minor-version False
- ```
-
-* **HTTP**
-
- ```azurecli
- az k8s-extension create --name arcml-extension --extension-type Microsoft.AzureML.Kubernetes --cluster-type connectedClusters --cluster-name <your-connected-cluster-name> --config enableInference=True privateEndpointNodeport=True allowInsecureConnections=True --resource-group <resource-group> --scope cluster --auto-upgrade-minor-version False
- ```
-
-### Deploy extension for training and inferencing workloads <a id="training-inferencing"></a>
-
-Use the following Azure CLI command to deploy the Azure Machine Learning extension and enable real-time inferencing, batch inferencing, and training workloads on your Kubernetes cluster.
-
-```azurecli
-az k8s-extension create --name arcml-extension --extension-type Microsoft.AzureML.Kubernetes --cluster-type connectedClusters --cluster-name <your-connected-cluster-name> --config enableTraining=True enableInference=True sslCname=<cname> --config-protected sslCertPemFile=<path-to-the-SSL-cert-PEM-file> sslKeyPemFile=<path-to-the-SSL-key-PEM-file> --resource-group <resource-group> --scope cluster --auto-upgrade-minor-version False
-```
-
-## Resources created during deployment
-
-Once the Azure Machine Learning extension is deployed, the following resources are created in Azure as well as your Kubernetes cluster, depending on the workloads you run on your cluster.
-
- |Resource name |Resource type |Training |Inference |Training and Inference| Description |
- |--|--|--|--|--|--|
- |Azure Service Bus|Azure resource|**&check;**|**&check;**|**&check;**|Used by gateway to sync job and cluster status to Azure Machine Learning services regularly.|
- |Azure Relay|Azure resource|**&check;**|**&check;**|**&check;**|Route traffic from Azure Machine Learning services to the Kubernetes cluster.|
- |aml-operator|Kubernetes deployment|**&check;**|N/A|**&check;**|Manage the lifecycle of training jobs.|
- |{EXTENSION-NAME}-kube-state-metrics|Kubernetes deployment|**&check;**|**&check;**|**&check;**|Export the cluster-related metrics to Prometheus.|
- |{EXTENSION-NAME}-prometheus-operator|Kubernetes deployment|**&check;**|**&check;**|**&check;**| Provide Kubernetes native deployment and management of Prometheus and related monitoring components.|
- |amlarc-identity-controller|Kubernetes deployment|N/A|**&check;**|**&check;**|Request and renew Blob/Azure Container Registry token with managed identity for infrastructure and user containers.|
- |amlarc-identity-proxy|Kubernetes deployment|N/A|**&check;**|**&check;**|Request and renew Blob/Azure Container Registry token with managed identity for infrastructure and user containers.|
- |azureml-fe|Kubernetes deployment|N/A|**&check;**|**&check;**|The front-end component that routes incoming inference requests to deployed services.|
- |inference-operator-controller-manager|Kubernetes deployment|N/A|**&check;**|**&check;**|Manage the lifecycle of inference endpoints. |
- |metrics-controller-manager|Kubernetes deployment|**&check;**|**&check;**|**&check;**|Manage the configuration for Prometheus|
- |relayserver|Kubernetes deployment|**&check;**|**&check;**|**&check;**|Pass the job spec from Azure Machine Learning services to the Kubernetes cluster.|
- |cluster-status-reporter|Kubernetes deployment|**&check;**|**&check;**|**&check;**|Gather the nodes and resource information, and upload it to Azure Machine Learning services.|
- |nfd-master|Kubernetes deployment|**&check;**|N/A|**&check;**|Node feature discovery.|
- |gateway|Kubernetes deployment|**&check;**|**&check;**|**&check;**|Send nodes and cluster resource information to Azure Machine Learning services.|
- |csi-blob-controller|Kubernetes deployment|**&check;**|N/A|**&check;**|Azure Blob Storage Container Storage Interface (CSI) driver.|
- |csi-blob-node|Kubernetes daemonset|**&check;**|N/A|**&check;**|Azure Blob Storage Container Storage Interface (CSI) driver.|
- |fluent-bit|Kubernetes daemonset|**&check;**|**&check;**|**&check;**|Gather infrastructure components' log.|
- |k8s-host-device-plugin-daemonset|Kubernetes daemonset|**&check;**|**&check;**|**&check;**|Expose fuse to pods on each node.|
- |nfd-worker|Kubernetes daemonset|**&check;**|N/A|**&check;**|Node feature discovery.|
- |prometheus-prom-prometheus|Kubernetes statefulset|**&check;**|**&check;**|**&check;**|Gather and send job metrics to Azure.|
- |frameworkcontroller|Kubernetes statefulset|**&check;**|N/A|**&check;**|Manage the lifecycle of Azure Machine Learning training pods.|
- |alertmanager|Kubernetes statefulset|**&check;**|N/A|**&check;**|Handle alerts sent by client applications such as the Prometheus server.|
-
-> [!IMPORTANT]
-> Azure Service Bus and Azure Relay resources are under the same resource group as the Arc cluster resource. These resources are used to communicate with the Kubernetes cluster and modifying them will break attached compute targets.
-
-> [!NOTE]
-> **{EXTENSION-NAME}** is the extension name specified by the ```az k8s-extension create --name``` Azure CLI command.
-
-## Verify your AzureML extension deployment
-
-```azurecli
-az k8s-extension show --name arcml-extension --cluster-type connectedClusters --cluster-name <your-connected-cluster-name> --resource-group <resource-group>
-```
-
-In the response, look for `"name": "arcml-extension"` and `"installState": "Installed"`. Note it might show `"installState": "Pending"` for the first few minutes.
-
-When the `installState` shows **Installed**, run the following command on your machine with the kubeconfig file pointed to your cluster to check that all pods under *azureml* namespace are in *Running* state:
-
-```bash
-kubectl get pods -n azureml
-```
-## Update Azure Machine Learning extension
-
-Use the ```az k8s-extension update``` CLI command to update the mutable properties of the Azure Machine Learning extension. For more information, see the [`k8s-extension update` CLI command documentation](/cli/azure/k8s-extension?view=azure-cli-latest#az-k8s-extension-update&preserve-view=true).
-
-1. Azure Arc supports update of ``--auto-upgrade-minor-version``, ``--version``, ``--configuration-settings``, ``--configuration-protected-settings``.
-2. For `configurationSettings`, only the settings that require an update need to be provided. If you provide all settings, they're merged or overwritten with the provided values (see the example after this list).
-3. For `configurationProtectedSettings`, ALL settings should be provided. If some settings are omitted, those settings are considered obsolete and deleted.
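-
-For example, the following is a minimal sketch of updating a single mutable configuration setting on the extension deployed earlier in this article; the cluster name and resource group are placeholders:
-
-```azurecli
-az k8s-extension update --name arcml-extension --cluster-type connectedClusters --cluster-name <your-connected-cluster-name> --resource-group <resource-group> --configuration-settings logAnalyticsWS=True
-```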
-
-> [!IMPORTANT]
-> **Don't** update the following configs if you have active training workloads or real-time inference endpoints. Otherwise, the training jobs will be impacted and the endpoints will become unavailable.
->
-> * `enableTraining` from `True` to `False`
-> * `installNvidiaDevicePlugin` from `True` to `False` when using GPU.
-> * `nodeSelector`. The update operation can't remove existing nodeSelectors. It can only update existing ones or add new ones.
->
-> **Don't** update the following configs if you have active real-time inference endpoints; otherwise, the endpoints will become unavailable.
-> * `allowInsecureConnections`
-> * `privateEndpointNodeport`
-> * `privateEndpointILB`
-> * To update `logAnalyticsWS` from `True` to `False`, provide all original `configurationProtectedSettings`. Otherwise, those settings are considered obsolete and deleted.
-
-## Delete Azure Machine Learning extension
-
-Use [`k8s-extension delete`](/cli/azure/k8s-extension?view=azure-cli-latest#az-k8s-extension-delete&preserve-view=true) CLI command to delete the Azure Machine Learning extension.
-
-It takes around 10 minutes to delete all components deployed to the Kubernetes cluster. Run `kubectl get pods -n azureml` to check if all components were deleted.
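-
-For example, a minimal sketch of deleting the extension deployed earlier in this article; the cluster name and resource group are placeholders:
-
-```azurecli
-az k8s-extension delete --name arcml-extension --cluster-type connectedClusters --cluster-name <your-connected-cluster-name> --resource-group <resource-group>
-```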
--
-## Attach Arc Cluster
-
-### Prerequisite
-
-An Azure Machine Learning workspace defaults to having a system-assigned managed identity to access Azure ML resources. No further steps are needed if this default setting is applied.
-
-![Managed Identity in workspace](./media/how-to-attach-arc-kubernetes/ws-msi.png)
-
-Otherwise, if a user-assigned managed identity is specified in Azure Machine Learning workspace creation, the following role assignments need to be granted to the identity manually before attaching the compute.
-
-|Azure resource name |Role to be assigned|
-|--|--|
-|Azure Service Bus|Azure Service Bus Data Owner|
-|Azure Relay|Azure Relay Owner|
-|Azure Arc-enabled Kubernetes|Reader|
-
-The Azure Service Bus and Azure Relay resources are created under the same resource group as the Arc cluster.
-
-### [Studio](#tab/studio)
-
-Attaching an Azure Arc-enabled Kubernetes cluster makes it available to your workspace for training.
-
-1. Navigate to [Azure Machine Learning studio](https://ml.azure.com).
-1. Under **Manage**, select **Compute**.
-1. Select the **Attached computes** tab.
-1. Select **+New > Kubernetes (preview)**
-
- ![Attach Kubernetes cluster](./media/how-to-attach-arc-kubernetes/attach-kubernetes-cluster.png)
-
-1. Enter a compute name and select your Azure Arc-enabled Kubernetes cluster from the dropdown.
-
- * **(Optional)** Enter Kubernetes namespace, which defaults to `default`. All machine learning workloads will be sent to the specified kubernetes namespace in the cluster.
-
- * **(Optional)** Assign system-assigned or user-assigned managed identity. Managed identities eliminate the need for developers to manage credentials. For more information, see [managed identities overview](../active-directory/managed-identities-azure-resources/overview.md) .
-
- ![Configure Kubernetes cluster](./media/how-to-attach-arc-kubernetes/configure-kubernetes-cluster-2.png)
-
-1. Select **Attach**
-
- In the Attached compute tab, the initial state of your cluster is *Creating*. When the cluster is successfully attached, the state changes to *Succeeded*. Otherwise, the state changes to *Failed*.
-
- ![Provision resources](./media/how-to-attach-arc-kubernetes/provision-resources.png)
-
-### [Python SDK](#tab/sdk)
-
-You can use the Azure Machine Learning Python SDK to attach Azure Arc-enabled Kubernetes clusters as compute targets using the [`attach_configuration`](/python/api/azureml-core/azureml.core.compute.kubernetescompute.kubernetescompute?view=azure-ml-py&preserve-view=true) method.
-
-The following Python code shows how to attach an Azure Arc-enabled Kubernetes cluster and use it as a compute target with managed identity enabled.
-
-Managed identities eliminate the need for developers to manage credentials. For more information, see [managed identities overview](../active-directory/managed-identities-azure-resources/overview.md).
-
-```python
-from azureml.core.compute import KubernetesCompute
-from azureml.core.compute import ComputeTarget
-from azureml.core.workspace import Workspace
-import os
-
-ws = Workspace.from_config()
-
-# Specify a name for your Kubernetes compute
-amlarc_compute_name = "<COMPUTE_CLUSTER_NAME>"
-
-# resource ID for the Kubernetes cluster and user-managed identity
-resource_id = "/subscriptions/<sub ID>/resourceGroups/<RG>/providers/Microsoft.Kubernetes/connectedClusters/<cluster name>"
-
-user_assigned_identity_resource_id = ['subscriptions/<sub ID>/resourceGroups/<RG>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity name>']
-
-ns = "default"
-
-if amlarc_compute_name in ws.compute_targets:
- amlarc_compute = ws.compute_targets[amlarc_compute_name]
- if amlarc_compute and type(amlarc_compute) is KubernetesCompute:
- print("found compute target: " + amlarc_compute_name)
-else:
- print("creating new compute target...")
--
-# assign user-assigned managed identity
-amlarc_attach_configuration = KubernetesCompute.attach_configuration(resource_id = resource_id, namespace = ns, identity_type ='UserAssigned', identity_ids = user_assigned_identity_resource_id)
-
-# assign system-assigned managed identity
-# amlarc_attach_configuration = KubernetesCompute.attach_configuration(resource_id = resource_id, namespace = ns, identity_type ='SystemAssigned')
-
-amlarc_compute = ComputeTarget.attach(ws, amlarc_compute_name, amlarc_attach_configuration)
-amlarc_compute.wait_for_completion(show_output=True)
-
-# get detailed compute description containing the managed identity principal ID, used for permission access.
-print(amlarc_compute.get_status().serialize())
-```
-
-Use the `identity_type` parameter to enable `SystemAssigned` or `UserAssigned` managed identities.
-
-### [CLI](#tab/cli)
-
-You can attach an AKS or Azure Arc enabled Kubernetes cluster using the Azure Machine Learning 2.0 CLI (preview).
-
-Use the Azure Machine Learning CLI [`attach`](/cli/azure/ml/compute) command and set the `--type` argument to `Kubernetes` to attach your Kubernetes cluster using the Azure Machine Learning 2.0 CLI.
-
-> [!NOTE]
-> Compute attach support for AKS or Azure Arc enabled Kubernetes clusters requires a version of the Azure CLI `ml` extension >= 2.0.1a4. For more information, see [Install and set up the CLI (v2)](how-to-configure-cli.md).
-
-The following commands show how to attach an Azure Arc-enabled Kubernetes cluster and use it as a compute target with managed identity enabled.
-
-**AKS**
-
-```azurecli
-az ml compute attach --resource-group <resource-group-name> --workspace-name <workspace-name> --name amlarc-compute --resource-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Kubernetes/managedclusters/<cluster-name>" --type Kubernetes --identity-type UserAssigned --user-assigned-identities "subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>" --no-wait
-```
-
-**Azure Arc enabled Kubernetes**
-
-```azurecli
-az ml compute attach --resource-group <resource-group-name> --workspace-name <workspace-name> --name amlarc-compute --resource-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Kubernetes/connectedClusters/<cluster-name>" --type kubernetes --user-assigned-identities "subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>" --no-wait
-```
-
-Use the `--identity-type` argument to enable `SystemAssigned` or `UserAssigned` managed identities.
-
-> [!IMPORTANT]
-> `--user-assigned-identities` is only required for `UserAssigned` managed identities. Although you can provide a list of comma-separated user managed identities, only the first one is used when you attach your cluster.
---
-## Next steps
-- [Create and select different instance types for training and inferencing workloads](how-to-kubernetes-instance-type.md)
-- [Train models with CLI (v2)](how-to-train-cli.md)
-- [Configure and submit training runs](how-to-set-up-training-targets.md)
-- [Tune hyperparameters](how-to-tune-hyperparameters.md)
-- [Train a model using Scikit-learn](how-to-train-scikit-learn.md)
-- [Train a TensorFlow model](how-to-train-tensorflow.md)
-- [Train a PyTorch model](how-to-train-pytorch.md)
-- [Train using Azure Machine Learning pipelines](how-to-create-machine-learning-pipelines.md)
-- [Train model on-premise with outbound proxy server](../azure-arc/kubernetes/quickstart-connect-cluster.md#connect-using-an-outbound-proxy-server)
machine-learning How To Attach Compute Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-attach-compute-targets.md
- Title: Set up training & inference compute targets-
-description: Add compute resources (compute targets) to your workspace to use for machine learning training and inference.
------ Previously updated : 10/21/2021---
-# Set up compute targets for model training and deployment
-
-Learn how to attach Azure compute resources to your Azure Machine Learning workspace. Then you can use these resources as training and inference [compute targets](concept-compute-target.md) in your machine learning tasks.
-
-In this article, learn how to set up your workspace to use these compute resources:
-
-* Your local computer
-* Remote virtual machines
-* Apache Spark pools (powered by Azure Synapse Analytics)
-* Azure HDInsight
-* Azure Batch
-* Azure Databricks - used as a training compute target only in [machine learning pipelines](how-to-create-machine-learning-pipelines.md)
-* Azure Data Lake Analytics
-* Azure Container Instance
-* Azure Kubernetes Service & Azure Arc-enabled Kubernetes (preview)
-
-To use compute targets managed by Azure Machine Learning, see:
-
-* [Azure Machine Learning compute instance](how-to-create-manage-compute-instance.md)
-* [Azure Machine Learning compute cluster](how-to-create-attach-compute-cluster.md)
-* [Azure Kubernetes Service cluster](how-to-create-attach-kubernetes.md)
-
-## Prerequisites
-
-* An Azure Machine Learning workspace. For more information, see [Create an Azure Machine Learning workspace](how-to-manage-workspace.md).
-
-* The [Azure CLI extension for Machine Learning service](reference-azure-machine-learning-cli.md), [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro), or the [Azure Machine Learning Visual Studio Code extension](how-to-setup-vs-code.md).
-
-## Limitations
-
-* **Do not create multiple, simultaneous attachments to the same compute** from your workspace. For example, attaching one Azure Kubernetes Service cluster to a workspace using two different names. Each new attachment will break the previous existing attachment(s).
-
- If you want to reattach a compute target, for example to change TLS or other cluster configuration setting, you must first remove the existing attachment.
-
-## What's a compute target?
-
-With Azure Machine Learning, you can train your model on various resources or environments, collectively referred to as [__compute targets__](concept-azure-machine-learning-architecture.md#compute-targets). A compute target can be a local machine or a cloud resource, such as an Azure Machine Learning Compute, Azure HDInsight, or a remote virtual machine. You also use compute targets for model deployment as described in ["Where and how to deploy your models"](how-to-deploy-and-where.md).
--
-## <a id="local"></a>Local computer
-
-When you use your local computer for **training**, there is no need to create a compute target. Just [submit the training run](how-to-set-up-training-targets.md) from your local machine.
-
-When you use your local computer for **inference**, you must have Docker installed. To perform the deployment, use [LocalWebservice.deploy_configuration()](/python/api/azureml-core/azureml.core.webservice.local.localwebservice#deploy-configuration-port-none-) to define the port that the web service will use. Then use the normal deployment process as described in [Deploy models with Azure Machine Learning](how-to-deploy-and-where.md).
-
-## <a id="vm"></a>Remote virtual machines
-
-Azure Machine Learning also supports attaching an Azure Virtual Machine. The VM must be an Azure Data Science Virtual Machine (DSVM). The VM offers a curated choice of tools and frameworks for full-lifecycle machine learning development. For more information on how to use the DSVM with Azure Machine Learning, see [Configure a development environment](./how-to-configure-environment.md#dsvm).
-
-> [!TIP]
-> Instead of a remote VM, we recommend using the [Azure Machine Learning compute instance](concept-compute-instance.md). It is a fully managed, cloud-based compute solution that is specific to Azure Machine Learning. For more information, see [create and manage Azure Machine Learning compute instance](how-to-create-manage-compute-instance.md).
-
-1. **Create**: Azure Machine Learning cannot create a remote VM for you. Instead, you must create the VM and then attach it to your Azure Machine Learning workspace. For information on creating a DSVM, see [Provision the Data Science Virtual Machine for Linux (Ubuntu)](./data-science-virtual-machine/dsvm-ubuntu-intro.md).
-
- > [!WARNING]
- > Azure Machine Learning only supports virtual machines that run **Ubuntu**. When you create a VM or choose an existing VM, you must select a VM that uses Ubuntu.
- >
- > Azure Machine Learning also requires the virtual machine to have a __public IP address__.
-
-1. **Attach**: To attach an existing virtual machine as a compute target, you must provide the resource ID, user name, and password for the virtual machine. The resource ID of the VM can be constructed using the subscription ID, resource group name, and VM name using the following string format: `/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.Compute/virtualMachines/<vm_name>`
-
-
- ```python
- from azureml.core.compute import RemoteCompute, ComputeTarget
-
- # Create the compute config
- compute_target_name = "attach-dsvm"
-
- attach_config = RemoteCompute.attach_configuration(resource_id='<resource_id>',
- ssh_port=22,
- username='<username>',
- password="<password>")
-
- # Attach the compute
- compute = ComputeTarget.attach(ws, compute_target_name, attach_config)
-
- compute.wait_for_completion(show_output=True)
- ```
-
- Or you can attach the DSVM to your workspace [using Azure Machine Learning studio](how-to-create-attach-compute-studio.md#attached-compute).
-
- > [!WARNING]
- > Do not create multiple, simultaneous attachments to the same DSVM from your workspace. Each new attachment will break the previous existing attachment(s).
-
-1. **Configure**: Create a run configuration for the DSVM compute target. Docker and conda are used to create and configure the training environment on the DSVM.
-
- ```python
- from azureml.core import ScriptRunConfig
- from azureml.core.environment import Environment
- from azureml.core.conda_dependencies import CondaDependencies
-
- # Create environment
- myenv = Environment(name="myenv")
-
- # Specify the conda dependencies
- myenv.python.conda_dependencies = CondaDependencies.create(conda_packages=['scikit-learn'])
-
- # If no base image is explicitly specified the default CPU image "azureml.core.runconfig.DEFAULT_CPU_IMAGE" will be used
- # To use GPU in DSVM, you should specify the default GPU base Docker image or another GPU-enabled image:
- # myenv.docker.enabled = True
- # myenv.docker.base_image = azureml.core.runconfig.DEFAULT_GPU_IMAGE
-
- # Configure the run configuration with the Linux DSVM as the compute target and the environment defined above
- src = ScriptRunConfig(source_directory=".", script="train.py", compute_target=compute, environment=myenv)
- ```
-
-> [!TIP]
-> If you want to __remove__ (detach) a VM from your workspace, use the [RemoteCompute.detach()](/python/api/azureml-core/azureml.core.compute.remotecompute#detach--) method.
->
-> Azure Machine Learning does not delete the VM for you. You must manually delete the VM using the Azure portal, CLI, or the SDK for Azure VM.
-
-## <a id="synapse"></a>Apache Spark pools
-
-The Azure Synapse Analytics integration with Azure Machine Learning (preview) allows you to attach an Apache Spark pool backed by Azure Synapse for interactive data exploration and preparation. With this integration, you can have a dedicated compute for data wrangling at scale. For more information, see [How to attach Apache Spark pools powered by Azure Synapse Analytics](how-to-link-synapse-ml-workspaces.md#attach-synapse-spark-pool-as-a-compute).
-
-## <a id="hdinsight"></a>Azure HDInsight
-
-Azure HDInsight is a popular platform for big-data analytics. The platform provides Apache Spark, which can be used to train your model.
-
-1. **Create**: Azure Machine Learning cannot create an HDInsight cluster for you. Instead, you must create the cluster and then attach it to your Azure Machine Learning workspace. For more information, see [Create a Spark Cluster in HDInsight](../hdinsight/spark/apache-spark-jupyter-spark-sql.md).
-
- > [!WARNING]
- > Azure Machine Learning requires the HDInsight cluster to have a __public IP address__.
-
- When you create the cluster, you must specify an SSH user name and password. Take note of these values, as you need them to use HDInsight as a compute target.
-
- After the cluster is created, connect to it with the hostname \<clustername>-ssh.azurehdinsight.net, where \<clustername> is the name that you provided for the cluster.
-
-1. **Attach**: To attach an HDInsight cluster as a compute target, you must provide the resource ID, user name, and password for the HDInsight cluster. The resource ID of the HDInsight cluster can be constructed using the subscription ID, resource group name, and HDInsight cluster name using the following string format: `/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.HDInsight/clusters/<cluster_name>`
-
- ```python
- from azureml.core.compute import ComputeTarget, HDInsightCompute
- from azureml.exceptions import ComputeTargetException
-
- try:
- # if you want to connect using SSH key instead of username/password you can provide parameters private_key_file and private_key_passphrase
-
- attach_config = HDInsightCompute.attach_configuration(resource_id='<resource_id>',
- ssh_port=22,
- username='<ssh-username>',
- password='<ssh-pwd>')
- hdi_compute = ComputeTarget.attach(workspace=ws,
- name='myhdi',
- attach_configuration=attach_config)
-
- except ComputeTargetException as e:
- print("Caught = {}".format(e.message))
-
- hdi_compute.wait_for_completion(show_output=True)
- ```
-
- Or you can attach the HDInsight cluster to your workspace [using Azure Machine Learning studio](how-to-create-attach-compute-studio.md#attached-compute).
-
- > [!WARNING]
- > Do not create multiple, simultaneous attachments to the same HDInsight from your workspace. Each new attachment will break the previous existing attachment(s).
-
-1. **Configure**: Create a run configuration for the HDI compute target.
-
- [!code-python[](~/aml-sdk-samples/ignore/doc-qa/how-to-set-up-training-targets/hdi.py?name=run_hdi)]
-
-> [!TIP]
-> If you want to __remove__ (detach) an HDInsight cluster from the workspace, use the [HDInsightCompute.detach()](/python/api/azureml-core/azureml.core.compute.hdinsight.hdinsightcompute#detach--) method.
->
-> Azure Machine Learning does not delete the HDInsight cluster for you. You must manually delete it using the Azure portal, CLI, or the SDK for Azure HDInsight.
-
-## <a id="azbatch"></a>Azure Batch
-
-Azure Batch is used to run large-scale parallel and high-performance computing (HPC) applications efficiently in the cloud. AzureBatchStep can be used in an Azure Machine Learning Pipeline to submit jobs to an Azure Batch pool of machines.
-
-To attach Azure Batch as a compute target, you must use the Azure Machine Learning SDK and provide the following information:
-- **Azure Batch compute name**: A friendly name to be used for the compute within the workspace
-- **Azure Batch account name**: The name of the Azure Batch account
-- **Resource Group**: The resource group that contains the Azure Batch account.
-
-The following code demonstrates how to attach Azure Batch as a compute target:
-
-```python
-from azureml.core.compute import ComputeTarget, BatchCompute
-from azureml.exceptions import ComputeTargetException
-
-# Name to associate with new compute in workspace
-batch_compute_name = 'mybatchcompute'
-
-# Batch account details needed to attach as compute to workspace
-batch_account_name = "<batch_account_name>" # Name of the Batch account
-# Name of the resource group which contains this account
-batch_resource_group = "<batch_resource_group>"
-
-try:
- # check if the compute is already attached
- batch_compute = BatchCompute(ws, batch_compute_name)
-except ComputeTargetException:
- print('Attaching Batch compute...')
- provisioning_config = BatchCompute.attach_configuration(
- resource_group=batch_resource_group, account_name=batch_account_name)
- batch_compute = ComputeTarget.attach(
- ws, batch_compute_name, provisioning_config)
- batch_compute.wait_for_completion()
- print("Provisioning state:{}".format(batch_compute.provisioning_state))
- print("Provisioning errors:{}".format(batch_compute.provisioning_errors))
-
-print("Using Batch compute:{}".format(batch_compute.cluster_resource_id))
-```
-
-> [!WARNING]
-> Do not create multiple, simultaneous attachments to the same Azure Batch from your workspace. Each new attachment will break the previous existing attachment(s).
-
-## <a id="databricks"></a>Azure Databricks
-
-Azure Databricks is an Apache Spark-based environment in the Azure cloud. It can be used as a compute target with an Azure Machine Learning pipeline.
-
-> [!IMPORTANT]
-> Azure Machine Learning cannot create an Azure Databricks compute target. Instead, you must create an Azure Databricks workspace, and then attach it to your Azure Machine Learning workspace. To create a workspace resource, see the [Run a Spark job on Azure Databricks](/azure/databricks/scenarios/quickstart-create-databricks-workspace-portal) document.
->
-> To attach an Azure Databricks workspace from a __different Azure subscription__, you (your Azure AD account) must be granted the **Contributor** role on the Azure Databricks workspace. Check your access in the [Azure portal](https://portal.azure.com/).
-
-To attach Azure Databricks as a compute target, provide the following information:
-
-* __Databricks compute name__: The name you want to assign to this compute resource.
-* __Databricks workspace name__: The name of the Azure Databricks workspace.
-* __Databricks access token__: The access token used to authenticate to Azure Databricks. To generate an access token, see the [Authentication](/azure/databricks/dev-tools/api/latest/authentication) document.
-
-The following code demonstrates how to attach Azure Databricks as a compute target with the Azure Machine Learning SDK:
-
-```python
-import os
-from azureml.core.compute import ComputeTarget, DatabricksCompute
-from azureml.exceptions import ComputeTargetException
-
-databricks_compute_name = os.environ.get(
- "AML_DATABRICKS_COMPUTE_NAME", "<databricks_compute_name>")
-databricks_workspace_name = os.environ.get(
- "AML_DATABRICKS_WORKSPACE", "<databricks_workspace_name>")
-databricks_resource_group = os.environ.get(
- "AML_DATABRICKS_RESOURCE_GROUP", "<databricks_resource_group>")
-databricks_access_token = os.environ.get(
- "AML_DATABRICKS_ACCESS_TOKEN", "<databricks_access_token>")
-
-try:
- databricks_compute = ComputeTarget(
- workspace=ws, name=databricks_compute_name)
- print('Compute target already exists')
-except ComputeTargetException:
- print('compute not found')
- print('databricks_compute_name {}'.format(databricks_compute_name))
- print('databricks_workspace_name {}'.format(databricks_workspace_name))
- print('databricks_access_token {}'.format(databricks_access_token))
-
- # Create attach config
- attach_config = DatabricksCompute.attach_configuration(resource_group=databricks_resource_group,
- workspace_name=databricks_workspace_name,
- access_token=databricks_access_token)
- databricks_compute = ComputeTarget.attach(
- ws,
- databricks_compute_name,
- attach_config
- )
-
- databricks_compute.wait_for_completion(True)
-```
-
-For a more detailed example, see an [example notebook](https://aka.ms/pl-databricks) on GitHub.
-
-> [!WARNING]
-> Do not create multiple, simultaneous attachments to the same Azure Databricks from your workspace. Each new attachment will break the previous existing attachment(s).
-
-## <a id="adla"></a>Azure Data Lake Analytics
-
-Azure Data Lake Analytics is a big data analytics platform in the Azure cloud. It can be used as a compute target with an Azure Machine Learning pipeline.
-
-Create an Azure Data Lake Analytics account before using it. To create this resource, see the [Get started with Azure Data Lake Analytics](../data-lake-analytics/data-lake-analytics-get-started-portal.md) document.
-
-To attach Data Lake Analytics as a compute target, you must use the Azure Machine Learning SDK and provide the following information:
-
-* __Compute name__: The name you want to assign to this compute resource.
-* __Resource Group__: The resource group that contains the Data Lake Analytics account.
-* __Account name__: The Data Lake Analytics account name.
-
-The following code demonstrates how to attach Data Lake Analytics as a compute target:
-
-```python
-import os
-from azureml.core.compute import ComputeTarget, AdlaCompute
-from azureml.exceptions import ComputeTargetException
--
-adla_compute_name = os.environ.get(
- "AML_ADLA_COMPUTE_NAME", "<adla_compute_name>")
-adla_resource_group = os.environ.get(
- "AML_ADLA_RESOURCE_GROUP", "<adla_resource_group>")
-adla_account_name = os.environ.get(
- "AML_ADLA_ACCOUNT_NAME", "<adla_account_name>")
-
-try:
- adla_compute = ComputeTarget(workspace=ws, name=adla_compute_name)
- print('Compute target already exists')
-except ComputeTargetException:
- print('compute not found')
- print('adla_compute_name {}'.format(adla_compute_name))
- print('adla_resource_id {}'.format(adla_resource_group))
- print('adla_account_name {}'.format(adla_account_name))
- # create attach config
- attach_config = AdlaCompute.attach_configuration(resource_group=adla_resource_group,
- account_name=adla_account_name)
- # Attach ADLA
- adla_compute = ComputeTarget.attach(
- ws,
- adla_compute_name,
- attach_config
- )
-
- adla_compute.wait_for_completion(True)
-```
-
-For a more detailed example, see an [example notebook](https://aka.ms/pl-adla) on GitHub.
-
-> [!WARNING]
-> Do not create multiple, simultaneous attachments to the same ADLA from your workspace. Each new attachment will break the previous existing attachment(s).
-
-> [!TIP]
-> Azure Machine Learning pipelines can only work with data stored in the default data store of the Data Lake Analytics account. If the data you need to work with is in a non-default store, you can use a [`DataTransferStep`](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.data_transfer_step.datatransferstep) to copy the data before training.
-
-## <a id="aci"></a>Azure Container Instance
-
-Azure Container Instances (ACI) are created dynamically when you deploy a model. You cannot create or attach ACI to your workspace in any other way. For more information, see [Deploy a model to Azure Container Instances](how-to-deploy-azure-container-instance.md).
-
-## <a id="kubernetes"></a>Kubernetes (preview)
-
-Azure Machine Learning provides you with the following options to attach your own Kubernetes clusters for training and inferencing:
-
-* [Azure Kubernetes Service](../aks/intro-kubernetes.md). Azure Kubernetes Service provides a managed cluster in Azure.
-* [Azure Arc Kubernetes](../azure-arc/kubernetes/overview.md). Use Azure Arc-enabled Kubernetes clusters if your cluster is hosted outside of Azure.
--
-To detach a Kubernetes cluster from your workspace, use the following method:
-
-```python
-compute_target.detach()
-```
-
-> [!WARNING]
-> Detaching a cluster **does not delete the cluster**. To delete an Azure Kubernetes Service cluster, see [Use the Azure CLI with AKS](../aks/learn/quick-kubernetes-deploy-cli.md#delete-the-cluster). To delete an Azure Arc-enabled Kubernetes cluster, see [Azure Arc quickstart](../azure-arc/kubernetes/quickstart-connect-cluster.md#clean-up-resources).
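-
-If you also want to remove the underlying cluster resources after detaching, the following is a minimal sketch using the Azure CLI; the names are placeholders:
-
-```azurecli
-# Delete an AKS cluster
-az aks delete --name <aks-cluster-name> --resource-group <resource-group-name>
-
-# Remove the Azure Arc connection for an Arc-enabled cluster
-az connectedk8s delete --name <arc-cluster-name> --resource-group <resource-group-name>
-```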
-
-## Notebook examples
-
-See these notebooks for examples of training with various compute targets:
-* [how-to-use-azureml/training](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/training)
-* [tutorials/img-classification-part1-training.ipynb](https://github.com/Azure/MachineLearningNotebooks/blob/master/tutorials/image-classification-mnist-data/img-classification-part1-training.ipynb)
--
-## Next steps
-
-* Use the compute resource to [configure and submit a training run](how-to-set-up-training-targets.md).
-* [Tutorial: Train and deploy a model](tutorial-train-deploy-notebook.md) uses a managed compute target to train a model.
-* Learn how to [efficiently tune hyperparameters](how-to-tune-hyperparameters.md) to build better models.
-* Once you have a trained model, learn [how and where to deploy models](how-to-deploy-and-where.md).
-* [Use Azure Machine Learning with Azure Virtual Networks](./how-to-network-security-overview.md)
machine-learning How To Attach Kubernetes Anywhere https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-attach-kubernetes-anywhere.md
+
+ Title: Azure Machine Learning anywhere with Kubernetes (preview)
+description: Configure and attach an existing Kubernetes cluster in any infrastructure across on-premises and multi-cloud to build, train, and deploy models with a seamless Azure ML experience.
+++++ Last updated : 11/23/2021++++
+# Azure Machine Learning anywhere with Kubernetes (preview)
+
+Azure Machine Learning anywhere with Kubernetes (AzureML anywhere) enables customers to build, train, and deploy models in any infrastructure on-premises and across multi-cloud using Kubernetes. With an AzureML extension deployed on a Kubernetes cluster, you can instantly onboard teams of ML professionals with AzureML service capabilities, including the full machine learning lifecycle and automation with MLOps in hybrid cloud and multi-cloud.
+
+In this article, you can learn about steps to configure and attach an existing Kubernetes cluster anywhere for Azure Machine Learning:
+* [Deploy AzureML extension to Kubernetes cluster](#deploy-azureml-extensionexample-scenarios)
+* [Create and use instance types to manage compute resources efficiently](#create-custom-instance-types)
+
+## Prerequisites
+
+1. A running Kubernetes cluster - **We recommend a minimum of 4 vCPU cores and 8 GB of memory; around 2 vCPU cores and 3 GB of memory will be used by the Azure Arc agent and AzureML extension components**.
+1. Connect your Kubernetes cluster to Azure Arc. Follow instructions in [connect existing Kubernetes cluster to Azure Arc](../azure-arc/kubernetes/quickstart-connect-cluster.md).
+
+ a. If you have an Azure Red Hat OpenShift (ARO) cluster or an OpenShift Container Platform (OCP) cluster, complete the additional prerequisite step [here](#prerequisite-for-azure-arc-enabled-kubernetes) before AzureML extension deployment.
+1. If you have an AKS cluster in Azure, register the AKS-ExtensionManager feature flag by using the ```az feature register --namespace "Microsoft.ContainerService" --name "AKS-ExtensionManager"``` command, as shown in the example after this list. **Azure Arc connection is not required and not recommended**.
+1. Install or upgrade Azure CLI to version >=2.16.0
+1. Install the Azure CLI extension ```k8s-extension``` (version>=1.0.0) by running ```az extension add --name k8s-extension```
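+
+The following is a minimal sketch of registering and verifying the AKS-ExtensionManager feature flag mentioned above; re-registering the resource provider after the feature shows *Registered* is typically the final step:
+
+```azurecli
+# Register the AKS-ExtensionManager feature flag (AKS clusters only)
+az feature register --namespace "Microsoft.ContainerService" --name "AKS-ExtensionManager"
+
+# Check the registration state; wait until it shows "Registered"
+az feature show --namespace "Microsoft.ContainerService" --name "AKS-ExtensionManager" --query properties.state -o tsv
+
+# Propagate the change by re-registering the resource provider
+az provider register --namespace Microsoft.ContainerService
+```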
+
+## What is AzureML extension
+
+AzureML extension consists of a set of system components deployed to your Kubernetes cluster so you can enable your cluster to run an AzureML workload - model training jobs or model endpoints. You can use an Azure CLI command ```k8s-extension create``` to deploy AzureML extension.
+
+For a detailed list of AzureML extension system components, see appendix [AzureML extension components](#appendix-i-azureml-extension-components).
+
+## Key considerations for AzureML extension deployment
+
+AzureML extension allows you to specify configuration settings needed for different workload support at deployment time. Before AzureML extension deployment, **read the following carefully to avoid unnecessary extension deployment errors**:
+
+ * Type of workload to enable for your cluster. ```enableTraining``` and ```enableInference``` config settings are your convenient choices here; they enable training and inference workloads respectively.
+ * For inference workload support, it requires the ```azureml-fe``` router service to be deployed for routing incoming inference requests to model pods, and you need to specify the ```inferenceRouterServiceType``` config setting for ```azureml-fe```. ```azureml-fe``` can be deployed with one of the following ```inferenceRouterServiceType``` values:
+   * Type ```LoadBalancer```. Exposes ```azureml-fe``` externally using a cloud provider's load balancer. To specify this value, ensure that your cluster supports load balancer provisioning. Note that most on-premises Kubernetes clusters might not support an external load balancer.
+   * Type ```NodePort```. Exposes ```azureml-fe``` on each node's IP at a static port. You'll be able to contact ```azureml-fe```, from outside of the cluster, by requesting ```<NodeIP>:<NodePort>```. Using ```NodePort``` also allows you to set up your own load balancing solution and SSL termination for ```azureml-fe```.
+   * Type ```ClusterIP```. Exposes ```azureml-fe``` on a cluster-internal IP, which makes ```azureml-fe``` only reachable from within the cluster. For ```azureml-fe``` to serve inference requests coming from outside the cluster, it requires you to set up your own load balancing solution and SSL termination for ```azureml-fe```.
+ * For inference workload support, to ensure high availability of the ```azureml-fe``` routing service, AzureML extension deployment by default creates 3 replicas of ```azureml-fe``` for clusters with 3 or more nodes. If your cluster has **fewer than 3 nodes**, set ```inferenceLoadBalancerHA=False```.
+ * For inference workload support, you would also want to consider using **HTTPS** to restrict access to model endpoints and secure the data that clients submit. For this purpose, you need to specify either the ```sslSecret``` config setting or the combination of the ```sslCertPemFile``` and ```sslCertKeyFile``` config settings. By default, AzureML extension deployment expects **HTTPS** support, so you need to provide the above config settings (see the sketch after this list). For development or test purposes, **HTTP** support is conveniently available through the config setting ```allowInsecureConnections=True```.
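+
+For a quick test of the **HTTPS** path, you could generate a self-signed certificate and pass the resulting files through the ```sslCertPemFile``` and ```sslCertKeyFile``` settings. This is only a sketch for development or test use, and the CNAME value is a placeholder:
+
+```bash
+# Generate a self-signed certificate and private key (dev/test only).
+# Replace <your-cname> with the DNS name you plan to use for the inference endpoint.
+openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
+  -keyout key.pem -out cert.pem -subj "/CN=<your-cname>"
+
+# The resulting files can then be passed at deployment time, for example:
+# --config-protected sslCertPemFile=./cert.pem sslCertKeyFile=./key.pem
+```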
+
+For a complete list of configuration settings available to choose at AzureML deployment time, see appendix [Review AzureML extension config settings](#appendix-ii-review-azureml-deployment-configuration-settings)
+
+## Deploy AzureML extension - example scenarios
+
+### Use AKS in Azure for a quick Proof of Concept, both training and inference workloads support
+
+Ensure you have fulfilled [prerequisites](#prerequisites). For AzureML extension deployment on AKS, make sure to specify ```managedClusters``` value for ```--cluster-type``` parameter. Run the following Azure CLI command to deploy AzureML extension:
+```azurecli
+ az k8s-extension create --name azureml-extension --extension-type Microsoft.AzureML.Kubernetes --config enableTraining=True enableInference=True inferenceRouterServiceType=LoadBalancer allowInsecureConnections=True inferenceLoadBalancerHA=False --cluster-type managedClusters --cluster-name <your-AKS-cluster-name> --resource-group <your-RG-name> --scope cluster
+```
+
+### Use Minikube on your desktop for a quick POC, training workload support only
+
+Ensure you have fulfilled the [prerequisites](#prerequisites). Since the prerequisite steps create an Azure Arc connected cluster, you need to specify the ```connectedClusters``` value for the ```--cluster-type``` parameter. Run the following Azure CLI command to deploy the AzureML extension:
+```azurecli
+ az k8s-extension create --name azureml-extension --extension-type Microsoft.AzureML.Kubernetes --config enableTraining=True --cluster-type connectedClusters --cluster-name <your-connected-cluster-name> --resource-group <your-RG-name> --scope cluster
+```
+
+### Enable an AKS cluster in Azure for production training and inference workload
+
+Ensure you have fulfilled the [prerequisites](#prerequisites). Assuming your cluster has more than 3 nodes and you will use an Azure public load balancer and HTTPS for inference workload support, run the following Azure CLI command to deploy the AzureML extension:
+```azurecli
+ az k8s-extension create --name azureml-extension --extension-type Microsoft.AzureML.Kubernetes --config enableTraining=True enableInference=True inferenceRouterServiceType=LoadBalancer --config-protected sslCertPemFile=<file-path-to-cert-PEM> sslCertKeyFile=<file-path-to-cert-KEY> --cluster-type managedClusters --cluster-name <your-AKS-cluster-name> --resource-group <your-RG-name> --scope cluster
+```
+### Enable an Azure Arc connected cluster anywhere for production training and inference workload
+
+Ensure you have fulfilled the [prerequisites](#prerequisites). Assuming your cluster has more than 3 nodes and you will use a NodePort service type and HTTPS for inference workload support, run the following Azure CLI command to deploy the AzureML extension:
+```azurecli
+ az k8s-extension create --name azureml-extension --extension-type Microsoft.AzureML.Kubernetes --config enableTraining=True enableInference=True inferenceRouterServiceType=NodePort --config-protected sslCertPemFile=<file-path-to-cert-PEM> sslCertKeyFile=<file-path-to-cert-KEY> --cluster-type connectedClusters --cluster-name <your-connected-cluster-name> --resource-group <your-RG-name> --scope cluster
+```
+
+### Verify AzureML extension deployment
+
+1. Run the following CLI command to check AzureML extension details:
+
+ ```azurecli
+ az k8s-extension show --name azureml-extension --cluster-type connectedClusters --cluster-name <your-connected-cluster-name> --resource-group <resource-group>
+ ```
+
+1. In the response, look for "name": "azureml-extension" and "provisioningState": "Succeeded". Note it might show "provisioningState": "Pending" for the first few minutes.
+
+1. If the provisioningState shows Succeeded, run the following command on your machine with the kubeconfig file pointed to your cluster to check that all pods under "azureml" namespace are in 'Running' state:
+
+ ```bash
+ kubectl get pods -n azureml
+ ```
+
+## Attach a Kubernetes cluster to an AzureML workspace
+
+### Prerequisite for Azure Arc enabled Kubernetes
+
+An Azure Machine Learning workspace defaults to having a system-assigned managed identity to access Azure ML resources. No further steps are needed if the default system-assigned setting is on.
++
+Otherwise, if a user-assigned managed identity is specified in Azure Machine Learning workspace creation, the following role assignments need to be granted to the identity manually before attaching the compute.
+
+|Azure resource name |Role to be assigned|
+|--|--|
+|Azure Relay|Azure Relay Owner|
+|Azure Arc-enabled Kubernetes|Reader|
+
+Azure Relay resources are created under the same resource group as the Arc cluster.
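+
+The following is a minimal sketch of granting these roles to a user-assigned identity with the Azure CLI; the principal ID and resource IDs are placeholders you would look up in your own subscription:
+
+```azurecli
+# Principal ID of the user-assigned managed identity used by the workspace (placeholder)
+PRINCIPAL_ID="<identity-principal-id>"
+
+# Grant "Azure Relay Owner" on the Azure Relay resource created with the Arc cluster
+az role assignment create --role "Azure Relay Owner" --assignee "$PRINCIPAL_ID" --scope "<azure-relay-resource-id>"
+
+# Grant "Reader" on the Azure Arc-enabled Kubernetes cluster
+az role assignment create --role "Reader" --assignee "$PRINCIPAL_ID" --scope "<arc-connected-cluster-resource-id>"
+```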
+
+### [Studio](#tab/studio)
+
+Attaching an Azure Arc-enabled Kubernetes cluster makes it available to your workspace for training.
+
+1. Navigate to [Azure Machine Learning studio](https://ml.azure.com).
+1. Under **Manage**, select **Compute**.
+1. Select the **Attached computes** tab.
+1. Select **+New > Kubernetes (preview)**
+
+ :::image type="content" source="media/how-to-attach-arc-kubernetes/attach-kubernetes-cluster.png" alt-text="Screenshot of settings for Kubernetes cluster to make available in your workspace.":::
+
+1. Enter a compute name and select your Azure Arc-enabled Kubernetes cluster from the dropdown.
+
+ * **(Optional)** Enter Kubernetes namespace, which defaults to `default`. All machine learning workloads will be sent to the specified Kubernetes namespace in the cluster.
+
+ * **(Optional)** Assign system-assigned or user-assigned managed identity. Managed identities eliminate the need for developers to manage credentials. For more information, see [managed identities overview](../active-directory/managed-identities-azure-resources/overview.md) .
+
+ :::image type="content" source="media/how-to-attach-arc-kubernetes/configure-kubernetes-cluster-2.png" alt-text="Screenshot of settings for developer configuration of Kubernetes cluster.":::
+
+1. Select **Attach**
+
+ In the Attached compute tab, the initial state of your cluster is *Creating*. When the cluster is successfully attached, the state changes to *Succeeded*. Otherwise, the state changes to *Failed*.
+
+ :::image type="content" source="media/how-to-attach-arc-kubernetes/provision-resources.png" alt-text="Screenshot of attached settings for configuration of Kubernetes cluster.":::
+
+### [CLI](#tab/cli)
+
+You can attach an AKS or Azure Arc enabled Kubernetes cluster using the Azure Machine Learning 2.0 CLI (preview).
+
+Use the Azure Machine Learning CLI [`attach`](/cli/azure/ml/compute) command and set the `--type` argument to `Kubernetes` to attach your Kubernetes cluster using the Azure Machine Learning 2.0 CLI.
+
+> [!NOTE]
+> Compute attach support for AKS or Azure Arc enabled Kubernetes clusters requires a version of the Azure CLI `ml` extension >= 2.0.1a4. For more information, see [Install and set up the CLI (v2)](how-to-configure-cli.md).
+
+The following commands show how to attach an Azure Arc-enabled Kubernetes cluster and use it as a compute target with managed identity enabled.
+
+**AKS**
+
+```azurecli
+az ml compute attach --resource-group <resource-group-name> --workspace-name <workspace-name> --name k8s-compute --resource-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Kubernetes/managedclusters/<cluster-name>" --type Kubernetes --identity-type UserAssigned --user-assigned-identities "subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>" --no-wait
+```
+
+**Azure Arc enabled Kubernetes**
+
+```azurecli
+az ml compute attach --resource-group <resource-group-name> --workspace-name <workspace-name> --name amlarc-compute --resource-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Kubernetes/connectedClusters/<cluster-name>" --type kubernetes --user-assigned-identities "subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>" --no-wait
+```
+
+Use the `--identity-type` argument to enable `SystemAssigned` or `UserAssigned` managed identities.
+
+> [!IMPORTANT]
+> `--user-assigned-identities` is only required for `UserAssigned` managed identities. Although you can provide a list of comma-separated user managed identities, only the first one is used when you attach your cluster.
+++
+## Create instance types for efficient compute resource usage
+
+### What are instance types?
+
+Instance types are an Azure Machine Learning concept that allows targeting certain types of
+compute nodes for training and inference workloads. For an Azure VM, an example of an
+instance type is `STANDARD_D2_V3`.
+
+In Kubernetes clusters, instance types are represented in a custom resource definition (CRD) that is installed with the AzureML extension. Instance types are represented by two elements in AzureML extension:
+[nodeSelector](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector)
+and [resources](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/).
+In short, a `nodeSelector` lets us specify which node a pod should run on. The node must have a
+corresponding label. In the `resources` section, we can set the compute resources (CPU, memory and
+Nvidia GPU) for the pod.
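+
+For example, a `nodeSelector` of `mylabel: mylabelvalue` (used in the custom instance type example later in this article) only matches nodes that carry that label. The following is a minimal sketch of labeling a node, where the node name is a placeholder:
+
+```bash
+# Label a node so that pods using an instance type with nodeSelector mylabel=mylabelvalue
+# can be scheduled on it. <node-name> is one of your cluster nodes.
+kubectl label nodes <node-name> mylabel=mylabelvalue
+
+# Confirm the label was applied
+kubectl get nodes --show-labels | grep mylabel
+```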
+
+### Default instance type
+
+By default, a `defaultinstancetype` with the following definition is created when you attach a Kubernetes cluster to an AzureML workspace:
+- No `nodeSelector` is applied, meaning the pod can get scheduled on any node.
+- The workload's pods are assigned default resources of 0.6 CPU cores, 1536Mi of memory, and no GPU:
+```yaml
+resources:
+ requests:
+ cpu: "0.6"
+ memory: "1536Mi"
+ limits:
+ cpu: "0.6"
+ memory: "1536Mi"
+ nvidia.com/gpu: null
+```
+
+> [!NOTE]
+> - The default instance type purposefully uses minimal resources. To ensure all ML workloads run with appropriate resources, for example GPU resources, it is highly recommended to create custom instance types.
+> - `defaultinstancetype` will not appear as an InstanceType custom resource in the cluster when running the command ```kubectl get instancetype```, but it will appear in all clients (UI, CLI, SDK).
+> - `defaultinstancetype` can be overridden with a custom instance type definition having the same name as `defaultinstancetype` (see [Create custom instance types](#create-custom-instance-types) section)
+
+## Create custom instance types
+
+To create a new instance type, create a new custom resource for the instance type CRD. For example:
+
+```bash
+kubectl apply -f my_instance_type.yaml
+```
+
+With `my_instance_type.yaml`:
+```yaml
+apiVersion: amlarc.azureml.com/v1alpha1
+kind: InstanceType
+metadata:
+ name: myinstancetypename
+spec:
+ nodeSelector:
+ mylabel: mylabelvalue
+ resources:
+ limits:
+ cpu: "1"
+ nvidia.com/gpu: 1
+ memory: "2Gi"
+ requests:
+ cpu: "700m"
+ memory: "1500Mi"
+```
+
+Applying this manifest creates an instance type with the following behavior:
+- Pods will be scheduled only on nodes with label `mylabel: mylabelvalue`.
+- Pods will be assigned resource requests of `700m` CPU and `1500Mi` memory.
+- Pods will be assigned resource limits of `1` CPU, `2Gi` memory and `1` Nvidia GPU.
+
+> [!NOTE]
+> - Nvidia GPU resources are only specified in the `limits` section as integer values. For more information, see the Kubernetes [documentation](https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/#using-device-plugins).
+> - CPU and memory resources are string values.
+> - CPU can be specified in millicores, for example `100m`, or in full numbers; for example `"1"` is equivalent to `1000m`.
+> - Memory can be specified as a full number + suffix, for example `1024Mi` for 1024 MiB.
+
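+After applying the manifest, you can check that the custom resource exists. The following is a minimal sketch, assuming kubectl is pointed at the attached cluster:
+
+```bash
+# List all custom instance types in the cluster (defaultinstancetype will not appear here)
+kubectl get instancetype
+
+# Show the full definition of the instance type created above
+kubectl get instancetype myinstancetypename -o yaml
+```
+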
+It is also possible to create multiple instance types at once:
+
+```bash
+kubectl apply -f my_instance_type_list.yaml
+```
+
+With `my_instance_type_list.yaml`:
+```yaml
+apiVersion: amlarc.azureml.com/v1alpha1
+kind: InstanceTypeList
+items:
+ - metadata:
+ name: cpusmall
+ spec:
+ resources:
+ requests:
+ cpu: "100m"
+ memory: "100Mi"
+ limits:
+ cpu: "1"
+ nvidia.com/gpu: 0
+ memory: "1Gi"
+
+ - metadata:
+ name: defaultinstancetype
+ spec:
+ resources:
+ requests:
+ cpu: "1"
+ memory: "1Gi"
+ limits:
+ cpu: "1"
+ nvidia.com/gpu: 0
+ memory: "1Gi"
+```
+
+The above example creates two instance types: `cpusmall` and `defaultinstancetype`. This `defaultinstancetype` definition overrides the `defaultinstancetype` definition created when the Kubernetes cluster was attached to the AzureML workspace.
+
+If a training or inference workload is submitted without an instance type, it uses the default
+instance type. To specify a default instance type for a Kubernetes cluster, create an instance
+type with the name `defaultinstancetype`. It's automatically recognized as the default.
+
+## Select instance type to submit training job
+
+To select an instance type for a training job using the CLI (v2), specify its name as part of the
+`resources` properties section in the job YAML. For example:
+```yaml
+command: python -c "print('Hello world!')"
+environment:
+ docker:
+ image: python
+compute: azureml:<compute_target_name>
+resources:
+ instance_type: <instance_type_name>
+```
+
+In the above example, replace `<compute_target_name>` with the name of your Kubernetes compute
+target and `<instance_type_name>` with the name of the instance type you wish to select. If no `instance_type` property is specified, the system uses `defaultinstancetype` to submit the job.
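+
+Once the job YAML is ready, you can submit it with the CLI (v2). The following is a minimal sketch; `job.yml` is a placeholder file name for the YAML above:
+
+```azurecli
+# Submit the training job defined in the YAML above (file name is a placeholder).
+az ml job create --file ./job.yml --workspace-name <workspace-name> --resource-group <resource-group>
+```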
+
+## Select instance type to deploy model
+
+To select an instance type for a model deployment using the CLI (v2), specify its name for the `instance_type` property in the deployment YAML. For example:
+
+```yaml
+deployments:
+ - name: blue
+ app_insights_enabled: true
+ model:
+ name: sklearn_mnist_model
+ version: 1
+ local_path: ./model/sklearn_mnist_model.pkl
+ code_configuration:
+ code:
+ local_path: ./script/
+ scoring_script: score.py
+ instance_type: <instance_type_name>
+ environment:
+ name: sklearn-mnist-env
+ version: 1
+ path: .
+ conda_file: file:./model/conda.yml
+ docker:
+ image: mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210727.v1
+```
+
+In the above example, replace `<instance_type_name>` with the name of the instance type you wish to select. If no `instance_type` property is specified, the system uses `defaultinstancetype` to deploy the model.
+
+### Appendix I: AzureML extension components
+
+When the AzureML extension deployment completes, it creates the following resources in the Azure cloud:
+
+ |Resource name |Resource type | Description |
+ |--|--|--|
+ |Azure Service Bus|Azure resource|Used to sync nodes and cluster resource information to Azure Machine Learning services regularly.|
+ |Azure Relay|Azure resource|Route traffic between Azure Machine Learning services and the Kubernetes cluster.|
+
+When the AzureML extension deployment completes, it creates the following resources in the Kubernetes cluster, depending on the AzureML extension deployment scenario:
+
+ |Resource name |Resource type |Training |Inference |Training and Inference| Description | Communication with cloud service|
+ |--|--|--|--|--|--|--|
+ |relayserver|Kubernetes deployment|**&check;**|**&check;**|**&check;**|The entry component to receive and sync the message with cloud.|Receive the request of job creation, model deployment from cloud service; sync the job status with cloud service.|
+ |gateway|Kubernetes deployment|**&check;**|**&check;**|**&check;**|The gateway to communicate and send data back and forth.|Send nodes and cluster resource information to cloud services.|
+ |aml-operator|Kubernetes deployment|**&check;**|N/A|**&check;**|Manage the lifecycle of training jobs.| Token exchange with cloud token service for authentication and authorization of Azure Container Registry used by training job.|
+ |metrics-controller-manager|Kubernetes deployment|**&check;**|**&check;**|**&check;**|Manage the configuration for Prometheus|N/A|
+ |{EXTENSION-NAME}-kube-state-metrics|Kubernetes deployment|**&check;**|**&check;**|**&check;**|Export the cluster-related metrics to Prometheus.|N/A|
+ |{EXTENSION-NAME}-prometheus-operator|Kubernetes deployment|**&check;**|**&check;**|**&check;**| Provide Kubernetes native deployment and management of Prometheus and related monitoring components.|N/A|
+ |amlarc-identity-controller|Kubernetes deployment|N/A|**&check;**|**&check;**|Request and renew Azure Blob/Azure Container Registry token through managed identity.|Token exchange with cloud token service for authentication and authorization of Azure Container Registry and Azure Blob used by inference/model deployment.|
+ |amlarc-identity-proxy|Kubernetes deployment|N/A|**&check;**|**&check;**|Request and renew Azure Blob/Azure Container Registry token through managed identity.|Token exchange with cloud token service for authentication and authorization of Azure Container Registry and Azure Blob used by inference/model deployment.|
+ |azureml-fe|Kubernetes deployment|N/A|**&check;**|**&check;**|The front-end component that routes incoming inference requests to deployed services.|azureml-fe service logs are sent to Azure Blob.|
+ |inference-operator-controller-manager|Kubernetes deployment|N/A|**&check;**|**&check;**|Manage the lifecycle of inference endpoints. |N/A|
+ |cluster-status-reporter|Kubernetes deployment|**&check;**|**&check;**|**&check;**|Gather the cluster information, like cpu/gpu/memory usage, cluster healthiness.|N/A|
 |csi-blob-controller|Kubernetes deployment|**&check;**|N/A|**&check;**|Azure Blob Storage Container Storage Interface (CSI) driver.|N/A|
 |csi-blob-node|Kubernetes daemonset|**&check;**|N/A|**&check;**|Azure Blob Storage Container Storage Interface (CSI) driver.|N/A|
+ |fluent-bit|Kubernetes daemonset|**&check;**|**&check;**|**&check;**|Gather the components' system log.| Upload the components' system log to cloud.|
+ |k8s-host-device-plugin-daemonset|Kubernetes daemonset|**&check;**|**&check;**|**&check;**|Expose fuse to pods on each node.|N/A|
+ |prometheus-prom-prometheus|Kubernetes statefulset|**&check;**|**&check;**|**&check;**|Gather and send job metrics to cloud.|Send job metrics like cpu/gpu/memory utilization to cloud.|
+ |volcano-admission|Kubernetes deployment|**&check;**|N/A|**&check;**|Volcano admission webhook.|N/A|
+ |volcano-controllers|Kubernetes deployment|**&check;**|N/A|**&check;**|Manage the lifecycle of Azure Machine Learning training job pods.|N/A|
 |volcano-scheduler |Kubernetes deployment|**&check;**|N/A|**&check;**|Used for in-cluster job scheduling.|N/A|
+
+> [!IMPORTANT]
 > * The Azure Service Bus and Azure Relay resources are created under the same resource group as the Arc cluster resource. These resources are used to communicate with the Kubernetes cluster, and modifying them will break attached compute targets.
 > * By default, the Kubernetes deployment resources are randomly deployed to one or more nodes of the cluster, and daemonset resources are deployed to ALL nodes. If you want to restrict the extension deployment to specific nodes, use the `nodeSelector` configuration setting described below.
+
+> [!NOTE]
 > * **{EXTENSION-NAME}:** is the extension name specified with the ```az k8s-extension create --name``` CLI command.
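+
+To verify which of these components are running in your cluster, you can list the resources in the `azureml` namespace. This is an optional verification sketch; it assumes the extension components are installed in the `azureml` namespace, as referenced by the `sslSecret` setting in Appendix II:
+
+```bash
+# List the deployments, daemonsets, and statefulsets created by the AzureML extension
+kubectl get deployments,daemonsets,statefulsets -n azureml
+```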
+
+### Appendix II: Review AzureML deployment configuration settings
+
+For AzureML extension deployment configurations, use ```--config``` or ```--config-protected``` to specify a list of ```key=value``` pairs. The following is the list of configuration settings available for the different AzureML extension deployment scenarios; an example deployment command is shown after the tables below.
+
 |Configuration Setting Key Name |Description |Training |Inference |Training and Inference |
+ |--|--|--|--|--|
+ |```enableTraining``` |```True``` or ```False```, default ```False```. **Must** be set to ```True``` for AzureML extension deployment with Machine Learning model training support. | **&check;**| N/A | **&check;** |
+ | ```enableInference``` |```True``` or ```False```, default ```False```. **Must** be set to ```True``` for AzureML extension deployment with Machine Learning inference support. |N/A| **&check;** | **&check;** |
+ | ```allowInsecureConnections``` |```True``` or ```False```, default False. This **must** be set to ```True``` for AzureML extension deployment with HTTP endpoints support for inference, when ```sslCertPemFile``` and ```sslKeyPemFile``` are not provided. |N/A| Optional | Optional |
+ | ```inferenceRouterServiceType``` |```loadBalancer``` or ```nodePort```. **Must** be set for ```enableInference=true```. | N/A| **&check;** | **&check;** |
 | ```internalLoadBalancerProvider``` | This config is currently only applicable for Azure Kubernetes Service (AKS) clusters. **Must** be set to ```azure``` to allow the inference router to use an internal load balancer. | N/A| Optional | Optional |
 |```sslSecret```| The name of the Kubernetes secret under the azureml namespace that stores `cert.pem` (PEM-encoded SSL cert) and `key.pem` (PEM-encoded SSL key). Required for AzureML extension deployment with HTTPS endpoint support for inference, when ``allowInsecureConnections`` is set to ```False```. Use this config or provide static cert and key file paths in the configuration protected settings. |N/A| Optional | Optional |
 |```sslCname``` |An SSL CNAME to use if enabling SSL validation on the cluster. | N/A | N/A | Required when using an HTTPS endpoint |
 | ```inferenceLoadBalancerHA``` |```True``` or ```False```, default ```True```. By default, the AzureML extension deploys three ingress controller replicas for high availability, which requires at least three workers in a cluster. Set this value to ```False``` if you have fewer than three workers and want to deploy the AzureML extension for development and testing only; in this case, it deploys only one ingress controller replica. | N/A| Optional | Optional |
 |```openshift``` | ```True``` or ```False```, default ```False```. Set to ```True``` if you deploy the AzureML extension on an ARO or OCP cluster. The deployment process automatically compiles a policy package and loads it on each node so that AzureML services can function properly. | Optional| Optional | Optional |
 |```nodeSelector``` | Set the node selector so that the extension components and the training/inference workloads are only deployed to nodes that have all of the specified selectors. Usage: `nodeSelector.key=value`; multiple selectors are supported. Example: `nodeSelector.node-purpose=worker nodeSelector.node-region=eastus`| Optional| Optional | Optional |
 |```installNvidiaDevicePlugin``` | ```True``` or ```False```, default ```False```. The [Nvidia Device Plugin](https://github.com/NVIDIA/k8s-device-plugin#nvidia-device-plugin-for-kubernetes) is required for ML workloads on Nvidia GPU hardware. By default, the AzureML extension deployment doesn't install the Nvidia Device Plugin, regardless of whether the Kubernetes cluster has GPU hardware. Set this configuration setting to ```True``` to have the extension install the Nvidia Device Plugin, but make sure the [Prerequisites](https://github.com/NVIDIA/k8s-device-plugin#prerequisites) are met beforehand. | Optional |Optional |Optional |
 |```blobCsiDriverEnabled```| ```True``` or ```False```, default ```True```. The Blob CSI driver is required for ML workloads. Set this configuration setting to ```False``` if it's already installed. | Optional |Optional |Optional |
 |```reuseExistingPromOp```|```True``` or ```False```, default ```False```. The AzureML extension needs the Prometheus operator to manage Prometheus. Set to ```True``` to reuse an existing Prometheus operator. Compatible [kube-prometheus-stack](https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/README.md) helm chart versions are from 9.3.4 to 30.0.1.| Optional| Optional | Optional |
 |```volcanoScheduler.enable```| ```True``` or ```False```, default ```True```. The AzureML extension needs the Volcano scheduler to schedule jobs. Set to ```False``` to reuse an existing Volcano scheduler. Supported Volcano scheduler versions are 1.4 and 1.5. | Optional| N/A | Optional |
 |```logAnalyticsWS``` |```True``` or ```False```, default ```False```. The AzureML extension integrates with Azure Log Analytics workspaces to provide log viewing and analysis capability through a Log Analytics workspace. This setting must be explicitly set to ```True``` if you want to use this capability. Log Analytics workspace costs may apply. |N/A |Optional |Optional |
 |```installDcgmExporter``` |```True``` or ```False```, default ```False```. Dcgm-exporter is used to collect GPU metrics for GPU jobs. Set the ```installDcgmExporter``` flag to ```True``` to enable the built-in dcgm-exporter. |N/A |Optional |Optional |
+
 |Configuration Protected Setting Key Name |Description |Training |Inference |Training and Inference |
+ |--|--|--|--|--|
+ | ```sslCertPemFile```, ```sslKeyPemFile``` |Path to SSL certificate and key file (PEM-encoded), required for AzureML extension deployment with HTTPS endpoint support for inference, when ``allowInsecureConnections`` is set to False. | N/A| Optional | Optional |
+
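+The following is a minimal sketch of an extension deployment command that combines several of these settings. The extension type `Microsoft.AzureML.Kubernetes`, the cluster type `connectedClusters`, and all names are assumptions or placeholders; adjust them for your environment:
+
+```azurecli
+# Deploy the AzureML extension with training and inference enabled (names are placeholders).
+az k8s-extension create --name <extension-name> --extension-type Microsoft.AzureML.Kubernetes \
+  --cluster-type connectedClusters --cluster-name <cluster-name> --resource-group <resource-group> \
+  --scope cluster \
+  --config enableTraining=True enableInference=True inferenceRouterServiceType=loadBalancer allowInsecureConnections=True
+```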
+
+## Next steps
+
+- [Train models with CLI (v2)](how-to-train-cli.md)
+- [Configure and submit training runs](how-to-set-up-training-targets.md)
+- [Tune hyperparameters](how-to-tune-hyperparameters.md)
+- [Train a model using Scikit-learn](how-to-train-scikit-learn.md)
+- [Train a TensorFlow model](how-to-train-tensorflow.md)
+- [Train a PyTorch model](how-to-train-pytorch.md)
+- [Train using Azure Machine Learning pipelines](how-to-create-machine-learning-pipelines.md)
+- [Train a model on-premises with an outbound proxy server](../azure-arc/kubernetes/quickstart-connect-cluster.md#connect-using-an-outbound-proxy-server)
machine-learning How To Authenticate Online Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-authenticate-online-endpoint.md
+
+ Title: Authenticate to an online endpoint
+
+description: Learn to authenticate clients to an Azure Machine Learning online endpoint
++++++ Last updated : 05/10/2022++++
+# Key and token-based authentication for online endpoints
+
+When consuming an online endpoint from a client, you can use either a _key_ or a _token_. Keys don't expire, but tokens do.
+
+## Configure the endpoint authentication
+
+You can set the authentication type when you create an online endpoint. Set the `auth_mode` to `key` or `aml_token` depending on which one you want to use. The default value is `key`.
+
+When deploying using CLI v2, set this value in the [online endpoint YAML file](reference-yaml-endpoint-online.md). For more information, see [How to deploy an online endpoint](how-to-deploy-managed-online-endpoints.md).
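+
+After setting `auth_mode` in the endpoint YAML, you create or update the endpoint from that file. The following is a minimal sketch; `endpoint.yml` is a placeholder file name:
+
+```azurecli
+# endpoint.yml is a placeholder; it contains auth_mode: key or auth_mode: aml_token.
+az ml online-endpoint create --file endpoint.yml --resource-group <resource-group> --workspace-name <workspace-name>
+```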
+
+When deploying using the Python SDK v2 (preview), use the [OnlineEndpoint](/python/api/azure-ai-ml/azure.ai.ml.entities.onlineendpoint) class.
+
+## Get the key or token
+
+Access to retrieve the key or token for an online endpoint is restricted by Azure role-based access controls (Azure RBAC). To retrieve the authentication key or token, your security principal (user identity or service principal) must be assigned one of the following roles:
+
+* Owner
+* Contributor
+* A custom role that allows `Microsoft.MachineLearningServices/workspaces/onlineEndpoints/token/action` and `Microsoft.MachineLearningServices/workspaces/onlineEndpoints/listkeys/action`.
+
+For more information on using Azure RBAC with Azure Machine Learning, see [Manage access to Azure Machine Learning](how-to-assign-roles.md).
+
+To get the key, use [az ml online-endpoint get-credentials](/cli/azure/ml/online-endpoint#az-ml-online-endpoint-get-credentials). This command returns a JSON document that contains the key or token. __Keys__ will be returned in the `primaryKey` and `secondaryKey` fields. __Tokens__ will be returned in the `accessToken` field. Additionally, the `expiryTimeUtc` and `refreshAfterTimeUtc` fields contain the token expiration and refresh times. The following example shows how to use the `--query` parameter to return only the primary key:
++
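+
+As a minimal sketch of such a command, with placeholder endpoint, resource group, and workspace names:
+
+```azurecli
+# Return only the primary key of the endpoint
+az ml online-endpoint get-credentials --name <endpoint-name> --resource-group <resource-group> --workspace-name <workspace-name> --query primaryKey -o tsv
+```
+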
+## Score data using the token
+
+When calling the online endpoint for scoring, pass the key or token in the authorization header. The following example shows how to use the curl utility to call the online endpoint using a key (if using a token, replace `$ENDPOINT_KEY` with the token value):
++
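+
+As a minimal sketch, assuming `SCORING_URI` holds the endpoint's scoring URI and `ENDPOINT_KEY` holds the key (or token), and using a placeholder request file:
+
+```bash
+# SCORING_URI and ENDPOINT_KEY are placeholder environment variables; sample-request.json is a placeholder file.
+curl --request POST "$SCORING_URI" \
+  --header "Authorization: Bearer $ENDPOINT_KEY" \
+  --header "Content-Type: application/json" \
+  --data @sample-request.json
+```
+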
+## Next steps
+
+* [Deploy a machine learning model using an online endpoint](how-to-deploy-managed-online-endpoints.md)
+* [Enable network isolation for managed online endpoints](how-to-secure-online-endpoint.md)
machine-learning How To Authenticate Web Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-authenticate-web-service.md
Last updated 10/21/2021 -+ # Configure authentication for models deployed as web services + Azure Machine Learning allows you to deploy your trained machine learning models as web services. In this article, learn how to configure authentication for these deployments. The model deployments created by Azure Machine Learning can be configured to use one of two authentication methods:
print(token)
## Next steps
-For more information on authenticating to a deployed model, see [Create a client for a model deployed as a web service](how-to-consume-web-service.md).
+For more information on authenticating to a deployed model, see [Create a client for a model deployed as a web service](how-to-consume-web-service.md).
machine-learning How To Auto Train Forecast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-forecast.md
-+ Last updated 11/18/2021 # Set up AutoML to train a time-series forecasting model with Python + In this article, you learn how to set up AutoML training for time-series forecasting models with Azure Machine Learning automated ML in the [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/). To do so, you:
For time series forecasting, only **Rolling Origin Cross Validation (ROCV)** is
You can also bring your own validation data, learn more in [Configure data splits and cross-validation in AutoML](how-to-configure-cross-validation-data-splits.md#provide-validation-data). ```python automl_config = AutoMLConfig(task='forecasting',
The following code,
* Sets the `time_column_name` to the `day_datetime` field in the data set. * Sets the `forecast_horizon` to 50 in order to predict for the entire test set. - ```python from azureml.automl.core.forecasting_parameters import ForecastingParameters
The following table summarizes the available settings for `short_series_handling
When you have your `AutoMLConfig` object ready, you can submit the experiment. After the model finishes, retrieve the best run iteration. + ```python ws = Workspace.from_config() experiment = Experiment(ws, "Tutorial-automl-forecasting")
machine-learning How To Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-image-models.md
Title: Set up AutoML for computer vision
-description: Set up Azure Machine Learning automated ML to train computer vision models with the Azure Machine Learning Python SDK (preview).
+description: Set up Azure Machine Learning automated ML to train computer vision models with the CLI v2 and Python SDK v2 (preview).
+ Last updated 01/18/2022-
-# Customer intent: I'm a data scientist with ML knowledge in the computer vision space, looking to build ML models using image data in Azure Machine Learning with full control of the model algorithm, hyperparameters, and training and deployment environments.
-
+#Customer intent: I'm a data scientist with ML knowledge in the computer vision space, looking to build ML models using image data in Azure Machine Learning with full control of the model algorithm, hyperparameters, and training and deployment environments.
-# Set up AutoML to train computer vision models with Python (preview)
+# Set up AutoML to train computer vision models
+
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
+> * [v1](v1/how-to-auto-train-image-models-v1.md)
+> * [v2 (current version)](how-to-auto-train-image-models.md)
> [!IMPORTANT] > This feature is currently in public preview. This preview version is provided without a service-level agreement. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-In this article, you learn how to train computer vision models on image data with automated ML in the [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/).
+In this article, you learn how to train computer vision models on image data with automated ML, using either the Azure Machine Learning CLI extension v2 or the Azure Machine Learning Python SDK v2 (preview).
Automated ML supports model training for computer vision tasks like image classification, object detection, and instance segmentation. Authoring AutoML models for computer vision tasks is currently supported via the Azure Machine Learning Python SDK. The resulting experimentation runs, models, and outputs are accessible from the Azure Machine Learning studio UI. [Learn more about automated ml for computer vision tasks on image data](concept-automated-ml.md).
-> [!NOTE]
-> Automated ML for computer vision tasks is only available via the Azure Machine Learning Python SDK.
- ## Prerequisites
+# [CLI v2](#tab/CLI-v2)
+
+* An Azure Machine Learning workspace. To create the workspace, see [Create an Azure Machine Learning workspace](how-to-manage-workspace.md).
+* Install and [set up the CLI (v2)](how-to-configure-cli.md#prerequisites), and make sure you install the `ml` extension (a sketch of the install command follows this list).
+
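+As a minimal sketch, the `ml` extension can typically be added to an existing Azure CLI installation as shown below; see the linked setup guide for the authoritative steps:
+
+```azurecli
+# Add the Azure ML CLI v2 extension (assumes the Azure CLI is already installed).
+az extension add --name ml
+```
+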
+# [Python SDK v2 (preview)](#tab/SDK-v2)
* An Azure Machine Learning workspace. To create the workspace, see [Create an Azure Machine Learning workspace](how-to-manage-workspace.md).
-* The Azure Machine Learning Python SDK installed.
- To install the SDK you can either,
+* The Azure Machine Learning Python SDK v2 (preview) installed.
+
+ To install the SDK you can either,
* Create a compute instance, which automatically installs the SDK and is pre-configured for ML workflows. For more information, see [Create and manage an Azure Machine Learning compute instance](how-to-create-manage-compute-instance.md).
- * [Install the `automl` package yourself](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment), which includes the [default installation](/python/api/overview/azure/ml/install#default-install) of the SDK.
-
+ * Use the following commands to install Azure ML Python SDK v2:
+ * Uninstall previous preview version:
+ ```python
+ pip uninstall azure-ai-ml
+ ```
+ * Install the Azure ML Python SDK v2:
+ ```python
+ pip install azure-ai-ml
+ ```
+
> [!NOTE] > Only Python 3.6 and 3.7 are compatible with automated ML support for computer vision tasks. ++ ## Select your task type
-Automated ML for images supports the following task types:
+Automated ML for images supports the following task types:
-Task type | AutoMLImage config syntax
+Task type | AutoML Job syntax
|
- image classification | `ImageTask.IMAGE_CLASSIFICATION`
-image classification multi-label | `ImageTask.IMAGE_CLASSIFICATION_MULTILABEL`
-image object detection | `ImageTask.IMAGE_OBJECT_DETECTION`
-image instance segmentation| `ImageTask.IMAGE_INSTANCE_SEGMENTATION`
+ image classification | CLI v2: `image_classification` <br> SDK v2: `image_classification()`
+image classification multi-label | CLI v2: `image_classification_multilabel` <br> SDK v2: `image_classification_multilabel()`
+image object detection | CLI v2: `image_object_detection` <br> SDK v2: `image_object_detection()`
+image instance segmentation| CLI v2: `image_instance_segmentation` <br> SDK v2: `image_instance_segmentation()`
+
+# [CLI v2](#tab/CLI-v2)
-This task type is a required parameter and is passed in using the `task` parameter in the [`AutoMLImageConfig`](/python/api/azureml-train-automl-client/azureml.train.automl.automlimageconfig.automlimageconfig).
+
+This task type is a required parameter and can be set using the `task` key.
+
+For example:
+
+```yaml
+task: image_object_detection
+```
+
+# [Python SDK v2 (preview)](#tab/SDK-v2)
+Based on the task type, you can create AutoML image jobs using the task-specific `automl` functions.
For example: ```python
-from azureml.train.automl import AutoMLImageConfig
-from azureml.automl.core.shared.constants import ImageTask
-automl_image_config = AutoMLImageConfig(task=ImageTask.IMAGE_OBJECT_DETECTION)
+from azure.ai.ml import automl
+image_object_detection_job = automl.image_object_detection()
```+ ## Training and validation data
-In order to generate computer vision models, you need to bring labeled image data as input for model training in the form of an Azure Machine Learning [TabularDataset](/python/api/azureml-core/azureml.data.tabulardataset). You can either use a `TabularDataset` that you have [exported from a data labeling project](./how-to-create-image-labeling-projects.md#export-the-labels), or create a new `TabularDataset` with your labeled training data.
+In order to generate computer vision models, you need to bring labeled image data as input for model training in the form of an `MLTable`. You can create an `MLTable` from training data in JSONL format.
If your training data is in a different format (like, pascal VOC or COCO), you can apply the helper scripts included with the sample notebooks to convert the data to JSONL. Learn more about how to [prepare data for computer vision tasks with automated ML](how-to-prepare-datasets-for-automl-images.md).
+> [!Note]
+> The training data needs to have at least 10 images in order to be able to submit an AutoML run.
+ > [!Warning]
-> Creation of TabularDatasets is only supported using the SDK to create datasets from data in JSONL format for this capability. Creating the dataset via UI is not supported at this time.
+> For this capability, creating an `MLTable` is only supported using the SDK and CLI, from data in JSONL format. Creating the `MLTable` via the UI is not supported at this time.
-> [!Note]
-> The training dataset needs to have at least 10 images in order to be able to submit an AutoML run.
### JSONL schema samples
The following is a sample JSONL file for image classification:
### Consume data
-Once your data is in JSONL format, you can create a TabularDataset with the following code:
+Once your data is in JSONL format, you can create training and validation `MLTable` as shown below.
-```python
-ws = Workspace.from_config()
-ds = ws.get_default_datastore()
-from azureml.core import Dataset
-
-training_dataset = Dataset.Tabular.from_json_lines_files(
- path=ds.path('odFridgeObjects/odFridgeObjects.jsonl'),
- set_column_types={'image_url': DataType.to_stream(ds.workspace)})
-training_dataset = training_dataset.register(workspace=ws, name=training_dataset_name)
+
+Automated ML doesn't impose any constraints on training or validation data size for computer vision tasks. Maximum dataset size is only limited by the storage layer behind the dataset (i.e. blob store). There's no minimum number of images or labels. However, we recommend starting with a minimum of 10-15 samples per label to ensure the output model is sufficiently trained. The higher the total number of labels/classes, the more samples you need per label.
+
+# [CLI v2](#tab/CLI-v2)
++
+Training data is a required parameter and is passed in using the `training_data` key, as shown in the YAML below. You can optionally specify another MLTable as validation data with the `validation_data` key. If no validation data is specified, 20% of your training data is used for validation by default, unless you pass the `validation_data_size` argument with a different value.
+
+The target column name is a required parameter and is used as the target for the supervised ML task. It's passed in using the `target_column_name` key in the data section. For example,
+
+```yaml
+target_column_name: label
+training_data:
+ path: data/training-mltable-folder
+ type: mltable
+validation_data:
+ path: data/validation-mltable-folder
+ type: mltable
```
-Automated ML does not impose any constraints on training or validation data size for computer vision tasks. Maximum dataset size is only limited by the storage layer behind the dataset (i.e. blob store). There is no minimum number of images or labels. However, we recommend to start with a minimum of 10-15 samples per label to ensure the output model is sufficiently trained. The higher the total number of labels/classes, the more samples you need per label.
+# [Python SDK v2 (preview)](#tab/SDK-v2)
-Training data is a required and is passed in using the `training_data` parameter. You can optionally specify another TabularDataset as a validation dataset to be used for your model with the `validation_data` parameter of the AutoMLImageConfig. If no validation dataset is specified, 20% of your training data will be used for validation by default, unless you pass `validation_size` argument with a different value.
+You can create data inputs from training and validation MLTable from your local directory or cloud storage with the following code:
-For example:
+[!Notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=data-load)]
+
+Training data is a required parameter and is passed in using the `training_data` parameter of the task-specific `automl` type function. You can optionally specify another MLTable as validation data with the `validation_data` parameter. If no validation data is specified, 20% of your training data is used for validation by default, unless you pass the `validation_data_size` argument with a different value.
+
+The target column name is a required parameter and is used as the target for the supervised ML task. It's passed in using the `target_column_name` parameter of the task-specific `automl` function. For example,
```python
-from azureml.train.automl import AutoMLImageConfig
-automl_image_config = AutoMLImageConfig(training_data=training_dataset)
+from azure.ai.ml import automl
+image_object_detection_job = automl.image_object_detection(
+ training_data=my_training_data_input,
+ validation_data=my_validation_data_input,
+ target_column_name="label"
+)
```+ ## Compute to run experiment Provide a [compute target](concept-azure-machine-learning-architecture.md#compute-targets) for automated ML to conduct model training. Automated ML models for computer vision tasks require GPU SKUs and support NC and ND families. We recommend the NCsv3-series (with v100 GPUs) for faster training. A compute target with a multi-GPU VM SKU leverages multiple GPUs to also speed up training. Additionally, when you set up a compute target with multiple nodes you can conduct faster model training through parallelism when tuning hyperparameters for your model.
-The compute target is a required parameter and is passed in using the `compute_target` parameter of the `AutoMLImageConfig`. For example:
+The compute target is passed in using the `compute` parameter. For example:
+
+# [CLI v2](#tab/CLI-v2)
++
+```yaml
+compute: azureml:gpu-cluster
+```
+
+# [Python SDK v2 (preview)](#tab/SDK-v2)
```python
-from azureml.train.automl import AutoMLImageConfig
-automl_image_config = AutoMLImageConfig(compute_target=compute_target)
+from azure.ai.ml import automl
+
+compute_name = "gpu-cluster"
+image_object_detection_job = automl.image_object_detection(
+ compute=compute_name,
+)
```+ ## Configure model algorithms and hyperparameters
In general, deep learning model performance can often improve with more data. Da
Before doing a large sweep to search for the optimal models and hyperparameters, we recommend trying the default values to get a first baseline. Next, you can explore multiple hyperparameters for the same model before sweeping over multiple models and their parameters. This way, you can employ a more iterative approach, because with multiple models and multiple hyperparameters for each, the search space grows exponentially and you need more iterations to find optimal configurations.
-If you wish to use the default hyperparameter values for a given algorithm (say yolov5), you can specify the config for your AutoML Image runs as follows:
+# [CLI v2](#tab/CLI-v2)
-```python
-from azureml.train.automl import AutoMLImageConfig
-from azureml.train.hyperdrive import GridParameterSampling, choice
-from azureml.automl.core.shared.constants import ImageTask
-
-automl_image_config_yolov5 = AutoMLImageConfig(task=ImageTask.IMAGE_OBJECT_DETECTION,
- compute_target=compute_target,
- training_data=training_dataset,
- validation_data=validation_dataset,
- hyperparameter_sampling=GridParameterSampling({'model_name': choice('yolov5')}),
- iterations=1)
+
+If you wish to use the default hyperparameter values for a given algorithm (say yolov5), you can specify it using the `model_name` key in the `image_model` section. For example,
+
+```yaml
+image_model:
+ model_name: "yolov5"
```
+# [Python SDK v2 (preview)](#tab/SDK-v2)
+
+If you wish to use the default hyperparameter values for a given algorithm (say yolov5), you can specify it using the `model_name` parameter in the `set_image_model` method of the task-specific `automl` job. For example,
+```python
+image_object_detection_job.set_image_model(model_name="yolov5")
+```
+ Once you've built a baseline model, you might want to optimize model performance in order to sweep over the model algorithm and hyperparameter space. You can use the following sample config to sweep over the hyperparameters for each algorithm, choosing from a range of values for learning_rate, optimizer, lr_scheduler, etc., to generate a model with the optimal primary metric. If hyperparameter values are not specified, then default values are used for the specified algorithm. ### Primary metric
The primary metric used for model optimization and hyperparameter tuning depends
### Experiment budget
-You can optionally specify the maximum time budget for your AutoML Vision experiment using `experiment_timeout_hours` - the amount of time in hours before the experiment terminates. If none specified, default experiment timeout is seven days (maximum 60 days).
+You can optionally specify the maximum time budget for your AutoML Vision training job using the `timeout` parameter in the `limits` section, which is the amount of time in minutes before the experiment terminates. If none is specified, the default experiment timeout is seven days (maximum 60 days). For example,
+# [CLI v2](#tab/CLI-v2)
++
+```yaml
+limits:
+ timeout: 60
+```
+
+# [Python SDK v2 (preview)](#tab/SDK-v2)
+[!Notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=limit-settings)]
++ ## Sweeping hyperparameters for your model
You can define the model algorithms and hyperparameters to sweep in the paramete
### Sampling methods for the sweep
-When sweeping hyperparameters, you need to specify the sampling method to use for sweeping over the defined parameter space. Currently, the following sampling methods are supported with the `hyperparameter_sampling` parameter:
+When sweeping hyperparameters, you need to specify the sampling method to use for sweeping over the defined parameter space. Currently, the following sampling methods are supported with the `sampling_algorithm` parameter:
-* [Random sampling](how-to-tune-hyperparameters.md#random-sampling)
-* [Grid sampling](how-to-tune-hyperparameters.md#grid-sampling)
-* [Bayesian sampling](how-to-tune-hyperparameters.md#bayesian-sampling)
+| Sampling type | AutoML Job syntax |
+|-||
+|[Random Sampling](how-to-tune-hyperparameters.md#random-sampling)| `random` |
+|[Grid Sampling](how-to-tune-hyperparameters.md#grid-sampling)| `grid` |
+|[Bayesian Sampling](how-to-tune-hyperparameters.md#bayesian-sampling)| `bayesian` |
> [!NOTE] > Currently only random sampling supports conditional hyperparameter spaces. ### Early termination policies
-You can automatically end poorly performing runs with an early termination policy. Early termination improves computational efficiency, saving compute resources that would have been otherwise spent on less promising configurations. Automated ML for images supports the following early termination policies using the `early_termination_policy` parameter. If no termination policy is specified, all configurations are run to completion.
+You can automatically end poorly performing runs with an early termination policy. Early termination improves computational efficiency, saving compute resources that would have been otherwise spent on less promising configurations. Automated ML for images supports the following early termination policies using the `early_termination` parameter. If no termination policy is specified, all configurations are run to completion.
-* [Bandit policy](how-to-tune-hyperparameters.md#bandit-policy)
-* [Median stopping policy](how-to-tune-hyperparameters.md#median-stopping-policy)
-* [Truncation selection policy](how-to-tune-hyperparameters.md#truncation-selection-policy)
+
+| Early termination policy | AutoML Job syntax |
+|-||
+|[Bandit policy](how-to-tune-hyperparameters.md#bandit-policy)| CLI v2: `bandit` <br> SDK v2: `BanditPolicy()` |
+|[Median stopping policy](how-to-tune-hyperparameters.md#median-stopping-policy)| CLI v2: `median_stopping` <br> SDK v2: `MedianStoppingPolicy()` |
+|[Truncation selection policy](how-to-tune-hyperparameters.md#truncation-selection-policy)| CLI v2: `truncation_selection` <br> SDK v2: `TruncationSelectionPolicy()` |
Learn more about [how to configure the early termination policy for your hyperparameter sweep](how-to-tune-hyperparameters.md#early-termination). ### Resources for the sweep
-You can control the resources spent on your hyperparameter sweep by specifying the `iterations` and the `max_concurrent_iterations` for the sweep.
+You can control the resources spent on your hyperparameter sweep by specifying the `max_trials` and the `max_concurrent_trials` for the sweep.
+> [!NOTE]
+> For a complete sweep configuration sample, please refer to this [tutorial](tutorial-auto-train-image-models.md#hyperparameter-sweeping-for-image-tasks).
Parameter | Detail --|-
-`iterations` | Required parameter for maximum number of configurations to sweep. Must be an integer between 1 and 1000. When exploring just the default hyperparameters for a given model algorithm, set this parameter to 1.
-`max_concurrent_iterations`| Maximum number of runs that can run concurrently. If not specified, all runs launch in parallel. If specified, must be an integer between 1 and 100. <br><br> **NOTE:** The number of concurrent runs is gated on the resources available in the specified compute target. Ensure that the compute target has the available resources for the desired concurrency.
+`max_trials` | Required parameter for maximum number of configurations to sweep. Must be an integer between 1 and 1000. When exploring just the default hyperparameters for a given model algorithm, set this parameter to 1.
+`max_concurrent_trials`| Maximum number of runs that can run concurrently. If not specified, all runs launch in parallel. If specified, must be an integer between 1 and 100. <br><br> **NOTE:** The number of concurrent runs is gated on the resources available in the specified compute target. Ensure that the compute target has the available resources for the desired concurrency.
+
+You can configure all the sweep-related parameters as shown in the example below.
+
+# [CLI v2](#tab/CLI-v2)
++
+```yaml
+sweep:
+ limits:
+ max_trials: 10
+ max_concurrent_trials: 2
+ sampling_algorithm: random
+ early_termination:
+ type: bandit
+ evaluation_interval: 2
+ slack_factor: 0.2
+ delay_evaluation: 6
+```
+# [Python SDK v2 (preview)](#tab/SDK-v2)
-> [!NOTE]
-> For a complete sweep configuration sample, please refer to this [tutorial](tutorial-auto-train-image-models.md#hyperparameter-sweeping-for-image-tasks).
+[!Notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=sweep-settings)]
++
-### Arguments
-You can pass fixed settings or parameters that don't change during the parameter space sweep as arguments. Arguments are passed in name-value pairs and the name must be prefixed by a double dash.
+### Fixed settings
-```python
-from azureml.train.automl import AutoMLImageConfig
-arguments = ["--early_stopping", 1, "--evaluation_frequency", 2]
-automl_image_config = AutoMLImageConfig(arguments=arguments)
+You can pass fixed settings or parameters that don't change during the parameter space sweep as shown below.
+
+# [CLI v2](#tab/CLI-v2)
++
+```yaml
+image_model:
+ early_stopping: True
+ evaluation_frequency: 1
``` +
+# [Python SDK v2 (preview)](#tab/SDK-v2)
+
+[!Notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=pass-arguments)]
++++ ## Incremental training (optional) Once the training run is done, you have the option to further train the model by loading the trained model checkpoint. You can either use the same dataset or a different one for incremental training.
-There are two available options for incremental training. You can,
-
-* Pass the run ID that you want to load the checkpoint from.
-* Pass the checkpoints through a FileDataset.
### Pass the checkpoint via run ID
-To find the run ID from the desired model, you can use the following code.
-```python
-# find a run id to get a model checkpoint from
-target_checkpoint_run = automl_image_run.get_best_child()
-```
+You can pass the run ID that you want to load the checkpoint from.
-To pass a checkpoint via the run ID, you need to use the `checkpoint_run_id` parameter.
+# [CLI v2](#tab/CLI-v2)
-```python
-automl_image_config = AutoMLImageConfig(task='image-object-detection',
- compute_target=compute_target,
- training_data=training_dataset,
- validation_data=validation_dataset,
- checkpoint_run_id= target_checkpoint_run.id,
- primary_metric='mean_average_precision',
- **tuning_settings)
-
-automl_image_run = experiment.submit(automl_image_config)
-automl_image_run.wait_for_completion(wait_post_processing=True)
+
+```yaml
+image_model:
+ checkpoint_run_id : "target_checkpoint_run_id"
```
-### Pass the checkpoint via FileDataset
-To pass a checkpoint via a FileDataset, you need to use the `checkpoint_dataset_id` and `checkpoint_filename` parameters.
+
+# [Python SDK v2 (preview)](#tab/SDK-v2)
+
+To find the run ID from the desired model, you can use the following code.
```python
-# download the checkpoint from the previous run
-model_name = "outputs/model.pt"
-model_local = "checkpoints/model_yolo.pt"
-target_checkpoint_run.download_file(name=model_name, output_file_path=model_local)
-
-# upload the checkpoint to the blob store
-ds.upload(src_dir="checkpoints", target_path='checkpoints')
-
-# create a FileDatset for the checkpoint and register it with your workspace
-ds_path = ds.path('checkpoints/model_yolo.pt')
-checkpoint_yolo = Dataset.File.from_files(path=ds_path)
-checkpoint_yolo = checkpoint_yolo.register(workspace=ws, name='yolo_checkpoint')
-
-automl_image_config = AutoMLImageConfig(task='image-object-detection',
- compute_target=compute_target,
- training_data=training_dataset,
- validation_data=validation_dataset,
- checkpoint_dataset_id= checkpoint_yolo.id,
- checkpoint_filename='model_yolo.pt',
- primary_metric='mean_average_precision',
- **tuning_settings)
-
-automl_image_run = experiment.submit(automl_image_config)
-automl_image_run.wait_for_completion(wait_post_processing=True)
+# find a run id to get a model checkpoint from
+import mlflow
-```
+# Obtain the tracking URL from MLClient
+MLFLOW_TRACKING_URI = ml_client.workspaces.get(
+ name=ml_client.workspace_name
+).mlflow_tracking_uri
+mlflow.set_tracking_uri(MLFLOW_TRACKING_URI)
-## Submit the run
+from mlflow.tracking.client import MlflowClient
+
+mlflow_client = MlflowClient()
+mlflow_parent_run = mlflow_client.get_run(automl_job.name)
+
+# Fetch the id of the best automl child run.
+target_checkpoint_run_id = mlflow_parent_run.data.tags["automl_best_child_run_id"]
+```
-When you have your `AutoMLImageConfig` object ready, you can submit the experiment.
+To pass a checkpoint via the run ID, you need to use the `checkpoint_run_id` parameter in the `set_image_model` function.
```python
-ws = Workspace.from_config()
-experiment = Experiment(ws, "Tutorial-automl-image-object-detection")
-automl_image_run = experiment.submit(automl_image_config)
+image_object_detection_job = automl.image_object_detection(
+ compute=compute_name,
+ experiment_name=exp_name,
+ training_data=my_training_data_input,
+ validation_data=my_validation_data_input,
+ target_column_name="label",
+ primary_metric="mean_average_precision",
+ tags={"my_custom_tag": "My custom value"},
+)
+
+image_object_detection_job.set_image_model(checkpoint_run_id=target_checkpoint_run_id)
+
+automl_image_job_incremental = ml_client.jobs.create_or_update(
+ image_object_detection_job
+)
```
-## Outputs and evaluation metrics
+
-The automated ML training runs generates output model files, evaluation metrics, logs and deployment artifacts like the scoring file and the environment file which can be viewed from the outputs and logs and metrics tab of the child runs.
-> [!TIP]
-> Check how to navigate to the run results from the [View run results](how-to-understand-automated-ml.md#view-run-results) section.
+## Submit the AutoML job
-For definitions and examples of the performance charts and metrics provided for each run, see [Evaluate automated machine learning experiment results](how-to-understand-automated-ml.md#metrics-for-image-models-preview)
-## Register and deploy model
-Once the run completes, you can register the model that was created from the best run (configuration that resulted in the best primary metric)
+# [CLI v2](#tab/CLI-v2)
+
-```Python
-best_child_run = automl_image_run.get_best_child()
-model_name = best_child_run.properties['model_name']
-model = best_child_run.register_model(model_name = model_name, model_path='outputs/model.pt')
+To submit your AutoML job, run the following CLI v2 command with the path to your .yml file, and your workspace name, resource group, and subscription ID.
+
+```azurecli
+az ml job create --file ./hello-automl-job-basic.yml --workspace-name [YOUR_AZURE_WORKSPACE] --resource-group [YOUR_AZURE_RESOURCE_GROUP] --subscription [YOUR_AZURE_SUBSCRIPTION]
```
-After you register the model you want to use, you can deploy it as a web service on [Azure Container Instances (ACI)](how-to-deploy-azure-container-instance.md) or [Azure Kubernetes Service (AKS)](how-to-deploy-azure-kubernetes-service.md). ACI is the perfect option for testing deployments, while AKS is better suited for high-scale, production usage.
+# [Python SDK v2 (preview)](#tab/SDK-v2)
-This example deploys the model as a web service in AKS. To deploy in AKS, first create an AKS compute cluster or use an existing AKS cluster. You can use either GPU or CPU VM SKUs for your deployment cluster.
+When you've configured your AutoML Job to the desired settings, you can submit the job.
-```python
+[!Notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=submit-run)]
+
-from azureml.core.compute import ComputeTarget, AksCompute
-from azureml.exceptions import ComputeTargetException
-
-# Choose a name for your cluster
-aks_name = "cluster-aks-gpu"
-
-# Check to see if the cluster already exists
-try:
- aks_target = ComputeTarget(workspace=ws, name=aks_name)
- print('Found existing compute target')
-except ComputeTargetException:
- print('Creating a new compute target...')
- # Provision AKS cluster with GPU machine
- prov_config = AksCompute.provisioning_configuration(vm_size="STANDARD_NC6",
- location="eastus2")
- # Create the cluster
- aks_target = ComputeTarget.create(workspace=ws,
- name=aks_name,
- provisioning_configuration=prov_config)
- aks_target.wait_for_completion(show_output=True)
-```
+## Outputs and evaluation metrics
-Next, you can define the inference configuration, that describes how to set up the web-service containing your model. You can use the scoring script and the environment from the training run in your inference config.
+The automated ML training runs generate output model files, evaluation metrics, logs, and deployment artifacts like the scoring file and the environment file. These can be viewed from the outputs, logs, and metrics tab of the child runs.
-```python
-from azureml.core.model import InferenceConfig
+> [!TIP]
+> Check how to navigate to the run results from the [View run results](how-to-understand-automated-ml.md#view-run-results) section.
-best_child_run.download_file('outputs/scoring_file_v_1_0_0.py', output_file_path='score.py')
-environment = best_child_run.get_environment()
-inference_config = InferenceConfig(entry_script='score.py', environment=environment)
-```
+For definitions and examples of the performance charts and metrics provided for each run, see [Evaluate automated machine learning experiment results](how-to-understand-automated-ml.md#metrics-for-image-models-preview)
-You can then deploy the model as an AKS web service.
+## Register and deploy model
-```python
-# Deploy the model from the best run as an AKS web service
-from azureml.core.webservice import AksWebservice
-from azureml.core.webservice import Webservice
-from azureml.core.model import Model
-from azureml.core.environment import Environment
-
-aks_config = AksWebservice.deploy_configuration(autoscale_enabled=True,
- cpu_cores=1,
- memory_gb=50,
- enable_app_insights=True)
-
-aks_service = Model.deploy(ws,
- models=[model],
- inference_config=inference_config,
- deployment_config=aks_config,
- deployment_target=aks_target,
- name='automl-image-test',
- overwrite=True)
-aks_service.wait_for_deployment(show_output=True)
-print(aks_service.state)
-```
+Once the run completes, you can register the model that was created from the best run (configuration that resulted in the best primary metric).
-Alternatively, you can deploy the model from the [Azure Machine Learning studio UI](https://ml.azure.com/).
+You can deploy the model from the [Azure Machine Learning studio UI](https://ml.azure.com/).
Navigate to the model you wish to deploy in the **Models** tab of the automated ML run and select the **Deploy**. ![Select model from the automl runs in studio UI ](./media/how-to-auto-train-image-models/select-model.png)
You can configure the model deployment endpoint name and the inferencing cluster
![Deploy configuration](./media/how-to-auto-train-image-models/deploy-image-model.png)
-### Update inference configuration
+## Code examples
+# [CLI v2](#tab/CLI-v2)
-In the previous step, we downloaded the scoring file `outputs/scoring_file_v_1_0_0.py` from the best model into a local `score.py` file and we used it to create an `InferenceConfig` object. This script can be modified to change the model specific inference settings if needed after it has been downloaded and before creating the `InferenceConfig`. For instance, this is the code section that initializes the model in the scoring file:
-
-```
-...
-def init():
- ...
- try:
- logger.info("Loading model from path: {}.".format(model_path))
- model_settings = {...}
- model = load_model(TASK_TYPE, model_path, **model_settings)
- logger.info("Loading successful.")
- except Exception as e:
- logging_utilities.log_traceback(e, logger)
- raise
-...
-```
+Review detailed code examples and use cases in the [azureml-examples repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/sdk-preview/cli/jobs/automl-standalone-jobs).
-Each of the tasks (and some models) have a set of parameters in the `model_settings` dictionary. By default, we use the same values for the parameters that were used during the training and validation. Depending on the behavior that we need when using the model for inference, we can change these parameters. Below you can find a list of parameters for each task type and model.
-| Task | Parameter name | Default |
-| |- | |
-|Image classification (multi-class and multi-label) | `valid_resize_size`<br>`valid_crop_size` | 256<br>224 |
-|Object detection | `min_size`<br>`max_size`<br>`box_score_thresh`<br>`nms_iou_thresh`<br>`box_detections_per_img` | 600<br>1333<br>0.3<br>0.5<br>100 |
-|Object detection using `yolov5`| `img_size`<br>`model_size`<br>`box_score_thresh`<br>`nms_iou_thresh` | 640<br>medium<br>0.1<br>0.5 |
-|Instance segmentation| `min_size`<br>`max_size`<br>`box_score_thresh`<br>`nms_iou_thresh`<br>`box_detections_per_img`<br>`mask_pixel_score_threshold`<br>`max_number_of_polygon_points`<br>`export_as_image`<br>`image_type` | 600<br>1333<br>0.3<br>0.5<br>100<br>0.5<br>100<br>False<br>JPG|
-
-For a detailed description on task specific hyperparameters, please refer to [Hyperparameters for computer vision tasks in automated machine learning](reference-automl-images-hyperparameters.md).
-
-If you want to use tiling, and want to control tiling behavior, the following parameters are available: `tile_grid_size`, `tile_overlap_ratio` and `tile_predictions_nms_thresh`. For more details on these parameters please check [Train a small object detection model using AutoML](how-to-use-automl-small-object-detect.md).
-
-## Example notebooks
-Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml). Please check the folders with 'image-' prefix for samples specific to building computer vision models.
+# [Python SDK v2 (preview)](#tab/SDK-v2)
+Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/sdk-preview/sdk/jobs/automl-standalone-jobs).
+ ## Next steps * [Tutorial: Train an object detection model (preview) with AutoML and Python](tutorial-auto-train-image-models.md).
-* [Make predictions with ONNX on computer vision models from AutoML](how-to-inference-onnx-automl-image-models.md)
* [Troubleshoot automated ML experiments](how-to-troubleshoot-auto-ml.md).
machine-learning How To Auto Train Nlp Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-nlp-models.md
Title: Set up AutoML for NLP
-description: Set up Azure Machine Learning automated ML to train natural language processing models with the Azure Machine Learning Python SDK.
+description: Set up Azure Machine Learning automated ML to train natural language processing models with the Azure Machine Learning Python SDK or the Azure Machine Learning CLI.
-+ Last updated 03/15/2022-
-# Customer intent: I'm a data scientist with ML knowledge in the natural language processing space, looking to build ML models using language specific data in Azure Machine Learning with full control of the model algorithm, hyperparameters, and training and deployment environments.
+#Customer intent: I'm a data scientist with ML knowledge in the natural language processing space, looking to build ML models using language specific data in Azure Machine Learning with full control of the model algorithm, hyperparameters, and training and deployment environments.
-# Set up AutoML to train a natural language processing model with Python (preview)
+# Set up AutoML to train a natural language processing model (preview)
+> [!div class="op_single_selector" title1="Select the version of the developer platform of Azure Machine Learning you are using:"]
+> * [v1](./v1/how-to-auto-train-nlp-models-v1.md)
+> * [v2 (current version)](how-to-auto-train-nlp-models.md)
+
[!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)]
-In this article, you learn how to train natural language processing (NLP) models with [automated ML](concept-automated-ml.md) in the [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/).
+In this article, you learn how to train natural language processing (NLP) models with [automated ML](concept-automated-ml.md) in Azure Machine Learning. You can create NLP models with automated ML via the Azure Machine Learning Python SDK v2 (preview) or the Azure Machine Learning CLI v2.
Automated ML supports NLP which allows ML professionals and data scientists to bring their own text data and build custom models for tasks such as, multi-class text classification, multi-label text classification, and named entity recognition (NER).
You can seamlessly integrate with the [Azure Machine Learning data labeling](how
## Prerequisites
+# [CLI v2](#tab/CLI-v2)
++
+* Azure subscription. If you don't have an Azure subscription, sign up to try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
+
+* An Azure Machine Learning workspace with a GPU training compute. To create the workspace, see [Create an Azure Machine Learning workspace](how-to-manage-workspace.md). See [GPU optimized virtual machine sizes](../virtual-machines/sizes-gpu.md) for more details of GPU instances provided by Azure.
+
+ > [!WARNING]
+ > Support for multilingual models and the use of models with longer max sequence length is necessary for several NLP use cases, such as non-English datasets and longer range documents. As a result, these scenarios may require higher GPU memory for model training to succeed, such as the NC_v3 series or the ND series.
+
+* The Azure Machine Learning CLI v2 installed. For guidance to update and install the latest version, see the [Install and set up CLI (v2)](how-to-configure-cli.md).
+
+* This article assumes some familiarity with setting up an automated machine learning experiment. Follow the [how-to](how-to-configure-auto-train.md) to see the main automated machine learning experiment design patterns.
+
+# [Python SDK v2 (preview)](#tab/SDK-v2)
++
* Azure subscription. If you don't have an Azure subscription, sign up to try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.

* An Azure Machine Learning workspace with a GPU training compute. To create the workspace, see [Create an Azure Machine Learning workspace](how-to-manage-workspace.md). See [GPU optimized virtual machine sizes](../virtual-machines/sizes-gpu.md) for more details of GPU instances provided by Azure.
- > [!Warning]
+ > [!WARNING]
> Support for multilingual models and the use of models with longer max sequence length is necessary for several NLP use cases, such as non-English datasets and longer range documents. As a result, these scenarios may require higher GPU memory for model training to succeed, such as the NC_v3 series or the ND series.
-* The Azure Machine Learning Python SDK installed.
+* The Azure Machine Learning Python SDK v2 installed.
To install the SDK you can either,
* Create a compute instance, which automatically installs the SDK and is pre-configured for ML workflows. See [Create and manage an Azure Machine Learning compute instance](how-to-create-manage-compute-instance.md) for more information.
You can seamlessly integrate with the [Azure Machine Learning data labeling](how
* [Install the `automl` package yourself](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment), which includes the [default installation](/python/api/overview/azure/ml/install#default-install) of the SDK.

[!INCLUDE [automl-sdk-version](../../includes/machine-learning-automl-sdk-version.md)]
-
- > [!WARNING]
- > Python 3.8 is not compatible with `automl`.
-* This article assumes some familiarity with setting up an automated machine learning experiment. Follow the [tutorial](tutorial-auto-train-models.md) or [how-to](how-to-configure-auto-train.md) to see the main automated machine learning experiment design patterns.
+* This article assumes some familiarity with setting up an automated machine learning experiment. Follow the [how-to](how-to-configure-auto-train.md) to see the main automated machine learning experiment design patterns.
++
## Select your NLP task

Determine what NLP task you want to accomplish. Currently, automated ML supports the following deep neural network NLP tasks.
-Task |AutoMLConfig syntax| Description
+Task |AutoML job syntax| Description
-|-|
-Multi-class text classification | `task = 'text-classification'`| There are multiple possible classes and each sample can be classified as exactly one class. The task is to predict the correct class for each sample. <br> <br> For example, classifying a movie script as "Comedy" or "Romantic".
-Multi-label text classification | `task = 'text-classification-multilabel'`| There are multiple possible classes and each sample can be assigned any number of classes. The task is to predict all the classes for each sample<br> <br> For example, classifying a movie script as "Comedy", or "Romantic", or "Comedy and Romantic".
-Named Entity Recognition (NER)| `task = 'text-ner'`| There are multiple possible tags for tokens in sequences. The task is to predict the tags for all the tokens for each sequence. <br> <br> For example, extracting domain-specific entities from unstructured text, such as contracts or financial documents
+Multi-class text classification | CLI v2: `text_classification` <br> SDK v2 (preview): `text_classification()`| There are multiple possible classes and each sample can be classified as exactly one class. The task is to predict the correct class for each sample. <br> <br> For example, classifying a movie script as "Comedy" or "Romantic".
+Multi-label text classification | CLI v2: `text_classification_multilabel` <br> SDK v2 (preview): `text_classification_multilabel()`| There are multiple possible classes and each sample can be assigned any number of classes. The task is to predict all the classes for each sample. <br> <br> For example, classifying a movie script as "Comedy", or "Romantic", or "Comedy and Romantic".
+Named Entity Recognition (NER)| CLI v2: `text_ner` <br> SDK v2 (preview): `text_ner()`| There are multiple possible tags for tokens in sequences. The task is to predict the tags for all the tokens for each sequence. <br> <br> For example, extracting domain-specific entities from unstructured text, such as contracts or financial documents.
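As a rough illustration of how the SDK v2 (preview) task functions in the table are called, the sketch below configures a NER job. Treat it as a non-authoritative sketch: the factory-function name comes from the table above, the other parameter names mirror the `text_classification` example later in this article, and the compute name and MLTable paths are placeholders.

```python
from azure.ai.ml import automl
from azure.ai.ml.constants import AssetTypes
from azure.ai.ml.entities import JobInput

# Placeholder MLTable inputs; the expected data formats are described in the
# "Preparing data" section below.
my_training_data_input = JobInput(type=AssetTypes.MLTABLE, path="./data/training-mltable-folder")
my_validation_data_input = JobInput(type=AssetTypes.MLTABLE, path="./data/validation-mltable-folder")

# Sketch only: parameter names mirror the text_classification example later in this article.
text_ner_job = automl.text_ner(
    compute="gpu-cluster",                    # assumed name of an existing GPU compute target
    experiment_name="dpv2-nlp-text-ner-experiment",
    training_data=my_training_data_input,
    validation_data=my_validation_data_input,
)
```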
## Preparing data
-For NLP experiments in automated ML, you can bring an Azure Machine Learning dataset with `.csv` format for multi-class and multi-label classification tasks. For NER tasks, two-column `.txt` files that use a space as the separator and adhere to the CoNLL format are supported. The following sections provide additional detail for the data format accepted for each task.
+For NLP experiments in automated ML, you can bring your data in `.csv` format for multi-class and multi-label classification tasks. For NER tasks, two-column `.txt` files that use a space as the separator and adhere to the CoNLL format are supported. The following sections provide additional detail for the data format accepted for each task.
### Multi-class

For multi-class classification, the dataset can contain several text columns and exactly one label column. The following example has only one text column.
-```python
-
+```
text,labels
"I love watching Chicago Bulls games.","NBA"
"Tom Brady is a great player.","NFL"
For multi-label classification, the dataset columns would be the same as multi-c
Example data for multi-label in plain text format.
-```python
+```
text,labels
"I love watching Chicago Bulls games.","basketball"
"The four most popular leagues are NFL, MLB, NBA and NHL","football,baseball,basketball,hockey"
Unlike multi-class or multi-label, which takes `.csv` format datasets, named ent
For example,
-``` python
+```
Hudson B-loc
Square I-loc
is O
Before training, automated ML applies data validation checks on the input data t
Task | Data validation check |
-|-|
-All tasks | - Both training and validation sets must be provided <br> - At least 50 training samples are required
+All tasks | At least 50 training samples are required
Multi-class and Multi-label | The training data and validation data must have <br> - The same set of columns <br>- The same order of columns from left to right <br>- The same data type for columns with the same name <br>- At least two unique labels <br> - Unique column names within each dataset (For example, the training set can't have multiple columns named **Age**)
Multi-class only | None
Multi-label only | - The label column format must be in [accepted format](#multi-label) <br> - At least one sample should have 0 or 2+ labels, otherwise it should be a `multiclass` task <br> - All labels should be in `str` or `int` format, with no overlapping. You should not have both label `1` and label `'1'`
NER only | - The file should not start with an empty line <br> - Each line must
## Configure experiment
-Automated ML's NLP capability is triggered through `AutoMLConfig`, which is the same workflow for submitting automated ML experiments for classification, regression and forecasting tasks. You would set most of the parameters as you would for those experiments, such as `task`, `compute_target` and data inputs.
+Automated ML's NLP capability is triggered through task-specific `automl` type jobs, which use the same workflow as submitting automated ML experiments for classification, regression, and forecasting tasks. You set parameters as you would for those experiments, such as `experiment_name`, `compute_name`, and data inputs.
However, there are key differences:
-* You can ignore `primary_metric`, as it is only for reporting purpose. Currently, automated ML only trains one model per run for NLP and there is no model selection.
+* You can ignore `primary_metric`, as it is only for reporting purposes. Currently, automated ML only trains one model per run for NLP and there is no model selection.
* The `label_column_name` parameter is only required for multi-class and multi-label text classification tasks.
-* If the majority of the samples in your dataset contain more than 128 words, it's considered long range. For this scenario, you can enable the long range text option with the `enable_long_range_text=True` parameter in your `AutoMLConfig`. Doing so, helps improve model performance but requires a longer training times.
+* If the majority of the samples in your dataset contain more than 128 words, it's considered long range. For this scenario, you can enable the long range text option with the `enable_long_range_text=True` parameter in your task function (a sketch follows this list). Doing so helps improve model performance but requires longer training times.
  * If you enable long range text, then a GPU with higher memory is required, such as the [NCv3](../virtual-machines/ncv3-series.md) series or [ND](../virtual-machines/nd-series.md) series.
  * The `enable_long_range_text` parameter is only available for multi-class classification tasks.
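A minimal sketch of the long range text option, assuming `enable_long_range_text` is passed directly to the task function as described above (multi-class text classification only). The compute, experiment, and data input names reuse those from the SDK v2 configuration example later in this article.

```python
# Sketch only: the enable_long_range_text placement follows the description above; compute_name,
# exp_name, and the MLTable inputs are defined as in the configuration example later in this article.
text_classification_job = automl.text_classification(
    compute=compute_name,
    experiment_name=exp_name,
    training_data=my_training_data_input,
    validation_data=my_validation_data_input,
    target_column_name="Sentiment",
    primary_metric="accuracy",
    enable_long_range_text=True,  # assumed keyword; requires higher-memory GPUs (NCv3 or ND series)
)
```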
+# [CLI v2](#tab/CLI-v2)
-```python
-automl_settings = {
- "verbosity": logging.INFO,
- "enable_long_range_text": True, # # You only need to set this parameter if you want to enable the long-range text setting
-}
-
-automl_config = AutoMLConfig(
- task="text-classification",
- debug_log="automl_errors.log",
- compute_target=compute_target,
- training_data=train_dataset,
- validation_data=val_dataset,
- label_column_name=target_column_name,
- **automl_settings
+
+For CLI v2 AutoML jobs, you configure your experiment in a YAML file like the following.
+++
+# [Python SDK v2 (preview)](#tab/SDK-v2)
++
+For AutoML jobs via the SDK, you configure the job with the specific NLP task function. The following example demonstrates the configuration for `text_classification`.
+
+```Python
+# general job parameters
+compute_name = "gpu-cluster"
+exp_name = "dpv2-nlp-text-classification-experiment"
+
+# Create the AutoML job with the related factory-function.
+text_classification_job = automl.text_classification(
+ compute=compute_name,
+ # name="dpv2-nlp-text-classification-multiclass-job-01",
+ experiment_name=exp_name,
+ training_data=my_training_data_input,
+ validation_data=my_validation_data_input,
+ target_column_name="Sentiment",
+ primary_metric="accuracy",
+ tags={"my_custom_tag": "My custom value"},
+)
+
+text_classification_job.set_limits(timeout=120)
+```
+
### Language settings

As part of the NLP functionality, automated ML supports 104 languages leveraging language-specific and multilingual pre-trained text DNN models, such as the BERT family of models. Currently, language selection defaults to English.
- The following table summarizes what model is applied based on task type and language. See the full list of [supported languages and their codes](/python/api/azureml-automl-core/azureml.automl.core.constants.textdnnlanguages#azureml-automl-core-constants-textdnnlanguages-supported).
+The following table summarizes what model is applied based on task type and language. See the full list of [supported languages and their codes](/python/api/azureml-automl-core/azureml.automl.core.constants.textdnnlanguages#azureml-automl-core-constants-textdnnlanguages-supported).
Task type |Syntax for `dataset_language` | Text model algorithm
-|-|
-Multi-label text classification| `'eng'` <br> `'deu'` <br> `'mul'`| English&nbsp;BERT&nbsp;[uncased](https://huggingface.co/bert-base-uncased) <br> [German BERT](https://huggingface.co/bert-base-german-cased)<br> [Multilingual BERT](https://huggingface.co/bert-base-multilingual-cased) <br><br>For all other languages, automated ML applies multilingual BERT
-Multi-class text classification| `'eng'` <br> `'deu'` <br> `'mul'`| English&nbsp;BERT&nbsp;[cased](https://huggingface.co/bert-base-cased)<br> [Multilingual BERT](https://huggingface.co/bert-base-multilingual-cased) <br><br>For all other languages, automated ML applies multilingual BERT
-Named entity recognition (NER)| `'eng'` <br> `'deu'` <br> `'mul'`| English&nbsp;BERT&nbsp;[cased](https://huggingface.co/bert-base-cased) <br> [German BERT](https://huggingface.co/bert-base-german-cased)<br> [Multilingual BERT](https://huggingface.co/bert-base-multilingual-cased) <br><br>For all other languages, automated ML applies multilingual BERT
+Multi-label text classification|`"eng"` <br> `"deu"` <br> `"mul"`| English&nbsp;BERT&nbsp;[uncased](https://huggingface.co/bert-base-uncased) <br> [German BERT](https://huggingface.co/bert-base-german-cased)<br> [Multilingual BERT](https://huggingface.co/bert-base-multilingual-cased) <br><br>For all other languages, automated ML applies multilingual BERT
+Multi-class text classification|`"eng"` <br> `"deu"` <br> `"mul"`| English&nbsp;BERT&nbsp;[cased](https://huggingface.co/bert-base-cased)<br> [Multilingual BERT](https://huggingface.co/bert-base-multilingual-cased) <br><br>For all other languages, automated ML applies multilingual BERT
+Named entity recognition (NER)|`"eng"` <br> `"deu"` <br> `"mul"`| English&nbsp;BERT&nbsp;[cased](https://huggingface.co/bert-base-cased) <br> [German BERT](https://huggingface.co/bert-base-german-cased)<br> [Multilingual BERT](https://huggingface.co/bert-base-multilingual-cased) <br><br>For all other languages, automated ML applies multilingual BERT
+# [CLI v2](#tab/CLI-v2)
-You can specify your dataset language in your `FeaturizationConfig`. BERT is also used in the featurization process of automated ML experiment training, learn more about [BERT integration and featurization in automated ML](how-to-configure-auto-features.md#bert-integration-in-automated-ml).
-```python
-from azureml.automl.core.featurization import FeaturizationConfig
+You can specify your dataset language in the featurization section of your configuration YAML file. BERT is also used in the featurization process of automated ML experiment training. Learn more about [BERT integration and featurization in automated ML](how-to-configure-auto-features.md#bert-integration-in-automated-ml).
-featurization_config = FeaturizationConfig(dataset_language='{your language code}')
-automl_config = AutomlConfig("featurization": featurization_config)
+```azurecli
+featurization:
+ dataset_language: "eng"
```
+# [Python SDK v2 (preview)](#tab/SDK-v2)
++
+You can specify your dataset language with the `set_featurization()` method. BERT is also used in the featurization process of automated ML experiment training. Learn more about [BERT integration and featurization in automated ML](how-to-configure-auto-features.md#bert-integration-in-automated-ml).
+
+```python
+text_classification_job.set_featurization(dataset_language='eng')
+```
+++
## Distributed training
-You can also run your NLP experiments with distributed training on an Azure ML compute cluster. This is handled automatically by automated ML when the parameters `max_concurrent_iterations = number_of_vms` and `enable_distributed_dnn_training = True` are provided in your `AutoMLConfig` during experiment set up.
+You can also run your NLP experiments with distributed training on an Azure ML compute cluster.
+
+# [CLI v2](#tab/CLI-v2)
+++
+# [Python SDK v2 (preview)](#tab/SDK-v2)
++
+This is handled automatically by automated ML when the parameters `max_concurrent_iterations = number_of_vms` and `enable_distributed_dnn_training = True` are provided in your `AutoMLConfig` during experiment setup. Doing so schedules distributed training of the NLP models and automatically scales to every GPU on your virtual machine or cluster of virtual machines. The maximum number of virtual machines allowed is 32. The training is scheduled with a number of virtual machines that is a power of two.
```python
max_concurrent_iterations = number_of_vms
enable_distributed_dnn_training = True
```
-Doing so, schedules distributed training of the NLP models and automatically scales to every GPU on your virtual machine or cluster of virtual machines. The max number of virtual machines allowed is 32. The training is scheduled with number of virtual machines that is in powers of two.
++
+## Submit the AutoML job
+
+# [CLI v2](#tab/CLI-v2)
+
-## Example notebooks
+To submit your AutoML job, you can run the following CLI v2 command with the path to your .yml file, workspace name, resource group and subscription ID.
+
+```azurecli
+
+az ml job create --file ./hello-automl-job-basic.yml --workspace-name [YOUR_AZURE_WORKSPACE] --resource-group [YOUR_AZURE_RESOURCE_GROUP] --subscription [YOUR_AZURE_SUBSCRIPTION]
+```
+
+# [Python SDK v2 (preview)](#tab/SDK-v2)
++
+With the `MLClient` created earlier, you can run this `CommandJob` in the workspace.
+
+```python
+returned_job = ml_client.jobs.create_or_update(
+ text_classification_job
+) # submit the job to the backend
+
+print(f"Created job: {returned_job}")
+ml_client.jobs.stream(returned_job.name)
+```
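As an optional follow-up (a sketch using the standard `MLClient` job operations), you can re-fetch the job once streaming returns to inspect its final status:

```python
# Re-fetch the submitted job and check its status after streaming completes.
completed_job = ml_client.jobs.get(returned_job.name)
print(f"Job {completed_job.name} finished with status: {completed_job.status}")
```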
+++
+## Code examples
+
+# [CLI v2](#tab/CLI-v2)
+
+See the following sample YAML files for each NLP task.
+
+* [Multi-class text classification](https://github.com/Azure/azureml-examples/blob/april-sdk-preview/cli/jobs/automl-standalone-jobs/cli-automl-text-classification-newsgroup/cli-automl-text-classification-newsgroup.yml)
+* [Multi-label text classification](https://github.com/Azure/azureml-examples/blob/april-sdk-preview/cli/jobs/automl-standalone-jobs/cli-automl-text-classification-multilabel-paper-cat/cli-automl-text-classification-multilabel-paper-cat.yml)
+* [Named entity recognition](https://github.com/Azure/azureml-examples/blob/april-sdk-preview/cli/jobs/automl-standalone-jobs/cli-automl-text-ner-conll/cli-automl-text-ner-conll2003.yml)
+
+# [Python SDK v2 (preview)](#tab/SDK-v2)
+ See the sample notebooks for detailed code examples for each NLP task.
-* [Multi-class text classification](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/automl-nlp-multiclass/automl-nlp-text-classification-multiclass.ipynb)
+
+* [Multi-class text classification](https://github.com/Azure/azureml-examples/blob/april-sdk-preview/sdk/jobs/automl-standalone-jobs/automl-nlp-text-classification-multiclass-task-sentiment-analysis/automl-nlp-text-classification-multiclass-task-sentiment.ipynb)
* [Multi-label text classification](
-https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/automl-nlp-multilabel/automl-nlp-text-classification-multilabel.ipynb)
-* [Named entity recognition](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/automl-nlp-ner/automl-nlp-ner.ipynb)
+https://github.com/Azure/azureml-examples/blob/april-sdk-preview/sdk/jobs/automl-standalone-jobs/automl-nlp-text-classification-multilabel-task-paper-categorization/automl-nlp-text-classification-multilabel-task-paper-cat.ipynb)
+* [Named entity recognition](https://github.com/Azure/azureml-examples/blob/april-sdk-preview/sdk/jobs/automl-standalone-jobs/automl-nlp-text-named-entity-recognition-task/automl-nlp-text-ner-task.ipynb)
++
## Next steps
-+ Learn more about [how and where to deploy a model](how-to-deploy-and-where.md).
-+ [Troubleshoot automated ML experiments](how-to-troubleshoot-auto-ml.md).
+++ [Deploy AutoML models to an online (real-time inference) endpoint](how-to-deploy-automl-endpoint.md)
++ [Troubleshoot automated ML experiments](how-to-troubleshoot-auto-ml.md)
machine-learning How To Autoscale Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-autoscale-endpoints.md
--++ Last updated 04/27/2022-
-# Autoscale a managed online endpoint (preview)
+# Autoscale a managed online endpoint
Autoscale automatically runs the right amount of resources to handle the load on your application. [Managed endpoints](concept-endpoints.md) supports autoscaling through integration with the Azure Monitor autoscale feature.
Azure Monitor autoscaling supports a rich set of rules. You can configure metric
Today, you can manage autoscaling using either the Azure CLI, REST, ARM, or the browser-based Azure portal. Other Azure ML SDKs, such as the Python SDK, will add support over time. -
-## Prerequisites
+## Prerequisites
-* A deployed endpoint. [Deploy and score a machine learning model by using a managed online endpoint (preview)](how-to-deploy-managed-online-endpoints.md).
+* A deployed endpoint. [Deploy and score a machine learning model by using a managed online endpoint](how-to-deploy-managed-online-endpoints.md).
## Define an autoscale profile
To learn more about autoscale with Azure Monitor, see the following articles:
- [Understand autoscale settings](../azure-monitor/autoscale/autoscale-understanding-settings.md)
- [Overview of common autoscale patterns](../azure-monitor/autoscale/autoscale-common-scale-patterns.md)
- [Best practices for autoscale](../azure-monitor/autoscale/autoscale-best-practices.md)
-- [Troubleshooting Azure autoscale](../azure-monitor/autoscale/autoscale-troubleshoot.md)
+- [Troubleshooting Azure autoscale](../azure-monitor/autoscale/autoscale-troubleshoot.md)
machine-learning How To Change Storage Access Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-change-storage-access-key.md
Last updated 10/21/2021---+
# Regenerate storage account access keys

Learn how to change the access keys for Azure Storage accounts used by Azure Machine Learning. Azure Machine Learning can use storage accounts to store data or trained models.
For security purposes, you may need to change the access keys for an Azure Stora
* The [Azure Machine Learning SDK](/python/api/overview/azure/ml/install).
-* The [Azure Machine Learning CLI extension v1](reference-azure-machine-learning-cli.md).
+* The [Azure Machine Learning CLI extension v1](v1/reference-azure-machine-learning-cli.md).
> [!NOTE]
> The code snippets in this document were tested with version 1.0.83 of the Python SDK.
To update Azure Machine Learning to use the new key, use the following steps:
## Next steps
-For more information on registering datastores, see the [`Datastore`](/python/api/azureml-core/azureml.core.datastore%28class%29) class reference.
+For more information on registering datastores, see the [`Datastore`](/python/api/azureml-core/azureml.core.datastore%28class%29) class reference.
machine-learning How To Configure Auto Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-auto-features.md
-+ Last updated 01/24/2022
# Data featurization in automated machine learning
+
Learn about the data featurization settings in Azure Machine Learning, and how to customize those features for [automated machine learning experiments](concept-automated-ml.md).

## Feature engineering and featurization
automl_settings = {
* Learn more about [how and where to deploy a model](how-to-deploy-and-where.md).
-* Learn more about [how to train a regression model by using automated machine learning](tutorial-auto-train-models.md) or [how to train by using automated machine learning on a remote resource](concept-automated-ml.md#local-remote).
+* Learn more about [how to train a regression model by using automated machine learning](tutorial-auto-train-models.md) or [how to train by using automated machine learning on a remote resource](./v1/concept-automated-ml-v1.md#local-remote).
machine-learning How To Configure Auto Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-auto-train.md
Title: Set up AutoML with Python
+ Title: Set up AutoML with Python (v2)
-description: Learn how to set up an AutoML training run with the Azure Machine Learning Python SDK using Azure Machine Learning automated ML.
---
+description: Learn how to set up an AutoML training run with the Azure Machine Learning Python SDK v2 (preview) using Azure Machine Learning automated ML.
+++ Previously updated : 01/24/2021 Last updated : 04/20/2022 --+
-# Set up AutoML training with Python
+# Set up AutoML training with the Azure ML Python SDK v2 (preview)
+
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning Python you are using:"]
+> * [v1](./v1/how-to-configure-auto-train-v1.md)
+> * [v2 (current version)](how-to-configure-auto-train.md)
-In this guide, learn how to set up an automated machine learning, AutoML, training run with the [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro) using Azure Machine Learning automated ML. Automated ML picks an algorithm and hyperparameters for you and generates a model ready for deployment. This guide provides details of the various options that you can use to configure automated ML experiments.
-For an end to end example, see [Tutorial: AutoML- train regression model](tutorial-auto-train-models.md).
+In this guide, learn how to set up an automated machine learning, AutoML, training job with the [Azure Machine Learning Python SDK v2 (preview)](/python/api/overview/azure/ml/intro). Automated ML picks an algorithm and hyperparameters for you and generates a model ready for deployment. This guide provides details of the various options that you can use to configure automated ML experiments.
If you prefer a no-code experience, you can also [Set up no-code AutoML training in the Azure Machine Learning studio](how-to-use-automated-ml-for-ml-models.md).
+If you prefer to submit training jobs with the Azure Machine Learning CLI v2 extension, see [Train models with the CLI (v2)](how-to-train-cli.md).
+ ## Prerequisites
-For this article you need,
+For this article you need:
* An Azure Machine Learning workspace. To create the workspace, see [Create an Azure Machine Learning workspace](how-to-manage-workspace.md).
-* The Azure Machine Learning Python SDK installed.
+* The Azure Machine Learning Python SDK v2 (preview) installed.
To install the SDK you can either,
- * Create a compute instance, which automatically installs the SDK and is preconfigured for ML workflows. See [Create and manage an Azure Machine Learning compute instance](how-to-create-manage-compute-instance.md) for more information.
-
- * [Install the `automl` package yourself](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment), which includes the [default installation](/python/api/overview/azure/ml/install#default-install) of the SDK.
+ * Create a compute instance, which already has the latest AzureML Python SDK installed and is pre-configured for ML workflows. See [Create and manage an Azure Machine Learning compute instance](how-to-create-manage-compute-instance.md) for more information.
+
+ * Use the following commands to install the Azure ML Python SDK v2:
+ * Uninstall the previous preview version:
+ ```Python
+ pip uninstall azure-ai-ml
+ ```
+ * Install the Azure ML Python SDK v2:
+ ```Python
+ pip install azure-ai-ml
+ ```
[!INCLUDE [automl-sdk-version](../../includes/machine-learning-automl-sdk-version.md)]
-
- > [!WARNING]
- > Python 3.8 is not compatible with `automl`.
-## Select your experiment type
+## Setup your workspace
-Before you begin your experiment, you should determine the kind of machine learning problem you are solving. Automated machine learning supports task types of `classification`, `regression`, and `forecasting`. Learn more about [task types](concept-automated-ml.md#when-to-use-automl-classification-regression-forecasting-computer-vision--nlp).
+To connect to a workspace, you need to provide a subscription, resource group and workspace name. These details are used in the MLClient from `azure.ai.ml` to get a handle to the required Azure Machine Learning workspace.
->[!NOTE]
-> Support for computer vision tasks: image classification (multi-class and multi-label), object detection, and instance segmentation is available in public preview. [Learn more about computer vision tasks in automated ML](concept-automated-ml.md#computer-vision-preview).
->
->Support for natural language processing (NLP) tasks: image classification (multi-class and multi-label) and named entity recognition is available in public preview. [Learn more about NLP tasks in automated ML](concept-automated-ml.md#nlp).
->
-> These preview capabilities are provided without a service-level agreement. Certain features might not be supported or might have constrained functionality. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+In the following example, the default Azure authentication is used along with the default workspace configuration, or the configuration from any `config.json` file you might have copied into the folder structure. If no `config.json` is found, you need to manually provide the subscription_id, resource_group, and workspace name when creating the MLClient.
-The following code uses the `task` parameter in the `AutoMLConfig` constructor to specify the experiment type as `classification`.
-
-```python
-from azureml.train.automl import AutoMLConfig
+```Python
+from azure.identity import DefaultAzureCredential
+from azure.ai.ml import MLClient
+
+credential = DefaultAzureCredential()
+ml_client = None
+try:
+ ml_client = MLClient.from_config(credential)
+except Exception as ex:
+ print(ex)
+ # Enter details of your AML workspace
+ subscription_id = "<SUBSCRIPTION_ID>"
+ resource_group = "<RESOURCE_GROUP>"
+ workspace = "<AML_WORKSPACE_NAME>"
+ ml_client = MLClient(credential, subscription_id, resource_group, workspace)
-# task can be one of classification, regression, forecasting
-automl_config = AutoMLConfig(task = "classification")
```

## Data source and format
-Automated machine learning supports data that resides on your local desktop or in the cloud such as Azure Blob Storage. The data can be read into a **Pandas DataFrame** or an **Azure Machine Learning TabularDataset**. [Learn more about datasets](how-to-create-register-datasets.md).
+In order to provide training data to AutoML in SDK v2, you need to upload it into the cloud through an **MLTable**.
-Requirements for training data in machine learning:
+Requirements for loading data into an MLTable:
- Data must be in tabular form.
- The value to predict, target column, must be in the data.
-> [!IMPORTANT]
-> Automated ML experiments do not support training with datasets that use [identity-based data access](how-to-identity-based-data-access.md).
-
-**For remote experiments**, training data must be accessible from the remote compute. Automated ML only accepts [Azure Machine Learning TabularDatasets](/python/api/azureml-core/azureml.data.tabulardataset) when working on a remote compute.
+Training data must be accessible from the remote compute. Automated ML v2 (Python SDK and CLI/YAML) accepts MLTable data assets (v2), although for backwards compatibility it also supports v1 Tabular Datasets (a registered Tabular Dataset) through the same input dataset properties. However, the recommendation is to use the MLTable available in v2.
-Azure Machine Learning datasets expose functionality to:
+The following YAML code is the definition of an MLTable that could be placed in a local folder or a remote folder in the cloud, along with the data file (.CSV or Parquet file).
-* Easily transfer data from static files or URL sources into your workspace.
-* Make your data available to training scripts when running on cloud compute resources. See [How to train with datasets](how-to-train-with-datasets.md#mount-files-to-remote-compute-targets) for an example of using the `Dataset` class to mount data to your remote compute target.
+```
+# MLTable definition file
+
+paths:
+ - file: ./bank_marketing_train_data.csv
+transformations:
+ - read_delimited:
+ delimiter: ','
+ encoding: 'ascii'
+```
-The following code creates a TabularDataset from a web url. See [Create a TabularDataset](how-to-create-register-datasets.md#create-a-tabulardataset) for code examples on how to create datasets from other sources like local files and datastores.
+Therefore, the MLTable folder would have the MLTable definition file plus the data file (the bank_marketing_train_data.csv file in this case).
-```python
-from azureml.core.dataset import Dataset
-data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv"
-dataset = Dataset.Tabular.from_delimited_files(data)
-```
+The following shows two ways of creating an MLTable.
+- A. Providing your training data and MLTable definition file from your local folder; it will be automatically uploaded into the cloud (default Workspace Datastore).
+- B. Providing an MLTable already registered and uploaded into the cloud.
-**For local compute experiments**, we recommend pandas dataframes for faster processing times.
+```Python
+from azure.ai.ml.constants import AssetTypes
+from azure.ai.ml import automl
+from azure.ai.ml.entities import JobInput
- ```python
- import pandas as pd
- from sklearn.model_selection import train_test_split
+# A. Create MLTable for training data from your local directory
+my_training_data_input = JobInput(
+ type=AssetTypes.MLTABLE, path="./data/training-mltable-folder"
+)
- df = pd.read_csv("your-local-file.csv")
- train_data, test_data = train_test_split(df, test_size=0.1, random_state=42)
- label = "label-col-name"
- ```
+# B. Remote MLTable definition
+my_training_data_input = JobInput(type=AssetTypes.MLTABLE, path="azureml://datastores/workspaceblobstore/paths/Classification/Train")
+```
-## Training, validation, and test data
+### Training, validation, and test data
-You can specify separate **training data and validation data sets** directly in the `AutoMLConfig` constructor. Learn more about [how to configure training, validation, cross validation, and test data](how-to-configure-cross-validation-data-splits.md) for your AutoML experiments.
+You can specify separate **training data and validation data sets**, however training data must be provided to the `training_data` parameter in the factory function of your automated ML job.
If you do not explicitly specify a `validation_data` or `n_cross_validation` parameter, automated ML applies default techniques to determine how validation is performed. This determination depends on the number of rows in the dataset assigned to your `training_data` parameter.
If you do not explicitly specify a `validation_data` or `n_cross_validation` par
|**Larger&nbsp;than&nbsp;20,000&nbsp;rows**| Train/validation data split is applied. The default is to take 10% of the initial training data set as the validation set. In turn, that validation set is used for metrics calculation.
|**Smaller&nbsp;than&nbsp;20,000&nbsp;rows**| Cross-validation approach is applied. The default number of folds depends on the number of rows. <br> **If the dataset is less than 1,000 rows**, 10 folds are used. <br> **If the rows are between 1,000 and 20,000**, then three folds are used.
-
-> [!TIP]
-> You can upload **test data (preview)** to evaluate models that automated ML generated for you. These features are [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview capabilities, and may change at any time.
-> Learn how to:
-> * [Pass in test data to your AutoMLConfig object](how-to-configure-cross-validation-data-splits.md#provide-test-data-preview).
-> * [Test the models automated ML generated for your experiment](#test-models-preview).
->
-> If you prefer a no-code experience, see [step 12 in Set up AutoML with the studio UI](how-to-use-automated-ml-for-ml-models.md#create-and-run-experiment)
--
### Large data
-Automated ML supports a limited number of algorithms for training on large data that can successfully build models for big data on small virtual machines. Automated ML heuristics depend on properties such as data size, virtual machine memory size, experiment timeout and featurization settings to determine if these large data algorithms should be applied. [Learn more about what models are supported in automated ML](#supported-models).
+Automated ML supports a limited number of algorithms for training on large data that can successfully build models for big data on small virtual machines. Automated ML heuristics depend on properties such as data size, virtual machine memory size, experiment timeout and featurization settings to determine if these large data algorithms should be applied. [Learn more about what models are supported in automated ML](#supported-algorithms).
* For regression, [Online Gradient Descent Regressor](/python/api/nimbusml/nimbusml.linear_model.onlinegradientdescentregressor?preserve-view=true&view=nimbusml-py-latest) and [Fast Linear Regressor](/python/api/nimbusml/nimbusml.linear_model.fastlinearregressor?preserve-view=true&view=nimbusml-py-latest)
If you want to override these heuristics, apply the following settings:
Task | Setting | Notes
|||
-Block&nbsp;data streaming algorithms | `blocked_models` in your `AutoMLConfig` object and list the model(s) you don't want to use. | Results in either run failure or long run time
-Use&nbsp;data&nbsp;streaming&nbsp;algorithms| `allowed_models` in your `AutoMLConfig` object and list the model(s) you want to use.|
+Block&nbsp;data streaming algorithms | Use the `blocked_algorithms` parameter in the `set_training()` function and list the model(s) you don't want to use. | Results in either run failure or long run time
+Use&nbsp;data&nbsp;streaming&nbsp;algorithms| Use the `allowed_algorithms` parameter in the `set_training()` function and list the model(s) you want to use.|
Use&nbsp;data&nbsp;streaming&nbsp;algorithms <br> [(studio UI experiments)](how-to-use-automated-ml-for-ml-models.md#create-and-run-experiment)|Block all models except the big data algorithms you want to use. |

## Compute to run experiment
-Next determine where the model will be trained. An automated ML training experiment can run on the following compute options. Learn the [pros and cons of local and remote compute](concept-automated-ml.md#local-remote) options.
-* Your **local** machine such as a local desktop or laptop – Generally when you have a small dataset and you are still in the exploration stage. See [this notebook](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/local-run-classification-credit-card-fraud/auto-ml-classification-credit-card-fraud-local.ipynb) for a local compute example.
-
-* A **remote** machine in the cloud – [Azure Machine Learning Managed Compute](concept-compute-target.md#amlcompute) is a managed service that enables the ability to train machine learning models on clusters of Azure virtual machines. Compute instance is also supported as a compute target.
+Automated ML jobs with the Python SDK v2 (or CLI v2) are currently only supported on Azure ML remote compute (cluster or compute instance).
- See [this notebook](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb) for a remote example using Azure Machine Learning Managed Compute.
+Next determine where the model will be trained. An automated ML training experiment can run on the following compute options.
-* An **Azure Databricks cluster** in your Azure subscription. You can find more details in [Set up an Azure Databricks cluster for automated ML](how-to-configure-databricks-automl-environment.md). See this [GitHub site](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-databricks) for examples of notebooks with Azure Databricks.
+* Your **local** machine such as a local desktop or laptop – Generally when you have a small dataset and you are still in the exploration stage. See [this notebook](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/local-run-classification-credit-card-fraud/auto-ml-classification-credit-card-fraud-local.ipynb) for a local compute example.
+
<a name='configure-experiment'></a>

## Configure your experiment settings
-There are several options that you can use to configure your automated ML experiment. These parameters are set by instantiating an `AutoMLConfig` object. See the [AutoMLConfig class](/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig) for a full list of parameters.
+There are several options that you can use to configure your automated ML experiment. These configuration parameters are set in your task method. You can also set job training settings and [exit criteria](#exit-criteria) with the `set_training()` and `set_limits()` functions, respectively.
-The following example is for a classification task. The experiment uses AUC weighted as the [primary metric](#primary-metric) and has an experiment time out set to 30 minutes and 2 cross-validation folds.
+The following example shows the required parameters for a classification task that specifies accuracy as the [primary metric](#primary-metric) and 5 cross-validation folds.
```python
- automl_classifier=AutoMLConfig(task='classification',
- primary_metric='AUC_weighted',
- experiment_timeout_minutes=30,
- blocked_models=['XGBoostClassifier'],
- training_data=train_data,
- label_column_name=label,
- n_cross_validations=2)
-```
-You can also configure forecasting tasks, which requires extra setup. See the [Set up AutoML for time-series forecasting](how-to-auto-train-forecast.md) article for more details.
-```python
- time_series_settings = {
- 'time_column_name': time_column_name,
- 'time_series_id_column_names': time_series_id_column_names,
- 'forecast_horizon': n_test_periods
- }
-
- automl_config = AutoMLConfig(
- task = 'forecasting',
- debug_log='automl_oj_sales_errors.log',
- primary_metric='normalized_root_mean_squared_error',
- experiment_timeout_minutes=20,
- training_data=train_data,
- label_column_name=label,
- n_cross_validations=5,
- path=project_folder,
- verbosity=logging.INFO,
- **time_series_settings
- )
+classification_job = automl.classification(
+ compute=compute_name,
+ experiment_name=exp_name,
+ training_data=my_training_data_input,
+ target_column_name="y",
+ primary_metric="accuracy",
+ n_cross_validations=5,
+ enable_model_explainability=True,
+ tags={"my_custom_tag": "My custom value"}
+)
+
+# Limits are all optional
+
+classification_job.set_limits(
+ timeout=600, # timeout
+ trial_timeout=20, # trial_timeout
+ max_trials=max_trials,
+ # max_concurrent_trials = 4,
+ # max_cores_per_trial: -1,
+ enable_early_termination=True,
+)
+
+# Training properties are optional
+classification_job.set_training(
+ blocked_models=["LogisticRegression"],
+ enable_onnx_compatible_models=True
+)
```
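Forecasting tasks follow the same pattern. The sketch below is illustrative only: the `forecasting()` factory and `set_forecast_settings()` names are assumed from the SDK v2 preview pattern, the column names are hypothetical, and the time column and horizon play the role of the `time_series_settings` in the earlier v1 example.

```python
# Illustrative sketch, not an authoritative reference: function and setter names are assumed,
# and "demand"/"timestamp" are hypothetical column names.
forecasting_job = automl.forecasting(
    compute=compute_name,
    experiment_name=exp_name,
    training_data=my_training_data_input,
    target_column_name="demand",
    primary_metric="normalized_root_mean_squared_error",
    n_cross_validations=5,
)
forecasting_job.set_forecast_settings(
    time_column_name="timestamp",
    forecast_horizon=14,
)
forecasting_job.set_limits(timeout=120)
```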
-
-### Supported models
+
+### Select your machine learning task type (ML problem)
+
+Before you can submit your automated ML job, you need to determine the kind of machine learning problem you are solving. This problem determines which function your automated ML job uses and what model algorithms it applies.
+
+Automated ML supports tasks based on tabular data (classification, regression, forecasting), computer vision tasks (such as image classification and object detection), and natural language processing tasks (such as text classification and named entity recognition). Learn more about [task types](concept-automated-ml.md#when-to-use-automl-classification-regression-forecasting-computer-vision--nlp).
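For example, a regression job follows the same pattern as the classification example above; this is a sketch in which the `regression()` factory name is assumed to parallel `classification()` and the target column is hypothetical.

```python
# Sketch: same pattern as the classification example, using the regression task function.
# "price" is a hypothetical target column; r2_score comes from the regression metrics section below.
regression_job = automl.regression(
    compute=compute_name,
    experiment_name=exp_name,
    training_data=my_training_data_input,
    target_column_name="price",
    primary_metric="r2_score",
    n_cross_validations=5,
)
```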
++
+### Supported algorithms
Automated machine learning tries different models and algorithms during the automation and tuning process. As a user, there is no need for you to specify the algorithm.
-The three different `task` parameter values determine the list of algorithms, or models, to apply. Use the `allowed_models` or `blocked_models` parameters to further modify iterations with the available models to include or exclude.
-The following table summarizes the supported models by task type.
+The task method determines the list of algorithms/models to apply. Use the `allowed_algorithms` or `blocked_algorithms` parameters in the `set_training()` setter function to further modify iterations with the available models to include or exclude.
-> [!NOTE]
-> If you plan to export your automated ML created models to an [ONNX model](concept-onnx.md), only those algorithms indicated with an * (asterisk) are able to be converted to the ONNX format. Learn more about [converting models to ONNX](concept-automated-ml.md#use-with-onnx). <br> <br> Also note, ONNX only supports classification and regression tasks at this time.
->
-Classification | Regression | Time Series Forecasting
-|-- |-- |--
-[Logistic Regression](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.classification#logisticregression-logisticregression-)* | [Elastic Net](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.regression#elasticnet-elasticnet-)* | [AutoARIMA](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting#autoarima-autoarima-)
-[Light GBM](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.classification#lightgbmclassifier-lightgbm-)* | [Light GBM](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.regression#lightgbmregressor-lightgbm-)* | [Prophet](/python/api/azureml-automl-core/azureml.automl.core.shared.constants.supportedmodels.forecasting#prophet-prophet-)
-[Gradient Boosting](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.classification#gradientboosting-gradientboosting-)* | [Gradient Boosting](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.regression#gradientboostingregressor-gradientboosting-)* | [Elastic Net](https://scikit-learn.org/stable/modules/linear_model.html#elastic-net)
-[Decision Tree](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.classification#decisiontree-decisiontree-)* |[Decision Tree](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.regression#decisiontreeregressor-decisiontree-)* |[Light GBM](https://lightgbm.readthedocs.io/en/latest/https://docsupdatetracker.net/index.html)
-[K Nearest Neighbors](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.classification#knearestneighborsclassifier-knn-)* |[K Nearest Neighbors](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.regression#knearestneighborsregressor-knn-)* | [Gradient Boosting](https://scikit-learn.org/stable/modules/ensemble.html#regression)
-[Linear SVC](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.classification#linearsupportvectormachine-linearsvm-)* |[LARS Lasso](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.regression#lassolars-lassolars-)* | [Decision Tree](https://scikit-learn.org/stable/modules/tree.html#regression)
-[Support Vector Classification (SVC)](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.classification#supportvectormachine-svm-)* |[Stochastic Gradient Descent (SGD)](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.regression#sgdregressor-sgd-)* | [Arimax](/python/api/azureml-automl-core/azureml.automl.core.shared.constants.supportedmodels.forecasting#arimax-arimax-)
-[Random Forest](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.classification#randomforest-randomforest-)* | [Random Forest](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.regression#randomforestregressor-randomforest-) | [LARS Lasso](https://scikit-learn.org/stable/modules/linear_model.html#lars-lasso)
-[Extremely Randomized Trees](https://scikit-learn.org/stable/modules/ensemble.html#extremely-randomized-trees)* | [Extremely Randomized Trees](https://scikit-learn.org/stable/modules/ensemble.html#extremely-randomized-trees)* | [Stochastic Gradient Descent (SGD)](https://scikit-learn.org/stable/modules/sgd.html#regression)
-[Xgboost](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.classification#xgboostclassifier-xgboostclassifier-)* |[Xgboost](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.regression#xgboostregressor-xgboostregressor-)* | [Random Forest](https://scikit-learn.org/stable/modules/ensemble.html#random-forests)
-[Averaged Perceptron Classifier](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.classification#averagedperceptronclassifier-averagedperceptronclassifier-)| [Online Gradient Descent Regressor](/python/api/nimbusml/nimbusml.linear_model.onlinegradientdescentregressor?preserve-view=true&view=nimbusml-py-latest) | [Xgboost](https://xgboost.readthedocs.io/en/latest/parameter.html)
-[Naive Bayes](https://scikit-learn.org/stable/modules/naive_bayes.html#bernoulli-naive-bayes)* |[Fast Linear Regressor](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.regression#fastlinearregressor-fastlinearregressor-)| [ForecastTCN](/python/api/azureml-automl-core/azureml.automl.core.shared.constants.supportedmodels.forecasting#tcnforecaster-tcnforecaster-)
-[Stochastic Gradient Descent (SGD)](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.classification#sgdclassifier-sgd-)* || Naive
-[Linear SVM Classifier](/python/api/nimbusml/nimbusml.linear_model.linearsvmbinaryclassifier?preserve-view=true&view=nimbusml-py-latest)* || SeasonalNaive
-||| Average
-||| SeasonalAverage
-||| [ExponentialSmoothing](/python/api/azureml-automl-core/azureml.automl.core.shared.constants.supportedmodels.forecasting#exponentialsmoothing-exponentialsmoothing-)
+You can explore the supported algorithms per machine learning task in the following list of links.
+
+* Classification Algorithms (Tabular Data)
+* Regression Algorithms (Tabular Data)
+* Time Series Forecasting Algorithms (Tabular Data)
+* [Image Classification Multi-class Algorithms](how-to-auto-train-image-models.md#supported-model-algorithms)
+* [Image Classification Multi-label Algorithms](how-to-auto-train-image-models.md#supported-model-algorithms)
+* [Image Object Detection Algorithms](how-to-auto-train-image-models.md#supported-model-algorithms)
+* [Image Instance Segmentation Algorithms](how-to-auto-train-image-models.md#supported-model-algorithms)
+* [NLP Text Classification Multi-class Algorithms](how-to-auto-train-nlp-models.md#language-settings)
+* [NLP Text Classification Multi-label Algorithms](how-to-auto-train-nlp-models.md#language-settings)
+* [NLP Text Named Entity Recognition (NER) Algorithms](how-to-auto-train-nlp-models.md#language-settings)
+
+Follow [this link](https://github.com/Azure/azureml-examples/tree/sdk-preview/sdk/jobs/automl-standalone-jobs) for example notebooks of each task type.
### Primary metric
Choosing a primary metric for automated ML to optimize depends on many factors.
Learn about the specific definitions of these metrics in [Understand automated machine learning results](how-to-understand-automated-ml.md).
-#### Metrics for classification scenarios
+#### Metrics for classification multi-class scenarios
+
+These metrics apply for all classification scenarios, including tabular data, images/computer-vision and NLP-Text.
Threshold-dependent metrics, like `accuracy`, `recall_score_weighted`, `norm_macro_recall`, and `precision_score_weighted` may not optimize as well for datasets that are small, have very large class skew (class imbalance), or when the expected metric value is very close to 0.0 or 1.0. In those cases, `AUC_weighted` can be a better choice for the primary metric. After automated ML completes, you can choose the winning model based on the metric best suited to your business needs.
Threshold-dependent metrics, like `accuracy`, `recall_score_weighted`, `norm_mac
| `norm_macro_recall` | Churn prediction | | `precision_score_weighted` | |
+You can also see the *enums* to use in Python in this reference page for ClassificationPrimaryMetrics Enum
+
+#### Metrics for classification multi-label scenarios
+
+- For Text classification multi-label, currently 'Accuracy' is the only primary metric supported.
+
+- For Image classification multi-label, the primary metrics supported are defined in the ClassificationMultilabelPrimaryMetrics Enum
+
+#### Metrics for NLP Text NER (Named Entity Recognition) scenarios
+
+- For NLP Text NER (Named Entity Recognition), currently 'Accuracy' is the only primary metric supported.
#### Metrics for regression scenarios

`r2_score`, `normalized_mean_absolute_error` and `normalized_root_mean_squared_error` are all trying to minimize prediction errors. `r2_score` and `normalized_root_mean_squared_error` are both minimizing average squared errors while `normalized_mean_absolute_error` is minimizing the average absolute value of errors. Absolute value treats errors at all magnitudes alike and squared errors will have a much larger penalty for errors with larger absolute values. Depending on whether larger errors should be punished more or not, one can choose to optimize squared error or absolute error.
However, currently no primary metrics for regression addresses relative differen
| `r2_score` | Airline delay, Salary estimation, Bug resolution time |
| `normalized_mean_absolute_error` | |
-#### Metrics for time series forecasting scenarios
+You can also see the *enums* to use in Python in this reference page for RegressionPrimaryMetrics Enum
+
+#### Metrics for Time Series Forecasting scenarios
The recommendations are similar to those noted for regression scenarios.
The recommendations are similar to those noted for regression scenarios.
| `r2_score` | Price prediction (forecasting), Inventory optimization, Demand forecasting |
| `normalized_mean_absolute_error` | |
-### Data featurization
+You can also see the *enums* to use in Python in this reference page for ForecastingPrimaryMetrics Enum
+#### Metrics for Image Object Detection scenarios
-In every automated ML experiment, your data is automatically scaled and normalized to help *certain* algorithms that are sensitive to features that are on different scales. This scaling and normalization is referred to as featurization.
-See [Featurization in AutoML](how-to-configure-auto-features.md#) for more detail and code examples.
+- For Image Object Detection, the primary metrics supported are defined in the ObjectDetectionPrimaryMetrics Enum
-> [!NOTE]
-> Automated machine learning featurization steps (feature normalization, handling missing data, converting text to numeric, etc.) become part of the underlying model. When using the model for predictions, the same featurization steps applied during training are applied to your input data automatically.
+#### Metrics for Image Instance Segmentation scenarios
-When configuring your experiments in your `AutoMLConfig` object, you can enable/disable the setting `featurization`. The following table shows the accepted settings for featurization in the [AutoMLConfig object](/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig).
+- For Image Instance Segmentation scenarios, the primary metrics supported are defined in the InstanceSegmentationPrimaryMetrics Enum
-|Featurization Configuration | Description |
-| - | - |
-|`"featurization": 'auto'`| Indicates that as part of preprocessing, [data guardrails and featurization steps](how-to-configure-auto-features.md#featurization) are performed automatically. **Default setting**.|
-|`"featurization": 'off'`| Indicates featurization step shouldn't be done automatically.|
-|`"featurization":`&nbsp;`'FeaturizationConfig'`| Indicates customized featurization step should be used. [Learn how to customize featurization](how-to-configure-auto-features.md#customize-featurization).|
---
-<a name="ensemble"></a>
-
-### Ensemble configuration
-
-Ensemble models are enabled by default, and appear as the final run iterations in an AutoML run. Currently **VotingEnsemble** and **StackEnsemble** are supported.
-
-Voting implements soft-voting, which uses weighted averages. The stacking implementation uses a two layer implementation, where the first layer has the same models as the voting ensemble, and the second layer model is used to find the optimal combination of the models from the first layer.
-
-If you are using ONNX models, **or** have model-explainability enabled, stacking is disabled and only voting is utilized.
-
-Ensemble training can be disabled by using the `enable_voting_ensemble` and `enable_stack_ensemble` boolean parameters.
-
-```python
-automl_classifier = AutoMLConfig(
- task='classification',
- primary_metric='AUC_weighted',
- experiment_timeout_minutes=30,
- training_data=data_train,
- label_column_name=label,
- n_cross_validations=5,
- enable_voting_ensemble=False,
- enable_stack_ensemble=False
- )
-```
-
-To alter the default ensemble behavior, there are multiple default arguments that can be provided as `kwargs` in an `AutoMLConfig` object.
+### Data featurization
-> [!IMPORTANT]
-> The following parameters aren't explicit parameters of the AutoMLConfig class.
-* `ensemble_download_models_timeout_sec`: During **VotingEnsemble** and **StackEnsemble** model generation, multiple fitted models from the previous child runs are downloaded. If you encounter this error: `AutoMLEnsembleException: Could not find any models for running ensembling`, then you may need to provide more time for the models to be downloaded. The default value is 300 seconds for downloading these models in parallel and there is no maximum timeout limit. Configure this parameter with a higher value than 300 secs, if more time is needed.
+In every automated ML experiment, your data is automatically transformed into numbers and vectors of numbers (for example, text is converted to numeric values), and it's also scaled and normalized to help *certain* algorithms that are sensitive to features on different scales. This data transformation, scaling, and normalization is referred to as featurization.
- > [!NOTE]
- > If the timeout is reached and there are models downloaded, then the ensembling proceeds with as many models it has downloaded. It's not required that all the models need to be downloaded to finish within that timeout.
-The following parameters only apply to **StackEnsemble** models:
+> [!NOTE]
+> Automated machine learning featurization steps (feature normalization, handling missing data, converting text to numeric, etc.) become part of the underlying model. When using the model for predictions, the same featurization steps applied during training are applied to your input data automatically.
-* `stack_meta_learner_type`: the meta-learner is a model trained on the output of the individual heterogeneous models. Default meta-learners are `LogisticRegression` for classification tasks (or `LogisticRegressionCV` if cross-validation is enabled) and `ElasticNet` for regression/forecasting tasks (or `ElasticNetCV` if cross-validation is enabled). This parameter can be one of the following strings: `LogisticRegression`, `LogisticRegressionCV`, `LightGBMClassifier`, `ElasticNet`, `ElasticNetCV`, `LightGBMRegressor`, or `LinearRegression`.
+When configuring your automated ML jobs, you can enable/disable the `featurization` settings by using the `.set_featurization()` setter function.
-* `stack_meta_learner_train_percentage`: specifies the proportion of the training set (when choosing train and validation type of training) to be reserved for training the meta-learner. Default value is `0.2`.
+The following table shows the accepted settings for featurization.
-* `stack_meta_learner_kwargs`: optional parameters to pass to the initializer of the meta-learner. These parameters and parameter types mirror the parameters and parameter types from the corresponding model constructor, and are forwarded to the model constructor.
+|Featurization Configuration | Description |
+| - | - |
+|`"mode": 'auto'`| Indicates that as part of preprocessing, [data guardrails and featurization steps](how-to-configure-auto-features.md#featurization) are performed automatically. **Default setting**.|
+|`"mode": 'off'`| Indicates featurization step shouldn't be done automatically.|
+|`"mode":`&nbsp;`'custom'`| Indicates customized featurization step should be used.|
-The following code shows an example of specifying custom ensemble behavior in an `AutoMLConfig` object.
+The following code shows how custom featurization can be provided, in this case for a regression job.
```python
-ensemble_settings = {
- "ensemble_download_models_timeout_sec": 600
- "stack_meta_learner_type": "LogisticRegressionCV",
- "stack_meta_learner_train_percentage": 0.3,
- "stack_meta_learner_kwargs": {
- "refit": True,
- "fit_intercept": False,
- "class_weight": "balanced",
- "multi_class": "auto",
- "n_jobs": -1
- }
- }
-automl_classifier = AutoMLConfig(
- task='classification',
- primary_metric='AUC_weighted',
- experiment_timeout_minutes=30,
- training_data=train_data,
- label_column_name=label,
- n_cross_validations=5,
- **ensemble_settings
- )
+from azure.ai.ml.automl import ColumnTransformer
+
+transformer_params = {
+ "imputer": [
+ ColumnTransformer(fields=["CACH"], parameters={"strategy": "most_frequent"}),
+ ColumnTransformer(fields=["PRP"], parameters={"strategy": "most_frequent"}),
+ ],
+}
+regression_job.set_featurization(
+ mode="custom",
+ transformer_params=transformer_params,
+ blocked_transformers=["LabelEncoding"],
+ column_name_and_types={"CHMIN": "Categorical"},
+)
```

<a name="exit"></a>

### Exit criteria
-There are a few options you can define in your AutoMLConfig to end your experiment.
+There are a few options you can define in the `set_limits()` function to end your experiment prior to job completion.
|Criteria| description
|-|-
No&nbsp;criteria | If you do not define any exit parameters the experiment continues until no further progress is made on your primary metric.
-After&nbsp;a&nbsp;length&nbsp;of&nbsp;time| Use `experiment_timeout_minutes` in your settings to define how long, in minutes, your experiment should continue to run. <br><br> To help avoid experiment time out failures, there is a minimum of 15 minutes, or 60 minutes if your row by column size exceeds 10 million.
-A&nbsp;score&nbsp;has&nbsp;been&nbsp;reached| Use `experiment_exit_score` completes the experiment after a specified primary metric score has been reached.
+`timeout`| Defines how long, in minutes, your experiment should continue to run. If not specified, the default job's total timeout is 6 days (8,640 minutes). To specify a timeout less than or equal to 1 hour (60 minutes), make sure your dataset's size is not greater than 10,000,000 (rows times columns) or an error results. <br><br> This timeout includes setup, featurization, and training runs but does not include the ensembling and model explainability runs at the end of the process, since those actions need to happen once all the trials (children jobs) are done.
+`trial_timeout` | Maximum time in minutes that each trial (child job) can run before it terminates. If not specified, a value of 1 month, or 43,200 minutes, is used.
+`enable_early_termination`|Whether to end the job if the score is not improving in the short term.
+`max_trials`| The maximum number of trials/runs, each with a different combination of algorithm and hyperparameters, to try during an AutoML job. If not specified, the default is 1000 trials. If using `enable_early_termination`, the number of trials used can be smaller.
+`max_concurrent_trials`| Represents the maximum number of trials (children jobs) that would be executed in parallel. It's a good practice to match this number with the number of nodes in your cluster.
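As an illustrative sketch, the limits above could be set on an existing AutoML job object as shown below; `classification_job` is assumed to have been created earlier with the `azure.ai.ml.automl` factory functions, and the parameter names mirror the table above, so they may vary slightly between SDK versions.

```python
# Sketch only: set exit criteria on an AutoML job object created earlier
# (for example with azure.ai.ml.automl.classification). Parameter names
# mirror the table above and may vary slightly between SDK versions.
classification_job.set_limits(
    timeout=600,                    # total experiment timeout, in minutes
    trial_timeout=20,               # per-trial (child job) timeout, in minutes
    max_trials=50,                  # maximum number of trials
    max_concurrent_trials=4,        # match to the number of cluster nodes
    enable_early_termination=True,  # stop early if the score stops improving
)
```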
## Run experiment
-
> [!WARNING]
> If you run an experiment with the same configuration settings and primary metric multiple times, you'll likely see variation in each experiment's final metrics score and generated models. The algorithms automated ML employs have inherent randomness that can cause slight variation in the models output by the experiment and the recommended model's final metrics score, like accuracy. You'll likely also see results with the same model name, but different hyperparameters used.
-For automated ML, you create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
+Submit the experiment to run and generate a model. With the `MLClient` created in the prerequisites, you can run the following command in the workspace.
```python
-from azureml.core.experiment import Experiment
-ws = Workspace.from_config()
+# Submit the AutoML job
+returned_job = ml_client.jobs.create_or_update(
+ classification_job
+) # submit the job to the backend
-# Choose a name for the experiment and specify the project folder.
-experiment_name = 'Tutorial-automl'
-project_folder = './sample_projects/automl-classification'
-
-experiment = Experiment(ws, experiment_name)
-```
+print(f"Created job: {returned_job}")
-Submit the experiment to run and generate a model. Pass the `AutoMLConfig` to the `submit` method to generate the model.
+# Get a URL for the status of the job
+returned_job.services["Studio"].endpoint
-```python
-run = experiment.submit(automl_config, show_output=True)
```
->[!NOTE]
->Dependencies are first installed on a new machine. It may take up to 10 minutes before output is shown.
->Setting `show_output` to `True` results in output being shown on the console.
-
### Multiple child runs on clusters

Automated ML experiment child runs can be performed on a cluster that is already running another experiment. However, the timing depends on how many nodes the cluster has, and if those nodes are available to run a different experiment.
Each node in the cluster acts as an individual virtual machine (VM) that can acc
To help manage child runs and when they can be performed, we recommend you create a dedicated cluster per experiment, and match the number of `max_concurrent_iterations` of your experiment to the number of nodes in the cluster. This way, you use all the nodes of the cluster at the same time with the number of concurrent child runs/iterations you want.
-Configure `max_concurrent_iterations` in your `AutoMLConfig` object. If it is not configured, then by default only one concurrent child run/iteration is allowed per experiment.
-In case of compute instance, `max_concurrent_iterations` can be set to be the same as number of cores on the compute instance VM.
+Configure `max_concurrent_trials` in the `set_limits()` setter function. If it is not configured, then by default only one concurrent child run/iteration is allowed per experiment.
+In case of a compute instance, `max_concurrent_trials` can be set to the same value as the number of cores on the compute instance VM.
## Explore models and metrics

Automated ML offers options for you to monitor and evaluate your training results.
-* You can view your training results in a widget or inline if you are in a notebook. See [Monitor automated machine learning runs](#monitor) for more details.
-
* For definitions and examples of the performance charts and metrics provided for each run, see [Evaluate automated machine learning experiment results](how-to-understand-automated-ml.md).
* To get a featurization summary and understand what features were added to a particular model, see [Featurization transparency](how-to-configure-auto-features.md#featurization-transparency).
-You can view the hyperparameters, the scaling and normalization techniques, and algorithm applied to a specific automated ML run with the [custom code solution, `print_model()`](how-to-configure-auto-features.md#scaling-and-normalization).
-
-> [!TIP]
-> Automated ML also let's you [view the generated model training code for Auto ML trained models](how-to-generate-automl-training-code.md). This functionality is in public preview and can change at any time.
-
-## <a name="monitor"></a> Monitor automated machine learning runs
-
-For automated ML runs, to access the charts from a previous run, replace `<<experiment_name>>` with the appropriate experiment name:
-
-```python
-from azureml.widgets import RunDetails
-from azureml.core.run import Run
-
-experiment = Experiment (workspace, <<experiment_name>>)
-run_id = 'autoML_my_runID' #replace with run_ID
-run = Run(experiment, run_id)
-RunDetails(run).show()
-```
-
-![Jupyter notebook widget for Automated Machine Learning](./media/how-to-configure-auto-train/azure-machine-learning-auto-ml-widget.png)
-
-## Test models (preview)
-
->[!IMPORTANT]
-> Testing your models with a test dataset to evaluate automated ML generated models is a preview feature. This capability is an [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview feature, and may change at any time.
-
-> [!WARNING]
-> This feature is not available for the following automated ML scenarios
-> * [Computer vision tasks (preview)](how-to-auto-train-image-models.md)
-> * [Many models and hiearchical time series forecasting training (preview)](how-to-auto-train-forecast.md)
-> * [Forecasting tasks where deep learning neural networks (DNN) are enabled](how-to-auto-train-forecast.md#enable-deep-learning)
-> * [Automated ML runs from local computes or Azure Databricks clusters](how-to-configure-auto-train.md#compute-to-run-experiment)
-
-Passing the `test_data` or `test_size` parameters into the `AutoMLConfig`, automatically triggers a remote test run that uses the provided test data to evaluate the best model that automated ML recommends upon completion of the experiment. This remote test run is done at the end of the experiment, once the best model is determined. See how to [pass test data into your `AutoMLConfig`](how-to-configure-cross-validation-data-splits.md#provide-test-data-preview).
-
-### Get test run results
-
-You can get the predictions and metrics from the remote test run from the [Azure Machine Learning studio](how-to-use-automated-ml-for-ml-models.md#view-remote-test-run-results-preview) or with the following code.
--
-```python
-best_run, fitted_model = remote_run.get_output()
-test_run = next(best_run.get_children(type='automl.model_test'))
-test_run.wait_for_completion(show_output=False, wait_post_processing=True)
-
-# Get test metrics
-test_run_metrics = test_run.get_metrics()
-for name, value in test_run_metrics.items():
- print(f"{name}: {value}")
-
-# Get test predictions as a Dataset
-test_run_details = test_run.get_details()
-dataset_id = test_run_details['outputDatasets'][0]['identifier']['savedId']
-test_run_predictions = Dataset.get_by_id(workspace, dataset_id)
-predictions_df = test_run_predictions.to_pandas_dataframe()
-
-# Alternatively, the test predictions can be retrieved via the run outputs.
-test_run.download_file("predictions/predictions.csv")
-predictions_df = pd.read_csv("predictions.csv")
-
-```
-
-The model test run generates the predictions.csv file that's stored in the default datastore created with the workspace. This datastore is visible to all users with the same subscription. Test runs are not recommended for scenarios if any of the information used for or created by the test run needs to remain private.
-
-### Test existing automated ML model
-
-To test other existing automated ML models created, best run or child run, use [`ModelProxy()`](/python/api/azureml-train-automl-client/azureml.train.automl.model_proxy.modelproxy) to test a model after the main AutoML run has completed. `ModelProxy()` already returns the predictions and metrics and does not require further processing to retrieve the outputs.
-
-> [!NOTE]
-> ModelProxy is an [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview class, and may change at any time.
-
-The following code demonstrates how to test a model from any run by using [ModelProxy.test()](/python/api/azureml-train-automl-client/azureml.train.automl.model_proxy.modelproxy#test-test-data--azureml-data-abstract-dataset-abstractdataset--include-predictions-only--boolfalse--typing-tuple-azureml-data-abstract-dataset-abstractdataset--typing-dict-str--typing-any--) method. In the test() method you have the option to specify if you only want to see the predictions of the test run with the `include_predictions_only` parameter.
-
-```python
-from azureml.train.automl.model_proxy import ModelProxy
-
-model_proxy = ModelProxy(child_run=my_run, compute_target=cpu_cluster)
-predictions, metrics = model_proxy.test(test_data, include_predictions_only= True
-)
-```
+From the Azure Machine Learning UI, at the model's page, you can also view the hyperparameters used when training a particular model, and you can also view and customize the model's internal training code.
## Register and deploy models
-After you test a model and confirm you want to use it in production, you can register it for later use and
+After you test a model and confirm you want to use it in production, you can register it for later use.
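For example, with the v2 Python SDK and the `MLClient` from earlier, a sketch of registering the job's best model as an MLflow model might look like the following; the artifact path, model name, and description are placeholders you'd replace with the output location and naming from your completed job.

```python
from azure.ai.ml.entities import Model
from azure.ai.ml.constants import AssetTypes

# Sketch only: the artifact path is a placeholder; substitute the actual
# job name and model output path from your completed AutoML job.
model = Model(
    path="azureml://jobs/<job-name>/outputs/artifacts/paths/outputs/mlflow-model/",
    name="automl-best-model",
    type=AssetTypes.MLFLOW_MODEL,
    description="Best model from the AutoML classification job",
)
registered_model = ml_client.models.create_or_update(model)
print(registered_model.name, registered_model.version)
```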
-To register a model from an automated ML run, use the [`register_model()`](/python/api/azureml-train-automl-client/azureml.train.automl.run.automlrun#register-model-model-name-none--description-none--tags-none--iteration-none--metric-none-) method.
-
-```Python
-
-best_run = run.get_best_child()
-print(fitted_model.steps)
-
-model_name = best_run.properties['model_name']
-description = 'AutoML forecast example'
-tags = None
-
-model = run.register_model(model_name = model_name,
- description = description,
- tags = tags)
-```
--
-For details on how to create a deployment configuration and deploy a registered model to a web service, see [how and where to deploy a model](how-to-deploy-and-where.md?tabs=python#define-a-deployment-configuration).
> [!TIP]
> For registered models, one-click deployment is available via the [Azure Machine Learning studio](https://ml.azure.com). See [how to deploy registered models from the studio](how-to-use-automated-ml-for-ml-models.md#deploy-your-model).
-<a name="explain"></a>
-
-## Model interpretability
-
-Model interpretability allows you to understand why your models made predictions, and the underlying feature importance values. The SDK includes various packages for enabling model interpretability features, both at training and inference time, for local and deployed models.
-
-See how to [enable interpretability features](how-to-machine-learning-interpretability-automl.md) specifically within automated ML experiments.
-
-For general information on how model explanations and feature importance can be enabled in other areas of the SDK outside of automated machine learning, see the [concept article on interpretability](how-to-machine-learning-interpretability.md) .
-
-> [!NOTE]
-> The ForecastTCN model is not currently supported by the Explanation Client. This model will not return an explanation dashboard if it is returned as the best model, and does not support on-demand explanation runs.
## Next steps

+ Learn more about [how and where to deploy a model](how-to-deploy-and-where.md).
-
-+ Learn more about [how to train a regression model with Automated machine learning](tutorial-auto-train-models.md).
-
-+ [Troubleshoot automated ML experiments](how-to-troubleshoot-auto-ml.md).
machine-learning How To Configure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-cli.md
Last updated 04/08/2022 -+ # Install and set up the CLI (v2) [!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
-The `ml` extension (preview) to the [Azure CLI](/cli/azure/) is the enhanced interface for Azure Machine Learning. It enables you to train and deploy models from the command line, with features that accelerate scaling data science up and out while tracking the model lifecycle.
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
+> * [v1](v1/reference-azure-machine-learning-cli.md)
+> * [v2 (current version)](how-to-configure-cli.md)
+
+The `ml` extension (preview) to the [Azure CLI](/cli/azure/) is the enhanced interface for Azure Machine Learning. It enables you to train and deploy models from the command line, with features that accelerate scaling data science up and out while tracking the model lifecycle.
## Prerequisites
If your Azure Machine Learning workspace uses a private endpoint and virtual net
- [Train models using CLI (v2)](how-to-train-cli.md)
- [Set up the Visual Studio Code Azure Machine Learning extension](how-to-setup-vs-code.md)
- [Train an image classification TensorFlow model using the Azure Machine Learning Visual Studio Code extension](tutorial-train-deploy-image-classification-model-vscode.md)
-- [Explore Azure Machine Learning with examples](samples-notebooks.md)
+- [Explore Azure Machine Learning with examples](samples-notebooks.md)
machine-learning How To Configure Cross Validation Data Splits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-cross-validation-data-splits.md
-+ Last updated 11/15/2021-
# Configure training, validation, cross-validation and test data in automated machine learning
+
In this article, you learn the different options for configuring training data and validation data splits along with cross-validation settings for your automated machine learning (automated ML) experiments. In Azure Machine Learning, when you use automated ML to build multiple ML models, each child run needs to validate the related model by calculating the quality metrics for that model, such as accuracy or AUC weighted. These metrics are calculated by comparing the predictions made with each model with real labels from past observations in the validation data. [Learn more about how metrics are calculated based on validation type](#metric-calculation-for-cross-validation-in-machine-learning).
If you do not explicitly specify either a `validation_data` or `n_cross_validati
## Provide validation data
-In this case, you can either start with a single data file and split it into training data and validation data sets or you can provide a separate data file for the validation set. Either way, the `validation_data` parameter in your `AutoMLConfig` object assigns which data to use as your validation set. This parameter only accepts data sets in the form of an [Azure Machine Learning dataset](how-to-create-register-datasets.md) or pandas dataframe.
+In this case, you can either start with a single data file and split it into training data and validation data sets or you can provide a separate data file for the validation set. Either way, the `validation_data` parameter in your `AutoMLConfig` object assigns which data to use as your validation set. This parameter only accepts data sets in the form of an [Azure Machine Learning dataset](./v1/how-to-create-register-datasets.md) or pandas dataframe.
> [!NOTE]
> The `validation_data` parameter requires the `training_data` and `label_column_name` parameters to be set as well. You can only set one validation parameter, that is you can only specify either `validation_data` or `n_cross_validations`, not both.
When either a custom validation set or an automatically selected validation set
## Provide test data (preview)
-You can also provide test data to evaluate the recommended model that automated ML generates for you upon completion of the experiment. When you provide test data it's considered a separate from training and validation, so as to not bias the results of the test run of the recommended model. [Learn more about training, validation and test data in automated ML.](concept-automated-ml.md#training-validation-and-test-data)
- [!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)]
+You can also provide test data to evaluate the recommended model that automated ML generates for you upon completion of the experiment. When you provide test data, it's considered separate from training and validation, so as to not bias the results of the test run of the recommended model. [Learn more about training, validation and test data in automated ML.](concept-automated-ml.md#training-validation-and-test-data)
+
> [!WARNING]
> This feature is not available for the following automated ML scenarios
> * [Computer vision tasks (preview)](how-to-auto-train-image-models.md)
You can also provide test data to evaluate the recommended model that automated
> * [Forecasting tasks where deep learning neural networks (DNN) are enabled](how-to-auto-train-forecast.md#enable-deep-learning)
> * [Automated ML runs from local computes or Azure Databricks clusters](how-to-configure-auto-train.md#compute-to-run-experiment)
-Test datasets must be in the form of an [Azure Machine Learning TabularDataset](how-to-create-register-datasets.md#tabulardataset). You can specify a test dataset with the `test_data` and `test_size` parameters in your `AutoMLConfig` object. These parameters are mutually exclusive and can not be specified at the same time or with `cv_split_column_names` or `cv_splits_indices`.
+Test datasets must be in the form of an [Azure Machine Learning TabularDataset](./v1/how-to-create-register-datasets.md#tabulardataset). You can specify a test dataset with the `test_data` and `test_size` parameters in your `AutoMLConfig` object. These parameters are mutually exclusive and can not be specified at the same time or with `cv_split_column_names` or `cv_splits_indices`.
With the `test_data` parameter, specify an existing dataset to pass into your `AutoMLConfig` object.
automl_config = AutoMLConfig(task = 'regression',
> Forecasting does not currently support specifying a test dataset using a train/test split with the `test_size` parameter.
-Passing the `test_data` or `test_size` parameters into the `AutoMLConfig`, automatically triggers a remote test run upon completion of your experiment. This test run uses the provided test data to evaluate the best model that automated ML recommends. Learn more about [how to get the predictions from the test run](how-to-configure-auto-train.md#test-models-preview).
+Passing the `test_data` or `test_size` parameters into the `AutoMLConfig`, automatically triggers a remote test run upon completion of your experiment. This test run uses the provided test data to evaluate the best model that automated ML recommends. Learn more about [how to get the predictions from the test run](./v1/how-to-configure-auto-train-v1.md#test-models-preview).
## Next steps
machine-learning How To Configure Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-environment.md
Last updated 03/22/2021 -+ # Set up a Python development environment for Azure Machine Learning
Create a workspace configuration file in one of the following methods:
Create a script to connect to your Azure Machine Learning workspace and use the [`write_config`](/python/api/azureml-core/azureml.core.workspace.workspace#write-config-path-none--file-name-none-) method to generate your file and save it as *.azureml/config.json*. Make sure to replace `subscription_id`, `resource_group`, and `workspace_name` with your own.
+ [!INCLUDE [sdk v1](../../includes/machine-learning-sdk-v1.md)]
+ ```python from azureml.core import Workspace
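A minimal sketch of such a script, with placeholder values you'd replace with your own details, might look like this:

```python
from azureml.core import Workspace

# Sketch: replace the placeholder values with your own details.
ws = Workspace(
    subscription_id="<subscription-id>",
    resource_group="<resource-group>",
    workspace_name="<workspace-name>",
)
ws.write_config()  # writes .azureml/config.json in the current directory
```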
machine-learning How To Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-private-link.md
-+
Azure Private Link enables you to connect to your workspace using a private endp
> * [Virtual network isolation and privacy overview](how-to-network-security-overview.md). > * [Secure workspace resources](how-to-secure-workspace-vnet.md). > * [Secure training environments](how-to-secure-training-vnet.md).
-> * [Secure inference environments](how-to-secure-inferencing-vnet.md).
+> * For securing inference, see the following documents:
+> * If using CLI v1 or SDK v1 - [Secure inference environment](how-to-secure-inferencing-vnet.md)
+> * If using CLI v2 or SDK v2 - [Network isolation for managed online endpoints](how-to-secure-online-endpoint.md)
> * [Use Azure Machine Learning studio in a VNet](how-to-enable-studio-virtual-network.md). > * [API platform network isolation](how-to-configure-network-isolation-with-v2.md)
Use one of the following methods to create a workspace with a private endpoint.
The Azure Machine Learning Python SDK provides the [PrivateEndpointConfig](/python/api/azureml-core/azureml.core.privateendpointconfig) class, which can be used with [Workspace.create()](/python/api/azureml-core/azureml.core.workspace.workspace#create-name--auth-none--subscription-id-none--resource-group-none--location-none--create-resource-group-true--sku--basictags-none--friendly-name-none--storage-account-none--key-vault-none--app-insights-none--container-registry-none--adb-workspace-none--cmk-keyvault-none--resource-cmk-uri-none--hbi-workspace-false--default-cpu-compute-target-none--default-gpu-compute-target-none--private-endpoint-config-none--private-endpoint-auto-approval-true--exist-ok-false--show-output-true-) to create a workspace with a private endpoint. This class requires an existing virtual network. + ```python from azureml.core import Workspace from azureml.core import PrivateEndPointConfig
az network private-endpoint dns-zone-group add \
# [Azure CLI extension 1.0](#tab/azurecliextensionv1)
-If you are using the Azure CLI [extension 1.0 for machine learning](reference-azure-machine-learning-cli.md), use the [az ml workspace create](/cli/azure/ml/workspace#az-ml-workspace-create) command. The following parameters for this command can be used to create a workspace with a private network, but it requires an existing virtual network:
+If you are using the Azure CLI [extension 1.0 for machine learning](v1/reference-azure-machine-learning-cli.md), use the [az ml workspace create](/cli/azure/ml/workspace#az-ml-workspace-create) command. The following parameters for this command can be used to create a workspace with a private network, but it requires an existing virtual network:
* `--pe-name`: The name of the private endpoint that is created. * `--pe-auto-approval`: Whether private endpoint connections to the workspace should be automatically approved.
Use one of the following methods to add a private endpoint to an existing worksp
# [Python](#tab/python) + ```python from azureml.core import Workspace from azureml.core import PrivateEndPointConfig
az network private-endpoint dns-zone-group add \
# [Azure CLI extension 1.0](#tab/azurecliextensionv1)
-The Azure CLI [extension 1.0 for machine learning](reference-azure-machine-learning-cli.md) provides the [az ml workspace private-endpoint add](/cli/azure/ml(v1)/workspace/private-endpoint#az-ml-workspace-private-endpoint-add) command.
+The Azure CLI [extension 1.0 for machine learning](v1/reference-azure-machine-learning-cli.md) provides the [az ml workspace private-endpoint add](/cli/azure/ml(v1)/workspace/private-endpoint#az-ml-workspace-private-endpoint-add) command.
```azurecli az ml workspace private-endpoint add -w myworkspace --pe-name myprivateendpoint --pe-auto-approval --pe-vnet-name myvnet
To remove a private endpoint, use the following information:
To remove a private endpoint, use [Workspace.delete_private_endpoint_connection](/python/api/azureml-core/azureml.core.workspace(class)#delete-private-endpoint-connection-private-endpoint-connection-name-). The following example demonstrates how to remove a private endpoint: + ```python from azureml.core import Workspace
az network private-endpoint delete \
# [Azure CLI extension 1.0](#tab/azurecliextensionv1)
-The Azure CLI [extension 1.0 for machine learning](reference-azure-machine-learning-cli.md) provides the [az ml workspace private-endpoint delete](/cli/azure/ml(v1)/workspace/private-endpoint#az-ml-workspace-private-endpoint-delete) command.
+The Azure CLI [extension 1.0 for machine learning](v1/reference-azure-machine-learning-cli.md) provides the [az ml workspace private-endpoint delete](/cli/azure/ml(v1)/workspace/private-endpoint#az-ml-workspace-private-endpoint-delete) command.
# [Portal](#tab/azure-portal)
To enable public access, use the following steps:
To enable public access, use [Workspace.update](/python/api/azureml-core/azureml.core.workspace(class)#update-friendly-name-none--description-none--tags-none--image-build-compute-none--service-managed-resources-settings-none--primary-user-assigned-identity-none--allow-public-access-when-behind-vnet-none-) and set `allow_public_access_when_behind_vnet=True`. + ```python from azureml.core import Workspace
az ml workspace update \
# [Azure CLI extension 1.0](#tab/azurecliextensionv1)
-The Azure CLI [extension 1.0 for machine learning](reference-azure-machine-learning-cli.md) provides the [az ml workspace update](/cli/azure/ml/workspace#az-ml-workspace-update) command. To enable public access to the workspace, add the parameter `--allow-public-access true`.
+The Azure CLI [extension 1.0 for machine learning](v1/reference-azure-machine-learning-cli.md) provides the [az ml workspace update](/cli/azure/ml/workspace#az-ml-workspace-update) command. To enable public access to the workspace, add the parameter `--allow-public-access true`.
# [Portal](#tab/azure-portal)
machine-learning How To Connect Data Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-connect-data-ui.md
Last updated 01/18/2021--
-# Customer intent: As low code experience data scientist, I need to make my data in storage on Azure available to my remote compute to train my ML models.
+
+#Customer intent: As low code experience data scientist, I need to make my data in storage on Azure available to my remote compute to train my ML models.
# Connect to data with the Azure Machine Learning studio
-In this article, learn how to access your data with the [Azure Machine Learning studio](overview-what-is-machine-learning-studio.md). Connect to your data in storage services on Azure with [Azure Machine Learning datastores](how-to-access-data.md), and then package that data for tasks in your ML workflows with [Azure Machine Learning datasets](how-to-create-register-datasets.md).
+In this article, learn how to access your data with the [Azure Machine Learning studio](overview-what-is-machine-learning-studio.md). Connect to your data in storage services on Azure with [Azure Machine Learning datastores](how-to-access-data.md), and then package that data for tasks in your ML workflows with [Azure Machine Learning datasets](./v1/how-to-create-register-datasets.md).
The following table defines and summarizes the benefits of datastores and datasets.
|Datastores| Securely connect to your storage service on Azure, by storing your connection information, like your subscription ID and token authorization in your [Key Vault](https://azure.microsoft.com/services/key-vault/) associated with the workspace | Because your information is securely stored, you <br><br> <li> Don't&nbsp;put&nbsp;authentication&nbsp;credentials&nbsp;or&nbsp;original&nbsp;data sources at risk. <li> No longer need to hard code them in your scripts.
|Datasets| By creating a dataset, you create a reference to the data source location, along with a copy of its metadata. With datasets you can, <br><br><li> Access data during model training.<li> Share data and collaborate with other users.<li> Leverage open-source libraries, like pandas, for data exploration. | Because datasets are lazily evaluated, and the data remains in its existing location, you <br><br><li>Keep a single copy of data in your storage.<li> Incur no extra storage cost <li> Don't risk unintentionally changing your original data sources.<li>Improve ML workflow performance speeds.
-To understand where datastores and datasets fit in Azure Machine Learning's overall data access workflow, see the [Securely access data](concept-data.md#data-workflow) article.
+To understand where datastores and datasets fit in Azure Machine Learning's overall data access workflow, see the [Securely access data](./v1/concept-data.md#data-workflow) article.
For a code first experience, see the following articles to use the [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/) to:
* [Connect to Azure storage services with datastores](how-to-access-data.md).
-* [Create Azure Machine Learning datasets](how-to-create-register-datasets.md).
+* [Create Azure Machine Learning datasets](./v1/how-to-create-register-datasets.md).
## Prerequisites
For a code first experience, see the following articles to use the [Azure Machin
- An Azure Machine Learning workspace. [Create an Azure Machine Learning workspace](how-to-manage-workspace.md).
- - When you create a workspace, an Azure blob container and an Azure file share are automatically registered as datastores to the workspace. They're named `workspaceblobstore` and `workspacefilestore`, respectively. If blob storage is sufficient for your needs, the `workspaceblobstore` is set as the default datastore, and already configured for use. Otherwise, you need a storage account on Azure with a [supported storage type](how-to-access-data.md#matrix).
+ - When you create a workspace, an Azure blob container and an Azure file share are automatically registered as datastores to the workspace. They're named `workspaceblobstore` and `workspacefilestore`, respectively. If blob storage is sufficient for your needs, the `workspaceblobstore` is set as the default datastore, and already configured for use. Otherwise, you need a storage account on Azure with a [supported storage type](how-to-access-data.md#supported-data-storage-service-types).
## Create datastores
-You can create datastores from [these Azure storage solutions](how-to-access-data.md#matrix). **For unsupported storage solutions**, and to save data egress cost during ML experiments, you must [move your data](how-to-access-data.md#move) to a supported Azure storage solution. [Learn more about datastores](how-to-access-data.md).
+You can create datastores from [these Azure storage solutions](how-to-access-data.md#supported-data-storage-service-types). **For unsupported storage solutions**, and to save data egress cost during ML experiments, you must [move your data](how-to-access-data.md#move-data-to-supported-azure-storage-solutions) to a supported Azure storage solution. [Learn more about datastores](./v1/how-to-access-data.md).
You can create datastores with credential-based access or identity-based access.
The following example demonstrates what the form looks like when you create an *
## Create datasets
-After you create a datastore, create a dataset to interact with your data. Datasets package your data into a lazily evaluated consumable object for machine learning tasks, like training. [Learn more about datasets](how-to-create-register-datasets.md).
+After you create a datastore, create a dataset to interact with your data. Datasets package your data into a lazily evaluated consumable object for machine learning tasks, like training. [Learn more about datasets](./v1/how-to-create-register-datasets.md).
There are two types of datasets, FileDataset and TabularDataset.
-[FileDatasets](how-to-create-register-datasets.md#filedataset) create references to single or multiple files or public URLs. Whereas [TabularDatasets](how-to-create-register-datasets.md#tabulardataset) represent your data in a tabular format. You can create TabularDatasets from .csv, .tsv, .parquet, .jsonl files, and from SQL query results.
+[FileDatasets](./v1/how-to-create-register-datasets.md#filedataset) create references to single or multiple files or public URLs. Whereas [TabularDatasets](./v1/how-to-create-register-datasets.md#tabulardataset) represent your data in a tabular format. You can create TabularDatasets from .csv, .tsv, .parquet, .jsonl files, and from SQL query results.
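For a code-first sketch of the same idea with the v1 Python SDK, a TabularDataset could be created from a delimited file in the workspace's default datastore; the file path below is a placeholder, not a value from this article.

```python
from azureml.core import Workspace, Dataset

# Sketch (SDK v1): create a TabularDataset from a CSV file in the default
# datastore. The file path is a placeholder.
ws = Workspace.from_config()
datastore = ws.get_default_datastore()
tabular_ds = Dataset.Tabular.from_delimited_files(path=(datastore, "data/my-file.csv"))
df = tabular_ds.to_pandas_dataframe()  # materializes the data into a pandas DataFrame
```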
The following steps and animation show how to create a dataset in [Azure Machine Learning studio](https://ml.azure.com).
If your data storage account is in a **virtual network**, additional configurati
**After datastore creation**, this validation is only performed for methods that require access to the underlying storage container, **not** each time datastore objects are retrieved. For example, validation happens if you want to download files from your datastore; but if you just want to change your default datastore, then validation does not happen.
-To authenticate your access to the underlying storage service, you can provide either your account key, shared access signatures (SAS) tokens, or service principal according to the datastore type you want to create. The [storage type matrix](how-to-access-data.md#matrix) lists the supported authentication types that correspond to each datastore type.
+To authenticate your access to the underlying storage service, you can provide either your account key, shared access signatures (SAS) tokens, or service principal according to the datastore type you want to create. The [storage type matrix](how-to-access-data.md#supported-data-storage-service-types) lists the supported authentication types that correspond to each datastore type.
You can find account key, SAS token, and service principal information on your [Azure portal](https://portal.azure.com).
For Azure blob container and Azure Data Lake Gen 2 storage, make sure your authe
## Train with datasets
-Use your datasets in your machine learning experiments for training ML models. [Learn more about how to train with datasets](how-to-train-with-datasets.md)
+Use your datasets in your machine learning experiments for training ML models. [Learn more about how to train with datasets](how-to-train-with-datasets.md).
## Next steps
machine-learning How To Consume Web Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-consume-web-service.md
Last updated 10/21/2021 ms.devlang: csharp, golang, java, python-+ #Customer intent: As a developer, I need to understand how to create a client application that consumes the web service of a deployed ML model.
There are a several ways to retrieve this information for deployed web
# [Python](#tab/python) + * When you deploy a model, a `Webservice` object is returned with information about the service: ```python
token, refresh_by = service.get_token()
print(token) ```
-If you have the [Azure CLI and the machine learning extension](reference-azure-machine-learning-cli.md), you can use the following command to get a token:
+If you have the [Azure CLI and the machine learning extension](v1/reference-azure-machine-learning-cli.md), you can use the following command to get a token:
[!INCLUDE [cli v1](../../includes/machine-learning-cli-v1.md)]
machine-learning How To Convert Custom Model To Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-convert-custom-model-to-mlflow.md
+
+ Title: Convert custom models to MLflow
+
+description: Convert custom models to MLflow model format for no code deployment with endpoints.
+++++ Last updated : 04/15/2022++++
+# Convert custom ML models to MLflow formatted models
+
+In this article, learn how to convert your custom ML model into MLflow format. [MLflow](https://www.mlflow.org) is an open-source library for managing the lifecycle of your machine learning experiments. In some cases, you might use a machine learning framework without its built-in MLflow model flavor support. Due to this lack of built-in MLflow model flavor, you cannot log or register the model with MLflow model fluent APIs. To resolve this, you can convert your model to an MLflow format where you can leverage the following benefits of Azure Machine Learning and MLflow models.
+
+With Azure Machine Learning, MLflow models get the added benefits of,
+
+* No code deployment
+* Portability as an open source standard format
+* Ability to deploy both locally and on cloud
+
+MLflow provides support for a variety of [machine learning frameworks](https://mlflow.org/docs/latest/models.html#built-in-model-flavors) (scikit-learn, Keras, Pytorch, and more); however, it might not cover every use case. For example, you may want to create an MLflow model with a framework that MLflow does not natively support or you may want to change the way your model does pre-processing or post-processing when running jobs.
+
+If you didn't train your model with MLFlow and want to use Azure Machine Learning's MLflow no-code deployment offering, you need to convert your custom model to MLFLow. Learn more about [custom python models and MLflow](https://mlflow.org/docs/latest/models.html#custom-python-models).
+
+## Prerequisites
+
+You only need the `mlflow` package installed to convert your custom models to an MLflow format.
+
+## Create a Python wrapper for your model
+
+Before you can convert your model to an MLflow supported format, you need to first create a python wrapper for your model.
+The following code demonstrates how to create a Python wrapper for an `sklearn` model.
+
+```python
+
+# Load training and test datasets
+from sys import version_info
+import sklearn
+from sklearn import datasets
+from sklearn.model_selection import train_test_split
+
+import mlflow.pyfunc
+from sklearn.metrics import mean_squared_error
+from sklearn.model_selection import train_test_split
+
+PYTHON_VERSION = "{major}.{minor}.{micro}".format(major=version_info.major,
+ minor=version_info.minor,
+ micro=version_info.micro)
+
+# Train and save an SKLearn model
+sklearn_model_path = "model.pkl"
+
+artifacts = {
+ "sklearn_model": sklearn_model_path
+}
+
+# create wrapper
+class SKLearnWrapper(mlflow.pyfunc.PythonModel):
+
+ def load_context(self, context):
+ import pickle
+ self.sklearn_model = pickle.load(open(context.artifacts["sklearn_model"], 'rb'))
+
+    def predict(self, context, data):
+ return self.sklearn_model.predict(data)
+```
+
+## Create a Conda environment
+
+Next, you need to create a Conda environment for the new MLflow model that contains all necessary dependencies. If not indicated, the environment is inferred from the current installation; otherwise, it can be specified explicitly.
+
+```python
+
+import cloudpickle
+conda_env = {
+ 'channels': ['defaults'],
+ 'dependencies': [
+ 'python={}'.format(PYTHON_VERSION),
+ 'pip',
+ {
+ 'pip': [
+ 'mlflow',
+ 'scikit-learn=={}'.format(sklearn.__version__),
+ 'cloudpickle=={}'.format(cloudpickle.__version__),
+ ],
+ },
+ ],
+ 'name': 'sklearn_env'
+}
+```
+
+## Load the MLFlow formatted model and test predictions
+
+Once your environment is ready, you can pass the `SKLearnWrapper`, the Conda environment, and your newly created artifacts dictionary to the `mlflow.pyfunc.save_model()` method. Doing so saves the model to your disk.
+
+```python
+mlflow_pyfunc_model_path = "sklearn_mlflow_pyfunc7"
+mlflow.pyfunc.save_model(path=mlflow_pyfunc_model_path, python_model=SKLearnWrapper(), conda_env=conda_env, artifacts=artifacts)
+
+```
+
+To ensure your newly saved MLflow formatted model didn't change during the save, you can load your model and print out a test prediction to compare your original model.
+
+The following code prints a test prediction from the mlflow formatted model and a test prediction from the sklearn model that's saved to your disk for comparison.
+
+```python
+loaded_model = mlflow.pyfunc.load_model(mlflow_pyfunc_model_path)
+
+input_data = "<insert test data>"
+# Evaluate the model
+import pandas as pd
+test_predictions = loaded_model.predict(input_data)
+print(test_predictions)
+
+# load the model from disk
+import pickle
+loaded_model = pickle.load(open(sklearn_model_path, 'rb'))
+result = loaded_model.predict(input_data)
+print(result)
+```
+
+## Register the MLflow formatted model
+
+Once you've confirmed that your model saved correctly, you can create a test run, so you can register and save your MLflow formatted model to your model registry.
+
+```python
+
+mlflow.start_run()
+
+mlflow.pyfunc.log_model(artifact_path=mlflow_pyfunc_model_path,
+ loader_module=None,
+ data_path=None,
+ code_path=None,
+ python_model=SKLearnWrapper(),
+ registered_model_name="Custom_mlflow_model",
+ conda_env=conda_env,
+ artifacts=artifacts)
+mlflow.end_run()
+```
+
+> [!IMPORTANT]
+> In some cases, you might use a machine learning framework without its built-in MLflow model flavor support. For instance, the `vaderSentiment` library is a standard natural language processing (NLP) library used for sentiment analysis. Since it lacks a built-in MLflow model flavor, you cannot log or register the model with MLflow model fluent APIs. See an example on [how to save, log and register a model that doesn't have a supported built-in MLflow model flavor](https://mlflow.org/docs/latest/model-registry.html#registering-an-unsupported-machine-learning-model).
+
+## Next steps
+
+* [No-code deployment for Mlflow models](how-to-deploy-mlflow-models-online-endpoints.md)
+* Learn more about [MLflow and Azure Machine Learning](concept-mlflow.md)
machine-learning How To Create Attach Compute Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-attach-compute-cluster.md
-+ Previously updated : 07/09/2021 Last updated : 05/02/2022 # Create an Azure Machine Learning compute cluster
+> [!div class="op_single_selector" title1="Select the Azure Machine Learning CLI version you are using:"]
+> * [v1](v1/how-to-create-attach-compute-cluster.md)
+> * [v2 (preview)](how-to-create-attach-compute-cluster.md)
+ Learn how to create and manage a [compute cluster](concept-compute-target.md#azure-machine-learning-compute-managed) in your Azure Machine Learning workspace. You can use Azure Machine Learning compute cluster to distribute a training or batch inference process across a cluster of CPU or GPU compute nodes in the cloud. For more information on the VM sizes that include GPUs, see [GPU-optimized virtual machine sizes](../virtual-machines/sizes-gpu.md).
In this article, learn how to:
* An Azure Machine Learning workspace. For more information, see [Create an Azure Machine Learning workspace](how-to-manage-workspace.md).
-* The [Azure CLI extension for Machine Learning service](reference-azure-machine-learning-cli.md), [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro), or the [Azure Machine Learning Visual Studio Code extension](how-to-setup-vs-code.md).
+* The [Azure CLI extension for Machine Learning service (v2)](reference-azure-machine-learning-cli.md), [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro), or the [Azure Machine Learning Visual Studio Code extension](how-to-setup-vs-code.md).
* If using the Python SDK, [set up your development environment with a workspace](how-to-configure-environment.md). Once your environment is set up, attach to the workspace in your Python script:
+ [!INCLUDE [sdk v1](../../includes/machine-learning-sdk-v1.md)]
+ ```python from azureml.core import Workspace
To create a persistent Azure Machine Learning Compute resource in Python, specif
* **vm_size**: The VM family of the nodes created by Azure Machine Learning Compute.
* **max_nodes**: The max number of nodes to autoscale up to when you run a job on Azure Machine Learning Compute.
+
[!code-python[](~/aml-sdk-samples/ignore/doc-qa/how-to-set-up-training-targets/amlcompute2.py?name=cpu_cluster)]
You can also configure several advanced properties when you create Azure Machine Learning Compute. The properties allow you to create a persistent cluster of fixed size, or within an existing Azure Virtual Network in your subscription. See the [AmlCompute class](/python/api/azureml-core/azureml.core.compute.amlcompute.amlcompute) for details.
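For illustration, a minimal sketch of provisioning such a cluster with the v1 SDK is shown below; the workspace object `ws`, the VM size, and the cluster name are placeholders/assumptions rather than values from this article.

```python
from azureml.core.compute import AmlCompute, ComputeTarget

# Sketch (SDK v1): provision a cluster that scales between 0 and 4 nodes.
# The workspace object `ws`, the VM size, and the cluster name are placeholders.
compute_config = AmlCompute.provisioning_configuration(
    vm_size="STANDARD_D2_V2",
    min_nodes=0,
    max_nodes=4,
)
cpu_cluster = ComputeTarget.create(ws, "cpu-cluster", compute_config)
cpu_cluster.wait_for_completion(show_output=True)
```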
You can also configure several advanced properties when you create Azure Machine
# [Azure CLI](#tab/azure-cli)
-```azurecli-interactive
-az ml computetarget create amlcompute -n cpu --min-nodes 1 --max-nodes 1 -s STANDARD_D3_V2 --location westus2
+```azurecli
+az ml compute create -f create-cluster.yml
```
+Where the file *create-cluster.yml* is:
+++ > [!WARNING] > When using a compute cluster in a different region than your workspace or datastores, you may see increased network latency and data transfer costs. The latency and costs can occur when creating the cluster, and when running jobs on it.
-For more information, see [az ml computetarget create amlcompute](/cli/azure/ml(v1)/computetarget/create#az-ml-computetarget-create-amlcompute).
# [Studio](#tab/azure-studio)
You may also choose to use [low-priority VMs](how-to-manage-optimize-cost.md#low
Use any of these ways to specify a low-priority VM: # [Python](#tab/python)
-
++ ```python compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2', vm_priority='lowpriority',
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
# [Azure CLI](#tab/azure-cli) Set the `vm-priority`:
-```azurecli-interactive
-az ml computetarget create amlcompute --name lowpriocluster --vm-size Standard_NC6 --max-nodes 5 --vm-priority lowpriority
+```azurecli
+az ml compute create -f create-cluster.yml
```
+Where the file *create-cluster.yml* is:
++ # [Studio](#tab/azure-studio) In the studio, choose **Low Priority** when you create a VM.
-## <a id="managed-identity"></a> Set up managed identity
+## Set up managed identity
[!INCLUDE [aml-clone-in-azure-notebook](../../includes/aml-managed-identity-intro.md)] # [Python](#tab/python) + * Configure managed identity in your provisioning configuration: * System assigned managed identity created in a workspace named `ws`
In the studio, choose **Low Priority** when you create a VM.
# [Azure CLI](#tab/azure-cli)
-* Create a new managed compute cluster with managed identity
+### Create a new managed compute cluster with managed identity
- * User-assigned managed identity
+Use this command:
- ```azurecli
- az ml computetarget create amlcompute --name cpu-cluster --vm-size Standard_NC6 --max-nodes 5 --assign-identity '/subscriptions/<subcription_id>/resourcegroups/<resource_group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<user_assigned_identity>'
- ```
+```azurecli
+az ml compute create -f create-cluster.yml
+```
- * System-assigned managed identity
+Where the contents of *create-cluster.yml* are as follows:
- ```azurecli
- az ml computetarget create amlcompute --name cpu-cluster --vm-size Standard_NC6 --max-nodes 5 --assign-identity '[system]'
- ```
-* Add a managed identity to an existing cluster:
+* User-assigned managed identity
- * User-assigned managed identity
- ```azurecli
- az ml computetarget amlcompute identity assign --name cpu-cluster '/subscriptions/<subcription_id>/resourcegroups/<resource_group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<user_assigned_identity>'
- ```
- * System-assigned managed identity
+ :::code language="yaml" source="~/azureml-examples-main/cli/resources/compute/cluster-user-identity.yml":::
+
+* System-assigned managed identity
+
+ :::code language="yaml" source="~/azureml-examples-main/cli/resources/compute/cluster-system-identity.yml":::
+
+### Add a managed identity to an existing cluster
+
+To update an existing cluster:
+
+* User-assigned managed identity
+
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-mlcompute-update-to-user-identity.sh":::
+
+* System-assigned managed identity
+
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-mlcompute-update-to-system-identity.sh":::
- ```azurecli
- az ml computetarget amlcompute identity assign --name cpu-cluster '[system]'
- ```
# [Studio](#tab/azure-studio)
machine-learning How To Create Attach Compute Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-attach-compute-studio.md
Last updated 10/21/2021 -+ # Create compute targets for model training and deployment in Azure Machine Learning studio
In this article, learn how to create and manage compute targets in Azure Machine
## What's a compute target?
-With Azure Machine Learning, you can train your model on a variety of resources or environments, collectively referred to as [__compute targets__](concept-azure-machine-learning-architecture.md#compute-targets). A compute target can be a local machine or a cloud resource, such as an Azure Machine Learning Compute, Azure HDInsight, or a remote virtual machine. You can also create compute targets for model deployment as described in ["Where and how to deploy your models"](how-to-deploy-and-where.md).
+With Azure Machine Learning, you can train your model on a variety of resources or environments, collectively referred to as [__compute targets__](v1/concept-azure-machine-learning-architecture.md#compute-targets). A compute target can be a local machine or a cloud resource, such as an Azure Machine Learning Compute, Azure HDInsight, or a remote virtual machine. You can also create compute targets for model deployment as described in ["Where and how to deploy your models"](how-to-deploy-and-where.md).
## <a id="portal-view"></a>View compute targets
If you created your compute instance or compute cluster with SSH access enabled,
After a target is created and attached to your workspace, you use it in your [run configuration](how-to-set-up-training-targets.md) with a `ComputeTarget` object: + ```python from azureml.core.compute import ComputeTarget myvm = ComputeTarget(workspace=ws, name='my-vm-name')
machine-learning How To Create Attach Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-attach-kubernetes.md
-+ Previously updated : 11/05/2021 Last updated : 04/21/2022 # Create and attach an Azure Kubernetes Service cluster Azure Machine Learning can deploy trained machine learning models to Azure Kubernetes Service. However, you must first either __create__ an Azure Kubernetes Service (AKS) cluster from your Azure ML workspace, or __attach__ an existing AKS cluster. This article provides information on both creating and attaching a cluster.
Azure Machine Learning can deploy trained machine learning models to Azure Kuber
- An Azure Machine Learning workspace. For more information, see [Create an Azure Machine Learning workspace](how-to-manage-workspace.md). -- The [Azure CLI extension for Machine Learning service](reference-azure-machine-learning-cli.md), [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro), or the [Azure Machine Learning Visual Studio Code extension](how-to-setup-vs-code.md).
+- The [Azure CLI extension for Machine Learning service](v1/reference-azure-machine-learning-cli.md), [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro), or the [Azure Machine Learning Visual Studio Code extension](how-to-setup-vs-code.md).
- If you plan on using an Azure Virtual Network to secure communication between your Azure ML workspace and the AKS cluster, your workspace and its associated resources (storage, key vault, Azure Container Registry) must have private endpoints or service endpoints in the same VNET as AKS cluster's VNET. Please follow tutorial [create a secure workspace](./tutorial-create-secure-workspace.md) to add those private endpoints or service endpoints to your VNET.
The following example demonstrates how to create a new AKS cluster using the SDK
# [Python](#tab/python) + ```python from azureml.core.compute import AksCompute, ComputeTarget
The following example demonstrates how to attach an existing AKS cluster to your
# [Python](#tab/python) + ```python from azureml.core.compute import AksCompute, ComputeTarget # Set the resource group that contains the AKS cluster and the cluster name
For information on attaching an AKS cluster in the portal, see [Create compute t
When you [create or attach an AKS cluster](how-to-create-attach-kubernetes.md), you can enable TLS termination with **[AksCompute.provisioning_configuration()](/python/api/azureml-core/azureml.core.compute.akscompute#provisioning-configuration-agent-count-none--vm-size-none--ssl-cname-none--ssl-cert-pem-file-none--ssl-key-pem-file-none--location-none--vnet-resourcegroup-name-none--vnet-name-none--subnet-name-none--service-cidr-none--dns-service-ip-none--docker-bridge-cidr-none--cluster-purpose-none--load-balancer-type-none--load-balancer-subnet-none-)** and **[AksCompute.attach_configuration()](/python/api/azureml-core/azureml.core.compute.akscompute#attach-configuration-resource-group-none--cluster-name-none--resource-id-none--cluster-purpose-none-)** configuration objects. Both methods return a configuration object that has an **enable_ssl** method, and you can use **enable_ssl** method to enable TLS. Following example shows how to enable TLS termination with automatic TLS certificate generation and configuration by using Microsoft certificate under the hood.++ ```python from azureml.core.compute import AksCompute, ComputeTarget
Following example shows how to enable TLS termination with automatic TLS certifi
``` Following example shows how to enable TLS termination with custom certificate and custom domain name. With custom domain and certificate, you must update your DNS record to point to the IP address of scoring endpoint, please see [Update your DNS](how-to-secure-web-service.md#update-your-dns) + ```python from azureml.core.compute import AksCompute, ComputeTarget
When you create or attach an AKS cluster, you can configure the cluster to use a
# [Create](#tab/akscreate) + To create an AKS cluster that uses an Internal Load Balancer, use the `load_balancer_type` and `load_balancer_subnet` parameters: ```python
aks_target.wait_for_completion(show_output = True)
# [Attach](#tab/aksattach) + To attach an AKS cluster and use an internal load balancer (no public IP for the cluster), use the `load_balancer_type` and `load_balancer_subnet` parameters: ```python
To detach a cluster from your workspace, use one of the following methods:
# [Python](#tab/python) + ```python aks_target.detach() ```
Updates to Azure Machine Learning components installed in an Azure Kubernetes Se
You can apply these updates by detaching the cluster from the Azure Machine Learning workspace and reattaching the cluster to the workspace. + ```python compute_target = ComputeTarget(workspace=ws, name=clusterWorkspaceName) compute_target.detach()
kubectl delete cm azuremlfeconfig
If TLS is enabled in the cluster, you will need to supply the TLS/SSL certificate and private key when reattaching the cluster. + ```python attach_config = AksCompute.attach_configuration(resource_group=resourceGroup, cluster_name=kubernetesClusterName)
To resolve this problem, create/attach the cluster by using the `load_balancer_t
* [Use Azure RBAC for Kubernetes authorization](../aks/manage-azure-rbac.md) * [How and where to deploy a model](how-to-deploy-and-where.md)
-* [Deploy a model to an Azure Kubernetes Service cluster](how-to-deploy-azure-kubernetes-service.md)
machine-learning How To Create Component Pipeline Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-component-pipeline-python.md
+
+ Title: 'Create and run machine learning pipelines using components with the Azure Machine Learning SDK v2 (Preview)'
+
+description: Build a machine learning pipeline for image classification. Focus on machine learning instead of infrastructure and automation.
++++++ Last updated : 05/10/2022+++
+# Create and run machine learning pipelines using components with the Azure Machine Learning SDK v2 (Preview)
++
+In this article, you learn how to build an [Azure Machine Learning pipeline](concept-ml-pipelines.md) using Python SDK v2 to complete an image classification task containing three steps: prepare data, train an image classification model, and score the model. Machine learning pipelines optimize your workflow with speed, portability, and reuse, so you can focus on machine learning instead of infrastructure and automation.
+
+The example trains a small [Keras](https://keras.io/) convolutional neural network to classify images in the [Fashion MNIST](https://github.com/zalandoresearch/fashion-mnist) dataset. The pipeline looks like the following.
+++
+In this article, you complete the following tasks:
+
+> [!div class="checklist"]
+> * Prepare input data for the pipeline job
+> * Create three components to prepare the data, train, and score the model
+> * Compose a Pipeline from the components
+> * Get access to workspace with compute
+> * Submit the pipeline job
+> * Review the output of the components and the trained neural network
+> * (Optional) Register the component for further reuse and sharing within workspace
+
+If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
+
+## Prerequisites
+
+* Complete the [Quickstart: Get started with Azure Machine Learning](quickstart-create-resources.md) if you don't already have an Azure Machine Learning workspace.
+* A Python environment in which you've installed Azure Machine Learning Python SDK v2 - [install instructions](https://github.com/Azure/azureml-examples/tree/sdk-preview/sdk#getting-started) - check the getting started section. This environment is for defining and controlling your Azure Machine Learning resources and is separate from the environment used at runtime for training.
+* Clone examples repository
+
+ To run the training examples, first clone the examples repository and change into the `sdk` directory:
+
+ ```bash
+ git clone --depth 1 https://github.com/Azure/azureml-examples --branch sdk-preview
+ cd azureml-examples/sdk
+ ```
+
+## Start an interactive Python session
+
+This article uses the Python SDK for Azure ML to create and control an Azure Machine Learning pipeline. The article assumes that you'll be running the code snippets interactively in either a Python REPL environment or a Jupyter notebook.
+
+This article is based on the [image_classification_keras_minist_convnet.ipynb](https://github.com/Azure/azureml-examples/blob/sdk-preview/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb) notebook found in the `sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet` directory of the [AzureML Examples](https://github.com/azure/azureml-examples) repository.
+
+## Import required libraries
+
+Import all the Azure Machine Learning required libraries that you'll need for this article:
+
+[!notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=required-library)]
+
+## Prepare input data for your pipeline job
+
+You need to prepare the input data for this image classification pipeline.
+
+Fashion-MNIST is a dataset of fashion images divided into 10 classes. Each image is a 28x28 grayscale image and there are 60,000 training and 10,000 test images. As an image classification problem, Fashion-MNIST is harder than the classic MNIST handwritten digit database. It's distributed in the same compressed binary form as the original [handwritten digit database](http://yann.lecun.com/exdb/mnist/).
+
+To define the input data of a job that references the Web-based data, run:
+
+[!notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=define-input)]
+
+By defining an `Input`, you create a reference to the data source location. The data remains in its existing location, so no extra storage cost is incurred.
+
+## Create components for building pipeline
+
+The image classification task can be split into three steps: prepare data, train model and score model.
+
+An [Azure Machine Learning component](concept-component.md) is a self-contained piece of code that does one step in a machine learning pipeline. In this article, you'll create three components for the image classification task:
+
+- Prepare data for training and test
+- Train a neural network for image classification using training data
+- Score the model using test data
+
+For each component, you need to prepare the following:
+
+1. Prepare the Python script containing the execution logic.
+
+1. Define the interface of the component.
+
+1. Add other metadata of the component, including the run-time environment, the command to run the component, and so on.
+
+The next sections show how to create components in two different ways: the first two components using a Python function and the third component using a YAML definition.
+
+### Create the data-preparation component
+
+The first component in this pipeline converts the compressed data files of `fashion_ds` into two CSV files, one for training and the other for scoring. You'll use a Python function to define this component.
+
+If you're following along with the example in the [AzureML Examples repo](https://github.com/Azure/azureml-examples/tree/sdk-preview/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet), the source files are already available in the `prep/` folder. This folder contains two files to construct the component: `prep_component.py`, which defines the component, and `conda.yaml`, which defines the run-time environment of the component.
+
+#### Define component using python function
+
+By using the `command_component()` function as a decorator, you can easily define the component's interface, its metadata, and the code to execute from a Python function. Each decorated Python function will be transformed into a single static specification (YAML) that the pipeline service can process.
++
+The code above defines a component with display name `Prep Data` using the `@command_component` decorator (a hedged sketch of such a decorated function appears after the following list):
+
+* `name` is the unique identifier of the component.
+* `version` is the current version of the component. A component can have multiple versions.
+* `display_name` is a friendly display name of the component in UI, which isn't unique.
+* `description` usually describes what task this component can complete.
+* `environment` specifies the run-time environment for this component. The environment of this component specifies a docker image and refers to the `conda.yaml` file.
+* The `prepare_data_component` function defines one input for `input_data` and two outputs for `training_data` and `test_data`.
+`input_data` is the input data path. `training_data` and `test_data` are the output data paths for the training data and the test data.
+* This component converts the data from `input_data` into a training data csv to `training_data` and a test data csv to `test_data`.
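As a hedged sketch only, a decorated function along these lines might look like the following. The decorator's import path has moved between preview releases (recent releases ship it in the `mldesigner` package), and the environment details and conversion logic here are assumptions; the `prep_component.py` file in the samples repo is authoritative.

```python
# Hedged sketch of a decorated component function; the import path, environment
# details, and omitted conversion logic are assumptions for illustration.
from mldesigner import command_component, Input, Output

@command_component(
    name="prep_data",
    version="1",
    display_name="Prep Data",
    description="Convert data to CSV files and split into training and test data",
    environment=dict(
        conda_file="./conda.yaml",
        image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04",
    ),
)
def prepare_data_component(
    input_data: Input(type="uri_folder"),
    training_data: Output(type="uri_folder"),
    test_data: Output(type="uri_folder"),
):
    # Convert the compressed files under input_data into training and test CSVs
    # written to the two output paths (conversion logic omitted here).
    ...
```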
+
+The following is what a component looks like in the studio UI.
+
+- A component is a block in a pipeline graph.
+- The `input_data`, `training_data` and `test_data` are ports of the component, which connect to other components for data streaming.
++
+#### Specify component run-time environment
+
+You'll need to modify the runtime environment in which your component runs.
++
+The above code creates an object of the `Environment` class, which represents the runtime environment in which the component runs.
+
+The `conda.yaml` file contains all packages used for the component, as follows:
+++
+Now, you've prepared all source files for the `Prep Data` component.
++
+### Create the train-model component
+
+In this section, you'll create a component for training the image classification model using a Python function, like the `Prep Data` component.
+
+The difference is that since the training logic is more complicated, you can put the original training code in a separate Python file.
+
+The source files of this component are under `train/` folder in the [AzureML Examples repo](https://github.com/Azure/azureml-examples/tree/sdk-preview/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet). This folder contains three files to construct the component:
+
+- `train.py`: contains the actual logic to train the model.
+- `train_component.py`: defines the interface of the component and imports the function in `train.py`.
+- `conda.yaml`: defines the run-time environment of the component.
+
+#### Get a script containing execution logic
+
+The `train.py` file contains a normal Python function that performs the training logic to train a Keras neural network for image classification. You can find the code [here](https://github.com/Azure/azureml-examples/tree/sdk-preview/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/train/train.py).
+
+#### Define component using python function
+
+After defining the training function successfully, you can use `@command_component` in the Azure Machine Learning SDK v2 to wrap your function as a component, which can be used in AML pipelines.
++
+The code above defines a component with display name `Train Image Classification Keras` using `@command_component`:
+
+* The `keras_train_component` function defines one input, `input_data`, where the training data comes from; one input, `epochs`, specifying the number of epochs to train for; and one output, `output_model`, where the model file is written. The default value of `epochs` is 10. The execution logic of this component comes from the `train()` function in `train.py` above.
+
+#### Specify component run-time environment
+
+The train-model component has a slightly more complex configuration than the prep-data component. The `conda.yaml` looks like the following:
++
+Now, you've prepared all source files for the `Train Image Classification Keras` component.
+
+### Create the score-model component
+
+In this section, unlike the previous components, you'll create a component to score the trained model using a YAML specification and a script.
+
+If you're following along with the example in the [AzureML Examples repo](https://github.com/Azure/azureml-examples/tree/sdk-preview/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet), the source files are already available in the `score/` folder. This folder contains three files to construct the component:
+
+- `score.py`: contains the source code of the component.
+- `score.yaml`: defines the interface and other details of the component.
+- `conda.yaml`: defines the run-time environment of the component.
+
+#### Get a script containing execution logic
+
+The `score.py` file contains a normal Python function, which performs the scoring logic.
+++
+The code in *score.py* takes three command-line arguments: `input_data`, `input_model` and `output_result`. The program scores the input model using the input data and then outputs the scoring result.
+
+#### Define component via Yaml
+
+In this section, you'll learn to create a component specification in the valid YAML component specification format. This file specifies the following information:
+
+- Metadata: name, display_name, version, type, and so on.
+- Interface: inputs and outputs
+- Command, code, & environment: The command, code, and environment used to run the component
++
+* `name` is the unique identifier of the component. Its display name is `Score Image Classification Keras`.
+* This component has two inputs and one output.
+Its source code path is defined in the `code` section. When the component is run in the cloud, all files from that path will be uploaded as the snapshot of this component.
+* The `command` section specifies the command to execute while running this component.
+* The `environment` section contains a docker image and a conda yaml file.
+
+#### Specify component run-time environment
+
+The score component uses the same image and conda.yaml file as the train component. The source file is in the [sample repository](https://github.com/Azure/azureml-examples/blob/sdk-preview/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/train/conda.yaml).
+
+Now, you've got all the source files for the score-model component.
+
+## Load components to build pipeline
+
+For the prep-data component and the train-model component defined by Python functions, you can import the components just like normal Python functions.
+
+In the following code, you import the `prepare_data_component()` and `keras_train_component()` functions from the `prep_component.py` file under the `prep` folder and the `train_component.py` file under the `train` folder, respectively.
+
+[!notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=load-from-dsl-component)]
+
+For the score component defined by YAML, you can use the `load_component()` function to load it.
+
+[!notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=load-from-yaml)]
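For reference, loading a YAML-defined component typically looks like the following minimal sketch; the parameter name has varied across preview versions (recent releases use `source=`) and the file path is an assumption.

```python
from azure.ai.ml import load_component

# Load the score component from its YAML specification (path is illustrative).
keras_score_component = load_component(source="./score/score.yaml")
```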
+
+## Build your pipeline
+
+Now that you've created and loaded all the components and input data, you can compose them into a pipeline:
+
+[!notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=build-pipeline)]
+
+The pipeline has a default compute `cpu_compute_target`, which means that if you don't specify compute for a specific node, that node will run on the default compute.
+
+The pipeline has a pipeline-level input `pipeline_input_data`. You can assign a value to the pipeline input when you submit a pipeline job.
+
+The pipeline contains three nodes: `prepare_data_node`, `train_node`, and `score_node` (a hedged sketch of the composition appears after the following list).
+
+- The `input_data` of `prepare_data_node` uses the value of `pipeline_input_data`.
+
+- The `input_data` of `train_node` is from the `training_data` output of the prepare_data_node.
+
+- The `input_data` of `score_node` is from the `test_data` output of `prepare_data_node`, and the `input_model` is from the `output_model` of `train_node`.
+
+- Since `train_node` trains a CNN model, you can specify its compute as the `gpu_compute_target`, which can improve the training performance.
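The following is a hedged sketch of what this composition might look like. The compute names, component function names, and decorator arguments are assumptions; the notebook referenced earlier contains the authoritative definition.

```python
from azure.ai.ml.dsl import pipeline

@pipeline(default_compute="cpu-cluster")  # assumed name for cpu_compute_target
def image_classification_keras_minist_convnet(pipeline_input_data):
    # Step 1: split the raw data into training and test CSVs.
    prepare_data_node = prepare_data_component(input_data=pipeline_input_data)

    # Step 2: train the Keras CNN on the training data, on a GPU cluster.
    train_node = keras_train_component(
        input_data=prepare_data_node.outputs.training_data
    )
    train_node.compute = "gpu-cluster"  # assumed name for gpu_compute_target

    # Step 3: score the trained model against the test data.
    score_node = keras_score_component(
        input_data=prepare_data_node.outputs.test_data,
        input_model=train_node.outputs.output_model,
    )

# Instantiate the pipeline with the Fashion MNIST input defined earlier.
pipeline_job = image_classification_keras_minist_convnet(pipeline_input_data=fashion_ds)
```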
+
+## Submit your pipeline job
+
+Now that you've constructed the pipeline, you can submit it to your workspace. To submit a job, you first need to connect to a workspace.
+
+### Get access to your workspace
+
+#### Configure credential
+
+We'll use `DefaultAzureCredential` to get access to the workspace.
+
+`DefaultAzureCredential` should be capable of handling most Azure SDK authentication scenarios.
+
+If `DefaultAzureCredential` doesn't work for you, see the [configure credential example](https://github.com/Azure/MachineLearningNotebooks/blob/master/configuration.ipynb) and the [azure-identity reference doc](/python/api/azure-identity/azure.identity?view=azure-python&preserve-view=true) for more available credentials.
++
+[!notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=credential)]
+
+#### Get a handle to a workspace with compute
+
+Create an `MLClient` object to manage Azure Machine Learning services.
+
+[!notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=workspace)]
+
+> [!IMPORTANT]
+> This code snippet expects the workspace configuration json file to be saved in the current directory or its parent. For more information on creating a workspace, see [Create and manage Azure Machine Learning workspaces](how-to-manage-workspace.md). For more information on saving the configuration to file, see [Create a workspace configuration file](how-to-configure-environment.md#workspace).
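A minimal sketch of connecting to the workspace follows; it assumes the `azure-ai-ml` and `azure-identity` packages and a `config.json` saved as described in the note above.

```python
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient

credential = DefaultAzureCredential()

# Reads the workspace details from config.json in the current or a parent directory.
ml_client = MLClient.from_config(credential=credential)
```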
+
+#### Submit pipeline job to workspace
+
+Now that you have a handle to your workspace, you can submit your pipeline job.
+
+[!notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=submit-pipeline)]
++
+The code above submits this image classification pipeline job to an experiment called `pipeline_samples`. It automatically creates the experiment if it doesn't exist. The `pipeline_input_data` uses `fashion_ds`.
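A hedged sketch of the submission call is shown below; the variable names are assumptions carried over from the earlier sketches.

```python
# Submit the pipeline job under the pipeline_samples experiment; the experiment
# is created automatically if it doesn't exist.
pipeline_job = ml_client.jobs.create_or_update(
    pipeline_job, experiment_name="pipeline_samples"
)
```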
+
+The call to submit the pipeline job completes quickly, and produces output similar to:
+
+| Experiment | Name | Type | Status | Details Page |
+| ---------- | ---- | -------- | --------- | ------------ |
+| pipeline_samples | sharp_pipe_4gvqx6h1fb | pipeline | Preparing | Link to Azure Machine Learning studio. |
+
+You can monitor the pipeline run by opening the link or you can block until it completes by running:
+
+[!notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=stream-pipeline)]
+
+> [!IMPORTANT]
+> The first pipeline run takes roughly *15 minutes*. All dependencies must be downloaded, a Docker image is created, and the Python environment is provisioned and created. Running the pipeline again takes significantly less time because those resources are reused instead of created. However, total run time for the pipeline depends on the workload of your scripts and the processes that are running in each pipeline step.
+
+### Check out outputs and debug your pipeline in the UI
+
+You can open the `Link to Azure Machine Learning studio`, which is the job detail page of your pipeline. You'll see the pipeline graph, like the following.
++
+You can check the logs and outputs of each component by right-clicking the component, or by selecting the component to open its detail pane. To learn more about how to debug your pipeline in the UI, see [How to use studio UI to build and debug Azure ML pipelines](how-to-use-pipeline-ui.md).
+
+## (Optional) Register components to workspace
+
+In the previous section, you built a pipeline using three components to complete an image classification task end to end. You can also register components to your workspace so that they can be shared and reused within the workspace. The following is an example of registering the prep-data component.
+
+[!notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=register-component)]
+
+Using `ml_client.components.get()`, you can get a registered component by name and version. Using `ml_client.components.create_or_update()`, you can register a component previously loaded from a Python function or YAML.
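For illustration, registration and later retrieval might look like the following minimal sketch; the component variable is an assumption carried over from the earlier sketches.

```python
# Register the locally defined component so it can be reused across the workspace.
registered_prep = ml_client.components.create_or_update(prepare_data_component)

# Later, retrieve the registered component by name and version.
prep_component = ml_client.components.get(
    name=registered_prep.name, version=registered_prep.version
)
```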
+
+## Next steps
+
+* For more examples of how to build pipelines by using the machine learning SDK, see the [example repository](https://github.com/Azure/azureml-examples/tree/sdk-preview/sdk/jobs/pipelines).
+* For how to use studio UI to submit and debug your pipeline, refer to [how to create pipelines using component in the UI](how-to-create-component-pipelines-ui.md).
+* For how to use Azure Machine Learning CLI to create components and pipelines, refer to [how to create pipelines using component with CLI](how-to-create-component-pipelines-cli.md).
machine-learning How To Create Component Pipelines Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-component-pipelines-cli.md
description: Create and run machine learning pipelines using the Azure Machine L
-- Previously updated : 03/31/2022++ Last updated : 05/10/2022 -+ ms.devlang: azurecli, cliv2-
-# Create and run machine learning pipelines using components with the Azure Machine Learning CLI (Preview)
+# Create and run machine learning pipelines using components with the Azure Machine Learning CLI
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
-In this article, you learn how to create and run [machine learning pipelines](concept-ml-pipelines.md) by using the Azure CLI and Components (for more, see [What is an Azure Machine Learning component?](concept-component.md)). You can [create pipelines without using components](how-to-train-cli.md#build-a-training-pipeline), but components offer the greatest amount of flexibility and reuse. AzureML Pipelines may be defined in YAML and run from the CLI, authored in Python, or composed in AzureML Studio Designer with a drag-and-drop UI. This document focuses on the CLI.
+In this article, you learn how to create and run [machine learning pipelines](concept-ml-pipelines.md) by using the Azure CLI and components (for more, see [What is an Azure Machine Learning component?](concept-component.md)). You can [create pipelines without using components](how-to-train-cli.md#build-a-training-pipeline), but components offer the greatest amount of flexibility and reuse. AzureML Pipelines may be defined in YAML and run from the CLI, authored in Python, or composed in AzureML Studio Designer with a drag-and-drop UI. This document focuses on the CLI.
## Prerequisites
-* If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
+- If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
-* You'll need an [Azure Machine Learning workspace](how-to-manage-workspace.md) for your pipelines and associated resources
+- You'll need an [Azure Machine Learning workspace](how-to-manage-workspace.md) for your pipelines and associated resources
-* [Install and set up the Azure CLI extension for Machine Learning](how-to-configure-cli.md)
+- [Install and set up the Azure CLI extension for Machine Learning](how-to-configure-cli.md)
-* Clone the examples repository:
+- Clone the examples repository:
```azurecli-interactive git clone https://github.com/Azure/azureml-examples --depth 1
- cd azureml-examples/cli/jobs/pipelines-with-components/
+ cd azureml-examples/cli/jobs/pipelines-with-components/basics
```
-## Introducing machine learning pipelines
+### Suggested pre-reading
+
+- [What is Azure Machine Learning pipeline](./concept-ml-pipelines.md)
+- [What is Azure Machine Learning component](./concept-component.md)
+
+## Create your first pipeline with component
+
+Let's create your first pipeline with components using an example. This section aims to give you an initial impression of what a pipeline and its components look like in AzureML, with a concrete example.
-Pipelines in AzureML let you sequence a collection of machine learning tasks into a workflow. Data Scientists typically iterate with scripts focusing on individual tasks such as data preparation, training, scoring, and so forth. When all these scripts are ready, pipelines help connect a collection of such scripts into production-quality processes that are:
+From the `cli/jobs/pipelines-with-components/basics` directory of the [`azureml-examples` repository](https://github.com/Azure/azureml-examples), navigate to the `3b_pipeline_with_data` subdirectory. There are three types of files in this directory. Those are the files you'll need to create when building your own pipeline.
-| Benefit | Description |
-| | |
-| Self-contained | Pipelines may run in a self-contained way for hours, or even days, taking upstream data, processing it, and passing it to later scripts without any manual intervention. |
-| Powerful | Pipelines may run on large compute clusters hosted in the cloud that have the processing power to crunch large datasets or to do thousands of sweeps to find the best models. |
-| Repeatable & Automatable | Pipelines can be scheduled to run and process new data and update ML models, making ML workflows repeatable. |
-| Reproducible | Pipelines can generate reproducible results by logging all activity and persisting all outputs including intermediate data to the cloud, helping meet compliance and audit requirements. |
+- **pipeline.yml**: This YAML file defines the machine learning pipeline. It describes how to break a full machine learning task into a multistep workflow. For example, considering a simple machine learning task of using historical data to train a sales forecasting model, you may want to build a sequential workflow with data processing, model training, and model evaluation steps. Each step is a component that has a well-defined interface and can be developed, tested, and optimized independently. The pipeline YAML also defines how the child steps connect to other steps in the pipeline; for example, the model training step generates a model file, which is then passed to a model evaluation step.
-Azure has other types of pipelines: Azure Data Factory pipelines have strong support for data-to-data pipelines, while Azure Pipelines are the best choice for CI/CD automation. [Compare Machine Learning pipelines with these different pipelines](concept-ml-pipelines.md#which-azure-pipeline-technology-should-i-use).
+- **component.yml**: This YAML file defines the component. It packages the following information:
+ - Metadata: name, display name, version, description, type, and so on. The metadata helps to describe and manage the component.
+ - Interface: inputs and outputs. For example, a model training component takes training data and the number of epochs as input, and generates a trained model file as output. Once the interface is defined, different teams can develop and test the component independently.
+ - Command, code & environment: the command, code, and environment used to run the component. The command is the shell command used to execute the component. Code usually refers to a source code directory. The environment could be an AzureML environment (curated or customer created), a Docker image, or a conda environment.
-## Create your first pipeline
+- **component_src**: This is the source code directory for a specific component. It contains the source code that will be executed in the component. You can use your preferred language (Python, R, and so on). The code must be executed by a shell command. The source code can take a few inputs from the shell command line to control how this step is executed. For example, a training step may take training data, a learning rate, and the number of epochs to control the training process. The arguments of the shell command are used to pass inputs and outputs to the code.
-From the `cli/jobs/pipelines-with-components/basics` directory of the [`azureml-examples` repository](https://github.com/Azure/azureml-examples), navigate to the `3a_basic_pipeline` subdirectory (earlier examples in that directory show non-component pipelines). List your available compute resources with the following command:
+ Now let's create a pipeline using the `3b_pipeline_with_data` example. We'll explain the detailed meaning of each file in the following sections.
+
+ First list your available compute resources with the following command:
```azurecli az ml compute list
If you don't have it, create a cluster called `cpu-cluster` by running:
az ml compute create -n cpu-cluster --type amlcompute --min-instances 0 --max-instances 10 ```
-Now, create a pipeline job with the following command:
+Now, create a pipeline job defined in the pipeline.yml file with the following command. The compute target will be referenced in the pipeline.yml file as `azureml:cpu-cluster`. If your compute target uses a different name, remember to update it in the pipeline.yml file.
```azurecli az ml job create --file pipeline.yml
az ml job create --file pipeline.yml
You should receive a JSON dictionary with information about the pipeline job, including:
-| Key | Description |
-| | |
-| `name` | The GUID-based name of the job. |
-| `experiment_name` | The name under which jobs will be organized in Studio. |
-| `services.Studio.endpoint` | A URL for monitoring and reviewing the pipeline job. |
-| `status` | The status of the job. This will likely be `Preparing` at this point. |
-
-### Review a component
-
- Take a quick look at the Python source code in `componentA_src`, `componentB_src`, and `componentC_src`. As you can see, each of these directories contains a slightly different "Hello World" program in Python.
-
-Open `ComponentA.yaml` to see how the first component is defined:
--
-In the current preview, only components of type `command` are supported. The `name` is the unique identifier and used in Studio to describe the component, and `display_name` is used for a display-friendly name. The `version` key-value pair allows you to evolve your pipeline components while maintaining reproducibility with older versions.
-
-All files in the `./componentA_src` directory will be uploaded to Azure for processing.
-
-The `environment` section allows you to specify the software environment in which the component runs. In this case, the component uses a base Docker image, as specified in `environment.image`. For more, see [Create & use software environments in Azure Machine Learning](how-to-use-environments.md).
-
-Finally, the `command` key is used to specify the command to be run.
-
-> [!NOTE]
-> The value of `command` begin with `>-` which is YAML "folding style with block-chomping." This allows you to write your command over multiple lines of text for clarity.
-
-For more information on components and their specification, see [What is an Azure Machine Learning component?](concept-component.md).
-
-### Review the pipeline specification
+| Key | Description |
+|-|--|
+| `name` | The GUID-based name of the job. |
+| `experiment_name` | The name under which jobs will be organized in Studio. |
+| `services.Studio.endpoint` | A URL for monitoring and reviewing the pipeline job. |
+| `status` | The status of the job. This will likely be `Preparing` at this point. |
-In the example directory, the `pipeline.yaml` file looks like the following code:
+Open the `services.Studio.endpoint` URL and you'll see a graph visualization of the pipeline that looks like the following.
-If you open the job's URL in Studio (the value of `services.Studio.endpoint` from the `job create` command when creating a job or `job show` after the job has been created), you'll see a graph representation of your pipeline:
+## Understand the pipeline definition YAML
+Let's take a look at the pipeline definition in the *3b_pipeline_with_data/pipeline.yml* file.
-There are no dependencies between the components in this pipeline. Generally, pipelines will have dependencies and this page will show them visually. Since these components aren't dependent upon each other, and since the `cpu-cluster` had sufficient nodes, they ran concurrently.
-If you double-click on a component in the pipeline graph, you can see details of the component's child run.
+The table below describes the most commonly used fields of the pipeline YAML schema. See the [full pipeline YAML schema here](reference-yaml-job-pipeline.md).
+|key|description|
+|---|---|
+|type|**Required**. Job type; must be `pipeline` for pipeline jobs.|
+|display_name|Display name of the pipeline job in the Studio UI. Editable in the Studio UI. Doesn't have to be unique across all jobs in the workspace.|
+|jobs|**Required**. Dictionary of the set of individual jobs to run as steps within the pipeline. These jobs are considered child jobs of the parent pipeline job. In this release, the supported job types in a pipeline are `command` and `sweep`.|
+|inputs|Dictionary of inputs to the pipeline job. The key is a name for the input within the context of the job, and the value is the input value. These pipeline inputs can be referenced by the inputs of an individual step job in the pipeline using the `${{ parent.inputs.<input_name> }}` expression.|
+|outputs|Dictionary of output configurations of the pipeline job. The key is a name for the output within the context of the job, and the value is the output configuration. These pipeline outputs can be referenced by the outputs of an individual step job in the pipeline using the `${{ parent.outputs.<output_name> }}` expression.|
-## Upload and use data
+In the *3b_pipeline_with_data* example, we've created a three-step pipeline.
-The example `3b_pipeline_with_data` demonstrates how you define input and output data flow and storage in pipelines.
+- The three steps are defined under `jobs`. All three steps are of type command job. Each step's definition is in a corresponding `component.yml` file. You can see the component YAML files under the *3b_pipeline_with_data* directory. We'll explain *componentA.yml* in the next section.
+- This pipeline has a data dependency, which is common in most real-world pipelines. Component_a takes data input from the local folder under `./data` (lines 17-20) and passes its output to componentB (line 29). Component_a's output can be referenced as `${{parent.jobs.component_a.outputs.component_a_output}}`.
+- The `compute` defines the default compute for this pipeline. If a component under `jobs` defines a different compute for this component, the system respects the component-specific setting.
-You define input data directories for your pipeline in the pipeline YAML file using the `inputs` path. You define output and intermediate data directories using the `outputs` path. You use these definitions in the `jobs.<JOB_NAME>.inputs` and `jobs.<JOB_NAME>.outputs` paths, as shown in the following image:
+### Read and write data in pipeline
-1. The `parent.inputs.pipeline_sample_input_data` path (line 7) creates a key identifier and uploads the input data from the `path` directory (line 9). This identifier `${{parent.inputs.pipeline_sample_input_data}}` is then used as the value of the `parent.jobs.componentA_job.inputs.componentA_input` key (line 20). In other words, the pipeline's `pipeline_sample_input_data` input is passed to the `componentA_input` input of Component A.
-1. The `parent.jobs.componentA_job.outputs.componentA_output` path (line 22) is used with the identifier `${{parent.jobs.componentA_job.outputs.componentA_output}}` as the value for the next step's `parent.jobs.componentB_job.inputs.componentB_input` key (line 28).
-1. As with Component A, the output of Component B (line 30) is used as the input to Component C (line 36).
-1. The pipeline's `parent.outputs.final_pipeline_output` key (line 12) is the source of the identifier used as the value for the `parent.jobs.componentC_job.outputs.componentC_output` key (line 38). In other words, Component C's output is the pipeline's final output.
+One common scenario is to read and write data in your pipeline. In AzureML, we use the same schema to [read and write data](how-to-read-write-data-v2.md) for all types of jobs (pipeline job, command job, and sweep job). Below are pipeline job examples of using data for common scenarios.
-Studio's visualization of this pipeline looks like this:
+- [local data](https://github.com/Azure/azureml-examples/tree/sdk-preview/cli/jobs/pipelines-with-components/basics/4a_local_data_input)
+- [web file with public URL](https://github.com/Azure/azureml-examples/blob/sdk-preview/cli/jobs/pipelines-with-components/basics/4c_web_url_input/pipeline.yml)
+- [AzureML datastore and path](https://github.com/Azure/azureml-examples/tree/sdk-preview/cli/jobs/pipelines-with-components/basics/4b_datastore_datapath_uri)
+- [AzureML data asset](https://github.com/Azure/azureml-examples/tree/sdk-preview/cli/jobs/pipelines-with-components/basics/4d_data_input)
+## Understand the component definition YAML
-You can see that `parent.inputs.pipeline_sample_input_data` is represented as a `Dataset`. The keys of the `jobs.<COMPONENT_NAME>.inputs` and `outputs` paths are shown as data flows between the pipeline components.
+Now let's look at *componentA.yml* as an example to understand the component definition YAML.
-You can run this example by switching to the `3b_pipeline_with_data` subdirectory of the samples repository and running:
-`az ml job create --file pipeline.yaml`
+The most commonly used fields of the component YAML schema are described in the table below. See the [full component YAML schema here](reference-yaml-component-command.md).
-### Access data in your script
+|key|description|
+|---|---|
+|name|**Required**. Name of the component. Must be unique across the AzureML workspace. Must start with a lowercase letter. Lowercase letters, numbers, and underscores (_) are allowed. Maximum length is 255 characters.|
+|display_name|Display name of the component in the studio UI. Can be non-unique within the workspace.|
+|command|**Required**. The command to execute.|
+|code|Local path to the source code directory to be uploaded and used for the component.|
+|environment|**Required**. The environment that will be used to execute the component.|
+|inputs|Dictionary of component inputs. The key is a name for the input within the context of the component, and the value is the component input definition. Inputs can be referenced in the command using the `${{ inputs.<input_name> }}` expression.|
+|outputs|Dictionary of component outputs. The key is a name for the output within the context of the component, and the value is the component output definition. Outputs can be referenced in the command using the `${{ outputs.<output_name> }}` expression.|
+|is_deterministic|Whether to reuse the previous job's result if the component inputs haven't changed. The default value is `true`, also known as reuse by default. The common scenario for setting it to `false` is to force data to be reloaded from cloud storage or a URL.|
-Input and output directory paths for a component are passed to your script as arguments. The name of the argument will be the key you specified in the YAML file in the `inputs` or `outputs` path. For instance:
+For the example in *3b_pipeline_with_data/componentA.yml*, componentA has one data input and one data output, which can be connected to other steps in the parent pipeline. All the files under the `code` section in the component YAML will be uploaded to AzureML when submitting the pipeline job. In this example, files under `./componentA_src` will be uploaded (line 16 in *componentA.yml*). You can see the uploaded source code in the Studio UI: double-click the ComponentA step and navigate to the Snapshot tab, as shown in the screenshot below. We can see that it's a hello-world script that does some simple printing and writes the current datetime to the `componentA_output` path. The component takes input and output through command-line arguments, which are handled in *hello.py* using `argparse`.
+
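As a hedged sketch, a component script along the lines described above might look like the following; the argument names follow the componentA example, while the actual *hello.py* in the samples repo is authoritative.

```python
# Illustrative sketch of a component script that receives its input and output
# paths as command-line arguments and writes the current datetime to the output.
import argparse
from datetime import datetime
from pathlib import Path

parser = argparse.ArgumentParser()
parser.add_argument("--componentA_input", type=str)
parser.add_argument("--componentA_output", type=str)
args = parser.parse_args()

print(f"componentA_input path: {args.componentA_input}")
print(f"componentA_output path: {args.componentA_output}")

# Write the current datetime to a file under the output folder.
output_dir = Path(args.componentA_output)
output_dir.mkdir(parents=True, exist_ok=True)
(output_dir / "output.txt").write_text(str(datetime.now()))
```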
-```python
-import argparse
+### Input and output
+Inputs and outputs define the interface of a component. An input or output can be either a literal value (of type `string`, `number`, `integer`, or `boolean`) or an object containing an input schema.
-parser = argparse.ArgumentParser()
-parser.add_argument("--componentA_input", type=str)
-parser.add_argument("--componentA_output", type=str)
+**Object inputs** (of type `uri_file`, `uri_folder`, `mltable`, `mlflow_model`, or `custom_model`) can connect to other steps in the parent pipeline job and hence pass data or a model to other steps. In the pipeline graph, an object type input renders as a connection dot.
-args = parser.parse_args()
+**Literal value inputs** (`string`, `number`, `integer`, `boolean`) are the parameters you can pass to the component at run time. You can add a default value for literal inputs in the `default` field. For the `number` and `integer` types, you can also add minimum and maximum accepted values using the `min` and `max` fields. If the input value exceeds the min or max, the pipeline fails at validation. Validation happens before you submit a pipeline job, which can save you time. Validation works for the CLI, the Python SDK, and the designer UI. The screenshot below shows a validation example in the designer UI. Similarly, you can define allowed values in the `enum` field.
-print("componentA_input path: %s" % args.componentA_input)
-print("componentA_output path: %s" % args.componentA_output)
-```
-
-For inputs, the pipeline orchestrator downloads (or mounts) the data from the cloud store and makes it available as a local folder to read from for the script that runs in each job. This behavior means the script doesn't need any modification between running locally and running on cloud compute. Similarly, for outputs, the script writes to a local folder that is mounted and synced to the cloud store or is uploaded after script is complete. You can use the `mode` keyword to specify download vs mount for inputs and upload vs mount for outputs.
-
-## Create a preparation-train-evaluate pipeline
-
-One of the common scenarios for machine learning pipelines has three major phases:
-
-1. Data preparation
-1. Training
-1. Evaluating the model
-
-Each of these phases may have multiple components. For instance, the data preparation step may have separate steps for loading and transforming the training data. The examples repository contains an end-to-end example pipeline in the `cli/jobs/pipelines-with-components/nyc_taxi_data_regression` directory.
-
-The `pipeline.yml` begins with the mandatory `type: pipeline` key-value pair. Then, it defines inputs and outputs as follows:
-
+If you want to add an input to a component, remember to edit three places: 1) the `inputs` field in the component YAML, 2) the `command` field in the component YAML, and 3) the component source code to handle the command-line input. These places are marked with green boxes in the screenshot above.
-As described previously, these entries specify the input data to the pipeline, in this case the dataset in `./data`, and the intermediate and final outputs of the pipeline, which are stored in separate paths. The names within these input and output entries become values in the `inputs` and `outputs` entries of the individual jobs:
+### Environment
+Environment defines the environment in which to execute the component. It could be an AzureML environment (curated or custom registered), a Docker image, or a conda environment. See the examples below.
-Notice how `parent.jobs.train-job.outputs.model_output` is used as an input to both the prediction job and the scoring job, as shown in the following diagram:
+- [AzureML registered environment asset](https://github.com/Azure/azureml-examples/tree/sdk-preview/cli/jobs/pipelines-with-components/basics/5b_env_registered). It's referenced in the component using the `azureml:<environment-name>:<environment-version>` syntax.
+- [public docker image](https://github.com/Azure/azureml-examples/tree/sdk-preview/cli/jobs/pipelines-with-components/basics/5a_env_public_docker_image)
+- [conda file](https://github.com/Azure/azureml-examples/tree/sdk-preview/cli/jobs/pipelines-with-components/basics/5c_env_conda_file). A conda file needs to be used together with a base image.
-
-## Register components for reuse and sharing
+## Register component for reuse and sharing
While some components will be specific to a particular pipeline, the real benefit of components comes from reuse and sharing. Register a component in your Machine Learning workspace to make it available for reuse. Registered components support automatic versioning so you can update the component but assure that pipelines that require an older version will continue to work.
az ml component create --file score.yml
az ml component create --file eval.yml ```
-After these commands run to completion, you can see the components in Studio:
-
-![Screenshot of Studio showing the components that were just registered](media/how-to-create-component-pipelines-cli/registered-components.png)
-
-Click on a component. You'll see some basic information about the component, such as creation and modification dates. Also, you'll see editable fields for Tags and Description. The tags can be used for adding rapidly searched keywords. The description field supports Markdown formatting and should be used to describe your component's functionality and basic use.
-
-### Use registered components in a job specification file
+After these commands run to completion, you can see the components in Studio, under **Assets** -> **Components**:
-In the `1b_e2e_registered_components` directory, open the `pipeline.yml` file. The keys and values in the `inputs` and `outputs` dictionaries are similar to those already discussed. The only significant difference is the value of the `command` values in the `jobs.<JOB_NAME>.component` entries. The `component` value is of the form `azureml:<JOB_NAME>:<COMPONENT_VERSION>`. The `train-job` definition, for instance, specifies the latest version of the registered component `Train` should be used:
+Select a component. You'll see detailed information for each version of the component.
+Under the **Details** tab, you'll see basic information about the component, such as its name, who created it, and the version. You'll see editable fields for Tags and Description. The tags can be used for adding rapidly searched keywords. The description field supports Markdown formatting and should be used to describe your component's functionality and basic use.
-## Caching & reuse
+Under the **Jobs** tab, you'll see the history of all jobs that use this component.
-By default, only those components whose inputs have changed are rerun. You can change this behavior by setting the `is_deterministic` key of the component specification YAML to `False`. A common need for this is a component that loads data that may have been updated from a fixed location or URL.
-## FAQ
+### Use registered components in a pipeline job YAML file
-### How do I change the location of the outputs generated by the pipeline?
-You can use the `settings` section in the pipeline job to specify a different datastore for all the jobs in the pipeline (See line 25 - 26 in [this example](https://github.com/Azure/azureml-examples/blob/main/cli/jobs/pipelines-with-components/basics/1a_e2e_local_components/pipeline.yml)). Specifying a different datastore for a specific job or specific output is currently not supported. Specifying paths where are outputs are saved on the datastore is also not currently supported.
+Let's use `1b_e2e_registered_components` to demonstrate how to use a registered component in the pipeline YAML. Navigate to the `1b_e2e_registered_components` directory and open the `pipeline.yml` file. The keys and values in the `inputs` and `outputs` fields are similar to those already discussed. The only significant difference is the value of the `component` field in the `jobs.<JOB_NAME>.component` entries. The `component` value is of the form `azureml:<COMPONENT_NAME>:<COMPONENT_VERSION>`. The `train-job` definition, for instance, specifies that the latest version of the registered component `my_train` should be used:
-### How do I specify a compute that can be used by all jobs?
-You can specify a compute at the pipeline job level, which will be used by jobs that don't explicitly mention a compute. (See line 28 in [this example](https://github.com/Azure/azureml-examples/blob/main/cli/jobs/pipelines-with-components/basics/1a_e2e_local_components/pipeline.yml).)
-### What job types are supported in the pipeline job?
-The current release supports command, component, and sweep job types.
+### Manage components
-### What are the different modes that I use with inputs or outputs?
-| Category | Allowed Modes | Default |
-| | | |
-| Dataset Inputs | `ro_mount` and `download` | `ro_mount` |
-| URI Inputs | `ro_mount`, `rw_mount`, and `download` | `ro_mount` |
-| Outputs | `rw_mount`, `upload` | `rw_mount` |
+You can check component details and manage components using the CLI (v2). Use `az ml component -h` to get detailed instructions on component commands. The table below lists all available commands. See more examples in the [Azure CLI reference](/cli/azure/ml/component?view=azure-cli-latest&preserve-view=true).
-### When do I use command jobs vs component jobs?
-You can iterate quickly with command jobs and then connect them together into a pipeline. However, this makes the pipeline monolithic. If someone needs to use one of the steps of the pipeline in a different pipeline, they need to copy over the job definition, the scripts, environment, and so on. If you want to make the individual steps reusable across pipelines and easy to understand and use for others on your team, the additional steps to create and register makes sense. The other reason you want to consider using Components is you want to use the Drag-and-Drop Designer UI to build Pipelines. Since jobs aren't registered with the workspace, you can't drag-and-drop them on the Designer canvas.
-
-### I'm doing distributed training in my component. The component, which is registered, specifies distributed training settings including node count. How can I change the number of nodes used during runtime? The optimal number of nodes is best determined at runtime, so I don't want to update the component and register a new version.
-
-You can use the overrides section in component job to change the resource and distribution settings. See [this example using TensorFlow](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/pipelines-with-components/basics/6a_tf_hello_world) or [this example using PyTorch](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/pipelines-with-components/basics/6b_pytorch_hello_world).
-
-### How can I define an environment with conda dependencies inside a component?
-See [this example](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/pipelines-with-components/basics/5c_env_conda_file).
-
+|commands|description|
+|---|---|
+|`az ml component create`|Create a component|
+|`az ml component list`|List components in a workspace|
+|`az ml component show`|Show details of a component|
+|`az ml component update`|Update a component. Only a few fields (description, display_name) support updates.|
+|`az ml component archive`|Archive a component container|
+|`az ml component restore`|Restore an archived component|
## Next steps
-- To share your pipeline with colleagues or customers, see [Publish machine learning pipelines](how-to-deploy-pipelines.md)
-- Use [these Jupyter notebooks on GitHub](https://aka.ms/aml-pipeline-readme) to explore machine learning pipelines further
-- See the SDK reference help for the [azureml-pipelines-core](/python/api/azureml-pipeline-core/) package and the [azureml-pipelines-steps](/python/api/azureml-pipeline-steps/) package
-- See [Troubleshooting machine learning pipelines](how-to-debug-pipelines.md) for tips on debugging and troubleshooting pipelines
-- Learn how to run notebooks by following the article [Use Jupyter notebooks to explore this service](samples-notebooks.md).
+- Try out the [CLI v2 component examples](https://github.com/Azure/azureml-examples/tree/sdk-preview/cli/jobs/pipelines-with-components)
machine-learning How To Create Component Pipelines Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-component-pipelines-ui.md
+
+ Title: Create and run component-based ML pipelines (UI)
+
+description: Create and run machine learning pipelines using the Azure Machine Learning studio UI.
+++++ Last updated : 05/10/2022++++
+# Create and run machine learning pipelines using components with the Azure Machine Learning studio (Preview)
++
+In this article, you'll learn how to create and run [machine learning pipelines](concept-ml-pipelines.md) by using the Azure Machine Learning studio and [Components](concept-component.md). You can [create pipelines without using components](how-to-train-cli.md#build-a-training-pipeline), but components offer greater flexibility and reuse. Azure ML Pipelines may be defined in YAML and [run from the CLI](how-to-create-component-pipelines-cli.md), [authored in Python](how-to-create-component-pipeline-python.md), or composed in Azure ML Studio Designer with a drag-and-drop UI. This document focuses on the AzureML studio designer UI.
++
+## Prerequisites
+
+* If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
+
+* You'll need an [Azure Machine Learning workspace](how-to-manage-workspace.md) for your pipelines and associated resources
+
+* [Install and set up the Azure CLI extension for Machine Learning](how-to-configure-cli.md)
+
+* Clone the examples repository:
+
+ ```azurecli-interactive
+ git clone https://github.com/Azure/azureml-examples --depth 1
+ cd azureml-examples/cli/jobs/pipelines-with-components/
+ ```
+
+## Register component in your workspace
+
+To build a pipeline using components in the UI, you must first register the components to your workspace. You can use the CLI or SDK to register components to your workspace so that you can share and reuse them within the workspace. Registered components support automatic versioning, so you can update a component while ensuring that pipelines that require an older version continue to work.
+
+The example below uses the CLI. If you want to learn more about how to build a component, see [Create and run pipelines using components with CLI](how-to-create-component-pipelines-cli.md).
+
+1. From the `cli/jobs/pipelines-with-components/basics` directory of the [`azureml-examples` repository](https://github.com/Azure/azureml-examples), navigate to the `1b_e2e_registered_components` subdirectory.
+
+1. Register the components to the Azure ML workspace using the following commands. Learn more about [ML components](concept-component.md).
+
+ ```CLI
+ az ml component create --file train.yml
+ az ml component create --file score.yml
+ az ml component create --file eval.yml
+ ```
+
+1. After the components are registered successfully, you can see them in the studio UI.
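For reference, a minimal, hypothetical sketch of a command component definition such as `train.yml` (the name, inputs, script, and environment shown are placeholders rather than the actual sample contents) might look like:

```yaml
# Hypothetical sketch of a command component; all names and values are placeholders.
$schema: https://azuremlschemas.azureedge.net/latest/commandComponent.schema.json
type: command
name: my_train
display_name: Train model
version: 1
inputs:
  training_data:
    type: uri_folder
  max_epochs:
    type: integer
outputs:
  model_output:
    type: uri_folder
code: ./train_src
environment: azureml:AzureML-sklearn-0.24-ubuntu18.04-py37-cpu:9
command: >-
  python train.py
  --training_data ${{inputs.training_data}}
  --max_epochs ${{inputs.max_epochs}}
  --model_output ${{outputs.model_output}}
```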
++
+## Create pipeline using registered component
+
+1. Create a new pipeline in the designer.
+
+ :::image type="content" source="./media/how-to-create-component-pipelines-ui/new-pipeline.png" alt-text="Screenshot showing creating new pipeline in designer homepage." lightbox ="./media/how-to-create-component-pipelines-ui/new-pipeline.png":::
+
+1. Set the default compute target of the pipeline.
+
+ Select the **Gear icon** ![Screenshot of the gear icon that is in the UI.](./media/tutorial-designer-automobile-price-train-score/gear-icon.png) at the top right of the canvas to open the **Settings** pane. Select the default compute target for your pipeline.
+
+ :::image type="content" source="./media/how-to-create-component-pipelines-ui/set-default-compute.png" alt-text="Screenshot showing setting default compute for the pipeline." lightbox ="./media/how-to-create-component-pipelines-ui/set-default-compute.png":::
+
+ > [!Important]
+    > Attached compute isn't supported; use [compute instances or clusters](concept-compute-target.md#azure-machine-learning-compute-managed) instead.
+
+1. In the asset library, you can see the **Data assets** and **Components** tabs. Switch to the **Components** tab to see the components registered in the previous section.
+
+ :::image type="content" source="./media/how-to-create-component-pipelines-ui/asset-library.png" alt-text="Screenshot showing registered component in asset library." lightbox ="./media/how-to-create-component-pipelines-ui/asset-library.png":::
+
+    Drag the components onto the canvas. By default, the default version of the component is used. You can change to a specific version in the right pane of the component if your component has multiple versions.
+
+ :::image type="content" source="./media/how-to-create-component-pipelines-ui/change-component-version.png" alt-text="Screenshot showing changing version of component." lightbox ="./media/how-to-create-component-pipelines-ui/change-component-version.png":::
+
+1. Connect the upstream component output ports to the downstream component input ports.
+
+1. Select a component and you'll see a pane on the right where you can configure the component.
+
+    For components with primitive-type inputs like number, integer, string, and boolean, you can change the values of those inputs in the component detail pane.
+
+    In the right pane, you can also change the output settings and the compute target where this component runs.
+
+ :::image type="content" source="./media/how-to-create-component-pipelines-ui/component-parameter.png" alt-text="Screenshot showing component parameter settings." lightbox ="./media/how-to-create-component-pipelines-ui/component-parameter.png":::
+
+> [!NOTE]
+> Currently registered components and the designer built-in components cannot be used together.
+
+## Submit pipeline
+
+1. Select **Submit**, and fill in the required information for your pipeline job.
+
+ :::image type="content" source="./media/how-to-create-component-pipelines-ui/submit-pipeline.png" alt-text="Screenshot of set up pipeline job with submit highlighted." lightbox ="./media/how-to-create-component-pipelines-ui/submit-pipeline.png":::
+
+1. After you submit successfully, you'll see a job detail page link in the left pane. Select **Job detail** to go to the pipeline job detail page to check status and debug.
+
+ :::image type="content" source="./media/how-to-create-component-pipelines-ui/submission-list.png" alt-text="Screenshot showing the submitted jobs list." lightbox ="./media/how-to-create-component-pipelines-ui/submission-list.png":::
+
+ > [!NOTE]
+ > The **Submitted jobs** list only contains pipeline jobs submitted during an active session. A page reload will clear out the content.
+
+## Next steps
+
+- Use [these Jupyter notebooks on GitHub](https://github.com/Azure/azureml-examples/tree/pipeline/builder_function_samples/cli/jobs/pipelines-with-components) to explore machine learning pipelines further
+- Learn [how to use CLI v2 to create a pipeline using components](how-to-create-component-pipelines-cli.md).
+- Learn [how to use SDK v2 (preview) to create a pipeline using components](how-to-create-component-pipeline-python.md).
machine-learning How To Create Machine Learning Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-machine-learning-pipelines.md
Last updated 10/21/2021 --+ # Create and run machine learning pipelines with Azure Machine Learning SDK + In this article, you learn how to create and run [machine learning pipelines](concept-ml-pipelines.md) by using the [Azure Machine Learning SDK](/python/api/overview/azure/ml/intro). Use **ML pipelines** to create a workflow that stitches together various ML phases. Then, publish that pipeline for later access or sharing with others. Track ML pipelines to see how your model is performing in the real world and to detect data drift. ML pipelines are ideal for batch scoring scenarios, using various computes, reusing steps instead of rerunning them, and sharing ML workflows with others. This article isn't a tutorial. For guidance on creating your first pipeline, see [Tutorial: Build an Azure Machine Learning pipeline for batch scoring](tutorial-pipeline-batch-scoring-classification.md) or [Use automated ML in an Azure Machine Learning pipeline in Python](how-to-use-automlstep-in-pipelines.md).
Create the resources required to run an ML pipeline:
* Configure a `Dataset` object to point to persistent data that lives in, or is accessible in, a datastore. Configure an `OutputFileDatasetConfig` object for temporary data passed between pipeline steps.
-* Set up the [compute targets](concept-azure-machine-learning-architecture.md#compute-targets) on which your pipeline steps will run.
+* Set up the [compute targets](v1/concept-azure-machine-learning-architecture.md#compute-targets) on which your pipeline steps will run.
### Set up a datastore
def_file_store = Datastore(ws, "workspacefilestore")
```
-Steps generally consume data and produce output data. A step can create data such as a model, a directory with model and dependent files, or temporary data. This data is then available for other steps later in the pipeline. To learn more about connecting your pipeline to your data, see the articles [How to Access Data](how-to-access-data.md) and [How to Register Datasets](how-to-create-register-datasets.md).
+Steps generally consume data and produce output data. A step can create data such as a model, a directory with model and dependent files, or temporary data. This data is then available for other steps later in the pipeline. To learn more about connecting your pipeline to your data, see the articles [How to Access Data](how-to-access-data.md) and [How to Register Datasets](./v1/how-to-create-register-datasets.md).
### Configure data with `Dataset` and `OutputFileDatasetConfig` objects
When you submit the pipeline, Azure Machine Learning checks the dependencies for
> [!IMPORTANT] > [!INCLUDE [amlinclude-info](../../includes/machine-learning-amlignore-gitignore.md)] >
-> For more information, see [Snapshots](concept-azure-machine-learning-architecture.md#snapshots).
+> For more information, see [Snapshots](v1/concept-azure-machine-learning-architecture.md#snapshots).
```python from azureml.core import Experiment
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-manage-compute-instance.md
description: Learn how to create and manage an Azure Machine Learning compute in
+ ---++ Previously updated : 10/21/2021 Last updated : 05/04/2022 # Create and manage an Azure Machine Learning compute instance Learn how to create and manage a [compute instance](concept-compute-instance.md) in your Azure Machine Learning workspace.
-Use a compute instance as your fully configured and managed development environment in the cloud. For development and testing, you can also use the instance as a [training compute target](concept-compute-target.md#train) or for an [inference target](concept-compute-target.md#deploy). A compute instance can run multiple jobs in parallel and has a job queue. As a development environment, a compute instance cannot be shared with other users in your workspace.
+Use a compute instance as your fully configured and managed development environment in the cloud. For development and testing, you can also use the instance as a [training compute target](concept-compute-target.md#train) or for an [inference target](concept-compute-target.md#deploy). A compute instance can run multiple jobs in parallel and has a job queue. As a development environment, a compute instance can't be shared with other users in your workspace.
In this article, you learn how to: * [Create](#create) a compute instance * [Manage](#manage) (start, stop, restart, delete) a compute instance
-* [Create a schedule](#schedule) to automatically start and stop the compute instance (preview)
-* [Use a setup script](#setup-script) to customize and configure the compute instance
+* [Create a schedule](#schedule-automatic-start-and-stop-preview) to automatically start and stop the compute instance (preview)
+
+You can also [use a setup script (preview)](how-to-customize-compute-instance.md) to create the compute instance with your own custom environment.
Compute instances can run jobs securely in a [virtual network environment](how-to-secure-training-vnet.md), without requiring enterprises to open up SSH ports. The job executes in a containerized environment and packages your model dependencies in a Docker container.
+> [!NOTE]
+> This article shows CLI v2 in the sections below. If you are still using CLI v1, see [Create an Azure Machine Learning compute instance (CLI v1)](v1/how-to-create-manage-compute-instance.md).
+ ## Prerequisites * An Azure Machine Learning workspace. For more information, see [Create an Azure Machine Learning workspace](how-to-manage-workspace.md).
-* The [Azure CLI extension for Machine Learning service (v1)](reference-azure-machine-learning-cli.md), [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro), or the [Azure Machine Learning Visual Studio Code extension](how-to-setup-vs-code.md).
+* The [Azure CLI extension for Machine Learning service (v2)](https://aka.ms/sdk-v2-install), [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro), or the [Azure Machine Learning Visual Studio Code extension](how-to-setup-vs-code.md).
## Create
Compute instances can run jobs securely in a [virtual network environment](how-t
Creating a compute instance is a one time process for your workspace. You can reuse the compute as a development workstation or as a compute target for training. You can have multiple compute instances attached to your workspace.
-The dedicated cores per region per VM family quota and total regional quota, which applies to compute instance creation, is unified and shared with Azure Machine Learning training compute cluster quota. Stopping the compute instance does not release quota to ensure you will be able to restart the compute instance. It is not possible to change the virtual machine size of compute instance once it is created.
+The dedicated cores per region per VM family quota and total regional quota, which applies to compute instance creation, is unified and shared with Azure Machine Learning training compute cluster quota. Stopping the compute instance doesn't release quota to ensure you'll be able to restart the compute instance. It isn't possible to change the virtual machine size of compute instance once it's created.
-<a name="create-instance"></a> The following example demonstrates how to create a compute instance:
+The following example demonstrates how to create a compute instance:
# [Python](#tab/python) + ```python import datetime import time
For more information on the classes, methods, and parameters used in this exampl
# [Azure CLI](#tab/azure-cli)
-```azurecli-interactive
-az ml computetarget create computeinstance -n instance -s "STANDARD_D3_V2" -v
+```azurecli
+az ml compute create -f create-instance.yml
```
-For more information, see the [az ml computetarget create computeinstance](/cli/azure/ml(v1)/computetarget/create#az-ml-computetarget-create-computeinstance) reference.
+Where the file *create-instance.yml* is:
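A minimal sketch of what *create-instance.yml* can contain (the name and VM size are illustrative placeholders) is:

```yaml
# Minimal sketch of create-instance.yml; the name and size values are placeholders.
$schema: https://azuremlschemas.azureedge.net/latest/computeInstance.schema.json
name: instance
type: computeinstance
size: STANDARD_DS3_v2
```

You could then pass this file to the `az ml compute create -f create-instance.yml` command shown above.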
++ # [Studio](#tab/azure-studio)
For more information, see the [az ml computetarget create computeinstance](/cli/
|Field |Description | |||
- |Compute name | <ul><li>Name is required and must be between 3 to 24 characters long.</li><li>Valid characters are upper and lower case letters, digits, and the **-** character.</li><li>Name must start with a letter</li><li>Name needs to be unique across all existing computes within an Azure region. You will see an alert if the name you choose is not unique</li><li>If **-** character is used, then it needs to be followed by at least one letter later in the name</li></ul> |
- |Virtual machine type | Choose CPU or GPU. This type cannot be changed after creation |
+ |Compute name | <ul><li>Name is required and must be between 3 to 24 characters long.</li><li>Valid characters are upper and lower case letters, digits, and the **-** character.</li><li>Name must start with a letter</li><li>Name needs to be unique across all existing computes within an Azure region. You'll see an alert if the name you choose isn't unique</li><li>If **-** character is used, then it needs to be followed by at least one letter later in the name</li></ul> |
+ |Virtual machine type | Choose CPU or GPU. This type can't be changed after creation |
|Virtual machine size | Supported virtual machine sizes might be restricted in your region. Check the [availability list](https://azure.microsoft.com/global-infrastructure/services/?products=virtual-machines) | 1. Select **Create** unless you want to configure advanced settings for the compute instance. 1. <a name="advanced-settings"></a> Select **Next: Advanced Settings** if you want to:
- * Enable SSH access. Follow the [detailed SSH access instructions](#enable-ssh) below.
+ * Enable SSH access. Follow the [detailed SSH access instructions](#enable-ssh-access) below.
* Enable virtual network. Specify the **Resource group**, **Virtual network**, and **Subnet** to create the compute instance inside an Azure Virtual Network (vnet). You can also select __No public IP__ (preview) to prevent the creation of a public IP address, which requires a private link workspace. You must also satisfy these [network requirements](./how-to-secure-training-vnet.md) for virtual network setup.
- * Assign the computer to another user. For more about assigning to other users, see [Create on behalf of](#on-behalf).
- * Provision with a setup script (preview) - for more details about how to create and use a setup script, see [Customize the compute instance with a script](#setup-script).
- * Add schedule (preview). Schedule times for the compute instance to automatically start and/or shutdown. See [schedule details](#schedule) below.
+    * Assign the compute instance to another user. For more about assigning to other users, see [Create on behalf of](#create-on-behalf-of-preview).
+ * Provision with a setup script (preview) - for more information about how to create and use a setup script, see [Customize the compute instance with a script](how-to-customize-compute-instance.md).
+ * Add schedule (preview). Schedule times for the compute instance to automatically start and/or shutdown. See [schedule details](#schedule-automatic-start-and-stop-preview) below.
You can also create a compute instance with an [Azure Resource Manager template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices/machine-learning-compute-create-computeinstance).
-## <a name="enable-ssh"></a> Enable SSH access
+## Enable SSH access
-SSH access is disabled by default. SSH access cannot be changed after creation. Make sure to enable access if you plan to debug interactively with [VS Code Remote](how-to-set-up-vs-code-remote.md).
+SSH access is disabled by default. SSH access can't be changed after creation. Make sure to enable access if you plan to debug interactively with [VS Code Remote](how-to-set-up-vs-code-remote.md).
[!INCLUDE [amlinclude-info](../../includes/machine-learning-enable-ssh.md)] Once the compute instance is created and running, see [Connect with SSH access](how-to-create-attach-compute-studio.md#ssh-access).
-## <a name="on-behalf"></a> Create on behalf of (preview)
+## Create on behalf of (preview)
As an administrator, you can create a compute instance on behalf of a data scientist and assign the instance to them with:
The data scientist can start, stop, and restart the compute instance. They can u
* RStudio * Integrated notebooks
-## <a name="schedule"></a> Schedule automatic start and stop (preview)
+## Schedule automatic start and stop (preview)
Define multiple schedules for auto-shutdown and auto-start. For instance, create a schedule to start at 9 AM and stop at 6 PM from Monday-Thursday, and a second schedule to start at 9 AM and stop at 4 PM for Friday. You can create a total of four schedules per compute instance.
-Schedules can also be defined for [create on behalf of](#on-behalf) compute instances. You can create schedule to create a compute instance in a stopped state. This is particularly useful when a user creates a compute instance on behalf of another user.
+Schedules can also be defined for [create on behalf of](#create-on-behalf-of-preview) compute instances. You can create a schedule that creates the compute instance in a stopped state. Stopped compute instances are particularly useful when you create a compute instance on behalf of another user.
### Create a schedule in studio
-1. [Fill out the form](?tabs=azure-studio#create-instance).
+1. [Fill out the form](?tabs=azure-studio#create).
1. On the second page of the form, open **Show advanced settings**. 1. Select **Add schedule** to add a new schedule.
Schedules can also be defined for [create on behalf of](#on-behalf) compute inst
1. Select **Add schedule** again if you want to create another schedule. Once the compute instance is created, you can view, edit, or add new schedules from the compute instance details section.
-Please note timezone labels don't account for day light savings. For instance, (UTC+01:00) Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna is actually UTC+02:00 during day light savings.
++
+> [!NOTE]
+> Timezone labels don't account for daylight saving time. For instance, (UTC+01:00) Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna is actually UTC+02:00 during daylight saving time.
+
+### Create a schedule with CLI
++
+```azurecli
+az ml compute create -f create-instance.yml
+```
+
+Where the file *create-instance.yml* is:
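As a rough, hypothetical sketch of a *create-instance.yml* that adds an auto-stop schedule (the schedule field names are assumptions based on the CLI v2 compute instance schema and should be verified against the published schema):

```yaml
# Hypothetical sketch only: the schedule field names are assumptions, verify against the schema.
$schema: https://azuremlschemas.azureedge.net/latest/computeInstance.schema.json
name: instance
type: computeinstance
size: STANDARD_DS3_v2
schedules:
  compute_start_stop:
    - action: stop             # stop the instance automatically
      trigger:
        type: cron
        time_zone: Pacific Standard Time
        expression: 0 18 * * * # every day at 6 PM
```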
++ ### Create a schedule with a Resource Manager template
Then use either cron or LogicApps expressions to define the schedule that starts
// the ranges shown above or two numbers in the range separated by a // hyphen (meaning an inclusive range). ```+ ### Azure Policy support to default a schedule Use Azure Policy to enforce a shutdown schedule exists for every compute instance in a subscription or default to a schedule if nothing exists. Following is a sample policy to default a shutdown schedule at 10 PM PST.
Following is a sample policy to default a shutdown schedule at 10 PM PST.
} ```
-## <a name="setup-script"></a> Customize the compute instance with a script (preview)
-
-Use a setup script for an automated way to customize and configure the compute instance at provisioning time. As an administrator, you can write a customization script to be used to provision all compute instances in the workspace according to your requirements.
-
-Some examples of what you can do in a setup script:
+## Add custom applications such as RStudio (preview)
-* Install packages, tools, and software
-* Mount data
-* Create custom conda environment and Jupyter kernels
-* Clone git repositories and set git config
-* Set network proxies
-* Set environment variables
-* Install JupyterLab extensions
+You can set up other applications, such as RStudio, when creating a compute instance. Follow these steps in studio to set up a custom application on your compute instance.
-### Create the setup script
-
-The setup script is a shell script, which runs as *rootuser*. Create or upload the script into your **Notebooks** files:
+1. Fill out the form to [create a new compute instance](?tabs=azure-studio#create)
+1. Select **Next: Advanced Settings**
+1. Select **Add application** under the **Custom application setup (RStudio Workbench, etc.)** section
+
-1. Sign into the [studio](https://ml.azure.com) and select your workspace.
-2. On the left, select **Notebooks**
-3. Use the **Add files** tool to create or upload your setup shell script. Make sure the script filename ends in ".sh". When you create a new file, also change the **File type** to *bash(.sh)*.
+> [!NOTE]
+> Custom applications are currently not supported in private link workspaces.
+
+### Setup RStudio Workbench
+RStudio is one of the most popular IDEs among R developers for ML and data science projects. You can easily set up RStudio Workbench to run on your compute instance, using your own RStudio license, and access the rich feature set that RStudio Workbench offers.
-When the script runs, the current working directory of the script is the directory where it was uploaded. For example, if you upload the script to **Users>admin**, the location of the script on the compute instance and current working directory when the script runs is */home/azureuser/cloudfiles/code/Users/admin*. This would enable you to use relative paths in the script.
+1. Follow the steps listed above to **Add application** when creating your compute instance.
+1. Select **RStudio Workbench (bring your own license)** in the **Application** dropdown and enter your RStudio Workbench license key in the **License key** field. You can get your RStudio Workbench license or trial license [from RStudio](https://www.rstudio.com/).
+1. Select **Create** to add RStudio Workbench application to your compute instance.
+
-Script arguments can be referred to in the script as $1, $2, etc.
+> [!NOTE]
+> * Support for accessing your workspace file store from RStudio is not yet available.
+> * When accessing multiple instances of RStudio, if you see a "400 Bad Request. Request Header Or Cookie Too Large" error, use a new browser or access from a browser in incognito mode.
+> * Shiny applications are not currently supported on RStudio Workbench.
-If your script was doing something specific to azureuser such as installing conda environment or jupyter kernel, you will have to put it within *sudo -u azureuser* block like this
+
+### Setup RStudio open source
+To use RStudio open source, set up a custom application as follows:
-The command *sudo -u azureuser* changes the current working directory to */home/azureuser*. You also can't access the script arguments in this block.
+1. Follow the steps listed above to **Add application** when creating your compute instance.
+1. Select **Custom Application** on the **Application** dropdown
+1. Configure the **Application name** you would like to use.
+1. Set up the application to run on **Target port** `8787` - the docker image for RStudio open source listed below needs to run on this Target port.
+1. Set up the application to be accessed on **Published port** `8787` - you can configure the application to be accessed on a different Published port if you wish.
+1. Point the **Docker image** to `ghcr.io/azure/rocker-rstudio-ml-verse:latest`.
+1. Select **Create** to set up RStudio as a custom application on your compute instance.
-For other example scripts, see [azureml-examples](https://github.com/Azure/azureml-examples/tree/main/setup-ci).
+
+
+### Setup other custom applications
-You can also use the following environment variables in your script:
+Set up other custom applications on your compute instance by providing a Docker image that contains the application.
-1. CI_RESOURCE_GROUP
-2. CI_WORKSPACE
-3. CI_NAME
-4. CI_LOCAL_UBUNTU_USER. This points to azureuser
+1. Follow the steps listed above to **Add application** when creating your compute instance.
+1. Select **Custom Application** on the **Application** dropdown.
+1. Configure the **Application name**, the **Target port** you wish to run the application on, the **Published port** you wish to access the application on and the **Docker image** that contains your application.
+1. Optionally, add **Environment variables** and **Bind mounts** you wish to use for your application.
+1. Select **Create** to set up the custom application on your compute instance.
-You can use setup script in conjunction with **Azure Policy to either enforce or default a setup script for every compute instance creation**.
-The default value for setup script timeout is 15 minutes. This can be changed through Studio UI or through ARM templates using the DURATION parameter.
-DURATION is a floating point number with an optional suffix: 's' for seconds (the default), 'm' for minutes, 'h' for hours or 'd' for days.
-### Use the script in the studio
+### Accessing custom applications in studio
-Once you store the script, specify it during creation of your compute instance:
+Access the custom applications that you set up in studio:
-1. Sign into the [studio](https://ml.azure.com/) and select your workspace.
1. On the left, select **Compute**.
-1. Select **+New** to create a new compute instance.
-1. [Fill out the form](?tabs=azure-studio#create-instance).
-1. On the second page of the form, open **Show advanced settings**.
-1. Turn on **Provision with setup script**.
-1. Browse to the shell script you saved. Or upload a script from your computer.
-1. Add command arguments as needed.
--
-If workspace storage is attached to a virtual network you might not be able to access the setup script file unless you are accessing the Studio from within virtual network.
-
-### Use script in a Resource Manager template
-
-In a Resource Manager [template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices/machine-learning-compute-create-computeinstance), add `setupScripts` to invoke the setup script when the compute instance is provisioned. For example:
-
-```json
-"setupScripts":{
- "scripts":{
- "creationScript":{
- "scriptSource":"workspaceStorage",
- "scriptData":"[parameters('creationScript.location')]",
- "scriptArguments":"[parameters('creationScript.cmdArguments')]"
- }
- }
-}
-```
-*scriptData* above specifies the location of the creation script in the notebooks file share such as *Users/admin/testscript.sh*.
-*scriptArguments* is optional above and specifies the arguments for the creation script.
-
-You could instead provide the script inline for a Resource Manager template. The shell command can refer to any dependencies uploaded into the notebooks file share. When you use an inline string, the working directory for the script is */mnt/batch/tasks/shared/LS_root/mounts/clusters/**ciname**/code/Users*.
-
-For example, specify a base64 encoded command string for `scriptData`:
-
-```json
-"setupScripts":{
- "scripts":{
- "creationScript":{
- "scriptSource":"inline",
- "scriptData":"[base64(parameters('inlineCommand'))]",
- "scriptArguments":"[parameters('creationScript.cmdArguments')]"
- }
- }
-}
-```
-
-### Setup script logs
-
-Logs from the setup script execution appear in the logs folder in the compute instance details page. Logs are stored back to your notebooks file share under the Logs\<compute instance name> folder. Script file and command arguments for a particular compute instance are shown in the details page.
+1. On the **Compute instance** tab, see your applications under the **Applications** column.
+> [!NOTE]
+> It might take a few minutes after setting up a custom application until you can access it via the links above. The amount of time taken will depend on the size of the image used for your custom application. If you see a 502 error message when trying to access the application, wait for some time for the application to be set up and try again.
## Manage
-Start, stop, restart, and delete a compute instance. A compute instance does not automatically scale down, so make sure to stop the resource to prevent ongoing charges. Stopping a compute instance deallocates it. Then start it again when you need it. While stopping the compute instance stops the billing for compute hours, you will still be billed for disk, public IP, and standard load balancer.
+Start, stop, restart, and delete a compute instance. A compute instance doesn't automatically scale down, so make sure to stop the resource to prevent ongoing charges. Stopping a compute instance deallocates it. Then start it again when you need it. While stopping the compute instance stops the billing for compute hours, you'll still be billed for disk, public IP, and standard load balancer.
-You can [create a schedule](#schedule) for the compute instance to automatically start and stop based on a time and day of week.
+You can [create a schedule](#schedule-automatic-start-and-stop-preview) for the compute instance to automatically start and stop based on a time and day of week.
> [!TIP] > The compute instance has 120GB OS disk. If you run out of disk space, [use the terminal](how-to-access-terminal.md) to clear at least 1-2 GB before you stop or restart the compute instance. Please do not stop the compute instance by issuing sudo shutdown from the terminal. The temp disk size on compute instance depends on the VM size chosen and is mounted on /mnt. # [Python](#tab/python) ++ In the examples below, the name of the compute instance is **instance** * Get status
In the examples below, the name of the compute instance is **instance**
# [Azure CLI](#tab/azure-cli)
-In the examples below, the name of the compute instance is **instance**
+In the examples below, the name of the compute instance is **instance**, in workspace **my-workspace**, in resource group **my-resource-group**.
* Stop
- ```azurecli-interactive
- az ml computetarget stop computeinstance -n instance -v
+ ```azurecli
+ az ml compute stop --name instance --resource-group my-resource-group --workspace-name my-workspace
```
- For more information, see [az ml computetarget stop computeinstance](/cli/azure/ml(v1)/computetarget/computeinstance#az-ml-computetarget-computeinstance-stop).
- * Start
- ```azurecli-interactive
- az ml computetarget start computeinstance -n instance -v
+ ```azurecli
+ az ml compute start --name instance --resource-group my-resource-group --workspace-name my-workspace
```
- For more information, see [az ml computetarget start computeinstance](/cli/azure/ml(v1)/computetarget/computeinstance#az-ml-computetarget-computeinstance-start).
- * Restart
- ```azurecli-interactive
- az ml computetarget restart computeinstance -n instance -v
+ ```azurecli
+ az ml compute restart --name instance --resource-group my-resource-group --workspace-name my-workspace
```
- For more information, see [az ml computetarget restart computeinstance](/cli/azure/ml(v1)/computetarget/computeinstance#az-ml-computetarget-computeinstance-restart).
- * Delete
- ```azurecli-interactive
- az ml computetarget delete -n instance -v
+ ```azurecli
+ az ml compute delete --name instance --resource-group my-resource-group --workspace-name my-workspace
```
- For more information, see [az ml computetarget delete computeinstance](/cli/azure/ml(v1)/computetarget#az-ml-computetarget-delete).
- # [Studio](#tab/azure-studio) <a name="schedule"></a>
-In your workspace in Azure Machine Learning studio, select **Compute**, then select **Compute Instance** on the top.
+In your workspace in Azure Machine Learning studio, select **Compute**, then select **compute instance** on the top.
![Manage a compute instance](./media/concept-compute-instance/manage-compute-instance.png)
You can perform the following actions:
* Create a new compute instance * Refresh the compute instances tab.
-* Start, stop, and restart a compute instance. You do pay for the instance whenever it is running. Stop the compute instance when you are not using it to reduce cost. Stopping a compute instance deallocates it. Then start it again when you need it. You can also schedule a time for the compute instance to start and stop.
+* Start, stop, and restart a compute instance. You do pay for the instance whenever it's running. Stop the compute instance when you aren't using it to reduce cost. Stopping a compute instance deallocates it. Then start it again when you need it. You can also schedule a time for the compute instance to start and stop.
* Delete a compute instance.
-* Filter the list of compute instances to show only those you have created.
+* Filter the list of compute instances to show only ones you've created.
For each compute instance in a workspace that you created (or that was created for you), you can:
For each compute instance in a workspace that you created (or that was created f
-[Azure RBAC](../role-based-access-control/overview.md) allows you to control which users in the workspace can create, delete, start, stop, restart a compute instance. All users in the workspace contributor and owner role can create, delete, start, stop, and restart compute instances across the workspace. However, only the creator of a specific compute instance, or the user assigned if it was created on their behalf, is allowed to access Jupyter, JupyterLab, and RStudio on that compute instance. A compute instance is dedicated to a single user who has root access, and can terminal in through Jupyter/JupyterLab/RStudio. Compute instance will have single-user log in and all actions will use that userΓÇÖs identity for Azure RBAC and attribution of experiment runs. SSH access is controlled through public/private key mechanism.
+[Azure RBAC](../role-based-access-control/overview.md) allows you to control which users in the workspace can create, delete, start, stop, restart a compute instance. All users in the workspace contributor and owner role can create, delete, start, stop, and restart compute instances across the workspace. However, only the creator of a specific compute instance, or the user assigned if it was created on their behalf, is allowed to access Jupyter, JupyterLab, and RStudio on that compute instance. A compute instance is dedicated to a single user who has root access, and can terminal in through Jupyter/JupyterLab/RStudio. Compute instance will have single-user sign-in and all actions will use that user's identity for Azure RBAC and attribution of experiment runs. SSH access is controlled through public/private key mechanism.
These actions can be controlled by Azure RBAC: * *Microsoft.MachineLearningServices/workspaces/computes/read*
These actions can be controlled by Azure RBAC:
* *Microsoft.MachineLearningServices/workspaces/computes/restart/action* * *Microsoft.MachineLearningServices/workspaces/computes/updateSchedules/action*
-To create a compute instance you'll need permissions for the following actions:
+To create a compute instance, you'll need permissions for the following actions:
* *Microsoft.MachineLearningServices/workspaces/computes/write* * *Microsoft.MachineLearningServices/workspaces/checkComputeNameAvailability/action* - ## Next steps * [Access the compute instance terminal](how-to-access-terminal.md)
machine-learning How To Create Register Data Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-register-data-assets.md
+
+ Title: Create Azure Machine Learning data assets
+
+description: Learn how to create Azure Machine Learning data assets to access your data for machine learning experiment runs.
++++++++ Last updated : 05/24/2022+
+# Customer intent: As an experienced data scientist, I need to package my data into a consumable and reusable object to train my machine learning models.
+++
+# Create Azure Machine Learning data assets
+
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK you are using:"]
+> * [v1](./v1/how-to-create-register-datasets.md)
+> * [v2 (current version)](how-to-create-register-datasets.md)
++
+In this article, you learn how to create Azure Machine Learning Data assets to access data for your local or remote experiments with the Azure Machine Learning SDK V2 and CLI V2. To understand where Data assets fit in Azure Machine Learning's overall data access workflow, see the [Work with Data](concept-data.md) article.
+
+By creating a Data asset, you create a reference to the data source location, along with a copy of its metadata. Because the data remains in its existing location, you incur no extra storage cost and don't risk the integrity of your data sources. Also, Data assets are lazily evaluated, which aids workflow performance. You can create Data assets from Datastores, Azure Storage, public URLs, and local files.
+
+With Azure Machine Learning Data assets, you can:
+
+* Share data easily with other members of the team (no need to remember file locations).
+
+* Seamlessly access data during model training without worrying about connection strings or data paths.
+
+* Refer to the data by a short entity name in Azure ML.
+++
+## Prerequisites
+
+To create and work with Data assets, you need:
+
+* An Azure subscription. If you don't have one, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
+
+* An [Azure Machine Learning workspace](how-to-manage-workspace.md).
+
+* The [Azure Machine Learning CLI/SDK](how-to-configure-cli.md) and the MLTable package installed.
+
+ * Create an [Azure Machine Learning compute instance](how-to-create-manage-compute-instance.md), which is a fully configured and managed development environment that includes integrated notebooks and the SDK already installed.
+
+ **OR**
+
+ * Work on your own Jupyter notebook and install the CLI/SDK and required packages.
+
+> [!IMPORTANT]
+> While the package may work on older versions of Linux distros, we do not recommend using a distro that is out of mainstream support. Distros that are out of mainstream support may have security vulnerabilities, as they do not receive the latest updates. We recommend using the latest supported version of your distro that is compatible with .
+
+## Compute size guidance
+
+When creating a Data asset, review your compute processing power and the size of your data in memory. The size of your data in storage isn't the same as the size of data in a dataframe. For example, data in CSV files can expand up to 10x in a dataframe, so a 1-GB CSV file can become 10 GB in a dataframe.
+
+If your data is compressed, it can expand further; 20 GB of relatively sparse data stored in compressed parquet format can expand to ~400 GB in memory.
+
+[Learn more about optimizing data processing in Azure Machine Learning](concept-optimize-data-processing.md).
+
+## Data types
+
+Azure Machine Learning allows you to work with different types of data. Your data can be local or in the cloud (from a registered Azure ML Datastore, a common Azure Storage URL or a public data url). In this article, you'll learn about using the Python SDK V2 and CLI V2 to work with _URIs_ and _Tables_. URIs reference a location either local to your development environment or in the cloud. Tables are a tabular data abstraction.
+
+For most scenarios, you could use URIs (`uri_folder` and `uri_file`). A URI references a location in storage that can be easily mapped to the filesystem of a compute node when you run a job. The data is accessed by either mounting or downloading the storage to the node.
+
+When using tables, you could use `mltable`. It's an abstraction for tabular data that is used for AutoML jobs, parallel jobs, and some advanced scenarios. If you're just starting to use Azure Machine Learning, and aren't using AutoML, we strongly encourage you to begin with URIs.
+
+If you're creating an Azure ML Data asset from an existing Datastore:
+
+1. Verify that you have `contributor` or `owner` access to the underlying storage service of your registered Azure Machine Learning datastore. [Check your storage account permissions in the Azure portal](/azure/role-based-access-control/check-access).
+
+1. Create the data asset by referencing paths in the datastore. You can create a Data asset from multiple paths in multiple datastores. There's no hard limit on the number of files or data size that you can create a data asset from.
+
+> [!NOTE]
+> For each data path, a few requests will be sent to the storage service to check whether it points to a file or a folder. This overhead may lead to degraded performance or failure. A Data asset referencing one folder with 1000 files inside is considered referencing one data path. We recommend creating Data assets that reference fewer than 100 paths in datastores for optimal performance.
+
+> [!TIP]
+> You can create a Data asset with identity-based data access. If you don't provide any credentials, we will use your identity by default.
++
+> [!TIP]
+> If you have dataset assets created using the SDK v1, you can still use those with SDK v2. For more information, see the [Consuming V1 Dataset Assets in V2](how-to-read-write-data-v2.md) section.
+++
+## URIs
+
+The code snippets in this section cover the following scenarios:
+* Registering data as an asset in Azure Machine Learning
+* Reading registered data assets from Azure Machine Learning in a job
+
+These snippets use `uri_file` and `uri_folder`.
+
+- `uri_file` is a type that refers to a specific file. For example, `'https://<account_name>.blob.core.windows.net/<container_name>/path/file.csv'`.
+- `uri_folder` is a type that refers to a specific folder. For example, `'https://<account_name>.blob.core.windows.net/<container_name>/path'`.
+
+> [!TIP]
+> We recommend using an argument parser to pass folder information into _data-plane_ code. By data-plane code, we mean your data processing and/or training code that you run in the cloud. The code that runs in your development environment and submits code to the data-plane is _control-plane_ code.
+>
+> Data-plane code is typically a Python script, but can be any programming language. Passing the folder as part of job submission allows you to easily adjust the path from training locally using local data, to training in the cloud.
+> If you wanted to pass in just an individual file rather than the entire folder you can use the `uri_file` type.
+
+For a complete example, see the [working_with_uris.ipynb notebook](https://github.com/Azure/azureml-examples/blob/samuel100/mltable/sdk/assets/data/working_with_uris.ipynb).
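As a minimal illustration of data-plane code, a hypothetical `read_data_asset.py` (matching the command used in the job example later in this article) could read the folder like this:

```python
# Hypothetical data-plane script: reads the folder path passed by the job command.
import argparse
import os

parser = argparse.ArgumentParser()
parser.add_argument("--input_folder", type=str, help="Folder containing the input data")
args = parser.parse_args()

# List the files that the job mounted or downloaded for this input.
for name in sorted(os.listdir(args.input_folder)):
    print(os.path.join(args.input_folder, name))
```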
++
+### Register data as URI Folder type Data
+
+# [Python-SDK](#tab/Python-SDK)
+```python
+from azure.ai.ml.entities import Data
+from azure.ai.ml._constants import AssetTypes
+
+# select one from:
+my_path = 'abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>' # adls gen2
+my_path = 'https://<account_name>.blob.core.windows.net/<container_name>/path' # blob
+
+my_data = Data(
+ path=my_path,
+ type=AssetTypes.URI_FOLDER,
+ description="description here",
+ name="a_name",
+ version='1'
+)
+
+ml_client.data.create_or_update(my_data)
+```
+# [CLI](#tab/CLI)
+You can also use the CLI to register a URI folder type Data asset, as shown in the example below.
+
+```azurecli
+az ml data create -f <file-name>.yml
+```
+
+A sample `YAML` file `<file-name>.yml` for a local path is shown below:
+
+```yaml
+$schema: https://azuremlschemas.azureedge.net/latest/data.schema.json
+name: uri_folder_my_data
+description: Local data asset will be created as URI folder type Data in Azure ML.
+path: path
+```
+
+A sample `YAML` file `<file-name>.yml` for a data folder in an existing Azure ML Datastore is shown below:
+```yaml
+$schema: https://azuremlschemas.azureedge.net/latest/data.schema.json
+name: uri_folder_my_data
+description: Datastore data asset will be created as URI folder type Data in Azure ML.
+type: uri_folder
+path: azureml://datastores/workspaceblobstore/paths/example-data/
+```
+
+A sample `YAML` file `<file-name>.yml` for a data folder at a storage URL is shown below:
+```yaml
+$schema: https://azuremlschemas.azureedge.net/latest/data.schema.json
+name: cloud_file_wasbs_example
+description: Data asset created from folder in cloud using wasbs URL.
+type: uri_folder
+path: wasbs://mainstorage9c05dabf5c924.blob.core.windows.net/azureml-blobstore-54887b46-3cb0-485b-bb15-62e7b5578ee6/example-data/
+```
++
+### Consume registered URI Folder data assets in job
+
+```python
+from azure.ai.ml.entities import Data, UriReference, JobInput, CommandJob
+from azure.ai.ml._constants import AssetTypes
+
+registered_data_asset = ml_client.data.get(name='titanic', version='1')
+
+my_job_inputs = {
+ "input_data": JobInput(
+ type=AssetTypes.URI_FOLDER,
+ path=registered_data_asset.id
+ )
+}
+
+job = CommandJob(
+ code="./src",
+ command='python read_data_asset.py --input_folder ${{inputs.input_data}}',
+ inputs=my_job_inputs,
+ environment="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu:9",
+ compute="cpu-cluster"
+)
+
+#submit the command job
+returned_job = ml_client.create_or_update(job)
+#get a URL for the status of the job
+returned_job.services["Studio"].endpoint
+```
+
+### Register data as URI File type Data
+# [Python-SDK](#tab/Python-SDK)
+```python
+from azure.ai.ml.entities import Data
+from azure.ai.ml._constants import AssetTypes
+
+# select one from:
+my_file_path = '<path>/<file>' # local
+my_file_path = 'abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>/<file>' # adls gen2
+my_file_path = 'https://<account_name>.blob.core.windows.net/<container_name>/<path>/<file>' # blob
+
+my_data = Data(
+ path=my_file_path,
+ type=AssetTypes.URI_FILE,
+ description="description here",
+ name="a_name",
+ version='1'
+)
+
+ml_client.data.create_or_update(my_data)
+```
+# [CLI](#tab/CLI)
+You can also use the CLI to register a URI file type Data asset, as shown in the example below.
+
+```cli
+> az ml data create -f <file-name>.yml
+```
+A sample `YAML` file `<file-name>.yml` for data at a local path is shown below:
+```yaml
+$schema: https://azuremlschemas.azureedge.net/latest/data.schema.json
+name: uri_file_my_data
+description: Local data asset will be created as URI file type Data in Azure ML.
+path: ./paths/example-data.csv
+```
+
+A sample `YAML` file `<file-name>.yml` for data in an existing Azure ML Datastore is shown below:
+```yaml
+$schema: https://azuremlschemas.azureedge.net/latest/data.schema.json
+name: uri_file_my_data
+description: Datastore data asset will be created as URI file type Data in Azure ML.
+type: uri_file
+path: azureml://datastores/workspaceblobstore/paths/example-data.csv
+```
+
+A sample `YAML` file `<file-name>.yml` for data at a storage URL is shown below:
+```yaml
+$schema: https://azuremlschemas.azureedge.net/latest/data.schema.json
+name: cloud_file_wasbs_example
+description: Data asset created from file in cloud using wasbs URL.
+type: uri_file
+path: wasbs://mainstorage9c05dabf5c924.blob.core.windows.net/azureml-blobstore-54887b46-3cb0-485b-bb15-62e7b5578ee6/paths/example-data.csv
+```
+
+
+## MLTable
+
+### Register data as MLTable type Data assets
+You can register an `mltable` as a data asset in Azure Machine Learning.
+
+In the MLTable file, the path attribute supports any Azure ML supported URI format:
+
+- a relative file: "file://foo/bar.csv"
+- a short form entity URI: "azureml://datastores/foo/paths/bar/baz"
+- a long form entity URI: "azureml://subscriptions/my-sub-id/resourcegroups/my-rg/workspaces/myworkspace/datastores/mydatastore/paths/path_to_data/"
+- a storage URI: "https://", "wasbs://", "abfss://", "adl://"
+- a public URI: "http://mypublicdata.com/foo.csv"
++
+Below we show an example of versioning the sample data in this repo. The data is uploaded to cloud storage and registered as an asset.
+# [Python-SDK](#tab/Python-SDK)
+```python
+from azure.ai.ml.entities import Data
+from azure.ai.ml._constants import AssetTypes
+import mltable
+
+my_data = Data(
+ path="./sample_data",
+ type=AssetTypes.MLTABLE,
+ description="Titanic Data",
+ name="titanic-mltable",
+ version='1'
+)
+
+ml_client.data.create_or_update(my_data)
+```
+
+> [!TIP]
+> Although the above example shows a local file, remember that `path` supports cloud storage (https, abfss, wasbs protocols). Therefore, if you want to register data in a cloud location, just specify the path with any of the supported protocols.
+
+# [CLI](#tab/CLI)
+You can also use the CLI and the following YAML, which describes an MLTable, to register MLTable Data.
+```cli
+> az ml data create -f <file-name>.yml
+```
+```yaml
+paths:
+ - file: ./titanic.csv
+transformations:
+ - read_delimited:
+ delimiter: ','
+ encoding: 'ascii'
+ empty_as_string: false
+ header: from_first_file
+```
+
+The contents of the MLTable file specify the underlying data location (here a local path) and the transforms to perform on the underlying data before materializing it into a pandas/spark/dask data frame. The important part is that the MLTable artifact doesn't have any absolute paths, making it *self-contained*. All the information is stored in one folder, regardless of whether that folder is on your local drive, in your cloud storage, or on a public http server.
+
+To consume the data in a job or interactive session, use `mltable`:
+
+```python
+import mltable
+
+tbl = mltable.load("./sample_data")
+df = tbl.to_pandas_dataframe()
+```
+
+For a full example of using an MLTable, see the [Working with MLTable notebook](https://github.com/Azure/azureml-examples/blob/samuel100/mltable/sdk/assets/data/working_with_mltable.ipynb).
+
+
+## mltable-artifact
+
+Here the files that make up the mltable-artifact are stored on the user's local machine:
+
+```
+.
+├── MLTable
+└── iris.csv
+```
+
+The contents of the MLTable file specify the underlying data location (here a local path) and also the transforms to perform on the underlying data before materializing into a pandas/spark/dask data frame:
+
+```yaml
+#source ../configs/dataset/iris/MLTable
+$schema: http://azureml/sdk-2-0/MLTable.json
+type: mltable
+
+paths:
+ - file: ./iris.csv
+transformations:
+ - read_delimited:
+ delimiter: ","
+ encoding: ascii
+ header: all_files_same_headers
+```
+
+The important part is that the MLTable artifact doesn't have any absolute paths, so it's self-contained and everything it needs is stored in that one folder, regardless of whether that folder is on your local drive, in your cloud storage, or on a public http server.
+
+This artifact file can be consumed in a command job as follows:
+
+```yaml
+#source ../configs/dataset/01-mltable-CommandJob.yaml
+$schema: http://azureml/sdk-2-0/CommandJob.json
+
+inputs:
+ my_mltable_artifact:
+ type: mltable
+ # folder needs to contain an MLTable file
+ mltable: file://iris
+
+command: |
+ python -c "
+ from mltable import load
+ # load a table from a folder containing an MLTable file
+ tbl = load(${{my_mltable_artifact}})
+ tbl.to_pandas_dataframe()
+ ...
+ "
+```
+
+> [!NOTE]
+> **For local files and folders**, only relative paths are supported. To be explicit, we will **not** support absolute paths as that would require us to change the MLTable file that is residing on disk before we move it to cloud storage.
+
+You can put the MLTable file and the underlying data in the *same folder* but in a cloud object store. You can specify `mltable:` in your job pointing to a location on a datastore that contains the MLTable file:
+
+```yaml
+#source ../configs/dataset/04-mltable-CommandJob.yaml
+$schema: http://azureml/sdk-2-0/CommandJob.json
+
+inputs:
+ my_mltable_artifact:
+ type: mltable
+ mltable: azureml://datastores/some_datastore/paths/data/iris
+
+command: |
+ python -c "
+ from mltable import load
+ # load a table from a folder containing an MLTable file
+ tbl = load(${{my_mltable_artifact}})
+ tbl.to_pandas_dataframe()
+ ...
+ "
+```
+
+You can also have an MLTable file stored on the *local machine*, but no data files. The underlying data is stored on the cloud. In this case, the MLTable should reference the underlying data with an **absolute expression (i.e. a URI)**:
+
+```
+.
+├── MLTable
+```
++
+```yaml
+#source ../configs/dataset/iris-cloud/MLTable
+$schema: http://azureml/sdk-2-0/MLTable.json
+type: mltable
+
+paths:
+ - file: azureml://datastores/mydatastore/paths/data/iris.csv
+transformations:
+ - read_delimited:
+ delimiter: ","
+ encoding: ascii
+ header: all_files_same_headers
+```
++
+### Supporting multiple files in a table
+While the above scenarios create rectangular data, it's also possible to create an mltable-artifact that just contains files:
+
+```
+.
+└── MLTable
+```
+
+Where the contents of the MLTable file are:
+
+```yaml
+#source ../configs/dataset/multiple-files/MLTable
+$schema: http://azureml/sdk-2-0/MLTable.json
+type: mltable
+
+# creating dataset from folder on cloud path.
+paths:
+ - file: http://foo.com/1.csv
+ - file: http://foo.com/2.csv
+ - file: http://foo.com/3.csv
+ - file: http://foo.com/4.csv
+ - file: http://foo.com/5.csv
+```
+
+As outlined above, MLTable can be created from a URI or a local folder path:
+
+```yaml
+#source ../configs/types/22_input_mldataset_artifacts-PipelineJob.yaml
+
+$schema: http://azureml/sdk-2-0/PipelineJob.json
+
+jobs:
+ first:
+ description: this job takes a mltable-artifact as input and mounts it.
+ Note that the actual data could be in a different location
+
+ inputs:
+ mnist:
+ type: mltable # redundant but there for clarity
+ # needs to point to a folder that contains an MLTable file
+ mltable: azureml://datastores/some_datastore/paths/data/public/mnist
+ mode: ro_mount # or download
+
+ command: |
+ python -c "
+ import mltable as mlt
+ # load a table from a folder containing an MLTable file
+ tbl = mlt.load('${{inputs.mnist}}')
+ tbl.list_files()
+ ...
+ "
+
+ second:
+ description: this job loads a table artifact from a local_path.
+ Note that the folder needs to contain a well-formed MLTable file
+
+ inputs:
+ tbl_access_artifact:
+ type: mltable
+ mltable: file:./iris
+ mode: download
+
+ command: |
+ python -c "
+ import mltable as mlt
+ # load a table from a folder containing an MLTable file
+    tbl = mlt.load('${{inputs.tbl_access_artifact}}')
+ tbl.list_files()
+ ...
+ "
+```
+
+An MLTable artifact can yield files that aren't necessarily located in the `mltable`'s storage. It can also **subset or shuffle** the data that resides in the storage, for example by using the `take_random_sample` transform. That view is only visible when the MLTable file is evaluated by the engine. You can do that as described above by using the MLTable SDK and running `mltable.load`, but doing so requires Python and the installation of the SDK.
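+
+For example, here's a minimal sketch (not taken from the spec) of an MLTable file that samples the underlying delimited data with `take_random_sample`; the path, probability, and seed are illustrative, and the option layout follows the preview schema used in the examples above:
+
+```yaml
+$schema: http://azureml/sdk-2-0/MLTable.json
+type: mltable
+
+paths:
+  - file: ./iris.csv
+transformations:
+  - read_delimited:
+      delimiter: ","
+      encoding: ascii
+      header: all_files_same_headers
+  # illustrative: keep roughly 20% of the records, reproducibly
+  - take_random_sample:
+      probability: 0.2
+      seed: 42
+```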
+
+### Support globbing of files
+In addition to letting you provide a `file` or `folder`, the MLTable artifact file also lets you specify a *pattern* to glob files:
+
+```yaml
+#source ../configs/dataset/parquet-artifact-search/MLTable
+$schema: http://azureml/sdk-2-0/MLTable.json
+type: mltable
+paths:
+ - pattern: parquet_files/*1.parquet # only get files with this specific pattern
+transformations:
+ - read_parquet:
+ include_path_column: false
+```
+++
+### Delimited text: Transformations
+The following transformations are *specific to delimited text*. An illustrative sketch combining several of these options follows the list.
+
+- `infer_column_types`: Boolean to infer column data types. Defaults to True. Type inference requires that the data source is accessible from the current compute. Currently, type inference only pulls the first 200 rows. If the data contains multiple types of values, it's better to provide the desired type as an override via the `set_column_types` argument.
+- `encoding`: Specify the file encoding. Supported encodings are 'utf8', 'iso88591', 'latin1', 'ascii', 'utf16', 'utf32', 'utf8bom' and 'windows1252'. Defaults to utf8.
+- `header`: the user can choose one of the following options:
+ - `no_header`
+ - `from_first_file`
+ - `all_files_different_headers`
+ - `all_files_same_headers` (default)
+- `delimiter`: The separator used to split columns.
+- `empty_as_string`: Specify if empty field values should be loaded as empty strings. The default (False) will read empty field values as nulls. Passing this as True will read empty field values as empty strings. If the values are converted to numeric or datetime, then this has no effect as empty values will be converted to nulls.
+- `include_path_column`: Boolean to keep path information as a column in the table. Defaults to False. This is useful when you're reading multiple files and want to know which file a particular record originated from, or to keep useful information that's encoded in the file path.
+- `support_multi_line`: By default (`support_multi_line=False`), all line breaks, including those in quoted field values, are interpreted as record breaks. Reading data this way is faster and more optimized for parallel execution on multiple CPU cores. However, it may silently produce more records with misaligned field values. Set this to True when the delimited files are known to contain quoted line breaks.
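+
+As an illustrative sketch (the file path is made up), an MLTable file that sets several of these delimited-text options explicitly might look like the following:
+
+```yaml
+$schema: http://azureml/sdk-2-0/MLTable.json
+type: mltable
+
+paths:
+  - file: ./sales.csv   # illustrative path
+transformations:
+  - read_delimited:
+      delimiter: ","
+      encoding: utf8
+      header: from_first_file
+      infer_column_types: true
+      empty_as_string: false
+      include_path_column: true
+      support_multi_line: false
+```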
+
+### Parquet files: Transformations
+If you don't define options for the `read_parquet` transformation, the default options are selected (see below).
+
+- `include_path_column`: Boolean to keep path information as a column in the table. Defaults to False. This is useful when you're reading multiple files and want to know which file a particular record originated from, or to keep useful information that's encoded in the file path.
+
+### JSON lines: Transformations
+The supported transformations that are specific to JSON lines are listed below; an illustrative sketch follows the list.
+
+- `include_path`: Boolean to keep path information as a column in the MLTable. Defaults to False. This is useful when you're reading multiple files and want to know which file a particular record originated from, or to keep useful information that's encoded in the file path.
+- `invalid_lines`: How to handle lines that are invalid JSON. Supported values are `error` and `drop`. Defaults to `error`.
+- `encoding`: Specify the file encoding. Supported encodings are `utf8`, `iso88591`, `latin1`, `ascii`, `utf16`, `utf32`, `utf8bom`, and `windows1252`. Defaults to `utf8`.
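+
+As a sketch, an MLTable file that reads JSON lines files might look like the following. The transformation name `read_json_lines` is an assumption (the list above doesn't name it); the option names come from the list, and the path is illustrative:
+
+```yaml
+$schema: http://azureml/sdk-2-0/MLTable.json
+type: mltable
+
+paths:
+  - file: ./logs.jsonl    # illustrative path
+transformations:
+  - read_json_lines:      # assumed transformation name
+      include_path: false
+      invalid_lines: drop
+      encoding: utf8
+```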
+
+## Global transforms
+
+MLTable artifacts provide transformations specific to delimited text, Parquet, and Delta. There are other transforms that mltable-artifact files support:
+
+- `take`: Takes the first *n* records of the table
+- `take_random_sample`: Takes a random sample of the table where each record has a *probability* of being selected. The user can also include a *seed*.
+- `skip`: This skips the first *n* records of the table
+- `drop_columns`: Drops the specified columns from the table. This transform supports regex so that users can drop columns matching a particular pattern.
+- `keep_columns`: Keeps only the specified columns in the table. This transform supports regex so that users can keep columns matching a particular pattern.
+- `filter`: Filter the data, leaving only the records that match the specified expression.
+- `extract_partition_format_into_columns`: Specify the partition format of path. Defaults to None. The partition information of each path will be extracted into columns based on the specified format. Format part '{column_name}' creates string column, and '{column_name:yyyy/MM/dd/HH/mm/ss}' creates datetime column, where 'yyyy', 'MM', 'dd', 'HH', 'mm' and 'ss' are used to extract year, month, day, hour, minute and second for the datetime type. The format should start from the position of first partition key until the end of file path. For example, given the path '../Accounts/2019/01/01/data.csv' where the partition is by department name and time, partition_format='/{Department}/{PartitionDate:yyyy/MM/dd}/data.csv' creates a string column 'Department' with the value 'Accounts' and a datetime column 'PartitionDate' with the value '2019-01-01'.
+Our principle here is to support transforms that are *specific to data delivery*, not to get into wider feature-engineering transforms.
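+
+As an illustrative sketch, several of these data-delivery transforms can be chained after a read transformation in a single MLTable file. The column name and record counts are made up, and the exact YAML shape of `take`, `skip`, and `drop_columns` is an assumption rather than taken from the spec:
+
+```yaml
+$schema: http://azureml/sdk-2-0/MLTable.json
+type: mltable
+
+paths:
+  - file: ./iris.csv
+transformations:
+  - read_delimited:
+      delimiter: ","
+      header: all_files_same_headers
+  - skip: 1              # skip the first record
+  - take: 1000           # then keep the next 1000 records
+  - drop_columns:
+      - internal_id      # made-up column name
+```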
++
+## Traits
+The `mltable` type also supports a `traits` section. Traits define fixed characteristics of the table (that is, they are **not** freeform metadata that users can add). They don't perform any transformations, but they can be used by the engine. A sketch of a `traits` section follows the list.
+
+- `index_columns`: Set the table index using existing columns. This trait can be used by `partition_by` in the data plane to split data by the index.
+- `timestamp_column`: Defines the timestamp column of the table. This trait can be used in filter transforms, or in other data plane operations (SDK) such as drift detection.
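+
+As a sketch of how a `traits` section might sit alongside `transformations` in an MLTable file (the column names, and the exact shape of the section, are assumptions rather than taken from the spec):
+
+```yaml
+$schema: http://azureml/sdk-2-0/MLTable.json
+type: mltable
+
+paths:
+  - file: ./sensor-readings.csv   # illustrative path
+transformations:
+  - read_delimited:
+      delimiter: ","
+traits:
+  index_columns:
+    - device_id                   # made-up column names
+  timestamp_column: reading_time
+```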
+
+Moreover, *in the future* we can use traits to define responsible AI (RAI) aspects of the data, for example:
+
+- `sensitive_columns`: Here the user can define certain columns that contain sensitive information.
+
+Again, this isn't a transform; it informs the system of some extra properties of the data.
++++
+## Next steps
+
+* [Install and set up Python SDK v2 (preview)](https://aka.ms/sdk-v2-install)
+* [Install and use the CLI (v2)](how-to-configure-cli.md)
+* [Train models with the Python SDK v2 (preview)](how-to-train-sdk.md)
+* [Tutorial: Create production ML pipelines with Python SDK v2 (preview)](tutorial-pipeline-python-sdk.md)
+* Learn more about [Data in Azure Machine Learning](concept-data.md)
machine-learning How To Create Register Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-register-datasets.md
- Title: Create Azure Machine Learning datasets-
-description: Learn how to create Azure Machine Learning datasets to access your data for machine learning experiment runs.
-------- Previously updated : 10/21/2021-
-# Customer intent: As an experienced data scientist, I need to package my data into a consumable and reusable object to train my machine learning models.
---
-# Create Azure Machine Learning datasets
-
-In this article, you learn how to create Azure Machine Learning datasets to access data for your local or remote experiments with the Azure Machine Learning Python SDK. To understand where datasets fit in Azure Machine Learning's overall data access workflow, see the [Securely access data](concept-data.md#data-workflow) article.
-
-By creating a dataset, you create a reference to the data source location, along with a copy of its metadata. Because the data remains in its existing location, you incur no extra storage cost, and don't risk the integrity of your data sources. Also datasets are lazily evaluated, which aids in workflow performance speeds. You can create datasets from datastores, public URLs, and [Azure Open Datasets](../open-datasets/how-to-create-azure-machine-learning-dataset-from-open-dataset.md).
-
-For a low-code experience, [Create Azure Machine Learning datasets with the Azure Machine Learning studio.](how-to-connect-data-ui.md#create-datasets)
-
-With Azure Machine Learning datasets, you can:
-
-* Keep a single copy of data in your storage, referenced by datasets.
-
-* Seamlessly access data during model training without worrying about connection strings or data paths. [Learn more about how to train with datasets](how-to-train-with-datasets.md).
-
-* Share data and collaborate with other users.
-
-## Prerequisites
-
-To create and work with datasets, you need:
-
-* An Azure subscription. If you don't have one, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
-
-* An [Azure Machine Learning workspace](how-to-manage-workspace.md).
-
-* The [Azure Machine Learning SDK for Python installed](/python/api/overview/azure/ml/install), which includes the azureml-datasets package.
-
- * Create an [Azure Machine Learning compute instance](how-to-create-manage-compute-instance.md), which is a fully configured and managed development environment that includes integrated notebooks and the SDK already installed.
-
- **OR**
-
- * Work on your own Jupyter notebook and [install the SDK yourself](/python/api/overview/azure/ml/install).
-
-> [!NOTE]
-> Some dataset classes have dependencies on the [azureml-dataprep](https://pypi.org/project/azureml-dataprep/) package, which is only compatible with 64-bit Python. If you are developing on __Linux__, these classes rely on .NET Core 2.1, and are only supported on specific distributions. For more information on the supported distros, see the .NET Core 2.1 column in the [Install .NET on Linux](/dotnet/core/install/linux) article.
-
-> [!IMPORTANT]
-> While the package may work on older versions of Linux distros, we do not recommend using a distro that is out of mainstream support. Distros that are out of mainstream support may have security vulnerabilities, as they do not receive the latest updates. We recommend using the latest supported version of your distro that is compatible with .NET Core 2.1.
-
-## Compute size guidance
-
-When creating a dataset, review your compute processing power and the size of your data in memory. The size of your data in storage is not the same as the size of data in a dataframe. For example, data in CSV files can expand up to 10x in a dataframe, so a 1 GB CSV file can become 10 GB in a dataframe.
-
-If your data is compressed, it can expand further; 20 GB of relatively sparse data stored in compressed parquet format can expand to ~800 GB in memory. Since Parquet files store data in a columnar format, if you only need half of the columns, then you only need to load ~400 GB in memory.
-
-[Learn more about optimizing data processing in Azure Machine Learning](concept-optimize-data-processing.md).
-
-## Dataset types
-
-There are two dataset types, based on how users consume them in training: FileDatasets and TabularDatasets. Both types can be used in Azure Machine Learning training workflows involving estimators, AutoML, HyperDrive, and pipelines.
-
-### FileDataset
-
-A [FileDataset](/python/api/azureml-core/azureml.data.file_dataset.filedataset) references single or multiple files in your datastores or public URLs.
-If your data is already cleansed, and ready to use in training experiments, you can [download or mount](how-to-train-with-datasets.md#mount-vs-download) the files to your compute as a FileDataset object.
-
-We recommend FileDatasets for your machine learning workflows, since the source files can be in any format, which enables a wider range of machine learning scenarios, including deep learning.
-
-Create a FileDataset with the [Python SDK](#create-a-filedataset) or the [Azure Machine Learning studio](how-to-connect-data-ui.md#create-datasets).
-
-### TabularDataset
-
-A [TabularDataset](/python/api/azureml-core/azureml.data.tabulardataset) represents data in a tabular format by parsing the provided file or list of files. This provides you with the ability to materialize the data into a pandas or Spark DataFrame so you can work with familiar data preparation and training libraries without having to leave your notebook. You can create a `TabularDataset` object from .csv, .tsv, [.parquet](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#from-parquet-files-path--validate-true--include-path-false--set-column-types-none--partition-format-none-), [.jsonl files](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#from-json-lines-files-path--validate-true--include-path-false--set-column-types-none--partition-format-none--invalid-lines--errorencoding--utf8--), and from [SQL query results](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#from-sql-query-query--validate-true--set-column-types-none--query-timeout-30-).
-
-With TabularDatasets, you can specify a time stamp from a column in the data or from wherever the path pattern data is stored to enable a time series trait. This specification allows for easy and efficient filtering by time. For an example, see [Tabular time series-related API demo with NOAA weather data](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/work-with-data/datasets-tutorial/timeseries-datasets/tabular-timeseries-dataset-filtering.ipynb).
-
-Create a TabularDataset with [the Python SDK](#create-a-tabulardataset) or [Azure Machine Learning studio](how-to-connect-data-ui.md#create-datasets).
-
->[!NOTE]
-> [Automated ML](concept-automated-ml.md) workflows generated via the Azure Machine Learning studio currently only support TabularDatasets.
-
-## Access datasets in a virtual network
-
-If your workspace is in a virtual network, you must configure the dataset to skip validation. For more information on how to use datastores and datasets in a virtual network, see [Secure a workspace and associated resources](how-to-secure-workspace-vnet.md#datastores-and-datasets).
-
-<a name="datasets-sdk"></a>
-
-## Create datasets from datastores
-
-For the data to be accessible by Azure Machine Learning, datasets must be created from paths in [Azure Machine Learning datastores](how-to-access-data.md) or web URLs.
-
-> [!TIP]
-> You can create datasets directly from storage urls with identity-based data access. Learn more at [Connect to storage with identity-based data access](how-to-identity-based-data-access.md).
-
-
-To create datasets from a datastore with the Python SDK:
-
-1. Verify that you have `contributor` or `owner` access to the underlying storage service of your registered Azure Machine Learning datastore. [Check your storage account permissions in the Azure portal](../role-based-access-control/check-access.md).
-
-1. Create the dataset by referencing paths in the datastore. You can create a dataset from multiple paths in multiple datastores. There is no hard limit on the number of files or data size that you can create a dataset from.
-
-> [!NOTE]
-> For each data path, a few requests will be sent to the storage service to check whether it points to a file or a folder. This overhead may lead to degraded performance or failure. A dataset referencing one folder with 1000 files inside is considered referencing one data path. For optimal performance, we recommend creating datasets that reference fewer than 100 paths in datastores.
-
-### Create a FileDataset
-
-Use the [`from_files()`](/python/api/azureml-core/azureml.data.dataset_factory.filedatasetfactory#from-files-path--validate-true-) method on the `FileDatasetFactory` class to load files in any format and to create an unregistered FileDataset.
-
-If your storage is behind a virtual network or firewall, set the parameter `validate=False` in your `from_files()` method. This bypasses the initial validation step, and ensures that you can create your dataset from these secure files. Learn more about how to [use datastores and datasets in a virtual network](how-to-secure-workspace-vnet.md#datastores-and-datasets).
-
-```Python
-from azureml.core import Workspace, Datastore, Dataset
-
-# create a FileDataset pointing to files in 'animals' folder and its subfolders recursively
-datastore_paths = [(datastore, 'animals')]
-animal_ds = Dataset.File.from_files(path=datastore_paths)
-
-# create a FileDataset from image and label files behind public web urls
-web_paths = ['https://azureopendatastorage.blob.core.windows.net/mnist/train-images-idx3-ubyte.gz',
- 'https://azureopendatastorage.blob.core.windows.net/mnist/train-labels-idx1-ubyte.gz']
-mnist_ds = Dataset.File.from_files(path=web_paths)
-```
-
-If you want to upload all the files from a local directory, create a FileDataset in a single method with [upload_directory()](/python/api/azureml-core/azureml.data.dataset_factory.filedatasetfactory#upload-directory-src-dir--target--pattern-none--overwrite-false--show-progress-true-). This method uploads data to your underlying storage, and as a result incurs storage costs.
-
-```Python
-from azureml.core import Workspace, Datastore, Dataset
-from azureml.data.datapath import DataPath
-
-ws = Workspace.from_config()
-datastore = Datastore.get(ws, '<name of your datastore>')
-ds = Dataset.File.upload_directory(src_dir='<path to your data>',
- target=DataPath(datastore, '<path on the datastore>'),
- show_progress=True)
-
-```
-
-To reuse and share datasets across experiments in your workspace, [register your dataset](#register-datasets).
-
-### Create a TabularDataset
-
-Use the [`from_delimited_files()`](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#from-delimited-files-path--validate-true--include-path-false--infer-column-types-true--set-column-types-none--separatorheader-true--partition-format-none--support-multi-line-false--empty-as-string-false--encoding--utf8--) method on the `TabularDatasetFactory` class to read files in .csv or .tsv format, and to create an unregistered TabularDataset. To read in files from .parquet format, use the [`from_parquet_files()`](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#from-parquet-files-path--validate-true--include-path-false--set-column-types-none--partition-format-none-) method. If you're reading from multiple files, results will be aggregated into one tabular representation.
-
-See the [TabularDatasetFactory reference documentation](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory) for information about supported file formats, as well as syntax and design patterns such as [multiline support](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#from-delimited-files-path--validate-true--include-path-false--infer-column-types-true--set-column-types-none--separatorheader-true--partition-format-none--support-multi-line-false--empty-as-string-false--encoding--utf8--).
-
-If your storage is behind a virtual network or firewall, set the parameter `validate=False` in your `from_delimited_files()` method. This bypasses the initial validation step, and ensures that you can create your dataset from these secure files. Learn more about how to use [datastores and datasets in a virtual network](how-to-secure-workspace-vnet.md#datastores-and-datasets).
-
-The following code gets the existing workspace and the desired datastore by name, and then passes the datastore and file locations to the `path` parameter to create a new TabularDataset, `weather_ds`.
-
-```Python
-from azureml.core import Workspace, Datastore, Dataset
-
-datastore_name = 'your datastore name'
-
-# get existing workspace
-workspace = Workspace.from_config()
-
-# retrieve an existing datastore in the workspace by name
-datastore = Datastore.get(workspace, datastore_name)
-
-# create a TabularDataset from 3 file paths in datastore
-datastore_paths = [(datastore, 'weather/2018/11.csv'),
- (datastore, 'weather/2018/12.csv'),
- (datastore, 'weather/2019/*.csv')]
-
-weather_ds = Dataset.Tabular.from_delimited_files(path=datastore_paths)
-```
-### Set data schema
-
-By default, when you create a TabularDataset, column data types are inferred automatically. If the inferred types don't match your expectations, you can update your dataset schema by specifying column types with the following code. The parameter `infer_column_types` is only applicable for datasets created from delimited files. [Learn more about supported data types](/python/api/azureml-core/azureml.data.dataset_factory.datatype).
--
-```Python
-from azureml.core import Dataset
-from azureml.data.dataset_factory import DataType
-
-# create a TabularDataset from a delimited file behind a public web url and convert column "Survived" to boolean
-web_path ='https://dprepdata.blob.core.windows.net/demo/Titanic.csv'
-titanic_ds = Dataset.Tabular.from_delimited_files(path=web_path, set_column_types={'Survived': DataType.to_bool()})
-
-# preview the first 3 rows of titanic_ds
-titanic_ds.take(3).to_pandas_dataframe()
-```
-
-|(Index)|PassengerId|Survived|Pclass|Name|Sex|Age|SibSp|Parch|Ticket|Fare|Cabin|Embarked
-|--|--|--|--|--|--|--|--|--|--|--|--|--|
-0|1|False|3|Braund, Mr. Owen Harris|male|22.0|1|0|A/5 21171|7.2500||S
-1|2|True|1|Cumings, Mrs. John Bradley (Florence Briggs Th...|female|38.0|1|0|PC 17599|71.2833|C85|C
-2|3|True|3|Heikkinen, Miss. Laina|female|26.0|0|0|STON/O2. 3101282|7.9250||S
-
-To reuse and share datasets across experiments in your workspace, [register your dataset](#register-datasets).
-
-## Wrangle data
-After you create and [register](#register-datasets) your dataset, you can load it into your notebook for data wrangling and [exploration](#explore-data) prior to model training.
-
-If you don't need to do any data wrangling or exploration, see how to consume datasets in your training scripts for submitting ML experiments in [Train with datasets](how-to-train-with-datasets.md).
-
-### Filter datasets (preview)
-
-Filtering capabilities depend on the type of dataset you have.
-> [!IMPORTANT]
-> Filtering datasets with the preview method, [`filter()`](/python/api/azureml-core/azureml.data.tabulardataset#filter-expression-) is an [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview feature, and may change at any time.
->
-**For TabularDatasets**, you can keep or remove columns with the [keep_columns()](/python/api/azureml-core/azureml.data.tabulardataset#keep-columns-columns--validate-false-) and [drop_columns()](/python/api/azureml-core/azureml.data.tabulardataset#drop-columns-columns-) methods.
-
-To filter out rows by a specific column value in a TabularDataset, use the [filter()](/python/api/azureml-core/azureml.data.tabulardataset#filter-expression-) method (preview).
-
-The following examples return an unregistered dataset based on the specified expressions.
-
-```python
-# TabularDataset that only contains records where the age column value is greater than 15
-tabular_dataset = tabular_dataset.filter(tabular_dataset['age'] > 15)
-
-# TabularDataset that contains records where the name column value contains 'Bri' and the age column value is greater than 15
-tabular_dataset = tabular_dataset.filter((tabular_dataset['name'].contains('Bri')) & (tabular_dataset['age'] > 15))
-```
-
-**In FileDatasets**, each row corresponds to a path of a file, so filtering by column value isn't helpful. But you can [filter()](/python/api/azureml-core/azureml.data.filedataset#filter-expression-) out rows by metadata, like CreationTime, Size, and so on.
-
-The following examples return an unregistered dataset based on the specified expressions.
-
-```python
-# FileDataset that only contains files where Size is less than 100000
-file_dataset = file_dataset.filter(file_dataset.file_metadata['Size'] < 100000)
-
-# FileDataset that only contains files that were either created prior to Jan 1, 2020 or where CanSeek is False
-file_dataset = file_dataset.filter((file_dataset.file_metadata['CreatedTime'] < datetime(2020,1,1)) | (file_dataset.file_metadata['CanSeek'] == False))
-```
-
-**Labeled datasets** created from [image labeling projects](how-to-create-image-labeling-projects.md) are a special case. These datasets are a type of TabularDataset made up of image files. For these types of datasets, you can [filter()](/python/api/azureml-core/azureml.data.tabulardataset#filter-expression-) images by metadata, and by column values like `label` and `image_details`.
-
-```python
-# Dataset that only contains records where the label column value is dog
-labeled_dataset = labeled_dataset.filter(labeled_dataset['label'] == 'dog')
-
-# Dataset that only contains records where the label and isCrowd columns are True and where the file size is larger than 100000
-labeled_dataset = labeled_dataset.filter((labeled_dataset['label']['isCrowd'] == True) & (labeled_dataset.file_metadata['Size'] > 100000))
-```
-
-### Partition data
-
-You can partition a dataset by including the `partition_format` parameter when creating a TabularDataset or FileDataset.
-
-When you partition a dataset, the partition information of each file path is extracted into columns based on the specified format. The format should start from the position of first partition key until the end of file path.
-
-For example, given the path `../Accounts/2019/01/01/data.jsonl` where the partition is by department name and time; the `partition_format='/{Department}/{PartitionDate:yyyy/MM/dd}/data.jsonl'` creates a string column 'Department' with the value 'Accounts' and a datetime column 'PartitionDate' with the value `2019-01-01`.
-
-If your data already has existing partitions and you want to preserve that format, include the `partition_format` parameter in your [`from_files()`](/python/api/azureml-core/azureml.data.dataset_factory.filedatasetfactory#from-files-path--validate-true--partition-format-none-) method to create a FileDataset.
-
-To create a TabularDataset that preserves existing partitions, include the `partition_format` parameter in the [from_parquet_files()](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#from-parquet-files-path--validate-true--include-path-false--set-column-types-none--partition-format-none-) or the
-[from_delimited_files()](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#from-delimited-files-path--validate-true--include-path-false--infer-column-types-true--set-column-types-none--separatorheader-true--partition-format-none--support-multi-line-false--empty-as-string-false--encoding--utf8--) method.
-
-The following example:
-* Creates a FileDataset from partitioned files.
-* Gets the partition keys.
-* Creates a new, filtered FileDataset using the `filter()` method and downloads the underlying data.
-
-```Python
-
-file_dataset = Dataset.File.from_files(data_paths, partition_format = '{userid}/*.wav')
-file_dataset = file_dataset.register(workspace=workspace, name='speech_dataset')
-
-# access partition_keys
-indexes = file_dataset.partition_keys # ['userid']
-
-# get all partition key value pairs should return [{'userid': 'user1'}, {'userid': 'user2'}]
-partitions = file_dataset.get_partition_key_values()
--
-partitions = file_dataset.get_partition_key_values(['userid'])
-# return [{'userid': 'user1'}, {'userid': 'user2'}]
-
-# filter API, this will only download data from the user1/ folder
-new_file_dataset = file_dataset.filter(file_dataset['userid'] == 'user1').download()
-```
-
-You can also create a new partition structure for TabularDatasets with the [partition_by()](/python/api/azureml-core/azureml.data.tabulardataset#partition-by-partition-keys--target--name-none--show-progress-true--partition-as-file-dataset-false-) method.
-
-```Python
-
- dataset = Dataset.get_by_name(workspace, 'test') # indexed by country, state, partition_date
-
-# call partition_by locally
-new_dataset = dataset.partition_by(name="repartitioned_ds", partition_keys=['country'], target=DataPath(datastore, "repartition"))
-partition_keys = new_dataset.partition_keys # ['country']
-```
-
-## Explore data
-
-After you're done wrangling your data, you can [register](#register-datasets) your dataset, and then load it into your notebook for data exploration prior to model training.
-
-For FileDatasets, you can either **mount** or **download** your dataset, and apply the Python libraries you'd normally use for data exploration. [Learn more about mount vs download](how-to-train-with-datasets.md#mount-vs-download).
-
-```python
-# download the dataset
-dataset.download(target_path='.', overwrite=False)
-
-# mount dataset to the temp directory at `mounted_path`
-
-import tempfile
-mounted_path = tempfile.mkdtemp()
-mount_context = dataset.mount(mounted_path)
-
-mount_context.start()
-```
-
-For TabularDatasets, use the [`to_pandas_dataframe()`](/python/api/azureml-core/azureml.data.tabulardataset#to-pandas-dataframe-on-error--nullout-of-range-datetime--null--) method to view your data in a dataframe.
-
-```python
-# preview the first 3 rows of titanic_ds
-titanic_ds.take(3).to_pandas_dataframe()
-```
-
-|(Index)|PassengerId|Survived|Pclass|Name|Sex|Age|SibSp|Parch|Ticket|Fare|Cabin|Embarked
-|--|--|--|--|--|--|--|--|--|--|--|--|--|
-0|1|False|3|Braund, Mr. Owen Harris|male|22.0|1|0|A/5 21171|7.2500||S
-1|2|True|1|Cumings, Mrs. John Bradley (Florence Briggs Th...|female|38.0|1|0|PC 17599|71.2833|C85|C
-2|3|True|3|Heikkinen, Miss. Laina|female|26.0|0|0|STON/O2. 3101282|7.9250||S
-
-## Create a dataset from pandas dataframe
-
-To create a TabularDataset from an in-memory pandas dataframe,
-use the [`register_pandas_dataframe()`](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#register-pandas-dataframe-dataframe--target--name--description-none--tags-none--show-progress-true-) method. This method registers the TabularDataset to the workspace and uploads data to your underlying storage, which incurs storage costs.
-
-```python
-from azureml.core import Workspace, Datastore, Dataset
-import pandas as pd
-
-pandas_df = pd.read_csv('<path to your csv file>')
-ws = Workspace.from_config()
-datastore = Datastore.get(ws, '<name of your datastore>')
-dataset = Dataset.Tabular.register_pandas_dataframe(pandas_df, datastore, "dataset_from_pandas_df", show_progress=True)
-
-```
-> [!TIP]
-> Create and register a TabularDataset from an in-memory Spark dataframe or a Dask dataframe with the public preview methods, [`register_spark_dataframe()`](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#register-spark-dataframe-dataframe--target--name--description-none--tags-none--show-progress-true-) and [`register_dask_dataframe()`](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#register-dask-dataframe-dataframe--target--name--description-none--tags-none--show-progress-true-). These methods are [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview features, and may change at any time.
->
-> These methods upload data to your underlying storage, and as a result incur storage costs.
-
-## Register datasets
-
-To complete the creation process, register your datasets with a workspace. Use the [`register()`](/python/api/azureml-core/azureml.data.abstract_dataset.abstractdataset#&preserve-view=trueregister-workspace--name--description-none--tags-none--create-new-version-false-) method to register datasets with your workspace in order to share them with others and reuse them across experiments in your workspace:
-
-```Python
-titanic_ds = titanic_ds.register(workspace=workspace,
- name='titanic_ds',
- description='titanic training data')
-```
-
-## Create datasets using Azure Resource Manager
-
-There are many templates at [https://github.com/Azure/azure-quickstart-templates/tree/master//quickstarts/microsoft.machinelearningservices](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices) that can be used to create datasets.
-
-For information on using these templates, see [Use an Azure Resource Manager template to create a workspace for Azure Machine Learning](how-to-create-workspace-template.md).
-
-
-## Train with datasets
-
-Use your datasets in your machine learning experiments for training ML models. [Learn more about how to train with datasets](how-to-train-with-datasets.md).
-
-## Version datasets
-
-You can register a new dataset under the same name by creating a new version. A dataset version is a way to bookmark the state of your data so that you can apply a specific version of the dataset for experimentation or future reproduction. Learn more about [dataset versions](how-to-version-track-datasets.md).
-```Python
-# create a TabularDataset from Titanic training data
-web_paths = ['https://dprepdata.blob.core.windows.net/demo/Titanic.csv',
- 'https://dprepdata.blob.core.windows.net/demo/Titanic2.csv']
-titanic_ds = Dataset.Tabular.from_delimited_files(path=web_paths)
-
-# create a new version of titanic_ds
-titanic_ds = titanic_ds.register(workspace = workspace,
- name = 'titanic_ds',
- description = 'new titanic training data',
- create_new_version = True)
-```
-
-## Next steps
-
-* Learn [how to train with datasets](how-to-train-with-datasets.md).
-* Use automated machine learning to [train with TabularDatasets](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb).
-* For more dataset training examples, see the [sample notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/work-with-data/).
machine-learning How To Custom Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-custom-dns.md
Last updated 03/01/2022 -+ # How to use your workspace with a custom DNS server
When using an Azure Machine Learning workspace with a private endpoint, there ar
> * [Virtual network overview](how-to-network-security-overview.md) > * [Secure the workspace resources](how-to-secure-workspace-vnet.md) > * [Secure the training environment](how-to-secure-training-vnet.md)
-> * [Secure the inference environment](how-to-secure-inferencing-vnet.md)
+> * For securing inference, see the following documents:
+> * If using CLI v1 or SDK v1 - [Secure inference environment](how-to-secure-inferencing-vnet.md)
+> * If using CLI v2 or SDK v2 - [Network isolation for managed online endpoints](how-to-secure-online-endpoint.md)
> * [Enable studio functionality](how-to-enable-studio-virtual-network.md) > * [Use a firewall](how-to-access-azureml-behind-firewall.md) ## Prerequisites
Access to a given Azure Machine Learning workspace via Private Link is done by c
- ```<per-workspace globally-unique identifier>.workspace.<region the workspace was created in>.cert.api.azureml.ms``` - ```<compute instance name>.<region the workspace was created in>.instances.azureml.ms``` - ```ml-<workspace-name, truncated>-<region>-<per-workspace globally-unique identifier>.notebooks.azure.net```
+- ```*.<per-workspace globally-unique identifier>.inference.<region the workspace was created in>.api.azureml.ms``` - Used by managed online endpoints
**Azure China 21Vianet regions**: - ```<per-workspace globally-unique identifier>.workspace.<region the workspace was created in>.api.ml.azure.cn```
The following list contains the fully qualified domain names (FQDNs) used by you
> * Compute instances can be accessed only from within the virtual network. > * The IP address for this FQDN is **not** the IP of the compute instance. Instead, use the private IP address of the workspace private endpoint (the IP of the `*.api.azureml.ms` entries.)
+* `*.<workspace-GUID>.inference.<region>.api.azureml.ms`
+ #### Azure China region The following FQDNs are for Azure China regions:
The information returned from all methods is the same; a list of the FQDN and pr
| `fb7e20a0-8891-458b-b969-55ddb3382f51.workspace.eastus.api.azureml.ms` | `10.1.0.5` | | `fb7e20a0-8891-458b-b969-55ddb3382f51.workspace.eastus.cert.api.azureml.ms` | `10.1.0.5` | | `ml-myworkspace-eastus-fb7e20a0-8891-458b-b969-55ddb3382f51.notebooks.azure.net` | `10.1.0.6` |
+| `mymanagedonlineendpoint.fb7e20a0-8891-458b-b969-55ddb3382f51.inference.eastus.api.azureml.ms` | `10.1.0.7` |
The following table shows example IPs from Azure China regions:
The following is an example of `hosts` file entries for Azure Machine Learning:
10.1.0.5 fb7e20a0-8891-458b-b969-55ddb3382f51.workspace.eastus.api.azureml.ms 10.1.0.5 fb7e20a0-8891-458b-b969-55ddb3382f51.workspace.eastus.cert.api.azureml.ms 10.1.0.6 ml-myworkspace-eastus-fb7e20a0-8891-458b-b969-55ddb3382f51.notebooks.azure.net
+10.1.0.7 mymanagedonlineendpoint.fb7e20a0-8891-458b-b969-55ddb3382f51.inference.eastus.api.azureml.ms
# For a compute instance named 'mycomputeinstance' 10.1.0.5 mycomputeinstance.eastus.instances.azureml.ms
This article is part of a series on securing an Azure Machine Learning workflow.
* [Virtual network overview](how-to-network-security-overview.md) * [Secure the workspace resources](how-to-secure-workspace-vnet.md) * [Secure the training environment](how-to-secure-training-vnet.md)
-* [Secure the inference environment](how-to-secure-inferencing-vnet.md)
+* For securing inference, see the following documents:
+ * If using CLI v1 or SDK v1 - [Secure inference environment](how-to-secure-inferencing-vnet.md)
+ * If using CLI v2 or SDK v2 - [Network isolation for managed online endpoints](how-to-secure-online-endpoint.md)
* [Enable studio functionality](how-to-enable-studio-virtual-network.md) * [Use a firewall](how-to-access-azureml-behind-firewall.md)
machine-learning How To Customize Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-customize-compute-instance.md
+
+ Title: Customize compute instance with a script (preview)
+
+description: Create a customized compute instance, using a startup script. Use the compute instance as your development environment, or as compute target for dev/test purposes.
++++++++ Last updated : 05/04/2022++
+# Customize the compute instance with a script (preview)
+
+> [!IMPORTANT]
+> Setup scripts are currently in public preview.
+> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Use a setup script for an automated way to customize and configure a compute instance at provisioning time.
+
+Use a compute instance as your fully configured and managed development environment in the cloud. For development and testing, you can also use the instance as a [training compute target](concept-compute-target.md#train) or as an [inference target](concept-compute-target.md#deploy). A compute instance can run multiple jobs in parallel and has a job queue. As a development environment, a compute instance can't be shared with other users in your workspace.
+
+As an administrator, you can write a customization script to be used to provision all compute instances in the workspace according to your requirements.
+
+Some examples of what you can do in a setup script:
+
+* Install packages, tools, and software
+* Mount data
+* Create custom conda environment and Jupyter kernels
+* Clone git repositories and set git config
+* Set network proxies
+* Set environment variables
+* Install JupyterLab extensions
+
+## Create the setup script
+
+The setup script is a shell script, which runs as `rootuser`. Create or upload the script into your **Notebooks** files:
+
+1. Sign into the [studio](https://ml.azure.com) and select your workspace.
+2. On the left, select **Notebooks**
+3. Use the **Add files** tool to create or upload your setup shell script. Make sure the script filename ends in ".sh". When you create a new file, also change the **File type** to *bash(.sh)*.
++
+When the script runs, the current working directory of the script is the directory where it was uploaded. For example, if you upload the script to **Users>admin**, the location of the script on the compute instance and current working directory when the script runs is */home/azureuser/cloudfiles/code/Users/admin*. This location enables you to use relative paths in the script.
+
+Script arguments can be referred to in the script as $1, $2, etc.
+
+If your script does something specific to azureuser, such as installing a conda environment or a Jupyter kernel, you'll have to put it within a `sudo -u azureuser` block like this
++
+The command `sudo -u azureuser` changes the current working directory to `/home/azureuser`. You also can't access the script arguments in this block.
+
+For other example scripts, see [azureml-examples](https://github.com/Azure/azureml-examples/tree/main/setup-ci).
+
+You can also use the following environment variables in your script:
+
+* `CI_RESOURCE_GROUP`
+* `CI_WORKSPACE`
+* `CI_NAME`
+* `CI_LOCAL_UBUNTU_USER` - points to `azureuser`
+
+Use a setup script in conjunction with **Azure Policy to either enforce or default a setup script for every compute instance creation**.
+The default value for a setup script timeout is 15 minutes. The time can be changed in studio, or through ARM templates using the `DURATION` parameter.
+`DURATION` is a floating point number with an optional suffix: `'s'` for seconds (the default), `'m'` for minutes, `'h'` for hours or `'d'` for days.
+
+## Use the script in studio
+
+Once you store the script, specify it during creation of your compute instance:
+
+1. Sign into [studio](https://ml.azure.com/) and select your workspace.
+1. On the left, select **Compute**.
+1. Select **+New** to create a new compute instance.
+1. [Fill out the form](how-to-create-manage-compute-instance.md?tabs=azure-studio#create).
+1. On the second page of the form, open **Show advanced settings**.
+1. Turn on **Provision with setup script**.
+1. Browse to the shell script you saved. Or upload a script from your computer.
+1. Add command arguments as needed.
++
+> [!TIP]
+> If workspace storage is attached to a virtual network, you might not be able to access the setup script file unless you are accessing the studio from within the virtual network.
+
+## Use the script in a Resource Manager template
+
+In a Resource Manager [template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices/machine-learning-compute-create-computeinstance), add `setupScripts` to invoke the setup script when the compute instance is provisioned. For example:
+
+```json
+"setupScripts":{
+ "scripts":{
+ "creationScript":{
+ "scriptSource":"workspaceStorage",
+ "scriptData":"[parameters('creationScript.location')]",
+ "scriptArguments":"[parameters('creationScript.cmdArguments')]"
+ }
+ }
+}
+```
+
+In the example above, `scriptData` specifies the location of the creation script in the notebooks file share, such as `Users/admin/testscript.sh`.
+`scriptArguments` is optional and specifies the arguments for the creation script.
+
+You could instead provide the script inline for a Resource Manager template. The shell command can refer to any dependencies uploaded into the notebooks file share. When you use an inline string, the working directory for the script is `/mnt/batch/tasks/shared/LS_root/mounts/clusters/**ciname**/code/Users`.
+
+For example, specify a base64 encoded command string for `scriptData`:
+
+```json
+"setupScripts":{
+ "scripts":{
+ "creationScript":{
+ "scriptSource":"inline",
+ "scriptData":"[base64(parameters('inlineCommand'))]",
+ "scriptArguments":"[parameters('creationScript.cmdArguments')]"
+ }
+ }
+}
+```
+
+## Setup script logs
+
+Logs from the setup script execution appear in the logs folder in the compute instance details page. Logs are stored back to your notebooks file share under the `Logs\<compute instance name>` folder. Script file and command arguments for a particular compute instance are shown in the details page.
+
+## Next steps
+
+* [Access the compute instance terminal](how-to-access-terminal.md)
+* [Create and manage files](how-to-manage-files.md)
+* [Update the compute instance to the latest VM image](concept-vulnerability-management.md#compute-instance)
+* [Submit a training run](how-to-set-up-training-targets.md)
machine-learning How To Data Ingest Adf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-data-ingest-adf.md
Last updated 10/21/2021 --
-# Customer intent: As an experienced data engineer, I need to create a production data ingestion pipeline for the data used to train my models.
-+
+#Customer intent: As an experienced data engineer, I need to create a production data ingestion pipeline for the data used to train my models.
# Data ingestion with Azure Data Factory
The Data Factory pipeline saves the prepared data to your cloud storage (such as
Consume your prepared data in Azure Machine Learning by, * Invoking an Azure Machine Learning pipeline from your Data Factory pipeline.<br>**OR**
-* Creating an [Azure Machine Learning datastore](how-to-access-data.md#create-and-register-datastores) and [Azure Machine Learning dataset](how-to-create-register-datasets.md) for use at a later time.
+* Creating an [Azure Machine Learning datastore](how-to-access-data.md#create-and-register-datastores).
### Invoke Azure Machine Learning pipeline from Data Factory
If you don't want to create a ML pipeline, you can access the data directly from
The following Python code demonstrates how to create a datastore that connects to Azure DataLake Generation 2 storage. [Learn more about datastores and where to find service principal permissions](how-to-access-data.md#create-and-register-datastores). + ```python ws = Workspace.from_config() adlsgen2_datastore_name = '<ADLS gen2 storage account alias>' #set ADLS Gen2 storage account alias in AML
adlsgen2_datastore = Datastore.register_azure_data_lake_gen2(
Next, create a dataset to reference the file(s) you want to use in your machine learning task.
-The following code creates a TabularDataset from a csv file, `prepared-data.csv`. Learn more about [dataset types and accepted file formats](how-to-create-register-datasets.md#dataset-types).
+The following code creates a TabularDataset from a csv file, `prepared-data.csv`. Learn more about [dataset types and accepted file formats](./v1/how-to-create-register-datasets.md#dataset-types).
+ ```python from azureml.core import Workspace, Datastore, Dataset
From here, use `prepared_dataset` to reference your prepared data, like in your
* [Run a Databricks notebook in Azure Data Factory](../data-factory/transform-data-using-databricks-notebook.md) * [Access data in Azure storage services](./how-to-access-data.md#create-and-register-datastores) * [Train models with datasets in Azure Machine Learning](./how-to-train-with-datasets.md).
-* [DevOps for a data ingestion pipeline](./how-to-cicd-data-ingestion.md)
+* [DevOps for a data ingestion pipeline](./how-to-cicd-data-ingestion.md)
machine-learning How To Data Prep Synapse Spark Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-data-prep-synapse-spark-pool.md
Last updated 10/21/2021---
-# Customer intent: As a data scientist, I want to prepare my data at scale, and to train my machine learning models from a single notebook using Azure Machine Learning.
+
+#Customer intent: As a data scientist, I want to prepare my data at scale, and to train my machine learning models from a single notebook using Azure Machine Learning.
# Data wrangling with Apache Spark pools (preview) ++ In this article, you learn how to perform data wrangling tasks interactively within a dedicated Synapse session, powered by [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md), in a Jupyter notebook using the [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/). If you prefer to use Azure Machine Learning pipelines, see [How to use Apache Spark (powered by Azure Synapse Analytics) in your machine learning pipeline (preview)](how-to-use-synapsesparkstep.md).
There are two ways to load data from these storage
* Directly load data from storage using its Hadoop Distributed Files System (HDFS) path.
-* Read in data from an existing [Azure Machine Learning dataset](how-to-create-register-datasets.md).
+* Read in data from an existing [Azure Machine Learning dataset](./v1/how-to-create-register-datasets.md).
To access these storage services, you need **Storage Blob Data Reader** permissions. If you plan to write data back to these storage services, you need **Storage Blob Data Contributor** permissions. [Learn more about storage permissions and roles](../storage/blobs/assign-azure-role-data-access.md).
When you've completed data preparation and saved your prepared data to storage,
## Create dataset to represent prepared data
-When you're ready to consume your prepared data for model training, connect to your storage with an [Azure Machine Learning datastore](how-to-access-data.md), and specify which file(s) you want to use with an [Azure Machine Learning dataset](how-to-create-register-datasets.md).
+When you're ready to consume your prepared data for model training, connect to your storage with an [Azure Machine Learning datastore](how-to-access-data.md), and specify which file(s) you want to use with an [Azure Machine Learning dataset](./v1/how-to-create-register-datasets.md).
The following code example, * Assumes you already created a datastore that connects to the storage service where you saved your prepared data. * Gets that existing datastore, `mydatastore`, from the workspace, `ws` with the get() method.
-* Creates a [FileDataset](how-to-create-register-datasets.md#filedataset), `train_ds`, that references the prepared data files located in the `training_data` directory in `mydatastore`.
+* Creates a [FileDataset](./v1/how-to-create-register-datasets.md#filedataset), `train_ds`, that references the prepared data files located in the `training_data` directory in `mydatastore`.
* Creates the variable `input1`, which can be used at a later time to make the data files of the `train_ds` dataset available to a compute target for your training tasks. ```python
Similarly, if you have an Azure Machine Learning pipeline, you can use the [Syna
Making your data available to the Synapse Spark pool depends on your dataset type. * For a FileDataset, you can use the [`as_hdfs()`](/python/api/azureml-core/azureml.data.filedataset#as-hdfs--) method. When the run is submitted, the dataset is made available to the Synapse Spark pool as a Hadoop distributed file system (HFDS).
-* For a [TabularDataset](how-to-create-register-datasets.md#tabulardataset), you can use the [`as_named_input()`](/python/api/azureml-core/azureml.data.abstract_dataset.abstractdataset#as-named-input-name-) method.
+* For a [TabularDataset](./v1/how-to-create-register-datasets.md#tabulardataset), you can use the [`as_named_input()`](/python/api/azureml-core/azureml.data.abstract_dataset.abstractdataset#as-named-input-name-) method.
The following code,
machine-learning How To Datastore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-datastore.md
+
+ Title: Use datastores
+
+description: Learn how to use datastores to connect to Azure storage services during training with Azure Machine Learning.
+++++++ Last updated : 01/28/2022+++
+# Customer intent: As an experienced Python developer, I need to make my data in Azure storage available to my remote compute to train my machine learning models.
++
+# Connect to storage with Azure Machine Learning datastores
+
+In this article, learn how to connect to data storage services on Azure with Azure Machine Learning datastores.
+
+## Prerequisites
+
+- An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
+
+- The [Azure Machine Learning SDK for Python](/python/api/overview/azure/ml/intro).
+
+- An Azure Machine Learning workspace.
+
+> [!NOTE]
+> Azure Machine Learning datastores do **not** create the underlying storage accounts, rather they register an **existing** storage account for use in Azure Machine Learning. It is not a requirement to use Azure Machine Learning datastores - you can use storage URIs directly assuming you have access to the underlying data.
++
+## Create an Azure Blob datastore
+
+# [CLI: Identity-based access](#tab/cli-identity-based-access)
+Create the following YAML file (updating the values):
+
+```yaml
+# my_blob_datastore.yml
+$schema: https://azuremlschemas.azureedge.net/latest/azureBlob.schema.json
+name: my_blob_ds # add name of your datastore here
+type: azure_blob
+description: here is a description # add a description of your datastore here
+account_name: my_account_name # add storage account name here
+container_name: my_container_name # add storage container name here
+```
+
+Create the Azure Machine Learning datastore in the CLI:
+
+```azurecli
+az ml datastore create --file my_blob_datastore.yml
+```
+
+# [CLI: Account key](#tab/cli-account-key)
+Create the following YAML file (updating the values):
+
+```yaml
+# my_blob_datastore.yml
+$schema: https://azuremlschemas.azureedge.net/latest/azureBlob.schema.json
+name: blob_example
+type: azure_blob
+description: Datastore pointing to a blob container.
+account_name: mytestblobstore
+container_name: data-container
+credentials:
+ account_key: XXXxxxXXXxXXXXxxXXXXXxXXXXXxXxxXxXXXxXXXxXXxxxXXxxXXXxXxXXXxxXxxXXXXxxxxxXXxxxxxxXXXxXXX
+```
+
+Create the Azure Machine Learning datastore in the CLI:
+
+```azurecli
+az ml datastore create --file my_blob_datastore.yml
+```
+
+# [CLI: SAS](#tab/cli-sas)
+Create the following YAML file (updating the values):
+
+```yaml
+# my_blob_datastore.yml
+$schema: https://azuremlschemas.azureedge.net/latest/azureBlob.schema.json
+name: blob_sas_example
+type: azure_blob
+description: Datastore pointing to a blob container using SAS token.
+account_name: mytestblobstore
+container_name: data-container
+credentials:
+ sas_token: ?xx=XXXX-XX-XX&xx=xxxx&xxx=xxx&xx=xxxxxxxxxxx&xx=XXXX-XX-XXXXX:XX:XXX&xx=XXXX-XX-XXXXX:XX:XXX&xxx=xxxxx&xxx=XXxXXXxxxxxXXXXXXXxXxxxXXXXXxxXXXXXxXXXXxXXXxXXxXX
+```
+
+Create the Azure Machine Learning datastore in the CLI:
+
+```azurecli
+az ml datastore create --file my_blob_datastore.yml
+```
+
+# [Python SDK: Identity-based access](#tab/sdk-identity-based-access)
+
+```python
+from azure.ai.ml.entities import AzureBlobDatastore
+from azure.ai.ml import MLClient
+
+ml_client = MLClient.from_config()
+
+store = AzureBlobDatastore(
+ name="",
+ description="",
+ account_name="",
+ container_name=""
+)
+
+ml_client.create_or_update(store)
+```
+
+# [Python SDK: Account key](#tab/sdk-account-key)
+
+```python
+from azure.ai.ml.entities import AzureBlobDatastore
+from azure.ai.ml.entities._datastore.credentials import AccountKeyCredentials
+from azure.ai.ml import MLClient
+
+ml_client = MLClient.from_config()
+
+creds = AccountKeyCredentials(account_key="")
+
+store = AzureBlobDatastore(
+ name="",
+ description="",
+ account_name="",
+ container_name="",
+ credentials=creds
+)
+
+ml_client.create_or_update(store)
+```
+
+# [Python SDK: SAS](#tab/sdk-SAS)
+
+```python
+from azure.ai.ml.entities import AzureBlobDatastore
+from azure.ai.ml.entities._datastore.credentials import SasTokenCredentials
+from azure.ai.ml import MLClient
+
+ml_client = MLClient.from_config()
+
+creds = SasTokenCredentials(sas_token="")
+
+store = AzureBlobDatastore(
+ name="",
+ description="",
+ account_name="",
+ container_name="",
+ credentials=creds
+)
+
+ml_client.create_or_update(store)
+```
++
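+
+After a datastore is created, you can retrieve it with the same `MLClient` used in the snippets above. A minimal sketch; the datastore name is a placeholder:
+
+```python
+# List the datastores registered in the workspace
+for ds in ml_client.datastores.list():
+    print(ds.name, ds.type)
+
+# Retrieve a single datastore by name
+blob_datastore = ml_client.datastores.get("my_blob_ds")
+print(blob_datastore.container_name)
+```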
+## Create an Azure Data Lake Gen2 datastore
+
+# [CLI: Identity-based access](#tab/cli-adls-identity-based-access)
+Create the following YAML file (updating the values):
+
+```yaml
+# my_adls_datastore.yml
+$schema: https://azuremlschemas.azureedge.net/latest/azureDataLakeGen2.schema.json
+name: adls_gen2_credless_example
+type: azure_data_lake_gen2
+description: Credential-less datastore pointing to an Azure Data Lake Storage Gen2.
+account_name: mytestdatalakegen2
+filesystem: my-gen2-container
+```
+
+Create the Azure Machine Learning datastore in the CLI:
+
+```azurecli
+az ml datastore create --file my_adls_datastore.yml
+```
+
+# [CLI: Service principal](#tab/cli-adls-sp)
+Create the following YAML file (updating the values):
+
+```yaml
+# my_adls_datastore.yml
+$schema: https://azuremlschemas.azureedge.net/latest/azureDataLakeGen2.schema.json
+name: adls_gen2_example
+type: azure_data_lake_gen2
+description: Datastore pointing to an Azure Data Lake Storage Gen2.
+account_name: mytestdatalakegen2
+filesystem: my-gen2-container
+credentials:
+ tenant_id: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
+ client_id: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
+ client_secret: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+```
+
+Create the Azure Machine Learning datastore in the CLI:
+
+```azurecli
+az ml datastore create --file my_adls_datastore.yml
+```
+
+# [Python SDK: Identity-based access](#tab/sdk-adls-identity-access)
+
+```python
+from azure.ai.ml.entities import AzureDataLakeGen2Datastore
+from azure.ai.ml import MLClient
+
+ml_client = MLClient.from_config()
+
+store = AzureDataLakeGen2Datastore(
+ name="",
+ description="",
+ account_name="",
+ file_system=""
+)
+
+ml_client.create_or_update(store)
+```
+
+# [Python SDK: Service principal](#tab/sdk-adls-sp)
+
+```python
+from azure.ai.ml.entities import AzureDataLakeGen2Datastore
+from azure.ai.ml.entities._datastore.credentials import ServicePrincipalCredentials
+from azure.ai.ml import MLClient
+
+ml_client = MLClient.from_config()
+
+creds = ServicePrincipalCredentials(
+ authority_url="",
+ resource_url="",
+ tenant_id="",
+ secrets=""
+)
+
+store = AzureDataLakeGen2Datastore(
+ name="",
+ description="",
+ account_name="",
+ file_system="",
+ credentials=creds
+)
+
+ml_client.create_or_update(store)
+```
++
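+
+For the identity-based (credential-less) options above, the identity that reads the data - for example, your Azure AD user or the compute's managed identity - needs an appropriate Azure RBAC role on the storage account; **Storage Blob Data Reader** is typically sufficient for read access. A hedged sketch of granting that role (the object ID and scope are placeholders):
+
+```azurecli
+az role assignment create \
+    --role "Storage Blob Data Reader" \
+    --assignee-object-id "<identity-object-id>" \
+    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account-name>"
+```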
+## Create an Azure Files datastore
+
+# [CLI: Account key](#tab/cli-azfiles-account-key)
+Create the following YAML file (updating the values):
+
+```yaml
+# my_files_datastore.yml
+$schema: https://azuremlschemas.azureedge.net/latest/azureFile.schema.json
+name: file_example
+type: azure_file
+description: Datastore pointing to an Azure File Share.
+account_name: mytestfilestore
+file_share_name: my-share
+credentials:
+ account_key: XxXxXxXXXXXXXxXxXxxXxxXXXXXXXXxXxxXXxXXXXXXXxxxXxXXxXXXXXxXXxXXXxXxXxxxXXxXXxXXXXXxXxxXX
+```
+
+Create the Azure Machine Learning datastore in the CLI:
+
+```azurecli
+az ml datastore create --file my_files_datastore.yml
+```
+
+# [CLI: SAS](#tab/cli-azfiles-sas)
+Create the following YAML file (updating the values):
+
+```yaml
+# my_files_datastore.yml
+$schema: https://azuremlschemas.azureedge.net/latest/azureFile.schema.json
+name: file_sas_example
+type: azure_file
+description: Datastore pointing to an Azure File Share using SAS token.
+account_name: mytestfilestore
+file_share_name: my-share
+credentials:
+ sas_token: ?xx=XXXX-XX-XX&xx=xxxx&xxx=xxx&xx=xxxxxxxxxxx&xx=XXXX-XX-XXXXX:XX:XXX&xx=XXXX-XX-XXXXX:XX:XXX&xxx=xxxxx&xxx=XXxXXXxxxxxXXXXXXXxXxxxXXXXXxxXXXXXxXXXXxXXXxXXxXX
+```
+
+Create the Azure Machine Learning datastore in the CLI:
+
+```azurecli
+az ml datastore create --file my_files_datastore.yml
+```
+
+# [Python SDK: Account key](#tab/sdk-azfiles-accountkey)
+
+```python
+from azure.ai.ml.entities import AzureFileDatastore
+from azure.ai.ml.entities._datastore.credentials import AccountKeyCredentials
+from azure.ai.ml import MLClient
+
+ml_client = MLClient.from_config()
+
+creds = AccountKeyCredentials(account_key="")
+
+store = AzureFileDatastore(
+ name="",
+ description="",
+ account_name="",
+ file_share_name="",
+ credentials=creds
+)
+
+ml_client.create_or_update(store)
+```
+
+# [Python SDK: SAS](#tab/sdk-azfiles-sas)
+
+```python
+from azure.ai.ml.entities import AzureFileDatastore
+from azure.ai.ml.entities._datastore.credentials import SasTokenCredentials
+from azure.ai.ml import MLClient
+
+ml_client = MLClient.from_config()
+
+creds = SasTokenCredentials(sas_token="")
+
+store = AzureFileDatastore(
+ name="",
+ description="",
+ account_name="",
+ file_share_name="",
+ credentials=creds
+)
+
+ml_client.create_or_update(store)
+```
++
+## Create an Azure Data Lake Gen1 datastore
+
+# [CLI: Identity-based access](#tab/cli-adlsgen1-identity-based-access)
+Create the following YAML file (updating the values):
+
+```yaml
+# my_adls_datastore.yml
+$schema: https://azuremlschemas.azureedge.net/latest/azureDataLakeGen1.schema.json
+name: adls_gen1_credless_example
+type: azure_data_lake_gen1
+description: Credential-less datastore pointing to an Azure Data Lake Storage Gen1.
+store_name: mytestdatalakegen1
+```
+
+Create the Azure Machine Learning datastore in the CLI:
+
+```azurecli
+az ml datastore create --file my_adls_datastore.yml
+```
+
+# [CLI: Service principal](#tab/cli-adlsgen1-sp)
+Create the following YAML file (updating the values):
+
+```yaml
+# my_adls_datastore.yml
+$schema: https://azuremlschemas.azureedge.net/latest/azureDataLakeGen1.schema.json
+name: adls_gen1_example
+type: azure_data_lake_gen1
+description: Datastore pointing to an Azure Data Lake Storage Gen1.
+store_name: mytestdatalakegen1
+credentials:
+ tenant_id: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
+ client_id: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
+ client_secret: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+```
+
+Create the Azure Machine Learning datastore in the CLI:
+
+```azurecli
+az ml datastore create --file my_adls_datastore.yml
+```
+
+# [Python SDK: Identity-based access](#tab/sdk-adlsgen1-identity-access)
+
+```python
+from azure.ai.ml.entities import AzureDataLakeGen1Datastore
+from azure.ai.ml import MLClient
+
+ml_client = MLClient.from_config()
+
+store = AzureDataLakeGen1Datastore(
+ name="",
+ store_name="",
+ description="",
+)
+
+ml_client.create_or_update(store)
+```
+
+# [Python SDK: Service principal](#tab/sdk-adlsgen1-sp)
+
+```python
+from azure.ai.ml.entities import AzureDataLakeGen1Datastore
+from azure.ai.ml.entities._datastore.credentials import ServicePrincipalCredentials
+from azure.ai.ml import MLClient
+
+ml_client = MLClient.from_config()
+
+creds = ServicePrincipalCredentials(
+ authority_url="",
+ resource_url="",
+ tenant_id="",
+ secrets=""
+)
+
+store = AzureDataLakeGen1Datastore(
+ name="",
+ store_name="",
+ description="",
+ credentials=creds
+)
+
+ml_client.create_or_update(store)
+```
+++
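+
+After a datastore is registered, you can reference paths on it when creating data assets or job inputs. A minimal, hypothetical data asset definition pointing at a folder on the blob datastore created earlier might look like this (names and paths are placeholders):
+
+```yaml
+# my_data_asset.yml
+$schema: https://azuremlschemas.azureedge.net/latest/data.schema.json
+name: my_training_data
+type: uri_folder
+description: Example data asset referencing a path on a registered datastore.
+path: azureml://datastores/my_blob_ds/paths/example-folder/
+```
+
+You could then create it with `az ml data create --file my_data_asset.yml`.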
+## Next steps
+
+* [Register and consume your data](how-to-create-register-data-assets.md)
machine-learning How To Debug Parallel Run Step https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-debug-parallel-run-step.md
-+ Last updated 10/21/2021 #Customer intent: As a data scientist, I want to figure out why my ParallelRunStep doesn't run so that I can fix it.- # Troubleshooting the ParallelRunStep + In this article, you learn how to troubleshoot when you get errors using the [ParallelRunStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.parallel_run_step.parallelrunstep) class from the [Azure Machine Learning SDK](/python/api/overview/azure/ml/intro). For general tips on troubleshooting a pipeline, see [Troubleshooting machine learning pipelines](how-to-debug-pipelines.md).
machine-learning How To Debug Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-debug-pipelines.md
Last updated 10/21/2021 -+ #Customer intent: As a data scientist, I want to figure out why my pipeline doesn't run so that I can fix it. # Troubleshooting machine learning pipelines + In this article, you learn how to troubleshoot when you get errors running a [machine learning pipeline](concept-ml-pipelines.md) in the [Azure Machine Learning SDK](/python/api/overview/azure/ml/intro) and [Azure Machine Learning designer](./concept-designer.md). ## Troubleshooting tips
In some cases, you may need to interactively debug the Python code used in your
* See the SDK reference for help with the [azureml-pipelines-core](/python/api/azureml-pipeline-core/) package and the [azureml-pipelines-steps](/python/api/azureml-pipeline-steps/) package.
-* See the list of [designer exceptions and error codes](algorithm-module-reference/designer-error-codes.md).
+* See the list of [designer exceptions and error codes](algorithm-module-reference/designer-error-codes.md).
machine-learning How To Debug Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-debug-visual-studio-code.md
Last updated 10/21/2021+ # Interactive debugging with Visual Studio Code + Learn how to interactively debug Azure Machine Learning experiments, pipelines, and deployments using Visual Studio Code (VS Code) and [debugpy](https://github.com/microsoft/debugpy/). ## Run and debug experiments locally
Learn more about troubleshooting:
* [Remote model deployment](how-to-troubleshoot-deployment.md) * [Machine learning pipelines](how-to-debug-pipelines.md) * [ParallelRunStep](how-to-debug-parallel-run-step.md)-
machine-learning How To Deploy Advanced Entry Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-advanced-entry-script.md
Last updated 10/21/2021 -+ # Advanced entry script authoring + This article shows how to write entry scripts for specialized use cases. ## Prerequisites
More entry script examples for specific machine learning use cases can be found
## Next steps * [Troubleshoot a failed deployment](how-to-troubleshoot-deployment.md)
-* [Deploy to Azure Kubernetes Service](how-to-deploy-azure-kubernetes-service.md)
+* [Deploy to Azure Kubernetes Service](v1/how-to-deploy-azure-kubernetes-service.md)
* [Create client applications to consume web services](how-to-consume-web-service.md) * [Update web service](how-to-deploy-update-web-service.md) * [How to deploy a model using a custom Docker image](./how-to-deploy-custom-container.md)
machine-learning How To Deploy And Where https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-and-where.md
Last updated 11/12/2021 -+ adobe-target: true
-# Deploy machine learning models to Azure
+# Deploy machine learning models to Azure
+ Learn how to deploy your machine learning or deep learning model as a web service in the Azure cloud.
az ml workspace list --resource-group=<resource-group>
# [Python](#tab/python) + ```python from azureml.core import Workspace ws = Workspace(subscription_id="<subscription_id>",
For more information, see the documentation for the [Model class](/python/api/az
+ Register a model from an `azureml.core.Run` object:
+ [!INCLUDE [sdk v1](../../includes/machine-learning-sdk-v1.md)]
+ ```python model = run.register_model(model_name='bidaf_onnx', tags={'area': 'qna'},
For more information, see the documentation for the [Model class](/python/api/az
+ Register a model from an `azureml.train.automl.run.AutoMLRun` object:
+ [!INCLUDE [sdk v1](../../includes/machine-learning-sdk-v1.md)]
+ ```python description = 'My AutoML Model' model = run.register_model(description = description,
A minimal inference configuration can be written as:
Save this file with the name `dummyinferenceconfig.json`.
-[See this article](./reference-azure-machine-learning-cli.md#inference-configuration-schema) for a more thorough discussion of inference configurations.
+[See this article](./v1/reference-azure-machine-learning-cli.md#inference-configuration-schema) for a more thorough discussion of inference configurations.
# [Python](#tab/python)
The options available for a deployment configuration differ depending on the com
[!INCLUDE [aml-local-deploy-config](../../includes/machine-learning-service-local-deploy-config.md)]
-For more information, see the [deployment schema](./reference-azure-machine-learning-cli.md#deployment-configuration-schema).
+For more information, see the [deployment schema](./v1/reference-azure-machine-learning-cli.md#deployment-configuration-schema).
# [Python](#tab/python)
Save this file as `inferenceconfig.json`
# [Python](#tab/python) + ```python env = Environment(name='myenv') python_packages = ['nltk', 'numpy', 'onnxruntime']
The options available for a deployment configuration differ depending on the com
Save this file as `re-deploymentconfig.json`.
-For more information, see [this reference](./reference-azure-machine-learning-cli.md#deployment-configuration-schema).
+For more information, see [this reference](./v1/reference-azure-machine-learning-cli.md#deployment-configuration-schema).
# [Python](#tab/python)
machine-learning How To Deploy Automl Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-automl-endpoint.md
Title: Deploy an AutoML model with an online endpoint (preview)
+ Title: Deploy an AutoML model with an online endpoint
description: Learn to deploy your AutoML model as a web service that's automatically managed by Azure.
Previously updated : 03/31/2022 Last updated : 05/11/2022 -+ ms.devlang: azurecli
-# How to deploy an AutoML model to an online endpoint (preview)
+# How to deploy an AutoML model to an online endpoint
+
-In this article, you'll learn how to deploy an AutoML-trained machine learning model to an online endpoint. Automated machine learning, also referred to as automated ML or AutoML, is the process of automating the time-consuming, iterative tasks of developing a machine learning model. For more, see [What is automated machine learning (AutoML)?](concept-automated-ml.md).
+In this article, you'll learn how to deploy an AutoML-trained machine learning model to an online (real-time inference) endpoint. Automated machine learning, also referred to as automated ML or AutoML, is the process of automating the time-consuming, iterative tasks of developing a machine learning model. For more, see [What is automated machine learning (AutoML)?](concept-automated-ml.md).
In this article, you'll learn how to deploy an AutoML-trained machine learning model to online endpoints using: - Azure Machine Learning studio - Azure Machine Learning CLI (v2) - ## Prerequisites An AutoML-trained machine learning model. For more, see [Tutorial: Train a classification model with no-code AutoML in the Azure Machine Learning studio](tutorial-first-experiment-automated-ml.md) or [Tutorial: Forecast demand with automated machine learning](tutorial-automated-ml-forecast.md).
Deploying an AutoML-trained model from the Automated ML page is a no-code experi
1. Choose the Models tab 1. Select the model you want to deploy 1. Once you select a model, the Deploy button will light up with a drop-down menu
-1. Select *Deploy to real-time endpoint (preview)* option
+1. Select *Deploy to real-time endpoint* option
:::image type="content" source="media/how-to-deploy-automl-endpoint/deploy-button.png" lightbox="media/how-to-deploy-automl-endpoint/deploy-button.png" alt-text="Screenshot showing the Deploy button's drop-down menu":::
Deploying an AutoML-trained model from the Automated ML page is a no-code experi
:::image type="content" source="media/how-to-deploy-automl-endpoint/environment.png" lightbox="media/how-to-deploy-automl-endpoint/environment.png" alt-text="Screenshot showing the generated Environment":::
-5. Complete the wizard to deploy the model to a real-time endpoint
+5. Complete the wizard to deploy the model to an online endpoint
:::image type="content" source="media/how-to-deploy-automl-endpoint/complete-wizard.png" lightbox="media/how-to-deploy-automl-endpoint/complete-wizard.png" alt-text="Screenshot showing the review-and-create page":::
You'll need to modify this file to use the files you downloaded from the AutoML
| `environment:conda_file` | A file URL for the downloaded conda environment file (`conda_env_<VERSION>.yml`). | > [!NOTE]
- > For a full description of the YAML, see [Online endpoint (preview) YAML reference](reference-yaml-endpoint-online.md).
+ > For a full description of the YAML, see [Online endpoint YAML reference](reference-yaml-endpoint-online.md).
1. From the command line, run:
machine-learning How To Deploy Azure Container Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-azure-container-instance.md
- Title: How to deploy models to Azure Container Instances-
-description: 'Learn how to deploy your Azure Machine Learning models as a web service using Azure Container Instances.'
-------- Previously updated : 10/21/2021--
-# Deploy a model to Azure Container Instances
-
-Learn how to use Azure Machine Learning to deploy a model as a web service on Azure Container Instances (ACI). Use Azure Container Instances if you:
--- prefer not to manage your own Kubernetes cluster-- Are OK with having only a single replica of your service, which may impact uptime-
-For information on quota and region availability for ACI, see [Quotas and region availability for Azure Container Instances](../container-instances/container-instances-quotas.md) article.
-
-> [!IMPORTANT]
-> It is highly advised to debug locally before deploying to the web service, for more information see [Debug Locally](./how-to-troubleshoot-deployment-local.md)
->
-> You can also refer to Azure Machine Learning - [Deploy to Local Notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/deployment/deploy-to-local)
-
-## Prerequisites
--- An Azure Machine Learning workspace. For more information, see [Create an Azure Machine Learning workspace](how-to-manage-workspace.md).--- A machine learning model registered in your workspace. If you don't have a registered model, see [How and where to deploy models](how-to-deploy-and-where.md).--- The [Azure CLI extension (v1) for Machine Learning service](reference-azure-machine-learning-cli.md), [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro), or the [Azure Machine Learning Visual Studio Code extension](how-to-setup-vs-code.md).--- The __Python__ code snippets in this article assume that the following variables are set:-
- * `ws` - Set to your workspace.
- * `model` - Set to your registered model.
- * `inference_config` - Set to the inference configuration for the model.
-
- For more information on setting these variables, see [How and where to deploy models](how-to-deploy-and-where.md).
--- The __CLI__ snippets in this article assume that you've created an `inferenceconfig.json` document. For more information on creating this document, see [How and where to deploy models](how-to-deploy-and-where.md).-
-## Limitations
-
-* When using Azure Container Instances in a virtual network, the virtual network must be in the same resource group as your Azure Machine Learning workspace.
-* When using Azure Container Instances inside the virtual network, the Azure Container Registry (ACR) for your workspace cannot also be in the virtual network.
-
-For more information, see [How to secure inferencing with virtual networks](how-to-secure-inferencing-vnet.md#enable-azure-container-instances-aci).
-
-## Deploy to ACI
-
-To deploy a model to Azure Container Instances, create a __deployment configuration__ that describes the compute resources needed. For example, number of cores and memory. You also need an __inference configuration__, which describes the environment needed to host the model and web service. For more information on creating the inference configuration, see [How and where to deploy models](how-to-deploy-and-where.md).
-
-> [!NOTE]
-> * ACI is suitable only for small models that are under 1 GB in size.
-> * We recommend using single-node AKS to dev-test larger models.
-> * The number of models to be deployed is limited to 1,000 models per deployment (per container).
-
-### Using the SDK
-
-```python
-from azureml.core.webservice import AciWebservice, Webservice
-from azureml.core.model import Model
-
-deployment_config = AciWebservice.deploy_configuration(cpu_cores = 1, memory_gb = 1)
-service = Model.deploy(ws, "aciservice", [model], inference_config, deployment_config)
-service.wait_for_deployment(show_output = True)
-print(service.state)
-```
-
-For more information on the classes, methods, and parameters used in this example, see the following reference documents:
-
-* [AciWebservice.deploy_configuration](/python/api/azureml-core/azureml.core.webservice.aciwebservice#deploy-configuration-cpu-cores-none--memory-gb-none--tags-none--properties-none--description-none--location-none--auth-enabled-none--ssl-enabled-none--enable-app-insights-none--ssl-cert-pem-file-none--ssl-key-pem-file-none--ssl-cname-none--dns-name-label-none--primary-key-none--secondary-key-none--collect-model-data-none--cmk-vault-base-url-none--cmk-key-name-none--cmk-key-version-none-)
-* [Model.deploy](/python/api/azureml-core/azureml.core.model.model#deploy-workspace--name--models--inference-config-none--deployment-config-none--deployment-target-none--overwrite-false-)
-* [Webservice.wait_for_deployment](/python/api/azureml-core/azureml.core.webservice%28class%29#wait-for-deployment-show-output-false-)
-
-### Using the Azure CLI
--
-To deploy using the CLI, use the following command. Replace `mymodel:1` with the name and version of the registered model. Replace `myservice` with the name to give this service:
-
-```azurecli-interactive
-az ml model deploy -n myservice -m mymodel:1 --ic inferenceconfig.json --dc deploymentconfig.json
-```
--
-For more information, see the [az ml model deploy](/cli/azure/ml/model#az-ml-model-deploy) reference.
-
-## Using VS Code
-
-See [how to manage resources in VS Code](how-to-manage-resources-vscode.md).
-
-> [!IMPORTANT]
-> You don't need to create an ACI container to test in advance. ACI containers are created as needed.
-
-> [!IMPORTANT]
-> We append hashed workspace id to all underlying ACI resources which are created, all ACI names from same workspace will have same suffix. The Azure Machine Learning service name would still be the same customer provided "service_name" and all the user facing Azure Machine Learning SDK APIs do not need any change. We do not give any guarantees on the names of underlying resources being created.
-
-## Next steps
-
-* [How to deploy a model using a custom Docker image](./how-to-deploy-custom-container.md)
-* [Deployment troubleshooting](how-to-troubleshoot-deployment.md)
-* [Update the web service](how-to-deploy-update-web-service.md)
-* [Use TLS to secure a web service through Azure Machine Learning](how-to-secure-web-service.md)
-* [Consume a ML Model deployed as a web service](how-to-consume-web-service.md)
-* [Monitor your Azure Machine Learning models with Application Insights](how-to-enable-app-insights.md)
-* [Collect data for models in production](how-to-enable-data-collection.md)
machine-learning How To Deploy Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-azure-kubernetes-service.md
- Title: Deploy ML models to Kubernetes Service-
-description: 'Learn how to deploy your Azure Machine Learning models as a web service using Azure Kubernetes Service.'
-------- Previously updated : 10/21/2021--
-# Deploy a model to an Azure Kubernetes Service cluster
-
-Learn how to use Azure Machine Learning to deploy a model as a web service on Azure Kubernetes Service (AKS). Azure Kubernetes Service is good for high-scale production deployments. Use Azure Kubernetes service if you need one or more of the following capabilities:
--- __Fast response time__-- __Autoscaling__ of the deployed service-- __Logging__-- __Model data collection__-- __Authentication__-- __TLS termination__-- __Hardware acceleration__ options such as GPU and field-programmable gate arrays (FPGA)-
-When deploying to Azure Kubernetes Service, you deploy to an AKS cluster that is __connected to your workspace__. For information on connecting an AKS cluster to your workspace, see [Create and attach an Azure Kubernetes Service cluster](how-to-create-attach-kubernetes.md).
-
-> [!IMPORTANT]
-> We recommend that you debug locally before deploying to the web service. For more information, see [Debug Locally](./how-to-troubleshoot-deployment-local.md)
->
-> You can also refer to Azure Machine Learning - [Deploy to Local Notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/deployment/deploy-to-local)
--
-## Prerequisites
--- An Azure Machine Learning workspace. For more information, see [Create an Azure Machine Learning workspace](how-to-manage-workspace.md).--- A machine learning model registered in your workspace. If you don't have a registered model, see [How and where to deploy models](how-to-deploy-and-where.md).--- The [Azure CLI extension (v1) for Machine Learning service](reference-azure-machine-learning-cli.md), [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro), or the [Azure Machine Learning Visual Studio Code extension](how-to-setup-vs-code.md).--- The __Python__ code snippets in this article assume that the following variables are set:-
- * `ws` - Set to your workspace.
- * `model` - Set to your registered model.
- * `inference_config` - Set to the inference configuration for the model.
-
- For more information on setting these variables, see [How and where to deploy models](how-to-deploy-and-where.md).
--- The __CLI__ snippets in this article assume that you've created an `inferenceconfig.json` document. For more information on creating this document, see [How and where to deploy models](how-to-deploy-and-where.md).--- An Azure Kubernetes Service cluster connected to your workspace. For more information, see [Create and attach an Azure Kubernetes Service cluster](how-to-create-attach-kubernetes.md).-
- - If you want to deploy models to GPU nodes or FPGA nodes (or any specific SKU), then you must create a cluster with the specific SKU. There is no support for creating a secondary node pool in an existing cluster and deploying models in the secondary node pool.
-
-## Understand the deployment processes
-
-The word "deployment" is used in both Kubernetes and Azure Machine Learning. "Deployment" has different meanings in these two contexts. In Kubernetes, a `Deployment` is a concrete entity, specified with a declarative YAML file. A Kubernetes `Deployment` has a defined lifecycle and concrete relationships to other Kubernetes entities such as `Pods` and `ReplicaSets`. You can learn about Kubernetes from docs and videos at [What is Kubernetes?](https://aka.ms/k8slearning).
-
-In Azure Machine Learning, "deployment" is used in the more general sense of making available and cleaning up your project resources. The steps that Azure Machine Learning considers part of deployment are:
-
-1. Zipping the files in your project folder, ignoring those specified in .amlignore or .gitignore
-1. Scaling up your compute cluster (Relates to Kubernetes)
-1. Building or downloading the dockerfile to the compute node (Relates to Kubernetes)
- 1. The system calculates a hash of:
- - The base image
- - Custom docker steps (see [Deploy a model using a custom Docker base image](./how-to-deploy-custom-container.md))
- - The conda definition YAML (see [Create & use software environments in Azure Machine Learning](./how-to-use-environments.md))
- 1. The system uses this hash as the key in a lookup of the workspace Azure Container Registry (ACR)
- 1. If it is not found, it looks for a match in the global ACR
- 1. If it is not found, the system builds a new image (which will be cached and pushed to the workspace ACR)
-1. Downloading your zipped project file to temporary storage on the compute node
-1. Unzipping the project file
-1. The compute node executing `python <entry script> <arguments>`
-1. Saving logs, model files, and other files written to `./outputs` to the storage account associated with the workspace
-1. Scaling down compute, including removing temporary storage (Relates to Kubernetes)
-
-### Azure ML router
-
-The front-end component (azureml-fe) that routes incoming inference requests to deployed services automatically scales as needed. Scaling of azureml-fe is based on the AKS cluster purpose and size (number of nodes). The cluster purpose and nodes are configured when you [create or attach an AKS cluster](how-to-create-attach-kubernetes.md). There is one azureml-fe service per cluster, which may be running on multiple pods.
-
-> [!IMPORTANT]
-> When using a cluster configured as __dev-test__, the self-scaler is **disabled**. Even for FastProd/DenseProd clusters, Self-Scaler is only enabled when telemetry shows that it's needed.
-
-Azureml-fe scales both up (vertically) to use more cores, and out (horizontally) to use more pods. When making the decision to scale up, the time that it takes to route incoming inference requests is used. If this time exceeds the threshold, a scale-up occurs. If the time to route incoming requests continues to exceed the threshold, a scale-out occurs.
-
-When scaling down and in, CPU usage is used. If the CPU usage threshold is met, the front end will first be scaled down. If the CPU usage drops to the scale-in threshold, a scale-in operation happens. Scaling up and out will only occur if there are enough cluster resources available.
-
-When scale-up or scale-down, azureml-fe pods will be restarted to apply the cpu/memory changes. Inferencing requests are not affected by the restarts.
-
-<a id="connectivity"></a>
-
-## Understand connectivity requirements for AKS inferencing cluster
-
-When Azure Machine Learning creates or attaches an AKS cluster, AKS cluster is deployed with one of the following two network models:
-* Kubenet networking - The network resources are typically created and configured as the AKS cluster is deployed.
-* Azure Container Networking Interface (CNI) networking - The AKS cluster is connected to an existing virtual network resource and configurations.
-
-For Kubenet networking, the network is created and configured properly for Azure Machine Learning service. For the CNI networking, you need to understand the connectivity requirements and ensure DNS resolution and outbound connectivity for AKS inferencing. For example, you may be using a firewall to block network traffic.
-
-The following diagram shows the connectivity requirements for AKS inferencing. Black arrows represent actual communication, and blue arrows represent the domain names. You may need to add entries for these hosts to your firewall or to your custom DNS server.
-
- ![Connectivity Requirements for AKS Inferencing](./media/how-to-deploy-aks/aks-network.png)
-
-For general AKS connectivity requirements, see [Control egress traffic for cluster nodes in Azure Kubernetes Service](../aks/limit-egress-traffic.md).
-
-For accessing Azure ML services behind a firewall, see [How to access azureml behind firewall](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/machine-learning/how-to-access-azureml-behind-firewall.md).
-
-### Overall DNS resolution requirements
-
-DNS resolution within an existing VNet is under your control. For example, a firewall or custom DNS server. The following hosts must be reachable:
-
-| Host name | Used by |
-| -- | -- |
-| `<cluster>.hcp.<region>.azmk8s.io` | AKS API server |
-| `mcr.microsoft.com` | Microsoft Container Registry (MCR) |
-| `<ACR name>.azurecr.io` | Your Azure Container Registry (ACR) |
-| `<account>.table.core.windows.net` | Azure Storage Account (table storage) |
-| `<account>.blob.core.windows.net` | Azure Storage Account (blob storage) |
-| `api.azureml.ms` | Azure Active Directory (Azure AD) authentication |
-| `ingest-vienna<region>.kusto.windows.net` | Kusto endpoint for uploading telemetry |
-| `<leaf-domain-label + auto-generated suffix>.<region>.cloudapp.azure.com` | Endpoint domain name, if you autogenerated by Azure Machine Learning. If you used a custom domain name, you do not need this entry. |
-
-### Connectivity requirements in chronological order: from cluster creation to model deployment
-
-In the process of AKS create or attach, Azure ML router (azureml-fe) is deployed into the AKS cluster. In order to deploy Azure ML router, AKS node should be able to:
-* Resolve DNS for AKS API server
-* Resolve DNS for MCR in order to download docker images for Azure ML router
-* Download images from MCR, where outbound connectivity is required
-
-Right after azureml-fe is deployed, it will attempt to start and this requires to:
-* Resolve DNS for AKS API server
-* Query AKS API server to discover other instances of itself (it is a multi-pod service)
-* Connect to other instances of itself
-
-Once azureml-fe is started, it requires the following connectivity to function properly:
-* Connect to Azure Storage to download dynamic configuration
-* Resolve DNS for Azure AD authentication server api.azureml.ms and communicate with it when the deployed service uses Azure AD authentication.
-* Query AKS API server to discover deployed models
-* Communicate to deployed model PODs
-
-At model deployment time, for a successful model deployment AKS node should be able to:
-* Resolve DNS for customer's ACR
-* Download images from customer's ACR
-* Resolve DNS for Azure BLOBs where model is stored
-* Download models from Azure BLOBs
-
-After the model is deployed and service starts, azureml-fe will automatically discover it using AKS API and will be ready to route request to it. It must be able to communicate to model PODs.
->[!Note]
->If the deployed model requires any connectivity (e.g. querying external database or other REST service, downloading a BLOB etc), then both DNS resolution and outbound communication for these services should be enabled.
-
-## Deploy to AKS
-
-To deploy a model to Azure Kubernetes Service, create a __deployment configuration__ that describes the compute resources needed. For example, number of cores and memory. You also need an __inference configuration__, which describes the environment needed to host the model and web service. For more information on creating the inference configuration, see [How and where to deploy models](how-to-deploy-and-where.md).
-
-> [!NOTE]
-> The number of models to be deployed is limited to 1,000 models per deployment (per container).
-
-<a id="using-the-cli"></a>
-
-# [Python](#tab/python)
-
-```python
-from azureml.core.webservice import AksWebservice, Webservice
-from azureml.core.model import Model
-from azureml.core.compute import AksCompute
-
-aks_target = AksCompute(ws,"myaks")
-# If deploying to a cluster configured for dev/test, ensure that it was created with enough
-# cores and memory to handle this deployment configuration. Note that memory is also used by
-# things such as dependencies and AML components.
-deployment_config = AksWebservice.deploy_configuration(cpu_cores = 1, memory_gb = 1)
-service = Model.deploy(ws, "myservice", [model], inference_config, deployment_config, aks_target)
-service.wait_for_deployment(show_output = True)
-print(service.state)
-print(service.get_logs())
-```
-
-For more information on the classes, methods, and parameters used in this example, see the following reference documents:
-
-* [AksCompute](/python/api/azureml-core/azureml.core.compute.aks.akscompute)
-* [AksWebservice.deploy_configuration](/python/api/azureml-core/azureml.core.webservice.aks.aksservicedeploymentconfiguration)
-* [Model.deploy](/python/api/azureml-core/azureml.core.model.model#deploy-workspace--name--models--inference-config-none--deployment-config-none--deployment-target-none--overwrite-false-)
-* [Webservice.wait_for_deployment](/python/api/azureml-core/azureml.core.webservice%28class%29#wait-for-deployment-show-output-false-)
-
-# [Azure CLI](#tab/azure-cli)
--
-To deploy using the CLI, use the following command. Replace `myaks` with the name of the AKS compute target. Replace `mymodel:1` with the name and version of the registered model. Replace `myservice` with the name to give this service:
-
-```azurecli-interactive
-az ml model deploy --ct myaks -m mymodel:1 -n myservice --ic inferenceconfig.json --dc deploymentconfig.json
-```
--
-For more information, see the [az ml model deploy](/cli/azure/ml/model#az-ml-model-deploy) reference.
-
-# [Visual Studio Code](#tab/visual-studio-code)
-
-For information on using VS Code, see [deploy to AKS via the VS Code extension](how-to-manage-resources-vscode.md).
-
-> [!IMPORTANT]
-> Deploying through VS Code requires the AKS cluster to be created or attached to your workspace in advance.
---
-### Autoscaling
-
-The component that handles autoscaling for Azure ML model deployments is azureml-fe, which is a smart request router. Since all inference requests go through it, it has the necessary data to automatically scale the deployed model(s).
-
-> [!IMPORTANT]
-> * **Do not enable Kubernetes Horizontal Pod Autoscaler (HPA) for model deployments**. Doing so would cause the two auto-scaling components to compete with each other. Azureml-fe is designed to auto-scale models deployed by Azure ML, where HPA would have to guess or approximate model utilization from a generic metric like CPU usage or a custom metric configuration.
->
-> * **Azureml-fe does not scale the number of nodes in an AKS cluster**, because this could lead to unexpected cost increases. Instead, **it scales the number of replicas for the model** within the physical cluster boundaries. If you need to scale the number of nodes within the cluster, you can manually scale the cluster or [configure the AKS cluster autoscaler](../aks/cluster-autoscaler.md).
-
-Autoscaling can be controlled by setting `autoscale_target_utilization`, `autoscale_min_replicas`, and `autoscale_max_replicas` for the AKS web service. The following example demonstrates how to enable autoscaling:
-
-```python
-aks_config = AksWebservice.deploy_configuration(autoscale_enabled=True,
- autoscale_target_utilization=30,
- autoscale_min_replicas=1,
- autoscale_max_replicas=4)
-```
-
-Decisions to scale up/down is based off of utilization of the current container replicas. The number of replicas that are busy (processing a request) divided by the total number of current replicas is the current utilization. If this number exceeds `autoscale_target_utilization`, then more replicas are created. If it is lower, then replicas are reduced. By default, the target utilization is 70%.
-
-Decisions to add replicas are eager and fast (around 1 second). Decisions to remove replicas are conservative (around 1 minute).
-
-You can calculate the required replicas by using the following code:
-
-```python
-from math import ceil
-# target requests per second
-targetRps = 20
-# time to process the request (in seconds)
-reqTime = 10
-# Maximum requests per container
-maxReqPerContainer = 1
-# target_utilization. 70% in this example
-targetUtilization = .7
-
-concurrentRequests = targetRps * reqTime / targetUtilization
-
-# Number of container replicas
-replicas = ceil(concurrentRequests / maxReqPerContainer)
-```
-
-For more information on setting `autoscale_target_utilization`, `autoscale_max_replicas`, and `autoscale_min_replicas`, see the [AksWebservice](/python/api/azureml-core/azureml.core.webservice.akswebservice) module reference.
-
-## Web service authentication
-
-When deploying to Azure Kubernetes Service, __key-based__ authentication is enabled by default. You can also enable __token-based__ authentication. Token-based authentication requires clients to use an Azure Active Directory account to request an authentication token, which is used to make requests to the deployed service.
-
-To __disable__ authentication, set the `auth_enabled=False` parameter when creating the deployment configuration. The following example disables authentication using the SDK:
-
-```python
-deployment_config = AksWebservice.deploy_configuration(cpu_cores=1, memory_gb=1, auth_enabled=False)
-```
-
-For information on authenticating from a client application, see the [Consume an Azure Machine Learning model deployed as a web service](how-to-consume-web-service.md).
-
-### Authentication with keys
-
-If key authentication is enabled, you can use the `get_keys` method to retrieve a primary and secondary authentication key:
-
-```python
-primary, secondary = service.get_keys()
-print(primary)
-```
-
-> [!IMPORTANT]
-> If you need to regenerate a key, use [`service.regen_key`](/python/api/azureml-core/azureml.core.webservice%28class%29)
-
-### Authentication with tokens
-
-To enable token authentication, set the `token_auth_enabled=True` parameter when you are creating or updating a deployment. The following example enables token authentication using the SDK:
-
-```python
-deployment_config = AksWebservice.deploy_configuration(cpu_cores=1, memory_gb=1, token_auth_enabled=True)
-```
-
-If token authentication is enabled, you can use the `get_token` method to retrieve a JWT token and that token's expiration time:
-
-```python
-token, refresh_by = service.get_token()
-print(token)
-```
-
-> [!IMPORTANT]
-> You will need to request a new token after the token's `refresh_by` time.
->
-> Microsoft strongly recommends that you create your Azure Machine Learning workspace in the same region as your Azure Kubernetes Service cluster. To authenticate with a token, the web service will make a call to the region in which your Azure Machine Learning workspace is created. If your workspace's region is unavailable, then you will not be able to fetch a token for your web service even, if your cluster is in a different region than your workspace. This effectively results in Token-based Authentication being unavailable until your workspace's region is available again. In addition, the greater the distance between your cluster's region and your workspace's region, the longer it will take to fetch a token.
->
-> To retrieve a token, you must use the Azure Machine Learning SDK or the [az ml service get-access-token](/cli/azure/ml(v1)/computetarget/create#az-ml-service-get-access-token) command.
--
-### Vulnerability scanning
-
-Microsoft Defender for Cloud provides unified security management and advanced threat protection across hybrid cloud workloads. You should allow Microsoft Defender for Cloud to scan your resources and follow its recommendations. For more, see [Azure Kubernetes Services integration with Defender for Cloud](../security-center/defender-for-kubernetes-introduction.md).
-
-## Next steps
-
-* [Use Azure RBAC for Kubernetes authorization](../aks/manage-azure-rbac.md)
-* [Secure inferencing environment with Azure Virtual Network](how-to-secure-inferencing-vnet.md)
-* [How to deploy a model using a custom Docker image](./how-to-deploy-custom-container.md)
-* [Deployment troubleshooting](how-to-troubleshoot-deployment.md)
-* [Update web service](how-to-deploy-update-web-service.md)
-* [Use TLS to secure a web service through Azure Machine Learning](how-to-secure-web-service.md)
-* [Consume a ML Model deployed as a web service](how-to-consume-web-service.md)
-* [Monitor your Azure Machine Learning models with Application Insights](how-to-enable-app-insights.md)
-* [Collect data for models in production](how-to-enable-data-collection.md)
machine-learning How To Deploy Batch With Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-batch-with-rest.md
Title: "Deploy models using batch endpoints with REST APIs (preview)"
+ Title: "Deploy models using batch endpoints with REST APIs"
description: Learn how to deploy models using batch endpoints with REST APIs. -- Previously updated : 03/31/2022--++ Last updated : 05/24/2022++
-# Deploy models with REST (preview) for batch scoring
+# Deploy models with REST for batch scoring
++
+Learn how to use the Azure Machine Learning REST API to deploy models for batch scoring.
-Learn how to use the Azure Machine Learning REST API to deploy models for batch scoring (preview).
The REST API uses standard HTTP verbs to create, retrieve, update, and delete resources. The REST API works with any language or tool that can make HTTP requests. REST's straightforward structure makes it a good choice in scripting environments and for MLOps automation.
In this article, you learn how to use the new REST APIs to:
## Azure Machine Learning batch endpoints
-[Batch endpoints (preview)](concept-endpoints.md#what-are-batch-endpoints-preview) simplify the process of hosting your models for batch scoring, so you can focus on machine learning, not infrastructure. In this article, you'll create a batch endpoint and deployment, and invoking it to start a batch scoring job. But first you'll have to register the assets needed for deployment, including model, code, and environment.
+[Batch endpoints](concept-endpoints.md#what-are-batch-endpoints) simplify the process of hosting your models for batch scoring, so you can focus on machine learning, not infrastructure. In this article, you'll create a batch endpoint and deployment, and invoke it to start a batch scoring job. But first you'll have to register the assets needed for deployment, including model, code, and environment.
-There are many ways to create an Azure Machine Learning batch endpoint, [including the Azure CLI](how-to-use-batch-endpoint.md), and visually with [the studio](how-to-use-batch-endpoints-studio.md). The following example creates a batch endpoint and deployment with the REST API.
+There are many ways to create an Azure Machine Learning batch endpoint, including [the Azure CLI](how-to-use-batch-endpoint.md), and visually with [the studio](how-to-use-batch-endpoints-studio.md). The following example creates a batch endpoint and a batch deployment with the REST API.
## Create machine learning assets
You can use the tool [jq](https://stedolan.github.io/jq/) to parse the JSON resu
### Upload & register code
-Now that you have the datastore, you can upload the scoring script. Use the Azure Storage CLI to upload a blob into your default container:
+Now that you have the datastore, you can upload the scoring script. For more information about how to author the scoring script, see [Understanding the scoring script](how-to-use-batch-endpoint.md#understanding-the-scoring-script). Use the Azure Storage CLI to upload a blob into your default container:
:::code language="rest-api" source="~/azureml-examples-main/cli/batch-score-rest.sh" id="upload_code":::
Once you upload your code, you can specify your code with a PUT request:
### Upload and register model
-Similar to the code, Upload the model files:
+Similar to the code, upload the model files:
:::code language="rest-api" source="~/azureml-examples-main/cli/batch-score-rest.sh" id="upload_model":::
Now, run the following snippet to create an environment:
## Deploy with batch endpoints
-Next, create the batch endpoint, a deployment, and set the default deployment.
+Next, create a batch endpoint, a batch deployment, and set the default deployment for the endpoint.
### Create batch endpoint
Invoking a batch endpoint triggers a batch scoring job. A job `id` is returned i
### Invoke the batch endpoint to start a batch scoring job
+#### Get the scoring URI and access token
+ Get the scoring URI and access token to invoke the batch endpoint. First, get the scoring URI: :::code language="rest-api" source="~/azureml-examples-main/cli/batch-score-rest.sh" id="get_endpoint":::
Get the batch endpoint access token:
:::code language="rest-api" source="~/azureml-examples-main/cli/batch-score-rest.sh" id="get_access_token":::
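+
+If you aren't running the referenced script end to end, the two values can be obtained along the following lines. This is a sketch that assumes `$TOKEN`, `$SUBSCRIPTION_ID`, `$RESOURCE_GROUP`, `$WORKSPACE`, `$ENDPOINT_NAME`, and `$API_VERSION` are already set as in the earlier steps:
+
+```rest-api
+# Read the endpoint's scoring URI from its ARM resource
+response=$(curl --location --request GET "https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.MachineLearningServices/workspaces/$WORKSPACE/batchEndpoints/$ENDPOINT_NAME?api-version=$API_VERSION" \
+--header "Authorization: Bearer $TOKEN")
+SCORING_URI=$(echo $response | jq -r '.properties.scoringUri')
+
+# Get an Azure AD access token scoped to the Azure Machine Learning service for invoking the endpoint
+SCORING_TOKEN=$(az account get-access-token --resource https://ml.azure.com --query accessToken -o tsv)
+```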
-Now, invoke the batch endpoint to start a batch scoring job. The following example scores data publicly available in the cloud:
+#### Invoke the batch endpoint with different input options
+It's time to invoke the batch endpoint to start a batch scoring job. If your data is a folder (potentially with multiple files) publicly available from the web, you can use the following snippet:
-If your data is stored in an Azure Machine Learning registered datastore, you can invoke the batch endpoint with a dataset. The following code creates a new dataset:
+```rest-api
+response=$(curl --location --request POST $SCORING_URI \
+--header "Authorization: Bearer $SCORING_TOKEN" \
+--header "Content-Type: application/json" \
+--data-raw "{
+ \"properties\": {
+ \"InputData\": {
+ \"mnistinput\": {
+ \"JobInputType\" : \"UriFolder\",
+ \"Uri\": \"https://pipelinedata.blob.core.windows.net/sampledata/mnist\"
+ }
+ }
+ }
+}")
+JOB_ID=$(echo $response | jq -r '.id')
+JOB_ID_SUFFIX=$(echo ${JOB_ID##/*/})
+```
-Next, reference the dataset when invoking the batch endpoint:
+Now, let's look at other options for invoking the batch endpoint. When it comes to input data, there are multiple scenarios you can choose from, depending on the input type (whether you are specifying a folder or a single file) and the URI type (whether you are using a path on an Azure Machine Learning registered datastore, a reference to an Azure Machine Learning registered V2 data asset, or a public URI).
+- An `InputData` property has `JobInputType` and `Uri` keys. When you are specifying a single file, use `"JobInputType": "UriFile"`, and when you are specifying a folder, use `"JobInputType": "UriFolder"`.
-In the previous code snippet, a custom output location is provided by using `datastoreId`, `path`, and `outputFileName`. These settings allow you to configure where to store the batch scoring results.
+- When the file or folder is on an Azure ML registered datastore, the syntax for the `Uri` is `azureml://datastores/<datastore-name>/paths/<path-on-datastore>` for a folder, and `azureml://datastores/<datastore-name>/paths/<path-on-datastore>/<file-name>` for a specific file. You can also use the longer form to represent the same path, such as `azureml://subscriptions/<subscription_id>/resourceGroups/<resource-group-name>/workspaces/<workspace-name>/datastores/<datastore-name>/paths/<path-on-datastore>/`.
-> [!IMPORTANT]
-> You must provide a unique output location. If the output file already exists, the batch scoring job will fail.
+- When the file or folder is registered as V2 data asset as `uri_folder` or `uri_file`, the syntax for the `Uri` is `\"azureml://data/<data-name>/versions/<data-version>/\"` (short form) or `\"azureml://subscriptions/<subscription_id>/resourceGroups/<resource-group-name>/workspaces/<workspace-name>/data/<data-name>/versions/<data-version>/\"` (long form).
-For this example, the output is stored in the default blob storage for the workspace. The folder name is the same as the endpoint name, and the file name is randomly generated by the following code:
+- When the file or folder is a publicly accessible path, the syntax for the URI is `https://<public-path>` for a folder, and `https://<public-path>/<file-name>` for a specific file.
+> [!NOTE]
+> For more information about data URI, see [Azure Machine Learning data reference URI](reference-yaml-core-syntax.md#azure-ml-data-reference-uri).
+
+Below are some examples using different types of input data.
+
+- If your data is a folder on an Azure ML registered datastore, you can either:
+
+ - Use the short form to represent the URI:
+
+ ```rest-api
+ response=$(curl --location --request POST $SCORING_URI \
+ --header "Authorization: Bearer $SCORING_TOKEN" \
+ --header "Content-Type: application/json" \
+ --data-raw "{
+ \"properties\": {
+ \"InputData\": {
+ \"mnistInput\": {
+ \"JobInputType\" : \"UriFolder\",
+ \"Uri": \"azureml://datastores/workspaceblobstore/paths/$ENDPOINT_NAME/mnist\"
+ }
+ }
+ }
+ }")
+
+ JOB_ID=$(echo $response | jq -r '.id')
+ JOB_ID_SUFFIX=$(echo ${JOB_ID##/*/})
+ ```
+
+ - Or use the long form for the same URI:
+
+ ```rest-api
+ response=$(curl --location --request POST $SCORING_URI \
+ --header "Authorization: Bearer $SCORING_TOKEN" \
+ --header "Content-Type: application/json" \
+ --data-raw "{
+ \"properties\": {
+ \"InputData\": {
+ \"mnistinput\": {
+ \"JobInputType\" : \"UriFolder\",
+ \"Uri\": \"azureml://subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/workspaces/$WORKSPACE/datastores/workspaceblobstore/paths/$ENDPOINT_NAME/mnist\"
+ }
+ }
+ }
+ }")
+
+ JOB_ID=$(echo $response | jq -r '.id')
+ JOB_ID_SUFFIX=$(echo ${JOB_ID##/*/})
+ ```
+
+- If you want to manage your data as an Azure ML registered V2 data asset of type `uri_folder`, follow the two steps below:
+
+ 1. Create the V2 data asset:
+
+ ```rest-api
+ DATA_NAME="mnist"
+ DATA_VERSION=$RANDOM
+
+ response=$(curl --location --request PUT https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.MachineLearningServices/workspaces/$WORKSPACE/data/$DATA_NAME/versions/$DATA_VERSION?api-version=$API_VERSION \
+ --header "Content-Type: application/json" \
+ --header "Authorization: Bearer $TOKEN" \
+ --data-raw "{
+ \"properties\": {
+ \"dataType\": \"uri_folder\",
+ \"dataUri\": \"https://pipelinedata.blob.core.windows.net/sampledata/mnist\",
+ \"description\": \"Mnist data asset\"
+ }
+ }")
+ ```
+
+ 2. Reference the data asset in the batch scoring job:
+
+ ```rest-api
+ response=$(curl --location --request POST $SCORING_URI \
+ --header "Authorization: Bearer $SCORING_TOKEN" \
+ --header "Content-Type: application/json" \
+ --data-raw "{
+ \"properties\": {
+ \"InputData\": {
+ \"mnistInput\": {
+ \"JobInputType\" : \"UriFolder\",
+ \"Uri": \"azureml://data/$DATA_NAME/versions/$DATA_VERSION/\"
+ }
+ }
+ }
+ }")
+
+ JOB_ID=$(echo $response | jq -r '.id')
+ JOB_ID_SUFFIX=$(echo ${JOB_ID##/*/})
+ ```
+
+- If your data is a single file publicly available from the web, you can use the following snippet:
+
+ ```rest-api
+ response=$(curl --location --request POST $SCORING_URI \
+ --header "Authorization: Bearer $SCORING_TOKEN" \
+ --header "Content-Type: application/json" \
+ --data-raw "{
+ \"properties\": {
+ \"InputData\": {
+ \"mnistInput\": {
+ \"JobInputType\" : \"UriFile\",
+ \"Uri": \"https://pipelinedata.blob.core.windows.net/sampledata/mnist/0.png\"
+ }
+ }
+ }
+ }")
+
+ JOB_ID=$(echo $response | jq -r '.id')
+ JOB_ID_SUFFIX=$(echo ${JOB_ID##/*/})
+ ```
+
+> [!NOTE]
+> We strongly recommend using the latest REST API version for batch scoring.
+> - If you want to use local data, upload it to an Azure Machine Learning registered datastore, and then use the REST API as you would for cloud data.
+> - If you are using an existing V1 FileDataset for batch endpoints, we recommend migrating it to V2 data assets and referring to them directly when invoking batch endpoints. Currently, only data assets of type `uri_folder` or `uri_file` are supported. Batch endpoints created with GA CLIv2 (2.4.0 and newer) or the GA REST API (2022-05-01 and newer) will not support V1 Dataset.
+> - You can also extract the URI or datastore path from a V1 FileDataset by using the `az ml dataset show` command with the `--query` parameter, and use that information when invoking the batch endpoint.
+> - While batch endpoints created with earlier APIs will continue to support V1 FileDataset, we will be adding further V2 data asset support with the latest API versions for even more usability and flexibility. For more information on V2 data assets, see [Work with data using SDK v2 (preview)](how-to-use-data.md). For more information on the new V2 experience, see [What is v2](concept-v2.md).
+
+#### Configure the output location and overwrite settings
+
+By default, the batch scoring results are stored in the workspace's default blob store within a folder named after the job name (a system-generated GUID). You can configure where to store the scoring outputs when you invoke the batch endpoint. Use `OutputData` to configure the output file path on an Azure Machine Learning registered datastore. `OutputData` has `JobOutputType` and `Uri` keys. `UriFile` is the only supported value for `JobOutputType`. The syntax for `Uri` is the same as that of `InputData`, i.e., `azureml://datastores/<datastore-name>/paths/<path-on-datastore>/<file-name>`.
+
+The following example snippet configures the output location for the batch scoring results.
+
+```rest-api
+response=$(curl --location --request POST $SCORING_URI \
+--header "Authorization: Bearer $SCORING_TOKEN" \
+--header "Content-Type: application/json" \
+--data-raw "{
+ \"properties\": {
+ \"InputData\":
+ {
+ \"mnistInput\": {
+ \"JobInputType\" : \"UriFolder\",
+ \"Uri\": \"azureml://datastores/workspaceblobstore/paths/$ENDPOINT_NAME/mnist\"
+ }
+ },
+ \"OutputData\":
+ {
+ \"mnistOutput\": {
+ \"JobOutputType\": \"UriFile\",
+ \"Uri\": \"azureml://datastores/workspaceblobstore/paths/$ENDPOINT_NAME/mnistOutput/$OUTPUT_FILE_NAME\"
+ }
+ }
+ }
+}")
+
+JOB_ID=$(echo $response | jq -r '.id')
+JOB_ID_SUFFIX=$(echo ${JOB_ID##/*/})
+```
+
+> [!IMPORTANT]
+> You must use a unique output location. If the output file exists, the batch scoring job will fail.
### Check the batch scoring job
machine-learning How To Deploy Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-custom-container.md
Previously updated : 03/31/2022 Last updated : 05/11/2022 -+ ms.devlang: azurecli
-# Deploy a TensorFlow model served with TF Serving using a custom container in an online endpoint (preview)
+# Deploy a TensorFlow model served with TF Serving using a custom container in an online endpoint
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]+ Learn how to deploy a custom container as an online endpoint in Azure Machine Learning. Custom container deployments can use web servers other than the default Python Flask server used by Azure Machine Learning. Users of these deployments can still take advantage of Azure Machine Learning's built-in monitoring, scaling, alerting, and authentication. - > [!WARNING] > Microsoft may not be able to help troubleshoot problems caused by a custom image. If you encounter problems, you may be asked to use the default image or one of the images Microsoft provides to see if the problem is specific to your image. ## Prerequisites
-* Install and configure the Azure CLI and ML extension. For more information, see [Install, set up, and use the CLI (v2) (preview)](how-to-configure-cli.md).
+* Install and configure the Azure CLI and ML extension. For more information, see [Install, set up, and use the CLI (v2)](how-to-configure-cli.md).
* You must have an Azure resource group, in which you (or the service principal you use) need to have `Contributor` access. You'll have such a resource group if you configured your ML extension per the above article.
Notice that this deployment uses the same path for both liveness and readiness,
### Locating the mounted model
-When you deploy a model as a real-time endpoint, Azure Machine Learning _mounts_ your model to your endpoint. Model mounting enables you to deploy new versions of the model without having to create a new Docker image. By default, a model registered with the name *foo* and version *1* would be located at the following path inside of your deployed container: `/var/azureml-app/azureml-models/foo/1`
+When you deploy a model as an online endpoint, Azure Machine Learning _mounts_ your model to your endpoint. Model mounting enables you to deploy new versions of the model without having to create a new Docker image. By default, a model registered with the name *foo* and version *1* would be located at the following path inside of your deployed container: `/var/azureml-app/azureml-models/foo/1`
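For instance, a quick way to confirm where the model landed from inside the running container is to list that path (a minimal sketch; *foo* and *1* are the generic example name and version above):

```bash
# List the files of the mounted model inside the deployed container (generic example path)
ls -R /var/azureml-app/azureml-models/foo/1
```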
For example, if you have a directory structure of `/azureml-examples/cli/endpoints/online/custom-container` on your local machine, where the model is named `half_plus_two`:
az ml model delete -n tfserving-mounted --version 1
## Next steps -- [Safe rollout for online endpoints (preview)](how-to-safely-rollout-managed-endpoints.md)
+- [Safe rollout for online endpoints](how-to-safely-rollout-managed-endpoints.md)
- [Troubleshooting online endpoints deployment](./how-to-troubleshoot-online-endpoints.md) - [Torch serve sample](https://github.com/Azure/azureml-examples/blob/main/cli/deploy-torchserve.sh)
machine-learning How To Deploy Fpga Web Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-fpga-web-service.md
Last updated 10/21/2021 -+ # Deploy ML models to field-programmable gate arrays (FPGAs) with Azure Machine Learning + In this article, you learn about FPGAs and how to deploy your ML models to an Azure FPGA using the [hardware-accelerated models Python package](/python/api/azureml-accel-models/azureml.accel) from [Azure Machine Learning](overview-what-is-azure-machine-learning.md). ## What are FPGAs?
machine-learning How To Deploy Inferencing Gpus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-inferencing-gpus.md
Last updated 10/21/2021 -+ # Deploy a deep learning model for inference with GPU This article teaches you how to use Azure Machine Learning to deploy a GPU-enabled model as a web service. The information in this article is based on deploying a model on Azure Kubernetes Service (AKS). The AKS cluster provides a GPU resource that is used by the model for inference.
Inference, or model scoring, is the phase where the deployed model is used to ma
> Although the code snippets in this article use a TensorFlow model, you can apply the information to any machine learning framework that supports GPUs. > [!NOTE]
-> The information in this article builds on the information in the [How to deploy to Azure Kubernetes Service](how-to-deploy-azure-kubernetes-service.md) article. Where that article generally covers deployment to AKS, this article covers GPU specific deployment.
+> The information in this article builds on the information in the [How to deploy to Azure Kubernetes Service](v1/how-to-deploy-azure-kubernetes-service.md) article. Where that article generally covers deployment to AKS, this article covers GPU specific deployment.
## Prerequisites
except ComputeTargetException:
> [!IMPORTANT] > Azure will bill you as long as the AKS cluster exists. Make sure to delete your AKS cluster when you're done with it.
-For more information on using AKS with Azure Machine Learning, see [How to deploy to Azure Kubernetes Service](how-to-deploy-azure-kubernetes-service.md).
+For more information on using AKS with Azure Machine Learning, see [How to deploy to Azure Kubernetes Service](v1/how-to-deploy-azure-kubernetes-service.md).
## Write the entry script
machine-learning How To Deploy Local Container Notebook Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-local-container-notebook-vm.md
-+ Last updated 04/22/2021
An example notebook that demonstrates local deployments is included on your comp
To submit sample data to the running service, use the following code. Replace the value of `service_url` with the URL from the previous step: > [!NOTE]
-> When authenticating to a deployment on the compute instance, the authentication is made using Azure Active Directory. The call to `interactive_auth.get_authentication_header()` in the example code authenticates you using AAD, and returns a header that can then be used to authenticate to the service on the compute instance. For more information, see [Set up authentication for Azure Machine Learning resources and workflows](how-to-setup-authentication.md#interactive-authentication).
+> When authenticating to a deployment on the compute instance, the authentication is made using Azure Active Directory. The call to `interactive_auth.get_authentication_header()` in the example code authenticates you using AAD, and returns a header that can then be used to authenticate to the service on the compute instance. For more information, see [Set up authentication for Azure Machine Learning resources and workflows](how-to-setup-authentication.md#use-interactive-authentication).
> > When authenticating to a deployment on Azure Kubernetes Service or Azure Container Instances, a different authentication method is used. For more information, see [Configure authentication for Azure Machine Learning models deployed as web services](how-to-authenticate-web-service.md).
print("prediction:", resp.text)
* [Use TLS to secure a web service through Azure Machine Learning](how-to-secure-web-service.md) * [Consume a ML Model deployed as a web service](how-to-consume-web-service.md) * [Monitor your Azure Machine Learning models with Application Insights](how-to-enable-app-insights.md)
-* [Collect data for models in production](how-to-enable-data-collection.md)
+* [Collect data for models in production](how-to-enable-data-collection.md)
machine-learning How To Deploy Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-local.md
Last updated 11/20/2020 -+ # Deploy models trained with Azure Machine Learning on your local machines + This article describes how to use your local computer as a target for training or deploying models created in Azure Machine Learning. Azure Machine Learning is flexible enough to work with most Python machine learning frameworks. Machine learning solutions generally have complex dependencies that can be difficult to duplicate. This article will show you how to balance total control with ease of use. Scenarios for local deployment include:
After you download the model and resolve its dependencies, there are no Azure-de
## Upload a retrained model to Azure Machine Learning
-If you have a locally trained or retrained model, you can register it with Azure. After it's registered, you can continue tuning it by using Azure compute or deploy it by using Azure facilities like [Azure Kubernetes Service](how-to-deploy-azure-kubernetes-service.md) or [Triton Inference Server (Preview)](how-to-deploy-with-triton.md).
+If you have a locally trained or retrained model, you can register it with Azure. After it's registered, you can continue tuning it by using Azure compute or deploy it by using Azure facilities like [Azure Kubernetes Service](v1/how-to-deploy-azure-kubernetes-service.md) or [Triton Inference Server (Preview)](how-to-deploy-with-triton.md).
To be used with the Azure Machine Learning Python SDK, a model must be stored as a serialized Python object in pickle format (a .pkl file). It must also implement a `predict(data)` method that returns a JSON-serializable object. For example, you might store a locally trained scikit-learn diabetes model with:
machine-learning How To Deploy Managed Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-managed-online-endpoints.md
Title: Deploy an ML model by using an online endpoint (preview)
+ Title: Deploy an ML model by using an online endpoint
description: Learn to deploy your machine learning model as a web service that's automatically managed by Azure.
Previously updated : 03/31/2022 Last updated : 04/26/2022 -+
-# Deploy and score a machine learning model by using an online endpoint (preview)
+# Deploy and score a machine learning model by using an online endpoint
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
-Learn how to use an online endpoint (preview) to deploy your model, so you don't have to create and manage the underlying infrastructure. You'll begin by deploying a model on your local machine to debug any errors, and then you'll deploy and test it in Azure.
-You'll also learn how to view the logs and monitor the service-level agreement (SLA). You start with a model and end up with a scalable HTTPS/REST endpoint that you can use for online and real-time scoring.
+Learn how to use an online endpoint to deploy your model, so you don't have to create and manage the underlying infrastructure. You'll begin by deploying a model on your local machine to debug any errors, and then you'll deploy and test it in Azure.
-Managed online endpoints help to deploy your ML models in a turnkey manner. Managed online endpoints work with powerful CPU and GPU machines in Azure in a scalable, fully managed way. Managed online endpoints take care of serving, scaling, securing, and monitoring your models, freeing you from the overhead of setting up and managing the underlying infrastructure. The main example in this doc uses managed online endpoints for deployment. To use Kubernetes instead, see the notes in this document inline with the managed online endpoint discussion. For more information, see [What are Azure Machine Learning endpoints (preview)?](concept-endpoints.md).
+You'll also learn how to view the logs and monitor the service-level agreement (SLA). You start with a model and end up with a scalable HTTPS/REST endpoint that you can use for online and real-time scoring.
+Managed online endpoints help to deploy your ML models in a turnkey manner. Managed online endpoints work with powerful CPU and GPU machines in Azure in a scalable, fully managed way. Managed online endpoints take care of serving, scaling, securing, and monitoring your models, freeing you from the overhead of setting up and managing the underlying infrastructure. The main example in this doc uses managed online endpoints for deployment. To use Kubernetes instead, see the notes in this document inline with the managed online endpoint discussion. For more information, see [What are Azure Machine Learning endpoints?](concept-endpoints.md).
## Prerequisites * To use Azure Machine Learning, you must have an Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
-* Install and configure the Azure CLI and the `ml` extension to the Azure CLI. For more information, see [Install, set up, and use the CLI (v2) (preview)](how-to-configure-cli.md).
+* Install and configure the Azure CLI and the `ml` extension to the Azure CLI. For more information, see [Install, set up, and use the CLI (v2)](how-to-configure-cli.md).
-* You must have an Azure resource group, and you (or the service principal you use) must have Contributor access to it. A resource group is created in [Install, set up, and use the CLI (v2) (preview)](how-to-configure-cli.md).
+* You must have an Azure resource group, and you (or the service principal you use) must have Contributor access to it. A resource group is created in [Install, set up, and use the CLI (v2)](how-to-configure-cli.md).
-* You must have an Azure Machine Learning workspace. A workspace is created in [Install, set up, and use the CLI (v2) (preview)](how-to-configure-cli.md).
+* You must have an Azure Machine Learning workspace. A workspace is created in [Install, set up, and use the CLI (v2)](how-to-configure-cli.md).
* If you haven't already set the defaults for the Azure CLI, save your default settings. To avoid passing in the values for your subscription, workspace, and resource group multiple times, run this code:
Managed online endpoints help to deploy your ML models in a turnkey manner. Mana
az configure --defaults workspace=<Azure Machine Learning workspace name> group=<resource group> ```
+* Azure role-based access control (Azure RBAC) is used to grant access to operations in Azure Machine Learning. To perform the steps in this article, your user account must be assigned the __owner__ or __contributor__ role for the Azure Machine Learning workspace, or a custom role allowing `Microsoft.MachineLearningServices/workspaces/onlineEndpoints/*`. For more information, see [Manage access to an Azure Machine Learning workspace](how-to-assign-roles.md).
+ * (Optional) To deploy locally, you must [install Docker Engine](https://docs.docker.com/engine/install/) on your local computer. We *highly recommend* this option, so it's easier to debug issues. > [!IMPORTANT]
The following snippet shows the *endpoints/online/managed/sample/endpoint.yml* f
:::code language="yaml" source="~/azureml-examples-main/cli/endpoints/online/managed/sample/endpoint.yml"::: > [!NOTE]
-> For a full description of the YAML, see [Online endpoint (preview) YAML reference](reference-yaml-endpoint-online.md).
+> For a full description of the YAML, see [Online endpoint YAML reference](reference-yaml-endpoint-online.md).
-The reference for the endpoint YAML format is described in the following table. To learn how to specify these attributes, see the YAML example in [Prepare your system](#prepare-your-system) or the [online endpoint YAML reference](reference-yaml-endpoint-online.md). For information about limits related to managed endpoints, see [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints-preview).
+The reference for the endpoint YAML format is described in the following table. To learn how to specify these attributes, see the YAML example in [Prepare your system](#prepare-your-system) or the [online endpoint YAML reference](reference-yaml-endpoint-online.md). For information about limits related to managed endpoints, see [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints).
| Key | Description | | | | | `$schema` | (Optional) The YAML schema. To see all available options in the YAML file, you can view the schema in the preceding example in a browser.|
-| `name` | The name of the endpoint. It must be unique in the Azure region.<br>Naming rules are defined under [managed online endpoint limits](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints-preview).|
+| `name` | The name of the endpoint. It must be unique in the Azure region.<br>Naming rules are defined under [managed online endpoint limits](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints).|
| `auth_mode` | Use `key` for key-based authentication. Use `aml_token` for Azure Machine Learning token-based authentication. `key` doesn't expire, but `aml_token` does expire. (Get the most recent token by using the `az ml online-endpoint get-credentials` command.) | The example contains all the files needed to deploy a model on an online endpoint. To deploy a model, you must have:
The table describes the attributes of a `deployment`:
| | | | `name` | The name of the deployment. | | `model` | In this example, we specify the model properties inline: `path`. Model files are automatically uploaded and registered with an autogenerated name. For related best practices, see the tip in the next section. |
-| `code_configuration.code.path` | The directory that contains all the Python source code for scoring the model. You can use nested directories and packages. |
-| `code_configuration.scoring_script` | The Python file that's in the `code_configuration.code.path` scoring directory. This Python code must have an `init()` function and a `run()` function. The function `init()` will be called after the model is created or updated (you can use it to cache the model in memory, for example). The `run()` function is called at every invocation of the endpoint to do the actual scoring and prediction. |
+| `code_configuration.code.path` | The directory on the local development environment that contains all the Python source code for scoring the model. You can use nested directories and packages. |
+| `code_configuration.scoring_script` | The Python file that's in the `code_configuration.code.path` scoring directory on the local development environment. This Python code must have an `init()` function and a `run()` function. The function `init()` will be called after the model is created or updated (you can use it to cache the model in memory, for example). The `run()` function is called at every invocation of the endpoint to do the actual scoring and prediction. |
| `environment` | Contains the details of the environment to host the model and code. In this example, we have inline definitions that include the `path`. We'll use `environment.docker.image` for the image. The `conda_file` dependencies will be installed on top of the image. For more information, see the tip in the next section. | | `instance_type` | The VM SKU that will host your deployment instances. For more information, see [Managed online endpoints supported VM SKUs](reference-managed-online-endpoints-vm-sku-list.md). |
-| `instance_count` | The number of instances in the deployment. Base the value on the workload you expect. For high availability, we recommend that you set `instance_count` to at least `3`. |
+| `instance_count` | The number of instances in the deployment. Base the value on the workload you expect. For high availability, we recommend that you set `instance_count` to at least `3`. We reserve an extra 20% for performing upgrades. For more information, see [managed online endpoint quotas](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints). |
+
+During deployment, the local files, such as the Python source for the scoring model, are uploaded from the development environment.
For more information about the YAML schema, see the [online endpoint YAML reference](reference-yaml-endpoint-online.md). > [!NOTE] > To use Kubernetes instead of managed endpoints as a compute target:
-> 1. Create and attach your Kubernetes cluster as a compute target to your Azure Machine Learning workspace by using [Azure Machine Learning studio](how-to-attach-arc-kubernetes.md?&tabs=studio#attach-arc-cluster).
+> 1. Create and attach your Kubernetes cluster as a compute target to your Azure Machine Learning workspace by using [Azure Machine Learning studio](how-to-attach-kubernetes-anywhere.md?&tabs=studio#attach-a-kubernetes-cluster-to-an-azureml-workspace).
> 1. Use the [endpoint YAML](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/amlarc/endpoint.yml) to target Kubernetes instead of the managed endpoint YAML. You'll need to edit the YAML to change the value of `target` to the name of your registered compute target. You can use this [deployment.yaml](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/amlarc/blue-deployment.yml) that has additional properties applicable to Kubernetes deployment. > > All the commands that are used in this article (except the optional SLA monitoring and Azure Log Analytics integration) can be used either with managed endpoints or with Kubernetes endpoints.
The output should appear similar to the following JSON. Note that the `provision
} ```
+The following table contains the possible values for `provisioning_state`:
+
+| State | Description |
+| -- | -- |
+| __Creating__ | The resource is being created. |
+| __Updating__ | The resource is being updated. |
+| __Deleting__ | The resource is being deleted. |
+| __Succeeded__ | The create/update operation was successful. |
+| __Failed__ | The create/update/delete operation has failed. |
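For example, one way to check the current provisioning state from the CLI (a sketch; `--query` is the standard Azure CLI JMESPath filter, and `ENDPOINT_NAME` is the variable set earlier):

```azurecli
# Query only the provisioning state of the endpoint
az ml online-endpoint show --name $ENDPOINT_NAME --query "provisioning_state" -o tsv
```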
+ ### Invoke the local endpoint to score data by using your model Invoke the endpoint to score the model by using the convenience command `invoke` and passing query parameters that are stored in a JSON file:
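A sketch of what that local invocation can look like (the request file path is an assumption based on the azureml-examples repository used throughout this article):

```azurecli
# Invoke the locally deployed endpoint with a sample request file
az ml online-endpoint invoke --local --name $ENDPOINT_NAME --request-file endpoints/online/model-1/sample-request.json
```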
This deployment might take up to 15 minutes, depending on whether the underlying
> [!TIP] > * If you prefer not to block your CLI console, you may add the flag `--no-wait` to the command. However, this will stop the interactive display of the deployment status. >
-> * Use [Troubleshooting online endpoints deployment (preview)](./how-to-troubleshoot-online-endpoints.md) to debug errors.
+> * Use [Troubleshooting online endpoints deployment](./how-to-troubleshoot-online-endpoints.md) to debug errors.
### Check the status of the deployment
You can use either the `invoke` command or a REST client of your choice to invok
The following example shows how to get the key used to authenticate to the endpoint:
+> [!TIP]
+> You can control which Azure Active Directory security principals can get the authentication key by assigning them to a custom role that allows `Microsoft.MachineLearningServices/workspaces/onlineEndpoints/token/action` and `Microsoft.MachineLearningServices/workspaces/onlineEndpoints/listkeys/action`. For more information, see [Manage access to an Azure Machine Learning workspace](how-to-assign-roles.md).
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint.sh" ID="test_endpoint_using_curl_get_key"::: Next, use curl to score data.
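A minimal curl sketch for scoring (the scoring URI lookup, key field name, and sample request file path are assumptions based on the earlier steps; adjust them to your endpoint):

```azurecli
# Look up the scoring URI and a key, then POST sample data to the endpoint
SCORING_URI=$(az ml online-endpoint show --name $ENDPOINT_NAME --query "scoring_uri" -o tsv)
ENDPOINT_KEY=$(az ml online-endpoint get-credentials --name $ENDPOINT_NAME --query "primaryKey" -o tsv)
curl --request POST "$SCORING_URI" \
    --header "Authorization: Bearer $ENDPOINT_KEY" \
    --header "Content-Type: application/json" \
    --data @endpoints/online/model-1/sample-request.json
```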
Notice we use `show` and `get-credentials` commands to get the authentication cr
To see the invocation logs, run `get-logs` again.
+For information on authenticating using a token, see [Authenticate to online endpoints](how-to-authenticate-online-endpoint.md).
+ ### (Optional) Update the deployment If you want to update the code, model, or environment, update the YAML file, and then run the `az ml online-endpoint update` command.
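For example (a sketch; the YAML path refers to the sample endpoint file used earlier in this article, so point it at whichever file you edited):

```azurecli
# Apply the edited YAML to the existing endpoint
az ml online-endpoint update --name $ENDPOINT_NAME -f endpoints/online/managed/sample/endpoint.yml
```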
-> [!Note]
+> [!NOTE]
> If you update the instance count along with other model settings (code, model, or environment) in a single `update` command, the scaling operation will be performed first, and then the other updates will be applied. In a production environment, it's a good practice to perform these operations separately. To understand how `update` works:
If you aren't going to use the deployment, you should delete it by running the foll
To learn more, review these articles: -- [Deploy models with REST (preview)](how-to-deploy-with-rest.md)-- [Create and use online endpoints (preview) in the studio](how-to-use-managed-online-endpoint-studio.md)-- [Safe rollout for online endpoints (preview)](how-to-safely-rollout-managed-endpoints.md)
+- [Deploy models with REST](how-to-deploy-with-rest.md)
+- [Create and use online endpoints in the studio](how-to-use-managed-online-endpoint-studio.md)
+- [Safe rollout for online endpoints](how-to-safely-rollout-managed-endpoints.md)
- [How to autoscale managed online endpoints](how-to-autoscale-endpoints.md)-- [Use batch endpoints (preview) for batch scoring](how-to-use-batch-endpoint.md)-- [View costs for an Azure Machine Learning managed online endpoint (preview)](how-to-view-online-endpoints-costs.md)-- [Access Azure resources with a online endpoint and managed identity (preview)](how-to-access-resources-from-endpoints-managed-identities.md)
+- [Use batch endpoints for batch scoring](how-to-use-batch-endpoint.md)
+- [View costs for an Azure Machine Learning managed online endpoint](how-to-view-online-endpoints-costs.md)
+- [Access Azure resources with a online endpoint and managed identity](how-to-access-resources-from-endpoints-managed-identities.md)
- [Troubleshoot online endpoints deployment](how-to-troubleshoot-online-endpoints.md)
+- [Enable network isolation with managed online endpoints](how-to-secure-online-endpoint.md)
machine-learning How To Deploy Mlflow Models Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models-online-endpoints.md
Last updated 03/31/2022 -+ ms.devlang: azurecli # Deploy MLflow models to online endpoints (preview) +
-In this article, learn how to deploy your [MLflow](https://www.mlflow.org) model to an [online endpoint](concept-endpoints.md) (preview). When you deploy your MLflow model to an online endpoint, it's a no-code-deployment so you don't have to provide a scoring script or an environment.
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
+> * [v1](./v1/how-to-deploy-mlflow-models.md)
+> * [v2 (current version)](how-to-deploy-mlflow-models-online-endpoints.md)
+
+In this article, learn how to deploy your [MLflow](https://www.mlflow.org) model to an [online endpoint](concept-endpoints.md) (preview) for real-time inference. When you deploy your MLflow model to an online endpoint, it's a no-code deployment, so you don't have to provide a scoring script or an environment.
You only provide the typical MLflow model folder contents:
For no-code deployment, Azure Machine Learning
* Dynamically installs Python packages provided in the `conda.yaml` file. This means the dependencies are installed during container runtime. * The base container image/curated environment used for dynamic installation is `mcr.microsoft.com/azureml/mlflow-ubuntu18.04-py37-cpu-inference` or `AzureML-mlflow-ubuntu18.04-py37-cpu-inference`-
-Provides a MLflow base image/curated environment that contains,
-
-* [`azureml-inference-server-http`](how-to-inference-server-http.md)
-* [`mlflow-skinny`](https://github.com/mlflow/mlflow/blob/master/README_SKINNY.rst)
-* `pandas`
-* The scoring script baked into the image
-
+* Provides an MLflow base image/curated environment that contains the following items:
+ * [`azureml-inference-server-http`](how-to-inference-server-http.md)
+ * [`mlflow-skinny`](https://github.com/mlflow/mlflow/blob/master/README_SKINNY.rst)
+ * `pandas`
+ * The scoring script baked into the image.
## Prerequisites
Provides a MLflow base image/curated environment that contains,
* You must have an MLflow model. The examples in this article are based on the models from [https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/mlflow](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/mlflow).
+ * If you don't have an MLflow formatted model, you can [convert your custom ML model to MLflow format](how-to-convert-custom-model-to-mlflow.md).
+ [!INCLUDE [clone repo & set defaults](../../includes/machine-learning-cli-prepare.md)] In the code snippets used in this article, the `ENDPOINT_NAME` environment variable contains the name of the endpoint to create and use. To set this, use the following command from the CLI. Replace `<YOUR_ENDPOINT_NAME>` with the name of your endpoint:
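That command can be as simple as the following (a sketch for a bash shell):

```bash
# Set the endpoint name used by the rest of the snippets
export ENDPOINT_NAME="<YOUR_ENDPOINT_NAME>"
```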
machine-learning How To Deploy Mlflow Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models.md
- Title: Deploy MLflow models as web services-
-description: Set up MLflow with Azure Machine Learning to deploy your ML models as an Azure web service.
----- Previously updated : 10/25/2021----
-# Deploy MLflow models as Azure web services
-
-In this article, learn how to deploy your [MLflow](https://www.mlflow.org) model as an Azure web service, so you can leverage and apply Azure Machine Learning's model management and data drift detection capabilities to your production models. See [MLflow and Azure Machine Learning](concept-mlflow.md) for additional MLflow and Azure Machine Learning functionality integrations.
-
-Azure Machine Learning offers deployment configurations for:
-* Azure Container Instance (ACI) which is a suitable choice for a quick dev-test deployment.
-* Azure Kubernetes Service (AKS) which is recommended for scalable production deployments.
--
-> [!TIP]
-> The information in this document is primarily for data scientists and developers who want to deploy their MLflow model to an Azure Machine Learning web service endpoint. If you are an administrator interested in monitoring resource usage and events from Azure Machine Learning, such as quotas, completed training runs, or completed model deployments, see [Monitoring Azure Machine Learning](monitor-azure-machine-learning.md).
-
-## MLflow with Azure Machine Learning deployment
-
-MLflow is an open-source library for managing the life cycle of your machine learning experiments. Its integration with Azure Machine Learning allows for you to extend this management beyond model training to the deployment phase of your production model.
-
-The following diagram demonstrates that with the MLflow deploy API and Azure Machine Learning, you can deploy models created with popular frameworks, like PyTorch, Tensorflow, scikit-learn, etc., as Azure web services and manage them in your workspace.
-
-![ deploy mlflow models with azure machine learning](./media/how-to-deploy-mlflow-models/mlflow-diagram-deploy.png)
-
-## Prerequisites
-
-* A machine learning model. If you don't have a trained model, find the notebook example that best fits your compute scenario in [this repo](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/ml-frameworks/using-mlflow) and follow its instructions.
-* [Set up the MLflow Tracking URI to connect Azure Machine Learning](how-to-use-mlflow.md#track-local-runs).
-* Install the `azureml-mlflow` package.
- * This package automatically brings in `azureml-core` of the [The Azure Machine Learning Python SDK](/python/api/overview/azure/ml/install), which provides the connectivity for MLflow to access your workspace.
-* See which [access permissions you need to perform your MLflow operations with your workspace](how-to-assign-roles.md#mlflow-operations).
-
-## Deploy to Azure Container Instance (ACI)
-
-To deploy your MLflow model to an Azure Machine Learning web service, your model must be set up with the [MLflow Tracking URI to connect with Azure Machine Learning](how-to-use-mlflow.md).
-
-In order to deploy to ACI, you don't need to define any deployment configuration, the service will default to an ACI deployment when a config is not provided.
-Then, register and deploy the model in one step with MLflow's [deploy](https://www.mlflow.org/docs/latest/python_api/mlflow.azureml.html#mlflow.azureml.deploy) method for Azure Machine Learning.
--
-```python
-from mlflow.deployments import get_deploy_client
-
-# set the tracking uri as the deployment client
-client = get_deploy_client(mlflow.get_tracking_uri())
-
-# set the model path
-model_path = "model"
-
-# define the model path and the name is the service name
-# the model gets registered automatically and a name is autogenerated using the "name" parameter below
-client.create_deployment(name="mlflow-test-aci", model_uri='runs:/{}/{}'.format(run.id, model_path))
-```
-
-### Customize deployment configuration
-
-If you prefer not to use the defaults, you can set up your deployment configuration with a deployment config json file that uses parameters from the [deploy_configuration()](/python/api/azureml-core/azureml.core.webservice.aciwebservice#deploy-configuration-cpu-cores-none--memory-gb-none--tags-none--properties-none--description-none--location-none--auth-enabled-none--ssl-enabled-none--enable-app-insights-none--ssl-cert-pem-file-none--ssl-key-pem-file-none--ssl-cname-none--dns-name-label-none-) method as reference.
-
-For your deployment config json file, each of the deployment config parameters need to be defined in the form of a dictionary. The following is an example. [Learn more about what your deployment configuration json file can contain](reference-azure-machine-learning-cli.md#azure-container-instance-deployment-configuration-schema).
-
-```json
-{"computeType": "aci",
- "containerResourceRequirements": {"cpu": 1, "memoryInGB": 1},
- "location": "eastus2"
-}
-```
-
-Your json file can then be used to create your deployment.
-
-```python
-# set the deployment config
-deploy_path = "deployment_config.json"
-test_config = {'deploy-config-file': deploy_path}
-
-client.create_deployment(model_uri='runs:/{}/{}'.format(run.id, model_path),
- config=test_config,
- name="mlflow-test-aci")
-```
--
-## Deploy to Azure Kubernetes Service (AKS)
-
-To deploy your MLflow model to an Azure Machine Learning web service, your model must be set up with the [MLflow Tracking URI to connect with Azure Machine Learning](how-to-use-mlflow.md).
-
-To deploy to AKS, first create an AKS cluster. Create an AKS cluster using the [ComputeTarget.create()](/python/api/azureml-core/azureml.core.computetarget#create-workspace--name--provisioning-configuration-) method. It may take 20-25 minutes to create a new cluster.
-
-```python
-from azureml.core.compute import AksCompute, ComputeTarget
-
-# Use the default configuration (can also provide parameters to customize)
-prov_config = AksCompute.provisioning_configuration()
-
-aks_name = 'aks-mlflow'
-
-# Create the cluster
-aks_target = ComputeTarget.create(workspace=ws,
- name=aks_name,
- provisioning_configuration=prov_config)
-
-aks_target.wait_for_completion(show_output = True)
-
-print(aks_target.provisioning_state)
-print(aks_target.provisioning_errors)
-```
-Create a deployment config json using [deploy_configuration()](/python/api/azureml-core/azureml.core.webservice.aks.aksservicedeploymentconfiguration#parameters) method values as a reference. Each of the deployment config parameters simply need to be defined as a dictionary. Here's an example below:
-
-```json
-{"computeType": "aks", "computeTargetName": "aks-mlflow"}
-```
-
-Then, register and deploy the model in one step with MLflow's [deployment client](https://www.mlflow.org/docs/latest/python_api/mlflow.deployments.html).
-
-```python
-from mlflow.deployments import get_deploy_client
-
-# set the tracking uri as the deployment client
-client = get_deploy_client(mlflow.get_tracking_uri())
-
-# set the model path
-model_path = "model"
-
-# set the deployment config
-deploy_path = "deployment_config.json"
-test_config = {'deploy-config-file': deploy_path}
-
-# define the model path and the name is the service name
-# the model gets registered automatically and a name is autogenerated using the "name" parameter below
-client.create_deployment(model_uri='runs:/{}/{}'.format(run.id, model_path),
- config=test_config,
- name="mlflow-test-aci")
-```
-
-The service deployment can take several minutes.
-
-## Clean up resources
-
-If you don't plan to use your deployed web service, use `service.delete()` to delete it from your notebook. For more information, see the documentation for [WebService.delete()](/python/api/azureml-core/azureml.core.webservice%28class%29#delete--).
-
-## Example notebooks
-
-The [MLflow with Azure Machine Learning notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/ml-frameworks/using-mlflow) demonstrate and expand upon concepts presented in this article.
-
-> [!NOTE]
-> A community-driven repository of examples using mlflow can be found at https://github.com/Azure/azureml-examples.
-
-## Next steps
-
-* [Manage your models](concept-model-management-and-deployment.md).
-* Monitor your production models for [data drift](./how-to-enable-data-collection.md).
-* [Track Azure Databricks runs with MLflow](how-to-use-mlflow-azure-databricks.md).
machine-learning How To Deploy Model Cognitive Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-model-cognitive-search.md
Last updated 03/11/2021-+ # Deploy a model for use with Cognitive Search This article teaches you how to use Azure Machine Learning to deploy a model for use with [Azure Cognitive Search](../search/search-what-is-azure-search.md).
When you deploy a model from Azure Machine Learning to Azure Kubernetes Service,
The following code demonstrates how to create a new Azure Kubernetes Service (AKS) cluster for your workspace: > [!TIP]
-> You can also attach an existing Azure Kubernetes Service to your Azure Machine Learning workspace. For more information, see [How to deploy models to Azure Kubernetes Service](how-to-deploy-azure-kubernetes-service.md).
+> You can also attach an existing Azure Kubernetes Service to your Azure Machine Learning workspace. For more information, see [How to deploy models to Azure Kubernetes Service](v1/how-to-deploy-azure-kubernetes-service.md).
> [!IMPORTANT] > Notice that the code uses the `enable_ssl()` method to enable transport layer security (TLS) for the cluster. This is required when you plan on using the deployed model from Cognitive Services.
except Exception as e:
> [!IMPORTANT] > Azure will bill you as long as the AKS cluster exists. Make sure to delete your AKS cluster when you're done with it.
-For more information on using AKS with Azure Machine Learning, see [How to deploy to Azure Kubernetes Service](how-to-deploy-azure-kubernetes-service.md).
+For more information on using AKS with Azure Machine Learning, see [How to deploy to Azure Kubernetes Service](v1/how-to-deploy-azure-kubernetes-service.md).
## Write the entry script
aks_target.delete()
## Next steps
-* [Build and deploy a custom skill with Azure Machine Learning](../search/cognitive-search-tutorial-aml-custom-skill.md)
+* [Build and deploy a custom skill with Azure Machine Learning](../search/cognitive-search-tutorial-aml-custom-skill.md)
machine-learning How To Deploy Model Designer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-model-designer.md
Last updated 10/21/2021 -+ # Use the studio to deploy models trained in the designer
-In this article, you learn how to deploy a designer model as a real-time endpoint in Azure Machine Learning studio.
+In this article, you learn how to deploy a designer model as an online (real-time) endpoint in Azure Machine Learning studio.
Once registered or downloaded, you can use designer trained models just like any other model. Exported models can be deployed in use cases such as internet of things (IoT) and local deployments.
After downloading the necessary files, you're ready to deploy the model.
1. In the configuration menu, enter the following information: - Input a name for the endpoint.
- - Select to deploy the model to [Azure Kubernetes Service](how-to-deploy-azure-kubernetes-service.md) or [Azure Container Instance](how-to-deploy-azure-container-instance.md).
+ - Select to deploy the model to [Azure Kubernetes Service](v1/how-to-deploy-azure-kubernetes-service.md) or [Azure Container Instance](v1/how-to-deploy-azure-container-instance.md).
- Upload the `score.py` for the **Entry script file**. - Upload the `conda_env.yml` for the **Conda dependencies file**. >[!TIP] > In the **Advanced** setting, you can set CPU/memory capacity and other parameters for deployment. These settings are important for certain models, such as PyTorch models, which consume a considerable amount of memory (about 4 GB).
-1. Select **Deploy** to deploy your model as a real-time endpoint.
+1. Select **Deploy** to deploy your model as an online endpoint.
![Screenshot of deploy model in model asset page](./media/how-to-deploy-model-designer/deploy-model.png)
-## Consume the real-time endpoint
+## Consume the online endpoint
-After deployment succeeds, you can find the real-time endpoint in the **Endpoints** asset page. Once there, you will find a REST endpoint, which clients can use to submit requests to the real-time endpoint.
+After deployment succeeds, you can find the endpoint in the **Endpoints** asset page. Once there, you will find a REST endpoint, which clients can use to submit requests to the endpoint.
> [!NOTE] > The designer also generates a sample data json file for testing; you can download `_samples.json` from the **trained_model_outputs** folder.
-Use the following code sample to consume a real-time endpoint.
+Use the following code sample to consume an online endpoint.
```python
score_result = service.run(json.dumps(sample_data))
print(f'Inference result = {score_result}') ```
-### Consume computer vision related real-time endpoints
+### Consume computer vision related online endpoints
-When consuming computer vision related real-time endpoints, you need to convert images to bytes, since web service only accepts string as input. Following is the sample code:
+When consuming computer vision-related online endpoints, you need to convert images to bytes, since the web service only accepts strings as input. The following is sample code:
```python import base64
score_params = dict(
* [Train a model in the designer](tutorial-designer-automobile-price-train-score.md) * [Deploy models with Azure Machine Learning SDK](how-to-deploy-and-where.md) * [Troubleshoot a failed deployment](how-to-troubleshoot-deployment.md)
-* [Deploy to Azure Kubernetes Service](how-to-deploy-azure-kubernetes-service.md)
+* [Deploy to Azure Kubernetes Service](v1/how-to-deploy-azure-kubernetes-service.md)
* [Create client applications to consume web services](how-to-consume-web-service.md) * [Update web service](how-to-deploy-update-web-service.md)
machine-learning How To Deploy Package Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-package-models.md
-+ # How to package a registered model with Docker
docker kill mycontainer
## Next steps * [Troubleshoot a failed deployment](how-to-troubleshoot-deployment.md)
-* [Deploy to Azure Kubernetes Service](how-to-deploy-azure-kubernetes-service.md)
+* [Deploy to Azure Kubernetes Service](v1/how-to-deploy-azure-kubernetes-service.md)
* [Create client applications to consume web services](how-to-consume-web-service.md) * [Update web service](how-to-deploy-update-web-service.md) * [How to deploy a model using a custom Docker image](./how-to-deploy-custom-container.md) * [Use TLS to secure a web service through Azure Machine Learning](how-to-secure-web-service.md) * [Monitor your Azure Machine Learning models with Application Insights](how-to-enable-app-insights.md) * [Collect data for models in production](how-to-enable-data-collection.md)
-* [Create event alerts and triggers for model deployments](how-to-use-event-grid.md)
+* [Create event alerts and triggers for model deployments](how-to-use-event-grid.md)
machine-learning How To Deploy Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-pipelines.md
Last updated 10/21/2021 --+ # Publish and track machine learning pipelines - This article will show you how to share a machine learning pipeline with your colleagues or customers.
machine-learning How To Deploy Profile Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-profile-model.md
--
-description: Learn to profile your model before deployment. Profiling determines the memory and CPU usage of your model.
--- Previously updated : 07/31/2020-
-zone_pivot_groups: aml-control-methods
-----
-# Profile your model to determine resource utilization
-
-This article shows how to profile a machine learning model to determine how much CPU and memory you will need to allocate for the model when deploying it as a web service.
-
-## Prerequisites
-
-This article assumes you have trained and registered a model with Azure Machine Learning. See the [sample tutorial here](how-to-train-scikit-learn.md) for an example of training and registering a scikit-learn model with Azure Machine Learning.
-
-## Limitations
-
-* Profiling will not work when the Azure Container Registry (ACR) for your workspace is behind a virtual network.
-
-## Run the profiler
-
-Once you have registered your model and prepared the other components necessary for its deployment, you can determine the CPU and memory the deployed service will need. Profiling tests the service that runs your model and returns information such as the CPU usage, memory usage, and response latency. It also provides a recommendation for the CPU and memory based on resource usage.
-
-In order to profile your model, you will need:
-* A registered model.
-* An inference configuration based on your entry script and inference environment definition.
-* A single column tabular dataset, where each row contains a string representing sample request data.
-
-> [!IMPORTANT]
-> At this point we only support profiling of services that expect their request data to be a string, for example: string serialized json, text, string serialized image, etc. The content of each row of the dataset (string) will be put into the body of the HTTP request and sent to the service encapsulating the model for scoring.
-
-> [!IMPORTANT]
-> We only support profiling up to 2 CPUs in ChinaEast2 and USGovArizona region.
-
-Below is an example of how you can construct an input dataset to profile a service that expects its incoming request data to contain serialized json. In this case, we created a dataset based on 100 instances of the same request data content. In real-world scenarios, we suggest that you use larger datasets containing various inputs, especially if your model resource usage/behavior is input dependent.
--
-```python
-import json
-from azureml.core import Datastore
-from azureml.core.dataset import Dataset
-from azureml.data import dataset_type_definitions
-
-input_json = {'data': [[1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
- [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]]}
-# create a string that can be utf-8 encoded and
-# put in the body of the request
-serialized_input_json = json.dumps(input_json)
-dataset_content = []
-for i in range(100):
- dataset_content.append(serialized_input_json)
-dataset_content = '\n'.join(dataset_content)
-file_name = 'sample_request_data.txt'
-f = open(file_name, 'w')
-f.write(dataset_content)
-f.close()
-
-# upload the txt file created above to the Datastore and create a dataset from it
-data_store = Datastore.get_default(ws)
-data_store.upload_files(['./' + file_name], target_path='sample_request_data')
-datastore_path = [(data_store, 'sample_request_data' +'/' + file_name)]
-sample_request_data = Dataset.Tabular.from_delimited_files(
- datastore_path, separator='\n',
- infer_column_types=True,
- header=dataset_type_definitions.PromoteHeadersBehavior.NO_HEADERS)
-sample_request_data = sample_request_data.register(workspace=ws,
- name='sample_request_data',
- create_new_version=True)
-```
-
-Once you have the dataset containing sample request data ready, create an inference configuration. Inference configuration is based on the score.py and the environment definition. The following example demonstrates how to create the inference configuration and run profiling:
-
-```python
-from azureml.core.model import InferenceConfig, Model
-from azureml.core.dataset import Dataset
--
-model = Model(ws, id=model_id)
-inference_config = InferenceConfig(entry_script='path-to-score.py',
- environment=myenv)
-input_dataset = Dataset.get_by_name(workspace=ws, name='sample_request_data')
-profile = Model.profile(ws,
- 'unique_name',
- [model],
- inference_config,
- input_dataset=input_dataset)
-
-profile.wait_for_completion(True)
-
-# see the result
-details = profile.get_details()
-```
----
-The following command demonstrates how to profile a model by using the CLI:
-
-```azurecli-interactive
-az ml model profile -g <resource-group-name> -w <workspace-name> --inference-config-file <path-to-inf-config.json> -m <model-id> --idi <input-dataset-id> -n <unique-name>
-```
-
-> [!TIP]
-> To persist the information returned by profiling, use tags or properties for the model. Using tags or properties stores the data with the model in the model registry. The following examples demonstrate adding a new tag containing the `requestedCpu` and `requestedMemoryInGb` information:
->
-> ```python
-> model.add_tags({'requestedCpu': details['requestedCpu'],
-> 'requestedMemoryInGb': details['requestedMemoryInGb']})
-> ```
->
-> ```azurecli-interactive
-> az ml model profile -g <resource-group-name> -w <workspace-name> --i <model-id> --add-tag requestedCpu=1 --add-tag requestedMemoryInGb=0.5
-> ```
--
-## Next steps
-
-* [Troubleshoot a failed deployment](how-to-troubleshoot-deployment.md)
-* [Deploy to Azure Kubernetes Service](how-to-deploy-azure-kubernetes-service.md)
-* [Create client applications to consume web services](how-to-consume-web-service.md)
-* [Update web service](how-to-deploy-update-web-service.md)
-* [How to deploy a model using a custom Docker image](./how-to-deploy-custom-container.md)
-* [Use TLS to secure a web service through Azure Machine Learning](how-to-secure-web-service.md)
-* [Monitor your Azure Machine Learning models with Application Insights](how-to-enable-app-insights.md)
-* [Collect data for models in production](how-to-enable-data-collection.md)
-* [Create event alerts and triggers for model deployments](how-to-use-event-grid.md)
machine-learning How To Deploy Update Web Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-update-web-service.md
Last updated 10/21/2021-+
-# Update a deployed web service
+# Update a deployed web service (v1)
+ In this article, you learn how to update a web service that was deployed with Azure Machine Learning.
In this article, you learn how to update a web service that was deployed with Az
- This article assumes you have already deployed a web service with Azure Machine Learning. If you need to learn how to deploy a web service, [follow these steps](how-to-deploy-and-where.md). - The code snippets in this article assume that the `ws` variable has already been initialized to your workspace by using the [Workspace()](/python/api/azureml-core/azureml.core.workspace.workspace#constructor) constructor or by loading a saved configuration with [Workspace.from_config()](/python/api/azureml-core/azureml.core.workspace.workspace#azureml-core-workspace-workspace-from-config). The following snippet demonstrates how to use the constructor:
+ [!INCLUDE [sdk v1](../../includes/machine-learning-sdk-v1.md)]
+ ```python from azureml.core import Workspace ws = Workspace(subscription_id="mysubscriptionid",
See [ACI Service Update Method.](/python/api/azureml-core/azureml.core.webservic
The following code shows how to use the SDK to update the model, environment, and entry script for a web service: + ```python from azureml.core import Environment from azureml.core.webservice import Webservice > [!TIP] > In this example, a JSON document is used to pass the model information from the registration command into the update command. >
-> To update the service to use a new entry script or environment, create an [inference configuration file](./reference-azure-machine-learning-cli.md#inference-configuration-schema) and specify it with the `ic` parameter.
+> To update the service to use a new entry script or environment, create an [inference configuration file](./v1/reference-azure-machine-learning-cli.md#inference-configuration-schema) and specify it with the `ic` parameter.
For more information, see the [az ml service update](/cli/azure/ml(v1)/service#az-ml-v1--service-update) documentation. ## Next steps * [Troubleshoot a failed deployment](how-to-troubleshoot-deployment.md)
-* [Deploy to Azure Kubernetes Service](how-to-deploy-azure-kubernetes-service.md)
* [Create client applications to consume web services](how-to-consume-web-service.md) * [How to deploy a model using a custom Docker image](./how-to-deploy-custom-container.md) * [Use TLS to secure a web service through Azure Machine Learning](how-to-secure-web-service.md)
machine-learning How To Deploy With Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-with-rest.md
Title: "Deploy models using online endpoints with REST APIs (preview)"
+ Title: "Deploy models using online endpoints with REST APIs"
description: Learn how to deploy models using online endpoints with REST APIs.
Last updated 12/22/2021 -+
-# Deploy models with REST (preview)
+# Deploy models with REST
-Learn how to use the Azure Machine Learning REST API to deploy models (preview).
-
+Learn how to use the Azure Machine Learning REST API to deploy models.
The REST API uses standard HTTP verbs to create, retrieve, update, and delete resources. The REST API works with any language or tool that can make HTTP requests. REST's straightforward structure makes it a good choice in scripting environments and for MLOps automation.
In this article, you learn how to use the new REST APIs to:
## Azure Machine Learning online endpoints
-Online endpoints (preview) allow you to deploy your model without having to create and manage the underlying infrastructure as well as Kubernetes clusters. In this article, you'll create an online endpoint and deployment, and validate it by invoking it. But first you'll have to register the assets needed for deployment, including model, code, and environment.
+Online endpoints allow you to deploy your model without having to create and manage the underlying infrastructure as well as Kubernetes clusters. In this article, you'll create an online endpoint and deployment, and validate it by invoking it. But first you'll have to register the assets needed for deployment, including model, code, and environment.
There are many ways to create an Azure Machine Learning online endpoint, [including the Azure CLI](how-to-deploy-managed-online-endpoints.md) and visually with [the studio](how-to-use-managed-online-endpoint-studio.md). The following example creates an online endpoint with the REST API.
If you aren't going to use the deployment, you should delete it with the below comm
* Learn how to deploy your model [using the Azure CLI](how-to-deploy-managed-online-endpoints.md). * Learn how to deploy your model [using studio](how-to-use-managed-online-endpoint-studio.md).
-* Learn to [Troubleshoot online endpoints deployment and scoring (preview)](how-to-troubleshoot-managed-online-endpoints.md)
-* Learn how to [Access Azure resources with a online endpoint and managed identity (preview)](how-to-access-resources-from-endpoints-managed-identities.md)
+* Learn to [Troubleshoot online endpoints deployment and scoring](how-to-troubleshoot-managed-online-endpoints.md)
+* Learn how to [Access Azure resources with an online endpoint and managed identity](how-to-access-resources-from-endpoints-managed-identities.md)
* Learn how to [monitor online endpoints](how-to-monitor-online-endpoints.md).
-* Learn [Safe rollout for online endpoints (preview)](how-to-safely-rollout-managed-endpoints.md).
-* [View costs for an Azure Machine Learning managed online endpoint (preview)](how-to-view-online-endpoints-costs.md).
-* [Managed online endpoints SKU list (preview)](reference-managed-online-endpoints-vm-sku-list.md).
-* Learn about limits on managed online endpoints in [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints-preview).
+* Learn [Safe rollout for online endpoints](how-to-safely-rollout-managed-endpoints.md).
+* [View costs for an Azure Machine Learning managed online endpoint](how-to-view-online-endpoints-costs.md).
+* [Managed online endpoints SKU list](reference-managed-online-endpoints-vm-sku-list.md).
+* Learn about limits on managed online endpoints in [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints).
machine-learning How To Deploy With Triton https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-with-triton.md
-+ ms.devlang: azurecli # High-performance serving with Triton Inference Server (Preview) ++ Learn how to use [NVIDIA Triton Inference Server](https://aka.ms/nvidia-triton-docs) in Azure Machine Learning with [Managed online endpoints](concept-endpoints.md#managed-online-endpoints).
Triton is multi-framework, open-source software that is optimized for inference.
In this article, you will learn how to deploy Triton and a model to a managed online endpoint. Information is provided on using both the CLI (command line) and Azure Machine Learning studio. - > [!NOTE] > [NVIDIA Triton Inference Server](https://aka.ms/nvidia-triton-docs) is an open-source third-party software that is integrated in Azure Machine Learning.
machine-learning How To Designer Import Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-designer-import-data.md
Last updated 10/21/2021 -+ # Import data into Azure Machine Learning designer In this article, you learn how to import your own data in the designer to create custom solutions. There are two ways you can import data into the designer:
-* **Azure Machine Learning datasets** - Register [datasets](concept-data.md#datasets) in Azure Machine Learning to enable advanced features that help you manage your data.
-* **Import Data component** - Use the [Import Data](algorithm-module-reference/import-data.md) component to directly access data from online datasources.
+* **Azure Machine Learning datasets** - Register [datasets](./v1/concept-data.md) in Azure Machine Learning to enable advanced features that help you manage your data.
+* **Import Data component** - Use the [Import Data](algorithm-module-reference/import-data.md) component to directly access data from online data sources.
[!INCLUDE [machine-learning-missing-ui](../../includes/machine-learning-missing-ui.md)] ## Use Azure Machine Learning datasets
-We recommend that you use [datasets](concept-data.md#datasets) to import data into the designer. When you register a dataset, you can take full advantage of advanced data features like [versioning and tracking](how-to-version-track-datasets.md) and [data monitoring](how-to-monitor-datasets.md).
+We recommend that you use [datasets](./v1/concept-data.md) to import data into the designer. When you register a dataset, you can take full advantage of advanced data features like [versioning and tracking](how-to-version-track-datasets.md) and [data monitoring](how-to-monitor-datasets.md).
### Register a dataset
-You can register existing datasets [programatically with the SDK](how-to-create-register-datasets.md#datasets-sdk) or [visually in Azure Machine Learning studio](how-to-connect-data-ui.md#create-datasets).
+You can register existing datasets [programmatically with the SDK](./v1/how-to-create-register-datasets.md#create-datasets-from-datastores) or [visually in Azure Machine Learning studio](how-to-connect-data-ui.md#create-datasets).
You can also register the output for any designer component as a dataset.
If you register a file dataset, the output port type of the dataset is **AnyDire
## Import data using the Import Data component
-While we recommend that you use datasets to import data, you can also use the [Import Data](algorithm-module-reference/import-data.md) component. The Import Data component skips registering your dataset in Azure Machine Learning and imports data directly from a [datastore](concept-data.md#datastores) or HTTP URL.
+While we recommend that you use datasets to import data, you can also use the [Import Data](algorithm-module-reference/import-data.md) component. The Import Data component skips registering your dataset in Azure Machine Learning and imports data directly from a [datastore](./v1/concept-data.md) or HTTP URL.
For detailed information on how to use the Import Data component, see the [Import Data reference page](algorithm-module-reference/import-data.md).
For detailed information on how to use the Import Data component, see the [Impor
## Supported sources
-This section lists the data sources supported by the designer. Data comes into the designer from either a datastore or from [tabular dataset](how-to-create-register-datasets.md#dataset-types).
+This section lists the data sources supported by the designer. Data comes into the designer from either a datastore or from a [tabular dataset](./v1/how-to-create-register-datasets.md#dataset-types).
### Datastore sources For a list of supported datastore sources, see [Access data in Azure storage services](how-to-access-data.md#supported-data-storage-service-types).
machine-learning How To Enable App Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-enable-app-insights.md
Last updated 01/04/2022 -+ # Monitor and collect data from ML web service endpoints In this article, you learn how to collect data from models deployed to web service endpoints in Azure Kubernetes Service (AKS) or Azure Container Instances (ACI). Use [Azure Application Insights](../azure-monitor/app/app-insights-overview.md) to collect the following data from an endpoint: * Output data
You can also enable Azure Application Insights from Azure Machine Learning studi
### Query logs for deployed models
-Logs of real-time endpoints are customer data. You can use the `get_logs()` function to retrieve logs from a previously deployed web service. The logs may contain detailed information about any errors that occurred during deployment.
+Logs of online endpoints are customer data. You can use the `get_logs()` function to retrieve logs from a previously deployed web service. The logs may contain detailed information about any errors that occurred during deployment.
```python from azureml.core import Workspace
Use Application Insights' [continuous export](../azure-monitor/app/export-teleme
In this article, you learned how to enable logging and view logs for web service endpoints. Try these articles for next steps:
-* [How to deploy a model to an AKS cluster](./how-to-deploy-azure-kubernetes-service.md)
+* [How to deploy a model to an AKS cluster](./v1/how-to-deploy-azure-kubernetes-service.md)
-* [How to deploy a model to Azure Container Instances](./how-to-deploy-azure-container-instance.md)
+* [How to deploy a model to Azure Container Instances](./v1/how-to-deploy-azure-container-instance.md)
-* [MLOps: Manage, deploy, and monitor models with Azure Machine Learning](./concept-model-management-and-deployment.md) to learn more about leveraging data collected from models in production. Such data can help to continually improve your machine learning process.
+* [MLOps: Manage, deploy, and monitor models with Azure Machine Learning](./concept-model-management-and-deployment.md) to learn more about leveraging data collected from models in production. Such data can help to continually improve your machine learning process.
machine-learning How To Enable Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-enable-data-collection.md
Last updated 10/21/2021 --+ # Collect data from models in production + This article shows how to collect data from an Azure Machine Learning model deployed on an Azure Kubernetes Service (AKS) cluster. The collected data is then stored in Azure Blob storage. Once collection is enabled, the data you collect helps you:
machine-learning How To Enable Studio Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-enable-studio-virtual-network.md
Last updated 11/19/2021--+ # Use Azure Machine Learning studio in an Azure virtual network
In this article, you learn how to:
> * [Virtual network overview](how-to-network-security-overview.md) > * [Secure the workspace resources](how-to-secure-workspace-vnet.md) > * [Secure the training environment](how-to-secure-training-vnet.md)
-> * [Secure the inference environment](how-to-secure-inferencing-vnet.md)
+> * For securing inference, see the following documents:
+> * If using CLI v1 or SDK v1 - [Secure inference environment](how-to-secure-inferencing-vnet.md)
+> * If using CLI v2 or SDK v2 - [Network isolation for managed online endpoints](how-to-secure-online-endpoint.md)
> * [Use custom DNS](how-to-custom-dns.md) > * [Use a firewall](how-to-access-azureml-behind-firewall.md) >
This article is part of a series on securing an Azure Machine Learning workflow.
* [Virtual network overview](how-to-network-security-overview.md) * [Secure the workspace resources](how-to-secure-workspace-vnet.md) * [Secure the training environment](how-to-secure-training-vnet.md)
-* [Secure the inference environment](how-to-secure-inferencing-vnet.md)
+* For securing inference, see the following documents:
+ * If using CLI v1 or SDK v1 - [Secure inference environment](how-to-secure-inferencing-vnet.md)
+ * If using CLI v2 or SDK v2 - [Network isolation for managed online endpoints](how-to-secure-online-endpoint.md)
* [Use custom DNS](how-to-custom-dns.md) * [Use a firewall](how-to-access-azureml-behind-firewall.md)
machine-learning How To Generate Automl Training Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-generate-automl-training-code.md
+ Last updated 02/16/2022 # View automated ML model's training code (preview) + [!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)] In this article, you learn how to view the generated training code from any automated machine learning trained model.
However, in order to load that model in a notebook in your custom local Conda en
## Next steps * Learn more about [how and where to deploy a model](how-to-deploy-and-where.md).
-* See how to [enable interpretability features](how-to-machine-learning-interpretability-automl.md) specifically within automated ML experiments.
+* See how to [enable interpretability features](how-to-machine-learning-interpretability-automl.md) specifically within automated ML experiments.
machine-learning How To High Availability Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-high-availability-machine-learning.md
description: Learn how to plan for disaster recovery and maintain business conti
+
Runs in Azure Machine Learning are defined by a run specification. This specific
* Manage configurations as code.
- * Avoid hardcoded references to the workspace. Instead, configure a reference to the workspace instance using a [config file](how-to-configure-environment.md#workspace) and use [Workspace.from_config()](/python/api/azureml-core/azureml.core.workspace.workspace#remarks) to initialize the workspace. To automate the process, use the [Azure CLI extension for machine learning](reference-azure-machine-learning-cli.md) command [az ml folder attach](/cli/azure/ml(v1)/folder#ext_azure_cli_ml_az_ml_folder_attach).
+ * Avoid hardcoded references to the workspace. Instead, configure a reference to the workspace instance using a [config file](how-to-configure-environment.md#workspace) and use [Workspace.from_config()](/python/api/azureml-core/azureml.core.workspace.workspace#remarks) to initialize the workspace. To automate the process, use the [Azure CLI extension for machine learning](v1/reference-azure-machine-learning-cli.md) command [az ml folder attach](/cli/azure/ml(v1)/folder#ext_azure_cli_ml_az_ml_folder_attach).
* Use run submission helpers such as [ScriptRunConfig](/python/api/azureml-core/azureml.core.scriptrunconfig) and [Pipeline](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipeline(class)). * Use [Environments.save_to_directory()](/python/api/azureml-core/azureml.core.environment(class)#save-to-directory-path--overwrite-false-) to save your environment definitions. * Use a Dockerfile if you use custom Docker images.
Azure Machine Learning cannot sync or recover artifacts or metadata between work
Depending on your recovery approach, you may need to copy artifacts such as dataset and model objects between the workspaces to continue your work. Currently, the portability of artifacts between workspaces is limited. We recommend managing artifacts as code where possible so that they can be recreated in the failover instance.
-The following artifacts can be exported and imported between workspaces by using the [Azure CLI extension for machine learning](reference-azure-machine-learning-cli.md):
+The following artifacts can be exported and imported between workspaces by using the [Azure CLI extension for machine learning](v1/reference-azure-machine-learning-cli.md):
| Artifact | Export | Import | | -- | -- | -- |
machine-learning How To Identity Based Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-identity-based-data-access.md
Last updated 01/25/2022--
-# Customer intent: As an experienced Python developer, I need to make my data in Azure Storage available to my compute for training my machine learning models.
+
+#Customer intent: As an experienced Python developer, I need to make my data in Azure Storage available to my compute for training my machine learning models.
# Connect to storage by using identity-based data access
There are two scenarios in which you can apply identity-based data access in Azu
### Accessing storage services
-You can connect to storage services via identity-based data access with Azure Machine Learning datastores or [Azure Machine Learning datasets](how-to-create-register-datasets.md).
+You can connect to storage services via identity-based data access with Azure Machine Learning datastores or [Azure Machine Learning datasets](./v1/how-to-create-register-datasets.md).
Your authentication credentials are usually kept in a datastore, which is used to ensure you have permission to access the storage service. When these credentials are registered via datastores, any user with the workspace Reader role can retrieve them. That scale of access can be a security concern for some organizations. [Learn more about the workspace Reader role.](how-to-assign-roles.md#default-roles)
The same behavior applies when you:
### Model training on private data
-Certain machine learning scenarios involve training models with private data. In such cases, data scientists need to run training workflows without being exposed to the confidential input data. In this scenario, a [managed identity](how-to-use-managed-identities.md) of the training compute is used for data access authentication. This approach allows storage admins to grant Storage Blob Data Reader access to the managed identity that the training compute uses to run the training job. The individual data scientists don't need to be granted access. For more information, see [Set up managed identity on a compute cluster](how-to-create-attach-compute-cluster.md#managed-identity).
+Certain machine learning scenarios involve training models with private data. In such cases, data scientists need to run training workflows without being exposed to the confidential input data. In this scenario, a [managed identity](how-to-use-managed-identities.md) of the training compute is used for data access authentication. This approach allows storage admins to grant Storage Blob Data Reader access to the managed identity that the training compute uses to run the training job. The individual data scientists don't need to be granted access. For more information, see [Set up managed identity on a compute cluster](how-to-create-attach-compute-cluster.md#set-up-managed-identity).
## Prerequisites
Certain machine learning scenarios involve training models with private data. In
- [Azure Blob Storage](../storage/blobs/storage-blobs-overview.md) - [Azure Data Lake Storage Gen1](../data-lake-store/index.yml) - [Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md)
- - [Azure SQL Database](/azure/azure-sql/database/sql-database-paas-overview)
- The [Azure Machine Learning SDK for Python](/python/api/overview/azure/ml/install).
adls2_dstore = Datastore.register_azure_data_lake_gen2(workspace=ws,
account_name='myadls2') ```
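A parallel sketch for Azure Blob storage: omitting the account key or SAS token makes the datastore identity-based. The datastore, container, and account names here are placeholders:

```python
from azureml.core import Datastore, Workspace

ws = Workspace.from_config()

# Register a credential-less blob datastore; because no account key or SAS token
# is supplied, identity-based data access is used when the data is read
blob_dstore = Datastore.register_azure_blob_container(
    workspace=ws,
    datastore_name="credentialless_blob",
    container_name="mycontainer",
    account_name="mystorageaccount",
)
```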
-### Azure SQL database
-
-For an Azure SQL database, use [register_azure_sql_database()](/python/api/azureml-core/azureml.core.datastore.datastore#register-azure-sql-database-workspace--datastore-name--server-name--database-name--tenant-id-none--client-id-none--client-secret-none--resource-url-none--authority-url-none--endpoint-none--overwrite-false--username-none--password-none--subscription-id-none--resource-group-none--grant-workspace-access-false-kwargs-) to register a datastore that connects to an Azure SQL database storage.
-
-The following code creates and registers the `credentialless_sqldb` datastore to the `ws` workspace and assigns it to the variable, `sqldb_dstore`. This datastore accesses the database `mydb` in the `myserver` SQL DB server.
-
-```python
-# Create a sqldatabase datastore without credentials
-
-sqldb_dstore = Datastore.register_azure_sql_database(workspace=ws,
- datastore_name='credentialless_sqldb',
- server_name='myserver',
- database_name='mydb')
-
-```
- ## Storage access permissions
Identity-based data access supports connections to **only** the following storag
* Azure Blob Storage * Azure Data Lake Storage Gen1 * Azure Data Lake Storage Gen2
-* Azure SQL Database
To access these storage services, you must have at least [Storage Blob Data Reader](../role-based-access-control/built-in-roles.md#storage-blob-data-reader) access to the storage account. Only storage account owners can [change your access level via the Azure portal](../storage/blobs/assign-azure-role-data-access.md). If you prefer not to use your user identity (Azure Active Directory), you can also grant a workspace managed system identity (MSI) permission to create the datastore. To do so, you must have Owner permissions to the storage account and add the `grant_workspace_access=True` parameter to your data register method.
-If you're training a model on a remote compute target and want to access the data for training, the compute identity must be granted at least the Storage Blob Data Reader role from the storage service. Learn how to [set up managed identity on a compute cluster](how-to-create-attach-compute-cluster.md#managed-identity).
+If you're training a model on a remote compute target and want to access the data for training, the compute identity must be granted at least the Storage Blob Data Reader role from the storage service. Learn how to [set up managed identity on a compute cluster](how-to-create-attach-compute-cluster.md#set-up-managed-identity).
## Work with virtual networks By default, Azure Machine Learning can't communicate with a storage account that's behind a firewall or in a virtual network.
-You can configure storage accounts to allow access only from within specific virtual networks. This configuration requires additional steps to ensure data isn't leaked outside of the network. This behavior is the same for credential-based data access. For more information, see [How to configure virtual network scenarios](how-to-access-data.md#virtual-network).
+You can configure storage accounts to allow access only from within specific virtual networks. This configuration requires extra steps to ensure data isn't leaked outside of the network. This behavior is the same for credential-based data access. For more information, see [How to configure virtual network scenarios](how-to-access-data.md#virtual-network).
If your storage account has virtual network settings, those settings dictate what identity type and permissions are needed for data access. For example, for data preview and data profile, the virtual network settings determine what type of identity is used to authenticate data access.
If your storage account has virtual network settings, that dictates what identit
## Use data in storage
-We recommend that you use [Azure Machine Learning datasets](how-to-create-register-datasets.md) when you interact with your data in storage with Azure Machine Learning.
+We recommend that you use [Azure Machine Learning datasets](./v1/how-to-create-register-datasets.md) when you interact with your data in storage with Azure Machine Learning.
> [!IMPORTANT] > Datasets using identity-based data access are not supported for [automated ML experiments](how-to-configure-auto-train.md). Datasets package your data into a lazily evaluated consumable object for machine learning tasks like training. Also, with datasets you can [download or mount](how-to-train-with-datasets.md#mount-vs-download) files of any format from Azure storage services like Azure Blob Storage and Azure Data Lake Storage to a compute target.
-To create a dataset, you can reference paths from datastores that also use identity-based data access .
+To create a dataset, you can reference paths from datastores that also use identity-based data access.
* If your underlying storage account type is Blob or ADLS Gen 2, your user identity needs the Storage Blob Data Reader role.
* If your underlying storage is ADLS Gen 1, permissions can be set via the storage's Access Control List (ACL).
Another option is to skip datastore creation and create datasets directly from s
blob_dset = Dataset.File.from_files('https://myblob.blob.core.windows.net/may/keras-mnist-fashion/') ```
-When you submit a training job that consumes a dataset created with identity-based data access, the managed identity of the training compute is used for data access authentication. Your Azure Active Directory token isn't used. For this scenario, ensure that the managed identity of the compute is granted at least the Storage Blob Data Reader role from the storage service. For more information, see [Set up managed identity on compute clusters](how-to-create-attach-compute-cluster.md#managed-identity).
+When you submit a training job that consumes a dataset created with identity-based data access, the managed identity of the training compute is used for data access authentication. Your Azure Active Directory token isn't used. For this scenario, ensure that the managed identity of the compute is granted at least the Storage Blob Data Reader role from the storage service. For more information, see [Set up managed identity on compute clusters](how-to-create-attach-compute-cluster.md#set-up-managed-identity).
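As a hedged sketch of submitting such a job with the SDK v1, the dataset is mounted on the compute and the cluster's managed identity performs the read. The dataset, script, and compute target names here are placeholders:

```python
from azureml.core import Dataset, Experiment, ScriptRunConfig, Workspace

ws = Workspace.from_config()

# A file dataset previously created from an identity-based datastore or URL
dataset = Dataset.get_by_name(ws, name="keras-mnist-fashion")

# The compute cluster's managed identity, not your Azure AD token, reads the data
src = ScriptRunConfig(
    source_directory="./src",
    script="train.py",
    arguments=["--data-folder", dataset.as_mount()],
    compute_target="cpu-cluster",
)

run = Experiment(ws, "identity-based-data-access").submit(src)
run.wait_for_completion(show_output=True)
```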
## Access data for training jobs on compute clusters (preview) [!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)] - When training on [Azure Machine Learning compute clusters](how-to-create-attach-compute-cluster.md#what-is-a-compute-cluster), you can authenticate to storage with your Azure Active Directory token. This authentication mode allows you to:
This authentication mode allows you to:
> [!WARNING] > This functionality has the following limitations
-> * Feature is only supported for experiments submitted via the [Azure Machine Learning CLI v2 (preview)](how-to-configure-cli.md)
+> * Feature is only supported for experiments submitted via the [Azure Machine Learning CLI](how-to-configure-cli.md)
> * Only CommandJobs, and PipelineJobs with CommandSteps and AutoMLSteps are supported > * User identity and compute managed identity cannot be used for authentication within same job.
identity:
## Next steps
-* [Create an Azure Machine Learning dataset](how-to-create-register-datasets.md)
+* [Create an Azure Machine Learning dataset](./v1/how-to-create-register-datasets.md)
* [Train with datasets](how-to-train-with-datasets.md) * [Create a datastore with key-based data access](how-to-access-data.md)
machine-learning How To Inference Onnx Automl Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-inference-onnx-automl-image-models.md
Last updated 10/18/2021+ # Make predictions with ONNX on computer vision models from AutoML + In this article, you learn how to use Open Neural Network Exchange (ONNX) to make predictions on computer vision models generated from automated machine learning (AutoML) in Azure Machine Learning. To use ONNX for predictions, you need to:
display_detections(img, boxes.copy(), labels, scores, masks.copy(),
## Next steps * [Learn more about computer vision tasks in AutoML](how-to-auto-train-image-models.md) * [Troubleshoot AutoML experiments](how-to-troubleshoot-auto-ml.md)-
machine-learning How To Kubernetes Instance Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-kubernetes-instance-type.md
- Title: How to create and select Kubernetes instance types (preview)
-description: Create and select Azure Arc-enabled Kubernetes cluster instance types for training and inferencing workloads in Azure Machine Learning.
----- Previously updated : 10/21/2021----
-# Create and select Kubernetes instance types (preview)
-
-Learn how to create and select Kubernetes instances for Azure Machine Learning training and inferencing workloads on Azure Arc-enabled Kubernetes clusters.
-
-## What are instance types?
-
-Instance types are an Azure Machine Learning concept that allows targeting certain types of compute nodes for training and inference workloads. For an Azure VM, an example for an instance type is `STANDARD_D2_V3`.
-
-In Kubernetes clusters, instance types are defined by two elements:
-
-* [nodeSelector](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector) - A `nodeSelector` lets you specify which node a pod should run on. The node must have a corresponding label.
-* [resources](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/) - The `resources` section defines the compute resources (CPU, memory and Nvidia GPU) for the pod.
-
-## Create instance types
-
-Instance types are represented in a custom resource definition (CRD) that is installed with the Azure Machine Learning extension.
-
-### Create a single instance type
-
-To create a new instance type, create a new custom resource for the instance type CRD.
-
-For example, given the CRD `my_instance_type.yaml`:
-
-```yaml
-apiVersion: amlarc.azureml.com/v1alpha1
-kind: InstanceType
-metadata:
- name: myinstancetypename
-spec:
- nodeSelector:
- mylabel: mylabelvalue
- resources:
- limits:
- cpu: "1"
- nvidia.com/gpu: 1
- memory: "2Gi"
- requests:
- cpu: "700m"
- memory: "1500Mi"
-```
-
-Use the `kubectl apply` command to create a new instance type.
-
-```bash
-kubectl apply -f my_instance_type.yaml
-```
-
-This operation creates an instance type with the following properties:
--- Pods are scheduled only on nodes with label `mylabel: mylabelvalue`.-- Pods are assigned resource requests of `700m` CPU and `1500Mi` memory. -- Pods are assigned resource limits of `1` CPU, `2Gi` memory and `1` Nvidia GPU.-
-> [!NOTE]
-> When you specify your CRD, consider make note of the following conventions:
-> - Nvidia GPU resources are only specified in the `limits` section. For more information, see the [Kubernetes documentation](https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/#using-device-plugins).
-> - CPU and memory resources are string values.
-> - CPU can be specified in millicores (`100m`) or in full numbers (`"1"` which is equivalent to `1000m`).
-> - Memory can be specified as a full number + suffix (`1024Mi` for `1024 MiB`).
-
-### Create multiple instance types
-
-You can also use CRDs to create multiple instance types at once.
-
-For example, given the CRD `my_instance_type.yaml`:
-
-```yaml
-apiVersion: amlarc.azureml.com/v1alpha1
-kind: InstanceTypeList
-items:
- - metadata:
- name: cpusmall
- spec:
- resources:
- limits:
- cpu: "100m"
- nvidia.com/gpu: 0
- memory: "1Gi"
- requests:
- cpu: "100m"
- memory: "10Mi"
-
- - metadata:
- name: defaultinstancetype
- spec:
- resources:
- limits:
- cpu: "1"
- nvidia.com/gpu: 0
- memory: "1Gi"
- requests:
- cpu: "1"
- memory: "1Gi"
-```
-
-Use the `kubectl apply` command to create multiple instance types.
-
-```bash
-kubectl apply -f my_instance_type.yaml
-```
-
-This operation creates two instance types, one called `defaultinstancetype` and the other `cpusmall` with different resource specifications. For more information on default instance types, see the [default instance types](#default-instance-types) section of this document.
-
-## Default instance types
-
-When a training or inference workload is submitted without an instance type, it uses the default instance type.
-
-To specify a default instance type for a Kubernetes cluster, create an instance type with name `defaultinstancetype` and define respective `nodeSelector` and `resources` properties like any other instance type. The instance type is automatically recognized as the default.
-
-If no default instance type is defined, the following default behavior applies:
-
-* No nodeSelector is applied, meaning the pod can get scheduled on any node.
-* The workload's pods are assigned the default resources:
-
- ```yaml
- resources:
- limits:
- cpu: "0.6"
- memory: "1536Mi"
- requests:
- cpu: "0.6"
- memory: "1536Mi"
- ```
-
-> [!IMPORTANT]
-> The default instance type will not appear as an InstanceType custom resource in the cluster, but it will appear in all clients (Azure Machine Learning studio, Azure CLI, Python SDK).
-
-## Select an instance type for training workloads
-
-To select an instance type for a training job using Azure Machine Learning 2.0 CLI, specify its name as part of the `compute` section in the YAML specification file.
-
-```yaml
-command: python -c "print('Hello world!')"
-environment:
- docker:
- image: python
-compute:
- target: azureml:<compute_target_name>
- instance_type: <instance_type_name>
-```
-
-In the example above, replace `<compute_target_name>` with the name of your Kubernetes compute target and `<instance_type_name>` with the name of the instance type you wish to select.
-
-> [!TIP]
-> The default instance type purposefully uses little resources. To ensure all machine learning workloads run successfully with the adequate resources, it is highly recommended to create custom instance types.
-
-## Select an instance type for inferencing workloads
-
-To select an instance type for inferencing workloads using the Azure Machine Learning 2.0 CLI, specify its name as part of the `deployments` section. For example:
-
-```yaml
-type: online
-auth_mode: key
-target: azureml:<your compute target name>
-traffic:
- blue: 100
-
-deployments:
- - name: blue
- app_insights_enabled: true
- model:
- name: sklearn_mnist_model
- version: 1
- local_path: ./model/sklearn_mnist_model.pkl
- code_configuration:
- code:
- local_path: ./script/
- scoring_script: score.py
- instance_type: <instance_type_name>
- environment:
- name: sklearn-mnist-env
- version: 1
- path: .
- conda_file: file:./model/conda.yml
- docker:
- image: mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210727.v1
-```
-
-In the example above, replace `<instance_type_name>` with the name of the instance type you wish to select.
-
-## Next steps
--- [Configure Azure Arc-enabled machine learning (preview)](how-to-attach-arc-kubernetes.md)-- [Train models (create jobs) with the CLI (v2)](how-to-train-cli.md)-- [Deploy and score a machine learning model by using an online endpoint (preview)](how-to-deploy-managed-online-endpoints.md)
machine-learning How To Link Synapse Ml Workspaces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-link-synapse-ml-workspaces.md
Last updated 10/21/2021---
-# Customer intent: As a workspace administrator, I want to link Azure Synapse workspaces and Azure Machine Learning workspaces and attach Apache Spark pools for a unified data wrangling experience.
+
+#Customer intent: As a workspace administrator, I want to link Azure Synapse workspaces and Azure Machine Learning workspaces and attach Apache Spark pools for a unified data wrangling experience.
# Link Azure Synapse Analytics and Azure Machine Learning workspaces and attach Apache Spark pools (preview) + In this article, you learn how to create a linked service that links your [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md) workspace and [Azure Machine Learning workspace](concept-workspace.md). With your Azure Machine Learning workspace linked with your Azure Synapse workspace, you can attach an Apache Spark pool, powered by Azure Synapse Analytics, as a dedicated compute for data wrangling at scale or conduct model training, all from the same Python notebook.
ws.compute_targets['Synapse Spark pool alias']
* [How to data wrangle with Azure Synapse (preview)](how-to-data-prep-synapse-spark-pool.md). * [How to use Apache Spark in your machine learning pipeline with Azure Synapse (preview)](how-to-use-synapsesparkstep.md) * [Train a model](how-to-set-up-training-targets.md).
-* [How to securely integrate Azure Synapse and Azure Machine Learning workspaces](how-to-private-endpoint-integration-synapse.md).
+* [How to securely integrate Azure Synapse and Azure Machine Learning workspaces](how-to-private-endpoint-integration-synapse.md).
machine-learning How To Log Pipelines Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-log-pipelines-application-insights.md
Last updated 10/21/2021 -+ # Collect machine learning pipeline log files in Application Insights for alerts and debugging The [OpenCensus](https://opencensus.io/quickstart/python/) Python library can be used to route logs to Application Insights from your scripts. Aggregating logs from pipeline runs in one place allows you to build queries and diagnose issues. Using Application Insights will allow you to track logs over time and compare pipeline logs across runs.
Some of the queries below use 'customDimensions.Level'. These severity levels co
Once you have logs in your Application Insights instance, they can be used to set [Azure Monitor alerts](../azure-monitor/alerts/alerts-overview.md#what-you-can-alert-on) based on query results.
-You can also add results from queries to an [Azure Dashboard](../azure-monitor/app/tutorial-app-dashboards.md#add-logs-query) for additional insights.
+You can also add results from queries to an [Azure Dashboard](../azure-monitor/app/tutorial-app-dashboards.md#add-logs-query) for additional insights.
machine-learning How To Log View Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-log-view-metrics.md
Title: Log & view metrics and log files
-description: Enable logging on your ML training runs to monitor real-time run metrics, and to help diagnose errors and warnings.
+description: Enable logging on your ML training runs to monitor real-time run metrics with MLflow, and to help diagnose errors and warnings.
Previously updated : 04/19/2021 Last updated : 04/28/2022 -+ # Log & view metrics and log files
-Log real-time information using both the default Python logging package and Azure Machine Learning Python SDK-specific functionality. You can log locally and send logs to your workspace in the portal.
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning Python SDK you are using:"]
+> * [v1](./v1/how-to-log-view-metrics.md)
+> * [v2 (preview)](how-to-log-view-metrics.md)
+
+Log real-time information using [MLflow Tracking](https://www.mlflow.org/docs/latest/tracking.html). You can log models, metrics, and artifacts with MLflow, which supports portability from local runs to the cloud.
+
+> [!IMPORTANT]
+> Unlike the Azure Machine Learning SDK v1, there is no logging functionality in the SDK v2 preview.
Logs can help you diagnose errors and warnings, or track performance metrics like parameters and model performance. In this article, you learn how to enable logging in the following scenarios: > [!div class="checklist"]
-> * Log run metrics
+> * Log training run metrics
> * Interactive training sessions
-> * Submitting training jobs using ScriptRunConfig
> * Python native `logging` settings > * Logging from additional sources
Logs can help you diagnose errors and warnings, or track performance metrics lik
> [!TIP] > This article shows you how to monitor the model training process. If you're interested in monitoring resource usage and events from Azure Machine learning, such as quotas, completed training runs, or completed model deployments, see [Monitoring Azure Machine Learning](monitor-azure-machine-learning.md).
-## Data types
-
-You can log multiple data types including scalar values, lists, tables, images, directories, and more. For more information, and Python code examples for different data types, see the [Run class reference page](/python/api/azureml-core/azureml.core.run%28class%29).
+## Prerequisites
-## Logging run metrics
+* To use Azure Machine Learning, you must have an Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
+* You must have an Azure Machine Learning workspace. A workspace is created in [Install, set up, and use the CLI (v2)](how-to-configure-cli.md).
+* You must have the `azureml-core`, `mlflow`, and `azureml-mlflow` packages installed. If you don't, use the following command to install them in your development environment:
-Use the following methods in the logging APIs to influence the metrics visualizations. Note the [service limits](./resource-limits-quotas-capacity.md#metrics) for these logged metrics.
+ ```bash
+ pip install azureml-core mlflow azureml-mlflow
+ ```
-|Logged Value|Example code| Format in portal|
-|-|-|-|
-|Log an array of numeric values| `run.log_list(name='Fibonacci', value=[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89])`|single-variable line chart|
-|Log a single numeric value with the same metric name repeatedly used (like from within a for loop)| `for i in tqdm(range(-10, 10)): run.log(name='Sigmoid', value=1 / (1 + np.exp(-i))) angle = i / 2.0`| Single-variable line chart|
-|Log a row with 2 numerical columns repeatedly|`run.log_row(name='Cosine Wave', angle=angle, cos=np.cos(angle)) sines['angle'].append(angle) sines['sine'].append(np.sin(angle))`|Two-variable line chart|
-|Log table with 2 numerical columns|`run.log_table(name='Sine Wave', value=sines)`|Two-variable line chart|
-|Log image|`run.log_image(name='food', path='./breadpudding.jpg', plot=None, description='desert')`|Use this method to log an image file or a matplotlib plot to the run. These images will be visible and comparable in the run record|
+## Data types
-## Logging with MLflow
+The following table describes how to log specific value types:
-We recommend logging your models, metrics and artifacts with MLflow as it's open source and it supports local mode to cloud portability. The following table and code examples show how to use MLflow to log metrics and artifacts from your training runs.
-[Learn more about MLflow's logging methods and design patterns](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.log_artifact).
+|Logged Value|Example code| Notes|
+|-|-|-|
+|Log a numeric value (int or float) | `mlflow.log_metric('my_metric', 1)`| |
+|Log a boolean value | `mlflow.log_metric('my_metric', 0)`| 1 = True, 0 = False|
+|Log a string | `mlflow.log_text('foo', 'my_string')`| Logged as an artifact|
+|Log numpy metrics or PIL image objects|`mlflow.log_image(img, 'figure.png')`||
+|Log matplotlib plot or image file|`mlflow.log_figure(fig, "figure.png")`||
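The calls in this table can be combined in a single run. The following is a minimal sketch; the metric names and file names are illustrative, and it assumes `numpy`, `matplotlib`, and `Pillow` are installed alongside `mlflow`:

```python
import mlflow
import numpy as np
import matplotlib.pyplot as plt

with mlflow.start_run():
    mlflow.log_metric("accuracy", 0.91)                  # numeric value
    mlflow.log_metric("converged", 1)                    # boolean logged as 0/1
    mlflow.log_text("free-form run notes", "notes.txt")  # string, stored as an artifact

    img = np.random.randint(0, 255, (32, 32, 3), dtype=np.uint8)
    mlflow.log_image(img, "sample.png")                  # numpy image object

    fig, ax = plt.subplots()
    ax.plot([0, 1, 2], [0, 1, 4])
    mlflow.log_figure(fig, "figure.png")                 # matplotlib figure
```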
-Be sure to install the `mlflow` and `azureml-mlflow` pip packages to your workspace.
+## Log a training run with MLflow
-```conda
-pip install mlflow
-pip install azureml-mlflow
-```
+To set up for logging with MLflow, import `mlflow` and set the tracking URI:
-Set the MLflow tracking URI to point at the Azure Machine Learning backend to ensure that your metrics and artifacts are logged to your workspace.
+> [!TIP]
+> You do not need to set the tracking URI when using a notebook running on an Azure Machine Learning compute instance.
```python
from azureml.core import Workspace
import mlflow
-from mlflow.tracking import MlflowClient
ws = Workspace.from_config()
+# Set the tracking URI to the Azure ML backend
+# Not needed if running on Azure ML compute instance
+# or compute cluster
mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri())
+```
+### Interactive runs
+
+When training interactively, such as in a Jupyter Notebook, use the following pattern:
+
+1. Create or set the active experiment.
+1. Start the run.
+1. Use logging methods to log metrics and other information.
+1. End the run.
+
+For example, the following code snippet demonstrates creating an experiment and then logging metrics during a run:
+
+```python
+from mlflow.tracking import MlflowClient
+
+# Create a new experiment if one doesn't already exist
mlflow.create_experiment("mlflow-experiment")
-mlflow.set_experiment("mlflow-experiment")
+
+# Start the run, log metrics, end the run
mlflow_run = mlflow.start_run()
+mlflow.log_metric('mymetric', 1)
+mlflow.end_run()
```
-|Logged Value|Example code| Notes|
-|-|-|-|
-|Log a numeric value (int or float) | `mlflow.log_metric('my_metric', 1)`| |
-|Log a boolean value | `mlflow.log_metric('my_metric', 0)`| 0 = True, 1 = False|
-|Log a string | `mlflow.log_text('foo', 'my_string')`| Logged as an artifact|
-|Log numpy metrics or PIL image objects|`mlflow.log_image(img, 'figure.png')`||
-|Log matlotlib plot or image file|` mlflow.log_figure(fig, "figure.png")`||
+> [!TIP]
+> Technically you don't have to call `start_run()`; a new run is created automatically if one doesn't exist when you call a logging API. In that case, you can use `mlflow.active_run()` to retrieve the run. However, the `mlflow.ActiveRun` object returned by `mlflow.active_run()` won't contain items like parameters, metrics, etc. For more information, see [mlflow.active_run()](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.active_run).
-## View run metrics via the SDK
-You can view the metrics of a trained model using `run.get_metrics()`.
+You can also use the context manager paradigm:
```python
-from azureml.core import Run
-run = Run.get_context()
-run.log('metric-name', metric_value)
+from mlflow.tracking import MlflowClient
-metrics = run.get_metrics()
-# metrics is of type Dict[str, List[float]] mapping metric names
-# to a list of the values for that metric in the given run.
+# Create a new experiment if one doesn't already exist
+mlflow.create_experiment("mlflow-experiment")
-metrics.get('metric-name')
-# list of metrics in the order they were recorded
+# Start the run, log metrics, end the run
+with mlflow.start_run() as run:
+ # Run started when context manager is entered, and ended when context manager exits
+ mlflow.log_metric('mymetric', 1)
+ mlflow.log_metric('anothermetric',1)
+ pass
```
-You can also access run information using MLflow through the run object's data and info properties. See the [MLflow.entities.Run object](https://mlflow.org/docs/latest/python_api/mlflow.entities.html#mlflow.entities.Run) documentation for more information.
+For more information on MLflow logging APIs, see the [MLflow reference](https://www.mlflow.org/docs/latest/python_api/mlflow.html#mlflow.log_artifact).
+
+### Remote runs
+
+For remote training runs, the tracking URI and experiment are set automatically. Otherwise, the options for logging the run are the same as for interactive logging:
+
+* Call `mlflow.start_run()`, log information, and then call `mlflow.end_run()`.
+* Use the context manager paradigm with `mlflow.start_run()`.
+* Call a logging API such as `mlflow.log_metric()`, which will start a run if one doesn't already exist.
+
+## Log a model
-After the run completes, you can retrieve it using the MlFlowClient().
+To save the model from a training run, use the `log_model()` API for the framework you're working with. For example, [mlflow.sklearn.log_model()](https://mlflow.org/docs/latest/python_api/mlflow.sklearn.html#mlflow.sklearn.log_model). For frameworks that MLflow doesn't support, see [Convert custom models to MLflow](how-to-convert-custom-model-to-mlflow.md).
+
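A minimal sketch of logging a scikit-learn model during a run; the model and training data here are placeholders, not part of the article:

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

with mlflow.start_run():
    model = LogisticRegression(max_iter=200).fit(X, y)
    # Store the trained model as an MLflow model artifact under "model/"
    mlflow.sklearn.log_model(model, artifact_path="model")
```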
+## View run information
+
+You can view the logged information using MLflow through the [MLflow.entities.Run](https://mlflow.org/docs/latest/python_api/mlflow.entities.html#mlflow.entities.Run) object. After a training job completes, you can retrieve it using the [MlFlowClient()](https://mlflow.org/docs/latest/python_api/mlflow.tracking.html#mlflow.tracking.MlflowClient):
```python
from mlflow.tracking import MlflowClient
Log files are an essential resource for debugging the Azure ML workloads. After
#### user_logs folder
-This folder contains information about the user generated logs. This folder is open by default, and the **std_log.txt** log is selected. The **std_log.txt** is where your code's logs (for example, print statements) show up. This file contains `stdout` log and `stderr` logs from your control script and training script, one per process. In the majority of cases, you will monitor the logs here.
+This folder contains information about the user generated logs. This folder is open by default, and the **std_log.txt** log is selected. The **std_log.txt** is where your code's logs (for example, print statements) show up. This file contains `stdout` log and `stderr` logs from your control script and training script, one per process. In most cases, you'll monitor the logs here.
#### system_logs folder
-This folder contains the logs generated by Azure Machine Learning and it will be closed by default.The logs generated by the system are grouped into different folders, based on the stage of the job in the runtime.
+This folder contains the logs generated by Azure Machine Learning and it will be closed by default. The logs generated by the system are grouped into different folders, based on the stage of the job in the runtime.
#### Other folders
-For jobs training on multi-compute clusters, logs are present for each node IP. The structure for each node is the same as single node jobs. There is one more logs folder for overall execution, stderr, and stdout logs.
+For jobs training on multi-compute clusters, logs are present for each node IP. The structure for each node is the same as single node jobs. There's one more logs folder for overall execution, stderr, and stdout logs.
-Azure Machine Learning logs information from various sources during training, such as AutoML or the Docker container that runs the training job. Many of these logs are not documented. If you encounter problems and contact Microsoft support, they may be able to use these logs during troubleshooting.
+Azure Machine Learning logs information from various sources during training, such as AutoML or the Docker container that runs the training job. Many of these logs aren't documented. If you encounter problems and contact Microsoft support, they may be able to use these logs during troubleshooting.
## Interactive logging session
-Interactive logging sessions are typically used in notebook environments. The method [Experiment.start_logging()](/python/api/azureml-core/azureml.core.experiment%28class%29#start-logging--args-kwargs-) starts an interactive logging session. Any metrics logged during the session are added to the run record in the experiment. The method [run.complete()](/python/api/azureml-core/azureml.core.run%28class%29#complete--set-status-true-) ends the sessions and marks the run as completed.
-
-## ScriptRun logs
-
-In this section, you learn how to add logging code inside of runs created when configured with ScriptRunConfig. You can use the [**ScriptRunConfig**](/python/api/azureml-core/azureml.core.scriptrunconfig) class to encapsulate scripts and environments for repeatable runs. You can also use this option to show a visual Jupyter Notebooks widget for monitoring.
-
-This example performs a parameter sweep over alpha values and captures the results using the [run.log()](/python/api/azureml-core/azureml.core.run%28class%29#log-name--value--description-) method.
-
-1. Create a training script that includes the logging logic, `train.py`.
-
- [!code-python[](~/MachineLearningNotebooks/how-to-use-azureml/training/train-on-local/train.py)]
--
-1. Submit the ```train.py``` script to run in a user-managed environment. The entire script folder is submitted for training.
-
- [!notebook-python[] (~/MachineLearningNotebooks/how-to-use-azureml/training/train-on-local/train-on-local.ipynb?name=src)]
--
- [!notebook-python[] (~/MachineLearningNotebooks/how-to-use-azureml/training/train-on-local/train-on-local.ipynb?name=run)]
-
- The `show_output` parameter turns on verbose logging, which lets you see details from the training process as well as information about any remote resources or compute targets. Use the following code to turn on verbose logging when you submit the experiment.
-
-```python
-run = exp.submit(src, show_output=True)
-```
-
-You can also use the same parameter in the `wait_for_completion` function on the resulting run.
-
-```python
-run.wait_for_completion(show_output=True)
-```
-
-## Native Python logging
-
-Some logs in the SDK may contain an error that instructs you to set the logging level to DEBUG. To set the logging level, add the following code to your script.
-
-```python
-import logging
-logging.basicConfig(level=logging.DEBUG)
-```
+Interactive logging sessions are typically used in notebook environments. The method [mlflow.start_run()](https://www.mlflow.org/docs/latest/python_api/mlflow.html#mlflow.start_run) starts a new MLflow run and sets it as active. Any metrics logged during the run are added to the run record. The method [mlflow.end_run()](https://www.mlflow.org/docs/latest/python_api/mlflow.html#mlflow.end_run) ends the current active run.
## Other logging sources Azure Machine Learning can also log information from other sources during training, such as automated machine learning runs, or Docker containers that run the jobs. These logs aren't documented, but if you encounter problems and contact Microsoft support, they may be able to use these logs during troubleshooting.
-For information on logging metrics in Azure Machine Learning designer, see [How to log metrics in the designer](how-to-track-designer-experiments.md)
-
-## Example notebooks
-
-The following notebooks demonstrate concepts in this article:
-* [how-to-use-azureml/training/train-on-local](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/training/train-on-local)
-* [how-to-use-azureml/track-and-monitor-experiments/logging-api](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/track-and-monitor-experiments/logging-api)
-
+For information on logging metrics in Azure Machine Learning designer, see [How to log metrics in the designer](how-to-track-designer-experiments.md).
## Next steps
-See these articles to learn more on how to use Azure Machine Learning:
-
-* See an example of how to register the best model and deploy it in the tutorial, [Train an image classification model with Azure Machine Learning](tutorial-train-deploy-notebook.md).
+* [Train ML models with MLflow and Azure Machine Learning](how-to-train-mlflow-projects.md).
+* [Migrate from SDK v1 logging to MLflow tracking](reference-migrate-sdk-v1-mlflow-tracking.md).
machine-learning How To Machine Learning Fairness Aml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-machine-learning-fairness-aml.md
Last updated 10/21/2021 -+ # Use Azure Machine Learning with the Fairlearn open-source package to assess the fairness of ML models (preview) + In this how-to guide, you will learn to use the [Fairlearn](https://fairlearn.github.io/) open-source Python package with Azure Machine Learning to perform the following tasks: * Assess the fairness of your model predictions. To learn more about fairness in machine learning, see the [fairness in machine learning article](concept-fairness-ml.md).
machine-learning How To Machine Learning Interpretability Aml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-machine-learning-interpretability-aml.md
Last updated 10/21/2021 -+ # Use the Python interpretability package to explain ML models & predictions (preview) + In this how-to guide, you learn to use the interpretability package of the Azure Machine Learning Python SDK to perform the following tasks:
machine-learning How To Machine Learning Interpretability Automl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-machine-learning-interpretability-automl.md
-+ Last updated 10/21/2021
Last updated 10/21/2021
# Interpretability: Model explainability in automated ML (preview) In this article, you learn how to get explanations for automated machine learning (automated ML) models in Azure Machine Learning using the Python SDK. Automated ML helps you understand feature importance of the models that are generated.
machine-learning How To Machine Learning Interpretability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-machine-learning-interpretability.md
Title: Model interpretability (preview)
-description: Learn how to understand & explain how your machine learning model makes predictions during training & inferencing using the Azure Machine Learning Python SDK.
+description: Learn how to understand & explain how your machine learning model makes predictions during training & inferencing using Azure Machine Learning CLI and Python SDK.
- Previously updated : 11/04/2021+ Last updated : 05/10/2022
-# Model interpretability in Azure Machine Learning (preview)
+# Model interpretability (preview)
-This article describes methods you can use for model interpretability in Azure Machine Learning.
+This article describes methods you can use for model interpretability in Azure Machine Learning.
-## Why does model interpretability matter?
+> [!IMPORTANT]
+> With the release of the Responsible AI dashboard, which includes model interpretability, we recommend that you migrate to the new experience, because the older SDK v1 preview model interpretability dashboard will no longer be actively maintained.
-Model interpretability is critical for data scientists, auditors, and business decision makers alike to ensure compliance with company policies, industry standards, and government regulations:
+## Why is model interpretability important to model debugging?
-+ Data scientists need the ability to explain their models to executives and stakeholders, so they can understand the value and accuracy of their findings. They also require interpretability to debug their models and make informed decisions about how to improve them.
+When machine learning models are used in ways that impact people's lives, it is critically important to understand what influences the behavior of models. Interpretability helps answer questions in scenarios such as model debugging (Why did my model make this mistake? How can I improve my model?), human-AI collaboration (How can I understand and trust the model's decisions?), and regulatory compliance (Does my model satisfy legal requirements?).
-+ Legal auditors require tools to validate models with respect to regulatory compliance and monitor how models' decisions are impacting humans.
+The interpretability component of the [Responsible AI dashboard](LINK TO CONCEPT DOC RESPONSIBLE AI DASHBOARD) contributes to the "diagnose" stage of the model lifecycle workflow by generating human-understandable descriptions of a machine learning model's predictions. It provides multiple views into a model's behavior: global explanations (for example, which features affect the overall behavior of a loan allocation model) and local explanations (for example, why a customer's loan application was approved or rejected). You can also observe model explanations for a selected cohort, that is, a subgroup of data points. This is valuable when, for example, you assess fairness in model predictions for individuals in a particular demographic group. The local explanation tab of this component also provides a full data visualization, which is useful for inspecting the data and comparing correct and incorrect predictions in each cohort.
-+ Business decision makers need peace-of-mind by having the ability to provide transparency for end users. This allows them to earn and maintain trust.
+The capabilities of this component are founded on [InterpretML](https://interpret.ml/)'s capabilities for generating model explanations.
-Enabling the capability of explaining a machine learning model is important during two main phases of model development:
+Use interpretability when you need to...
++ Determine how trustworthy your AI system's predictions are by understanding what features are most important for the predictions.
++ Approach the debugging of your model by understanding it first and identifying whether the model is using healthy features or merely spurious correlations.
++ Uncover potential sources of unfairness by understanding whether the model is predicting based on sensitive features or on features that are highly correlated with them.
++ Build end-user trust in your model's decisions by generating local explanations to illustrate their outcomes.
++ Complete a regulatory audit of an AI system to validate models and monitor the impact of model decisions on humans.
-+ During the training phase, as model designers and evaluators can use interpretability output of a model to verify hypotheses and build trust with stakeholders. They also use the insights into the model for debugging, validating model behavior matches their objectives, and to check for model unfairness or insignificant features.
+## How to interpret your model?
+In machine learning, **features** are the data fields used to predict a target data point. For example, to predict credit risk, data fields for age, account size, and account age might be used. In this case, age, account size, and account age are **features**. Feature importance tells you how each data field affected the model's predictions. For example, age may be heavily used in the prediction while account size and account age do not affect the prediction values significantly. This process allows data scientists to explain resulting predictions, so that stakeholders have visibility into what features are most important in the model.
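For instance, a global explanation for the credit-risk example above could surface aggregated feature importance values shaped like the following. The numbers are purely hypothetical and are shown only to illustrate the kind of output to expect:

```python
# Hypothetical global (aggregated) feature importance values for the
# credit-risk example above; larger values mean the feature influenced
# the model's predictions more strongly overall.
global_feature_importance = {
    "age": 0.52,
    "account_age": 0.08,
    "account_size": 0.03,
}

# Rank features from most to least important.
for feature, importance in sorted(
    global_feature_importance.items(), key=lambda kv: kv[1], reverse=True
):
    print(f"{feature}: {importance:.2f}")
```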
-+ During the inferencing phase, as having transparency around deployed models empowers executives to understand "when deployed" how the model is working and how its decisions are treating and impacting people in real life.
+Using the classes and methods of the Responsible AI dashboard with SDK v2 and CLI v2, you can:
++ Explain model prediction by generating feature importance values for the entire model (global explanation) and/or individual datapoints (local explanation).
++ Achieve model interpretability on real-world datasets at scale.
++ Use an interactive visualization dashboard to discover patterns in data and explanations at training time.
-## Interpretability with Azure Machine Learning
+Using the classes and methods in the SDK v1, you can:
++ Explain model prediction by generating feature importance values for the entire model and/or individual datapoints.
++ Achieve model interpretability on real-world datasets at scale, during training and inference.
++ Use an interactive visualization dashboard to discover patterns in data and explanations at training time.
-The model interpretability classes are made available through the following SDK package: (Learn how to [install SDK packages for Azure Machine Learning](/python/api/overview/azure/ml/install))
+The model interpretability classes are made available through the following SDK v1 package: (Learn how to [install SDK packages for Azure Machine Learning](/python/api/overview/azure/ml/install))
* `azureml.interpret`, which contains functionality supported by Microsoft. Use `pip install azureml-interpret` for general use.
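As a quick illustration of the SDK v1 workflow, the following is a minimal, self-contained sketch that uses `TabularExplainer` from the interpret-community package (installed with `azureml-interpret`) to produce global and local explanations. The scikit-learn model and dataset are placeholders used only to make the sketch runnable:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from interpret.ext.blackbox import TabularExplainer  # installed with azureml-interpret

# Train any scikit-learn style model (placeholder model and data).
data = load_breast_cancer()
x_train, x_test, y_train, y_test = train_test_split(data.data, data.target, random_state=0)
model = RandomForestClassifier().fit(x_train, y_train)

# TabularExplainer picks an appropriate SHAP explainer based on the model.
explainer = TabularExplainer(
    model, x_train, features=list(data.feature_names), classes=list(data.target_names)
)

# Global explanation: which features matter most overall.
global_explanation = explainer.explain_global(x_test)
print(global_explanation.get_feature_importance_dict())

# Local explanation: why the model predicted what it did for a few rows.
local_explanation = explainer.explain_local(x_test[:5])
```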
-## How to interpret your model
-
-Using the classes and methods in the SDK, you can:
-+ Explain model prediction by generating feature importance values for the entire model and/or individual datapoints.
-+ Achieve model interpretability on real-world datasets at scale, during training and inference.
-+ Use an interactive visualization dashboard to discover patterns in data and explanations at training time
-
-In machine learning, **features** are the data fields used to predict a target data point. For example, to predict credit risk, data fields for age, account size, and account age might be used. In this case, age, account size, and account age are **features**. Feature importance tells you how each data field affected the model's predictions. For example, age may be heavily used in the prediction while account size and account age do not affect the prediction values significantly. This process allows data scientists to explain resulting predictions, so that stakeholders have visibility into what features are most important in the model.
- ## Supported model interpretability techniques
+The Responsible AI dashboard and `azureml-interpret` use the interpretability techniques developed in [Interpret-Community](https://github.com/interpretml/interpret-community/), an open-source Python package for training interpretable models and helping to explain opaque-box AI systems. Opaque-box models are those for which we have no information about their internal workings. Interpret-Community serves as the host for this SDK's supported explainers.
- `azureml-interpret` uses the interpretability techniques developed in [Interpret-Community](https://github.com/interpretml/interpret-community/), an open source Python package for training interpretable models and helping to explain blackbox AI systems. [Interpret-Community](https://github.com/interpretml/interpret-community/) serves as the host for this SDK's supported explainers, and currently supports the following interpretability techniques:
+[Interpret-Community](https://github.com/interpretml/interpret-community/) currently supports the following interpretability techniques:
+### Supported in Responsible AI dashboard in Python SDK v2 and CLI v2
+|Interpretability Technique|Description|Type|
+|--|--|--|
+|Mimic Explainer (Global Surrogate) + SHAP tree|Mimic explainer is based on the idea of training global surrogate models to mimic opaque-box models. A global surrogate model is an intrinsically interpretable model that is trained to approximate the predictions of any opaque-box model as accurately as possible. Data scientists can interpret the surrogate model to draw conclusions about the opaque-box model. The Responsible AI dashboard uses LightGBM (LGBMExplainableModel), paired with the SHAP (SHapley Additive exPlanations) tree explainer, which is an explainer specific to trees and ensembles of trees. The combination of LightGBM and the SHAP tree explainer provides model-agnostic global and local explanations of your machine learning models.|Model-agnostic|
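The same global-surrogate technique is also available directly from the open-source interpret-community package that the dashboard builds on. A minimal sketch, using a placeholder scikit-learn model and dataset purely for illustration:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingClassifier
from interpret.ext.blackbox import MimicExplainer
from interpret.ext.glassbox import LGBMExplainableModel

# Placeholder opaque-box model and data.
x, y = load_iris(return_X_y=True)
model = GradientBoostingClassifier().fit(x, y)

# Train a LightGBM surrogate that mimics the opaque-box model's predictions.
explainer = MimicExplainer(
    model,
    x,
    LGBMExplainableModel,          # the interpretable surrogate family
    augment_data=True,             # oversample initialization examples for a better surrogate fit
    max_num_of_augmentations=10,
)

# Explanations come from the surrogate, so they are model-agnostic.
global_explanation = explainer.explain_global(x)
print(global_explanation.get_feature_importance_dict())
```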
+### Supported in Python SDK v1
|Interpretability Technique|Description|Type|
|--|--|--|
|SHAP Tree Explainer| [SHAP](https://github.com/slundberg/shap)'s tree explainer, which focuses on polynomial time fast SHAP value estimation algorithm specific to **trees and ensembles of trees**.|Model-specific|
|SHAP Deep Explainer| Based on the explanation from SHAP, Deep Explainer "is a high-speed approximation algorithm for SHAP values in deep learning models that builds on a connection with DeepLIFT described in the [SHAP NIPS paper](https://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions). **TensorFlow** models and **Keras** models using the TensorFlow backend are supported (there is also preliminary support for PyTorch)".|Model-specific|
|SHAP Linear Explainer| SHAP's Linear explainer computes SHAP values for a **linear model**, optionally accounting for inter-feature correlations.|Model-specific|
|SHAP Kernel Explainer| SHAP's Kernel explainer uses a specially weighted local linear regression to estimate SHAP values for **any model**.|Model-agnostic|
-|Mimic Explainer (Global Surrogate)| Mimic explainer is based on the idea of training [global surrogate models](https://christophm.github.io/interpretable-ml-book/global.html) to mimic blackbox models. A global surrogate model is an intrinsically interpretable model that is trained to approximate the predictions of **any black box model** as accurately as possible. Data scientists can interpret the surrogate model to draw conclusions about the black box model. You can use one of the following interpretable models as your surrogate model: LightGBM (LGBMExplainableModel), Linear Regression (LinearExplainableModel), Stochastic Gradient Descent explainable model (SGDExplainableModel), and Decision Tree (DecisionTreeExplainableModel).|Model-agnostic|
+|Mimic Explainer (Global Surrogate)| Mimic explainer is based on the idea of training [global surrogate models](https://christophm.github.io/interpretable-ml-book/global.html) to mimic opaque-box models. A global surrogate model is an intrinsically interpretable model that is trained to approximate the predictions of **any opaque-box model** as accurately as possible. Data scientists can interpret the surrogate model to draw conclusions about the opaque-box model. You can use one of the following interpretable models as your surrogate model: LightGBM (LGBMExplainableModel), Linear Regression (LinearExplainableModel), Stochastic Gradient Descent explainable model (SGDExplainableModel), and Decision Tree (DecisionTreeExplainableModel).|Model-agnostic|
|Permutation Feature Importance Explainer (PFI)| Permutation Feature Importance is a technique used to explain classification and regression models that is inspired by [Breiman's Random Forests paper](https://www.stat.berkeley.edu/~breiman/randomforest2001.pdf) (see section 10). At a high level, the way it works is by randomly shuffling data one feature at a time for the entire dataset and calculating how much the performance metric of interest changes. The larger the change, the more important that feature is. PFI can explain the overall behavior of **any underlying model** but does not explain individual predictions. |Model-agnostic|

Besides the interpretability techniques described above, we support another SHAP-based explainer, called `TabularExplainer`. Depending on the model, `TabularExplainer` uses one of the supported SHAP explainers:
The explanation functions accept both models and pipelines as input. If a model
The `azureml.interpret` package is designed to work with both local and remote compute targets. If run locally, the SDK functions will not contact any Azure services.
-You can run explanation remotely on Azure Machine Learning Compute and log the explanation info into the Azure Machine Learning Run History Service. Once this information is logged, reports and visualizations from the explanation are readily available on Azure Machine Learning studio for user analysis.
-
+You can run the explanation remotely on Azure Machine Learning Compute and log the explanation information into the Azure Machine Learning Run History Service. Once this information is logged, reports and visualizations from the explanation are readily available in Azure Machine Learning studio for analysis.
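A minimal sketch of what logging an explanation to run history might look like with `ExplanationClient` from `azureml.interpret`, assuming the code runs inside a submitted Azure Machine Learning run (the model and data are placeholders):

```python
from azureml.core.run import Run
from azureml.interpret import ExplanationClient
from interpret.ext.blackbox import TabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

# Inside a submitted training script: train a placeholder model and explain it.
data = load_breast_cancer()
model = LogisticRegression(max_iter=5000).fit(data.data, data.target)
explainer = TabularExplainer(model, data.data, features=list(data.feature_names))
global_explanation = explainer.explain_global(data.data)

# Upload the explanation to run history so it appears in studio under the run.
run = Run.get_context()
client = ExplanationClient.from_run(run)
client.upload_model_explanation(global_explanation, comment="global explanation: all features")

# Later (for example, from a notebook pointed at the same run), download it again.
downloaded_explanation = client.download_model_explanation()
```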
## Next steps -- See the [how-to](how-to-machine-learning-interpretability-aml.md) for enabling interpretability for models training both locally and on Azure Machine Learning remote compute resources.
+- See the how-to guide for generating a Responsible AI dashboard with model interpretability via the [CLI v2 and SDK v2](how-to-responsible-ai-dashboard-sdk-cli.md) or the [studio UI](how-to-responsible-ai-dashboard-ui.md).
+- See how to generate a [Responsible AI scorecard](how-to-responsible-ai-scorecard.md) based on the insights observed in the Responsible AI dashboard.
+- See the [how-to](how-to-machine-learning-interpretability-aml.md) for enabling interpretability for model training both locally and on Azure Machine Learning remote compute resources.
- Learn how to enable [interpretability for automated machine learning models](how-to-machine-learning-interpretability-automl.md).
- See the [sample notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/explain-model) for additional scenarios.
- If you're interested in interpretability for text scenarios, see [Interpret-text](https://github.com/interpretml/interpret-text), a related open-source repo to [Interpret-Community](https://github.com/interpretml/interpret-community/), for interpretability techniques for NLP. The `azureml.interpret` package does not currently support these techniques, but you can get started with an [example notebook on text classification](https://github.com/interpretml/interpret-text/blob/master/notebooks/text_classification/text_classification_classical_text_explainer.ipynb).
machine-learning How To Manage Environments V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-environments-v2.md
Last updated 03/31/2022 -+
-# Manage Azure Machine Learning environments with the CLI (v2) (preview)
+# Manage Azure Machine Learning environments with the CLI (v2)
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]+
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
+> * [v1](./v1/how-to-use-environments.md)
+> * [v2 (current version)](how-to-manage-environments-v2.md)
++ Azure Machine Learning environments define the execution environments for your jobs or deployments and encapsulate the dependencies for your code. Azure ML uses the environment specification to create the Docker container that your training or scoring code runs in on the specified compute target. You can define an environment from a conda specification, Docker image, or Docker build context. In this article, learn how to create and manage Azure ML environments using the CLI (v2). ## Prerequisites
machine-learning How To Manage Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-models.md
+
+ Title: Register and work with models
+
+description: Learn how to register and work with different model types in Azure Machine Learning (custom, MLflow, and Triton).
+++++ Last updated : 04/15/2022++++
+# Work with Models in Azure Machine Learning
++
+Azure Machine Learning allows you to work with different types of models. In this article, you'll learn how to work with different model types (custom, MLflow, and Triton) in Azure Machine Learning, how to register a model from different locations, and how to use the SDK, UI, and CLI to manage your models.
+
+> [!TIP]
+> If you have model assets created using the SDK/CLI v1, you can still use those with SDK/CLI v2. For more information, see the [Consuming V1 Model Assets in V2](#consuming-v1-model-assets-in-v2) section.
++
+## Prerequisites
+
+* An Azure subscription - If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
+* An Azure Machine Learning workspace.
+* The Azure Machine Learning [SDK v2 for Python](https://aka.ms/sdk-v2-install).
+* The Azure Machine Learning [CLI v2](how-to-configure-cli.md).
++
+## Creating a model in the model registry
+
+[Model registration](concept-model-management-and-deployment.md) allows you to store and version your models in the Azure cloud, in your workspace. The model registry makes it easy to organize and keep track of your trained models.
+
+The code snippets in this section cover the following scenarios:
+
+* Registering a model as an asset in Azure Machine Learning using the CLI
+* Registering a model as an asset in Azure Machine Learning using the SDK
+* Registering a model as an asset in Azure Machine Learning using the UI
+
+These snippets use `custom` and `mlflow`.
+
+- `custom` is a type that refers to a model file.
+- `mlflow` is a type that refers to a model trained with [MLflow](how-to-use-mlflow-cli-runs.md). MLflow-trained models are stored in a folder that contains the `MLmodel` file, the model file, the conda dependencies file, and the `requirements.txt` file.
+
+### Registering a model as an asset in Azure Machine Learning using the CLI
+
+Use the tabs below to select where your model is located.
+
+# [Local model](#tab/use-local)
+
+```YAML
+$schema: https://azuremlschemas.azureedge.net/latest/model.schema.json
+name: local-file-example
+path: mlflow-model/model.pkl
+description: Model created from local file.
+```
+
+```bash
+az ml model create -f <file-name>.yml
+```
+
+For a complete example, see the [model YAML](https://github.com/Azure/azureml-examples/tree/main/cli/assets/model).
++
+# [Datastore](#tab/use-datastore)
+
+A model can be created from a cloud path using any one of the following supported URI formats.
++
+```cli
+az ml model create --name my-model --version 1 --path azureml://datastores/myblobstore/paths/models/cifar10/cifar.pt
+```
+
+The examples use the shorthand `azureml` scheme to point to a path on the `datastore` with the syntax `azureml://datastores/${{datastore-name}}/paths/${{path_on_datastore}}`.
+
+For a complete example, see the [CLI Reference](/cli/azure/ml/model).
+
+# [Job Output](#tab/use-job-output)
+
+__Use the mlflow run URI format__
+
+This option is optimized for MLflow users, who are likely already familiar with the MLflow run URI format. It allows you to create a model from artifacts in the default artifact location (where all MLflow-logged models and artifacts are located), and it establishes a lineage between a registered model and the run the model came from.
+
+Format:
+`runs:/<run-id>/<path-to-model-relative-to-the-root-of-the-artifact-location>`
+
+Example:
+`runs:/$RUN_ID/model/`
+
+```cli
+az ml model create --name my-model --version 1 --path runs:/$RUN_ID/model/ --type mlflow_model
+```
+
+__Use the azureml job URI format__
+
+You can also use the azureml job reference URI format to register a model from artifacts in any of the job's outputs. This format is aligned with the existing azureml datastore reference URI format and also supports referencing artifacts from named outputs of the job (not just the default artifact location). It also lets you establish a lineage between a registered model and the job it was trained from, even if you didn't directly register the model within the training script by using MLflow.
+
+Format:
+`azureml://jobs/<job-name>/outputs/<output-name>/paths/<path-to-model-relative-to-the-named-output-location>`
+
+Examples:
+- Default artifact location: `azureml://jobs/$RUN_ID/outputs/artifacts/paths/model/`
+ * This is equivalent to `runs:/$RUN_ID/model/` from the MLflow run URI format above.
+ * **Note:** "artifacts" is the reserved keyword that refers to the output representing the **default artifact location**.
+- From a named output dir: `azureml://jobs/$RUN_ID/outputs/trained-model`
+- From a specific file or folder path within the named output dir:
+ * `azureml://jobs/$RUN_ID/outputs/trained-model/paths/cifar.pt`
+ * `azureml://jobs/$RUN_ID/outputs/checkpoints/paths/model/`
+
+Saving a model from a named output:
+
+```cli
+az ml model create --name my-model --version 1 --path azureml://jobs/$RUN_ID/outputs/trained-model
+```
+
+For a complete example, see the [CLI Reference](/cli/azure/ml/model).
+++
+### Registering a model as an asset in Azure Machine Learning using the SDK
+
+Use the tabs below to select where your model is located.
+
+# [Local model](#tab/use-local)
+
+```python
+from azure.ai.ml.entities import Model
+from azure.ai.ml._constants import ModelType
+
+file_model = Model(
+ path="mlflow-model/model.pkl",
+ type=ModelType.CUSTOM,
+ name="local-file-example",
+ description="Model created from local file."
+)
+ml_client.models.create_or_update(file_model)
+```
++
+# [Datastore](#tab/use-datastore)
+
+A model can be created from a cloud path using any one of the following supported URI formats.
+
+```python
+from azure.ai.ml.entities import Model
+from azure.ai.ml._constants import ModelType
+
+cloud_model = Model(
+    path="azureml://datastores/workspaceblobstore/paths/model.pkl",
+ name="cloud-path-example",
+ type=ModelType.CUSTOM,
+ description="Model created from cloud path."
+)
+ml_client.models.create_or_update(cloud_model)
+```
+
+The examples use the shorthand `azureml` scheme to point to a path on the `datastore` with the syntax `azureml://datastores/${{datastore-name}}/paths/${{path_on_datastore}}`.
+
+# [Job Output](#tab/use-job-output)
+
+__Use the mlflow run URI format__
+
+This option is optimized for MLflow users, who are likely already familiar with the MLflow run URI format. It allows you to create a model from artifacts in the default artifact location (where all MLflow-logged models and artifacts are located), and it establishes a lineage between a registered model and the run the model came from.
+
+Format:
+`runs:/<run-id>/<path-to-model-relative-to-the-root-of-the-artifact-location>`
+
+Example:
+`runs:/$RUN_ID/model/`
+
+```python
+from azure.ai.ml.entities import Model
+from azure.ai.ml._constants import ModelType
+
+run_model = Model(
+    path="runs:/$RUN_ID/model/",
+ name="run-model-example",
+ description="Model created from run.",
+ type=ModelType.MLFLOW
+)
+
+ml_client.models.create_or_update(run_model)
+```
+
+__Use the azureml job URI format__
+
+You can also use the azureml job reference URI format to register a model from artifacts in any of the job's outputs. This format is aligned with the existing azureml datastore reference URI format and also supports referencing artifacts from named outputs of the job (not just the default artifact location). It also lets you establish a lineage between a registered model and the job it was trained from, even if you didn't directly register the model within the training script by using MLflow.
+
+Format:
+`azureml://jobs/<job-name>/outputs/<output-name>/paths/<path-to-model-relative-to-the-named-output-location>`
+
+Examples:
+- Default artifact location: `azureml://jobs/$RUN_ID/outputs/artifacts/paths/model/`
+ * This is equivalent to `runs:/$RUN_ID/model/` from the MLflow run URI format above.
+ * **Note:** "artifacts" is the reserved keyword that refers to the output representing the **default artifact location**.
+- From a named output dir: `azureml://jobs/$RUN_ID/outputs/trained-model`
+- From a specific file or folder path within the named output dir:
+ * `azureml://jobs/$RUN_ID/outputs/trained-model/paths/cifar.pt`
+ * `azureml://jobs/$RUN_ID/outputs/checkpoints/paths/model/`
+
+Saving a model from a named output:
+
+```python
+from azure.ai.ml.entities import Model
+from azure.ai.ml._constants import ModelType
+
+run_model = Model(
+    path="azureml://jobs/$RUN_ID/outputs/artifacts/paths/model/",
+ name="run-model-example",
+ description="Model created from run.",
+ type=ModelType.CUSTOM
+)
+
+ml_client.models.create_or_update(run_model)
+```
+
+For a complete example, see the [model notebook](https://github.com/Azure/azureml-examples/tree/march-sdk-preview/sdk/assets/model).
+++
+### Registering a model as an asset in Azure Machine Learning using the UI
+
+To create a model in Azure Machine Learning, open the Models page in Azure Machine Learning. Click **Create** and select where your model is located.
++
+Use the tabs below to select where your model is located.
+
+# [Local model](#tab/use-local)
+
+To upload a model from your computer, select **Local** and upload the model you want to save in the model registry.
++
+# [Datastore](#tab/use-datastore)
+
+To add a model from an Azure Machine Learning datastore, select **Datastore** and pick the datastore and folder where the model is located.
++
+# [Job Output](#tab/use-job-output)
+
+To add a model from an Azure Machine Learning job, select **Job Output** and pick the job and the folder in the job output where the model is located.
++
+Alternatively, locate the job in the job UI and select **Create Model**. You can then select the folder in the job output where the model is located.
+++++
+## Consuming V1 model assets in V2
+
+> [!NOTE]
+> Full backward compatibility is provided; all models registered with the V1 SDK are assigned the type `custom`.
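For example, a model registered with the V1 SDK can be retrieved through the V2 SDK like any other model. A minimal sketch, where the subscription, resource group, workspace, and model names are placeholders:

```python
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

# Placeholder workspace details.
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<WORKSPACE_NAME>",
)

# Retrieve a model originally registered with the V1 SDK.
model = ml_client.models.get(name="<V1_REGISTERED_MODEL_NAME>", version="1")

# Per the note above, V1-registered models surface with the custom type.
print(model.name, model.version, model.type)
```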
++
+## Next steps
+
+* [Install and set up Python SDK v2 (preview)](https://aka.ms/sdk-v2-install)
+* [No-code deployment for MLflow models](how-to-deploy-mlflow-models-online-endpoints.md)
+* Learn more about [MLflow and Azure Machine Learning](concept-mlflow.md)
machine-learning How To Manage Optimize Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-optimize-cost.md
description: Learn tips to optimize your cost when building machine learning models in Azure Machine Learning -+
Low-Priority VMs have a single quota separate from the dedicated quota value, wh
## Schedule compute instances
-When you create a [compute instance](concept-compute-instance.md), the VM stays on so it is available for your work. [Set up a schedule](how-to-create-manage-compute-instance.md#schedule) to automatically start and stop the compute instance (preview) to save cost when you aren't planning to use it.
+When you create a [compute instance](concept-compute-instance.md), the VM stays on so it is available for your work. [Set up a schedule](how-to-create-manage-compute-instance.md#schedule-automatic-start-and-stop-preview) to automatically start and stop the compute instance (preview) to save cost when you aren't planning to use it.
## Use reserved instances
machine-learning How To Manage Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-quotas.md
Previously updated : 10/21/2021 Last updated : 05/24/2022 -+ # Manage and increase quotas for resources with Azure Machine Learning
In addition, the maximum **run time** is 30 days and the maximum number of **met
### Azure Machine Learning Compute [Azure Machine Learning Compute](concept-compute-target.md#azure-machine-learning-compute-managed) has a default quota limit on both the number of cores (split by each VM Family and cumulative total cores) as well as the number of unique compute resources allowed per region in a subscription. This quota is separate from the VM core quota listed in the previous section as it applies only to the managed compute resources of Azure Machine Learning.
-[Request a quota increase](#request-quota-increases) to raise the limits for various VM family core quotas, total subscription core quotas and resources in this section.
+[Request a quota increase](#request-quota-increases) to raise the limits for various VM family core quotas, total subscription core quotas, cluster quota and resources in this section.
Available resources: + **Dedicated cores per region** have a default limit of 24 to 300, depending on your subscription offer type. You can increase the number of dedicated cores per subscription for each VM family. Specialized VM families like NCv2, NCv3, or ND series start with a default of zero cores. + **Low-priority cores per region** have a default limit of 100 to 3,000, depending on your subscription offer type. The number of low-priority cores per subscription can be increased and is a single value across VM families.
-+ **Clusters per region** have a default limit of 200. These are shared between a training cluster and a compute instance. (A compute instance is considered a single-node cluster for quota purposes.)
++ **Clusters per region** have a default limit of 200. These are shared between training clusters, compute instances, and MIR endpoint deployments. (A compute instance is considered a single-node cluster for quota purposes.) Cluster quota can be increased up to a value of 500 per region within a given subscription. > [!TIP] > To learn more about which VM family to request a quota increase for, check out [virtual machine sizes in Azure](../virtual-machines/sizes.md). For instance, GPU VM families start with an "N" in their family name (for example, the NCv3 series).
The following table shows additional limits in the platform. Please reach out to
<sup>2</sup> Jobs on a low-priority node can be preempted whenever there's a capacity constraint. We recommend that you implement checkpoints in your job.
-### Azure Machine Learning managed online endpoints (preview)
+### Azure Machine Learning managed online endpoints
Azure Machine Learning managed online endpoints have limits described in the following table.
To determine the current usage for an endpoint, [view the metrics](how-to-monito
| Number of endpoints per subscription | 50 | | Number of deployments per subscription | 200 | | Number of deployments per endpoint | 20 |
-| Number of instances per deployment | 20 |
+| Number of instances per deployment | 20 <sup>2</sup> |
| Max request time out at endpoint level | 90 seconds |
-| Total requests per second at endpoint level for all deployments | 500 <sup>2</sup> |
-| Total connections per second at endpoint level for all deployments | 500 <sup>2</sup> |
-| Total connections active at endpoint level for all deployments | 500 <sup>2</sup> |
-| Total bandwidth at endpoint level for all deployments | 5 MBPS <sup>2</sup> |
+| Total requests per second at endpoint level for all deployments | 500 <sup>3</sup> |
+| Total connections per second at endpoint level for all deployments | 500 <sup>3</sup> |
+| Total connections active at endpoint level for all deployments | 500 <sup>3</sup> |
+| Total bandwidth at endpoint level for all deployments | 5 MBPS <sup>3</sup> |
<sup>1</sup> Single dashes like, `my-endpoint-name`, are accepted in endpoint and deployment names.
-<sup>2</sup> If you request a limit increase, be sure to calculate related limit increases you might need. For example, if you request a limit increase for requests per second, you might also want to compute the required connections and bandwidth limits and include these limit increases in the same request.
+<sup>2</sup> We reserve 20% extra compute resources for performing upgrades. For example, if you request 10 instances in a deployment, you must have a quota for 12. Otherwise, you will receive an error.
+
+<sup>3</sup> If you request a limit increase, be sure to calculate related limit increases you might need. For example, if you request a limit increase for requests per second, you might also want to compute the required connections and bandwidth limits and include these limit increases in the same request.
### Azure Machine Learning pipelines [Azure Machine Learning pipelines](concept-ml-pipelines.md) have the following limits.
When you're requesting a quota increase, select the service that you have in min
+ [Plan and manage costs for Azure Machine Learning](concept-plan-manage-cost.md) + [Service limits in Azure Machine Learning](resource-limits-quotas-capacity.md)
-+ [Troubleshooting managed online endpoints deployment and scoring (preview)](./how-to-troubleshoot-online-endpoints.md)
++ [Troubleshooting managed online endpoints deployment and scoring](./how-to-troubleshoot-online-endpoints.md)
machine-learning How To Manage Resources Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-resources-vscode.md
+ Last updated 05/25/2021
The extension currently supports the following dataset types:
- *Tabular*: Allows you to materialize data into a DataFrame. - *File*: A file or collection of files. Allows you to download or mount files to your compute.
-For more information, see [datasets](concept-data.md#datasets)
+For more information, see [datasets](./v1/concept-data.md)
### Create dataset
Alternatively, use the `> Azure ML: View Environment` command in the command pal
## Experiments
-For more information, see [experiments](concept-azure-machine-learning-architecture.md#experiments).
+For more information, see [experiments](v1/concept-azure-machine-learning-architecture.md#experiments).
### Create job
Alternatively, use the `> Azure ML: Create Job` command in the command palette.
### View job
-To view your job in Azure Machine Learning Studio:
+To view your job in Azure Machine Learning studio:
1. Expand the subscription node that contains your workspace. 1. Expand the **Experiments** node inside your workspace.
Alternatively, use the `> Azure ML: View Compute Properties` and `> Azure ML: De
## Models
-For more information, see [models](concept-azure-machine-learning-architecture.md#models)
+For more information, see [models](v1/concept-azure-machine-learning-architecture.md#models)
### Create model
Alternatively, use the `> Azure ML: Remove Model` command in the command palette
## Endpoints
-For more information, see [endpoints](concept-azure-machine-learning-architecture.md#endpoints).
+For more information, see [endpoints](v1/concept-azure-machine-learning-architecture.md#endpoints).
### Create endpoint
Alternatively, use the `> Azure ML: View Service Properties` command in the comm
## Next steps
-[Train an image classification model](tutorial-train-deploy-image-classification-model-vscode.md) with the VS Code extension.
+[Train an image classification model](tutorial-train-deploy-image-classification-model-vscode.md) with the VS Code extension.
machine-learning How To Manage Workspace Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace-cli.md
Last updated 01/05/2022 -+ # Manage Azure Machine Learning workspaces using Azure CLI
az ml workspace create -g <resource-group-name> --file cmk.yml
> Authorize the __Machine Learning App__ (in Identity and Access Management) with contributor permissions on your subscription to manage the data encryption additional resources. > [!NOTE]
-> Azure Cosmos DB is __not__ used to store information such as model performance, information logged by experiments, or information logged from your model deployments. For more information on monitoring these items, see the [Monitoring and logging](concept-azure-machine-learning-architecture.md) section of the architecture and concepts article.
+> Azure Cosmos DB is __not__ used to store information such as model performance, information logged by experiments, or information logged from your model deployments. For more information on monitoring these items, see the [Monitoring and logging](v1/concept-azure-machine-learning-architecture.md) section of the architecture and concepts article.
> [!IMPORTANT] > Selecting high business impact can only be done when creating a workspace. You cannot change this setting after workspace creation.
For more information on the Azure CLI extension for machine learning, see the [a
To check for problems with your workspace, see [How to use workspace diagnostics](how-to-workspace-diagnostic-api.md).
-To learn how to move a workspace to a new Azure subscription, see [How to move a workspace](how-to-move-workspace.md).
+To learn how to move a workspace to a new Azure subscription, see [How to move a workspace](how-to-move-workspace.md).
machine-learning How To Manage Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace.md
Last updated 03/08/2022 --+ # Manage Azure Machine Learning workspaces in the portal or with the Python SDK In this article, you create, view, and delete [**Azure Machine Learning workspaces**](concept-workspace.md) for [Azure Machine Learning](overview-what-is-azure-machine-learning.md), using the Azure portal or the [SDK for Python](/python/api/overview/azure/ml/)
-As your needs change or requirements for automation increase you can also manage workspaces [using the CLI](reference-azure-machine-learning-cli.md), or [via the VS Code extension](how-to-setup-vs-code.md).
+As your needs change or requirements for automation increase you can also manage workspaces [using the CLI](v1/reference-azure-machine-learning-cli.md), or [via the VS Code extension](how-to-setup-vs-code.md).
## Prerequisites
As your needs change or requirements for automation increase you can also manage
# [Python](#tab/python) + * **Default specification.** By default, dependent resources and the resource group will be created automatically. This code creates a workspace named `myworkspace` and a resource group named `myresourcegroup` in `eastus2`. ```python
Place the file into the directory structure with your Python scripts or Jupyter
## Connect to a workspace + In your Python code, you create a workspace object to connect to your workspace. This code will read the contents of the configuration file to find your workspace. You will get a prompt to sign in if you are not already authenticated. ```python
ws = Workspace.from_config()
* **[Sovereign cloud](reference-machine-learning-cloud-parity.md)**. You'll need extra code to authenticate to Azure if you're working in a sovereign cloud.
+ [!INCLUDE [sdk v1](../../includes/machine-learning-sdk-v1.md)]
+ ```python from azureml.core.authentication import InteractiveLoginAuthentication from azureml.core import Workspace
See a list of all the workspaces you can use.
# [Python](#tab/python) + Find your subscriptions in the [Subscriptions page in the Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade). Copy the ID and use it in the code below to see all workspaces available for that subscription. ```python
If you accidentally deleted your workspace, you may still be able to retrieve yo
# [Python](#tab/python) + Delete the workspace `ws`: ```python
machine-learning How To Migrate From Estimators To Scriptrunconfig https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-migrate-from-estimators-to-scriptrunconfig.md
Last updated 12/14/2020 -+ # Migrating from Estimators to ScriptRunConfig + Up until now, there have been multiple methods for configuring a training job in Azure Machine Learning via the SDK, including Estimators, ScriptRunConfig, and the lower-level RunConfiguration. To address this ambiguity and inconsistency, we are simplifying the job configuration process in Azure ML. You should now use ScriptRunConfig as the recommended option for configuring training jobs. Estimators are deprecated with the 1.19.0 release of the Python SDK. You should also generally avoid explicitly instantiating a RunConfiguration object yourself, and instead configure your job using the ScriptRunConfig class.
src.run_config
## Next steps
-* [Configure and submit training runs](how-to-set-up-training-targets.md)
+* [Configure and submit training runs](how-to-set-up-training-targets.md)
machine-learning How To Monitor Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-monitor-datasets.md
Last updated 10/21/2021 -+ #Customer intent: As a data scientist, I want to detect data drift in my datasets and set alerts for when drift is large. # Detect data drift (preview) on datasets + Learn how to monitor data drift and set alerts when drift is high. With Azure Machine Learning dataset monitors (preview), you can:
With Azure Machine Learning dataset monitors (preview), you can:
* **Set up alerts on data drift** for early warnings to potential issues. * **[Create a new dataset version](how-to-version-track-datasets.md)** when you determine the data has drifted too much.
-An [Azure Machine learning dataset](how-to-create-register-datasets.md) is used to create the monitor. The dataset must include a timestamp column.
+An [Azure Machine learning dataset](./v1/how-to-create-register-datasets.md) is used to create the monitor. The dataset must include a timestamp column.
You can view data drift metrics with the Python SDK or in Azure Machine Learning studio. Other metrics and insights are available through the [Azure Application Insights](../azure-monitor/app/app-insights-overview.md) resource associated with the Azure Machine Learning workspace.
Dataset monitors depend on the following Azure services.
### Baseline and target datasets
-You monitor [Azure machine learning datasets](how-to-create-register-datasets.md) for data drift. When you create a dataset monitor, you will reference your:
+You monitor [Azure machine learning datasets](./v1/how-to-create-register-datasets.md) for data drift. When you create a dataset monitor, you will reference your:
* Baseline dataset - usually the training dataset for a model.
* Target dataset - usually model input data - is compared over time to your baseline dataset. This comparison means that your target dataset must have a timestamp column specified.
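A minimal sketch of referencing a baseline and a target dataset when creating a monitor with the SDK. The dataset names, compute target, and threshold are placeholders, and the target dataset is assumed to already have its timestamp column set:

```python
from azureml.core import Workspace, Dataset
from azureml.datadrift import DataDriftDetector

ws = Workspace.from_config()

# Placeholder dataset names registered in the workspace.
baseline = Dataset.get_by_name(ws, "model-training-data")
target = Dataset.get_by_name(ws, "model-scoring-data")  # must have the timeseries trait set

monitor = DataDriftDetector.create_from_datasets(
    ws,
    "credit-model-drift-monitor",
    baseline,
    target,
    compute_target="cpu-cluster",  # compute used to run the drift analysis
    frequency="Week",              # how often the target data is analyzed
    feature_list=None,             # None means all features common to both datasets
    drift_threshold=0.3,           # alert when the drift magnitude exceeds this value
    latency=24,                    # hours to wait for target data to arrive
)
```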
The target dataset needs the `timeseries` trait set on it by specifying the time
# [Python](#tab/python) <a name="sdk-dataset"></a> + The [`Dataset`](/python/api/azureml-core/azureml.data.tabulardataset#with-timestamp-columns-timestamp-none--partition-timestamp-none--validate-false-kwargs-) class [`with_timestamp_columns()`](/python/api/azureml-core/azureml.data.tabulardataset#with-timestamp-columns-timestamp-none--partition-timestamp-none--validate-false-kwargs-) method defines the time stamp column for the dataset. ```python
Create a dataset monitor to detect and alert to data drift on a new dataset. Us
# [Python](#tab/python) <a name="sdk-monitor"></a>++ See the [Python SDK reference documentation on data drift](/python/api/azureml-datadrift/azureml.datadrift) for full details. The following example shows how to create a dataset monitor using the Python SDK
machine-learning How To Monitor Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-monitor-online-endpoints.md
Title: Monitor managed online endpoints (preview)
+ Title: Monitor managed online endpoints
description: Monitor managed online endpoints and create alerts with Application Insights.
Last updated 10/21/2021 -+
-# Monitor managed online endpoints (preview)
+# Monitor managed online endpoints
-
-In this article, you learn how to monitor [Azure Machine Learning managed online endpoints (preview)](concept-endpoints.md). Use Application Insights to view metrics and create alerts to stay up to date with your managed online endpoints.
+In this article, you learn how to monitor [Azure Machine Learning managed online endpoints](concept-endpoints.md). Use Application Insights to view metrics and create alerts to stay up to date with your managed online endpoints.
In this article you learn how to:
In this article you learn how to:
## Prerequisites -- Deploy an Azure Machine Learning managed online endpoint (preview).
+- Deploy an Azure Machine Learning managed online endpoint.
- You must have at least [Reader access](../role-based-access-control/role-assignments-portal.md) on the endpoint. ## View metrics
Use the following steps to view metrics for a managed endpoint or deployment:
## Available metrics
-Depending on the resource that you select, the metrics that you see will be different. Metrics are scoped differently for managed online endpoints and managed online deployments (preview).
+Depending on the resource that you select, the metrics that you see will be different. Metrics are scoped differently for managed online endpoints and managed online deployments.
### Metrics at endpoint scope
Depending on the resource that you select, the metrics that you see will be diff
- Active connection count - Network bytes
-> [!NOTE]
-> Bandwidth will be throttled if the limits are exceeded (see managed online endpoints section in [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints-preview)). To determine if requests are throttled:
-> - Monitor the "Network bytes" metric
-> - The response headers will have the fields: `ms-azureml-bandwidth-request-delay-ms` and `ms-azureml-bandwidth-response-delay-ms`. The values of the fields are the delays, in milliseconds, of the bandwidth throttling.
- Split on the following dimensions: - Deployment - Status Code - Status Code Class
+#### Bandwidth throttling
+
+Bandwidth will be throttled if the limits are exceeded (see managed online endpoints section in [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints)). To determine if requests are throttled:
+- Monitor the "Network bytes" metric
+- The response trailers will have the fields: `ms-azureml-bandwidth-request-delay-ms` and `ms-azureml-bandwidth-response-delay-ms`. The values of the fields are the delays, in milliseconds, of the bandwidth throttling.
+ ### Metrics at deployment scope - CPU Utilization Percentage
You can also create custom alerts to notify you of important status updates to y
## Next steps * Learn how to [view costs for your deployed endpoint](./how-to-view-online-endpoints-costs.md).
-* Read more about [metrics explorer](../azure-monitor/essentials/metrics-charts.md).
+* Read more about [metrics explorer](../azure-monitor/essentials/metrics-charts.md).
machine-learning How To Monitor Tensorboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-monitor-tensorboard.md
Last updated 10/21/2021 -+ # Visualize experiment runs and metrics with TensorBoard and Azure Machine Learning In this article, you learn how to view your experiment runs and metrics in TensorBoard using [the `tensorboard` package](/python/api/azureml-tensorboard/) in the main Azure Machine Learning SDK. Once you've inspected your experiment runs, you can better tune and retrain your machine learning models.
machine-learning How To Move Data In Out Of Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-move-data-in-out-of-pipelines.md
Last updated 10/21/2021 -+ #Customer intent: As a data scientist using Python, I want to get data into my pipeline and flowing between steps. # Moving data into and between ML pipeline steps (Python) + This article provides code for importing, transforming, and moving data between steps in an Azure Machine Learning pipeline. For an overview of how data works in Azure Machine Learning, see [Access data in Azure storage services](how-to-access-data.md). For the benefits and structure of Azure Machine Learning pipelines, see [What are Azure Machine Learning pipelines?](concept-ml-pipelines.md) This article will show you how to:
datastore_path = [
cats_dogs_dataset = Dataset.File.from_files(path=datastore_path) ```
-For more options on creating datasets with different options and from different sources, registering them and reviewing them in the Azure Machine Learning UI, understanding how data size interacts with compute capacity, and versioning them, see [Create Azure Machine Learning datasets](how-to-create-register-datasets.md).
+For more options on creating datasets with different options and from different sources, registering them and reviewing them in the Azure Machine Learning UI, understanding how data size interacts with compute capacity, and versioning them, see [Create Azure Machine Learning datasets](./v1/how-to-create-register-datasets.md).
### Pass datasets to your script
After the initial pipeline step writes some data to the `OutputFileDatasetConfig
In the following code:
-* `step1_output_data` indicates that the output of the PythonScriptStep, `step1` is written to the ADLS Gen 2 datastore, `my_adlsgen2` in upload access mode. Learn more about how to [set up role permissions](how-to-access-data.md#azure-data-lake-storage-generation-2) in order to write data back to ADLS Gen 2 datastores.
+* `step1_output_data` indicates that the output of the PythonScriptStep, `step1` is written to the ADLS Gen 2 datastore, `my_adlsgen2` in upload access mode. Learn more about how to [set up role permissions](./v1/how-to-access-data.md) in order to write data back to ADLS Gen 2 datastores.
+* After `step1` completes and the output is written to the destination indicated by `step1_output_data`, `step2` is ready to use `step1_output_data` as an input, as shown in the sketch after this list.
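A minimal sketch of that pattern with SDK v1 classes, assuming a workspace with a registered ADLS Gen 2 datastore named `my_adlsgen2` and an existing compute target (all names are placeholders):

```python
from azureml.core import Workspace, Datastore
from azureml.data import OutputFileDatasetConfig
from azureml.pipeline.core import Pipeline
from azureml.pipeline.steps import PythonScriptStep

ws = Workspace.from_config()
my_adlsgen2 = Datastore.get(ws, "my_adlsgen2")  # placeholder datastore name from the text above

# step1's output is written to the ADLS Gen 2 datastore in upload mode.
step1_output_data = OutputFileDatasetConfig(
    name="processed_data",
    destination=(my_adlsgen2, "outputdataset"),
).as_upload(overwrite=False)

step1 = PythonScriptStep(
    name="generate_data",
    script_name="step1.py",
    compute_target="cpu-cluster",  # placeholder compute target
    arguments=["--output_path", step1_output_data],
)

# step2 consumes step1's output as an input, which also orders the two steps.
step2 = PythonScriptStep(
    name="consume_data",
    script_name="step2.py",
    compute_target="cpu-cluster",
    arguments=["--input_path", step1_output_data.as_input()],
)

pipeline = Pipeline(workspace=ws, steps=[step1, step2])
```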
For more information, see [Plan and manage costs for Azure Machine Learning](con
## Next steps
-* [Create an Azure machine learning dataset](how-to-create-register-datasets.md)
+* [Create an Azure machine learning dataset](./v1/how-to-create-register-datasets.md)
* [Create and run machine learning pipelines with Azure Machine Learning SDK](./how-to-create-machine-learning-pipelines.md)
machine-learning How To Network Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-network-security-overview.md
Last updated 02/02/2022 -+ <!-- # Virtual network isolation and privacy overview -->
Secure Azure Machine Learning workspace resources and compute environments using
> > * [Secure the workspace resources](how-to-secure-workspace-vnet.md) > * [Secure the training environment](how-to-secure-training-vnet.md)
-> * [Secure the inference environment](how-to-secure-inferencing-vnet.md)
+> * For securing inference, see the following documents:
+> * If using CLI v1 or SDK v1 - [Secure inference environment](how-to-secure-inferencing-vnet.md)
+> * If using CLI v2 or SDK v2 - [Network isolation for managed online endpoints](how-to-secure-online-endpoint.md)
> * [Enable studio functionality](how-to-enable-studio-virtual-network.md) > * [Use custom DNS](how-to-custom-dns.md) > * [Use a firewall](how-to-access-azureml-behind-firewall.md)
The next sections show you how to secure the network scenario described above. T
1. Secure the [**workspace and associated resources**](#secure-the-workspace-and-associated-resources). 1. Secure the [**training environment**](#secure-the-training-environment).
-1. Secure the [**inferencing environment**](#secure-the-inferencing-environment).
+1. Secure the **inferencing environment** [v1](#secure-the-inferencing-environment-v1) or [v2](#secure-the-inferencing-environment-v2).
1. Optionally: [**enable studio functionality**](#optional-enable-studio-functionality). 1. Configure [**firewall settings**](#configure-firewall-settings). 1. Configure [**DNS name resolution**](#custom-dns).
In this section, you learn how Azure Machine Learning securely communicates betw
- Azure Compute Instance and Azure Compute Clusters must be in the same VNet, region, and subscription as the workspace and its associated resources.
-## Secure the inferencing environment
+## Secure the inferencing environment (v2)
-In this section, you learn the options available for securing an inferencing environment. We recommend that you use Azure Kubernetes Services (AKS) clusters for high-scale, production deployments.
+
+You can enable network isolation for managed online endpoints to secure the following network traffic:
+
+* Inbound scoring requests.
+* Outbound communication with the workspace, Azure Container Registry, and Azure Blob Storage.
+
+> [!IMPORTANT]
+> Using network isolation for managed online endpoints is a __preview__ feature, and isn't fully supported.
+
+For more information, see [Enable network isolation for managed online endpoints](how-to-secure-online-endpoint.md).
+
+## Secure the inferencing environment (v1)
++
+In this section, you learn the options available for securing an inferencing environment. When doing a v1 deployment, we recommend that you use Azure Kubernetes Service (AKS) clusters for high-scale, production deployments.
You have two options for AKS clusters in a virtual network:
After securing the workspace with a private endpoint, use the following steps to
## Optional: enable studio functionality
-[Secure the workspace](#secure-the-workspace-and-associated-resources) > [Secure the training environment](#secure-the-training-environment) > [Secure the inferencing environment](#secure-the-inferencing-environment) > **Enable studio functionality** > [Configure firewall settings](#configure-firewall-settings)
- If your storage is in a VNet, you must use extra configuration steps to enable full functionality in studio. By default, the following features are disabled: * Preview data in the studio.
This article is part of a series on securing an Azure Machine Learning workflow.
* [Secure the workspace resources](how-to-secure-workspace-vnet.md) * [Secure the training environment](how-to-secure-training-vnet.md)
-* [Secure the inference environment](how-to-secure-inferencing-vnet.md)
+* For securing inference, see the following documents:
+ * If using CLI v1 or SDK v1 - [Secure inference environment](how-to-secure-inferencing-vnet.md)
+ * If using CLI v2 or SDK v2 - [Network isolation for managed online endpoints](how-to-secure-online-endpoint.md)
* [Enable studio functionality](how-to-enable-studio-virtual-network.md) * [Use custom DNS](how-to-custom-dns.md) * [Use a firewall](how-to-access-azureml-behind-firewall.md)
-* [API platform network isolation](how-to-configure-network-isolation-with-v2.md)
+* [API platform network isolation](how-to-configure-network-isolation-with-v2.md)
machine-learning How To Prebuilt Docker Images Inference Python Extensibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-prebuilt-docker-images-inference-python-extensibility.md
Last updated 10/21/2021 -+ # Python package extensibility for prebuilt Docker images (preview) + The [prebuilt Docker images for model inference](concept-prebuilt-docker-images-inference.md) contain packages for popular machine learning frameworks. There are two methods that can be used to add Python packages __without rebuilding the Docker image__: * [Dynamic installation](#dynamic): This approach uses a [requirements](https://pip.pypa.io/en/stable/cli/pip_install/#requirements-file-format) file to automatically restore Python packages when the Docker container boots.
machine-learning How To Prepare Datasets For Automl Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-prepare-datasets-for-automl-images.md
description: Image data preparation for Azure Machine Learning automated ML to t
-+ - Previously updated : 10/13/2021+ Last updated : 04/15/2022 # Prepare data for computer vision tasks with automated machine learning (preview) +
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
+> * [v1](v1/how-to-prepare-datasets-for-automl-images-v1.md)
+> * [v2 (current version)](how-to-prepare-datasets-for-automl-images.md)
+ > [!IMPORTANT] > Support for training computer vision models with automated ML in Azure Machine Learning is an experimental public preview feature. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). In this article, you learn how to prepare image data for training computer vision models with [automated machine learning in Azure Machine Learning](concept-automated-ml.md).
-To generate models for computer vision tasks with automated machine learning, you need to bring labeled image data as input for model training in the form of an [Azure Machine Learning TabularDataset](/python/api/azureml-core/azureml.data.tabulardataset).
+To generate models for computer vision tasks with automated machine learning, you need to bring labeled image data as input for model training in the form of an `MLTable`.
-To ensure your TabularDataset contains the accepted schema for consumption in automated ML, you can use the Azure Machine Learning data labeling tool or use a conversion script.
+You can create an `MLTable` from labeled training data in JSONL format.
+If your labeled training data is in a different format (like Pascal VOC or COCO), you can use a conversion script to first convert it to JSONL, and then create an `MLTable`. Alternatively, you can use Azure Machine Learning's [data labeling tool](how-to-create-image-labeling-projects.md) to manually label images, and export the labeled data to use for training your AutoML model.
## Prerequisites * Familiarize yourself with the accepted [schemas for JSONL files for AutoML computer vision experiments](reference-automl-images-schema.md).
-* Labeled data you want to use to train computer vision models with automated ML.
-
-## Azure Machine Learning data labeling
+## Get labeled data
+In order to train computer vision models using AutoML, you first need to get labeled training data. The images need to be uploaded to the cloud, and label annotations need to be in JSONL format. You can either use the Azure ML Data Labeling tool to label your data or start with pre-labeled image data.
-If you don't have labeled data, you can use Azure Machine Learning's [data labeling tool](how-to-create-image-labeling-projects.md) to manually label images. This tool automatically generates the data required for training in the accepted format.
+### Using Azure ML Data Labeling tool to label your training data
+If you don't have pre-labeled data, you can use Azure Machine Learning's [data labeling tool](how-to-create-image-labeling-projects.md) to manually label images. This tool automatically generates the data required for training in the accepted format.
It helps to create, manage, and monitor data labeling tasks for
+ Object detection (bounding box) + Instance segmentation (polygon)
-If you already have a data labeling project and you want to use that data, you can [export your labeled data as an Azure Machine Learning TabularDataset](how-to-create-image-labeling-projects.md#export-the-labels), which can then be used directly with automated ML for training computer vision models.
+If you already have a data labeling project and you want to use that data, you can [export your labeled data as an Azure ML Dataset](how-to-create-image-labeling-projects.md#export-the-labels). You can then access the exported dataset under the 'Datasets' tab in Azure ML Studio, and download the underlying JSONL file from the Dataset details page under Data sources. The downloaded JSONL file can then be used to create an `MLTable` that can be used by automated ML for training computer vision models.
+
+### Using pre-labeled training data
+If you have previously labeled data that you would like to use to train your model, you'll first need to upload the images to the default Azure Blob Storage of your Azure ML Workspace and register them as a data asset.
+
+# [CLI v2](#tab/CLI-v2)
-## Use conversion scripts
+Create a .yml file with the following configuration.
-If you have labeled data in popular computer vision data formats, like VOC or COCO, [helper scripts](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/image-object-detection/coco2jsonl.py) to generate JSONL files for training and validation data are available in [notebook examples](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml).
+```yml
+$schema: https://azuremlschemas.azureedge.net/latest/data.schema.json
+name: fridge-items-images-object-detection
+description: Fridge-items images Object detection
+path: ./data/odFridgeObjects
+type: uri_folder
+```
-If your data doesn't follow any of the previously mentioned formats, you can use your own script to generate JSON Lines files based on schemas defined in [Schema for JSONL files for AutoML image experiments](reference-automl-images-schema.md).
+To upload the images as a data asset, run the following CLI v2 command with the path to your .yml file, workspace name, resource group, and subscription ID.
-After your data file(s) are converted to the accepted JSONL format, you can upload them to your storage account on Azure.
+```azurecli
+az ml data create -f [PATH_TO_YML_FILE] --workspace-name [YOUR_AZURE_WORKSPACE] --resource-group [YOUR_AZURE_RESOURCE_GROUP] --subscription [YOUR_AZURE_SUBSCRIPTION]
+```
-## Upload the JSONL file and images to storage
+# [Python SDK v2 (preview)](#tab/SDK-v2)
+[!Notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=upload-data)]
+
-To use the data for automated ML training, upload the data to your [Azure Machine Learning workspace](concept-workspace.md) via a [datastore](how-to-access-data.md). The datastore provides a mechanism for you to upload/download data to storage on Azure, and interact with it from your remote compute targets.
+Next, you will need to get the label annotations in JSONL format. The schema of labeled data depends on the computer vision task at hand. Refer to [schemas for JSONL files for AutoML computer vision experiments](reference-automl-images-schema.md) to learn more about the required JSONL schema for each task type.
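+As a rough illustration, a single JSONL line for an object detection image could look like the following. The exact field values and the datastore path are illustrative assumptions; treat the schema reference above as the authoritative source for each task type.
+
+```json
+{"image_url": "azureml://datastores/workspaceblobstore/paths/odFridgeObjects/images/1.jpg", "image_details": {"format": "jpg", "width": "499px", "height": "666px"}, "label": [{"label": "can", "topX": 0.1, "topY": 0.2, "bottomX": 0.4, "bottomY": 0.7, "isCrowd": 0}]}
+```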
-Upload the entire parent directory consisting of images and JSONL files to the default datastore that is automatically created upon workspace creation. This datastore connects to the default Azure blob storage container that was created as part of workspace creation.
+If your training data is in a different format (such as Pascal VOC or COCO), [helper scripts](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/image-object-detection/coco2jsonl.py) to convert the data to JSONL are available in [notebook examples](https://github.com/Azure/azureml-examples/blob/sdk-preview/sdk/jobs/automl-standalone-jobs).
-```python
-# Retrieve default datastore that's automatically created when we setup a workspace
-ds = ws.get_default_datastore()
-ds.upload(src_dir='./fridgeObjects', target_path='fridgeObjects')
-```
-Once the data upload is done, you can create an [Azure Machine Learning TabularDataset](/python/api/azureml-core/azureml.data.tabulardataset) and register it to your workspace for future use as input to your automated ML experiments for computer vision models.
+## Create MLTable
-```python
-from azureml.core import Dataset
-from azureml.data import DataType
+Once you have your labeled data in JSONL format, you can use it to create an `MLTable`, as shown below. MLTable packages your data into a consumable object for training.
-training_dataset_name = 'fridgeObjectsTrainingDataset'
-# create training dataset
-training_dataset = Dataset.Tabular.from_json_lines_files(path=ds.path("fridgeObjects/train_annotations.jsonl"),
- set_column_types={"image_url": DataType.to_stream(ds.workspace)}
- )
-training_dataset = training_dataset.register( workspace=ws,name=training_dataset_name)
-print("Training dataset name: " + training_dataset.name)
-```
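+As a rough sketch, an `MLTable` file that points at a JSONL annotations file could look like the following. The file name and transformation settings are assumptions; check the MLTable documentation for the exact syntax supported by your SDK and CLI versions.
+
+```yml
+paths:
+  - file: ./train_annotations.jsonl
+transformations:
+  - read_json_lines:
+      encoding: utf8
+      invalid_lines: error
+      include_path_column: false
+```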
+You can then pass in the `MLTable` as a data input for your AutoML training job.
## Next steps * [Train computer vision models with automated machine learning](how-to-auto-train-image-models.md). * [Train a small object detection model with automated machine learning](how-to-use-automl-small-object-detect.md).
-* [Tutorial: Train an object detection model (preview) with AutoML and Python](tutorial-auto-train-image-models.md).
+* [Tutorial: Train an object detection model (preview) with AutoML and Python](tutorial-auto-train-image-models.md).
machine-learning How To Read Write Data V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-read-write-data-v2.md
+
+ Title: Read and write data
+
+description: Learn how to read and write data for consumption in Azure Machine Learning training jobs.
+++++++ Last updated : 04/15/2022+
+#Customer intent: As an experienced Python developer, I need to read in my data to make it available to a remote compute to train my machine learning models.
++
+# Read and write data for ML experiments
+
+Learn how to read and write data for your training jobs with the Azure Machine Learning Python SDK v2 (preview) and the Azure Machine Learning CLI extension v2.
+
+## Prerequisites
+
+- An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
+
+- The [Azure Machine Learning SDK for Python v2](/python/api/overview/azure/ml/intro).
+
+- An Azure Machine Learning workspace
+
+```python
+
+from azure.ai.ml import MLClient
+from azure.identity import InteractiveBrowserCredential
+
+#enter details of your AML workspace
+subscription_id = '<SUBSCRIPTION_ID>'
+resource_group = '<RESOURCE_GROUP>'
+workspace = '<AML_WORKSPACE_NAME>'
+
+#get a handle to the workspace
+ml_client = MLClient(InteractiveBrowserCredential(), subscription_id, resource_group, workspace)
+```
+
+## Read local data in a job
+
+You can use data from your current working directory in a training job with the JobInput class.
+The JobInput class allows you to define data inputs from a specific file (`uri_file`) or a folder location (`uri_folder`). In the JobInput object, you specify the `path` where your data is located; the path can be a local path or a cloud path. Azure Machine Learning supports `https://`, `abfss://`, `wasbs://`, and `azureml://` URIs.
+
+> [!IMPORTANT]
+> If the path is local, but your compute is defined to be in the cloud, Azure Machine Learning will automatically upload the data to cloud storage for you.
++
+# [Python-SDK](#tab/Python-SDK)
+```python
+
+from azure.ai.ml.entities import Data, UriReference, JobInput, CommandJob
+from azure.ai.ml._constants import AssetTypes
+
+my_job_inputs = {
+ "input_data": JobInput(
+ path='./sample_data', # change to be your local directory
+ type=AssetTypes.URI_FOLDER
+ )
+}
+
+job = CommandJob(
+ code="./src", # local path where the code is stored
+ command='python train.py --input_folder ${{inputs.input_data}}',
+ inputs=my_job_inputs,
+ environment="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu:9",
+ compute="cpu-cluster"
+)
+
+#submit the command job
+returned_job = ml_client.create_or_update(job)
+#get a URL for the status of the job
+returned_job.services["Studio"].endpoint
+```
+
+# [CLI](#tab/CLI)
+The following code shows how to read in `uri_file` type data from a local path.
+
+```azurecli
+az ml job create -f <file-name>.yml
+```
+```yaml
+$schema: https://azuremlschemas.azureedge.net/latest/commandJob.schema.json
+command: |
+ python hello-iris.py --iris-csv ${{inputs.iris_csv}}
+code: src
+inputs:
+ iris_csv:
+ type: uri_file
+ path: ./example-data/iris.csv
+environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest
+compute: azureml:cpu-cluster
+```
+++
+## Read data stored in storage service on Azure in a job
+
+You can read your data in from existing storage on Azure.
+You can use an Azure Machine Learning datastore to register this existing Azure storage.
+Azure Machine Learning datastores securely keep the connection information to your data storage on Azure, so you don't have to code it in your scripts.
+You can access your data and create datastores with either of the following (a minimal datastore definition is sketched after this list):
+- Credential-based data authentication, like a service principal or shared access signature (SAS) token. These credentials can be accessed by users who have Reader access to the workspace.
+- Identity-based data authentication to connect to storage services with your Azure Active Directory ID or other managed identity.
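+As an illustration, an identity-based Blob datastore could be registered with the CLI v2 from a YAML definition along the following lines. The datastore name, storage account, and container below are placeholders, and the exact YAML fields are an assumption based on the CLI v2 datastore format.
+
+```yaml
+# blob-datastore.yml (hypothetical): registers Blob storage without embedded credentials
+name: my_blob_datastore
+type: azure_blob
+description: Example datastore that relies on identity-based access.
+account_name: mystorageaccount
+container_name: my-container
+```
+
+```azurecli
+az ml datastore create --file blob-datastore.yml --workspace-name [YOUR_AZURE_WORKSPACE] --resource-group [YOUR_AZURE_RESOURCE_GROUP]
+```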
+
+# [Python-SDK](#tab/Python-SDK)
+
+The following code shows how to read in uri_folder type data from Azure Data Lake Storage Gen 2 or Blob via SDK V2.
+
+```python
+
+from azure.ai.ml.entities import Data, UriReference, JobInput, CommandJob
+from azure.ai.ml._constants import AssetTypes
+
+my_job_inputs = {
+ "input_data": JobInput(
+ path='abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>', # Blob: 'https://<account_name>.blob.core.windows.net/<container_name>/path'
+ type=AssetTypes.URI_FOLDER
+ )
+}
+
+job = CommandJob(
+ code="./src", # local path where the code is stored
+ command='python train.py --input_folder ${{inputs.input_data}}',
+ inputs=my_job_inputs,
+ environment="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu:9",
+ compute="cpu-cluster"
+)
+
+#submit the command job
+returned_job = ml_client.create_or_update(job)
+#get a URL for the status of the job
+returned_job.services["Studio"].endpoint
+```
+
+# [CLI](#tab/CLI)
+The following code shows how to read in uri_file type data from Azure ML datastore via CLI V2.
+
+```azurecli
+az ml job create -f <file-name>.yml
+```
+```yaml
+$schema: https://azuremlschemas.azureedge.net/latest/commandJob.schema.json
+command: |
+ echo "--iris-csv: ${{inputs.iris_csv}}"
+ python hello-iris.py --iris-csv ${{inputs.iris_csv}}
+code: src
+inputs:
+ iris_csv:
+ type: uri_file
+ path: azureml://datastores/workspaceblobstore/paths/example-data/iris.csv
+environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest
+compute: azureml:cpu-cluster
+```
+++
+## Read and write data to cloud-based storage
+
+You can read and write data from your job into your cloud-based storage.
+
+The JobInput defaults the mode (how the input will be exposed during job runtime) to `InputOutputModes.RO_MOUNT` (read-only mount). In other words, Azure Machine Learning mounts the file or folder to the compute and sets it to read-only. By design, you can't write to JobInputs, only to JobOutputs; output data is automatically uploaded to cloud storage.
+
+Matrix of possible types and modes for job inputs and outputs:
+
+Type | Input/Output | `upload` | `download` | `ro_mount` | `rw_mount` | `direct` | `eval_download` | `eval_mount`
+ | | | | | | | |
+`uri_folder` | Input | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ❌
+`uri_file` | Input | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ❌
+`mltable` | Input | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅
+`uri_folder` | Output | ✅ | ❌ | ❌ | ✅ | ✅ | ❌ | ❌
+`uri_file` | Output | ✅ | ❌ | ❌ | ✅ | ✅ | ❌ | ❌
+`mltable` | Output | ✅ | ❌ | ❌ | ✅ | ✅ | ❌ | ❌
+
+As you can see from the table, `eval_download` and `eval_mount` are unique to `mltable`. An MLTable artifact can yield files that aren't necessarily located in the `mltable`'s storage, or it can subset or shuffle the data that resides in the storage. That view is only visible when the MLTable file is actually evaluated by the engine; the `eval_download` and `eval_mount` modes provide that evaluated view of the files.
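+For example, you can override the default mode on an input or output when you construct it. The sketch below assumes that `JobInput` and `JobOutput` accept a `mode` argument and that the mode constants live in the same `azure.ai.ml._constants` module as `AssetTypes`; adjust the import if your SDK version places them elsewhere.
+
+```python
+from azure.ai.ml.entities import JobInput, JobOutput
+from azure.ai.ml._constants import AssetTypes, InputOutputModes  # module location is an assumption
+
+my_job_inputs = {
+    "input_data": JobInput(
+        path='abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>',
+        type=AssetTypes.URI_FOLDER,
+        mode=InputOutputModes.DOWNLOAD,  # download the folder instead of the default read-only mount
+    )
+}
+
+my_job_outputs = {
+    "output_folder": JobOutput(
+        path='abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>',
+        type=AssetTypes.URI_FOLDER,
+        mode=InputOutputModes.RW_MOUNT,  # read-write mount, per the matrix above
+    )
+}
+```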
++++
+# [Python-SDK](#tab/Python-SDK)
+
+```python
+from azure.ai.ml.entities import Data, UriReference, JobInput, CommandJob, JobOutput
+from azure.ai.ml._constants import AssetTypes
+
+my_job_inputs = {
+ "input_data": JobInput(
+ path='abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>',
+ type=AssetTypes.URI_FOLDER
+ )
+}
+
+my_job_outputs = {
+ "output_folder": JobOutput(
+ path='abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>',
+ type=AssetTypes.URI_FOLDER
+ )
+}
+
+job = CommandJob(
+ code="./src", #local path where the code is stored
+ command='python pre-process.py --input_folder ${{inputs.input_data}} --output_folder ${{outputs.output_folder}}',
+ inputs=my_job_inputs,
+ outputs=my_job_outputs,
+ environment="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu:9",
+ compute="cpu-cluster"
+)
+
+#submit the command job
+returned_job = ml_client.create_or_update(job)
+
+#get a URL for the status of the job
+returned_job.services["Studio"].endpoint
+
+```
+
+# [CLI](#tab/CLI)
+
+```yaml
+$schema: https://azuremlschemas.azureedge.net/latest/CommandJob.schema.json
+code: src/prep
+command: >-
+ python prep.py
+ --raw_data ${{inputs.raw_data}}
+ --prep_data ${{outputs.prep_data}}
+inputs:
+ raw_data:
+ type: uri_folder
+ path: ./data
+outputs:
+ prep_data:
+ mode: upload
+environment: azureml:AzureML-sklearn-0.24-ubuntu18.04-py37-cpu@latest
+compute: azureml:cpu-cluster
+
+```
+++
+## Register data
+
+You can register data as an asset to your workspace. The benefits of registering data are:
+
+* Easy to share with other members of the team (no need to remember file locations)
+* Versioning of the metadata (location, description, etc.)
+* Lineage tracking
+
+The following example demonstrates versioning of sample data, and shows how to register a local file as a data asset. The data is uploaded to cloud storage and registered as an asset.
+
+```python
+
+from azure.ai.ml.entities import Data
+from azure.ai.ml._constants import AssetTypes
+
+my_data = Data(
+ path="./sample_data/titanic.csv",
+ type=AssetTypes.URI_FILE,
+ description="Titanic Data",
+ name="titanic",
+ version='1'
+)
+
+ml_client.data.create_or_update(my_data)
+```
+
+To register data that is in a cloud location, you can specify the path with any of the supported protocols for the storage type. The following example shows what the path looks like for data from Azure Data Lake Storage Gen 2.
+
+```python
+from azure.ai.ml.entities import Data
+from azure.ai.ml._constants import AssetTypes
+
+my_path = 'abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>' # adls gen2
+
+my_data = Data(
+ path=my_path,
+ type=AssetTypes.URI_FOLDER,
+ description="description here",
+ name="a_name",
+ version='1'
+)
+
+ml_client.data.create_or_update(my_data)
+
+```
+
+## Consume registered data assets in jobs
+
+Once your data is registered as an asset to the workspace, you can consume that data asset in jobs.
+The following example demonstrates how to consume `version` 1 of the registered data asset `titanic`.
+
+```python
+
+from azure.ai.ml.entities import Data, UriReference, JobInput, CommandJob
+from azure.ai.ml._constants import AssetTypes
+
+registered_data_asset = ml_client.data.get(name='titanic', version='1')
+
+my_job_inputs = {
+ "input_data": JobInput(
+ type=AssetTypes.URI_FOLDER,
+ path=registered_data_asset.id
+ )
+}
+
+job = CommandJob(
+ code="./src",
+ command='python read_data_asset.py --input_folder ${{inputs.input_data}}',
+ inputs=my_job_inputs,
+ environment="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu:9",
+ compute="cpu-cluster"
+)
+
+#submit the command job
+returned_job = ml_client.create_or_update(job)
+
+#get a URL for the status of the job
+returned_job.services["Studio"].endpoint
+```
+
+## Use data in pipelines
+
+If you're working with Azure Machine Learning pipelines, you can read data into and move data between pipeline components with the Azure Machine Learning CLI v2 extension or the Python SDK v2 (preview).
+
+### Azure Machine Learning CLI v2
+The following YAML file demonstrates how to use the output data from one component as the input for another component of the pipeline using the Azure Machine Learning CLI v2 extension:
+++
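+The referenced YAML isn't reproduced in this page; a minimal sketch of the pattern, assuming two hypothetical component definitions (`prep.yml` and `train.yml`), looks like the following. The output of `prep_job` is bound to the input of `train_job` with the `${{parent.jobs.<job>.outputs.<output>}}` syntax.
+
+```yaml
+$schema: https://azuremlschemas.azureedge.net/latest/pipelineJob.schema.json
+type: pipeline
+settings:
+  default_compute: azureml:cpu-cluster
+jobs:
+  prep_job:
+    type: command
+    component: ./prep.yml   # hypothetical component definition
+    inputs:
+      raw_data:
+        type: uri_folder
+        path: ./data
+    outputs:
+      prep_data:
+        mode: upload
+  train_job:
+    type: command
+    component: ./train.yml  # hypothetical component definition
+    inputs:
+      training_data: ${{parent.jobs.prep_job.outputs.prep_data}}  # output of prep_job feeds train_job
+```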
+### Python SDK v2 (preview)
+
+The following example defines a pipeline containing three nodes and moves data between each node.
+
+* `prepare_data_node` that loads the image and labels from Fashion MNIST data set into `mnist_train.csv` and `mnist_test.csv`.
+* `train_node` that trains a CNN model with Keras using the training data, `mnist_train.csv` .
+* `score_node` that scores the model using test data, `mnist_test.csv`.
+
+[!notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=build-pipeline)]
+
+## Next steps
+* [Install and set up Python SDK v2 (preview)](https://aka.ms/sdk-v2-install)
+* [Install and use the CLI (v2)](how-to-configure-cli.md)
+* [Train models with the Python SDK v2 (preview)](how-to-train-sdk.md)
+* [Tutorial: Create production ML pipelines with Python SDK v2 (preview)](tutorial-pipeline-python-sdk.md)
+* Learn more about [Data in Azure Machine Learning](concept-data.md)
machine-learning How To Responsible Ai Dashboard Sdk Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-responsible-ai-dashboard-sdk-cli.md
+
+ Title: Generate Responsible AI dashboard with YAML and Python (preview)
+
+description: Learn how to generate the Responsible AI dashboard with Python and YAML in Azure Machine Learning.
++++++ Last updated : 05/10/2022+++
+# Generate Responsible AI dashboard with YAML and Python (preview)
++
+The Responsible AI (RAI) dashboard can be generated via a pipeline job using RAI components. There are six core components for creating Responsible AI dashboards, along with a couple of helper components. A sample experiment graph:
++
+## Getting started
+
+To use the Responsible AI components, you must first register them in your Azure Machine Learning workspace. This section documents the required steps.
+
+### Prerequisites
+You'll need:
+
+- An AzureML workspace
+- A git installation
+- A MiniConda installation
+- An Azure CLI installation
+
+### Installation Steps
+
+1. Clone the Repository
+ ```bash
+ git clone https://github.com/Azure/RAI-vNext-Preview.git
+
+ cd RAI-vNext-Preview
+ ```
+2. Log into Azure
+
+ ```bash
+ az login
+ ```
+
+3. Run the setup script
+
+ We provide a setup script which:
+
+ - Creates a new conda environment with a name you specify
+ - Installs all the required Python packages
+ - Registers all the RAI components in your AzureML workspace
+ - Registers some sample datasets in your AzureML workspace
+ - Sets the defaults for the Azure CLI to point to your workspace
+
+ We provide PowerShell and bash versions of the script. From the repository root, run:
+
+ ```powershell
+ .\Quick-Setup.ps1
+ ```
+
+ This will prompt for the desired conda environment name and AzureML workspace details. Alternatively, use the bash script:
+
+ ```bash
+ ./quick-setup.bash <CONDA-ENV-NAME> <SUBSCRIPTION-ID> <RESOURCE-GROUP-NAME> <WORKSPACE-NAME>
+ ```
+
+ This script will echo the supplied parameters, and pause briefly before continuing.
+
+## Responsible AI components
+
+The core components for constructing a Responsible AI dashboard in AzureML are:
+
+- `RAI Insights Dashboard Constructor`
+- The tool components:
+ - `Add Explanation to RAI Insights Dashboard`
+ - `Add Causal to RAI Insights Dashboard`
+ - `Add Counterfactuals to RAI Insights Dashboard`
+ - `Add Error Analysis to RAI Insights Dashboard`
+ - `Gather RAI Insights Dashboard`
++
+The `RAI Insights Dashboard Constructor` and `Gather RAI Insights Dashboard` components are always required, plus at least one of the tool components. However, it isn't necessary to use all the tools in every Responsible AI dashboard.
+
+Below are specifications of the Responsible AI components and examples of code snippets in YAML and Python. To view the full code, see [sample YAML and Python notebook](https://aka.ms/RAIsamplesProgrammer).
+
+### RAI Insights Dashboard Constructor
+
+This component has three input ports:
+
+- The machine learning model
+- The training dataset
+- The test dataset
+
+Use the train and test datasets that you used when training your model to generate model-debugging insights with components such as Error analysis and Model explanations. For components like Causal analysis, which doesn't require a model, the train dataset is used to train the causal model that generates the causal insights. The test dataset is used to populate your Responsible AI dashboard visualizations.
+
+The easiest way to supply the model is using our `Fetch Registered Model` component, which will be discussed below.
+
+> [!NOTE]
+> Currently, only models in MLflow format with a sklearn flavor are supported.
+
+The two datasets should be file datasets (of type `uri_file`) in Parquet format. Tabular datasets aren't supported, but we provide a `TabularDataset to Parquet file` component to help with conversions. The training and test datasets provided don't have to be the same datasets used to train the model (although they can be). By default, the test dataset is restricted to 5,000 rows to keep the visualization UI performant.
+
+The constructor component also accepts the following parameters:
+
+| Parameter name | Description | Type |
+|-|--|-|
+| title | Brief description of the dashboard | String |
+| task_type | Specifies whether the model is for classification or regression | String, `classification` or `regression` |
+| target_column_name | The name of the column in the input datasets that the model is trying to predict | String |
+| maximum_rows_for_test_dataset | The maximum number of rows allowed in the test dataset (for performance reasons) | Integer (defaults to 5000) |
+| categorical_column_names | The columns in the datasets that represent categorical data | Optional list of strings (see note below) |
+| classes | The full list of class labels in the training dataset | Optional list of strings (see note below) |
+
+> [!NOTE]
+> The lists should be supplied as a single JSON-encoded string for the `categorical_column_names` and `classes` inputs.
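+If you're building these inputs in Python, one way to produce the JSON-encoded strings is with the standard `json` module, as in this minimal sketch (the class labels are placeholders):
+
+```python
+import json
+
+# Column names taken from the example below; replace with your own dataset's columns.
+categorical_column_names = json.dumps(["location", "style", "job title", "OS", "Employer", "IDE", "Programming language"])
+
+# Hypothetical class labels for a classification task.
+classes = json.dumps(["rejected", "approved"])
+```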
+
+The constructor component has a single output named `rai_insights_dashboard`. This is an empty dashboard, which the individual tool components will operate on, and then all the results will be assembled by the `Gather RAI Insights Dashboard` component at the end.
+
+# [YAML](#tab/yaml)
+
+```yml
+ create_rai_job:
+
+    type: command
+    component: azureml:rai_insights_constructor:1
+    inputs:
+      title: From YAML snippet
+      task_type: regression
+      model_info_path: ${{parent.jobs.fetch_model_job.outputs.model_info_output_path}}
+      train_dataset: ${{parent.inputs.my_training_data}}
+      test_dataset: ${{parent.inputs.my_test_data}}
+      target_column_name: ${{parent.inputs.target_column_name}}
+      categorical_column_names: '["location", "style", "job title", "OS", "Employer", "IDE", "Programming language"]'
+```
+
+# [Python](#tab/python)
+
+First load the component:
+
+```python
+# First load the component:
+
+rai_constructor_component = load_component(
+    client=ml_client, name="rai_insights_constructor", version="1"
+)
+
+# Then inside the pipeline:
+
+construct_job = rai_constructor_component(
+    title="From Python",
+    task_type="classification",
+    model_info_path=fetch_model_job.outputs.model_info_output_path,
+    train_dataset=train_data,
+    test_dataset=test_data,
+    target_column_name=target_column_name,
+    categorical_column_names='["location", "style", "job title", "OS", "Employer", "IDE", "Programming language"]',
+    maximum_rows_for_test_dataset=5000,
+    classes="[]",
+)
+```
+
+### Exporting pre-built Cohorts for score card generation
+
+Pre-built cohorts can be exported for use in scorecard generation. You can find an example of building cohorts in this Jupyter notebook: [responsibleaidashboard-diabetes-decision-making.ipynb](https://github.com/microsoft/responsible-ai-toolbox/blob/main/notebooks/responsibleaidashboard/responsibleaidashboard-diabetes-decision-making.ipynb). Once a cohort is defined, it can be exported to JSON as follows:
+
+```python
+# cohort1, cohort2 are cohorts defined in sample notebook of type raiwidgets.cohort.Cohort
+import json
+json.dumps([cohort1.to_json(), cohort2.to_json()])
+```
+A sample of the generated JSON string is shown below:
+
+```json
+[
+ {
+ "name": "High Yoe",
+ "cohort_filter_list": [
+ {
+ "method": "greater",
+ "arg": [
+ 5
+ ],
+ "column": "YOE"
+ }
+ ]
+ },
+ {
+ "name": "Low Yoe",
+ "cohort_filter_list": [
+
+ {
+ "method": "less",
+ "arg": [
+ 6.5
+ ],
+ "column": "YOE"
+ }
+ ]
+ }
+]
+```
+
+### Add Causal to RAI Insights Dashboard
+
+This component performs a causal analysis on the supplied datasets. It has a single input port, which accepts the output of the `RAI Insights Dashboard Constructor`. It also accepts the following parameters:
+
+| Parameter name | Description | Type |
+|-|--|-|
+| treatment_features | A list of feature names in the datasets, which are potentially 'treatable' to obtain different outcomes. | List of strings (see note below) |
+| heterogeneity_features | A list of feature names in the datasets, which might affect how the 'treatable' features behave. By default all features will be considered | Optional list of strings (see note below).|
+| nuisance_model | The model used to estimate the outcome of changing the treatment features. | Optional string. Must be 'linear' or 'AutoML', defaulting to 'linear'. |
+| heterogeneity_model | The model used to estimate the effect of the heterogeneity features on the outcome. | Optional string. Must be 'linear' or 'forest', defaulting to 'linear'. |
+| alpha | Confidence level of confidence intervals | Optional floating point number. Defaults to 0.05. |
+| upper_bound_on_cat_expansion | Maximum expansion for categorical features. | Optional integer. Defaults to 50. |
+| treatment_cost | The cost of the treatments. If 0, then all treatments will have zero cost. If a list is passed, then each element is applied to one of the treatment_features. Each element can be a scalar value to indicate a constant cost of applying that treatment or an array indicating the cost for each sample. If the treatment is a discrete treatment, then the array for that feature should be two dimensional with the first dimension representing samples and the second representing the difference in cost between the non-default values and the default value. | Optional integer or list (see note below).|
+| min_tree_leaf_samples | Minimum number of samples per leaf in policy tree. | Optional integer. Defaults to 2 |
+| max_tree_depth | Maximum depth of the policy tree | Optional integer. Defaults to 2 |
+| skip_cat_limit_checks | By default, categorical features need to have several instances of each category in order for a model to be fit robustly. Setting this to True will skip these checks. |Optional Boolean. Defaults to False. |
+| categories | What categories to use for the categorical columns. If `auto`, then the categories will be inferred for all categorical columns. Otherwise, this argument should have as many entries as there are categorical columns. Each entry should be either `auto` to infer the values for that column or the list of values for the column. If explicit values are provided, the first value is treated as the "control" value for that column against which other values are compared. | Optional. `auto` or list (see note below.) |
+| n_jobs | Degree of parallelism to use. | Optional integer. Defaults to 1. |
+| verbose | Whether to provide detailed output during the computation. | Optional integer. Defaults to 1. |
+| random_state | Seed for the PRNG. | Optional integer. |
+
+> [!NOTE]
+> For the `list` parameters: Several of the parameters accept lists of other types (strings, numbers, even other lists). To pass these into the component, they must first be JSON-encoded into a single string.
+
+This component has a single output port, which can be connected to one of the `insight_[n]` input ports of the Gather RAI Insights Dashboard component.
+
+# [YAML](#tab/yaml)
+
+```yml
+  causal_01:
+    type: command
+    component: azureml:rai_insights_causal:1
+    inputs:
+      rai_insights_dashboard: ${{parent.jobs.create_rai_job.outputs.rai_insights_dashboard}}
+      treatment_features: '["Number of github repos contributed to", "YOE"]'
+```
+
+# [Python](#tab/python)
+
+```python
+# First load the component:
+rai_causal_component = load_component(
+    client=ml_client, name="rai_insights_causal", version="1"
+)
+# Use it inside a pipeline definition:
+causal_job = rai_causal_component(
+    rai_insights_dashboard=construct_job.outputs.rai_insights_dashboard,
+    treatment_features='["Number of github repos contributed to", "YOE"]',
+)
+```
+++
+### Add Counterfactuals to RAI Insights Dashboard
+
+This component generates counterfactual points for the supplied test dataset. It has a single input port, which accepts the output of the RAI Insights Dashboard Constructor. It also accepts the following parameters:
+
+| Parameter Name | Description | Type |
+|-|-||
+| total_CFs | How many counterfactual points to generate for each row in the test dataset | Optional integer. Defaults to 10 |
+| method | The `dice-ml` explainer to use | Optional string. Either `random`, `genetic` or `kdtree`. Defaults to `random` |
+| desired_class | Index identifying the desired counterfactual class. For binary classification, this should be set to `opposite` | Optional string or integer. Defaults to 0 |
+| desired_range | For regression problems, identify the desired range of outcomes | Optional list of two numbers (see note below). |
+| permitted_range | Dictionary with feature names as keys and permitted range in list as values. Defaults to the range inferred from training data. | Optional string or list (see note below).|
+| features_to_vary | Either a string "all" or a list of feature names to vary. | Optional string or list (see note below)|
+| feature_importance | Flag to enable computation of feature importances using `dice-ml` |Optional Boolean. Defaults to True |
+
+> [!NOTE]
+> For the non-scalar parameters: Parameters which are lists or dictionaries should be passed as single JSON-encoded strings.
+
+This component has a single output port, which can be connected to one of the `insight_[n]` input ports of the Gather RAI Insights Dashboard component.
+
+# [YAML](#tab/yaml)
+
+```yml
+  counterfactual_01:
+    type: command
+    component: azureml:rai_insights_counterfactual:1
+    inputs:
+      rai_insights_dashboard: ${{parent.jobs.create_rai_job.outputs.rai_insights_dashboard}}
+      total_CFs: 10
+      desired_range: "[5, 10]"
+```
++
+# [Python](#tab/python)
+
+```python
+# First load the component:
+rai_counterfactual_component = load_component(
+    client=ml_client, name="rai_insights_counterfactual", version="1"
+)
+# Use it in a pipeline function:
+counterfactual_job = rai_counterfactual_component(
+    rai_insights_dashboard=create_rai_job.outputs.rai_insights_dashboard,
+    total_cfs=10,
+    desired_range="[5, 10]",
+)
+```
+++
+### Add Error Analysis to RAI Insights Dashboard
+
+This component generates an error analysis for the model. It has a single input port, which accepts the output of the RAI Insights Dashboard Constructor. It also accepts the following parameters:
+
+| Parameter Name | Description | Type |
+|-|-||
+| max_depth | The maximum depth of the error analysis tree | Optional integer. Defaults to 3 |
+| num_leaves | The maximum number of leaves in the error tree | Optional integer. Defaults to 31 |
+| min_child_samples | The minimum number of datapoints required to produce a leaf | Optional integer. Defaults to 20 |
+| filter_features | A list of one or two features to use for the matrix filter | Optional list of two feature names (see note below). |
+
+> [!NOTE]
+> filter_features: This list of one or two feature names should be passed as a single JSON-encoded string.
+
+This component has a single output port, which can be connected to one of the `insight_[n]` input ports of the Gather RAI Insights Dashboard component.
+
+# [YAML](#tab/yaml)
+
+```yml
+  error_analysis_01:
+    type: command
+    component: azureml:rai_insights_erroranalysis:1
+    inputs:
+      rai_insights_dashboard: ${{parent.jobs.create_rai_job.outputs.rai_insights_dashboard}}
+      filter_features: '["style", "Employer"]'
+```
+
+# [Python](#tab/python)
+
+```python
+# First load the component:
+rai_erroranalysis_component = load_component(
+    client=ml_client, name="rai_insights_erroranalysis", version="1"
+)
+# Use inside a pipeline:
+erroranalysis_job = rai_erroranalysis_component(
+    rai_insights_dashboard=create_rai_job.outputs.rai_insights_dashboard,
+    filter_features='["style", "Employer"]',
+)
+```
+++
+### Add Explanation to RAI Insights Dashboard
+
+This component generates an explanation for the model. It has a single input port, which accepts the output of the RAI Insights Dashboard Constructor. It accepts a single, optional comment string as a parameter.
+
+This component has a single output port, which can be connected to one of the `insight_[n]` input ports of the Gather RAI Insights Dashboard component.
++
+# [YAML](#tab/yaml)
+
+```yml
+  explain_01:
+    type: command
+    component: azureml:rai_insights_explanation:1
+    inputs:
+      comment: My comment
+      rai_insights_dashboard: ${{parent.jobs.create_rai_job.outputs.rai_insights_dashboard}}
+```
++
+# [Python](#tab/python)
+
+```python
+# First load the component:
+rai_explanation_component = load_component(
+    client=ml_client, name="rai_insights_explanation", version="1"
+)
+# Use inside a pipeline:
+explain_job = rai_explanation_component(
+    comment="My comment",
+    rai_insights_dashboard=create_rai_job.outputs.rai_insights_dashboard,
+)
+```
++
+### Gather RAI Insights Dashboard
+
+This component assembles the generated insights into a single Responsible AI dashboard. It has five input ports:
+
+- The `constructor` port that must be connected to the RAI Insights Dashboard Constructor component.
+- Four `insight_[n]` ports that can be connected to the output of the tool components. At least one of these ports must be connected.
+
+There are two output ports. The `dashboard` port contains the completed `RAIInsights` object, while the `ux_json` contains the data required to display a minimal dashboard.
++
+# [YAML](#tab/yaml)
+
+```yml
+  gather_01:
+    type: command
+    component: azureml:rai_insights_gather:1
+    inputs:
+      constructor: ${{parent.jobs.create_rai_job.outputs.rai_insights_dashboard}}
+      insight_1: ${{parent.jobs.causal_01.outputs.causal}}
+      insight_2: ${{parent.jobs.counterfactual_01.outputs.counterfactual}}
+      insight_3: ${{parent.jobs.error_analysis_01.outputs.error_analysis}}
+      insight_4: ${{parent.jobs.explain_01.outputs.explanation}}
+```
++
+# [Python](#tab/python)
+
+```python
+# First load the component:
+rai_gather_component = load_component(
+    client=ml_client, name="rai_insights_gather", version="1"
+)
+# Use in a pipeline:
+rai_gather_job = rai_gather_component(
+    constructor=create_rai_job.outputs.rai_insights_dashboard,
+    insight_1=explain_job.outputs.explanation,
+    insight_2=causal_job.outputs.causal,
+    insight_3=counterfactual_job.outputs.counterfactual,
+    insight_4=erroranalysis_job.outputs.error_analysis,
+)
+```
+++
+## Helper components
+
+We provide two helper components to aid in connecting the Responsible AI components to your existing assets.
+
+### Fetch registered model
+
+This component produces information about a registered model, which can be consumed by the `model_info_path` input port of the RAI Insights Dashboard Constructor component. It has a single input parameter: the AzureML ID (`<NAME>:<VERSION>`) of the desired model.
+
+# [YAML](#tab/yaml)
+
+```yml
+  fetch_model_job:
+    type: command
+    component: azureml:fetch_registered_model:1
+    inputs:
+      model_id: my_model_name:12
+```
+
+# [Python](#tab/python)
+
+```python
+# First load the component:
+fetch_model_component = load_component(
+    client=ml_client, name="fetch_registered_model", version="1"
+)
+# Use it in a pipeline:
+fetch_model_job = fetch_model_component(model_id=registered_adult_model_id)
+```
+++
+### Tabular dataset to parquet file
+
+This component converts the tabular dataset named in its sole input parameter into a Parquet file, which can be consumed by the `train_dataset` and `test_dataset` input ports of the RAI Insights Dashboard Constructor component. Its single input parameter is the name of the desired dataset.
+
+# [YAML](#tab/yaml)
+
+```yml
+  convert_train_job:
+    type: command
+    component: azureml:convert_tabular_to_parquet:1
+    inputs:
+      tabular_dataset_name: tabular_dataset_name
+```
++
+# [Python](#tab/python)
+
+```python
+# First load the component:
+tabular_to_parquet_component = load_component(
+    client=ml_client, name="convert_tabular_to_parquet", version="1"
+)
+# Use it in a pipeline:
+to_parquet_job_train = tabular_to_parquet_component(
+    tabular_dataset_name=train_data_name
+)
+++
+## Input constraints
+
+### What model formats and flavors are supported?
+
+The model must be saved in MLflow format with a sklearn flavor available. Furthermore, the model needs to be loadable in the environment used by the Responsible AI components.
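+As an illustration, a scikit-learn model can be written out in MLflow format with `mlflow.sklearn.save_model`. This is a minimal sketch on a sample dataset; the output directory name is a placeholder.
+
+```python
+import mlflow.sklearn
+from sklearn.datasets import load_iris
+from sklearn.linear_model import LogisticRegression
+
+# Train a simple scikit-learn model on a sample dataset.
+X, y = load_iris(return_X_y=True)
+model = LogisticRegression(max_iter=1000).fit(X, y)
+
+# Save it as an MLflow model directory with the sklearn flavor.
+mlflow.sklearn.save_model(model, "my_model_dir")
+```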
+
+### What data formats are supported?
+
+The supplied datasets should be file datasets (uri_file type) in Parquet format. We provide the `TabularDataset to Parquet File` component to help convert the data into the required format.
+
+## Next steps
+
+- Once your Responsible AI dashboard is generated, [view how to access and use it in Azure Machine Learning studio](how-to-responsible-ai-dashboard.md)
+- Summarize and share your Responsible AI insights with the [Responsible AI scorecard as a PDF export](how-to-responsible-ai-scorecard.md).
+- Learn more about the [concepts and techniques behind the Responsible AI dashboard](concept-responsible-ai-dashboard.md).
+- Learn more about how to [collect data responsibly](concept-sourcing-human-data.md)
+- View [sample YAML and Python notebooks](https://aka.ms/RAIsamples) to generate a Responsible AI dashboard with YAML or Python.
machine-learning How To Responsible Ai Dashboard Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-responsible-ai-dashboard-ui.md
+
+ Title: Generate Responsible AI dashboard in the studio UI (preview)
+
+description: Learn how to generate the Responsible AI dashboard with no-code experience in the Azure Machine Learning studio UI.
++++++ Last updated : 05/10/2022+++
+# Generate Responsible AI dashboard in the studio UI (preview)
+
+You can create a Responsible AI dashboard with a no-code experience in the Azure Machine Learning studio UI. To start the wizard, navigate to the registered model you'd like to create Responsible AI insights for and select the **Details** tab. Then select the **Create Responsible AI dashboard (preview)** button.
++
+The wizard provides an interface for entering all the parameters needed to instantiate your Responsible AI dashboard without having to touch code. The experience takes place entirely in the Azure Machine Learning studio UI, with a guided flow and instructional text to help you choose which Responsible AI components to populate your dashboard with. The wizard is divided into five steps:
+
+1. Datasets
+2. Modeling task
+3. Dashboard components
+4. Component parameters
+5. Experiment configuration
+
+## Select your datasets
+
+The first step is to select the train and test dataset that you used when training your model to generate model-debugging insights. For components like Causal analysis, which doesn't require a model, the train dataset will be used to train the causal model to generate the causal insights.
+
+> [!NOTE]
+> Only tabular dataset formats are supported.
++
+1. **Select a dataset for training**: Select the dropdown to view your registered datasets in Azure Machine Learning workspace. This dataset will be used to generate Responsible AI insights for components such as model explanations and error analysis.
+2. **Create new dataset**: If the desired datasets aren't in your Azure Machine Learning workspace, select "New dataset" to upload your dataset.
+3. **Select a dataset for testing**: Select the dropdown to view your registered datasets in Azure Machine Learning workspace. This dataset is used to populate your Responsible AI dashboard visualizations.
+
+## Select your modeling task
+
+After you've picked your dataset, select your modeling task type.
++
+> [!NOTE]
+> The wizard only supports models in MLflow format with a scikit-learn flavor.
+
+## Select your dashboard components
+
+The Responsible AI dashboard offers two profiles for recommended sets of tools you can generate:
+
+- **Model debugging**: Understand and debug erroneous data cohorts in your ML model using Error analysis, Counterfactual what-if examples, and Model explainability
+- **Real life interventions**: Understand and debug erroneous data cohorts in your ML model using Causal analysis
+
+> [!NOTE]
+> Multi-class classification doesn't support the Real-life interventions analysis profile.
+
+Select the desired profile, then select **Next**.
++
+## Configure parameters for dashboard components
+
+Once you've selected a profile, the configuration step for the corresponding components will appear.
++
+Component parameters for model debugging:
+
+1. **Target feature (required)**: Specify the feature that your model was trained to predict
+2. **Categorical features**: Indicate which features are categorical to properly render them as categorical values in the dashboard UI. This is pre-loaded for you based on your dataset metadata.
+3. **Generate error tree and heat map**: Toggle on and off to generate an error analysis component for your Responsible AI dashboard
+4. **Features for error heat map**: Select up to two features to pre-generate an error heatmap for.
+5. **Advanced configuration**: Specify additional parameters for your error tree such as Maximum depth, Number of leaves, Minimum number of samples in one leaf.
+6. **Generate counterfactual what-if examples**: Toggle on and off to generate counterfactual what-if component for your Responsible AI dashboard
+7. **Number of counterfactuals (required)**: Specify the number of counterfactual examples you want generated per datapoint. At least 10 should be generated to enable a bar chart view in the dashboard of which features were most perturbed, on average, to achieve the desired prediction.
+8. **Range of value predictions (required)**: For regression scenarios, specify the desired range you want counterfactual examples' prediction values to fall in. For binary classification scenarios, the range is automatically set to generate counterfactuals for the opposite class of each datapoint. For multi-class classification scenarios, a drop-down lets you specify which class you want each datapoint to be predicted as.
+9. **Specify features to perturb**: By default, all features will be perturbed. However, if there are specific features you want perturbed, clicking this will open a panel with the list of features to select. (See below)
+10. **Generate explanations**: Toggle on and off to generate a model explanation component for your Responsible AI dashboard. No configuration is necessary as a default opaque box mimic explainer will be used to generate feature importances.
+
+For counterfactuals, when you select "Specify features to perturb", you can specify which range to allow perturbations in. For example, for the feature YOE (years of experience), you can specify that counterfactuals should only have feature values ranging from 10 to 21 instead of the default 5 to 21.
++
+Alternatively, if you select the **Real-life interventions** profile, you'll see the following screen to generate a causal analysis. This helps you understand the causal effects of features you want to "treat" on an outcome you wish to optimize.
++
+Component parameters for the real-life interventions profile (causal analysis):
+
+1. **Target feature (required)**: Choose the outcome you want the causal effects to be calculated for.
+2. **Treatment features (required)**: Choose one or more features you're interested in changing ("treating") to optimize the target outcome.
+3. **Categorical features**: Indicate which features are categorical to properly render them as categorical values in the dashboard UI. This is pre-loaded for you based on your dataset metadata.
+4. **Advanced settings**: Specify additional parameters for your causal analysis, such as heterogeneous features (additional features, beyond your treatment features, for understanding causal segmentation in your analysis) and which causal model you'd like to be used.
+
+## Experiment configuration
+
+Finally, configure your experiment to kick off a job to generate your Responsible AI dashboard.
++
+1. **Name**: Give your dashboard a unique name so that you can differentiate it when you're viewing the list of dashboards for a given model.
+2. **Experiment name**: Select an existing experiment to run the job in, or create a new experiment.
+3. **Existing experiment**: Select an existing experiment from the drop-down.
+4. **Select compute type**: Specify which compute type you'd like to use to execute your job.
+5. **Select compute**: Select the compute you'd like to use from the drop-down. If there are no existing compute resources, select the "+" to create a new compute resource and refresh the list.
+6. **Description**: Add a more verbose description for your Responsible AI dashboard.
+7. **Tags**: Add any tags to this Responsible AI dashboard.
+
+After you've finished your experiment configuration, select **Create** to start generating your Responsible AI dashboard. You'll be redirected to the experiment page to track the progress of your job. See the next steps below to learn how to view your Responsible AI dashboard.
+
+## Next steps
+
+- Once your Responsible AI dashboard is generated, [view how to access and use it in Azure Machine Learning studio](how-to-responsible-ai-dashboard.md)
+- Summarize and share your Responsible AI insights with the [Responsible AI scorecard as a PDF export](how-to-responsible-ai-scorecard.md).
+- Learn more about the [concepts and techniques behind the Responsible AI dashboard](concept-responsible-ai-dashboard.md).
+- Learn more about how to [collect data responsibly](concept-sourcing-human-data.md)
machine-learning How To Responsible Ai Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-responsible-ai-dashboard.md
+
+ Title: How to use the Responsible AI dashboard in studio (preview)
+
+description: Learn how to use the different tools and visualization charts in the Responsible AI dashboard in Azure Machine Learning.
++++++ Last updated : 05/10/2022+++
+# How to use the Responsible AI dashboard in studio (preview)
+
+Responsible AI dashboards are linked to your registered models. To view your Responsible AI dashboard, go into your model registry and select the registered model you've generated a Responsible AI dashboard for. Once you've selected your model, select the **Responsible AI (preview)** tab to view a list of generated dashboards.
++
+Multiple dashboards can be configured and attached to your registered model. Different combinations of components (explainers, causal analysis, etc.) can be attached to each Responsible AI dashboard. The list below only shows whether a component was generated for your dashboard, but different components can be viewed or hidden within the dashboard itself.
++
+Selecting the name of the dashboard opens your dashboard in a full view in your browser. At any time, select **Back to model details** to return to your list of dashboards.
++
+## Full functionality with integrated compute resource
+
+Some features of the Responsible AI dashboard require dynamic, real-time computation. Without connecting a compute resource to the dashboard, you may find some functionality missing. Connecting to a compute resource will enable full functionality of your Responsible AI dashboard for the following components:
+
+- **Error analysis**
+ - Setting your global data cohort to any cohort of interest will update the error tree instead of disabling it.
+ - Selecting other error or performance metrics is supported.
+ - Selecting any subset of features for training the error tree map is supported.
+ - Changing the minimum number of samples required per leaf node and error tree depth is supported.
+ - Dynamically updating the heatmap for up to two features is supported.
+- **Feature importance**
+ - An individual conditional expectation (ICE) plot in the individual feature importance tab is supported.
+- Counterfactual what-if
+ - Generating a new what-if counterfactual datapoint to understand the minimum change required for a desired outcome is supported.
+- **Causal analysis**
+ - Selecting any individual datapoint, perturbing its treatment features, and seeing the expected causal outcome of causal what-if is supported (only for regression ML scenarios).
+
+The information above can also be found on the Responsible AI dashboard page by selecting the information icon button:
++
+### How to enable full functionality of Responsible AI dashboard
+
+1. Select a running compute instance from the compute dropdown above your dashboard. If you don't have a running compute, create a new compute instance by selecting the "+" button next to the compute dropdown, or select the "Start compute" button to start a stopped compute instance. Creating or starting a compute instance may take a few minutes.
+
+ :::image type="content" source="./media/how-to-responsible-ai-dashboard/select-compute.png" alt-text="Screenshot showing how to select a compute." lightbox = "./media/how-to-responsible-ai-dashboard/select-compute.png":::
+
+2. Once the compute is in a "Running" state, your Responsible AI dashboard will start to connect to the compute instance. To achieve this, a terminal process is created on the selected compute instance, and a Responsible AI endpoint is started on the terminal. Select **View terminal outputs** to view the current terminal process.
+
+ :::image type="content" source="./media/how-to-responsible-ai-dashboard/compute-connect-terminal.png" alt-text="Screenshot showing the responsible A I dashboard is connecting to a compute resource." lightbox = "./media/how-to-responsible-ai-dashboard/compute-connect-terminal.png":::
+
+3. When your Responsible AI dashboard is connected to the compute instance, you'll see a green message bar, and the dashboard is now fully functional.
+
+ :::image type="content" source="./media/how-to-responsible-ai-dashboard/compute-terminal-connected.png" alt-text="Screenshot showing that the dashboard is connected to the compute instance." lightbox= "./media/how-to-responsible-ai-dashboard/compute-terminal-connected.png":::
+
+4. If it takes a while and your Responsible AI dashboard is still not connected to the compute instance, or a red error message bar shows up, it means there are issues with starting your Responsible AI endpoint. Select **View terminal outputs** and scroll down to the bottom to view the error message.
+
+ :::image type="content" source="./media/how-to-responsible-ai-dashboard/compute-terminal-error.png" alt-text="Screenshot of an error connecting to a compute." lightbox ="./media/how-to-responsible-ai-dashboard/compute-terminal-error.png":::
+
+ If you're having trouble resolving the failure to connect to the compute instance, select the "smile" icon in the upper right corner and submit feedback to let us know what error or issue you hit. You can include a screenshot and/or your email address in the feedback form.
+
+## UI overview of the Responsible AI dashboard
+
+The Responsible AI dashboard includes a robust, rich set of visualizations and functionality to help you analyze your machine learning model or make data-driven business decisions:
+
+- [Global controls](#global-controls)
+- [Error analysis](#error-analysis)
+- [Model overview](#model-overview)
+- [Data explorer](#data-explorer)
+- [Feature importances (model explanations)](#feature-importances-model-explanations)
+- [Counterfactual what-if](#counterfactual-what-if)
+- [Causal analysis](#causal-analysis)
+
+### Global controls
+
+At the top of the dashboard, you can create cohorts (subgroups of datapoints that share specified characteristics) to focus your analysis of each component on. The name of the cohort that's currently applied to the dashboard is always shown at the top left above your dashboard. The default view in your dashboard is your whole dataset, denoted by the title **All data (default)**.
++
+1. **Cohort settings**: allows you to view and modify the details of each cohort in a side panel.
+2. **Dashboard configuration**: allows you to view and modify the layout of the overall dashboard in a side panel.
+3. **Switch cohort**: allows you to select a different cohort and view its statistics in a popup.
+4. **New cohort**: allows you to create and add a new cohort to your dashboard.
+
+Selecting Cohort settings will open a panel with a list of your cohorts, where you can create, edit, duplicate, or delete your cohorts.
++
+Selecting the **New cohort** button on the top of the dashboard or in the Cohort settings opens a new panel with options to filter on the following:
+
+1. **Index**: filters by the position of the datapoint in the full dataset
+2. **Dataset**: filters by the value of a particular feature in the dataset
+3. **Predicted Y**: filters by the prediction made by the model
+4. **True Y**: filters by the actual value of the target feature
+5. **Error (regression) or Classification outcome (classification)**: filters by regression error, or by the type and accuracy of the classification outcome
+6. **Categorical Values**: filter by a list of values that should be included
+7. **Numerical Values**: filter by a Boolean operation over the values (for example, select datapoints where age < 64)
++
+You can name your new dataset cohort, select **Add filter** to add each desired filter, and then select **Save** to save the new cohort to your cohort list, or **Save and switch** to save it and immediately switch the global cohort of the dashboard to the newly created cohort.
++
+Selecting **Dashboard configuration** opens a panel with a list of the components you've configured in your dashboard. You can hide components in your dashboard by selecting the 'trash' icon.
++
+You can add components back into your dashboard via the blue circular '+' icon in the divider between each component.
++
+### Error analysis
+
+#### Error tree map
+
+The first tab of the Error analysis component is the Tree map, which illustrates how model failure is distributed across different cohorts with a tree visualization. Select any node to see the prediction path on your features where error was found.
++
+1. **Heatmap view**: switches to heatmap visualization of error distribution.
+2. **Feature list:** allows you to modify the features used in the heatmap using a side panel.
+3. **Error coverage**: displays the percentage of all error in the dataset concentrated in the selected node.
+4. **Error (regression) or Error rate (classification)**: displays the error or percentage of failures of all the datapoints in the selected node.
+5. **Node**: represents a cohort of the dataset, potentially with filters applied, and the number of errors out of the total number of datapoints in the cohort.
+6. **Fill line**: visualizes the distribution of datapoints into child cohorts based on filters, with number of datapoints represented through line thickness.
+7. **Selection information**: contains information about the selected node in a side panel.
+8. **Save as a new cohort:** creates a new cohort with the given filters.
+9. **Instances in the base cohort**: displays the total number of points in the entire dataset and the number of correctly and incorrectly predicted points.
+10. **Instances in the selected cohort**: displays the total number of points in the selected node and the number of correctly and incorrectly predicted points.
+11. **Prediction path (filters)**: lists the filters placed over the full dataset to create this smaller cohort.
+
+Selecting the "Feature list" button opens a side panel, which allows you to retrain the error tree on specific features.
++
+1. **Search features**: allows you to find specific features in the dataset.
+2. **Features:** lists the name of the feature in the dataset.
+3. **Importances**: A guideline for how related the feature may be to the error. Calculated via mutual information score between the feature and the error on the labels. You can use this score to help you decide which features to choose in Error Analysis.
+4. **Check mark**: allows you to add or remove the feature from the tree map.
+5. **Maximum depth**: The maximum depth of the surrogate tree trained on errors.
+6. **Number of leaves**: The number of leaves of the surrogate tree trained on errors.
+7. **Minimum number of samples in one leaf**: The minimum number of data points required to create one leaf.
+
+#### Error heat map
+
+Selecting the **Heat map** tab switches to a different view of the error in the dataset. You can select one or many heat map cells to create new cohorts. You can choose up to two features to create a heatmap.
++
+1. **Number of Cells**: displays the number of cells selected.
+2. **Error coverage**: displays the percentage of all errors concentrated in the selected cell(s).
+3. **Error rate**: displays the percentage of failures of all datapoints in the selected cell(s).
+4. **Axis features**: selects the intersection of features to display in the heatmap.
+5. **Cells**: represents a cohort of the dataset, with filters applied, and the percentage of errors out of the total number of datapoints in the cohort. A blue outline indicates selected cells, and the darkness of red represents the concentration of failures.
+6. **Prediction path (filters)**: lists the filters placed over the full dataset for each selected cohort.
+
+### Model overview
+
+The model overview component provides a set of commonly used model performance metrics and a box plot visualization to explore the distribution of your prediction values and errors.
+
+| ML scenario | Metrics |
+|-|-|
+| Regression | Mean absolute error, Mean squared error, R<sup>2</sup>, Mean prediction |
+| Classification | Accuracy, Precision, Recall, F1 score, False positive rate, False negative rate, Selection rate |
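+
+For reference, these are standard model performance metrics. The following minimal scikit-learn sketch (the toy arrays are illustrative, not your dashboard's data) shows how a few of them are computed:
+
+```python
+from sklearn.metrics import accuracy_score, f1_score, mean_absolute_error, r2_score
+
+# Small illustrative arrays only; the dashboard computes these metrics over your dataset cohorts.
+y_true_reg, y_pred_reg = [3.0, 5.0, 2.5], [2.8, 5.4, 2.9]
+y_true_clf, y_pred_clf = [1, 0, 1, 1], [1, 0, 0, 1]
+
+print("Mean absolute error:", mean_absolute_error(y_true_reg, y_pred_reg))
+print("R2:", r2_score(y_true_reg, y_pred_reg))
+print("Accuracy:", accuracy_score(y_true_clf, y_pred_clf))
+print("F1 score:", f1_score(y_true_clf, y_pred_clf))
+```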
+
+You can further investigate your model by looking at a comparative analysis of its performance across different cohorts or subgroups of your dataset, including automatically created "temporary cohorts" based on selected nodes from the Error analysis component. Select filters along y-value and x-value to cut across different dimensions.
++
+### Data explorer
+
+The Data explorer component allows you to analyze data statistics along axis filters such as predicted outcome, dataset features, and error groups. This component helps you understand overrepresentation and underrepresentation in your dataset.
++
+1. **Select a dataset cohort to explore**: Specify which dataset cohort from your list of cohorts you want to view data statistics for.
+2. **X-axis**: displays the type of value being plotted horizontally; select the button to open a side panel and modify it.
+3. **Y-axis**: displays the type of value being plotted vertically; select the button to open a side panel and modify it.
+4. **Chart type**: specifies the chart type; choose between aggregate plots (bar charts) and individual datapoints (scatter plot).
+
+ Selecting the "Individual datapoints" option under "Chart type" shifts to a disaggregated view of the data with the availability of a color axis.
++
+### Feature importances (model explanations)
+
+The model explanation component allows you to see which features were most important in your model's predictions. You can view what features impacted your model's prediction overall in the **Aggregate feature importance** tab or view feature importances for individual datapoints in the **Individual feature importance** tab.
+
+#### Aggregate feature importances (global explanations)
++
+1. **Top k features**: lists the most important global features for a prediction; you can change how many are shown through a slider bar.
+2. **Aggregate feature importance**: visualizes the weight of each feature in influencing model decisions across all predictions.
+3. **Sort by**: allows you to select which cohort's importances to sort the aggregate feature importance graph by.
+4. **Chart type**: allows you to select between a bar plot view of average importances for each feature and a box plot of importances for all data.
+
+When you select one of the features in the bar plot, the dependence plot below is populated. The dependence plot shows the relationship of the values of a feature to its corresponding feature importance values impacting the model prediction.
++
+5. **Feature importance of [feature] (regression) or Feature importance of [feature] on [predicted class] (classification)**: plots the importance of a particular feature across the predictions. For regression scenarios, the importance values are in terms of the output so positive feature importance means it contributed positively towards the output; vice versa for negative feature importance. For classification scenarios, positive feature importances mean that feature value is contributing towards the predicted class denoted in the y-axis title; and negative feature importance means it's contributing against the predicted class.
+6. **View dependence plot for**: selects the feature whose importances you want to plot.
+7. **Select a dataset cohort**: selects the cohort whose importances you want to plot.
+
+#### Individual feature importances (local explanations)
+
+This tab explains how features influence the predictions made on specific datapoints. You can choose up to five datapoints to compare feature importances for.
++
+**Point selection table**: view your datapoints and select up to five points to display in the feature importance plot or the ICE plot below the table.
++
+**Feature importance plot**: bar plot of the importance of each feature for the model's prediction on the selected datapoint(s)
+
+1. **Top k features**: allows you to specify the number of features to show importances for through a slider.
+2. **Sort by**: allows you to select the point (of those checked above) whose feature importances are displayed in descending order on the feature importance plot.
+3. **View absolute values**: Toggle on to sort the bar plot by absolute values; this allows you to see the highest-impact features regardless of their positive or negative direction.
+4. **Bar plot**: displays the importance of each feature in the dataset for the model prediction of the selected datapoints.
+
+**Individual conditional expectation (ICE) plot**: switches to the ICE plot showing model predictions across a range of values of a particular feature
++
+- **Min (numerical features)**: specifies the lower bound of the range of predictions in the ICE plot.
+- **Max (numerical features)**: specifies the upper bound of the range of predictions in the ICE plot.
+- **Steps (numerical features)**: specifies the number of points to show predictions for within the interval.
+- **Feature values (categorical features)**: specifies which categorical feature values to show predictions for.
+- **Feature**: specifies the feature to make predictions for.
+
+### Counterfactual what-if
+
+Counterfactual analysis provides a diverse set of "what-if" examples generated by changing the values of features minimally to produce the desired prediction class (classification) or range (regression).
++
+1. **Point selection**: selects the point to create a counterfactual for and display in the top-ranking features plot below
+ :::image type="content" source="./media/how-to-responsible-ai-dashboard/counterfactuals-top-ranked-features.png" alt-text="Screenshot of the dashboard showing the top ranked features plot." lightbox="./media/how-to-responsible-ai-dashboard/counterfactuals-top-ranked-features.png":::
+
+ **Top ranked features plot**: displays, in descending order of average frequency, the features to perturb to create a diverse set of counterfactuals of the desired class. You must generate at least 10 diverse counterfactuals per datapoint to enable this chart, because the chart isn't accurate with fewer counterfactuals.
+2. **Selected datapoint**: performs the same action as the point selection in the table, except in a dropdown menu.
+3. **Desired class for counterfactual(s)**: specifies the class or range to generate counterfactuals for.
+4. **Create what-if counterfactual**: opens a panel for counterfactual what-if datapoint creation.
+
+Selecting the **Create what-if counterfactual** button opens a full window panel.
++
+5. **Search features**: finds features to observe and change values.
+6. **Sort counterfactual by ranked features**: sorts counterfactual examples in order of perturbation effect (see above for top ranked features plot).
+7. **Counterfactual Examples**: lists feature values of example counterfactuals with the desired class or range. The first row is the original reference datapoint. Select **Set value** to set all the values of your own counterfactual datapoint in the bottom row with the values of the pre-generated counterfactual example.
+8. **Predicted value or class** lists the model prediction of a counterfactual's class given those changed features.
+9. **Create your own counterfactual**: allows you to perturb your own features to modify the counterfactual. Features that have been changed from the original feature value are denoted by a bolded title (for example, Employer and Programming language). Selecting **See prediction delta** shows you the difference in the new prediction value from the original datapoint.
+10. **What-if counterfactual name**: allows you to name the counterfactual uniquely.
+11. **Save as new datapoint**: saves the counterfactual you've created.
+
+### Causal analysis
+
+#### Aggregate causal effects
+
+Selecting the **Aggregate causal effects** tab of the Causal analysis component shows the average causal effects for pre-defined treatment features (the features that you want to treat to optimize your outcome).
+
+> [!NOTE]
+> Global cohort functionality is not supported for the causal analysis component.
++
+1. **Direct aggregate causal effect table**: displays the causal effect of each feature aggregated on the entire dataset and associated confidence statistics
+ 1. **Continuous treatments**: On average in this sample, increasing this feature by one unit will cause the probability of class to increase by X units, where X is the causal effect.
+ 1. **Binary treatments**: On average in this sample, turning on this feature will cause the probability of class to increase by X units, where X is the causal effect.
+1. **Direct aggregate causal effect whisker plot**: visualizes the causal effects and confidence intervals of the points in the table
+
+#### Individual causal effects and causal what-if
+
+To get a granular view of causal effects on an individual datapoint, switch to the **Individual causal what-if** tab.
++
+1. **X axis**: selects feature to plot on the x-axis.
+2. **Y axis**: selects feature to plot on the y-axis.
+3. **Individual causal scatter plot**: visualizes the points in the table as a scatter plot; select a datapoint to analyze causal what-if and view its individual causal effects below
+4. **Set new treatment value**
+ 1. **(numerical)**: shows slider to change the value of the numerical feature as a real-world intervention.
+ 1. **(categorical)**: shows dropdown to select the value of the categorical feature.
+
+#### Treatment policy
+
+Selecting the Treatment policy tab switches to a view to help determine real-world interventions and shows treatment(s) to apply to achieve a particular outcome.
++
+1. **Set treatment feature**: selects feature to change as a real-world intervention
+2. **Recommended global treatment policy**: displays recommended interventions for data cohorts to improve target feature value. The table can be read from left to right, where the segmentation of the dataset is first in rows and then in columns. For example, for the 658 individuals whose employer isn't Snapchat and whose programming language isn't JavaScript, the recommended treatment policy is to increase the number of GitHub repos contributed to.
+
+**Average gains of alternative policies over always applying treatment**: plots the target feature value in a bar chart of the average gain in your outcome for the above recommended treatment policy versus always applying treatment.
++
+**Recommended individual treatment policy**:
+
+3. **Show top k datapoint samples ordered by causal effects for recommended treatment feature**: selects the number of datapoints to show in the table below.
+4. **Recommended individual treatment policy table**: lists, in descending order of causal effect, the datapoints whose target features would be most improved by an intervention.
+
+## Next steps
+
+- Summarize and share your Responsible AI insights with the [Responsible AI scorecard as a PDF export](how-to-responsible-ai-scorecard.md).
+- Learn more about the [concepts and techniques behind the Responsible AI dashboard](concept-responsible-ai-dashboard.md).
+- View [sample YAML and Python notebooks](https://aka.ms/RAIsamples) to generate a Responsible AI dashboard with YAML or Python.
machine-learning How To Responsible Ai Scorecard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-responsible-ai-scorecard.md
+
+ Title: Share insights with Responsible AI scorecard (preview)
+
+description: Share insights with non-technical business stakeholders by exporting a PDF Responsible AI scorecard from Azure Machine Learning.
++++++ Last updated : 05/10/2022+++
+# Share insights with Responsible AI scorecard (preview)
++
+Azure Machine Learning's Responsible AI dashboard is designed for machine learning professionals and data scientists to explore and evaluate model insights and inform their data-driven decisions. While it can help you implement Responsible AI practically in your machine learning lifecycle, some needs are left unaddressed:
+
+- There often exists a gap between the technical Responsible AI tools (designed for machine-learning professionals) and the ethical, regulatory, and business requirements that define the production environment.
+- While an end-to-end machine learning life cycle includes both technical and non-technical stakeholders in the loop, there's very little support to enable an effective multi-stakeholder alignment, helping technical experts get timely feedback and direction from the non-technical stakeholders.
+- AI regulations make it essential to be able to share model and data insights with auditors and risk officers for auditability purposes.
+
+One of the biggest benefits of using the Azure Machine Learning ecosystem is related to the archival of model and data insights in the Azure Machine Learning Run History (for quick reference in future). As a part of that infrastructure and to accompany machine learning models and their corresponding Responsible AI dashboards, we introduce the Responsible AI scorecard, a customizable report that you can easily configure, download, and share with your technical and non-technical stakeholders to educate them about your data and model health and compliance and build trust. This scorecard could also be used in audit reviews to inform the stakeholders about the characteristics of your model.
+
+## Who should use a Responsible AI scorecard?
+
+As a data scientist or machine learning professional, after you train a model and generate its corresponding Responsible AI dashboard for assessment and decision-making purposes, you can share your data and model health and ethical insights with non-technical stakeholders to build trust and gain their approval for deployment.
+
+As a technical or non-technical product owner of a model, you can pass some target values such as minimum accuracy, maximum error rate, etc., to your data science team, asking them to generate this scorecard with respect to your identified target values and whether your model meets them. That can provide guidance into whether the model should be deployed or further improved.
+
+## How to generate a Responsible AI scorecard
+
+The configuration stage requires you to use your domain expertise around the problem to set your desired target values on model performance and fairness metrics.
+
+Like other Responsible AI dashboard components configured in the YAML pipeline, you can add a component to generate the scorecard in the YAML pipeline.
+
+In the following example, pdf_gen.json is the scorecard generation configuration JSON file, and cohorts.json is the prebuilt cohorts definition JSON file.
+
+```yml
+scorecard_01:
+
+ type: command
+ component: azureml:rai_score_card@latest
+ inputs:
+ dashboard: ${{parent.jobs.gather_01.outputs.dashboard}}
+ pdf_generation_config:
+ type: uri_file
+ path: ./pdf_gen.json
+ mode: download
+
+ predefined_cohorts_json:
+ type: uri_file
+ path: ./cohorts.json
+ mode: download
+
+```
+
+Sample JSON for the cohorts definition and scorecard generation config can be found below:
+
+Cohorts definition:
+
+```json
+[
+ {
+ "name": "High Yoe",
+ "cohort_filter_list": [
+
+ {
+ "method": "greater",
+ "arg": [
+ 5
+ ],
+ "column": "YOE"
+ }
+ ]
+ },
+ {
+ "name": "Low Yoe",
+ "cohort_filter_list": [
+ {
+ "method": "less",
+ "arg": [
+ 6.5
+ ],
+ "column": "YOE"
+ }
+ ]
+ }
+]
+
+```
+
+Scorecard generation config:
+
+```json
+{
+ "Model": {
+ "ModelName": "GPT2 Access",
+ "ModelType": "Regression",
+    "ModelSummary": "This is a regression model to analyze how likely a programmer is to be given access to GPT-2"
+ },
+ "Metrics": {
+ "mean_absolute_error": {
+ "threshold": "<=20"
+ },
+ "mean_squared_error": {}
+ },
+ "FeatureImportance": {
+ "top_n": 6
+ },
+ "DataExplorer": {
+ "features": [
+ "YOE",
+ "age"
+ ]
+ },
+ "Cohorts": [
+ "High Yoe",
+ "Low Yoe"
+ ]
+}
+```
+
+### Definition of inputs of the Responsible AI scorecard component
+
+This section defines the list of parameters required to configure the Responsible AI scorecard component.
+
+#### Model
+
+| ModelName | Name of Model |
+|--|-|
+| ModelType | Values in ['classification', 'regression', 'multiclass']. |
+| ModelSummary | Input a blurb of text summarizing what the model is for. |
+
+#### Metrics
+
+| Performance Metric | Definition | Model Type |
+|--|-|-|
+| accuracy_score | The fraction of data points classified correctly. | Classification |
+| precision_score | The fraction of data points classified correctly among those classified as 1. | Classification |
+| recall_score | The fraction of data points classified correctly among those whose true label is 1. Alternative names: true positive rate, sensitivity | Classification |
+| f1_score | F1-score is the harmonic mean of precision and recall. | Classification |
+| error_rate | Proportion of instances misclassified over the whole set of instances. | Classification |
+| mean_absolute_error | The average of absolute values of errors. More robust to outliers than MSE. | Regression |
+| mean_squared_error | The average of squared errors. | Regression |
+| median_absolute_error | The median of absolute errors. | Regression |
+| r2_score | The fraction of variance in the labels explained by the model. | Regression |
+
+Threshold:
+ Desired threshold for the selected metric. Allowed mathematical tokens are >, <, >=, and <=, followed by a real number. For example, >=0.75 means that the target for the selected metric is greater than or equal to 0.75.
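+
+For example, a Metrics section with a target threshold might look like the following sketch (the metric names and the target value are illustrative):
+
+```json
+{
+  "Metrics": {
+    "accuracy_score": {
+      "threshold": ">=0.75"
+    },
+    "f1_score": {}
+  }
+}
+```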
+
+#### Feature importance
+
+top_n:
+Number of features to show with a maximum of 10. Positive integers up to 10 are allowed.
+
+#### Fairness
+
+| Parameter | Definition |
+|--|--|
+| metric | Primary metric for evaluating fairness |
+| sensitive_features | A list of feature names from the input dataset to be designated as sensitive features for the fairness report. |
+| fairness_evaluation_kind | Values in ['difference', 'ratio']. |
+| threshold | **Desired target values** of the fairness evaluation. Allowed mathematical tokens are >, <, >=, and <=, followed by a real number. For example, metric="accuracy", fairness_evaluation_kind="difference", and threshold "<=0.05" mean that the target for the difference in accuracy is less than or equal to 0.05. |
+
+> [!NOTE]
+> Your choice of `fairness_evaluation_kind` (selecting 'difference' vs. 'ratio') impacts the scale of your target value. Be mindful of your selection to choose a meaningful target value.
+
+You can select from the following metrics, paired with the `fairness_evaluation_kind`, to configure the fairness assessment component of your scorecard (see the example configuration after the table):
+
+| Metric | fairness_evaluation_kind | Definition | Model Type |
+|--|--|--|--|
+| accuracy_score | difference | The maximum difference in accuracy score between any two groups. | Classification |
+|accuracy_score | ratio | The minimum ratio in accuracy score between any two groups. | Classification |
+| precision_score | difference | The maximum difference in precision score between any two groups. | Classification |
+| precision_score | ratio | The maximum ratio in precision score between any two groups. | Classification |
+| recall_score | difference | The maximum difference in recall score between any two groups. | Classification|
+| recall_score | ratio | The maximum ratio in recall score between any two groups. | Classification|
+|f1_score | difference | The maximum difference in f1 score between any two groups.|Classification|
+| f1_score | ratio | The maximum ratio in f1 score between any two groups.| Classification|
+| error_rate | difference | The maximum difference in error rate between any two groups. | Classification |
+| error_rate | ratio | The maximum ratio in error rate between any two groups.|Classification|
+| selection_rate | difference | The maximum difference in selection rate between any two groups. | Classification |
+| selection_rate | ratio | The maximum ratio in selection rate between any two groups. | Classification |
+| mean_absolute_error | difference | The maximum difference in mean absolute error between any two groups. | Regression |
+| mean_absolute_error | ratio | The maximum ratio in mean absolute error between any two groups. | Regression |
+| mean_squared_error | difference | The maximum difference in mean squared error between any two groups. | Regression |
+| mean_squared_error | ratio | The maximum ratio in mean squared error between any two groups. | Regression |
+| median_absolute_error | difference | The maximum difference in median absolute error between any two groups. | Regression |
+| median_absolute_error | ratio | The maximum ratio in median absolute error between any two groups. | Regression |
+| r2_score | difference | The maximum difference in R<sup>2</sup> score between any two groups. | Regression |
+| r2_score | ratio | The maximum ratio in R<sup>2</sup> score between any two groups. | Regression |
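+
+As mentioned above, a Fairness section of the scorecard generation config might look like the following sketch (the `gender` sensitive feature and the target value are illustrative assumptions):
+
+```json
+{
+  "Fairness": {
+    "metric": "accuracy_score",
+    "sensitive_features": ["gender"],
+    "fairness_evaluation_kind": "difference",
+    "threshold": "<=0.05"
+  }
+}
+```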
+
+## How to view your Responsible AI scorecard?
+
+Responsible AI scorecards are linked to your Responsible AI dashboards. To view your Responsible AI scorecard, go into your model registry and select the registered model you've generated a Responsible AI dashboard for. Once you select your model, select the Responsible AI (preview) tab to view a list of generated dashboards. Select which dashboard you'd like to export a Responsible AI scorecard PDF for by selecting **Responsible AI scorecard (preview)**.
++
+Selecting **Responsible AI scorecard (preview)** shows you a dropdown to view all Responsible AI scorecards generated for this dashboard.
++
+Select the scorecard you'd like to download from the list, and then select **Download** to download the PDF to your machine.
++
+## How to read your Responsible AI scorecard
+
+The Responsible AI scorecard is a PDF summary of your key insights from the Responsible AI dashboard. The first summary segment of the scorecard gives you an overview of the machine learning model and the key target values you have set to help all stakeholders determine if your model is ready to be deployed.
++
+The data explorer segment shows you characteristics of your data, because any model story is incomplete without the right understanding of the data.
++
+The model performance segment displays your model's most important metrics and characteristics of your predictions and how well they satisfy your desired target values.
++
+Next, you can also view the top performing and worst performing data cohorts and subgroups that are automatically extracted for you to see the blind spots of your model.
++
+Then you can see the most important factors impacting your model predictions, which is a requirement for building trust in how your model performs its task.
++
+You can further see your model fairness insights summarized and inspect how well your model is satisfying the fairness target values you had set for your desired sensitive groups.
++
+Finally, you can observe your dataset's causal insights summarized, to figure out whether your identified factors/treatments have any causal effect on the real-world outcome.
++
+## Next steps
+
+- See the how-to guide for generating a Responsible AI dashboard via [CLIv2 and SDKv2](how-to-responsible-ai-dashboard-sdk-cli.md) or [studio UI](how-to-responsible-ai-dashboard-ui.md).
+- Learn more about the [concepts and techniques behind the Responsible AI dashboard](concept-responsible-ai-dashboard.md).
+- View [sample YAML and Python notebooks](https://aka.ms/RAIsamples) to generate a Responsible AI dashboard with YAML or Python.
machine-learning How To Run Batch Predictions Designer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-run-batch-predictions-designer.md
Previously updated : 10/21/2021 Last updated : 05/10/2022 -+ # Run batch predictions using Azure Machine Learning designer - In this article, you learn how to use the designer to create a batch prediction pipeline. Batch prediction lets you continuously score large datasets on-demand using a web service that can be triggered from any HTTP library. In this how-to, you learn to do the following tasks:
In this how-to, you learn to do the following tasks:
> * Consume a pipeline endpoint > * Manage endpoint versions
-To learn how to set up batch scoring services using the SDK, see the accompanying [how-to](./tutorial-pipeline-batch-scoring-classification.md).
+To learn how to set up batch scoring services using the SDK, see the accompanying [tutorial on pipeline batch scoring](./tutorial-pipeline-batch-scoring-classification.md).
[!INCLUDE [endpoints-option](../../includes/machine-learning-endpoints-preview-note.md)] ## Prerequisites
-This how-to assumes you already have a training pipeline. For a guided introduction to the designer, complete [part one of the designer tutorial](tutorial-designer-automobile-price-train-score.md).
+This how-to assumes you already have a training pipeline. For a guided introduction to the designer, complete [part one of the designer tutorial](tutorial-designer-automobile-price-train-score.md).
[!INCLUDE [machine-learning-missing-ui](../../includes/machine-learning-missing-ui.md)]
Your training pipeline must be run at least once to be able to create an inferen
1. **Submit** the pipeline.
- ![Submit the pipeline](./media/how-to-run-batch-predictions-designer/run-training-pipeline.png)
+![Submit the pipeline](./media/how-to-run-batch-predictions-designer/run-training-pipeline.png)
+
+ :::image type="content" source="./media/how-to-run-batch-predictions-designer/run-training-pipeline.png" alt-text="Screenshot showing the set up pipeline job with the experiment drop-down and submit button highlighted." lightbox= "./media/how-to-run-batch-predictions-designer/run-training-pipeline.png":::
-Now that the training pipeline has been run, you can create a batch inference pipeline.
+You'll see a submission list on the left of the canvas. You can select the job detail link to go to the job detail page, and after the training pipeline job completes, you can create a batch inference pipeline.
-1. Next to **Submit**, select the new dropdown **Create inference pipeline**.
+ :::image type="content" source="./media/how-to-run-batch-predictions-designer/submission-list.png" alt-text="Screenshot showing the submitted job list." lightbox= "./media/how-to-run-batch-predictions-designer/submission-list.png":::
-1. Select **Batch inference pipeline**.
+1. In the job detail page, above the canvas, select the dropdown **Create inference pipeline**. Select **Batch inference pipeline**.
- ![Create batch inference pipeline](./media/how-to-run-batch-predictions-designer/create-batch-inference.png)
+ > [!NOTE]
+ > Currently, auto-generating an inference pipeline only works for training pipelines built purely from the designer's built-in components.
+
+ :::image type="content" source="./media/how-to-run-batch-predictions-designer/create-batch-inference.png" alt-text="Screenshot of the create inference pipeline drop-down with batch inference pipeline highlighted." lightbox= "./media/how-to-run-batch-predictions-designer/create-batch-inference.png":::
-The result is a default batch inference pipeline.
+ This creates a batch inference pipeline draft for you. The draft uses the trained model as the **MD-** node and the transformation as the **TD-** node from the training pipeline job.
+
+ You can also modify this inference pipeline draft to better handle your input data for batch inference.
+
+ :::image type="content" source="./media/how-to-run-batch-predictions-designer/batch-inference-draft.png" alt-text="Screenshot showing a batch inference pipeline draft." lightbox= "./media/how-to-run-batch-predictions-designer/batch-inference-draft.png":::
### Add a pipeline parameter
In this section, you create a dataset parameter to specify a different dataset t
Enter a name for the parameter, or accept the default value.
- > [!div class="mx-imgBorder"]
- > ![Set dataset as pipeline parameter](./media/how-to-run-batch-predictions-designer/set-dataset-as-pipeline-parameter.png)
+ :::image type="content" source="./media/how-to-run-batch-predictions-designer/create-pipeline-parameter.png" alt-text="Screenshot of cleaned dataset tab with set as pipeline parameter checked." lightbox= "./media/how-to-run-batch-predictions-designer/create-pipeline-parameter.png":::
++
+1. Submit the batch inference pipeline and go to the job detail page by selecting the job link in the left pane.
## Publish your batch inference pipeline
Now you're ready to deploy the inference pipeline. This will deploy the pipeline
1. Select **Publish**.
-![Publish a pipeline](./media/how-to-run-batch-predictions-designer/publish-inference-pipeline.png)
- ## Consume an endpoint Now, you have a published pipeline with a dataset parameter. The pipeline will use the trained model created in the training pipeline to score the dataset you provide as a parameter.
-### Submit a pipeline run
+### Submit a pipeline run
-In this section, you will set up a manual pipeline run and alter the pipeline parameter to score new data.
+In this section, you'll set up a manual pipeline run and alter the pipeline parameter to score new data.
1. After the deployment is complete, go to the **Endpoints** section.
In this section, you will set up a manual pipeline run and alter the pipeline pa
1. Select the name of the endpoint you created.
-![Endpoint link](./media/how-to-run-batch-predictions-designer/manage-endpoints.png)
1. Select **Published pipelines**.
In this section, you will set up a manual pipeline run and alter the pipeline pa
1. Select the pipeline you published.
- The pipeline details page shows you a detailed run history and connection string information for your pipeline.
+ The pipeline details page shows you a detailed run history and connection string information for your pipeline.
1. Select **Submit** to create a manual run of the pipeline.
- ![Pipeline details](./media/how-to-run-batch-predictions-designer/submit-manual-run.png)
+ :::image type="content" source="./media/how-to-run-batch-predictions-designer/submit-manual-run.png" alt-text="Screenshot of set up pipeline job with parameters highlighted." lightbox= "./media/how-to-run-batch-predictions-designer/submit-manual-run.png" :::
1. Change the parameter to use a different dataset.
In this section, you will set up a manual pipeline run and alter the pipeline pa
You can find information on how to consume pipeline endpoints and published pipeline in the **Endpoints** section.
-You can find the REST endpoint of a pipeline endpoint in the run overview panel. By calling the endpoint, you are consuming its default published pipeline.
+You can find the REST endpoint of a pipeline endpoint in the run overview panel. By calling the endpoint, you're consuming its default published pipeline.
You can also consume a published pipeline in the **Published pipelines** page. Select a published pipeline and you can find the REST endpoint of it in the **Published pipeline overview** panel to the right of the graph.
-To make a REST call, you will need an OAuth 2.0 bearer-type authentication header. See the following [tutorial section](tutorial-pipeline-batch-scoring-classification.md#publish-and-run-from-a-rest-endpoint) for more detail on setting up authentication to your workspace and making a parameterized REST call.
+To make a REST call, you'll need an OAuth 2.0 bearer-type authentication header. See the following [tutorial section](tutorial-pipeline-batch-scoring-classification.md#publish-and-run-from-a-rest-endpoint) for more detail on setting up authentication to your workspace and making a parameterized REST call.
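+
+The following is a minimal Python sketch of such a call (the REST URL, experiment name, and parameter names are placeholders you replace with your own values; dataset-typed parameters may require a different assignment structure than plain pipeline parameters):
+
+```python
+import requests
+from azureml.core.authentication import InteractiveLoginAuthentication
+
+# Get an OAuth 2.0 bearer-type authentication header for your Azure AD identity.
+auth = InteractiveLoginAuthentication()
+auth_header = auth.get_authentication_header()  # {"Authorization": "Bearer <token>"}
+
+# Copy this URL from the Published pipeline overview panel.
+rest_endpoint = "<REST-endpoint-URL>"
+
+response = requests.post(
+    rest_endpoint,
+    headers=auth_header,
+    json={
+        "ExperimentName": "batch-scoring",
+        # The key must match the pipeline parameter name you created in the designer.
+        "ParameterAssignments": {"<pipeline-parameter-name>": "<value>"},
+    },
+)
+response.raise_for_status()
+print("Submitted pipeline run:", response.json())
+```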
## Versioning endpoints
The designer assigns a version to each subsequent pipeline that you publish to a
When you publish a pipeline, you can choose to make it the new default pipeline for that endpoint.
-![Set default pipeline](./media/how-to-run-batch-predictions-designer/set-default-pipeline.png)
You can also set a new default pipeline in the **Published pipelines** tab of your endpoint.
-![Set default pipeline in published pipeline page](./media/how-to-run-batch-predictions-designer/set-new-default-pipeline.png)
-## Limitations
+## Update pipeline endpoint
-If you make some modifications in your training pipeline, you should re-submit the training pipeline, **Update** the inference pipeline and run the inference pipeline again.
+If you make some modifications in your training pipeline, you may want to update the newly trained model to the pipeline endpoint.
-Note that only models will be updated in the inference pipeline, while data transformation will not be updated.
+1. After your modified training pipeline completes successfully, go to the job detail page.
-To use the updated transformation in inference pipeline, you need to register the transformation output of the transformation component as dataset.
+1. Right-click the **Train Model** component and select **Register data**.
-![Screenshot showing how to register transformation dataset](./media/how-to-run-batch-predictions-designer/register-transformation-dataset.png)
+ :::image type="content" source="./media/how-to-run-batch-predictions-designer/register-train-model-as-dataset.png" alt-text="Screenshot of the train model component options with register data highlighted." lightbox= "./media/how-to-run-batch-predictions-designer/register-train-model-as-dataset.png" :::
-Then manually replace the **TD-** component in inference pipeline with the registered dataset.
+ Enter a name and select the **File** type.
-![Screenshot showing how to replace transformation component](./media/how-to-run-batch-predictions-designer/replace-td-module-batch-inference-pipeline.png)
+ :::image type="content" source="./media/how-to-run-batch-predictions-designer/register-train-model-as-dataset-2.png" alt-text="Screenshot of register as data asset with new data asset selected." lightbox= "./media/how-to-run-batch-predictions-designer/register-train-model-as-dataset-2.png" :::
-Then you can submit the inference pipeline with the updated model and transformation, and publish.
+1. Find the previous batch inference pipeline draft, or you can just **Clone** the published pipeline into a new draft.
-## Next steps
+1. Replace the **MD-** node in the inference pipeline draft with the registered data in the step above.
+
+ :::image type="content" source="./media/how-to-run-batch-predictions-designer/update-inference-pipeline-draft.png" alt-text="Screenshot of updating the inference pipeline draft with the registered data in the step above." :::
-Follow the designer [tutorial](tutorial-designer-automobile-price-train-score.md) to train and deploy a regression model.
+1. Updating the data transformation node **TD-** works the same way as updating the trained model.
+
+1. Then you can submit the inference pipeline with the updated model and transformation, and publish again.
+
+## Next steps
-For how to publish and run a published pipeline using SDK, see [this article](how-to-deploy-pipelines.md).
+* Follow the [designer tutorial to train and deploy a regression model](tutorial-designer-automobile-price-train-score.md).
+* For how to publish and run a published pipeline using SDK, see the [How to deploy pipelines](how-to-deploy-pipelines.md) article.
machine-learning How To Safely Rollout Managed Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-safely-rollout-managed-endpoints.md
Previously updated : 03/31/2022 Last updated : 04/29/2022 -+
-# Safe rollout for online endpoints (preview)
+# Safe rollout for online endpoints
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
-You've an existing model deployed in production and you want to deploy a new version of the model. How do you roll out your new ML model without causing any disruption? A good answer is blue-green deployment, an approach in which a new version of a web service is introduced to production by rolling out the change to a small subset of users/requests before rolling it out completely. This article assumes you're using online endpoints; for more information, see [What are Azure Machine Learning endpoints (preview)?](concept-endpoints.md).
+You have an existing model deployed in production and you want to deploy a new version of the model. How do you roll out your new ML model without causing any disruption? A good answer is blue-green deployment, an approach in which a new version of a web service is introduced to production by rolling out the change to a small subset of users/requests before rolling it out completely. This article assumes you're using online endpoints; for more information, see [What are Azure Machine Learning endpoints?](concept-endpoints.md).
In this article, you'll learn to:
In this article, you'll learn to:
> * Fully cut-over all live traffic to the green deployment > * Delete the now-unused v1 blue deployment - ## Prerequisites * To use Azure machine learning, you must have an Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
-* You must install and configure the Azure CLI and ML extension. For more information, see [Install, set up, and use the CLI (v2) (preview)](how-to-configure-cli.md).
+* You must install and configure the Azure CLI and ML extension. For more information, see [Install, set up, and use the CLI (v2)](how-to-configure-cli.md).
* You must have an Azure Resource group, in which you (or the service principal you use) need to have `Contributor` access. You'll have such a resource group if you configured your ML extension per the above article.
In this article, you'll learn to:
az configure --defaults workspace=<azureml workspace name> group=<resource group> ```
-* An existing online endpoint and deployment. This article assumes that your deployment is as described in [Deploy and score a machine learning model with an online endpoint (preview)](how-to-deploy-managed-online-endpoints.md).
+* An existing online endpoint and deployment. This article assumes that your deployment is as described in [Deploy and score a machine learning model with an online endpoint](how-to-deploy-managed-online-endpoints.md).
* If you haven't already set the environment variable $ENDPOINT_NAME, do so now:
You should see the endpoint identified by `$ENDPOINT_NAME` and, a deployment cal
## Scale your existing deployment to handle more traffic
-In the deployment described in [Deploy and score a machine learning model with an online endpoint (preview)](how-to-deploy-managed-online-endpoints.md), you set the `instance_count` to the value `1` in the deployment yaml file. You can scale out using the `update` command:
+In the deployment described in [Deploy and score a machine learning model with an online endpoint](how-to-deploy-managed-online-endpoints.md), you set the `instance_count` to the value `1` in the deployment yaml file. You can scale out using the `update` command:
:::code language="azurecli" source="~/azureml-examples-main/cli/deploy-safe-rollout-online-endpoints.sh" ID="scale_blue" :::
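+
+As a sketch, such an update might look like the following (assuming the existing deployment is named `blue`, as in the linked article):
+
+```azurecli
+az ml online-deployment update --name blue --endpoint-name $ENDPOINT_NAME --set instance_count=2
+```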
If you want to use a REST client to invoke the deployment directly without going
:::code language="azurecli" source="~/azureml-examples-main/cli/deploy-safe-rollout-online-endpoints.sh" ID="test_green_using_curl" :::
+## Test the deployment with mirrored traffic (preview)
++
+Once you've tested your `green` deployment, you can copy (or 'mirror') a percentage of the live traffic to it. Mirroring traffic doesn't change results returned to clients. Requests still flow 100% to the blue deployment. The mirrored percentage of the traffic is copied and submitted to the `green` deployment so you can gather metrics and logging without impacting your clients. Mirroring is useful when you want to validate a new deployment without impacting clients. For example, to check if latency is within acceptable bounds and that there are no HTTP errors.
+
+> [!WARNING]
+> Mirroring traffic uses your [endpoint bandwidth quota](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints) (default 5 MBPS). Your endpoint bandwidth will be throttled if you exceed the allocated quota. For information on monitoring bandwidth throttling, see [Monitor managed online endpoints](how-to-monitor-online-endpoints.md#bandwidth-throttling).
+
+The following command mirrors 10% of the traffic to the `green` deployment:
+
+```azurecli
+az ml online-endpoint update --name $ENDPOINT_NAME --mirror-traffic "green=10"
+```
+
+> [!IMPORTANT]
+> Mirroring has the following limitations:
+> * You can only mirror traffic to one deployment.
+> * A deployment can only be set to live or mirror traffic, not both.
+> * Mirrored traffic is not currently supported with K8s.
+> * The maximum mirrored traffic you can configure is 50%. This limit is to reduce the impact on your endpoint bandwidth quota.
++
+After testing, you can set the mirror traffic to zero to disable mirroring:
+
+```azurecli
+az ml online-endpoint update --name $ENDPOINT_NAME --mirror-traffic "green=0"
+```
+ ## Test the new deployment with a small percentage of live traffic Once you've tested your `green` deployment, allocate a small percentage of traffic to it:
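+
+As a sketch, the traffic split might look like the following (assuming deployments named `blue` and `green`, as in this article):
+
+```azurecli
+az ml online-endpoint update --name $ENDPOINT_NAME --traffic "blue=90 green=10"
+```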
Once you've tested your `green` deployment, allocate a small percentage of traff
Now, your `green` deployment will receive 10% of requests. + ## Send all traffic to your new deployment Once you're satisfied that your `green` deployment is fully satisfactory, switch all traffic to it.
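+
+As a sketch, the full cut-over might look like the following:
+
+```azurecli
+az ml online-endpoint update --name $ENDPOINT_NAME --traffic "blue=0 green=100"
+```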
If you aren't going to use the deployment, you should delete it with:
## Next steps-- [Deploy models with REST (preview)](how-to-deploy-with-rest.md)-- [Create and use online endpoints (preview) in the studio](how-to-use-managed-online-endpoint-studio.md)-- [Access Azure resources with a online endpoint and managed identity (preview)](how-to-access-resources-from-endpoints-managed-identities.md)-- [Monitor managed online endpoints (preview)](how-to-monitor-online-endpoints.md)-- [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints-preview)-- [View costs for an Azure Machine Learning managed online endpoint (preview)](how-to-view-online-endpoints-costs.md)-- [Managed online endpoints SKU list (preview)](reference-managed-online-endpoints-vm-sku-list.md)-- [Troubleshooting online endpoints deployment and scoring (preview)](how-to-troubleshoot-managed-online-endpoints.md)-- [Online endpoint (preview) YAML reference](reference-yaml-endpoint-online.md)
+- [Deploy models with REST](how-to-deploy-with-rest.md)
+- [Create and use online endpoints in the studio](how-to-use-managed-online-endpoint-studio.md)
+- [Access Azure resources with a online endpoint and managed identity](how-to-access-resources-from-endpoints-managed-identities.md)
+- [Monitor managed online endpoints](how-to-monitor-online-endpoints.md)
+- [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints)
+- [View costs for an Azure Machine Learning managed online endpoint](how-to-view-online-endpoints-costs.md)
+- [Managed online endpoints SKU list](reference-managed-online-endpoints-vm-sku-list.md)
+- [Troubleshooting online endpoints deployment and scoring](how-to-troubleshoot-managed-online-endpoints.md)
+- [Online endpoint YAML reference](reference-yaml-endpoint-online.md)
machine-learning How To Secure Inferencing Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-inferencing-vnet.md
Last updated 04/04/2022--+ # Secure an Azure Machine Learning inferencing environment with virtual networks
To add AKS in a virtual network to your workspace, use the following steps:
You can also use the Azure Machine Learning SDK to add Azure Kubernetes Service in a virtual network. If you already have an AKS cluster in a virtual network, attach it to the workspace as described in [How to deploy to AKS](how-to-deploy-and-where.md). The following code creates a new AKS instance in the `default` subnet of a virtual network named `mynetwork`: + ```python from azureml.core.compute import ComputeTarget, AksCompute
The following examples demonstrate how to __create a new AKS cluster with a priv
# [Python](#tab/python) + ```python import azureml.core from azureml.core.compute import AksCompute, ComputeTarget
machine-learning How To Secure Online Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-online-endpoint.md
+
+ Title: Network isolation of managed online endpoints
+
+description: Use private endpoints to provide network isolation for Azure Machine Learning managed online endpoints.
+++++++ Last updated : 04/22/2022+++
+# Use network isolation with managed online endpoints (preview)
+
+When deploying a machine learning model to a managed online endpoint, you can secure communication with the online endpoint by using [private endpoints](/azure/private-link/private-endpoint-overview). Using a private endpoint with online endpoints is currently a preview feature.
++
+You can secure the inbound scoring requests from clients to an _online endpoint_. You can also secure the outbound communications between a _deployment_ and the Azure resources used by the deployment. Security for inbound and outbound communication is configured separately. For more information on endpoints and deployments, see [What are endpoints and deployments](concept-endpoints.md#what-are-endpoints-and-deployments).
+
+The following diagram shows how communications flow through private endpoints to the managed online endpoint. Incoming scoring requests from clients are received through the workspace private endpoint from your virtual network. Outbound communication with services is handled through private endpoints to those service instances from the deployment:
++
+## Prerequisites
+
+* To use Azure machine learning, you must have an Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
+
+* You must install and configure the Azure CLI and ML extension. For more information, see [Install, set up, and use the CLI (v2)](how-to-configure-cli.md).
+
+* You must have an Azure Resource Group, in which you (or the service principal you use) need to have `Contributor` access. You'll have such a resource group if you configured your ML extension per the above article.
+
+* You must have an Azure Machine Learning workspace, and the workspace must use a private endpoint. If you don't have one, the steps in this article create an example workspace, VNet, and VM. For more information, see [Configure a private endpoint for Azure Machine Learning workspace](how-to-configure-private-link.md).
+
+* The Azure Container Registry for your workspace must be configured for __Premium__ tier. For more information, see [Azure Container Registry service tiers](/azure/container-registry/container-registry-skus).
+
+* The Azure Container Registry and Azure Storage Account must be in the same Azure Resource Group as the workspace.
+
+> [!IMPORTANT]
+> The end-to-end example in this article comes from the files in the __azureml-examples__ GitHub repository. To clone the samples repository and switch to the repository's `cli/` directory, use the following commands:
+>
+> ```azurecli
+> git clone https://github.com/Azure/azureml-examples
+> cd azureml-examples/cli
+> ```
+
+## Limitations
+
+* If your Azure Machine Learning workspace has a private endpoint that was created before May 24, 2022, you must recreate the workspace's private endpoint before configuring your online endpoints to use a private endpoint. For more information on creating a private endpoint for your workspace, see [How to configure a private endpoint for Azure Machine Learning workspace](how-to-configure-private-link.md).
+
+* Secure outbound communication creates three private endpoints per deployment. One to Azure Blob storage, one to Azure Container Registry, and one to your workspace.
+
+* Azure Log Analytics and Application Insights aren't supported when using network isolation with a deployment. To see the logs for the deployment, use the [az ml online-deployment get_logs](/cli/azure/ml/online-deployment#az-ml-online-deployment-get-logs) command instead.
+
+> [!NOTE]
+> Requests to create, update, or retrieve the authentication keys are sent to the Azure Resource Manager over the public network.
+
+## Inbound (scoring)
+
+To secure scoring requests to the online endpoint to your virtual network, set the `public_network_access` flag for the endpoint to `disabled`:
+
+```azurecli
+az ml online-endpoint create -f endpoint.yml --set public_network_access=disabled
+```
+
+When `public_network_access` is `disabled`, inbound scoring requests are received using the [private endpoint of the Azure Machine Learning workspace](how-to-configure-private-link.md) and the endpoint can't be reached from public networks.
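+
+Alternatively, you can set the flag in the endpoint YAML definition itself. A minimal sketch of such an `endpoint.yml` (the endpoint name is illustrative):
+
+```yml
+$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineEndpoint.schema.json
+name: my-secure-endpoint
+auth_mode: key
+public_network_access: disabled
+```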
+
+## Outbound (resource access)
+
+To restrict communication between a deployment and the Azure resources used by the deployment, set the `egress_public_network_access` flag to `disabled`. Use this flag to ensure that the download of the model, code, and images needed by your deployment is secured with a private endpoint.
+
+The following are the resources that the deployment communicates with over the private endpoint:
+
+* The Azure Machine Learning workspace.
+* The Azure Storage blob that is the default storage for the workspace.
+* The Azure Container Registry for the workspace.
+
+When you configure the `egress_public_network_access` to `disabled`, a new private endpoint is created per deployment, per service. For example, if you set the flag to `disabled` for three deployments to an online endpoint, nine private endpoints are created. Each deployment would have three private endpoints that are used to communicate with the workspace, blob, and container registry.
+
+```azurecli
+az ml online-deployment create -f deployment.yml --set egress_public_network_access=disabled
+```
+
+## Scenarios
+
+The following table lists the supported configurations when configuring inbound and outbound communications for an online endpoint:
+
+| Configuration | Inbound </br> (Endpoint property) | Outbound </br> (Deployment property) | Supported? |
+| -- | -- | | |
+| secure inbound with secure outbound | `public_network_access` is disabled | `egress_public_network_access` is disabled | Yes |
+| secure inbound with public outbound | `public_network_access` is disabled | `egress_public_network_access` is enabled | Yes |
+| public inbound with secure outbound | `public_network_access` is enabled | `egress_public_network_access` is disabled | Yes |
+| public inbound with public outbound | `public_network_access` is enabled | `egress_public_network_access` is enabled | Yes |
+
+## End-to-end example
+
+Use the information in this section to create an example configuration that uses private endpoints to secure online endpoints.
+
+> [!TIP]
+> In this example, an Azure Virtual Machine is created inside the VNet. You connect to the VM using SSH, and run the deployment from the VM. This configuration is used to simplify the steps in this example, and does not represent a typical secure configuration. For example, in a production environment you would most likely use a VPN client or Azure ExpressRoute to directly connect clients to the virtual network.
+
+### Create workspace and secured resources
+
+The steps in this section use an Azure Resource Manager template to create the following Azure resources:
+
+* Azure Virtual Network
+* Azure Machine Learning workspace
+* Azure Container Registry
+* Azure Key Vault
+* Azure Storage account (blob & file storage)
+
+Public access is disabled for all the services. While the Azure Machine Learning workspace is secured behind a vnet, it's configured to allow public network access. For more information, see [CLI 2.0 secure communications](how-to-configure-cli.md#secure-communications). A scoring subnet is created, along with outbound rules that allow communication with the following Azure services:
+
+* Azure Active Directory
+* Azure Resource Manager
+* Azure Front Door
+* Microsoft Container Registries
+
+The following diagram shows the components created in this architecture and the overall flow of the example:
++
+To create the resources, use the following Azure CLI commands. Replace `<UNIQUE_SUFFIX>` with a unique suffix for the resources that are created.
+++
+### Create the virtual machine jump box
+
+To create an Azure Virtual Machine that can be used to connect to the VNet, use the following command. Replace `<your-new-password>` with the password you want to use when connecting to this VM:
+
+```azurecli
+# create vm
+az vm create --name test-vm --vnet-name vnet-$SUFFIX --subnet snet-scoring --image UbuntuLTS --admin-username azureuser --admin-password <your-new-password>
+```
+
+> [!IMPORTANT]
+> The VM created by these commands has a public endpoint that you can connect to over the public network.
+
+The response from this command is similar to the following JSON document:
+
+```json
+{
+ "fqdns": "",
+ "id": "/subscriptions/<GUID>/resourceGroups/<my-resource-group>/providers/Microsoft.Compute/virtualMachines/test-vm",
+ "location": "westus",
+ "macAddress": "00-0D-3A-ED-D8-E8",
+ "powerState": "VM running",
+ "privateIpAddress": "192.168.0.12",
+ "publicIpAddress": "20.114.122.77",
+ "resourceGroup": "<my-resource-group>",
+ "zones": ""
+}
+```
+
+Use the following command to connect to the VM using SSH. Replace `publicIpAddress` with the value of the public IP address in the response from the previous command:
+
+```azurecli
+ssh azureuser@publicIpAddress
+```
+
+When prompted, enter the password you used when creating the VM.
+
+### Configure the VM
+
+1. Use the following commands from the SSH session to install the CLI and Docker:
+
+ :::code language="azurecli" source="~/azureml-examples-online-endpoint-vnet/cli/endpoints/online/managed/vnet/setup_vm/scripts/vmsetup.sh" id="setup_docker_az_cli":::
+
+1. To create the environment variables used by this example, run the following commands. Replace `<YOUR_SUBSCRIPTION_ID>` with your Azure subscription ID. Replace `<YOUR_RESOURCE_GROUP>` with the resource group that contains your workspace. Replace `<SUFFIX_USED_IN_SETUP>` with the suffix you provided earlier. Replace `<LOCATION>` with the location of your Azure workspace. Replace `<YOUR_ENDPOINT_NAME>` with the name to use for the endpoint.
+
+ > [!TIP]
+ > Use the tabs to select whether you want to perform a deployment using an MLflow model or generic ML model.
+
+ # [Generic model](#tab/model)
+
+ :::code language="azurecli" source="~/azureml-examples-online-endpoint-vnet/cli/deploy-moe-vnet.sh" id="set_env_vars":::
+
+ # [MLflow model](#tab/mlflow)
+
+ :::code language="azurecli" source="~/azureml-examples-online-endpoint-vnet/cli/deploy-moe-vnet-mlflow.sh" id="set_env_vars":::
+
+
+
+1. To sign in to the Azure CLI in the VM environment, use the following command:
+
+ :::code language="azurecli" source="~/azureml-examples-main/cli/misc.sh" id="az_login":::
+
+1. To configure the defaults for the CLI, use the following commands:
+
+ :::code language="azurecli" source="~/azureml-examples-online-endpoint-vnet/cli/endpoints/online/managed/vnet/setup_vm/scripts/vmsetup.sh" id="configure_defaults":::
+
+1. To clone the example files for the deployment, use the following command:
+
+ ```azurecli
+ sudo mkdir -p /home/samples; sudo git clone -b main --depth 1 https://github.com/Azure/azureml-examples.git /home/samples/azureml-examples
+ ```
+
+1. To build a custom docker image to use with the deployment, use the following commands:
+
+ :::code language="azurecli" source="~/azureml-examples-online-endpoint-vnet/cli/endpoints/online/managed/vnet/setup_vm/scripts/build_image.sh" id="build_image":::
+
+ > [!TIP]
+ > In this example, we build the Docker image before pushing it to Azure Container Registry. Alternatively, you can build the image in your vnet by using an Azure Machine Learning compute cluster and environments. For more information, see [Secure Azure Machine Learning workspace](how-to-secure-workspace-vnet.md#enable-azure-container-registry-acr).
+
+### Create a secured managed online endpoint
+
+1. To create a managed online endpoint that is secured using a private endpoint for inbound and outbound communication, use the following commands:
+
+ > [!TIP]
+ > You can test or debug the Docker image locally by using the `--local` flag when creating the deployment. For more information, see the [Deploy and debug locally](how-to-deploy-managed-online-endpoints.md#deploy-and-debug-locally-by-using-local-endpoints) article.
+
+ :::code language="azurecli" source="~/azureml-examples-online-endpoint-vnet/cli/endpoints/online/managed/vnet/setup_vm/scripts/create_moe.sh" id="create_vnet_deployment":::
++
+1. To make a scoring request with the endpoint, use the following commands:
+
+ :::code language="azurecli" source="~/azureml-examples-online-endpoint-vnet/cli/endpoints/online/managed/vnet/setup_vm/scripts/score_endpoint.sh" id="check_deployment":::
+
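+If you prefer to issue a request manually instead of running the referenced script, a rough equivalent is to invoke the endpoint from inside the VM. This is a sketch; the `ENDPOINT_NAME` variable and the sample request file name are placeholders, not necessarily the exact names used by the example scripts:
+
+```azurecli
+# Send a test scoring request to the secured endpoint from inside the VNet
+az ml online-endpoint invoke --name $ENDPOINT_NAME --request-file sample-request.json
+```
+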
+### Cleanup
+
+To delete the endpoint, use the following command:
++
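+A rough equivalent of the endpoint cleanup step is the following sketch; the `ENDPOINT_NAME` variable name is an assumption, and the resource group and workspace come from your CLI defaults:
+
+```azurecli
+# Delete the endpoint and all of its deployments without prompting
+az ml online-endpoint delete --name $ENDPOINT_NAME --yes --no-wait
+```
+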
+To delete the VM, use the following command:
++
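+Similarly, a sketch of removing the jump box VM; `test-vm` is the name used when the VM was created, and the resource group comes from your CLI defaults:
+
+```azurecli
+# Delete the jump box VM created earlier in this example
+az vm delete --name test-vm --yes
+```
+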
+To delete all the resources created in this article, use the following command. Replace `<resource-group-name>` with the name of the resource group used in this example:
+
+```azurecli
+az group delete --resource-group <resource-group-name>
+```
+
+## Troubleshooting
++
+## Next steps
+
+- [Safe rollout for online endpoints](how-to-safely-rollout-managed-endpoints.md)
+- [How to autoscale managed online endpoints](how-to-autoscale-endpoints.md)
+- [View costs for an Azure Machine Learning managed online endpoint](how-to-view-online-endpoints-costs.md)
+- [Access Azure resources with an online endpoint and managed identity](how-to-access-resources-from-endpoints-managed-identities.md)
+- [Troubleshoot online endpoints deployment](how-to-troubleshoot-online-endpoints.md)
machine-learning How To Secure Training Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-training-vnet.md
Last updated 03/29/2022-+ ms.devlang: azurecli- # Secure an Azure Machine Learning training environment with virtual networks
In this article, you learn how to secure training environments with a virtual ne
> > * [Virtual network overview](how-to-network-security-overview.md) > * [Secure the workspace resources](how-to-secure-workspace-vnet.md)
-> * [Secure the inference environment](how-to-secure-inferencing-vnet.md)
+> * For securing inference, see the following documents:
+> * If using CLI v1 or SDK v1 - [Secure inference environment](how-to-secure-inferencing-vnet.md)
+> * If using CLI v2 or SDK v2 - [Network isolation for managed online endpoints](how-to-secure-online-endpoint.md)
> * [Enable studio functionality](how-to-enable-studio-virtual-network.md) > * [Use custom DNS](how-to-custom-dns.md) > * [Use a firewall](how-to-access-azureml-behind-firewall.md)
Use the following steps to create a compute cluster in the Azure Machine Learnin
# [Python](#tab/python) + The following code creates a new Machine Learning Compute cluster in the `default` subnet of a virtual network named `mynetwork`: ```python
This article is part of a series on securing an Azure Machine Learning workflow.
* [Virtual network overview](how-to-network-security-overview.md) * [Secure the workspace resources](how-to-secure-workspace-vnet.md)
-* [Secure the inference environment](how-to-secure-inferencing-vnet.md)
+* For securing inference, see the following documents:
+ * If using CLI v1 or SDK v1 - [Secure inference environment](how-to-secure-inferencing-vnet.md)
+ * If using CLI v2 or SDK v2 - [Network isolation for managed online endpoints](how-to-secure-online-endpoint.md)
* [Enable studio functionality](how-to-enable-studio-virtual-network.md) * [Use custom DNS](how-to-custom-dns.md) * [Use a firewall](how-to-access-azureml-behind-firewall.md)
machine-learning How To Secure Web Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-web-service.md
Last updated 10/21/2021 --+ # Use TLS to secure a web service through Azure Machine Learning This article shows you how to secure a web service that's deployed through Azure Machine Learning.
aks_target.update(update_config)
Learn how to: + [Consume a machine learning model deployed as a web service](how-to-consume-web-service.md) + [Virtual network isolation and privacy overview](how-to-network-security-overview.md)
-+ [How to use your workspace with a custom DNS server](how-to-custom-dns.md)
++ [How to use your workspace with a custom DNS server](how-to-custom-dns.md)
machine-learning How To Secure Workspace Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-workspace-vnet.md
Previously updated : 03/09/2022 Last updated : 04/20/2022 --+ # Secure an Azure Machine Learning workspace with virtual networks
In this article, you learn how to secure an Azure Machine Learning workspace and
> > * [Virtual network overview](how-to-network-security-overview.md) > * [Secure the training environment](how-to-secure-training-vnet.md)
-> * [Secure the inference environment](how-to-secure-inferencing-vnet.md)
+> * For securing inference, see the following documents:
+> * If using CLI v1 or SDK v1 - [Secure inference environment](how-to-secure-inferencing-vnet.md)
+> * If using CLI v2 or SDK v2 - [Network isolation for managed online endpoints](how-to-secure-online-endpoint.md)
> * [Enable studio functionality](how-to-enable-studio-virtual-network.md) > * [Use custom DNS](how-to-custom-dns.md) > * [Use a firewall](how-to-access-azureml-behind-firewall.md)
Azure Container Registry can be configured to use a private endpoint. Use the fo
# [Python SDK](#tab/python)
+ [!INCLUDE [sdk v1](../../includes/machine-learning-sdk-v1.md)]
+ The following code snippet demonstrates how to get the container registry information using the [Azure Machine Learning SDK](/python/api/overview/azure/ml/): ```python
Azure Container Registry can be configured to use a private endpoint. Use the fo
The following code snippet demonstrates how to update the workspace to set a build compute using the [Azure Machine Learning SDK](/python/api/overview/azure/ml/). Replace `mycomputecluster` with the name of the cluster to use:
+ [!INCLUDE [sdk v1](../../includes/machine-learning-sdk-v1.md)]
+ ```python from azureml.core import Workspace # Load workspace from an existing config file
This article is part of a series on securing an Azure Machine Learning workflow.
* [Virtual network overview](how-to-network-security-overview.md) * [Secure the training environment](how-to-secure-training-vnet.md)
-* [Secure the inference environment](how-to-secure-inferencing-vnet.md)
+* For securing inference, see the following documents:
+ * If using CLI v1 or SDK v1 - [Secure inference environment](how-to-secure-inferencing-vnet.md)
+ * If using CLI v2 or SDK v2 - [Network isolation for managed online endpoints](how-to-secure-online-endpoint.md)
* [Enable studio functionality](how-to-enable-studio-virtual-network.md) * [Use custom DNS](how-to-custom-dns.md) * [Use a firewall](how-to-access-azureml-behind-firewall.md) * [Tutorial: Create a secure workspace](tutorial-create-secure-workspace.md) * [Tutorial: Create a secure workspace using a template](tutorial-create-secure-workspace-template.md)
-* [API platform network isolation](how-to-configure-network-isolation-with-v2.md)
+* [API platform network isolation](how-to-configure-network-isolation-with-v2.md)
machine-learning How To Set Up Training Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-set-up-training-targets.md
Last updated 10/21/2021 -+ # Configure and submit training runs + In this article, you learn how to configure and submit Azure Machine Learning runs to train your models. Snippets of code explain the key parts of configuration and submission of a training script. Then use one of the [example notebooks](#notebooks) to find the full end-to-end working examples. When training, it is common to start on your local computer, and then later scale out to a cloud-based cluster. With Azure Machine Learning, you can run your script on various compute targets without having to change your training script.
Or you can:
## Create an experiment
-Create an [experiment](concept-azure-machine-learning-architecture.md#experiments) in your workspace. An experiment is a light-weight container that helps to organize run submissions and keep track of code.
+Create an [experiment](v1/concept-azure-machine-learning-architecture.md#experiments) in your workspace. An experiment is a light-weight container that helps to organize run submissions and keep track of code.
```python from azureml.core import Experiment
run.wait_for_completion(show_output=True)
> > [!INCLUDE [amlinclude-info](../../includes/machine-learning-amlignore-gitignore.md)] >
-> For more information about snapshots, see [Snapshots](concept-azure-machine-learning-architecture.md#snapshots).
+> For more information about snapshots, see [Snapshots](v1/concept-azure-machine-learning-architecture.md#snapshots).
> [!IMPORTANT] > **Special Folders**
machine-learning How To Setup Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-authentication.md
Last updated 02/02/2022 -+ # Set up authentication for Azure Machine Learning resources and workflows
Azure AD Conditional Access can be used to further control or restrict access to
[!INCLUDE [cli-version-info](../../includes/machine-learning-cli-version-1-only.md)] * Create an [Azure Machine Learning workspace](how-to-manage-workspace.md).
-* [Configure your development environment](how-to-configure-environment.md) to install the Azure Machine Learning SDK, or use a [Azure Machine Learning compute instance](concept-azure-machine-learning-architecture.md#compute-instance) with the SDK already installed.
+* [Configure your development environment](how-to-configure-environment.md) to install the Azure Machine Learning SDK, or use an [Azure Machine Learning compute instance](v1/concept-azure-machine-learning-architecture.md#computes) with the SDK already installed.
## Azure Active Directory
The easiest way to create an SP and grant access to your workspace is by using t
} ```
-1. Allow the SP to access your Azure Machine Learning workspace. You will need your workspace name, and its resource group name for the `-w` and `-g` parameters, respectively. For the `--user` parameter, use the `objectId` value from the previous step. The `--role` parameter allows you to set the access role for the service principal. In the following example, the SP is assigned to the **owner** role.
+1. To grant access to the workspace and other resources used by Azure Machine Learning, use the information in the following articles:
+ * [How to assign roles and actions in AzureML](how-to-assign-roles.md)
+ * [How to assign roles in the CLI](../role-based-access-control/role-assignments-cli.md)
> [!IMPORTANT] > Owner access allows the service principal to do virtually any operation in your workspace. It is used in this document to demonstrate how to grant access; in a production environment Microsoft recommends granting the service principal the minimum access needed to perform the role you intend it for. For information on creating a custom role with the access needed for your scenario, see [Manage access to Azure Machine Learning workspace](how-to-assign-roles.md).
- [!INCLUDE [cli v1](../../includes/machine-learning-cli-v1.md)]
-
- ```azurecli-interactive
- az ml workspace share -w your-workspace-name -g your-resource-group-name --user your-sp-object-id --role owner
- ```
-
- This call does not produce any output on success.
## Configure a managed identity
The easiest way to create an SP and grant access to your workspace is by using t
### Managed identity with compute cluster
-For more information, see [Set up managed identity for compute cluster](how-to-create-attach-compute-cluster.md#managed-identity).
+For more information, see [Set up managed identity for compute cluster](how-to-create-attach-compute-cluster.md#set-up-managed-identity).
-<a id="interactive-authentication"></a>
## Use interactive authentication
Most examples in the documentation and samples use interactive authentication. F
* Calling the `from_config()` function will issue the prompt.
+ [!INCLUDE [sdk v1](../../includes/machine-learning-sdk-v1.md)]
+ ```python from azureml.core import Workspace ws = Workspace.from_config()
Most examples in the documentation and samples use interactive authentication. F
* Using the `Workspace` constructor to provide subscription, resource group, and workspace information, will also prompt for interactive authentication.
+ [!INCLUDE [sdk v1](../../includes/machine-learning-sdk-v1.md)]
+ ```python ws = Workspace(subscription_id="your-sub-id", resource_group="your-resource-group-id",
Most examples in the documentation and samples use interactive authentication. F
> [!TIP] > If you have access to multiple tenants, you may need to import the class and explicitly define what tenant you are targeting. Calling the constructor for `InteractiveLoginAuthentication` will also prompt you to login similar to the calls above. >
+> [!INCLUDE [sdk v1](../../includes/machine-learning-sdk-v1.md)]
> ```python > from azureml.core.authentication import InteractiveLoginAuthentication > interactive_auth = InteractiveLoginAuthentication(tenant_id="your-tenant-id")
When using the Azure CLI, the `az login` command is used to authenticate the CLI
> [!TIP] > If you are using the SDK from an environment where you have previously authenticated interactively using the Azure CLI, you can use the `AzureCliAuthentication` class to authenticate to the workspace using the credentials cached by the CLI: >
+> [!INCLUDE [sdk v1](../../includes/machine-learning-sdk-v1.md)]
> ```python > from azureml.core.authentication import AzureCliAuthentication > cli_auth = AzureCliAuthentication()
When using the Azure CLI, the `az login` command is used to authenticate the CLI
To authenticate to your workspace from the SDK, using a service principal, use the `ServicePrincipalAuthentication` class constructor. Use the values you got when creating the service principal as the parameters. The `tenant_id` parameter maps to `tenantId` from above, `service_principal_id` maps to `clientId`, and `service_principal_password` maps to `clientSecret`. + ```python from azureml.core.authentication import ServicePrincipalAuthentication
sp = ServicePrincipalAuthentication(tenant_id="your-tenant-id", # tenantID
The `sp` variable now holds an authentication object that you use directly in the SDK. In general, it is a good idea to store the ids/secrets used above in environment variables as shown in the following code. Storing in environment variables prevents the information from being accidentally checked into a GitHub repo. + ```python import os
sp = ServicePrincipalAuthentication(tenant_id=os.environ['AML_TENANT_ID'],
For automated workflows that run in Python and use the SDK primarily, you can use this object as-is in most cases for your authentication. The following code authenticates to your workspace using the auth object you created. + ```python from azureml.core import Workspace
For information and samples on authenticating with MSAL, see the following artic
To authenticate to the workspace from a VM or compute cluster that is configured with a managed identity, use the `MsiAuthentication` class. The following example demonstrates how to use this class to authenticate to a workspace: + ```python from azureml.core.authentication import MsiAuthentication
can require two-factor authentication, or allow sign in only from managed device
* [How to use secrets in training](how-to-use-secrets-in-runs.md). * [How to configure authentication for models deployed as a web service](how-to-authenticate-web-service.md).
-* [Consume an Azure Machine Learning model deployed as a web service](how-to-consume-web-service.md).
+* [Consume an Azure Machine Learning model deployed as a web service](how-to-consume-web-service.md).
machine-learning How To Setup Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-customer-managed-keys.md
description: 'Learn how to improve data security with Azure Machine Learning by
+
For more information on creating and using a deployment configuration, see the f
* [AciWebservice.deploy_configuration()](/python/api/azureml-core/azureml.core.webservice.aci.aciwebservice#deploy-configuration-cpu-cores-none--memory-gb-none--tags-none--properties-none--description-none--location-none--auth-enabled-none--ssl-enabled-none--enable-app-insights-none--ssl-cert-pem-file-none--ssl-key-pem-file-none--ssl-cname-none--dns-name-label-none--primary-key-none--secondary-key-none--collect-model-data-none--cmk-vault-base-url-none--cmk-key-name-none--cmk-key-version-none-) reference * [Where and how to deploy](how-to-deploy-and-where.md)
-* [Deploy a model to Azure Container Instances](how-to-deploy-azure-container-instance.md)
+* [Deploy a model to Azure Container Instances](v1/how-to-deploy-azure-container-instance.md)
For more information on using a customer-managed key with ACI, see [Encrypt data with a customer-managed key](../container-instances/container-instances-encrypt-data.md#encrypt-data-with-a-customer-managed-key).
This process allows you to encrypt both the Data and the OS Disk of the deployed
* [Create a workspace with Azure CLI](how-to-manage-workspace-cli.md#customer-managed-key-and-high-business-impact-workspace) | * [Create and manage a workspace](how-to-manage-workspace.md#use-your-own-key) | * [Create a workspace with a template](how-to-create-workspace-template.md#deploy-an-encrypted-workspace) |
-* [Create, run, and delete Azure ML resources with REST](how-to-manage-rest.md#create-a-workspace-using-customer-managed-encryption-keys) |
+* [Create, run, and delete Azure ML resources with REST](how-to-manage-rest.md#create-a-workspace-using-customer-managed-encryption-keys) |
machine-learning How To Setup Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-vs-code.md
Last updated 10/21/2021 -+ # Set up the Visual Studio Code Azure Machine Learning extension (preview)
The Azure Machine Learning extension for VS Code provides a user interface to:
- Azure subscription. If you don't have one, sign up to try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/). - Visual Studio Code. If you don't have it, [install it](https://code.visualstudio.com/docs/setup/setup-overview). - [Python](https://www.python.org/downloads/)-- (Optional) To create resources using the extension, you need to install the CLI (v2). For setup instructions, see [Install, set up, and use the CLI (v2) (preview)](how-to-configure-cli.md).
+- (Optional) To create resources using the extension, you need to install the CLI (v2). For setup instructions, see [Install, set up, and use the CLI (v2)](how-to-configure-cli.md).
- Clone the community driven repository ```bash git clone https://github.com/Azure/azureml-examples.git --depth 1
machine-learning How To Track Designer Experiments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-track-designer-experiments.md
Last updated 10/21/2021 -+ # Enable logging in Azure Machine Learning designer pipelines
The following example shows you how to log the mean squared error of two trained
1. Paste the following code into the __Execute Python Script__ code editor to log the mean absolute error for your trained model. You can use a similar pattern to log any other value in the designer:
+ [!INCLUDE [sdk v1](../../includes/machine-learning-sdk-v1.md)]
+ ```python # dataframe1 contains the values from Evaluate Model def azureml_main(dataframe1=None, dataframe2=None):
machine-learning How To Track Monitor Analyze Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-track-monitor-analyze-runs.md
Title: Track, monitor, and analyze runs-
-description: Learn how to start, monitor, and track your machine learning experiment runs with the Azure Machine Learning Python SDK.
+ Title: Track, monitor, and analyze runs in studio
+
+description: Learn how to start, monitor, and track your machine learning experiment runs with the Azure Machine Learning studio.
Previously updated : 10/21/2021 Last updated : 04/28/2022 -+
-# Start, monitor, and track run history
+# Start, monitor, and track run history in studio
-The [Azure Machine Learning SDK for Python](/python/api/overview/azure/ml/intro), [Machine Learning CLI](reference-azure-machine-learning-cli.md), and [Azure Machine Learning studio](https://ml.azure.com) provide various methods to monitor, organize, and track your runs for training and experimentation. Your ML run history is an important part of an explainable and repeatable ML development process.
+You can use [Azure Machine Learning studio](https://ml.azure.com) to monitor, organize, and track your runs for training and experimentation. Your ML run history is an important part of an explainable and repeatable ML development process.
This article shows how to do the following tasks:
-* Monitor run performance.
* Add run display name. * Create a custom view. * Add a run description. * Tag and find runs. * Run search over your run history. * Cancel or fail runs.
-* Create child runs.
* Monitor the run status by email notification. > [!TIP]
-> If you're looking for information on monitoring the Azure Machine Learning service and associated Azure services, see [How to monitor Azure Machine Learning](monitor-azure-machine-learning.md).
+> * If you're looking for information on using the Azure Machine Learning SDK v1 or CLI v1, see [How to track, monitor, and analyze runs (v1)](./v1/how-to-track-monitor-analyze-runs.md).
+> * If you're looking for information on monitoring training runs from the CLI or SDK v2, see [Track experiments with MLflow and CLI v2](how-to-use-mlflow-cli-runs.md).
+> * If you're looking for information on monitoring the Azure Machine Learning service and associated Azure services, see [How to monitor Azure Machine Learning](monitor-azure-machine-learning.md).
+>
> If you're looking for information on monitoring models deployed as web services, see [Collect model data](how-to-enable-data-collection.md) and [Monitor with Application Insights](how-to-enable-app-insights.md). ## Prerequisites You'll need the following items:
-* An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
+* To use Azure Machine Learning, you must have an Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
+* You must have an Azure Machine Learning workspace. For steps to create one, see [Install, set up, and use the CLI (v2)](how-to-configure-cli.md).
-* An [Azure Machine Learning workspace](how-to-manage-workspace.md).
-
-* The Azure Machine Learning SDK for Python (version 1.0.21 or later). To install or update to the latest version of the SDK, see [Install or update the SDK](/python/api/overview/azure/ml/install).
-
- To check your version of the Azure Machine Learning SDK, use the following code:
-
- ```python
- print(azureml.core.VERSION)
- ```
-
-* The [Azure CLI](/cli/azure/) and [CLI extension for Azure Machine Learning](reference-azure-machine-learning-cli.md).
--
-## Monitor run performance
-
-* Start a run and its logging process
-
- # [Python](#tab/python)
-
- 1. Set up your experiment by importing the [Workspace](/python/api/azureml-core/azureml.core.workspace.workspace), [Experiment](/python/api/azureml-core/azureml.core.experiment.experiment), [Run](/python/api/azureml-core/azureml.core.run%28class%29), and [ScriptRunConfig](/python/api/azureml-core/azureml.core.scriptrunconfig) classes from the [azureml.core](/python/api/azureml-core/azureml.core) package.
-
- ```python
- import azureml.core
- from azureml.core import Workspace, Experiment, Run
- from azureml.core import ScriptRunConfig
-
- ws = Workspace.from_config()
- exp = Experiment(workspace=ws, name="explore-runs")
- ```
-
- 1. Start a run and its logging process with the [`start_logging()`](/python/api/azureml-core/azureml.core.experiment%28class%29#start-logging--args-kwargs-) method.
-
- ```python
- notebook_run = exp.start_logging()
- notebook_run.log(name="message", value="Hello from run!")
- ```
-
- # [Azure CLI](#tab/azure-cli)
-
- [!INCLUDE [cli v1](../../includes/machine-learning-cli-v1.md)]
-
- To start a run of your experiment, use the following steps:
-
- 1. From a shell or command prompt, use the Azure CLI to authenticate to your Azure subscription:
-
- ```azurecli-interactive
- az login
- ```
- [!INCLUDE [select-subscription](../../includes/machine-learning-cli-subscription.md)]
-
- 1. Attach a workspace configuration to the folder that contains your training script. Replace `myworkspace` with your Azure Machine Learning workspace. Replace `myresourcegroup` with the Azure resource group that contains your workspace:
-
--
- ```azurecli-interactive
- az ml folder attach -w myworkspace -g myresourcegroup
- ```
-
- This command creates a `.azureml` subdirectory that contains example runconfig and conda environment files. It also contains a `config.json` file that is used to communicate with your Azure Machine Learning workspace.
-
- For more information, see [az ml folder attach](/cli/azure/ml(v1)/folder#az-ml-folder-attach).
-
- 2. To start the run, use the following command. When using this command, specify the name of the runconfig file (the text before \*.runconfig if you're looking at your file system) against the -c parameter.
-
- ```azurecli-interactive
- az ml run submit-script -c sklearn -e testexperiment train.py
- ```
-
- > [!TIP]
- > The `az ml folder attach` command created a `.azureml` subdirectory, which contains two example runconfig files.
- >
- > If you have a Python script that creates a run configuration object programmatically, you can use [RunConfig.save()](/python/api/azureml-core/azureml.core.runconfiguration#save-path-none--name-none--separate-environment-yaml-false-) to save it as a runconfig file.
- >
- > For more example runconfig files, see [https://github.com/MicrosoftDocs/pipelines-azureml/](https://github.com/MicrosoftDocs/pipelines-azureml/).
-
- For more information, see [az ml run submit-script](/cli/azure/ml(v1)/run#az-ml-run-submit-script).
-
- # [Studio](#tab/azure-studio)
-
- For an example of training a model in the Azure Machine Learning designer, see [Tutorial: Predict automobile price with the designer](tutorial-designer-automobile-price-train-score.md).
-
-
-
-* Monitor the status of a run
-
- # [Python](#tab/python)
-
- * Get the status of a run with the [`get_status()`](/python/api/azureml-core/azureml.core.run%28class%29#get-status--) method.
-
- ```python
- print(notebook_run.get_status())
- ```
-
- * To get the run ID, execution time, and other details about the run, use the [`get_details()`](/python/api/azureml-core/azureml.core.workspace.workspace#get-details--) method.
-
- ```python
- print(notebook_run.get_details())
- ```
-
- * When your run finishes successfully, use the [`complete()`](/python/api/azureml-core/azureml.core.run%28class%29#complete--set-status-true-) method to mark it as completed.
-
- ```python
- notebook_run.complete()
- print(notebook_run.get_status())
- ```
-
- * If you use Python's `with...as` design pattern, the run will automatically mark itself as completed when the run is out of scope. You don't need to manually mark the run as completed.
-
- ```python
- with exp.start_logging() as notebook_run:
- notebook_run.log(name="message", value="Hello from run!")
- print(notebook_run.get_status())
-
- print(notebook_run.get_status())
- ```
-
- # [Azure CLI](#tab/azure-cli)
-
- [!INCLUDE [cli v1](../../includes/machine-learning-cli-v1.md)]
-
- * To view a list of runs for your experiment, use the following command. Replace `experiment` with the name of your experiment:
-
- ```azurecli-interactive
- az ml run list --experiment-name experiment
- ```
-
- This command returns a JSON document that lists information about runs for this experiment.
-
- For more information, see [az ml experiment list](/cli/azure/ml(v1)/experiment#az-ml-experiment-list).
-
- * To view information on a specific run, use the following command. Replace `runid` with the ID of the run:
-
- ```azurecli-interactive
- az ml run show -r runid
- ```
-
- This command returns a JSON document that lists information about the run.
-
- For more information, see [az ml run show](/cli/azure/ml(v1)/run#az-ml-run-show).
-
-
- # [Studio](#tab/azure-studio)
-
-
-
## Run Display Name + The run display name is an optional and customizable name that you can provide for your run. To edit the run display name: 1. Navigate to the runs list.
Navigate to the **Run Details** page for your run and select the edit or pencil
In Azure Machine Learning, you can use properties and tags to help organize and query your runs for important information.
-* Add properties and tags
+* Edit tags
- # [Python](#tab/python)
-
- To add searchable metadata to your runs, use the [`add_properties()`](/python/api/azureml-core/azureml.core.run%28class%29#add-properties-properties-) method. For example, the following code adds the `"author"` property to the run:
-
- ```Python
- local_run.add_properties({"author":"azureml-user"})
- print(local_run.get_properties())
- ```
-
- Properties are immutable, so they create a permanent record for auditing purposes. The following code example results in an error, because we already added `"azureml-user"` as the `"author"` property value in the preceding code:
-
- ```Python
- try:
- local_run.add_properties({"author":"different-user"})
- except Exception as e:
- print(e)
- ```
-
- Unlike properties, tags are mutable. To add searchable and meaningful information for consumers of your experiment, use the [`tag()`](/python/api/azureml-core/azureml.core.run%28class%29#tag-key--value-none-) method.
-
- ```Python
- local_run.tag("quality", "great run")
- print(local_run.get_tags())
-
- local_run.tag("quality", "fantastic run")
- print(local_run.get_tags())
- ```
-
- You can also add simple string tags. When these tags appear in the tag dictionary as keys, they have a value of `None`.
-
- ```Python
- local_run.tag("worth another look")
- print(local_run.get_tags())
- ```
-
- # [Azure CLI](#tab/azure-cli)
-
- [!INCLUDE [cli v1](../../includes/machine-learning-cli-v1.md)]
-
- > [!NOTE]
- > Using the CLI, you can only add or update tags.
-
- To add or update a tag, use the following command:
-
- ```azurecli-interactive
- az ml run update -r runid --add-tag quality='fantastic run'
- ```
-
- For more information, see [az ml run update](/cli/azure/ml(v1)/run#az-ml-run-update).
-
- # [Studio](#tab/azure-studio)
-
You can add, edit, or delete run tags from the studio. Navigate to the **Run Details** page for your run and select the edit, or pencil icon to add, edit, or delete tags for your runs. You can also search and filter on these tags from the runs list page. :::image type="content" source="media/how-to-track-monitor-analyze-runs/run-tags.gif" alt-text="Screenshot: Add, edit, or delete run tags":::
-
* Query properties and tags You can query runs within an experiment to return a list of runs that match specific properties and tags.-
- # [Python](#tab/python)
-
- ```Python
- list(exp.get_runs(properties={"author":"azureml-user"},tags={"quality":"fantastic run"}))
- list(exp.get_runs(properties={"author":"azureml-user"},tags="worth another look"))
- ```
-
- # [Azure CLI](#tab/azure-cli)
-
- [!INCLUDE [cli v1](../../includes/machine-learning-cli-v1.md)]
-
- The Azure CLI supports [JMESPath](http://jmespath.org) queries, which can be used to filter runs based on properties and tags. To use a JMESPath query with the Azure CLI, specify it with the `--query` parameter. The following examples show some queries using properties and tags:
-
- ```azurecli-interactive
- # list runs where the author property = 'azureml-user'
- az ml run list --experiment-name experiment [?properties.author=='azureml-user']
- # list runs where the tag contains a key that starts with 'worth another look'
- az ml run list --experiment-name experiment [?tags.keys(@)[?starts_with(@, 'worth another look')]]
- # list runs where the author property = 'azureml-user' and the 'quality' tag starts with 'fantastic run'
- az ml run list --experiment-name experiment [?properties.author=='azureml-user' && tags.quality=='fantastic run']
- ```
-
- For more information on querying Azure CLI results, see [Query Azure CLI command output](/cli/azure/query-azure-cli).
-
- # [Studio](#tab/azure-studio)
To search for specific runs, navigate to the **All runs** list. From there you have two options:
In Azure Machine Learning, you can use properties and tags to help organize and
OR 1. Use the search bar to quickly find runs by searching on the run metadata like the run status, descriptions, experiment names, and submitter name.
-
+ ## Cancel or fail runs If you notice a mistake or if your run is taking too long to finish, you can cancel the run.
-# [Python](#tab/python)
-
-To cancel a run using the SDK, use the [`cancel()`](/python/api/azureml-core/azureml.core.run%28class%29#cancel--) method:
-
-```python
-src = ScriptRunConfig(source_directory='.', script='hello_with_delay.py')
-local_run = exp.submit(src)
-print(local_run.get_status())
-
-local_run.cancel()
-print(local_run.get_status())
-```
-
-If your run finishes, but it contains an error (for example, the incorrect training script was used), you can use the [`fail()`](/python/api/azureml-core/azureml.core.run%28class%29#fail-error-details-none--error-code-noneset-status-true-) method to mark it as failed.
-
-```python
-local_run = exp.submit(src)
-local_run.fail()
-print(local_run.get_status())
-```
-
-# [Azure CLI](#tab/azure-cli)
--
-To cancel a run using the CLI, use the following command. Replace `runid` with the ID of the run
-
-```azurecli-interactive
-az ml run cancel -r runid -w workspace_name -e experiment_name
-```
-
-For more information, see [az ml run cancel](/cli/azure/ml(v1)/run#az-ml-run-cancel).
-
-# [Studio](#tab/azure-studio)
- To cancel a run in the studio, using the following steps: 1. Go to the running pipeline in either the **Experiments** or **Pipelines** section.
To cancel a run in the studio, using the following steps:
1. In the toolbar, select **Cancel**. --
-## Create child runs
-
-Create child runs to group together related runs, such as for different hyperparameter-tuning iterations.
-
-> [!NOTE]
-> Child runs can only be created using the SDK.
-
-This code example uses the `hello_with_children.py` script to create a batch of five child runs from within a submitted run by using the [`child_run()`](/python/api/azureml-core/azureml.core.run%28class%29#child-run-name-none--run-id-none--outputs-none-) method:
-
-```python
-!more hello_with_children.py
-src = ScriptRunConfig(source_directory='.', script='hello_with_children.py')
-
-local_run = exp.submit(src)
-local_run.wait_for_completion(show_output=True)
-print(local_run.get_status())
-
-with exp.start_logging() as parent_run:
- for c,count in enumerate(range(5)):
- with parent_run.child_run() as child:
- child.log(name="Hello from child run", value=c)
-```
-
-> [!NOTE]
-> As they move out of scope, child runs are automatically marked as completed.
-
-To create many child runs efficiently, use the [`create_children()`](/python/api/azureml-core/azureml.core.run.run#create-children-count-none--tag-key-none--tag-values-none-) method. Because each creation results in a network call,
-creating a batch of runs is more efficient than creating them one by one.
-
-### Submit child runs
-
-Child runs can also be submitted from a parent run. This allows you to create hierarchies of parent and child runs. You can't create a parentless child run: even if the parent run does nothing but launch child runs, it's still necessary to create the hierarchy. The statuses of all runs are independent: a parent can be in the `"Completed"` successful state even if one or more child runs were canceled or failed.
-
-You may wish your child runs to use a different run configuration than the parent run. For instance, you might use a less-powerful, CPU-based configuration for the parent, while using GPU-based configurations for your children. Another common wish is to pass each child different arguments and data. To customize a child run, create a `ScriptRunConfig` object for the child run.
-
-> [!IMPORTANT]
-> To submit a child run from a parent run on a remote compute, you must sign in to the workspace in the parent run code first. By default, the run context object in a remote run does not have credentials to submit child runs. Use a service principal or managed identity credentials to sign in. For more information on authenticating, see [set up authentication](how-to-setup-authentication.md).
-
-The below code:
--- Retrieves a compute resource named `"gpu-cluster"` from the workspace `ws`-- Iterates over different argument values to be passed to the children `ScriptRunConfig` objects-- Creates and submits a new child run, using the custom compute resource and argument-- Blocks until all of the child runs complete-
-```python
-# parent.py
-# This script controls the launching of child scripts
-from azureml.core import Run, ScriptRunConfig
-
-compute_target = ws.compute_targets["gpu-cluster"]
-
-run = Run.get_context()
-
-child_args = ['Apple', 'Banana', 'Orange']
-for arg in child_args:
- run.log('Status', f'Launching {arg}')
- child_config = ScriptRunConfig(source_directory=".", script='child.py', arguments=['--fruit', arg], compute_target=compute_target)
- # Starts the run asynchronously
- run.submit_child(child_config)
-
-# Experiment will "complete" successfully at this point.
-# Instead of returning immediately, block until child runs complete
-
-for child in run.get_children():
- child.wait_for_completion()
-```
-
-To create many child runs with identical configurations, arguments, and inputs efficiently, use the [`create_children()`](/python/api/azureml-core/azureml.core.run.run#create-children-count-none--tag-key-none--tag-values-none-) method. Because each creation results in a network call, creating a batch of runs is more efficient than creating them one by one.
-
-Within a child run, you can view the parent run ID:
-
-```python
-## In child run script
-child_run = Run.get_context()
-child_run.parent.id
-```
-
-### Query child runs
-
-To query the child runs of a specific parent, use the [`get_children()`](/python/api/azureml-core/azureml.core.run%28class%29#get-children-recursive-false--tags-none--properties-none--type-none--status-nonerehydrate-runs-true-) method.
-The ``recursive = True`` argument allows you to query a nested tree of children and grandchildren.
-
-```python
-print(parent_run.get_children())
-```
-
-### Log to parent or root run
-
-You can use the `Run.parent` field to access the run that launched the current child run. A common use-case for using `Run.parent` is to combine log results in a single place. Child runs execute asynchronously and there's no guarantee of ordering or synchronization beyond the ability of the parent to wait for its child runs to complete.
-
-```python
-# in child (or even grandchild) run
-
-def root_run(self : Run) -> Run :
- if self.parent is None :
- return self
- return root_run(self.parent)
-
-current_child_run = Run.get_context()
-root_run(current_child_run).log("MyMetric", f"Data from child run {current_child_run.id}")
-
-```
- ## Monitor the run status by email notification 1. In the [Azure portal](https://portal.azure.com/), in the left navigation bar, select the **Monitor** tab.
machine-learning How To Train Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-cli.md
Last updated 03/31/2022 -+
-# Train models with the CLI (v2) (preview)
+# Train models with the CLI (v2)
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]+ The Azure Machine Learning CLI (v2) is an Azure CLI extension enabling you to accelerate the model training process while scaling up and out on Azure compute, with the model lifecycle tracked and auditable. Training a machine learning model is typically an iterative process. Modern tooling makes it easier than ever to train larger models on more data faster. Previously tedious manual processes like hyperparameter tuning and even algorithm selection are often automated. With the Azure Machine Learning CLI (v2), you can track your jobs (and models) in a [workspace](concept-workspace.md) with hyperparameter sweeps, scale-up on high-performance Azure compute, and scale-out utilizing distributed training. - ## Prerequisites - To use the CLI (v2), you must have an Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
You can run this:
## Train a model
-At this point, a model still hasn't been trained. Let's add some `sklearn` code into a Python script with MLflow tracking to train a model on the Iris CSV:
+In Azure Machine Learning, there are two main ways to train a model:
+
+1. Use automated ML to train models with your data and get the best model for you. This approach maximizes productivity by automating the iterative process of tuning hyperparameters and trying out different algorithms.
+1. Train a model with your own custom training script. This approach offers the most control and allows you to customize your training.
++
+### Train a model with automated ML
+
+Automated ML is the easiest way to train a model because you don't need to know exactly how the training algorithms work. You just provide your training/validation/test datasets and some basic configuration parameters, such as the ML task, target column, primary metric, and timeout, and the service trains multiple models for you by trying out various algorithms and hyperparameter combinations.
+
+When you train with automated ML via the CLI (v2), you just need to create a YAML file with the AutoML configuration and provide it to the CLI, which creates and submits the training job.
+
+The following example shows an AutoML configuration file for training a classification model where:
+* The primary metric is `accuracy`.
+* The training times out after 180 minutes.
+* The data for training is in the folder "./training-mltable-folder". Automated ML jobs only accept data in the form of an `MLTable`.
++
+The MLTable definition mentioned above is what points to the training data file, in this case a local .csv file that will be uploaded automatically:
++
+Finally, you can run it (create the AutoML job) with this CLI command:
+
+```azurecli
+az ml job create --file ./hello-automl-job-basic.yml
+```
+
+Or, use the following if you provide the workspace IDs explicitly instead of relying on the default workspace:
+
+```azurecli
+az ml job create --file ./hello-automl-job-basic.yml --workspace-name [YOUR_AZURE_WORKSPACE] --resource-group [YOUR_AZURE_RESOURCE_GROUP] --subscription [YOUR_AZURE_SUBSCRIPTION]
+```
+
+To investigate additional AutoML model training examples that use other ML tasks such as regression, time-series forecasting, image classification, object detection, and NLP text classification, see the complete list of [AutoML CLI examples](https://github.com/Azure/azureml-examples/tree/sdk-preview/cli/jobs/automl-standalone-jobs).
+
+### Train a model with a custom script
+
+When you train by using your own custom script, the first thing you need is that Python script (.py). Let's add some `sklearn` code to a Python script with MLflow tracking to train a model on the Iris CSV:
:::code language="python" source="~/azureml-examples-main/cli/jobs/single-step/scikit-learn/iris/src/main.py":::
machine-learning How To Train Distributed Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-distributed-gpu.md
Last updated 10/21/2021+ # Distributed GPU training guide + Learn more about how to use distributed GPU training code in Azure Machine Learning (ML). This article will not teach you about distributed training. It will help you run your existing distributed training code on Azure Machine Learning. It offers tips and examples for you to follow for each framework: * Message Passing Interface (MPI)
machine-learning How To Train Keras https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-keras.md
Last updated 09/28/2020 -
-#Customer intent: As a Python Keras developer, I need to combine open-source with a cloud platform to train, evaluate, and deploy my deep learning models at scale.
+
+#Customer intent: As a Python Keras developer, I need to combine open-source with a cloud platform to train, evaluate, and deploy my deep learning models at scale.
# Train Keras models at scale with Azure Machine Learning + In this article, learn how to run your Keras training scripts with Azure Machine Learning. The example code in this article shows you how to train and register a Keras classification model built using the TensorFlow backend with Azure Machine Learning. It uses the popular [MNIST dataset](http://yann.lecun.com/exdb/mnist/) to classify handwritten digits using a deep neural network (DNN) built using the [Keras Python library](https://keras.io) running on top of [TensorFlow](https://www.tensorflow.org/overview).
ws = Workspace.from_config()
### Create a file dataset
-A `FileDataset` object references one or multiple files in your workspace datastore or public urls. The files can be of any format, and the class provides you with the ability to download or mount the files to your compute. By creating a `FileDataset`, you create a reference to the data source location. If you applied any transformations to the data set, they will be stored in the data set as well. The data remains in its existing location, so no extra storage cost is incurred. See the [how-to](./how-to-create-register-datasets.md) guide on the `Dataset` package for more information.
+A `FileDataset` object references one or multiple files in your workspace datastore or public urls. The files can be of any format, and the class provides you with the ability to download or mount the files to your compute. By creating a `FileDataset`, you create a reference to the data source location. If you applied any transformations to the data set, they will be stored in the data set as well. The data remains in its existing location, so no extra storage cost is incurred. See the [how-to](./v1/how-to-create-register-datasets.md) guide on the `Dataset` package for more information.
```python from azureml.core.dataset import Dataset
machine-learning How To Train Mlflow Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-mlflow-projects.md
Last updated 06/16/2021 -+ # Train ML models with MLflow Projects and Azure Machine Learning (preview) ++ [!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)] In this article, learn how to enable MLflow's tracking URI and logging API, collectively known as [MLflow Tracking](https://mlflow.org/docs/latest/quickstart.html#using-the-tracking-api), to submit training jobs with [MLflow Projects](https://www.mlflow.org/docs/latest/projects.html) and Azure Machine Learning backend support. You can submit jobs locally with Azure Machine Learning tracking or migrate your runs to the cloud like via an [Azure Machine Learning Compute](./how-to-create-attach-compute-cluster.md).
If you don't plan to use the logged metrics and artifacts in your workspace, the
1. In the Azure portal, select **Resource groups** on the far left.
- ![Delete in the Azure portal](./media/how-to-use-mlflow/delete-resources.png)
+ ![Delete in the Azure portal](./v1/media/how-to-use-mlflow/delete-resources.png)
1. From the list, select the resource group you created.
machine-learning How To Train Pytorch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-pytorch.md
Last updated 02/28/2022 -
-#Customer intent: As a Python PyTorch developer, I need to combine open-source with a cloud platform to train, evaluate, and deploy my deep learning models at scale.
+
+#Customer intent: As a Python PyTorch developer, I need to combine open-source with a cloud platform to train, evaluate, and deploy my deep learning models at scale.
# Train PyTorch models at scale with Azure Machine Learning + In this article, learn how to run your [PyTorch](https://pytorch.org/) training scripts at enterprise scale using Azure Machine Learning. The example scripts in this article are used to classify chicken and turkey images to build a deep learning neural network (DNN) based on [PyTorch's transfer learning tutorial](https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html). Transfer learning is a technique that applies knowledge gained from solving one problem to a different but related problem. Transfer learning shortens the training process by requiring less data, time, and compute resources than training from scratch. To learn more about transfer learning, see the [deep learning vs machine learning](./concept-deep-learning-vs-machine-learning.md#what-is-transfer-learning) article.
machine-learning How To Train Scikit Learn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-scikit-learn.md
Last updated 03/21/2022 --+ #Customer intent: As a Python scikit-learn developer, I need to combine open-source with a cloud platform to train, evaluate, and deploy my machine learning models at scale. # Train scikit-learn models at scale with Azure Machine Learning + In this article, learn how to run your scikit-learn training scripts with Azure Machine Learning. The example scripts in this article are used to classify iris flower images to build a machine learning model based on scikit-learn's [iris dataset](https://archive.ics.uci.edu/ml/datasets/iris).
machine-learning How To Train Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-sdk.md
+
+ Title: Train models with the Azure ML Python SDK v2 (preview)
+
+description: Configure and submit Azure Machine Learning jobs to train your models with SDK v2.
++++++ Last updated : 05/10/2022++++
+# Train models with the Azure ML Python SDK v2 (preview)
+
+> [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK version you are using:"]
+> * [v1](v1/how-to-attach-compute-targets.md)
+> * [v2 (preview)](how-to-train-sdk.md)
++
+> [!IMPORTANT]
+> SDK v2 is currently in public preview.
+> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+In this article, you learn how to configure and submit Azure Machine Learning jobs to train your models. Snippets of code explain the key parts of configuration and submission of a training job. Then use one of the [example notebooks](https://github.com/Azure/azureml-examples/tree/sdk-preview/sdk) to find the full end-to-end working examples.
+
+## Prerequisites
+
+* If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today
+* The Azure Machine Learning SDK v2 for Python
+* An Azure Machine Learning workspace
+
+### Clone examples repository
+
+To run the training examples, first clone the examples repository and change into the `sdk` directory:
+
+```bash
+git clone --depth 1 https://github.com/Azure/azureml-examples --branch sdk-preview
+cd azureml-examples/sdk
+```
+
+> [!TIP]
+> Use `--depth 1` to clone only the latest commit to the repository, which reduces time to complete the operation.
+
+## Start on your local machine
+
+Start by running a script that trains a model using `lightgbm`. The script file is available [here](https://github.com/Azure/azureml-examples/blob/sdk-preview/sdk/jobs/single-step/lightgbm/iris/src/main.py). The script needs three inputs:
+
+* _input data_: You'll use data from a [web location](https://azuremlexamples.blob.core.windows.net/datasets/iris.csv) for your run. This example uses a file in a remote location for brevity, but you can use a local file as well.
+* _learning-rate_: You'll use a learning rate of _0.9_
+* _boosting_: You'll use gradient boosted decision trees (_gbdt_)
+
+Run the script file as follows:
+
+```bash
+cd jobs/single-step/lightgbm/iris
+
+python src/main.py --iris-csv https://azuremlexamples.blob.core.windows.net/datasets/iris.csv --learning-rate 0.9 --boosting gbdt
+```
+
+The expected output is as follows:
+
+```terminal
+2022/04/21 15:02:44 INFO mlflow.tracking.fluent: Autologging successfully enabled for lightgbm.
+2022/04/21 15:02:44 INFO mlflow.tracking.fluent: Autologging successfully enabled for sklearn.
+2022/04/21 15:02:45 INFO mlflow.utils.autologging_utils: Created MLflow autologging run with ID 'a1d5f652796e4d88961176166de52253', which will track hyperparameters, performance metrics, model artifacts, and lineage information for the current lightgbm workflow
+lightgbm\engine.py:177: UserWarning: Found `num_iterations` in params. Will use it instead of argument
+[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000164 seconds.
+You can set `force_col_wise=true` to remove the overhead.
+[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
+[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
+[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
+```
+
+## Move to the cloud
+
+Now that the local run works, move it to an Azure Machine Learning workspace. To run the job on Azure ML, you need:
+
+* A workspace in which to run the job
+* A compute on which to run it
+* An environment on the compute to ensure you have the required packages to run your script
+
+Let's tackle these steps below.
+
+### 1. Connect to the workspace
+
+To connect to the workspace, you need three identifier parameters: a subscription ID, a resource group, and a workspace name. You'll use these details in the `MLClient` from `azure.ai.ml` to get a handle to the required Azure Machine Learning workspace. To authenticate, you use the [default Azure authentication](/python/api/azure-identity/azure.identity.defaultazurecredential?view=azure-python). See this [example](https://github.com/Azure/azureml-examples/blob/sdk-preview/sdk/jobs/configuration.ipynb) for more details on how to configure credentials and connect to a workspace.
+
+```python
+#import required libraries
+from azure.ai.ml import MLClient
+from azure.identity import DefaultAzureCredential
+
+#Enter details of your AML workspace
+subscription_id = '<SUBSCRIPTION_ID>'
+resource_group = '<RESOURCE_GROUP>'
+workspace = '<AML_WORKSPACE_NAME>'
+
+#connect to the workspace
+ml_client = MLClient(DefaultAzureCredential(), subscription_id, resource_group, workspace)
+```
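+
+Creating the `MLClient` doesn't immediately contact the workspace. As an optional sanity check, you can retrieve the workspace details with the handle. This is a minimal sketch, assuming the `azure-ai-ml` preview package is installed; it isn't part of the referenced notebook.
+
+```python
+# Optional: confirm that the handle and credentials work by fetching the workspace details.
+ws = ml_client.workspaces.get(workspace)
+print(ws.name, ws.location, ws.resource_group, sep="\t")
+```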
+
+### 2. Create compute
+
+You'll create a compute called `cpu-cluster` for your job, with this code:
++
+[!notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/configuration.ipynb?name=create-cpu-compute)]
++
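+The referenced notebook cell isn't reproduced here. As a rough sketch of what it typically contains (the VM size, node counts, and idle timeout below are illustrative assumptions, not values from the notebook):
+
+```python
+from azure.ai.ml.entities import AmlCompute
+
+# Provision (or reuse) an auto-scaling CPU cluster named "cpu-cluster".
+cpu_cluster = AmlCompute(
+    name="cpu-cluster",
+    size="Standard_DS3_v2",          # illustrative VM size
+    min_instances=0,                 # scale to zero when idle
+    max_instances=4,
+    idle_time_before_scale_down=120, # seconds
+)
+ml_client.compute.begin_create_or_update(cpu_cluster)
+```
+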
+### 3. Environment to run the script
+
+To run your script on `cpu-cluster`, you need an environment that has the required packages and dependencies. There are a few options available for environments:
+
+* Use a curated environment in your workspace - Azure ML offers several curated [environments](https://ml.azure.com/environments), which cater to various needs.
+* Use a custom environment - Azure ML allows you to create your own environment using:
+ * A Docker image
+ * A base Docker image with a conda YAML for further customization
+ * A Docker build context
+
+ Check this [example](https://github.com/Azure/azureml-examples/sdk/assets/environment/environment.ipynb) to learn how to create custom environments.
+
+You'll use a curated environment provided by Azure ML for `lightgbm` called `AzureML-lightgbm-3.2-ubuntu18.04-py37-cpu`.
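+
+If you'd rather use a custom environment, a minimal sketch with SDK v2 (preview) could look like the following; the environment name, base image, and `conda.yml` path are illustrative assumptions.
+
+```python
+from azure.ai.ml.entities import Environment
+
+# Custom environment: a base Docker image plus a conda specification file.
+custom_env = Environment(
+    name="lightgbm-custom-env",                                          # illustrative name
+    image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest",   # illustrative base image
+    conda_file="environment/conda.yml",                                  # illustrative path
+    description="Custom environment for the lightgbm iris example",
+)
+ml_client.environments.create_or_update(custom_env)
+```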
+
+### 4. Submit a job to run the script
+
+To run this script, you'll use a `command`. The command will be run by submitting it as a `job` to Azure ML.
+
+[!notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/single-step/lightgbm/iris/lightgbm-iris-sweep.ipynb?name=create-command)]
+
+[!notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/single-step/lightgbm/iris/lightgbm-iris-sweep.ipynb?name=run-command)]
++
+In the above, you configured:
+- `code` - path where the code to run the command is located
+- `command` - command that needs to be run
+- `inputs` - dictionary of inputs to the command, specified as name-value pairs. The key is a name for the input within the context of the job and the value is the input value. Inputs are referenced in the `command` using the `${{inputs.<input_name>}}` expression. To use files or folders as inputs, you can use the `Input` class.
+
+For more details, refer to the [reference documentation](/python/api/azure-ai-ml/azure.ai.ml#azure-ai-ml-command).
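+
+The referenced notebook cells aren't reproduced here. As a rough sketch of how a command job for this script could be defined and submitted (the exact cell contents may differ; the values below are assumptions):
+
+```python
+from azure.ai.ml import command, Input
+
+# Define a command job: run src/main.py on cpu-cluster with the curated lightgbm environment.
+command_job = command(
+    code="./src",
+    command="python main.py --iris-csv ${{inputs.iris_csv}} --learning-rate ${{inputs.learning_rate}} --boosting ${{inputs.boosting}}",
+    inputs={
+        "iris_csv": Input(
+            type="uri_file",
+            path="https://azuremlexamples.blob.core.windows.net/datasets/iris.csv",
+        ),
+        "learning_rate": 0.9,
+        "boosting": "gbdt",
+    },
+    environment="AzureML-lightgbm-3.2-ubuntu18.04-py37-cpu@latest",
+    compute="cpu-cluster",
+)
+
+# Submit the command as a job to the workspace.
+returned_job = ml_client.jobs.create_or_update(command_job)
+print(returned_job.name)
+```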
+
+## Improve the model using hyperparameter sweep
+
+Now that you've run a job on Azure, let's improve it using hyperparameter tuning. Also called hyperparameter optimization, this is the process of finding the configuration of hyperparameters that results in the best performance. Azure Machine Learning provides a `sweep` function on the `command` to do hyperparameter tuning.
+
+To perform a sweep, there must be one or more inputs to sweep over. These inputs can have discrete or continuous values. The `sweep` function runs the `command` multiple times, using a different combination of the specified input values each time. Each input is a dictionary of name-value pairs. The key is the name of the hyperparameter and the value is the parameter expression.
+
+Let's improve our model by sweeping over the `learning_rate` and `boosting` inputs to the script. In the previous step, you used a specific value for these parameters, but now you'll use a range or choice of values.
+
+[!notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/single-step/lightgbm/iris/lightgbm-iris-sweep.ipynb?name=search-space)]
++
+Now that you've defined the parameters, run the sweep:
+
+[!notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/single-step/lightgbm/iris/lightgbm-iris-sweep.ipynb?name=configure-sweep)]
+
+[!notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/single-step/lightgbm/iris/lightgbm-iris-sweep.ipynb?name=run-sweep)]
++
+As seen above, the `sweep` function allows you to configure the following key aspects (see the sketch after this list):
+
+* `sampling_algorithm` - The hyperparameter sampling algorithm to use over the `search_space`. Allowed values are `random`, `grid`, and `bayesian`.
+* `objective` - the objective of the sweep
+ * `primary_metric` - The name of the primary metric reported by each trial job. The metric must be logged in the user's training script using `mlflow.log_metric()` with the same corresponding metric name.
+ * `goal` - The optimization goal of `objective.primary_metric`. The allowed values are `maximize` and `minimize`.
+* `compute` - Name of the compute target to execute the job on.
+* `limits` - Limits for the sweep job
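+
+As a rough sketch of how these pieces fit together (not the exact notebook cells; the search ranges, primary metric name, and limit values below are assumptions):
+
+```python
+from azure.ai.ml.sweep import Choice, Uniform
+
+# Replace the fixed inputs with search-space expressions.
+command_job_for_sweep = command_job(
+    learning_rate=Uniform(min_value=0.01, max_value=0.9),
+    boosting=Choice(values=["gbdt", "dart"]),
+)
+
+# Configure the sweep over the command.
+sweep_job = command_job_for_sweep.sweep(
+    compute="cpu-cluster",
+    sampling_algorithm="random",
+    primary_metric="test-multi_logloss",   # must match a metric logged by the training script
+    goal="Minimize",
+)
+
+# Limits for the sweep job (illustrative values).
+sweep_job.set_limits(max_total_trials=20, max_concurrent_trials=10, timeout=7200)
+
+returned_sweep_job = ml_client.jobs.create_or_update(sweep_job)
+```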
+
+Once this job completes, you can look at the metrics and the job details in the [Azure ML Portal](https://ml.azure.com/). The job details page will identify the best performing child run.
+
+
+## Distributed training
+
+Azure Machine Learning supports PyTorch, TensorFlow, and MPI-based distributed training. Let's look at how to configure distribution for the `command_job` you created earlier:
+
+```python
+# Distribute using PyTorch
+from azure.ai.ml import PyTorchDistribution
+command_job.distribution = PyTorchDistribution(process_count_per_instance=4)
+
+# Distribute using TensorFlow
+from azure.ai.ml import TensorFlowDistribution
+command_job.distribution = TensorFlowDistribution(parameter_server_count=1, worker_count=2)
+
+# Distribute using MPI
+from azure.ai.ml import MpiDistribution
+command_job.distribution = MpiDistribution(process_count_per_instance=3)
+```
+
+## Next steps
+
+Try these next steps to learn how to use the Azure Machine Learning SDK (v2) for Python:
+
+* Use pipelines with the Azure ML Python SDK (v2)
machine-learning How To Train Tensorflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-tensorflow.md
Last updated 02/23/2022 -
-# Customer intent: As a TensorFlow developer, I need to combine open-source with a cloud platform to train, evaluate, and deploy my deep learning models at scale.
+
+#Customer intent: As a TensorFlow developer, I need to combine open-source with a cloud platform to train, evaluate, and deploy my deep learning models at scale.
# Train TensorFlow models at scale with Azure Machine Learning + In this article, learn how to run your [TensorFlow](https://www.tensorflow.org/overview) training scripts at scale using Azure Machine Learning. This example trains and registers a TensorFlow model to classify handwritten digits using a deep neural network (DNN).
ws = Workspace.from_config()
### Create a file dataset
-A `FileDataset` object references one or multiple files in your workspace datastore or public urls. The files can be of any format, and the class provides you with the ability to download or mount the files to your compute. By creating a `FileDataset`, you create a reference to the data source location. If you applied any transformations to the data set, they'll be stored in the data set as well. The data remains in its existing location, so no extra storage cost is incurred. For more information the `Dataset` package, see the [How to create register datasets article](./how-to-create-register-datasets.md).
+A `FileDataset` object references one or multiple files in your workspace datastore or public URLs. The files can be of any format, and the class provides you with the ability to download or mount the files to your compute. By creating a `FileDataset`, you create a reference to the data source location. If you applied any transformations to the data set, they'll be stored in the data set as well. The data remains in its existing location, so no extra storage cost is incurred. For more information about the `Dataset` package, see the [How to create register datasets article](./v1/how-to-create-register-datasets.md).
```python from azureml.core.dataset import Dataset
machine-learning How To Train With Custom Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-with-custom-image.md
Last updated 08/11/2021--++ # Train a model by using a custom Docker image + In this article, learn how to use a custom Docker image when you're training models with Azure Machine Learning. You'll use the example scripts in this article to classify pet images by creating a convolutional neural network. Azure Machine Learning provides a default Docker base image. You can also use Azure Machine Learning environments to specify a different base image, such as one of the maintained [Azure Machine Learning base images](https://github.com/Azure/AzureML-Containers) or your own [custom image](./how-to-deploy-custom-container.md). Custom base images allow you to closely manage your dependencies and maintain tighter control over component versions when running training jobs.
For more information about creating and managing Azure Machine Learning environm
### Create or attach a compute target
-You need to create a [compute target](concept-azure-machine-learning-architecture.md#compute-targets) for training your model. In this tutorial, you create `AmlCompute` as your training compute resource.
+You need to create a [compute target](v1/concept-azure-machine-learning-architecture.md#compute-targets) for training your model. In this tutorial, you create `AmlCompute` as your training compute resource.
Creation of `AmlCompute` takes a few minutes. If the `AmlCompute` resource is already in your workspace, this code skips the creation process.
machine-learning How To Train With Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-with-datasets.md
Last updated 10/21/2021 --
-# Customer intent: As an experienced Python developer, I need to make my data available to my local or remote compute target to train my machine learning models.
-+
+#Customer intent: As an experienced Python developer, I need to make my data available to my local or remote compute target to train my machine learning models.
# Train models with Azure Machine Learning datasets + In this article, you learn how to work with [Azure Machine Learning datasets](/python/api/azureml-core/azureml.core.dataset%28class%29) to train machine learning models. You can use datasets in your local or remote compute target without worrying about connection strings or data paths. * For structured data, see [Consume datasets in machine learning training scripts](#consume-datasets-in-machine-learning-training-scripts).
In this article, you learn how to work with [Azure Machine Learning datasets](/p
Azure Machine Learning datasets provide a seamless integration with Azure Machine Learning training functionality like [ScriptRunConfig](/python/api/azureml-core/azureml.core.scriptrunconfig), [HyperDrive](/python/api/azureml-train-core/azureml.train.hyperdrive), and [Azure Machine Learning pipelines](./how-to-create-machine-learning-pipelines.md).
-If you are not ready to make your data available for model training, but want to load your data to your notebook for data exploration, see how to [explore the data in your dataset](how-to-create-register-datasets.md#explore-data).
+If you are not ready to make your data available for model training, but want to load your data to your notebook for data exploration, see how to [explore the data in your dataset](./v1/how-to-create-register-datasets.md).
## Prerequisites
To create and train with datasets, you need:
If you have structured data not yet registered as a dataset, create a TabularDataset and use it directly in your training script for your local or remote experiment.
-In this example, you create an unregistered [TabularDataset](/python/api/azureml-core/azureml.data.tabulardataset) and specify it as a script argument in the [ScriptRunConfig](/python/api/azureml-core/azureml.core.script_run_config.scriptrunconfig) object for training. If you want to reuse this TabularDataset with other experiments in your workspace, see [how to register datasets to your workspace](how-to-create-register-datasets.md#register-datasets).
+In this example, you create an unregistered [TabularDataset](/python/api/azureml-core/azureml.data.tabulardataset) and specify it as a script argument in the [ScriptRunConfig](/python/api/azureml-core/azureml.core.script_run_config.scriptrunconfig) object for training. If you want to reuse this TabularDataset with other experiments in your workspace, see [how to register datasets to your workspace](./v1/how-to-create-register-datasets.md).
### Create a TabularDataset
The following code configures a script argument `--input-data` that you will spe
> [!Note]
> If your original data source contains NaN, empty strings, or blank values, those values are replaced with a *Null* value when you use `to_pandas_dataframe()`.
-If you need to load the prepared data into a new dataset from an in-memory pandas dataframe, write the data to a local file, like a parquet, and create a new dataset from that file. Learn more about [how to create datasets](how-to-create-register-datasets.md).
+If you need to load the prepared data into a new dataset from an in-memory pandas dataframe, write the data to a local file, like a parquet, and create a new dataset from that file. Learn more about [how to create datasets](./v1/how-to-create-register-datasets.md).
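+
+As a rough sketch of that workflow (SDK v1; the sample dataframe, local file name, and datastore path below are placeholders):
+
+```Python
+import pandas as pd
+from azureml.core import Workspace, Dataset
+
+ws = Workspace.from_config()
+datastore = ws.get_default_datastore()
+
+# prepared_df stands in for the in-memory dataframe you prepared earlier.
+prepared_df = pd.DataFrame({"age": [22, 38], "survived": [0, 1]})
+
+# Write the dataframe to a local parquet file, upload it, and create a new TabularDataset from it.
+prepared_df.to_parquet("prepared_data.parquet")
+datastore.upload_files(["prepared_data.parquet"], target_path="prepared/", overwrite=True)
+new_dataset = Dataset.Tabular.from_parquet_files(path=(datastore, "prepared/prepared_data.parquet"))
+```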
```Python %%writefile $script_folder/train_titanic.py
For the notebook example, see [How to configure a training run with data input
The following example creates an unregistered FileDataset, `mnist_data`, from web URLs. This FileDataset is the input data for your training run.
-Learn more about [how to create datasets](how-to-create-register-datasets.md) from other sources.
+Learn more about [how to create datasets](./v1/how-to-create-register-datasets.md) from other sources.
```Python
machine-learning How To Train With Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-with-rest.md
Last updated 03/31/2022 -+ # Train models with REST (preview) Learn how to use the Azure Machine Learning REST API to create and manage training jobs (preview). + The REST API uses standard HTTP verbs to create, retrieve, update, and delete resources. The REST API works with any language or tool that can make HTTP requests. REST's straightforward structure makes it a good choice in scripting environments and for MLOps automation.
machine-learning How To Train With Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-with-ui.md
-+ Last updated 10/21/2021
# Create a training job with the job creation UI (preview)
-There are many ways to create a training job with Azure Machine Learning. You can use the CLI (see [Train models (create jobs) with the CLI (v2) (preview)](how-to-train-cli.md)), the REST API (see [Train models with REST (preview)](how-to-train-with-rest.md)), or you can use the UI to directly create a training job. In this article, you'll learn how to use your own data and code to train a machine learning model with the job creation UI in Azure Machine Learning studio.
+There are many ways to create a training job with Azure Machine Learning. You can use the CLI (see [Train models (create jobs) with the CLI (v2)](how-to-train-cli.md)), the REST API (see [Train models with REST (preview)](how-to-train-with-rest.md)), or you can use the UI to directly create a training job. In this article, you'll learn how to use your own data and code to train a machine learning model with the job creation UI in Azure Machine Learning studio.
## Prerequisites
The first step in the job creation UI is to select the compute target on which y
| | | | Compute instance | [What is an Azure Machine Learning compute instance?](concept-compute-instance.md) | | Compute cluster | [What is a compute cluster?](how-to-create-attach-compute-cluster.md#what-is-a-compute-cluster) |
-| Attached Kubernetes cluster | [Configure Azure Arc-enabled machine learning (preview)](how-to-attach-arc-kubernetes.md). |
+| Attached Kubernetes cluster | [Configure and attach Kubernetes cluster anywhere (preview)](how-to-attach-kubernetes-anywhere.md). |
1. Select a compute type 1. Select an existing compute resource. The dropdown shows the node information and SKU type to help your choice.
For more information on creating the various types, see:
| | | | Compute instance | [Create and manage an Azure Machine Learning compute instance](how-to-create-manage-compute-instance.md) | | Compute cluster | [Create an Azure Machine Learning compute cluster](how-to-create-attach-compute-cluster.md) |
-| Attached Kubernetes cluster | [Attach an Azure Arc-enabled Kubernetes cluster](how-to-attach-arc-kubernetes.md) |
+| Attached Kubernetes cluster | [Attach an Azure Arc-enabled Kubernetes cluster](how-to-attach-kubernetes-anywhere.md) |
## Specify the necessary environment
When you use an input in the command, you need to specify the input name. To ind
Once you've configured your job, choose **Next** to go to the **Review** page. To modify a setting, choose the pencil icon and make the change.
-You may choose **view the YAML spec** to review and download the yaml file generated by this job configuration. This job yaml file can be used to submit the job from the CLI (v2). (See [Train models (create jobs) with the CLI (v2) (preview)](how-to-train-cli.md).)
+You may choose **view the YAML spec** to review and download the yaml file generated by this job configuration. This job yaml file can be used to submit the job from the CLI (v2). (See [Train models (create jobs) with the CLI (v2)](how-to-train-cli.md).)
[![view yaml spec](media/how-to-train-with-ui/view-yaml.png)](media/how-to-train-with-ui/view-yaml.png) [![Yaml spec](media/how-to-train-with-ui/yaml-spec.png)](media/how-to-train-with-ui/yaml-spec.png)
To launch the job, choose **Create**. Once the job is created, Azure will show y
* [Deploy and score a machine learning model with a managed online endpoint (preview)](how-to-deploy-managed-online-endpoints.md).
-* [Train models (create jobs) with the CLI (v2) (preview)](how-to-train-cli.md)
+* [Train models (create jobs) with the CLI (v2)](how-to-train-cli.md)
machine-learning How To Trigger Published Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-trigger-published-pipeline.md
Last updated 10/21/2021 --
-# Customer intent: As a Python coding data scientist, I want to improve my operational efficiency by scheduling my training pipeline of my model using the latest data.
+
+#Customer intent: As a Python coding data scientist, I want to improve my operational efficiency by scheduling my training pipeline of my model using the latest data.
# Trigger machine learning pipelines + In this article, you'll learn how to programmatically schedule a pipeline to run on Azure. You can create a schedule based on elapsed time or on file-system changes. Time-based schedules can be used to take care of routine tasks, such as monitoring for data drift. Change-based schedules can be used to react to irregular or unpredictable changes, such as new data being uploaded or old data being edited. After learning how to create schedules, you'll learn how to retrieve and deactivate them. Finally, you'll learn how to use other Azure services, Azure Logic App and Azure Data Factory, to run pipelines. An Azure Logic App allows for more complex triggering logic or behavior. Azure Data Factory pipelines allow you to call a machine learning pipeline as part of a larger data orchestration pipeline. ## Prerequisites
machine-learning How To Troubleshoot Auto Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-auto-ml.md
Last updated 10/21/2021 -+ # Troubleshoot automated ML experiments in Python + In this guide, learn how to identify and resolve known issues in your automated machine learning experiments with the [Azure Machine Learning SDK](/python/api/overview/azure/ml/intro). ## Version dependencies
If you have over 100 automated ML experiments, this may cause new automated ML e
## Next steps
-+ Learn more about [how to train a regression model with Automated machine learning](tutorial-auto-train-models.md) or [how to train using Automated machine learning on a remote resource](concept-automated-ml.md#local-remote).
++ Learn more about [how to train a regression model with Automated machine learning](tutorial-auto-train-models.md) or [how to train using Automated machine learning on a remote resource](./v1/concept-automated-ml-v1.md#local-remote). + Learn more about [how and where to deploy a model](how-to-deploy-and-where.md).
machine-learning How To Troubleshoot Batch Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-batch-endpoints.md
Title: Troubleshooting batch endpoints (preview)
+ Title: Troubleshooting batch endpoints
description: Tips to help you succeed with batch endpoints. -+ Last updated 03/31/2022 #Customer intent: As an ML Deployment Pro, I want to figure out why my batch endpoint doesn't run so that I can fix it.-
-# Troubleshooting batch endpoints (preview)
+# Troubleshooting batch endpoints
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
-Learn how to troubleshoot and solve, or work around, common errors you may come across when using [batch endpoints](how-to-use-batch-endpoint.md) (preview) for batch scoring.
- [!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)]
+Learn how to troubleshoot and solve, or work around, common errors you may come across when using [batch endpoints](how-to-use-batch-endpoint.md) for batch scoring.
The following table contains common problems and solutions you may see during batch endpoint development and consumption. | Problem | Possible solution | |--|--| | Code configuration or Environment is missing. | Ensure you provide the scoring script and an environment definition if you're using a non-MLflow model. No-code deployment is supported for the MLflow model only. For more, see [Track ML models with MLflow and Azure Machine Learning](how-to-use-mlflow.md)|
-| Unsupported input data. | Batch endpoint accepts input data in three forms: 1) registered data 2) data in the cloud 3) data in local. Ensure you're using the right format. For more, see [Use batch endpoints (preview) for batch scoring](how-to-use-batch-endpoint.md)|
+| Unsupported input data. | A batch endpoint accepts input data in three forms: 1) registered data, 2) data in the cloud, and 3) local data. Ensure you're using the right format. For more, see [Use batch endpoints for batch scoring](how-to-use-batch-endpoint.md)|
| Output already exists. | If you configure your own output location, ensure you provide a new output for each endpoint invocation. | ## Understanding logs of a batch scoring job
logger = logging.getLogger(__name__)
logger.setLevel(args.logging_level.upper()) logger.info("Info log statement") logger.debug("Debug log statement")
-```
+```
machine-learning How To Troubleshoot Deployment Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-deployment-local.md
Last updated 10/21/2021 -+ #Customer intent: As a data scientist, I want to try a local deployment so that I can troubleshoot my model deployment problems.
Try a local model deployment as a first step in troubleshooting deployment to Az
* Option B - Debug locally on your compute * The [Azure Machine Learning SDK](/python/api/overview/azure/ml/install). * The [Azure CLI](/cli/azure/install-azure-cli).
- * The [CLI extension for Azure Machine Learning](reference-azure-machine-learning-cli.md).
+ * The [CLI extension for Azure Machine Learning](v1/reference-azure-machine-learning-cli.md).
* Have a working Docker installation on your local system. * To verify your Docker installation, use the command `docker run hello-world` from a terminal or command prompt. For information on installing Docker, or troubleshooting Docker errors, see the [Docker Documentation](https://docs.docker.com/). * Option C - Enable local debugging with Azure Machine Learning inference HTTP server.
You can find a sample [local deployment notebook](https://github.com/Azure/Machi
To deploy locally, modify your code to use `LocalWebservice.deploy_configuration()` to create a deployment configuration. Then use `Model.deploy()` to deploy the service. The following example deploys a model (contained in the model variable) as a local web service: + ```python from azureml.core.environment import Environment from azureml.core.model import InferenceConfig, Model
Learn more about deployment:
* [Azure Machine Learning inference HTTP Server](how-to-inference-server-http.md) * [How to deploy and where](how-to-deploy-and-where.md) * [Tutorial: Train & deploy models](tutorial-train-deploy-notebook.md)
-* [How to run and debug experiments locally](./how-to-debug-visual-studio-code.md)
+* [How to run and debug experiments locally](./how-to-debug-visual-studio-code.md)
machine-learning How To Troubleshoot Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-deployment.md
Last updated 10/21/2021
-+ #Customer intent: As a data scientist, I want to figure out why my model deployment fails so that I can fix it.
Learn how to troubleshoot and solve, or work around, common errors you may encou
* An **Azure subscription**. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/). * The [Azure Machine Learning SDK](/python/api/overview/azure/ml/install). * The [Azure CLI](/cli/azure/install-azure-cli).
-* The [CLI extension for Azure Machine Learning](reference-azure-machine-learning-cli.md).
+* The [CLI extension for Azure Machine Learning](v1/reference-azure-machine-learning-cli.md).
## Steps for Docker deployment of machine learning models
az ml service get-logs --verbose --workspace-name <my workspace name> --name <se
# [Python](#tab/python) Assuming you have an object of type `azureml.core.Workspace` called `ws`, you can do the following:
The most common failure for `azureml-fe-aci` is that the provided SSL certificat
Often, in the `init()` function in the scoring script, [Model.get_model_path()](/python/api/azureml-core/azureml.core.model.model#get-model-path-model-name--version-noneworkspace-none-) function is called to locate a model file or a folder of model files in the container. If the model file or folder cannot be found, the function fails. The easiest way to debug this error is to run the below Python code in the Container shell: + ```python from azureml.core.model import Model import logging
machine-learning How To Troubleshoot Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-environments.md
Last updated 03/01/2022 -+ # Troubleshoot environment image builds
Learn how to troubleshoot issues with Docker environment image builds and packag
* An Azure subscription. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/). * The [Azure Machine Learning SDK](/python/api/overview/azure/ml/install). * The [Azure CLI](/cli/azure/install-azure-cli).
-* The [CLI extension for Azure Machine Learning](reference-azure-machine-learning-cli.md).
+* The [CLI extension for Azure Machine Learning](v1/reference-azure-machine-learning-cli.md).
* To debug locally, you must have a working Docker installation on your local system. ## Docker image build failures
machine-learning How To Troubleshoot Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-online-endpoints.md
Title: Troubleshooting online endpoints deployment (preview)
+ Title: Troubleshooting online endpoints deployment
description: Learn how to troubleshoot some common deployment and scoring errors with online endpoints.
Last updated 04/12/2022 -+ #Customer intent: As a data scientist, I want to figure out why my online endpoint deployment failed so that I can fix it.
-# Troubleshooting online endpoints deployment and scoring (preview)
+# Troubleshooting online endpoints deployment and scoring
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
-Learn how to resolve common issues in the deployment and scoring of Azure Machine Learning online endpoints (preview).
+Learn how to resolve common issues in the deployment and scoring of Azure Machine Learning online endpoints.
This document is structured in the way you should approach troubleshooting:
This document is structured in the way you should approach troubleshooting:
The section [HTTP status codes](#http-status-codes) explains how invocation and prediction errors map to HTTP status codes when scoring endpoints with REST requests. - ## Prerequisites * An **Azure subscription**. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/). * The [Azure CLI](/cli/azure/install-azure-cli).
-* The [Install, set up, and use the CLI (v2) (preview)](how-to-configure-cli.md).
+* The [Install, set up, and use the CLI (v2)](how-to-configure-cli.md).
## Deploy locally
As a part of local deployment the following steps take place:
- Docker either builds a new container image or pulls an existing image from the local Docker cache. An existing image is used if there's one that matches the environment part of the specification file. - Docker starts a new container with mounted local artifacts such as model and code files.
-For more, see [Deploy locally in Deploy and score a machine learning model with a managed online endpoint (preview)](how-to-deploy-managed-online-endpoints.md#deploy-and-debug-locally-by-using-local-endpoints).
+For more, see [Deploy locally in Deploy and score a machine learning model with a managed online endpoint](how-to-deploy-managed-online-endpoints.md#deploy-and-debug-locally-by-using-local-endpoints).
## Conda installation
There are three supported tracing headers:
> [!Note] > When you create a support ticket for a failed request, attach the failed request ID to expedite investigation. -- `x-ms-request-id` and `x-ms-client-request-id` are available for client tracing scenarios. We sanitize these headers to remove non-alphanumeric symbols. These headers are truncated to 72 characters.
+- `x-ms-client-request-id` is available for client tracing scenarios. We sanitize this header to remove non-alphanumeric symbols. This header is truncated to 72 characters.
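+
+As a minimal sketch of attaching this header when scoring over REST (the scoring URI, key, and payload below are placeholders):
+
+```python
+import uuid
+import requests
+
+scoring_uri = "https://<endpoint-name>.<region>.inference.ml.azure.com/score"  # placeholder
+headers = {
+    "Authorization": "Bearer <endpoint-key>",        # placeholder
+    "Content-Type": "application/json",
+    "x-ms-client-request-id": str(uuid.uuid4()),     # keep this ID for log correlation and support tickets
+}
+response = requests.post(scoring_uri, headers=headers, json={"data": [[1, 2, 3, 4]]})
+print(response.status_code)
+```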
## Common deployment errors
If you are having trouble with autoscaling, see [Troubleshooting Azure autoscale
## Bandwidth limit issues
-Managed online endpoints have bandwidth limits for each endpoint. You find the limit configuration in [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints-preview) here. If your bandwidth usage exceeds the limit, your request will be delayed. To monitor the bandwidth delay:
+Managed online endpoints have bandwidth limits for each endpoint. You find the limit configuration in [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints) here. If your bandwidth usage exceeds the limit, your request will be delayed. To monitor the bandwidth delay:
- Use the metric “Network bytes” to understand the current bandwidth usage. For more information, see [Monitor managed online endpoints](how-to-monitor-online-endpoints.md).
- Two response trailers will be returned if the bandwidth limit is enforced:
When you access online endpoints with REST requests, the returned status codes a
| 429 | Too many pending requests | Your model is getting more requests than it can handle. We allow 2 * `max_concurrent_requests_per_instance` * `instance_count` requests at any time. Additional requests are rejected. You can confirm these settings in your model deployment config under `request_settings` and `scale_settings`. If you are using auto-scaling, your model is getting requests faster than the system can scale up. With auto-scaling, you can try to resend requests with [exponential backoff](https://aka.ms/exponential-backoff). Doing so can give the system time to adjust. | | 500 | Internal server error | Azure ML-provisioned infrastructure is failing. |
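A minimal retry-with-exponential-backoff sketch (the scoring call itself and its arguments are placeholders):

```python
import time
import requests

def score_with_backoff(scoring_uri, headers, payload, max_retries=5, base_delay=1.0):
    """Retry a scoring request on HTTP 429, doubling the wait between attempts."""
    response = None
    for attempt in range(max_retries):
        response = requests.post(scoring_uri, headers=headers, json=payload)
        if response.status_code != 429:
            break
        time.sleep(base_delay * (2 ** attempt))  # wait 1s, 2s, 4s, ...
    return response
```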
-## Next steps
+## Common network isolation issues
-- [Deploy and score a machine learning model with a managed online endpoint (preview)](how-to-deploy-managed-online-endpoints.md)-- [Safe rollout for online endpoints (preview)](how-to-safely-rollout-managed-endpoints.md)-- [Online endpoint (preview) YAML reference](reference-yaml-endpoint-online.md)+
+## Next steps
+- [Deploy and score a machine learning model with a managed online endpoint](how-to-deploy-managed-online-endpoints.md)
+- [Safe rollout for online endpoints](how-to-safely-rollout-managed-endpoints.md)
+- [Online endpoint YAML reference](reference-yaml-endpoint-online.md)
machine-learning How To Tune Hyperparameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-tune-hyperparameters.md
Title: Hyperparameter tuning a model
+ Title: Hyperparameter tuning a model (v2)
description: Automate hyperparameter tuning for deep learning and machine learning models using Azure Machine Learning.--++ Previously updated : 02/26/2021 Last updated : 05/02/2022 --+
-# Hyperparameter tuning a model with Azure Machine Learning
+# Hyperparameter tuning a model (v2)
+
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
+> * [v1](v1/how-to-tune-hyperparameters-v1.md)
+> * [v2 (current version)](how-to-tune-hyperparameters.md)
-Automate efficient hyperparameter tuning by using Azure Machine Learning [HyperDrive package](/python/api/azureml-train-core/azureml.train.hyperdrive). Learn how to complete the steps required to tune hyperparameters with the [Azure Machine Learning SDK](/python/api/overview/azure/ml/):
+Automate efficient hyperparameter tuning using Azure Machine Learning SDK v2 and CLI v2 by way of the SweepJob type.
-1. Define the parameter search space
-1. Specify a primary metric to optimize
-1. Specify early termination policy for low-performing runs
-1. Create and assign resources
-1. Launch an experiment with the defined configuration
-1. Visualize the training runs
-1. Select the best configuration for your model
+1. Define the parameter search space for your trial
+2. Specify the sampling algorithm for your sweep job
+3. Specify the objective to optimize
+4. Specify early termination policy for low-performing jobs
+5. Define limits for the sweep job
+6. Launch an experiment with the defined configuration
+7. Visualize the training jobs
+8. Select the best configuration for your model
## What is hyperparameter tuning?
Azure Machine Learning lets you automate hyperparameter tuning and run experimen
Tune hyperparameters by exploring the range of values defined for each hyperparameter. Hyperparameters can be discrete or continuous, and have a distribution of values described by a
-[parameter expression](/python/api/azureml-train-core/azureml.train.hyperdrive.parameter_expressions).
+[parameter expression](reference-yaml-job-sweep.md#parameter-expressions).
### Discrete hyperparameters
-Discrete hyperparameters are specified as a `choice` among discrete values. `choice` can be:
+Discrete hyperparameters are specified as a `Choice` among discrete values. `Choice` can be:
* one or more comma-separated values
* a `range` object
* any arbitrary `list` object

```Python
- {
- "batch_size": choice(16, 32, 64, 128)
- "number_of_hidden_layers": choice(range(1,5))
- }
+from azure.ai.ml.sweep import Choice
+
+command_job_for_sweep = command_job(
+ batch_size=Choice(values=[16, 32, 64, 128]),
+ number_of_hidden_layers=Choice(values=range(1,5)),
+)
```

In this case, `batch_size` takes one of the values [16, 32, 64, 128] and `number_of_hidden_layers` takes one of the values [1, 2, 3, 4].

The following advanced discrete hyperparameters can also be specified using a distribution:
-* `quniform(low, high, q)` - Returns a value like round(uniform(low, high) / q) * q
-* `qloguniform(low, high, q)` - Returns a value like round(exp(uniform(low, high)) / q) * q
-* `qnormal(mu, sigma, q)` - Returns a value like round(normal(mu, sigma) / q) * q
-* `qlognormal(mu, sigma, q)` - Returns a value like round(exp(normal(mu, sigma)) / q) * q
+* `QUniform(min_value, max_value, q)` - Returns a value like round(Uniform(min_value, max_value) / q) * q
+* `QLogUniform(min_value, max_value, q)` - Returns a value like round(exp(Uniform(min_value, max_value)) / q) * q
+* `QNormal(mu, sigma, q)` - Returns a value like round(Normal(mu, sigma) / q) * q
+* `QLogNormal(mu, sigma, q)` - Returns a value like round(exp(Normal(mu, sigma)) / q) * q
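+
+For example, a minimal sketch using two of these expressions (the parameter names and values are illustrative, following the `command_job` pattern used in the other snippets):
+
+```Python
+from azure.ai.ml.sweep import QNormal, QUniform
+
+command_job_for_sweep = command_job(
+    batch_size=QUniform(min_value=16, max_value=128, q=16),   # multiples of 16 between 16 and 128
+    momentum=QNormal(mu=0.9, sigma=0.05, q=0.01),             # normally distributed, quantized to 0.01
+)
+```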
### Continuous hyperparameters

Continuous hyperparameters are specified as a distribution over a continuous range of values:
-* `uniform(low, high)` - Returns a value uniformly distributed between low and high
-* `loguniform(low, high)` - Returns a value drawn according to exp(uniform(low, high)) so that the logarithm of the return value is uniformly distributed
-* `normal(mu, sigma)` - Returns a real value that's normally distributed with mean mu and standard deviation sigma
-* `lognormal(mu, sigma)` - Returns a value drawn according to exp(normal(mu, sigma)) so that the logarithm of the return value is normally distributed
+* `Uniform(min_value, max_value)` - Returns a value uniformly distributed between min_value and max_value
+* `LogUniform(min_value, max_value)` - Returns a value drawn according to exp(Uniform(min_value, max_value)) so that the logarithm of the return value is uniformly distributed
+* `Normal(mu, sigma)` - Returns a real value that's normally distributed with mean mu and standard deviation sigma
+* `LogNormal(mu, sigma)` - Returns a value drawn according to exp(Normal(mu, sigma)) so that the logarithm of the return value is normally distributed
An example of a parameter space definition:

```Python
- {
- "learning_rate": normal(10, 3),
- "keep_probability": uniform(0.05, 0.1)
- }
+from azure.ai.ml.sweep import Normal, Uniform
+
+command_job_for_sweep = command_job(
+ learning_rate=Normal(mu=10, sigma=3),
+ keep_probability=Uniform(min_value=0.05, max_value=0.1),
+)
```

This code defines a search space with two parameters, `learning_rate` and `keep_probability`. `learning_rate` has a normal distribution with mean 10 and standard deviation 3. `keep_probability` has a uniform distribution with a minimum value of 0.05 and a maximum value of 0.1.
-### Sampling the hyperparameter space
+For the CLI, you can use the [sweep job YAML schema](/articles/machine-learning/reference-yaml-job-sweep) to define the search space in your YAML:
+```YAML
+ search_space:
+ conv_size:
+ type: choice
+ values: [2, 5, 7]
+ dropout_rate:
+ type: uniform
+ min_value: 0.1
+ max_value: 0.2
+```
+
+## Sampling the hyperparameter space
Specify the parameter sampling method to use over the hyperparameter space. Azure Machine Learning supports the following methods:
Specify the parameter sampling method to use over the hyperparameter space. Azur
* Grid sampling * Bayesian sampling
-#### Random sampling
+### Random sampling
-[Random sampling](/python/api/azureml-train-core/azureml.train.hyperdrive.randomparametersampling) supports discrete and continuous hyperparameters. It supports early termination of low-performance runs. Some users do an initial search with random sampling and then refine the search space to improve results.
+[Random sampling](/python/api/azure-ai-ml/azure.ai.ml.sweep.randomparametersampling) supports discrete and continuous hyperparameters. It supports early termination of low-performance jobs. Some users do an initial search with random sampling and then refine the search space to improve results.
+
+In random sampling, hyperparameter values are randomly selected from the defined search space. After creating your command job, you can use the `sweep` method to define the sampling algorithm.
+
+```Python
+from azure.ai.ml.sweep import Choice, Normal, Uniform, RandomParameterSampling
+
+command_job_for_sweep = command_job(
+ learning_rate=Normal(mu=10, sigma=3),
+ keep_probability=Uniform(min_value=0.05, max_value=0.1),
+ batch_size=Choice(values=[16, 32, 64, 128]),
+)
+
+sweep_job = command_job_for_sweep.sweep(
+ compute="cpu-cluster",
+ sampling_algorithm = "random",
+ ...
+)
+```
+#### Sobol
+Sobol is a type of random sampling supported by sweep jobs. You can use Sobol to reproduce your results by using a seed, and to cover the search space distribution more evenly.
-In random sampling, hyperparameter values are randomly selected from the defined search space.
+To use Sobol, use the `RandomParameterSampling` class to specify the seed and rule, as shown in the example below.
```Python
-from azureml.train.hyperdrive import RandomParameterSampling
-from azureml.train.hyperdrive import normal, uniform, choice
-param_sampling = RandomParameterSampling( {
- "learning_rate": normal(10, 3),
- "keep_probability": uniform(0.05, 0.1),
- "batch_size": choice(16, 32, 64, 128)
- }
+from azure.ai.ml.sweep import RandomParameterSampling
+
+sweep_job = command_job_for_sweep.sweep(
+ compute="cpu-cluster",
+ sampling_algorithm = RandomParameterSampling(seed=123, rule="sobol"),
+ ...
) ```
-#### Grid sampling
+### Grid sampling
-[Grid sampling](/python/api/azureml-train-core/azureml.train.hyperdrive.gridparametersampling) supports discrete hyperparameters. Use grid sampling if you can budget to exhaustively search over the search space. Supports early termination of low-performance runs.
+[Grid sampling](/python/api/azure-ai-ml/azure.ai.ml.sweep.gridparametersampling) supports discrete hyperparameters. Use grid sampling if you can budget to exhaustively search over the search space. Supports early termination of low-performance jobs.
Grid sampling does a simple grid search over all possible values. Grid sampling can only be used with `choice` hyperparameters. For example, the following space has six samples:

```Python
-from azureml.train.hyperdrive import GridParameterSampling
-from azureml.train.hyperdrive import choice
-param_sampling = GridParameterSampling( {
- "num_hidden_layers": choice(1, 2, 3),
- "batch_size": choice(16, 32)
- }
+from azure.ai.ml.sweep import Choice
+
+command_job_for_sweep = command_job(
+ batch_size=Choice(values=[16, 32]),
+ number_of_hidden_layers=Choice(values=[1,2,3]),
+)
+
+sweep_job = command_job_for_sweep.sweep(
+ compute="cpu-cluster",
+ sampling_algorithm = "grid",
+ ...
) ```
-#### Bayesian sampling
+### Bayesian sampling
-[Bayesian sampling](/python/api/azureml-train-core/azureml.train.hyperdrive.bayesianparametersampling) is based on the Bayesian optimization algorithm. It picks samples based on how previous samples did, so that new samples improve the primary metric.
+[Bayesian sampling](/python/api/azure-ai-ml/azure.ai.ml.sweep.bayesianparametersampling) is based on the Bayesian optimization algorithm. It picks samples based on how previous samples did, so that new samples improve the primary metric.
-Bayesian sampling is recommended if you have enough budget to explore the hyperparameter space. For best results, we recommend a maximum number of runs greater than or equal to 20 times the number of hyperparameters being tuned.
+Bayesian sampling is recommended if you have enough budget to explore the hyperparameter space. For best results, we recommend a maximum number of jobs greater than or equal to 20 times the number of hyperparameters being tuned.
-The number of concurrent runs has an impact on the effectiveness of the tuning process. A smaller number of concurrent runs may lead to better sampling convergence, since the smaller degree of parallelism increases the number of runs that benefit from previously completed runs.
+The number of concurrent jobs has an impact on the effectiveness of the tuning process. A smaller number of concurrent jobs may lead to better sampling convergence, since the smaller degree of parallelism increases the number of jobs that benefit from previously completed jobs.
Bayesian sampling only supports `choice`, `uniform`, and `quniform` distributions over the search space.

```Python
-from azureml.train.hyperdrive import BayesianParameterSampling
-from azureml.train.hyperdrive import uniform, choice
-param_sampling = BayesianParameterSampling( {
- "learning_rate": uniform(0.05, 0.1),
- "batch_size": choice(16, 32, 64, 128)
- }
+from azure.ai.ml.sweep import Uniform, Choice
+
+command_job_for_sweep = command_job(
+ learning_rate=Uniform(min_value=0.05, max_value=0.1),
+ batch_size=Choice(values=[16, 32, 64, 128]),
+)
+
+sweep_job = command_job_for_sweep.sweep(
+ compute="cpu-cluster",
+ sampling_algorithm = "bayesian",
+ ...
) ```
+## <a name="specify-objective-to-optimize"></a> Specify the objective of the sweep
-## <a name="specify-primary-metric-to-optimize"></a> Specify primary metric
+Define the objective of your sweep job by specifying the [primary metric](/python/api/azure-ai-ml/azure.ai.ml.sweep.primary_metric) and [goal](/python/api/azure-ai-ml/azure.ai.ml.sweep.goal) you want hyperparameter tuning to optimize. Each training job is evaluated for the primary metric. The early termination policy uses the primary metric to identify low-performance jobs.
-Specify the [primary metric](/python/api/azureml-train-core/azureml.train.hyperdrive.primarymetricgoal) you want hyperparameter tuning to optimize. Each training run is evaluated for the primary metric. The early termination policy uses the primary metric to identify low-performance runs.
+* `primary_metric`: The name of the primary metric needs to exactly match the name of the metric logged by the training script
+* `goal`: It can be either `Maximize` or `Minimize` and determines whether the primary metric will be maximized or minimized when evaluating the jobs.
-Specify the following attributes for your primary metric:
+```Python
+from azure.ai.ml.sweep import Uniform, Choice
-* `primary_metric_name`: The name of the primary metric needs to exactly match the name of the metric logged by the training script
-* `primary_metric_goal`: It can be either `PrimaryMetricGoal.MAXIMIZE` or `PrimaryMetricGoal.MINIMIZE` and determines whether the primary metric will be maximized or minimized when evaluating the runs.
+command_job_for_sweep = command_job(
+ learning_rate=Uniform(min_value=0.05, max_value=0.1),
+ batch_size=Choice(values=[16, 32, 64, 128]),
+)
-```Python
-primary_metric_name="accuracy",
-primary_metric_goal=PrimaryMetricGoal.MAXIMIZE
+sweep_job = command_job_for_sweep.sweep(
+ compute="cpu-cluster",
+ sampling_algorithm = "bayesian",
+ primary_metric="accuracy",
+ goal="Maximize",
+)
```

This sample maximizes "accuracy".

### <a name="log-metrics-for-hyperparameter-tuning"></a>Log metrics for hyperparameter tuning
-The training script for your model **must** log the primary metric during model training so that HyperDrive can access it for hyperparameter tuning.
+The training script for your model **must** log the primary metric during model training using the same corresponding metric name so that the SweepJob can access it for hyperparameter tuning.
Log the primary metric in your training script with the following sample snippet:

```Python
-from azureml.core.run import Run
-run_logger = Run.get_context()
-run_logger.log("accuracy", float(val_accuracy))
+import mlflow
+mlflow.log_metric("accuracy", float(val_accuracy))
```

The training script calculates the `val_accuracy` and logs it as the primary metric "accuracy". Each time the metric is logged, it's received by the hyperparameter tuning service. It's up to you to determine the frequency of reporting.
-For more information on logging values in model training runs, see [Enable logging in Azure ML training runs](how-to-log-view-metrics.md).
+For more information on logging values for training jobs, see [Enable logging in Azure ML training jobs](how-to-log-view-metrics.md).
## <a name="early-termination"></a> Specify early termination policy
-Automatically end poorly performing runs with an early termination policy. Early termination improves computational efficiency.
+Automatically end poorly performing jobs with an early termination policy. Early termination improves computational efficiency.
You can configure the following parameters that control when a policy is applied:
-* `evaluation_interval`: the frequency of applying the policy. Each time the training script logs the primary metric counts as one interval. An `evaluation_interval` of 1 will apply the policy every time the training script reports the primary metric. An `evaluation_interval` of 2 will apply the policy every other time. If not specified, `evaluation_interval` is set to 1 by default.
-* `delay_evaluation`: delays the first policy evaluation for a specified number of intervals. This is an optional parameter that avoids premature termination of training runs by allowing all configurations to run for a minimum number of intervals. If specified, the policy applies every multiple of evaluation_interval that is greater than or equal to delay_evaluation.
+* `evaluation_interval`: the frequency of applying the policy. Each time the training script logs the primary metric counts as one interval. An `evaluation_interval` of 1 will apply the policy every time the training script reports the primary metric. An `evaluation_interval` of 2 will apply the policy every other time. If not specified, `evaluation_interval` is set to 0 by default.
+* `delay_evaluation`: delays the first policy evaluation for a specified number of intervals. This is an optional parameter that avoids premature termination of training jobs by allowing all configurations to run for a minimum number of intervals. If specified, the policy applies every multiple of evaluation_interval that is greater than or equal to delay_evaluation. If not specified, `delay_evaluation` is set to 0 by default.
Azure Machine Learning supports the following early termination policies: * [Bandit policy](#bandit-policy)
Azure Machine Learning supports the following early termination policies:
### Bandit policy
-[Bandit policy](/python/api/azureml-train-core/azureml.train.hyperdrive.banditpolicy#definition) is based on slack factor/slack amount and evaluation interval. Bandit ends runs when the primary metric isn't within the specified slack factor/slack amount of the most successful run.
+[Bandit policy](/python/api/azure-ai-ml/azure.ai.ml.sweep.banditpolicy) is based on slack factor/slack amount and evaluation interval. Bandit policy ends a job when the primary metric isn't within the specified slack factor/slack amount of the most successful job.
> [!NOTE]
> Bayesian sampling does not support early termination. When using Bayesian sampling, set `early_termination_policy = None`.

Specify the following configuration parameters:
-* `slack_factor` or `slack_amount`: the slack allowed with respect to the best performing training run. `slack_factor` specifies the allowable slack as a ratio. `slack_amount` specifies the allowable slack as an absolute amount, instead of a ratio.
+* `slack_factor` or `slack_amount`: the slack allowed with respect to the best performing training job. `slack_factor` specifies the allowable slack as a ratio. `slack_amount` specifies the allowable slack as an absolute amount, instead of a ratio.
- For example, consider a Bandit policy applied at interval 10. Assume that the best performing run at interval 10 reported a primary metric is 0.8 with a goal to maximize the primary metric. If the policy specifies a `slack_factor` of 0.2, any training runs whose best metric at interval 10 is less than 0.66 (0.8/(1+`slack_factor`)) will be terminated.
+ For example, consider a Bandit policy applied at interval 10. Assume that the best performing job at interval 10 reported a primary metric is 0.8 with a goal to maximize the primary metric. If the policy specifies a `slack_factor` of 0.2, any training jobs whose best metric at interval 10 is less than 0.66 (0.8/(1+`slack_factor`)) will be terminated.
* `evaluation_interval`: (optional) the frequency for applying the policy
* `delay_evaluation`: (optional) delays the first policy evaluation for a specified number of intervals

```Python
-from azureml.train.hyperdrive import BanditPolicy
-early_termination_policy = BanditPolicy(slack_factor = 0.1, evaluation_interval=1, delay_evaluation=5)
+from azure.ai.ml.sweep import BanditPolicy
+sweep_job.early_termination = BanditPolicy(slack_factor = 0.1, delay_evaluation = 5, evaluation_interval = 1)
```
-In this example, the early termination policy is applied at every interval when metrics are reported, starting at evaluation interval 5. Any run whose best metric is less than (1/(1+0.1) or 91% of the best performing run will be terminated.
+In this example, the early termination policy is applied at every interval when metrics are reported, starting at evaluation interval 5. Any job whose best metric is less than 1/(1+0.1) (or 91%) of the best performing job will be terminated.
### Median stopping policy
-[Median stopping](/python/api/azureml-train-core/azureml.train.hyperdrive.medianstoppingpolicy) is an early termination policy based on running averages of primary metrics reported by the runs. This policy computes running averages across all training runs and stops runs whose primary metric value is worse than the median of the averages.
+[Median stopping](/python/api/azure-ai-ml/azure.ai.ml.sweep.medianstoppingpolicy) is an early termination policy based on running averages of primary metrics reported by the jobs. This policy computes running averages across all training jobs and stops jobs whose primary metric value is worse than the median of the averages.
This policy takes the following configuration parameters:
* `evaluation_interval`: the frequency for applying the policy (optional parameter).
* `delay_evaluation`: delays the first policy evaluation for a specified number of intervals (optional parameter).
```Python
-from azureml.train.hyperdrive import MedianStoppingPolicy
-early_termination_policy = MedianStoppingPolicy(evaluation_interval=1, delay_evaluation=5)
+from azure.ai.ml.sweep import MedianStoppingPolicy
+sweep_job.early_termination = MedianStoppingPolicy(delay_evaluation = 5, evaluation_interval = 1)
```
-In this example, the early termination policy is applied at every interval starting at evaluation interval 5. A run is stopped at interval 5 if its best primary metric is worse than the median of the running averages over intervals 1:5 across all training runs.
+In this example, the early termination policy is applied at every interval starting at evaluation interval 5. A job is stopped at interval 5 if its best primary metric is worse than the median of the running averages over intervals 1:5 across all training jobs.
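The rule can be illustrated with a short sketch (plain Python with hypothetical metric values; this only illustrates the comparison, not the service's implementation):

```Python
from statistics import mean, median

# Hypothetical primary metric (higher is better) reported by three jobs over intervals 1-5
histories = {
    "job_a": [0.60, 0.62, 0.65, 0.66, 0.68],
    "job_b": [0.55, 0.56, 0.58, 0.59, 0.60],
    "job_c": [0.40, 0.42, 0.43, 0.44, 0.45],
}

# Median of the running averages across all jobs
cutoff = median(mean(values) for values in histories.values())

# A job is stopped if its best metric so far is worse than that median
stopped = [name for name, values in histories.items() if max(values) < cutoff]
print(cutoff, stopped)  # 0.576 ['job_c']
```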
### Truncation selection policy
-[Truncation selection](/python/api/azureml-train-core/azureml.train.hyperdrive.truncationselectionpolicy) cancels a percentage of lowest performing runs at each evaluation interval. Runs are compared using the primary metric.
+[Truncation selection](/python/api/azure-ai-ml/azure.ai.ml.sweep.truncationselectionpolicy) cancels a percentage of lowest performing jobs at each evaluation interval. Jobs are compared using the primary metric.
This policy takes the following configuration parameters:
-* `truncation_percentage`: the percentage of lowest performing runs to terminate at each evaluation interval. An integer value between 1 and 99.
+* `truncation_percentage`: the percentage of lowest performing jobs to terminate at each evaluation interval. An integer value between 1 and 99.
* `evaluation_interval`: (optional) the frequency for applying the policy
* `delay_evaluation`: (optional) delays the first policy evaluation for a specified number of intervals
* `exclude_finished_jobs`: specifies whether to exclude finished jobs when applying the policy

```Python
-from azureml.train.hyperdrive import TruncationSelectionPolicy
-early_termination_policy = TruncationSelectionPolicy(evaluation_interval=1, truncation_percentage=20, delay_evaluation=5, exclude_finished_jobs=true)
+from azure.ai.ml.sweep import TruncationSelectionPolicy
+sweep_job.early_termination = TruncationSelectionPolicy(evaluation_interval=1, truncation_percentage=20, delay_evaluation=5, exclude_finished_jobs=True)
```
-In this example, the early termination policy is applied at every interval starting at evaluation interval 5. A run terminates at interval 5 if its performance at interval 5 is in the lowest 20% of performance of all runs at interval 5 and will exclude finished jobs when applying the policy.
+In this example, the early termination policy is applied at every interval starting at evaluation interval 5. A job terminates at interval 5 if its performance at that interval is in the lowest 20% of all jobs, and finished jobs are excluded when the policy is applied.
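As an illustration of that rule (not the service's implementation), the following sketch drops the bottom 20% of jobs by primary metric at an evaluation interval:

```Python
import math

# Hypothetical primary metric values (higher is better) for 10 jobs at interval 5
metrics = {
    "job_0": 0.91, "job_1": 0.88, "job_2": 0.85, "job_3": 0.83, "job_4": 0.80,
    "job_5": 0.78, "job_6": 0.75, "job_7": 0.70, "job_8": 0.62, "job_9": 0.55,
}

truncation_percentage = 20
n_to_cancel = math.floor(len(metrics) * truncation_percentage / 100)

# Cancel the lowest-performing jobs
to_cancel = sorted(metrics, key=metrics.get)[:n_to_cancel]
print(to_cancel)  # ['job_9', 'job_8']
```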
### No termination policy (default)
-If no policy is specified, the hyperparameter tuning service will let all training runs execute to completion.
+If no policy is specified, the hyperparameter tuning service will let all training jobs execute to completion.
```Python
-policy=None
+sweep_job.early_termination = None
```

### Picking an early termination policy
* For a conservative policy that provides savings without terminating promising jobs, consider a Median Stopping Policy with `evaluation_interval` 1 and `delay_evaluation` 5 (see the sketch after this list). These are conservative settings that can provide approximately 25%-35% savings with no loss on primary metric (based on our evaluation data).
* For more aggressive savings, use Bandit Policy with a smaller allowable slack or Truncation Selection Policy with a larger truncation percentage.
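For example, a minimal sketch of the conservative configuration described above (assuming a `sweep_job` object like the one built later in this article):

```Python
from azure.ai.ml.sweep import MedianStoppingPolicy

# Conservative: evaluate at every interval, but skip the first 5 intervals
sweep_job.early_termination = MedianStoppingPolicy(evaluation_interval=1, delay_evaluation=5)
```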
-## Create and assign resources
+## Set limits for your sweep job
-Control your resource budget by specifying the maximum number of training runs.
+Control your resource budget by setting limits for your sweep job.
-* `max_total_runs`: Maximum number of training runs. Must be an integer between 1 and 1000.
-* `max_duration_minutes`: (optional) Maximum duration, in minutes, of the hyperparameter tuning experiment. Runs after this duration are canceled.
+* `max_total_trials`: Maximum number of trial jobs. Must be an integer between 1 and 1000.
+* `max_concurrent_trials`: (optional) Maximum number of trial jobs that can run concurrently. If not specified, all jobs launch in parallel. If specified, must be an integer between 1 and 100.
+* `timeout`: Maximum time in minutes the entire sweep job is allowed to run. Once this limit is reached, the system will cancel the sweep job, including all its trials.
+* `trial_timeout`: Maximum time in seconds each trial job is allowed to run. Once this limit is reached, the system will cancel the trial.
>[!NOTE]
->If both `max_total_runs` and `max_duration_minutes` are specified, the hyperparameter tuning experiment terminates when the first of these two thresholds is reached.
-
-Additionally, specify the maximum number of training runs to run concurrently during your hyperparameter tuning search.
-
-* `max_concurrent_runs`: (optional) Maximum number of runs that can run concurrently. If not specified, all runs launch in parallel. If specified, must be an integer between 1 and 100.
+>If both `max_total_trials` and `timeout` are specified, the hyperparameter tuning experiment terminates when the first of these two thresholds is reached.
>[!NOTE]
->The number of concurrent runs is gated on the resources available in the specified compute target. Ensure that the compute target has the available resources for the desired concurrency.
+>The number of concurrent trial jobs is gated on the resources available in the specified compute target. Ensure that the compute target has the available resources for the desired concurrency.
```Python
-max_total_runs=20,
-max_concurrent_runs=4
+sweep_job.set_limits(max_total_trials=20, max_concurrent_trials=4, timeout=120)
```
-This code configures the hyperparameter tuning experiment to use a maximum of 20 total runs, running four configurations at a time.
+This code configures the hyperparameter tuning experiment to use a maximum of 20 total trial jobs, running four trial jobs at a time with a timeout of 120 minutes for the entire sweep job.
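If you also want to cap each individual trial, a sketch might look like the following (same `sweep_job` object as above; `timeout` applies to the whole sweep and `trial_timeout` to each trial, with the units described earlier in this section):

```Python
# Cap the sweep at 20 trials, 4 at a time, and also limit each individual trial
sweep_job.set_limits(
    max_total_trials=20,
    max_concurrent_trials=4,
    timeout=120,        # entire sweep job
    trial_timeout=600,  # each trial job
)
```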
## Configure hyperparameter tuning experiment
-To [configure your hyperparameter tuning](/python/api/azureml-train-core/azureml.train.hyperdrive.hyperdriverunconfig) experiment, provide the following:
+To [configure your hyperparameter tuning](/python/api/azure-ai-ml/azure.ai.ml.train.sweep) experiment, provide the following:
* The defined hyperparameter search space
+* Your sampling algorithm
* Your early termination policy
-* The primary metric
-* Resource allocation settings
-* ScriptRunConfig `script_run_config`
+* Your objective
+* Resource limits
+* CommandJob or CommandComponent
+* SweepJob
-The ScriptRunConfig is the training script that will run with the sampled hyperparameters. It defines the resources per job (single or multi-node), and the compute target to use.
+SweepJob can run a hyperparameter sweep on the Command or Command Component.
> [!NOTE]
->The compute target used in `script_run_config` must have enough resources to satisfy your concurrency level. For more information on ScriptRunConfig, see [Configure training runs](how-to-set-up-training-targets.md).
+>The compute target used in `sweep_job` must have enough resources to satisfy your concurrency level. For more information on compute targets, see [Compute targets](concept-compute-target.md).
Configure your hyperparameter tuning experiment:

```Python
-from azureml.train.hyperdrive import HyperDriveConfig
-from azureml.train.hyperdrive import RandomParameterSampling, BanditPolicy, uniform, PrimaryMetricGoal
-
-param_sampling = RandomParameterSampling( {
- 'learning_rate': uniform(0.0005, 0.005),
- 'momentum': uniform(0.9, 0.99)
- }
+from azure.ai.ml import MLClient
+from azure.ai.ml import command, Input
+from azure.ai.ml.sweep import Choice, Uniform, MedianStoppingPolicy
+from azure.identity import DefaultAzureCredential
+
+# Create your base command job
+command_job = command(
+ code="./src",
+ command="python main.py --iris-csv ${{inputs.iris_csv}} --learning-rate ${{inputs.learning_rate}} --boosting ${{inputs.boosting}}",
+ environment="AzureML-lightgbm-3.2-ubuntu18.04-py37-cpu@latest",
+ inputs={
+ "iris_csv": Input(
+ type="uri_file",
+ path="https://azuremlexamples.blob.core.windows.net/datasets/iris.csv",
+ ),
+ "learning_rate": 0.9,
+ "boosting": "gbdt",
+ },
+ compute="cpu-cluster",
)
-early_termination_policy = BanditPolicy(slack_factor=0.15, evaluation_interval=1, delay_evaluation=10)
-
-hd_config = HyperDriveConfig(run_config=script_run_config,
- hyperparameter_sampling=param_sampling,
- policy=early_termination_policy,
- primary_metric_name="accuracy",
- primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,
- max_total_runs=100,
- max_concurrent_runs=4)
-```
-
-The `HyperDriveConfig` sets the parameters passed to the `ScriptRunConfig script_run_config`. The `script_run_config`, in turn, passes parameters to the training script. The above code snippet is taken from the sample notebook [Train, hyperparameter tune, and deploy with PyTorch](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/ml-frameworks/pytorch/train-hyperparameter-tune-deploy-with-pytorch). In this sample, the `learning_rate` and `momentum` parameters will be tuned. Early stopping of runs will be determined by a `BanditPolicy`, which stops a run whose primary metric falls outside the `slack_factor` (see [BanditPolicy class reference](/python/api/azureml-train-core/azureml.train.hyperdrive.banditpolicy)).
-
-The following code from the sample shows how the being-tuned values are received, parsed, and passed to the training script's `fine_tune_model` function:
-
-```python
-# from pytorch_train.py
-def main():
- print("Torch version:", torch.__version__)
-
- # get command-line arguments
- parser = argparse.ArgumentParser()
- parser.add_argument('--num_epochs', type=int, default=25,
- help='number of epochs to train')
- parser.add_argument('--output_dir', type=str, help='output directory')
- parser.add_argument('--learning_rate', type=float,
- default=0.001, help='learning rate')
- parser.add_argument('--momentum', type=float, default=0.9, help='momentum')
- args = parser.parse_args()
-
- data_dir = download_data()
- print("data directory is: " + data_dir)
- model = fine_tune_model(args.num_epochs, data_dir,
- args.learning_rate, args.momentum)
- os.makedirs(args.output_dir, exist_ok=True)
- torch.save(model, os.path.join(args.output_dir, 'model.pt'))
-```
-
-> [!Important]
-> Every hyperparameter run restarts the training from scratch, including rebuilding the model and _all the data loaders_. You can minimize
-> this cost by using an Azure Machine Learning pipeline or manual process to do as much data preparation as possible prior to your training runs.
-
-## Submit hyperparameter tuning experiment
-
-After you define your hyperparameter tuning configuration, [submit the experiment](/python/api/azureml-core/azureml.core.experiment%28class%29#submit-config--tags-none-kwargs-):
-
-```Python
-from azureml.core.experiment import Experiment
-experiment = Experiment(workspace, experiment_name)
-hyperdrive_run = experiment.submit(hd_config)
-```
-
-## Warm start hyperparameter tuning (optional)
-
-Finding the best hyperparameter values for your model can be an iterative process. You can reuse knowledge from the five previous runs to accelerate hyperparameter tuning.
+# Override your inputs with parameter expressions
+command_job_for_sweep = command_job(
+ learning_rate=Uniform(min_value=0.01, max_value=0.9),
+ boosting=Choice(values=["gbdt", "dart"]),
+)
-Warm starting is handled differently depending on the sampling method:
-- **Bayesian sampling**: Trials from the previous run are used as prior knowledge to pick new samples, and to improve the primary metric.-- **Random sampling** or **grid sampling**: Early termination uses knowledge from previous runs to determine poorly performing runs.
+# Call sweep() on your command job to sweep over your parameter expressions
+sweep_job = command_job_for_sweep.sweep(
+ compute="cpu-cluster",
+ sampling_algorithm="random",
+ primary_metric="test-multi_logloss",
+ goal="Minimize",
+)
-Specify the list of parent runs you want to warm start from.
+# Specify your experiment details
+sweep_job.display_name = "lightgbm-iris-sweep-example"
+sweep_job.experiment_name = "lightgbm-iris-sweep-example"
+sweep_job.description = "Run a hyperparameter sweep job for LightGBM on Iris dataset."
-```Python
-from azureml.train.hyperdrive import HyperDriveRun
+# Define the limits for this sweep
+sweep_job.set_limits(max_total_trials=20, max_concurrent_trials=10, timeout=7200)
-warmstart_parent_1 = HyperDriveRun(experiment, "warmstart_parent_run_ID_1")
-warmstart_parent_2 = HyperDriveRun(experiment, "warmstart_parent_run_ID_2")
-warmstart_parents_to_resume_from = [warmstart_parent_1, warmstart_parent_2]
+# Set early stopping on this one
+sweep_job.early_termination = MedianStoppingPolicy(
+ delay_evaluation=5, evaluation_interval=2
+)
```
-If a hyperparameter tuning experiment is canceled, you can resume training runs from the last checkpoint. However, your training script must handle checkpoint logic.
+The `command_job` is called as a function so we can apply the parameter expressions to the sweep inputs. The `sweep` function is then configured with `trial`, `sampling-algorithm`, `objective`, `limits`, and `compute`. The above code snippet is taken from the sample notebook [Run hyperparameter sweep on a Command or CommandComponent](https://github.com/Azure/azureml-examples/blob/sdk-preview/sdk/jobs/single-step/lightgbm/iris/lightgbm-iris-sweep.ipynb). In this sample, the `learning_rate` and `boosting` parameters will be tuned. Early stopping of jobs will be determined by a `MedianStoppingPolicy`, which stops a job whose primary metric value is worse than the median of the averages across all training jobs (see the [MedianStoppingPolicy class reference](/python/api/azure-ai-ml/azure.ai.ml.sweep.medianstoppingpolicy)).
-The training run must use the same hyperparameter configuration and mounted the outputs folders. The training script must accept the `resume-from` argument, which contains the checkpoint or model files from which to resume the training run. You can resume individual training runs using the following snippet:
+To see how the parameter values are received, parsed, and passed to the training script to be tuned, refer to this [code sample](https://github.com/Azure/azureml-examples/blob/sdk-preview/sdk/jobs/single-step/lightgbm/iris/src/main.py).
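For orientation, here's a hypothetical sketch of how such a script might receive the swept values. The argument names mirror the command defined above, but the real sample script may differ:

```Python
# Hypothetical excerpt of a training script that receives swept hyperparameters
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--iris-csv", type=str, help="path to the input CSV")
parser.add_argument("--learning-rate", type=float, default=0.9, help="sampled by the sweep")
parser.add_argument("--boosting", type=str, default="gbdt", help="sampled by the sweep")
args = parser.parse_args()

print(args.iris_csv, args.learning_rate, args.boosting)
```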
-```Python
-from azureml.core.run import Run
+> [!Important]
+> Every hyperparameter sweep job restarts the training from scratch, including rebuilding the model and _all the data loaders_. You can minimize
+> this cost by using an Azure Machine Learning pipeline or manual process to do as much data preparation as possible prior to your training jobs.
-resume_child_run_1 = Run(experiment, "resume_child_run_ID_1")
-resume_child_run_2 = Run(experiment, "resume_child_run_ID_2")
-child_runs_to_resume = [resume_child_run_1, resume_child_run_2]
-```
+## Submit hyperparameter tuning experiment
-You can configure your hyperparameter tuning experiment to warm start from a previous experiment or resume individual training runs using the optional parameters `resume_from` and `resume_child_runs` in the config:
+After you define your hyperparameter tuning configuration, submit the sweep job:
```Python
-from azureml.train.hyperdrive import HyperDriveConfig
-
-hd_config = HyperDriveConfig(run_config=script_run_config,
- hyperparameter_sampling=param_sampling,
- policy=early_termination_policy,
- resume_from=warmstart_parents_to_resume_from,
- resume_child_runs=child_runs_to_resume,
- primary_metric_name="accuracy",
- primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,
- max_total_runs=100,
- max_concurrent_runs=4)
+# submit the sweep
+returned_sweep_job = ml_client.create_or_update(sweep_job)
+# get a URL for the status of the job
+returned_sweep_job.services["Studio"].endpoint
```
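Optionally, you can wait for the sweep to finish and stream its logs from the same client. This is a sketch that assumes the `ml_client` and `returned_sweep_job` objects from the previous snippets:

```Python
# Block until the sweep job completes, streaming its logs to the console
ml_client.jobs.stream(returned_sweep_job.name)
```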
-## Visualize hyperparameter tuning runs
+## Visualize hyperparameter tuning jobs
-You can visualize your hyperparameter tuning runs in the Azure Machine Learning studio, or you can use a notebook widget.
+You can visualize all of your hyperparameter tuning jobs in the [Azure Machine Learning studio](https://ml.azure.com). For more information on how to view an experiment in the portal, see [View job records in the studio](how-to-log-view-metrics.md#view-the-experiment-in-the-web-portal).
-### Studio
-
-You can visualize all of your hyperparameter tuning runs in the [Azure Machine Learning studio](https://ml.azure.com). For more information on how to view an experiment in the portal, see [View run records in the studio](how-to-log-view-metrics.md#view-the-experiment-in-the-web-portal).
--- **Metrics chart**: This visualization tracks the metrics logged for each hyperdrive child run over the duration of hyperparameter tuning. Each line represents a child run, and each point measures the primary metric value at that iteration of runtime.
+- **Metrics chart**: This visualization tracks the metrics logged for each hyperdrive child job over the duration of hyperparameter tuning. Each line represents a child job, and each point measures the primary metric value at that iteration of runtime.
:::image type="content" source="media/how-to-tune-hyperparameters/hyperparameter-tuning-metrics.png" alt-text="Hyperparameter tuning metrics chart":::

-- **Parallel Coordinates Chart**: This visualization shows the correlation between primary metric performance and individual hyperparameter values. The chart is interactive via movement of axes (click and drag by the axis label), and by highlighting values across a single axis (click and drag vertically along a single axis to highlight a range of desired values). The parallel coordinates chart includes an axis on the right most portion of the chart that plots the best metric value corresponding to the hyperparameters set for that run instance. This axis is provided in order to project the chart gradient legend onto the data in a more readable fashion.
+- **Parallel Coordinates Chart**: This visualization shows the correlation between primary metric performance and individual hyperparameter values. The chart is interactive via movement of axes (click and drag by the axis label), and by highlighting values across a single axis (click and drag vertically along a single axis to highlight a range of desired values). The parallel coordinates chart includes an axis on the rightmost portion of the chart that plots the best metric value corresponding to the hyperparameters set for that job instance. This axis is provided in order to project the chart gradient legend onto the data in a more readable fashion.
:::image type="content" source="media/how-to-tune-hyperparameters/hyperparameter-tuning-parallel-coordinates.png" alt-text="Hyperparameter tuning parallel coordinates chart":::
You can visualize all of your hyperparameter tuning runs in the [Azure Machine L
:::image type="content" source="media/how-to-tune-hyperparameters/hyperparameter-tuning-3-dimensional-scatter.png" alt-text="Hyperparameter tuning 3-dimensional scatter chart":::
-### Notebook widget
-
-Use the [Notebook widget](/python/api/azureml-widgets/azureml.widgets.rundetails) to visualize the progress of your training runs. The following snippet visualizes all your hyperparameter tuning runs in one place in a Jupyter notebook:
-
-```Python
-from azureml.widgets import RunDetails
-RunDetails(hyperdrive_run).show()
-```
-
-This code displays a table with details about the training runs for each of the hyperparameter configurations.
-
-You can also visualize the performance of each of the runs as training progresses.
+## Find the best trial job
-## Find the best model
-
-Once all of the hyperparameter tuning runs have completed, identify the best performing configuration and hyperparameter values:
+Once all of the hyperparameter tuning jobs have completed, retrieve your best trial outputs:
```Python
-best_run = hyperdrive_run.get_best_run_by_primary_metric()
-best_run_metrics = best_run.get_metrics()
-parameter_values = best_run.get_details()['runDefinition']['arguments']
-
-print('Best Run Id: ', best_run.id)
-print('\n Accuracy:', best_run_metrics['accuracy'])
-print('\n learning rate:',parameter_values[3])
-print('\n keep probability:',parameter_values[5])
-print('\n batch size:',parameter_values[7])
+# Download best trial model output
+ml_client.jobs.download(returned_sweep_job.name, output_name="model")
```
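If your training script logged the model in MLflow format (an assumption; it depends on how the script saves its model), you could then load the downloaded output locally, for example:

```Python
import mlflow

# Adjust the path to wherever the "model" output landed after the download above
model = mlflow.pyfunc.load_model("./model")
```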
-## Sample notebook
+You can use the CLI to download all default and named outputs of the best trial job and logs of the sweep job.
+```azurecli
+az ml job download --name <sweep-job> --all
+```
-Refer to train-hyperparameter-* notebooks in this folder:
-* [how-to-use-azureml/ml-frameworks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/ml-frameworks)
+Optionally, to download only the best trial output:
+```azurecli
+az ml job download --name <sweep-job> --output-name model
+```
+
+## References
+- [Hyperparameter tuning example](https://github.com/Azure/azureml-examples/blob/sdk-preview/sdk/jobs/single-step/lightgbm/iris/src/main.py)
+- [CLI (v2) sweep job YAML schema](reference-yaml-job-sweep.md#parameter-expressions)
## Next steps

* [Track an experiment](how-to-log-view-metrics.md)
machine-learning How To Understand Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-understand-automated-ml.md
Previously updated : 10/21/2021 Last updated : 04/08/2022 -+ # Evaluate automated machine learning experiment results
In this article, learn how to evaluate and compare models trained by your automa
For example, automated ML generates the following charts based on experiment type. | Classification| Regression/forecasting |
-| -- | - |
-| [Confusion matrix](#confusion-matrix) | [Residuals histogram](#residuals) |
-| [Receiver operating characteristic (ROC) curve](#roc-curve) | [Predicted vs. true](#predicted-vs-true) |
-| [Precision-recall (PR) curve](#precision-recall-curve) | |
-| [Lift curve](#lift-curve) | |
-| [Cumulative gains curve](#cumulative-gains-curve) | |
+| -- | --|
+| [Confusion matrix](#confusion-matrix) | [Residuals histogram](#residuals) |
+| [Receiver operating characteristic (ROC) curve](#roc-curve) | [Predicted vs. true](#predicted-vs-true) |
+| [Precision-recall (PR) curve](#precision-recall-curve) | [Forecast horizon (preview)](#forecast-horizon-preview) |
+| [Lift curve](#lift-curve) | |
+| [Cumulative gains curve](#cumulative-gains-curve) | |
| [Calibration curve](#calibration-curve) |
In this example, note that the better model has a predicted vs. true line that i
### Predicted vs. true chart for a bad model ![Predicted vs. true chart for a bad model](./media/how-to-understand-automated-ml/chart-predicted-true-bad.png)
+## Forecast horizon (preview)
+
+For forecasting experiments, the forecast horizon chart plots the relationship between the model's predicted value and the actual values mapped over time per cross validation fold, up to 5 folds. The x axis maps time based on the frequency you provided during training setup. The vertical line in the chart marks the forecast horizon point, also referred to as the horizon line, which is the time period at which you would want to start generating predictions. To the left of the forecast horizon line, you can view historic training data to better visualize past trends. To the right of the forecast horizon, you can visualize the predictions (the purple line) against the actuals (the blue line) for the different cross validation folds and time series identifiers. The shaded purple area indicates the confidence intervals or variance of predictions around that mean.
+
+You can choose which cross validation fold and time series identifier combinations to display by clicking the edit pencil icon on the top right corner of the chart. Select from the first 5 cross validation folds and up to 20 different time series identifiers to visualize the chart for your various time series.
+
+> [!IMPORTANT]
+> This chart is only available for models generated from training and validation data. We allow up to 20 data points before and up to 80 data points after the forecast origin. Visuals for models based on test data are not supported at this time.
+
+![Forecast horizon chart](./media/how-to-understand-automated-ml/forecast-horizon.png)
+
## Metrics for image models (preview)

Automated ML uses the images from the validation dataset for evaluating the performance of the model. The performance of the model is measured at an **epoch-level** to understand how the training progresses. An epoch elapses when an entire dataset is passed forward and backward through the neural network exactly once.
machine-learning How To Use Automated Ml For Ml Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automated-ml-for-ml-models.md
Last updated 11/15/2021 -+ # Set up no-code AutoML training with the studio UI
Otherwise, you'll see a list of your recent automated ML experiments, including
| Primary metric| Main metric used for scoring your model. [Learn more about model metrics](how-to-configure-auto-train.md#primary-metric). Explain best model | Select to enable or disable, in order to show explanations for the recommended best model. <br> This functionality is not currently available for [certain forecasting algorithms](how-to-machine-learning-interpretability-automl.md#interpretability-during-training-for-the-best-model).
- Blocked algorithm| Select algorithms you want to exclude from the training job. <br><br> Allowing algorithms is only available for [SDK experiments](how-to-configure-auto-train.md#supported-models). <br> See the [supported models for each task type](/python/api/azureml-automl-core/azureml.automl.core.shared.constants.supportedmodels).
+ Blocked algorithm| Select algorithms you want to exclude from the training job. <br><br> Allowing algorithms is only available for [SDK experiments](how-to-configure-auto-train.md#supported-algorithms). <br> See the [supported algorithms for each task type](/python/api/azureml-automl-core/azureml.automl.core.shared.constants.supportedmodels).
Exit criterion| When any of these criteria are met, the training job is stopped. <br> *Training job time (hours)*: How long to allow the training job to run. <br> *Metric score threshold*: Minimum metric score for all pipelines. This ensures that if you have a defined target metric you want to reach, you do not spend more time on the training job than necessary. Concurrency| *Max concurrent iterations*: Maximum number of pipelines (iterations) to test in the training job. The job will not run more than the specified number of iterations. Learn more about how automated ML performs [multiple child runs on clusters](how-to-configure-auto-train.md#multiple-child-runs-on-clusters).
Otherwise, you'll see a list of your recent automated ML experiments, including
> Providing a test dataset to evaluate generated models is a preview feature. This capability is an [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview feature, and may change at any time. * Test data is considered a separate from training and validation, so as to not bias the results of the test run of the recommended model. [Learn more about bias during model validation](concept-automated-ml.md#training-validation-and-test-data).
- * You can either provide your own test dataset or opt to use a percentage of your training dataset. Test data must be in the form of an [Azure Machine Learning TabularDataset](how-to-create-register-datasets.md#tabulardataset).
+ * You can either provide your own test dataset or opt to use a percentage of your training dataset. Test data must be in the form of an [Azure Machine Learning TabularDataset](./v1/how-to-create-register-datasets.md#tabulardataset).
* The schema of the test dataset should match the training dataset. The target column is optional, but if no target column is indicated no test metrics are calculated. * The test dataset should not be the same as the training dataset or the validation dataset. * Forecasting runs do not support train/test split.
machine-learning How To Use Automl Small Object Detect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automl-small-object-detect.md
Last updated 10/13/2021-+ # Train a small object detection model with AutoML (preview) + > [!IMPORTANT] > This feature is currently in public preview. This preview version is provided without a service-level agreement. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
machine-learning How To Use Automlstep In Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automlstep-in-pipelines.md
Last updated 10/21/2021 --+ # Use automated ML in an Azure Machine Learning pipeline in Python Azure Machine Learning's automated ML capability helps you discover high-performing models without you reimplementing every possible approach. Combined with Azure Machine Learning pipelines, you can create deployable workflows that can quickly discover the algorithm that works best for your data. This article will show you how to efficiently join a data preparation step to an automated ML step. Automated ML can quickly discover the algorithm that works best for your data, while putting you on the road to MLOps and model lifecycle operationalization with pipelines.
To make things concrete, this article creates a simple pipeline for a classifica
### Retrieve initial dataset
-Often, an ML workflow starts with pre-existing baseline data. This is a good scenario for a registered dataset. Datasets are visible across the workspace, support versioning, and can be interactively explored. There are many ways to create and populate a dataset, as discussed in [Create Azure Machine Learning datasets](how-to-create-register-datasets.md). Since we'll be using the Python SDK to create our pipeline, use the SDK to download baseline data and register it with the name 'titanic_ds'.
+Often, an ML workflow starts with pre-existing baseline data. This is a good scenario for a registered dataset. Datasets are visible across the workspace, support versioning, and can be interactively explored. There are many ways to create and populate a dataset, as discussed in [Create Azure Machine Learning datasets](./v1/how-to-create-register-datasets.md). Since we'll be using the Python SDK to create our pipeline, use the SDK to download baseline data and register it with the name 'titanic_ds'.
```python from azureml.core import Workspace, Dataset
machine-learning How To Use Azure Ad Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-azure-ad-identity.md
+ Last updated 10/21/2021 - # Use Azure AD identity with your machine learning web service in Azure Kubernetes Service
In this how-to, you learn how to assign an Azure Active Directory (Azure AD) ide
## Prerequisites -- The [Azure CLI extension for the Machine Learning service](reference-azure-machine-learning-cli.md), the [Azure Machine Learning SDK for Python](/python/api/overview/azure/ml/intro), or the [Azure Machine Learning Visual Studio Code extension](how-to-setup-vs-code.md).
+- The [Azure CLI extension for the Machine Learning service](v1/reference-azure-machine-learning-cli.md), the [Azure Machine Learning SDK for Python](/python/api/overview/azure/ml/intro), or the [Azure Machine Learning Visual Studio Code extension](how-to-setup-vs-code.md).
- Access to your AKS cluster using the `kubectl` command. For more information, see [Connect to the cluster](../aks/learn/quick-kubernetes-deploy-cli.md#connect-to-the-cluster)
blob_data.readall()
## Next steps * For more information on how to use the Python Azure Identity client library, see the [repository](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/identity/azure-identity#azure-identity-client-library-for-python) on GitHub.
-* For a detailed guide on deploying models to Azure Kubernetes Service clusters, see the [how-to](how-to-deploy-azure-kubernetes-service.md).
machine-learning How To Use Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-endpoint.md
--++ Previously updated : 03/31/2022--
-# Customer intent: As an ML engineer or data scientist, I want to create an endpoint to host my models for batch scoring, so that I can use the same endpoint continuously for different large datasets on-demand or on-schedule.
Last updated : 05/24/2022+
+#Customer intent: As an ML engineer or data scientist, I want to create an endpoint to host my models for batch scoring, so that I can use the same endpoint continuously for different large datasets on-demand or on-schedule.
-# Use batch endpoints (preview) for batch scoring
+# Use batch endpoints for batch scoring
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
-Learn how to use batch endpoints (preview) to do batch scoring. Batch endpoints simplify the process of hosting your models for batch scoring, so you can focus on machine learning, not infrastructure. For more information, see [What are Azure Machine Learning endpoints (preview)?](concept-endpoints.md).
+
+Learn how to use batch endpoints to do batch scoring. Batch endpoints simplify the process of hosting your models for batch scoring, so you can focus on machine learning, not infrastructure. For more information, see [What are Azure Machine Learning endpoints?](concept-endpoints.md).
In this article, you learn to do the following tasks:
In this article, you learn to do the following tasks:
> * Test the new deployment and set it as the default deployment > * Delete the not in-use endpoint and deployment + ## Prerequisites * You must have an Azure subscription to use Azure Machine Learning. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
-* Install the Azure CLI and the `ml` extension. Follow the installation steps in [Install, set up, and use the CLI (v2) (preview)](how-to-configure-cli.md).
+* Install the Azure CLI and the `ml` extension. Follow the installation steps in [Install, set up, and use the CLI (v2)](how-to-configure-cli.md).
-* Create an Azure resource group if you don't have one, and you (or the service principal you use) must have `Contributor` permission. For resource group creation, see [Install, set up, and use the CLI (v2) (preview)](how-to-configure-cli.md).
+* Create an Azure resource group if you don't have one, and you (or the service principal you use) must have `Contributor` permission. For resource group creation, see [Install, set up, and use the CLI (v2)](how-to-configure-cli.md).
-* Create an Azure Machine Learning workspace if you don't have one. For workspace creation, see [Install, set up, and use the CLI (v2) (preview)](how-to-configure-cli.md).
+* Create an Azure Machine Learning workspace if you don't have one. For workspace creation, see [Install, set up, and use the CLI (v2)](how-to-configure-cli.md).
-* Configure your default workspace and resource group for the Azure CLI. Machine Learning CLI commands require the `--workspace/-w` and `--resource-group/-g` parameters. Configure the defaults can avoid passing in the values multiple times. You can override these on the command line. Run the following code to set up your defaults. For more information, see [Install, set up, and use the CLI (v2) (preview)](how-to-configure-cli.md).
+* Configure your default workspace and resource group for the Azure CLI. Machine Learning CLI commands require the `--workspace/-w` and `--resource-group/-g` parameters. Configure the defaults can avoid passing in the values multiple times. You can override these on the command line. Run the following code to set up your defaults. For more information, see [Install, set up, and use the CLI (v2)](how-to-configure-cli.md).
```azurecli az account set -s "<subscription ID>"
Batch endpoint runs only on cloud computing resources, not locally. The cloud co
## Understand batch endpoints and batch deployments
-A batch endpoint is an HTTPS endpoint that clients can call to trigger a batch scoring job. A batch scoring job is a job that scores multiple inputs (for more, see [What are batch endpoints?](concept-endpoints.md#what-are-batch-endpoints-preview)). A batch deployment is a set of compute resources hosting the model that does the actual batch scoring. One batch endpoint can have multiple batch deployments.
+A batch endpoint is an HTTPS endpoint that clients can call to trigger a batch scoring job. A batch scoring job is a job that scores multiple inputs (for more, see [What are batch endpoints?](concept-endpoints.md#what-are-batch-endpoints)). A batch deployment is a set of compute resources hosting the model that does the actual batch scoring. One batch endpoint can have multiple batch deployments.
> [!TIP]
-> One of the batch deployments will serve as the default deployment for the endpoint. The default deployment will be used to do the actual batch scoring when the endpoint is invoked. Learn more about [batch endpoints and batch deployment](concept-endpoints.md#what-are-batch-endpoints-preview).
+> One of the batch deployments will serve as the default deployment for the endpoint. The default deployment will be used to do the actual batch scoring when the endpoint is invoked. Learn more about [batch endpoints and batch deployment](concept-endpoints.md#what-are-batch-endpoints).
The following YAML file defines a batch endpoint, which you can include in the CLI command for [batch endpoint creation](#create-a-batch-endpoint). In the repository, this file is located at `/cli/endpoints/batch/batch-endpoint.yml`.
To create a batch deployment, you need all the following items:
For more information about how to reference an Azure ML entity, see [Referencing an Azure ML entity](reference-yaml-core-syntax.md#referencing-an-azure-ml-entity).
-The example repository contains all the required files. The following YAML file defines a batch deployment with all the required inputs and optional settings. You can include this file in your CLI command to [create your batch deployment](#create-a-batch-deployment). In the repository, this file is located at `/cli/endpoints/batch/nonmlflow-deployment.yml`.
+The example repository contains all the required files. The following YAML file defines a batch deployment with all the required inputs and optional settings. You can include this file in your CLI command to [create your batch deployment](#create-a-batch-deployment). In the repository, this file is located at `/cli/endpoints/batch/nonmlflow-deployment.yml`.
:::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/nonmlflow-deployment.yml":::
-The following table describes the key properties of the deployment YAML. For the full batch deployment YAML schema, see [CLI (v2) batch deployment YAML schema](./reference-yaml-deployment-batch.md).
+For the full batch deployment YAML schema, see [CLI (v2) batch deployment YAML schema](./reference-yaml-deployment-batch.md).
| Key | Description | | | -- |
The following table describes the key properties of the deployment YAML. For the
| `name` | The name of the deployment. | | `endpoint_name` | The name of the endpoint to create the deployment under. | | `model` | The model to be used for batch scoring. The example defines a model inline using `path`. Model files will be automatically uploaded and registered with an autogenerated name and version. Follow the [Model schema](reference-yaml-model.md#yaml-syntax) for more options. As a best practice for production scenarios, you should create the model separately and reference it here. To reference an existing model, use the `azureml:<model-name>:<model-version>` syntax. |
-| `code_configuration.code.path` | The directory that contains all the Python source code to score the model. |
-| `code_configuration.scoring_script` | The Python file in the above directory. This file must have an `init()` function and a `run()` function. Use the `init()` function for any costly or common preparation (for example, load the model in memory). `init()` will be called only once at beginning of process. Use `run(mini_batch)` to score each entry; the value of `mini_batch` is a list of file paths. The `run()` function should return a pandas DataFrame or an array. Each returned element indicates one successful run of input element in the `mini_batch`. Make sure that enough data is included in your `run()` response to correlate the input with the output. |
+| `code_configuration.code.path` | The local directory that contains all the Python source code to score the model. |
+| `code_configuration.scoring_script` | The Python file in the above directory. This file must have an `init()` function and a `run()` function. Use the `init()` function for any costly or common preparation (for example, load the model in memory). `init()` will be called only once at the beginning of the process. Use `run(mini_batch)` to score each entry; the value of `mini_batch` is a list of file paths. The `run()` function should return a pandas DataFrame or an array. Each returned element indicates one successful run of an input element in the `mini_batch`. For more information on how to author a scoring script, see [Understanding the scoring script](how-to-use-batch-endpoint.md#understanding-the-scoring-script). |
| `environment` | The environment to score the model. The example defines an environment inline using `conda_file` and `image`. The `conda_file` dependencies will be installed on top of the `image`. The environment will be automatically registered with an autogenerated name and version. Follow the [Environment schema](reference-yaml-environment.md#yaml-syntax) for more options. As a best practice for production scenarios, you should create the environment separately and reference it here. To reference an existing environment, use the `azureml:<environment-name>:<environment-version>` syntax. |
| `compute` | The compute to run batch scoring. The example uses the `batch-cluster` created at the beginning and references it using the `azureml:<compute-name>` syntax. |
| `resources.instance_count` | The number of instances to be used for each batch scoring job. |
| `max_concurrency_per_instance` | [Optional] The maximum number of parallel `scoring_script` runs per instance. |
| `mini_batch_size` | [Optional] The number of files the `scoring_script` can process in one `run()` call. |
-| `output_action` | [Optional] How the output should be organized in the output file. `append_row` will merge all `run()` returned output results into one single file named `output_file_name`. `summary_only` will not merge the output results and only calculate `error_threshold`. |
+| `output_action` | [Optional] How the output should be organized in the output file. `append_row` will merge all `run()` returned output results into one single file named `output_file_name`. `summary_only` won't merge the output results and only calculate `error_threshold`. |
| `output_file_name` | [Optional] The name of the batch scoring output file for `append_row` `output_action`. |
| `retry_settings.max_retries` | [Optional] The number of max tries for a failed `scoring_script` `run()`. |
| `retry_settings.timeout` | [Optional] The timeout in seconds for a `scoring_script` `run()` for scoring a mini batch. |
| `error_threshold` | [Optional] The number of input file scoring failures that should be ignored. If the error count for the entire input goes above this value, the batch scoring job will be terminated. The example uses `-1`, which indicates that any number of failures is allowed without terminating the batch scoring job. |
| `logging_level` | [Optional] Log verbosity. Values in increasing verbosity are: WARNING, INFO, and DEBUG. |
-### Understand the scoring script
+### Understanding the scoring script
As mentioned earlier, the `code_configuration.scoring_script` must contain two functions:

- `init()`: Use this function for any costly or common preparation. For example, use it to load the model into a global object. This function will be called once at the beginning of the process.
- `run(mini_batch)`: This function will be called for each `mini_batch` and do the actual scoring.
  - `mini_batch`: The `mini_batch` value is a list of file paths.
- - `response`: The `run()` method should return a pandas DataFrame or an array. Each returned output element indicates one successful run of an input element in the input `mini_batch`. Make sure that enough data (for example, an identifier of each input element) is included in the `run()` response to correlate an input with an output result.
+ - `response`: The `run()` method should return a pandas DataFrame or an array. Each returned output element indicates one successful run of an input element in the input `mini_batch`. Make sure that enough data is included in your `run()` response to correlate the input with the output. The resulting DataFrame or array is populated according to this scoring script. It's up to you how much or how little information you'd like to output to correlate output values with the input value, for example, the array can represent a list of tuples containing both the model's output and input. There's no requirement on the cardinality of the results. All elements in the result DataFrame or array will be written to the output file as-is (given that the `output_action` isn't `summary_only`).
The example uses `/cli/endpoints/batch/mnist/code/digit_identification.py`. The model is loaded in `init()` from `AZUREML_MODEL_DIR`, which is the path to the model folder created during deployment. `run(mini_batch)` iterates each file in `mini_batch`, does the actual model scoring and then returns output results.
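As a rough sketch of that contract (not the actual `digit_identification.py`; `load_my_model` and the `predict` call are placeholders for your own model code), a minimal scoring script could look like:

```Python
import os

model = None

def init():
    # Called once per process: load the model from the folder Azure ML provides for the deployment
    global model
    model_dir = os.environ["AZUREML_MODEL_DIR"]
    model = load_my_model(os.path.join(model_dir, "model"))  # placeholder loader

def run(mini_batch):
    # mini_batch is a list of file paths; return one result per successfully processed input
    results = []
    for file_path in mini_batch:
        prediction = model.predict(file_path)  # placeholder scoring call
        results.append(f"{os.path.basename(file_path)}, {prediction}")
    return results
```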
To check a batch endpoint, run the following code. As the newly created deployme
### Invoke the batch endpoint to start a batch scoring job
-Invoke a batch endpoint triggers a batch scoring job. A job `name` will be returned from the invoke response and can be used to track the batch scoring progress. The batch scoring job runs for a period of time. It splits the entire inputs into multiple `mini_batch` and processes in parallel on the compute cluster. One `scoring_scrip` `run()` takes one `mini_batch` and processes it by a process on an instance. The batch scoring job outputs will be stored in cloud storage, either in the workspace's default blob storage, or the storage you specified.
+Invoking a batch endpoint triggers a batch scoring job. A job `name` will be returned from the invoke response and can be used to track the batch scoring progress. The batch scoring job runs for a period of time. It splits the entire inputs into multiple `mini_batch` and processes in parallel on the compute cluster. One `scoring_script` `run()` takes one `mini_batch` and processes it by a process on an instance. The batch scoring job outputs will be stored in cloud storage, either in the workspace's default blob storage, or the storage you specified.
#### Invoke the batch endpoint with different input options
-You can either use CLI or REST to `invoke` the endpoint. For REST experience, see [Use batch endpoints(preview) with REST](how-to-deploy-batch-with-rest.md)
+You can either use CLI or REST to `invoke` the endpoint. For REST experience, see [Use batch endpoints with REST](how-to-deploy-batch-with-rest.md)
+
+There are several options to specify the data inputs in CLI `invoke`.
-There are three options to specify the data inputs in CLI `invoke`.
+* __Option 1-1: Data in the cloud__
-* __Option 1: Data in the cloud__
+ Use `--input` and `--input-type` to specify a file or folder on an Azure Machine Learning registered datastore or a publicly accessible path. When you're specifying a single file, use `--input-type uri_file`, and when you're specifying a folder, use `--input-type uri_folder`.
- Use `--input-path` to specify a folder (use prefix `folder:`) or a file (use prefix `file:`) in an Azure Machine Learning registered datastore. The syntax for the data URI is `folder:azureml://datastores/<datastore-name>/paths/<data-path>/` for folder, and `file:azureml://datastores/<datastore-name>/paths/<data-path>/<file-name>` for a specific file. For more information about data URI, see [Azure Machine Learning data reference URI](reference-yaml-core-syntax.md#azure-ml-data-reference-uri).
+ When the file or folder is on an Azure ML registered datastore, the syntax for the URI is `azureml://datastores/<datastore-name>/paths/<path-on-datastore>/` for a folder, and `azureml://datastores/<datastore-name>/paths/<path-on-datastore>/<file-name>` for a specific file. When the file or folder is on a publicly accessible path, the syntax for the URI is `https://<public-path>/` for a folder, and `https://<public-path>/<file-name>` for a specific file.
+
+ For more information about data URI, see [Azure Machine Learning data reference URI](reference-yaml-core-syntax.md#azure-ml-data-reference-uri).
The example uses publicly available data in a folder from `https://pipelinedata.blob.core.windows.net/sampledata/mnist`, which contains thousands of hand-written digits. The name of the batch scoring job will be returned from the invoke response. Run the following code to invoke the batch endpoint using this data. `--query name` is added to only return the job name from the invoke response, and it will be used later to [Monitor batch scoring job execution progress](#monitor-batch-scoring-job-execution-progress) and [Check batch scoring results](#check-batch-scoring-results). Remove `--query name -o tsv` if you want to see the full invoke response. For more information on the `--query` parameter, see [Query Azure CLI command output](/cli/azure/query-azure-cli).

:::code language="azurecli" source="~/azureml-examples-main/cli/batch-score.sh" ID="start_batch_scoring_job" :::
-* __Option 2: Registered dataset__
-
- Use `--input-dataset` to pass in an Azure Machine Learning registered dataset. To create a dataset, check `az ml dataset create -h` for instruction, and follow the [Dataset schema](reference-yaml-data.md#yaml-syntax).
+* __Option 1-2: Registered data asset__
- > [!NOTE]
- > FileDataset that is created using the preceding version of the CLI and Python SDK can also be used. TabularDataset is not supported.
+ Use `--input` to pass in an Azure Machine Learning registered V2 data asset (with the type of either `uri_file` or `uri_folder`). You don't need to specify `--input-type` in this option. The syntax for this option is `azureml:<dataset-name>:<dataset-version>`.
```azurecli
- az ml batch-endpoint invoke --name $ENDPOINT_NAME --input-dataset azureml:<dataset-name>:<dataset-version>
+ az ml batch-endpoint invoke --name $ENDPOINT_NAME --input azureml:<dataset-name>:<dataset-version>
```
-* __Option 3: Data stored locally__
+* __Option 2: Data stored locally__
- Use `--input-local-path` to pass in data files stored locally. The data files will be automatically uploaded and registered with an autogenerated name and version.
+ Use `--input` to pass in data files stored locally. You don't need to specify `--input-type` in this option. The data files will be automatically uploaded as a folder to Azure ML datastore, and passed to the batch scoring job.
```azurecli
- az ml batch-endpoint invoke --name $ENDPOINT_NAME --input-local-path <local-path>
+ az ml batch-endpoint invoke --name $ENDPOINT_NAME --input <local-path>
```
+> [!NOTE]
+> - If you are using existing V1 FileDataset for batch endpoint, we recommend migrating them to V2 data assets and refer to them directly when invoking batch endpoints. Currently only data assets of type `uri_folder` or `uri_file` are supported. Batch endpoints created with GA CLIv2 (2.4.0 and newer) or GA REST API (2022-05-01 and newer) will not support V1 Dataset.
+> - You can also extract the URI or path on datastore extracted from V1 FileDataset by using `az ml dataset show` command with `--query` parameter and use that information for invoke.
+> - While Batch endpoints created with earlier APIs will continue to support V1 FileDataset, we will be adding further V2 data assets support with the latest API versions for even more usability and flexibility. For more information on V2 data assets, see [Work with data using SDK v2 (preview)](how-to-use-data.md). For more information on the new V2 experience, see [What is v2](concept-v2.md).
+ #### Configure the output location and overwrite settings
-The batch scoring results are by default stored in the workspace's default blob store within a folder named by job name (a system-generated GUID). You can configure where to store the scoring outputs when you invoke the batch endpoint. Use `--output-path` to configure any `folder:` in an Azure Machine Learning registered datastore. The syntax for the `--output-path` `folder:` is the same as `--input-path` `folder:`. Use `--set output_file_name=<your-file-name>` to configure a new output file name if you prefer having one output file containing all scoring results (specified `output_action=append_row` in your deployment YAML).
+The batch scoring results are by default stored in the workspace's default blob store within a folder named by job name (a system-generated GUID). You can configure where to store the scoring outputs when you invoke the batch endpoint. Use `--output-path` to configure any folder in an Azure Machine Learning registered datastore. The syntax for the `--output-path` is the same as `--input` when you're specifying a folder, that is, `azureml://datastores/<datastore-name>/paths/<path-on-datastore>/`. The prefix `folder:` isn't required anymore. Use `--set output_file_name=<your-file-name>` to configure a new output file name if you prefer having one output file containing all scoring results (specified `output_action=append_row` in your deployment YAML).
> [!IMPORTANT] > You must use a unique output location. If the output file exists, the batch scoring job will fail.
To create a new batch deployment under the existing batch endpoint but not set i
:::code language="azurecli" source="~/azureml-examples-main/cli/batch-score.sh" ID="create_new_deployment_not_default" :::
-Notice that `--set-default` is not used. If you `show` the batch endpoint again, you should see no change of the `defaults.deployment_name`.
+Notice that `--set-default` isn't used. If you `show` the batch endpoint again, you should see no change of the `defaults.deployment_name`.
-The example uses a model (`/cli/endpoints/batch/autolog_nyc_taxi`) trained and tracked with MLflow. `scoring_script` and `environment` can be auto generated using model's metadata, no need to specify in the YAML file. For more about MLflow, see [Train and track ML models with MLflow and Azure Machine Learning (preview)](how-to-use-mlflow.md).
+The example uses a model (`/cli/endpoints/batch/autolog_nyc_taxi`) trained and tracked with MLflow. `scoring_script` and `environment` can be auto generated using model's metadata, no need to specify in the YAML file. For more about MLflow, see [Train and track ML models with MLflow and Azure Machine Learning](how-to-use-mlflow.md).
Below is the YAML file the example uses to deploy an MLflow model, which only contains the minimum required properties. The source file in repository is `/cli/endpoints/batch/mlflow-deployment.yml`.
If you aren't going to use the old batch deployment, you should delete it by run
::: code language="azurecli" source="~/azureml-examples-main/cli/batch-score.sh" ID="delete_deployment" :::
-Run the following code to delete the batch endpoint and all the underlying deployments. Batch scoring jobs will not be deleted.
+Run the following code to delete the batch endpoint and all the underlying deployments. Batch scoring jobs won't be deleted.
::: code language="azurecli" source="~/azureml-examples-main/cli/batch-score.sh" ID="delete_endpoint" ::: ## Next steps * [Batch endpoints in studio](how-to-use-batch-endpoints-studio.md)
-* [Deploy models with REST (preview) for batch scoring](how-to-deploy-batch-with-rest.md)
+* [Deploy models with REST for batch scoring](how-to-deploy-batch-with-rest.md)
* [Troubleshooting batch endpoints](how-to-troubleshoot-batch-endpoints.md)
machine-learning How To Use Batch Endpoints Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-endpoints-studio.md
Previously updated : 03/31/2022- Last updated : 04/26/2022+
-# How to use batch endpoints (preview) in Azure Machine Learning studio
+# How to use batch endpoints in Azure Machine Learning studio
+
+In this article, you learn how to use batch endpoints to do batch scoring in [Azure Machine Learning studio](https://ml.azure.com). For more, see [What are Azure Machine Learning endpoints?](concept-endpoints.md).
-In this article, you learn how to use batch endpoints (preview) to do batch scoring in [Azure Machine Learning studio](https://ml.azure.com). For more, see [What are Azure Machine Learning endpoints (preview)?](concept-endpoints.md).
In this article, you learn about:
In this article, you learn about:
> * Start a batch scoring job > * Overview of batch endpoint features in Azure machine learning studio + ## Prerequisites
There are two ways to create Batch Endpoints in Azure Machine Learning studio:
OR
-* From the **Models** page, select the model you want to deploy and then select **Deploy to batch endpoint (preview)**.
+* From the **Models** page, select the model you want to deploy and then select **Deploy to batch endpoint**.
:::image type="content" source="media/how-to-use-batch-endpoints-studio/models-page-deployment.png" alt-text="Screenshot of creating a batch endpoint/deployment from Models page"::: > [!TIP]
-> If you're using an MLflow model, you can use no-code batch endpoint creation. That is, you don't need to prepare a scoring script and environment, both can be auto generated. For more, see [Train and track ML models with MLflow and Azure Machine Learning (preview)](how-to-use-mlflow.md).
+> If you're using an MLflow model, you can use no-code batch endpoint creation. That is, you don't need to prepare a scoring script and environment; both can be auto-generated. For more, see [Train and track ML models with MLflow and Azure Machine Learning](how-to-use-mlflow.md).
> > :::image type="content" source="media/how-to-use-batch-endpoints-studio/mlflow-model-wizard.png" alt-text="Screenshot of deploying an MLflow model":::
In Azure machine learning studio, there are two ways to add a deployment to an e
OR
-* From the **Models** page, select the model you want to deploy. Then select **Deploy to batch endpoint (preview)** option from the drop-down. In the wizard, on the **Endpoint** screen, select **Existing**. Complete the wizard to add the new deployment.
+* From the **Models** page, select the model you want to deploy. Then select **Deploy to batch endpoint** option from the drop-down. In the wizard, on the **Endpoint** screen, select **Existing**. Complete the wizard to add the new deployment.
:::image type="content" source="media/how-to-use-batch-endpoints-studio/add-deployment-models-page.png" alt-text="Screenshot of selecting an existing batch endpoint to add new deployment":::
To delete a **deployment**, select the endpoint from the **Endpoints** page, sel
In this article, you learned how to create and call batch endpoints. See these other articles to learn more about Azure Machine Learning: * [Troubleshooting batch endpoints](how-to-troubleshoot-batch-endpoints.md)
-* [Deploy and score a machine learning model with a managed online endpoint (preview)](how-to-deploy-managed-online-endpoints.md)
+* [Deploy and score a machine learning model with a managed online endpoint](how-to-deploy-managed-online-endpoints.md)
machine-learning How To Use Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-data.md
+
+ Title: Work with data using SDK v2 (preview)
+
+description: 'Learn how to work with data using the Python SDK v2 preview for Azure Machine Learning.'
+++++ Last updated : 05/10/2022++++
+# Work with data using SDK v2 preview
++
+Azure Machine Learning allows you to work with different types of data. In this article, you'll learn about using the Python SDK v2 to work with _URIs_ and _Tables_. URIs reference a location either local to your development environment or in the cloud. Tables are a tabular data abstraction.
+
+For most scenarios, you'll use URIs (`uri_folder` and `uri_file`). A URI references a location in storage that can be easily mapped to the filesystem of a compute node when you run a job. The data is accessed by either mounting or downloading the storage to the node.
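+
+For example, here's a minimal sketch of how an input's `mode` might control that behavior when you define a job input with the SDK v2 preview. The specific mode names (`ro_mount`, `download`) are assumptions based on the common Azure Machine Learning data access modes, so check the SDK reference for the exact values your version accepts; the `JobInput` and `AssetTypes` imports mirror the snippets later in this article:
+
+```python
+from azure.ai.ml.entities import JobInput
+from azure.ai.ml._constants import AssetTypes
+
+# Mount the folder read-only on the compute node (assumed mode name: "ro_mount")
+mounted_input = JobInput(
+    path='https://<account_name>.blob.core.windows.net/<container_name>/path',
+    type=AssetTypes.URI_FOLDER,
+    mode="ro_mount"
+)
+
+# Download the folder to the compute node before the job starts (assumed mode name: "download")
+downloaded_input = JobInput(
+    path='https://<account_name>.blob.core.windows.net/<container_name>/path',
+    type=AssetTypes.URI_FOLDER,
+    mode="download"
+)
+```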
+
+When using tables, you'll use `mltable`. It's an abstraction for tabular data that is used for AutoML jobs, parallel jobs, and some advanced scenarios. If you're just starting to use Azure Machine Learning, and aren't using AutoML, we strongly encourage you to begin with URIs.
+
+> [!TIP]
+> If you have dataset assets created using the SDK v1, you can still use those with SDK v2. For more information, see the [Consuming V1 Dataset Assets in V2](#consuming-v1-dataset-assets-in-v2) section.
+++
+## Prerequisites
+
+* An Azure subscription - If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
+* An Azure Machine Learning workspace.
+* The Azure Machine Learning SDK v2 for Python
+++
+## URIs
+
+The code snippets in this section cover the following scenarios:
+
+* Reading data in a job
+* Reading *and* writing data in a job
+* Registering data as an asset in Azure Machine Learning
+* Reading registered data assets from Azure Machine Learning in a job
+
+These snippets use `uri_file` and `uri_folder`.
+
+- `uri_file` is a type that refers to a specific file. For example, `'https://<account_name>.blob.core.windows.net/<container_name>/path/file.csv'`.
+- `uri_folder` is a type that refers to a specific folder. For example, `'https://<account_name>.blob.core.windows.net/<container_name>/path'`.
+
+> [!TIP]
+> We recommend using an argument parser to pass folder information into _data-plane_ code. By data-plane code, we mean your data processing and/or training code that you run in the cloud. The code that runs in your development environment and submits code to the data-plane is _control-plane_ code.
+>
+> Data-plane code is typically a Python script, but can be any programming language. Passing the folder as part of job submission allows you to easily adjust the path from training locally using local data, to training in the cloud. For example, the following example uses `argparse` to get a `uri_folder`, which is joined with the file name to form a path:
+>
+> ```python
+> # train.py
+> import argparse
+> import os
+> import pandas as pd
+>
+> parser = argparse.ArgumentParser()
+> parser.add_argument("--input_folder", type=str)
+> args = parser.parse_args()
+>
+> file_name = os.path.join(args.input_folder, "MY_CSV_FILE.csv")
+> df = pd.read_csv(file_name)
+> print(df.head(10))
+> # process data
+> # train a model
+> # etc
+> ```
+>
+> If you want to pass in just an individual file rather than the entire folder, you can use the `uri_file` type, as shown in the sketch below.
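+
+For instance, here's a minimal sketch of submitting a job whose input is a single CSV file. It assumes that `AssetTypes.URI_FILE` mirrors the `URI_FOLDER` constant used elsewhere in this article, reuses the same `ml_client`, environment, and compute names as the snippets below, and relies on a hypothetical `train.py` that accepts an `--input_file` argument pointing directly at the file:
+
+```python
+from azure.ai.ml.entities import JobInput, CommandJob
+from azure.ai.ml._constants import AssetTypes
+
+# A single file is passed with the uri_file type; the data-plane script
+# receives the path to the file itself rather than to a folder.
+my_job_inputs = {
+    "input_data": JobInput(
+        path='https://<account_name>.blob.core.windows.net/<container_name>/path/file.csv',
+        type=AssetTypes.URI_FILE
+    )
+}
+
+job = CommandJob(
+    code="./src",  # local path where the code is stored
+    command='python train.py --input_file ${{inputs.input_data}}',
+    inputs=my_job_inputs,
+    environment="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu:9",
+    compute="cpu-cluster"
+)
+
+# submit the command job
+returned_job = ml_client.create_or_update(job)
+```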
+
+For a complete example, see the [working_with_uris.ipynb notebook](https://github.com/azure/azureml-previews/sdk/docs/working_with_uris.ipynb).
+
+Below are some common data access patterns that you can use in your *control-plane* code to submit a job to Azure Machine Learning:
+
+### Use data with a training job
+
+Use the tabs below to select where your data is located.
+
+# [Local data](#tab/use-local)
+
+When you pass local data, the data is automatically uploaded to cloud storage as part of the job submission.
+
+```python
+from azure.ai.ml.entities import Data, UriReference, JobInput, CommandJob
+from azure.ai.ml._constants import AssetTypes
+
+my_job_inputs = {
+ "input_data": JobInput(
+ path='./sample_data', # change to be your local directory
+ type=AssetTypes.URI_FOLDER
+ )
+}
+
+job = CommandJob(
+ code="./src", # local path where the code is stored
+ command='python train.py --input_folder ${{inputs.input_data}}',
+ inputs=my_job_inputs,
+ environment="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu:9",
+ compute="cpu-cluster"
+)
+
+#submit the command job
+returned_job = ml_client.create_or_update(job)
+#get a URL for the status of the job
+returned_job.services["Studio"].endpoint
+```
+
+# [ADLS Gen2](#tab/use-adls)
+
+```python
+from azure.ai.ml.entities import Data, UriReference, JobInput, CommandJob
+from azure.ai.ml._constants import AssetTypes
+
+# in this example, the job input points to a folder on ADLS Gen2
+my_job_inputs = {
+ "input_data": JobInput(
+ path='abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>',
+ type=AssetTypes.URI_FOLDER
+ )
+}
+
+job = CommandJob(
+ code="./src", # local path where the code is stored
+ command='python train.py --input_folder ${{inputs.input_data}}',
+ inputs=my_job_inputs,
+ environment="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu:9",
+ compute="cpu-cluster"
+)
+
+#submit the command job
+returned_job = ml_client.create_or_update(job)
+#get a URL for the status of the job
+returned_job.services["Studio"].endpoint
+```
+
+# [Blob](#tab/use-blob)
+
+```python
+from azure.ai.ml.entities import Data, UriReference, JobInput, CommandJob
+from azure.ai.ml._constants import AssetTypes
+
+# in this example, the job input points to a folder in Blob storage
+my_job_inputs = {
+ "input_data": JobInput(
+ path='https://<account_name>.blob.core.windows.net/<container_name>/path',
+ type=AssetTypes.URI_FOLDER
+ )
+}
+
+job = CommandJob(
+ code="./src", # local path where the code is stored
+ command='python train.py --input_folder ${{inputs.input_data}}',
+ inputs=my_job_inputs,
+ environment="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu:9",
+ compute="cpu-cluster"
+)
+
+#submit the command job
+returned_job = ml_client.create_or_update(job)
+#get a URL for the status of the job
+returned_job.services["Studio"].endpoint
+```
+++
+### Read and write data in a job
+
+Use the tabs below to select where your data is located.
+
+# [Blob](#tab/rw-blob)
+
+```python
+from azure.ai.ml.entities import Data, UriReference, JobInput, CommandJob, JobOutput
+from azure.ai.ml._constants import AssetTypes
+
+my_job_inputs = {
+ "input_data": JobInput(
+ path='https://<account_name>.blob.core.windows.net/<container_name>/path',
+ type=AssetTypes.URI_FOLDER
+ )
+}
+
+my_job_outputs = {
+ "output_folder": JobOutput(
+ path='https://<account_name>.blob.core.windows.net/<container_name>/path',
+ type=AssetTypes.URI_FOLDER
+ )
+}
+
+job = CommandJob(
+ code="./src", #local path where the code is stored
+ command='python pre-process.py --input_folder ${{inputs.input_data}} --output_folder ${{outputs.output_folder}}',
+ inputs=my_job_inputs,
+ outputs=my_job_outputs,
+ environment="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu:9",
+ compute="cpu-cluster"
+)
+
+#submit the command job
+returned_job = ml_client.create_or_update(job)
+#get a URL for the status of the job
+returned_job.services["Studio"].endpoint
+```
+
+# [ADLS Gen2](#tab/rw-adls)
+
+```python
+from azure.ai.ml.entities import Data, UriReference, JobInput, CommandJob, JobOutput
+from azure.ai.ml._constants import AssetTypes
+
+my_job_inputs = {
+ "input_data": JobInput(
+ path='abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>',
+ type=AssetTypes.URI_FOLDER
+ )
+}
+
+my_job_outputs = {
+ "output_folder": JobOutput(
+ path='abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>',
+ type=AssetTypes.URI_FOLDER
+ )
+}
+
+job = CommandJob(
+ code="./src", #local path where the code is stored
+ command='python pre-process.py --input_folder ${{inputs.input_data}} --output_folder ${{outputs.output_folder}}',
+ inputs=my_job_inputs,
+ outputs=my_job_outputs,
+ environment="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu:9",
+ compute="cpu-cluster"
+)
+
+#submit the command job
+returned_job = ml_client.create_or_update(job)
+#get a URL for the status of the job
+returned_job.services["Studio"].endpoint
+```
++
+### Register data assets
+
+```python
+from azure.ai.ml.entities import Data
+from azure.ai.ml._constants import AssetTypes
+
+# select one from:
+my_path = 'abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>' # adls gen2
+my_path = 'https://<account_name>.blob.core.windows.net/<container_name>/path' # blob
+
+my_data = Data(
+ path=my_path,
+ type=AssetTypes.URI_FOLDER,
+ description="description here",
+ name="a_name",
+ version='1'
+)
+
+ml_client.data.create_or_update(my_data)
+```
+
+### Consume registered data assets in job
+
+```python
+from azure.ai.ml.entities import Data, UriReference, JobInput, CommandJob
+from azure.ai.ml._constants import AssetTypes
+
+registered_data_asset = ml_client.data.get(name='titanic', version='1')
+
+my_job_inputs = {
+ "input_data": JobInput(
+ type=AssetTypes.URI_FOLDER,
+ path=registered_data_asset.id
+ )
+}
+
+job = CommandJob(
+ code="./src",
+ command='python read_data_asset.py --input_folder ${{inputs.input_data}}',
+ inputs=my_job_inputs,
+ environment="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu:9",
+ compute="cpu-cluster"
+)
+
+#submit the command job
+returned_job = ml_client.create_or_update(job)
+#get a URL for the status of the job
+returned_job.services["Studio"].endpoint
+```
+
+## Table
+
+An MLTable is primarily an abstraction over tabular data, but it can also be used for some advanced scenarios involving multiple paths. The following YAML describes an MLTable:
+
+```yaml
+paths:
+ - file: ./titanic.csv
+transformations:
+ - read_delimited:
+ delimiter: ','
+ encoding: 'ascii'
+ empty_as_string: false
+ header: from_first_file
+```
+
+The contents of the MLTable file specify the underlying data location (here a local path) and the transforms to perform on the underlying data before materializing it into a pandas/Spark/Dask data frame. The important part is that the MLTable artifact doesn't have any absolute paths, making it *self-contained*. All the information is stored in one folder, regardless of whether that folder is on your local drive, in your cloud storage, or on a public HTTP server.
+
+To consume the data in a job or interactive session, use `mltable`:
+
+```python
+import mltable
+
+tbl = mltable.load("./sample_data")
+df = tbl.to_pandas_dataframe()
+```
+
+For a full example of using an MLTable, see the [Working with MLTable notebook].
+
+## Consuming V1 dataset assets in V2
+
+> [!NOTE]
+> While full backward compatibility is provided, if your intention with your V1 `FileDataset` assets was to have a single path to a file or folder with no loading transforms (sample, take, filter, etc.), then we recommend that you re-create them as a `uri_file`/`uri_folder` using the v2 CLI:
+>
+> ```cli
+> az ml data create --file my-data-asset.yaml
+> ```
+
+Registered v1 `FileDataset` and `TabularDataset` data assets can be consumed in a v2 job using `mltable`. To use the v1 assets, add the following definition in the `inputs` section of your job YAML:
+
+```yaml
+inputs:
+ my_v1_dataset:
+ type: mltable
+ path: azureml:myv1ds:1
+ mode: eval_mount
+```
+
+The following example shows how to do this using the v2 SDK:
+
+```python
+from azure.ai.ml.entities import Data, UriReference, JobInput, CommandJob
+from azure.ai.ml._constants import AssetTypes
+
+registered_v1_data_asset = ml_client.data.get(name='<ASSET NAME>', version='<VERSION NUMBER>')
+
+my_job_inputs = {
+ "input_data": JobInput(
+ type=AssetTypes.MLTABLE,
+ path=registered_v1_data_asset.id,
+ mode="eval_mount"
+ )
+}
+
+job = CommandJob(
+ code="./src", #local path where the code is stored
+ command='python train.py --input_data ${{inputs.input_data}}',
+ inputs=my_job_inputs,
+ environment="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu:9",
+ compute="cpu-cluster"
+)
+
+#submit the command job
+returned_job = ml_client.jobs.create_or_update(job)
+#get a URL for the status of the job
+returned_job.services["Studio"].endpoint
+```
+
+## Next steps
+
+* [Install and set up Python SDK v2 (preview)](https://aka.ms/sdk-v2-install)
+* [Train models with the Python SDK v2 (preview)](how-to-train-sdk.md)
+* [Tutorial: Create production ML pipelines with Python SDK v2 (preview)](tutorial-pipeline-python-sdk.md)
machine-learning How To Use Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-environments.md
Previously updated : 10/21/2021 Last updated : 04/19/2022 --
-## As a developer, I need to configure my experiment context with the necessary software packages so my machine learning models can be trained and deployed on different compute targets.
-+ # Create & use software environments in Azure Machine Learning In this article, learn how to create and manage Azure Machine Learning [environments](/python/api/azureml-core/azureml.core.environment.environment). Use the environments to track and reproduce your projects' software dependencies as they evolve.
This [example notebook](https://github.com/Azure/MachineLearningNotebooks/tree/m
## Create and manage environments with the Azure CLI --
-The [Azure Machine Learning CLI](reference-azure-machine-learning-cli.md) mirrors most of the functionality of the Python SDK. You can use it to create and manage environments. The commands that we discuss in this section demonstrate fundamental functionality.
-
-The following command scaffolds the files for a default environment definition in the specified directory. These files are JSON files. They work like the corresponding class in the SDK. You can use the files to create new environments that have custom settings.
-
-```azurecli-interactive
-az ml environment scaffold -n myenv -d myenvdir
-```
-
-Run the following command to register an environment from a specified directory.
-
-```azurecli-interactive
-az ml environment register -d myenvdir
-```
-
-Run the following command to list all registered environments.
-
-```azurecli-interactive
-az ml environment list
-```
-
-Download a registered environment by using the following command.
-
-```azurecli-interactive
-az ml environment download -n myenv -d downloaddir
-```
+For information on using the CLI v2, see [Manage environments with CLI v2](how-to-manage-environments-v2.md).
## Create and manage environments with Visual Studio Code
machine-learning How To Use Labeled Dataset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-labeled-dataset.md
-+ Last updated 03/11/2022-
-# Customer intent: As an experienced Python developer, I need to export my data labels and use them for machine learning tasks.
+#Customer intent: As an experienced Python developer, I need to export my data labels and use them for machine learning tasks.
# Create and explore Azure Machine Learning dataset with labels
The exported dataset is a [TabularDataset](/python/api/azureml-core/azureml.data
> [!NOTE] > The public preview methods download() and mount() are [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview features, and may change at any time. ```Python import azureml.core
machine-learning How To Use Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-managed-identities.md
Previously updated : 10/21/2021-- Last updated : 05/06/2021+ # Use Managed identities with Azure Machine Learning +
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
+> * [v1](v1/how-to-use-managed-identities.md)
+> * [v2 (current version)](how-to-use-managed-identities.md)
+ [Managed identities](../active-directory/managed-identities-azure-resources/overview.md) allow you to configure your workspace with the *minimum required permissions to access resources*.
-When configuring Azure Machine Learning workspace in trustworthy manner, it is important to ensure that different services associated with the workspace have the correct level of access. For example, during machine learning workflow the workspace needs access to Azure Container Registry (ACR) for Docker images, and storage accounts for training data.
+When configuring an Azure Machine Learning workspace in a trustworthy manner, it's important to ensure that the different services associated with the workspace have the correct level of access. For example, during a machine learning workflow, the workspace needs access to Azure Container Registry (ACR) for Docker images and to storage accounts for training data.
Furthermore, managed identities allow fine-grained control over permissions, for example you can grant or revoke access from specific compute resources to a specific ACR.
In this article, you'll learn how to use managed identities to:
## Prerequisites - An Azure Machine Learning workspace. For more information, see [Create an Azure Machine Learning workspace](how-to-manage-workspace.md).-- The [Azure CLI extension for Machine Learning service](reference-azure-machine-learning-cli.md)
+- The [Azure CLI extension for Machine Learning service](v1/reference-azure-machine-learning-cli.md)
- The [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro). - To assign roles, the login for your Azure subscription must have the [Managed Identity Operator](../role-based-access-control/built-in-roles.md#managed-identity-operator) role, or other role that grants the required actions (such as __Owner__). - You must be familiar with creating and working with [Managed Identities](../active-directory/managed-identities-azure-resources/overview.md).
If ACR admin user is disallowed by subscription policy, you should first create
> [!TIP] > To get the value for the `--container-registry` parameter, use the [az acr show](/cli/azure/acr#az-acr-show) command to show information for your ACR. The `id` field contains the resource ID for your ACR. ```azurecli-interactive az ml workspace create -w <workspace name> \
az ml workspace create -w <workspace name> \
### Let Azure Machine Learning service create workspace ACR
-If you do not bring your own ACR, Azure Machine Learning service will create one for you when you perform an operation that needs one. For example, submit a training run to Machine Learning Compute, build an environment, or deploy a web service endpoint. The ACR created by the workspace will have admin user enabled, and you need to disable the admin user manually.
+If you don't bring your own ACR, the Azure Machine Learning service will create one for you when you perform an operation that needs one; for example, when you submit a training run to Machine Learning Compute, build an environment, or deploy a web service endpoint. The ACR created by the workspace will have the admin user enabled, and you need to disable the admin user manually.
1. Create a new workspace
If you do not bring your own ACR, Azure Machine Learning service will create one
### Create compute with managed identity to access Docker images for training
-To access the workspace ACR, create machine learning compute cluster with system-assigned managed identity enabled. You can enable the identity from Azure portal or Studio when creating compute, or from Azure CLI using the below. For more information, see [using managed identity with compute clusters](how-to-create-attach-compute-cluster.md#managed-identity).
+To access the workspace ACR, create a machine learning compute cluster with system-assigned managed identity enabled. You can enable the identity from the Azure portal or studio when creating the compute, or from the Azure CLI using the command below. For more information, see [using managed identity with compute clusters](how-to-create-attach-compute-cluster.md#set-up-managed-identity).
# [Python](#tab/python)
When creating a compute cluster with the [AmlComputeProvisioningConfiguration](/
# [Azure CLI](#tab/azure-cli) ```azurecli-interaction
-az ml computetarget create amlcompute --name cpucluster -w <workspace> -g <resource group> --vm-size <vm sku> --assign-identity '[system]'
+az ml compute create --name cpucluster --type amlcompute --identity-type systemassigned
``` # [Portal](#tab/azure-portal)
To use a custom base image internal to your enterprise, you can use managed iden
Create machine learning compute cluster with system-assigned managed identity enabled as described earlier. Then, determine the principal ID of the managed identity. ```azurecli-interactive
-az ml computetarget amlcompute identity show --name <cluster name> -w <workspace> -g <resource group>
+az ml compute show --name <cluster name> -w <workspace> -g <resource group>
``` Optionally, you can update the compute cluster to assign a user-assigned managed identity: + ```azurecli-interactive
-az ml computetarget amlcompute identity assign --name cpucluster \
--w $mlws -g $mlrg --identities <my-identity-id>
+az ml compute update --name <cluster name> --user-assigned-identities <my-identity-id>
``` + To allow the compute cluster to pull the base images, grant the managed service identity ACRPull role on the private ACR + ```azurecli-interactive az role assignment create --assignee <principal ID> \ --role acrpull \
az role assignment create --assignee <principal ID> \
Finally, when submitting a training run, specify the base image location in the [environment definition](how-to-use-environments.md#use-existing-environments). + ```python from azureml.core import Environment env = Environment(name="private-acr")
env.python.user_managed_dependencies = True
### Build Azure Machine Learning managed environment into base image from private ACR for training or inference In this scenario, Azure Machine Learning service builds the training or inference environment on top of a base image you supply from a private ACR. Because the image build task happens on the workspace ACR using ACR Tasks, you must perform more steps to allow access.
In this scenario, Azure Machine Learning service builds the training or inferenc
1. Specify the external ACR and client ID of the __user-assigned managed identity__ in workspace connections by using [Workspace.set_connection method](/python/api/azureml-core/azureml.core.workspace.workspace#set-connection-name--category--target--authtype--value-):
+ [!INCLUDE [sdk v1](../../includes/machine-learning-sdk-v1.md)]
+ ```python workspace.set_connection( name="privateAcr",
In this scenario, Azure Machine Learning service builds the training or inferenc
Once the configuration is complete, you can use the base images from private ACR when building environments for training or inference. The following code snippet demonstrates how to specify the base image ACR and image name in an environment definition: + ```python from azureml.core import Environment
env.docker.base_image = "<acr url>/my-repo/my-image:latest"
Optionally, you can specify the managed identity resource URL and client ID in the environment definition itself by using [RegistryIdentity](/python/api/azureml-core/azureml.core.container_registry.registryidentity). If you use registry identity explicitly, it overrides any workspace connections specified earlier: + ```python from azureml.core.container_registry import RegistryIdentity
Use Azure CLI or Python SDK to create the workspace. When using the CLI, specify
__Azure CLI__ ```azurecli-interactive az ml workspace create -w <workspace name> -g <resource group> --primary-user-assigned-identity <managed identity ARM ID>
az ml workspace create -w <workspace name> -g <resource group> --primary-user-as
__Python__ + ```python from azureml.core import Workspace
machine-learning How To Use Managed Online Endpoint Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-managed-online-endpoint-studio.md
Title: Use managed online endpoints (preview) in the studio
+ Title: Use managed online endpoints in the studio
-description: 'Learn how to create and use managed online endpoints (preview) using the Azure Machine Learning studio.'
+description: 'Learn how to create and use managed online endpoints using the Azure Machine Learning studio.'
-+ Last updated 10/21/2021
-# Create and use managed online endpoints (preview) in the studio
+# Create and use managed online endpoints in the studio
-Learn how to use the studio to create and manage your managed online endpoints (preview) in Azure Machine Learning. Use managed online endpoints to streamline production-scale deployments. For more information on managed online endpoints, see [What are endpoints](concept-endpoints.md).
+Learn how to use the studio to create and manage your managed online endpoints in Azure Machine Learning. Use managed online endpoints to streamline production-scale deployments. For more information on managed online endpoints, see [What are endpoints](concept-endpoints.md).
In this article, you learn how to:
In this article, you learn how to:
> * Update managed online endpoints > * Delete managed online endpoints and deployments - ## Prerequisites - An Azure Machine Learning workspace. For more information, see [Create an Azure Machine Learning workspace](how-to-manage-workspace.md). - The examples repository - Clone the [AzureML Example repository](https://github.com/Azure/azureml-examples). This article uses the assets in `/cli/endpoints/online`.
-## Create a managed online endpoint (preview)
+## Create a managed online endpoint
-Use the studio to create a managed online endpoint (preview) directly in your browser. When you create a managed online endpoint in the studio, you must define an initial deployment. You cannot create an empty managed online endpoint.
+Use the studio to create a managed online endpoint directly in your browser. When you create a managed online endpoint in the studio, you must define an initial deployment. You cannot create an empty managed online endpoint.
1. Go to the [Azure Machine Learning studio](https://ml.azure.com). 1. In the left navigation bar, select the **Endpoints** page.
-1. Select **+ Create (preview)**.
+1. Select **+ Create**.
:::image type="content" source="media/how-to-create-managed-online-endpoint-studio/endpoint-create-managed-online-endpoint.png" lightbox="media/how-to-create-managed-online-endpoint-studio/endpoint-create-managed-online-endpoint.png" alt-text="A screenshot for creating managed online endpoint from the Endpoints tab.":::
You can also create a managed online endpoint from the **Models** page in the st
1. Go to the [Azure Machine Learning studio](https://ml.azure.com). 1. In the left navigation bar, select the **Models** page. 1. Select a model by checking the circle next to the model name.
-1. Select **Deploy** > **Deploy to real-time endpoint (preview)**.
+1. Select **Deploy** > **Deploy to real-time endpoint**.
:::image type="content" source="media/how-to-create-managed-online-endpoint-studio/deploy-from-models-page.png" lightbox="media/how-to-create-managed-online-endpoint-studio/deploy-from-models-page.png" alt-text="A screenshot of creating a managed online endpoint from the Models UI.":::
-## View managed online endpoints (preview)
+## View managed online endpoints
-You can view your managed online endpoints (preview) in the **Endpoints** page. Use the endpoint details page to find critical information including the endpoint URI, status, testing tools, activity monitors, deployment logs, and sample consumption code:
+You can view your managed online endpoints in the **Endpoints** page. Use the endpoint details page to find critical information including the endpoint URI, status, testing tools, activity monitors, deployment logs, and sample consumption code:
1. In the left navigation bar, select **Endpoints**. 1. (Optional) Create a **Filter** on **Compute type** to show only **Managed** compute types.
You can add a deployment to your existing managed online endpoint.
From the **Endpoint details page**
-1. Select **+ Add Deployment** button in the [endpoint details page](#view-managed-online-endpoints-preview).
+1. Select **+ Add Deployment** button in the [endpoint details page](#view-managed-online-endpoints).
2. Follow the instructions to complete the deployment. :::image type="content" source="media/how-to-create-managed-online-endpoint-studio/add-deploy-option-from-endpoint-page.png" lightbox="media/how-to-create-managed-online-endpoint-studio/add-deploy-option-from-endpoint-page.png" alt-text="A screenshot of Add deployment option from Endpoint details page.":::
Alternatively, you can use the **Models** page to add a deployment:
1. In the left navigation bar, select the **Models** page. 1. Select a model by checking the circle next to the model name.
-1. Select **Deploy** > **Deploy to real-time endpoint (preview)**.
+1. Select **Deploy** > **Deploy to real-time endpoint**.
1. Choose to deploy to an existing managed online endpoint. :::image type="content" source="media/how-to-create-managed-online-endpoint-studio/select-existing-managed-endpoints.png" lightbox="media/how-to-create-managed-online-endpoint-studio/select-existing-managed-endpoints.png" alt-text="A screenshot of Add deployment option from Models page.":::
Alternatively, you can use the **Models** page to add a deployment:
> > :::image type="content" source="media/how-to-create-managed-online-endpoint-studio/adjust-deployment-traffic.png" lightbox="media/how-to-create-managed-online-endpoint-studio/adjust-deployment-traffic.png" alt-text="A screenshot of how to use sliders to control traffic distribution across multiple deployments.":::
-## Update managed online endpoints (preview)
+## Update managed online endpoints
You can update deployment traffic percentage and instance count from Azure Machine Learning studio.
Use the following instructions to scale an individual deployment up or down by a
1. Update the instance count. 1. Select **Update**.
-## Delete managed online endpoints and deployments (preview)
+## Delete managed online endpoints and deployments
-Learn how to delete an entire managed online endpoint (preview) and it's associated deployments (preview). Or, delete an individual deployment from a managed online endpoint.
+Learn how to delete an entire managed online endpoint and its associated deployments. Or, delete an individual deployment from a managed online endpoint.
### Delete a managed online endpoint
Deleting a managed online endpoint also deletes any deployments associated with
1. Select an endpoint by checking the circle next to the model name. 1. Select **Delete**.
-Alternatively, you can delete a managed online endpoint directly in the [endpoint details page](#view-managed-online-endpoints-preview).
+Alternatively, you can delete a managed online endpoint directly in the [endpoint details page](#view-managed-online-endpoints).
### Delete an individual deployment
In this article, you learned how to use Azure Machine Learning managed online en
- [What are endpoints?](concept-endpoints.md) - [How to deploy managed online endpoints with the Azure CLI](how-to-deploy-managed-online-endpoints.md)-- [Deploy models with REST (preview)](how-to-deploy-with-rest.md)
+- [Deploy models with REST](how-to-deploy-with-rest.md)
- [How to monitor managed online endpoints](how-to-monitor-online-endpoints.md)-- [Troubleshooting managed online endpoints deployment and scoring (preview)](./how-to-troubleshoot-online-endpoints.md)-- [View costs for an Azure Machine Learning managed online endpoint (preview)](how-to-view-online-endpoints-costs.md)-- [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints-preview)
+- [Troubleshooting managed online endpoints deployment and scoring](./how-to-troubleshoot-online-endpoints.md)
+- [View costs for an Azure Machine Learning managed online endpoint](how-to-view-online-endpoints-costs.md)
+- [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints)
machine-learning How To Use Mlflow Azure Databricks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-azure-databricks.md
Last updated 10/21/2021 -+ # Track Azure Databricks ML experiments with MLflow and Azure Machine Learning + In this article, learn how to enable MLflow's tracking URI and logging API, collectively known as [MLflow Tracking](https://mlflow.org/docs/latest/quickstart.html#using-the-tracking-api), to connect your Azure Databricks (ADB) experiments, MLflow, and Azure Machine Learning. [MLflow](https://www.mlflow.org) is an open-source library for managing the life cycle of your machine learning experiments. MLFlow Tracking is a component of MLflow that logs and tracks your training run metrics and model artifacts. Learn more about [Azure Databricks and MLflow](/azure/databricks/applications/mlflow/).
machine-learning How To Use Mlflow Cli Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-cli-runs.md
Previously updated : 12/16/2021 Last updated : 04/08/2022 -+ ms.devlang: azurecli
-# Track ML experiments and models with MLflow or the Azure Machine Learning CLI (v2) (preview)
+# Track ML experiments and models with MLflow or the Azure Machine Learning CLI (v2)
-In this article, learn how to enable MLflow's tracking URI and logging API, collectively known as [MLflow Tracking](https://mlflow.org/docs/latest/quickstart.html#using-the-tracking-api), to connect Azure Machine Learning as the backend of your MLflow experiments. You can accomplish this connection with either the MLflow Python API or the [Azure Machine Learning CLI (v2) (preview)](how-to-train-cli.md) in your terminal. You also learn how to use [MLflow's Model Registry](https://mlflow.org/docs/latest/model-registry.html) capabilities with Azure Machine Learning.
+
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning developer platform you are using:"]
+> * [v1](./v1/how-to-use-mlflow.md)
+> * [v2 (current version)](how-to-use-mlflow-cli-runs.md)
+
+In this article, learn how to enable MLflow's tracking URI and logging API, collectively known as [MLflow Tracking](https://mlflow.org/docs/latest/quickstart.html#using-the-tracking-api), to connect Azure Machine Learning as the backend of your MLflow experiments. You can accomplish this connection with either the MLflow Python API or the [Azure Machine Learning CLI v2](how-to-train-cli.md) in your terminal. You also learn how to use [MLflow's Model Registry](https://mlflow.org/docs/latest/model-registry.html) capabilities with Azure Machine Learning.
[MLflow](https://www.mlflow.org) is an open-source library for managing the lifecycle of your machine learning experiments. MLflow Tracking is a component of MLflow that logs and tracks your training run metrics and model artifacts, no matter your experiment's environment--locally on your computer, on a remote compute target, a virtual machine, or an [Azure Databricks cluster](how-to-use-mlflow-azure-databricks.md). See [MLflow and Azure Machine Learning](concept-mlflow.md) for all supported MLflow and Azure Machine Learning functionality including MLflow Project support (preview) and model deployment.+
+> [!IMPORTANT]
+> When using the Azure Machine Learning SDK v2, no native logging is provided. Instead, use MLflow's tracking capabilities. For more information, see [How to log and view metrics (v2)](how-to-log-view-metrics.md).
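+
+For example, here's a minimal sketch of the MLflow logging pattern this article relies on in place of SDK v1 native logging:
+
+```python
+import mlflow
+
+# Log a parameter and a metric with MLflow instead of the SDK v1 run.log() API
+with mlflow.start_run():
+    mlflow.log_param("hello_param", "world")
+    mlflow.log_metric("hello_metric", 0.95)
+```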
> [!TIP] > The information in this document is primarily for data scientists and developers who want to monitor the model training process. If you are an administrator interested in monitoring resource usage and events from Azure Machine Learning, such as quotas, completed training runs, or completed model deployments, see [Monitoring Azure Machine Learning](monitor-azure-machine-learning.md).
See [MLflow and Azure Machine Learning](concept-mlflow.md) for all supported MLf
* See which [access permissions you need to perform your MLflow operations with your workspace](how-to-assign-roles.md#mlflow-operations). * Install and [set up CLI (v2)](how-to-configure-cli.md#prerequisites) and make sure you install the ml extension.-
+* Install and set up the SDK (v2) for Python
## Track runs from your local machine
-MLflow Tracking with Azure Machine Learning lets you store the logged metrics and artifacts runs that were executed on your local machine into your Azure Machine Learning workspace.
+MLflow Tracking with Azure Machine Learning lets you store the logged metrics and artifacts from runs that were executed on your local machine into your Azure Machine Learning workspace.
### Set up tracking environment To track a local run, you need to point your local machine to the Azure Machine Learning MLflow Tracking URI. >[!IMPORTANT]
-> Make sure you are logged in to your Azure account, otherwise the tracking URI returns an empty string.
+> Make sure you are logged in to your Azure account on your local machine; otherwise, the tracking URI returns an empty string. If you are using any Azure ML compute, the tracking environment and experiment name are already configured.
# [MLflow SDK](#tab/mlflow) ++ The following code uses `mlflow` and your Azure Machine Learning workspace details to construct the unique MLFLow tracking URI associated with your workspace. Then the method [`set_tracking_uri()`](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.set_tracking_uri) points the MLflow tracking URI to that URI. ```Python
+from azure.ai.ml import MLClient
+from azure.identity import DefaultAzureCredential
import mlflow
-## Construct AzureML MLFLOW TRACKING URI
-def get_azureml_mlflow_tracking_uri(region, subscription_id, resource_group, workspace):
- return "azureml://{}.api.azureml.ms/mlflow/v1.0/subscriptions/{}/resourceGroups/{}/providers/Microsoft.MachineLearningServices/workspaces/{}".format(region, subscription_id, resource_group, workspace)
+#Enter details of your AML workspace
+subscription_id = '<SUBSCRIPTION_ID>'
+resource_group = '<RESOURCE_GROUP>'
+workspace = '<AML_WORKSPACE_NAME>'
-region='<REGION>' ## example: westus
-subscription_id = '<SUBSCRIPTION_ID>' ## example: 11111111-1111-1111-1111-111111111111
-resource_group = '<RESOURCE_GROUP>' ## example: myresourcegroup
-workspace = '<AML_WORKSPACE_NAME>' ## example: myworkspacename
+#get a handle to the workspace
+ml_client = MLClient(DefaultAzureCredential(), subscription_id, resource_group, workspace)
-MLFLOW_TRACKING_URI = get_azureml_mlflow_tracking_uri(region, subscription_id, resource_group, workspace)
+tracking_uri = ml_client.workspaces.get(name=workspace).mlflow_tracking_uri
-## Set the MLFLOW TRACKING URI
-mlflow.set_tracking_uri(MLFLOW_TRACKING_URI)
+mlflow.set_tracking_uri(tracking_uri)
-## Make sure the MLflow URI looks something like this:
-## azureml://<REGION>.api.azureml.ms/mlflow/v1.0/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.MachineLearningServices/workspaces/<AML_WORKSPACE_NAME>
-
-print("MLFlow Tracking URI:", MLFLOW_TRACKING_URI)
+print(tracking_uri)
``` # [Terminal](#tab/terminal)
All MLflow runs are logged to the active experiment, which can be set with the M
# [MLflow SDK](#tab/mlflow) ++ With MLflow you can use the [`mlflow.set_experiment()`](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.set_experiment) command. ```Python
with mlflow.start_run() as mlflow_run:
mlflow.log_artifact("helloworld.txt") ```
-## Track remote runs with Azure Machine Learning CLI (v2) (preview)
+## Track remote runs with Azure Machine Learning CLI (v2)
Remote runs (jobs) let you train your models on more powerful computes, such as GPU enabled virtual machines, or Machine Learning Compute clusters. See [Use compute targets for model training](how-to-set-up-training-targets.md) to learn about different compute options. MLflow Tracking with Azure Machine Learning lets you store the logged metrics and artifacts from your remote runs into your Azure Machine Learning workspace. Any run with MLflow Tracking code in it logs metrics automatically to the workspace.
-First, you should create a `src` subdirectory and create a file with your training code in a `train.py` file in the `src` subdirectory. All your training code will go into the `src` subdirectory, including `train.py`.
+First, create a `src` subdirectory and add your training code to a `hello_world.py` file inside it. All your training code will go into the `src` subdirectory, including `hello_world.py`.
The training code is taken from this [MLflow example](https://github.com/Azure/azureml-examples/blob/main/cli/jobs/basics/src/hello-mlflow.py) in the Azure Machine Learning example repo. Copy this code into the file:
-```Python
-# imports
-import os
-import mlflow
-
-from random import random
-
-# define functions
-def main():
- mlflow.log_param("hello_param", "world")
- mlflow.log_metric("hello_metric", random())
- os.system(f"echo 'hello world' > helloworld.txt")
- mlflow.log_artifact("helloworld.txt")
--
-# run functions
-if __name__ == "__main__":
- # run main function
- main()
-```
Use the [Azure Machine Learning CLI (v2)](how-to-train-cli.md) to submit a remote run. When using the Azure Machine Learning CLI (v2), the MLflow tracking URI and experiment name are set automatically and directs the logging from MLflow to your workspace. Learn more about [logging Azure Machine Learning CLI (v2) experiments with MLflow](how-to-train-cli.md#model-tracking-with-mlflow) Create a YAML file with your job definition in a `job.yml` file. This file should be created outside the `src` directory. Copy this code into the file:
-```YAML
-$schema: https://azuremlschemas.azureedge.net/latest/commandJob.schema.json
-experiment_name: experiment_with_mlflow
-command: >-
- pip install mlflow azureml-mlflow
- &&
- python train.py
-code:
- local_path: src
-environment:
- image: python:3.8
-compute: azureml:MyCluster
-```
Open your terminal and use the following to submit the job.
az ml job create -f job.yml --web
## View metrics and artifacts in your workspace ++ The metrics and artifacts from MLflow logging are tracked in your workspace. To view them anytime, navigate to your workspace and find the experiment by name in your workspace in [Azure Machine Learning studio](https://ml.azure.com). Or run the below code. Retrieve run metric using MLflow [get_run()](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.get_run).
To register and view a model from a run, use the following steps:
1. Once a run is complete, call the [`register_model()`](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.register_model) method.
+
+ ```Python # the model folder produced from a run is registered. This includes the MLmodel file, model.pkl and the conda.yaml. model_path = "model"
To register and view a model from a run, use the following steps:
![MLmodel-schema](./media/how-to-use-mlflow-cli-runs/mlmodel-view.png)
-## Example notebooks
+## Example files
[Use MLflow and CLI (v2)](https://github.com/Azure/azureml-examples/blob/main/cli/jobs/basics/hello-mlflow.yml)
+## Limitations
+
+The following MLflow methods are not fully supported with Azure Machine Learning.
+
+* `mlflow.tracking.MlflowClient.create_experiment()`
+* `mlflow.tracking.MlflowClient.rename_experiment()`
+* `mlflow.tracking.MlflowClient.search_runs()`
+* `mlflow.tracking.MlflowClient.download_artifacts()`
+* `mlflow.tracking.MlflowClient.rename_registered_model()`
++ ## Next steps * [Deploy MLflow models to managed online endpoint (preview)](how-to-deploy-mlflow-models-online-endpoints.md).
machine-learning How To Use Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow.md
- Title: MLflow Tracking for models-
-description: Set up MLflow Tracking with Azure Machine Learning to log metrics and artifacts from ML models.
----- Previously updated : 10/21/2021----
-# Track ML models with MLflow and Azure Machine Learning
-
-In this article, learn how to enable MLflow's tracking URI and logging API, collectively known as [MLflow Tracking](https://mlflow.org/docs/latest/quickstart.html#using-the-tracking-api), to connect Azure Machine Learning as the backend of your MLflow experiments.
-
-> [!TIP]
-> For a more streamlined experience, see how to [Track experiments with the MLflow SDK or the Azure Machine Learning CLI (v2) (preview)](how-to-use-mlflow-cli-runs.md)
-
-Supported capabilities include:
-
-+ Track and log experiment metrics and artifacts in your [Azure Machine Learning workspace](./concept-azure-machine-learning-architecture.md#workspace). If you already use MLflow Tracking for your experiments, the workspace provides a centralized, secure, and scalable location to store training metrics and models.
-
-+ [Submit training jobs with MLflow Projects with Azure Machine Learning backend support (preview)](how-to-train-mlflow-projects.md). You can submit jobs locally with Azure Machine Learning tracking or migrate your runs to the cloud like via an [Azure Machine Learning Compute](how-to-create-attach-compute-cluster.md).
-
-+ Track and manage models in MLflow and Azure Machine Learning model registry.
-
-[MLflow](https://www.mlflow.org) is an open-source library for managing the life cycle of your machine learning experiments. MLflow Tracking is a component of MLflow that logs and tracks your training run metrics and model artifacts, no matter your experiment's environment--locally on your computer, on a remote compute target, a virtual machine, or an [Azure Databricks cluster](how-to-use-mlflow-azure-databricks.md).
-
-See [MLflow and Azure Machine Learning](concept-mlflow.md) for additional MLflow and Azure Machine Learning functionality integrations.
-
-The following diagram illustrates that with MLflow Tracking, you track an experiment's run metrics and store model artifacts in your Azure Machine Learning workspace.
-
-![mlflow with azure machine learning diagram](./media/how-to-use-mlflow/mlflow-diagram-track.png)
-
-> [!TIP]
-> The information in this document is primarily for data scientists and developers who want to monitor the model training process. If you are an administrator interested in monitoring resource usage and events from Azure Machine Learning, such as quotas, completed training runs, or completed model deployments, see [Monitoring Azure Machine Learning](monitor-azure-machine-learning.md).
-
-> [!NOTE]
-> You can use the [MLflow Skinny client](https://github.com/mlflow/mlflow/blob/master/README_SKINNY.rst) which is a lightweight MLflow package without SQL storage, server, UI, or data science dependencies. This is recommended for users who primarily need the tracking and logging capabilities without importing the full suite of MLflow features including deployments.
-
-## Prerequisites
-
-* Install the `azureml-mlflow` package.
- * This package automatically brings in `azureml-core` of the [The Azure Machine Learning Python SDK](/python/api/overview/azure/ml/install), which provides the connectivity for MLflow to access your workspace.
-* [Create an Azure Machine Learning Workspace](how-to-manage-workspace.md).
- * See which [access permissions you need to perform your MLflow operations with your workspace](how-to-assign-roles.md#mlflow-operations).
-
-## Track local runs
-
-MLflow Tracking with Azure Machine Learning lets you store the logged metrics and artifacts runs that were executed on your local machine into your Azure Machine Learning workspace.
-
-### Set up tracking environment
-
-To track a local run, you need to point your local machine to the Azure Machine Learning MLflow Tracking URI.
-
-Import the `mlflow` and [`Workspace`](/python/api/azureml-core/azureml.core.workspace%28class%29) classes to access MLflow's tracking URI and configure your workspace.
-
-In the following code, the `get_mlflow_tracking_uri()` method assigns a unique tracking URI address to the workspace, `ws`, and `set_tracking_uri()` points the MLflow tracking URI to that address.
-
-```Python
-import mlflow
-from azureml.core import Workspace
-
-ws = Workspace.from_config()
-
-mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri())
-```
-
-### Set experiment name
-
-All MLflow runs are logged to the active experiment, which can be set with the MLflow SDK or Azure CLI.
-
-Set the MLflow experiment name with [`set_experiment()`](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.set_experiment) command.
-
-```Python
-experiment_name = 'experiment_with_mlflow'
-mlflow.set_experiment(experiment_name)
-```
-
-### Start training run
-
-After you set the MLflow experiment name, you can start your training run with `start_run()`. Then use `log_metric()` to activate the MLflow logging API and begin logging your training run metrics.
-
-```Python
-import os
-from random import random
-
-with mlflow.start_run() as mlflow_run:
- mlflow.log_param("hello_param", "world")
- mlflow.log_metric("hello_metric", random())
- os.system(f"echo 'hello world' > helloworld.txt")
- mlflow.log_artifact("helloworld.txt")
-```
-
-## Track remote runs
-
-Remote runs let you train your models on more powerful computes, such as GPU enabled virtual machines, or Machine Learning Compute clusters. See [Use compute targets for model training](how-to-set-up-training-targets.md) to learn about different compute options.
-
-MLflow Tracking with Azure Machine Learning lets you store the logged metrics and artifacts from your remote runs into your Azure Machine Learning workspace. Any run with MLflow Tracking code in it will have metrics logged automatically to the workspace.
-
-First, you should create a `src` subdirectory and create a file with your training code in a `train.py` file in the `src` subdirectory. All your training code will go into the `src` subdirectory, including `train.py`.
-
-The training code is taken from this [MLflow example](https://github.com/Azure/azureml-examples/blob/main/cli/jobs/basics/src/hello-mlflow.py) in the Azure Machine Learning example repo.
-
-Copy this code into the file:
-
-```Python
-# imports
-import os
-import mlflow
-
-from random import random
-
-# define functions
-def main():
- mlflow.log_param("hello_param", "world")
- mlflow.log_metric("hello_metric", random())
- os.system(f"echo 'hello world' > helloworld.txt")
- mlflow.log_artifact("helloworld.txt")
--
-# run functions
-if __name__ == "__main__":
- # run main function
- main()
-```
-
-Load training script to submit an experiement.
-
-```Python
-script_dir = "src"
-training_script = 'train.py'
-with open("{}/{}".format(script_dir,training_script), 'r') as f:
- print(f.read())
-```
-
-In your script, configure your compute and training run environment with the [`Environment`](/python/api/azureml-core/azureml.core.environment.environment) class.
-
-```Python
-from azureml.core import Environment
-from azureml.core.conda_dependencies import CondaDependencies
-
-env = Environment(name="mlflow-env")
-
-# Specify conda dependencies with scikit-learn and temporary pointers to mlflow extensions
-cd = CondaDependencies.create(
- conda_packages=["scikit-learn", "matplotlib"],
- pip_packages=["azureml-mlflow", "pandas", "numpy"]
- )
-
-env.python.conda_dependencies = cd
-```
-
-Then, construct [`ScriptRunConfig`](/python/api/azureml-core/azureml.core.script_run_config.scriptrunconfig) with your remote compute as the compute target.
-
-```Python
-from azureml.core import ScriptRunConfig
-
-src = ScriptRunConfig(source_directory="src",
- script=training_script,
- compute_target="<COMPUTE_NAME>",
- environment=env)
-```
-
-With this compute and training run configuration, use the `Experiment.submit()` method to submit a run. This method automatically sets the MLflow tracking URI and directs the logging from MLflow to your Workspace.
-
-```Python
-from azureml.core import Experiment
-from azureml.core import Workspace
-ws = Workspace.from_config()
-
-experiment_name = "experiment_with_mlflow"
-exp = Experiment(workspace=ws, name=experiment_name)
-
-run = exp.submit(src)
-```
-
-## View metrics and artifacts in your workspace
-
-The metrics and artifacts from MLflow logging are tracked in your workspace. To view them anytime, navigate to your workspace and find the experiment by name in [Azure Machine Learning studio](https://ml.azure.com), or run the code below.
-
-Retrieve the run metrics by using MLflow [get_run()](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.get_run).
-
-```Python
-from mlflow.entities import ViewType
-from mlflow.tracking import MlflowClient
-
-# Retrieve the run ID for the last run of the experiment
-current_experiment=mlflow.get_experiment_by_name(experiment_name)
-runs = mlflow.search_runs(experiment_ids=current_experiment.experiment_id, run_view_type=ViewType.ALL)
-run_id = runs.tail(1)["run_id"].tolist()[0]
-
-# Use MLflow to retrieve the run that was just completed
-client = MlflowClient()
-finished_mlflow_run = client.get_run(run_id)
-
-metrics = finished_mlflow_run.data.metrics
-tags = finished_mlflow_run.data.tags
-params = finished_mlflow_run.data.params
-
-print(metrics,tags,params)
-```
-
-### Retrieve artifacts with MLflow
-
-To view the artifacts of a run, you can use [MlflowClient.list_artifacts()](https://mlflow.org/docs/latest/python_api/mlflow.tracking.html#mlflow.tracking.MlflowClient.list_artifacts).
-
-```Python
-client.list_artifacts(run_id)
-```
-
-To download an artifact to the current directory, you can use [MlflowClient.download_artifacts()](https://www.mlflow.org/docs/latest/python_api/mlflow.tracking.html#mlflow.tracking.MlflowClient.download_artifacts).
-
-```Python
-client.download_artifacts(run_id, "helloworld.txt", ".")
-```
-
-### Compare and query
-
-Compare and query all MLflow runs in your Azure Machine Learning workspace with the following code.
-[Learn more about how to query runs with MLflow](https://mlflow.org/docs/latest/search-syntax.html#programmatically-searching-runs).
-
-```Python
-from mlflow.entities import ViewType
-
-all_experiments = [exp.experiment_id for exp in MlflowClient().list_experiments()]
-query = "metrics.hello_metric > 0"
-runs = mlflow.search_runs(experiment_ids=all_experiments, filter_string=query, run_view_type=ViewType.ALL)
-
-runs.head(10)
-```
-
-## Automatic logging
-With Azure Machine Learning and MLflow, users can log metrics, model parameters, and model artifacts automatically when training a model. A [variety of popular machine learning libraries](https://mlflow.org/docs/latest/tracking.html#automatic-logging) are supported.
-
-To enable [automatic logging](https://mlflow.org/docs/latest/tracking.html#automatic-logging) insert the following code before your training code:
-
-```Python
-mlflow.autolog()
-```
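-
-For example, the following is a minimal sketch of automatic logging with a scikit-learn model. It assumes scikit-learn is installed and that the MLflow tracking URI already points to your Azure Machine Learning workspace, as configured earlier in this article:
-
-```Python
-import mlflow
-from sklearn.datasets import load_diabetes
-from sklearn.linear_model import Ridge
-
-# Enable automatic logging before any training code runs
-mlflow.autolog()
-
-X, y = load_diabetes(return_X_y=True)
-
-with mlflow.start_run():
-    # Parameters, metrics, and the fitted model are logged automatically by autolog
-    model = Ridge(alpha=0.5)
-    model.fit(X, y)
-```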
-
-[Learn more about Automatic logging with MLflow](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.autolog).
-
-## Manage models
-
-Register and track your models with the [Azure Machine Learning model registry](concept-model-management-and-deployment.md#register-package-and-deploy-models-from-anywhere), which supports the MLflow model registry. Azure Machine Learning models are aligned with the MLflow model schema, making it easy to export and import these models across different workflows. MLflow-related metadata, such as the run ID, is also tagged with the registered model for traceability. Users can submit training runs, and register and deploy models produced from MLflow runs.
-
-If you want to deploy and register your production ready model in one step, see [Deploy and register MLflow models](how-to-deploy-mlflow-models.md).
-
-To register and view a model from a run, use the following steps:
-
-1. Once a run is complete, call the [`register_model()`](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.register_model) method.
-
- ```Python
- # the model folder produced from a run is registered. This includes the MLmodel file, model.pkl and the conda.yaml.
- model_path = "model"
- model_uri = 'runs:/{}/{}'.format(run_id, model_path)
- mlflow.register_model(model_uri,"registered_model_name")
- ```
-
-1. View the registered model in your workspace with [Azure Machine Learning studio](overview-what-is-machine-learning-studio.md).
-
- In the following example, the registered model `my-model` has MLflow tracking metadata tagged.
-
- ![register-mlflow-model](./media/how-to-use-mlflow/registered-mlflow-model.png)
-
-1. Select the **Artifacts** tab to see all the model files that align with the MLflow model schema (conda.yaml, MLmodel, model.pkl).
-
- ![model-schema](./media/how-to-use-mlflow/mlflow-model-schema.png)
-
-1. Select MLmodel to see the MLmodel file generated by the run.
-
- ![MLmodel-schema](./media/how-to-use-mlflow/mlmodel-view.png)
--
-## Clean up resources
-
-The ability to delete individual logged metrics and artifacts is currently unavailable. If you don't plan to use them, delete the resource group that contains the storage account and workspace so that you don't incur any charges:
-
-1. In the Azure portal, select **Resource groups** on the far left.
-
- ![Delete in the Azure portal](./media/how-to-use-mlflow/delete-resources.png)
-
-1. From the list, select the resource group you created.
-
-1. Select **Delete resource group**.
-
-1. Enter the resource group name. Then select **Delete**.
-
-## Example notebooks
-
-The [MLflow with Azure ML notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/track-and-monitor-experiments/using-mlflow) demonstrate and expand upon concepts presented in this article. Also see the community-driven repository, [AzureML-Examples](https://github.com/Azure/azureml-examples).
-
-## Next steps
-
-* [Deploy models with MLflow](how-to-deploy-mlflow-models.md).
-* Monitor your production models for [data drift](./how-to-enable-data-collection.md).
-* [Track Azure Databricks runs with MLflow](how-to-use-mlflow-azure-databricks.md).
-* [Manage your models](concept-model-management-and-deployment.md).
machine-learning How To Use Pipeline Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-pipeline-ui.md
+
+ Title: 'How to use studio UI to build and debug Machine Learning pipelines'
+
+description: Learn how to build, debug, clone, and compare V2 pipeline with the studio UI.
+++++++ Last updated : 05/10/2022+++
+# How to use studio UI to build and debug Azure Machine Learning pipelines
+
+Azure Machine Learning studio provides a UI to build and debug your pipelines. You can use components to author a pipeline in the designer, and you can debug your pipeline on the job detail page.
+
+This article introduces how to use the studio UI to build and debug machine learning pipelines.
+
+## Build machine learning pipeline
+
+### Drag and drop components to build pipeline
+
+In the designer homepage, you can select **New pipeline** to open a blank pipeline draft.
+
+In the asset library to the left of the canvas, there are **Data assets** and **Components** tabs, which contain components and data registered to the workspace. To learn what a component is and how to create a custom component, see the [component concept article](concept-component.md).
+
+You can quickly filter **My assets** or **Designer built-in assets**.
++
+Then you can drag and drop either built-in components or custom components to the canvas. You can construct your pipeline or configure your components in any order. Hide the right pane to construct your pipeline first, and then open the right pane to configure each component.
+
+> [!NOTE]
+> Currently built-in components and custom components cannot be used together.
++
+### Submit pipeline
+
+Now you've built your pipeline. Select the **Submit** button above the canvas, and configure your pipeline job.
++
+After you submit your pipeline job, you'll see a submitted job list in the left pane, which shows all the pipeline jobs you've created from the current pipeline draft in the same session. A notification also pops up from the notification center. You can select the pipeline job link in the submission list or in the notification to check the pipeline job status or to debug it.
+
+> [!NOTE]
+> Pipeline job status and results will not be filled back to the authoring page.
+
+If you want to try a few different parameter values for the same pipeline, you can change the values and submit multiple times, without having to wait for the running status.
++
+> [!NOTE]
+> The submission list only contains jobs submitted in the same session.
+> If you refresh the current page, it will not preserve the previously submitted job list.
+
+On the pipeline job detail page, you can check the status of the overall job and of each node inside it, and view the logs of each node.
++
+## Debug your pipeline in the job detail page
+
+### Using outline to quickly find node
+
+On the pipeline job detail page, there's an outline to the left of the canvas, which shows the overall structure of your pipeline job. Hover over any row and select the **Locate** button to locate that node in the canvas.
++
+You can filter for failed or completed nodes, and filter by components or datasets only to narrow the search. The left pane shows the matched nodes with more information, including status, duration, and created time.
++
+You can also sort the filtered nodes.
++
+### Check logs and outputs of component
+
+If your pipeline fails or gets stuck on a node, first view the logs.
+
+1. Select the specific node to open the right pane.
+
+1. Select the **Outputs+logs** tab to explore all the outputs and logs of this node.
+
+ The **user_logs folder** contains logs generated by your user code. This folder is open by default, and the **std_log.txt** log is selected. The **std_log.txt** file is where your code's logs (for example, print statements) show up.
+
+ The **system_logs folder** contains logs generated by Azure Machine Learning. Learn more about [how to view and download log files for a run](how-to-log-view-metrics.md#view-and-download-log-files-for-a-run).
+
+ :::image type="content" source="./media/how-to-use-pipeline-ui/view-user-log.png" alt-text="Screenshot showing the user logs of a node." lightbox= "./media/how-to-use-pipeline-ui/view-user-log.png":::
+
+ If you don't see those folders, the compute runtime update hasn't been released to the compute cluster yet. In that case, you can look at **70_driver_log.txt** under the **azureml-logs** folder instead.
+
+ :::image type="content" source="./media/how-to-use-pipeline-ui/view-driver-logs.png" alt-text="Screenshot showing the driver logs of a node." lightbox= "./media/how-to-use-pipeline-ui/view-driver-logs.png":::
+
+## Clone a pipeline job to continue editing
+
+If you would like to work based on an existing pipeline job in the workspace, you can easily clone it into a new pipeline draft to continue editing.
++
+After cloning, you can also see which pipeline job it was cloned from by selecting **Show lineage**.
++
+You can edit your pipeline and then submit it again. After submitting, you can see the lineage between the job you submitted and the original job by selecting **Show lineage** on the job detail page.
+
+## Next steps
+
+In this article, you learned the key features for creating, exploring, and debugging a pipeline in the studio UI. To learn more about how you can use the pipeline, see the following articles:
+++ [How to train a model in the designer](tutorial-designer-automobile-price-train-score.md)++ [How to deploy model to real-time endpoint in the designer](tutorial-designer-automobile-price-deploy.md)++ [What is machine learning component](concept-component.md)
machine-learning How To Use Private Python Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-private-python-packages.md
Last updated 10/21/2021-
-## As a developer, I need to use private Python packages securely when training machine learning models.
-+ # Use private Python packages with Azure Machine Learning In this article, learn how to use private Python packages securely within Azure Machine Learning. Use cases for private Python packages include:
After completing these configurations, you can reference the packages in the Azu
## Next steps
- * Learn more about [enterprise security in Azure Machine Learning](concept-enterprise-security.md)
+ * Learn more about [enterprise security in Azure Machine Learning](concept-enterprise-security.md)
machine-learning How To Use Reinforcement Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-reinforcement-learning.md
Last updated 10/21/2021 --+ # Reinforcement learning (preview) with Azure Machine Learning - > [!WARNING] > Azure Machine Learning reinforcement learning via the [`azureml.contrib.train.rl`](/python/api/azureml-contrib-reinforcementlearning/azureml.contrib.train.rl) package will no longer be supported after June 2022. We recommend customers use the [Ray on Azure Machine Learning library](https://github.com/microsoft/ray-on-aml) for reinforcement learning experiments with Azure Machine Learning. For an example, see the notebook [Reinforcement Learning in Azure Machine Learning - Pong problem](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/reinforcement-learning/atari-on-distributed-compute/pong_rllib.ipynb).
machine-learning How To Use Secrets In Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-secrets-in-runs.md
Last updated 10/21/2021 -+ # Use authentication credential secrets in Azure Machine Learning training runs + In this article, you learn how to use secrets in training runs securely. Authentication information such as your user name and password are secrets. For example, if you connect to an external database in order to query training data, you would need to pass your username and password to the remote run context. Coding such values into training scripts in cleartext is insecure as it would expose the secret. Instead, your Azure Machine Learning workspace has an associated resource called a [Azure Key Vault](../key-vault/general/overview.md). Use this Key Vault to pass secrets to remote runs securely through a set of APIs in the Azure Machine Learning Python SDK.
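A minimal sketch of this pattern with the v1 `azureml-core` SDK is shown below; the secret name `mysecret` and its value are placeholders, and the Key Vault referenced is the workspace's default Key Vault:

```python
# On your local machine or notebook: store a secret in the workspace's default Key Vault.
from azureml.core import Workspace

ws = Workspace.from_config()
keyvault = ws.get_default_keyvault()
keyvault.set_secret(name="mysecret", value="<your-secret-value>")  # placeholder name and value

# Inside the remote training script: retrieve the secret from the submitted run's context.
from azureml.core import Run

run = Run.get_context()
secret_value = run.get_secret(name="mysecret")  # avoid printing or logging the returned value
```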
There is also a batch version, [get_secrets()](/python/api/azureml-core/azureml.
## Next steps * [View example notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/manage-azureml-service/authentication-in-azureml/authentication-in-azureml.ipynb)
- * [Learn about enterprise security with Azure Machine Learning](concept-enterprise-security.md)
+ * [Learn about enterprise security with Azure Machine Learning](concept-enterprise-security.md)
machine-learning How To Use Sweep In Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-sweep-in-pipeline.md
+
+ Title: How to do hyperparameter sweep in pipeline
+
+description: How to use sweep to do hyperparameter tuning in Azure Machine Learning pipeline using CLI v2 and Python SDK
++++++ Last updated : 05/10/2022+++
+# How to do hyperparameter tuning in pipeline (V2) (preview)
++
+In this article, you'll learn how to do hyperparameter tuning in an Azure Machine Learning pipeline.
+
+## Prerequisites
+
+1. Understand what [hyperparameter tuning](how-to-tune-hyperparameters.md) is and how to do hyperparameter tuning in Azure Machine Learning using SweepJob.
+2. Understand what an [Azure Machine Learning pipeline](concept-ml-pipelines.md) is.
+3. Build a command component that takes hyperparameters as inputs (see the sketch after this list).
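+
+For the third prerequisite, the following is a minimal sketch of a trial script (not the exact `train.py` from the example repo): it takes hyperparameters as command-line arguments and logs the metric that the sweep optimizes. The argument names and the metric name `training_f1_score` are illustrative assumptions; align them with your own component definition and `primary_metric`.
+
+```python
+# train.py (sketch): hyperparameters arrive as command-line arguments
+import argparse
+
+import mlflow
+from sklearn.datasets import load_iris
+from sklearn.metrics import f1_score
+from sklearn.model_selection import train_test_split
+from sklearn.svm import SVC
+
+parser = argparse.ArgumentParser()
+parser.add_argument("--c_value", type=float, default=1.0)
+parser.add_argument("--kernel", type=str, default="rbf")
+parser.add_argument("--coef0", type=float, default=0.0)
+args = parser.parse_args()
+
+# Log parameters, metrics, and the model automatically
+mlflow.autolog()
+
+with mlflow.start_run():
+    X, y = load_iris(return_X_y=True)
+    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
+
+    model = SVC(C=args.c_value, kernel=args.kernel, coef0=args.coef0)
+    model.fit(X_train, y_train)
+
+    # The metric name must match the sweep's primary_metric (assumed here).
+    f1 = f1_score(y_test, model.predict(X_test), average="macro")
+    mlflow.log_metric("training_f1_score", f1)
+```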
+
+## How to do hyperparameter tuning in an Azure Machine Learning pipeline
+
+This section explains how to do hyperparameter tuning in an Azure Machine Learning pipeline using CLI v2 and the Python SDK. Both approaches share the same prerequisite: you already have a command component created, and the command component takes hyperparameters as inputs. If you don't have a command component yet, follow the links below to create one first.
+
+- [AzureML CLI v2](how-to-create-component-pipelines-cli.md)
+- [AzureML Python SDK v2](how-to-create-component-pipeline-python.md)
+
+### CLI v2
+
+The example used in this article can be found in the [azureml-examples repo](https://github.com/Azure/azureml-examples). Navigate to *azureml-examples/cli/jobs/pipelines-with-components/pipeline_with_hyperparameter_sweep* to check the example.
+
+Assume you already have a command component defined in `train.yaml`. A two-step pipeline job (train and predict) YAML file looks like below.
++
+The `sweep_step` is the step for hyperparameter tuning. Its type needs to be `sweep`, and `trial` refers to the command component defined in `train.yaml`. From the `search space` field, we can see that three hyperparameters (`c_value`, `kernel`, and `coef`) are added to the search space. After you submit this pipeline job, Azure Machine Learning runs the trial component multiple times to sweep over hyperparameters, based on the search space and termination policy you defined in `sweep_step`. See the [sweep job YAML schema](reference-yaml-job-sweep.md) for the full schema of a sweep job.
+
+Below is the trial component definition (train.yml file).
++
+The hyperparameters added to the search space in pipeline.yml need to be inputs for the trial component. The source code of the trial component is under the `./train-src` folder. In this example, it's a single `train.py` file. This is the code that will be executed in every trial of the sweep job. Make sure you've logged the metrics in the trial component source code with exactly the same name as the `primary_metric` value in the pipeline.yml file. In this example, we use `mlflow.autolog()`, which is the recommended way to track your ML experiments. See more about MLflow [here](./how-to-use-mlflow-cli-runs.md).
+
+The code snippet below is the source code of the trial component.
++
+### Python SDK
+
+The Python SDK example can be found in the [azureml-examples repo](https://github.com/Azure/azureml-examples). Navigate to *azureml-examples/sdk/jobs/pipelines/1c_pipeline_with_hyperparameter_sweep* to check the example.
+
+In Azure Machine Learning Python SDK v2, you can enable hyperparameter tuning for any command component by calling the `.sweep()` method.
+
+The code snippet below shows how to enable sweep for `train_model`.
+
+[!notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/pipelines/1c_pipeline_with_hyperparameter_sweep/pipeline_with_hyperparameter_sweep.ipynb?name=enable-sweep)]
+
+We first load `train_component_func`, which is defined in the `train.yml` file. When creating `train_model`, we add `c_value`, `kernel`, and `coef0` to the search space (lines 15-17). Lines 30-35 define the primary metric, sampling algorithm, and so on.
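+
+If you want to see the overall shape of this pattern outside the notebook, the following is a minimal sketch using the `azure-ai-ml` (SDK v2 preview) API; it is not the exact notebook code. It assumes `train_component_func` has already been loaded from `train.yml` (for example with `load_component`), that the component has inputs named `data`, `c_value`, `kernel`, and `coef0`, and that the logged metric is named `training_f1_score`:
+
+```python
+# Sketch only: adjust input names, the metric name, and the compute to your own setup.
+from azure.ai.ml import dsl
+from azure.ai.ml.sweep import Choice, Uniform
+
+@dsl.pipeline(compute="cpu-cluster")  # assumed compute cluster name
+def pipeline_with_hyperparameter_sweep(pipeline_input_data):
+    # Bind search-space expressions to the hyperparameter inputs of the trial component.
+    train_model = train_component_func(
+        data=pipeline_input_data,
+        c_value=Uniform(min_value=0.5, max_value=0.9),
+        kernel=Choice(values=["rbf", "linear", "poly"]),
+        coef0=Uniform(min_value=0.1, max_value=1),
+    )
+
+    # Turn the component invocation into a sweep step.
+    sweep_step = train_model.sweep(
+        primary_metric="training_f1_score",  # must match the metric logged in train.py
+        goal="maximize",
+        sampling_algorithm="random",
+    )
+    sweep_step.set_limits(max_total_trials=20, max_concurrent_trials=10, timeout=7200)
+```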
+
+## Check pipeline job with sweep step in Studio
+
+After you submit a pipeline job, the SDK or CLI widget gives you a web URL link to the studio UI. The link takes you to the pipeline graph view by default.
+
+To check details of the sweep step, double-click the sweep step and navigate to the **child run** tab in the panel on the right.
++
+This links you to the sweep job page, as seen in the screenshot below. Navigate to the **child run** tab, where you can see the metrics and a list of all child runs.
++
+If a child run failed, select the name of that child run to open the detail page of that specific child run (see screenshot below). The useful debug information is under **Outputs + Logs**.
++
+## Sample notebooks
+
+- [Build pipeline with sweep node](https://github.com/Azure/azureml-examples/blob/sdk-preview/sdk/jobs/pipelines/1c_pipeline_with_hyperparameter_sweep/pipeline_with_hyperparameter_sweep.ipynb)
+- [Run hyperparameter sweep on a command job](https://github.com/Azure/azureml-examples/blob/sdk-preview/sdk/jobs/single-step/lightgbm/iris/lightgbm-iris-sweep.ipynb)
+
+## Next steps
+
+- [Track an experiment](how-to-log-view-metrics.md)
+- [Deploy a trained model](how-to-deploy-managed-online-endpoints.md)
machine-learning How To Use Synapsesparkstep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-synapsesparkstep.md
Last updated 10/21/2021 --
-# Customer intent: As a user of both Azure Machine Learning pipelines and Azure Synapse Analytics, I'd like to use Apache Spark for the data preparation of my pipeline
-+
+#Customer intent: As a user of both Azure Machine Learning pipelines and Azure Synapse Analytics, I'd like to use Apache Spark for the data preparation of my pipeline
# How to use Apache Spark (powered by Azure Synapse Analytics) in your machine learning pipeline (preview)
-In this article, you'll learn how to use Apache Spark pools powered by Azure Synapse Analytics as the compute target for a data preparation step in an Azure Machine Learning pipeline. You'll learn how a single pipeline can use compute resources suited for the specific step, such as data preparation or training. You'll see how data is prepared for the Spark step and how it's passed to the next step.
++ [!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)]
+In this article, you'll learn how to use Apache Spark pools powered by Azure Synapse Analytics as the compute target for a data preparation step in an Azure Machine Learning pipeline. You'll learn how a single pipeline can use compute resources suited for the specific step, such as data preparation or training. You'll see how data is prepared for the Spark step and how it's passed to the next step.
+++ ## Prerequisites * Create an [Azure Machine Learning workspace](how-to-manage-workspace.md) to hold all your pipeline resources.
machine-learning How To Version Track Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-version-track-datasets.md
Last updated 10/21/2021 --
-# Customer intent: As a data scientist, I want to version and track datasets so I can use and share them across multiple machine learning experiments.
+
+#Customer intent: As a data scientist, I want to version and track datasets so I can use and share them across multiple machine learning experiments.
# Version and track Azure Machine Learning datasets + In this article, you'll learn how to version and track Azure Machine Learning datasets for reproducibility. Dataset versioning is a way to bookmark the state of your data so that you can apply a specific version of the dataset for future experiments. Typical versioning scenarios:
For this tutorial, you need:
ws = Workspace.from_config() ```-- An [Azure Machine Learning dataset](how-to-create-register-datasets.md).
+- An [Azure Machine Learning dataset](./v1/how-to-create-register-datasets.md).
<a name="register"></a>
The following view is from the **Datasets** pane under **Assets**. Select the da
## Next steps * [Train with datasets](how-to-train-with-datasets.md)
-* [More sample dataset notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/work-with-data/)
+* [More sample dataset notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/work-with-data/)
machine-learning How To View Online Endpoints Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-view-online-endpoints-costs.md
Last updated 05/03/2021 -+ # View costs for an Azure Machine Learning managed online endpoint (preview) Learn how to view costs for a managed online endpoint (preview). Costs for your endpoints will accrue to the associated workspace. You can see costs for a specific endpoint using tags. + > [!IMPORTANT] > This article only applies to viewing costs for Azure Machine Learning managed online endpoints (preview). Managed online endpoints are different from other resources since they must use tags to track costs. For more information on viewing the costs of other Azure resources, see [Quickstart: Explore and analyze costs with cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md).
Create a tag filter to show your managed online endpoint and/or managed online d
- [What are endpoints?](concept-endpoints.md) - Learn how to [monitor your managed online endpoint](./how-to-monitor-online-endpoints.md). - [How to deploy managed online endpoints with the Azure CLI](how-to-deploy-managed-online-endpoints.md)-- [How to deploy managed online endpoints with the studio](how-to-use-managed-online-endpoint-studio.md)
+- [How to deploy managed online endpoints with the studio](how-to-use-managed-online-endpoint-studio.md)
machine-learning How To Workspace Diagnostic Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-workspace-diagnostic-api.md
Last updated 11/18/2021 -+ # How to use workspace diagnostics
After diagnostics run, a list of any detected problems is returned. This list in
The following snippet demonstrates how to use workspace diagnostics from Python + ```python from azureml.core import Workspace
machine-learning Migrate Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-overview.md
description: Migrate from Studio (classic) to Azure Machine Learning for a moder
+
After you've defined a strategy, migrate your first model.
1. Use the designer to [redeploy web services](migrate-rebuild-web-service.md). >[!NOTE]
- > Azure Machine Learning also supports code-first workflows for migrating [datasets](how-to-create-register-datasets.md), [training](how-to-set-up-training-targets.md), and [deployment](how-to-deploy-and-where.md).
+ > The guidance above is built on top of AzureML v1 concepts and features. AzureML has CLI v2 and Python SDK v2. We suggest rebuilding your ML Studio (classic) models using v2 instead of v1. Start with AzureML v2 [here](./concept-v2.md)
## Step 4: Integrate client apps
In Studio (classic), **datasets** were saved in your workspace and could only be
![automobile-price-classic-dataset](./media/migrate-overview/studio-classic-dataset.png)
-In Azure Machine Learning, **datasets** are registered to the workspace and can be used across all of Azure Machine Learning. For more information on the benefits of Azure Machine Learning datasets, see [Secure data access](concept-data.md#reference-data-in-storage-with-datasets).
+In Azure Machine Learning, **datasets** are registered to the workspace and can be used across all of Azure Machine Learning. For more information on the benefits of Azure Machine Learning datasets, see [Secure data access](./v1/concept-data.md).
![automobile-price-aml-dataset](./media/migrate-overview/aml-dataset.png)
Studio (classic) used **REQUEST/RESPOND API** for real-time prediction and **BAT
![automobile-price-classic-webservice](./media/migrate-overview/studio-classic-web-service.png)
-Azure Machine Learning uses **real-time endpoints** for real-time prediction and **pipeline endpoints** for batch prediction or retraining.
+Azure Machine Learning uses **real-time endpoints** (managed endpoints) for real-time prediction and **pipeline endpoints** for batch prediction or retraining.
![automobile-price-aml-endpoint](./media/migrate-overview/aml-endpoint.png)
machine-learning Migrate Rebuild Web Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-rebuild-web-service.md
description: Rebuild Studio (classic) web services as pipeline endpoints in Azur
+
There are multiple ways to deploy a model in Azure Machine Learning. One of the
| Compute target | Used for | Description | Creation | | -- | -- | -- | -- |
- |[Azure Kubernetes Service (AKS)](how-to-deploy-azure-kubernetes-service.md) |Real-time inference|Large-scale, production deployments. Fast response time and service autoscaling.| User-created. For more information, see [Create compute targets](how-to-create-attach-compute-studio.md#inference-clusters). |
- |[Azure Container Instances](how-to-deploy-azure-container-instance.md)|Testing or development | Small-scale, CPU-based workloads that require less than 48 GB of RAM.| Automatically created by Azure Machine Learning.
+ |[Azure Kubernetes Service (AKS)](v1/how-to-deploy-azure-kubernetes-service.md) |Real-time inference|Large-scale, production deployments. Fast response time and service autoscaling.| User-created. For more information, see [Create compute targets](how-to-create-attach-compute-studio.md#inference-clusters). |
+ |[Azure Container Instances](v1/how-to-deploy-azure-container-instance.md)|Testing or development | Small-scale, CPU-based workloads that require less than 48 GB of RAM.| Automatically created by Azure Machine Learning.
### Test the real-time endpoint
machine-learning Migrate Register Dataset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-register-dataset.md
description: Rebuild Studio (classic) datasets in Azure Machine Learning designe
+
You have three options to migrate a dataset to Azure Machine Learning. Read each
|Cloud storage | Option 2: [Register a dataset from a cloud source](#import-data-from-cloud-sources). <br><br> Option 3: [Use the Import Data module to get data from a cloud source](#import-data-from-cloud-sources). | > [!NOTE]
-> Azure Machine Learning also supports [code-first workflows](how-to-create-register-datasets.md) for creating and managing datasets.
+> Azure Machine Learning also supports [code-first workflows](./v1/how-to-create-register-datasets.md) for creating and managing datasets.
## Prerequisites
machine-learning Overview What Happened To Workbench https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/overview-what-happened-to-workbench.md
description: Azure Machine Learning is an integrated data science solution to mo
--++ Last updated 03/05/2020 # What happened to Azure Machine Learning Workbench?
-The Azure Machine Learning Workbench application and some other early features were deprecated and replaced in the **September 2018** release to make way for an improved [architecture](concept-azure-machine-learning-architecture.md).
+The Azure Machine Learning Workbench application and some other early features were deprecated and replaced in the **September 2018** release to make way for an improved [architecture](v1/concept-azure-machine-learning-architecture.md).
-To improve your experience, the release contains many significant updates prompted by customer feedback. The core functionality from experiment runs to model deployment hasn't changed. But now, you can use the robust <a href="/python/api/overview/azure/ml/intro" target="_blank">Python SDK</a>, and the [Azure CLI](reference-azure-machine-learning-cli.md) to accomplish your machine learning tasks and pipelines.
+To improve your experience, the release contains many significant updates prompted by customer feedback. The core functionality from experiment runs to model deployment hasn't changed. But now, you can use the robust <a href="/python/api/overview/azure/ml/intro" target="_blank">Python SDK</a>, and the [Azure CLI](v1/reference-azure-machine-learning-cli.md) to accomplish your machine learning tasks and pipelines.
Most of the artifacts that were created in the earlier version of Azure Machine Learning are stored in your own local or cloud storage. These artifacts won't ever disappear.
In this article, you learn about what changed and how it affects your pre-existi
## What changed? The latest release of Azure Machine Learning includes the following features:
-+ A [simplified Azure resources model](concept-azure-machine-learning-architecture.md).
++ A [simplified Azure resources model](v1/concept-azure-machine-learning-architecture.md). + A [new portal UI](how-to-log-view-metrics.md) to manage your experiments and compute targets. + A new, more comprehensive Python <a href="/python/api/overview/azure/ml/intro" target="_blank">SDK</a>.
-+ The new expanded [Azure CLI extension](reference-azure-machine-learning-cli.md) for machine learning.
++ The new expanded [Azure CLI extension](v1/reference-azure-machine-learning-cli.md) for machine learning.
-The [architecture](concept-azure-machine-learning-architecture.md) was redesigned for ease of use. Instead of multiple Azure resources and accounts, you only need an [Azure Machine Learning Workspace](concept-workspace.md). You can create workspaces quickly in the [Azure portal](how-to-manage-workspace.md). By using a workspace, multiple users can store training and deployment compute targets, model experiments, Docker images, deployed models, and so on.
+The [architecture](v1/concept-azure-machine-learning-architecture.md) was redesigned for ease of use. Instead of multiple Azure resources and accounts, you only need an [Azure Machine Learning Workspace](concept-workspace.md). You can create workspaces quickly in the [Azure portal](how-to-manage-workspace.md). By using a workspace, multiple users can store training and deployment compute targets, model experiments, Docker images, deployed models, and so on.
Although there are new improved CLI and SDK clients in the current release, the desktop workbench application itself has been retired. Experiments can be managed in the [workspace dashboard in Azure Machine Learning studio](how-to-log-view-metrics.md#view-the-experiment-in-the-web-portal). Use the dashboard to get your experiment history, manage the compute targets attached to your workspace, manage your models and Docker images, and even deploy web services.
Although there are new improved CLI and SDK clients in the current release, the
On January 9th, 2019 support for Machine Learning Workbench, Azure Machine Learning Experimentation and Model Management accounts, and their associated SDK and CLI ended.
-All the latest capabilities are available by using this <a href="/python/api/overview/azure/ml/intro" target="_blank">SDK</a>, the [CLI](reference-azure-machine-learning-cli.md), and the [portal](how-to-manage-workspace.md).
+All the latest capabilities are available by using this <a href="/python/api/overview/azure/ml/intro" target="_blank">SDK</a>, the [CLI](v1/reference-azure-machine-learning-cli.md), and the [portal](how-to-manage-workspace.md).
## What about run histories?
Start training your models and tracking the run histories using the new CLI and
## Will projects persist?
-You won't lose any code or work. In the older version, projects are cloud entities with a local directory. In the latest version, you attach local directories to the Azure Machine Learning workspace by using a local config file. See a [diagram of the latest architecture](concept-azure-machine-learning-architecture.md).
+You won't lose any code or work. In the older version, projects are cloud entities with a local directory. In the latest version, you attach local directories to the Azure Machine Learning workspace by using a local config file. See a [diagram of the latest architecture](v1/concept-azure-machine-learning-architecture.md).
Much of the project content was already on your local machine. So you just need to create a config file in that directory and reference it in your code to connect to your workspace. To continue using the local directory containing your files and scripts, specify the directory's name in the ['experiment.submit'](/python/api/azureml-core/azureml.core.experiment.experiment) Python command or using the `az ml project attach` CLI command. For example:+++ ```python run = exp.submit(source_directory=script_folder, script='train.py', run_config=run_config_system_managed)
Learn more in these articles:
## Next steps
-Learn about the [latest architecture for Azure Machine Learning](concept-azure-machine-learning-architecture.md).
+Learn about the [latest architecture for Azure Machine Learning](v1/concept-azure-machine-learning-architecture.md).
For an overview of the service, read [What is Azure Machine Learning?](overview-what-is-azure-machine-learning.md).
machine-learning Overview What Is Azure Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/overview-what-is-azure-machine-learning.md
Last updated 08/03/2021-+ adobe-target: true
Developers find familiar interfaces in Azure Machine Learning, such as:
- [Python SDK](/python/api/overview/azure/ml/) - [Azure Resource Manager REST APIs (preview)](/rest/api/azureml/)-- [CLI v2 (preview)](/cli/azure/ml)
+- [CLI v2 ](/cli/azure/ml)
### Studio UI
machine-learning Reference Azure Machine Learning Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-azure-machine-learning-cli.md
- Title: 'Install & use Azure Machine Learning CLI'
-description: Learn how to use the Azure CLI extension for ML to create & manage resources such as your workspace, datastores, datasets, pipelines, models, and deployments.
------- Previously updated : 04/02/2021---
-# Install & use the CLI extension for Azure Machine Learning
---
-The Azure Machine Learning CLI is an extension to the [Azure CLI](/cli/azure/), a cross-platform command-line interface for the Azure platform. This extension provides commands for working with Azure Machine Learning. It allows you to automate your machine learning activities. The following list provides some example actions that you can do with the CLI extension:
-
-+ Run experiments to create machine learning models
-
-+ Register machine learning models for customer usage
-
-+ Package, deploy, and track the lifecycle of your machine learning models
-
-The CLI is not a replacement for the Azure Machine Learning SDK. It is a complementary tool that is optimized to handle highly parameterized tasks which suit themselves well to automation.
-
-## Prerequisites
-
-* To use the CLI, you must have an Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
-
-* To use the CLI commands in this document from your **local environment**, you need the [Azure CLI](/cli/azure/install-azure-cli).
-
- If you use the [Azure Cloud Shell](https://azure.microsoft.com/features/cloud-shell/), the CLI is accessed through the browser and lives in the cloud.
-
-## Full reference docs
-
-Find the [full reference docs for the azure-cli-ml extension of Azure CLI](/cli/azure/ml(v1)/).
-
-## Connect the CLI to your Azure subscription
-
-> [!IMPORTANT]
-> If you are using the Azure Cloud Shell, you can skip this section. The cloud shell automatically authenticates you using the account you log into your Azure subscription.
-
-There are several ways that you can authenticate to your Azure subscription from the CLI. The most basic is to interactively authenticate using a browser. To authenticate interactively, open a command line or terminal and use the following command:
-
-```azurecli-interactive
-az login
-```
-
-If the CLI can open your default browser, it will do so and load a sign-in page. Otherwise, you need to open a browser and follow the instructions on the command line. The instructions involve browsing to [https://aka.ms/devicelogin](https://aka.ms/devicelogin) and entering an authorization code.
--
-For other methods of authenticating, see [Sign in with Azure CLI](/cli/azure/authenticate-azure-cli).
-
-## Install the extension
-
-To install the CLI (v1) extension:
-```azurecli-interactive
-az extension add -n azure-cli-ml
-```
-
-## Update the extension
-
-To update the Machine Learning CLI extension, use the following command:
-
-```azurecli-interactive
-az extension update -n azure-cli-ml
-```
-
-## Remove the extension
-
-To remove the CLI extension, use the following command:
-
-```azurecli-interactive
-az extension remove -n azure-cli-ml
-```
-
-## Resource management
-
-The following commands demonstrate how to use the CLI to manage resources used by Azure Machine Learning.
-
-+ If you do not already have one, create a resource group:
-
- ```azurecli-interactive
- az group create -n myresourcegroup -l westus2
- ```
-
-+ Create an Azure Machine Learning workspace:
-
- ```azurecli-interactive
- az ml workspace create -w myworkspace -g myresourcegroup
- ```
-
- For more information, see [az ml workspace create](/cli/azure/ml/workspace#az-ml-workspace-create).
-
-+ Attach a workspace configuration to a folder to enable CLI contextual awareness.
-
- ```azurecli-interactive
- az ml folder attach -w myworkspace -g myresourcegroup
- ```
-
- This command creates a `.azureml` subdirectory that contains example runconfig and conda environment files. It also contains a `config.json` file that is used to communicate with your Azure Machine Learning workspace.
-
- For more information, see [az ml folder attach](/cli/azure/ml(v1)/folder#az-ml-folder-attach).
-
-+ Attach an Azure blob container as a Datastore.
-
- ```azurecli-interactive
- az ml datastore attach-blob -n datastorename -a accountname -c containername
- ```
-
- For more information, see [az ml datastore attach-blob](/cli/azure/ml/datastore#az-ml-datastore-attach-blob).
-
-+ Upload files to a Datastore.
-
- ```azurecli-interactive
- az ml datastore upload -n datastorename -p sourcepath
- ```
-
- For more information, see [az ml datastore upload](/cli/azure/ml/datastore#az-ml-datastore-upload).
-
-+ Attach an AKS cluster as a Compute Target.
-
- ```azurecli-interactive
- az ml computetarget attach aks -n myaks -i myaksresourceid -g myresourcegroup -w myworkspace
- ```
-
- For more information, see [az ml computetarget attach aks](/cli/azure/ml(v1)/computetarget/attach#az-ml-computetarget-attach-aks)
-
-### Compute clusters
-
-+ Create a new managed compute cluster.
-
- ```azurecli-interactive
- az ml computetarget create amlcompute -n cpu --min-nodes 1 --max-nodes 1 -s STANDARD_D3_V2
- ```
---
-+ Create a new managed compute cluster with managed identity
-
- + User-assigned managed identity
-
- ```azurecli
- az ml computetarget create amlcompute --name cpu-cluster --vm-size Standard_NC6 --max-nodes 5 --assign-identity '/subscriptions/<subcription_id>/resourcegroups/<resource_group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<user_assigned_identity>'
- ```
-
- + System-assigned managed identity
-
- ```azurecli
- az ml computetarget create amlcompute --name cpu-cluster --vm-size Standard_NC6 --max-nodes 5 --assign-identity '[system]'
- ```
-+ Add a managed identity to an existing cluster:
-
- + User-assigned managed identity
- ```azurecli
- az ml computetarget amlcompute identity assign --name cpu-cluster '/subscriptions/<subcription_id>/resourcegroups/<resource_group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<user_assigned_identity>'
- ```
- + System-assigned managed identity
-
- ```azurecli
- az ml computetarget amlcompute identity assign --name cpu-cluster '[system]'
- ```
-
-For more information, see [az ml computetarget create amlcompute](/cli/azure/ml(v1)/computetarget/create#az-ml-computetarget-create-amlcompute).
--
-<a id="computeinstance"></a>
-
-### Compute instance
-Manage compute instances. In all the examples below, the name of the compute instance is **cpu**
-
-+ Create a new computeinstance.
-
- ```azurecli-interactive
- az ml computetarget create computeinstance -n cpu -s "STANDARD_D3_V2" -v
- ```
-
- For more information, see [az ml computetarget create computeinstance](/cli/azure/ml(v1)/computetarget/create#az-ml-computetarget-create-computeinstance).
-
-+ Stop a computeinstance.
-
- ```azurecli-interactive
- az ml computetarget computeinstance stop -n cpu -v
- ```
-
- For more information, see [az ml computetarget computeinstance stop](/cli/azure/ml(v1)/computetarget/computeinstance#az-ml-computetarget-computeinstance-stop).
-
-+ Start a computeinstance.
-
- ```azurecli-interactive
- az ml computetarget computeinstance start -n cpu -v
- ```
-
- For more information, see [az ml computetarget computeinstance start](/cli/azure/ml(v1)/computetarget/computeinstance#az-ml-computetarget-computeinstance-start).
-
-+ Restart a computeinstance.
-
- ```azurecli-interactive
- az ml computetarget computeinstance restart -n cpu -v
- ```
-
- For more information, see [az ml computetarget computeinstance restart](/cli/azure/ml(v1)/computetarget/computeinstance#az-ml-computetarget-computeinstance-restart).
-
-+ Delete a computeinstance.
-
- ```azurecli-interactive
- az ml computetarget delete -n cpu -v
- ```
-
- For more information, see [az ml computetarget delete computeinstance](/cli/azure/ml(v1)/computetarget#az-ml-computetarget-delete).
--
-## <a id="experiments"></a>Run experiments
-
-* Start a run of your experiment. When using this command, specify the name of the runconfig file (the text before \*.runconfig if you are looking at your file system) against the -c parameter.
-
- ```azurecli-interactive
- az ml run submit-script -c sklearn -e testexperiment train.py
- ```
-
- > [!TIP]
- > The `az ml folder attach` command creates a `.azureml` subdirectory, which contains two example runconfig files.
- >
- > If you have a Python script that creates a run configuration object programmatically, you can use [RunConfig.save()](/python/api/azureml-core/azureml.core.runconfiguration#save-path-none--name-none--separate-environment-yaml-false-) to save it as a runconfig file.
- >
- > The full runconfig schema can be found in this [JSON file](https://github.com/microsoft/MLOps/blob/b4bdcf8c369d188e83f40be8b748b49821f71cf2/infra-as-code/runconfigschema.json). The schema is self-documenting through the `description` key of each object. Additionally, there are enums for possible values, and a template snippet at the end.
-
- For more information, see [az ml run submit-script](/cli/azure/ml(v1)/run#az-ml-run-submit-script).
-
-* View a list of experiments:
-
- ```azurecli-interactive
- az ml experiment list
- ```
-
- For more information, see [az ml experiment list](/cli/azure/ml(v1)/experiment#az-ml-experiment-list).
-
-### HyperDrive run
-
-You can use HyperDrive with Azure CLI to perform parameter tuning runs. First, create a HyperDrive configuration file in the following format. See [Tune hyperparameters for your model](how-to-tune-hyperparameters.md) article for details on hyperparameter tuning parameters.
-
-```yml
-# hdconfig.yml
-sampling:
- type: random # Supported options: Random, Grid, Bayesian
- parameter_space: # specify a name|expression|values tuple for each parameter.
- - name: --penalty # The name of a script parameter to generate values for.
- expression: choice # supported options: choice, randint, uniform, quniform, loguniform, qloguniform, normal, qnormal, lognormal, qlognormal
- values: [0.5, 1, 1.5] # The list of values, the number of values is dependent on the expression specified.
-policy:
- type: BanditPolicy # Supported options: BanditPolicy, MedianStoppingPolicy, TruncationSelectionPolicy, NoTerminationPolicy
- evaluation_interval: 1 # Policy properties are policy specific. See the above link for policy specific parameter details.
- slack_factor: 0.2
-primary_metric_name: Accuracy # The metric used when evaluating the policy
-primary_metric_goal: Maximize # Maximize|Minimize
-max_total_runs: 8 # The maximum number of runs to generate
-max_concurrent_runs: 2 # The number of runs that can run concurrently.
-max_duration_minutes: 100 # The maximum length of time to run the experiment before cancelling.
-```
-
-Add this file alongside the run configuration files. Then submit a HyperDrive run using:
-```azurecli
-az ml run submit-hyperdrive -e <experiment> -c <runconfig> --hyperdrive-configuration-name <hdconfig> my_train.py
-```
-
-Note the *arguments* section in runconfig and *parameter space* in HyperDrive config. They contain the command-line arguments to be passed to training script. The value in runconfig stays the same for each iteration, while the range in HyperDrive config is iterated over. Do not specify the same argument in both files.
-
-## Dataset management
-
-The following commands demonstrate how to work with datasets in Azure Machine Learning:
-
-+ Register a dataset:
-
- ```azurecli-interactive
- az ml dataset register -f mydataset.json
- ```
-
- For information on the format of the JSON file used to define the dataset, use `az ml dataset register --show-template`.
-
- For more information, see [az ml dataset register](/cli/azure/ml(v1)/dataset#az-ml-dataset-register).
-
-+ List all datasets in a workspace:
-
- ```azurecli-interactive
- az ml dataset list
- ```
-
- For more information, see [az ml dataset list](/cli/azure/ml(v1)/dataset#az-ml-dataset-list).
-
-+ Get details of a dataset:
-
- ```azurecli-interactive
- az ml dataset show -n dataset-name
- ```
-
- For more information, see [az ml dataset show](/cli/azure/ml(v1)/dataset#az-ml-dataset-show).
-
-+ Unregister a dataset:
-
- ```azurecli-interactive
- az ml dataset unregister -n dataset-name
- ```
-
- For more information, see [az ml dataset unregister](/cli/azure/ml(v1)/dataset#az-ml-dataset-archive).
-
-## Environment management
-
-The following commands demonstrate how to create, register, and list Azure Machine Learning [environments](how-to-configure-environment.md) for your workspace:
-
-+ Create scaffolding files for an environment:
-
- ```azurecli-interactive
- az ml environment scaffold -n myenv -d myenvdirectory
- ```
-
- For more information, see [az ml environment scaffold](/cli/azure/ml/environment#az-ml-environment-scaffold).
-
-+ Register an environment:
-
- ```azurecli-interactive
- az ml environment register -d myenvdirectory
- ```
-
- For more information, see [az ml environment register](/cli/azure/ml/environment#az-ml-environment-register).
-
-+ List registered environments:
-
- ```azurecli-interactive
- az ml environment list
- ```
-
- For more information, see [az ml environment list](/cli/azure/ml/environment#az-ml-environment-list).
-
-+ Download a registered environment:
-
- ```azurecli-interactive
- az ml environment download -n myenv -d downloaddirectory
- ```
-
- For more information, see [az ml environment download](/cli/azure/ml/environment#az-ml-environment-download).
-
-### Environment configuration schema
-
-If you used the `az ml environment scaffold` command, it generates a template `azureml_environment.json` file that can be modified and used to create custom environment configurations with the CLI. The top level object loosely maps to the [`Environment`](/python/api/azureml-core/azureml.core.environment%28class%29) class in the Python SDK.
-
-```json
-{
- "name": "testenv",
- "version": null,
- "environmentVariables": {
- "EXAMPLE_ENV_VAR": "EXAMPLE_VALUE"
- },
- "python": {
- "userManagedDependencies": false,
- "interpreterPath": "python",
- "condaDependenciesFile": null,
- "baseCondaEnvironment": null
- },
- "docker": {
- "enabled": false,
- "baseImage": "mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210615.v1",
- "baseDockerfile": null,
- "sharedVolumes": true,
- "shmSize": "2g",
- "arguments": [],
- "baseImageRegistry": {
- "address": null,
- "username": null,
- "password": null
- }
- },
- "spark": {
- "repositories": [],
- "packages": [],
- "precachePackages": true
- },
- "databricks": {
- "mavenLibraries": [],
- "pypiLibraries": [],
- "rcranLibraries": [],
- "jarLibraries": [],
- "eggLibraries": []
- },
- "inferencingStackVersion": null
-}
-```
-
-The following table details each top-level field in the JSON file, its type, and a description. If an object type is linked to a class from the Python SDK, there is a loose 1:1 match between each JSON field and the public variable name in the Python class. In some cases, the field may map to a constructor argument rather than a class variable. For example, the `environmentVariables` field maps to the `environment_variables` variable in the [`Environment`](/python/api/azureml-core/azureml.core.environment%28class%29) class.
-
-| JSON field | Type | Description |
-||||
-| `name` | `string` | Name of the environment. Do not start name with **Microsoft** or **AzureML**. |
-| `version` | `string` | Version of the environment. |
-| `environmentVariables` | `{string: string}` | A hash-map of environment variable names and values. |
-| `python` | [`PythonSection`](/python/api/azureml-core/azureml.core.environment.pythonsection) | Object that defines the Python environment and interpreter to use on the target compute resource. |
-| `docker` | [`DockerSection`](/python/api/azureml-core/azureml.core.environment.dockersection) | Defines settings to customize the Docker image built to the environment's specifications. |
-| `spark` | [`SparkSection`](/python/api/azureml-core/azureml.core.environment.sparksection) | The section configures Spark settings. It is only used when framework is set to PySpark. |
-| `databricks` | [`DatabricksSection`](/python/api/azureml-core/azureml.core.databricks.databrickssection) | Configures Databricks library dependencies. |
-| `inferencingStackVersion` | `string` | Specifies the inferencing stack version added to the image. To avoid adding an inferencing stack, leave this field `null`. Valid value: "latest". |
-
-## ML pipeline management
-
-The following commands demonstrate how to work with machine learning pipelines:
-
-+ Create a machine learning pipeline:
-
- ```azurecli-interactive
- az ml pipeline create -n mypipeline -y mypipeline.yml
- ```
-
- For more information, see [az ml pipeline create](/cli/azure/ml(v1)/pipeline#az-ml-pipeline-create).
-
- For more information on the pipeline YAML file, see [Define machine learning pipelines in YAML](reference-pipeline-yaml.md).
-
-+ Run a pipeline:
-
- ```azurecli-interactive
- az ml run submit-pipeline -n myexperiment -y mypipeline.yml
- ```
-
- For more information, see [az ml run submit-pipeline](/cli/azure/ml(v1)/run#az-ml-run-submit-pipeline).
-
- For more information on the pipeline YAML file, see [Define machine learning pipelines in YAML](reference-pipeline-yaml.md).
-
-+ Schedule a pipeline:
-
- ```azurecli-interactive
- az ml pipeline create-schedule -n myschedule -e myexperiment -i mypipelineid -y myschedule.yml
- ```
-
- For more information, see [az ml pipeline create-schedule](/cli/azure/ml(v1)/pipeline#az-ml-pipeline-create-schedule).
-
-## Model registration, profiling, deployment
-
-The following commands demonstrate how to register a trained model, and then deploy it as a production service:
-
-+ Register a model with Azure Machine Learning:
-
- ```azurecli-interactive
- az ml model register -n mymodel -p sklearn_regression_model.pkl
- ```
-
- For more information, see [az ml model register](/cli/azure/ml/model#az-ml-model-register).
-
-+ **OPTIONAL** Profile your model to get optimal CPU and memory values for deployment.
- ```azurecli-interactive
- az ml model profile -n myprofile -m mymodel:1 --ic inferenceconfig.json -d "{\"data\": [[1,2,3,4,5,6,7,8,9,10],[10,9,8,7,6,5,4,3,2,1]]}" -t myprofileresult.json
- ```
-
- For more information, see [az ml model profile](/cli/azure/ml/model#az-ml-model-profile).
-
-+ Deploy your model to AKS
- ```azurecli-interactive
- az ml model deploy -n myservice -m mymodel:1 --ic inferenceconfig.json --dc deploymentconfig.json --ct akscomputetarget
- ```
-
- For more information on the inference configuration file schema, see [Inference configuration schema](#inferenceconfig).
-
- For more information on the deployment configuration file schema, see [Deployment configuration schema](#deploymentconfig).
-
- For more information, see [az ml model deploy](/cli/azure/ml/model#az-ml-model-deploy).
-
-<a id="inferenceconfig"></a>
-
-## Inference configuration schema
--
-<a id="deploymentconfig"></a>
-
-## Deployment configuration schema
-
-### Local deployment configuration schema
--
-### Azure Container Instance deployment configuration schema
--
-### Azure Kubernetes Service deployment configuration schema
--
-## Next steps
-
-* [Command reference for the Machine Learning CLI extension](/cli/azure/ml).
-
-* [Train and deploy machine learning models using Azure Pipelines](/azure/devops/pipelines/targets/azure-machine-learning)
machine-learning Reference Machine Learning Cloud Parity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-machine-learning-cloud-parity.md
Previously updated : 03/14/2022- Last updated : 05/09/2022+ # Azure Machine Learning feature availability across clouds regions
In the list of global Azure regions, there are several regions that serve specif
* Azure Government regions **US-Arizona** and **US-Virginia**. * Azure China 21Vianet region **China-East-2**.
+Azure Machine Learning is still in development in air-gapped regions.
+ The information in the rest of this document provides information on what features of Azure Machine Learning are available in these regions, along with region-specific information on using these features. ## Azure Government
The information in the rest of this document provides information on what featur
| ACI behind VNet | Public Preview | NO | NO | | ACR behind VNet | GA | YES | YES | | Private IP of AKS cluster | Public Preview | NO | NO |
+| Network isolation for managed online endpoints | Public Preview | NO | NO |
| **Compute** | | | | | [quota management across workspaces](how-to-manage-quotas.md) | GA | YES | YES | | **[Data for machine learning](concept-data.md)** | | | |
The information in the rest of this document provides information on what featur
| View, edit, or delete dataset drift monitors from the SDK | Public Preview | YES | YES | | View, edit, or delete dataset drift monitors from the UI | Public Preview | YES | YES | | **Machine learning lifecycle** | | | |
-| [Model profiling](how-to-deploy-profile-model.md) | GA | YES | PARTIAL |
-| [The Azure ML CLI 1.0](reference-azure-machine-learning-cli.md) | GA | YES | YES |
+| [Model profiling](v1/how-to-deploy-profile-model.md) | GA | YES | PARTIAL |
+| [The Azure ML CLI 1.0](v1/reference-azure-machine-learning-cli.md) | GA | YES | YES |
| [FPGA-based Hardware Accelerated Models](how-to-deploy-fpga-web-service.md) | GA | NO | NO | | [Visual Studio Code integration](how-to-setup-vs-code.md) | Public Preview | NO | NO | | [Event Grid integration](how-to-use-event-grid.md) | Public Preview | NO | NO |
The information in the rest of this document provides information on what featur
| [Experimentation UI](how-to-track-monitor-analyze-runs.md) | Public Preview | YES | YES | | [.NET integration ML.NET 1.0](/dotnet/machine-learning/tutorials/object-detection-model-builder) | GA | YES | YES | | **Inference** | | | |
+| Managed online endpoints | GA | YES | YES |
| [Batch inferencing](tutorial-pipeline-batch-scoring-classification.md) | GA | YES | YES | | [Azure Stack Edge with FPGA](how-to-deploy-fpga-web-service.md#deploy-to-a-local-edge-server) | Public Preview | NO | NO | | **Other** | | | |
The information in the rest of this document provides information on what featur
| ACI behind VNet | Preview | NO | N/A | | ACR behind VNet | GA | YES | N/A | | Private IP of AKS cluster | Preview | NO | N/A |
+| Network isolation for managed online endpoints | Preview | NO | N/A |
| **Compute** | | | | | quota management across workspaces | GA | YES | N/A | | **Data for machine learning** | | | |
The information in the rest of this document provides information on what featur
| Experimentation UI | GA | YES | N/A | | .NET integration ML.NET 1.0 | GA | YES | N/A | | **Inference** | | | |
+| Managed online endpoints | GA | YES | N/A |
| Batch inferencing | GA | YES | N/A | | Azure Stack Edge with FPGA | Deprecating | Deprecating | N/A | | **Other** | | | |
machine-learning Reference Managed Online Endpoints Vm Sku List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-managed-online-endpoints-vm-sku-list.md
-+ - Previously updated : 05/10/2021+ Last updated : 04/11/2022 # Managed online endpoints SKU list (preview) + This table shows the VM SKUs that are supported for Azure Machine Learning managed online endpoints (preview).
This table shows the VM SKUs that are supported for Azure Machine Learning manag
* For more information on configuration details such as CPU and RAM, see [Azure Machine Learning Pricing](https://azure.microsoft.com/pricing/details/machine-learning/).
+> [!IMPORTANT]
+> If you use a Windows-based image for your deployment, we recommend using a VM SKU that provides a minimum of 4 cores.
+ | Size | General Purpose | Compute Optimized | Memory Optimized | GPU | | | | | | | | | V.Small | DS2 v2 | F2s v2 | E2s v3 | NC4as_T4_v3 |
This table shows the VM SKUs that are supported for Azure Machine Learning manag
| Medium | DS4 v2 | F8s v2 | E8s v3 | NC12s v2 <br/> NC12s v3 <br/> NC16as_T4_v3 | | Large | DS5 v2 | F16s v2 | E16s v3 | NC24s v2 <br/> NC24s v3 <br/> NC64as_T4_v3 | | X-Large| - | F32s v2 <br/> F48s v2 <br/> F64s v2 <br/> F72s v2 | E32s v3 <br/> E48s v3 <br/> E64s v3 | - |--
machine-learning Reference Migrate Sdk V1 Mlflow Tracking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-migrate-sdk-v1-mlflow-tracking.md
+
+ Title: Migrate logging from SDK v1 to MLflow
+
+description: Comparison of SDK v1 logging APIs and MLflow tracking
++++++++ Last updated : 05/04/2022+++
+# Migrate logging from SDK v1 to SDK v2 (preview)
+
+The Azure Machine Learning Python SDK v2 does not provide native logging APIs. Instead, we recommend that you use [MLflow Tracking](https://www.mlflow.org/docs/latest/tracking.html). If you're migrating from SDK v1 to SDK v2 (preview), use the information in this section to understand the MLflow equivalents of SDK v1 logging APIs.
+
+## Setup
+
+To use MLflow tracking, import `mlflow` and optionally set the tracking URI for your workspace. If you're training on an Azure Machine Learning compute resource, such as a compute instance or compute cluster, the tracking URI is set automatically. If you're using a different compute resource, such as your laptop or desktop, you need to set the tracking URI.
+
+```python
+import mlflow
+
+# The rest of this is only needed if you are not using an Azure ML compute
+## Construct AzureML MLFLOW TRACKING URI
+def get_azureml_mlflow_tracking_uri(region, subscription_id, resource_group, workspace):
+return "azureml://{}.api.azureml.ms/mlflow/v1.0/subscriptions/{}/resourceGroups/{}/providers/Microsoft.MachineLearningServices/workspaces/{}".format(region, subscription_id, resource_group, workspace)
+
+region='<REGION>' ## example: westus
+subscription_id = '<SUBSCRIPTION_ID>' ## example: 11111111-1111-1111-1111-111111111111
+resource_group = '<RESOURCE_GROUP>' ## example: myresourcegroup
+workspace = '<AML_WORKSPACE_NAME>' ## example: myworkspacename
+
+MLFLOW_TRACKING_URI = get_azureml_mlflow_tracking_uri(region, subscription_id, resource_group, workspace)
+
+## Set the MLFLOW TRACKING URI
+mlflow.set_tracking_uri(MLFLOW_TRACKING_URI)
+```
+
+## Experiments and runs
+
+__SDK v1__
+
+```python
+from azureml.core import Experiment
+
+# create an AzureML experiment and start a run
+experiment = Experiment(ws, "create-experiment-sdk-v1")
+azureml_run = experiment.start_logging()
+```
+
+__SDK v2 (preview) with MLflow__
+
+```python
+# Set the MLflow experiment and start a run
+mlflow.set_experiment("logging-with-mlflow")
+mlflow_run = mlflow.start_run()
+```
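+
+Because `mlflow.start_run()` isn't used as a context manager here, end the run once you're done logging. The sections below that retrieve a finished run assume the run has been ended. A minimal sketch:
+
+```python
+# End the active MLflow run when logging is complete
+mlflow.end_run()
+```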
+
+## Logging API comparison
+
+### Log an integer or float metric
+
+__SDK v1__
+
+```python
+azureml_run.log("sample_int_metric", 1)
+```
+
+__SDK v2 (preview) with MLflow__
+
+```python
+mlflow.log_metric("sample_int_metric", 1)
+```
+
+### Log a boolean metric
+
+__SDK v1__
+
+```python
+azureml_run.log("sample_boolean_metric", True)
+```
+
+__SDK v2 (preview) with MLflow__
+
+```python
+mlflow.log_metric("sample_boolean_metric", 1)
+```
+
+### Log a string metric
+
+__SDK v1__
+
+```python
+azureml_run.log("sample_string_metric", "a_metric")
+```
+
+__SDK v2 (preview) with MLflow__
+
+```python
+mlflow.log_text("sample_string_text", "string.txt")
+```
+
+* The string will be logged as an _artifact_, not as a metric. In Azure Machine Learning studio, the value will be displayed in the __Outputs + logs__ tab.
+
+### Log an image to a PNG or JPEG file
+
+__SDK v1__
+
+```python
+azureml_run.log_image("sample_image", path="Azure.png")
+```
+
+__SDK v2 (preview) with MLflow__
+
+```python
+mlflow.log_artifact("Azure.png")
+```
+
+The image is logged as an artifact and will appear in the __Images__ tab in Azure Machine Learning studio.
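+
+If the image is in memory rather than saved to a file, MLflow also provides `mlflow.log_image`, which accepts a NumPy array or PIL image (the Pillow package must be installed). A minimal sketch, assuming a hypothetical in-memory array named `img`:
+
+```python
+import numpy as np
+
+# Hypothetical in-memory image: a 100x100 RGB array
+img = np.random.randint(0, 256, size=(100, 100, 3), dtype=np.uint8)
+
+# Logged as an image artifact named sample_image.png in the active run
+mlflow.log_image(img, "sample_image.png")
+```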
+
+### Log a matplotlib.pyplot
+
+__SDK v1__
+
+```python
+import matplotlib.pyplot as plt
+
+plt.plot([1, 2, 3])
+azureml_run.log_image("sample_pyplot", plot=plt)
+```
+
+__SDK v2 (preview) with MLflow__
+
+```python
+import matplotlib.pyplot as plt
+
+fig, ax = plt.subplots()
+ax.plot([1, 2, 3])
+mlflow.log_figure(fig, "sample_pyplot.png")
+```
+
+* The image is logged as an artifact and will appear in the __Images__ tab in Azure Machine Learning studio.
+* The `mlflow.log_figure` method is __experimental__.
++
+### Log a list of metrics
+
+__SDK v1__
+
+```python
+list_to_log = [1, 2, 3, 2, 1, 2, 3, 2, 1]
+azureml_run.log_list('sample_list', list_to_log)
+```
+
+__SDK v2 (preview) with MLflow__
+
+```python
+list_to_log = [1, 2, 3, 2, 1, 2, 3, 2, 1]
+from mlflow.entities import Metric
+from mlflow.tracking import MlflowClient
+import time
+
+metrics = [Metric(key="sample_list", value=val, timestamp=int(time.time() * 1000), step=0) for val in list_to_log]
+MlflowClient().log_batch(mlflow_run.info.run_id, metrics=metrics)
+```
+* Metrics appear in the __metrics__ tab in Azure Machine Learning studio.
+* Text values are not supported.
+
+### Log a row of metrics
+
+__SDK v1__
+
+```python
+azureml_run.log_row("sample_table", col1=5, col2=10)
+```
+
+__SDK v2 (preview) with MLflow__
+
+```python
+metrics = {"sample_table.col1": 5, "sample_table.col2": 10}
+mlflow.log_metrics(metrics)
+```
+
+* Metrics do not render as a table in Azure Machine Learning studio.
+* Text values are not supported.
+* The values are logged as metrics, not as an artifact.
+
+### Log a table
+
+__SDK v1__
+
+```python
+table = {
+"col1" : [1, 2, 3],
+"col2" : [4, 5, 6]
+}
+azureml_run.log_table("table", table)
+```
+
+__SDK v2 (preview) with MLflow__
+
+```python
+# Add a metric for each column prefixed by metric name. Similar to log_row
+row1 = {"table.col1": 5, "table.col2": 10}
+# To be done for each row in the table
+mlflow.log_metrics(row1)
+
+# Alternative: log the whole table as an artifact using mlflow.log_artifact
+import json
+
+# "table" is the dictionary defined in the SDK v1 example above
+with open("table.json", 'w') as f:
+    json.dump(table, f)
+mlflow.log_artifact("table.json")
+```
+
+* The first approach logs a metric for each column; these metrics do not render as a table in Azure Machine Learning studio.
+* Text values are not supported.
+* The second approach logs the table as an _artifact_, not as a metric.
+
+### Log an accuracy table
+
+__SDK v1__
+
+```python
+ACCURACY_TABLE = '{"schema_type": "accuracy_table", "schema_version": "v1", "data": {"probability_tables": ' +\
+ '[[[114311, 385689, 0, 0], [0, 0, 385689, 114311]], [[67998, 432002, 0, 0], [0, 0, ' + \
+ '432002, 67998]]], "percentile_tables": [[[114311, 385689, 0, 0], [1, 0, 385689, ' + \
+ '114310]], [[67998, 432002, 0, 0], [1, 0, 432002, 67997]]], "class_labels": ["0", "1"], ' + \
+ '"probability_thresholds": [0.52], "percentile_thresholds": [0.09]}}'
+
+azureml_run.log_accuracy_table('v1_accuracy_table', ACCURACY_TABLE)
+```
+
+__SDK v2 (preview) with MLflow__
+
+```python
+ACCURACY_TABLE = '{"schema_type": "accuracy_table", "schema_version": "v1", "data": {"probability_tables": ' +\
+ '[[[114311, 385689, 0, 0], [0, 0, 385689, 114311]], [[67998, 432002, 0, 0], [0, 0, ' + \
+ '432002, 67998]]], "percentile_tables": [[[114311, 385689, 0, 0], [1, 0, 385689, ' + \
+ '114310]], [[67998, 432002, 0, 0], [1, 0, 432002, 67997]]], "class_labels": ["0", "1"], ' + \
+ '"probability_thresholds": [0.52], "percentile_thresholds": [0.09]}}'
+
+mlflow.log_dict(ACCURACY_TABLE, 'mlflow_accuracy_table.json')
+```
+
+* Metrics do not render as an accuracy table in Azure Machine Learning studio.
+* Logged as an _artifact_, not as a metric.
+* The `mlflow.log_dict` method is _experimental_.
+
+### Log a confusion matrix
+
+__SDK v1__
+
+```python
+CONF_MATRIX = '{"schema_type": "confusion_matrix", "schema_version": "v1", "data": {"class_labels": ' + \
+ '["0", "1", "2", "3"], "matrix": [[3, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0], [0, 0, 0, 1]]}}'
+
+azureml_run.log_confusion_matrix('v1_confusion_matrix', json.loads(CONF_MATRIX))
+```
+
+__SDK v2 (preview) with MLflow__
+
+```python
+CONF_MATRIX = '{"schema_type": "confusion_matrix", "schema_version": "v1", "data": {"class_labels": ' + \
+ '["0", "1", "2", "3"], "matrix": [[3, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0], [0, 0, 0, 1]]}}'
+
+mlflow.log_dict(CONF_MATRIX, 'mlflow_confusion_matrix.json')
+```
+
+* Metrics do not render as a confusion matrix in Azure Machine Learning studio.
+* Logged as an _artifact_, not as a metric.
+* The `mlflow.log_dict` method is _experimental_.
+
+### Log predictions
+
+__SDK v1__
+
+```python
+PREDICTIONS = '{"schema_type": "predictions", "schema_version": "v1", "data": {"bin_averages": [0.25,' + \
+ ' 0.75], "bin_errors": [0.013, 0.042], "bin_counts": [56, 34], "bin_edges": [0.0, 0.5, 1.0]}}'
+
+azureml_run.log_predictions('test_predictions', json.loads(PREDICTIONS))
+```
+
+__SDK v2 (preview) with MLflow__
+
+```python
+PREDICTIONS = '{"schema_type": "predictions", "schema_version": "v1", "data": {"bin_averages": [0.25,' + \
+ ' 0.75], "bin_errors": [0.013, 0.042], "bin_counts": [56, 34], "bin_edges": [0.0, 0.5, 1.0]}}'
+
+mlflow.log_dict(PREDICTIONS, 'mlflow_predictions.json')
+```
+
+* Metrics do not render as predictions in Azure Machine Learning studio.
+* Logged as an _artifact_, not as a metric.
+* The `mlflow.log_dict` method is _experimental_.
+
+### Log residuals
+
+__SDK v1__
+
+```python
+RESIDUALS = '{"schema_type": "residuals", "schema_version": "v1", "data": {"bin_edges": [100, 200, 300], ' + \
+'"bin_counts": [0.88, 20, 30, 50.99]}}'
+
+azureml_run.log_residuals('test_residuals', json.loads(RESIDUALS))
+```
+
+__SDK v2 (preview) with MLflow__
+
+```python
+RESIDUALS = '{"schema_type": "residuals", "schema_version": "v1", "data": {"bin_edges": [100, 200, 300], ' + \
+'"bin_counts": [0.88, 20, 30, 50.99]}}'
+
+mlflow.log_dict(RESIDUALS, 'mlflow_residuals.json')
+```
+
+* Metrics do not render as residuals in Azure Machine Learning studio.
+* Logged as an _artifact_, not as a metric.
+* The `mlflow.log_dict` method is _experimental_.
+
+## View run info and data
+
+You can access run information using the MLflow run object's `data` and `info` properties. For more information, see [mlflow.entities.Run](https://mlflow.org/docs/latest/python_api/mlflow.entities.html#mlflow.entities.Run) reference.
+
+The following example shows how to retrieve a finished run:
+
+```python
+from mlflow.tracking import MlflowClient
+
+# Use MLflow to retrieve the run that was just completed
+client = MlflowClient()
+finished_mlflow_run = client.get_run(mlflow_run.info.run_id)
+```
+
+The following example shows how to view the `metrics`, `tags`, and `params`:
+
+```python
+metrics = finished_mlflow_run.data.metrics
+tags = finished_mlflow_run.data.tags
+params = finished_mlflow_run.data.params
+```
+
+> [!NOTE]
+> The `metrics` dictionary only has the most recently logged value for a given metric. For example, if you log, in order, the values `1`, `2`, `3`, and finally `4` to a metric named `sample_metric`, only `4` will be present in the `metrics` dictionary. To get all values logged for a specific named metric, use [MlflowClient.get_metric_history](https://mlflow.org/docs/latest/python_api/mlflow.tracking.html#mlflow.tracking.MlflowClient.get_metric_history):
+>
+> ```python
+> with mlflow.start_run() as multiple_metrics_run:
+> mlflow.log_metric("sample_metric", 1)
+> mlflow.log_metric("sample_metric", 2)
+> mlflow.log_metric("sample_metric", 3)
+> mlflow.log_metric("sample_metric", 4)
+>
+> print(client.get_run(multiple_metrics_run.info.run_id).data.metrics)
+> print(client.get_metric_history(multiple_metrics_run.info.run_id, "sample_metric"))
+> ```
+>
+> For more information, see the [MlflowClient](https://mlflow.org/docs/latest/python_api/mlflow.tracking.html#mlflow.tracking.MlflowClient) reference.
+
+The `info` field provides general information about the run, such as start time, run ID, experiment ID, etc.:
+
+```python
+run_start_time = finished_mlflow_run.info.start_time
+run_experiment_id = finished_mlflow_run.info.experiment_id
+run_id = finished_mlflow_run.info.run_id
+```
+
+## View run artifacts
+
+To view the artifacts of a run, use [MlflowClient.list_artifacts](https://mlflow.org/docs/latest/python_api/mlflow.tracking.html#mlflow.tracking.MlflowClient.list_artifacts):
+
+```python
+client.list_artifacts(finished_mlflow_run.info.run_id)
+```
+
+To download an artifact, use [MlflowClient.download_artifacts](https://www.mlflow.org/docs/latest/python_api/mlflow.tracking.html#mlflow.tracking.MlflowClient.download_artifacts):
+
+```python
+client.download_artifacts(finished_mlflow_run.info.run_id, "Azure.png")
+```
+## Next steps
+
+* [Track ML experiments and models with MLflow](how-to-use-mlflow-cli-runs.md)
+* [Log and view metrics](how-to-log-view-metrics.md)
machine-learning Reference Pipeline Yaml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-pipeline-yaml.md
- Title: Machine Learning pipeline YAML-
-description: Learn how to define a machine learning pipeline using a YAML file. YAML pipeline definitions are used with the machine learning extension for the Azure CLI.
-------- Previously updated : 07/31/2020---
-# Define machine learning pipelines in YAML
---
-Learn how to define your machine learning pipelines in [YAML](https://yaml.org/). When using the machine learning extension for the [Azure CLI **v1**](reference-azure-machine-learning-cli.md), many of the pipeline-related commands expect a YAML file that defines the pipeline.
-
-The following table lists what is and is not currently supported when defining a pipeline in YAML for use with CLI v1:
-
-| Step type | Supported? |
-| -- | :--: |
-| PythonScriptStep | Yes |
-| ParallelRunStep | Yes |
-| AdlaStep | Yes |
-| AzureBatchStep | Yes |
-| DatabricksStep | Yes |
-| DataTransferStep | Yes |
-| AutoMLStep | No |
-| HyperDriveStep | No |
-| ModuleStep | Yes |
-| MPIStep | No |
-| EstimatorStep | No |
-
-## Pipeline definition
-
-A pipeline definition uses the following keys, which correspond to the [Pipelines](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipeline.pipeline) class:
-
-| YAML key | Description |
-| -- | -- |
-| `name` | The name of the pipeline. |
-| `parameters` | Parameter(s) to the pipeline. |
-| `data_reference` | Defines how and where data should be made available in a run. |
-| `default_compute` | Default compute target where all steps in the pipeline run. |
-| `steps` | The steps used in the pipeline. |
-
-## Parameters
-
-The `parameters` section uses the following keys, which correspond to the [PipelineParameter](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelineparameter) class:
-
-| YAML key | Description |
-| - | - |
-| `type` | The value type of the parameter. Valid types are `string`, `int`, `float`, `bool`, or `datapath`. |
-| `default` | The default value. |
-
-Each parameter is named. For example, the following YAML snippet defines three parameters named `NumIterationsParameter`, `DataPathParameter`, and `NodeCountParameter`:
-
-```yaml
-pipeline:
- name: SamplePipelineFromYaml
- parameters:
- NumIterationsParameter:
- type: int
- default: 40
- DataPathParameter:
- type: datapath
- default:
- datastore: workspaceblobstore
- path_on_datastore: sample2.txt
- NodeCountParameter:
- type: int
- default: 4
-```
-
-## Data reference
-
-The `data_references` section uses the following keys, which correspond to the [DataReference](/python/api/azureml-core/azureml.data.data_reference.datareference) class:
-
-| YAML key | Description |
-| -- | -- |
-| `datastore` | The datastore to reference. |
-| `path_on_datastore` | The relative path in the backing storage for the data reference. |
-
-Each data reference is contained in a key. For example, the following YAML snippet defines a data reference stored in the key named `employee_data`:
-
-```yaml
-pipeline:
- name: SamplePipelineFromYaml
- parameters:
- PipelineParam1:
- type: int
- default: 3
- data_references:
- employee_data:
- datastore: adftestadla
- path_on_datastore: "adla_sample/sample_input.csv"
-```
-
-## Steps
-
-Steps define a computational environment, along with the files to run on the environment. To define the type of a step, use the `type` key:
-
-| Step type | Description |
-| -- | -- |
-| `AdlaStep` | Runs a U-SQL script with Azure Data Lake Analytics. Corresponds to the [AdlaStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.adlastep) class. |
-| `AzureBatchStep` | Runs jobs using Azure Batch. Corresponds to the [AzureBatchStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.azurebatchstep) class. |
-| `DatabricksStep` | Adds a Databricks notebook, Python script, or JAR. Corresponds to the [DatabricksStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.databricksstep) class. |
-| `DataTransferStep` | Transfers data between storage options. Corresponds to the [DataTransferStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.datatransferstep) class. |
-| `PythonScriptStep` | Runs a Python script. Corresponds to the [PythonScriptStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.python_script_step.pythonscriptstep) class. |
-| `ParallelRunStep` | Runs a Python script to process large amounts of data asynchronously and in parallel. Corresponds to the [ParallelRunStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.parallel_run_step.parallelrunstep) class. |
-
-### ADLA step
-
-| YAML key | Description |
-| -- | -- |
-| `script_name` | The name of the U-SQL script (relative to the `source_directory`). |
-| `compute` | The Azure Data Lake compute target to use for this step. |
-| `parameters` | [Parameters](#parameters) to the pipeline. |
-| `inputs` | Inputs can be [InputPortBinding](/python/api/azureml-pipeline-core/azureml.pipeline.core.graph.inputportbinding), [DataReference](#data-reference), [PortDataReference](/python/api/azureml-pipeline-core/azureml.pipeline.core.portdatareference), [PipelineData](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedata), [Dataset](/python/api/azureml-core/azureml.core.dataset%28class%29), [DatasetDefinition](/python/api/azureml-core/azureml.data.dataset_definition.datasetdefinition), or [PipelineDataset](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedataset). |
-| `outputs` | Outputs can be either [PipelineData](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedata) or [OutputPortBinding](/python/api/azureml-pipeline-core/azureml.pipeline.core.graph.outputportbinding). |
-| `source_directory` | Directory that contains the script, assemblies, etc. |
-| `priority` | The priority value to use for the current job. |
-| `params` | Dictionary of name-value pairs. |
-| `degree_of_parallelism` | The degree of parallelism to use for this job. |
-| `runtime_version` | The runtime version of the Data Lake Analytics engine. |
-| `allow_reuse` | Determines whether the step should reuse previous results when run again with the same settings. |
-
-The following example contains an ADLA Step definition:
-
-```yaml
-pipeline:
- name: SamplePipelineFromYaml
- parameters:
- PipelineParam1:
- type: int
- default: 3
- data_references:
- employee_data:
- datastore: adftestadla
- path_on_datastore: "adla_sample/sample_input.csv"
- default_compute: adlacomp
- steps:
- Step1:
- runconfig: "D:\\Yaml\\default_runconfig.yml"
- parameters:
- NUM_ITERATIONS_2:
- source: PipelineParam1
- NUM_ITERATIONS_1: 7
- type: "AdlaStep"
- name: "MyAdlaStep"
- script_name: "sample_script.usql"
- source_directory: "D:\\scripts\\Adla"
- inputs:
- employee_data:
- source: employee_data
- outputs:
- OutputData:
- destination: Output4
- datastore: adftestadla
- bind_mode: mount
-```
-
-### Azure Batch step
-
-| YAML key | Description |
-| -- | -- |
-| `compute` | The Azure Batch compute target to use for this step. |
-| `inputs` | Inputs can be [InputPortBinding](/python/api/azureml-pipeline-core/azureml.pipeline.core.graph.inputportbinding), [DataReference](#data-reference), [PortDataReference](/python/api/azureml-pipeline-core/azureml.pipeline.core.portdatareference), [PipelineData](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedata), [Dataset](/python/api/azureml-core/azureml.core.dataset%28class%29), [DatasetDefinition](/python/api/azureml-core/azureml.data.dataset_definition.datasetdefinition), or [PipelineDataset](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedataset). |
-| `outputs` | Outputs can be either [PipelineData](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedata) or [OutputPortBinding](/python/api/azureml-pipeline-core/azureml.pipeline.core.graph.outputportbinding). |
-| `source_directory` | Directory that contains the module binaries, executable, assemblies, etc. |
-| `executable` | Name of the command/executable that will be run as part of this job. |
-| `create_pool` | Boolean flag to indicate whether to create the pool before running the job. |
-| `delete_batch_job_after_finish` | Boolean flag to indicate whether to delete the job from the Batch account after it's finished. |
-| `delete_batch_pool_after_finish` | Boolean flag to indicate whether to delete the pool after the job finishes. |
-| `is_positive_exit_code_failure` | Boolean flag to indicate if the job fails if the task exits with a positive code. |
-| `vm_image_urn` | The URN of the VM image to use when `create_pool` is `True` and the VM uses `VirtualMachineConfiguration`. |
-| `pool_id` | The ID of the pool where the job will run. |
-| `allow_reuse` | Determines whether the step should reuse previous results when run again with the same settings. |
-
-The following example contains an Azure Batch step definition:
-
-```yaml
-pipeline:
- name: SamplePipelineFromYaml
- parameters:
- PipelineParam1:
- type: int
- default: 3
- data_references:
- input:
- datastore: workspaceblobstore
- path_on_datastore: "input.txt"
- default_compute: testbatch
- steps:
- Step1:
- runconfig: "D:\\Yaml\\default_runconfig.yml"
- parameters:
- NUM_ITERATIONS_2:
- source: PipelineParam1
- NUM_ITERATIONS_1: 7
- type: "AzureBatchStep"
- name: "MyAzureBatchStep"
- pool_id: "MyPoolName"
- create_pool: true
- executable: "azurebatch.cmd"
- source_directory: "D:\\scripts\\AureBatch"
- allow_reuse: false
- inputs:
- input:
- source: input
- outputs:
- output:
- destination: output
- datastore: workspaceblobstore
-```
-
-### Databricks step
-
-| YAML key | Description |
-| -- | -- |
-| `compute` | The Azure Databricks compute target to use for this step. |
-| `inputs` | Inputs can be [InputPortBinding](/python/api/azureml-pipeline-core/azureml.pipeline.core.graph.inputportbinding), [DataReference](#data-reference), [PortDataReference](/python/api/azureml-pipeline-core/azureml.pipeline.core.portdatareference), [PipelineData](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedata), [Dataset](/python/api/azureml-core/azureml.core.dataset%28class%29), [DatasetDefinition](/python/api/azureml-core/azureml.data.dataset_definition.datasetdefinition), or [PipelineDataset](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedataset). |
-| `outputs` | Outputs can be either [PipelineData](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedata) or [OutputPortBinding](/python/api/azureml-pipeline-core/azureml.pipeline.core.graph.outputportbinding). |
-| `run_name` | The name in Databricks for this run. |
-| `source_directory` | Directory that contains the script and other files. |
-| `num_workers` | The static number of workers for the Databricks run cluster. |
-| `runconfig` | The path to a `.runconfig` file. This file is a YAML representation of the [RunConfiguration](/python/api/azureml-core/azureml.core.runconfiguration) class. For more information on the structure of this file, see [runconfigschema.json](https://github.com/microsoft/MLOps/blob/b4bdcf8c369d188e83f40be8b748b49821f71cf2/infra-as-code/runconfigschema.json). |
-| `allow_reuse` | Determines whether the step should reuse previous results when run again with the same settings. |
-
-The following example contains a Databricks step:
-
-```yaml
-pipeline:
- name: SamplePipelineFromYaml
- parameters:
- PipelineParam1:
- type: int
- default: 3
- data_references:
- adls_test_data:
- datastore: adftestadla
- path_on_datastore: "testdata"
- blob_test_data:
- datastore: workspaceblobstore
- path_on_datastore: "dbtest"
- default_compute: mydatabricks
- steps:
- Step1:
- runconfig: "D:\\Yaml\\default_runconfig.yml"
- parameters:
- NUM_ITERATIONS_2:
- source: PipelineParam1
- NUM_ITERATIONS_1: 7
- type: "DatabricksStep"
- name: "MyDatabrickStep"
- run_name: "DatabricksRun"
- python_script_name: "train-db-local.py"
- source_directory: "D:\\scripts\\Databricks"
- num_workers: 1
- allow_reuse: true
- inputs:
- blob_test_data:
- source: blob_test_data
- outputs:
- OutputData:
- destination: Output4
- datastore: workspaceblobstore
- bind_mode: mount
-```
-
-### Data transfer step
-
-| YAML key | Description |
-| -- | -- |
-| `compute` | The Azure Data Factory compute target to use for this step. |
-| `source_data_reference` | Input connection that serves as the source of data transfer operations. Supported values are [InputPortBinding](/python/api/azureml-pipeline-core/azureml.pipeline.core.graph.inputportbinding), [DataReference](#data-reference), [PortDataReference](/python/api/azureml-pipeline-core/azureml.pipeline.core.portdatareference), [PipelineData](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedata), [Dataset](/python/api/azureml-core/azureml.core.dataset%28class%29), [DatasetDefinition](/python/api/azureml-core/azureml.data.dataset_definition.datasetdefinition), or [PipelineDataset](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedataset). |
-| `destination_data_reference` | Input connection that serves as the destination of data transfer operations. Supported values are [PipelineData](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedata) and [OutputPortBinding](/python/api/azureml-pipeline-core/azureml.pipeline.core.graph.outputportbinding). |
-| `allow_reuse` | Determines whether the step should reuse previous results when run again with the same settings. |
-
-The following example contains a data transfer step:
-
-```yaml
-pipeline:
- name: SamplePipelineFromYaml
- parameters:
- PipelineParam1:
- type: int
- default: 3
- data_references:
- adls_test_data:
- datastore: adftestadla
- path_on_datastore: "testdata"
- blob_test_data:
- datastore: workspaceblobstore
- path_on_datastore: "testdata"
- default_compute: adftest
- steps:
- Step1:
- runconfig: "D:\\Yaml\\default_runconfig.yml"
- parameters:
- NUM_ITERATIONS_2:
- source: PipelineParam1
- NUM_ITERATIONS_1: 7
- type: "DataTransferStep"
- name: "MyDataTransferStep"
- adla_compute_name: adftest
- source_data_reference:
- adls_test_data:
- source: adls_test_data
- destination_data_reference:
- blob_test_data:
- source: blob_test_data
-```
-
-### Python script step
-
-| YAML key | Description |
-| -- | -- |
-| `inputs` | Inputs can be [InputPortBinding](/python/api/azureml-pipeline-core/azureml.pipeline.core.graph.inputportbinding), [DataReference](#data-reference), [PortDataReference](/python/api/azureml-pipeline-core/azureml.pipeline.core.portdatareference), [PipelineData](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedata), [Dataset](/python/api/azureml-core/azureml.core.dataset%28class%29), [DatasetDefinition](/python/api/azureml-core/azureml.data.dataset_definition.datasetdefinition), or [PipelineDataset](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedataset). |
-| `outputs` | Outputs can be either [PipelineData](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedata) or [OutputPortBinding](/python/api/azureml-pipeline-core/azureml.pipeline.core.graph.outputportbinding). |
-| `script_name` | The name of the Python script (relative to `source_directory`). |
-| `source_directory` | Directory that contains the script, Conda environment, etc. |
-| `runconfig` | The path to a `.runconfig` file. This file is a YAML representation of the [RunConfiguration](/python/api/azureml-core/azureml.core.runconfiguration) class. For more information on the structure of this file, see [runconfigschema.json](https://github.com/microsoft/MLOps/blob/b4bdcf8c369d188e83f40be8b748b49821f71cf2/infra-as-code/runconfigschema.json). |
-| `allow_reuse` | Determines whether the step should reuse previous results when run again with the same settings. |
-
-The following example contains a Python script step:
-
-```yaml
-pipeline:
- name: SamplePipelineFromYaml
- parameters:
- PipelineParam1:
- type: int
- default: 3
- data_references:
- DataReference1:
- datastore: workspaceblobstore
- path_on_datastore: testfolder/sample.txt
- default_compute: cpu-cluster
- steps:
- Step1:
- runconfig: "D:\\Yaml\\default_runconfig.yml"
- parameters:
- NUM_ITERATIONS_2:
- source: PipelineParam1
- NUM_ITERATIONS_1: 7
- type: "PythonScriptStep"
- name: "MyPythonScriptStep"
- script_name: "train.py"
- allow_reuse: True
- source_directory: "D:\\scripts\\PythonScript"
- inputs:
- InputData:
- source: DataReference1
- outputs:
- OutputData:
- destination: Output4
- datastore: workspaceblobstore
- bind_mode: mount
-```
-
-### Parallel run step
-
-| YAML key | Description |
-| -- | -- |
-| `inputs` | Inputs can be [Dataset](/python/api/azureml-core/azureml.core.dataset%28class%29), [DatasetDefinition](/python/api/azureml-core/azureml.data.dataset_definition.datasetdefinition), or [PipelineDataset](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedataset). |
-| `outputs` | Outputs can be either [PipelineData](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedata) or [OutputPortBinding](/python/api/azureml-pipeline-core/azureml.pipeline.core.graph.outputportbinding). |
-| `script_name` | The name of the Python script (relative to `source_directory`). |
-| `source_directory` | Directory that contains the script, Conda environment, etc. |
-| `parallel_run_config` | The path to a `parallel_run_config.yml` file. This file is a YAML representation of the [ParallelRunConfig](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.parallelrunconfig) class. |
-| `allow_reuse` | Determines whether the step should reuse previous results when run again with the same settings. |
-
-The following example contains a Parallel run step:
-
-```yaml
-pipeline:
- description: SamplePipelineFromYaml
- default_compute: cpu-cluster
- data_references:
- MyMinistInput:
- dataset_name: mnist_sample_data
- parameters:
- PipelineParamTimeout:
- type: int
- default: 600
- steps:
- Step1:
- parallel_run_config: "yaml/parallel_run_config.yml"
- type: "ParallelRunStep"
- name: "parallel-run-step-1"
- allow_reuse: True
- arguments:
- - "--progress_update_timeout"
- - parameter:timeout_parameter
- - "--side_input"
- - side_input:SideInputData
- parameters:
- timeout_parameter:
- source: PipelineParamTimeout
- inputs:
- InputData:
- source: MyMinistInput
- side_inputs:
- SideInputData:
- source: Output4
- bind_mode: mount
- outputs:
- OutputDataStep2:
- destination: Output5
- datastore: workspaceblobstore
- bind_mode: mount
-```
-
-### Pipeline with multiple steps
-
-| YAML key | Description |
-| -- | -- |
-| `steps` | Sequence of one or more PipelineStep definitions. Note that the `destination` keys of one step's `outputs` become the `source` keys to the `inputs` of the next step.|
-
-```yaml
-pipeline:
- name: SamplePipelineFromYAML
- description: Sample multistep YAML pipeline
- data_references:
- TitanicDS:
- dataset_name: 'titanic_ds'
- bind_mode: download
- default_compute: cpu-cluster
- steps:
- Dataprep:
- type: "PythonScriptStep"
- name: "DataPrep Step"
- compute: cpu-cluster
- runconfig: ".\\default_runconfig.yml"
- script_name: "prep.py"
- arguments:
- - '--train_path'
- - output:train_path
- - '--test_path'
- - output:test_path
- allow_reuse: True
- inputs:
- titanic_ds:
- source: TitanicDS
- bind_mode: download
- outputs:
- train_path:
- destination: train_csv
- datastore: workspaceblobstore
- test_path:
- destination: test_csv
- Training:
- type: "PythonScriptStep"
- name: "Training Step"
- compute: cpu-cluster
- runconfig: ".\\default_runconfig.yml"
- script_name: "train.py"
- arguments:
- - "--train_path"
- - input:train_path
- - "--test_path"
- - input:test_path
- inputs:
- train_path:
- source: train_csv
- bind_mode: download
- test_path:
- source: test_csv
- bind_mode: download
-
-```
-
-## Schedules
-
-When defining the schedule for a pipeline, it can be either datastore-triggered or recurring based on a time interval. The following are the keys used to define a schedule:
-
-| YAML key | Description |
-| -- | -- |
-| `description` | A description of the schedule. |
-| `recurrence` | Contains recurrence settings, if the schedule is recurring. |
-| `pipeline_parameters` | Any parameters that are required by the pipeline. |
-| `wait_for_provisioning` | Whether to wait for provisioning of the schedule to complete. |
-| `wait_timeout` | The number of seconds to wait before timing out. |
-| `datastore_name` | The datastore to monitor for modified/added blobs. |
-| `polling_interval` | How long, in minutes, between polling for modified/added blobs. Default value: 5 minutes. Only supported for datastore schedules. |
-| `data_path_parameter_name` | The name of the data path pipeline parameter to set with the changed blob path. Only supported for datastore schedules. |
-| `continue_on_step_failure` | Whether to continue execution of other steps in the submitted PipelineRun if a step fails. If provided, will override the `continue_on_step_failure` setting of the pipeline.
-| `path_on_datastore` | Optional. The path on the datastore to monitor for modified/added blobs. The path is under the container for the datastore, so the actual path the schedule monitors is container/`path_on_datastore`. If none, the datastore container is monitored. Additions/modifications made in a subfolder of the `path_on_datastore` are not monitored. Only supported for datastore schedules. |
-
-The following example contains the definition for a datastore-triggered schedule:
-
-```yaml
-Schedule:
- description: "Test create with datastore"
- recurrence: ~
- pipeline_parameters: {}
- wait_for_provisioning: True
- wait_timeout: 3600
- datastore_name: "workspaceblobstore"
- polling_interval: 5
- data_path_parameter_name: "input_data"
- continue_on_step_failure: None
- path_on_datastore: "file/path"
-```
-
-When defining a **recurring schedule**, use the following keys under `recurrence`:
-
-| YAML key | Description |
-| -- | -- |
-| `frequency` | How often the schedule recurs. Valid values are `"Minute"`, `"Hour"`, `"Day"`, `"Week"`, or `"Month"`. |
-| `interval` | How often the schedule fires. The integer value is the number of time units to wait until the schedule fires again. |
-| `start_time` | The start time for the schedule. The string format of the value is `YYYY-MM-DDThh:mm:ss`. If no start time is provided, the first workload is run instantly and future workloads are run based on the schedule. If the start time is in the past, the first workload is run at the next calculated run time. |
-| `time_zone` | The time zone for the start time. If no time zone is provided, UTC is used. |
-| `hours` | If `frequency` is `"Day"` or `"Week"`, you can specify one or more integers from 0 to 23, separated by commas, as the hours of the day when the pipeline should run. Only `time_of_day` or `hours` and `minutes` can be used. |
-| `minutes` | If `frequency` is `"Day"` or `"Week"`, you can specify one or more integers from 0 to 59, separated by commas, as the minutes of the hour when the pipeline should run. Only `time_of_day` or `hours` and `minutes` can be used. |
-| `time_of_day` | If `frequency` is `"Day"` or `"Week"`, you can specify a time of day for the schedule to run. The string format of the value is `hh:mm`. Only `time_of_day` or `hours` and `minutes` can be used. |
-| `week_days` | If `frequency` is `"Week"`, you can specify one or more days, separated by commas, when the schedule should run. Valid values are `"Monday"`, `"Tuesday"`, `"Wednesday"`, `"Thursday"`, `"Friday"`, `"Saturday"`, and `"Sunday"`. |
-
-The following example contains the definition for a recurring schedule:
-
-```yaml
-Schedule:
- description: "Test create with recurrence"
- recurrence:
- frequency: Week # Can be "Minute", "Hour", "Day", "Week", or "Month".
- interval: 1 # how often fires
- start_time: 2019-06-07T10:50:00
- time_zone: UTC
- hours:
- - 1
- minutes:
- - 0
- time_of_day: null
- week_days:
- - Friday
- pipeline_parameters:
- 'a': 1
- wait_for_provisioning: True
- wait_timeout: 3600
- datastore_name: ~
- polling_interval: ~
- data_path_parameter_name: ~
- continue_on_step_failure: None
- path_on_datastore: ~
-```
-
-## Next steps
-
-Learn how to [use the CLI extension for Azure Machine Learning](reference-azure-machine-learning-cli.md).
machine-learning Reference Yaml Component Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-component-command.md
-+ Last updated 03/31/2022
The source JSON schema can be found at https://azuremlschemas.azureedge.net/latest/commandComponent.schema.json. + [!INCLUDE [schema note](../../includes/machine-learning-preview-old-json-schema-note.md)]
machine-learning Reference Yaml Compute Aml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-compute-aml.md
-+
The source JSON schema can be found at https://azuremlschemas.azureedge.net/latest/amlCompute.schema.json. + [!INCLUDE [schema note](../../includes/machine-learning-preview-old-json-schema-note.md)]
machine-learning Reference Yaml Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-compute-instance.md
-+
The source JSON schema can be found at https://azuremlschemas.azureedge.net/latest/computeInstance.schema.json. + [!INCLUDE [schema note](../../includes/machine-learning-preview-old-json-schema-note.md)]
machine-learning Reference Yaml Compute Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-compute-kubernetes.md
description: Reference documentation for the CLI (v2) Attached Azure Arc-enabled
+
The source JSON schema can be found at `https://azuremlschemas.azureedge.net/latest/kubernetesCompute.schema.json`. + [!INCLUDE [schema note](../../includes/machine-learning-preview-old-json-schema-note.md)]
The `az ml compute` commands can be used for managing Azure Arc-enabled Kubernet
## Next steps - [Install and use the CLI (v2)](how-to-configure-cli.md)-- [Configure and attach Azure Arc-enabled Kubernetes clusters](how-to-attach-arc-kubernetes.md)
+- [Configure and attach Kubernetes clusters anywhere](how-to-attach-kubernetes-anywhere.md)
machine-learning Reference Yaml Compute Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-compute-vm.md
-+
The source JSON schema can be found at https://azuremlschemas.azureedge.net/latest/vmCompute.schema.json. + [!INCLUDE [schema note](../../includes/machine-learning-preview-old-json-schema-note.md)]
machine-learning Reference Yaml Core Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-core-syntax.md
-+
Every Azure Machine Learning entity has a schematized YAML representation. You c
This article provides an overview of core syntax concepts you will encounter while configuring these YAML files. + ## Referencing an Azure ML entity
machine-learning Reference Yaml Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-data.md
-+
The source JSON schema can be found at https://azuremlschemas.azureedge.net/latest/data.schema.json. + [!INCLUDE [schema note](../../includes/machine-learning-preview-old-json-schema-note.md)]
machine-learning Reference Yaml Datastore Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-datastore-blob.md
-+
The source JSON schema can be found at https://azuremlschemas.azureedge.net/latest/azureBlob.schema.json. + [!INCLUDE [schema note](../../includes/machine-learning-preview-old-json-schema-note.md)]
machine-learning Reference Yaml Datastore Data Lake Gen1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-datastore-data-lake-gen1.md
-+
The source JSON schema can be found at https://azuremlschemas.azureedge.net/latest/azureDataLakeGen1.schema.json. + [!INCLUDE [schema note](../../includes/machine-learning-preview-old-json-schema-note.md)]
machine-learning Reference Yaml Datastore Data Lake Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-datastore-data-lake-gen2.md
-+
The source JSON schema can be found at https://azuremlschemas.azureedge.net/latest/azureDataLakeGen2.schema.json. + [!INCLUDE [schema note](../../includes/machine-learning-preview-old-json-schema-note.md)]
machine-learning Reference Yaml Datastore Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-datastore-files.md
-+
The source JSON schema can be found at https://azuremlschemas.azureedge.net/latest/azureFile.schema.json. + [!INCLUDE [schema note](../../includes/machine-learning-preview-old-json-schema-note.md)]
machine-learning Reference Yaml Deployment Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-deployment-batch.md
-+
The source JSON schema can be found at https://azuremlschemas.azureedge.net/latest/batchDeployment.schema.json. + [!INCLUDE [schema note](../../includes/machine-learning-preview-old-json-schema-note.md)]
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
| `endpoint_name` | string | **Required.** Name of the endpoint to create the deployment under. | | | | `model` | string or object | **Required.** The model to use for the deployment. This value can be either a reference to an existing versioned model in the workspace or an inline model specification. <br><br> To reference an existing model, use the `azureml:<model-name>:<model-version>` syntax. <br><br> To define a model inline, follow the [Model schema](reference-yaml-model.md#yaml-syntax). <br><br> As a best practice for production scenarios, you should create the model separately and reference it here. | | | | `code_configuration` | object | Configuration for the scoring code logic. <br><br> This property is not required if your model is in MLflow format. | | |
-| `code_configuration.code` | string | Local path to the source code directory for scoring the model. | | |
-| `code_configuration.scoring_script` | string | Relative path to the scoring file in the source code directory. | | |
+| `code_configuration.code` | string | The local directory that contains all the Python source code to score the model. | | |
+| `code_configuration.scoring_script` | string | The Python file in the directory specified above. This file must have an `init()` function and a `run()` function. Use the `init()` function for any costly or common preparation (for example, loading the model into memory). `init()` is called only once, at the beginning of the process. Use `run(mini_batch)` to score each entry; the value of `mini_batch` is a list of file paths. The `run()` function should return a pandas DataFrame or an array; each returned element indicates one successful run of an input element in the `mini_batch`. For more information on how to author the scoring script, see [Understanding the scoring script](how-to-use-batch-endpoint.md#understanding-the-scoring-script) and the sketch after this table.| | |
| `environment` | string or object | The environment to use for the deployment. This value can be either a reference to an existing versioned environment in the workspace or an inline environment specification. <br><br> This property is not required if your model is in MLflow format. <br><br> To reference an existing environment, use the `azureml:<environment-name>:<environment-version>` syntax. <br><br> To define an environment inline, follow the [Environment schema](reference-yaml-environment.md#yaml-syntax). <br><br> As a best practice for production scenarios, you should create the environment separately and reference it here. | | | | `compute` | string | **Required.** Name of the compute target to execute the batch scoring jobs on. This value should be a reference to an existing compute in the workspace using the `azureml:<compute-name>` syntax. | | | | `resources.instance_count` | integer | The number of nodes to use for each batch scoring job. | | `1` |
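+
+A minimal sketch of a scoring script that follows the `init()` and `run(mini_batch)` contract described above. The file layout, environment variable, and model format are illustrative assumptions (a scikit-learn pickle located under `AZUREML_MODEL_DIR`), not the only supported pattern:
+
+```python
+import os
+import glob
+import joblib
+import pandas as pd
+
+model = None
+
+def init():
+    # Runs once per worker process; do costly setup (such as loading the model) here.
+    global model
+    # Assumption: AZUREML_MODEL_DIR points at the registered model files.
+    model_dir = os.environ.get("AZUREML_MODEL_DIR", ".")
+    model_path = glob.glob(os.path.join(model_dir, "**", "*.pkl"), recursive=True)[0]
+    model = joblib.load(model_path)
+
+def run(mini_batch):
+    # mini_batch is a list of file paths; return one element per successfully scored input.
+    results = []
+    for file_path in mini_batch:
+        data = pd.read_csv(file_path)
+        predictions = model.predict(data)
+        results.append({"file": os.path.basename(file_path), "predictions": predictions.tolist()})
+    return pd.DataFrame(results)
+```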
machine-learning Reference Yaml Deployment Kubernetes Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-deployment-kubernetes-online.md
description: Reference documentation for the CLI (v2) Azure Arc-enabled Kubernet
+
The source JSON schema can be found at https://azuremlschemas.azureedge.net/latest/kubernetesOnlineDeployment.schema.json. - [!INCLUDE [schema note](../../includes/machine-learning-preview-old-json-schema-note.md)] ## YAML syntax
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
| Key | Type | Description | Allowed values | Default value | | | - | -- | -- | - | | `$schema` | string | The YAML schema. If you use the Azure Machine Learning VS Code extension to author the YAML file, including `$schema` at the top of your file enables you to invoke schema and resource completions. | | |
-| `name` | string | **Required.** Name of the deployment. <br><br> Naming rules are defined [here](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints-preview).| | |
+| `name` | string | **Required.** Name of the deployment. <br><br> Naming rules are defined [here](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints).| | |
| `description` | string | Description of the deployment. | | | | `tags` | object | Dictionary of tags for the deployment. | | | | `endpoint_name` | string | **Required.** Name of the endpoint to create the deployment under. | | |
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
| `code_configuration.scoring_script` | string | Relative path to the scoring file in the source code directory. | | | | `environment_variables` | object | Dictionary of environment variable key-value pairs to set in the deployment container. You can access these environment variables from your scoring scripts. | | | | `environment` | string or object | **Required.** The environment to use for the deployment. This value can be either a reference to an existing versioned environment in the workspace or an inline environment specification. <br><br> To reference an existing environment, use the `azureml:<environment-name>:<environment-version>` syntax. <br><br> To define an environment inline, follow the [Environment schema](reference-yaml-environment.md#yaml-syntax). <br><br> As a best practice for production scenarios, you should create the environment separately and reference it here. | | |
-| `instance_type` | string | The instance type used to place the inference workload. If omitted, the inference workload will be placed on the default instance type of the Kubernetes cluster specified in the endpoint's `compute` field. If specified, the inference workload will be placed on that selected instance type. <br><br> Note that the set of instance types for a Kubernetes cluster is configured via the Kubernetes cluster custom resource definition (CRD), hence they are not part of the Azure ML YAML schema for attaching Kubernetes compute.For more information, see [Create and select Kubernetes instance types](how-to-kubernetes-instance-type.md). | | |
+| `instance_type` | string | The instance type used to place the inference workload. If omitted, the inference workload will be placed on the default instance type of the Kubernetes cluster specified in the endpoint's `compute` field. If specified, the inference workload will be placed on that selected instance type. <br><br> Note that the set of instance types for a Kubernetes cluster is configured via the Kubernetes cluster custom resource definition (CRD), hence they are not part of the Azure ML YAML schema for attaching Kubernetes compute. For more information, see [Create and select Kubernetes instance types](how-to-attach-kubernetes-anywhere.md). | | |
| `instance_count` | integer | The number of instances to use for the deployment. Specify the value based on the workload you expect. This field is only required if you are using the `default` scale type (`scale_settings.type: default`). <br><br> `instance_count` can be updated after deployment creation using `az ml online-deployment update` command. | | | | `app_insights_enabled` | boolean | Whether to enable integration with the Azure Application Insights instance associated with your workspace. | | `false` | | `scale_settings` | object | The scale settings for the deployment. The two types of scale settings supported are the `default` scale type and the `target_utilization` scale type. <br><br> With the `default` scale type (`scale_settings.type: default`), you can manually scale the instance count up and down after deployment creation by updating the `instance_count` property. <br><br> To configure the `target_utilization` scale type (`scale_settings.type: target_utilization`), see [TargetUtilizationScaleSettings](#targetutilizationscalesettings) for the set of configurable properties. | | |
machine-learning Reference Yaml Deployment Managed Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-deployment-managed-online.md
-+ Previously updated : 03/31/2022- Last updated : 04/26/2022+ # CLI (v2) managed online deployment YAML schema
The source JSON schema can be found at https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json. - [!INCLUDE [schema note](../../includes/machine-learning-preview-old-json-schema-note.md)] ## YAML syntax
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
| Key | Type | Description | Allowed values | Default value | | | - | -- | -- | - | | `$schema` | string | The YAML schema. If you use the Azure Machine Learning VS Code extension to author the YAML file, including `$schema` at the top of your file enables you to invoke schema and resource completions. | | |
-| `name` | string | **Required.** Name of the deployment. <br><br> Naming rules are defined [here](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints-preview).| | |
+| `name` | string | **Required.** Name of the deployment. <br><br> Naming rules are defined [here](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints).| | |
| `description` | string | Description of the deployment. | | | | `tags` | object | Dictionary of tags for the deployment. | | | | `endpoint_name` | string | **Required.** Name of the endpoint to create the deployment under. | | |
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
| `environment_variables` | object | Dictionary of environment variable key-value pairs to set in the deployment container. You can access these environment variables from your scoring scripts. | | | | `environment` | string or object | **Required.** The environment to use for the deployment. This value can be either a reference to an existing versioned environment in the workspace or an inline environment specification. <br><br> To reference an existing environment, use the `azureml:<environment-name>:<environment-version>` syntax. <br><br> To define an environment inline, follow the [Environment schema](reference-yaml-environment.md#yaml-syntax). <br><br> As a best practice for production scenarios, you should create the environment separately and reference it here. | | | | `instance_type` | string | **Required.** The VM size to use for the deployment. For the list of supported sizes, see [Managed online endpoints SKU list](reference-managed-online-endpoints-vm-sku-list.md). | | |
-| `instance_count` | integer | **Required.** The number of instances to use for the deployment. Specify the value based on the workload you expect. For high availability, Microsoft recommends you set it to at least `3`. <br><br> `instance_count` can be updated after deployment creation using `az ml online-deployment update` command. | | |
+| `instance_count` | integer | **Required.** The number of instances to use for the deployment. Specify the value based on the workload you expect. For high availability, Microsoft recommends you set it to at least `3`. <br><br> `instance_count` can be updated after deployment creation using `az ml online-deployment update` command. <br><br> We reserve an extra 20% for performing upgrades. For more information, see [managed online endpoint quotas](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints). | | |
| `app_insights_enabled` | boolean | Whether to enable integration with the Azure Application Insights instance associated with your workspace. | | `false` | | `scale_settings` | object | The scale settings for the deployment. Currently only the `default` scale type is supported, so you do not need to specify this property. <br><br> With this `default` scale type, you can either manually scale the instance count up and down after deployment creation by updating the `instance_count` property, or create an [autoscaling policy](how-to-autoscale-endpoints.md). | | | | `scale_settings.type` | string | The scale type. | `default` | `default` | | `request_settings` | object | Scoring request settings for the deployment. See [RequestSettings](#requestsettings) for the set of configurable properties. | | | | `liveness_probe` | object | Liveness probe settings for monitoring the health of the container regularly. See [ProbeSettings](#probesettings) for the set of configurable properties. | | | | `readiness_probe` | object | Readiness probe settings for validating if the container is ready to serve traffic. See [ProbeSettings](#probesettings) for the set of configurable properties. | | |
+| `egress_public_network_access` | string | This flag secures the deployment by restricting communication between the deployment and the Azure resources used by it. Set to `disabled` to ensure that the download of the model, code, and images needed by your deployment is secured with a private endpoint. This flag is applicable only for managed online endpoints. | `enabled`, `disabled` | `enabled` |
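Taken together, a managed online deployment file built from the keys above might look like the following minimal sketch. The asset references, names, and VM size are placeholders, and the `model` key is assumed from the wider deployment schema rather than the rows shown here.

```yaml
# Minimal, illustrative sketch of a managed online deployment.
$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json
name: blue
endpoint_name: my-endpoint
model: azureml:my-model:1           # placeholder model reference (assumed key)
environment: azureml:my-env:1       # placeholder environment reference
instance_type: Standard_DS3_v2      # placeholder VM size; see the SKU list linked above
instance_count: 3                   # at least 3 is recommended for high availability
app_insights_enabled: true
egress_public_network_access: disabled   # secure outbound calls with a private endpoint
```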
### RequestSettings
machine-learning Reference Yaml Endpoint Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-endpoint-batch.md
-+
The source JSON schema can be found at https://azuremlschemas.azureedge.net/latest/batchEndpoint.schema.json. + [!INCLUDE [schema note](../../includes/machine-learning-preview-old-json-schema-note.md)]
machine-learning Reference Yaml Endpoint Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-endpoint-online.md
-+ Previously updated : 03/31/2022- Last updated : 04/26/2022+ # CLI (v2) online endpoint YAML schema
The source JSON schema can be found at https://azuremlschemas.azureedge.net/latest/managedOnlineEndpoint.schema.json. - [!INCLUDE [schema note](../../includes/machine-learning-preview-old-json-schema-note.md)] > [!NOTE]
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
| Key | Type | Description | Allowed values | Default value | | | - | -- | -- | - | | `$schema` | string | The YAML schema. If you use the Azure Machine Learning VS Code extension to author the YAML file, including `$schema` at the top of your file enables you to invoke schema and resource completions. | | |
-| `name` | string | **Required.** Name of the endpoint. Needs to be unique at the Azure region level. <br><br> Naming rules are defined under [managed online endpoint limits](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints-preview).| | |
+| `name` | string | **Required.** Name of the endpoint. Needs to be unique at the Azure region level. <br><br> Naming rules are defined under [managed online endpoint limits](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints).| | |
| `description` | string | Description of the endpoint. | | | | `tags` | object | Dictionary of tags for the endpoint. | | | | `auth_mode` | string | The authentication method for the endpoint. Key-based authentication and Azure ML token-based authentication are supported. Key-based authentication doesn't expire but Azure ML token-based authentication does. | `key`, `aml_token` | `key` |
-| `compute` | string | Name of the compute target to run the endpoint deployments on. This field is only applicable for endpoint deployments to Azure Arc-enabled Kubernetes clusters (the compute target specified in this field must have `type: kubernetes`). Do not specify this field if you are doing managed online inference. | | |
+| `compute` | string | Name of the compute target to run the endpoint deployments on. This field is only applicable for endpoint deployments to Azure Arc-enabled Kubernetes clusters (the compute target specified in this field must have `type: kubernetes`). Don't specify this field if you're doing managed online inference. | | |
| `identity` | object | The managed identity configuration for accessing Azure resources for endpoint provisioning and inference. | | | | `identity.type` | string | The type of managed identity. If the type is `user_assigned`, the `identity.user_assigned_identities` property must also be specified. | `system_assigned`, `user_assigned` | | | `identity.user_assigned_identities` | array | List of fully qualified resource IDs of the user-assigned identities. | | |
-| `traffic` | object | Traffic represents the percentage of requests to be served by different deployments. It is represented by a dictionary of key-value pairs, where keys represent the deployment name and value represent the percentage of traffic to that deployment. For example, `blue: 90 green: 10` means 90% requests are sent to the deployment named `blue` and 10% is sent to deployment `green`. Total traffic has to either be 0 or sum up to 100. See [Safe rollout for online endpoints](how-to-safely-rollout-managed-endpoints.md) to see the traffic configuration in action. <br><br> Note: you cannot set this field during online endpoint creation, as the deployments under that endpoint must be created before traffic can be set. You can update the traffic for an online endpoint after the deployments have been created using `az ml online-endpoint update`; for example `az ml online-endpoint update --name <endpoint_name> --traffic "blue=90 green=10"`. | | |
+| `traffic` | object | Traffic represents the percentage of requests to be served by different deployments. It's represented by a dictionary of key-value pairs, where each key represents a deployment name and its value represents the percentage of traffic to that deployment. For example, `blue: 90 green: 10` means 90% of requests are sent to the deployment named `blue` and 10% are sent to the deployment named `green`. Total traffic has to either be 0 or sum up to 100. See [Safe rollout for online endpoints](how-to-safely-rollout-managed-endpoints.md) to see the traffic configuration in action. <br><br> Note: you can't set this field during online endpoint creation, as the deployments under that endpoint must be created before traffic can be set. You can update the traffic for an online endpoint after the deployments have been created using `az ml online-endpoint update`; for example, `az ml online-endpoint update --name <endpoint_name> --traffic "blue=90 green=10"`. | | |
+| `public_network_access` | string | This flag controls the visibility of the managed endpoint. When `disabled`, inbound scoring requests are received using the [private endpoint of the Azure Machine Learning workspace](how-to-configure-private-link.md) and the endpoint can't be reached from public networks. This flag is applicable only for managed endpoints. | `enabled`, `disabled` | `enabled` |
+| `mirror_traffic` | string | Percentage of live traffic to mirror to a deployment. Mirroring traffic doesn't change the results returned to clients. The mirrored percentage of traffic is copied and submitted to the specified deployment so you can gather metrics and logging without impacting clients. For example, you can check whether latency is within acceptable bounds and that there are no HTTP errors. It's represented by a dictionary with a single key-value pair, where the key represents the deployment name and the value represents the percentage of traffic to mirror to the deployment. For more information, see [Test a deployment with mirrored traffic](how-to-safely-rollout-managed-endpoints.md#test-the-deployment-with-mirrored-traffic-preview). | | |
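As a quick illustration of how these keys fit together, the following is a minimal sketch of an online endpoint definition; the name and tag values are placeholders. `traffic` and `mirror_traffic` are omitted on purpose because, as noted above, they can only be set once the deployments under the endpoint exist.

```yaml
# Minimal, illustrative sketch of an online endpoint definition.
$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineEndpoint.schema.json
name: my-endpoint            # placeholder; must be unique within the Azure region
description: Sample endpoint for illustration
auth_mode: key               # or aml_token for Azure ML token-based auth
public_network_access: enabled
identity:
  type: system_assigned
tags:
  owner: ml-team
```

After the deployments exist, you would then split traffic with a command such as `az ml online-endpoint update --name my-endpoint --traffic "blue=90 green=10"`, as described in the `traffic` row above.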
## Remarks
machine-learning Reference Yaml Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-environment.md
-+
The source JSON schema can be found at https://azuremlschemas.azureedge.net/latest/environment.schema.json. + [!INCLUDE [schema note](../../includes/machine-learning-preview-old-json-schema-note.md)]
machine-learning Reference Yaml Job Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-job-command.md
-+
The source JSON schema can be found at https://azuremlschemas.azureedge.net/latest/commandJob.schema.json. + [!INCLUDE [schema note](../../includes/machine-learning-preview-old-json-schema-note.md)]
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
| `distribution` | object | The distribution configuration for distributed training scenarios. One of [MpiConfiguration](#mpiconfiguration), [PyTorchConfiguration](#pytorchconfiguration), or [TensorFlowConfiguration](#tensorflowconfiguration). | | | | `compute` | string | Name of the compute target to execute the job on. This can be either a reference to an existing compute in the workspace (using the `azureml:<compute_name>` syntax) or `local` to designate local execution. | | `local` | | `resources.instance_count` | integer | The number of nodes to use for the job. | | `1` |
-| `resources.instance_type` | string | The instance type to use for the job. Applicable for jobs running on Azure Arc-enabled Kubernetes compute (where the compute target specified in the `compute` field is of `type: kubernentes`). If omitted, this will default to the default instance type for the Kubernetes cluster. For more information, see [Create and select Kubernetes instance types](how-to-kubernetes-instance-type.md). | | |
+| `resources.instance_type` | string | The instance type to use for the job. Applicable for jobs running on Azure Arc-enabled Kubernetes compute (where the compute target specified in the `compute` field is of `type: kubernetes`). If omitted, it defaults to the default instance type for the Kubernetes cluster. For more information, see [Create and select Kubernetes instance types](how-to-attach-kubernetes-anywhere.md). | | |
| `limits.timeout` | integer | The maximum time in seconds the job is allowed to run. Once this limit is reached the system will cancel the job. | | | | `inputs` | object | Dictionary of inputs to the job. The key is a name for the input within the context of the job and the value is the input value. <br><br> Inputs can be referenced in the `command` using the `${{ inputs.<input_name> }}` expression. | | | | `inputs.<input_name>` | number, integer, boolean, string or object | One of a literal value (of type number, integer, boolean, or string) or an object containing a [job input data specification](#job-inputs). | | |
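As an orientation for how these keys combine, here is a small, hypothetical command job specification. The folder, script, environment reference, and compute name are placeholders, and the `code` and `environment` keys are taken from the wider command job schema rather than the rows above.

```yaml
# Minimal, hypothetical command job showing the keys described above.
$schema: https://azuremlschemas.azureedge.net/latest/commandJob.schema.json
code: ./src                               # placeholder folder containing train.py
command: python train.py --epochs ${{ inputs.epochs }}
environment: azureml:my-training-env:1    # placeholder environment reference
compute: azureml:gpu-cluster              # or "local" to run on your machine
resources:
  instance_count: 2
limits:
  timeout: 3600                           # cancel the job after one hour
inputs:
  epochs: 20                              # literal input referenced in the command
```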
machine-learning Reference Yaml Job Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-job-pipeline.md
-+
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
-The source JSON schema can be found at https://azuremlschemas.azureedge.net/latest/pipelineJob.schema.json.
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
+> * [v1](v1/reference-pipeline-yaml.md)
+> * [v2 (current version)](reference-yaml-job-pipeline.md)
+The source JSON schema can be found at https://azuremlschemas.azureedge.net/latest/pipelineJob.schema.json.
[!INCLUDE [schema note](../../includes/machine-learning-preview-old-json-schema-note.md)]
machine-learning Reference Yaml Job Sweep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-job-sweep.md
-+
The source JSON schema can be found at https://azuremlschemas.azureedge.net/latest/sweepJob.schema.json. + [!INCLUDE [schema note](../../includes/machine-learning-preview-old-json-schema-note.md)]
machine-learning Reference Yaml Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-model.md
-+
The source JSON schema can be found at https://azuremlschemas.azureedge.net/latest/model.schema.json. + [!INCLUDE [schema note](../../includes/machine-learning-preview-old-json-schema-note.md)]
machine-learning Reference Yaml Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-overview.md
-+
The Azure Machine Learning CLI (v2), an extension to the Azure CLI, often uses and sometimes requires YAML files with specific schemas. This article lists reference docs and the source schema for YAML files. Examples are included inline in individual articles. + ## Workspace
machine-learning Reference Yaml Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-workspace.md
-+
The source JSON schema can be found at https://azuremlschemas.azureedge.net/latest/workspace.schema.json. + [!INCLUDE [schema note](../../includes/machine-learning-preview-old-json-schema-note.md)]
machine-learning Tutorial 1St Experiment Bring Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-1st-experiment-bring-data.md
Last updated 12/21/2021-+ # Tutorial: Upload data and train a model (part 3 of 3) + This tutorial shows you how to upload and use your own data to train machine learning models in Azure Machine Learning. This tutorial is *part 3 of a three-part tutorial series*. In [Part 2: Train a model](tutorial-1st-experiment-sdk-train.md), you trained a model in the cloud, using sample data from `PyTorch`. You also downloaded that data through the `torchvision.datasets.CIFAR10` method in the PyTorch API. In this tutorial, you'll use the downloaded data to learn the workflow for working with your own data in Azure Machine Learning.
machine-learning Tutorial 1St Experiment Hello World https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-1st-experiment-hello-world.md
Last updated 12/21/2021-+ # Tutorial: Get started with a Python script in Azure Machine Learning (part 1 of 3) + In this tutorial, you run your first Python script in the cloud with Azure Machine Learning. This tutorial is *part 1 of a three-part tutorial series*. This tutorial avoids the complexity of training a machine learning model. You will run a "Hello World" Python script in the cloud. You will learn how a control script is used to configure and create a run in Azure Machine Learning.
In the next tutorial, you build on these learnings by running something more int
> [Tutorial: Train a model](tutorial-1st-experiment-sdk-train.md) >[!NOTE]
-> If you want to finish the tutorial series here and not progress to the next step, remember to [clean up your resources](tutorial-1st-experiment-bring-data.md#clean-up-resources).
+> If you want to finish the tutorial series here and not progress to the next step, remember to [clean up your resources](tutorial-1st-experiment-bring-data.md#clean-up-resources).
machine-learning Tutorial 1St Experiment Sdk Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-1st-experiment-sdk-train.md
Last updated 12/21/2021-+ # Tutorial: Train your first machine learning model (part 2 of 3) + This tutorial shows you how to train a machine learning model in Azure Machine Learning. This tutorial is *part 2 of a three-part tutorial series*. In [Part 1: Run "Hello world!"](tutorial-1st-experiment-hello-world.md) of the series, you learned how to use a control script to run a job in the cloud.
machine-learning Tutorial Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-auto-train-image-models.md
Title: 'Tutorial: AutoML- train object detection model'
-description: Train an object detection model to identify if an image contains certain objects with automated ML and the Azure Machine Learning Python SDK automated ML.
+description: Train an object detection model to identify if an image contains certain objects with automated ML and the Azure Machine Learning CLI v2 and Python SDK v2 (preview).
Previously updated : 10/06/2021- Last updated : 04/15/2022+ # Tutorial: Train an object detection model (preview) with AutoML and Python
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
+> * [v1](v1/tutorial-auto-train-image-models-v1.md)
+> * [v2 (current version)](tutorial-auto-train-image-models.md)
+ >[!IMPORTANT] > The features presented in this article are in preview. They should be considered [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview features that might change at any time.
-In this tutorial, you learn how to train an object detection model using Azure Machine Learning automated ML with the Azure Machine Learning Python SDK. This object detection model identifies whether the image contains objects, such as a can, carton, milk bottle, or water bottle.
+In this tutorial, you learn how to train an object detection model using Azure Machine Learning automated ML with the Azure Machine Learning CLI extension v2 or the Azure Machine Learning Python SDK v2 (preview).
+This object detection model identifies whether the image contains objects, such as a can, carton, milk bottle, or water bottle.
Automated ML accepts training data and configuration settings, and automatically iterates through combinations of different feature normalization/standardization methods, models, and hyperparameter settings to arrive at the best model.
You'll write code using the Python SDK in this tutorial and learn the following
* Complete the [Quickstart: Get started with Azure Machine Learning](quickstart-create-resources.md#create-the-workspace) if you don't already have an Azure Machine Learning workspace.
-* Download and unzip the [**odFridgeObjects.zip*](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) data file. The dataset is annotated in Pascal VOC format, where each image corresponds to an xml file. Each xml file contains information on where its corresponding image file is located and also contains information about the bounding boxes and the object labels. In order to use this data, you first need to convert it to the required JSONL format as seen in the [Convert the downloaded data to JSONL](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb) section of the notebook.
+* Download and unzip the [*odFridgeObjects.zip*](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) data file. The dataset is annotated in Pascal VOC format, where each image corresponds to an XML file. Each XML file contains information on where its corresponding image file is located and also contains information about the bounding boxes and the object labels. In order to use this data, you first need to convert it to the required JSONL format as seen in the [Convert the downloaded data to JSONL](https://github.com/Azure/azureml-examples/blob/sdk-preview/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb) section of the notebook.
+
+# [CLI v2](#tab/CLI-v2)
++
+This tutorial is also available in the [azureml-examples repository on GitHub](https://github.com/Azure/azureml-examples/tree/sdk-preview/cli/jobs/automl-standalone-jobs/cli-automl-image-object-detection-task-fridge-items). If you wish to run it in your own local environment, set it up by using the following instructions:
+
+* Install and [set up CLI (v2)](how-to-configure-cli.md#prerequisites) and make sure you install the `ml` extension.
+
+# [Python SDK v2 (preview)](#tab/SDK-v2)
+
+This tutorial is also available in the [azureml-examples repository on GitHub](https://github.com/Azure/azureml-examples/tree/sdk-preview/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items). If you wish to run it in your own local environment, set it up by using the following instructions:
+
+* Use the following commands to install the Azure ML Python SDK v2:
+    * Uninstall the previous preview version:
+    ```bash
+    pip uninstall azure-ai-ml
+    ```
+    * Install the Azure ML Python SDK v2:
+    ```bash
+    pip install azure-ai-ml
+    ```
+
+ > [!NOTE]
+ > Only Python 3.6 and 3.7 are compatible with automated ML support for computer vision tasks.
-This tutorial is also available in the [azureml-examples repository on GitHub](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml/image-object-detection) if you wish to run it in your own [local environment](how-to-configure-environment.md#local). To get the required packages,
-* Run `pip install azureml`
-* [Install the full `automl` client](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment)
+ ## Compute target setup
You first need to set up a compute target to use for your automated ML model tra
This tutorial uses the NCsv3-series (with V100 GPUs) as this type of compute target leverages multiple GPUs to speed up training. Additionally, you can set up multiple nodes to take advantage of parallelism when tuning hyperparameters for your model.
-The following code creates a GPU compute of size Standard _NC24s_v3 with four nodes that are attached to the workspace, `ws`.
+The following code creates a GPU compute of size `Standard_NC24s_v3` with four nodes.
-> [!WARNING]
-> Ensure your subscription has sufficient quota for the compute target you wish to use.
+# [CLI v2](#tab/CLI-v2)
-```python
-from azureml.core.compute import AmlCompute, ComputeTarget
+Create a .yml file with the following configuration.
-cluster_name = "gpu-nc24sv3"
+```yml
+$schema: https://azuremlschemas.azureedge.net/latest/amlCompute.schema.json
+name: gpu-cluster
+type: amlcompute
+size: Standard_NC24s_v3
+min_instances: 0
+max_instances: 4
+idle_time_before_scale_down: 120
+```
-try:
- compute_target = ComputeTarget(workspace=ws, name=cluster_name)
- print('Found existing compute target.')
-except KeyError:
- print('Creating a new compute target...')
- compute_config = AmlCompute.provisioning_configuration(vm_size='Standard_NC24s_v3',
- idle_seconds_before_scaledown=1800,
- min_nodes=0,
- max_nodes=4)
+To create the compute, you run the following CLI v2 command with the path to your .yml file, workspace name, resource group and subscription ID.
- compute_target = ComputeTarget.create(ws, cluster_name, compute_config)
+```azurecli
+az ml compute create -f [PATH_TO_YML_FILE] --workspace-name [YOUR_AZURE_WORKSPACE] --resource-group [YOUR_AZURE_RESOURCE_GROUP] --subscription [YOUR_AZURE_SUBSCRIPTION]
+```
-#If no min_node_count is provided, the scale settings are used for the cluster.
-compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)
+The created compute can be provided using the `compute` key in the `automl` task configuration YAML:
+
+```yaml
+compute: azureml:gpu-cluster
```
-## Experiment setup
-Next, create an `Experiment` in your workspace to track your model training runs.
+# [Python SDK v2 (preview)](#tab/SDK-v2)
```python
+from azure.ai.ml.entities import AmlCompute
+compute_name = "gpu-cluster"
+cluster_basic = AmlCompute(
+ name=compute_name,
+ type="amlcompute",
+ size="Standard_NC24s_v3",
+ min_instances=0,
+ max_instances=4,
+ idle_time_before_scale_down=120,
+)
+ml_client.begin_create_or_update(cluster_basic)
+```
+This compute is used later while creating the task-specific `automl` job.
-from azureml.core import Experiment
+
-experiment_name = 'automl-image-object-detection'
-experiment = Experiment(ws, name=experiment_name)
+## Experiment setup
+
+You can use an Experiment to track your model training runs.
+
+# [CLI v2](#tab/CLI-v2)
+The experiment name can be provided using the `experiment_name` key as follows:
+
+```yaml
+experiment_name: dpv2-cli-automl-image-object-detection-experiment
```
+# [Python SDK v2 (preview)](#tab/SDK-v2)
+The experiment name is used later while creating the task-specific `automl` job.
+```python
+exp_name = "dpv2-image-object-detection-experiment"
+```
++ ## Visualize input data Once you have the input image data prepared in [JSONL](https://jsonlines.org/) (JSON Lines) format, you can visualize the ground truth bounding boxes for an image. To do so, be sure you have `matplotlib` installed.
def plot_ground_truth_boxes_jsonl(image_file, jsonl_file):
break if not ground_truth_data_found: print("Unable to find ground truth information for image: {}".format(image_file))-
-def plot_ground_truth_boxes_dataset(image_file, dataset_pd):
- image_base_name = os.path.basename(image_file)
- image_pd = dataset_pd[dataset_pd['portable_path'].str.contains(image_base_name)]
- if not image_pd.empty:
- ground_truth_boxes = image_pd.iloc[0]["label"]
- plot_ground_truth_boxes(image_file, ground_truth_boxes)
- else:
- print("Unable to find ground truth information for image: {}".format(image_file))
``` Using the above helper functions, for any given image, you can run the following code to display the bounding boxes.
jsonl_file = "./odFridgeObjects/train_annotations.jsonl"
plot_ground_truth_boxes_jsonl(image_file, jsonl_file) ```
-## Upload data and create dataset
+## Upload data and create MLTable
+In order to use the data for training, upload the data to the default Blob Storage of your Azure ML workspace and register it as an asset. The benefits of registering data are:
+- Easy to share with other members of the team
+- Versioning of the metadata (location, description, etc.)
+- Lineage tracking
-In order to use the data for training, upload it to your workspace via a datastore. The datastore provides a mechanism for you to upload or download data, and interact with it from your remote compute targets.
+# [CLI v2](#tab/CLI-v2)
-```python
-ds = ws.get_default_datastore()
-ds.upload(src_dir='./odFridgeObjects', target_path='odFridgeObjects')
+Create a .yml file with the following configuration.
+
+```yml
+$schema: https://azuremlschemas.azureedge.net/latest/data.schema.json
+name: fridge-items-images-object-detection
+description: Fridge-items images Object detection
+path: ./data/odFridgeObjects
+type: uri_folder
```
-Once uploaded to the datastore, you can create an Azure Machine Learning dataset from the data. Datasets package your data into a consumable object for training.
+To upload the images as a data asset, you run the following CLI v2 command with the path to your .yml file, workspace name, resource group and subscription ID.
+
+```azurecli
+az ml data create -f [PATH_TO_YML_FILE] --workspace-name [YOUR_AZURE_WORKSPACE] --resource-group [YOUR_AZURE_RESOURCE_GROUP] --subscription [YOUR_AZURE_SUBSCRIPTION]
+```
+
+# [Python SDK v2 (preview)](#tab/SDK-v2)
+[!Notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=upload-data)]
+++
+The next step is to create an `MLTable` from your data in JSONL format, as shown below. An MLTable packages your data into a consumable object for training.
-The following code creates a dataset for training. Since no validation dataset is specified, by default 20% of your training data is used for validation.
-``` python
-from azureml.core import Dataset
-from azureml.data import DataType
+# [CLI v2](#tab/CLI-v2)
-training_dataset_name = 'odFridgeObjectsTrainingDataset'
-if training_dataset_name in ws.datasets:
- training_dataset = ws.datasets.get(training_dataset_name)
- print('Found the training dataset', training_dataset_name)
-else:
- # create training dataset
- # create training dataset
- training_dataset = Dataset.Tabular.from_json_lines_files(
- path=ds.path('odFridgeObjects/train_annotations.jsonl'),
- set_column_types={"image_url": DataType.to_stream(ds.workspace)},
- )
- training_dataset = training_dataset.register(workspace=ws, name=training_dataset_name)
+The following configuration creates training and validation data from the MLTable.
-print("Training dataset name: " + training_dataset.name)
+```yaml
+target_column_name: label
+training_data:
+ path: data/training-mltable-folder
+ type: mltable
+validation_data:
+ path: data/validation-mltable-folder
+ type: mltable
```
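Each `path` above is expected to point to a folder containing an `MLTable` file that tells Azure ML how to read the JSONL annotations. The following is a rough sketch of such a file; the transformation names (`read_json_lines`, `convert_column_types`) are assumptions about the MLTable format and may differ from the files shipped with the sample.

```yaml
# Hypothetical MLTable file placed inside data/training-mltable-folder.
paths:
  - file: ./train_annotations.jsonl
transformations:
  - read_json_lines:
      encoding: utf8
      invalid_lines: error
      include_path_column: false
  - convert_column_types:
      - columns: image_url
        column_type: stream_info       # stream the image files referenced by the annotations
```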
-### Visualize dataset
+# [Python SDK v2 (preview)](#tab/SDK-v2)
-You can also visualize the ground truth bounding boxes for an image from this dataset.
+You can create data inputs from training and validation MLTable with the following code:
-Load the dataset into a pandas dataframe.
+[!Notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=data-load)]
-```python
-import azureml.dataprep as dprep
+
-from azureml.dataprep.api.functions import get_portable_path
+## Configure your object detection experiment
-# Get pandas dataframe from the dataset
-dflow = training_dataset._dataflow.add_column(get_portable_path(dprep.col("image_url")),
- "portable_path", "image_url")
-dataset_pd = dflow.to_pandas_dataframe(extended_types=True)
-```
+To configure automated ML runs for image-related tasks, create a task-specific AutoML job.
-For any given image, you can run the following code to display the bounding boxes.
+# [CLI v2](#tab/CLI-v2)
-```python
-image_file = "./odFridgeObjects/images/31.jpg"
-plot_ground_truth_boxes_dataset(image_file, dataset_pd)
+```yaml
+task: image_object_detection
+primary_metric: mean_average_precision
```
-## Configure your object detection experiment
+# [Python SDK v2 (preview)](#tab/SDK-v2)
-To configure automated ML runs for image-related tasks, use the `AutoMLImageConfig` object. In your `AutoMLImageConfig`, you can specify the model algorithms with the `model_name` parameter and configure the settings to perform a hyperparameter sweep over a defined parameter space to find the optimal model.
+[!Notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=image-object-detection-configuration)]
-In this example, we use the `AutoMLImageConfig` to train an object detection model with `yolov5` and `fasterrcnn_resnet50_fpn`, both of which are pretrained on COCO, a large-scale object detection, segmentation, and captioning dataset that contains over thousands of labeled images with over 80 label categories.
++
+In your AutoML job, you can specify the model algorithms by using the `model_name` parameter and configure the settings to perform a hyperparameter sweep over a defined search space to find the optimal model.
+
+In this example, we will train an object detection model with `yolov5` and `fasterrcnn_resnet50_fpn`, both of which are pretrained on COCO, a large-scale object detection, segmentation, and captioning dataset that contains thousands of labeled images with over 80 label categories.
### Hyperparameter sweeping for image tasks
-You can perform a hyperparameter sweep over a defined parameter space to find the optimal model.
+You can perform a hyperparameter sweep over a defined search space to find the optimal model.
-The following code, defines the parameter space in preparation for the hyperparameter sweep for each defined algorithm, `yolov5` and `fasterrcnn_resnet50_fpn`. In the parameter space, specify the range of values for `learning_rate`, `optimizer`, `lr_scheduler`, etc., for AutoML to choose from as it attempts to generate a model with the optimal primary metric. If hyperparameter values are not specified, then default values are used for each algorithm.
+The following code defines the search space in preparation for the hyperparameter sweep for each defined algorithm, `yolov5` and `fasterrcnn_resnet50_fpn`. In the search space, specify the range of values for `learning_rate`, `optimizer`, `lr_scheduler`, etc., for AutoML to choose from as it attempts to generate a model with the optimal primary metric. If hyperparameter values are not specified, then default values are used for each algorithm.
-For the tuning settings, use random sampling to pick samples from this parameter space by importing the `GridParameterSampling, RandomParameterSampling` and `BayesianParameterSampling` classes. Doing so, tells automated ML to try a total of 20 iterations with these different samples, running four iterations at a time on our compute target, which was set up using four nodes. The more parameters the space has, the more iterations you need to find optimal models.
+For the tuning settings, use random sampling to pick samples from this search space by setting `sampling_algorithm` to `random`. Doing so tells automated ML to try a total of 10 trials with these different samples, running two trials at a time on our compute target, which was set up with four nodes. The more parameters the search space has, the more trials you need to find optimal models.
The Bandit early termination policy is also used. This policy terminates poorly performing configurations; that is, those configurations that are not within 20% slack of the best-performing configuration, which significantly saves compute resources.
-```python
-from azureml.train.hyperdrive import RandomParameterSampling
-from azureml.train.hyperdrive import BanditPolicy, HyperDriveConfig
-from azureml.train.hyperdrive import choice, uniform
-
-parameter_space = {
- 'model': choice(
- {
- 'model_name': choice('yolov5'),
- 'learning_rate': uniform(0.0001, 0.01),
- #'model_size': choice('small', 'medium'), # model-specific
- 'img_size': choice(640, 704, 768), # model-specific
- },
- {
- 'model_name': choice('fasterrcnn_resnet50_fpn'),
- 'learning_rate': uniform(0.0001, 0.001),
- #'warmup_cosine_lr_warmup_epochs': choice(0, 3),
- 'optimizer': choice('sgd', 'adam', 'adamw'),
- 'min_size': choice(600, 800), # model-specific
- }
- )
-}
-
-tuning_settings = {
- 'iterations': 20,
- 'max_concurrent_iterations': 4,
- 'hyperparameter_sampling': RandomParameterSampling(parameter_space),
- 'policy': BanditPolicy(evaluation_interval=2, slack_factor=0.2, delay_evaluation=6)
-}
+# [CLI v2](#tab/CLI-v2)
++
+```yaml
+sweep:
+ limits:
+ max_trials: 10
+ max_concurrent_trials: 2
+ sampling_algorithm: random
+ early_termination:
+ type: bandit
+ evaluation_interval: 2
+ slack_factor: 0.2
+ delay_evaluation: 6
```
-Once the parameter space and tuning settings are defined, you can pass them into your `AutoMLImageConfig` object and then submit the experiment to train an image model using your training dataset.
-
-```python
-from azureml.train.automl import AutoMLImageConfig
-automl_image_config = AutoMLImageConfig(task='image-object-detection',
- compute_target=compute_target,
- training_data=training_dataset,
- validation_data=validation_dataset,
- primary_metric='mean_average_precision',
- **tuning_settings)
-
-automl_image_run = experiment.submit(automl_image_config)
-automl_image_run.wait_for_completion(wait_post_processing=True)
+```yaml
+search_space:
+ - model_name: "yolov5"
+ learning_rate: "uniform(0.0001, 0.01)"
+ model_size: "choice('small', 'medium')"
+ - model_name: "fasterrcnn_resnet50_fpn"
+ learning_rate: "uniform(0.0001, 0.001)"
+ optimizer: "choice('sgd', 'adam', 'adamw')"
+ min_size: "choice(600, 800)"
```
-When doing a hyperparameter sweep, it can be useful to visualize the different configurations that were tried using the HyperDrive UI. You can navigate to this UI by going to the 'Child runs' tab in the UI of the main automl_image_run from above, which is the HyperDrive parent run. Then you can go into the 'Child runs' tab of this one. Alternatively, here below you can see directly the HyperDrive parent run and navigate to its 'Child runs' tab:
+# [Python SDK v2 (preview)](#tab/SDK-v2)
-```python
-from azureml.core import Run
-hyperdrive_run = Run(experiment=experiment, run_id=automl_image_run.id + '_HD')
-hyperdrive_run
-```
+[!Notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=sweep-settings)]
-## Register the best model
+[!Notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=search-space-settings)]
-Once the run completes, we can register the model that was created from the best run.
+
-```python
-best_child_run = automl_image_run.get_best_child()
-model_name = best_child_run.properties['model_name']
-model = best_child_run.register_model(model_name = model_name, model_path='outputs/model.pt')
-```
+Once the search space and sweep settings are defined, you can then submit the job to train an image model using your training dataset.
-## Deploy model as a web service
+# [CLI v2](#tab/CLI-v2)
-Once you have your trained model, you can deploy the model on Azure. You can deploy your trained model as a web service on Azure Container Instances (ACI) or Azure Kubernetes Service (AKS). ACI is the perfect option for testing deployments, while AKS is better suited for high-scale, production usage.
-In this tutorial, we deploy the model as a web service in AKS.
+To submit your AutoML job, you run the following CLI v2 command with the path to your .yml file, workspace name, resource group and subscription ID.
-1. Create an AKS compute cluster. In this example, a GPU virtual machine SKU is used for the deployment cluster
+```azurecli
+az ml job create --file ./hello-automl-job-basic.yml --workspace-name [YOUR_AZURE_WORKSPACE] --resource-group [YOUR_AZURE_RESOURCE_GROUP] --subscription [YOUR_AZURE_SUBSCRIPTION]
+```
- ```python
- from azureml.core.compute import ComputeTarget, AksCompute
- from azureml.exceptions import ComputeTargetException
+# [Python SDK v2 (preview)](#tab/SDK-v2)
- # Choose a name for your cluster
- aks_name = "cluster-aks-gpu"
+When you've configured your AutoML job with the desired settings, you can submit it.
- # Check to see if the cluster already exists
- try:
- aks_target = ComputeTarget(workspace=ws, name=aks_name)
- print('Found existing compute target')
- except ComputeTargetException:
- print('Creating a new compute target...')
- # Provision AKS cluster with GPU machine
- prov_config = AksCompute.provisioning_configuration(vm_size="STANDARD_NC6",
- location="eastus2")
- # Create the cluster
- aks_target = ComputeTarget.create(workspace=ws,
- name=aks_name,
- provisioning_configuration=prov_config)
- aks_target.wait_for_completion(show_output=True)
- ```
+[!Notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=submit-run)]
-1. Define the inference configuration that describes how to set up the web-service containing your model. You can use the scoring script and the environment from the training run in your inference config.
+
- > [!NOTE]
- > To change the model's settings, open the downloaded scoring script and modify the model_settings variable before deploying the model.
+When doing a hyperparameter sweep, it can be useful to visualize, in the HyperDrive UI, the different configurations that were tried. You can navigate to this UI by going to the 'Child runs' tab in the UI of the main automl_image_run from above, which is the HyperDrive parent run. Then you can go into the 'Child runs' tab of that run.
- ```python
- from azureml.core.model import InferenceConfig
+# [Python SDK v2 (preview)](#tab/SDK-v2-)
- best_child_run.download_file('outputs/scoring_file_v_1_0_0.py', output_file_path='score.py')
- environment = best_child_run.get_environment()
- inference_config = InferenceConfig(entry_script='score.py', environment=environment)
- ```
+Alternatively, you can retrieve the HyperDrive parent run directly and navigate to its 'Child runs' tab:
-1. You can then deploy the model as an AKS web service.
+```python
+hd_job = ml_client.jobs.get(returned_job.name + '_HD')
+hd_job
+```
- ```python
+
- from azureml.core.webservice import AksWebservice
- from azureml.core.webservice import Webservice
- from azureml.core.model import Model
- from azureml.core.environment import Environment
+## Register and deploy model as a web service
- aks_config = AksWebservice.deploy_configuration(autoscale_enabled=True,
- cpu_cores=1,
- memory_gb=50,
- enable_app_insights=True)
+Once you have your trained model, you can deploy the model on Azure. You can deploy your trained model as a web service on Azure Container Instances (ACI) or Azure Kubernetes Service (AKS). ACI is the perfect option for testing deployments, while AKS is better suited for high-scale, production usage.
+
+You can deploy the model from the [Azure Machine Learning studio UI](https://ml.azure.com/).
+Navigate to the model you wish to deploy in the **Models** tab of the automated ML run and select **Deploy**.
+
+![Select model from the automl runs in studio UI ](./media/how-to-auto-train-image-models/select-model.png)
+
+You can configure the model deployment endpoint name and the inferencing cluster to use for your model deployment in the **Deploy a model** pane.
- aks_service = Model.deploy(ws,
- models=[model],
- inference_config=inference_config,
- deployment_config=aks_config,
- deployment_target=aks_target,
- name='automl-image-test',
- overwrite=True)
- aks_service.wait_for_deployment(show_output=True)
- print(aks_service.state)
- ```
+![Deploy configuration](./media/how-to-auto-train-image-models/deploy-image-model.png)
## Test the web service
You can test the deployed web service to predict new images. For this tutorial,
import requests # URL for the web service
-scoring_uri = aks_service.scoring_uri
+scoring_uri = "<scoring_uri from web service>"
# If the service is authenticated, set the key or token
-key, _ = aks_service.get_keys()
+key = "<key from the web service>"
sample_image = './test_image.jpg'
headers['Authorization'] = f'Bearer {key}'
resp = requests.post(scoring_uri, data, headers=headers) print(resp.text) ```+ ## Visualize detections+ Now that you have scored a test image, you can visualize the bounding boxes for this image. To do so, be sure you have matplotlib installed. ```
In this automated machine learning tutorial, you did the following tasks:
* [Learn how to set up AutoML to train computer vision models with Python (preview)](how-to-auto-train-image-models.md). * [Learn how to configure incremental training on computer vision models](how-to-auto-train-image-models.md#incremental-training-optional). * See [what hyperparameters are available for computer vision tasks](reference-automl-images-hyperparameters.md).
-* Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml). Please check the folders with 'image-' prefix for samples specific to building computer vision models.
+* Code examples:
+ # [CLI v2](#tab/CLI-v2)
+  * Review detailed code examples and use cases in the [azureml-examples repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/sdk-preview/cli/jobs/automl-standalone-jobs). Check the folders with the 'cli-automl-image-' prefix for samples specific to building computer vision models.
+ # [Python SDK v2 (preview)](#tab/SDK-v2)
+  * Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/sdk-preview/sdk/jobs/automl-standalone-jobs). Check the folders with the 'automl-image-' prefix for samples specific to building computer vision models.
++ > [!NOTE] > Use of the fridge objects dataset is available through the license under the [MIT License](https://github.com/microsoft/computervision-recipes/blob/master/LICENSE).
machine-learning Tutorial Auto Train Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-auto-train-models.md
Last updated 10/21/2021-+ # Tutorial: Train a regression model with AutoML and Python + In this tutorial, you learn how to train a regression model with the Azure Machine Learning Python SDK using Azure Machine Learning automated ML. This regression model predicts NYC taxi fares. This process accepts training data and configuration settings, and automatically iterates through combinations of different feature normalization/standardization methods, models, and hyperparameter settings to arrive at the best model.
machine-learning Tutorial Convert Ml Experiment To Production https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-convert-ml-experiment-to-production.md
Last updated 10/21/2021-+ # Tutorial: Convert ML experiments to production Python code + In this tutorial, you learn how to convert Jupyter notebooks into Python scripts to make them testing- and automation-friendly by using the MLOpsPython code template and Azure Machine Learning. Typically, this process is used to take experimentation/training code from a Jupyter notebook and convert it into Python scripts. Those scripts can then be used for testing and CI/CD automation in your production environment. A machine learning project requires experimentation where hypotheses are tested with agile tools like Jupyter Notebook using real datasets. Once the model is ready for production, the model code should be placed in a production code repository. In some cases, the model code must be converted to Python scripts to be placed in the production code repository. This tutorial covers a recommended approach on how to export experimentation code to Python scripts.
machine-learning Tutorial Create Secure Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-create-secure-workspace.md
Last updated 04/06/2022 -+ # How to create a secure workspace
In this tutorial, you accomplish the following tasks:
## Limitations The steps in this article put Azure Container Registry behind the VNet. In this configuration, you can't deploy models to Azure Container Instances inside the VNet. For more information, see [Secure the inference environment](how-to-secure-inferencing-vnet.md).+
+> [!TIP]
+> As an alternative to Azure Container Instances, try Azure Machine Learning managed online endpoints. For more information, see [Enable network isolation for managed online endpoints (preview)](how-to-secure-online-endpoint.md).
+ ## Create a virtual network To create a virtual network, use the following steps:
To create a virtual network, use the following steps:
:::image type="content" source="./media/tutorial-create-secure-workspace/machine-learning-create.png" alt-text="{alt-text}":::
-1. From the __Basics__ tab, select the __subscription__, __resource group__, and __Region__ you previously used for the virtual network. Use the follow values for the other fields:
+1. From the __Basics__ tab, select the __subscription__, __resource group__, and __Region__ you previously used for the virtual network. Use the following values for the other fields:
* __Workspace name__: A unique name for your workspace. * __Storage account__: Select the storage account you created previously. * __Key vault__: Select the key vault you created previously.
A compute cluster is used by your training jobs. A compute instance provides a J
:::image type="content" source="./media/tutorial-create-secure-workspace/create-compute-instance-vm.png" alt-text="Screenshot of compute instance vm settings":::
-1. From the __Advanced Settings__ dialog, , set the __Subnet__ to __Training__, and then select __Create__.
+1. From the __Advanced Settings__ dialog, set the __Subnet__ to __Training__, and then select __Create__.
:::image type="content" source="./media/tutorial-create-secure-workspace/create-compute-instance-settings.png" alt-text="Screenshot of compute instance settings":::
When Azure Container Registry is behind the virtual network, Azure Machine Learn
> [!IMPORTANT] > The steps in this article put Azure Container Registry behind the VNet. In this configuration, you cannot deploy a model to Azure Container Instances inside the VNet. We do not recommend using Azure Container Instances with Azure Machine Learning in a virtual network. For more information, see [Secure the inference environment](how-to-secure-inferencing-vnet.md).
+>
+> As an alternative to Azure Container Instances, try Azure Machine Learning managed online endpoints. For more information, see [Enable network isolation for managed online endpoints (preview)](how-to-secure-online-endpoint.md).
At this point, you can use studio to interactively work with notebooks on the compute instance and run training jobs on the compute cluster. For a tutorial on using the compute instance and compute cluster, see [run a Python script](tutorial-1st-experiment-hello-world.md).
machine-learning Tutorial Designer Automobile Price Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-designer-automobile-price-deploy.md
Previously updated : 10/21/2021- Last updated : 05/10/2022+ # Tutorial: Designer - deploy a machine learning model Use the designer to deploy a machine learning model to predict the price of cars. This tutorial is part two of a two-part series. - In [part one of the tutorial](tutorial-designer-automobile-price-train-score.md) you trained a linear regression model on car prices. In part two, you deploy the model to give others a chance to use it. In this tutorial, you: > [!div class="checklist"]
Complete [part one of the tutorial](tutorial-designer-automobile-price-train-sco
To deploy your pipeline, you must first convert the training pipeline into a real-time inference pipeline. This process removes training components and adds web service inputs and outputs to handle requests.
+> [!NOTE]
+> **Create inference pipeline** only supports training pipelines that contain only designer built-in components and that include a component like **Train Model**, which outputs the trained model.
+ ### Create a real-time inference pipeline
-1. Above the pipeline canvas, select **Create inference pipeline** > **Real-time inference pipeline**.
+1. On the pipeline job detail page, above the pipeline canvas, select **Create inference pipeline** > **Real-time inference pipeline**.
- :::image type="content" source="./media/tutorial-designer-automobile-price-deploy/tutorial2-create-inference-pipeline.png" alt-text="Screenshot showing where to find the create pipeline button":::
+ :::image type="content" source="./media/tutorial-designer-automobile-price-deploy/create-real-time-inference.png" alt-text="Screenshot of create inference pipeline in pipeline job detail page.":::
- Your pipeline should now look like this:
+ Your new pipeline will now look like this:
- ![Screenshot showing the expected configuration of the pipeline after preparing it for deployment](./media/tutorial-designer-automobile-price-deploy/real-time-inference-pipeline.png)
+ :::image type="content" source="./media/tutorial-designer-automobile-price-deploy/real-time-inference-pipeline.png" alt-text="Screenshot showing the expected configuration of the pipeline after preparing it for deployment.":::
When you select **Create inference pipeline**, several things happen:
To deploy your pipeline, you must first convert the training pipeline into a rea
If this is the first run, it may take up to 20 minutes for your pipeline to finish running. The default compute settings have a minimum node size of 0, which means that the designer must allocate resources after being idle. Repeated pipeline runs will take less time since the compute resources are already allocated. Additionally, the designer uses cached results for each component to further improve efficiency.
-1. Select **Deploy**.
+1. Go to the real-time inference pipeline job detail page by selecting the **Job detail** link in the left pane.
+
+1. Select **Deploy** on the job detail page.
+
+ :::image type="content" source="./media/tutorial-designer-automobile-price-deploy/deploy-in-job-detail-page.png" alt-text="Screenshot showing deploying in job detail page.":::
## Create an inferencing cluster
In the dialog box that appears, you can select from any existing Azure Kubernete
1. On the navigation ribbon, select **Inference Clusters** > **+ New**.
- ![Screenshot showing how to get to the new inference cluster pane](./media/tutorial-designer-automobile-price-deploy/new-inference-cluster.png)
+ :::image type="content" source="./media/tutorial-designer-automobile-price-deploy/new-inference-cluster.png" alt-text="Screenshot showing how to get to the new inference cluster pane.":::
1. In the inference cluster pane, configure a new Kubernetes Service.
In the dialog box that appears, you can select from any existing Azure Kubernete
> [!NOTE] > It takes approximately 15 minutes to create a new AKS service. You can check the provisioning state on the **Inference Clusters** page.
- >
## Deploy the real-time endpoint
After your AKS service has finished provisioning, return to the real-time infere
1. Select **Deploy** above the canvas.
-1. Select **Deploy new real-time endpoint**.
+1. Select **Deploy new real-time endpoint**.
1. Select the AKS cluster you created.
- :::image type="content" source="./media/tutorial-designer-automobile-price-deploy/setup-endpoint.png" alt-text="Screenshot showing how to set up a new real-time endpoint":::
+ :::image type="content" source="./media/tutorial-designer-automobile-price-deploy/setup-endpoint.png" alt-text="Screenshot showing how to set up a new real-time endpoint.":::
You can also change **Advanced** setting for your real-time endpoint.
-
+ |Advanced setting|Description| |||
- |Enable Application Insights diagnostics and data collection| Whether to enable Azure Application Insights to collect data from the deployed endpoints. </br> By default: false |
- |Scoring timeout| A timeout in milliseconds to enforce for scoring calls to the web service.</br>By default: 60000|
- |Auto scale enabled| Whether to enable autoscaling for the web service.</br>By default: true|
- |Min replicas| The minimum number of containers to use when autoscaling this web service.</br>By default: 1|
- |Max replicas| The maximum number of containers to use when autoscaling this web service.</br> By default: 10|
- |Target utilization|The target utilization (in percent out of 100) that the autoscaler should attempt to maintain for this web service.</br> By default: 70|
- |Refresh period|How often (in seconds) the autoscaler attempts to scale this web service.</br> By default: 1|
- |CPU reserve capacity|The number of CPU cores to allocate for this web service.</br> By default: 0.1|
- |Memory reserve capacity|The amount of memory (in GB) to allocate for this web service.</br> By default: 0.5|
-
+ |Enable Application Insights diagnostics and data collection| Whether to enable Azure Application Insights to collect data from the deployed endpoints. </br> By default: false. |
+ |Scoring timeout| A timeout in milliseconds to enforce for scoring calls to the web service.</br>By default: 60000.|
+ |Auto scale enabled| Whether to enable autoscaling for the web service.</br>By default: true.|
+ |Min replicas| The minimum number of containers to use when autoscaling this web service.</br>By default: 1.|
+ |Max replicas| The maximum number of containers to use when autoscaling this web service.</br> By default: 10.|
+ |Target utilization|The target utilization (in percent out of 100) that the autoscaler should attempt to maintain for this web service.</br> By default: 70.|
+ |Refresh period|How often (in seconds) the autoscaler attempts to scale this web service.</br> By default: 1.|
+ |CPU reserve capacity|The number of CPU cores to allocate for this web service.</br> By default: 0.1.|
+ |Memory reserve capacity|The amount of memory (in GB) to allocate for this web service.</br> By default: 0.5.|
+
+1. Select **Deploy**.
-1. Select **Deploy**.
+ A success notification from the notification center appears after deployment finishes. It might take a few minutes.
- A success notification above the canvas appears after deployment finishes. It might take a few minutes.
+ :::image type="content" source="./media/tutorial-designer-automobile-price-deploy/deploy-notification.png" alt-text="Screenshot showing deployment notification.":::
> [!TIP] > You can also deploy to **Azure Container Instance** (ACI) if you select **Azure Container Instance** for **Compute type** in the real-time endpoint setting box.
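The table above describes the deployment configuration conceptually. If you later script a similar AKS deployment with the Azure Machine Learning SDK v1 (outside this designer tutorial), those settings map roughly onto `AksWebservice.deploy_configuration`. The following is only a sketch of that mapping, assuming the `azureml-core` package is installed; the parameter values mirror the defaults listed above.

```python
# Illustrative only: the advanced settings above expressed as an SDK v1 AKS deployment configuration.
# Assumes the azureml-core package; the values mirror the defaults shown in the table.
from azureml.core.webservice import AksWebservice

aks_config = AksWebservice.deploy_configuration(
    enable_app_insights=False,        # Enable Application Insights diagnostics and data collection
    scoring_timeout_ms=60000,         # Scoring timeout
    autoscale_enabled=True,           # Auto scale enabled
    autoscale_min_replicas=1,         # Min replicas
    autoscale_max_replicas=10,        # Max replicas
    autoscale_target_utilization=70,  # Target utilization
    autoscale_refresh_seconds=1,      # Refresh period
    cpu_cores=0.1,                    # CPU reserve capacity
    memory_gb=0.5,                    # Memory reserve capacity
)

# This configuration object would then be passed to Model.deploy() together with your
# workspace, registered model, inference configuration, and AKS compute target.
```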
After deployment finishes, you can view your real-time endpoint by going to the
1. To test your endpoint, go to the **Test** tab. From here, you can enter test data and select **Test** to verify the output of your endpoint.
-For more information on consuming your web service, see [Consume a model deployed as a webservice](how-to-consume-web-service.md)
+For more information on consuming your web service, see [Consume a model deployed as a webservice](how-to-consume-web-service.md).
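As a quick, non-authoritative illustration of calling the deployed endpoint over REST, the sketch below uses the `requests` library. The scoring URI, key, and request body are placeholders; copy the real values and the exact input schema from the endpoint's **Consume** tab.

```python
# Minimal sketch of calling the deployed real-time endpoint over REST.
# scoring_uri, key, and the payload are placeholders; copy the real ones from the Consume tab.
import json
import requests

scoring_uri = "<scoring-uri-from-the-consume-tab>"
key = "<primary-key-from-the-consume-tab>"

# Hypothetical request body; the Consume tab shows the exact schema for your pipeline's inputs.
payload = {"Inputs": {"WebServiceInput0": [{"make": "toyota", "horsepower": 62}]}}

headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {key}",
}

response = requests.post(scoring_uri, data=json.dumps(payload), headers=headers)
print(response.status_code, response.json())
```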
-## Limitations
+## Update the real-time endpoint
-### Update inference pipeline
+You can update the online endpoint with a new model trained in the designer. On the online endpoint detail page, find your previous training pipeline job and inference pipeline job.
-If you make some modifications in your training pipeline, you should resubmit the training pipeline, **Update** the inference pipeline and run the inference pipeline again.
+1. You can directly find and modify your training pipeline draft in the designer homepage.
+
+ Or you can open the training pipeline job link and then clone it into a new pipeline draft to continue editing.
-Note that only trained models will be updated in the inference pipeline, while data transformation will not be updated.
+ :::image type="content" source="./media/tutorial-designer-automobile-price-deploy/endpoint-train-job-link.png" alt-text="Screenshot showing training job link in endpoint detail page.":::
-To use the updated transformation in inference pipeline, you need to register the transformation output of the transformation component as dataset.
+1. After you submit the modified training pipeline, go to the job detail page.
-![Screenshot showing how to register transformation dataset](./media/tutorial-designer-automobile-price-deploy/register-transformation-dataset.png)
+1. When the job completes, right-click **Train Model** and select **Register data**.
-Then manually replace the **TD-** component in inference pipeline with the registered dataset.
+ :::image type="content" source="./media/how-to-run-batch-predictions-designer/register-train-model-as-dataset.png" alt-text="Screenshot showing register trained model as dataset.":::
-![Screenshot showing how to replace transformation component](./media/tutorial-designer-automobile-price-deploy/replace-td-module.png)
+ Enter a name and select the **File** type.
-Then you can submit the inference pipeline with the updated model and transformation, and deploy.
+ :::image type="content" source="./media/how-to-run-batch-predictions-designer/register-train-model-as-dataset-2.png" alt-text="Screenshot of register as a data asset with new data asset selected.":::
-### Deploy real-time endpoint
+1. After the dataset registers successfully, open your inference pipeline draft, or clone the previous inference pipeline job into a new draft. In the inference pipeline draft, replace the previous trained model shown as **MD-XXXX** node connected to the **Score Model** component with the newly registered dataset.
+
+ :::image type="content" source="./media/tutorial-designer-automobile-price-deploy/modify-inference-pipeline.png" alt-text="Screenshot showing how to modify inference pipeline.":::
++
+1. If you need to update the data preprocessing part of your training pipeline, and would like to bring that update into the inference pipeline, the process is similar to the steps above.
+
+ You just need to register the transformation output of the transformation component as a dataset.
+
+ Then manually replace the **TD-** component in the inference pipeline with the registered dataset.
+
+ :::image type="content" source="./media/tutorial-designer-automobile-price-deploy/replace-td-module.png" alt-text="Screenshot showing how to replace transformation component.":::
+
+1. After modifying your inference pipeline with the newly trained model or transformation, submit it. When the job completes, deploy it to the existing online endpoint you deployed previously.
+
+ :::image type="content" source="./media/tutorial-designer-automobile-price-deploy/deploy-to-existing-endpoint.png" alt-text="Screenshot showing how to replace existing real-time endpoint.":::
+
+## Limitations
-Due to datstore access limitation, if your inference pipeline contains **Import Data** or **Export Data** component, they will be auto-removed when deploy to real-time endpoint.
+Due to datastore access limitations, if your inference pipeline contains an **Import Data** or **Export Data** component, those components are automatically removed when you deploy to a real-time endpoint.
## Clean up resources
machine-learning Tutorial Designer Automobile Price Train Score https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-designer-automobile-price-train-score.md
Title: 'Tutorial: Designer - train a no-code regression model' description: Train a regression model that predicts car prices using the Azure Machine Learning designer.---++ Previously updated : 10/21/2021- Last updated : 05/10/2022+ # Tutorial: Designer - train a no-code regression model Train a linear regression model that predicts car prices using the Azure Machine Learning designer. This tutorial is part one of a two-part series.
-This tutorial uses the Azure Machine Learning designer, for more information see [What is Azure Machine Learning designer?](concept-designer.md)
+This tutorial uses the Azure Machine Learning designer. For more information, see [What is Azure Machine Learning designer?](concept-designer.md)
In part one of the tutorial, you learn how to:
A pipeline runs on a compute target, which is a compute resource that's attached
You can set a **Default compute target** for the entire pipeline, which will tell every component to use the same compute target by default. However, you can specify compute targets on a per-module basis.
-1. Next to the pipeline name, select the **Gear icon** ![Screenshot of the gear icon](./media/tutorial-designer-automobile-price-train-score/gear-icon.png) at the top of the canvas to open the **Settings** pane.
+1. Next to the pipeline name, select the **Gear icon** ![Screenshot of the gear icon that is in the UI.](./media/tutorial-designer-automobile-price-train-score/gear-icon.png) at the top of the canvas to open the **Settings** pane.
1. In the **Settings** pane to the right of the canvas, select **Select compute target**. If you already have an available compute target, you can select it to run this pipeline.
- > [!NOTE]
- > The designer can only run training experiments on Azure Machine Learning Compute but other compute targets won't be shown.
- 1. Enter a name for the compute resource. 1. Select **Save**.
You can set a **Default compute target** for the entire pipeline, which will tel
## Import data
-There are several sample datasets included in the designer for you to experiment with. For this tutorial, use **Automobile price data (Raw)**.
+There are several sample datasets included in the designer for you to experiment with. For this tutorial, use **Automobile price data (Raw)**.
1. To the left of the pipeline canvas is a palette of datasets and components. Select **Sample datasets** to view the available sample datasets. 1. Select the dataset **Automobile price data (Raw)**, and drag it onto the canvas.
- :::image type="content" source="./media/tutorial-designer-automobile-price-train-score/drag-data.gif" alt-text="Gif of dragging data to the canvas.":::
+ :::image type="content" source="./media/tutorial-designer-automobile-price-train-score/drag-data.gif" alt-text="Gif of dragging the Automobile price data to the canvas.":::
### Visualize the data
Datasets typically require some preprocessing before analysis. You might have no
### Remove a column
-When you train a model, you have to do something about the data that's missing. In this dataset, the **normalized-losses** column is missing many values, so you will exclude that column from the model altogether.
+When you train a model, you have to do something about the data that's missing. In this dataset, the **normalized-losses** column is missing many values, so you'll exclude that column from the model altogether.
1. In the component palette to the left of the canvas, expand the **Data Transformation** section and find the **Select Columns in Dataset** component.
When you train a model, you have to do something about the data that's missing.
> [!TIP] > You create a flow of data through your pipeline when you connect the output port of one component to an input port of another.
- >
- :::image type="content" source="./media/tutorial-designer-automobile-price-train-score/connect-modules.gif" alt-text="Screenshot of connecting components.":::
+ :::image type="content" source="./media/tutorial-designer-automobile-price-train-score/connect-modules.gif" alt-text="Screenshot of connecting Automobile price data component to select columns in dataset component.":::
1. Select the **Select Columns in Dataset** component.
When you train a model, you have to do something about the data that's missing.
1. Select the **+** to add a new rule. 1. From the drop-down menus, select **Exclude** and **Column names**.
-
+ 1. Enter *normalized-losses* in the text box. 1. In the lower right, select **Save** to close the column selector. :::image type="content" source="./media/tutorial-designer-automobile-price-train-score/exclude-column.png" alt-text="Screenshot of select columns with exclude highlighted.":::
-1. Select the **Select Columns in Dataset** component.
+1. Select the **Select Columns in Dataset** component.
1. In the component details pane to the right of the canvas, select the **Comment** text box and enter *Exclude normalized losses*.
Your dataset still has missing values after you remove the **normalized-losses**
1. In the component palette to the left of the canvas, expand the section **Data Transformation**, and find the **Clean Missing Data** component.
-1. Drag the **Clean Missing Data** component to the pipeline canvas. Connect it to the **Select Columns in Dataset** component.
+1. Drag the **Clean Missing Data** component to the pipeline canvas. Connect it to the **Select Columns in Dataset** component.
1. Select the **Clean Missing Data** component.
Your dataset still has missing values after you remove the **normalized-losses**
1. In the component details pane to the right of the canvas, select **Remove entire row** under **Cleaning mode**.
-1. In the component details pane to the right of the canvas, select the **Comment** box, and enter *Remove missing value rows*.
+1. In the component details pane to the right of the canvas, select the **Comment** box, and enter *Remove missing value rows*.
Your pipeline should now look something like this:
- :::image type="content" source="./media/tutorial-designer-automobile-price-train-score/pipeline-clean.png" alt-text="Screenshot of automobilie price data connected to select columns in dataset componet which is connected to clean missing data.":::
+ :::image type="content" source="./media/tutorial-designer-automobile-price-train-score/pipeline-clean.png" alt-text="Screenshot of automobile price data connected to select columns in dataset component, which is connected to clean missing data.":::
## Train a machine learning model
Because you want to predict price, which is a number, you can use a regression a
### Split the data
-Splitting data is a common task in machine learning. You will split your data into two separate datasets. One dataset will train the model and the other will test how well the model performed.
+Splitting data is a common task in machine learning. You'll split your data into two separate datasets. One dataset will train the model and the other will test how well the model performed.
1. In the component palette, expand the section **Data Transformation** and find the **Split Data** component.
Splitting data is a common task in machine learning. You will split your data in
Train the model by giving it a dataset that includes the price. The algorithm constructs a model that explains the relationship between the features and the price as presented by the training data. 1. In the component palette, expand **Machine Learning Algorithms**.
-
+ This option displays several categories of components that you can use to initialize learning algorithms. 1. Select **Regression** > **Linear Regression**, and drag it to the pipeline canvas.
Train the model by giving it a dataset that includes the price. The algorithm co
> [!IMPORTANT] > Be sure that the left output ports of **Split Data** connects to **Train Model**. The left port contains the training set. The right port contains the test set.
- :::image type="content" source="./media/tutorial-designer-automobile-price-train-score/pipeline-train-model.png" alt-text="Screenshot showing the correct configuration of the Train Model component. The Linear Regression component connects to left port of Train Model component and the Split Data component connects to right port of Train Model.":::
+ :::image type="content" source="./media/tutorial-designer-automobile-price-train-score/pipeline-train-model.png" alt-text="Screenshot showing the Linear Regression connects to left port of Train Model and the Split Data connects to right port of Train Model.":::
1. Select the **Train Model** component. 1. In the component details pane to the right of the canvas, select **Edit column** selector.
-1. In the **Label column** dialog box, expand the drop-down menu and select **Column names**.
+1. In the **Label column** dialog box, expand the drop-down menu and select **Column names**.
1. In the text box, enter *price* to specify the value that your model is going to predict. >[!IMPORTANT]
- > Make sure you enter the column name exactly. Do not capitalize **price**.
+ > Make sure you enter the column name exactly. Do not capitalize **price**.
Your pipeline should look like this:
Train the model by giving it a dataset that includes the price. The algorithm co
After you train your model by using 70 percent of the data, you can use it to score the other 30 percent to see how well your model functions.
-1. Enter *score model* in the search box to find the **Score Model** component. Drag the component to the pipeline canvas.
+1. Enter *score model* in the search box to find the **Score Model** component. Drag the component to the pipeline canvas.
1. Connect the output of the **Train Model** component to the left input port of **Score Model**. Connect the test data output (right port) of the **Split Data** component to the right input port of **Score Model**.
After you train your model by using 70 percent of the data, you can use it to sc
Use the **Evaluate Model** component to evaluate how well your model scored the test dataset.
-1. Enter *evaluate* in the search box to find the **Evaluate Model** component. Drag the component to the pipeline canvas.
+1. Enter *evaluate* in the search box to find the **Evaluate Model** component. Drag the component to the pipeline canvas.
-1. Connect the output of the **Score Model** component to the left input of **Evaluate Model**.
+1. Connect the output of the **Score Model** component to the left input of **Evaluate Model**.
The final pipeline should look something like this:
Now that your pipeline is all setup, you can submit a pipeline run to train your
1. At the top of the canvas, select **Submit**.
-1. In the **Set up pipeline run** dialog box, select **Create new**.
+1. In the **Set up pipeline job** dialog box, select **Create new**.
> [!NOTE] > Experiments group similar pipeline runs together. If you run a pipeline multiple times, you can select the same experiment for successive runs.
Now that your pipeline is all setup, you can submit a pipeline run to train your
1. For **New experiment Name**, enter **Tutorial-CarPrices**. 1. Select **Submit**.
-
- You can view run status and details at the top right of the canvas.
-
+
+ 1. You'll see a submission list in the left pane of the canvas, and a notification will pop up at the top right corner of the page. You can select the **Job detail** link to go to the job detail page for debugging.
+
+ :::image type="content" source="./media/how-to-run-batch-predictions-designer/submission-list.png" alt-text="Screenshot of the submitted jobs list with a success notification.":::
+ If this is the first run, it may take up to 20 minutes for your pipeline to finish running. The default compute settings have a minimum node size of 0, which means that the designer must allocate resources after being idle. Repeated pipeline runs will take less time since the compute resources are already allocated. Additionally, the designer uses cached results for each component to further improve efficiency.

### View scored labels
+On the job detail page, you can check the pipeline job status, results, and logs.

After the run completes, you can view the results of the pipeline run. First, look at the predictions generated by the regression model.

1. Right-click the **Score Model** component, and select **Preview data** > **Scored dataset** to view its output.
machine-learning Tutorial Pipeline Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-pipeline-python-sdk.md
Title: 'Tutorial: ML pipelines for training'
+ Title: "Tutorial: ML pipelines with Python SDK v2 (preview)"
-description: In this tutorial, you build a machine learning pipeline for image classification. Focus on machine learning instead of infrastructure and automation.
+description: Use Azure Machine Learning to create your production-ready ML project in a cloud-based Python Jupyter Notebook using Azure ML Python SDK V2 (preview).
-+ -- Previously updated : 01/28/2022-+++ Last updated : 05/10/2022+
+#Customer intent: This tutorial is intended to introduce Azure ML to data scientists who want to scale up or publish their ML projects. By completing a familiar end-to-end project, which starts by loading the data and ends by creating and calling an online inference endpoint, the user should become familiar with the core concepts of Azure ML and their most common usage. Each step of this tutorial can be modified or performed in other ways that might have security or scalability advantages. We will cover some of those in Part II of this tutorial; however, we suggest the reader use the provided links in each section to learn more on each topic.
-# Tutorial: Build an Azure Machine Learning pipeline for image classification
+# Tutorial: Create production ML pipelines with Python SDK v2 (preview) in a Jupyter notebook
-In this tutorial, you learn how to build an [Azure Machine Learning pipeline](concept-ml-pipelines.md) to prepare data and train a machine learning model. Machine learning pipelines optimize your workflow with speed, portability, and reuse, so you can focus on machine learning instead of infrastructure and automation.
-The example trains a small [Keras](https://keras.io/) convolutional neural network to classify images in the [Fashion MNIST](https://github.com/zalandoresearch/fashion-mnist) dataset.
+> [!IMPORTANT]
+> SDK v2 is currently in public preview.
+> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+> [!NOTE]
+> For a tutorial that uses SDK v1 to build a pipeline, see [Tutorial: Build an Azure Machine Learning pipeline for image classification](v1/tutorial-pipeline-python-sdk.md)
+>
-In this tutorial, you complete the following tasks:
+In this tutorial, you'll use Azure Machine Learning (Azure ML) to create a production-ready machine learning (ML) project, using the AzureML Python SDK v2 (preview).
+
+You'll learn how to use the AzureML Python SDK v2 to:
> [!div class="checklist"]
-> * Configure workspace
-> * Create an Experiment to hold your work
-> * Provision a ComputeTarget to do the work
-> * Create a Dataset in which to store compressed data
-> * Create a pipeline step to prepare the data for training
-> * Define a runtime Environment in which to perform training
-> * Create a pipeline step to define the neural network and perform the training
-> * Compose a Pipeline from the pipeline steps
-> * Run the pipeline in the experiment
-> * Review the output of the steps and the trained neural network
-> * Register the model for further use
-
-If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
+>
+> * Connect to your Azure ML workspace
+> * Create Azure ML data assets
+> * Create reusable Azure ML components
+> * Create, validate and run Azure ML pipelines
+> * Deploy the newly-trained model as an endpoint
+> * Call the Azure ML endpoint for inferencing
## Prerequisites
-* Complete the [Quickstart: Get started with Azure Machine Learning](quickstart-create-resources.md) if you don't already have an Azure Machine Learning workspace.
-* A Python environment in which you've installed both the `azureml-core` and `azureml-pipeline` packages. This environment is for defining and controlling your Azure Machine Learning resources and is separate from the environment used at runtime for training.
+* Complete the [Quickstart: Get started with Azure Machine Learning](quickstart-create-resources.md) to:
+ * Create a workspace.
+ * Create a cloud-based compute instance to use for your development environment.
+ * Create a cloud-based compute cluster to use for training your model.
+
+## Install the SDK
+
+You'll complete the following experiment setup and run steps in Azure Machine Learning studio. This consolidated web interface includes machine learning tools for data science practitioners of all skill levels.
+
+First you'll install the v2 SDK on your compute instance:
+
+1. Sign in to [Azure Machine Learning studio](https://ml.azure.com/).
+
+1. Select the subscription and the workspace you created as part of the [Prerequisites](#prerequisites).
+
+1. On the left, select **Compute**.
+
+1. From the list of **Compute Instances**, find the one you created.
+
+1. Select **Terminal** to open a terminal session on the compute instance.
+
+1. In the terminal window, install Python SDK v2 (preview) with this command:
+
+ ```
+ pip install --pre azure-ai-ml
+ ```
+
+ For more information, see [Install the Python SDK v2](https://aka.ms/sdk-v2-install).
+
+## Clone the azureml-examples repo
+
+1. Now, in the terminal, run the command:
+
+ ```
+ git clone --depth 1 https://github.com/Azure/azureml-examples --branch sdk-preview
+ ```
+
+1. On the left, select **Notebooks**.
+
+1. Now, on the left, select **Files**.
+
+ :::image type="content" source="media/tutorial-pipeline-python-sdk/clone-tutorials-users-files.png" alt-text="Screenshot that shows the Clone tutorials folder.":::
+
+1. A list of folders shows each user who accesses the workspace. Select your folder; you'll find that **azureml-examples** has been cloned there.
+
+## Open the cloned notebook
+
+1. Open the **tutorials** folder that was cloned into your **User files** section.
+
+1. Select the **e2e-ml-workflow.ipynb** file from your **azureml-examples/tutorials/e2e-ds-experience/** folder.
+
+ :::image type="content" source="media/tutorial-pipeline-python-sdk/expand-folder.png" alt-text="Screenshot shows the open tutorials folder.":::
+
+1. On the top bar, select the compute instance you created during the [Quickstart: Get started with Azure Machine Learning](quickstart-create-resources.md) to use for running the notebook.
> [!Important]
-> Currently, the most recent Python release compatible with `azureml-pipeline` is Python 3.8. If you've difficulty installing the `azureml-pipeline` package, ensure that `python --version` is a compatible release. Consult the documentation of your Python virtual environment manager (`venv`, `conda`, and so on) for instructions.
+> The rest of this article contains the same content as you see in the notebook.
+>
+> Switch to the Jupyter Notebook now if you want to run the code while you read along.
+> To run a single code cell in a notebook, click the code cell and hit **Shift+Enter**. Or, run the entire notebook by choosing **Run all** from the top toolbar
+
+## Introduction
+
+In this tutorial, you'll create an Azure ML pipeline to train a model for credit default prediction. The pipeline handles the data preparation, training and registering the trained model. You'll then run the pipeline, deploy the model and use it.
+
+The image below shows the pipeline as you'll see it in the AzureML portal once submitted. It's a rather simple pipeline we'll use to walk you through the AzureML SDK v2.
+
+The two steps are data preparation followed by training.
+
-## Start an interactive Python session
+## Set up the pipeline resources
-This tutorial uses the Python SDK for Azure ML to create and control an Azure Machine Learning pipeline. The tutorial assumes that you'll be running the code snippets interactively in either a Python REPL environment or a Jupyter notebook.
+The Azure ML framework can be used from CLI, Python SDK, or studio interface. In this example, you'll use the AzureML Python SDK v2 to create a pipeline.
-* This tutorial is based on the `image-classification.ipynb` notebook found in the `python-sdk/tutorial/using-pipelines` directory of the [AzureML Examples](https://github.com/azure/azureml-examples) repository. The source code for the steps themselves is in the `keras-mnist-fashion` subdirectory.
+Before creating the pipeline, you'll set up the resources the pipeline will use:
+* The dataset for training
+* The software environment to run the pipeline
+* A compute resource where the job will run
-## Import types
+## Connect to the workspace
+
+Before we dive into the code, you'll need to connect to your Azure ML workspace. The workspace is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning.
-Import all the Azure Machine Learning types that you'll need for this tutorial:
```python
-import os
-import azureml.core
-from azureml.core import (
- Workspace,
- Experiment,
- Dataset,
- Datastore,
- ComputeTarget,
- Environment,
- ScriptRunConfig
+# handle to the workspace
+from azure.ai.ml import MLClient
+
+# Authentication package
+from azure.identity import DefaultAzureCredential
+```
+
+In the next cell, enter your Subscription ID, Resource Group name and Workspace name. To find your Subscription ID:
+1. In the upper right Azure Machine Learning studio toolbar, select your workspace name.
+1. At the bottom, select **View all properties in Azure portal**
+1. Copy the value from Azure portal into the code.
++
+```python
+# get a handle to the workspace
+ml_client = MLClient(
+ DefaultAzureCredential(),
+ subscription_id="<SUBSCRIPTION_ID>",
+ resource_group_name="<RESOURCE_GROUP>",
+ workspace_name="<AML_WORKSPACE_NAME>",
)
-from azureml.data import OutputFileDatasetConfig
-from azureml.core.compute import AmlCompute
-from azureml.core.compute_target import ComputeTargetException
-from azureml.pipeline.steps import PythonScriptStep
-from azureml.pipeline.core import Pipeline
-
-# check core SDK version number
-print("Azure ML SDK Version: ", azureml.core.VERSION)
```
-The Azure ML SDK version should be 1.37 or greater. If it isn't, upgrade with `pip install --upgrade azureml-core`.
+The result is a handle to the workspace that you'll use to manage other resources and jobs.
+
+> [!IMPORTANT]
+> Creating MLClient will not connect to the workspace. The client initialization is lazy; it will wait until the first time it needs to make a call (in the notebook below, that will happen during dataset registration).
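If you'd like to verify the connection immediately rather than waiting for that first call, the following minimal check is one option. It assumes the `workspaces.get` operation is available in your SDK version and reuses the workspace name you entered above.

```python
# Optional sanity check (assumption: workspaces.get is available in your SDK version).
# This forces the first service call, so authentication happens here rather than later.
ws = ml_client.workspaces.get("<AML_WORKSPACE_NAME>")
print(f"Connected to workspace {ws.name} in {ws.location}")
```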
+
+## Register data from an external url
-## Configure workspace
+The data you use for training is usually in one of the locations below:
-Create a workspace object from the existing Azure Machine Learning workspace.
+* Local machine
+* Web
+* Big Data Storage services (for example, Azure Blob, Azure Data Lake Storage, SQL)
+
+Azure ML uses a `Data` object to register a reusable definition of data, and consume data within a pipeline. In the section below, you'll consume some data from a web URL as one example. Data from other sources can be created as well.
```python
-workspace = Workspace.from_config()
+from azure.ai.ml.entities import Data
+from azure.ai.ml.constants import AssetTypes
+web_path = "https://archive.ics.uci.edu/ml/machine-learning-databases/00350/default%20of%20credit%20card%20clients.xls"
+
+credit_data = Data(
+ name="creditcard_defaults",
+ path=web_path,
+ type=AssetTypes.URI_FILE,
+ description="Dataset for credit card defaults",
+ tags={"source_type": "web", "source": "UCI ML Repo"},
+ version='1.0.0'
+)
```
-> [!IMPORTANT]
-> This code snippet expects the workspace configuration to be saved in the current directory or its parent. For more information on creating a workspace, see [Create and manage Azure Machine Learning workspaces](how-to-manage-workspace.md). For more information on saving the configuration to file, see [Create a workspace configuration file](how-to-configure-environment.md#workspace).
+This code just created a `Data` asset, ready to be consumed as an input by the pipeline that you'll define in the next sections. In addition, you can register the dataset to your workspace so it becomes reusable across pipelines.
+
+Registering the dataset will enable you to:
+
+* Reuse and share the dataset in future pipelines
+* Use versions to track the modification to the dataset
+* Use the dataset from Azure ML designer, which is Azure ML's GUI for pipeline authoring
+
+Since this is the first time that you're making a call to the workspace, you may be asked to authenticate. Once the authentication is complete, you'll then see the dataset registration completion message.
+
-## Create the infrastructure for your pipeline
+```python
+credit_data = ml_client.data.create_or_update(credit_data)
+print(
+ f"Dataset with name {credit_data.name} was registered to workspace, the dataset version is {credit_data.version}"
+)
+```
+
+In the future, you can fetch the same dataset from the workspace using `credit_dataset = ml_client.data.get("<DATA ASSET NAME>", version='<VERSION>')`.
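For example, using the name and version registered above, a later fetch might look like this (a small sketch; adjust the name and version to match what you registered):

```python
# Fetch the data asset registered earlier in this tutorial by name and version.
fetched_credit_data = ml_client.data.get("creditcard_defaults", version="1.0.0")
print(fetched_credit_data.id)
```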
++
+## Create a job environment for pipeline steps
+
+So far, you've created a development environment on the compute instance, your development machine. You'll also need an environment to use for each step of the pipeline. Each step can have its own environment, or you can use some common environments for multiple steps.
+
+In this example, you'll create a conda environment for your jobs, using a conda yaml file.
+First, create a directory to store the file in.
-Create an `Experiment` object to hold the results of your pipeline runs:
```python
-exp = Experiment(workspace=workspace, name="keras-mnist-fashion")
+import os
+dependencies_dir = "./dependencies"
+os.makedirs(dependencies_dir, exist_ok=True)
```
-Create a `ComputeTarget` that represents the machine resource on which your pipeline will run. The simple neural network used in this tutorial trains in just a few minutes even on a CPU-based machine. If you wish to use a GPU for training, set `use_gpu` to `True`. Provisioning a compute target generally takes about five minutes.
+Now, create the file in the dependencies directory.
```python
-use_gpu = False
-
-# choose a name for your cluster
-cluster_name = "gpu-cluster" if use_gpu else "cpu-cluster"
-
-found = False
-# Check if this compute target already exists in the workspace.
-cts = workspace.compute_targets
-if cluster_name in cts and cts[cluster_name].type == "AmlCompute":
- found = True
- print("Found existing compute target.")
- compute_target = cts[cluster_name]
-if not found:
- print("Creating a new compute target...")
- compute_config = AmlCompute.provisioning_configuration(
- vm_size= "STANDARD_NC6" if use_gpu else "STANDARD_D2_V2"
- # vm_priority = 'lowpriority', # optional
- max_nodes=4,
- )
+%%writefile {dependencies_dir}/conda.yml
+name: model-env
+channels:
+ - conda-forge
+dependencies:
+ - python=3.8
+ - numpy=1.21.2
+ - pip=21.2.4
+ - scikit-learn=0.24.2
+ - scipy=1.7.1
+ - pandas>=1.1,<1.2
+ - pip:
+ - azureml-defaults==1.38.0
+ - azureml-mlflow==1.38.0
+ - inference-schema[numpy-support]==1.3.0
+ - joblib==1.0.1
+ - xlrd==2.0.1
+```
- # Create the cluster.
- compute_target = ComputeTarget.create(workspace, cluster_name, compute_config)
+The specification contains some usual packages that you'll use in your pipeline (numpy, pip), together with some Azure ML-specific packages (azureml-defaults, azureml-mlflow).
- # Can poll for a minimum number of nodes and for a specific timeout.
- # If no min_node_count is provided, it will use the scale settings for the cluster.
- compute_target.wait_for_completion(
- show_output=True, min_node_count=None, timeout_in_minutes=10
- )
-# For a more detailed view of current AmlCompute status, use get_status().print(compute_target.get_status().serialize())
+The Azure ML packages aren't mandatory to run Azure ML jobs. However, adding these packages will let you interact with Azure ML for logging metrics and registering models, all inside the Azure ML job. You'll use them in the training script later in this tutorial.
+
+Use the *yaml* file to create and register this custom environment in your workspace:
+
+```Python
+from azure.ai.ml.entities import Environment
+
+custom_env_name = "aml-scikit-learn"
+
+pipeline_job_env = Environment(
+ name=custom_env_name,
+ description="Custom environment for Credit Card Defaults pipeline",
+ tags={"scikit-learn": "0.24.2", "azureml-defaults": "1.38.0"},
+ conda_file=os.path.join(dependencies_dir, "conda.yml"),
+ image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210727.v1",
+ version="1.0.0"
+)
+pipeline_job_env = ml_client.environments.create_or_update(pipeline_job_env)
+
+print(
+ f"Environment with name {pipeline_job_env.name} is registered to workspace, the environment version is {pipeline_job_env.version}"
+)
```
-> [!Note]
-> GPU availability depends on the quota of your Azure subscription and upon Azure capacity. See [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md).
+## Build the training pipeline
+
+Now that you have all assets required to run your pipeline, it's time to build the pipeline itself, using the Azure ML Python SDK v2.
+
+Azure ML pipelines are reusable ML workflows that usually consist of several components. The typical life of a component is:
-### Create a dataset for the Azure-stored data
+* Write the yaml specification of the component, or create it programmatically using `ComponentMethod`.
+* Optionally, register the component with a name and version in your workspace, to make it reusable and shareable.
+* Load that component from the pipeline code.
+* Implement the pipeline using the component's inputs, outputs and parameters
+* Submit the pipeline.
-Fashion-MNIST is a dataset of fashion images divided into 10 classes. Each image is a 28x28 grayscale image and there are 60,000 training and 10,000 test images. As an image classification problem, Fashion-MNIST is harder than the classic MNIST handwritten digit database. It's distributed in the same compressed binary form as the original [handwritten digit database](http://yann.lecun.com/exdb/mnist/) .
+## Create component 1: data prep (using programmatic definition)
-To create a `Dataset` that references the Web-based data, run:
+Let's start by creating the first component. This component handles the preprocessing of the data. The preprocessing task is performed in the *data_prep.py* python file.
+
+First create a source folder for the data_prep component:
```python
-data_urls = ["https://data4mldemo6150520719.blob.core.windows.net/demo/mnist-fashion"]
-fashion_ds = Dataset.File.from_files(data_urls)
+import os
-# list the files referenced by fashion_ds
-print(fashion_ds.to_path())
+data_prep_src_dir = "./components/data_prep"
+os.makedirs(data_prep_src_dir, exist_ok=True)
```
-This code completes quickly. The underlying data remains in the Azure storage resource specified in the `data_urls` array.
+This script performs the simple task of splitting the data into train and test datasets.
+Azure ML mounts datasets as folders on the compute targets; therefore, we created an auxiliary `select_first_file` function to access the data file inside the mounted input folder.
-## Create the data-preparation pipeline step
+[MLFlow](https://mlflow.org/docs/latest/tracking.html) will be used to log the parameters and metrics during our pipeline run.
-The first step in this pipeline will convert the compressed data files of `fashion_ds` into a dataset in your own workspace consisting of CSV files ready for use in training. Once registered with the workspace, your collaborators can access this data for their own analysis, training, and so on
+```python
+%%writefile {data_prep_src_dir}/data_prep.py
+import os
+import argparse
+import pandas as pd
+from sklearn.model_selection import train_test_split
+import logging
+import mlflow
++
+def main():
+ """Main function of the script."""
+
+ # input and output arguments
+ parser = argparse.ArgumentParser()
+ parser.add_argument("--data", type=str, help="path to input data")
+ parser.add_argument("--test_train_ratio", type=float, required=False, default=0.25)
+ parser.add_argument("--train_data", type=str, help="path to train data")
+ parser.add_argument("--test_data", type=str, help="path to test data")
+ args = parser.parse_args()
+
+ # Start Logging
+ mlflow.start_run()
+
+ print(" ".join(f"{k}={v}" for k, v in vars(args).items()))
+
+ print("input data:", args.data)
+
+ credit_df = pd.read_excel(args.data, header=1, index_col=0)
+
+ mlflow.log_metric("num_samples", credit_df.shape[0])
+ mlflow.log_metric("num_features", credit_df.shape[1] - 1)
+
+ credit_train_df, credit_test_df = train_test_split(
+ credit_df,
+ test_size=args.test_train_ratio,
+ )
+
+ # output paths are mounted as folder, therefore, we are adding a filename to the path
+ credit_train_df.to_csv(os.path.join(args.train_data, "data.csv"), index=False)
+
+ credit_test_df.to_csv(os.path.join(args.test_data, "data.csv"), index=False)
+
+ # Stop Logging
+ mlflow.end_run()
++
+if __name__ == "__main__":
+ main()
+```
+
+Now that you have a script that can perform the desired task, create an Azure ML Component from it.
+
+You'll use the general purpose **CommandComponent** that can run command line actions. This command line action can directly call system commands or run a script. The inputs/outputs are specified on the command line via the `${{ ... }}` notation.
```python
-datastore = workspace.get_default_datastore()
-prepared_fashion_ds = OutputFileDatasetConfig(
- destination=(datastore, "outputdataset/{run-id}")
-).register_on_complete(name="prepared_fashion_ds")
+%%writefile {data_prep_src_dir}/data_prep.yml
+# <component>
+name: data_prep_credit_defaults
+display_name: Data preparation for training
+# version: 1 # Not specifying a version will automatically update the version
+type: command
+inputs:
+ data:
+ type: uri_folder
+ test_train_ratio:
+ type: number
+outputs:
+ train_data:
+ type: uri_folder
+ test_data:
+ type: uri_folder
+code: .
+environment:
+ # for this step, we'll use the custom environment registered above
+ azureml:aml-scikit-learn:1.0.0
+command: >-
+ python data_prep.py
+ --data ${{inputs.data}} --test_train_ratio ${{inputs.test_train_ratio}}
+ --train_data ${{outputs.train_data}} --test_data ${{outputs.test_data}}
+# </component>
```
-The above code specifies a dataset that is based on the output of a pipeline step. The underlying processed files will be put in the workspace's default datastore's blob storage at the path specified in `destination`. The dataset will be registered in the workspace with the name `prepared_fashion_ds`.
+Once the `yaml` file and the script are ready, you can create your component using `load_component()`.
-### Create the pipeline step's source
+```python
+# importing the Component Package
+from azure.ai.ml.entities import load_component
-The code that you've executed so far has create and controlled Azure resources. Now it's time to write code that does the first step in the domain.
+# Loading the component from the yml file
+data_prep_component = load_component(yaml_file=os.path.join(data_prep_src_dir, "data_prep.yml"))
+```
-If you're following along with the example in the [AzureML Examples repo](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/using-pipelines), the source file is already available as `keras-mnist-fashion/prepare.py`.
+Optionally, register the component in the workspace for future re-use.
-If you're working from scratch, create a subdirectory called `kera-mnist-fashion/`. Create a new file, add the following code to it, and name the file `prepare.py`.
```python
-# prepare.py
-# Converts MNIST-formatted files at the passed-in input path to a passed-in output path
-import os
-import sys
-
-# Conversion routine for MNIST binary format
-def convert(imgf, labelf, outf, n):
- f = open(imgf, "rb")
- l = open(labelf, "rb")
- o = open(outf, "w")
-
- f.read(16)
- l.read(8)
- images = []
-
- for i in range(n):
- image = [ord(l.read(1))]
- for j in range(28 * 28):
- image.append(ord(f.read(1)))
- images.append(image)
-
- for image in images:
- o.write(",".join(str(pix) for pix in image) + "\n")
- f.close()
- o.close()
- l.close()
-
-# The MNIST-formatted source
-mounted_input_path = sys.argv[1]
-# The output directory at which the outputs will be written
-mounted_output_path = sys.argv[2]
-
-# Create the output directory
-os.makedirs(mounted_output_path, exist_ok=True)
-
-# Convert the training data
-convert(
- os.path.join(mounted_input_path, "mnist-fashion/train-images-idx3-ubyte"),
- os.path.join(mounted_input_path, "mnist-fashion/train-labels-idx1-ubyte"),
- os.path.join(mounted_output_path, "mnist_train.csv"),
- 60000,
-)
+data_prep_component = ml_client.create_or_update(data_prep_component)
-# Convert the test data
-convert(
- os.path.join(mounted_input_path, "mnist-fashion/t10k-images-idx3-ubyte"),
- os.path.join(mounted_input_path, "mnist-fashion/t10k-labels-idx1-ubyte"),
- os.path.join(mounted_output_path, "mnist_test.csv"),
- 10000,
+print(
+ f"Component {data_prep_component.name} with Version {data_prep_component.version} is registered"
)
```
-The code in `prepare.py` takes two command-line arguments: the first is assigned to `mounted_input_path` and the second to `mounted_output_path`. If that subdirectory doesn't exist, the call to `os.makedirs` creates it. Then, the program converts the training and testing data and outputs the comma-separated files to the `mounted_output_path`.
+## Create component 2: training (using yaml definition)
+
+The second component that you'll create will consume the training and test data, train a tree-based model, and return the output model. You'll use Azure ML logging capabilities to record and visualize the learning progress.
-### Specify the pipeline step
+You used the `CommandComponent` class to create your first component. This time you'll use the yaml definition to define the second component. Each method has its own advantages: a yaml definition can be checked in alongside the code and provides readable history tracking, while the programmatic method using `CommandComponent` can be easier thanks to built-in class documentation and code completion.
-Back in the Python environment you're using to specify the pipeline, run this code to create a `PythonScriptStep` for your preparation code:
+
+Create the directory for this component:
```python
-script_folder = "./keras-mnist-fashion"
-
-prep_step = PythonScriptStep(
- name="prepare step",
- script_name="prepare.py",
- # On the compute target, mount fashion_ds dataset as input, prepared_fashion_ds as output
- arguments=[fashion_ds.as_named_input("fashion_ds").as_mount(), prepared_fashion_ds],
- source_directory=script_folder,
- compute_target=compute_target,
- allow_reuse=True,
-)
+import os
+train_src_dir = "./components/train"
+os.makedirs(train_src_dir, exist_ok=True)
```
-The call to `PythonScriptStep` specifies that, when the pipeline step is run:
+Create the training script in the directory:
-* All the files in the `script_folder` directory are uploaded to the `compute_target`
-* Among those uploaded source files, the file `prepare.py` will be run
-* The `fashion_ds` and `prepared_fashion_ds` datasets will be mounted on the `compute_target` and appear as directories
-* The path to the `fashion_ds` files will be the first argument to `prepare.py`. In `prepare.py`, this argument is assigned to `mounted_input_path`
-* The path to the `prepared_fashion_ds` will be the second argument to `prepare.py`. In `prepare.py`, this argument is assigned to `mounted_output_path`
-* Because `allow_reuse` is `True`, it won't be rerun until its source files or inputs change
-* This `PythonScriptStep` will be named `prepare step`
+```python
+%%writefile {train_src_dir}/train.py
+import argparse
+from sklearn.ensemble import GradientBoostingClassifier
+from sklearn.metrics import classification_report
+from azureml.core.model import Model
+from azureml.core import Run
+import os
+import pandas as pd
+import joblib
+import mlflow
-Modularity and reuse are key benefits of pipelines. Azure Machine Learning can automatically determine source code or Dataset changes. The output of a step that isn't affected will be reused without rerunning the steps again if `allow_reuse` is `True`. If a step relies on a data source external to Azure Machine Learning that may change (for instance, a URL that contains sales data), set `allow_reuse` to `False` and the pipeline step will run every time the pipeline is run.
-## Create the training step
+def select_first_file(path):
+ """Selects first file in folder, use under assumption there is only one file in folder
+ Args:
+ path (str): path to directory or file to choose
+ Returns:
+ str: full path of selected file
+ """
+ files = os.listdir(path)
+ return os.path.join(path, files[0])
-Once the data has been converted from the compressed format to CSV files, it can be used for training a convolutional neural network.
-### Create the training step's source
+# Start Logging
+mlflow.start_run()
-With larger pipelines, it's a good practice to put each step's source code in a separate directory (`src/prepare/`, `src/train/`, and so on) but for this tutorial, just use or create the file `train.py` in the same `keras-mnist-fashion/` source directory.
+# enable autologging
+mlflow.sklearn.autolog()
+# This line creates a handle to the current run. It is used for model registration
+run = Run.get_context()
-Most of this code should be familiar to ML developers:
+os.makedirs("./outputs", exist_ok=True)
-* The data is partitioned into train and validation sets for training, and a separate test subset for final scoring
-* The input shape is 28x28x1 (only 1 because the input is grayscale), there will be 256 inputs in a batch, and there are 10 classes
-* The number of training epochs will be 10
-* The model has three convolutional layers, with max pooling and dropout, followed by a dense layer and softmax head
-* The model is fitted for 10 epochs and then evaluated
-* The model architecture is written to `outputs/model/model.json` and the weights to `outputs/model/model.h5`
-Some of the code, though, is specific to Azure Machine Learning. `run = Run.get_context()` retrieves a [`Run`](/python/api/azureml-core/azureml.core.run(class)?view=azure-ml-py&preserve-view=True) object, which contains the current service context. The `train.py` source uses this `run` object to retrieve the input dataset via its name (an alternative to the code in `prepare.py` that retrieved the dataset via the `argv` array of script arguments).
+def main():
+ """Main function of the script."""
-The `run` object is also used to log the training progress at the end of every epoch and, at the end of training, to log the graph of loss and accuracy over time.
+ # input and output arguments
+ parser = argparse.ArgumentParser()
+ parser.add_argument("--train_data", type=str, help="path to train data")
+ parser.add_argument("--test_data", type=str, help="path to test data")
+ parser.add_argument("--n_estimators", required=False, default=100, type=int)
+ parser.add_argument("--learning_rate", required=False, default=0.1, type=float)
+ parser.add_argument("--registered_model_name", type=str, help="model name")
+ parser.add_argument("--model", type=str, help="path to model file")
+ args = parser.parse_args()
-### Create the training pipeline step
+ # paths are mounted as folder, therefore, we are selecting the file from folder
+ train_df = pd.read_csv(select_first_file(args.train_data))
-The training step has a slightly more complex configuration than the preparation step. The preparation step used only standard Python libraries. More commonly, you'll need to modify the runtime environment in which your source code runs.
+ # Extracting the label column
+ y_train = train_df.pop("default payment next month")
-Create a file `conda_dependencies.yml` with the following contents:
+ # convert the dataframe values to array
+ X_train = train_df.values
-```yml
-dependencies:
-- python=3.6.2-- pip:
- - azureml-core
- - azureml-dataset-runtime
- - keras==2.4.3
- - tensorflow==2.4.3
- - numpy
- - scikit-learn
- - pandas
- - matplotlib
+ # paths are mounted as folder, therefore, we are selecting the file from folder
+ test_df = pd.read_csv(select_first_file(args.test_data))
+
+ # Extracting the label column
+ y_test = test_df.pop("default payment next month")
+
+ # convert the dataframe values to array
+ X_test = test_df.values
+
+ print(f"Training with data of shape {X_train.shape}")
+
+ clf = GradientBoostingClassifier(
+ n_estimators=args.n_estimators, learning_rate=args.learning_rate
+ )
+ clf.fit(X_train, y_train)
+
+ y_pred = clf.predict(X_test)
+
+ print(classification_report(y_test, y_pred))
+
+ # setting the full path of the model file
+ model_file = os.path.join(args.model, "model.pkl")
+ with open(model_file, "wb") as mf:
+ joblib.dump(clf, mf)
+
+ # Registering the model to the workspace
+ model = Model.register(
+ run.experiment.workspace,
+ model_name=args.registered_model_name,
+ model_path=model_file,
+ tags={"type": "sklearn.GradientBoostingClassifier"},
+ description="Model created in Azure ML on credit card defaults dataset",
+ )
+
+ # Stop Logging
+ mlflow.end_run()
++
+if __name__ == "__main__":
+ main()
```
-The `Environment` class represents the runtime environment in which a machine learning task runs. Associate the above specification with the training code with:
+As you can see in this training script, once the model is trained, the model file is saved and registered to the workspace. Now you can use the registered model in inferencing endpoints.
++
+For the environment of this step, you'll use one of the built-in (curated) Azure ML environments. The `azureml` tag tells the system to look for the name in curated environments.
+
+First, create the *yaml* file describing the component:
```python
-keras_env = Environment.from_conda_specification(
- name="keras-env", file_path="./conda_dependencies.yml"
+%%writefile {train_src_dir}/train.yml
+# <component>
+name: train_credit_defaults_model
+display_name: Train Credit Defaults Model
+# version: 1 # Not specifying a version will automatically update the version
+type: command
+inputs:
+ train_data:
+ type: uri_folder
+ test_data:
+ type: uri_folder
+ learning_rate:
+ type: number
+ registered_model_name:
+ type: string
+outputs:
+ model:
+ type: uri_folder
+code: .
+environment:
+ # for this step, we'll use an AzureML curate environment
+ azureml:AzureML-sklearn-0.24-ubuntu18.04-py37-cpu:21
+command: >-
+ python train.py
+ --train_data ${{inputs.train_data}}
+ --test_data ${{inputs.test_data}}
+ --learning_rate ${{inputs.learning_rate}}
+ --registered_model_name ${{inputs.registered_model_name}}
+ --model ${{outputs.model}}
+# </component>
+
+```
+
+Now create and register the component:
+
+```python
+# importing the Component Package
+from azure.ai.ml.entities import load_component
+
+# Loading the component from the yml file
+train_component = load_component(yaml_file=os.path.join(train_src_dir, "train.yml"))
+```
+
+```python
+# Now we register the component to the workspace
+train_component = ml_client.create_or_update(train_component)
+
+# Create (register) the component in your workspace
+print(
+ f"Component {train_component.name} with Version {train_component.version} is registered"
)
+```
+
+## Create the pipeline from components
+
+Now that both your components are defined and registered, you can start implementing the pipeline.
+
+Here, you'll use *input data*, *split ratio* and *registered model name* as input variables. Then call the components and connect them via their inputs/outputs identifiers. The outputs of each step can be accessed via the `.outputs` property.
+
+The python functions returned by `load_component()` work as any regular python function that we'll use within a pipeline to call each step.
-train_cfg = ScriptRunConfig(
- source_directory=script_folder,
- script="train.py",
- compute_target=compute_target,
- environment=keras_env,
+To code the pipeline, you use a specific `@dsl.pipeline` decorator that identifies the Azure ML pipelines. In the decorator, we can specify the pipeline description and default resources like compute and storage. Like a python function, pipelines can have inputs. You can then create multiple instances of a single pipeline with different inputs.
+
+> [!IMPORTANT]
+> In the code below, replace `<CPU-CLUSTER-NAME>` with the name you used when you created a compute cluster in the [Quickstart: Create workspace resources you need to get started with Azure Machine Learning](quickstart-create-resources.md).
+
+```python
+# the dsl decorator tells the sdk that we are defining an Azure ML pipeline
+from azure.ai.ml import dsl, Input, Output
+
+@dsl.pipeline(
+ compute="<CPU-CLUSTER-NAME>",
+ description="E2E data_perp-train pipeline",
)
+def credit_defaults_pipeline(
+ pipeline_job_data_input,
+ pipeline_job_test_train_ratio,
+ pipeline_job_learning_rate,
+ pipeline_job_registered_model_name,
+):
+ # using data_prep_function like a python call with its own inputs
+ data_prep_job = data_prep_component(
+ data=pipeline_job_data_input,
+ test_train_ratio=pipeline_job_test_train_ratio,
+ )
+
+ # using train_func like a python call with its own inputs
+ train_job = train_component(
+ train_data=data_prep_job.outputs.train_data, # note: using outputs from previous step
+ test_data=data_prep_job.outputs.test_data, # note: using outputs from previous step
+ learning_rate=pipeline_job_learning_rate, # note: using a pipeline input as parameter
+ registered_model_name=pipeline_job_registered_model_name,
+ )
+
+ # a pipeline returns a dict of outputs
+ # keys will code for the pipeline output identifier
+ return {
+ "pipeline_job_train_data": data_prep_job.outputs.train_data,
+ "pipeline_job_test_data": data_prep_job.outputs.test_data,
+ }
```
-Creating the training step itself uses code similar to the code used to create the preparation step:
+Now use your pipeline definition to instantiate a pipeline with your dataset, split rate of choice and the name you picked for your model.
```python
-train_step = PythonScriptStep(
- name="train step",
- arguments=[
- prepared_fashion_ds.read_delimited_files().as_input(name="prepared_fashion_ds")
- ],
- source_directory=train_cfg.source_directory,
- script_name=train_cfg.script,
- runconfig=train_cfg.run_config,
+registered_model_name = "credit_defaults_model"
+
+# Let's instantiate the pipeline with the parameters of our choice
+pipeline = credit_defaults_pipeline(
+ # pipeline_job_data_input=credit_data,
+ pipeline_job_data_input=Input(type="uri_file", path=web_path),
+ pipeline_job_test_train_ratio=0.2,
+ pipeline_job_learning_rate=0.25,
+ pipeline_job_registered_model_name=registered_model_name,
)
```
-## Create and run the pipeline
+## Submit the job
+
+It's now time to submit the job to run in Azure ML. This time you'll use `create_or_update` on `ml_client.jobs`.
+
+Here you'll also pass an experiment name. An experiment is a container for all the iterations you run on a certain project. All the jobs submitted under the same experiment name are listed next to each other in Azure ML studio.
-Now that you've specified data inputs and outputs and created your pipeline's steps, you can compose them into a pipeline and run it:
+Once completed, the pipeline will register a model in your workspace as a result of training.
```python
-pipeline = Pipeline(workspace, steps=[prep_step, train_step])
-run = exp.submit(pipeline)
+import webbrowser
+# submit the pipeline job
+returned_job = ml_client.jobs.create_or_update(
+ pipeline,
+
+ # Project's name
+ experiment_name="e2e_registered_components",
+)
+# open the pipeline in web browser
+webbrowser.open(returned_job.services["Studio"].endpoint)
```
-The `Pipeline` object you create runs in your `workspace` and is composed of the preparation and training steps you've specified.
+An output of `False` is expected from the above cell. You can track the progress of your pipeline by using the link generated in the previous cell.
+
+When you select each component, you'll see more information about the results of that component.
+There are two important parts to look for at this stage:
+* `Outputs+logs` > `user_logs` > `std_log.txt`
+This section shows the script's standard output (stdout).
+
+ :::image type="content" source="media/tutorial-pipeline-python-sdk/user-logs.jpg" alt-text="Screenshot of std_log.txt." lightbox="media/tutorial-pipeline-python-sdk/user-logs.jpg":::
+* `Outputs+logs` > `Metric`
+This section shows different logged metrics. In this example, MLflow `autologging` has automatically logged the training metrics.
+
+ :::image type="content" source="media/tutorial-pipeline-python-sdk/metrics.jpg" alt-text="Screenshot shows logged metrics.txt." lightbox="media/tutorial-pipeline-python-sdk/metrics.jpg":::
+
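If you want the same behavior in your own training code, a minimal sketch of enabling autologging is shown below. It assumes scikit-learn and the `mlflow` package are available in the job environment; the estimator and synthetic data are placeholders, not taken from this tutorial.

```python
# a sketch of enabling MLflow autologging inside a training script;
# the estimator and synthetic data below are placeholders
import mlflow
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

mlflow.autolog()  # logs parameters, metrics, and the fitted model automatically

X = np.random.rand(100, 4)
y = (X[:, 0] > 0.5).astype(int)

with mlflow.start_run():
    clf = GradientBoostingClassifier(n_estimators=50, learning_rate=0.25)
    clf.fit(X, y)
```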
+## Deploy the model as an online endpoint
+
+Now deploy your machine learning model as a web service in the Azure cloud.
+
+To deploy a machine learning service, you'll usually need:
+
+* The model assets (files, metadata) that you want to deploy. You've already registered these assets in your training component.
+* Some code to run as a service. The code executes the model on a given input request. This entry script receives data submitted to a deployed web service and passes it to the model, then returns the model's response to the client. The script is specific to your model. The entry script must understand the data that the model expects and returns.
+
+## Create an inference script
-> [!Note]
-> This pipeline has a simple dependency graph: the training step relies on the preparation step and the preparation step relies on the `fashion_ds` dataset. Production pipelines will often have much more complex dependencies. Steps may rely on multiple upstream steps, a source code change in an early step may have far-reaching consequences, and so on. Azure Machine Learning tracks these concerns for you. You need only pass in the array of `steps` and Azure Machine Learning takes care of calculating the execution graph.
+The two things you need to accomplish in your inference script are:
-The call to `submit` the `Experiment` completes quickly, and produces output similar to:
+* Load your model (using a function called `init()`)
+* Run your model on input data (using a function called `run()`)
-```dotnetcli
-Submitted PipelineRun 5968530a-abcd-1234-9cc1-46168951b5eb
-Link to Azure Machine Learning Portal: https://ml.azure.com/runs/abc-xyz...
+In the following implementation, the `init()` function loads the model, and the `run()` function expects the data in JSON format with the input data stored under the `data` key.
+
+```python
+deploy_dir = "./deploy"
+os.makedirs(deploy_dir, exist_ok=True)
+```
+
+```python
+%%writefile {deploy_dir}/score.py
+import os
+import logging
+import json
+import numpy
+import joblib
++
+def init():
+ """
+ This function is called when the container is initialized/started, typically after create/update of the deployment.
+ You can write the logic here to perform init operations like caching the model in memory
+ """
+ global model
+ # AZUREML_MODEL_DIR is an environment variable created during deployment.
+ # It is the path to the model folder (./azureml-models/$MODEL_NAME/$VERSION)
+ model_path = os.path.join(os.getenv("AZUREML_MODEL_DIR"), "model.pkl")
+ # deserialize the model file back into a sklearn model
+ model = joblib.load(model_path)
+ logging.info("Init complete")
++
+def run(raw_data):
+ """
+ This function is called for every invocation of the endpoint to perform the actual scoring/prediction.
+ In the example we extract the data from the json input and call the scikit-learn model's predict()
+ method and return the result back
+ """
+ logging.info("Request received")
+ data = json.loads(raw_data)["data"]
+ data = numpy.array(data)
+ result = model.predict(data)
+ logging.info("Request processed")
+ return result.tolist()
```
-You can monitor the pipeline run by opening the link or you can block until it completes by running:
+## Create a new online endpoint
+
+Now that you have a registered model and an inference script, it's time to create your online endpoint. The endpoint name needs to be unique in the entire Azure region. For this tutorial, you'll create a unique name using [`UUID`](https://en.wikipedia.org/wiki/Universally_unique_identifier).
```python
-run.wait_for_completion(show_output=True)
+import uuid
+
+# Creating a unique name for the endpoint
+online_endpoint_name = "credit-endpoint-" + str(uuid.uuid4())[:8]
+ ```
-> [!IMPORTANT]
-> The first pipeline run takes roughly *15 minutes*. All dependencies must be downloaded, a Docker image is created, and the Python environment is provisioned and created. Running the pipeline again takes significantly less time because those resources are reused instead of created. However, total run time for the pipeline depends on the workload of your scripts and the processes that are running in each pipeline step.
+```python
+from azure.ai.ml.entities import (
+ ManagedOnlineEndpoint,
+ ManagedOnlineDeployment,
+ CodeConfiguration,
+ Model,
+ Environment,
+)
-Once the pipeline completes, you can retrieve the metrics you logged in the training step:
+# create an online endpoint
+endpoint = ManagedOnlineEndpoint(
+ name=online_endpoint_name,
+ description="this is an online endpoint",
+ auth_mode="key",
+ tags={
+ "training_dataset": "credit_defaults",
+ "model_type": "sklearn.GradientBoostingClassifier",
+ },
+)
+
+endpoint = ml_client.begin_create_or_update(endpoint)
+
+print(f"Endpint {endpoint.name} provisioning state: {endpoint.provisioning_state}")
+```
+
+Once you've created an endpoint, you can retrieve it as follows:
```python
-run.find_step_run("train step")[0].get_metrics()
+endpoint = ml_client.online_endpoints.get(name=online_endpoint_name)
+
+print(f"Endpoint \"{endpoint.name}\" with provisioning state \"{endpoint.provisioning_state}\" is retrieved")
```
-If you're satisfied with the metrics, you can register the model in your workspace:
+## Deploy the model to the endpoint
+
+Once the endpoint is created, deploy the model with the entry script. Each endpoint can have multiple deployments, and traffic routing to those deployments can be specified with rules. Here you'll create a single deployment that handles 100% of the incoming traffic. The deployment name is arbitrary; this tutorial uses a color convention, such as *blue*, *green*, or *red*.
+
+You can check the *Models* page in Azure ML studio to identify the latest version of your registered model. Alternatively, the code below retrieves the latest version number for you to use.
+ ```python
-run.find_step_run("train step")[0].register_model(
- model_name="keras-model",
- model_path="outputs/model/",
- datasets=[("train test data", fashion_ds)],
+# Let's pick the latest version of the model
+latest_model_version = max(
+ [int(m.version) for m in ml_client.models.list(name=registered_model_name)]
)
```
-## Clean up resources
+Deploy the latest version of the model.
+
+> [!NOTE]
+> Expect this deployment to take approximately 6 to 8 minutes.
-Don't complete this section if you plan to run other Azure Machine Learning tutorials.
-### Stop the compute instance
+```python
+# picking the model to deploy. Here we use the latest version of our registered model
+model = ml_client.models.get(name=registered_model_name, version=latest_model_version)
++
+#create an online deployment.
+blue_deployment = ManagedOnlineDeployment(
+ name='blue',
+ endpoint_name=online_endpoint_name,
+ model=model,
+ environment="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu:21",
+ code_configuration=CodeConfiguration(
+ code=deploy_dir,
+ scoring_script="score.py"),
+ instance_type='Standard_DS3_v2',
+ instance_count=1)
+
+blue_deployment = ml_client.begin_create_or_update(blue_deployment)
+```
+### Test with a sample query
-### Delete everything
+Now that the model is deployed to the endpoint, you can run inference with it.
-If you don't plan to use the resources you created, delete them, so you don't incur any charges:
+Create a sample request file following the input format expected by the `run()` method in the scoring script.
-1. In the Azure portal, in the left menu, select **Resource groups**.
-1. In the list of resource groups, select the resource group you created.
-1. Select **Delete resource group**.
-1. Enter the resource group name. Then, select **Delete**.
-You can also keep the resource group but delete a single workspace. Display the workspace properties, and then select **Delete**.
+```python
+%%writefile {deploy_dir}/sample-request.json
+{"data": [
+ [20000,2,2,1,24,2,2,-1,-1,-2,-2,3913,3102,689,0,0,0,0,689,0,0,0,0],
+ [10,9,8,7,6,5,4,3,2,1, 10,9,8,7,6,5,4,3,2,1,10,9,8]
+]}
+```
-## Next steps
+```python
+# test the blue deployment with some sample data
+ml_client.online_endpoints.invoke(
+ endpoint_name=online_endpoint_name,
+ request_file="./deploy/sample-request.json",
+ deployment_name='blue'
+)
+```
-In this tutorial, you used the following types:
+## Clean up resources
-* The `Workspace` represents your Azure Machine Learning workspace. It contained:
- * The `Experiment` that contains the results of training runs of your pipeline
- * The `Dataset` that lazily loaded the data held in the Fashion-MNIST datastore
- * The `ComputeTarget` that represents the machine(s) on which the pipeline steps run
- * The `Environment` that is the runtime environment in which the pipeline steps run
- * The `Pipeline` that composes the `PythonScriptStep` steps into a whole
- * The `Model` that you registered after being satisfied with the training process
-
-The `Workspace` object contains references to other resources (notebooks, endpoints, and so on) that weren't used in this tutorial. For more, see [What is an Azure Machine Learning workspace?](concept-workspace.md).
+If you're not going to use the endpoint, delete it to stop using the resource. Make sure no other deployments are using the endpoint before you delete it.
-The `OutputFileDatasetConfig` promotes the output of a run to a file-based dataset. For more information on datasets and working with data, see [How to access data](./how-to-access-data.md).
+> [!NOTE]
+> Expect this step to take approximately 6 to 8 minutes.
-For more on compute targets and environments, see [What are compute targets in Azure Machine Learning?](concept-compute-target.md) and [What are Azure Machine Learning environments?](concept-environments.md)
+```python
+ml_client.online_endpoints.begin_delete(name=online_endpoint_name)
+```
-The `ScriptRunConfig` associates a `ComputeTarget` and `Environment` with Python source files. A `PythonScriptStep` takes that `ScriptRunConfig` and defines its inputs and outputs, which in this pipeline was the file dataset built by the `OutputFileDatasetConfig`.
+## Next steps
-For more examples of how to build pipelines by using the machine learning SDK, see the [example repository](https://github.com/Azure/azureml-examples).
+> [!div class="nextstepaction"]
+> Learn more about [Azure ML logging](https://github.com/Azure/azureml-examples/blob/sdk-preview/notebooks/mlflow/mlflow-v1-comparison.ipynb).
machine-learning Tutorial Power Bi Custom Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-power-bi-custom-model.md
Last updated 12/22/2021+ # Tutorial: Power BI integration - Create the predictive model with a Jupyter Notebook (part 1 of 2) + In part 1 of this tutorial, you train and deploy a predictive machine learning model by using code in a Jupyter Notebook. You also create a scoring script to define the input and output schema of the model for integration into Power BI. In part 2, you use the model to predict outcomes in Microsoft Power BI. In this tutorial, you:
machine-learning Tutorial Train Deploy Image Classification Model Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-train-deploy-image-classification-model-vscode.md
Last updated 05/25/2021--+ #Customer intent: As a professional data scientist, I want to learn how to train an image classification model using TensorFlow and the Azure Machine Learning Visual Studio Code Extension.
In this tutorial, you learn the following tasks:
- Azure subscription. If you don't have one, sign up to try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/). If you're using the free subscription, only CPU clusters are supported. - Install [Visual Studio Code](https://code.visualstudio.com/docs/setup/setup-overview), a lightweight, cross-platform code editor. - Azure Machine Learning Studio Visual Studio Code extension. For install instructions see the [Setup Azure Machine Learning Visual Studio Code extension guide](./how-to-setup-vs-code.md)-- CLI (v2) (preview). For installation instructions, see [Install, set up, and use the CLI (v2) (preview)](how-to-configure-cli.md)
+- CLI (v2). For installation instructions, see [Install, set up, and use the CLI (v2)](how-to-configure-cli.md)
- Clone the community driven repository ```bash git clone https://github.com/Azure/azureml-examples.git
machine-learning Tutorial Train Deploy Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-train-deploy-notebook.md
Last updated 01/05/2022-
-#Customer intent: As a professional data scientist, I can build an image classification model with Azure Machine Learning by using Python in a Jupyter Notebook.
+
+#Customer intent: As a professional data scientist, I can build an image classification model with Azure Machine Learning by using Python in a Jupyter Notebook.
# Tutorial: Train and deploy an image classification model with an example Jupyter Notebook + In this tutorial, you train a machine learning model on remote compute resources. You'll use the training and deployment workflow for Azure Machine Learning in a Python Jupyter Notebook. You can then use the notebook as a template to train your own machine learning model with your own data. This tutorial trains a simple logistic regression by using the [MNIST](http://yann.lecun.com/exdb/mnist/) dataset and [scikit-learn](https://scikit-learn.org) with Azure Machine Learning. MNIST is a popular dataset consisting of 70,000 grayscale images. Each image is a handwritten digit of 28 x 28 pixels, representing a number from zero to nine. The goal is to create a multi-class classifier to identify the digit a given image represents.
Use these steps to delete your Azure Machine Learning workspace and all compute
+ Learn how to [create clients for the web service](how-to-consume-web-service.md). + [Make predictions on large quantities of data](./tutorial-pipeline-batch-scoring-classification.md) asynchronously. + Monitor your Azure Machine Learning models with [Application Insights](how-to-enable-app-insights.md).
-+ Try out the [automatic algorithm selection](tutorial-auto-train-models.md) tutorial.
++ Try out the [automatic algorithm selection](tutorial-auto-train-models.md) tutorial.
machine-learning Concept Automated Ml V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-automated-ml-v1.md
+
+ Title: What is automated ML? AutoML (v1)
+
+description: Learn how automated machine learning in Azure Machine Learning can automatically generate a model by using the parameters and criteria you provide.
++++++ Last updated : 03/15/2022+++
+# What is automated machine learning (AutoML)?
+
+> [!div class="op_single_selector" title1="Select the version of the Azure Machine Learning Python SDK you are using:"]
+> * [v1](concept-automated-ml-v1.md)
+> * [v2 (current version)](../concept-automated-ml.md)
+
+Automated machine learning, also referred to as automated ML or AutoML, is the process of automating the time-consuming, iterative tasks of machine learning model development. It allows data scientists, analysts, and developers to build ML models with high scale, efficiency, and productivity all while sustaining model quality. Automated ML in Azure Machine Learning is based on a breakthrough from our [Microsoft Research division](https://www.microsoft.com/research/project/automl/).
+
+Traditional machine learning model development is resource-intensive, requiring significant domain knowledge and time to produce and compare dozens of models. With automated machine learning, you'll accelerate the time it takes to get production-ready ML models with great ease and efficiency.
+
+<a name="parity"></a>
+
+## Ways to use AutoML in Azure Machine Learning
+
+Azure Machine Learning offers the following two experiences for working with automated ML. See the following sections to understand [feature availability in each experience (v1)](#parity).
+
+* For code-experienced customers, [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro). Get started with [Tutorial: Use automated machine learning to predict taxi fares (v1)](../tutorial-auto-train-models.md).
+
+* For limited/no-code experience customers, Azure Machine Learning studio at [https://ml.azure.com](https://ml.azure.com/). Get started with these tutorials:
+ * [Tutorial: Create a classification model with automated ML in Azure Machine Learning](../tutorial-first-experiment-automated-ml.md).
+ * [Tutorial: Forecast demand with automated machine learning](../tutorial-automated-ml-forecast.md)
+
+### Experiment settings
+
+The following settings allow you to configure your automated ML experiment.
+
+| |The Python SDK|The studio web experience|
+|-|:-:|:-:|
+|**Split data into train/validation sets**| ✓|✓
+|**Supports ML tasks: classification, regression, & forecasting**| ✓| ✓
+|**Supports computer vision tasks (preview): image classification, object detection & instance segmentation**| ✓|
+|**Optimizes based on primary metric**| ✓| ✓
+|**Supports Azure ML compute as compute target** | ✓|✓
+|**Configure forecast horizon, target lags & rolling window**|✓|✓
+|**Set exit criteria** |✓|✓
+|**Set concurrent iterations**| ✓|✓
+|**Drop columns**| ✓|✓
+|**Block algorithms**|✓|✓
+|**Cross validation** |✓|✓
+|**Supports training on Azure Databricks clusters**| ✓|
+|**View engineered feature names**|✓|
+|**Featurization summary**| ✓|
+|**Featurization for holidays**|✓|
+|**Log file verbosity levels**| ✓|
+
+### Model settings
+
+These settings can be applied to the best model as a result of your automated ML experiment.
+
+| |The Python SDK|The studio web experience|
+|-|:-:|:-:|
+|**Best model registration, deployment, explainability**| ✓|✓|
+|**Enable voting ensemble & stack ensemble models**| ✓|✓|
+|**Show best model based on non-primary metric**|✓||
+|**Enable/disable ONNX model compatibility**|✓||
+|**Test the model** | ✓| ✓ (preview)|
+
+### Run control settings
+
+These settings allow you to review and control your experiment runs and its child runs.
+
+| |The Python SDK|The studio web experience|
+|-|:-:|:-:|
+|**Run summary table**| ✓|✓|
+|**Cancel runs & child runs**| ✓|✓|
+|**Get guardrails**| ✓|✓|
+|**Pause & resume runs**| ✓| |
+
+## When to use AutoML: classification, regression, forecasting, computer vision & NLP
+
+Apply automated ML when you want Azure Machine Learning to train and tune a model for you using the target metric you specify. Automated ML democratizes the machine learning model development process, and empowers its users, no matter their data science expertise, to identify an end-to-end machine learning pipeline for any problem.
+
+ML professionals and developers across industries can use automated ML to:
++ Implement ML solutions without extensive programming knowledge
++ Save time and resources
++ Leverage data science best practices
++ Provide agile problem-solving
+### Classification
+
+Classification is a common machine learning task and a type of supervised learning, in which models learn from training data and apply those learnings to new data. Azure Machine Learning offers featurizations specifically for these tasks, such as deep neural network text featurizers for classification. Learn more about [featurization (v1) options](../how-to-configure-auto-features.md#featurization).
+
+The main goal of classification models is to predict which categories new data will fall into based on learnings from its training data. Common classification examples include fraud detection, handwriting recognition, and object detection. Learn more and see an example at [Create a classification model with automated ML (v1)](../tutorial-first-experiment-automated-ml.md).
+
+See examples of classification and automated machine learning in these Python notebooks: [Fraud Detection](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.ipynb), [Marketing Prediction](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb), and [Newsgroup Data Classification](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml/classification-text-dnn)
+
+### Regression
+
+Like classification, regression is a common supervised learning task.
+
+Unlike classification, where predicted output values are categorical, regression models predict numerical output values based on independent predictors. In regression, the objective is to estimate the relationship among the independent predictor variables by estimating how one variable impacts the others. For example, predicting automobile price based on features like gas mileage and safety rating. Learn more and see an example of [regression with automated machine learning (v1)](../tutorial-auto-train-models.md).
+
+See an example of regression and automated machine learning for predictions in this Python notebook: [CPU Performance Prediction](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml/regression-explanation-featurization).
+
+### Time-series forecasting
+
+Building forecasts is an integral part of any business, whether it's revenue, inventory, sales, or customer demand. You can use automated ML to combine techniques and approaches and get a recommended, high-quality time-series forecast. Learn more with this how-to: [automated machine learning for time series forecasting (v1)](../how-to-auto-train-forecast.md).
+
+An automated time-series experiment is treated as a multivariate regression problem. Past time-series values are "pivoted" to become additional dimensions for the regressor together with other predictors. This approach, unlike classical time series methods, has an advantage of naturally incorporating multiple contextual variables and their relationship to one another during training. Automated ML learns a single, but often internally branched model for all items in the dataset and prediction horizons. More data is thus available to estimate model parameters and generalization to unseen series becomes possible.
+
+Advanced forecasting configuration includes:
+* holiday detection and featurization
+* time-series and DNN learners (Auto-ARIMA, Prophet, ForecastTCN)
+* many models support through grouping
+* rolling-origin cross validation
+* configurable lags
+* rolling window aggregate features
+See examples of regression and automated machine learning for predictions in these Python notebooks: [Sales Forecasting](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.ipynb), [Demand Forecasting](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb), and [Forecasting GitHub's Daily Active Users](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/forecasting-github-dau/auto-ml-forecasting-github-dau.ipynb).
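As a hedged sketch of how these settings map onto the v1 SDK, the configuration might look like the following. The dataset name, column names, horizon, lags, and compute name below are placeholders, not values taken from this article.

```python
# a sketch of the advanced forecasting settings above (v1 SDK); all names are placeholders
from azureml.core import Workspace, Dataset
from azureml.train.automl import AutoMLConfig
from azureml.automl.core.forecasting_parameters import ForecastingParameters

ws = Workspace.from_config()
training_data = Dataset.get_by_name(ws, name="sales-timeseries")  # hypothetical registered dataset

forecasting_parameters = ForecastingParameters(
    time_column_name="date",
    forecast_horizon=14,            # predict 14 periods ahead
    target_lags=[1, 7],             # configurable lags
    target_rolling_window_size=28,  # rolling window aggregate features
)

automl_config = AutoMLConfig(
    task="forecasting",
    training_data=training_data,
    label_column_name="sales",
    n_cross_validations=5,          # rolling-origin cross validation for time series
    compute_target="cpu-cluster",   # hypothetical compute cluster
    forecasting_parameters=forecasting_parameters,
)
```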
+
+### Computer vision (preview)
+
+> [!IMPORTANT]
+> This feature is currently in public preview. This preview version is provided without a service-level agreement. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Support for computer vision tasks allows you to easily generate models trained on image data for scenarios like image classification and object detection.
+
+With this capability you can:
+
+* Seamlessly integrate with the [Azure Machine Learning data labeling](../how-to-create-image-labeling-projects.md) capability
+* Use labeled data for generating image models
+* Optimize model performance by specifying the model algorithm and tuning the hyperparameters.
+* Download or deploy the resulting model as a web service in Azure Machine Learning.
+* Operationalize at scale, leveraging Azure Machine Learning [MLOps](concept-model-management-and-deployment.md) and [ML Pipelines (v1)](../concept-ml-pipelines.md) capabilities.
+
+Authoring AutoML models for vision tasks is supported via the Azure ML Python SDK. The resulting experimentation runs, models, and outputs can be accessed from the Azure Machine Learning studio UI.
+
+Learn how to [set up AutoML training for computer vision models](../how-to-auto-train-image-models.md).
+Automated ML for images supports the following computer vision tasks:
+
+Task | Description
+-|-
+Multi-class image classification | Tasks where an image is classified with only a single label from a set of classes - e.g. each image is classified as either an image of a 'cat' or a 'dog' or a 'duck'
+Multi-label image classification | Tasks where an image could have one or more labels from a set of labels - e.g. an image could be labeled with both 'cat' and 'dog'
+Object detection| Tasks to identify objects in an image and locate each object with a bounding box e.g. locate all dogs and cats in an image and draw a bounding box around each.
+Instance segmentation | Tasks to identify objects in an image at the pixel level, drawing a polygon around each object in the image.
+
+<a name="nlp"></a>
+
+### Natural language processing: NLP (preview)
+Support for natural language processing (NLP) tasks in automated ML allows you to easily generate models trained on text data for text classification and named entity recognition scenarios. Authoring automated ML trained NLP models is supported via the Azure Machine Learning Python SDK. The resulting experimentation runs, models, and outputs can be accessed from the Azure Machine Learning studio UI.
+
+The NLP capability supports:
+
+* End-to-end deep neural network NLP training with the latest pre-trained BERT models
+* Seamless integration with [Azure Machine Learning data labeling](../how-to-create-text-labeling-projects.md)
+* Use labeled data for generating NLP models
+* Multi-lingual support with 104 languages
+* Distributed training with Horovod
+
+Learn how to [set up AutoML training for NLP models (v1)](../how-to-auto-train-nlp-models.md).
+
+## How automated ML works
+
+During training, Azure Machine Learning creates a number of pipelines in parallel that try different algorithms and parameters for you. The service iterates through ML algorithms paired with feature selections, where each iteration produces a model with a training score. The higher the score, the better the model is considered to "fit" your data. It will stop once it hits the exit criteria defined in the experiment.
+
+Using **Azure Machine Learning**, you can design and run your automated ML training experiments with these steps:
+
+1. **Identify the ML problem** to be solved: classification, forecasting, regression or computer vision (preview).
+
+1. **Choose whether you want to use the Python SDK or the studio web experience**:
+ Learn about the parity between the [Python SDK and studio web experience](#parity).
+
+ * For limited or no code experience, try the Azure Machine Learning studio web experience at [https://ml.azure.com](https://ml.azure.com/)
+ * For Python developers, check out the [Azure Machine Learning Python SDK (v1)](../how-to-configure-auto-train.md)
+
+1. **Specify the source and format of the labeled training data**: Numpy arrays or Pandas dataframe
+
+1. **Configure the compute target for model training**, such as your [local computer, Azure Machine Learning Computes, remote VMs, or Azure Databricks with SDK v1](../how-to-set-up-training-targets.md).
+
+1. **Configure the automated machine learning parameters** that determine how many iterations over different models, hyperparameter settings, advanced preprocessing/featurization, and what metrics to look at when determining the best model.
+1. **Submit the training run.**
+
+1. **Review the results**
+
+The following diagram illustrates this process.
+You can also inspect the logged run information, which [contains metrics](../how-to-understand-automated-ml.md) gathered during the run. The training run produces a Python serialized object (`.pkl` file) that contains the model and data preprocessing.
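Taken together, the steps above might look roughly like the following sketch with the v1 Python SDK. The workspace config file, dataset name, label column, compute name, and experiment name are all placeholders.

```python
# a rough end-to-end sketch of configuring and submitting an AutoML run (v1 SDK)
from azureml.core import Workspace, Experiment, Dataset
from azureml.train.automl import AutoMLConfig

ws = Workspace.from_config()  # assumes a downloaded config.json for your workspace
training_data = Dataset.get_by_name(ws, name="credit-training-data")  # hypothetical registered dataset

automl_config = AutoMLConfig(
    task="classification",
    training_data=training_data,
    label_column_name="default",        # hypothetical label column
    primary_metric="AUC_weighted",
    compute_target="cpu-cluster",       # hypothetical compute cluster
    experiment_timeout_minutes=30,      # exit criteria
    n_cross_validations=5,
)

experiment = Experiment(ws, "automl-classification-example")
run = experiment.submit(automl_config, show_output=True)

best_run, fitted_model = run.get_output()  # best child run and its fitted (pickled) model
```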
+
+While model building is automated, you can also [learn how important or relevant features are](../how-to-configure-auto-train.md) to the generated models.
+
+> [!VIDEO https://www.microsoft.com/videoplayer/embed/RE2Xc9t]
+
+<a name="local-remote"></a>
+
+## Guidance on local vs. remote managed ML compute targets
+
+The web interface for automated ML always uses a remote [compute target](../concept-compute-target.md). But when you use the Python SDK, you will choose either a local compute or a remote compute target for automated ML training.
+
+* **Local compute**: Training occurs on your local laptop or VM compute.
+* **Remote compute**: Training occurs on Machine Learning compute clusters.
+
+### Choose compute target
+Consider these factors when choosing your compute target:
+
+ * **Choose a local compute**: If your scenario is about initial explorations or demos using small data and short training runs (seconds or a couple of minutes per child run), training on your local computer might be a better choice. There's no setup time; the infrastructure resources (your PC or VM) are directly available.
+ * **Choose a remote ML compute cluster**: If you're training with larger datasets, as in production training that creates models requiring longer training runs, remote compute provides much better end-to-end time performance because `AutoML` parallelizes training across the cluster's nodes. On remote compute, the start-up time for the internal infrastructure adds around 1.5 minutes per child run, plus additional minutes for the cluster infrastructure if the VMs aren't yet up and running.
+
+### Pros and cons
+Consider these pros and cons when choosing to use local vs. remote.
+
+| | Pros (Advantages) |Cons (Handicaps) |
+|-|-|-|
+|**Local compute target** | <li> No environment start-up time | <li> Subset of features<li> Can't parallelize runs <li> Worse for large data. <li>No data streaming while training <li> No DNN-based featurization <li> Python SDK only |
+|**Remote ML compute clusters**| <li> Full set of features <li> Parallelize child runs <li> Large data support<li> DNN-based featurization <li> Dynamic scalability of compute cluster on demand <li> No-code experience (web UI) also available | <li> Start-up time for cluster nodes <li> Start-up time for each child run |
+
+### Feature availability
+
+More features are available when you use the remote compute, as shown in the table below.
+
+| Feature | Remote | Local |
+||--|-|
+| Data streaming (Large data support, up to 100 GB) | ✓ | |
+| DNN-BERT-based text featurization and training | ✓ | |
+| Out-of-the-box GPU support (training and inference) | ✓ | |
+| Image Classification and Labeling support | ✓ | |
+| Auto-ARIMA, Prophet and ForecastTCN models for forecasting | ✓ | |
+| Multiple runs/iterations in parallel | ✓ | |
+| Create models with interpretability in AutoML studio web experience UI | ✓ | |
+| Feature engineering customization in studio web experience UI| ✓ | |
+| Azure ML hyperparameter tuning | ✓ | |
+| Azure ML Pipeline workflow support | ✓ | |
+| Continue a run | ✓ | |
+| Forecasting | ✓ | ✓ |
+| Create and run experiments in notebooks | ✓ | ✓ |
+| Register and visualize experiment's info and metrics in UI | ✓ | ✓ |
+| Data guardrails | ✓ | ✓ |
+
+## Training, validation and test data
+
+With automated ML you provide the **training data** to train ML models, and you can specify what type of model validation to perform. Automated ML performs model validation as part of training. That is, automated ML uses **validation data** to tune model hyperparameters based on the applied algorithm to find the best combination that best fits the training data. However, the same validation data is used for each iteration of tuning, which introduces model evaluation bias since the model continues to improve and fit to the validation data.
+
+To help confirm that such bias isn't applied to the final recommended model, automated ML supports the use of **test data** to evaluate the final model that automated ML recommends at the end of your experiment. When you provide test data as part of your AutoML experiment configuration, this recommended model is tested by default at the end of your experiment (preview).
+
+>[!IMPORTANT]
+> Testing your models with a test dataset to evaluate generated models is a preview feature. This capability is an [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview feature, and may change at any time.
+
+Learn how to [configure AutoML experiments to use test data (preview) with the SDK (v1)](../how-to-configure-cross-validation-data-splits.md#provide-test-data-preview) or with the [Azure Machine Learning studio](../how-to-use-automated-ml-for-ml-models.md#create-and-run-experiment).
+
+You can also [test any existing automated ML model (preview) (v1)](../how-to-configure-auto-train.md), including models from child runs, by providing your own test data or by setting aside a portion of your training data.
+
+## Feature engineering
+
+Feature engineering is the process of using domain knowledge of the data to create features that help ML algorithms learn better. In Azure Machine Learning, scaling and normalization techniques are applied to facilitate feature engineering. Collectively, these techniques and feature engineering are referred to as featurization.
+
+For automated machine learning experiments, featurization is applied automatically, but can also be customized based on your data. [Learn more about what featurization is included (v1)](../how-to-configure-auto-features.md#featurization) and how AutoML helps [prevent over-fitting and imbalanced data](../concept-manage-ml-pitfalls.md) in your models.
+
+> [!NOTE]
+> Automated machine learning featurization steps (feature normalization, handling missing data,
+> converting text to numeric, etc.) become part of the underlying model. When using the model for
+> predictions, the same featurization steps applied during training are applied to
+> your input data automatically.
+
+### Customize featurization
+
+Additional feature engineering techniques, such as encoding and transforms, are also available.
+
+Enable this setting with:
++ Azure Machine Learning studio: Enable **Automatic featurization** in the **View additional configuration** section [with these (v1) steps](../how-to-use-automated-ml-for-ml-models.md#customize-featurization).
++ Python SDK: Specify `"featurization": 'auto' / 'off' / 'FeaturizationConfig'` in your [AutoMLConfig](/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig) object. Learn more about [enabling featurization (v1)](../how-to-configure-auto-features.md).
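
A minimal sketch of the SDK option follows; the dataset, column names, and transformer parameters are placeholders rather than values from this article.

```python
# a sketch of customized featurization with the v1 SDK
from azureml.core import Workspace, Dataset
from azureml.train.automl import AutoMLConfig
from azureml.automl.core.featurization import FeaturizationConfig

ws = Workspace.from_config()
training_data = Dataset.get_by_name(ws, name="credit-training-data")  # hypothetical dataset

featurization_config = FeaturizationConfig()
featurization_config.add_column_purpose("credit_limit", "Numeric")  # force how a column is treated
featurization_config.add_transformer_params("Imputer", ["credit_limit"], {"strategy": "median"})

automl_config = AutoMLConfig(
    task="classification",
    training_data=training_data,
    label_column_name="default",          # hypothetical label column
    featurization=featurization_config,   # or "auto" / "off"
)
```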
+## <a name="ensemble"></a> Ensemble models
+
+Automated machine learning supports ensemble models, which are enabled by default. Ensemble learning improves machine learning results and predictive performance by combining multiple models as opposed to using single models. The ensemble iterations appear as the final iterations of your run. Automated machine learning uses both voting and stacking ensemble methods for combining models:
+
+* **Voting**: predicts based on the weighted average of predicted class probabilities (for classification tasks) or predicted regression targets (for regression tasks).
+* **Stacking**: stacking combines heterogeneous models and trains a meta-model based on the output from the individual models. The current default meta-models are LogisticRegression for classification tasks and ElasticNet for regression/forecasting tasks.
+
+The [Caruana ensemble selection algorithm](http://www.niculescu-mizil.org/papers/shotgun.icml04.revised.rev2.pdf) with sorted ensemble initialization is used to decide which models to use within the ensemble. At a high level, this algorithm initializes the ensemble with up to five models with the best individual scores, and verifies that these models are within 5% threshold of the best score to avoid a poor initial ensemble. Then for each ensemble iteration, a new model is added to the existing ensemble and the resulting score is calculated. If a new model improved the existing ensemble score, the ensemble is updated to include the new model.
+
+See the [how-to (v1)](how-to-configure-auto-train-v1.md#ensemble-configuration) for changing default ensemble settings in automated machine learning.
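
For example, a hedged sketch of the ensemble flags on `AutoMLConfig`, reusing `training_data` and the imports from the earlier classification sketch:

```python
# both ensemble flags default to True; set them explicitly only to opt out (v1)
automl_config = AutoMLConfig(
    task="classification",
    training_data=training_data,      # assumed from the earlier sketch
    label_column_name="default",
    enable_voting_ensemble=True,      # keep the VotingEnsemble iteration
    enable_stack_ensemble=False,      # skip the StackEnsemble iteration
)
```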
+
+<a name="use-with-onnx"></a>
+
+## AutoML & ONNX
+
+With Azure Machine Learning, you can use automated ML to build a Python model and have it converted to the ONNX format. Once the models are in the ONNX format, they can be run on a variety of platforms and devices. Learn more about [accelerating ML models with ONNX](../concept-onnx.md).
+
+See how to convert to ONNX format [in this Jupyter notebook example](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml/classification-bank-marketing-all-features). Learn which [algorithms are supported in ONNX (v1)](../how-to-configure-auto-train.md#supported-algorithms).
+
+The ONNX runtime also supports C#, so you can use the model built automatically in your C# apps without any need for recoding or any of the network latencies that REST endpoints introduce. Learn more about [using an AutoML ONNX model in a .NET application with ML.NET](../how-to-use-automl-onnx-model-dotnet.md) and [inferencing ONNX models with the ONNX runtime C# API](https://onnxruntime.ai/docs/api/csharp-api.html).
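
As a hedged sketch, requesting ONNX-compatible models and retrieving the best one might look like the following, reusing the imports and `training_data` from the earlier classification sketch:

```python
# request ONNX-compatible models when configuring the experiment (v1)
automl_config = AutoMLConfig(
    task="classification",
    training_data=training_data,           # assumed from the earlier sketch
    label_column_name="default",
    enable_onnx_compatible_models=True,
)

run = Experiment(ws, "automl-onnx-example").submit(automl_config, show_output=True)
best_run, onnx_model = run.get_output(return_onnx_model=True)  # best ONNX model from the run
```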
+
+## Next steps
+
+There are multiple resources to get you up and running with AutoML.
+
+### Tutorials/ how-tos
+Tutorials are end-to-end introductory examples of AutoML scenarios.
++ **For a code first experience**, follow the [Tutorial: Train a regression model with AutoML and Python (v1)](../tutorial-auto-train-models.md).
++ **For a low or no-code experience**, see the [Tutorial: Train a classification model with no-code AutoML in Azure Machine Learning studio](../tutorial-first-experiment-automated-ml.md).
++ **For using AutoML to train computer vision models**, see the [Tutorial: Train an object detection model (preview) with AutoML and Python (v1)](../tutorial-auto-train-image-models.md).
+
+How-to articles provide additional detail into what functionality automated ML offers. For example,
+++ Configure the settings for automatic training experiments
+ + [Without code in the Azure Machine Learning studio](../how-to-use-automated-ml-for-ml-models.md).
+ + [With the Python SDK v1](../how-to-configure-auto-train.md).
++ Learn how to [train forecasting models with time series data (v1)](../how-to-auto-train-forecast.md).
++ Learn how to [train computer vision models with Python (v1)](../how-to-auto-train-image-models.md).
++ Learn how to [view the generated code from your automated ML models](../how-to-generate-automl-training-code.md).
+
+### Jupyter notebook samples
+
+Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml).
+
+### Python SDK reference
+
+Deepen your expertise of SDK design patterns and class specifications with the [AutoML class reference documentation](/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig).
+
+> [!Note]
+> Automated machine learning capabilities are also available in other Microsoft solutions such as,
+[ML.NET](/dotnet/machine-learning/automl-overview),
+[HDInsight](../../hdinsight/spark/apache-spark-run-machine-learning-automl.md), [Power BI](/power-bi/service-machine-learning-automated) and [SQL Server](https://cloudblogs.microsoft.com/sqlserver/2019/01/09/how-to-automate-machine-learning-on-sql-server-2019-big-data-clusters/)
machine-learning Concept Azure Machine Learning Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-azure-machine-learning-architecture.md
+
+ Title: 'Architecture & key concepts (v1)'
+
+description: This article gives you a high-level understanding of the architecture, terms, and concepts that make up Azure Machine Learning.
++++++ Last updated : 10/21/2021+
+#Customer intent: As a data scientist, I want to understand the big picture about how Azure Machine Learning works.
++
+# How Azure Machine Learning works: Architecture and concepts (v1)
++
+This article applies to the first version (v1) of the Azure Machine Learning CLI & SDK. For version two (v2), see [How Azure Machine Learning works (v2)](../concept-azure-machine-learning-v2.md).
+
+Learn about the architecture and concepts for [Azure Machine Learning](../overview-what-is-azure-machine-learning.md). This article gives you a high-level understanding of the components and how they work together to assist in the process of building, deploying, and maintaining machine learning models.
+
+## <a name="workspace"></a> Workspace
+
+A [machine learning workspace](../concept-workspace.md) is the top-level resource for Azure Machine Learning.
++
+The workspace is the centralized place to:
+
+* Manage resources you use for training and deployment of models, such as [computes](#computes)
+* Store assets you create when you use Azure Machine Learning, including:
+ * [Environments](#environments)
+ * [Experiments](#experiments)
+ * [Pipelines](#ml-pipelines)
+ * [Datasets](#datasets-and-datastores)
+ * [Models](#models)
+ * [Endpoints](#endpoints)
+
+A workspace includes other Azure resources that are used by the workspace:
++ [Azure Container Registry (ACR)](https://azure.microsoft.com/services/container-registry/): Registers docker containers that you use during training and when you deploy a model. To minimize costs, ACR is only created when deployment images are created.
++ [Azure Storage account](https://azure.microsoft.com/services/storage/): Is used as the default datastore for the workspace. Jupyter notebooks that are used with your Azure Machine Learning compute instances are stored here as well.
++ [Azure Application Insights](https://azure.microsoft.com/services/application-insights/): Stores monitoring information about your models.
++ [Azure Key Vault](https://azure.microsoft.com/services/key-vault/): Stores secrets that are used by compute targets and other sensitive information that's needed by the workspace.
+You can share a workspace with others.
+
+## Computes
+
+<a name="compute-targets"></a>
+A [compute target](../concept-compute-target.md) is any machine or set of machines you use to run your training script or host your service deployment. You can use your local machine or a remote compute resource as a compute target. With compute targets, you can start training on your local machine and then scale out to the cloud without changing your training script.
+
+Azure Machine Learning introduces two fully managed cloud-based virtual machines (VM) that are configured for machine learning tasks:
+
+* **Compute instance**: A compute instance is a VM that includes multiple tools and environments installed for machine learning. The primary use of a compute instance is for your development workstation. You can start running sample notebooks with no setup required. A compute instance can also be used as a compute target for training and inferencing jobs.
+
+* **Compute clusters**: Compute clusters are a cluster of VMs with multi-node scaling capabilities. Compute clusters are better suited for compute targets for large jobs and production. The cluster scales up automatically when a job is submitted. Use as a training compute target or for dev/test deployment.
+
+For more information about training compute targets, see [Training compute targets](../concept-compute-target.md#train). For more information about deployment compute targets, see [Deployment targets](../concept-compute-target.md#deploy).
+
+## Datasets and datastores
+
+[**Azure Machine Learning Datasets**](concept-data.md) make it easier to access and work with your data. By creating a dataset, you create a reference to the data source location along with a copy of its metadata. Because the data remains in its existing location, you incur no extra storage cost, and don't risk the integrity of your data sources.
+
+For more information, see [Create and register Azure Machine Learning Datasets](how-to-create-register-datasets.md). For more examples using Datasets, see the [sample notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/work-with-data/datasets-tutorial).
+
+Datasets use [datastores](../concept-data.md#datastores) to securely connect to your Azure storage services. Datastores store connection information without putting your authentication credentials and the integrity of your original data source at risk. They store connection information, like your subscription ID and token authorization in your Key Vault associated with the workspace, so you can securely access your storage without having to hard code them in your script.
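
As a brief sketch (the file path and dataset name below are placeholders), creating and registering a tabular dataset from the workspace's default datastore looks roughly like this:

```python
# a sketch of creating and registering a TabularDataset (v1)
from azureml.core import Workspace, Dataset

ws = Workspace.from_config()
datastore = ws.get_default_datastore()

dataset = Dataset.Tabular.from_delimited_files(
    path=(datastore, "credit/default_of_credit_card_clients.csv")  # hypothetical path on the datastore
)
dataset = dataset.register(workspace=ws, name="credit-training-data", create_new_version=True)
```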
+
+## Environments
+
+[Workspace](#workspace) > **Environments**
+
+An [environment](../concept-environments.md) is the encapsulation of the environment where training or scoring of your machine learning model happens. The environment specifies the Python packages, environment variables, and software settings around your training and scoring scripts.
+
+For code samples, see the "Manage environments" section of [How to use environments](../how-to-use-environments.md#manage-environments).
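
For instance, a hedged sketch of defining and registering an environment from a pip requirements file (the file and environment names are placeholders):

```python
# a sketch of creating a reusable environment (v1)
from azureml.core import Workspace, Environment

ws = Workspace.from_config()
env = Environment.from_pip_requirements(name="my-training-env", file_path="requirements.txt")
env.register(workspace=ws)  # registered environments are versioned and reusable across runs
```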
+
+## Experiments
+
+[Workspace](#workspace) > **Experiments**
+
+An experiment is a grouping of many runs from a specified script. It always belongs to a workspace. When you submit a run, you provide an experiment name. Information for the run is stored under that experiment. If the name doesn't exist when you submit an experiment, a new experiment is automatically created.
+
+For an example of using an experiment, see [Tutorial: Train your first model](../tutorial-1st-experiment-sdk-train.md).
+
+### Runs
+
+[Workspace](#workspace) > [Experiments](#experiments) > **Run**
+
+A run is a single execution of a training script. An experiment will typically contain multiple runs.
+
+Azure Machine Learning records all runs and stores the following information in the experiment:
+
+* Metadata about the run (timestamp, duration, and so on)
+* Metrics that are logged by your script
+* Output files that are autocollected by the experiment or explicitly uploaded by you
+* A snapshot of the directory that contains your scripts, prior to the run
+
+You produce a run when you submit a script to train a model. A run can have zero or more child runs. For example, the top-level run might have two child runs, each of which might have its own child run.
+
+### Run configurations
+
+[Workspace](#workspace) > [Experiments](#experiments) > [Run](#runs) > **Run configuration**
+
+A run configuration defines how a script should be run in a specified compute target. You use the configuration to specify the script, the compute target and Azure ML environment to run on, any distributed job-specific configurations, and some additional properties. For more information on the full set of configurable options for runs, see [ScriptRunConfig](/python/api/azureml-core/azureml.core.scriptrunconfig).
+
+A run configuration can be persisted into a file inside the directory that contains your training script. Or it can be constructed as an in-memory object and used to submit a run.
+
+For example run configurations, see [Configure a training run](../how-to-set-up-training-targets.md).
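
A minimal sketch of building and submitting a run configuration follows; the script, source directory, compute target, and environment name are placeholders.

```python
# a sketch of submitting a script run with ScriptRunConfig (v1)
from azureml.core import Workspace, Experiment, Environment, ScriptRunConfig

ws = Workspace.from_config()
env = Environment.get(workspace=ws, name="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu")  # assumed curated environment

config = ScriptRunConfig(
    source_directory="./src",      # this directory is snapshotted and sent to the compute target
    script="train.py",
    compute_target="cpu-cluster",  # assumed existing compute cluster
    environment=env,
)

experiment = Experiment(workspace=ws, name="architecture-example")
run = experiment.submit(config)
run.wait_for_completion(show_output=True)
```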
+
+### Snapshots
+
+[Workspace](#workspace) > [Experiments](#experiments) > [Run](#runs) > **Snapshot**
+
+When you submit a run, Azure Machine Learning compresses the directory that contains the script as a zip file and sends it to the compute target. The zip file is then extracted, and the script is run there. Azure Machine Learning also stores the zip file as a snapshot as part of the run record. Anyone with access to the workspace can browse a run record and download the snapshot.
+
+### Logging
+
+Azure Machine Learning automatically logs standard run metrics for you. However, you can also [use the Python SDK to log arbitrary metrics](../how-to-log-view-metrics.md).
+
+There are multiple ways to view your logs: monitoring run status in real time, or viewing results after completion. For more information, see [Monitor and view ML run logs](../how-to-log-view-metrics.md).
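
For example, a hedged sketch of logging custom metrics from inside a training script:

```python
# inside a submitted training script (v1): log custom metrics to the run
from azureml.core import Run

run = Run.get_context()                  # the run this script is executing in
run.log("accuracy", 0.95)                # placeholder value
run.log_list("losses", [0.9, 0.5, 0.3])  # placeholder values
```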
++
+> [!NOTE]
+> [!INCLUDE [amlinclude-info](../../../includes/machine-learning-amlignore-gitignore.md)]
+
+### Git tracking and integration
+
+When you start a training run where the source directory is a local Git repository, information about the repository is stored in the run history. This works with runs submitted using a script run configuration or ML pipeline. It also works for runs submitted from the SDK or Machine Learning CLI.
+
+For more information, see [Git integration for Azure Machine Learning](../concept-train-model-git-integration.md).
+
+### Training workflow
+
+When you run an experiment to train a model, the following steps happen. These are illustrated in the training workflow diagram below:
+
+* Azure Machine Learning is called with the snapshot ID for the code snapshot saved in the previous section.
+* Azure Machine Learning creates a run ID (optional) and a Machine Learning service token, which is later used by compute targets like Machine Learning Compute/VMs to communicate with the Machine Learning service.
+* You can choose either a managed compute target (like Machine Learning Compute) or an unmanaged compute target (like VMs) to run training jobs. Here are the data flows for both scenarios:
+ * VMs/HDInsight, accessed by SSH credentials in a key vault in the Microsoft subscription. Azure Machine Learning runs management code on the compute target that:
+
+ 1. Prepares the environment. (Docker is an option for VMs and local computers. See the following steps for Machine Learning Compute to understand how running experiments on Docker containers works.)
+ 1. Downloads the code.
+ 1. Sets up environment variables and configurations.
+ 1. Runs user scripts (the code snapshot mentioned in the previous section).
+
+ * Machine Learning Compute, accessed through a workspace-managed identity.
+Because Machine Learning Compute is a managed compute target (that is, it's managed by Microsoft) it runs under your Microsoft subscription.
+
+ 1. Remote Docker construction is kicked off, if needed.
+ 1. Management code is written to the user's Azure Files share.
+ 1. The container is started with an initial command. That is, management code as described in the previous step.
+
+* After the run completes, you can query runs and metrics. In the flow diagram below, this step occurs when the training compute target writes the run metrics back to Azure Machine Learning from storage in the Cosmos DB database. Clients can call Azure Machine Learning. Machine Learning will in turn pull metrics from the Cosmos DB database and return them back to the client.
+
+[![Training workflow](media/concept-azure-machine-learning-architecture/training-and-metrics.png)](media/concept-azure-machine-learning-architecture/training-and-metrics.png#lightbox)
+
+## Models
+
+At its simplest, a model is a piece of code that takes an input and produces output. Creating a machine learning model involves selecting an algorithm, providing it with data, and [tuning hyperparameters](../how-to-tune-hyperparameters.md). Training is an iterative process that produces a trained model, which encapsulates what the model learned during the training process.
+
+You can bring a model that was trained outside of Azure Machine Learning. Or you can train a model by submitting a [run](#runs) of an [experiment](#experiments) to a [compute target](#compute-targets) in Azure Machine Learning. Once you have a model, you [register the model](#model-registry) in the workspace.
+
+Azure Machine Learning is framework agnostic. When you create a model, you can use any popular machine learning framework, such as Scikit-learn, XGBoost, PyTorch, TensorFlow, and Chainer.
+
+For an example of training a model using Scikit-learn, see [Tutorial: Train an image classification model with Azure Machine Learning](../tutorial-train-deploy-notebook.md).
++
+### Model registry
+
+[Workspace](#workspace) > **Models**
+
+The **model registry** lets you keep track of all the models in your Azure Machine Learning workspace.
+
+Models are identified by name and version. Each time you register a model with the same name as an existing one, the registry assumes that it's a new version. The version is incremented, and the new model is registered under the same name.
+
+When you register the model, you can provide additional metadata tags and then use the tags when you search for models.
+
+> [!TIP]
+> A registered model is a logical container for one or more files that make up your model. For example, if you have a model that is stored in multiple files, you can register them as a single model in your Azure Machine Learning workspace. After registration, you can then download or deploy the registered model and receive all the files that were registered.
+
+You can't delete a registered model that is being used by an active deployment.
+
+For an example of registering a model, see [Train an image classification model with Azure Machine Learning](../tutorial-train-deploy-notebook.md).
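
A short sketch of registering a local model file (the path, name, and tags are placeholders):

```python
# a sketch of registering a model file in the workspace (v1)
from azureml.core import Workspace
from azureml.core.model import Model

ws = Workspace.from_config()
model = Model.register(
    workspace=ws,
    model_path="outputs/model.pkl",      # local path or a path within run outputs
    model_name="credit-defaults-model",
    tags={"framework": "sklearn"},
)
print(model.name, model.version)  # the version increments on each registration with the same name
```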
+
+## Deployment
+
+You deploy a [registered model](#model-registry) as a service endpoint. You need the following components:
+
+* **Environment**. This environment encapsulates the dependencies required to run your model for inference.
+* **Scoring code**. This script accepts requests, scores the requests by using the model, and returns the results.
+* **Inference configuration**. The inference configuration specifies the environment, entry script, and other components needed to run the model as a service.
+
+For more information about these components, see [Deploy models with Azure Machine Learning](../how-to-deploy-and-where.md).
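
Putting those components together, a hedged sketch of deploying a registered model to Azure Container Instances (the names, entry script, and environment are placeholders):

```python
# a sketch of deploying a registered model as a web service on ACI (v1)
from azureml.core import Workspace, Environment
from azureml.core.model import Model, InferenceConfig
from azureml.core.webservice import AciWebservice

ws = Workspace.from_config()
model = Model(ws, name="credit-defaults-model")  # previously registered model
env = Environment.get(ws, name="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu")  # assumed curated environment

inference_config = InferenceConfig(entry_script="score.py", source_directory="./deploy", environment=env)
deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)

service = Model.deploy(ws, "credit-defaults-service", [model], inference_config, deployment_config)
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)
```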
+
+### Endpoints
+
+[Workspace](#workspace) > **Endpoints**
+
+An endpoint is an instantiation of your model into a web service that can be hosted in the cloud.
+
+#### Web service endpoint
+
+When deploying a model as a web service, the endpoint can be deployed on Azure Container Instances, Azure Kubernetes Service, or FPGAs. You create the service from your model, script, and associated files. These files are placed into a base container image, which contains the execution environment for the model. The image has a load-balanced HTTP endpoint that receives scoring requests that are sent to the web service.
+
+You can enable Application Insights telemetry or model telemetry to monitor your web service. The telemetry data is accessible only to you. It's stored in your Application Insights and storage account instances. If you've enabled automatic scaling, Azure automatically scales your deployment.
+
+The following diagram shows the inference workflow for a model deployed as a web service endpoint:
+
+Here are the details:
+
+* The user registers a model by using a client like the Azure Machine Learning SDK.
+* The user creates an image by using a model, a score file, and other model dependencies.
+* The Docker image is created and stored in Azure Container Registry.
+* The web service is deployed to the compute target (Container Instances/AKS) using the image created in the previous step.
+* Scoring request details are stored in Application Insights, which is in the user's subscription.
+* Telemetry is also pushed to the Microsoft Azure subscription.
+
+[![Inference workflow](media/concept-azure-machine-learning-architecture/inferencing.png)](media/concept-azure-machine-learning-architecture/inferencing.png#lightbox)
++
+For an example of deploying a model as a web service, see [Tutorial: Train and deploy a model](../tutorial-train-deploy-notebook.md).
+
+#### Real-time endpoints
+
+When you deploy a trained model in the designer, you can [deploy the model as a real-time endpoint](../tutorial-designer-automobile-price-deploy.md). A real-time endpoint commonly receives a single request via the REST endpoint and returns a prediction in real time. This is in contrast to batch processing, which processes multiple values at once and saves the results to a datastore after completion.
+
+#### Pipeline endpoints
+
+Pipeline endpoints let you call your [ML Pipelines](#ml-pipelines) programmatically via a REST endpoint, so you can automate your pipeline workflows.
+
+A pipeline endpoint is a collection of published pipelines. This logical organization lets you manage and call multiple pipelines using the same endpoint. Each published pipeline in a pipeline endpoint is versioned. You can select a default pipeline for the endpoint, or specify a version in the REST call.
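+
+For example, calling a published pipeline through its endpoint with the v1 SDK might look like this sketch; the endpoint and experiment names are hypothetical.
+
+```python
+from azureml.core import Workspace
+from azureml.pipeline.core import PipelineEndpoint
+
+ws = Workspace.from_config()
+
+# Retrieve the pipeline endpoint by name and submit a run against its default version.
+pipeline_endpoint = PipelineEndpoint.get(workspace=ws, name="my-pipeline-endpoint")
+run = pipeline_endpoint.submit(experiment_name="pipeline-endpoint-runs")
+print(run.id)
+```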
+
+## Automation
+
+### Azure Machine Learning CLI
+
+The [Azure Machine Learning CLI](../how-to-configure-cli.md) is an extension to the Azure CLI, a cross-platform command-line interface for the Azure platform. This extension provides commands to automate your machine learning activities.
+
+### ML Pipelines
+
+You use [machine learning pipelines](../concept-ml-pipelines.md) to create and manage workflows that stitch together machine learning phases. For example, a pipeline might include data preparation, model training, model deployment, and inference/scoring phases. Each phase can encompass multiple steps, each of which can run unattended in various compute targets.
+
+Pipeline steps are reusable, and can be run without rerunning the previous steps if the output of those steps hasn't changed. For example, you can retrain a model without rerunning costly data preparation steps if the data hasn't changed. Pipelines also allow data scientists to collaborate while working on separate areas of a machine learning workflow.
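+
+A minimal sketch of a two-step pipeline with the v1 SDK follows; the script names, source directory, and compute target name are assumptions.
+
+```python
+from azureml.core import Workspace
+from azureml.pipeline.core import Pipeline
+from azureml.pipeline.steps import PythonScriptStep
+
+ws = Workspace.from_config()
+
+# Reusable steps: with allow_reuse=True, unchanged steps are skipped on later runs.
+prep_step = PythonScriptStep(name="prepare_data", script_name="prep.py",
+                             source_directory="./src", compute_target="cpu-cluster",
+                             allow_reuse=True)
+train_step = PythonScriptStep(name="train_model", script_name="train.py",
+                              source_directory="./src", compute_target="cpu-cluster")
+train_step.run_after(prep_step)
+
+pipeline = Pipeline(workspace=ws, steps=[prep_step, train_step])
+```
+
+You could then submit the pipeline as an experiment run, for example with `Experiment(ws, "pipeline-demo").submit(pipeline)`.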
+
+## Monitoring and logging
+
+Azure Machine Learning provides the following monitoring and logging capabilities:
+
+* For __Data Scientists__, you can monitor your experiments and log information from your training runs. For more information, see the following articles:
+ * [Start, monitor, and cancel training runs](../how-to-track-monitor-analyze-runs.md)
+ * [Log metrics for training runs](../how-to-log-view-metrics.md)
+ * [Track experiments with MLflow](../how-to-use-mlflow.md)
+ * [Visualize runs with TensorBoard](../how-to-monitor-tensorboard.md)
+* For __Administrators__, you can monitor information about the workspace, related Azure resources, and events such as resource creation and deletion by using Azure Monitor. For more information, see [How to monitor Azure Machine Learning](../monitor-azure-machine-learning.md).
+* For __DevOps__ or __MLOps__, you can monitor information generated by models deployed as web services to identify problems with the deployments and gather data submitted to the service. For more information, see [Collect model data](../how-to-enable-data-collection.md) and [Monitor with Application Insights](../how-to-enable-app-insights.md).
+
+## Interacting with your workspace
+
+### Studio
+
+[Azure Machine Learning studio](../overview-what-is-machine-learning-studio.md) provides a web view of all the artifacts in your workspace. You can view results and details of your datasets, experiments, pipelines, models, and endpoints. You can also manage compute resources and datastores in the studio.
+
+The studio is also where you access the interactive tools that are part of Azure Machine Learning:
+
++ [Azure Machine Learning designer](../concept-designer.md) to perform workflow steps without writing code
++ Web experience for [automated machine learning](../concept-automated-ml.md)
++ [Azure Machine Learning notebooks](../how-to-run-jupyter-notebooks.md) to write and run your own code in integrated Jupyter notebook servers.
++ Data labeling projects to create, manage, and monitor projects for labeling [images](../how-to-create-image-labeling-projects.md) or [text](../how-to-create-text-labeling-projects.md).
+
+### Programming tools
+
+> [!IMPORTANT]
+> Tools marked (preview) below are currently in public preview.
+> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
++ Interact with the service in any Python environment with the [Azure Machine Learning SDK for Python](/python/api/overview/azure/ml/intro).
++ Use [Azure Machine Learning designer](../concept-designer.md) to perform the workflow steps without writing code.
++ Use [Azure Machine Learning CLI](../how-to-configure-cli.md) for automation.
+
+## Next steps
+
+To get started with Azure Machine Learning, see:
+
+* [What is Azure Machine Learning?](../overview-what-is-azure-machine-learning.md)
+* [Create an Azure Machine Learning workspace](../how-to-manage-workspace.md)
+* [Tutorial: Train and deploy a model](../tutorial-train-deploy-notebook.md)
machine-learning Concept Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-data.md
+
+ Title: Secure data access in the cloud
+
+description: Learn how to securely connect to your data storage on Azure with Azure Machine Learning datastores and datasets.
+++++++ Last updated : 10/21/2021+
+#Customer intent: As an experienced Python developer, I need to securely access my data in my Azure storage solutions and use it to accomplish my machine learning tasks.
++
+# Secure data access in Azure Machine Learning
+
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning developer platform you are using:"]
+> * [v1](concept-data.md)
+> * [v2 (current version)](../concept-data.md)
+
+Azure Machine Learning makes it easy to connect to your data in the cloud. It provides an abstraction layer over the underlying storage service, so you can securely access and work with your data without having to write code specific to your storage type. Azure Machine Learning also provides the following data capabilities:
+
+* Interoperability with Pandas and Spark DataFrames
+* Versioning and tracking of data lineage
+* Data labeling
+* Data drift monitoring
+
+## Data workflow
+
+When you're ready to use the data in your cloud-based storage solution, we recommend the following data delivery workflow. This workflow assumes you have an [Azure storage account](/storage/common/storage-account-create.md?tabs=azure-portal) and data in a cloud-based storage service in Azure.
+
+1. Create an [Azure Machine Learning datastore](#connect-to-storage-with-datastores) to store connection information to your Azure storage.
+
+2. From that datastore, create an [Azure Machine Learning dataset](#reference-data-in-storage-with-datasets) to point to specific files in your underlying storage.
+
+3. To use that dataset in your machine learning experiment you can either
+ * Mount it to your experiment's compute target for model training.
+
+ **OR**
+
+ * Consume it directly in Azure Machine Learning solutions like automated machine learning (automated ML) experiment runs, machine learning pipelines, or the [Azure Machine Learning designer](../concept-designer.md).
+
+4. Create [dataset monitors](#monitor-model-performance-with-data-drift) for your model output dataset to detect data drift.
+
+5. If data drift is detected, update your input dataset and retrain your model accordingly.
+
+The following diagram provides a visual demonstration of this recommended workflow.
+
+![Diagram shows the Azure Storage Service which flows into a datastore, which flows into a dataset.](./media/concept-data/data-concept-diagram.svg)
++
+## Connect to storage with datastores
+
+Azure Machine Learning datastores securely keep the connection information to your data storage on Azure, so you don't have to code it in your scripts. [Register and create a datastore](../how-to-access-data.md) to easily connect to your storage account, and access the data in your underlying storage service.
+
+Supported cloud-based storage services in Azure that can be registered as datastores:
+
++ Azure Blob Container
++ Azure File Share
++ Azure Data Lake
++ Azure Data Lake Gen2
++ Azure SQL Database
++ Azure Database for PostgreSQL
++ Databricks File System
++ Azure Database for MySQL
+
+>[!TIP]
+> You can create datastores with credential-based authentication for accessing storage services, like a service principal or shared access signature (SAS) token. These credentials can be accessed by users who have *Reader* access to the workspace.
+>
+> If this is a concern, [create a datastore that uses identity-based data access](../how-to-identity-based-data-access.md) to connect to storage services.
++
+## Reference data in storage with datasets
+
+Azure Machine Learning datasets aren't copies of your data. By creating a dataset, you create a reference to the data in its storage service, along with a copy of its metadata.
+
+Because datasets are lazily evaluated, and the data remains in its existing location, you:
+
+* Incur no extra storage cost.
+* Don't risk unintentionally changing your original data sources.
+* Improve ML workflow performance speeds.
+
+To interact with your data in storage, [create a dataset](how-to-create-register-datasets.md) to package your data into a consumable object for machine learning tasks. Register the dataset to your workspace to share and reuse it across different experiments without data ingestion complexities.
+
+Datasets can be created from local files, public urls, [Azure Open Datasets](https://azure.microsoft.com/services/open-datasets/), or Azure storage services via datastores.
+
+There are 2 types of datasets:
+
++ A [FileDataset](/python/api/azureml-core/azureml.data.file_dataset.filedataset) references single or multiple files in your datastores or public URLs. If your data is already cleansed and ready to use in training experiments, you can [download or mount files](../how-to-train-with-datasets.md#mount-files-to-remote-compute-targets) referenced by FileDatasets to your compute target.
+
++ A [TabularDataset](/python/api/azureml-core/azureml.data.tabulardataset) represents data in a tabular format by parsing the provided file or list of files. You can load a TabularDataset into a pandas or Spark DataFrame for further manipulation and cleansing. For a complete list of data formats you can create TabularDatasets from, see the [TabularDatasetFactory class](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory). Both types are shown in the sketch after this list.
+
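+The following hedged sketch shows both dataset types being created from a datastore; the datastore name and file paths are placeholders.
+
+```python
+from azureml.core import Dataset, Datastore, Workspace
+
+ws = Workspace.from_config()
+datastore = Datastore.get(ws, datastore_name="workspaceblobstore")
+
+# FileDataset: reference files by path or glob pattern without copying them.
+file_ds = Dataset.File.from_files(path=(datastore, "images/**"))
+
+# TabularDataset: parse a delimited file and load it into pandas for exploration.
+tabular_ds = Dataset.Tabular.from_delimited_files(path=(datastore, "data/sales.csv"))
+df = tabular_ds.to_pandas_dataframe()
+```
+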
+Additional datasets capabilities can be found in the following documentation:
+
++ [Version and track](../how-to-version-track-datasets.md) dataset lineage.
++ [Monitor your dataset](../how-to-monitor-datasets.md) to help with data drift detection.
+
+## Work with your data
+
+With datasets, you can accomplish a number of machine learning tasks through seamless integration with Azure Machine Learning features.
+
++ Create a [data labeling project](#label-data-with-data-labeling-projects).
++ Train machine learning models:
+ + [automated ML experiments](../how-to-use-automated-ml-for-ml-models.md)
+ + the [designer](../tutorial-designer-automobile-price-train-score.md#import-data)
+ + [notebooks](../how-to-train-with-datasets.md)
+ + [Azure Machine Learning pipelines](../how-to-create-machine-learning-pipelines.md)
++ Access datasets for scoring with [batch inference](../tutorial-pipeline-batch-scoring-classification.md) in [machine learning pipelines](../how-to-create-machine-learning-pipelines.md).
++ Set up a dataset monitor for [data drift](#monitor-model-performance-with-data-drift) detection.
+
+## Label data with data labeling projects
+
+Labeling large amounts of data has often been a headache in machine learning projects. Those with a computer vision component, such as image classification or object detection, generally require thousands of images and corresponding labels.
+
+Azure Machine Learning gives you a central location to create, manage, and monitor labeling projects. Labeling projects help coordinate the data, labels, and team members, allowing you to manage the labeling tasks more efficiently. Currently supported tasks are image classification, either multi-label or multi-class, and object identification using bounding boxes.
+
+Create an [image labeling project](../how-to-create-image-labeling-projects.md) or [text labeling project](../how-to-create-text-labeling-projects.md), and output a dataset for use in machine learning experiments.
+++
+## Monitor model performance with data drift
+
+In the context of machine learning, data drift is the change in model input data that leads to model performance degradation. It's one of the top reasons model accuracy degrades over time, so monitoring for data drift helps detect model performance issues.
+
+See the [Create a dataset monitor](../how-to-monitor-datasets.md) article to learn more about how to detect and alert on data drift in new data in a dataset.
+
+## Next steps
+
++ Create a dataset in Azure Machine Learning studio or with the Python SDK [using these steps](how-to-create-register-datasets.md).
++ Try out dataset training examples with our [sample notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/work-with-data/).
machine-learning Concept Mlflow V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-mlflow-v1.md
+
+ Title: MLflow and Azure Machine Learning (v1)
+
+description: Learn about MLflow with Azure Machine Learning to log metrics and artifacts from ML models, and deploy your ML models as a web service.
+++++ Last updated : 10/21/2021++++
+# MLflow and Azure Machine Learning (v1)
++
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning developer platform you are using:"]
+> * [v1](concept-mlflow-v1.md)
+> * [v2 (current version)](../concept-mlflow.md)
+
+[MLflow](https://www.mlflow.org) is an open-source library for managing the life cycle of your machine learning experiments. MLflow's tracking URI and logging API, collectively known as [MLflow Tracking](https://mlflow.org/docs/latest/quickstart.html#using-the-tracking-api), form a component of MLflow that logs and tracks your training run metrics and model artifacts, no matter your experiment's environment: locally on your computer, on a remote compute target, on a virtual machine, or in an Azure Databricks cluster.
+
+Together, MLflow Tracking and Azure Machine Learning allow you to track an experiment's run metrics and store model artifacts in your Azure Machine Learning workspace. That experiment could have been run locally on your computer, on a remote compute target, or on a virtual machine.
+
+## Compare MLflow and Azure Machine Learning clients
+
+ The following table summarizes the different clients that can use Azure Machine Learning, and their respective function capabilities.
+
+ MLflow Tracking offers metric logging and artifact storage functionalities that are only otherwise available via the [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro).
+
+| Capability | MLflow Tracking & Deployment | Azure Machine Learning Python SDK | Azure Machine Learning CLI | Azure Machine Learning studio|
+|---|---|---|---|---|
+| Manage workspace | | ✓ | ✓ | ✓ |
+| Use data stores | | ✓ | ✓ | |
+| Log metrics | ✓ | ✓ | | |
+| Upload artifacts | ✓ | ✓ | | |
+| View metrics | ✓ | ✓ | ✓ | ✓ |
+| Manage compute | | ✓ | ✓ | ✓ |
+| Deploy models | ✓ | ✓ | ✓ | ✓ |
+| Monitor model performance | | ✓ | | |
+| Detect data drift | | ✓ | | ✓ |
++
+## Track experiments
+
+With MLflow Tracking, you can connect Azure Machine Learning as the backend of your MLflow experiments. By doing so, you can do the following tasks:
+
++ Track and log experiment metrics and artifacts in your [Azure Machine Learning workspace](concept-azure-machine-learning-architecture.md#workspace). If you already use MLflow Tracking for your experiments, the workspace provides a centralized, secure, and scalable location to store training metrics and models. Learn more at [Track ML models with MLflow and Azure Machine Learning](../how-to-use-mlflow.md). A minimal connection sketch follows this list.
+
++ Track and manage models in MLflow and Azure Machine Learning model registry.
+
++ [Track Azure Databricks training runs](../how-to-use-mlflow-azure-databricks.md).
+
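+A minimal connection sketch, assuming the `azureml-mlflow` package is installed; the experiment name and metric value are placeholders.
+
+```python
+import mlflow
+from azureml.core import Workspace
+
+ws = Workspace.from_config()
+
+# Point MLflow at the workspace so logged metrics and artifacts land in Azure Machine Learning.
+mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri())
+mlflow.set_experiment("mlflow-on-azureml")
+
+with mlflow.start_run():
+    mlflow.log_metric("accuracy", 0.91)
+```
+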
+## Train MLflow projects (preview)
++
+You can use MLflow's tracking URI and logging API, collectively known as MLflow Tracking, to submit training jobs with [MLflow Projects](https://www.mlflow.org/docs/latest/projects.html) and Azure Machine Learning backend support (preview). You can submit jobs locally with Azure Machine Learning tracking, or migrate your runs to the cloud, for example to an [Azure Machine Learning compute cluster](../how-to-create-attach-compute-cluster.md).
+
+Learn more at [Train ML models with MLflow projects and Azure Machine Learning (preview)](../how-to-train-mlflow-projects.md).
+
+## Deploy MLflow experiments
+
+You can [deploy your MLflow model as an Azure web service](../how-to-deploy-mlflow-models.md), so you can leverage and apply Azure Machine Learning's model management and data drift detection capabilities to your production models.
+
+## Next steps
+* [Track ML models with MLflow and Azure Machine Learning](how-to-use-mlflow.md).
+* [Train ML models with MLflow projects and Azure Machine Learning (preview)](../how-to-train-mlflow-projects.md).
+* [Track Azure Databricks runs with MLflow](../how-to-use-mlflow-azure-databricks.md).
+* [Deploy models with MLflow](how-to-deploy-mlflow-models.md).
machine-learning Concept Model Management And Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-model-management-and-deployment.md
+
+ Title: 'MLOps: ML model management v1'
+
+description: 'Learn about model management (MLOps) with Azure Machine Learning. Deploy, manage, track lineage and monitor your models to continuously improve them. (v1)'
++++++++ Last updated : 11/04/2021++
+# MLOps: Model management, deployment, lineage, and monitoring with Azure Machine Learning v1
++
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning developer platform you are using:"]
+> * [v1](concept-model-management-and-deployment.md)
+> * [v2 (current version)](../concept-model-management-and-deployment.md)
+
+In this article, learn how to use Machine Learning Operations (MLOps) in Azure Machine Learning to manage the lifecycle of your models. MLOps improves the quality and consistency of your machine learning solutions.
+
+## What is MLOps?
+
+Machine Learning Operations (MLOps) is based on [DevOps](https://azure.microsoft.com/overview/what-is-devops/) principles and practices, such as continuous integration, delivery, and deployment, that increase the efficiency of workflows. MLOps applies these principles to the machine learning process, with the goal of:
+
+* Faster experimentation and development of models
+* Faster deployment of models into production
+* Quality assurance and end-to-end lineage tracking
+
+## MLOps in Azure Machine Learning
+
+Azure Machine Learning provides the following MLOps capabilities:
+
+- **Create reproducible ML pipelines**. Machine Learning pipelines allow you to define repeatable and reusable steps for your data preparation, training, and scoring processes.
+- **Create reusable software environments** for training and deploying models.
+- **Register, package, and deploy models from anywhere**. You can also track associated metadata required to use the model.
+- **Capture the governance data for the end-to-end ML lifecycle**. The logged lineage information can include who is publishing models, why changes were made, and when models were deployed or used in production.
+- **Notify and alert on events in the ML lifecycle**. For example, experiment completion, model registration, model deployment, and data drift detection.
+- **Monitor ML applications for operational and ML-related issues**. Compare model inputs between training and inference, explore model-specific metrics, and provide monitoring and alerts on your ML infrastructure.
+- **Automate the end-to-end ML lifecycle with Azure Machine Learning and Azure Pipelines**. Using pipelines allows you to frequently update models, test new models, and continuously roll out new ML models alongside your other applications and services.
+
+For more information on MLOps, see [Machine Learning DevOps (MLOps)](/azure/cloud-adoption-framework/ready/azure-best-practices/ai-machine-learning-mlops).
+
+## Create reproducible ML pipelines
+
+Use ML pipelines from Azure Machine Learning to stitch together all of the steps involved in your model training process.
+
+An ML pipeline can contain steps from data preparation to feature extraction to hyperparameter tuning to model evaluation. For more information, see [ML pipelines](../concept-ml-pipelines.md).
+
+If you use the [Designer](../concept-designer.md) to create your ML pipelines, you can at any time select the **"..."** at the top-right of the Designer page and then select **Clone**. Cloning your pipeline allows you to iterate on your pipeline design without losing your old versions.
+
+## Create reusable software environments
+
+Azure Machine Learning environments allow you to track and reproduce your projects' software dependencies as they evolve. Environments allow you to ensure that builds are reproducible without manual software configurations.
+
+Environments describe the pip and Conda dependencies for your projects, and can be used for both training and deployment of models. For more information, see [What are Azure Machine Learning environments](../concept-environments.md).
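+
+As a hedged sketch, an environment with pip and Conda dependencies might be defined and registered like this; the package choices are illustrative only.
+
+```python
+from azureml.core import Environment, Workspace
+from azureml.core.conda_dependencies import CondaDependencies
+
+ws = Workspace.from_config()
+
+# Describe the Conda and pip dependencies once, then reuse them for training and deployment.
+env = Environment(name="training-env")
+env.python.conda_dependencies = CondaDependencies.create(
+    conda_packages=["scikit-learn", "pandas"],
+    pip_packages=["azureml-defaults"])
+
+env.register(workspace=ws)
+```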
+
+## Register, package, and deploy models from anywhere
+
+### Register and track ML models
+
+Model registration allows you to store and version your models in the Azure cloud, in your workspace. The model registry makes it easy to organize and keep track of your trained models.
+
+> [!TIP]
+> A registered model is a logical container for one or more files that make up your model. For example, if you have a model that is stored in multiple files, you can register them as a single model in your Azure Machine Learning workspace. After registration, you can then download or deploy the registered model and receive all the files that were registered.
+
+Registered models are identified by name and version. Each time you register a model with the same name as an existing one, the registry increments the version. Additional metadata tags can be provided during registration. These tags are then used when searching for a model. Azure Machine Learning supports any model that can be loaded using Python 3.5.2 or higher.
+
+> [!TIP]
+> You can also register models trained outside Azure Machine Learning.
+
+You can't delete a registered model that is being used in an active deployment.
+For more information, see the register model section of [Deploy models](../how-to-deploy-and-where.md#registermodel).
+
+> [!IMPORTANT]
+> When you use the **Filter by** `Tags` option on the Models page of Azure Machine Learning studio, use `TagName=TagValue` (without a space) instead of `TagName : TagValue`.
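+
+In the Python SDK, a comparable tag-based search can be done with `Model.list`; this is a sketch with hypothetical tag values.
+
+```python
+from azureml.core import Workspace
+from azureml.core.model import Model
+
+ws = Workspace.from_config()
+
+# List registered models whose tags match the given key/value pair.
+for m in Model.list(ws, tags=[["area", "classification"]]):
+    print(m.name, m.version, m.tags)
+```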
++
+### Package and debug models
+
+Before a model is deployed into production, it's packaged into a Docker image. In most cases, image creation happens automatically in the background during deployment, but you can manually specify the image.
+
+If you run into problems with the deployment, you can deploy on your local development environment for troubleshooting and debugging.
+
+For more information, see [Deploy models](../how-to-deploy-and-where.md#registermodel) and [Troubleshooting deployments](../how-to-troubleshoot-deployment.md).
+
+### Convert and optimize models
+
+Converting your model to [Open Neural Network Exchange](https://onnx.ai) (ONNX) may improve performance. On average, converting to ONNX can yield a 2x performance increase.
+
+For more information on ONNX with Azure Machine Learning, see the [Create and accelerate ML models](../concept-onnx.md) article.
+
+### Use models
+
+Trained machine learning models are deployed as web services in the cloud or locally. Deployments use CPU, GPU, or field-programmable gate arrays (FPGA) for inferencing. You can also use models from Power BI.
+
+When using a model as a web service, you provide the following items:
+
+* The model(s) that are used to score data submitted to the service/device.
+* An entry script. This script accepts requests, uses the model(s) to score the data, and returns a response (a sketch follows this list).
+* An Azure Machine Learning environment that describes the pip and Conda dependencies required by the model(s) and entry script.
+* Any additional assets such as text, data, etc. that are required by the model(s) and entry script.
+
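+A typical entry script defines an `init()` and a `run()` function. The following is only a sketch, assuming a scikit-learn model saved as a hypothetical `model.pkl` file.
+
+```python
+import json
+import os
+
+import joblib
+
+
+def init():
+    # Runs once when the service starts: load the registered model files.
+    global model
+    model_path = os.path.join(os.getenv("AZUREML_MODEL_DIR"), "model.pkl")
+    model = joblib.load(model_path)
+
+
+def run(raw_data):
+    # Runs for every scoring request: expects JSON with a "data" field.
+    data = json.loads(raw_data)["data"]
+    predictions = model.predict(data)
+    return predictions.tolist()
+```
+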
+You also provide the configuration of the target deployment platform. For example, the VM family type, available memory, and number of cores when deploying to Azure Kubernetes Service.
+
+When the image is created, components required by Azure Machine Learning are also added. For example, assets needed to run the web service.
+
+#### Batch scoring
+Batch scoring is supported through ML pipelines. For more information, see [Batch predictions on big data](../tutorial-pipeline-batch-scoring-classification.md).
+
+#### Real-time web services
+
+You can use your models in **web services** with the following compute targets:
+
+* Azure Container Instance
+* Azure Kubernetes Service
+* Local development environment
+
+To deploy the model as a web service, you must provide the following items:
+
+* The model or ensemble of models.
+* Dependencies required to use the model. For example, a script that accepts requests and invokes the model, conda dependencies, etc.
+* Deployment configuration that describes how and where to deploy the model.
+
+For more information, see [Deploy models](../how-to-deploy-and-where.md).
+
+#### Controlled rollout
+
+When deploying to Azure Kubernetes Service, you can use controlled rollout to enable the following scenarios:
+
+* Create multiple versions of an endpoint for a deployment
+* Perform A/B testing by routing traffic to different versions of the endpoint.
+* Switch between endpoint versions by updating the traffic percentage in endpoint configuration.
+
+For more information, see [Controlled rollout of ML models](../how-to-deploy-azure-kubernetes-service.md#deploy-models-to-aks-using-controlled-rollout-preview).
+
+### Analytics
+
+Microsoft Power BI supports using machine learning models for data analytics. For more information, see [Azure Machine Learning integration in Power BI (preview)](/power-bi/service-machine-learning-integration).
+
+## Capture the governance data required for MLOps
+
+Azure ML gives you the capability to track the end-to-end audit trail of all of your ML assets by using metadata.
+
+- Azure ML [integrates with Git](../how-to-set-up-training-targets.md#gitintegration) to track information on which repository / branch / commit your code came from.
+- [Azure ML Datasets](how-to-create-register-datasets.md) help you track, profile, and version data.
+- [Interpretability](../how-to-machine-learning-interpretability.md) allows you to explain your models, meet regulatory compliance, and understand how models arrive at a result for given input.
+- Azure ML Run history stores a snapshot of the code, data, and computes used to train a model.
+- The Azure ML Model Registry captures all of the metadata associated with your model (which experiment trained it, where it is being deployed, if its deployments are healthy).
+- [Integration with Azure](../how-to-use-event-grid.md) allows you to act on events in the ML lifecycle. For example, model registration, deployment, data drift, and training (run) events.
+
+> [!TIP]
+> While some information on models and datasets is automatically captured, you can add additional information by using __tags__. When looking for registered models and datasets in your workspace, you can use tags as a filter.
+>
+> Associating a dataset with a registered model is an optional step. For information on referencing a dataset when registering a model, see the [Model](/python/api/azureml-core/azureml.core.model%28class%29) class reference.
++
+## Notify, automate, and alert on events in the ML lifecycle
+Azure ML publishes key events to Azure Event Grid, which can be used to notify and automate on events in the ML lifecycle. For more information, please see [this document](../how-to-use-event-grid.md).
++
+## Monitor for operational & ML issues
+
+Monitoring enables you to understand what data is being sent to your model, and the predictions that it returns.
+
+This information helps you understand how your model is being used. The collected input data may also be useful in training future versions of the model.
+
+For more information, see [How to enable model data collection](../how-to-enable-data-collection.md).
+
+## Retrain your model on new data
+
+Often, you'll want to validate your model, update it, or even retrain it from scratch, as you receive new information. Sometimes, receiving new data is an expected part of the domain. Other times, as discussed in [Detect data drift (preview) on datasets](../how-to-monitor-datasets.md), model performance can degrade in the face of such things as changes to a particular sensor, natural data changes such as seasonal effects, or features shifting in their relation to other features.
+
+There's no universal answer to "How do I know if I should retrain?", but the Azure ML event and monitoring tools discussed previously are good starting points for automation. Once you have decided to retrain, you should:
+
+- Preprocess your data using a repeatable, automated process
+- Train your new model
+- Compare the outputs of your new model to those of your old model
+- Use predefined criteria to choose whether to replace your old model
+
+A theme of the above steps is that your retraining should be automated, not ad hoc. [Azure Machine Learning pipelines](../concept-ml-pipelines.md) are a good answer for creating workflows relating to data preparation, training, validation, and deployment. Read [Retrain models with Azure Machine Learning designer](../how-to-retrain-designer.md) to see how pipelines and the Azure Machine Learning designer fit into a retraining scenario.
+
+## Automate the ML lifecycle
+
+You can use GitHub and Azure Pipelines to create a continuous integration process that trains a model. In a typical scenario, when a Data Scientist checks a change into the Git repo for a project, the Azure Pipeline will start a training run. The results of the run can then be inspected to see the performance characteristics of the trained model. You can also create a pipeline that deploys the model as a web service.
+
+The [Azure Machine Learning extension](https://marketplace.visualstudio.com/items?itemName=ms-air-aiagility.vss-services-azureml) makes it easier to work with Azure Pipelines. It provides the following enhancements to Azure Pipelines:
+
+* Enables workspace selection when defining a service connection.
+* Enables release pipelines to be triggered by trained models created in a training pipeline.
+
+For more information on using Azure Pipelines with Azure Machine Learning, see the following links:
+
+* [Continuous integration and deployment of ML models with Azure Pipelines](/azure/devops/pipelines/targets/azure-machine-learning)
+* [Azure Machine Learning MLOps](https://aka.ms/mlops) repository
+* [Azure Machine Learning MLOpsPython](https://github.com/Microsoft/MLOpspython) repository
+
+You can also use Azure Data Factory to create a data ingestion pipeline that prepares data for use with training. For more information, see [Data ingestion pipeline](../how-to-cicd-data-ingestion.md).
+
+## Next steps
+
+Learn more by reading and exploring the following resources:
+
++ [How & where to deploy models](../how-to-deploy-and-where.md) with Azure Machine Learning
++ [Tutorial: Train and deploy a model](../tutorial-train-deploy-notebook.md)
++ [End-to-end MLOps examples repo](https://github.com/microsoft/MLOps)
++ [CI/CD of ML models with Azure Pipelines](/azure/devops/pipelines/targets/azure-machine-learning)
++ Create clients that [consume a deployed model](../how-to-consume-web-service.md)
++ [Machine learning at scale](/azure/architecture/data-guide/big-data/machine-learning-at-scale)
++ [Azure AI reference architectures & best practices repo](https://github.com/microsoft/AI)
machine-learning How To Access Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-access-data.md
+
+ Title: Connect to storage services with CLI v1
+
+description: Learn how to use datastores to securely connect to Azure storage services during training with Azure Machine Learning CLI v1
+++++++ Last updated : 05/11/2022+
+#Customer intent: As an experienced Python developer, I need to make my data in Azure storage available to my remote compute to train my machine learning models.
++
+# Connect to storage services on Azure with datastores
+
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning developer platform you are using:"]
+> * [v1](how-to-access-data.md)
+> * [v2 (current version)](../how-to-datastore.md)
++
+In this article, learn how to connect to data storage services on Azure with Azure Machine Learning datastores and the [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro).
+
+Datastores securely connect to your storage service on Azure without putting your authentication credentials and the integrity of your original data source at risk. They store connection information, like your subscription ID and token authorization, in the [Key Vault](https://azure.microsoft.com/services/key-vault/) associated with the workspace, so you can securely access your storage without having to hard-code it in your scripts. You can create datastores that connect to [these Azure storage solutions](#supported-data-storage-service-types).
+
+To understand where datastores fit in Azure Machine Learning's overall data access workflow, see the [Securely access data](concept-data.md#data-workflow) article.
+
+For a low code experience, see how to use the [Azure Machine Learning studio to create and register datastores](../how-to-connect-data-ui.md#create-datastores).
+
+>[!TIP]
+> This article assumes you want to connect to your storage service with credential-based authentication, like a service principal or a shared access signature (SAS) token. Keep in mind that if credentials are registered with datastores, all users with the workspace *Reader* role are able to retrieve them. [Learn more about the workspace *Reader* role](../how-to-assign-roles.md#default-roles).
+>
+> If this is a concern, learn how to [Connect to storage services with identity based access](../how-to-identity-based-data-access.md).
+
+## Prerequisites
+
+- An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
+
+- An Azure storage account with a [supported storage type](#supported-data-storage-service-types).
+
+- The [Azure Machine Learning SDK for Python](/python/api/overview/azure/ml/intro).
+
+- An Azure Machine Learning workspace.
+
+ Either [create an Azure Machine Learning workspace](../how-to-manage-workspace.md) or use an existing one via the Python SDK.
+
+ Import the `Workspace` and `Datastore` class, and load your subscription information from the file `config.json` using the function `from_config()`. This looks for the JSON file in the current directory by default, but you can also specify a path parameter to point to the file using `from_config(path="your/file/path")`.
+
+ ```Python
+ import azureml.core
+ from azureml.core import Workspace, Datastore
+
+ ws = Workspace.from_config()
+ ```
+
+ When you create a workspace, an Azure blob container and an Azure file share are automatically registered as datastores to the workspace. They're named `workspaceblobstore` and `workspacefilestore`, respectively. The `workspaceblobstore` is used to store workspace artifacts and your machine learning experiment logs. It's also set as the **default datastore** and can't be deleted from the workspace. The `workspacefilestore` is used to store notebooks and R scripts authorized via [compute instance](../concept-compute-instance.md#accessing-files).
+
+ > [!NOTE]
+ > Azure Machine Learning designer will create a datastore named **azureml_globaldatasets** automatically when you open a sample in the designer homepage. This datastore only contains sample datasets. Please **do not** use this datastore for any confidential data access.
++
+## Supported data storage service types
+
+Datastores currently support storing connection information to the storage services listed in the following matrix.
+
+> [!TIP]
+> **For unsupported storage solutions**, and to save data egress cost during ML experiments, [move your data](#move-data-to-supported-azure-storage-solutions) to a supported Azure storage solution.
+
+| Storage&nbsp;type | Authentication&nbsp;type | [Azure&nbsp;Machine&nbsp;Learning studio](https://ml.azure.com/) | [Azure&nbsp;Machine&nbsp;Learning&nbsp; Python SDK](/python/api/overview/azure/ml/intro) | [Azure&nbsp;Machine&nbsp;Learning CLI](reference-azure-machine-learning-cli.md) | [Azure&nbsp;Machine&nbsp;Learning&nbsp; REST API](/rest/api/azureml/) | VS Code
+|---|---|---|---|---|---|---|
+[Azure&nbsp;Blob&nbsp;Storage](/azure/storage/blobs/storage-blobs-overview)| Account key <br> SAS token | ✓ | ✓ | ✓ | ✓ | ✓
+[Azure&nbsp;File&nbsp;Share](/azure/storage/files/storage-files-introduction)| Account key <br> SAS token | ✓ | ✓ | ✓ | ✓ | ✓
+[Azure&nbsp;Data Lake&nbsp;Storage Gen&nbsp;1](/azure/data-lake-store/)| Service principal| ✓ | ✓ | ✓ | ✓ |
+[Azure&nbsp;Data Lake&nbsp;Storage Gen&nbsp;2](/azure/storage/blobs/data-lake-storage-introduction)| Service principal| ✓ | ✓ | ✓ | ✓ |
+[Azure&nbsp;SQL&nbsp;Database](/azure/azure-sql/database/sql-database-paas-overview)| SQL authentication <br>Service principal| ✓ | ✓ | ✓ | ✓ |
+[Azure&nbsp;PostgreSQL](/azure/postgresql/overview) | SQL authentication| ✓ | ✓ | ✓ | ✓ |
+[Azure&nbsp;Database&nbsp;for&nbsp;MySQL](/azure/mysql/overview) | SQL authentication| | ✓* | ✓* | ✓* |
+[Databricks&nbsp;File&nbsp;System](/azure/databricks/data/databricks-file-system)| No authentication | | ✓** | ✓** | ✓** |
+
+* MySQL is only supported for pipeline [DataTransferStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.datatransferstep).
+* Databricks is only supported for pipeline [DatabricksStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.databricks_step.databricksstep).
++
+### Storage guidance
+
+We recommend creating a datastore for an [Azure Blob container](/azure/storage/blobs/storage-blobs-introduction). Both standard and premium storage are available for blobs. Although premium storage is more expensive, its faster throughput speeds might improve the speed of your training runs, particularly if you train against a large dataset. For information about the cost of storage accounts, see the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=machine-learning-service).
+
+[Azure Data Lake Storage Gen2](/azure/storage/blobs/data-lake-storage-introduction) is built on top of Azure Blob storage and designed for enterprise big data analytics. A fundamental part of Data Lake Storage Gen2 is the addition of a [hierarchical namespace](/azure/storage/blobs/data-lake-storage-namespace) to Blob storage. The hierarchical namespace organizes objects/files into a hierarchy of directories for efficient data access.
+
+## Storage access and permissions
+
+To ensure you securely connect to your Azure storage service, Azure Machine Learning requires that you have permission to access the corresponding data storage container. This access depends on the authentication credentials used to register the datastore.
+
+> [!NOTE]
+> This guidance also applies to [datastores created with identity-based data access](../how-to-identity-based-data-access.md).
+
+### Virtual network
+
+Azure Machine Learning requires extra configuration steps to communicate with a storage account that is behind a firewall or within a virtual network. If your storage account is behind a firewall, you can [add your client's IP address to an allowlist](/azure/storage/common/storage-network-security#managing-ip-network-rules) via the Azure portal.
+
+Azure Machine Learning can receive requests from clients outside of the virtual network. To ensure that the entity requesting data from the service is safe and to enable data being displayed in your workspace, [use a private endpoint with your workspace](../how-to-configure-private-link.md).
+
+**For Python SDK users**, to access your data via your training script on a compute target, the compute target needs to be inside the same virtual network and subnet of the storage. You can [use a compute cluster in the same virtual network](../how-to-secure-training-vnet.md#compute-cluster) or [use a compute instance in the same virtual network](../how-to-secure-training-vnet.md#compute-instance).
+
+**For Azure Machine Learning studio users**, several features rely on the ability to read data from a dataset, such as dataset previews, profiles, and automated machine learning. For these features to work with storage behind virtual networks, use a [workspace managed identity in the studio](../how-to-enable-studio-virtual-network.md) to allow Azure Machine Learning to access the storage account from outside the virtual network.
+
+> [!NOTE]
+> If your data storage is an Azure SQL Database behind a virtual network, be sure to set *Deny public access* to **No** via the [Azure portal](https://portal.azure.com/) to allow Azure Machine Learning to access the storage account.
+
+### Access validation
+
+> [!WARNING]
+> Cross tenant access to storage accounts is not supported. If cross tenant access is needed for your scenario, please reach out to the AzureML Data Support team alias at amldatasupport@microsoft.com for assistance with a custom code solution.
+
+**As part of the initial datastore creation and registration process**, Azure Machine Learning automatically validates that the underlying storage service exists and the user provided principal (username, service principal, or SAS token) has access to the specified storage.
+
+**After datastore creation**, this validation is only performed for methods that require access to the underlying storage container, **not** each time datastore objects are retrieved. For example, validation happens if you want to download files from your datastore; but if you just want to change your default datastore, then validation doesn't happen.
+
+To authenticate your access to the underlying storage service, you can provide either your account key, shared access signatures (SAS) tokens, or service principal in the corresponding `register_azure_*()` method of the datastore type you want to create. The [storage type matrix](#supported-data-storage-service-types) lists the supported authentication types that correspond to each datastore type.
+
+You can find account key, SAS token, and service principal information on your [Azure portal](https://portal.azure.com).
+
+* If you plan to use an account key or SAS token for authentication, select **Storage Accounts** on the left pane, and choose the storage account that you want to register.
+ * The **Overview** page provides information such as the account name, container, and file share name.
+ * For account keys, go to **Access keys** on the **Settings** pane.
+ * For SAS tokens, go to **Shared access signatures** on the **Settings** pane.
+
+* If you plan to use a service principal for authentication, go to your **App registrations** and select which app you want to use.
+ * Its corresponding **Overview** page will contain required information like tenant ID and client ID.
+
+> [!IMPORTANT]
+> If you need to change your access keys for an Azure Storage account (account key or SAS token), be sure to sync the new credentials with your workspace and the datastores connected to it. Learn how to [sync your updated credentials](../how-to-change-storage-access-key.md).
+
+### Permissions
+
+For Azure blob container and Azure Data Lake Gen 2 storage, make sure your authentication credentials have **Storage Blob Data Reader** access. Learn more about [Storage Blob Data Reader](/azure/role-based-access-control/built-in-roles#storage-blob-data-reader). An account SAS token defaults to no permissions.
+* For data **read access**, your authentication credentials must have a minimum of list and read permissions for containers and objects.
+
+* For data **write access**, write and add permissions also are required.
+++
+## Create and register datastores
+
+When you register an Azure storage solution as a datastore, you automatically create and register that datastore to a specific workspace. Review the [storage access & permissions](#storage-access-and-permissions) section for guidance on virtual network scenarios, and where to find required authentication credentials.
+
+Within this section are examples for how to create and register a datastore via the Python SDK for the following storage types. The parameters provided in these examples are the **required parameters** to create and register a datastore.
+
+* [Azure blob container](#azure-blob-container)
+* [Azure file share](#azure-file-share)
+* [Azure Data Lake Storage Generation 2](#azure-data-lake-storage-generation-2)
+
+ To create datastores for other supported storage services, see the [reference documentation for the applicable `register_azure_*` methods](/python/api/azureml-core/azureml.core.datastore.datastore#methods).
+
+If you prefer a low code experience, see [Connect to data with Azure Machine Learning studio](../how-to-connect-data-ui.md).
+>[!IMPORTANT]
+> If you unregister and re-register a datastore with the same name, and it fails, the Azure Key Vault for your workspace may not have soft-delete enabled. By default, soft-delete is enabled for the key vault instance created by your workspace, but it may not be enabled if you used an existing key vault or have a workspace created prior to October 2020. For information on how to enable soft-delete, see [Turn on Soft Delete for an existing key vault](/azure/key-vault/general/soft-delete-change#turn-on-soft-delete-for-an-existing-key-vault).
++
+> [!NOTE]
+> Datastore name should only consist of lowercase letters, digits and underscores.
+
+### Azure blob container
+
+To register an Azure blob container as a datastore, use [`register_azure_blob_container()`](/python/api/azureml-core/azureml.core.datastore%28class%29#register-azure-blob-container-workspace--datastore-name--container-name--account-name--sas-token-none--account-key-none--protocol-none--endpoint-none--overwrite-false--create-if-not-exists-false--skip-validation-false--blob-cache-timeout-none--grant-workspace-access-false--subscription-id-none--resource-group-none-).
+
+The following code creates and registers the `blob_datastore_name` datastore to the `ws` workspace. This datastore accesses the `my-container-name` blob container on the `my-account-name` storage account, by using the provided account access key. Review the [storage access & permissions](#storage-access-and-permissions) section for guidance on virtual network scenarios, and where to find required authentication credentials.
+
+```Python
+blob_datastore_name='azblobsdk' # Name of the datastore to workspace
+container_name=os.getenv("BLOB_CONTAINER", "<my-container-name>") # Name of Azure blob container
+account_name=os.getenv("BLOB_ACCOUNTNAME", "<my-account-name>") # Storage account name
+account_key=os.getenv("BLOB_ACCOUNT_KEY", "<my-account-key>") # Storage account access key
+
+blob_datastore = Datastore.register_azure_blob_container(workspace=ws,
+ datastore_name=blob_datastore_name,
+ container_name=container_name,
+ account_name=account_name,
+ account_key=account_key)
+```
+
+### Azure file share
+
+To register an Azure file share as a datastore, use [`register_azure_file_share()`](/python/api/azureml-core/azureml.core.datastore%28class%29#register-azure-file-share-workspace--datastore-name--file-share-name--account-name--sas-token-none--account-key-none--protocol-none--endpoint-none--overwrite-false--create-if-not-exists-false--skip-validation-false-).
+
+The following code creates and registers the `file_datastore_name` datastore to the `ws` workspace. This datastore accesses the `my-fileshare-name` file share on the `my-account-name` storage account, by using the provided account access key. Review the [storage access & permissions](#storage-access-and-permissions) section for guidance on virtual network scenarios, and where to find required authentication credentials.
+
+```Python
+file_datastore_name='azfilesharesdk' # Name of the datastore to workspace
+file_share_name=os.getenv("FILE_SHARE_CONTAINER", "<my-fileshare-name>") # Name of Azure file share container
+account_name=os.getenv("FILE_SHARE_ACCOUNTNAME", "<my-account-name>") # Storage account name
+account_key=os.getenv("FILE_SHARE_ACCOUNT_KEY", "<my-account-key>") # Storage account access key
+
+file_datastore = Datastore.register_azure_file_share(workspace=ws,
+ datastore_name=file_datastore_name,
+ file_share_name=file_share_name,
+ account_name=account_name,
+ account_key=account_key)
+```
+
+### Azure Data Lake Storage Generation 2
+
+For an Azure Data Lake Storage Generation 2 (ADLS Gen 2) datastore, use [register_azure_data_lake_gen2()](/python/api/azureml-core/azureml.core.datastore.datastore#register-azure-data-lake-gen2-workspace--datastore-name--filesystem--account-name--tenant-id--client-id--client-secret--resource-url-none--authority-url-none--protocol-none--endpoint-none--overwrite-false-) to register a credential datastore connected to an Azure DataLake Gen 2 storage with [service principal permissions](/azure/active-directory/develop/howto-create-service-principal-portal).
+
+In order to utilize your service principal, you need to [register your application](/azure/active-directory/develop/app-objects-and-service-principals) and grant the service principal data access via either Azure role-based access control (Azure RBAC) or access control lists (ACL). Learn more about [access control set up for ADLS Gen 2](/azure/storage/blobs/data-lake-storage-access-control-model).
+
+The following code creates and registers the `adlsgen2_datastore_name` datastore to the `ws` workspace. This datastore accesses the file system `test` in the `account_name` storage account, by using the provided service principal credentials.
+Review the [storage access & permissions](#storage-access-and-permissions) section for guidance on virtual network scenarios, and where to find required authentication credentials.
+
+```python
+adlsgen2_datastore_name = 'adlsgen2datastore'
+
+subscription_id=os.getenv("ADL_SUBSCRIPTION", "<my_subscription_id>") # subscription id of ADLS account
+resource_group=os.getenv("ADL_RESOURCE_GROUP", "<my_resource_group>") # resource group of ADLS account
+
+account_name=os.getenv("ADLSGEN2_ACCOUNTNAME", "<my_account_name>") # ADLS Gen2 account name
+tenant_id=os.getenv("ADLSGEN2_TENANT", "<my_tenant_id>") # tenant id of service principal
+client_id=os.getenv("ADLSGEN2_CLIENTID", "<my_client_id>") # client id of service principal
+client_secret=os.getenv("ADLSGEN2_CLIENT_SECRET", "<my_client_secret>") # the secret of service principal
+
+adlsgen2_datastore = Datastore.register_azure_data_lake_gen2(workspace=ws,
+ datastore_name=adlsgen2_datastore_name,
+ account_name=account_name, # ADLS Gen2 account name
+ filesystem='test', # ADLS Gen2 filesystem
+ tenant_id=tenant_id, # tenant id of service principal
+ client_id=client_id, # client id of service principal
+ client_secret=client_secret) # the secret of service principal
+```
+++
+## Create datastores with other Azure tools
+In addition to creating datastores with the Python SDK and the studio, you can also use Azure Resource Manager templates or the Azure Machine Learning VS Code extension.
++
+### Azure Resource Manager
+
+There are several templates at [https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices) that can be used to create datastores.
+
+For information on using these templates, see [Use an Azure Resource Manager template to create a workspace for Azure Machine Learning](../how-to-create-workspace-template.md).
+
+### VS Code extension
+
+If you prefer to create and manage datastores using the Azure Machine Learning VS Code extension, visit the [VS Code resource management how-to guide](../how-to-manage-resources-vscode.md#datastores) to learn more.
+
+## Use data in your datastores
+
+After you create a datastore, [create an Azure Machine Learning dataset](how-to-create-register-datasets.md) to interact with your data. Datasets package your data into a lazily evaluated consumable object for machine learning tasks, like training.
+
+With datasets, you can [download or mount](../how-to-train-with-datasets.md#mount-vs-download) files of any format from Azure storage services for model training on a compute target. [Learn more about how to train ML models with datasets](../how-to-train-with-datasets.md).
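+
+For instance, a registered FileDataset could be downloaded or mounted locally as in this sketch; the dataset name and paths are hypothetical.
+
+```python
+from azureml.core import Dataset, Workspace
+
+ws = Workspace.from_config()
+dataset = Dataset.get_by_name(ws, name="training-images")
+
+# Download copies the files locally; mount streams them from storage on demand.
+dataset.download(target_path="./data", overwrite=True)
+
+mount_context = dataset.mount("./mnt/data")
+mount_context.start()
+# ... train against files under ./mnt/data ...
+mount_context.stop()
+```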
+++
+## Get datastores from your workspace
+
+To get a specific datastore registered in the current workspace, use the [`get()`](/python/api/azureml-core/azureml.core.datastore%28class%29#get-workspace--datastore-name-) static method on the `Datastore` class:
+
+```Python
+# Get a named datastore from the current workspace
+datastore = Datastore.get(ws, datastore_name='your datastore name')
+```
+To get the list of datastores registered with a given workspace, you can use the [`datastores`](/python/api/azureml-core/azureml.core.workspace%28class%29#datastores) property on a workspace object:
+
+```Python
+# List all datastores registered in the current workspace
+datastores = ws.datastores
+for name, datastore in datastores.items():
+ print(name, datastore.datastore_type)
+```
+
+To get the workspace's default datastore, use this line:
+
+```Python
+datastore = ws.get_default_datastore()
+```
+You can also change the default datastore with the following code. This ability is only supported via the SDK.
+
+```Python
+ws.set_default_datastore(new_default_datastore)
+```
+
+## Access data during scoring
+
+Azure Machine Learning provides several ways to use your models for scoring. Some of these methods don't provide access to datastores. Use the following table to understand which methods allow you to access datastores during scoring:
+
+| Method | Datastore access | Description |
+| -- | :--: | -- |
+| [Batch prediction](../tutorial-pipeline-batch-scoring-classification.md) | ✔ | Make predictions on large quantities of data asynchronously. |
+| [Web service](../how-to-deploy-and-where.md) | &nbsp; | Deploy models as a web service. |
+
+For situations where the SDK doesn't provide access to datastores, you might be able to create custom code by using the relevant Azure SDK to access the data. For example, the [Azure Storage SDK for Python](https://github.com/Azure/azure-storage-python) is a client library that you can use to access data stored in blobs or files.
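+
+As an example, the following sketch reads a blob directly with the `azure-storage-blob` client library; this package is an assumption here, since the linked repository hosts the older storage SDK. The connection string, container, and blob names are placeholders.
+
+```Python
+from azure.storage.blob import BlobServiceClient
+
+# Placeholder connection string, container, and blob names -- replace with your own
+blob_service_client = BlobServiceClient.from_connection_string("<storage-connection-string>")
+blob_client = blob_service_client.get_blob_client(container="<container-name>", blob="<blob-name>")
+
+# Read the blob content into memory for use in your scoring code
+blob_bytes = blob_client.download_blob().readall()
+```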
+
+## Move data to supported Azure storage solutions
+
+Azure Machine Learning supports accessing data from Azure Blob storage, Azure Files, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, Azure SQL Database, and Azure Database for PostgreSQL. If you're using unsupported storage, we recommend that you move your data to supported Azure storage solutions by using [Azure Data Factory and these steps](/azure/data-factory/quickstart-create-data-factory-copy-data-tool). Moving data to supported storage can help you save data egress costs during machine learning experiments.
+
+Azure Data Factory provides efficient and resilient data transfer with more than 80 prebuilt connectors at no extra cost. These connectors include Azure data services, on-premises data sources, Amazon S3 and Redshift, and Google BigQuery.
+
+## Next steps
+
+* [Create an Azure Machine Learning dataset](how-to-create-register-datasets.md)
+* [Train a model](../how-to-set-up-training-targets.md)
+* [Deploy a model](../how-to-deploy-and-where.md)
machine-learning How To Attach Compute Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-attach-compute-targets.md
+
+ Title: Train models with the Azure ML Python SDK (v1) (preview)
+
+description: Add compute resources (compute targets) to your workspace to use for machine learning training and inference with SDK v1.
+ Last updated : 10/21/2021
+# Train models with the Azure Machine Learning Python SDK (v1)
++
+> [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK version you are using:"]
+> * [v1](how-to-attach-compute-targets.md)
+> * [v2 (preview)](../how-to-train-sdk.md)
+
+Learn how to attach Azure compute resources to your Azure Machine Learning workspace with SDK v1. Then you can use these resources as training and inference [compute targets](../concept-compute-target.md) in your machine learning tasks.
+
+In this article, learn how to set up your workspace to use these compute resources:
+
+* Your local computer
+* Remote virtual machines
+* Apache Spark pools (powered by Azure Synapse Analytics)
+* Azure HDInsight
+* Azure Batch
+* Azure Databricks - used as a training compute target only in [machine learning pipelines](../how-to-create-machine-learning-pipelines.md)
+* Azure Data Lake Analytics
+* Azure Container Instance
+* Azure Kubernetes Service & Azure Arc-enabled Kubernetes (preview)
+
+To use compute targets managed by Azure Machine Learning, see:
+
+* [Azure Machine Learning compute instance](../how-to-create-manage-compute-instance.md)
+* [Azure Machine Learning compute cluster](../how-to-create-attach-compute-cluster.md)
+* [Azure Kubernetes Service cluster](../how-to-create-attach-kubernetes.md)
+
+## Prerequisites
+
+* An Azure Machine Learning workspace. For more information, see [Create an Azure Machine Learning workspace](../how-to-manage-workspace.md).
+
+* The [Azure CLI extension for Machine Learning service](reference-azure-machine-learning-cli.md), [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro), or the [Azure Machine Learning Visual Studio Code extension](../how-to-setup-vs-code.md).
+
+## Limitations
+
+* **Do not create multiple, simultaneous attachments to the same compute** from your workspace. For example, don't attach one Azure Kubernetes Service cluster to a workspace by using two different names. Each new attachment breaks the previous existing attachment(s).
+
+  If you want to reattach a compute target, for example to change TLS or another cluster configuration setting, you must first remove the existing attachment.
+
+## What's a compute target?
+
+With Azure Machine Learning, you can train your model on various resources or environments, collectively referred to as [__compute targets__](concept-azure-machine-learning-architecture.md#compute-targets). A compute target can be a local machine or a cloud resource, such as an Azure Machine Learning Compute, Azure HDInsight, or a remote virtual machine. You also use compute targets for model deployment as described in ["Where and how to deploy your models"](../how-to-deploy-and-where.md).
++
+## Local computer
+
+When you use your local computer for **training**, there is no need to create a compute target. Just [submit the training run](../how-to-set-up-training-targets.md) from your local machine.
+
+When you use your local computer for **inference**, you must have Docker installed. To perform the deployment, use [LocalWebservice.deploy_configuration()](/python/api/azureml-core/azureml.core.webservice.local.localwebservice#deploy-configuration-port-none-) to define the port that the web service will use. Then use the normal deployment process as described in [Deploy models with Azure Machine Learning](../how-to-deploy-and-where.md).
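+
+As a rough sketch, assuming you already have a registered model, an `InferenceConfig`, and a workspace object `ws`, a local deployment might look like the following; the port and service name are placeholders.
+
+```python
+from azureml.core.model import Model
+from azureml.core.webservice import LocalWebservice
+
+# Define the port the local Docker-based web service listens on (placeholder value)
+deployment_config = LocalWebservice.deploy_configuration(port=8890)
+
+# 'model' and 'inference_config' are assumed to exist from earlier registration and environment setup
+service = Model.deploy(ws, name="local-test-service", models=[model],
+                       inference_config=inference_config,
+                       deployment_config=deployment_config)
+service.wait_for_deployment(show_output=True)
+```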
+
+## Remote virtual machines
+
+Azure Machine Learning also supports attaching an Azure Virtual Machine. The VM must be an Azure Data Science Virtual Machine (DSVM). The VM offers a curated choice of tools and frameworks for full-lifecycle machine learning development. For more information on how to use the DSVM with Azure Machine Learning, see [Configure a development environment](../how-to-configure-environment.md#dsvm).
+
+> [!TIP]
+> Instead of a remote VM, we recommend using the [Azure Machine Learning compute instance](../concept-compute-instance.md). It is a fully managed, cloud-based compute solution that is specific to Azure Machine Learning. For more information, see [create and manage Azure Machine Learning compute instance](../how-to-create-manage-compute-instance.md).
+
+1. **Create**: Azure Machine Learning cannot create a remote VM for you. Instead, you must create the VM and then attach it to your Azure Machine Learning workspace. For information on creating a DSVM, see [Provision the Data Science Virtual Machine for Linux (Ubuntu)](../data-science-virtual-machine/dsvm-ubuntu-intro.md).
+
+ > [!WARNING]
+ > Azure Machine Learning only supports virtual machines that run **Ubuntu**. When you create a VM or choose an existing VM, you must select a VM that uses Ubuntu.
+ >
+ > Azure Machine Learning also requires the virtual machine to have a __public IP address__.
+
+1. **Attach**: To attach an existing virtual machine as a compute target, you must provide the resource ID, user name, and password for the virtual machine. The resource ID of the VM can be constructed using the subscription ID, resource group name, and VM name using the following string format: `/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.Compute/virtualMachines/<vm_name>`
+
+ ```python
+ from azureml.core.compute import RemoteCompute, ComputeTarget
+
+ # Create the compute config
+ compute_target_name = "attach-dsvm"
+
+ attach_config = RemoteCompute.attach_configuration(resource_id='<resource_id>',
+ ssh_port=22,
+ username='<username>',
+ password="<password>")
+
+ # Attach the compute
+ compute = ComputeTarget.attach(ws, compute_target_name, attach_config)
+
+ compute.wait_for_completion(show_output=True)
+ ```
+
+ Or you can attach the DSVM to your workspace [using Azure Machine Learning studio](../how-to-create-attach-compute-studio.md#attached-compute).
+
+ > [!WARNING]
+ > Do not create multiple, simultaneous attachments to the same DSVM from your workspace. Each new attachment will break the previous existing attachment(s).
+
+1. **Configure**: Create a run configuration for the DSVM compute target. Docker and conda are used to create and configure the training environment on the DSVM.
+
+ ```python
+ from azureml.core import ScriptRunConfig
+ from azureml.core.environment import Environment
+ from azureml.core.conda_dependencies import CondaDependencies
+
+ # Create environment
+ myenv = Environment(name="myenv")
+
+ # Specify the conda dependencies
+ myenv.python.conda_dependencies = CondaDependencies.create(conda_packages=['scikit-learn'])
+
+ # If no base image is explicitly specified the default CPU image "azureml.core.runconfig.DEFAULT_CPU_IMAGE" will be used
+ # To use GPU in DSVM, you should specify the default GPU base Docker image or another GPU-enabled image:
+ # myenv.docker.enabled = True
+ # myenv.docker.base_image = azureml.core.runconfig.DEFAULT_GPU_IMAGE
+
+ # Configure the run configuration with the Linux DSVM as the compute target and the environment defined above
+ src = ScriptRunConfig(source_directory=".", script="train.py", compute_target=compute, environment=myenv)
+ ```
+
+> [!TIP]
+> If you want to __remove__ (detach) a VM from your workspace, use the [RemoteCompute.detach()](/python/api/azureml-core/azureml.core.compute.remotecompute#detach--) method.
+>
+> Azure Machine Learning does not delete the VM for you. You must manually delete the VM using the Azure portal, CLI, or the SDK for Azure VM.
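+
+Once the run configuration from the **Configure** step is in place, submitting a training run to the attached DSVM works the same way as for any other compute target. The following is a minimal sketch; the experiment name is a placeholder, and `ws` and `src` are assumed from the earlier snippets.
+
+```python
+from azureml.core import Experiment
+
+# Submit the ScriptRunConfig defined above to the attached DSVM
+experiment = Experiment(ws, name="dsvm-train")
+run = experiment.submit(config=src)
+run.wait_for_completion(show_output=True)
+```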
+
+## <a id="synapse"></a>Apache Spark pools
+
+The Azure Synapse Analytics integration with Azure Machine Learning (preview) allows you to attach an Apache Spark pool backed by Azure Synapse for interactive data exploration and preparation. With this integration, you can have a dedicated compute for data wrangling at scale. For more information, see [How to attach Apache Spark pools powered by Azure Synapse Analytics](../how-to-link-synapse-ml-workspaces.md#attach-synapse-spark-pool-as-a-compute).
+
+## Azure HDInsight
+
+Azure HDInsight is a popular platform for big-data analytics. The platform provides Apache Spark, which can be used to train your model.
+
+1. **Create**: Azure Machine Learning cannot create an HDInsight cluster for you. Instead, you must create the cluster and then attach it to your Azure Machine Learning workspace. For more information, see [Create a Spark Cluster in HDInsight](../../hdinsight/spark/apache-spark-jupyter-spark-sql.md).
+
+ > [!WARNING]
+ > Azure Machine Learning requires the HDInsight cluster to have a __public IP address__.
+
+ When you create the cluster, you must specify an SSH user name and password. Take note of these values, as you need them to use HDInsight as a compute target.
+
+ After the cluster is created, connect to it with the hostname \<clustername>-ssh.azurehdinsight.net, where \<clustername> is the name that you provided for the cluster.
+
+1. **Attach**: To attach an HDInsight cluster as a compute target, you must provide the resource ID, user name, and password for the HDInsight cluster. The resource ID of the HDInsight cluster can be constructed using the subscription ID, resource group name, and HDInsight cluster name using the following string format: `/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.HDInsight/clusters/<cluster_name>`
+
+ ```python
+ from azureml.core.compute import ComputeTarget, HDInsightCompute
+ from azureml.exceptions import ComputeTargetException
+
+ try:
+ # if you want to connect using SSH key instead of username/password you can provide parameters private_key_file and private_key_passphrase
+
+ attach_config = HDInsightCompute.attach_configuration(resource_id='<resource_id>',
+ ssh_port=22,
+ username='<ssh-username>',
+ password='<ssh-pwd>')
+ hdi_compute = ComputeTarget.attach(workspace=ws,
+ name='myhdi',
+ attach_configuration=attach_config)
+
+ except ComputeTargetException as e:
+ print("Caught = {}".format(e.message))
+
+ hdi_compute.wait_for_completion(show_output=True)
+ ```
+
+ Or you can attach the HDInsight cluster to your workspace [using Azure Machine Learning studio](../how-to-create-attach-compute-studio.md#attached-compute).
+
+ > [!WARNING]
+    > Do not create multiple, simultaneous attachments to the same HDInsight cluster from your workspace. Each new attachment will break the previous existing attachment(s).
+
+1. **Configure**: Create a run configuration for the HDI compute target.
+
+ [!code-python[](~/aml-sdk-samples/ignore/doc-qa/how-to-set-up-training-targets/hdi.py?name=run_hdi)]
+
+> [!TIP]
+> If you want to __remove__ (detach) an HDInsight cluster from the workspace, use the [HDInsightCompute.detach()](/python/api/azureml-core/azureml.core.compute.hdinsight.hdinsightcompute#detach--) method.
+>
+> Azure Machine Learning does not delete the HDInsight cluster for you. You must manually delete it using the Azure portal, CLI, or the SDK for Azure HDInsight.
+
+## <a id="azbatch"></a>Azure Batch
+
+Azure Batch is used to run large-scale parallel and high-performance computing (HPC) applications efficiently in the cloud. AzureBatchStep can be used in an Azure Machine Learning Pipeline to submit jobs to an Azure Batch pool of machines.
+
+To attach Azure Batch as a compute target, you must use the Azure Machine Learning SDK and provide the following information:
+
+- **Azure Batch compute name**: A friendly name to be used for the compute within the workspace
+- **Azure Batch account name**: The name of the Azure Batch account
+- **Resource Group**: The resource group that contains the Azure Batch account.
+
+The following code demonstrates how to attach Azure Batch as a compute target:
+
+```python
+from azureml.core.compute import ComputeTarget, BatchCompute
+from azureml.exceptions import ComputeTargetException
+
+# Name to associate with new compute in workspace
+batch_compute_name = 'mybatchcompute'
+
+# Batch account details needed to attach as compute to workspace
+batch_account_name = "<batch_account_name>" # Name of the Batch account
+# Name of the resource group which contains this account
+batch_resource_group = "<batch_resource_group>"
+
+try:
+ # check if the compute is already attached
+ batch_compute = BatchCompute(ws, batch_compute_name)
+except ComputeTargetException:
+ print('Attaching Batch compute...')
+ provisioning_config = BatchCompute.attach_configuration(
+ resource_group=batch_resource_group, account_name=batch_account_name)
+ batch_compute = ComputeTarget.attach(
+ ws, batch_compute_name, provisioning_config)
+ batch_compute.wait_for_completion()
+ print("Provisioning state:{}".format(batch_compute.provisioning_state))
+ print("Provisioning errors:{}".format(batch_compute.provisioning_errors))
+
+print("Using Batch compute:{}".format(batch_compute.cluster_resource_id))
+```
+
+> [!WARNING]
+> Do not create multiple, simultaneous attachments to the same Azure Batch account from your workspace. Each new attachment will break the previous existing attachment(s).
+
+## Azure Databricks
+
+Azure Databricks is an Apache Spark-based environment in the Azure cloud. It can be used as a compute target with an Azure Machine Learning pipeline.
+
+> [!IMPORTANT]
+> Azure Machine Learning cannot create an Azure Databricks compute target. Instead, you must create an Azure Databricks workspace, and then attach it to your Azure Machine Learning workspace. To create a workspace resource, see the [Run a Spark job on Azure Databricks](/azure/databricks/scenarios/quickstart-create-databricks-workspace-portal) document.
+>
+> To attach an Azure Databricks workspace from a __different Azure subscription__, you (your Azure AD account) must be granted the **Contributor** role on the Azure Databricks workspace. Check your access in the [Azure portal](https://portal.azure.com/).
+
+To attach Azure Databricks as a compute target, provide the following information:
+
+* __Databricks compute name__: The name you want to assign to this compute resource.
+* __Databricks workspace name__: The name of the Azure Databricks workspace.
+* __Databricks access token__: The access token used to authenticate to Azure Databricks. To generate an access token, see the [Authentication](/azure/databricks/dev-tools/api/latest/authentication) document.
+
+The following code demonstrates how to attach Azure Databricks as a compute target with the Azure Machine Learning SDK:
+
+```python
+import os
+from azureml.core.compute import ComputeTarget, DatabricksCompute
+from azureml.exceptions import ComputeTargetException
+
+databricks_compute_name = os.environ.get(
+ "AML_DATABRICKS_COMPUTE_NAME", "<databricks_compute_name>")
+databricks_workspace_name = os.environ.get(
+ "AML_DATABRICKS_WORKSPACE", "<databricks_workspace_name>")
+databricks_resource_group = os.environ.get(
+ "AML_DATABRICKS_RESOURCE_GROUP", "<databricks_resource_group>")
+databricks_access_token = os.environ.get(
+ "AML_DATABRICKS_ACCESS_TOKEN", "<databricks_access_token>")
+
+try:
+ databricks_compute = ComputeTarget(
+ workspace=ws, name=databricks_compute_name)
+ print('Compute target already exists')
+except ComputeTargetException:
+ print('compute not found')
+ print('databricks_compute_name {}'.format(databricks_compute_name))
+ print('databricks_workspace_name {}'.format(databricks_workspace_name))
+ print('databricks_access_token {}'.format(databricks_access_token))
+
+ # Create attach config
+ attach_config = DatabricksCompute.attach_configuration(resource_group=databricks_resource_group,
+ workspace_name=databricks_workspace_name,
+ access_token=databricks_access_token)
+ databricks_compute = ComputeTarget.attach(
+ ws,
+ databricks_compute_name,
+ attach_config
+ )
+
+ databricks_compute.wait_for_completion(True)
+```
+
+For a more detailed example, see an [example notebook](https://aka.ms/pl-databricks) on GitHub.
+
+> [!WARNING]
+> Do not create multiple, simultaneous attachments to the same Azure Databricks workspace from your Azure Machine Learning workspace. Each new attachment will break the previous existing attachment(s).
+
+## Azure Data Lake Analytics
+
+Azure Data Lake Analytics is a big data analytics platform in the Azure cloud. It can be used as a compute target with an Azure Machine Learning pipeline.
+
+Create an Azure Data Lake Analytics account before using it. To create this resource, see the [Get started with Azure Data Lake Analytics](../../data-lake-analytics/data-lake-analytics-get-started-portal.md) document.
+
+To attach Data Lake Analytics as a compute target, you must use the Azure Machine Learning SDK and provide the following information:
+
+* __Compute name__: The name you want to assign to this compute resource.
+* __Resource Group__: The resource group that contains the Data Lake Analytics account.
+* __Account name__: The Data Lake Analytics account name.
+
+The following code demonstrates how to attach Data Lake Analytics as a compute target:
+
+```python
+import os
+from azureml.core.compute import ComputeTarget, AdlaCompute
+from azureml.exceptions import ComputeTargetException
++
+adla_compute_name = os.environ.get(
+ "AML_ADLA_COMPUTE_NAME", "<adla_compute_name>")
+adla_resource_group = os.environ.get(
+ "AML_ADLA_RESOURCE_GROUP", "<adla_resource_group>")
+adla_account_name = os.environ.get(
+ "AML_ADLA_ACCOUNT_NAME", "<adla_account_name>")
+
+try:
+ adla_compute = ComputeTarget(workspace=ws, name=adla_compute_name)
+ print('Compute target already exists')
+except ComputeTargetException:
+ print('compute not found')
+ print('adla_compute_name {}'.format(adla_compute_name))
+    print('adla_resource_group {}'.format(adla_resource_group))
+ print('adla_account_name {}'.format(adla_account_name))
+ # create attach config
+ attach_config = AdlaCompute.attach_configuration(resource_group=adla_resource_group,
+ account_name=adla_account_name)
+ # Attach ADLA
+ adla_compute = ComputeTarget.attach(
+ ws,
+ adla_compute_name,
+ attach_config
+ )
+
+ adla_compute.wait_for_completion(True)
+```
+
+For a more detailed example, see an [example notebook](https://aka.ms/pl-adla) on GitHub.
+
+> [!WARNING]
+> Do not create multiple, simultaneous attachments to the same Azure Data Lake Analytics account from your workspace. Each new attachment will break the previous existing attachment(s).
+
+> [!TIP]
+> Azure Machine Learning pipelines can only work with data stored in the default data store of the Data Lake Analytics account. If the data you need to work with is in a non-default store, you can use a [`DataTransferStep`](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.data_transfer_step.datatransferstep) to copy the data before training.
+
+## <a id="aci"></a>Azure Container Instance
+
+Azure Container Instances (ACI) are created dynamically when you deploy a model. You cannot create or attach ACI to your workspace in any other way. For more information, see [Deploy a model to Azure Container Instances](how-to-deploy-azure-container-instance.md).
+
+## <a id="kubernetes"></a>Kubernetes (preview)
+
+Azure Machine Learning provides you with the following options to attach your own Kubernetes clusters for training and inferencing:
+
+* [Azure Kubernetes Service](../../aks/intro-kubernetes.md). Azure Kubernetes Service provides a managed cluster in Azure.
+* [Azure Arc Kubernetes](../../azure-arc/kubernetes/overview.md). Use Azure Arc-enabled Kubernetes clusters if your cluster is hosted outside of Azure.
++
+To detach a Kubernetes cluster from your workspace, use the following method:
+
+```python
+compute_target.detach()
+```
+
+> [!WARNING]
+> Detaching a cluster **does not delete the cluster**. To delete an Azure Kubernetes Service cluster, see [Use the Azure CLI with AKS](../../aks/kubernetes-walkthrough.md#delete-the-cluster). To delete an Azure Arc-enabled Kubernetes cluster, see [Azure Arc quickstart](../../azure-arc/kubernetes/quickstart-connect-cluster.md#clean-up-resources).
+
+## Notebook examples
+
+See these notebooks for examples of training with various compute targets:
+* [how-to-use-azureml/training](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/training)
+* [tutorials/img-classification-part1-training.ipynb](https://github.com/Azure/MachineLearningNotebooks/blob/master/tutorials/image-classification-mnist-data/img-classification-part1-training.ipynb)
++
+## Next steps
+
+* Use the compute resource to [configure and submit a training run](../how-to-set-up-training-targets.md).
+* [Tutorial: Train and deploy a model](../tutorial-train-deploy-notebook.md) uses a managed compute target to train a model.
+* Learn how to [efficiently tune hyperparameters](../how-to-tune-hyperparameters.md) to build better models.
+* Once you have a trained model, learn [how and where to deploy models](../how-to-deploy-and-where.md).
+* [Use Azure Machine Learning with Azure Virtual Networks](../how-to-network-security-overview.md)
machine-learning How To Auto Train Image Models V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-auto-train-image-models-v1.md
+
+ Title: Set up AutoML for computer vision (v1)
+
+description: Set up Azure Machine Learning automated ML to train computer vision models with the Azure Machine Learning Python SDK (v1).
+ Last updated : 01/18/2022
+#Customer intent: I'm a data scientist with ML knowledge in the computer vision space, looking to build ML models using image data in Azure Machine Learning with full control of the model algorithm, hyperparameters, and training and deployment environments.
++
+# Set up AutoML to train computer vision models with Python (v1)
+
+
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
+> * [v1](how-to-auto-train-image-models-v1.md)
+> * [v2 (current version)](../how-to-auto-train-image-models.md)
+
++
+> [!IMPORTANT]
+> This feature is currently in public preview. This preview version is provided without a service-level agreement. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+In this article, you learn how to train computer vision models on image data with automated ML in the [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/).
+
+Automated ML supports model training for computer vision tasks like image classification, object detection, and instance segmentation. Authoring AutoML models for computer vision tasks is currently supported via the Azure Machine Learning Python SDK. The resulting experimentation runs, models, and outputs are accessible from the Azure Machine Learning studio UI. [Learn more about automated ML for computer vision tasks on image data](../concept-automated-ml.md).
+
+> [!NOTE]
+> Automated ML for computer vision tasks is only available via the Azure Machine Learning Python SDK.
+
+## Prerequisites
+
+* An Azure Machine Learning workspace. To create the workspace, see [Create an Azure Machine Learning workspace](../how-to-manage-workspace.md).
+
+* The Azure Machine Learning Python SDK installed.
+  To install the SDK, you can either:
+ * Create a compute instance, which automatically installs the SDK and is pre-configured for ML workflows. For more information, see [Create and manage an Azure Machine Learning compute instance](../how-to-create-manage-compute-instance.md).
+
+ * [Install the `automl` package yourself](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment), which includes the [default installation](/python/api/overview/azure/ml/install#default-install) of the SDK.
+
+ > [!NOTE]
+ > Only Python 3.6 and 3.7 are compatible with automated ML support for computer vision tasks.
+
+## Select your task type
+Automated ML for images supports the following task types:
++
+Task type | AutoMLImage config syntax
+--|--
+image classification | `ImageTask.IMAGE_CLASSIFICATION`
+image classification multi-label | `ImageTask.IMAGE_CLASSIFICATION_MULTILABEL`
+image object detection | `ImageTask.IMAGE_OBJECT_DETECTION`
+image instance segmentation| `ImageTask.IMAGE_INSTANCE_SEGMENTATION`
+
+This task type is a required parameter and is passed in using the `task` parameter in the [`AutoMLImageConfig`](/python/api/azureml-train-automl-client/azureml.train.automl.automlimageconfig.automlimageconfig).
+
+For example:
+
+```python
+from azureml.train.automl import AutoMLImageConfig
+from azureml.automl.core.shared.constants import ImageTask
+automl_image_config = AutoMLImageConfig(task=ImageTask.IMAGE_OBJECT_DETECTION)
+```
+
+## Training and validation data
+
+In order to generate computer vision models, you need to bring labeled image data as input for model training in the form of an Azure Machine Learning [TabularDataset](/python/api/azureml-core/azureml.data.tabulardataset). You can either use a `TabularDataset` that you have [exported from a data labeling project](../how-to-create-image-labeling-projects.md#export-the-labels), or create a new `TabularDataset` with your labeled training data.
+
+If your training data is in a different format (like Pascal VOC or COCO), you can apply the helper scripts included with the sample notebooks to convert the data to JSONL. Learn more about how to [prepare data for computer vision tasks with automated ML](../how-to-prepare-datasets-for-automl-images.md).
+
+> [!Warning]
+> Creation of TabularDatasets from data in JSONL format is only supported using the SDK for this capability. Creating the dataset via the UI is not supported at this time.
+
+> [!Note]
+> The training dataset needs to have at least 10 images in order to be able to submit an AutoML run.
+
+### JSONL schema samples
+
+The structure of the TabularDataset depends upon the task at hand. For computer vision task types, it consists of the following fields:
+
+Field| Description
+--|--
+`image_url`| Contains the file path as a StreamInfo object
+`image_details`| Image metadata that consists of height, width, and format. This field is optional and hence may or may not exist.
+`label`| A JSON representation of the image label, based on the task type.
+
+The following is a sample JSONL file for image classification:
+
+```python
+{
+ "image_url": "AmlDatastore://image_data/Image_01.png",
+ "image_details":
+ {
+ "format": "png",
+ "width": "2230px",
+ "height": "4356px"
+ },
+ "label": "cat"
+ }
+ {
+ "image_url": "AmlDatastore://image_data/Image_02.jpeg",
+ "image_details":
+ {
+ "format": "jpeg",
+ "width": "3456px",
+ "height": "3467px"
+ },
+ "label": "dog"
+ }
+ ```
+
+ The following is a sample JSONL file for object detection:
+
+ ```python
+ {
+ "image_url": "AmlDatastore://image_data/Image_01.png",
+ "image_details":
+ {
+ "format": "png",
+ "width": "2230px",
+ "height": "4356px"
+ },
+ "label":
+ {
+ "label": "cat",
+ "topX": "1",
+ "topY": "0",
+ "bottomX": "0",
+ "bottomY": "1",
+ "isCrowd": "true",
+ }
+ }
+ {
+ "image_url": "AmlDatastore://image_data/Image_02.png",
+ "image_details":
+ {
+ "format": "jpeg",
+ "width": "1230px",
+ "height": "2356px"
+ },
+ "label":
+ {
+ "label": "dog",
+ "topX": "0",
+ "topY": "1",
+ "bottomX": "0",
+ "bottomY": "1",
+ "isCrowd": "false",
+ }
+ }
+ ```
++
+### Consume data
+
+Once your data is in JSONL format, you can create a TabularDataset with the following code:
+
+```python
+from azureml.core import Dataset, Workspace
+from azureml.data.dataset_factory import DataType
+
+ws = Workspace.from_config()
+ds = ws.get_default_datastore()
+
+# Name to register the dataset under (placeholder)
+training_dataset_name = 'odFridgeObjectsTrainingDataset'
+
+training_dataset = Dataset.Tabular.from_json_lines_files(
+    path=ds.path('odFridgeObjects/odFridgeObjects.jsonl'),
+    set_column_types={'image_url': DataType.to_stream(ds.workspace)})
+training_dataset = training_dataset.register(workspace=ws, name=training_dataset_name)
+```
+
+Automated ML does not impose any constraints on training or validation data size for computer vision tasks. Maximum dataset size is only limited by the storage layer behind the dataset (that is, blob store). There is no minimum number of images or labels. However, we recommend starting with a minimum of 10-15 samples per label to ensure the output model is sufficiently trained. The higher the total number of labels/classes, the more samples you need per label.
+
+Training data is required and is passed in using the `training_data` parameter. You can optionally specify another TabularDataset as a validation dataset to be used for your model with the `validation_data` parameter of the AutoMLImageConfig. If no validation dataset is specified, 20% of your training data will be used for validation by default, unless you pass the `validation_size` argument with a different value.
+
+For example:
+
+```python
+from azureml.train.automl import AutoMLImageConfig
+automl_image_config = AutoMLImageConfig(training_data=training_dataset)
+```
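+
+If you have a separate validation dataset, a variation of the same sketch might look like the following; `validation_size` is the alternative when you want automated ML to split the training data itself.
+
+```python
+from azureml.train.automl import AutoMLImageConfig
+
+# 'training_dataset' and 'validation_dataset' are assumed to be registered TabularDatasets
+automl_image_config = AutoMLImageConfig(training_data=training_dataset,
+                                        validation_data=validation_dataset)
+# Alternatively, omit validation_data and pass, for example, validation_size=0.3
+# to hold out 30% of the training data for validation
+```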
+
+## Compute to run experiment
+
+Provide a [compute target](../v1/concept-azure-machine-learning-architecture.md#compute-targets) for automated ML to conduct model training. Automated ML models for computer vision tasks require GPU SKUs and support NC and ND families. We recommend the NCsv3-series (with v100 GPUs) for faster training. A compute target with a multi-GPU VM SKU leverages multiple GPUs to also speed up training. Additionally, when you set up a compute target with multiple nodes you can conduct faster model training through parallelism when tuning hyperparameters for your model.
+
+The compute target is a required parameter and is passed in using the `compute_target` parameter of the `AutoMLImageConfig`. For example:
+
+```python
+from azureml.train.automl import AutoMLImageConfig
+automl_image_config = AutoMLImageConfig(compute_target=compute_target)
+```
+
+## Configure model algorithms and hyperparameters
+
+With support for computer vision tasks, you can control the model algorithm and sweep hyperparameters. These model algorithms and hyperparameters are passed in as the parameter space for the sweep.
+
+The model algorithm is required and is passed in via the `model_name` parameter. You can either specify a single `model_name` or choose from multiple model names.
+
+### Supported model algorithms
+
+The following table summarizes the supported models for each computer vision task.
+
+Task | Model algorithms | String literal syntax<br> ***`default_model`\**** denoted with \*
+--|--|--
+Image classification<br> (multi-class and multi-label)| **MobileNet**: Light-weighted models for mobile applications <br> **ResNet**: Residual networks<br> **ResNeSt**: Split attention networks<br> **SE-ResNeXt50**: Squeeze-and-Excitation networks<br> **ViT**: Vision transformer networks| `mobilenetv2` <br>`resnet18` <br>`resnet34` <br> `resnet50` <br> `resnet101` <br> `resnet152` <br> `resnest50` <br> `resnest101` <br> `seresnext` <br> `vits16r224` (small) <br> ***`vitb16r224`\**** (base) <br>`vitl16r224` (large)|
+Object detection | **YOLOv5**: One stage object detection model <br> **Faster RCNN ResNet FPN**: Two stage object detection models <br> **RetinaNet ResNet FPN**: address class imbalance with Focal Loss <br> <br>*Note: Refer to [`model_size` hyperparameter](../reference-automl-images-hyperparameters.md#model-specific-hyperparameters) for YOLOv5 model sizes.*| ***`yolov5`\**** <br> `fasterrcnn_resnet18_fpn` <br> `fasterrcnn_resnet34_fpn` <br> `fasterrcnn_resnet50_fpn` <br> `fasterrcnn_resnet101_fpn` <br> `fasterrcnn_resnet152_fpn` <br> `retinanet_resnet50_fpn`
+Instance segmentation | **MaskRCNN ResNet FPN**| `maskrcnn_resnet18_fpn` <br> `maskrcnn_resnet34_fpn` <br> ***`maskrcnn_resnet50_fpn`\**** <br> `maskrcnn_resnet101_fpn` <br> `maskrcnn_resnet152_fpn`
++
+In addition to controlling the model algorithm, you can also tune hyperparameters used for model training. While many of the hyperparameters exposed are model-agnostic, there are instances where hyperparameters are task-specific or model-specific. [Learn more about the available hyperparameters for these instances](../reference-automl-images-hyperparameters.md).
+
+### Data augmentation
+
+In general, deep learning model performance can often improve with more data. Data augmentation is a practical technique to amplify the data size and variability of a dataset which helps to prevent overfitting and improve the model's generalization ability on unseen data. Automated ML applies different data augmentation techniques based on the computer vision task, before feeding input images to the model. Currently, there is no exposed hyperparameter to control data augmentations.
+
+|Task | Impacted dataset | Data augmentation technique(s) applied |
+|-|-||
+|Image classification (multi-class and multi-label) | Training <br><br><br> Validation & Test| Random resize and crop, horizontal flip, color jitter (brightness, contrast, saturation, and hue), normalization using channel-wise ImageNet's mean and standard deviation <br><br><br>Resize, center crop, normalization |
+|Object detection, instance segmentation| Training <br><br><br> Validation & Test |Random crop around bounding boxes, expand, horizontal flip, normalization, resize <br><br><br>Normalization, resize|
+|Object detection using yolov5| Training <br><br><br> Validation & Test |Mosaic, random affine (rotation, translation, scale, shear), horizontal flip <br><br><br> Letterbox resizing|
+
+## Configure your experiment settings
+
+Before doing a large sweep to search for the optimal models and hyperparameters, we recommend trying the default values to get a first baseline. Next, you can explore multiple hyperparameters for the same model before sweeping over multiple models and their parameters. This way, you can employ a more iterative approach, because with multiple models and multiple hyperparameters for each, the search space grows exponentially and you need more iterations to find optimal configurations.
+
+If you wish to use the default hyperparameter values for a given algorithm (say yolov5), you can specify the config for your AutoML Image runs as follows:
+
+```python
+from azureml.train.automl import AutoMLImageConfig
+from azureml.train.hyperdrive import GridParameterSampling, choice
+from azureml.automl.core.shared.constants import ImageTask
+
+automl_image_config_yolov5 = AutoMLImageConfig(task=ImageTask.IMAGE_OBJECT_DETECTION,
+ compute_target=compute_target,
+ training_data=training_dataset,
+ validation_data=validation_dataset,
+ hyperparameter_sampling=GridParameterSampling({'model_name': choice('yolov5')}),
+ iterations=1)
+```
+
+Once you've built a baseline model, you might want to optimize model performance in order to sweep over the model algorithm and hyperparameter space. You can use the following sample config to sweep over the hyperparameters for each algorithm, choosing from a range of values for learning_rate, optimizer, lr_scheduler, etc., to generate a model with the optimal primary metric. If hyperparameter values are not specified, then default values are used for the specified algorithm.
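+
+The following is a sketch of such a sweep configuration. The hyperparameter names and value ranges are illustrative rather than prescriptive, and the compute target and datasets are assumed from the earlier sections; consult the hyperparameter reference linked above for the exact names and ranges supported by each algorithm.
+
+```python
+from azureml.train.automl import AutoMLImageConfig
+from azureml.train.hyperdrive import BanditPolicy, RandomParameterSampling, choice, uniform
+from azureml.automl.core.shared.constants import ImageTask
+
+# Illustrative parameter space: model choice plus a couple of common hyperparameters
+parameter_space = {
+    'model_name': choice('yolov5', 'fasterrcnn_resnet50_fpn'),
+    'learning_rate': uniform(0.0001, 0.01),
+    'optimizer': choice('sgd', 'adam', 'adamw'),
+}
+
+automl_image_config = AutoMLImageConfig(task=ImageTask.IMAGE_OBJECT_DETECTION,
+                                        compute_target=compute_target,
+                                        training_data=training_dataset,
+                                        validation_data=validation_dataset,
+                                        primary_metric='mean_average_precision',
+                                        hyperparameter_sampling=RandomParameterSampling(parameter_space),
+                                        early_termination_policy=BanditPolicy(evaluation_interval=2,
+                                                                              slack_factor=0.2,
+                                                                              delay_evaluation=6),
+                                        iterations=20,
+                                        max_concurrent_iterations=4)
+```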
+
+### Primary metric
+
+The primary metric used for model optimization and hyperparameter tuning depends on the task type. Using other primary metric values is currently not supported.
+
+* `accuracy` for IMAGE_CLASSIFICATION
+* `iou` for IMAGE_CLASSIFICATION_MULTILABEL
+* `mean_average_precision` for IMAGE_OBJECT_DETECTION
+* `mean_average_precision` for IMAGE_INSTANCE_SEGMENTATION
+
+### Experiment budget
+
+You can optionally specify the maximum time budget for your AutoML Vision experiment using `experiment_timeout_hours`, the amount of time in hours before the experiment terminates. If not specified, the default experiment timeout is seven days (maximum 60 days).
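+
+For instance, a sketch that caps the experiment at 12 hours (a placeholder value), with the other parameters assumed from the earlier configuration examples:
+
+```python
+from azureml.train.automl import AutoMLImageConfig
+from azureml.automl.core.shared.constants import ImageTask
+
+# Cap the experiment at 12 hours; compute target and dataset are assumed from earlier examples
+automl_image_config = AutoMLImageConfig(task=ImageTask.IMAGE_OBJECT_DETECTION,
+                                        compute_target=compute_target,
+                                        training_data=training_dataset,
+                                        experiment_timeout_hours=12)
+```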
++
+## Sweeping hyperparameters for your model
+
+When training computer vision models, model performance depends heavily on the hyperparameter values selected. Often, you might want to tune the hyperparameters to get optimal performance.
+With support for computer vision tasks in automated ML, you can sweep hyperparameters to find the optimal settings for your model. This feature applies the hyperparameter tuning capabilities in Azure Machine Learning. [Learn how to tune hyperparameters](../how-to-tune-hyperparameters.md).
+
+### Define the parameter search space
+
+You can define the model algorithms and hyperparameters to sweep in the parameter space.
+
+* See [Configure model algorithms and hyperparameters](#configure-model-algorithms-and-hyperparameters) for the list of supported model algorithms for each task type.
+* See [Hyperparameters for computer vision tasks](../reference-automl-images-hyperparameters.md) for the hyperparameters available for each computer vision task type.
+* See [details on supported distributions for discrete and continuous hyperparameters](../how-to-tune-hyperparameters.md#define-the-search-space).
+
+### Sampling methods for the sweep
+
+When sweeping hyperparameters, you need to specify the sampling method to use for sweeping over the defined parameter space. Currently, the following sampling methods are supported with the `hyperparameter_sampling` parameter:
+
+* [Random sampling](../how-to-tune-hyperparameters.md#random-sampling)
+* [Grid sampling](../how-to-tune-hyperparameters.md#grid-sampling)
+* [Bayesian sampling](../how-to-tune-hyperparameters.md#bayesian-sampling)
+
+> [!NOTE]
+> Currently only random sampling supports conditional hyperparameter spaces.
+
+### Early termination policies
+
+You can automatically end poorly performing runs with an early termination policy. Early termination improves computational efficiency, saving compute resources that would have been otherwise spent on less promising configurations. Automated ML for images supports the following early termination policies using the `early_termination_policy` parameter. If no termination policy is specified, all configurations are run to completion.
+
+* [Bandit policy](../how-to-tune-hyperparameters.md#bandit-policy)
+* [Median stopping policy](../how-to-tune-hyperparameters.md#median-stopping-policy)
+* [Truncation selection policy](../how-to-tune-hyperparameters.md#truncation-selection-policy)
+
+Learn more about [how to configure the early termination policy for your hyperparameter sweep](../how-to-tune-hyperparameters.md#early-termination).
+
+### Resources for the sweep
+
+You can control the resources spent on your hyperparameter sweep by specifying the `iterations` and the `max_concurrent_iterations` for the sweep.
+
+Parameter | Detail
+--|-
+`iterations` | Required parameter for maximum number of configurations to sweep. Must be an integer between 1 and 1000. When exploring just the default hyperparameters for a given model algorithm, set this parameter to 1.
+`max_concurrent_iterations`| Maximum number of runs that can run concurrently. If not specified, all runs launch in parallel. If specified, must be an integer between 1 and 100. <br><br> **NOTE:** The number of concurrent runs is gated on the resources available in the specified compute target. Ensure that the compute target has the available resources for the desired concurrency.
++
+> [!NOTE]
+> For a complete sweep configuration sample, please refer to this [tutorial](../tutorial-auto-train-image-models.md#hyperparameter-sweeping-for-image-tasks).
+
+### Arguments
+
+You can pass fixed settings or parameters that don't change during the parameter space sweep as arguments. Arguments are passed in name-value pairs and the name must be prefixed by a double dash.
+
+```python
+from azureml.train.automl import AutoMLImageConfig
+arguments = ["--early_stopping", 1, "--evaluation_frequency", 2]
+automl_image_config = AutoMLImageConfig(arguments=arguments)
+```
+
+## Incremental training (optional)
+
+Once the training run is done, you have the option to further train the model by loading the trained model checkpoint. You can either use the same dataset or a different one for incremental training.
+
+There are two available options for incremental training. You can,
+
+* Pass the run ID that you want to load the checkpoint from.
+* Pass the checkpoints through a FileDataset.
+
+### Pass the checkpoint via run ID
+To find the run ID from the desired model, you can use the following code.
+
+```python
+# find a run id to get a model checkpoint from
+target_checkpoint_run = automl_image_run.get_best_child()
+```
+
+To pass a checkpoint via the run ID, you need to use the `checkpoint_run_id` parameter.
+
+```python
+automl_image_config = AutoMLImageConfig(task='image-object-detection',
+ compute_target=compute_target,
+ training_data=training_dataset,
+ validation_data=validation_dataset,
+ checkpoint_run_id= target_checkpoint_run.id,
+ primary_metric='mean_average_precision',
+ **tuning_settings)
+
+automl_image_run = experiment.submit(automl_image_config)
+automl_image_run.wait_for_completion(wait_post_processing=True)
+```
+
+### Pass the checkpoint via FileDataset
+To pass a checkpoint via a FileDataset, you need to use the `checkpoint_dataset_id` and `checkpoint_filename` parameters.
+
+```python
+# download the checkpoint from the previous run
+model_name = "outputs/model.pt"
+model_local = "checkpoints/model_yolo.pt"
+target_checkpoint_run.download_file(name=model_name, output_file_path=model_local)
+
+# upload the checkpoint to the blob store
+ds.upload(src_dir="checkpoints", target_path='checkpoints')
+
+# create a FileDataset for the checkpoint and register it with your workspace
+ds_path = ds.path('checkpoints/model_yolo.pt')
+checkpoint_yolo = Dataset.File.from_files(path=ds_path)
+checkpoint_yolo = checkpoint_yolo.register(workspace=ws, name='yolo_checkpoint')
+
+automl_image_config = AutoMLImageConfig(task='image-object-detection',
+ compute_target=compute_target,
+ training_data=training_dataset,
+ validation_data=validation_dataset,
+ checkpoint_dataset_id= checkpoint_yolo.id,
+ checkpoint_filename='model_yolo.pt',
+ primary_metric='mean_average_precision',
+ **tuning_settings)
+
+automl_image_run = experiment.submit(automl_image_config)
+automl_image_run.wait_for_completion(wait_post_processing=True)
+
+```
+
+## Submit the run
+
+When you have your `AutoMLImageConfig` object ready, you can submit the experiment.
+
+```python
+from azureml.core import Experiment, Workspace
+
+ws = Workspace.from_config()
+experiment = Experiment(ws, "Tutorial-automl-image-object-detection")
+automl_image_run = experiment.submit(automl_image_config)
+```
+
+## Outputs and evaluation metrics
+
+Automated ML training runs generate output model files, evaluation metrics, logs, and deployment artifacts like the scoring file and the environment file. You can view these files from the outputs, logs, and metrics tabs of the child runs.
+
+> [!TIP]
+> Check how to navigate to the run results from the [View run results](../how-to-understand-automated-ml.md#view-run-results) section.
+
+For definitions and examples of the performance charts and metrics provided for each run, see [Evaluate automated machine learning experiment results](../how-to-understand-automated-ml.md#metrics-for-image-models-preview).
+
+## Register and deploy model
+
+Once the run completes, you can register the model that was created from the best run (the configuration that resulted in the best primary metric).
+
+```Python
+best_child_run = automl_image_run.get_best_child()
+model_name = best_child_run.properties['model_name']
+model = best_child_run.register_model(model_name = model_name, model_path='outputs/model.pt')
+```
+
+After you register the model you want to use, you can deploy it as a web service on [Azure Container Instances (ACI)](../v1/how-to-deploy-azure-container-instance.md) or [Azure Kubernetes Service (AKS)](../v1/how-to-deploy-azure-kubernetes-service.md). ACI is the perfect option for testing deployments, while AKS is better suited for high-scale, production usage.
+
+This example deploys the model as a web service in AKS. To deploy in AKS, first create an AKS compute cluster or use an existing AKS cluster. You can use either GPU or CPU VM SKUs for your deployment cluster.
+
+```python
+
+from azureml.core.compute import ComputeTarget, AksCompute
+from azureml.exceptions import ComputeTargetException
+
+# Choose a name for your cluster
+aks_name = "cluster-aks-gpu"
+
+# Check to see if the cluster already exists
+try:
+ aks_target = ComputeTarget(workspace=ws, name=aks_name)
+ print('Found existing compute target')
+except ComputeTargetException:
+ print('Creating a new compute target...')
+ # Provision AKS cluster with GPU machine
+ prov_config = AksCompute.provisioning_configuration(vm_size="STANDARD_NC6",
+ location="eastus2")
+ # Create the cluster
+ aks_target = ComputeTarget.create(workspace=ws,
+ name=aks_name,
+ provisioning_configuration=prov_config)
+ aks_target.wait_for_completion(show_output=True)
+```
+
+Next, you can define the inference configuration, which describes how to set up the web service that contains your model. You can use the scoring script and the environment from the training run in your inference config.
+
+```python
+from azureml.core.model import InferenceConfig
+
+best_child_run.download_file('outputs/scoring_file_v_1_0_0.py', output_file_path='score.py')
+environment = best_child_run.get_environment()
+inference_config = InferenceConfig(entry_script='score.py', environment=environment)
+```
+
+You can then deploy the model as an AKS web service.
+
+```python
+# Deploy the model from the best run as an AKS web service
+from azureml.core.webservice import AksWebservice
+from azureml.core.webservice import Webservice
+from azureml.core.model import Model
+from azureml.core.environment import Environment
+
+aks_config = AksWebservice.deploy_configuration(autoscale_enabled=True,
+ cpu_cores=1,
+ memory_gb=50,
+ enable_app_insights=True)
+
+aks_service = Model.deploy(ws,
+ models=[model],
+ inference_config=inference_config,
+ deployment_config=aks_config,
+ deployment_target=aks_target,
+ name='automl-image-test',
+ overwrite=True)
+aks_service.wait_for_deployment(show_output=True)
+print(aks_service.state)
+```
+
+Alternatively, you can deploy the model from the [Azure Machine Learning studio UI](https://ml.azure.com/).
+Navigate to the model you wish to deploy in the **Models** tab of the automated ML run and select **Deploy**.
+
+![Select model from the automl runs in studio UI ](.././media/how-to-auto-train-image-models/select-model.png)
+
+You can configure the model deployment endpoint name and the inferencing cluster to use for your model deployment in the **Deploy a model** pane.
+
+![Deploy configuration](.././media/how-to-auto-train-image-models/deploy-image-model.png)
+
+### Update inference configuration
+
+In the previous step, we downloaded the scoring file `outputs/scoring_file_v_1_0_0.py` from the best model into a local `score.py` file and we used it to create an `InferenceConfig` object. If needed, you can modify this script to change model-specific inference settings after it has been downloaded and before creating the `InferenceConfig`. For instance, this is the code section that initializes the model in the scoring file:
+
+```
+...
+def init():
+ ...
+ try:
+ logger.info("Loading model from path: {}.".format(model_path))
+ model_settings = {...}
+ model = load_model(TASK_TYPE, model_path, **model_settings)
+ logger.info("Loading successful.")
+ except Exception as e:
+ logging_utilities.log_traceback(e, logger)
+ raise
+...
+```
+
+Each of the tasks (and some models) has a set of parameters in the `model_settings` dictionary. By default, we use the same values for the parameters that were used during training and validation. Depending on the behavior that we need when using the model for inference, we can change these parameters. Below you can find a list of parameters for each task type and model, followed by a sketch of how you might override them.
+
+| Task | Parameter name | Default |
+| |- | |
+|Image classification (multi-class and multi-label) | `valid_resize_size`<br>`valid_crop_size` | 256<br>224 |
+|Object detection | `min_size`<br>`max_size`<br>`box_score_thresh`<br>`nms_iou_thresh`<br>`box_detections_per_img` | 600<br>1333<br>0.3<br>0.5<br>100 |
+|Object detection using `yolov5`| `img_size`<br>`model_size`<br>`box_score_thresh`<br>`nms_iou_thresh` | 640<br>medium<br>0.1<br>0.5 |
+|Instance segmentation| `min_size`<br>`max_size`<br>`box_score_thresh`<br>`nms_iou_thresh`<br>`box_detections_per_img`<br>`mask_pixel_score_threshold`<br>`max_number_of_polygon_points`<br>`export_as_image`<br>`image_type` | 600<br>1333<br>0.3<br>0.5<br>100<br>0.5<br>100<br>False<br>JPG|
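+
+As an illustration only (not the exact generated script), you might override a couple of these settings in the downloaded `score.py` before creating the `InferenceConfig`. The values below are placeholders, and `load_model`, `TASK_TYPE`, and `model_path` come from the generated scoring file shown above.
+
+```python
+# Inside init() in the downloaded score.py: override selected inference-time settings.
+# Parameter names are from the object detection row of the table above; values are placeholders.
+model_settings = {
+    "box_score_thresh": 0.4,   # only return boxes scoring above this threshold
+    "nms_iou_thresh": 0.6,     # IoU threshold used for non-maximum suppression
+}
+model = load_model(TASK_TYPE, model_path, **model_settings)
+```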
+
+For a detailed description on task specific hyperparameters, please refer to [Hyperparameters for computer vision tasks in automated machine learning](../reference-automl-images-hyperparameters.md).
+
+If you want to use tiling, and want to control tiling behavior, the following parameters are available: `tile_grid_size`, `tile_overlap_ratio` and `tile_predictions_nms_thresh`. For more details on these parameters please check [Train a small object detection model using AutoML](../how-to-use-automl-small-object-detect.md).
+
+## Example notebooks
+Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml). Please check the folders with 'image-' prefix for samples specific to building computer vision models.
++
+## Next steps
+
+* [Tutorial: Train an object detection model (preview) with AutoML and Python](../tutorial-auto-train-image-models.md).
+* [Make predictions with ONNX on computer vision models from AutoML](../how-to-inference-onnx-automl-image-models.md)
+* [Troubleshoot automated ML experiments](../how-to-troubleshoot-auto-ml.md).
machine-learning How To Auto Train Nlp Models V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-auto-train-nlp-models-v1.md
+
+ Title: Set up AutoML for NLP (v1)
+
+description: Set up Azure Machine Learning automated ML to train natural language processing models with the Azure Machine Learning Python SDK v1.
+ Last updated : 03/15/2022
+#Customer intent: I'm a data scientist with ML knowledge in the natural language processing space, looking to build ML models using language specific data in Azure Machine Learning with full control of the model algorithm, hyperparameters, and training and deployment environments.
++
+# Set up AutoML to train a natural language processing model with Python (preview)
+
+> [!div class="op_single_selector" title1="Select the version of the developer platform of Azure Machine Learning you are using:"]
+> * [v1](how-to-auto-train-nlp-models-v1.md)
+> * [v2 (current version)](../how-to-auto-train-nlp-models.md)
++
+In this article, you learn how to train natural language processing (NLP) models with [automated ML](../concept-automated-ml.md) in the [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/).
+
+Automated ML supports NLP, which allows ML professionals and data scientists to bring their own text data and build custom models for tasks such as multi-class text classification, multi-label text classification, and named entity recognition (NER).
+
+You can seamlessly integrate with the [Azure Machine Learning data labeling](../how-to-create-text-labeling-projects.md) capability to label your text data or bring your existing labeled data. Automated ML provides the option to use distributed training on multi-GPU compute clusters for faster model training. The resulting model can be operationalized at scale by leveraging Azure ML's MLOps capabilities.
+
+## Prerequisites
+
+* Azure subscription. If you don't have an Azure subscription, sign up to try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
+
+* An Azure Machine Learning workspace with a GPU training compute. To create the workspace, see [Create an Azure Machine Learning workspace](../how-to-manage-workspace.md). See [GPU optimized virtual machine sizes](../../virtual-machines/sizes-gpu.md) for more details of GPU instances provided by Azure.
+
+ > [!Warning]
+  > Support for multilingual models and the use of models with a longer maximum sequence length is necessary for several NLP use cases, such as non-English datasets and longer-range documents. As a result, these scenarios may require higher GPU memory for model training to succeed, such as the NC_v3 series or the ND series.
+
+* The Azure Machine Learning Python SDK installed.
+
+  To install the SDK, you can either:
+ * Create a compute instance, which automatically installs the SDK and is pre-configured for ML workflows. See [Create and manage an Azure Machine Learning compute instance](../how-to-create-manage-compute-instance.md) for more information.
+
+ * [Install the `automl` package yourself](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment), which includes the [default installation](/python/api/overview/azure/ml/install#default-install) of the SDK.
+
+ [!INCLUDE [automl-sdk-version](../../../includes/machine-learning-automl-sdk-version.md)]
+
+
+* This article assumes some familiarity with setting up an automated machine learning experiment. Follow the [tutorial](../tutorial-auto-train-models.md) or [how-to](../how-to-configure-auto-train.md) to see the main automated machine learning experiment design patterns.
+
+## Select your NLP task
+
+Determine what NLP task you want to accomplish. Currently, automated ML supports the following deep neural network NLP tasks.
+
+Task |AutoMLConfig syntax| Description
+--|--|--
+Multi-class text classification | `task = 'text-classification'`| There are multiple possible classes and each sample can be classified as exactly one class. The task is to predict the correct class for each sample. <br> <br> For example, classifying a movie script as "Comedy" or "Romantic".
+Multi-label text classification | `task = 'text-classification-multilabel'`| There are multiple possible classes and each sample can be assigned any number of classes. The task is to predict all the classes for each sample<br> <br> For example, classifying a movie script as "Comedy", or "Romantic", or "Comedy and Romantic".
+Named Entity Recognition (NER)| `task = 'text-ner'`| There are multiple possible tags for tokens in sequences. The task is to predict the tags for all the tokens for each sequence. <br> <br> For example, extracting domain-specific entities from unstructured text, such as contracts or financial documents
+
+## Preparing data
+
+For NLP experiments in automated ML, you can bring an Azure Machine Learning dataset in `.csv` format for multi-class and multi-label classification tasks. For NER tasks, two-column `.txt` files that use a space as the separator and adhere to the CoNLL format are supported. The following sections provide additional detail about the data format accepted for each task.
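+
+For example, here is a minimal sketch of loading labeled `.csv` files as Azure Machine Learning tabular datasets for the classification tasks. The datastore paths are placeholders for wherever your labeled data lives:
+
+```python
+from azureml.core import Workspace, Dataset
+
+ws = Workspace.from_config()
+datastore = ws.get_default_datastore()
+
+# Hypothetical paths; point these at your own labeled files
+train_dataset = Dataset.Tabular.from_delimited_files(path=[(datastore, 'nlp-data/train.csv')])
+val_dataset = Dataset.Tabular.from_delimited_files(path=[(datastore, 'nlp-data/valid.csv')])
+```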
+
+### Multi-class
+
+For multi-class classification, the dataset can contain several text columns and exactly one label column. The following example has only one text column.
+
+```python
+
+text,labels
+"I love watching Chicago Bulls games.","NBA"
+"Tom Brady is a great player.","NFL"
+"There is a game between Yankees and Orioles tonight","MLB"
+"Stephen Curry made the most number of 3-Pointers","NBA"
+```
+
+### Multi-label
+
+For multi-label classification, the dataset columns are the same as for multi-class; however, there are special format requirements for data in the label column. The two accepted formats and examples are in the following table.
+
+|Label column format options |Multiple labels| One label | No labels
+|-|-|-|-
+|Plain text|`"label1, label2, label3"`| `"label1"`| `""`
+|Python list with quotes| `"['label1','label2','label3']"`| `"['label1']"`|`"[]"`
+
+> [!IMPORTANT]
+> Different parsers are used to read labels for these formats. If you are using the plain text format, use only alphanumeric characters and `'_'` in your labels. All other characters are treated as label separators.
+>
+> For example, if your label is `"cs.AI"`, it's read as `"cs"` and `"AI"`. Whereas with the Python list format, the label would be `"['cs.AI']"`, which is read as `"cs.AI"`.
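+
+The difference between the two parsers can be illustrated with plain Python. The following is only a sketch of the behavior described above, not automated ML's actual parsing code:
+
+```python
+import ast
+import re
+
+# Plain text format: anything that isn't alphanumeric or '_' acts as a separator,
+# so a label like "cs.AI" is read as two labels, "cs" and "AI".
+plain_text = "cs.AI"
+print(re.findall(r"[A-Za-z0-9_]+", plain_text))  # ['cs', 'AI']
+
+# Python list format: the value is parsed as a list literal, so the label survives intact.
+as_list = "['cs.AI']"
+print(ast.literal_eval(as_list))                 # ['cs.AI']
+```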
+
+Example data for multi-label in plain text format.
+
+```python
+text,labels
+"I love watching Chicago Bulls games.","basketball"
+"The four most popular leagues are NFL, MLB, NBA and NHL","football,baseball,basketball,hockey"
+"I like drinking beer.",""
+```
+
+Example data for multi-label in Python list with quotes format.
+
+``` python
+text,labels
+"I love watching Chicago Bulls games.","['basketball']"
+"The four most popular leagues are NFL, MLB, NBA and NHL","['football','baseball','basketball','hockey']"
+"I like drinking beer.","[]"
+```
+
+### Named entity recognition (NER)
+
+Unlike multi-class or multi-label, which take `.csv` format datasets, named entity recognition requires [CoNLL](https://www.clips.uantwerpen.be/conll2003/ner/) format. The file must contain exactly two columns, and in each row, the token and the label are separated by a single space.
+
+For example,
+
+``` python
+Hudson B-loc
+Square I-loc
+is O
+a O
+famous O
+place O
+in O
+New B-loc
+York I-loc
+City I-loc
+
+Stephen B-per
+Curry I-per
+got O
+three O
+championship O
+rings O
+```
+
+### Data validation
+
+Before training, automated ML applies data validation checks on the input data to ensure that the data can be preprocessed correctly. If any of these checks fail, the run fails with the relevant error message. The following are the requirements to pass data validation checks for each task.
+
+> [!Note]
+> Some data validation checks are applicable to both the training and the validation set, whereas others are applicable only to the training set. If the test dataset doesn't pass data validation, automated ML can't process it, and there is a possibility of model inference failure or a decline in model performance.
+
+Task | Data validation check
+-|-
+All tasks | - Both training and validation sets must be provided <br> - At least 50 training samples are required
+Multi-class and Multi-label | The training data and validation data must have <br> - The same set of columns <br>- The same order of columns from left to right <br>- The same data type for columns with the same name <br>- At least two unique labels <br> - Unique column names within each dataset (For example, the training set can't have multiple columns named **Age**)
+Multi-class only | None
+Multi-label only | - The label column format must be in [accepted format](#multi-label) <br> - At least one sample should have 0 or 2+ labels, otherwise it should be a `multiclass` task <br> - All labels should be in `str` or `int` format, with no overlapping. You should not have both label `1` and label `'1'`
+NER only | - The file should not start with an empty line <br> - Each line must be an empty line, or follow the format `{token} {label}`, where there is exactly one space between the token and the label and no white space after the label <br> - All labels must start with `I-`, `B-`, or be exactly `O` (case sensitive) <br> - Exactly one empty line between two samples <br> - Exactly one empty line at the end of the file
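+
+Before uploading, you might run a quick local sanity check against some of these NER formatting rules. The following helper is illustrative only and isn't part of the SDK:
+
+```python
+def check_conll(path):
+    """Check the '{token} {label}' format and label prefixes described above."""
+    with open(path, encoding="utf-8") as f:
+        lines = f.read().split("\n")
+    for i, line in enumerate(lines, start=1):
+        if line == "":
+            continue  # empty lines separate samples
+        parts = line.split(" ")
+        assert len(parts) == 2 and parts[0] and parts[1], f"line {i}: expected '<token> <label>'"
+        label = parts[1]
+        assert label == "O" or label.startswith(("I-", "B-")), f"line {i}: unexpected label '{label}'"
+
+check_conll("train.txt")  # hypothetical file name
+```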
+
+## Configure experiment
+
+Automated ML's NLP capability is triggered through `AutoMLConfig`, which is the same workflow for submitting automated ML experiments for classification, regression and forecasting tasks. You would set most of the parameters as you would for those experiments, such as `task`, `compute_target` and data inputs.
+
+However, there are key differences:
+* You can ignore `primary_metric`, as it is only for reporting purposes. Currently, automated ML only trains one model per run for NLP and there is no model selection.
+* The `label_column_name` parameter is only required for multi-class and multi-label text classification tasks.
+* If the majority of the samples in your dataset contain more than 128 words, it's considered long range. For this scenario, you can enable the long range text option with the `enable_long_range_text=True` parameter in your `AutoMLConfig`. Doing so helps improve model performance, but requires longer training times.
+ * If you enable long range text, then a GPU with higher memory is required, such as the [NCv3](../../virtual-machines/ncv3-series.md) series or [ND](../../virtual-machines/nd-series.md) series.
+ * The `enable_long_range_text` parameter is only available for multi-class classification tasks.
++
+```python
+automl_settings = {
+ "verbosity": logging.INFO,
+ "enable_long_range_text": True, # # You only need to set this parameter if you want to enable the long-range text setting
+}
+
+automl_config = AutoMLConfig(
+ task="text-classification",
+ debug_log="automl_errors.log",
+ compute_target=compute_target,
+ training_data=train_dataset,
+ validation_data=val_dataset,
+ label_column_name=target_column_name,
+ **automl_settings
+)
+```
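+
+With the configuration in place, you submit it like any other automated ML run. The following is a minimal sketch that assumes a workspace `config.json` is present and uses a hypothetical experiment name:
+
+```python
+from azureml.core import Experiment, Workspace
+
+ws = Workspace.from_config()
+experiment = Experiment(ws, "automl-nlp-text-classification")  # hypothetical experiment name
+
+automl_run = experiment.submit(automl_config, show_output=True)
+automl_run.wait_for_completion(show_output=False)
+```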
+
+### Language settings
+
+As part of the NLP functionality, automated ML supports 104 languages, leveraging language-specific and multilingual pre-trained text DNN models, such as the BERT family of models. Currently, language selection defaults to English.
+
+ The following table summarizes what model is applied based on task type and language. See the full list of [supported languages and their codes](/python/api/azureml-automl-core/azureml.automl.core.constants.textdnnlanguages#azureml-automl-core-constants-textdnnlanguages-supported).
+
+ Task type |Syntax for `dataset_language` | Text model algorithm
+-|-|-
+Multi-label text classification| `'eng'` <br> `'deu'` <br> `'mul'`| English&nbsp;BERT&nbsp;[uncased](https://huggingface.co/bert-base-uncased) <br> [German BERT](https://huggingface.co/bert-base-german-cased)<br> [Multilingual BERT](https://huggingface.co/bert-base-multilingual-cased) <br><br>For all other languages, automated ML applies multilingual BERT
+Multi-class text classification| `'eng'` <br> `'deu'` <br> `'mul'`| English&nbsp;BERT&nbsp;[cased](https://huggingface.co/bert-base-cased)<br> [Multilingual BERT](https://huggingface.co/bert-base-multilingual-cased) <br><br>For all other languages, automated ML applies multilingual BERT
+Named entity recognition (NER)| `'eng'` <br> `'deu'` <br> `'mul'`| English&nbsp;BERT&nbsp;[cased](https://huggingface.co/bert-base-cased) <br> [German BERT](https://huggingface.co/bert-base-german-cased)<br> [Multilingual BERT](https://huggingface.co/bert-base-multilingual-cased) <br><br>For all other languages, automated ML applies multilingual BERT
++
+You can specify your dataset language in your `FeaturizationConfig`. BERT is also used in the featurization process of automated ML experiment training. Learn more about [BERT integration and featurization in automated ML](../how-to-configure-auto-features.md#bert-integration-in-automated-ml).
+
+```python
+from azureml.automl.core.featurization import FeaturizationConfig
+
+featurization_config = FeaturizationConfig(dataset_language='{your language code}')
+automl_config = AutoMLConfig(
+    task="text-classification",
+    training_data=train_dataset,
+    label_column_name=target_column_name,
+    featurization=featurization_config
+)
+```
+
+## Distributed training
+
+You can also run your NLP experiments with distributed training on an Azure ML compute cluster. Automated ML handles this automatically when the parameters `max_concurrent_iterations = number_of_vms` and `enable_distributed_dnn_training = True` are provided in your `AutoMLConfig` during experiment setup.
+
+```python
+automl_settings = {
+    "max_concurrent_iterations": number_of_vms,  # for example, the number of nodes in your compute cluster
+    "enable_distributed_dnn_training": True,
+}
+```
+
+Doing so schedules distributed training of the NLP models and automatically scales to every GPU on your virtual machine or cluster of virtual machines. The maximum number of virtual machines allowed is 32. The training is scheduled with a number of virtual machines that is a power of two.
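+
+For example, here is a sketch of how these settings might be passed through `AutoMLConfig`, reusing the variable names from the earlier configuration example:
+
+```python
+automl_config = AutoMLConfig(
+    task="text-classification",
+    compute_target=compute_target,       # a multi-node GPU compute cluster
+    training_data=train_dataset,
+    validation_data=val_dataset,
+    label_column_name=target_column_name,
+    **automl_settings                    # includes the distributed training settings shown above
+)
+```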
+
+## Example notebooks
+
+See the sample notebooks for detailed code examples for each NLP task.
+* [Multi-class text classification](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/automl-nlp-multiclass/automl-nlp-text-classification-multiclass.ipynb)
+* [Multi-label text classification](
+https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/automl-nlp-multilabel/automl-nlp-text-classification-multilabel.ipynb)
+* [Named entity recognition](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/automl-nlp-ner/automl-nlp-ner.ipynb)
+
+## Next steps
++ Learn more about [how and where to deploy a model](../how-to-deploy-and-where.md).
++ [Troubleshoot automated ML experiments](../how-to-troubleshoot-auto-ml.md).
machine-learning How To Configure Auto Train V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-configure-auto-train-v1.md
+
+ Title: Set up AutoML with Python
+
+description: Learn how to set up an AutoML training run with the Azure Machine Learning Python SDK using Azure Machine Learning automated ML.
+ Last updated : 01/24/2021
+# Set up AutoML training with Python
+
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning Python you are using:"]
+> * [v1](how-to-configure-auto-train-v1.md)
+> * [v2 (current version)](../how-to-configure-auto-train.md)
+
+In this guide, learn how to set up an automated machine learning (AutoML) training run with the [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro) using Azure Machine Learning automated ML. Automated ML picks an algorithm and hyperparameters for you and generates a model ready for deployment. This guide provides details of the various options that you can use to configure automated ML experiments.
+
+For an end-to-end example, see [Tutorial: AutoML - train a regression model](../tutorial-auto-train-models.md).
+
+If you prefer a no-code experience, you can also [Set up no-code AutoML training in the Azure Machine Learning studio](../how-to-use-automated-ml-for-ml-models.md).
+
+## Prerequisites
+
+For this article, you need:
+* An Azure Machine Learning workspace. To create the workspace, see [Create an Azure Machine Learning workspace](../how-to-manage-workspace.md).
+
+* The Azure Machine Learning Python SDK installed.
+ To install the SDK, you can either:
+ * Create a compute instance, which automatically installs the SDK and is preconfigured for ML workflows. See [Create and manage an Azure Machine Learning compute instance](../how-to-create-manage-compute-instance.md) for more information.
+
+ * [Install the `automl` package yourself](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment), which includes the [default installation](/python/api/overview/azure/ml/install#default-install) of the SDK.
+
+ [!INCLUDE [automl-sdk-version](../../../includes/machine-learning-automl-sdk-version.md)]
+
+ > [!WARNING]
+ > Python 3.8 is not compatible with `automl`.
+
+## Select your experiment type
+
+Before you begin your experiment, you should determine the kind of machine learning problem you are solving. Automated machine learning supports task types of `classification`, `regression`, and `forecasting`. Learn more about [task types](../concept-automated-ml.md#when-to-use-automl-classification-regression-forecasting-computer-vision--nlp).
+
+>[!NOTE]
+> Support for computer vision tasks: image classification (multi-class and multi-label), object detection, and instance segmentation is available in public preview. [Learn more about computer vision tasks in automated ML](../concept-automated-ml.md#computer-vision-preview).
+>
+>Support for natural language processing (NLP) tasks: text classification (multi-class and multi-label) and named entity recognition is available in public preview. [Learn more about NLP tasks in automated ML](../concept-automated-ml.md#nlp).
+>
+> These preview capabilities are provided without a service-level agreement. Certain features might not be supported or might have constrained functionality. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+The following code uses the `task` parameter in the `AutoMLConfig` constructor to specify the experiment type as `classification`.
+
+```python
+from azureml.train.automl import AutoMLConfig
+
+# task can be one of classification, regression, forecasting
+automl_config = AutoMLConfig(task = "classification")
+```
+
+## Data source and format
+
+Automated machine learning supports data that resides on your local desktop or in the cloud such as Azure Blob Storage. The data can be read into a **Pandas DataFrame** or an **Azure Machine Learning TabularDataset**. [Learn more about datasets](../how-to-create-register-datasets.md).
+
+Requirements for training data in machine learning:
+- Data must be in tabular form.
+- The value to predict, target column, must be in the data.
+
+> [!IMPORTANT]
+> Automated ML experiments do not support training with datasets that use [identity-based data access](../how-to-identity-based-data-access.md).
+
+**For remote experiments**, training data must be accessible from the remote compute. Automated ML only accepts [Azure Machine Learning TabularDatasets](/python/api/azureml-core/azureml.data.tabulardataset) when working on a remote compute.
+
+Azure Machine Learning datasets expose functionality to:
+
+* Easily transfer data from static files or URL sources into your workspace.
+* Make your data available to training scripts when running on cloud compute resources. See [How to train with datasets](../how-to-train-with-datasets.md#mount-files-to-remote-compute-targets) for an example of using the `Dataset` class to mount data to your remote compute target.
+
+The following code creates a TabularDataset from a web URL. See [Create a TabularDataset](../how-to-create-register-datasets.md) for code examples on how to create datasets from other sources like local files and datastores.
+
+```python
+from azureml.core.dataset import Dataset
+data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv"
+dataset = Dataset.Tabular.from_delimited_files(data)
+```
+
+**For local compute experiments**, we recommend pandas dataframes for faster processing times.
+
+ ```python
+ import pandas as pd
+ from sklearn.model_selection import train_test_split
+
+ df = pd.read_csv("your-local-file.csv")
+ train_data, test_data = train_test_split(df, test_size=0.1, random_state=42)
+ label = "label-col-name"
+ ```
+
+## Training, validation, and test data
+
+You can specify separate **training data and validation data sets** directly in the `AutoMLConfig` constructor. Learn more about [how to configure training, validation, cross validation, and test data](../how-to-configure-cross-validation-data-splits.md) for your AutoML experiments.
+
+If you do not explicitly specify a `validation_data` or `n_cross_validation` parameter, automated ML applies default techniques to determine how validation is performed. This determination depends on the number of rows in the dataset assigned to your `training_data` parameter.
+
+|Training&nbsp;data&nbsp;size| Validation technique |
+|-|-|
+|**Larger&nbsp;than&nbsp;20,000&nbsp;rows**| Train/validation data split is applied. The default is to take 10% of the initial training data set as the validation set. In turn, that validation set is used for metrics calculation.
+|**Smaller&nbsp;than&nbsp;20,000&nbsp;rows**| Cross-validation approach is applied. The default number of folds depends on the number of rows. <br> **If the dataset is less than 1,000 rows**, 10 folds are used. <br> **If the rows are between 1,000 and 20,000**, then three folds are used.
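+
+If you'd rather control validation explicitly, a minimal sketch might look like the following; the dataset and label variables are assumed from the earlier examples:
+
+```python
+automl_config = AutoMLConfig(
+    task='classification',
+    training_data=train_data,
+    validation_data=validation_data,  # or omit this and set n_cross_validations=5 instead
+    label_column_name=label
+)
+```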
++
+> [!TIP]
+> You can upload **test data (preview)** to evaluate models that automated ML generated for you. These features are [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview capabilities, and may change at any time.
+> Learn how to:
+> * [Pass in test data to your AutoMLConfig object](../how-to-configure-cross-validation-data-splits.md#provide-test-data-preview).
+> * [Test the models automated ML generated for your experiment](#test-models-preview).
+>
+> If you prefer a no-code experience, see [step 12 in Set up AutoML with the studio UI](../how-to-use-automated-ml-for-ml-models.md#create-and-run-experiment)
++
+### Large data
+
+Automated ML supports a limited number of algorithms for training on large data; these algorithms can successfully build models for big data on small virtual machines. Automated ML heuristics depend on properties such as data size, virtual machine memory size, experiment timeout, and featurization settings to determine if these large data algorithms should be applied. [Learn more about what models are supported in automated ML](#supported-models).
+
+* For regression, [Online Gradient Descent Regressor](/python/api/nimbusml/nimbusml.linear_model.onlinegradientdescentregressor?preserve-view=true&view=nimbusml-py-latest) and
+[Fast Linear Regressor](/python/api/nimbusml/nimbusml.linear_model.fastlinearregressor?preserve-view=true&view=nimbusml-py-latest)
+
+* For classification, [Averaged Perceptron Classifier](/python/api/nimbusml/nimbusml.linear_model.averagedperceptronbinaryclassifier?preserve-view=true&view=nimbusml-py-latest) and [Linear SVM Classifier](/python/api/nimbusml/nimbusml.linear_model.linearsvmbinaryclassifier?preserve-view=true&view=nimbusml-py-latest); where the Linear SVM classifier has both large data and small data versions.
+
+If you want to override these heuristics, apply the following settings:
+
+Task | Setting | Notes
+-|-|-
+Block&nbsp;data streaming algorithms | `blocked_models` in your `AutoMLConfig` object and list the model(s) you don't want to use. | Results in either run failure or long run time
+Use&nbsp;data&nbsp;streaming&nbsp;algorithms| `allowed_models` in your `AutoMLConfig` object and list the model(s) you want to use.|
+Use&nbsp;data&nbsp;streaming&nbsp;algorithms <br> [(studio UI experiments)](../how-to-use-automated-ml-for-ml-models.md#create-and-run-experiment)|Block all models except the big data algorithms you want to use. |
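+
+For example, here is a sketch that blocks the data streaming classification algorithms; the model name strings are our reading of the supported models reference below, so verify them against that list before use. The dataset and label variables are assumed from the earlier examples:
+
+```python
+automl_config = AutoMLConfig(
+    task='classification',
+    training_data=train_data,
+    label_column_name=label,
+    blocked_models=['AveragedPerceptronClassifier', 'LinearSVM']
+)
+```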
+
+## Compute to run experiment
+
+Next, determine where the model will be trained. An automated ML training experiment can run on the following compute options.
+
+ * **Choose a local compute**: If your scenario is about initial explorations or demos using small data and short trains (i.e. seconds or a couple of minutes per child run), training on your local computer might be a better choice. There is no setup time, the infrastructure resources (your PC or VM) are directly available. See [this notebook](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/local-run-classification-credit-card-fraud/auto-ml-classification-credit-card-fraud-local.ipynb) for a local compute example.
+
+ * **Choose a remote ML compute cluster**: If you are training with larger datasets, like in production training that creates models which need longer training runs, remote compute provides much better end-to-end time performance because `AutoML` parallelizes training across the cluster's nodes. On a remote compute, the start-up time for the internal infrastructure adds around 1.5 minutes per child run, plus additional minutes for the cluster infrastructure if the VMs are not yet up and running. [Azure Machine Learning Managed Compute](../concept-compute-target.md#amlcompute) is a managed service that enables the ability to train machine learning models on clusters of Azure virtual machines. Compute instance is also supported as a compute target.
+
+ * An **Azure Databricks cluster** in your Azure subscription. You can find more details in [Set up an Azure Databricks cluster for automated ML](../how-to-configure-databricks-automl-environment.md). See this [GitHub site](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-databricks) for examples of notebooks with Azure Databricks.
+
+Consider these factors when choosing your compute target:
+
+| | Pros (Advantages) |Cons (Handicaps) |
+|-|-|-|
+|**Local compute target** | <li> No environment start-up time | <li> Subset of features<li> Can't parallelize runs <li> Worse for large data. <li>No data streaming while training <li> No DNN-based featurization <li> Python SDK only |
+|**Remote ML compute clusters**| <li> Full set of features <li> Parallelize child runs <li> Large data support<li> DNN-based featurization <li> Dynamic scalability of compute cluster on demand <li> No-code experience (web UI) also available | <li> Start-up time for cluster nodes <li> Start-up time for each child run |
+
+<a name='configure-experiment'></a>
+
+## Configure your experiment settings
+
+There are several options that you can use to configure your automated ML experiment. These parameters are set by instantiating an `AutoMLConfig` object. See the [AutoMLConfig class](/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig) for a full list of parameters.
+
+The following example is for a classification task. The experiment uses AUC weighted as the [primary metric](#primary-metric), has an experiment timeout set to 30 minutes, and uses 2 cross-validation folds.
+
+```python
+ automl_classifier=AutoMLConfig(task='classification',
+ primary_metric='AUC_weighted',
+ experiment_timeout_minutes=30,
+ blocked_models=['XGBoostClassifier'],
+ training_data=train_data,
+ label_column_name=label,
+ n_cross_validations=2)
+```
+You can also configure forecasting tasks, which requires extra setup. See the [Set up AutoML for time-series forecasting](../how-to-auto-train-forecast.md) article for more details.
+
+```python
+ time_series_settings = {
+ 'time_column_name': time_column_name,
+ 'time_series_id_column_names': time_series_id_column_names,
+ 'forecast_horizon': n_test_periods
+ }
+
+ automl_config = AutoMLConfig(
+ task = 'forecasting',
+ debug_log='automl_oj_sales_errors.log',
+ primary_metric='normalized_root_mean_squared_error',
+ experiment_timeout_minutes=20,
+ training_data=train_data,
+ label_column_name=label,
+ n_cross_validations=5,
+ path=project_folder,
+ verbosity=logging.INFO,
+ **time_series_settings
+ )
+```
+
+### Supported models
+
+Automated machine learning tries different models and algorithms during the automation and tuning process. As a user, there is no need for you to specify the algorithm.
+
+The three different `task` parameter values determine the list of algorithms, or models, to apply. Use the `allowed_models` or `blocked_models` parameters to further modify iterations with the available models to include or exclude.
+The following table summarizes the supported models by task type.
+
+> [!NOTE]
+> If you plan to export your automated ML created models to an [ONNX model](../concept-onnx.md), only those algorithms indicated with an * (asterisk) are able to be converted to the ONNX format. Learn more about [converting models to ONNX](../concept-automated-ml.md#use-with-onnx). <br> <br> Also note, ONNX only supports classification and regression tasks at this time.
+>
+Classification | Regression | Time Series Forecasting
+|-- |-- |--
+[Logistic Regression](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.classification#logisticregression-logisticregression-)* | [Elastic Net](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.regression#elasticnet-elasticnet-)* | [AutoARIMA](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting#autoarima-autoarima-)
+[Light GBM](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.classification#lightgbmclassifier-lightgbm-)* | [Light GBM](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.regression#lightgbmregressor-lightgbm-)* | [Prophet](/python/api/azureml-automl-core/azureml.automl.core.shared.constants.supportedmodels.forecasting#prophet-prophet-)
+[Gradient Boosting](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.classification#gradientboosting-gradientboosting-)* | [Gradient Boosting](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.regression#gradientboostingregressor-gradientboosting-)* | [Elastic Net](https://scikit-learn.org/stable/modules/linear_model.html#elastic-net)
+[Decision Tree](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.classification#decisiontree-decisiontree-)* |[Decision Tree](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.regression#decisiontreeregressor-decisiontree-)* |[Light GBM](https://lightgbm.readthedocs.io/en/latest/index.html)
+[K Nearest Neighbors](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.classification#knearestneighborsclassifier-knn-)* |[K Nearest Neighbors](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.regression#knearestneighborsregressor-knn-)* | [Gradient Boosting](https://scikit-learn.org/stable/modules/ensemble.html#regression)
+[Linear SVC](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.classification#linearsupportvectormachine-linearsvm-)* |[LARS Lasso](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.regression#lassolars-lassolars-)* | [Decision Tree](https://scikit-learn.org/stable/modules/tree.html#regression)
+[Support Vector Classification (SVC)](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.classification#supportvectormachine-svm-)* |[Stochastic Gradient Descent (SGD)](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.regression#sgdregressor-sgd-)* | [Arimax](/python/api/azureml-automl-core/azureml.automl.core.shared.constants.supportedmodels.forecasting#arimax-arimax-)
+[Random Forest](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.classification#randomforest-randomforest-)* | [Random Forest](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.regression#randomforestregressor-randomforest-) | [LARS Lasso](https://scikit-learn.org/stable/modules/linear_model.html#lars-lasso)
+[Extremely Randomized Trees](https://scikit-learn.org/stable/modules/ensemble.html#extremely-randomized-trees)* | [Extremely Randomized Trees](https://scikit-learn.org/stable/modules/ensemble.html#extremely-randomized-trees)* | [Stochastic Gradient Descent (SGD)](https://scikit-learn.org/stable/modules/sgd.html#regression)
+[Xgboost](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.classification#xgboostclassifier-xgboostclassifier-)* |[Xgboost](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.regression#xgboostregressor-xgboostregressor-)* | [Random Forest](https://scikit-learn.org/stable/modules/ensemble.html#random-forests)
+[Averaged Perceptron Classifier](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.classification#averagedperceptronclassifier-averagedperceptronclassifier-)| [Online Gradient Descent Regressor](/python/api/nimbusml/nimbusml.linear_model.onlinegradientdescentregressor?preserve-view=true&view=nimbusml-py-latest) | [Xgboost](https://xgboost.readthedocs.io/en/latest/parameter.html)
+[Naive Bayes](https://scikit-learn.org/stable/modules/naive_bayes.html#bernoulli-naive-bayes)* |[Fast Linear Regressor](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.regression#fastlinearregressor-fastlinearregressor-)| [ForecastTCN](/python/api/azureml-automl-core/azureml.automl.core.shared.constants.supportedmodels.forecasting#tcnforecaster-tcnforecaster-)
+[Stochastic Gradient Descent (SGD)](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.classification#sgdclassifier-sgd-)* || Naive
+[Linear SVM Classifier](/python/api/nimbusml/nimbusml.linear_model.linearsvmbinaryclassifier?preserve-view=true&view=nimbusml-py-latest)* || SeasonalNaive
+||| Average
+||| SeasonalAverage
+||| [ExponentialSmoothing](/python/api/azureml-automl-core/azureml.automl.core.shared.constants.supportedmodels.forecasting#exponentialsmoothing-exponentialsmoothing-)
+
+### Primary metric
+
+The `primary_metric` parameter determines the metric to be used during model training for optimization. The available metrics you can select are determined by the task type you choose.
+
+Choosing a primary metric for automated ML to optimize depends on many factors. We recommend your primary consideration be to choose a metric that best represents your business needs. Then consider if the metric is suitable for your dataset profile (data size, range, class distribution, etc.). The following sections summarize the recommended primary metrics based on task type and business scenario.
+
+Learn about the specific definitions of these metrics in [Understand automated machine learning results](../how-to-understand-automated-ml.md).
+
+#### Metrics for classification scenarios
+
+Threshold-dependent metrics, like `accuracy`, `recall_score_weighted`, `norm_macro_recall`, and `precision_score_weighted` may not optimize as well for datasets that are small, have very large class skew (class imbalance), or when the expected metric value is very close to 0.0 or 1.0. In those cases, `AUC_weighted` can be a better choice for the primary metric. After automated ML completes, you can choose the winning model based on the metric best suited to your business needs.
+
+| Metric | Example use case(s) |
+| - | - |
+| `accuracy` | Image classification, Sentiment analysis, Churn prediction |
+| `AUC_weighted` | Fraud detection, Image classification, Anomaly detection/spam detection |
+| `average_precision_score_weighted` | Sentiment analysis |
+| `norm_macro_recall` | Churn prediction |
+| `precision_score_weighted` | |
+
+#### Metrics for regression scenarios
+
+`r2_score`, `normalized_mean_absolute_error`, and `normalized_root_mean_squared_error` all try to minimize prediction errors. `r2_score` and `normalized_root_mean_squared_error` both minimize average squared errors, while `normalized_mean_absolute_error` minimizes the average absolute value of errors. Absolute value treats errors at all magnitudes alike, while squared errors have a much larger penalty for errors with larger absolute values. Depending on whether larger errors should be punished more or not, you can choose to optimize squared error or absolute error.
+
+The main difference between `r2_score` and `normalized_root_mean_squared_error` is the way they are normalized and their meanings. `normalized_root_mean_squared_error` is root mean squared error normalized by range and can be interpreted as the average error magnitude for prediction. `r2_score` is mean squared error normalized by an estimate of variance of data. It is the proportion of variation that can be captured by the model.
+
+> [!Note]
+> `r2_score` and `normalized_root_mean_squared_error` also behave similarly as primary metrics. If a fixed validation set is applied, these two metrics are optimizing the same target, mean squared error, and will be optimized by the same model. When only a training set is available and cross-validation is applied, they would be slightly different as the normalizer for `normalized_root_mean_squared_error` is fixed as the range of training set, but the normalizer for `r2_score` would vary for every fold as it's the variance for each fold.
+
+If the rank, instead of the exact value is of interest, `spearman_correlation` can be a better choice as it measures the rank correlation between real values and predictions.
+
+However, currently no primary metric for regression addresses relative difference. All of `r2_score`, `normalized_mean_absolute_error`, and `normalized_root_mean_squared_error` treat a $20k prediction error the same for a worker with a $30k salary as for a worker making $20M, if these two data points belong to the same dataset for regression, or to the same time series specified by the time series identifier. In reality, predicting only $20k off from a $20M salary is very close (a small 0.1% relative difference), whereas $20k off from $30k is not close (a large 67% relative difference). To address the issue of relative difference, you can train a model with the available primary metrics, and then select the model with the best `mean_absolute_percentage_error` or `root_mean_squared_log_error`.
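+
+The arithmetic behind that example can be sketched with a short snippet using hypothetical salary values:
+
+```python
+import numpy as np
+
+y_true = np.array([30_000, 20_000_000])   # two hypothetical salaries
+y_pred = y_true - 20_000                  # both predictions are off by $20k
+
+relative_errors = np.abs((y_true - y_pred) / y_true) * 100
+print(relative_errors)                    # approximately [66.7, 0.1] percent
+print(f"MAPE: {relative_errors.mean():.1f}%")  # dominated by the $30k worker's error
+```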
+
+| Metric | Example use case(s) |
+| - | - |
+| `spearman_correlation` | |
+| `normalized_root_mean_squared_error` | Price prediction (house/product/tip), Review score prediction |
+| `r2_score` | Airline delay, Salary estimation, Bug resolution time |
+| `normalized_mean_absolute_error` | |
+
+#### Metrics for time series forecasting scenarios
+
+The recommendations are similar to those noted for regression scenarios.
+
+| Metric | Example use case(s) |
+| - | - |
+| `normalized_root_mean_squared_error` | Price prediction (forecasting), Inventory optimization, Demand forecasting |
+| `r2_score` | Price prediction (forecasting), Inventory optimization, Demand forecasting |
+| `normalized_mean_absolute_error` | |
+
+### Data featurization
+
+In every automated ML experiment, your data is automatically scaled and normalized to help *certain* algorithms that are sensitive to features that are on different scales. This scaling and normalization is referred to as featurization.
+See [Featurization in AutoML](../how-to-configure-auto-features.md) for more detail and code examples.
+
+> [!NOTE]
+> Automated machine learning featurization steps (feature normalization, handling missing data, converting text to numeric, etc.) become part of the underlying model. When using the model for predictions, the same featurization steps applied during training are applied to your input data automatically.
+
+When configuring your experiments in your `AutoMLConfig` object, you can enable/disable the setting `featurization`. The following table shows the accepted settings for featurization in the [AutoMLConfig object](/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig).
+
+|Featurization Configuration | Description |
+| - | - |
+|`"featurization": 'auto'`| Indicates that as part of preprocessing, [data guardrails and featurization steps](../how-to-configure-auto-features.md#featurization) are performed automatically. **Default setting**.|
+|`"featurization": 'off'`| Indicates featurization step shouldn't be done automatically.|
+|`"featurization":`&nbsp;`'FeaturizationConfig'`| Indicates customized featurization step should be used. [Learn how to customize featurization](../how-to-configure-auto-features.md#customize-featurization).|
+
+<a name="ensemble"></a>
+
+### Ensemble configuration
+
+Ensemble models are enabled by default, and appear as the final run iterations in an AutoML run. Currently **VotingEnsemble** and **StackEnsemble** are supported.
+
+Voting implements soft-voting, which uses weighted averages. The stacking implementation uses a two layer implementation, where the first layer has the same models as the voting ensemble, and the second layer model is used to find the optimal combination of the models from the first layer.
+
+If you are using ONNX models, **or** have model-explainability enabled, stacking is disabled and only voting is utilized.
+
+Ensemble training can be disabled by using the `enable_voting_ensemble` and `enable_stack_ensemble` boolean parameters.
+
+```python
+automl_classifier = AutoMLConfig(
+ task='classification',
+ primary_metric='AUC_weighted',
+ experiment_timeout_minutes=30,
+ training_data=data_train,
+ label_column_name=label,
+ n_cross_validations=5,
+ enable_voting_ensemble=False,
+ enable_stack_ensemble=False
+ )
+```
+
+To alter the default ensemble behavior, there are multiple default arguments that can be provided as `kwargs` in an `AutoMLConfig` object.
+
+> [!IMPORTANT]
+> The following parameters aren't explicit parameters of the AutoMLConfig class.
+* `ensemble_download_models_timeout_sec`: During **VotingEnsemble** and **StackEnsemble** model generation, multiple fitted models from the previous child runs are downloaded. If you encounter this error: `AutoMLEnsembleException: Could not find any models for running ensembling`, then you may need to provide more time for the models to be downloaded. The default value is 300 seconds for downloading these models in parallel and there is no maximum timeout limit. Configure this parameter with a higher value than 300 secs, if more time is needed.
+
+ > [!NOTE]
+ > If the timeout is reached and there are models downloaded, then the ensembling proceeds with as many models as it has downloaded. It's not required that all the models finish downloading within that timeout.
+
+The following parameters only apply to **StackEnsemble** models:
+
+* `stack_meta_learner_type`: the meta-learner is a model trained on the output of the individual heterogeneous models. Default meta-learners are `LogisticRegression` for classification tasks (or `LogisticRegressionCV` if cross-validation is enabled) and `ElasticNet` for regression/forecasting tasks (or `ElasticNetCV` if cross-validation is enabled). This parameter can be one of the following strings: `LogisticRegression`, `LogisticRegressionCV`, `LightGBMClassifier`, `ElasticNet`, `ElasticNetCV`, `LightGBMRegressor`, or `LinearRegression`.
+
+* `stack_meta_learner_train_percentage`: specifies the proportion of the training set (when choosing train and validation type of training) to be reserved for training the meta-learner. Default value is `0.2`.
+
+* `stack_meta_learner_kwargs`: optional parameters to pass to the initializer of the meta-learner. These parameters and parameter types mirror the parameters and parameter types from the corresponding model constructor, and are forwarded to the model constructor.
+
+The following code shows an example of specifying custom ensemble behavior in an `AutoMLConfig` object.
+
+```python
+ensemble_settings = {
+ "ensemble_download_models_timeout_sec": 600
+ "stack_meta_learner_type": "LogisticRegressionCV",
+ "stack_meta_learner_train_percentage": 0.3,
+ "stack_meta_learner_kwargs": {
+ "refit": True,
+ "fit_intercept": False,
+ "class_weight": "balanced",
+ "multi_class": "auto",
+ "n_jobs": -1
+ }
+ }
+automl_classifier = AutoMLConfig(
+ task='classification',
+ primary_metric='AUC_weighted',
+ experiment_timeout_minutes=30,
+ training_data=train_data,
+ label_column_name=label,
+ n_cross_validations=5,
+ **ensemble_settings
+ )
+```
+
+<a name="exit"></a>
+
+### Exit criteria
+
+There are a few options you can define in your AutoMLConfig to end your experiment.
+
+|Criteria| Description
+|-|-
+No&nbsp;criteria | If you do not define any exit parameters the experiment continues until no further progress is made on your primary metric.
+After&nbsp;a&nbsp;length&nbsp;of&nbsp;time| Use `experiment_timeout_minutes` in your settings to define how long, in minutes, your experiment should continue to run. <br><br> To help avoid experiment time out failures, there is a minimum of 15 minutes, or 60 minutes if your row by column size exceeds 10 million.
+A&nbsp;score&nbsp;has&nbsp;been&nbsp;reached| Use `experiment_exit_score` to complete the experiment after a specified primary metric score has been reached.
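+
+For example, here is a sketch that combines a timeout with an exit score; the threshold value is arbitrary and the dataset variables are assumed from the earlier examples:
+
+```python
+automl_config = AutoMLConfig(
+    task='classification',
+    primary_metric='AUC_weighted',
+    training_data=train_data,
+    label_column_name=label,
+    experiment_timeout_minutes=60,
+    experiment_exit_score=0.95
+)
+```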
+
+## Run experiment
+
+> [!WARNING]
+> If you run an experiment with the same configuration settings and primary metric multiple times, you'll likely see variation in each experiment's final metrics score and generated models. The algorithms automated ML employs have inherent randomness that can cause slight variation in the models output by the experiment and the recommended model's final metrics score, like accuracy. You'll likely also see results with the same model name, but different hyperparameters used.
+
+For automated ML, you create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
+
+```python
+from azureml.core.experiment import Experiment
+from azureml.core.workspace import Workspace
+
+ws = Workspace.from_config()
+
+# Choose a name for the experiment and specify the project folder.
+experiment_name = 'Tutorial-automl'
+project_folder = './sample_projects/automl-classification'
+
+experiment = Experiment(ws, experiment_name)
+```
+
+Submit the experiment to run and generate a model. Pass the `AutoMLConfig` to the `submit` method to generate the model.
+
+```python
+run = experiment.submit(automl_config, show_output=True)
+```
+
+>[!NOTE]
+>Dependencies are first installed on a new machine. It may take up to 10 minutes before output is shown.
+>Setting `show_output` to `True` results in output being shown on the console.
+
+### Multiple child runs on clusters
+
+Automated ML experiment child runs can be performed on a cluster that is already running another experiment. However, the timing depends on how many nodes the cluster has, and if those nodes are available to run a different experiment.
+
+Each node in the cluster acts as an individual virtual machine (VM) that can accomplish a single training run; for automated ML this means a child run. If all the nodes are busy, the new experiment is queued. But if there are free nodes, the new experiment will run automated ML child runs in parallel in the available nodes/VMs.
+
+To help manage child runs and when they can be performed, we recommend you create a dedicated cluster per experiment, and match the number of `max_concurrent_iterations` of your experiment to the number of nodes in the cluster. This way, you use all the nodes of the cluster at the same time with the number of concurrent child runs/iterations you want.
+
+Configure `max_concurrent_iterations` in your `AutoMLConfig` object. If it is not configured, then by default only one concurrent child run/iteration is allowed per experiment.
+For a compute instance, `max_concurrent_iterations` can be set to the number of cores on the compute instance VM.
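+
+For example, here is a sketch that matches concurrency to a four-node cluster; the cluster name is a placeholder and `ws` is assumed to be your workspace object:
+
+```python
+from azureml.core.compute import ComputeTarget
+
+compute_target = ComputeTarget(workspace=ws, name="cpu-cluster")  # an existing 4-node cluster
+
+automl_config = AutoMLConfig(
+    task='classification',
+    compute_target=compute_target,
+    training_data=train_data,
+    label_column_name=label,
+    max_concurrent_iterations=4  # one concurrent child run per node
+)
+```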
+
+## Explore models and metrics
+
+Automated ML offers options for you to monitor and evaluate your training results.
+
+* You can view your training results in a widget or inline if you are in a notebook. See [Monitor automated machine learning runs](#monitor) for more details.
+
+* For definitions and examples of the performance charts and metrics provided for each run, see [Evaluate automated machine learning experiment results](../how-to-understand-automated-ml.md).
+
+* To get a featurization summary and understand what features were added to a particular model, see [Featurization transparency](../how-to-configure-auto-features.md#featurization-transparency).
+
+You can view the hyperparameters, the scaling and normalization techniques, and algorithm applied to a specific automated ML run with the [custom code solution, `print_model()`](../how-to-configure-auto-features.md#scaling-and-normalization).
+
+> [!TIP]
+> Automated ML also lets you [view the generated model training code for Auto ML trained models](../how-to-generate-automl-training-code.md). This functionality is in public preview and can change at any time.
+
+## <a name="monitor"></a> Monitor automated machine learning runs
+
+For automated ML runs, to access the charts from a previous run, replace `<<experiment_name>>` with the appropriate experiment name:
+
+```python
+from azureml.widgets import RunDetails
+from azureml.core.run import Run
+
+experiment = Experiment(workspace, <<experiment_name>>)
+run_id = 'autoML_my_runID' #replace with run_ID
+run = Run(experiment, run_id)
+RunDetails(run).show()
+```
+
+![Jupyter notebook widget for Automated Machine Learning](../media/how-to-configure-auto-train/azure-machine-learning-auto-ml-widget.png)
+
+## Test models (preview)
+
+>[!IMPORTANT]
+> Testing your models with a test dataset to evaluate automated ML generated models is a preview feature. This capability is an [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview feature, and may change at any time.
+
+> [!WARNING]
+> This feature is not available for the following automated ML scenarios
+> * [Computer vision tasks (preview)](../how-to-auto-train-image-models.md)
+> * [Many models and hierarchical time series forecasting training (preview)](../how-to-auto-train-forecast.md)
+> * [Forecasting tasks where deep learning neural networks (DNN) are enabled](../how-to-auto-train-forecast.md#enable-deep-learning)
+> * [Automated ML runs from local computes or Azure Databricks clusters](../how-to-configure-auto-train.md#compute-to-run-experiment)
+
+Passing the `test_data` or `test_size` parameter into the `AutoMLConfig` automatically triggers a remote test run that uses the provided test data to evaluate the best model that automated ML recommends upon completion of the experiment. This remote test run is done at the end of the experiment, once the best model is determined. See how to [pass test data into your `AutoMLConfig`](../how-to-configure-cross-validation-data-splits.md#provide-test-data-preview).
+
+### Get test run results
+
+You can get the predictions and metrics from the remote test run from the [Azure Machine Learning studio](../how-to-use-automated-ml-for-ml-models.md#view-remote-test-run-results-preview) or with the following code.
++
+```python
+best_run, fitted_model = remote_run.get_output()
+test_run = next(best_run.get_children(type='automl.model_test'))
+test_run.wait_for_completion(show_output=False, wait_post_processing=True)
+
+# Get test metrics
+test_run_metrics = test_run.get_metrics()
+for name, value in test_run_metrics.items():
+ print(f"{name}: {value}")
+
+# Get test predictions as a Dataset
+test_run_details = test_run.get_details()
+dataset_id = test_run_details['outputDatasets'][0]['identifier']['savedId']
+test_run_predictions = Dataset.get_by_id(workspace, dataset_id)
+predictions_df = test_run_predictions.to_pandas_dataframe()
+
+# Alternatively, the test predictions can be retrieved via the run outputs.
+test_run.download_file("predictions/predictions.csv")
+predictions_df = pd.read_csv("predictions.csv")
+
+```
+
+The model test run generates the predictions.csv file that's stored in the default datastore created with the workspace. This datastore is visible to all users with the same subscription. Test runs are not recommended for scenarios where any of the information used for or created by the test run needs to remain private.
+
+### Test existing automated ML model
+
+To test other existing automated ML models (from the best run or a child run), use [`ModelProxy()`](/python/api/azureml-train-automl-client/azureml.train.automl.model_proxy.modelproxy) to test a model after the main AutoML run has completed. `ModelProxy()` already returns the predictions and metrics and does not require further processing to retrieve the outputs.
+
+> [!NOTE]
+> ModelProxy is an [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview class, and may change at any time.
+
+The following code demonstrates how to test a model from any run by using [ModelProxy.test()](/python/api/azureml-train-automl-client/azureml.train.automl.model_proxy.modelproxy#test-test-data--azureml-data-abstract-dataset-abstractdataset--include-predictions-only--boolfalse--typing-tuple-azureml-data-abstract-dataset-abstractdataset--typing-dict-str--typing-any--) method. In the test() method you have the option to specify if you only want to see the predictions of the test run with the `include_predictions_only` parameter.
+
+```python
+from azureml.train.automl.model_proxy import ModelProxy
+
+model_proxy = ModelProxy(child_run=my_run, compute_target=cpu_cluster)
+predictions, metrics = model_proxy.test(test_data, include_predictions_only=True)
+```
+
+## Register and deploy models
+
+After you test a model and confirm you want to use it in production, you can register it for later use and deployment.
+
+To register a model from an automated ML run, use the [`register_model()`](/python/api/azureml-train-automl-client/azureml.train.automl.run.automlrun#register-model-model-name-none--description-none--tags-none--iteration-none--metric-none-) method.
+
+```Python
+
+# get_output returns the best run and the corresponding fitted model
+best_run, fitted_model = run.get_output()
+print(fitted_model.steps)
+
+model_name = best_run.properties['model_name']
+description = 'AutoML forecast example'
+tags = None
+
+model = run.register_model(model_name = model_name,
+ description = description,
+ tags = tags)
+```
++
+For details on how to create a deployment configuration and deploy a registered model to a web service, see [how and where to deploy a model](../how-to-deploy-and-where.md?tabs=python#define-a-deployment-configuration).
+
+> [!TIP]
+> For registered models, one-click deployment is available via the [Azure Machine Learning studio](https://ml.azure.com). See [how to deploy registered models from the studio](../how-to-use-automated-ml-for-ml-models.md#deploy-your-model).
+<a name="explain"></a>
+
+## Model interpretability
+
+Model interpretability allows you to understand why your models made predictions, and the underlying feature importance values. The SDK includes various packages for enabling model interpretability features, both at training and inference time, for local and deployed models.
+
+See how to [enable interpretability features](../how-to-machine-learning-interpretability-automl.md) specifically within automated ML experiments.
+
+For general information on how model explanations and feature importance can be enabled in other areas of the SDK outside of automated machine learning, see the [concept article on interpretability](../how-to-machine-learning-interpretability.md) .
+
+> [!NOTE]
+> The ForecastTCN model is not currently supported by the Explanation Client. This model will not return an explanation dashboard if it is returned as the best model, and does not support on-demand explanation runs.
+
+## Next steps
++ Learn more about [how and where to deploy a model](../how-to-deploy-and-where.md).
++ Learn more about [how to train a regression model with Automated machine learning](../tutorial-auto-train-models.md).
++ [Troubleshoot automated ML experiments](../how-to-troubleshoot-auto-ml.md).
machine-learning How To Create Attach Compute Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-create-attach-compute-cluster.md
+
+ Title: Create compute clusters CLI v1
+
+description: Learn how to create compute clusters in your Azure Machine Learning workspace with CLI v1. Use the compute cluster as a compute target for training or inference.
+ Last updated : 05/02/2022
+# Create an Azure Machine Learning compute cluster with CLI v1
++
+> [!div class="op_single_selector" title1="Select the Azure Machine Learning CLI version you are using:"]
+> * [v1](how-to-create-attach-compute-cluster.md)
+> * [v2 (preview)](../how-to-create-attach-compute-cluster.md)
+
+Learn how to create and manage a [compute cluster](../concept-compute-target.md#azure-machine-learning-compute-managed) in your Azure Machine Learning workspace.
+
+You can use Azure Machine Learning compute cluster to distribute a training or batch inference process across a cluster of CPU or GPU compute nodes in the cloud. For more information on the VM sizes that include GPUs, see [GPU-optimized virtual machine sizes](../../virtual-machines/sizes-gpu.md).
+
+In this article, learn how to:
+
+* Create a compute cluster
+* Lower your compute cluster cost
+* Set up a [managed identity](../../active-directory/managed-identities-azure-resources/overview.md) for the cluster
+
+This article covers only the CLI v1 way to accomplish these tasks. To see how to use the SDK, CLI v2, or studio, see [Create an Azure Machine Learning compute cluster (CLI v2)](../how-to-create-attach-compute-cluster.md).
+
+## Prerequisites
+
+* An Azure Machine Learning workspace. For more information, see [Create an Azure Machine Learning workspace](../how-to-manage-workspace.md).
+
+* The [Azure CLI extension for Machine Learning service (v1)](reference-azure-machine-learning-cli.md), [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro), or the [Azure Machine Learning Visual Studio Code extension](../how-to-setup-vs-code.md).
++
+## What is a compute cluster?
+
+An Azure Machine Learning compute cluster is a managed-compute infrastructure that allows you to easily create a single or multi-node compute. The compute cluster is a resource that can be shared with other users in your workspace. The compute scales up automatically when a job is submitted, and can be put in an Azure Virtual Network. The compute cluster also supports **no public IP (preview)** deployment in a virtual network. The compute executes in a containerized environment and packages your model dependencies in a [Docker container](https://www.docker.com/why-docker).
+
+Compute clusters can run jobs securely in a [virtual network environment](../how-to-secure-training-vnet.md), without requiring enterprises to open up SSH ports. The job executes in a containerized environment and packages your model dependencies in a Docker container.
+
+## Limitations
+
+* Some of the scenarios listed in this document are marked as __preview__. Preview functionality is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+* Compute clusters can be created in a different region than your workspace. This functionality is in __preview__, and is only available for __compute clusters__, not compute instances. This preview is not available if you are using a private endpoint-enabled workspace.
+
+ > [!WARNING]
+ > When using a compute cluster in a different region than your workspace or datastores, you may see increased network latency and data transfer costs. The latency and costs can occur when creating the cluster, and when running jobs on it.
+
+* We currently support only creation (and not updating) of clusters through [ARM templates](/azure/templates/microsoft.machinelearningservices/workspaces/computes). For updating compute, we recommend using the SDK, Azure CLI or UX for now.
+
+* Azure Machine Learning Compute has default limits, such as the number of cores that can be allocated. For more information, see [Manage and request quotas for Azure resources](../how-to-manage-quotas.md).
+
+* Azure allows you to place _locks_ on resources, so that they cannot be deleted or are read only. __Do not apply resource locks to the resource group that contains your workspace__. Applying a lock to the resource group that contains your workspace will prevent scaling operations for Azure ML compute clusters. For more information on locking resources, see [Lock resources to prevent unexpected changes](../../azure-resource-manager/management/lock-resources.md).
+
+> [!TIP]
+> Clusters can generally scale up to 100 nodes as long as you have enough quota for the number of cores required. By default, clusters are set up with inter-node communication enabled between the nodes of the cluster to support MPI jobs, for example. However, you can scale your clusters to thousands of nodes by [raising a support ticket](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) and requesting to allowlist your subscription, workspace, or a specific cluster for disabling inter-node communication.
+
+## Create
+
+**Time estimate**: Approximately 5 minutes.
+
+Azure Machine Learning Compute can be reused across runs. The compute can be shared with other users in the workspace and is retained between runs, automatically scaling nodes up or down based on the number of runs submitted and the `max_nodes` set on your cluster. The `min_nodes` setting controls the minimum nodes available.
+
+The dedicated cores per region per VM family quota and total regional quota, which apply to compute cluster creation, are unified and shared with the Azure Machine Learning training compute instance quota.
++
+The compute autoscales down to zero nodes when it isn't used. Dedicated VMs are created to run your jobs as needed.
+
++
+```azurecli-interactive
+az ml computetarget create amlcompute -n cpu --min-nodes 1 --max-nodes 1 -s STANDARD_D3_V2 --location westus2
+```
+
+> [!WARNING]
+> When using a compute cluster in a different region than your workspace or datastores, you may see increased network latency and data transfer costs. The latency and costs can occur when creating the cluster, and when running jobs on it.
+
+For more information, see the [az ml computetarget create amlcompute](/cli/azure/ml(v1)/computetarget/create#az-ml-computetarget-create-amlcompute) reference.
+++
+ ## Lower your compute cluster cost
+
+You may also choose to use [low-priority VMs](../how-to-manage-optimize-cost.md#low-pri-vm) to run some or all of your workloads. These VMs do not have guaranteed availability and may be preempted while in use. You will have to restart a preempted job.
+++
+Set the `vm-priority`:
+
+```azurecli-interactive
+az ml computetarget create amlcompute --name lowpriocluster --vm-size Standard_NC6 --max-nodes 5 --vm-priority lowpriority
+```
++
+## Set up managed identity
+++++
+* Create a new managed compute cluster with managed identity
+
+ * User-assigned managed identity
+
+ ```azurecli
+ az ml computetarget create amlcompute --name cpu-cluster --vm-size Standard_NC6 --max-nodes 5 --assign-identity '/subscriptions/<subscription_id>/resourcegroups/<resource_group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<user_assigned_identity>'
+ ```
+
+ * System-assigned managed identity
+
+ ```azurecli
+ az ml computetarget create amlcompute --name cpu-cluster --vm-size Standard_NC6 --max-nodes 5 --assign-identity '[system]'
+ ```
+* Add a managed identity to an existing cluster:
+
+ * User-assigned managed identity
+ ```azurecli
+ az ml computetarget amlcompute identity assign --name cpu-cluster '/subscriptions/<subscription_id>/resourcegroups/<resource_group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<user_assigned_identity>'
+ ```
+ * System-assigned managed identity
+
+ ```azurecli
+ az ml computetarget amlcompute identity assign --name cpu-cluster '[system]'
+ ```
++++
+### Managed identity usage
++
+## Troubleshooting
+
+Some users who created their Azure Machine Learning workspace from the Azure portal before the GA release might not be able to create AmlCompute in that workspace. You can either raise a support request against the service, or create a new workspace through the portal or the SDK to unblock yourself immediately.
+
+### Stuck at resizing
+
+If your Azure Machine Learning compute cluster appears stuck at resizing (0 -> 0) for the node state, this may be caused by Azure resource locks.
++
+## Next steps
+
+Use your compute cluster to:
+
+* [Submit a training run](../how-to-set-up-training-targets.md)
+* [Run batch inference](../tutorial-pipeline-batch-scoring-classification.md).
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-create-manage-compute-instance.md
+
+ Title: Create and manage a compute instance with CLI v1
+
+description: Learn how to create and manage an Azure Machine Learning compute instance with CLI v1. Use it as your development environment, or as a compute target for dev/test purposes.
++++++++ Last updated : 05/02/2022++
+# Create and manage an Azure Machine Learning compute instance with CLI v1
++
+Learn how to create and manage a [compute instance](../concept-compute-instance.md) in your Azure Machine Learning workspace with CLI v1.
+
+Use a compute instance as your fully configured and managed development environment in the cloud. For development and testing, you can also use the instance as a [training compute target](../concept-compute-target.md#train) or for an [inference target](../concept-compute-target.md#deploy). A compute instance can run multiple jobs in parallel and has a job queue. As a development environment, a compute instance can't be shared with other users in your workspace.
+
+Compute instances can run jobs securely in a [virtual network environment](../how-to-secure-training-vnet.md), without requiring enterprises to open up SSH ports. The job executes in a containerized environment and packages your model dependencies in a Docker container.
+
+In this article, you learn how to:
+
+* Create a compute instance
+* Manage (start, stop, restart, delete) a compute instance
+
+> [!NOTE]
+> This article covers only how to do these tasks using CLI v1. For more recent ways to manage a compute instance, see [Create and manage an Azure Machine Learning compute instance](../how-to-create-manage-compute-instance.md).
+
+## Prerequisites
+
+* An Azure Machine Learning workspace. For more information, see [Create an Azure Machine Learning workspace](../how-to-manage-workspace.md).
+
+* The [Azure CLI extension for Machine Learning service (v1)](reference-azure-machine-learning-cli.md)
+
+## Create
+
+> [!IMPORTANT]
+> Items marked (preview) below are currently in public preview.
+> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+**Time estimate**: Approximately 5 minutes.
+
+Creating a compute instance is a one-time process for your workspace. You can reuse the compute as a development workstation or as a compute target for training. You can have multiple compute instances attached to your workspace.
+
+The dedicated cores per region per VM family quota and total regional quota, which apply to compute instance creation, are unified and shared with the Azure Machine Learning training compute cluster quota. Stopping the compute instance doesn't release quota, to ensure you'll be able to restart the compute instance. It isn't possible to change the virtual machine size of a compute instance after it's created.
+
+The following example demonstrates how to create a compute instance:
++
+```azurecli-interactive
+az ml computetarget create computeinstance -n instance -s "STANDARD_D3_V2" -v
+```
+
+For more information, see the [`az ml computetarget create computeinstance`](/cli/azure/ml(v1)/computetarget/create#az-ml-computetarget-create-computeinstance) reference.
++
+## Manage
+
+Start, stop, restart, and delete a compute instance. A compute instance doesn't automatically scale down, so make sure to stop the resource to prevent ongoing charges. Stopping a compute instance deallocates it. Then start it again when you need it. While stopping the compute instance stops the billing for compute hours, you'll still be billed for disk, public IP, and standard load balancer.
+
+> [!TIP]
+> The compute instance has a 120-GB OS disk. If you run out of disk space, [use the terminal](../how-to-access-terminal.md) to clear at least 1-2 GB before you stop or restart the compute instance. Don't stop the compute instance by issuing `sudo shutdown` from the terminal. The temporary disk size on a compute instance depends on the VM size chosen and is mounted on `/mnt`.
++
+In the examples below, the name of the compute instance is **instance**.
+
+* Stop
+
+ ```azurecli-interactive
+ az ml computetarget stop computeinstance -n instance -v
+ ```
+
+ For more information, see the [`az ml computetarget stop computeinstance`](/cli/azure/ml(v1)/computetarget/computeinstance#az-ml-computetarget-computeinstance-stop) reference.
+
+* Start
+
+ ```azurecli-interactive
+ az ml computetarget start computeinstance -n instance -v
+ ```
+
+ For more information, see the [`az ml computetarget start computeinstance`](/cli/azure/ml(v1)/computetarget/computeinstance#az-ml-computetarget-computeinstance-start) reference.
+
+* Restart
+
+ ```azurecli-interactive
+ az ml computetarget restart computeinstance -n instance -v
+ ```
+
+ For more information, see the [`az ml computetarget restart computeinstance`](/cli/azure/ml(v1)/computetarget/computeinstance#az-ml-computetarget-computeinstance-restart) reference.
+
+* Delete
+
+ ```azurecli-interactive
+ az ml computetarget delete -n instance -v
+ ```
+
+ For more information, see the [`az ml computetarget delete`](/cli/azure/ml(v1)/computetarget#az-ml-computetarget-delete) reference.
+
+[Azure RBAC](../../role-based-access-control/overview.md) allows you to control which users in the workspace can create, delete, start, stop, and restart a compute instance. All users in the workspace contributor and owner role can create, delete, start, stop, and restart compute instances across the workspace. However, only the creator of a specific compute instance, or the user assigned if it was created on their behalf, is allowed to access Jupyter, JupyterLab, and RStudio on that compute instance. A compute instance is dedicated to a single user who has root access, and who can terminal in through Jupyter/JupyterLab/RStudio. Compute instance has single-user sign-in, and all actions use that user's identity for Azure RBAC and attribution of experiment runs. SSH access is controlled through a public/private key mechanism.
+
+These actions can be controlled by Azure RBAC:
+* *Microsoft.MachineLearningServices/workspaces/computes/read*
+* *Microsoft.MachineLearningServices/workspaces/computes/write*
+* *Microsoft.MachineLearningServices/workspaces/computes/delete*
+* *Microsoft.MachineLearningServices/workspaces/computes/start/action*
+* *Microsoft.MachineLearningServices/workspaces/computes/stop/action*
+* *Microsoft.MachineLearningServices/workspaces/computes/restart/action*
+* *Microsoft.MachineLearningServices/workspaces/computes/updateSchedules/action*
+
+To create a compute instance, you'll need permissions for the following actions:
+* *Microsoft.MachineLearningServices/workspaces/computes/write*
+* *Microsoft.MachineLearningServices/workspaces/checkComputeNameAvailability/action*
++
+## Next steps
+
+* [Access the compute instance terminal](../how-to-access-terminal.md)
+* [Create and manage files](../how-to-manage-files.md)
+* [Update the compute instance to the latest VM image](../concept-vulnerability-management.md#compute-instance)
+* [Submit a training run](../how-to-set-up-training-targets.md)
machine-learning How To Create Register Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-create-register-datasets.md
+
+ Title: Create Azure Machine Learning datasets
+
+description: Learn how to create Azure Machine Learning datasets to access your data for machine learning experiment runs.
++++++++ Last updated : 05/11/2022
+#Customer intent: As an experienced data scientist, I need to package my data into a consumable and reusable object to train my machine learning models.
++
+# Create Azure Machine Learning datasets
+
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK you are using:"]
+> * [v1](how-to-create-register-datasets.md)
+> * [v2 (current version)](../how-to-create-register-data-assets.md)
++
+In this article, you learn how to create Azure Machine Learning datasets to access data for your local or remote experiments with the Azure Machine Learning Python SDK. To understand where datasets fit in Azure Machine Learning's overall data access workflow, see the [Securely access data](concept-data.md#data-workflow) article.
+
+By creating a dataset, you create a reference to the data source location, along with a copy of its metadata. Because the data remains in its existing location, you incur no extra storage cost, and don't risk the integrity of your data sources. Also, datasets are lazily evaluated, which aids workflow performance. You can create datasets from datastores, public URLs, and [Azure Open Datasets](/azure/open-datasets/how-to-create-azure-machine-learning-dataset-from-open-dataset).
+
+For a low-code experience, see [Create Azure Machine Learning datasets with the Azure Machine Learning studio](../how-to-connect-data-ui.md#create-datasets).
+
+With Azure Machine Learning datasets, you can:
+
+* Keep a single copy of data in your storage, referenced by datasets.
+
+* Seamlessly access data during model training without worrying about connection strings or data paths. [Learn more about how to train with datasets](../how-to-train-with-datasets.md).
+
+* Share data and collaborate with other users.
+
+## Prerequisites
+
+To create and work with datasets, you need:
+
+* An Azure subscription. If you don't have one, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
+
+* An [Azure Machine Learning workspace](../how-to-manage-workspace.md).
+
+* The [Azure Machine Learning SDK for Python installed](/python/api/overview/azure/ml/install), which includes the azureml-datasets package.
+
+ * Create an [Azure Machine Learning compute instance](how-to-create-manage-compute-instance.md), which is a fully configured and managed development environment that includes integrated notebooks and the SDK already installed.
+
+ **OR**
+
+ * Work on your own Jupyter notebook and [install the SDK yourself](/python/api/overview/azure/ml/install).
+
+> [!NOTE]
+> Some dataset classes have dependencies on the [azureml-dataprep](https://pypi.org/project/azureml-dataprep/) package, which is only compatible with 64-bit Python. If you are developing on __Linux__, these classes rely on .NET Core 2.1, and are only supported on specific distributions. For more information on the supported distros, see the .NET Core 2.1 column in the [Install .NET on Linux](/dotnet/core/install/linux) article.
+
+> [!IMPORTANT]
+> While the package may work on older versions of Linux distros, we do not recommend using a distro that is out of mainstream support. Distros that are out of mainstream support may have security vulnerabilities, as they do not receive the latest updates. We recommend using the latest supported version of your distro that is compatible with .NET Core 2.1.
+
+## Compute size guidance
+
+When creating a dataset, review your compute processing power and the size of your data in memory. The size of your data in storage is not the same as the size of data in a dataframe. For example, data in CSV files can expand up to 10x in a dataframe, so a 1 GB CSV file can become 10 GB in a dataframe.
+
+If your data is compressed, it can expand further; 20 GB of relatively sparse data stored in compressed parquet format can expand to ~800 GB in memory. Since Parquet files store data in a columnar format, if you only need half of the columns, then you only need to load ~400 GB in memory.
+
+[Learn more about optimizing data processing in Azure Machine Learning](../concept-optimize-data-processing.md).
+
+## Dataset types
+
+There are two dataset types, based on how users consume them in training: FileDatasets and TabularDatasets. Both types can be used in Azure Machine Learning training workflows involving estimators, AutoML, HyperDrive, and pipelines.
+
+### FileDataset
+
+A [FileDataset](/python/api/azureml-core/azureml.data.file_dataset.filedataset) references single or multiple files in your datastores or public URLs.
+If your data is already cleansed, and ready to use in training experiments, you can [download or mount](../how-to-train-with-datasets.md#mount-vs-download) the files to your compute as a FileDataset object.
+
+We recommend FileDatasets for your machine learning workflows, since the source files can be in any format, which enables a wider range of machine learning scenarios, including deep learning.
+
+Create a FileDataset with the [Python SDK](#create-a-filedataset) or the [Azure Machine Learning studio](../how-to-connect-data-ui.md#create-datasets).
+
+### TabularDataset
+
+A [TabularDataset](/python/api/azureml-core/azureml.data.tabulardataset) represents data in a tabular format by parsing the provided file or list of files. This provides you with the ability to materialize the data into a pandas or Spark DataFrame so you can work with familiar data preparation and training libraries without having to leave your notebook. You can create a `TabularDataset` object from .csv, .tsv, [.parquet](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#from-parquet-files-path--validate-true--include-path-false--set-column-types-none--partition-format-none-), [.jsonl files](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#from-json-lines-files-path--validate-true--include-path-false--set-column-types-none--partition-format-none--invalid-lines--errorencoding--utf8--), and from [SQL query results](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#from-sql-query-query--validate-true--set-column-types-none--query-timeout-30-).
+
+With TabularDatasets, you can specify a time stamp from a column in the data, or from the path pattern where the data is stored, to enable a time series trait. This specification allows for easy and efficient filtering by time. For an example, see [Tabular time series-related API demo with NOAA weather data](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/work-with-data/datasets-tutorial/timeseries-datasets/tabular-timeseries-dataset-filtering.ipynb).
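+
+For example, a minimal sketch (the datastore name, file path, and the `datetime` column name are assumptions) that designates a timestamp column and then filters to a time window might look like this:
+
+```Python
+from datetime import datetime
+from azureml.core import Workspace, Datastore, Dataset
+
+ws = Workspace.from_config()
+datastore = Datastore.get(ws, '<name of your datastore>')
+
+# create a TabularDataset and designate the 'datetime' column as its timestamp
+weather_ds = Dataset.Tabular.from_delimited_files(path=(datastore, 'weather-data/*.csv'))
+weather_ds = weather_ds.with_timestamp_columns(timestamp='datetime')
+
+# filter to a time window; returns a new, unregistered dataset
+january_ds = weather_ds.time_between(datetime(2019, 1, 1), datetime(2019, 1, 31))
+```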
+
+Create a TabularDataset with [the Python SDK](#create-a-tabulardataset) or [Azure Machine Learning studio](../how-to-connect-data-ui.md#create-datasets).
+
+>[!NOTE]
+> [Automated ML](../concept-automated-ml.md) workflows generated via the Azure Machine Learning studio currently only support TabularDatasets.
+
+## Access datasets in a virtual network
+
+If your workspace is in a virtual network, you must configure the dataset to skip validation. For more information on how to use datastores and datasets in a virtual network, see [Secure a workspace and associated resources](../how-to-secure-workspace-vnet.md#datastores-and-datasets).
++
+## Create datasets from datastores
+
+For the data to be accessible by Azure Machine Learning, datasets must be created from paths in [Azure Machine Learning datastores](how-to-access-data.md) or web URLs.
+
+> [!TIP]
+> You can create datasets directly from storage urls with identity-based data access. Learn more at [Connect to storage with identity-based data access](../how-to-identity-based-data-access.md).
+
+
+To create datasets from a datastore with the Python SDK:
+
+1. Verify that you have `contributor` or `owner` access to the underlying storage service of your registered Azure Machine Learning datastore. [Check your storage account permissions in the Azure portal](/azure/role-based-access-control/check-access).
+
+1. Create the dataset by referencing paths in the datastore. You can create a dataset from multiple paths in multiple datastores. There is no hard limit on the number of files or data size that you can create a dataset from.
+
+> [!NOTE]
+> For each data path, a few requests will be sent to the storage service to check whether it points to a file or a folder. This overhead may lead to degraded performance or failure. A dataset referencing one folder with 1,000 files inside is considered referencing one data path. We recommend creating datasets that reference fewer than 100 paths in datastores for optimal performance.
+
+### Create a FileDataset
+
+Use the [`from_files()`](/python/api/azureml-core/azureml.data.dataset_factory.filedatasetfactory#from-files-path--validate-true-) method on the `FileDatasetFactory` class to load files in any format and to create an unregistered FileDataset.
+
+If your storage is behind a virtual network or firewall, set the parameter `validate=False` in your `from_files()` method. This bypasses the initial validation step, and ensures that you can create your dataset from these secure files. Learn more about how to [use datastores and datasets in a virtual network](../how-to-secure-workspace-vnet.md#datastores-and-datasets).
+
+```Python
+from azureml.core import Workspace, Datastore, Dataset
+
+# get the workspace and an existing, registered datastore
+ws = Workspace.from_config()
+datastore = Datastore.get(ws, '<name of your datastore>')
+
+# create a FileDataset pointing to files in the 'animals' folder and its subfolders recursively
+datastore_paths = [(datastore, 'animals')]
+animal_ds = Dataset.File.from_files(path=datastore_paths)
+
+# create a FileDataset from image and label files behind public web urls
+web_paths = ['https://azureopendatastorage.blob.core.windows.net/mnist/train-images-idx3-ubyte.gz',
+ 'https://azureopendatastorage.blob.core.windows.net/mnist/train-labels-idx1-ubyte.gz']
+mnist_ds = Dataset.File.from_files(path=web_paths)
+```
+
+If you want to upload all the files from a local directory, create a FileDataset in a single method with [upload_directory()](/python/api/azureml-core/azureml.data.dataset_factory.filedatasetfactory#upload-directory-src-dir--target--pattern-none--overwrite-false--show-progress-true-). This method uploads data to your underlying storage, and as a result incurs storage costs.
+
+```Python
+from azureml.core import Workspace, Datastore, Dataset
+from azureml.data.datapath import DataPath
+
+ws = Workspace.from_config()
+datastore = Datastore.get(ws, '<name of your datastore>')
+ds = Dataset.File.upload_directory(src_dir='<path to your data>',
+ target=DataPath(datastore, '<path on the datastore>'),
+ show_progress=True)
+
+```
+
+To reuse and share datasets across experiments in your workspace, [register your dataset](#register-datasets).
+
+### Create a TabularDataset
+
+Use the [`from_delimited_files()`](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#from-delimited-files-path--validate-true--include-path-false--infer-column-types-true--set-column-types-none--separatorheader-true--partition-format-none--support-multi-line-false--empty-as-string-false--encoding--utf8--) method on the `TabularDatasetFactory` class to read files in .csv or .tsv format, and to create an unregistered TabularDataset. To read in files from .parquet format, use the [`from_parquet_files()`](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#from-parquet-files-path--validate-true--include-path-false--set-column-types-none--partition-format-none-) method. If you're reading from multiple files, results will be aggregated into one tabular representation.
+
+See the [TabularDatasetFactory reference documentation](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory) for information about supported file formats, as well as syntax and design patterns such as [multiline support](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#from-delimited-files-path--validate-true--include-path-false--infer-column-types-true--set-column-types-none--separatorheader-true--partition-format-none--support-multi-line-false--empty-as-string-false--encoding--utf8--).
+
+If your storage is behind a virtual network or firewall, set the parameter `validate=False` in your `from_delimited_files()` method. This bypasses the initial validation step, and ensures that you can create your dataset from these secure files. Learn more about how to use [datastores and datasets in a virtual network](../how-to-secure-workspace-vnet.md#datastores-and-datasets).
+
+The following code gets the existing workspace and the desired datastore by name. It then passes the datastore and file locations to the `path` parameter to create a new TabularDataset, `weather_ds`.
+
+```Python
+from azureml.core import Workspace, Datastore, Dataset
+
+datastore_name = 'your datastore name'
+
+# get existing workspace
+workspace = Workspace.from_config()
+
+# retrieve an existing datastore in the workspace by name
+datastore = Datastore.get(workspace, datastore_name)
+
+# create a TabularDataset from 3 file paths in datastore
+datastore_paths = [(datastore, 'weather/2018/11.csv'),
+ (datastore, 'weather/2018/12.csv'),
+ (datastore, 'weather/2019/*.csv')]
+
+weather_ds = Dataset.Tabular.from_delimited_files(path=datastore_paths)
+```
+
+### Set data schema
+
+By default, when you create a TabularDataset, column data types are inferred automatically. If the inferred types don't match your expectations, you can update your dataset schema by specifying column types with the following code. The parameter `infer_column_type` is only applicable for datasets created from delimited files. [Learn more about supported data types](/python/api/azureml-core/azureml.data.dataset_factory.datatype).
++
+```Python
+from azureml.core import Dataset
+from azureml.data.dataset_factory import DataType
+
+# create a TabularDataset from a delimited file behind a public web url and convert column "Survived" to boolean
+web_path ='https://dprepdata.blob.core.windows.net/demo/Titanic.csv'
+titanic_ds = Dataset.Tabular.from_delimited_files(path=web_path, set_column_types={'Survived': DataType.to_bool()})
+
+# preview the first 3 rows of titanic_ds
+titanic_ds.take(3).to_pandas_dataframe()
+```
+
+|(Index)|PassengerId|Survived|Pclass|Name|Sex|Age|SibSp|Parch|Ticket|Fare|Cabin|Embarked|
+|---|---|---|---|---|---|---|---|---|---|---|---|---|
+|0|1|False|3|Braund, Mr. Owen Harris|male|22.0|1|0|A/5 21171|7.2500||S|
+|1|2|True|1|Cumings, Mrs. John Bradley (Florence Briggs Th...|female|38.0|1|0|PC 17599|71.2833|C85|C|
+|2|3|True|3|Heikkinen, Miss. Laina|female|26.0|0|0|STON/O2. 3101282|7.9250||S|
+
+To reuse and share datasets across experiments in your workspace, [register your dataset](#register-datasets).
+
+## Wrangle data
+After you create and [register](#register-datasets) your dataset, you can load it into your notebook for data wrangling and [exploration](#explore-data) prior to model training.
+
+If you don't need to do any data wrangling or exploration, see how to consume datasets in your training scripts for submitting ML experiments in [Train with datasets](../how-to-train-with-datasets.md).
+
+### Filter datasets (preview)
+
+Filtering capabilities depend on the type of dataset you have.
+> [!IMPORTANT]
+> Filtering datasets with the preview method, [`filter()`](/python/api/azureml-core/azureml.data.tabulardataset#filter-expression-) is an [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview feature, and may change at any time.
+>
+**For TabularDatasets**, you can keep or remove columns with the [keep_columns()](/python/api/azureml-core/azureml.data.tabulardataset#keep-columns-columns--validate-false-) and [drop_columns()](/python/api/azureml-core/azureml.data.tabulardataset#drop-columns-columns-) methods.
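+
+For example, a minimal sketch (reusing the `titanic_ds` dataset created earlier; the column selections are illustrative):
+
+```python
+# keep only the columns needed for training; returns a new, unregistered dataset
+titanic_features = titanic_ds.keep_columns(['Survived', 'Pclass', 'Sex', 'Age', 'Fare'])
+
+# or drop columns you don't need
+titanic_trimmed = titanic_ds.drop_columns(['Cabin', 'Ticket'])
+```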
+
+To filter out rows by a specific column value in a TabularDataset, use the [filter()](/python/api/azureml-core/azureml.data.tabulardataset#filter-expression-) method (preview).
+
+The following examples return an unregistered dataset based on the specified expressions.
+
+```python
+# TabularDataset that only contains records where the age column value is greater than 15
+tabular_dataset = tabular_dataset.filter(tabular_dataset['age'] > 15)
+
+# TabularDataset that contains records where the name column value contains 'Bri' and the age column value is greater than 15
+tabular_dataset = tabular_dataset.filter((tabular_dataset['name'].contains('Bri')) & (tabular_dataset['age'] > 15))
+```
+
+**In FileDatasets**, each row corresponds to a path of a file, so filtering by column value is not helpful. But you can [filter()](/python/api/azureml-core/azureml.data.filedataset#filter-expression-) out rows by metadata, like CreationTime and Size.
+
+The following examples return an unregistered dataset based on the specified expressions.
+
+```python
+# FileDataset that only contains files where Size is less than 100000
+file_dataset = file_dataset.filter(file_dataset.file_metadata['Size'] < 100000)
+
+# FileDataset that only contains files that were either created prior to Jan 1, 2020 or whose CanSeek metadata value is False
+file_dataset = file_dataset.filter((file_dataset.file_metadata['CreatedTime'] < datetime(2020,1,1)) | (file_dataset.file_metadata['CanSeek'] == False))
+```
+
+**Labeled datasets** created from [image labeling projects](../how-to-create-image-labeling-projects.md) are a special case. These datasets are a type of TabularDataset made up of image files. For these types of datasets, you can [filter()](/python/api/azureml-core/azureml.data.tabulardataset#filter-expression-) images by metadata, and by column values like `label` and `image_details`.
+
+```python
+# Dataset that only contains records where the label column value is dog
+labeled_dataset = labeled_dataset.filter(labeled_dataset['label'] == 'dog')
+
+# Dataset that only contains records where the label and isCrowd columns are True and where the file size is larger than 100000
+labeled_dataset = labeled_dataset.filter((labeled_dataset['label']['isCrowd'] == True) & (labeled_dataset.file_metadata['Size'] > 100000))
+```
+
+### Partition data
+
+You can partition a dataset by including the `partition_format` parameter when creating a TabularDataset or FileDataset.
+
+When you partition a dataset, the partition information of each file path is extracted into columns based on the specified format. The format should start from the position of the first partition key and continue to the end of the file path.
+
+For example, given the path `../Accounts/2019/01/01/data.jsonl` where the partition is by department name and time; the `partition_format='/{Department}/{PartitionDate:yyyy/MM/dd}/data.jsonl'` creates a string column 'Department' with the value 'Accounts' and a datetime column 'PartitionDate' with the value `2019-01-01`.
+
+If your data already has existing partitions and you want to preserve that format, include the `partition_format` parameter in your [`from_files()`](/python/api/azureml-core/azureml.data.dataset_factory.filedatasetfactory#from-files-path--validate-true--partition-format-none-) method to create a FileDataset.
+
+To create a TabularDataset that preserves existing partitions, include the `partition_format` parameter in the [from_parquet_files()](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#from-parquet-files-path--validate-true--include-path-false--set-column-types-none--partition-format-none-) or the
+[from_delimited_files()](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#from-delimited-files-path--validate-true--include-path-false--infer-column-types-true--set-column-types-none--separatorheader-true--partition-format-none--support-multi-line-false--empty-as-string-false--encoding--utf8--) method.
+
+The following example:
+* Creates a FileDataset from partitioned files
+* Gets the partition keys
+* Filters the FileDataset by a partition key value and downloads the matching data
+
+```Python
+
+# `ws` and `data_paths` are assumed to be defined: an existing Workspace and the paths to the partitioned files
+file_dataset = Dataset.File.from_files(data_paths, partition_format='{userid}/*.wav')
+file_dataset.register(workspace=ws, name='speech_dataset')
+
+# access partition_keys
+indexes = file_dataset.partition_keys # ['userid']
+
+# get all partition key-value pairs; returns, for example, [{'userid': 'user1'}, {'userid': 'user2'}]
+partitions = file_dataset.get_partition_key_values()
+
+partitions = file_dataset.get_partition_key_values(['userid'])
+# return [{'userid': 'user1'}, {'userid': 'user2'}]
+
+# filter API; this will only download data from the user1/ folder
+new_file_dataset = file_dataset.filter(file_dataset['userid'] == 'user1').download()
+```
+
+You can also create a new partitions structure for TabularDatasets with the [partition_by()](/python/api/azureml-core/azureml.data.tabulardataset#partition-by-partition-keys--target--name-none--show-progress-true--partition-as-file-dataset-false-) method.
+
+```Python
+
+# `workspace` and `datastore` are assumed to be defined, as in the earlier examples
+dataset = Dataset.get_by_name(workspace, name='test') # indexed by country, state, partition_date
+
+# call partition_by locally
+new_dataset = dataset.partition_by(name="repartitioned_ds", partition_keys=['country'], target=DataPath(datastore, "repartition"))
+partition_keys = new_dataset.partition_keys # ['country']
+```
+
+## Explore data
+
+After you're done wrangling your data, you can [register](#register-datasets) your dataset, and then load it into your notebook for data exploration prior to model training.
+
+For FileDatasets, you can either **mount** or **download** your dataset, and apply the Python libraries you'd normally use for data exploration. [Learn more about mount vs download](../how-to-train-with-datasets.md#mount-vs-download).
+
+```python
+# download the dataset
+dataset.download(target_path='.', overwrite=False)
+
+# mount dataset to the temp directory at `mounted_path`
+
+import tempfile
+mounted_path = tempfile.mkdtemp()
+mount_context = dataset.mount(mounted_path)
+
+mount_context.start()
+```
+
+For TabularDatasets, use the [`to_pandas_dataframe()`](/python/api/azureml-core/azureml.data.tabulardataset#to-pandas-dataframe-on-error--nullout-of-range-datetime--null--) method to view your data in a dataframe.
+
+```python
+# preview the first 3 rows of titanic_ds
+titanic_ds.take(3).to_pandas_dataframe()
+```
+
+|(Index)|PassengerId|Survived|Pclass|Name|Sex|Age|SibSp|Parch|Ticket|Fare|Cabin|Embarked|
+|---|---|---|---|---|---|---|---|---|---|---|---|---|
+|0|1|False|3|Braund, Mr. Owen Harris|male|22.0|1|0|A/5 21171|7.2500||S|
+|1|2|True|1|Cumings, Mrs. John Bradley (Florence Briggs Th...|female|38.0|1|0|PC 17599|71.2833|C85|C|
+|2|3|True|3|Heikkinen, Miss. Laina|female|26.0|0|0|STON/O2. 3101282|7.9250||S|
+
+## Create a dataset from pandas dataframe
+
+To create a TabularDataset from an in-memory pandas DataFrame, use the [`register_pandas_dataframe()`](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#register-pandas-dataframe-dataframe--target--name--description-none--tags-none--show-progress-true-) method. This method registers the TabularDataset to the workspace and uploads data to your underlying storage, which incurs storage costs.
+
+```python
+from azureml.core import Workspace, Datastore, Dataset
+import pandas as pd
+
+pandas_df = pd.read_csv('<path to your csv file>')
+ws = Workspace.from_config()
+datastore = Datastore.get(ws, '<name of your datastore>')
+dataset = Dataset.Tabular.register_pandas_dataframe(pandas_df, datastore, "dataset_from_pandas_df", show_progress=True)
+
+```
+> [!TIP]
+> Create and register a TabularDataset from an in-memory Spark DataFrame or a Dask DataFrame with the public preview methods, [`register_spark_dataframe()`](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#register-spark-dataframe-dataframe--target--name--description-none--tags-none--show-progress-true-) and [`register_dask_dataframe()`](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#register-dask-dataframe-dataframe--target--name--description-none--tags-none--show-progress-true-). These methods are [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview features, and may change at any time.
+>
+> These methods upload data to your underlying storage, and as a result incur storage costs.
+
+## Register datasets
+
+To complete the creation process, register your datasets with a workspace. Use the [`register()`](/python/api/azureml-core/azureml.data.abstract_dataset.abstractdataset#&preserve-view=trueregister-workspace--name--description-none--tags-none--create-new-version-false-) method to register datasets with your workspace in order to share them with others and reuse them across experiments in your workspace:
+
+```Python
+titanic_ds = titanic_ds.register(workspace=workspace,
+ name='titanic_ds',
+ description='titanic training data')
+```
+
+## Create datasets using Azure Resource Manager
+
+There are many templates at [https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices) that can be used to create datasets.
+
+For information on using these templates, see [Use an Azure Resource Manager template to create a workspace for Azure Machine Learning](../how-to-create-workspace-template.md).
+
+
+## Train with datasets
+
+Use your datasets in your machine learning experiments for training ML models. [Learn more about how to train with datasets](../how-to-train-with-datasets.md).
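+
+As a minimal, hedged sketch (the training script `train.py`, the compute target name `cpu-cluster`, and the curated environment name are assumptions), a registered dataset can be passed to a training run as a named input:
+
+```Python
+from azureml.core import Workspace, Dataset, Experiment, Environment, ScriptRunConfig
+
+ws = Workspace.from_config()
+titanic_ds = Dataset.get_by_name(ws, name='titanic_ds')
+
+# pass the dataset as a named input; the training script can retrieve it
+# through run.input_datasets['titanic']
+src = ScriptRunConfig(source_directory='.',
+                      script='train.py',
+                      arguments=['--input-data', titanic_ds.as_named_input('titanic')],
+                      compute_target='cpu-cluster',
+                      environment=Environment.get(ws, name='AzureML-Minimal'))
+
+run = Experiment(ws, 'train-with-dataset').submit(src)
+run.wait_for_completion(show_output=True)
+```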
+
+## Version datasets
+
+You can register a new dataset under the same name by creating a new version. A dataset version is a way to bookmark the state of your data so that you can apply a specific version of the dataset for experimentation or future reproduction. Learn more about [dataset versions](../how-to-version-track-datasets.md).
+
+```Python
+# create a TabularDataset from Titanic training data
+web_paths = ['https://dprepdata.blob.core.windows.net/demo/Titanic.csv',
+ 'https://dprepdata.blob.core.windows.net/demo/Titanic2.csv']
+titanic_ds = Dataset.Tabular.from_delimited_files(path=web_paths)
+
+# create a new version of titanic_ds
+titanic_ds = titanic_ds.register(workspace = workspace,
+ name = 'titanic_ds',
+ description = 'new titanic training data',
+ create_new_version = True)
+```
+
+## Next steps
+
+* Learn [how to train with datasets](../how-to-train-with-datasets.md).
+* Use automated machine learning to [train with TabularDatasets](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb).
+* For more dataset training examples, see the [sample notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/work-with-data/).
machine-learning How To Deploy Azure Container Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-azure-container-instance.md
+
+ Title: How to deploy models to Azure Container Instances with CLI (v1)
+
+description: 'Use CLI (v1) to deploy your Azure Machine Learning models as a web service using Azure Container Instances.'
++++++++ Last updated : 10/21/2021++
+# Deploy a model to Azure Container Instances with CLI (v1)
++
+Learn how to use Azure Machine Learning to deploy a model as a web service on Azure Container Instances (ACI). Use Azure Container Instances if you:
+
+- Prefer not to manage your own Kubernetes cluster
+- Are OK with having only a single replica of your service, which may impact uptime
+
+For information on quota and region availability for ACI, see the [Quotas and region availability for Azure Container Instances](../../container-instances/container-instances-quotas.md) article.
+
+> [!IMPORTANT]
+> It is highly advised to debug locally before deploying to the web service. For more information, see [Debug locally](../how-to-troubleshoot-deployment-local.md).
+>
+> You can also refer to Azure Machine Learning - [Deploy to Local Notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/deployment/deploy-to-local)
+
+## Prerequisites
+
+- An Azure Machine Learning workspace. For more information, see [Create an Azure Machine Learning workspace](../how-to-manage-workspace.md).
+
+- A machine learning model registered in your workspace. If you don't have a registered model, see [How and where to deploy models](../how-to-deploy-and-where.md).
+
+- The [Azure CLI extension (v1) for Machine Learning service](reference-azure-machine-learning-cli.md), [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro), or the [Azure Machine Learning Visual Studio Code extension](../how-to-setup-vs-code.md).
+
+- The __Python__ code snippets in this article assume that the following variables are set:
+
+ * `ws` - Set to your workspace.
+ * `model` - Set to your registered model.
+ * `inference_config` - Set to the inference configuration for the model.
+
+ For more information on setting these variables, see [How and where to deploy models](../how-to-deploy-and-where.md).
+
+- The __CLI__ snippets in this article assume that you've created an `inferenceconfig.json` document. For more information on creating this document, see [How and where to deploy models](../how-to-deploy-and-where.md).
+
+## Limitations
+
+* When using Azure Container Instances in a virtual network, the virtual network must be in the same resource group as your Azure Machine Learning workspace.
+* When using Azure Container Instances inside the virtual network, the Azure Container Registry (ACR) for your workspace cannot also be in the virtual network.
+
+For more information, see [How to secure inferencing with virtual networks](../how-to-secure-inferencing-vnet.md#enable-azure-container-instances-aci).
+
+## Deploy to ACI
+
+To deploy a model to Azure Container Instances, create a __deployment configuration__ that describes the compute resources needed (for example, the number of cores and memory). You also need an __inference configuration__, which describes the environment needed to host the model and web service. For more information on creating the inference configuration, see [How and where to deploy models](../how-to-deploy-and-where.md).
+
+> [!NOTE]
+> * ACI is suitable only for small models that are under 1 GB in size.
+> * We recommend using single-node AKS to dev-test larger models.
+> * The number of models to be deployed is limited to 1,000 models per deployment (per container).
+
+### Using the SDK
++
+```python
+from azureml.core.webservice import AciWebservice, Webservice
+from azureml.core.model import Model
+
+deployment_config = AciWebservice.deploy_configuration(cpu_cores = 1, memory_gb = 1)
+service = Model.deploy(ws, "aciservice", [model], inference_config, deployment_config)
+service.wait_for_deployment(show_output = True)
+print(service.state)
+```
+
+For more information on the classes, methods, and parameters used in this example, see the following reference documents:
+
+* [AciWebservice.deploy_configuration](/python/api/azureml-core/azureml.core.webservice.aciwebservice#deploy-configuration-cpu-cores-none--memory-gb-none--tags-none--properties-none--description-none--location-none--auth-enabled-none--ssl-enabled-none--enable-app-insights-none--ssl-cert-pem-file-none--ssl-key-pem-file-none--ssl-cname-none--dns-name-label-none--primary-key-none--secondary-key-none--collect-model-data-none--cmk-vault-base-url-none--cmk-key-name-none--cmk-key-version-none-)
+* [Model.deploy](/python/api/azureml-core/azureml.core.model.model#deploy-workspace--name--models--inference-config-none--deployment-config-none--deployment-target-none--overwrite-false-)
+* [Webservice.wait_for_deployment](/python/api/azureml-core/azureml.core.webservice%28class%29#wait-for-deployment-show-output-false-)
+
+### Using the Azure CLI
++
+To deploy using the CLI, use the following command. Replace `mymodel:1` with the name and version of the registered model. Replace `myservice` with the name to give this service:
+
+```azurecli-interactive
+az ml model deploy -n myservice -m mymodel:1 --ic inferenceconfig.json --dc deploymentconfig.json
+```
++
+For more information, see the [az ml model deploy](/cli/azure/ml/model#az-ml-model-deploy) reference.
+
+## Using VS Code
+
+See [how to manage resources in VS Code](../how-to-manage-resources-vscode.md).
+
+> [!IMPORTANT]
+> You don't need to create an ACI container to test in advance. ACI containers are created as needed.
+
+> [!IMPORTANT]
+> We append a hashed workspace ID to all underlying ACI resources that are created, so all ACI names from the same workspace will have the same suffix. The Azure Machine Learning service name is still the customer-provided "service_name", and the user-facing Azure Machine Learning SDK APIs do not need any change. We do not give any guarantees on the names of the underlying resources being created.
+
+## Next steps
+
+* [How to deploy a model using a custom Docker image](../how-to-deploy-custom-container.md)
+* [Deployment troubleshooting](../how-to-troubleshoot-deployment.md)
+* [Update the web service](../how-to-deploy-update-web-service.md)
+* [Use TLS to secure a web service through Azure Machine Learning](../how-to-secure-web-service.md)
+* [Consume an ML model deployed as a web service](../how-to-consume-web-service.md)
+* [Monitor your Azure Machine Learning models with Application Insights](../how-to-enable-app-insights.md)
+* [Collect data for models in production](../how-to-enable-data-collection.md)
machine-learning How To Deploy Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-azure-kubernetes-service.md
+
+ Title: Deploy ML models to Kubernetes Service with v1
+
+description: 'Use CLI (v1) or SDK (v1) to deploy your Azure Machine Learning models as a web service using Azure Kubernetes Service.'
++++++++ Last updated : 10/21/2021++
+# Deploy a model to an Azure Kubernetes Service cluster with v1
+++
+Learn how to use Azure Machine Learning to deploy a model as a web service on Azure Kubernetes Service (AKS). Azure Kubernetes Service is good for high-scale production deployments. Use Azure Kubernetes Service if you need one or more of the following capabilities:
+
+- __Fast response time__
+- __Autoscaling__ of the deployed service
+- __Logging__
+- __Model data collection__
+- __Authentication__
+- __TLS termination__
+- __Hardware acceleration__ options such as GPU and field-programmable gate arrays (FPGA)
+
+When deploying to Azure Kubernetes Service, you deploy to an AKS cluster that is __connected to your workspace__. For information on connecting an AKS cluster to your workspace, see [Create and attach an Azure Kubernetes Service cluster](../how-to-create-attach-kubernetes.md).
+
+> [!IMPORTANT]
+> We recommend that you debug locally before deploying to the web service. For more information, see [Debug Locally](../how-to-troubleshoot-deployment-local.md)
+>
+> You can also refer to Azure Machine Learning - [Deploy to Local Notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/deployment/deploy-to-local)
++
+## Prerequisites
+
+- An Azure Machine Learning workspace. For more information, see [Create an Azure Machine Learning workspace](../how-to-manage-workspace.md).
+
+- A machine learning model registered in your workspace. If you don't have a registered model, see [How and where to deploy models](../how-to-deploy-and-where.md).
+
+- The [Azure CLI extension (v1) for Machine Learning service](reference-azure-machine-learning-cli.md), [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro), or the [Azure Machine Learning Visual Studio Code extension](../how-to-setup-vs-code.md).
+
+- The __Python__ code snippets in this article assume that the following variables are set:
+
+ * `ws` - Set to your workspace.
+ * `model` - Set to your registered model.
+ * `inference_config` - Set to the inference configuration for the model.
+
+ For more information on setting these variables, see [How and where to deploy models](../how-to-deploy-and-where.md).
+
+- The __CLI__ snippets in this article assume that you've created an `inferenceconfig.json` document. For more information on creating this document, see [How and where to deploy models](../how-to-deploy-and-where.md).
+
+- An Azure Kubernetes Service cluster connected to your workspace. For more information, see [Create and attach an Azure Kubernetes Service cluster](../how-to-create-attach-kubernetes.md).
+
+ - If you want to deploy models to GPU nodes or FPGA nodes (or any specific SKU), then you must create a cluster with the specific SKU. There is no support for creating a secondary node pool in an existing cluster and deploying models in the secondary node pool.
+
+## Understand the deployment processes
+
+The word "deployment" is used in both Kubernetes and Azure Machine Learning. "Deployment" has different meanings in these two contexts. In Kubernetes, a `Deployment` is a concrete entity, specified with a declarative YAML file. A Kubernetes `Deployment` has a defined lifecycle and concrete relationships to other Kubernetes entities such as `Pods` and `ReplicaSets`. You can learn about Kubernetes from docs and videos at [What is Kubernetes?](https://aka.ms/k8slearning).
+
+In Azure Machine Learning, "deployment" is used in the more general sense of making available and cleaning up your project resources. The steps that Azure Machine Learning considers part of deployment are:
+
+1. Zipping the files in your project folder, ignoring those specified in .amlignore or .gitignore
+1. Scaling up your compute cluster (Relates to Kubernetes)
+1. Building or downloading the dockerfile to the compute node (Relates to Kubernetes)
+ 1. The system calculates a hash of:
+ - The base image
+ - Custom docker steps (see [Deploy a model using a custom Docker base image](../how-to-deploy-custom-container.md))
+ - The conda definition YAML (see [Create & use software environments in Azure Machine Learning](../how-to-use-environments.md))
+ 1. The system uses this hash as the key in a lookup of the workspace Azure Container Registry (ACR)
+ 1. If it is not found, it looks for a match in the global ACR
+ 1. If it is not found, the system builds a new image (which will be cached and pushed to the workspace ACR)
+1. Downloading your zipped project file to temporary storage on the compute node
+1. Unzipping the project file
+1. The compute node executing `python <entry script> <arguments>`
+1. Saving logs, model files, and other files written to `./outputs` to the storage account associated with the workspace
+1. Scaling down compute, including removing temporary storage (Relates to Kubernetes)
+
+### Azure ML router
+
+The front-end component (azureml-fe) that routes incoming inference requests to deployed services automatically scales as needed. Scaling of azureml-fe is based on the AKS cluster purpose and size (number of nodes). The cluster purpose and nodes are configured when you [create or attach an AKS cluster](../how-to-create-attach-kubernetes.md). There is one azureml-fe service per cluster, which may be running on multiple pods.
+
+> [!IMPORTANT]
+> When using a cluster configured as __dev-test__, the self-scaler is **disabled**. Even for FastProd/DenseProd clusters, the self-scaler is only enabled when telemetry shows that it's needed.
+
+Azureml-fe scales both up (vertically) to use more cores, and out (horizontally) to use more pods. When making the decision to scale up, the time that it takes to route incoming inference requests is used. If this time exceeds the threshold, a scale-up occurs. If the time to route incoming requests continues to exceed the threshold, a scale-out occurs.
+
+When scaling down and in, CPU usage is used. If the CPU usage threshold is met, the front end will first be scaled down. If the CPU usage drops to the scale-in threshold, a scale-in operation happens. Scaling up and out will only occur if there are enough cluster resources available.
+
+When scaling up or down, azureml-fe pods are restarted to apply the CPU/memory changes. Restarts don't affect inferencing requests.
+
+<a id="connectivity"></a>
+
+## Understand connectivity requirements for AKS inferencing cluster
+
+When Azure Machine Learning creates or attaches an AKS cluster, the AKS cluster is deployed with one of the following two network models:
+* Kubenet networking - The network resources are typically created and configured as the AKS cluster is deployed.
+* Azure Container Networking Interface (CNI) networking - The AKS cluster is connected to an existing virtual network resource and configurations.
+
+For Kubenet networking, the network is created and configured properly for the Azure Machine Learning service. For CNI networking, you need to understand the connectivity requirements and ensure DNS resolution and outbound connectivity for AKS inferencing. For example, you may be using a firewall to block network traffic.
+
+The following diagram shows the connectivity requirements for AKS inferencing. Black arrows represent actual communication, and blue arrows represent the domain names. You may need to add entries for these hosts to your firewall or to your custom DNS server.
+
+ ![Connectivity Requirements for AKS Inferencing](./media/how-to-deploy-aks/aks-network.png)
+
+For general AKS connectivity requirements, see [Control egress traffic for cluster nodes in Azure Kubernetes Service](../../aks/limit-egress-traffic.md).
+
+For accessing Azure ML services behind a firewall, see [how to access Azure ML behind a firewall](../how-to-access-azureml-behind-firewall.md).
+
+### Overall DNS resolution requirements
+
+DNS resolution within an existing VNet is under your control (for example, through a firewall or a custom DNS server). The following hosts must be reachable:
+
+| Host name | Used by |
+| -- | -- |
+| `<cluster>.hcp.<region>.azmk8s.io` | AKS API server |
+| `mcr.microsoft.com` | Microsoft Container Registry (MCR) |
+| `<ACR name>.azurecr.io` | Your Azure Container Registry (ACR) |
+| `<account>.table.core.windows.net` | Azure Storage Account (table storage) |
+| `<account>.blob.core.windows.net` | Azure Storage Account (blob storage) |
+| `api.azureml.ms` | Azure Active Directory (Azure AD) authentication |
+| `ingest-vienna<region>.kusto.windows.net` | Kusto endpoint for uploading telemetry |
+| `<leaf-domain-label + auto-generated suffix>.<region>.cloudapp.azure.com` | Endpoint domain name, if autogenerated by Azure Machine Learning. If you used a custom domain name, you do not need this entry. |
+
+### Connectivity requirements in chronological order: from cluster creation to model deployment
+
+In the process of AKS create or attach, the Azure ML router (azureml-fe) is deployed into the AKS cluster. In order to deploy the Azure ML router, the AKS node should be able to:
+* Resolve DNS for AKS API server
+* Resolve DNS for MCR in order to download docker images for Azure ML router
+* Download images from MCR, where outbound connectivity is required
+
+Right after azureml-fe is deployed, it will attempt to start, which requires it to:
+* Resolve DNS for AKS API server
+* Query AKS API server to discover other instances of itself (it is a multi-pod service)
+* Connect to other instances of itself
+
+Once azureml-fe is started, it requires the following connectivity to function properly:
+* Connect to Azure Storage to download dynamic configuration
+* Resolve DNS for Azure AD authentication server api.azureml.ms and communicate with it when the deployed service uses Azure AD authentication.
+* Query AKS API server to discover deployed models
+* Communicate to deployed model PODs
+
+At model deployment time, for a successful model deployment, the AKS node should be able to:
+* Resolve DNS for customer's ACR
+* Download images from customer's ACR
+* Resolve DNS for Azure BLOBs where model is stored
+* Download models from Azure BLOBs
+
+After the model is deployed and the service starts, azureml-fe will automatically discover it using the AKS API and will be ready to route requests to it. It must be able to communicate with the model pods.
+>[!Note]
+>If the deployed model requires any connectivity (for example, querying an external database or other REST service, or downloading a blob), then both DNS resolution and outbound communication for these services should be enabled.
+
+## Deploy to AKS
+
+To deploy a model to Azure Kubernetes Service, create a __deployment configuration__ that describes the compute resources needed (for example, the number of cores and memory). You also need an __inference configuration__, which describes the environment needed to host the model and web service. For more information on creating the inference configuration, see [How and where to deploy models](../how-to-deploy-and-where.md).
+
+> [!NOTE]
+> The number of models to be deployed is limited to 1,000 models per deployment (per container).
+
+<a id="using-the-cli"></a>
+
+# [Python](#tab/python)
++
+```python
+from azureml.core.webservice import AksWebservice, Webservice
+from azureml.core.model import Model
+from azureml.core.compute import AksCompute
+
+aks_target = AksCompute(ws,"myaks")
+# If deploying to a cluster configured for dev/test, ensure that it was created with enough
+# cores and memory to handle this deployment configuration. Note that memory is also used by
+# things such as dependencies and AML components.
+deployment_config = AksWebservice.deploy_configuration(cpu_cores = 1, memory_gb = 1)
+service = Model.deploy(ws, "myservice", [model], inference_config, deployment_config, aks_target)
+service.wait_for_deployment(show_output = True)
+print(service.state)
+print(service.get_logs())
+```
+
+For more information on the classes, methods, and parameters used in this example, see the following reference documents:
+
+* [AksCompute](/python/api/azureml-core/azureml.core.compute.aks.akscompute)
+* [AksWebservice.deploy_configuration](/python/api/azureml-core/azureml.core.webservice.aks.aksservicedeploymentconfiguration)
+* [Model.deploy](/python/api/azureml-core/azureml.core.model.model#deploy-workspace--name--models--inference-config-none--deployment-config-none--deployment-target-none--overwrite-false-)
+* [Webservice.wait_for_deployment](/python/api/azureml-core/azureml.core.webservice%28class%29#wait-for-deployment-show-output-false-)
+
+# [Azure CLI](#tab/azure-cli)
++
+To deploy using the CLI, use the following command. Replace `myaks` with the name of the AKS compute target. Replace `mymodel:1` with the name and version of the registered model. Replace `myservice` with the name to give this service:
+
+```azurecli-interactive
+az ml model deploy --ct myaks -m mymodel:1 -n myservice --ic inferenceconfig.json --dc deploymentconfig.json
+```
++
+For more information, see the [az ml model deploy](/cli/azure/ml/model#az-ml-model-deploy) reference.
+
+# [Visual Studio Code](#tab/visual-studio-code)
+
+For information on using VS Code, see [deploy to AKS via the VS Code extension](../how-to-manage-resources-vscode.md).
+
+> [!IMPORTANT]
+> Deploying through VS Code requires the AKS cluster to be created or attached to your workspace in advance.
+++
+### Autoscaling
++
+The component that handles autoscaling for Azure ML model deployments is azureml-fe, which is a smart request router. Since all inference requests go through it, it has the necessary data to automatically scale the deployed model(s).
+
+> [!IMPORTANT]
+> * **Do not enable Kubernetes Horizontal Pod Autoscaler (HPA) for model deployments**. Doing so would cause the two auto-scaling components to compete with each other. Azureml-fe is designed to auto-scale models deployed by Azure ML, whereas HPA would have to guess or approximate model utilization from a generic metric like CPU usage or a custom metric configuration.
+>
+> * **Azureml-fe does not scale the number of nodes in an AKS cluster**, because this could lead to unexpected cost increases. Instead, **it scales the number of replicas for the model** within the physical cluster boundaries. If you need to scale the number of nodes within the cluster, you can manually scale the cluster or [configure the AKS cluster autoscaler](../../aks/cluster-autoscaler.md).
+
+Autoscaling can be controlled by setting `autoscale_target_utilization`, `autoscale_min_replicas`, and `autoscale_max_replicas` for the AKS web service. The following example demonstrates how to enable autoscaling:
+
+```python
+aks_config = AksWebservice.deploy_configuration(autoscale_enabled=True,
+ autoscale_target_utilization=30,
+ autoscale_min_replicas=1,
+ autoscale_max_replicas=4)
+```
+
+Decisions to scale up or down are based on the utilization of the current container replicas. The number of replicas that are busy (processing a request) divided by the total number of current replicas is the current utilization. If this number exceeds `autoscale_target_utilization`, then more replicas are created. If it's lower, then replicas are reduced. By default, the target utilization is 70%.
+
+Decisions to add replicas are eager and fast (around 1 second). Decisions to remove replicas are conservative (around 1 minute).
+
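+As a worked illustration of this calculation (the numbers below are hypothetical):
+
+```python
+# A minimal sketch of the utilization calculation described above (values are illustrative).
+busy_replicas = 3            # replicas currently processing a request
+total_replicas = 4           # total current replicas
+target_utilization = 0.70    # default autoscale_target_utilization is 70%
+
+current_utilization = busy_replicas / total_replicas  # 0.75
+
+if current_utilization > target_utilization:
+    print("Utilization above target: more replicas are created")
+else:
+    print("Utilization at or below target: replicas may be reduced")
+```
+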
+You can calculate the required replicas by using the following code:
+
+```python
+from math import ceil
+# target requests per second
+targetRps = 20
+# time to process the request (in seconds)
+reqTime = 10
+# Maximum requests per container
+maxReqPerContainer = 1
+# target_utilization. 70% in this example
+targetUtilization = .7
+
+concurrentRequests = targetRps * reqTime / targetUtilization
+
+# Number of container replicas
+replicas = ceil(concurrentRequests / maxReqPerContainer)
+```
+
+For more information on setting `autoscale_target_utilization`, `autoscale_max_replicas`, and `autoscale_min_replicas`, see the [AksWebservice](/python/api/azureml-core/azureml.core.webservice.akswebservice) module reference.
+
+## Web service authentication
+
+When deploying to Azure Kubernetes Service, __key-based__ authentication is enabled by default. You can also enable __token-based__ authentication. Token-based authentication requires clients to use an Azure Active Directory account to request an authentication token, which is used to make requests to the deployed service.
+
+To __disable__ authentication, set the `auth_enabled=False` parameter when creating the deployment configuration. The following example disables authentication using the SDK:
+
+```python
+deployment_config = AksWebservice.deploy_configuration(cpu_cores=1, memory_gb=1, auth_enabled=False)
+```
+
+For information on authenticating from a client application, see [Consume an Azure Machine Learning model deployed as a web service](../how-to-consume-web-service.md).
+
+### Authentication with keys
+
+If key authentication is enabled, you can use the `get_keys` method to retrieve a primary and secondary authentication key:
+
+```python
+primary, secondary = service.get_keys()
+print(primary)
+```
+
+> [!IMPORTANT]
+> If you need to regenerate a key, use [`service.regen_key`](/python/api/azureml-core/azureml.core.webservice%28class%29), as shown in the sketch that follows.
+
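+For example, a minimal sketch of regenerating and re-reading the keys, assuming `service` is the `AksWebservice` from the earlier example and key-based authentication is enabled:
+
+```python
+# A minimal sketch; 'Primary' and 'Secondary' are the expected key names.
+service.regen_key('Primary')
+primary, secondary = service.get_keys()
+print(primary)
+```
+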
+### Authentication with tokens
+
+To enable token authentication, set the `token_auth_enabled=True` parameter when you are creating or updating a deployment. The following example enables token authentication using the SDK:
+
+```python
+deployment_config = AksWebservice.deploy_configuration(cpu_cores=1, memory_gb=1, token_auth_enabled=True)
+```
+
+If token authentication is enabled, you can use the `get_token` method to retrieve a JWT token and that token's expiration time:
+
+```python
+token, refresh_by = service.get_token()
+print(token)
+```
+
+> [!IMPORTANT]
+> You will need to request a new token after the token's `refresh_by` time (see the sketch after this note).
+>
+> Microsoft strongly recommends that you create your Azure Machine Learning workspace in the same region as your Azure Kubernetes Service cluster. To authenticate with a token, the web service will make a call to the region in which your Azure Machine Learning workspace is created. If your workspace's region is unavailable, then you won't be able to fetch a token for your web service, even if your cluster is in a different region than your workspace. This effectively makes token-based authentication unavailable until your workspace's region is available again. In addition, the greater the distance between your cluster's region and your workspace's region, the longer it will take to fetch a token.
+>
+> To retrieve a token, you must use the Azure Machine Learning SDK or the [az ml service get-access-token](/cli/azure/ml(v1)/computetarget/create#az-ml-service-get-access-token) command.
++
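+A minimal sketch of refreshing the token before it expires, assuming `service` is the token-enabled `AksWebservice` from the example above and that `refresh_by` is a UTC datetime as unpacked earlier:
+
+```python
+from datetime import datetime, timedelta
+
+token, refresh_by = service.get_token()
+
+# Request a new token shortly before the current one must be refreshed.
+if datetime.utcnow() + timedelta(minutes=5) >= refresh_by:
+    token, refresh_by = service.get_token()
+```
+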
+### Vulnerability scanning
+
+Microsoft Defender for Cloud provides unified security management and advanced threat protection across hybrid cloud workloads. You should allow Microsoft Defender for Cloud to scan your resources and follow its recommendations. For more information, see [Azure Kubernetes Services integration with Defender for Cloud](../../security-center/defender-for-kubernetes-introduction.md).
+
+## Next steps
+
+* [Use Azure RBAC for Kubernetes authorization](../../aks/manage-azure-rbac.md)
+* [Secure inferencing environment with Azure Virtual Network](../how-to-secure-inferencing-vnet.md)
+* [How to deploy a model using a custom Docker image](../how-to-deploy-custom-container.md)
+* [Deployment troubleshooting](../how-to-troubleshoot-deployment.md)
+* [Update web service](../how-to-deploy-update-web-service.md)
+* [Use TLS to secure a web service through Azure Machine Learning](../how-to-secure-web-service.md)
+* [Consume an ML model deployed as a web service](../how-to-consume-web-service.md)
+* [Monitor your Azure Machine Learning models with Application Insights](../how-to-enable-app-insights.md)
+* [Collect data for models in production](../how-to-enable-data-collection.md)
machine-learning How To Deploy Mlflow Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-mlflow-models.md
+
+ Title: Deploy MLflow models as web services
+
+description: Set up MLflow with Azure Machine Learning to deploy your ML models as an Azure web service.
+ Last updated : 10/25/2021
+# Deploy MLflow models as Azure web services
+
+> [!div class="op_single_selector" title1="Select the version of the Azure Machine Learning developer platform you are using:"]
+> * [v1](how-to-deploy-mlflow-models.md)
+> * [v2 (current version)](../how-to-deploy-mlflow-models-online-endpoints.md)
+
+In this article, learn how to deploy your [MLflow](https://www.mlflow.org) model as an Azure web service, so you can apply Azure Machine Learning's model management and data drift detection capabilities to your production models. See [MLflow and Azure Machine Learning](concept-mlflow-v1.md) for additional MLflow and Azure Machine Learning functionality integrations.
+
+Azure Machine Learning offers deployment configurations for:
+* Azure Container Instance (ACI), which is a suitable choice for a quick dev-test deployment.
+* Azure Kubernetes Service (AKS), which is recommended for scalable production deployments.
++
+> [!TIP]
+> The information in this document is primarily for data scientists and developers who want to deploy their MLflow model to an Azure Machine Learning web service endpoint. If you are an administrator interested in monitoring resource usage and events from Azure Machine Learning, such as quotas, completed training runs, or completed model deployments, see [Monitoring Azure Machine Learning](../monitor-azure-machine-learning.md).
+
+## MLflow with Azure Machine Learning deployment
+
+MLflow is an open-source library for managing the life cycle of your machine learning experiments. Its integration with Azure Machine Learning allows you to extend this management beyond model training to the deployment phase of your production model.
+
+The following diagram demonstrates that with the MLflow deploy API and Azure Machine Learning, you can deploy models created with popular frameworks, like PyTorch, Tensorflow, scikit-learn, etc., as Azure web services and manage them in your workspace.
+
+![Deploy MLflow models with Azure Machine Learning](./media/how-to-deploy-mlflow-models/mlflow-diagram-deploy.png)
+
+## Prerequisites
+
+* A machine learning model. If you don't have a trained model, find the notebook example that best fits your compute scenario in [this repo](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/ml-frameworks/using-mlflow) and follow its instructions.
+* [Set up the MLflow Tracking URI to connect Azure Machine Learning](how-to-use-mlflow.md#track-local-runs).
+* Install the `azureml-mlflow` package.
+ * This package automatically brings in `azureml-core` from the [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/install), which provides the connectivity for MLflow to access your workspace.
+* See which [access permissions you need to perform your MLflow operations with your workspace](../how-to-assign-roles.md#mlflow-operations).
+
+## Deploy to Azure Container Instance (ACI)
+
+To deploy your MLflow model to an Azure Machine Learning web service, your model must be set up with the [MLflow Tracking URI to connect with Azure Machine Learning](how-to-use-mlflow.md).
+
+To deploy to ACI, you don't need to define a deployment configuration; the service defaults to an ACI deployment when a configuration isn't provided.
+Then, register and deploy the model in one step with MLflow's [deploy](https://www.mlflow.org/docs/latest/python_api/mlflow.azureml.html#mlflow.azureml.deploy) method for Azure Machine Learning.
++
+```python
+from mlflow.deployments import get_deploy_client
+
+# set the tracking uri as the deployment client
+client = get_deploy_client(mlflow.get_tracking_uri())
+
+# set the model path
+model_path = "model"
+
+# define the model path and the name is the service name
+# the model gets registered automatically and a name is autogenerated using the "name" parameter below
+client.create_deployment(name="mlflow-test-aci", model_uri='runs:/{}/{}'.format(run.id, model_path))
+```
+
+### Customize deployment configuration
+
+If you prefer not to use the defaults, you can set up your deployment configuration with a deployment config json file that uses parameters from the [deploy_configuration()](/python/api/azureml-core/azureml.core.webservice.aciwebservice#deploy-configuration-cpu-cores-none--memory-gb-none--tags-none--properties-none--description-none--location-none--auth-enabled-none--ssl-enabled-none--enable-app-insights-none--ssl-cert-pem-file-none--ssl-key-pem-file-none--ssl-cname-none--dns-name-label-none-) method as reference.
+
+For your deployment config json file, each of the deployment config parameters needs to be defined in the form of a dictionary. The following is an example. [Learn more about what your deployment configuration json file can contain](reference-azure-machine-learning-cli.md#azure-container-instance-deployment-configuration-schema).
++
+### Azure Container Instance deployment configuration schema
+```json
+{"computeType": "aci",
+ "containerResourceRequirements": {"cpu": 1, "memoryInGB": 1},
+ "location": "eastus2"
+}
+```
+
+Your json file can then be used to create your deployment.
+
+```python
+# set the deployment config
+deploy_path = "deployment_config.json"
+test_config = {'deploy-config-file': deploy_path}
+
+client.create_deployment(model_uri='runs:/{}/{}'.format(run.id, model_path),
+ config=test_config,
+ name="mlflow-test-aci")
+```
++
+## Deploy to Azure Kubernetes Service (AKS)
+
+To deploy your MLflow model to an Azure Machine Learning web service, your model must be set up with the [MLflow Tracking URI to connect with Azure Machine Learning](how-to-use-mlflow.md).
+
+To deploy to AKS, first create an AKS cluster by using the [ComputeTarget.create()](/python/api/azureml-core/azureml.core.computetarget#create-workspace--name--provisioning-configuration-) method. It may take 20-25 minutes to create a new cluster.
+
+```python
+from azureml.core.compute import AksCompute, ComputeTarget
+
+# Use the default configuration (can also provide parameters to customize)
+prov_config = AksCompute.provisioning_configuration()
+
+aks_name = 'aks-mlflow'
+
+# Create the cluster
+aks_target = ComputeTarget.create(workspace=ws,
+ name=aks_name,
+ provisioning_configuration=prov_config)
+
+aks_target.wait_for_completion(show_output = True)
+
+print(aks_target.provisioning_state)
+print(aks_target.provisioning_errors)
+```
+Create a deployment config json file using [deploy_configuration()](/python/api/azureml-core/azureml.core.webservice.aks.aksservicedeploymentconfiguration#parameters) method values as a reference. Each of the deployment config parameters needs to be defined as a dictionary. Here's an example:
+
+```json
+{"computeType": "aks", "computeTargetName": "aks-mlflow"}
+```
+
+Then, register and deploy the model in one step with MLflow's [deployment client](https://www.mlflow.org/docs/latest/python_api/mlflow.deployments.html).
+
+```python
+from mlflow.deployments import get_deploy_client
+
+# set the tracking uri as the deployment client
+client = get_deploy_client(mlflow.get_tracking_uri())
+
+# set the model path
+model_path = "model"
+
+# set the deployment config
+deploy_path = "deployment_config.json"
+test_config = {'deploy-config-file': deploy_path}
+
+# define the model path and the name is the service name
+# the model gets registered automatically and a name is autogenerated using the "name" parameter below
+client.create_deployment(model_uri='runs:/{}/{}'.format(run.id, model_path),
+ config=test_config,
+ name="mlflow-test-aks")
+```
+
+The service deployment can take several minutes.
+
+## Clean up resources
+
+If you don't plan to use your deployed web service, use `service.delete()` to delete it from your notebook. For more information, see the documentation for [WebService.delete()](/python/api/azureml-core/azureml.core.webservice%28class%29#delete--).
+
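+For example, a minimal sketch that retrieves a deployed service by name and deletes it (the name `mlflow-test-aci` is taken from the ACI example above):
+
+```python
+from azureml.core import Workspace
+from azureml.core.webservice import Webservice
+
+ws = Workspace.from_config()
+
+# Retrieve the deployed web service by name and delete it.
+service = Webservice(ws, 'mlflow-test-aci')
+service.delete()
+```
+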
+## Example notebooks
+
+The [MLflow with Azure Machine Learning notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/ml-frameworks/using-mlflow) demonstrate and expand upon concepts presented in this article.
+
+> [!NOTE]
+> A community-driven repository of examples using mlflow can be found at https://github.com/Azure/azureml-examples.
+
+## Next steps
+
+* [Manage your models](concept-model-management-and-deployment.md).
+* Monitor your production models for [data drift](../how-to-enable-data-collection.md).
+* [Track Azure Databricks runs with MLflow](../how-to-use-mlflow-azure-databricks.md).
machine-learning How To Deploy Profile Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-profile-model.md
++
+description: Use CLI (v1) or SDK (v1) to profile your model before deployment. Profiling determines the memory and CPU usage of your model.
+ Last updated : 07/31/2020
+zone_pivot_groups: aml-control-methods
+++++
+# Profile your model to determine resource utilization
++
+This article shows how to profile a machine learning model to determine how much CPU and memory you'll need to allocate for the model when deploying it as a web service.
+
+> [!IMPORTANT]
+> This article applies to CLI v1 and SDK v1. This profiling technique is not available for v2 of either CLI or SDK.
+
+## Prerequisites
+
+This article assumes you have trained and registered a model with Azure Machine Learning. See the [sample tutorial here](../how-to-train-scikit-learn.md) for an example of training and registering a scikit-learn model with Azure Machine Learning.
+
+## Limitations
+
+* Profiling will not work when the Azure Container Registry (ACR) for your workspace is behind a virtual network.
+
+## Run the profiler
+
+Once you have registered your model and prepared the other components necessary for its deployment, you can determine the CPU and memory the deployed service will need. Profiling tests the service that runs your model and returns information such as the CPU usage, memory usage, and response latency. It also provides a recommendation for the CPU and memory based on resource usage.
+
+In order to profile your model, you will need:
+* A registered model.
+* An inference configuration based on your entry script and inference environment definition.
+* A single column tabular dataset, where each row contains a string representing sample request data.
+
+> [!IMPORTANT]
+> At this point we only support profiling of services that expect their request data to be a string, for example: string serialized json, text, string serialized image, etc. The content of each row of the dataset (string) will be put into the body of the HTTP request and sent to the service encapsulating the model for scoring.
+
+> [!IMPORTANT]
+> We only support profiling up to 2 CPUs in the ChinaEast2 and USGovArizona regions.
+
+Below is an example of how you can construct an input dataset to profile a service that expects its incoming request data to contain serialized JSON. In this case, we created a dataset based on 100 instances of the same request data content. In real-world scenarios, we suggest that you use larger datasets containing various inputs, especially if your model's resource usage or behavior is input dependent.
+++
+```python
+import json
+from azureml.core import Datastore
+from azureml.core.dataset import Dataset
+from azureml.data import dataset_type_definitions
+
+input_json = {'data': [[1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
+ [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]]}
+# create a string that can be utf-8 encoded and
+# put in the body of the request
+serialized_input_json = json.dumps(input_json)
+dataset_content = []
+for i in range(100):
+ dataset_content.append(serialized_input_json)
+dataset_content = '\n'.join(dataset_content)
+file_name = 'sample_request_data.txt'
+with open(file_name, 'w') as f:
+    f.write(dataset_content)
+
+# upload the txt file created above to the Datastore and create a dataset from it
+data_store = Datastore.get_default(ws)
+data_store.upload_files(['./' + file_name], target_path='sample_request_data')
+datastore_path = [(data_store, 'sample_request_data' +'/' + file_name)]
+sample_request_data = Dataset.Tabular.from_delimited_files(
+ datastore_path, separator='\n',
+ infer_column_types=True,
+ header=dataset_type_definitions.PromoteHeadersBehavior.NO_HEADERS)
+sample_request_data = sample_request_data.register(workspace=ws,
+ name='sample_request_data',
+ create_new_version=True)
+```
+
+Once you have the dataset containing sample request data ready, create an inference configuration. The inference configuration is based on the score.py entry script and the environment definition. The following example demonstrates how to create the inference configuration and run profiling:
+
+```python
+from azureml.core.model import InferenceConfig, Model
+from azureml.core.dataset import Dataset
++
+model = Model(ws, id=model_id)
+inference_config = InferenceConfig(entry_script='path-to-score.py',
+ environment=myenv)
+input_dataset = Dataset.get_by_name(workspace=ws, name='sample_request_data')
+profile = Model.profile(ws,
+ 'unique_name',
+ [model],
+ inference_config,
+ input_dataset=input_dataset)
+
+profile.wait_for_completion(True)
+
+# see the result
+details = profile.get_details()
+```
++++
+The following command demonstrates how to profile a model by using the CLI:
+
+```azurecli-interactive
+az ml model profile -g <resource-group-name> -w <workspace-name> --inference-config-file <path-to-inf-config.json> -m <model-id> --idi <input-dataset-id> -n <unique-name>
+```
+
+> [!TIP]
+> To persist the information returned by profiling, use tags or properties for the model. Using tags or properties stores the data with the model in the model registry. The following examples demonstrate adding a new tag containing the `requestedCpu` and `requestedMemoryInGb` information:
+>
+> ```python
+> model.add_tags({'requestedCpu': details['requestedCpu'],
+> 'requestedMemoryInGb': details['requestedMemoryInGb']})
+> ```
+>
+> ```azurecli-interactive
+> az ml model profile -g <resource-group-name> -w <workspace-name> --i <model-id> --add-tag requestedCpu=1 --add-tag requestedMemoryInGb=0.5
+> ```
++
+## Next steps
+
+* [Troubleshoot a failed deployment](../how-to-troubleshoot-deployment.md)
+* [Deploy to Azure Kubernetes Service](how-to-deploy-azure-kubernetes-service.md)
+* [Create client applications to consume web services](../how-to-consume-web-service.md)
+* [Update web service](../how-to-deploy-update-web-service.md)
+* [How to deploy a model using a custom Docker image](../how-to-deploy-custom-container.md)
+* [Use TLS to secure a web service through Azure Machine Learning](../how-to-secure-web-service.md)
+* [Monitor your Azure Machine Learning models with Application Insights](../how-to-enable-app-insights.md)
+* [Collect data for models in production](../how-to-enable-data-collection.md)
+* [Create event alerts and triggers for model deployments](../how-to-use-event-grid.md)
machine-learning How To Identity Based Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-identity-based-data-access.md
+
+ Title: Identity-based data access to storage services (v1)
+
+description: Learn how to use identity-based data access to connect to storage services on Azure with Azure Machine Learning datastores and the Machine Learning Python SDK v1.
+ Last updated : 01/25/2022
+# Customer intent: As an experienced Python developer, I need to make my data in Azure Storage available to my compute for training my machine learning models.
++
+# Connect to storage by using identity-based data access with SDK v1
+
+In this article, you learn how to connect to storage services on Azure by using identity-based data access and Azure Machine Learning datastores via the [Azure Machine Learning SDK for Python](/python/api/overview/azure/ml/intro).
+
+Typically, datastores use **credential-based authentication** to confirm you have permission to access the storage service. They keep connection information, like your subscription ID and token authorization, in the [key vault](https://azure.microsoft.com/services/key-vault/) that's associated with the workspace. When you create a datastore that uses **identity-based data access**, your Azure account ([Azure Active Directory token](/azure/active-directory/fundamentals/active-directory-whatis)) is used to confirm you have permission to access the storage service. In the **identity-based data access** scenario, no authentication credentials are saved. Only the storage account information is stored in the datastore.
+
+To create datastores with **identity-based** data access via the Azure Machine Learning studio UI, see [Connect to data with the Azure Machine Learning studio](../how-to-connect-data-ui.md#create-datastores).
+
+To create datastores that use **credential-based** authentication, like access keys or service principals, see [Connect to storage services on Azure](how-to-access-data.md).
+
+## Identity-based data access in Azure Machine Learning
+
+There are two scenarios in which you can apply identity-based data access in Azure Machine Learning. These scenarios are a good fit for identity-based access when you're working with confidential data and need more granular data access management:
+
+> [!WARNING]
+> Identity-based data access is not supported for [automated ML experiments](../how-to-configure-auto-train.md).
+
+- Accessing storage services
+- Training machine learning models with private data
+
+### Accessing storage services
+
+You can connect to storage services via identity-based data access with Azure Machine Learning datastores or [Azure Machine Learning datasets](how-to-create-register-datasets.md).
+
+Your authentication credentials are usually kept in a datastore, which is used to ensure you have permission to access the storage service. When these credentials are registered via datastores, any user with the workspace Reader role can retrieve them. That scale of access can be a security concern for some organizations. [Learn more about the workspace Reader role.](../how-to-assign-roles.md#default-roles)
+
+When you use identity-based data access, Azure Machine Learning prompts you for your Azure Active Directory token for data access authentication instead of keeping your credentials in the datastore. That approach allows for data access management at the storage level and keeps credentials confidential.
+
+The same behavior applies when you:
+
+* [Create a dataset directly from storage URLs](#use-data-in-storage).
+* Work with data interactively via a Jupyter Notebook on your local computer or [compute instance](../concept-compute-instance.md).
+
+> [!NOTE]
+> Credentials stored via credential-based authentication include subscription IDs, shared access signature (SAS) tokens, and storage access key and service principal information, like client IDs and tenant IDs.
+
+### Model training on private data
+
+Certain machine learning scenarios involve training models with private data. In such cases, data scientists need to run training workflows without being exposed to the confidential input data. In this scenario, a [managed identity](how-to-use-managed-identities.md) of the training compute is used for data access authentication. This approach allows storage admins to grant Storage Blob Data Reader access to the managed identity that the training compute uses to run the training job. The individual data scientists don't need to be granted access. For more information, see [Set up managed identity on a compute cluster](how-to-create-attach-compute-cluster.md#set-up-managed-identity).
+
+## Prerequisites
+
+- An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
+
+- An Azure storage account with a supported storage type. These storage types are supported:
+ - [Azure Blob Storage](/azure/storage/blobs/storage-blobs-overview)
+ - [Azure Data Lake Storage Gen1](/azure/data-lake-store/)
+ - [Azure Data Lake Storage Gen2](/azure/storage/blobs/data-lake-storage-introduction)
+ - [Azure SQL Database](/azure/azure-sql/database/sql-database-paas-overview)
+
+- The [Azure Machine Learning SDK for Python](/python/api/overview/azure/ml/install).
+
+- An Azure Machine Learning workspace.
+
+ Either [create an Azure Machine Learning workspace](../how-to-manage-workspace.md) or use an [existing one via the Python SDK](../how-to-manage-workspace.md#connect-to-a-workspace).
+
+## Create and register datastores
+
+When you register a storage service on Azure as a datastore, you automatically create and register that datastore to a specific workspace. See [Storage access permissions](#storage-access-permissions) for guidance on required permission types. You also have the option to manually create the storage you want to connect to without any special permissions; you only need the storage name.
+
+See [Work with virtual networks](#work-with-virtual-networks) for details on how to connect to data storage behind virtual networks.
+
+In the following code, notice the absence of authentication parameters like `sas_token`, `account_key`, `subscription_id`, and the service principal `client_id`. This omission indicates that Azure Machine Learning will use identity-based data access for authentication. Creation of datastores typically happens interactively in a notebook or via the studio. So your Azure Active Directory token is used for data access authentication.
+
+> [!NOTE]
+> Datastore names should consist only of lowercase letters, numbers, and underscores.
+
+### Azure blob container
+
+To register an Azure blob container as a datastore, use [`register_azure_blob_container()`](/python/api/azureml-core/azureml.core.datastore%28class%29#register-azure-blob-container-workspace--datastore-name--container-name--account-name--sas-token-none--account-key-none--protocol-none--endpoint-none--overwrite-false--create-if-not-exists-false--skip-validation-false--blob-cache-timeout-none--grant-workspace-access-false--subscription-id-none--resource-group-none-).
+
+The following code creates the `credentialless_blob` datastore, registers it to the `ws` workspace, and assigns it to the `blob_datastore` variable. This datastore accesses the `my_container_name` blob container on the `my_account_name` storage account.
+
+```Python
+from azureml.core import Datastore
+
+# Create blob datastore without credentials.
+blob_datastore = Datastore.register_azure_blob_container(workspace=ws,
+ datastore_name='credentialless_blob',
+ container_name='my_container_name',
+ account_name='my_account_name')
+```
+
+### Azure Data Lake Storage Gen1
+
+Use [register_azure_data_lake()](/python/api/azureml-core/azureml.core.datastore.datastore#register-azure-data-lake-workspace--datastore-name--store-name--tenant-id-none--client-id-none--client-secret-none--resource-url-none--authority-url-none--subscription-id-none--resource-group-none--overwrite-false--grant-workspace-access-false-) to register a datastore that connects to Azure Data Lake Storage Gen1.
+
+The following code creates the `credentialless_adls1` datastore, registers it to the `workspace` workspace, and assigns it to the `adls_dstore` variable. This datastore accesses the `adls_storage` Azure Data Lake Storage account.
+
+```Python
+# Create Azure Data Lake Storage Gen1 datastore without credentials.
+adls_dstore = Datastore.register_azure_data_lake(workspace = workspace,
+ datastore_name='credentialless_adls1',
+ store_name='adls_storage')
+
+```
+
+### Azure Data Lake Storage Gen2
+
+Use [register_azure_data_lake_gen2()](/python/api/azureml-core/azureml.core.datastore.datastore#register-azure-data-lake-gen2-workspace--datastore-name--filesystem--account-name--tenant-id--client-id--client-secret--resource-url-none--authority-url-none--protocol-none--endpoint-none--overwrite-false-) to register a datastore that connects to Azure Data Lake Storage Gen2.
+
+The following code creates the `credentialless_adls2` datastore, registers it to the `ws` workspace, and assigns it to the `adls2_dstore` variable. This datastore accesses the file system `tabular` in the `myadls2` storage account.
+
+```python
+# Create Azure Data Lake Storage Gen2 datastore without credentials.
+adls2_dstore = Datastore.register_azure_data_lake_gen2(workspace=ws,
+ datastore_name='credentialless_adls2',
+ filesystem='tabular',
+ account_name='myadls2')
+```
+
+### Azure SQL database
+
+For an Azure SQL database, use [register_azure_sql_database()](/python/api/azureml-core/azureml.core.datastore.datastore#register-azure-sql-database-workspace--datastore-name--server-name--database-name--tenant-id-none--client-id-none--client-secret-none--resource-url-none--authority-url-none--endpoint-none--overwrite-false--username-none--password-none--subscription-id-none--resource-group-none--grant-workspace-access-false-kwargs-) to register a datastore that connects to an Azure SQL database storage.
+
+The following code creates and registers the `credentialless_sqldb` datastore to the `ws` workspace and assigns it to the variable, `sqldb_dstore`. This datastore accesses the database `mydb` in the `myserver` SQL DB server.
+
+```python
+# Create a sqldatabase datastore without credentials
+
+sqldb_dstore = Datastore.register_azure_sql_database(workspace=ws,
+ datastore_name='credentialless_sqldb',
+ server_name='myserver',
+ database_name='mydb')
+
+```
++
+## Storage access permissions
+
+To help ensure that you securely connect to your storage service on Azure, Azure Machine Learning requires that you have permission to access the corresponding data storage.
+> [!WARNING]
+> Cross tenant access to storage accounts is not supported. If cross tenant access is needed for your scenario, please reach out to the AzureML Data Support team alias at amldatasupport@microsoft.com for assistance with a custom code solution.
+
+Identity-based data access supports connections to **only** the following storage services.
+
+* Azure Blob Storage
+* Azure Data Lake Storage Gen1
+* Azure Data Lake Storage Gen2
+* Azure SQL Database
+
+To access these storage services, you must have at least [Storage Blob Data Reader](/azure/role-based-access-control/built-in-roles#storage-blob-data-reader) access to the storage account. Only storage account owners can [change your access level via the Azure portal](/azure/storage/blobs/assign-azure-role-data-access).
+
+If you prefer not to use your user identity (Azure Active Directory), you also have the option to grant a workspace managed-system identity (MSI) permission to create the datastore. To do so, you must have Owner permissions to the storage account and add the `grant_workspace_access=True` parameter to your data register method, as shown in the sketch below.
+
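+For example, a minimal sketch that grants the workspace MSI access while registering a blob container (the datastore, container, and account names are illustrative, and `ws` is the workspace from the earlier examples):
+
+```python
+from azureml.core import Datastore
+
+# Requires Owner permissions on the storage account; the workspace MSI is
+# granted access to the storage during registration.
+blob_datastore = Datastore.register_azure_blob_container(workspace=ws,
+                                                         datastore_name='credentialless_blob_msi',
+                                                         container_name='my_container_name',
+                                                         account_name='my_account_name',
+                                                         grant_workspace_access=True)
+```
+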
+If you're training a model on a remote compute target and want to access the data for training, the compute identity must be granted at least the Storage Blob Data Reader role from the storage service. Learn how to [set up managed identity on a compute cluster](how-to-create-attach-compute-cluster.md#set-up-managed-identity).
+
+## Work with virtual networks
+
+By default, Azure Machine Learning can't communicate with a storage account that's behind a firewall or in a virtual network.
+
+You can configure storage accounts to allow access only from within specific virtual networks. This configuration requires additional steps to ensure data isn't leaked outside of the network. This behavior is the same for credential-based data access. For more information, see [How to configure virtual network scenarios](how-to-access-data.md#virtual-network).
+
+If your storage account has virtual network settings, those settings dictate what identity type and permissions are needed for access. For example, for data preview and data profile, the virtual network settings determine what type of identity is used to authenticate data access.
+
+* In scenarios where only certain IPs and subnets are allowed to access the storage, Azure Machine Learning uses the workspace MSI to accomplish data previews and profiles.
+
+* If your storage is ADLS Gen 2 or Blob and has virtual network settings, customers can use either user identity or workspace MSI depending on the datastore settings defined during creation.
+
+* If the virtual network setting is "Allow Azure services on the trusted services list to access this storage account", then the workspace MSI is used.
+
+## Use data in storage
+
+We recommend that you use [Azure Machine Learning datasets](how-to-create-register-datasets.md) when you interact with your data in storage with Azure Machine Learning.
+
+> [!IMPORTANT]
+> Datasets using identity-based data access are not supported for [automated ML experiments](../how-to-configure-auto-train.md).
+
+Datasets package your data into a lazily evaluated consumable object for machine learning tasks like training. Also, with datasets you can [download or mount](../how-to-train-with-datasets.md#mount-vs-download) files of any format from Azure storage services like Azure Blob Storage and Azure Data Lake Storage to a compute target.
+
+To create a dataset, you can reference paths from datastores that also use identity-based data access.
+
+* If your underlying storage account type is Blob or ADLS Gen 2, your user identity needs the Storage Blob Data Reader role.
+* If your underlying storage is ADLS Gen 1, permissions can be set via the storage's access control list (ACL).
+
+In the following example, `blob_datastore` already exists and uses identity-based data access.
+
+```python
+blob_dataset = Dataset.Tabular.from_delimited_files(path=(blob_datastore, 'test.csv'))
+```
+
+Another option is to skip datastore creation and create datasets directly from storage URLs. This functionality currently supports only Azure blobs and Azure Data Lake Storage Gen1 and Gen2. For creation based on storage URL, only the user identity is needed to authenticate.
+
+```python
+blob_dset = Dataset.File.from_files('https://myblob.blob.core.windows.net/may/keras-mnist-fashion/')
+```
+
+When you submit a training job that consumes a dataset created with identity-based data access, the managed identity of the training compute is used for data access authentication. Your Azure Active Directory token isn't used. For this scenario, ensure that the managed identity of the compute is granted at least the Storage Blob Data Reader role from the storage service. For more information, see [Set up managed identity on compute clusters](how-to-create-attach-compute-cluster.md#set-up-managed-identity).
+
+## Access data for training jobs on compute clusters (preview)
++
+When training on [Azure Machine Learning compute clusters](how-to-create-attach-compute-cluster.md#what-is-a-compute-cluster), you can authenticate to storage with your Azure Active Directory token.
+
+This authentication mode allows you to:
+* Set up fine-grained permissions, where different workspace users can have access to different storage accounts or folders within storage accounts.
+* Audit storage access because the storage logs show which identities were used to access data.
+
+> [!WARNING]
+> This functionality has the following limitations
+> * This feature is only supported for experiments submitted via the [Azure Machine Learning CLI](../how-to-configure-cli.md)
+> * Only CommandJobs, and PipelineJobs with CommandSteps and AutoMLSteps, are supported
+> * User identity and compute managed identity cannot be used for authentication within the same job.
+
+The following steps outline how to set up identity-based data access for training jobs on compute clusters.
+
+1. Grant the user identity access to storage resources. For example, grant StorageBlobReader access to the specific storage account you want to use or grant ACL-based permission to specific folders or files in Azure Data Lake Gen 2 storage.
+
+1. Create an Azure Machine Learning datastore without cached credentials for the storage account. If a datastore has cached credentials, such as storage account key, those credentials are used instead of user identity.
+
+1. Submit a training job with the **identity** property set to **type: user_identity**, as shown in the following job specification. During the training job, the authentication to storage happens via the identity of the user that submits the job.
+
+> [!NOTE]
+> If the **identity** property is left unspecified and the datastore does not have cached credentials, then the compute managed identity becomes the fallback option.
+
+```yaml
+command: |
+ echo "--census-csv: ${{inputs.census_csv}}"
+ python hello-census.py --census-csv ${{inputs.census_csv}}
+code: src
+inputs:
+ census_csv:
+ type: uri_file
+ path: azureml://datastores/mydata/paths/census.csv
+environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest
+compute: azureml:cpu-cluster
+identity:
+ type: user_identity
+```
+
+## Next steps
+
+* [Create an Azure Machine Learning dataset](how-to-create-register-datasets.md)
+* [Train with datasets](../how-to-train-with-datasets.md)
+* [Create a datastore with key-based data access](how-to-access-data.md)
machine-learning How To Log View Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-log-view-metrics.md
+
+ Title: Log & view metrics and log files v1
+
+description: Enable logging on your ML training runs to monitor real-time run metrics, and to help diagnose errors and warnings.
+ Last updated : 04/19/2021
+# Log & view metrics and log files v1
+
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning Python SDK you are using:"]
+> * [v1](how-to-log-view-metrics.md)
+> * [v2 (preview)](../how-to-log-view-metrics.md)
+
+Log real-time information using both the default Python logging package and Azure Machine Learning Python SDK-specific functionality. You can log locally and send logs to your workspace in the portal.
+
+Logs can help you diagnose errors and warnings, or track performance metrics like parameters and model performance. In this article, you learn how to enable logging in the following scenarios:
+
+> [!div class="checklist"]
+> * Log run metrics
+> * Interactive training sessions
+> * Submitting training jobs using ScriptRunConfig
+> * Python native `logging` settings
+> * Logging from additional sources
++
+> [!TIP]
+> This article shows you how to monitor the model training process. If you're interested in monitoring resource usage and events from Azure Machine learning, such as quotas, completed training runs, or completed model deployments, see [Monitoring Azure Machine Learning](../monitor-azure-machine-learning.md).
+
+## Data types
+
+You can log multiple data types including scalar values, lists, tables, images, directories, and more. For more information, and Python code examples for different data types, see the [Run class reference page](/python/api/azureml-core/azureml.core.run%28class%29).
+
+## Logging run metrics
+
+Use the following methods in the logging APIs to influence the metrics visualizations. Note the [service limits](../resource-limits-quotas-capacity.md#metrics) for these logged metrics.
+
+|Logged Value|Example code| Format in portal|
+|-|-|-|
+|Log an array of numeric values| `run.log_list(name='Fibonacci', value=[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89])`|single-variable line chart|
+|Log a single numeric value with the same metric name repeatedly used (like from within a for loop)| `for i in tqdm(range(-10, 10)): run.log(name='Sigmoid', value=1 / (1 + np.exp(-i))) angle = i / 2.0`| Single-variable line chart|
+|Log a row with 2 numerical columns repeatedly|`run.log_row(name='Cosine Wave', angle=angle, cos=np.cos(angle)) sines['angle'].append(angle) sines['sine'].append(np.sin(angle))`|Two-variable line chart|
+|Log table with 2 numerical columns|`run.log_table(name='Sine Wave', value=sines)`|Two-variable line chart|
+|Log image|`run.log_image(name='food', path='./breadpudding.jpg', plot=None, description='desert')`|Use this method to log an image file or a matplotlib plot to the run. These images will be visible and comparable in the run record|
+
+## Logging with MLflow
+
+We recommend logging your models, metrics, and artifacts with MLflow because it's open source and supports portability from local mode to the cloud. The following table and code examples show how to use MLflow to log metrics and artifacts from your training runs.
+[Learn more about MLflow's logging methods and design patterns](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.log_artifact).
+
+Be sure to install the `mlflow` and `azureml-mlflow` pip packages in your environment.
+
+```conda
+pip install mlflow
+pip install azureml-mlflow
+```
+
+Set the MLflow tracking URI to point at the Azure Machine Learning backend to ensure that your metrics and artifacts are logged to your workspace.
+
+```python
+from azureml.core import Workspace
+import mlflow
+from mlflow.tracking import MlflowClient
+
+ws = Workspace.from_config()
+mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri())
+
+mlflow.create_experiment("mlflow-experiment")
+mlflow.set_experiment("mlflow-experiment")
+mlflow_run = mlflow.start_run()
+```
+
+|Logged Value|Example code| Notes|
+|-|-|-|
+|Log a numeric value (int or float) | `mlflow.log_metric('my_metric', 1)`| |
+|Log a boolean value | `mlflow.log_metric('my_metric', 0)`| 1 = True, 0 = False|
+|Log a string | `mlflow.log_text('foo', 'my_string')`| Logged as an artifact|
+|Log numpy metrics or PIL image objects|`mlflow.log_image(img, 'figure.png')`||
+|Log matplotlib plot or image file|`mlflow.log_figure(fig, "figure.png")`||
+
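+For example, a minimal sketch that logs a metric and a figure to the run started above, then ends the run (the metric name and figure contents are illustrative):
+
+```python
+import matplotlib.pyplot as plt
+
+# Log a scalar metric to the active MLflow run.
+mlflow.log_metric('sample_accuracy', 0.91)
+
+# Log a matplotlib figure as an artifact.
+fig, ax = plt.subplots()
+ax.plot([0, 1], [0, 1])
+mlflow.log_figure(fig, 'figure.png')
+
+# End the run started earlier with mlflow.start_run().
+mlflow.end_run()
+```
+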
+## View run metrics via the SDK
+You can view the metrics of a trained model using `run.get_metrics()`.
+
+```python
+from azureml.core import Run
+run = Run.get_context()
+metric_value = 0.95  # example value to log
+run.log('metric-name', metric_value)
+
+metrics = run.get_metrics()
+# metrics is of type Dict[str, List[float]] mapping metric names
+# to a list of the values for that metric in the given run.
+
+metrics.get('metric-name')
+# list of metrics in the order they were recorded
+```
+
+You can also access run information using MLflow through the run object's data and info properties. See the [MLflow.entities.Run object](https://mlflow.org/docs/latest/python_api/mlflow.entities.html#mlflow.entities.Run) documentation for more information.
+
+After the run completes, you can retrieve it by using `MlflowClient()`.
+
+```python
+from mlflow.tracking import MlflowClient
+
+# Use MlFlow to retrieve the run that was just completed
+client = MlflowClient()
+finished_mlflow_run = client.get_run(mlflow_run.info.run_id)
+```
+
+You can view the metrics, parameters, and tags for the run in the data field of the run object.
+
+```python
+metrics = finished_mlflow_run.data.metrics
+tags = finished_mlflow_run.data.tags
+params = finished_mlflow_run.data.params
+```
+
+>[!NOTE]
+> The metrics dictionary under `mlflow.entities.Run.data.metrics` only returns the most recently logged value for a given metric name. For example, if you log, in order, 1, then 2, then 3, then 4 to a metric called `sample_metric`, only 4 is present in the metrics dictionary for `sample_metric`.
+>
+> To get all metrics logged for a particular metric name, you can use [`MlflowClient.get_metric_history()`](https://www.mlflow.org/docs/latest/python_api/mlflow.tracking.html#mlflow.tracking.MlflowClient.get_metric_history), as in the sketch below.
+
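+For example, a minimal sketch that retrieves the full history of a metric (the metric name is illustrative, and `finished_mlflow_run` is the run retrieved in the previous example):
+
+```python
+from mlflow.tracking import MlflowClient
+
+client = MlflowClient()
+
+# Returns every logged value for 'sample_metric', not just the latest one.
+history = client.get_metric_history(finished_mlflow_run.info.run_id, 'sample_metric')
+for metric in history:
+    print(metric.step, metric.value)
+```
+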
+<a name="view-the-experiment-in-the-web-portal"></a>
+
+## View run metrics in the studio UI
+
+You can browse completed run records, including logged metrics, in the [Azure Machine Learning studio](https://ml.azure.com).
+
+Navigate to the **Experiments** tab. To view all your runs in your Workspace across Experiments, select the **All runs** tab. You can drill down on runs for specific Experiments by applying the Experiment filter in the top menu bar.
+
+For the individual Experiment view, select the **All experiments** tab. On the experiment run dashboard, you can see tracked metrics and logs for each run.
+
+You can also edit the run list table to select multiple runs and display either the last, minimum, or maximum logged value for your runs. Customize your charts to compare the logged metrics values and aggregates across multiple runs. You can plot multiple metrics on the y-axis of your chart and customize your x-axis to plot your logged metrics.
++
+### View and download log files for a run
+
+Log files are an essential resource for debugging the Azure ML workloads. After submitting a training job, drill down to a specific run to view its logs and outputs:
+
+1. Navigate to the **Experiments** tab.
+1. Select the runID for a specific run.
+1. Select **Outputs and logs** at the top of the page.
+2. Select **Download all** to download all your logs into a zip folder.
+3. You can also download individual log files by choosing the log file and selecting **Download**
++
+#### user_logs folder
+
+This folder contains information about the user-generated logs. This folder is open by default, and the **std_log.txt** log is selected. The **std_log.txt** file is where your code's logs (for example, print statements) appear. This file contains the `stdout` and `stderr` logs from your control script and training script, one per process. In most cases, you'll monitor the logs here.
+
+#### system_logs folder
+
+This folder contains the logs generated by Azure Machine Learning, and it's closed by default. The logs generated by the system are grouped into different folders, based on the stage of the job in the runtime.
+
+#### Other folders
+
+For jobs that train on clusters with multiple nodes, logs are present for each node IP. The structure for each node is the same as for single-node jobs. There is one additional logs folder for the overall execution, stderr, and stdout logs.
+
+Azure Machine Learning logs information from various sources during training, such as AutoML or the Docker container that runs the training job. Many of these logs are not documented. If you encounter problems and contact Microsoft support, they may be able to use these logs during troubleshooting.
+
+## Interactive logging session
+
+Interactive logging sessions are typically used in notebook environments. The method [Experiment.start_logging()](/python/api/azureml-core/azureml.core.experiment%28class%29#start-logging--args-kwargs-) starts an interactive logging session. Any metrics logged during the session are added to the run record in the experiment. The method [run.complete()](/python/api/azureml-core/azureml.core.run%28class%29#complete--set-status-true-) ends the session and marks the run as completed.
+
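+For example, a minimal sketch of an interactive logging session (the experiment name and metric are illustrative):
+
+```python
+from azureml.core import Experiment, Workspace
+
+ws = Workspace.from_config()
+experiment = Experiment(workspace=ws, name='interactive-logging-example')
+
+# Start an interactive logging session; metrics logged here are added to the run record.
+run = experiment.start_logging()
+run.log('sample_metric', 0.95)
+
+# End the session and mark the run as completed.
+run.complete()
+```
+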
+## ScriptRun logs
+
+In this section, you learn how to add logging code inside of runs created when configured with ScriptRunConfig. You can use the [**ScriptRunConfig**](/python/api/azureml-core/azureml.core.scriptrunconfig) class to encapsulate scripts and environments for repeatable runs. You can also use this option to show a visual Jupyter Notebooks widget for monitoring.
+
+This example performs a parameter sweep over alpha values and captures the results using the [run.log()](/python/api/azureml-core/azureml.core.run%28class%29#log-name--value--description-) method.
+
+1. Create a training script that includes the logging logic, `train.py`.
+
+ [!code-python[](~/MachineLearningNotebooks/how-to-use-azureml/training/train-on-local/train.py)]
++
+1. Submit the ```train.py``` script to run in a user-managed environment. The entire script folder is submitted for training.
+
+ [!notebook-python[] (~/MachineLearningNotebooks/how-to-use-azureml/training/train-on-local/train-on-local.ipynb?name=src)]
++
+ [!notebook-python[] (~/MachineLearningNotebooks/how-to-use-azureml/training/train-on-local/train-on-local.ipynb?name=run)]
+
+ The `show_output` parameter turns on verbose logging, which lets you see details from the training process as well as information about any remote resources or compute targets. Use the following code to turn on verbose logging when you submit the experiment.
+
+ ```python
+ run = exp.submit(src, show_output=True)
+ ```
+
+ You can also use the same parameter in the `wait_for_completion` function on the resulting run.
+
+ ```python
+ run.wait_for_completion(show_output=True)
+ ```
+
+## Native Python logging
+
+Some logs in the SDK may contain an error that instructs you to set the logging level to DEBUG. To set the logging level, add the following code to your script.
+
+```python
+import logging
+logging.basicConfig(level=logging.DEBUG)
+```
+
+## Other logging sources
+
+Azure Machine Learning can also log information from other sources during training, such as automated machine learning runs, or Docker containers that run the jobs. These logs aren't documented, but if you encounter problems and contact Microsoft support, they may be able to use these logs during troubleshooting.
+
+For information on logging metrics in Azure Machine Learning designer, see [How to log metrics in the designer](../how-to-track-designer-experiments.md)
+
+## Example notebooks
+
+The following notebooks demonstrate concepts in this article:
+* [how-to-use-azureml/training/train-on-local](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/training/train-on-local)
+* [how-to-use-azureml/track-and-monitor-experiments/logging-api](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/track-and-monitor-experiments/logging-api)
++
+## Next steps
+
+See these articles to learn more on how to use Azure Machine Learning:
+
+* See an example of how to register the best model and deploy it in the tutorial, [Train an image classification model with Azure Machine Learning](../tutorial-train-deploy-notebook.md).
machine-learning How To Prepare Datasets For Automl Images V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-prepare-datasets-for-automl-images-v1.md
+
+ Title: Prepare data for computer vision tasks v1
+
+description: Image data preparation for Azure Machine Learning automated ML to train computer vision models on classification, object detection, and segmentation v1
+ Last updated : 10/13/2021
+# Prepare data for computer vision tasks with automated machine learning v1
++
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
+> * [v1](how-to-prepare-datasets-for-automl-images-v1.md)
+> * [v2 (current version)](../how-to-prepare-datasets-for-automl-images.md)
+
+
+> [!IMPORTANT]
+> Support for training computer vision models with automated ML in Azure Machine Learning is an experimental public preview feature. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+In this article, you learn how to prepare image data for training computer vision models with [automated machine learning in Azure Machine Learning](../concept-automated-ml.md).
+
+To generate models for computer vision tasks with automated machine learning, you need to bring labeled image data as input for model training in the form of an [Azure Machine Learning TabularDataset](/python/api/azureml-core/azureml.data.tabulardataset).
+
+To ensure your TabularDataset contains the accepted schema for consumption in automated ML, you can use the Azure Machine Learning data labeling tool or use a conversion script.
+
+## Prerequisites
+
+* Familiarize yourself with the accepted [schemas for JSONL files for AutoML computer vision experiments](../reference-automl-images-schema.md).
+
+* Labeled data you want to use to train computer vision models with automated ML.
+
+## Azure Machine Learning data labeling
+
+If you don't have labeled data, you can use Azure Machine Learning's [data labeling tool](../how-to-create-image-labeling-projects.md) to manually label images. This tool automatically generates the data required for training in the accepted format.
+
+It helps to create, manage, and monitor data labeling tasks for
+
++ Image classification (multi-class and multi-label)
++ Object detection (bounding box)
++ Instance segmentation (polygon)
+
+If you already have a data labeling project and you want to use that data, you can [export your labeled data as an Azure Machine Learning TabularDataset](../how-to-create-image-labeling-projects.md#export-the-labels), which can then be used directly with automated ML for training computer vision models.
+
+## Use conversion scripts
+
+If you have labeled data in popular computer vision data formats, like VOC or COCO, [helper scripts](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/image-object-detection/coco2jsonl.py) to generate JSONL files for training and validation data are available in [notebook examples](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml).
+
+If your data doesn't follow any of the previously mentioned formats, you can use your own script to generate JSON Lines files based on schemas defined in [Schema for JSONL files for AutoML image experiments](../reference-automl-images-schema.md).
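+
+For example, a minimal sketch for the multi-class image classification schema might look like the following. The folder layout, datastore path, and class name are illustrative; adapt them to your own data and to the schema for your task type.
+
+```python
+import json
+import os
+
+src_images = "./fridgeObjects"
+label = "milk_bottle"  # illustrative class name; the images for this class live in a folder of the same name
+
+# Write one JSON object per line; image_url points at the path the images
+# will have after they're uploaded to the datastore.
+with open("train_annotations.jsonl", "w") as jsonl_file:
+    for image_file in os.listdir(os.path.join(src_images, label)):
+        annotation = {
+            "image_url": f"AmlDatastore://workspaceblobstore/fridgeObjects/{label}/{image_file}",
+            "label": label,
+        }
+        jsonl_file.write(json.dumps(annotation) + "\n")
+```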
+
+After your data file(s) are converted to the accepted JSONL format, you can upload them to your storage account on Azure.
+
+## Upload the JSONL file and images to storage
+
+To use the data for automated ML training, upload the data to your [Azure Machine Learning workspace](../concept-workspace.md) via a [datastore](../how-to-access-data.md). The datastore provides a mechanism for you to upload/download data to storage on Azure, and interact with it from your remote compute targets.
+
+Upload the entire parent directory consisting of images and JSONL files to the default datastore that is automatically created upon workspace creation. This datastore connects to the default Azure blob storage container that was created as part of workspace creation.
+
+```python
+# Retrieve the default datastore that's automatically created when you set up a workspace
+ds = ws.get_default_datastore()
+ds.upload(src_dir='./fridgeObjects', target_path='fridgeObjects')
+```
+Once the data upload is done, you can create an [Azure Machine Learning TabularDataset](/python/api/azureml-core/azureml.data.tabulardataset) and register it to your workspace for future use as input to your automated ML experiments for computer vision models.
+
+```python
+from azureml.core import Dataset
+from azureml.data import DataType
+
+training_dataset_name = 'fridgeObjectsTrainingDataset'
+# create training dataset
+training_dataset = Dataset.Tabular.from_json_lines_files(
+    path=ds.path("fridgeObjects/train_annotations.jsonl"),
+    set_column_types={"image_url": DataType.to_stream(ds.workspace)}
+)
+training_dataset = training_dataset.register(workspace=ws, name=training_dataset_name)
+
+print("Training dataset name: " + training_dataset.name)
+```
+
+## Next steps
+
+* [Train computer vision models with automated machine learning](../how-to-auto-train-image-models.md).
+* [Train a small object detection model with automated machine learning](../how-to-use-automl-small-object-detect.md).
+* [Tutorial: Train an object detection model (preview) with AutoML and Python](../tutorial-auto-train-image-models.md).
machine-learning How To Track Monitor Analyze Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-track-monitor-analyze-runs.md
+
+ Title: Track, monitor, and analyze runs
+
+description: Learn how to start, monitor, and track your machine learning experiment runs with the Azure Machine Learning Python SDK.
+ Last updated : 04/25/2022
+# Start, monitor, and track run history
+++
+The [Azure Machine Learning SDK for Python v1](/python/api/overview/azure/ml/intro) and [Machine Learning CLI](reference-azure-machine-learning-cli.md) provide various methods to monitor, organize, and track your runs for training and experimentation. Your ML run history is an important part of an explainable and repeatable ML development process.
+
+> [!TIP]
+> For information on using studio, see [Track, monitor, and analyze runs with studio](../how-to-track-monitor-analyze-runs.md).
+>
+> If you are using Azure Machine Learning SDK v2, see the following articles:
+> * [Log & view metrics and log files (v2)](../how-to-log-view-metrics.md).
+> * [Track experiments with MLflow and CLI (v2)](../how-to-track-monitor-analyze-runs.md).
+
+This article shows how to do the following tasks:
+
+* Monitor run performance.
+* Tag and find runs.
+* Run search over your run history.
+* Cancel or fail runs.
+* Create child runs.
+* Monitor the run status by email notification.
+
+
+> [!TIP]
+> If you're looking for information on monitoring the Azure Machine Learning service and associated Azure services, see [How to monitor Azure Machine Learning](../monitor-azure-machine-learning.md).
+> If you're looking for information on monitoring models deployed as web services, see [Collect model data](../how-to-enable-data-collection.md) and [Monitor with Application Insights](../how-to-enable-app-insights.md).
+
+## Prerequisites
+
+You'll need the following items:
+
+* An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
+
+* An [Azure Machine Learning workspace](../how-to-manage-workspace.md).
+
+* The Azure Machine Learning SDK for Python (version 1.0.21 or later). To install or update to the latest version of the SDK, see [Install or update the SDK](/python/api/overview/azure/ml/install).
+
+ To check your version of the Azure Machine Learning SDK, use the following code:
+
+ ```python
+ import azureml.core
+ print(azureml.core.VERSION)
+ ```
+
+* The [Azure CLI](/cli/azure/) and [CLI extension for Azure Machine Learning](reference-azure-machine-learning-cli.md).
++
+## Monitor run performance
+
+* Start a run and its logging process
+
+ # [Python](#tab/python)
+
+ [!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
+
+ 1. Set up your experiment by importing the [Workspace](/python/api/azureml-core/azureml.core.workspace.workspace), [Experiment](/python/api/azureml-core/azureml.core.experiment.experiment), [Run](/python/api/azureml-core/azureml.core.run%28class%29), and [ScriptRunConfig](/python/api/azureml-core/azureml.core.scriptrunconfig) classes from the [azureml.core](/python/api/azureml-core/azureml.core) package.
+
+ ```python
+ import azureml.core
+ from azureml.core import Workspace, Experiment, Run
+ from azureml.core import ScriptRunConfig
+
+ ws = Workspace.from_config()
+ exp = Experiment(workspace=ws, name="explore-runs")
+ ```
+
+ 1. Start a run and its logging process with the [`start_logging()`](/python/api/azureml-core/azureml.core.experiment%28class%29#start-logging--args-kwargs-) method.
+
+ ```python
+ notebook_run = exp.start_logging()
+ notebook_run.log(name="message", value="Hello from run!")
+ ```
+
+ # [Azure CLI](#tab/azure-cli)
+
+ [!INCLUDE [cli v1](../../../includes/machine-learning-cli-v1.md)]
+
+ To start a run of your experiment, use the following steps:
+
+ 1. From a shell or command prompt, use the Azure CLI to authenticate to your Azure subscription:
+
+ ```azurecli-interactive
+ az login
+ ```
+ [!INCLUDE [select-subscription](../../../includes/machine-learning-cli-subscription.md)]
+
+ 1. Attach a workspace configuration to the folder that contains your training script. Replace `myworkspace` with your Azure Machine Learning workspace. Replace `myresourcegroup` with the Azure resource group that contains your workspace:
+
++
+ ```azurecli-interactive
+ az ml folder attach -w myworkspace -g myresourcegroup
+ ```
+
+ This command creates a `.azureml` subdirectory that contains example runconfig and conda environment files. It also contains a `config.json` file that is used to communicate with your Azure Machine Learning workspace.
+
+ For more information, see [az ml folder attach](/cli/azure/ml(v1)/folder#az-ml-folder-attach).
+
+ 2. To start the run, use the following command. When using this command, specify the name of the runconfig file (the text before \*.runconfig if you're looking at your file system) against the -c parameter.
+
+ ```azurecli-interactive
+ az ml run submit-script -c sklearn -e testexperiment train.py
+ ```
+
+ > [!TIP]
+ > The `az ml folder attach` command created a `.azureml` subdirectory, which contains two example runconfig files.
+ >
+ > If you have a Python script that creates a run configuration object programmatically, you can use [RunConfig.save()](/python/api/azureml-core/azureml.core.runconfiguration#save-path-none--name-none--separate-environment-yaml-false-) to save it as a runconfig file.
+ >
+ > For more example runconfig files, see [https://github.com/MicrosoftDocs/pipelines-azureml/](https://github.com/MicrosoftDocs/pipelines-azureml/).
+
+ For more information, see [az ml run submit-script](/cli/azure/ml(v1)/run#az-ml-run-submit-script).
+
+
+
+* Monitor the status of a run
+
+ # [Python](#tab/python)
+
+ [!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
+
+ * Get the status of a run with the [`get_status()`](/python/api/azureml-core/azureml.core.run%28class%29#get-status--) method.
+
+ ```python
+ print(notebook_run.get_status())
+ ```
+
+ * To get the run ID, execution time, and other details about the run, use the [`get_details()`](/python/api/azureml-core/azureml.core.workspace.workspace#get-details--) method.
+
+ ```python
+ print(notebook_run.get_details())
+ ```
+
+ * When your run finishes successfully, use the [`complete()`](/python/api/azureml-core/azureml.core.run%28class%29#complete--set-status-true-) method to mark it as completed.
+
+ ```python
+ notebook_run.complete()
+ print(notebook_run.get_status())
+ ```
+
+ * If you use Python's `with...as` design pattern, the run will automatically mark itself as completed when the run is out of scope. You don't need to manually mark the run as completed.
+
+ ```python
+ with exp.start_logging() as notebook_run:
+ notebook_run.log(name="message", value="Hello from run!")
+ print(notebook_run.get_status())
+
+ print(notebook_run.get_status())
+ ```
+
+ # [Azure CLI](#tab/azure-cli)
+
+ [!INCLUDE [cli v1](../../../includes/machine-learning-cli-v1.md)]
+
+ * To view a list of runs for your experiment, use the following command. Replace `experiment` with the name of your experiment:
+
+ ```azurecli-interactive
+ az ml run list --experiment-name experiment
+ ```
+
+ This command returns a JSON document that lists information about runs for this experiment.
+
+ For more information, see [az ml experiment list](/cli/azure/ml(v1)/experiment#az-ml-experiment-list).
+
+ * To view information on a specific run, use the following command. Replace `runid` with the ID of the run:
+
+ ```azurecli-interactive
+ az ml run show -r runid
+ ```
+
+ This command returns a JSON document that lists information about the run.
+
+ For more information, see [az ml run show](/cli/azure/ml(v1)/run#az-ml-run-show).
+
+
+
+## Tag and find runs
+
+In Azure Machine Learning, you can use properties and tags to help organize and query your runs for important information.
+
+* Add properties and tags
+
+ # [Python](#tab/python)
+
+ [!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
+
+ To add searchable metadata to your runs, use the [`add_properties()`](/python/api/azureml-core/azureml.core.run%28class%29#add-properties-properties-) method. For example, the following code adds the `"author"` property to the run:
+
+ ```Python
+ local_run.add_properties({"author":"azureml-user"})
+ print(local_run.get_properties())
+ ```
+
+ Properties are immutable, so they create a permanent record for auditing purposes. The following code example results in an error, because we already added `"azureml-user"` as the `"author"` property value in the preceding code:
+
+ ```Python
+ try:
+ local_run.add_properties({"author":"different-user"})
+ except Exception as e:
+ print(e)
+ ```
+
+ Unlike properties, tags are mutable. To add searchable and meaningful information for consumers of your experiment, use the [`tag()`](/python/api/azureml-core/azureml.core.run%28class%29#tag-key--value-none-) method.
+
+ ```Python
+ local_run.tag("quality", "great run")
+ print(local_run.get_tags())
+
+ local_run.tag("quality", "fantastic run")
+ print(local_run.get_tags())
+ ```
+
+ You can also add simple string tags. When these tags appear in the tag dictionary as keys, they have a value of `None`.
+
+ ```Python
+ local_run.tag("worth another look")
+ print(local_run.get_tags())
+ ```
+
+ # [Azure CLI](#tab/azure-cli)
+
+ [!INCLUDE [cli v1](../../../includes/machine-learning-cli-v1.md)]
+
+ > [!NOTE]
+ > Using the CLI, you can only add or update tags.
+
+ To add or update a tag, use the following command:
+
+ ```azurecli-interactive
+ az ml run update -r runid --add-tag quality='fantastic run'
+ ```
+
+ For more information, see [az ml run update](/cli/azure/ml(v1)/run#az-ml-run-update).
+
+
+
+* Query properties and tags
+
+ You can query runs within an experiment to return a list of runs that match specific properties and tags.
+
+ # [Python](#tab/python)
+
+ [!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
+
+ ```Python
+ list(exp.get_runs(properties={"author":"azureml-user"},tags={"quality":"fantastic run"}))
+ list(exp.get_runs(properties={"author":"azureml-user"},tags="worth another look"))
+ ```
+
+ # [Azure CLI](#tab/azure-cli)
+
+ [!INCLUDE [cli v1](../../../includes/machine-learning-cli-v1.md)]
+
+ The Azure CLI supports [JMESPath](http://jmespath.org) queries, which can be used to filter runs based on properties and tags. To use a JMESPath query with the Azure CLI, specify it with the `--query` parameter. The following examples show some queries using properties and tags:
+
+ ```azurecli-interactive
+ # list runs where the author property = 'azureml-user'
+ az ml run list --experiment-name experiment --query "[?properties.author=='azureml-user']"
+ # list runs where the tags contain a key that starts with 'worth another look'
+ az ml run list --experiment-name experiment --query "[?tags.keys(@)[?starts_with(@, 'worth another look')]]"
+ # list runs where the author property = 'azureml-user' and the 'quality' tag equals 'fantastic run'
+ az ml run list --experiment-name experiment --query "[?properties.author=='azureml-user' && tags.quality=='fantastic run']"
+ ```
+
+ For more information on querying Azure CLI results, see [Query Azure CLI command output](/cli/azure/query-azure-cli).
+
+
+## Cancel or fail runs
+
+If you notice a mistake or if your run is taking too long to finish, you can cancel the run.
+
+# [Python](#tab/python)
++
+To cancel a run using the SDK, use the [`cancel()`](/python/api/azureml-core/azureml.core.run%28class%29#cancel--) method:
+
+```python
+src = ScriptRunConfig(source_directory='.', script='hello_with_delay.py')
+local_run = exp.submit(src)
+print(local_run.get_status())
+
+local_run.cancel()
+print(local_run.get_status())
+```
+
+If your run finishes, but it contains an error (for example, the incorrect training script was used), you can use the [`fail()`](/python/api/azureml-core/azureml.core.run%28class%29#fail-error-details-none--error-code-noneset-status-true-) method to mark it as failed.
+
+```python
+local_run = exp.submit(src)
+local_run.fail()
+print(local_run.get_status())
+```
+
+# [Azure CLI](#tab/azure-cli)
++
+To cancel a run using the CLI, use the following command. Replace `runid` with the ID of the run
+
+```azurecli-interactive
+az ml run cancel -r runid -w workspace_name -e experiment_name
+```
+
+For more information, see [az ml run cancel](/cli/azure/ml(v1)/run#az-ml-run-cancel).
+++
+## Create child runs
++
+Create child runs to group together related runs, such as for different hyperparameter-tuning iterations.
+
+> [!NOTE]
+> Child runs can only be created using the SDK.
+
+This code example uses the `hello_with_children.py` script to create a batch of five child runs from within a submitted run by using the [`child_run()`](/python/api/azureml-core/azureml.core.run%28class%29#child-run-name-none--run-id-none--outputs-none-) method:
+
+```python
+!more hello_with_children.py
+src = ScriptRunConfig(source_directory='.', script='hello_with_children.py')
+
+local_run = exp.submit(src)
+local_run.wait_for_completion(show_output=True)
+print(local_run.get_status())
+
+with exp.start_logging() as parent_run:
+ for c,count in enumerate(range(5)):
+ with parent_run.child_run() as child:
+ child.log(name="Hello from child run", value=c)
+```
+
+> [!NOTE]
+> As they move out of scope, child runs are automatically marked as completed.
+
+To create many child runs efficiently, use the [`create_children()`](/python/api/azureml-core/azureml.core.run.run#create-children-count-none--tag-key-none--tag-values-none-) method. Because each creation results in a network call,
+creating a batch of runs is more efficient than creating them one by one.
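+
+For example, a minimal sketch (the count and logged values are illustrative):
+
+```python
+with exp.start_logging() as parent_run:
+    # A single network call creates all five child runs.
+    children = parent_run.create_children(count=5)
+    for index, child in enumerate(children):
+        child.log(name="Hello from child run", value=index)
+        child.complete()
+```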
+
+### Submit child runs
+
+Child runs can also be submitted from a parent run. This allows you to create hierarchies of parent and child runs. You can't create a parentless child run: even if the parent run does nothing but launch child runs, it's still necessary to create the hierarchy. The statuses of all runs are independent: a parent can be in the `"Completed"` successful state even if one or more child runs were canceled or failed.
+
+You may wish your child runs to use a different run configuration than the parent run. For instance, you might use a less-powerful, CPU-based configuration for the parent, while using GPU-based configurations for your children. Another common wish is to pass each child different arguments and data. To customize a child run, create a `ScriptRunConfig` object for the child run.
+
+> [!IMPORTANT]
+> To submit a child run from a parent run on a remote compute, you must sign in to the workspace in the parent run code first. By default, the run context object in a remote run does not have credentials to submit child runs. Use a service principal or managed identity credentials to sign in. For more information on authenticating, see [set up authentication](../how-to-setup-authentication.md).
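+
+For example, a minimal sketch of signing in with a service principal from inside the parent run script; the tenant, client, workspace details, and the environment variable that holds the secret are placeholders:
+
+```python
+import os
+from azureml.core import Workspace
+from azureml.core.authentication import ServicePrincipalAuthentication
+
+# Placeholders: supply your own tenant, client, and workspace details.
+svc_pr = ServicePrincipalAuthentication(
+    tenant_id="<tenant-id>",
+    service_principal_id="<client-id>",
+    service_principal_password=os.environ["AZUREML_SP_SECRET"])
+
+ws = Workspace.get(name="<workspace-name>",
+                   subscription_id="<subscription-id>",
+                   resource_group="<resource-group>",
+                   auth=svc_pr)
+```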
+
+The following code:
+
+- Retrieves a compute resource named `"gpu-cluster"` from the workspace `ws`
+- Iterates over different argument values to be passed to the children `ScriptRunConfig` objects
+- Creates and submits a new child run, using the custom compute resource and argument
+- Blocks until all of the child runs complete
+
+```python
+# parent.py
+# This script controls the launching of child scripts
+from azureml.core import Run, ScriptRunConfig
+
+compute_target = ws.compute_targets["gpu-cluster"]
+
+run = Run.get_context()
+
+child_args = ['Apple', 'Banana', 'Orange']
+for arg in child_args:
+ run.log('Status', f'Launching {arg}')
+ child_config = ScriptRunConfig(source_directory=".", script='child.py', arguments=['--fruit', arg], compute_target=compute_target)
+ # Starts the run asynchronously
+ run.submit_child(child_config)
+
+# Experiment will "complete" successfully at this point.
+# Instead of returning immediately, block until child runs complete
+
+for child in run.get_children():
+ child.wait_for_completion()
+```
+
+To create many child runs with identical configurations, arguments, and inputs efficiently, use the [`create_children()`](/python/api/azureml-core/azureml.core.run.run#create-children-count-none--tag-key-none--tag-values-none-) method. Because each creation results in a network call, creating a batch of runs is more efficient than creating them one by one.
+
+Within a child run, you can view the parent run ID:
+
+```python
+## In child run script
+child_run = Run.get_context()
+child_run.parent.id
+```
+
+### Query child runs
+
+To query the child runs of a specific parent, use the [`get_children()`](/python/api/azureml-core/azureml.core.run%28class%29#get-children-recursive-false--tags-none--properties-none--type-none--status-nonerehydrate-runs-true-) method.
+The ``recursive = True`` argument allows you to query a nested tree of children and grandchildren.
+
+```python
+print(parent_run.get_children())
+```
+
+### Log to parent or root run
+
+You can use the `Run.parent` field to access the run that launched the current child run. A common use-case for using `Run.parent` is to combine log results in a single place. Child runs execute asynchronously and there's no guarantee of ordering or synchronization beyond the ability of the parent to wait for its child runs to complete.
+
+```python
+# in child (or even grandchild) run
+
+def root_run(self : Run) -> Run :
+ if self.parent is None :
+ return self
+ return root_run(self.parent)
+
+current_child_run = Run.get_context()
+root_run(current_child_run).log("MyMetric", f"Data from child run {current_child_run.id}")
+
+```
+
+## Monitor the run status by email notification
+
+1. In the [Azure portal](https://portal.azure.com/), in the left navigation bar, select the **Monitor** tab.
+
+1. Select **Diagnostic settings** and then select **+ Add diagnostic setting**.
+
+ ![Screenshot of diagnostic settings for email notification.](./media/how-to-track-monitor-analyze-runs/diagnostic-setting.png)
+
+1. In the Diagnostic Setting,
+ 1. under the **Category details**, select the **AmlRunStatusChangedEvent**.
+ 1. In the **Destination details**, select the **Send to Log Analytics workspace** and specify the **Subscription** and **Log Analytics workspace**.
+
+ > [!NOTE]
+ > The **Azure Log Analytics Workspace** is a different type of Azure Resource than the **Azure Machine Learning service Workspace**. If there are no options in that list, you can [create a Log Analytics Workspace](/azure/azure-monitor/logs/quick-create-workspace).
+
+ ![Screenshot of configuring the email notification.](./media/how-to-track-monitor-analyze-runs/log-location.png)
+
+1. In the **Logs** tab, add a **New alert rule**.
+
+ ![Screenshot of the new alert rule.](./media/how-to-track-monitor-analyze-runs/new-alert-rule.png)
+
+1. See [how to create and manage log alerts using Azure Monitor](/azure/azure-monitor/alerts/alerts-log).
+
+## Example notebooks
+
+The following notebooks demonstrate the concepts in this article:
+
+* To learn more about the logging APIs, see the [logging API notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/track-and-monitor-experiments/logging-api/logging-api.ipynb).
+
+* For more information about managing runs with the Azure Machine Learning SDK, see the [manage runs notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/track-and-monitor-experiments/manage-runs/manage-runs.ipynb).
+
+## Next steps
+
+* To learn how to log metrics for your experiments, see [Log metrics during training runs](../how-to-log-view-metrics.md).
+* To learn how to monitor resources and logs from Azure Machine Learning, see [Monitoring Azure Machine Learning](../monitor-azure-machine-learning.md).
machine-learning How To Tune Hyperparameters V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-tune-hyperparameters-v1.md
+
+ Title: Hyperparameter tuning a model (v1)
+
+description: Automate hyperparameter tuning for deep learning and machine learning models using Azure Machine Learning (v1).
+ Last updated : 05/02/2022
+# Hyperparameter tuning a model with Azure Machine Learning (v1)
+
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
+> * [v1](how-to-tune-hyperparameters-v1.md)
+> * [v2 (current version)](../how-to-tune-hyperparameters.md)
+
+
+Automate efficient hyperparameter tuning by using Azure Machine Learning (v1) [HyperDrive package](/python/api/azureml-train-core/azureml.train.hyperdrive). Learn how to complete the steps required to tune hyperparameters with the [Azure Machine Learning SDK](/python/api/overview/azure/ml/):
+
+1. Define the parameter search space
+1. Specify a primary metric to optimize
+1. Specify early termination policy for low-performing runs
+1. Create and assign resources
+1. Launch an experiment with the defined configuration
+1. Visualize the training runs
+1. Select the best configuration for your model
+
+## What is hyperparameter tuning?
+
+**Hyperparameters** are adjustable parameters that let you control the model training process. For example, with neural networks, you decide the number of hidden layers and the number of nodes in each layer. Model performance depends heavily on hyperparameters.
+
+ **Hyperparameter tuning**, also called **hyperparameter optimization**, is the process of finding the configuration of hyperparameters that results in the best performance. The process is typically computationally expensive and manual.
+
+Azure Machine Learning lets you automate hyperparameter tuning and run experiments in parallel to efficiently optimize hyperparameters.
++
+## Define the search space
+
+Tune hyperparameters by exploring the range of values defined for each hyperparameter.
+
+Hyperparameters can be discrete or continuous, and have a distribution of values described by a
+[parameter expression](/python/api/azureml-train-core/azureml.train.hyperdrive.parameter_expressions).
+
+### Discrete hyperparameters
+
+Discrete hyperparameters are specified as a `choice` among discrete values. `choice` can be:
+
+* one or more comma-separated values
+* a `range` object
+* any arbitrary `list` object
++
+```Python
+ {
+ "batch_size": choice(16, 32, 64, 128)
+ "number_of_hidden_layers": choice(range(1,5))
+ }
+```
+
+In this case, `batch_size` takes one of the values [16, 32, 64, 128] and `number_of_hidden_layers` takes one of the values [1, 2, 3, 4].
+
+The following advanced discrete hyperparameters can also be specified using a distribution:
+
+* `quniform(low, high, q)` - Returns a value like round(uniform(low, high) / q) * q
+* `qloguniform(low, high, q)` - Returns a value like round(exp(uniform(low, high)) / q) * q
+* `qnormal(mu, sigma, q)` - Returns a value like round(normal(mu, sigma) / q) * q
+* `qlognormal(mu, sigma, q)` - Returns a value like round(exp(normal(mu, sigma)) / q) * q
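+
+For example, a minimal sketch of a parameter space that uses these distributions (the parameter names and ranges are illustrative):
+
+```Python
+from azureml.train.hyperdrive.parameter_expressions import quniform, qnormal
+
+param_space = {
+    # values like round(uniform(20, 100) / 10) * 10, that is 20, 30, ..., 100
+    "num_estimators": quniform(20, 100, 10),
+    # values like round(normal(60, 10) / 5) * 5
+    "hidden_units": qnormal(60, 10, 5)
+}
+```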
+
+### Continuous hyperparameters
+
+Continuous hyperparameters are specified as a distribution over a continuous range of values:
+
+* `uniform(low, high)` - Returns a value uniformly distributed between low and high
+* `loguniform(low, high)` - Returns a value drawn according to exp(uniform(low, high)) so that the logarithm of the return value is uniformly distributed
+* `normal(mu, sigma)` - Returns a real value that's normally distributed with mean mu and standard deviation sigma
+* `lognormal(mu, sigma)` - Returns a value drawn according to exp(normal(mu, sigma)) so that the logarithm of the return value is normally distributed
+
+An example of a parameter space definition:
+
+```Python
+ {
+ "learning_rate": normal(10, 3),
+ "keep_probability": uniform(0.05, 0.1)
+ }
+```
+
+This code defines a search space with two parameters - `learning_rate` and `keep_probability`. `learning_rate` has a normal distribution with mean value 10 and a standard deviation of 3. `keep_probability` has a uniform distribution with a minimum value of 0.05 and a maximum value of 0.1.
+
+### Sampling the hyperparameter space
+
+Specify the parameter sampling method to use over the hyperparameter space. Azure Machine Learning supports the following methods:
+
+* Random sampling
+* Grid sampling
+* Bayesian sampling
+
+#### Random sampling
+
+[Random sampling](/python/api/azureml-train-core/azureml.train.hyperdrive.randomparametersampling) supports discrete and continuous hyperparameters. It supports early termination of low-performance runs. Some users do an initial search with random sampling and then refine the search space to improve results.
+
+In random sampling, hyperparameter values are randomly selected from the defined search space.
+
+```Python
+from azureml.train.hyperdrive import RandomParameterSampling
+from azureml.train.hyperdrive import normal, uniform, choice
+param_sampling = RandomParameterSampling( {
+ "learning_rate": normal(10, 3),
+ "keep_probability": uniform(0.05, 0.1),
+ "batch_size": choice(16, 32, 64, 128)
+ }
+)
+```
+
+#### Grid sampling
+
+[Grid sampling](/python/api/azureml-train-core/azureml.train.hyperdrive.gridparametersampling) supports discrete hyperparameters. Use grid sampling if your budget allows you to exhaustively search the defined search space. Grid sampling also supports early termination of low-performance runs.
+
+Grid sampling does a simple grid search over all possible values. Grid sampling can only be used with `choice` hyperparameters. For example, the following space has six samples:
+
+```Python
+from azureml.train.hyperdrive import GridParameterSampling
+from azureml.train.hyperdrive import choice
+param_sampling = GridParameterSampling( {
+ "num_hidden_layers": choice(1, 2, 3),
+ "batch_size": choice(16, 32)
+ }
+)
+```
+
+#### Bayesian sampling
+
+[Bayesian sampling](/python/api/azureml-train-core/azureml.train.hyperdrive.bayesianparametersampling) is based on the Bayesian optimization algorithm. It picks samples based on how previous samples did, so that new samples improve the primary metric.
+
+Bayesian sampling is recommended if you have enough budget to explore the hyperparameter space. For best results, we recommend a maximum number of runs greater than or equal to 20 times the number of hyperparameters being tuned.
+
+The number of concurrent runs has an impact on the effectiveness of the tuning process. A smaller number of concurrent runs may lead to better sampling convergence, since the smaller degree of parallelism increases the number of runs that benefit from previously completed runs.
+
+Bayesian sampling only supports `choice`, `uniform`, and `quniform` distributions over the search space.
+
+```Python
+from azureml.train.hyperdrive import BayesianParameterSampling
+from azureml.train.hyperdrive import uniform, choice
+param_sampling = BayesianParameterSampling( {
+ "learning_rate": uniform(0.05, 0.1),
+ "batch_size": choice(16, 32, 64, 128)
+ }
+)
+```
+++
+## <a name="specify-primary-metric-to-optimize"></a> Specify primary metric
+
+Specify the [primary metric](/python/api/azureml-train-core/azureml.train.hyperdrive.primarymetricgoal) you want hyperparameter tuning to optimize. Each training run is evaluated for the primary metric. The early termination policy uses the primary metric to identify low-performance runs.
+
+Specify the following attributes for your primary metric:
+
+* `primary_metric_name`: The name of the primary metric needs to exactly match the name of the metric logged by the training script
+* `primary_metric_goal`: It can be either `PrimaryMetricGoal.MAXIMIZE` or `PrimaryMetricGoal.MINIMIZE` and determines whether the primary metric will be maximized or minimized when evaluating the runs.
+
+```Python
+primary_metric_name="accuracy",
+primary_metric_goal=PrimaryMetricGoal.MAXIMIZE
+```
+
+This sample maximizes "accuracy".
+
+### <a name="log-metrics-for-hyperparameter-tuning"></a>Log metrics for hyperparameter tuning
+
+The training script for your model **must** log the primary metric during model training so that HyperDrive can access it for hyperparameter tuning.
+
+Log the primary metric in your training script with the following sample snippet:
+
+```Python
+from azureml.core.run import Run
+run_logger = Run.get_context()
+run_logger.log("accuracy", float(val_accuracy))
+```
+
+The training script calculates the `val_accuracy` and logs it as the primary metric "accuracy". Each time the metric is logged, it's received by the hyperparameter tuning service. It's up to you to determine the frequency of reporting.
+
+For more information on logging values in model training runs, see [Enable logging in Azure ML training runs](../how-to-log-view-metrics.md).
+
+## <a name="early-termination"></a> Specify early termination policy
+
+Automatically end poorly performing runs with an early termination policy. Early termination improves computational efficiency.
+
+You can configure the following parameters that control when a policy is applied:
+
+* `evaluation_interval`: the frequency of applying the policy. Each time the training script logs the primary metric counts as one interval. An `evaluation_interval` of 1 will apply the policy every time the training script reports the primary metric. An `evaluation_interval` of 2 will apply the policy every other time. If not specified, `evaluation_interval` is set to 1 by default.
+* `delay_evaluation`: delays the first policy evaluation for a specified number of intervals. This is an optional parameter that avoids premature termination of training runs by allowing all configurations to run for a minimum number of intervals. If specified, the policy applies every multiple of evaluation_interval that is greater than or equal to delay_evaluation.
+
+Azure Machine Learning supports the following early termination policies:
+* [Bandit policy](#bandit-policy)
+* [Median stopping policy](#median-stopping-policy)
+* [Truncation selection policy](#truncation-selection-policy)
+* [No termination policy](#no-termination-policy-default)
++
+### Bandit policy
+
+[Bandit policy](/python/api/azureml-train-core/azureml.train.hyperdrive.banditpolicy#definition) is based on slack factor/slack amount and evaluation interval. Bandit ends runs when the primary metric isn't within the specified slack factor/slack amount of the most successful run.
+
+> [!NOTE]
+> Bayesian sampling does not support early termination. When using Bayesian sampling, set `early_termination_policy = None`.
+
+Specify the following configuration parameters:
+
+* `slack_factor` or `slack_amount`: the slack allowed with respect to the best performing training run. `slack_factor` specifies the allowable slack as a ratio. `slack_amount` specifies the allowable slack as an absolute amount, instead of a ratio.
+
+ For example, consider a Bandit policy applied at interval 10. Assume that the best performing run at interval 10 reported a primary metric of 0.8 with a goal to maximize the primary metric. If the policy specifies a `slack_factor` of 0.2, any training runs whose best metric at interval 10 is less than 0.66 (0.8/(1+`slack_factor`)) will be terminated.
+* `evaluation_interval`: (optional) the frequency for applying the policy
+* `delay_evaluation`: (optional) delays the first policy evaluation for a specified number of intervals
+++
+```Python
+from azureml.train.hyperdrive import BanditPolicy
+early_termination_policy = BanditPolicy(slack_factor = 0.1, evaluation_interval=1, delay_evaluation=5)
+```
+
+In this example, the early termination policy is applied at every interval when metrics are reported, starting at evaluation interval 5. Any run whose best metric is less than 1/(1+0.1), or approximately 91%, of the best performing run's metric will be terminated.
+
+### Median stopping policy
+
+[Median stopping](/python/api/azureml-train-core/azureml.train.hyperdrive.medianstoppingpolicy) is an early termination policy based on running averages of primary metrics reported by the runs. This policy computes running averages across all training runs and stops runs whose primary metric value is worse than the median of the averages.
+
+This policy takes the following configuration parameters:
+* `evaluation_interval`: the frequency for applying the policy (optional parameter).
+* `delay_evaluation`: delays the first policy evaluation for a specified number of intervals (optional parameter).
++
+```Python
+from azureml.train.hyperdrive import MedianStoppingPolicy
+early_termination_policy = MedianStoppingPolicy(evaluation_interval=1, delay_evaluation=5)
+```
+
+In this example, the early termination policy is applied at every interval starting at evaluation interval 5. A run is stopped at interval 5 if its best primary metric is worse than the median of the running averages over intervals 1:5 across all training runs.
+
+### Truncation selection policy
+
+[Truncation selection](/python/api/azureml-train-core/azureml.train.hyperdrive.truncationselectionpolicy) cancels a percentage of lowest performing runs at each evaluation interval. Runs are compared using the primary metric.
+
+This policy takes the following configuration parameters:
+
+* `truncation_percentage`: the percentage of lowest performing runs to terminate at each evaluation interval. An integer value between 1 and 99.
+* `evaluation_interval`: (optional) the frequency for applying the policy
+* `delay_evaluation`: (optional) delays the first policy evaluation for a specified number of intervals
+* `exclude_finished_jobs`: specifies whether to exclude finished jobs when applying the policy
++
+```Python
+from azureml.train.hyperdrive import TruncationSelectionPolicy
+early_termination_policy = TruncationSelectionPolicy(evaluation_interval=1, truncation_percentage=20, delay_evaluation=5, exclude_finished_jobs=True)
+```
+
+In this example, the early termination policy is applied at every interval starting at evaluation interval 5. A run terminates at interval 5 if its performance at that interval is in the lowest 20% of all runs at interval 5. Finished jobs are excluded when the policy is applied.
+
+### No termination policy (default)
+
+If no policy is specified, the hyperparameter tuning service will let all training runs execute to completion.
+
+```Python
+policy=None
+```
+
+### Picking an early termination policy
+
+* For a conservative policy that provides savings without terminating promising jobs, consider a Median Stopping Policy with `evaluation_interval` 1 and `delay_evaluation` 5. These conservative settings can provide approximately 25%-35% savings with no loss on the primary metric (based on our evaluation data).
+* For more aggressive savings, use Bandit Policy with a smaller allowable slack or Truncation Selection Policy with a larger truncation percentage.
+
+## Create and assign resources
+
+Control your resource budget by specifying the maximum number of training runs.
+
+* `max_total_runs`: Maximum number of training runs. Must be an integer between 1 and 1000.
+* `max_duration_minutes`: (optional) Maximum duration, in minutes, of the hyperparameter tuning experiment. Runs after this duration are canceled.
+
+>[!NOTE]
+>If both `max_total_runs` and `max_duration_minutes` are specified, the hyperparameter tuning experiment terminates when the first of these two thresholds is reached.
+
+Additionally, specify the maximum number of training runs to run concurrently during your hyperparameter tuning search.
+
+* `max_concurrent_runs`: (optional) Maximum number of runs that can run concurrently. If not specified, all runs launch in parallel. If specified, must be an integer between 1 and 100.
+
+>[!NOTE]
+>The number of concurrent runs is gated on the resources available in the specified compute target. Ensure that the compute target has the available resources for the desired concurrency.
+
+```Python
+max_total_runs=20,
+max_concurrent_runs=4
+```
+
+This code configures the hyperparameter tuning experiment to use a maximum of 20 total runs, running four configurations at a time.
+
+## Configure hyperparameter tuning experiment
+
+To [configure your hyperparameter tuning](/python/api/azureml-train-core/azureml.train.hyperdrive.hyperdriverunconfig) experiment, provide the following:
+* The defined hyperparameter search space
+* Your early termination policy
+* The primary metric
+* Resource allocation settings
+* ScriptRunConfig `script_run_config`
+
+The `ScriptRunConfig` specifies the training script that runs with the sampled hyperparameters, the resources per job (single or multi-node), and the compute target to use.
+
+> [!NOTE]
+>The compute target used in `script_run_config` must have enough resources to satisfy your concurrency level. For more information on ScriptRunConfig, see [Configure training runs](../how-to-set-up-training-targets.md).
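+
+If you don't already have a `script_run_config`, a minimal sketch of creating one might look like the following; the script name, conda specification file, and compute target name are illustrative:
+
+```Python
+from azureml.core import Environment, ScriptRunConfig
+
+# Illustrative names; use your own training script, conda specification, and compute target.
+pytorch_env = Environment.from_conda_specification(name="pytorch-env", file_path="conda_dependencies.yml")
+
+script_run_config = ScriptRunConfig(source_directory=".",
+                                    script="pytorch_train.py",
+                                    compute_target=ws.compute_targets["gpu-cluster"],
+                                    environment=pytorch_env)
+```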
+
+Configure your hyperparameter tuning experiment:
+
+```Python
+from azureml.train.hyperdrive import HyperDriveConfig
+from azureml.train.hyperdrive import RandomParameterSampling, BanditPolicy, uniform, PrimaryMetricGoal
+
+param_sampling = RandomParameterSampling( {
+ 'learning_rate': uniform(0.0005, 0.005),
+ 'momentum': uniform(0.9, 0.99)
+ }
+)
+
+early_termination_policy = BanditPolicy(slack_factor=0.15, evaluation_interval=1, delay_evaluation=10)
+
+hd_config = HyperDriveConfig(run_config=script_run_config,
+ hyperparameter_sampling=param_sampling,
+ policy=early_termination_policy,
+ primary_metric_name="accuracy",
+ primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,
+ max_total_runs=100,
+ max_concurrent_runs=4)
+```
+
+The `HyperDriveConfig` sets the parameters passed to the `ScriptRunConfig script_run_config`. The `script_run_config`, in turn, passes parameters to the training script. The above code snippet is taken from the sample notebook [Train, hyperparameter tune, and deploy with PyTorch](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/ml-frameworks/pytorch/train-hyperparameter-tune-deploy-with-pytorch). In this sample, the `learning_rate` and `momentum` parameters will be tuned. Early stopping of runs will be determined by a `BanditPolicy`, which stops a run whose primary metric falls outside the `slack_factor` (see [BanditPolicy class reference](/python/api/azureml-train-core/azureml.train.hyperdrive.banditpolicy)).
+
+The following code from the sample shows how the tuned hyperparameter values are received, parsed, and passed to the training script's `fine_tune_model` function:
+
+```python
+# from pytorch_train.py
+def main():
+ print("Torch version:", torch.__version__)
+
+ # get command-line arguments
+ parser = argparse.ArgumentParser()
+ parser.add_argument('--num_epochs', type=int, default=25,
+ help='number of epochs to train')
+ parser.add_argument('--output_dir', type=str, help='output directory')
+ parser.add_argument('--learning_rate', type=float,
+ default=0.001, help='learning rate')
+ parser.add_argument('--momentum', type=float, default=0.9, help='momentum')
+ args = parser.parse_args()
+
+ data_dir = download_data()
+ print("data directory is: " + data_dir)
+ model = fine_tune_model(args.num_epochs, data_dir,
+ args.learning_rate, args.momentum)
+ os.makedirs(args.output_dir, exist_ok=True)
+ torch.save(model, os.path.join(args.output_dir, 'model.pt'))
+```
+
+> [!Important]
+> Every hyperparameter run restarts the training from scratch, including rebuilding the model and _all the data loaders_. You can minimize
+> this cost by using an Azure Machine Learning pipeline or manual process to do as much data preparation as possible prior to your training runs.
+
+## Submit hyperparameter tuning experiment
+
+After you define your hyperparameter tuning configuration, [submit the experiment](/python/api/azureml-core/azureml.core.experiment%28class%29#submit-config--tags-none-kwargs-):
+
+```Python
+from azureml.core.experiment import Experiment
+experiment = Experiment(workspace, experiment_name)
+hyperdrive_run = experiment.submit(hd_config)
+```
+
+## Warm start hyperparameter tuning (optional)
+
+Finding the best hyperparameter values for your model can be an iterative process. You can reuse knowledge from up to five previous runs to accelerate hyperparameter tuning.
+
+Warm starting is handled differently depending on the sampling method:
+- **Bayesian sampling**: Trials from the previous run are used as prior knowledge to pick new samples, and to improve the primary metric.
+- **Random sampling** or **grid sampling**: Early termination uses knowledge from previous runs to determine poorly performing runs.
+
+Specify the list of parent runs you want to warm start from.
+
+```Python
+from azureml.train.hyperdrive import HyperDriveRun
+
+warmstart_parent_1 = HyperDriveRun(experiment, "warmstart_parent_run_ID_1")
+warmstart_parent_2 = HyperDriveRun(experiment, "warmstart_parent_run_ID_2")
+warmstart_parents_to_resume_from = [warmstart_parent_1, warmstart_parent_2]
+```
+
+If a hyperparameter tuning experiment is canceled, you can resume training runs from the last checkpoint. However, your training script must handle checkpoint logic.
+
+The training run must use the same hyperparameter configuration and mount the outputs folders. The training script must accept the `resume-from` argument, which contains the checkpoint or model files from which to resume the training run. You can resume individual training runs using the following snippet:
+
+```Python
+from azureml.core.run import Run
+
+resume_child_run_1 = Run(experiment, "resume_child_run_ID_1")
+resume_child_run_2 = Run(experiment, "resume_child_run_ID_2")
+child_runs_to_resume = [resume_child_run_1, resume_child_run_2]
+```
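+
+As a minimal, hypothetical sketch, a training script might handle the `resume-from` argument like this; the checkpoint file name and format are illustrative:
+
+```Python
+# in your training script
+import argparse
+import os
+
+import torch
+
+parser = argparse.ArgumentParser()
+parser.add_argument("--resume-from", type=str, default=None,
+                    help="folder containing checkpoint files from a previous run")
+args = parser.parse_args()
+
+model = None
+if args.resume_from:
+    # Illustrative checkpoint file name; match whatever your script saved.
+    checkpoint_path = os.path.join(args.resume_from, "model_checkpoint.pt")
+    model = torch.load(checkpoint_path)
+```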
+
+You can configure your hyperparameter tuning experiment to warm start from a previous experiment or resume individual training runs using the optional parameters `resume_from` and `resume_child_runs` in the config:
+
+```Python
+from azureml.train.hyperdrive import HyperDriveConfig
+
+hd_config = HyperDriveConfig(run_config=script_run_config,
+ hyperparameter_sampling=param_sampling,
+ policy=early_termination_policy,
+ resume_from=warmstart_parents_to_resume_from,
+ resume_child_runs=child_runs_to_resume,
+ primary_metric_name="accuracy",
+ primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,
+ max_total_runs=100,
+ max_concurrent_runs=4)
+```
+
+## Visualize hyperparameter tuning runs
+
+You can visualize your hyperparameter tuning runs in the Azure Machine Learning studio, or you can use a notebook widget.
+
+### Studio
+
+You can visualize all of your hyperparameter tuning runs in the [Azure Machine Learning studio](https://ml.azure.com). For more information on how to view an experiment in the portal, see [View run records in the studio](../how-to-log-view-metrics.md#view-the-experiment-in-the-web-portal).
+
+- **Metrics chart**: This visualization tracks the metrics logged for each hyperdrive child run over the duration of hyperparameter tuning. Each line represents a child run, and each point measures the primary metric value at that iteration of runtime.
+
+ :::image type="content" source="../media/how-to-tune-hyperparameters/hyperparameter-tuning-metrics.png" alt-text="Hyperparameter tuning metrics chart":::
+
+- **Parallel Coordinates Chart**: This visualization shows the correlation between primary metric performance and individual hyperparameter values. The chart is interactive: you can move the axes (click and drag the axis label), and highlight values across a single axis (click and drag vertically along a single axis to highlight a range of desired values). The parallel coordinates chart includes an axis on the rightmost portion of the chart that plots the best metric value corresponding to the hyperparameters set for that run instance. This axis is provided in order to project the chart gradient legend onto the data in a more readable fashion.
+
+ :::image type="content" source="../media/how-to-tune-hyperparameters/hyperparameter-tuning-parallel-coordinates.png" alt-text="Hyperparameter tuning parallel coordinates chart":::
+
+- **2-Dimensional Scatter Chart**: This visualization shows the correlation between any two individual hyperparameters along with their associated primary metric value.
+
+ :::image type="content" source="../media/how-to-tune-hyperparameters/hyperparameter-tuning-2-dimensional-scatter.png" alt-text="Hyperparameter tuning 2-dimensional scatter chart":::
+
+- **3-Dimensional Scatter Chart**: This visualization is the same as 2D but allows for three hyperparameter dimensions of correlation with the primary metric value. You can also click and drag to reorient the chart to view different correlations in 3D space.
+
+ :::image type="content" source="../media/how-to-tune-hyperparameters/hyperparameter-tuning-3-dimensional-scatter.png" alt-text="Hyperparameter tuning 3-dimensional scatter chart":::
+
+### Notebook widget
+
+Use the [Notebook widget](/python/api/azureml-widgets/azureml.widgets.rundetails) to visualize the progress of your training runs. The following snippet visualizes all your hyperparameter tuning runs in one place in a Jupyter notebook:
+
+```Python
+from azureml.widgets import RunDetails
+RunDetails(hyperdrive_run).show()
+```
+
+This code displays a table with details about the training runs for each of the hyperparameter configurations.
++
+You can also visualize the performance of each of the runs as training progresses.
+
+## Find the best model
+
+Once all of the hyperparameter tuning runs have completed, identify the best performing configuration and hyperparameter values:
+
+```Python
+best_run = hyperdrive_run.get_best_run_by_primary_metric()
+best_run_metrics = best_run.get_metrics()
+parameter_values = best_run.get_details()['runDefinition']['arguments']
+
+print('Best Run Id: ', best_run.id)
+print('\n Accuracy:', best_run_metrics['accuracy'])
+print('\n learning rate:',parameter_values[3])
+print('\n keep probability:',parameter_values[5])
+print('\n batch size:',parameter_values[7])
+```
+
+## Sample notebook
+
+Refer to train-hyperparameter-* notebooks in this folder:
+* [how-to-use-azureml/ml-frameworks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/ml-frameworks)
++
+## Next steps
+* [Track an experiment](../how-to-log-view-metrics.md)
+* [Deploy a trained model](../how-to-deploy-and-where.md)
machine-learning How To Use Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-environments.md
+
+ Title: Use software environments CLI v1
+
+description: Create and manage environments for model training and deployment with CLI v1. Manage Python packages and other settings for the environment.
+ Last updated : 04/19/2022
+ms.devlang: azurecli
++
+# Create & use software environments in Azure Machine Learning with CLI v1
++
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
+> * [v1](how-to-use-environments.md)
+> * [v2 (current version)](../how-to-manage-environments-v2.md)
+
+In this article, learn how to create and manage Azure Machine Learning [environments](/python/api/azureml-core/azureml.core.environment.environment) using CLI v1. Use the environments to track and reproduce your projects' software dependencies as they evolve. The [Azure Machine Learning CLI](reference-azure-machine-learning-cli.md) v1 mirrors most of the functionality of the Python SDK v1. You can use it to create and manage environments.
+
+Software dependency management is a common task for developers. You want to ensure that builds are reproducible without extensive manual software configuration. The Azure Machine Learning `Environment` class accounts for local development solutions such as pip and Conda and distributed cloud development through Docker capabilities.
+
+For a high-level overview of how environments work in Azure Machine Learning, see [What are ML environments?](../concept-environments.md) For information about managing environments in the Azure ML studio, see [Manage environments in the studio](../how-to-manage-environments-in-studio.md). For information about configuring development environments, see [Set up a Python development environment for Azure ML](../how-to-configure-environment.md).
+
+## Prerequisites
+
+* An [Azure Machine Learning workspace](../how-to-manage-workspace.md)
++
+## Scaffold an environment
+
+The following command scaffolds the files for a default environment definition in the specified directory. These files are JSON files. They work like the corresponding class in the SDK. You can use the files to create new environments that have custom settings.
+
+```azurecli-interactive
+az ml environment scaffold -n myenv -d myenvdir
+```
+
+## Register an environment
+
+Run the following command to register an environment from a specified directory:
+
+```azurecli-interactive
+az ml environment register -d myenvdir
+```
+
+## List environments
+
+Run the following command to list all registered environments:
+
+```azurecli-interactive
+az ml environment list
+```
+
+## Download an environment
+
+To download a registered environment, use the following command:
+
+```azurecli-interactive
+az ml environment download -n myenv -d downloaddir
+```
+
+## Next steps
+
+* After you have a trained model, learn [how and where to deploy models](../how-to-deploy-and-where.md).
+* View the [`Environment` class SDK reference](/python/api/azureml-core/azureml.core.environment%28class%29).
machine-learning How To Use Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-managed-identities.md
+
+ Title: Use managed identities for access control (v1)
+
+description: Learn how to use CLI and SDK v1 with managed identities to control access to Azure resources from Azure Machine Learning workspace.
+ Last updated : 05/06/2021
+# Use Managed identities with Azure Machine Learning CLI v1
++
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
+> * [v1](how-to-use-managed-identities.md)
+> * [v2 (current version)](../how-to-use-managed-identities.md)
+
+[Managed identities](/active-directory/managed-identities-azure-resources/overview) allow you to configure your workspace with the *minimum required permissions to access resources*.
+
+When configuring an Azure Machine Learning workspace in a trustworthy manner, it's important to ensure that the different services associated with the workspace have the correct level of access. For example, during a machine learning workflow, the workspace needs access to Azure Container Registry (ACR) for Docker images, and to storage accounts for training data.
+
+Furthermore, managed identities allow fine-grained control over permissions. For example, you can grant or revoke access from specific compute resources to a specific ACR.
+
+In this article, you'll learn how to use managed identities to:
+
+ * Configure and use ACR for your Azure Machine Learning workspace without having to enable admin user access to ACR.
+ * Access a private ACR external to your workspace, to pull base images for training or inference.
+ * Create workspace with user-assigned managed identity to access associated resources.
+
+## Prerequisites
+
+- An Azure Machine Learning workspace. For more information, see [Create an Azure Machine Learning workspace](../how-to-manage-workspace.md).
+- The [Azure CLI extension for Machine Learning service](reference-azure-machine-learning-cli.md)
+- The [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro).
+- To assign roles, the login for your Azure subscription must have the [Managed Identity Operator](/role-based-access-control/built-in-roles#managed-identity-operator) role, or another role that grants the required actions (such as __Owner__).
+- You must be familiar with creating and working with [Managed Identities](/active-directory/managed-identities-azure-resources/overview).
+
+## Configure managed identities
+
+In some situations, it's necessary to disallow admin user access to Azure Container Registry. For example, the ACR may be shared and you need to disallow admin access by other users. Or, creating ACR with admin user enabled is disallowed by a subscription level policy.
+
+> [!IMPORTANT]
+> When using Azure Machine Learning for inference on Azure Container Instance (ACI), admin user access on ACR is __required__. Do not disable it if you plan on deploying models to ACI for inference.
+
+When you create ACR without enabling admin user access, managed identities are used to access the ACR to build and pull Docker images.
+
+You can bring your own ACR with admin user disabled when you create the workspace. Alternatively, let Azure Machine Learning create workspace ACR and disable admin user afterwards.
+
+### Bring your own ACR
+
+If the ACR admin user is disallowed by subscription policy, you should first create the ACR without the admin user, and then associate it with the workspace. Also, if you have an existing ACR with the admin user disabled, you can attach it to the workspace.
+
+[Create ACR from Azure CLI](/container-registry/container-registry-get-started-azure-cli) without setting the `--admin-enabled` argument, or from the Azure portal without enabling the admin user. Then, when creating the Azure Machine Learning workspace, specify the Azure resource ID of the ACR. The following example demonstrates creating a new Azure ML workspace that uses an existing ACR:
+
+> [!TIP]
+> To get the value for the `--container-registry` parameter, use the [az acr show](/cli/azure/acr#az-acr-show) command to show information for your ACR. The `id` field contains the resource ID for your ACR.
+
+```azurecli-interactive
+az ml workspace create -w <workspace name> \
+-g <workspace resource group> \
+-l <region> \
+--container-registry /subscriptions/<subscription id>/resourceGroups/<acr resource group>/providers/Microsoft.ContainerRegistry/registries/<acr name>
+```
+
+### Let Azure Machine Learning service create workspace ACR
+
+If you do not bring your own ACR, the Azure Machine Learning service will create one for you when you perform an operation that needs one, for example, when you submit a training run to Machine Learning Compute, build an environment, or deploy a web service endpoint. The ACR created by the workspace will have the admin user enabled, and you need to disable the admin user manually.
++
+1. Create a new workspace
++
+ ```azurecli-interactive
+ az ml workspace create -w <my workspace> -g <my resource group>
+ ```
+
+1. Perform an action that requires ACR. For example, the [tutorial on training a model](../tutorial-train-deploy-notebook.md).
+
+1. Get the name of the ACR that was created:
+
+ ```azurecli-interactive
+ az ml workspace show -w <my workspace> \
+ -g <my resource group> \
+ --query containerRegistry
+ ```
+
+ This command returns a value similar to the following text. You only want the last portion of the text, which is the ACR instance name:
+
+ ```output
+ /subscriptions/<subscription id>/resourceGroups/<my resource group>/providers/Microsoft.ContainerRegistry/registries/<ACR instance name>
+ ```
+
+1. Update the ACR to disable the admin user:
+
+ ```azurecli-interactive
+ az acr update --name <ACR instance name> --admin-enabled false
+ ```
+
+### Create compute with managed identity to access Docker images for training
+
+To access the workspace ACR, create a machine learning compute cluster with system-assigned managed identity enabled. You can enable the identity from the Azure portal or studio when creating the compute, or from the Azure CLI by using the command below. For more information, see [using managed identity with compute clusters](how-to-create-attach-compute-cluster.md#set-up-managed-identity).
+
+# [Python](#tab/python)
+
+When creating a compute cluster with the [AmlComputeProvisioningConfiguration](/python/api/azureml-core/azureml.core.compute.amlcompute.amlcomputeprovisioningconfiguration), use the `identity_type` parameter to set the managed identity type.
+
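+The following is a minimal sketch, assuming an existing `Workspace` object named `ws` and a hypothetical cluster name `cpu-cluster`; it enables a system-assigned managed identity through the `identity_type` parameter:
+
+```python
+from azureml.core.compute import AmlCompute, ComputeTarget
+
+# Sketch only: the cluster name and VM size are placeholder values.
+provisioning_config = AmlCompute.provisioning_configuration(
+    vm_size="STANDARD_D3_V2",
+    max_nodes=4,
+    identity_type="SystemAssigned"  # or "UserAssigned" together with identity_id=[...]
+)
+
+compute_target = ComputeTarget.create(ws, "cpu-cluster", provisioning_config)
+compute_target.wait_for_completion(show_output=True)
+```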
+# [Azure CLI](#tab/azure-cli)
++
+```azurecli-interactive
+az ml computetarget create amlcompute --name <cluster name> -w <workspace> -g <resource group> --vm-size <vm sku> --assign-identity '[system]'
+```
+
+# [Portal](#tab/azure-portal)
+
+For information on configuring managed identity when creating a compute cluster in studio, see [Set up managed identity](../how-to-create-attach-compute-studio.md#managed-identity).
+++
+The managed identity is automatically granted the ACRPull role on the workspace ACR to enable pulling Docker images for training.
+
+> [!NOTE]
+> If you create compute first, before workspace ACR has been created, you have to assign the ACRPull role manually.
+
+## Access base images from private ACR
+
+By default, Azure Machine Learning uses Docker base images that come from a public repository managed by Microsoft. It then builds your training or inference environment on those images. For more information, see [What are ML environments?](../concept-environments.md).
+
+To use a custom base image internal to your enterprise, you can use managed identities to access your private ACR. There are two use cases:
+
+ * Use base image for training as is.
+ * Build Azure Machine Learning managed image with custom image as a base.
+
+### Pull Docker base image to machine learning compute cluster for training as is
+
+Create machine learning compute cluster with system-assigned managed identity enabled as described earlier. Then, determine the principal ID of the managed identity.
++
+```azurecli-interactive
+az ml computetarget amlcompute identity show --name <cluster name> -w <workspace> -g <resource group>
+```
+
+Optionally, you can update the compute cluster to assign a user-assigned managed identity:
++
+```azurecli-interactive
+az ml computetarget amlcompute identity assign --name <cluster name> \
+-w <workspace> -g <resource group> --identities <my-identity-id>
+```
+
+To allow the compute cluster to pull the base images, grant the managed identity the ACRPull role on the private ACR:
+
+```azurecli-interactive
+az role assignment create --assignee <principal ID> \
+--role acrpull \
+--scope "/subscriptions/<subscription ID>/resourceGroups/<private ACR resource group>/providers/Microsoft.ContainerRegistry/registries/<private ACR name>"
+```
+
+Finally, when submitting a training run, specify the base image location in the [environment definition](../how-to-use-environments.md#use-existing-environments).
++
+```python
+from azureml.core import Environment
+env = Environment(name="private-acr")
+env.docker.base_image = "<ACR name>.azurecr.io/<base image repository>/<base image version>"
+env.python.user_managed_dependencies = True
+```
+
+> [!IMPORTANT]
+> To ensure that the base image is pulled directly to the compute resource, set `user_managed_dependencies = True` and do not specify a Dockerfile. Otherwise Azure Machine Learning service will attempt to build a new Docker image and fail, because only the compute cluster has access to pull the base image from ACR.
+
+### Build Azure Machine Learning managed environment into base image from private ACR for training or inference
++
+In this scenario, Azure Machine Learning service builds the training or inference environment on top of a base image you supply from a private ACR. Because the image build task happens on the workspace ACR using ACR Tasks, you must perform more steps to allow access.
+
+1. Create __user-assigned managed identity__ and grant the identity ACRPull access to the __private ACR__.
+1. Grant the workspace __system-assigned managed identity__ a Managed Identity Operator role on the __user-assigned managed identity__ from the previous step. This role allows the workspace to assign the user-assigned managed identity to ACR Task for building the managed environment.
+
+ 1. Obtain the principal ID of workspace system-assigned managed identity:
+
+ ```azurecli-interactive
+ az ml workspace show -w <workspace name> -g <resource group> --query identityPrincipalId
+ ```
+
+ 1. Grant the Managed Identity Operator role:
+
+ ```azurecli-interactive
+ az role assignment create --assignee <principal ID> --role managedidentityoperator --scope <user-assigned managed identity resource ID>
+ ```
+
+ The user-assigned managed identity resource ID is the Azure resource ID of the user-assigned identity, in the format `/subscriptions/<subscription ID>/resourceGroups/<resource group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<user-assigned managed identity name>`.
+
+1. Specify the external ACR and client ID of the __user-assigned managed identity__ in workspace connections by using [Workspace.set_connection method](/python/api/azureml-core/azureml.core.workspace.workspace#set-connection-name--category--target--authtype--value-):
+
+ [!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
+
+ ```python
+ workspace.set_connection(
+ name="privateAcr",
+ category="ACR",
+ target = "<acr url>",
+ authType = "RegistryConnection",
+ value={"ResourceId": "<user-assigned managed identity resource id>", "ClientId": "<user-assigned managed identity client ID>"})
+ ```
+
+1. Once the configuration is complete, you can use the base images from private ACR when building environments for training or inference. The following code snippet demonstrates how to specify the base image ACR and image name in an environment definition:
+
+ [!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
+
+ ```python
+ from azureml.core import Environment
+
+ env = Environment(name="my-env")
+ env.docker.base_image = "<acr url>/my-repo/my-image:latest"
+ ```
+
+ Optionally, you can specify the managed identity resource URL and client ID in the environment definition itself by using [RegistryIdentity](/python/api/azureml-core/azureml.core.container_registry.registryidentity). If you use registry identity explicitly, it overrides any workspace connections specified earlier:
+
+ [!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
+
+ ```python
+ from azureml.core.container_registry import RegistryIdentity
+
+ identity = RegistryIdentity()
+ identity.resource_id= "<user-assigned managed identity resource ID>"
+ identity.client_id="<user-assigned managed identity client ID>"
+ env.docker.base_image_registry.registry_identity=identity
+ env.docker.base_image = "my-acr.azurecr.io/my-repo/my-image:latest"
+ ```
+
+## Use Docker images for inference
+
+Once you've configured ACR without the admin user as described earlier, you can access Docker images for inference without admin keys from your Azure Kubernetes Service (AKS) cluster. When you create or attach AKS to the workspace, the cluster's service principal is automatically assigned ACRPull access to the workspace ACR.
+
+> [!NOTE]
+> If you bring your own AKS cluster, the cluster must have service principal enabled instead of managed identity.
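+As an illustration only, the following SDK v1 sketch attaches an existing AKS cluster to the workspace; the cluster name, resource group, and `ws` workspace object are placeholder assumptions:
+
+```python
+from azureml.core.compute import AksCompute, ComputeTarget
+
+# Sketch only: assumes an existing Workspace object `ws` and an AKS cluster
+# named "my-aks" in resource group "my-rg". Attaching the cluster grants its
+# service principal ACRPull access to the workspace ACR.
+attach_config = AksCompute.attach_configuration(resource_group="my-rg",
+                                                cluster_name="my-aks")
+aks_target = ComputeTarget.attach(ws, "my-aks-compute", attach_config)
+aks_target.wait_for_completion(show_output=True)
+```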
+
+## Create workspace with user-assigned managed identity
+
+When creating a workspace, you can bring your own [user-assigned managed identity](/active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-cli) that will be used to access the associated resources: ACR, KeyVault, Storage, and App Insights.
+
+> [!IMPORTANT]
+> When creating workspace with user-assigned managed identity, you must create the associated resources yourself, and grant the managed identity roles on those resources. Use the [role assignment ARM template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices/machine-learning-dependencies-role-assignment) to make the assignments.
+
+Use Azure CLI or Python SDK to create the workspace. When using the CLI, specify the ID using the `--primary-user-assigned-identity` parameter. When using the SDK, use `primary_user_assigned_identity`. The following are examples of using the Azure CLI and Python to create a new workspace using these parameters:
+
+__Azure CLI__
++
+```azurecli-interactive
+az ml workspace create -w <workspace name> -g <resource group> --primary-user-assigned-identity <managed identity ARM ID>
+```
+
+__Python__
++
+```python
+from azureml.core import Workspace
+
+ws = Workspace.create(name="workspace name",
+ subscription_id="subscription id",
+ resource_group="resource group name",
+ primary_user_assigned_identity="managed identity ARM ID")
+```
+
+You can also use [an ARM template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices/) to create a workspace with user-assigned managed identity.
+
+For a workspace with [customer-managed keys for encryption](../concept-data-encryption.md), you can pass in a user-assigned managed identity to authenticate from storage to Key Vault. Use the argument __user-assigned-identity-for-cmk-encryption__ (CLI) or __user_assigned_identity_for_cmk_encryption__ (SDK) to pass in the managed identity. This managed identity can be the same as, or different from, the workspace primary user-assigned managed identity.
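+For example, the following Python SDK sketch (all placeholder values are assumptions) passes a managed identity for customer-managed key encryption while creating the workspace:
+
+```python
+from azureml.core import Workspace
+
+# Sketch only: the IDs and key URI below are placeholders.
+ws = Workspace.create(name="<workspace name>",
+                      subscription_id="<subscription id>",
+                      resource_group="<resource group>",
+                      primary_user_assigned_identity="<primary managed identity ARM ID>",
+                      cmk_keyvault="<key vault ARM ID>",
+                      resource_cmk_uri="<key URI>",
+                      user_assigned_identity_for_cmk_encryption="<managed identity ARM ID>")
+```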
+
+## Next steps
+
+* Learn more about [enterprise security in Azure Machine Learning](../concept-enterprise-security.md)
+* Learn about [identity-based data access](../how-to-identity-based-data-access.md)
+* Learn about [managed identities on compute cluster](how-to-create-attach-compute-cluster.md).
machine-learning How To Use Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-mlflow.md
+
+ Title: MLflow Tracking for models
+
+description: Set up MLflow Tracking with Azure Machine Learning to log metrics and artifacts from ML models.
+++++ Last updated : 10/21/2021++++
+# Track ML models with MLflow and Azure Machine Learning
+
+> [!div class="op_single_selector" title1="Select the version of the Azure Machine Learning Python SDK you are using:"]
+> * [v1](how-to-use-mlflow.md)
+> * [v2 (current version)](../how-to-use-mlflow-cli-runs.md)
++
+In this article, learn how to enable MLflow's tracking URI and logging API, collectively known as [MLflow Tracking](https://mlflow.org/docs/latest/quickstart.html#using-the-tracking-api), to connect Azure Machine Learning as the backend of your MLflow experiments.
+
+> [!TIP]
+> For a more streamlined experience, see how to [Track experiments with the MLflow SDK or the Azure Machine Learning CLI (v2) (preview)](../how-to-use-mlflow-cli-runs.md)
+
+Supported capabilities include:
+* Track and log experiment metrics and artifacts in your [Azure Machine Learning workspace](concept-azure-machine-learning-architecture.md#workspace). If you already use MLflow Tracking for your experiments, the workspace provides a centralized, secure, and scalable location to store training metrics and models.
+* [Submit training jobs with MLflow Projects with Azure Machine Learning backend support (preview)](../how-to-train-mlflow-projects.md). You can submit jobs locally with Azure Machine Learning tracking, or migrate your runs to the cloud, such as to an [Azure Machine Learning Compute](../how-to-create-attach-compute-cluster.md) cluster.
+* Track and manage models in MLflow and the Azure Machine Learning model registry.
+[MLflow](https://www.mlflow.org) is an open-source library for managing the life cycle of your machine learning experiments. MLflow Tracking is a component of MLflow that logs and tracks your training run metrics and model artifacts, no matter your experiment's environment--locally on your computer, on a remote compute target, a virtual machine, or an [Azure Databricks cluster](../how-to-use-mlflow-azure-databricks.md).
+
+See [MLflow and Azure Machine Learning](concept-mlflow-v1.md) for additional MLflow and Azure Machine Learning functionality integrations.
+
+The following diagram illustrates that with MLflow Tracking, you track an experiment's run metrics and store model artifacts in your Azure Machine Learning workspace.
+
+![mlflow with azure machine learning diagram](./media/how-to-use-mlflow/mlflow-diagram-track.png)
+
+> [!TIP]
+> The information in this document is primarily for data scientists and developers who want to monitor the model training process. If you are an administrator interested in monitoring resource usage and events from Azure Machine Learning, such as quotas, completed training runs, or completed model deployments, see [Monitoring Azure Machine Learning](../monitor-azure-machine-learning.md).
+
+> [!NOTE]
+> You can use the [MLflow Skinny client](https://github.com/mlflow/mlflow/blob/master/README_SKINNY.rst), which is a lightweight MLflow package without SQL storage, server, UI, or data science dependencies. It's recommended for users who primarily need the tracking and logging capabilities without importing the full suite of MLflow features, including deployments.
+
+## Prerequisites
+
+* Install the `azureml-mlflow` package.
+ * This package automatically brings in `azureml-core`, part of [the Azure Machine Learning Python SDK](/python/api/overview/azure/ml/install), which provides the connectivity for MLflow to access your workspace.
+* [Create an Azure Machine Learning Workspace](../how-to-manage-workspace.md).
+ * See which [access permissions you need to perform your MLflow operations with your workspace](../how-to-assign-roles.md#mlflow-operations).
+
+## Track local runs
+
+MLflow Tracking with Azure Machine Learning lets you store the logged metrics and artifacts from runs that were executed on your local machine in your Azure Machine Learning workspace. For more information, see [How to log and view metrics (v2)](how-to-log-view-metrics.md).
+
+### Set up tracking environment
+
+To track a local run, you need to point your local machine to the Azure Machine Learning MLflow Tracking URI.
+
+Import the `mlflow` and [`Workspace`](/python/api/azureml-core/azureml.core.workspace%28class%29) classes to access MLflow's tracking URI and configure your workspace.
+
+In the following code, the `get_mlflow_tracking_uri()` method assigns a unique tracking URI address to the workspace, `ws`, and `set_tracking_uri()` points the MLflow tracking URI to that address.
+
+```Python
+import mlflow
+from azureml.core import Workspace
+
+ws = Workspace.from_config()
+
+mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri())
+```
+
+### Set experiment name
+
+All MLflow runs are logged to the active experiment, which can be set with the MLflow SDK or Azure CLI.
+
+Set the MLflow experiment name with the [`set_experiment()`](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.set_experiment) command.
+
+```Python
+experiment_name = 'experiment_with_mlflow'
+mlflow.set_experiment(experiment_name)
+```
+
+### Start training run
+
+After you set the MLflow experiment name, you can start your training run with `start_run()`. Then use `log_metric()` to activate the MLflow logging API and begin logging your training run metrics.
+
+```Python
+import os
+from random import random
+
+with mlflow.start_run() as mlflow_run:
+ mlflow.log_param("hello_param", "world")
+ mlflow.log_metric("hello_metric", random())
+ os.system(f"echo 'hello world' > helloworld.txt")
+ mlflow.log_artifact("helloworld.txt")
+```
+
+## Track remote runs
+
+Remote runs let you train your models on more powerful computes, such as GPU enabled virtual machines, or Machine Learning Compute clusters. See [Use compute targets for model training](../how-to-set-up-training-targets.md) to learn about different compute options.
+
+MLflow Tracking with Azure Machine Learning lets you store the logged metrics and artifacts from your remote runs into your Azure Machine Learning workspace. Any run with MLflow Tracking code in it will have metrics logged automatically to the workspace.
+
+First, create a `src` subdirectory and add a `train.py` file to it. All of your training code, including `train.py`, goes in the `src` subdirectory.
+
+The training code is taken from this [MLflow example](https://github.com/Azure/azureml-examples/blob/main/cli/jobs/basics/src/hello-mlflow.py) in the Azure Machine Learning example repo.
+
+Copy this code into the file:
+
+```Python
+# imports
+import os
+import mlflow
+
+from random import random
+
+# define functions
+def main():
+ mlflow.log_param("hello_param", "world")
+ mlflow.log_metric("hello_metric", random())
+ os.system(f"echo 'hello world' > helloworld.txt")
+ mlflow.log_artifact("helloworld.txt")
++
+# run functions
+if __name__ == "__main__":
+ # run main function
+ main()
+```
+
+Load the training script to submit an experiment:
+
+```Python
+script_dir = "src"
+training_script = 'train.py'
+with open("{}/{}".format(script_dir,training_script), 'r') as f:
+ print(f.read())
+```
+
+In your script, configure your compute and training run environment with the [`Environment`](/python/api/azureml-core/azureml.core.environment.environment) class.
+
+```Python
+from azureml.core import Environment
+from azureml.core.conda_dependencies import CondaDependencies
+
+env = Environment(name="mlflow-env")
+
+# Specify conda dependencies with scikit-learn and temporary pointers to mlflow extensions
+cd = CondaDependencies.create(
+ conda_packages=["scikit-learn", "matplotlib"],
+ pip_packages=["azureml-mlflow", "pandas", "numpy"]
+ )
+
+env.python.conda_dependencies = cd
+```
+
+Then, construct [`ScriptRunConfig`](/python/api/azureml-core/azureml.core.script_run_config.scriptrunconfig) with your remote compute as the compute target.
+
+```Python
+from azureml.core import ScriptRunConfig
+
+src = ScriptRunConfig(source_directory="src",
+ script=training_script,
+ compute_target="<COMPUTE_NAME>",
+ environment=env)
+```
+
+With this compute and training run configuration, use the `Experiment.submit()` method to submit a run. This method automatically sets the MLflow tracking URI and directs the logging from MLflow to your Workspace.
+
+```Python
+from azureml.core import Experiment
+from azureml.core import Workspace
+ws = Workspace.from_config()
+
+experiment_name = "experiment_with_mlflow"
+exp = Experiment(workspace=ws, name=experiment_name)
+
+run = exp.submit(src)
+```
+
+## View metrics and artifacts in your workspace
+
+The metrics and artifacts from MLflow logging are tracked in your workspace. To view them anytime, navigate to your workspace in [Azure Machine Learning studio](https://ml.azure.com) and find the experiment by name. Or run the code below.
+
+Retrieve the run metrics by using MLflow [get_run()](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.get_run).
+
+```Python
+from mlflow.entities import ViewType
+from mlflow.tracking import MlflowClient
+
+# Retrieve the run ID for the last run of the experiment
+current_experiment=mlflow.get_experiment_by_name(experiment_name)
+runs = mlflow.search_runs(experiment_ids=current_experiment.experiment_id, run_view_type=ViewType.ALL)
+run_id = runs.tail(1)["run_id"].tolist()[0]
+
+# Use MLflow to retrieve the run that was just completed
+client = MlflowClient()
+finished_mlflow_run = client.get_run(run_id)
+
+metrics = finished_mlflow_run.data.metrics
+tags = finished_mlflow_run.data.tags
+params = finished_mlflow_run.data.params
+
+print(metrics,tags,params)
+```
+
+### Retrieve artifacts with MLflow
+
+To view the artifacts of a run, you can use [MlflowClient.list_artifacts()](https://mlflow.org/docs/latest/python_api/mlflow.tracking.html#mlflow.tracking.MlflowClient.list_artifacts):
+
+```Python
+client.list_artifacts(run_id)
+```
+
+To download an artifact to the current directory, you can use [MlflowClient.download_artifacts()](https://www.mlflow.org/docs/latest/python_api/mlflow.tracking.html#mlflow.tracking.MlflowClient.download_artifacts):
+
+```Python
+client.download_artifacts(run_id, "helloworld.txt", ".")
+```
+
+### Compare and query
+
+Compare and query all MLflow runs in your Azure Machine Learning workspace with the following code.
+[Learn more about how to query runs with MLflow](https://mlflow.org/docs/latest/search-syntax.html#programmatically-searching-runs).
+
+```Python
+from mlflow.entities import ViewType
+
+all_experiments = [exp.experiment_id for exp in MlflowClient().list_experiments()]
+query = "metrics.hello_metric > 0"
+runs = mlflow.search_runs(experiment_ids=all_experiments, filter_string=query, run_view_type=ViewType.ALL)
+
+runs.head(10)
+```
+
+## Automatic logging
+With Azure Machine Learning and MLflow, users can log metrics, model parameters, and model artifacts automatically when training a model. A [variety of popular machine learning libraries](https://mlflow.org/docs/latest/tracking.html#automatic-logging) are supported.
+
+To enable [automatic logging](https://mlflow.org/docs/latest/tracking.html#automatic-logging), insert the following code before your training code:
+
+```Python
+mlflow.autolog()
+```
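+For example, the following is a small sketch (assuming scikit-learn is installed and the tracking URI is already set as shown earlier) where simply fitting a model is enough to log its parameters, metrics, and artifacts automatically:
+
+```Python
+import mlflow
+from sklearn.datasets import load_iris
+from sklearn.linear_model import LogisticRegression
+
+mlflow.autolog()  # parameters, metrics, and the fitted model are logged for you
+
+X, y = load_iris(return_X_y=True)
+with mlflow.start_run():
+    LogisticRegression(max_iter=200).fit(X, y)
+```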
+
+[Learn more about Automatic logging with MLflow](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.autolog).
+
+## Manage models
+
+Register and track your models with the [Azure Machine Learning model registry](concept-model-management-and-deployment.md#register-package-and-deploy-models-from-anywhere), which supports the MLflow model registry. Azure Machine Learning models are aligned with the MLflow model schema, making it easy to export and import these models across different workflows. MLflow-related metadata, such as the run ID, is also tagged with the registered model for traceability. Users can submit training runs, and register and deploy models produced from MLflow runs.
+
+If you want to deploy and register your production-ready model in one step, see [Deploy and register MLflow models](how-to-deploy-mlflow-models.md).
+
+To register and view a model from a run, use the following steps:
+
+1. Once a run is complete, call the [`register_model()`](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.register_model) method.
+
+ ```Python
+ # the model folder produced from a run is registered. This includes the MLmodel file, model.pkl and the conda.yaml.
+ model_path = "model"
+ model_uri = 'runs:/{}/{}'.format(run_id, model_path)
+ mlflow.register_model(model_uri,"registered_model_name")
+ ```
+
+1. View the registered model in your workspace with [Azure Machine Learning studio](../overview-what-is-machine-learning-studio.md).
+
+ In the following example, the registered model `my-model` has MLflow tracking metadata tagged.
+
+ ![register-mlflow-model](./media/how-to-use-mlflow/registered-mlflow-model.png)
+
+1. Select the **Artifacts** tab to see all the model files that align with the MLflow model schema (conda.yaml, MLmodel, model.pkl).
+
+ ![model-schema](./media/how-to-use-mlflow/mlflow-model-schema.png)
+
+1. Select MLmodel to see the MLmodel file generated by the run.
+
+ ![MLmodel-schema](./media/how-to-use-mlflow/mlmodel-view.png)
++
+## Clean up resources
+
+The ability to delete individual logged metrics and artifacts is currently unavailable. If you don't plan to use them, delete the resource group that contains the storage account and workspace, so you don't incur any charges:
+
+1. In the Azure portal, select **Resource groups** on the far left.
+
+ ![Delete in the Azure portal](./media/how-to-use-mlflow/delete-resources.png)
+
+1. From the list, select the resource group you created.
+
+1. Select **Delete resource group**.
+
+1. Enter the resource group name. Then select **Delete**.
+
+## Example notebooks
+
+The [MLflow with Azure ML notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/track-and-monitor-experiments/using-mlflow) demonstrate and expand upon concepts presented in this article. Also see the community-driven repository, [AzureML-Examples](https://github.com/Azure/azureml-examples).
+
+## Next steps
+
+* [Deploy models with MLflow](how-to-deploy-mlflow-models.md).
+* Monitor your production models for [data drift](../how-to-enable-data-collection.md).
+* [Track Azure Databricks runs with MLflow](../how-to-use-mlflow-azure-databricks.md).
+* [Manage your models](concept-model-management-and-deployment.md).
machine-learning Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/introduction.md
+
+ Title: Machine Learning CLI (v1)
+
+description: Learn about the machine learning extension for the Azure CLI (v1).
++++++++ Last updated : 05/10/2022+++
+# Azure Machine Learning SDK & CLI v1
++
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension or Python SDK you are using:"]
+> * [v1](introduction.md)
+> * [v2 (current version)](../index.yml)
+
+All articles in this section document the use of the first version of Azure Machine Learning Python SDK (v1) or Azure CLI ml extension (v1).
+
+## SDK v1
+
+The Azure SDK examples in the articles in this section require the `azureml-core` package, that is, Python SDK v1 for Azure Machine Learning. The Python SDK v2 is now available in preview.
+
+The v1 and v2 Python SDK packages are incompatible, and the v2 style of coding won't work for the articles in this directory. However, machine learning workspaces and all underlying resources can be interacted with from either version, meaning that one user can create a workspace with the SDK v1 and another can submit jobs to the same workspace with the SDK v2.
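+For example, the following SDK v1 sketch (placeholder values are assumptions) connects to an existing workspace, regardless of which SDK version created it:
+
+```python
+from azureml.core import Workspace
+
+# Sketch only: replace the placeholders with your own values.
+ws = Workspace.get(name="<workspace name>",
+                   subscription_id="<subscription id>",
+                   resource_group="<resource group>")
+print(ws.name, ws.location)
+```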
+
+We recommend that you not install both versions of the SDK in the same environment, since doing so can cause clashes and confusion in your code.
+
+## How do I know which SDK version I have?
+
+* To find out whether you have Azure ML Python SDK v1, run `pip show azureml-core`. (Or, in a Jupyter notebook, use `%pip show azureml-core` )
+* To find out whether you have Azure ML Python SDK v2, run `pip show azure-ai-ml`. (Or, in a Jupyter notebook, use `%pip show azure-ai-ml`)
+
+Based on the results of `pip show`, you can determine which version of the SDK you have.
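+If you prefer to check from Python itself, the following small sketch (an illustration, not an official tool) reports which SDK packages can be imported in the current environment:
+
+```python
+import importlib
+
+# Sketch only: tries to import the v1 and v2 SDK packages.
+for module, label in [("azureml.core", "SDK v1 (azureml-core)"),
+                      ("azure.ai.ml", "SDK v2 (azure-ai-ml)")]:
+    try:
+        importlib.import_module(module)
+        print(f"{label}: installed")
+    except ImportError:
+        print(f"{label}: not found")
+```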
+
+## CLI v1
+
+The Azure CLI commands in articles in this section __require__ the `azure-cli-ml`, or v1, extension for Azure Machine Learning. The enhanced v2 CLI using the `ml` extension is now available and recommended.
+
+The extensions are incompatible, so v2 CLI commands won't work for the articles in this directory. However, machine learning workspaces and all underlying resources can be interacted with from either version, meaning that one user can create a workspace with the v1 CLI and another can submit jobs to the same workspace with the v2 CLI.
+
+## How do I know which CLI extension I have?
+
+To find which extensions you have installed, use `az extension list`.
+* If the list of __Extensions__ contains `azure-cli-ml`, you have the v1 extension.
+* If the list contains `ml`, you have the v2 extension.
++
+## Next steps
+
+For more information on installing and using the different extensions, see the following articles:
+
+* `azure-cli-ml` - [Install, set up, and use the CLI (v1)](reference-azure-machine-learning-cli.md)
+* `ml` - [Install and set up the CLI (v2)](../how-to-configure-cli.md)
+
+For more information on installing and using the different SDK versions:
+
+* `azureml-core` - [Install the Azure Machine Learning SDK (v1) for Python](/python/api/overview/azure/ml/install?view=azure-ml-py)
+* `azure-ai-ml` - [Install the Azure Machine Learning SDK (v2) for Python](https://aka.ms/sdk-v2-install)
machine-learning Reference Azure Machine Learning Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/reference-azure-machine-learning-cli.md
+
+ Title: 'Install and set up the CLI (v1)'
+description: Learn how to use the Azure CLI extension (v1) for ML to create & manage resources such as your workspace, datastores, datasets, pipelines, models, and deployments.
+++++++ Last updated : 04/02/2021+++
+# Install & use the CLI (v1)
++
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
+> * [v1](reference-azure-machine-learning-cli.md)
+> * [v2 (current version)](../how-to-configure-cli.md)
++
+The Azure Machine Learning CLI is an extension to the [Azure CLI](/cli/azure/), a cross-platform command-line interface for the Azure platform. This extension provides commands for working with Azure Machine Learning. It allows you to automate your machine learning activities. The following list provides some example actions that you can do with the CLI extension:
+++ Run experiments to create machine learning models+++ Register machine learning models for customer usage+++ Package, deploy, and track the lifecycle of your machine learning models+
+The CLI is not a replacement for the Azure Machine Learning SDK. It's a complementary tool that is optimized to handle highly parameterized tasks that lend themselves well to automation.
+
+## Prerequisites
+
+* To use the CLI, you must have an Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
+
+* To use the CLI commands in this document from your **local environment**, you need the [Azure CLI](/cli/azure/install-azure-cli).
+
+ If you use the [Azure Cloud Shell](https://azure.microsoft.com/features/cloud-shell/), the CLI is accessed through the browser and lives in the cloud.
+
+## Full reference docs
+
+Find the [full reference docs for the azure-cli-ml extension of Azure CLI](/cli/azure/ml(v1)/).
+
+## Connect the CLI to your Azure subscription
+
+> [!IMPORTANT]
+> If you are using the Azure Cloud Shell, you can skip this section. The cloud shell automatically authenticates you by using the account that you sign in to your Azure subscription with.
+
+There are several ways that you can authenticate to your Azure subscription from the CLI. The most basic is to interactively authenticate using a browser. To authenticate interactively, open a command line or terminal and use the following command:
+
+```azurecli-interactive
+az login
+```
+
+If the CLI can open your default browser, it will do so and load a sign-in page. Otherwise, you need to open a browser and follow the instructions on the command line. The instructions involve browsing to [https://aka.ms/devicelogin](https://aka.ms/devicelogin) and entering an authorization code.
++
+For other methods of authenticating, see [Sign in with Azure CLI](/cli/azure/authenticate-azure-cli).
+
+## Install the extension
+
+To install the CLI (v1) extension:
+```azurecli-interactive
+az extension add -n azure-cli-ml
+```
+
+## Update the extension
+
+To update the Machine Learning CLI extension, use the following command:
+
+```azurecli-interactive
+az extension update -n azure-cli-ml
+```
+
+## Remove the extension
+
+To remove the CLI extension, use the following command:
+
+```azurecli-interactive
+az extension remove -n azure-cli-ml
+```
+
+## Resource management
+
+The following commands demonstrate how to use the CLI to manage resources used by Azure Machine Learning.
+++ If you do not already have one, create a resource group:+
+ ```azurecli-interactive
+ az group create -n myresourcegroup -l westus2
+ ```
+++ Create an Azure Machine Learning workspace:+
+ ```azurecli-interactive
+ az ml workspace create -w myworkspace -g myresourcegroup
+ ```
+
+ For more information, see [az ml workspace create](/cli/azure/ml/workspace#az-ml-workspace-create).
+++ Attach a workspace configuration to a folder to enable CLI contextual awareness.+
+ ```azurecli-interactive
+ az ml folder attach -w myworkspace -g myresourcegroup
+ ```
+
+ This command creates a `.azureml` subdirectory that contains example runconfig and conda environment files. It also contains a `config.json` file that is used to communicate with your Azure Machine Learning workspace.
+
+ For more information, see [az ml folder attach](/cli/azure/ml(v1)/folder#az-ml-folder-attach).
+++ Attach an Azure blob container as a Datastore.+
+ ```azurecli-interactive
+ az ml datastore attach-blob -n datastorename -a accountname -c containername
+ ```
+
+ For more information, see [az ml datastore attach-blob](/cli/azure/ml/datastore#az-ml-datastore-attach-blob).
+++ Upload files to a Datastore.+
+ ```azurecli-interactive
+ az ml datastore upload -n datastorename -p sourcepath
+ ```
+
+ For more information, see [az ml datastore upload](/cli/azure/ml/datastore#az-ml-datastore-upload).
+++ Attach an AKS cluster as a Compute Target.+
+ ```azurecli-interactive
+ az ml computetarget attach aks -n myaks -i myaksresourceid -g myresourcegroup -w myworkspace
+ ```
+
+ For more information, see [az ml computetarget attach aks](/cli/azure/ml(v1)/computetarget/attach#az-ml-computetarget-attach-aks)
+
+### Compute clusters
+++ Create a new managed compute cluster.+
+ ```azurecli-interactive
+ az ml computetarget create amlcompute -n cpu --min-nodes 1 --max-nodes 1 -s STANDARD_D3_V2
+ ```
+++++ Create a new managed compute cluster with managed identity+
+ + User-assigned managed identity
+
+ ```azurecli
+ az ml computetarget create amlcompute --name cpu-cluster --vm-size Standard_NC6 --max-nodes 5 --assign-identity '/subscriptions/<subscription_id>/resourcegroups/<resource_group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<user_assigned_identity>'
+ ```
+
+ + System-assigned managed identity
+
+ ```azurecli
+ az ml computetarget create amlcompute --name cpu-cluster --vm-size Standard_NC6 --max-nodes 5 --assign-identity '[system]'
+ ```
++ Add a managed identity to an existing cluster:+
+ + User-assigned managed identity
+ ```azurecli
+ az ml computetarget amlcompute identity assign --name cpu-cluster '/subscriptions/<subscription_id>/resourcegroups/<resource_group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<user_assigned_identity>'
+ ```
+ + System-assigned managed identity
+
+ ```azurecli
+ az ml computetarget amlcompute identity assign --name cpu-cluster '[system]'
+ ```
+
+For more information, see [az ml computetarget create amlcompute](/cli/azure/ml(v1)/computetarget/create#az-ml-computetarget-create-amlcompute).
++
+<a id="computeinstance"></a>
+
+### Compute instance
+Manage compute instances. In all the examples below, the name of the compute instance is **cpu**.
+++ Create a new computeinstance.+
+ ```azurecli-interactive
+ az ml computetarget create computeinstance -n cpu -s "STANDARD_D3_V2" -v
+ ```
+
+ For more information, see [az ml computetarget create computeinstance](/cli/azure/ml(v1)/computetarget/create#az-ml-computetarget-create-computeinstance).
+++ Stop a computeinstance.+
+ ```azurecli-interactive
+ az ml computetarget computeinstance stop -n cpu -v
+ ```
+
+ For more information, see [az ml computetarget computeinstance stop](/cli/azure/ml(v1)/computetarget/computeinstance#az-ml-computetarget-computeinstance-stop).
+++ Start a computeinstance.+
+ ```azurecli-interactive
+ az ml computetarget computeinstance start -n cpu -v
+ ```
+
+ For more information, see [az ml computetarget computeinstance start](/cli/azure/ml(v1)/computetarget/computeinstance#az-ml-computetarget-computeinstance-start).
+++ Restart a computeinstance.+
+ ```azurecli-interactive
+ az ml computetarget computeinstance restart -n cpu -v
+ ```
+
+ For more information, see [az ml computetarget computeinstance restart](/cli/azure/ml(v1)/computetarget/computeinstance#az-ml-computetarget-computeinstance-restart).
+++ Delete a computeinstance.+
+ ```azurecli-interactive
+ az ml computetarget delete -n cpu -v
+ ```
+
+ For more information, see [az ml computetarget delete computeinstance](/cli/azure/ml(v1)/computetarget#az-ml-computetarget-delete).
++
+## <a id="experiments"></a>Run experiments
+
+* Start a run of your experiment. When using this command, specify the name of the runconfig file (the text before \*.runconfig if you are looking at your file system) with the `-c` parameter.
+
+ ```azurecli-interactive
+ az ml run submit-script -c sklearn -e testexperiment train.py
+ ```
+
+ > [!TIP]
+ > The `az ml folder attach` command creates a `.azureml` subdirectory, which contains two example runconfig files.
+ >
+ > If you have a Python script that creates a run configuration object programmatically, you can use [RunConfig.save()](/python/api/azureml-core/azureml.core.runconfiguration#save-path-none--name-none--separate-environment-yaml-false-) to save it as a runconfig file.
+ >
+ > The full runconfig schema can be found in this [JSON file](https://github.com/microsoft/MLOps/blob/b4bdcf8c369d188e83f40be8b748b49821f71cf2/infra-as-code/runconfigschema.json). The schema is self-documenting through the `description` key of each object. Additionally, there are enums for possible values, and a template snippet at the end.
+
+ For more information, see [az ml run submit-script](/cli/azure/ml(v1)/run#az-ml-run-submit-script).
+
+* View a list of experiments:
+
+ ```azurecli-interactive
+ az ml experiment list
+ ```
+
+ For more information, see [az ml experiment list](/cli/azure/ml(v1)/experiment#az-ml-experiment-list).
+
+### HyperDrive run
+
+You can use HyperDrive with the Azure CLI to perform parameter tuning runs. First, create a HyperDrive configuration file in the following format. See the [Tune hyperparameters for your model](../how-to-tune-hyperparameters.md) article for details on hyperparameter tuning parameters.
+
+```yml
+# hdconfig.yml
+sampling:
+ type: random # Supported options: Random, Grid, Bayesian
+ parameter_space: # specify a name|expression|values tuple for each parameter.
+ - name: --penalty # The name of a script parameter to generate values for.
+ expression: choice # supported options: choice, randint, uniform, quniform, loguniform, qloguniform, normal, qnormal, lognormal, qlognormal
+ values: [0.5, 1, 1.5] # The list of values, the number of values is dependent on the expression specified.
+policy:
+ type: BanditPolicy # Supported options: BanditPolicy, MedianStoppingPolicy, TruncationSelectionPolicy, NoTerminationPolicy
+ evaluation_interval: 1 # Policy properties are policy specific. See the above link for policy specific parameter details.
+ slack_factor: 0.2
+primary_metric_name: Accuracy # The metric used when evaluating the policy
+primary_metric_goal: Maximize # Maximize|Minimize
+max_total_runs: 8 # The maximum number of runs to generate
+max_concurrent_runs: 2 # The number of runs that can run concurrently.
+max_duration_minutes: 100 # The maximum length of time to run the experiment before cancelling.
+```
+
+Add this file alongside the run configuration files. Then submit a HyperDrive run using:
+```azurecli
+az ml run submit-hyperdrive -e <experiment> -c <runconfig> --hyperdrive-configuration-name <hdconfig> my_train.py
+```
+
+Note the *arguments* section in the runconfig and the *parameter space* in the HyperDrive config. They contain the command-line arguments to be passed to the training script. The value in the runconfig stays the same for each iteration, while the range in the HyperDrive config is iterated over. Do not specify the same argument in both files.
+
+## Dataset management
+
+The following commands demonstrate how to work with datasets in Azure Machine Learning:
+++ Register a dataset:+
+ ```azurecli-interactive
+ az ml dataset register -f mydataset.json
+ ```
+
+ For information on the format of the JSON file used to define the dataset, use `az ml dataset register --show-template`.
+
+ For more information, see [az ml dataset register](/cli/azure/ml(v1)/dataset#az-ml-dataset-register).
+++ List all datasets in a workspace:+
+ ```azurecli-interactive
+ az ml dataset list
+ ```
+
+ For more information, see [az ml dataset list](/cli/azure/ml(v1)/dataset#az-ml-dataset-list).
+++ Get details of a dataset:+
+ ```azurecli-interactive
+ az ml dataset show -n dataset-name
+ ```
+
+ For more information, see [az ml dataset show](/cli/azure/ml(v1)/dataset#az-ml-dataset-show).
+++ Unregister a dataset:+
+ ```azurecli-interactive
+ az ml dataset unregister -n dataset-name
+ ```
+
+ For more information, see [az ml dataset unregister](/cli/azure/ml(v1)/dataset#az-ml-dataset-archive).
+
+## Environment management
+
+The following commands demonstrate how to create, register, and list Azure Machine Learning [environments](../how-to-configure-environment.md) for your workspace:
+++ Create scaffolding files for an environment:+
+ ```azurecli-interactive
+ az ml environment scaffold -n myenv -d myenvdirectory
+ ```
+
+ For more information, see [az ml environment scaffold](/cli/azure/ml/environment#az-ml-environment-scaffold).
+++ Register an environment:+
+ ```azurecli-interactive
+ az ml environment register -d myenvdirectory
+ ```
+
+ For more information, see [az ml environment register](/cli/azure/ml/environment#az-ml-environment-register).
+++ List registered environments:+
+ ```azurecli-interactive
+ az ml environment list
+ ```
+
+ For more information, see [az ml environment list](/cli/azure/ml/environment#az-ml-environment-list).
+++ Download a registered environment:+
+ ```azurecli-interactive
+ az ml environment download -n myenv -d downloaddirectory
+ ```
+
+ For more information, see [az ml environment download](/cli/azure/ml/environment#az-ml-environment-download).
+
+### Environment configuration schema
+
+The `az ml environment scaffold` command generates a template `azureml_environment.json` file that can be modified and used to create custom environment configurations with the CLI. The top-level object loosely maps to the [`Environment`](/python/api/azureml-core/azureml.core.environment%28class%29) class in the Python SDK.
+
+```json
+{
+ "name": "testenv",
+ "version": null,
+ "environmentVariables": {
+ "EXAMPLE_ENV_VAR": "EXAMPLE_VALUE"
+ },
+ "python": {
+ "userManagedDependencies": false,
+ "interpreterPath": "python",
+ "condaDependenciesFile": null,
+ "baseCondaEnvironment": null
+ },
+ "docker": {
+ "enabled": false,
+ "baseImage": "mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210615.v1",
+ "baseDockerfile": null,
+ "sharedVolumes": true,
+ "shmSize": "2g",
+ "arguments": [],
+ "baseImageRegistry": {
+ "address": null,
+ "username": null,
+ "password": null
+ }
+ },
+ "spark": {
+ "repositories": [],
+ "packages": [],
+ "precachePackages": true
+ },
+ "databricks": {
+ "mavenLibraries": [],
+ "pypiLibraries": [],
+ "rcranLibraries": [],
+ "jarLibraries": [],
+ "eggLibraries": []
+ },
+ "inferencingStackVersion": null
+}
+```
+
+The following table details each top-level field in the JSON file, its type, and a description. If an object type is linked to a class from the Python SDK, there is a loose 1:1 match between each JSON field and the public variable name in the Python class. In some cases, the field may map to a constructor argument rather than a class variable. For example, the `environmentVariables` field maps to the `environment_variables` variable in the [`Environment`](/python/api/azureml-core/azureml.core.environment%28class%29) class.
+
+| JSON field | Type | Description |
+||||
+| `name` | `string` | Name of the environment. Do not start name with **Microsoft** or **AzureML**. |
+| `version` | `string` | Version of the environment. |
+| `environmentVariables` | `{string: string}` | A hash-map of environment variable names and values. |
+| `python` | [`PythonSection`](/python/api/azureml-core/azureml.core.environment.pythonsection) | Defines the Python environment and interpreter to use on the target compute resource. |
+| `docker` | [`DockerSection`](/python/api/azureml-core/azureml.core.environment.dockersection) | Defines settings to customize the Docker image built to the environment's specifications. |
+| `spark` | [`SparkSection`](/python/api/azureml-core/azureml.core.environment.sparksection) | The section configures Spark settings. It is only used when framework is set to PySpark. |
+| `databricks` | [`DatabricksSection`](/python/api/azureml-core/azureml.core.databricks.databrickssection) | Configures Databricks library dependencies. |
+| `inferencingStackVersion` | `string` | Specifies the inferencing stack version added to the image. To avoid adding an inferencing stack, leave this field `null`. Valid value: "latest". |
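+As a rough illustration (not an exhaustive mapping), the following Python SDK sketch builds an `Environment` object whose settings correspond to a few of the JSON fields shown above:
+
+```python
+from azureml.core import Environment
+
+# Sketch only: mirrors part of the scaffolded JSON file.
+env = Environment(name="testenv")
+env.environment_variables = {"EXAMPLE_ENV_VAR": "EXAMPLE_VALUE"}
+env.python.user_managed_dependencies = False
+env.python.interpreter_path = "python"
+env.docker.base_image = "mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210615.v1"
+```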
+
+## ML pipeline management
+
+The following commands demonstrate how to work with machine learning pipelines:
+++ Create a machine learning pipeline:+
+ ```azurecli-interactive
+ az ml pipeline create -n mypipeline -y mypipeline.yml
+ ```
+
+ For more information, see [az ml pipeline create](/cli/azure/ml(v1)/pipeline#az-ml-pipeline-create).
+
+ For more information on the pipeline YAML file, see [Define machine learning pipelines in YAML](reference-pipeline-yaml.md).
+++ Run a pipeline:+
+ ```azurecli-interactive
+ az ml run submit-pipeline -n myexperiment -y mypipeline.yml
+ ```
+
+ For more information, see [az ml run submit-pipeline](/cli/azure/ml(v1)/run#az-ml-run-submit-pipeline).
+
+ For more information on the pipeline YAML file, see [Define machine learning pipelines in YAML](reference-pipeline-yaml.md).
+++ Schedule a pipeline:+
+ ```azurecli-interactive
+ az ml pipeline create-schedule -n myschedule -e myexperiment -i mypipelineid -y myschedule.yml
+ ```
+
+ For more information, see [az ml pipeline create-schedule](/cli/azure/ml(v1)/pipeline#az-ml-pipeline-create-schedule).
+
+## Model registration, profiling, deployment
+
+The following commands demonstrate how to register a trained model, and then deploy it as a production service:
+++ Register a model with Azure Machine Learning:+
+ ```azurecli-interactive
+ az ml model register -n mymodel -p sklearn_regression_model.pkl
+ ```
+
+ For more information, see [az ml model register](/cli/azure/ml/model#az-ml-model-register).
+++ **OPTIONAL** Profile your model to get optimal CPU and memory values for deployment.
+ ```azurecli-interactive
+ az ml model profile -n myprofile -m mymodel:1 --ic inferenceconfig.json -d "{\"data\": [[1,2,3,4,5,6,7,8,9,10],[10,9,8,7,6,5,4,3,2,1]]}" -t myprofileresult.json
+ ```
+
+ For more information, see [az ml model profile](/cli/azure/ml/model#az-ml-model-profile).
+++ Deploy your model to AKS
+ ```azurecli-interactive
+ az ml model deploy -n myservice -m mymodel:1 --ic inferenceconfig.json --dc deploymentconfig.json --ct akscomputetarget
+ ```
+
+ For more information on the inference configuration file schema, see [Inference configuration schema](#inferenceconfig).
+
+ For more information on the deployment configuration file schema, see [Deployment configuration schema](#deploymentconfig).
+
+ For more information, see [az ml model deploy](/cli/azure/ml/model#az-ml-model-deploy).
+
+<a id="inferenceconfig"></a>
+
+## Inference configuration schema
++
+<a id="deploymentconfig"></a>
+
+## Deployment configuration schema
+
+### Local deployment configuration schema
++
+### Azure Container Instance deployment configuration schema
++
+### Azure Kubernetes Service deployment configuration schema
++
+## Next steps
+
+* [Command reference for the Machine Learning CLI extension](/cli/azure/ml).
+
+* [Train and deploy machine learning models using Azure Pipelines](/azure/devops/pipelines/targets/azure-machine-learning)
machine-learning Reference Pipeline Yaml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/reference-pipeline-yaml.md
+
+ Title: Machine Learning pipeline YAML (v1)
+
+description: Learn how to define a machine learning pipeline using a YAML file. YAML pipeline definitions are used with the machine learning extension for the Azure CLI (v1).
++++++++ Last updated : 07/31/2020+++
+# CLI (v1) pipeline job YAML schema
++
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
+> * [v1](reference-pipeline-yaml.md)
+> * [v2 (current version)](../reference-yaml-job-pipeline.md)
+
+> [!NOTE]
+> The YAML syntax detailed in this document is based on the JSON schema for the v1 version of the ML CLI extension. This syntax is guaranteed only to work with the ML CLI v1 extension.
+> Switch to the [v2 (current version)](../reference-yaml-job-pipeline.md) for the syntax for ML CLI v2.
+
+Define your machine learning pipelines in [YAML](https://yaml.org/). When using the machine learning extension for the [Azure CLI **v1**](reference-azure-machine-learning-cli.md), many of the pipeline-related commands expect a YAML file that defines the pipeline.
+
+The following table lists what is and is not currently supported when defining a pipeline in YAML for use with CLI v1:
+
+| Step type | Supported? |
+| -- | :--: |
+| PythonScriptStep | Yes |
+| ParallelRunStep | Yes |
+| AdlaStep | Yes |
+| AzureBatchStep | Yes |
+| DatabricksStep | Yes |
+| DataTransferStep | Yes |
+| AutoMLStep | No |
+| HyperDriveStep | No |
+| ModuleStep | Yes |
+| MPIStep | No |
+| EstimatorStep | No |
+
+## Pipeline definition
+
+A pipeline definition uses the following keys, which correspond to the [Pipelines](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipeline.pipeline) class:
+
+| YAML key | Description |
+| -- | -- |
+| `name` | The name of the pipeline. |
+| `parameters` | Parameter(s) to the pipeline. |
+| `data_references` | Defines how and where data should be made available in a run. |
+| `default_compute` | Default compute target where all steps in the pipeline run. |
+| `steps` | The steps used in the pipeline. |
+
+## Parameters
+
+The `parameters` section uses the following keys, which correspond to the [PipelineParameter](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelineparameter) class:
+
+| YAML key | Description |
+| - | - |
+| `type` | The value type of the parameter. Valid types are `string`, `int`, `float`, `bool`, or `datapath`. |
+| `default` | The default value. |
+
+Each parameter is named. For example, the following YAML snippet defines three parameters named `NumIterationsParameter`, `DataPathParameter`, and `NodeCountParameter`:
+
+```yaml
+pipeline:
+ name: SamplePipelineFromYaml
+ parameters:
+ NumIterationsParameter:
+ type: int
+ default: 40
+ DataPathParameter:
+ type: datapath
+ default:
+ datastore: workspaceblobstore
+ path_on_datastore: sample2.txt
+ NodeCountParameter:
+ type: int
+ default: 4
+```
+
+## Data reference
+
+The `data_references` section uses the following keys, which correspond to the [DataReference](/python/api/azureml-core/azureml.data.data_reference.datareference) class:
+
+| YAML key | Description |
+| -- | -- |
+| `datastore` | The datastore to reference. |
+| `path_on_datastore` | The relative path in the backing storage for the data reference. |
+
+Each data reference is contained in a key. For example, the following YAML snippet defines a data reference stored in the key named `employee_data`:
+
+```yaml
+pipeline:
+ name: SamplePipelineFromYaml
+ parameters:
+ PipelineParam1:
+ type: int
+ default: 3
+ data_references:
+ employee_data:
+ datastore: adftestadla
+ path_on_datastore: "adla_sample/sample_input.csv"
+```
+
+## Steps
+
+Steps define a computational environment, along with the files to run on the environment. To define the type of a step, use the `type` key:
+
+| Step type | Description |
+| -- | -- |
+| `AdlaStep` | Runs a U-SQL script with Azure Data Lake Analytics. Corresponds to the [AdlaStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.adlastep) class. |
+| `AzureBatchStep` | Runs jobs using Azure Batch. Corresponds to the [AzureBatchStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.azurebatchstep) class. |
+| `DatabricksStep` | Adds a Databricks notebook, Python script, or JAR. Corresponds to the [DatabricksStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.databricksstep) class. |
+| `DataTransferStep` | Transfers data between storage options. Corresponds to the [DataTransferStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.datatransferstep) class. |
+| `PythonScriptStep` | Runs a Python script. Corresponds to the [PythonScriptStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.python_script_step.pythonscriptstep) class. |
+| `ParallelRunStep` | Runs a Python script to process large amounts of data asynchronously and in parallel. Corresponds to the [ParallelRunStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.parallel_run_step.parallelrunstep) class. |
+
+### ADLA step
+
+| YAML key | Description |
+| -- | -- |
+| `script_name` | The name of the U-SQL script (relative to the `source_directory`). |
+| `compute` | The Azure Data Lake compute target to use for this step. |
+| `parameters` | [Parameters](#parameters) to the pipeline. |
+| `inputs` | Inputs can be [InputPortBinding](/python/api/azureml-pipeline-core/azureml.pipeline.core.graph.inputportbinding), [DataReference](#data-reference), [PortDataReference](/python/api/azureml-pipeline-core/azureml.pipeline.core.portdatareference), [PipelineData](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedata), [Dataset](/python/api/azureml-core/azureml.core.dataset%28class%29), [DatasetDefinition](/python/api/azureml-core/azureml.data.dataset_definition.datasetdefinition), or [PipelineDataset](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedataset). |
+| `outputs` | Outputs can be either [PipelineData](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedata) or [OutputPortBinding](/python/api/azureml-pipeline-core/azureml.pipeline.core.graph.outputportbinding). |
+| `source_directory` | Directory that contains the script, assemblies, etc. |
+| `priority` | The priority value to use for the current job. |
+| `params` | Dictionary of name-value pairs. |
+| `degree_of_parallelism` | The degree of parallelism to use for this job. |
+| `runtime_version` | The runtime version of the Data Lake Analytics engine. |
+| `allow_reuse` | Determines whether the step should reuse previous results when run again with the same settings. |
+
+The following example contains an ADLA Step definition:
+
+```yaml
+pipeline:
+ name: SamplePipelineFromYaml
+ parameters:
+ PipelineParam1:
+ type: int
+ default: 3
+ data_references:
+ employee_data:
+ datastore: adftestadla
+ path_on_datastore: "adla_sample/sample_input.csv"
+ default_compute: adlacomp
+ steps:
+ Step1:
+ runconfig: "D:\\Yaml\\default_runconfig.yml"
+ parameters:
+ NUM_ITERATIONS_2:
+ source: PipelineParam1
+ NUM_ITERATIONS_1: 7
+ type: "AdlaStep"
+ name: "MyAdlaStep"
+ script_name: "sample_script.usql"
+ source_directory: "D:\\scripts\\Adla"
+ inputs:
+ employee_data:
+ source: employee_data
+ outputs:
+ OutputData:
+ destination: Output4
+ datastore: adftestadla
+ bind_mode: mount
+```
+
+### Azure Batch step
+
+| YAML key | Description |
+| -- | -- |
+| `compute` | The Azure Batch compute target to use for this step. |
+| `inputs` | Inputs can be [InputPortBinding](/python/api/azureml-pipeline-core/azureml.pipeline.core.graph.inputportbinding), [DataReference](#data-reference), [PortDataReference](/python/api/azureml-pipeline-core/azureml.pipeline.core.portdatareference), [PipelineData](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedata), [Dataset](/python/api/azureml-core/azureml.core.dataset%28class%29), [DatasetDefinition](/python/api/azureml-core/azureml.data.dataset_definition.datasetdefinition), or [PipelineDataset](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedataset). |
+| `outputs` | Outputs can be either [PipelineData](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedata) or [OutputPortBinding](/python/api/azureml-pipeline-core/azureml.pipeline.core.graph.outputportbinding). |
+| `source_directory` | Directory that contains the module binaries, executable, assemblies, etc. |
+| `executable` | Name of the command/executable that will be run as part of this job. |
+| `create_pool` | Boolean flag to indicate whether to create the pool before running the job. |
+| `delete_batch_job_after_finish` | Boolean flag to indicate whether to delete the job from the Batch account after it's finished. |
+| `delete_batch_pool_after_finish` | Boolean flag to indicate whether to delete the pool after the job finishes. |
+| `is_positive_exit_code_failure` | Boolean flag to indicate if the job fails if the task exits with a positive code. |
+| `vm_image_urn` | The URN of the VM image to use, if `create_pool` is `True` and the VM uses `VirtualMachineConfiguration`. |
+| `pool_id` | The ID of the pool where the job will run. |
+| `allow_reuse` | Determines whether the step should reuse previous results when run again with the same settings. |
+
+The following example contains an Azure Batch step definition:
+
+```yaml
+pipeline:
+ name: SamplePipelineFromYaml
+ parameters:
+ PipelineParam1:
+ type: int
+ default: 3
+ data_references:
+ input:
+ datastore: workspaceblobstore
+ path_on_datastore: "input.txt"
+ default_compute: testbatch
+ steps:
+ Step1:
+ runconfig: "D:\\Yaml\\default_runconfig.yml"
+ parameters:
+ NUM_ITERATIONS_2:
+ source: PipelineParam1
+ NUM_ITERATIONS_1: 7
+ type: "AzureBatchStep"
+ name: "MyAzureBatchStep"
+ pool_id: "MyPoolName"
+ create_pool: true
+ executable: "azurebatch.cmd"
+ source_directory: "D:\\scripts\\AureBatch"
+ allow_reuse: false
+ inputs:
+ input:
+ source: input
+ outputs:
+ output:
+ destination: output
+ datastore: workspaceblobstore
+```
+
+### Databricks step
+
+| YAML key | Description |
+| -- | -- |
+| `compute` | The Azure Databricks compute target to use for this step. |
+| `inputs` | Inputs can be [InputPortBinding](/python/api/azureml-pipeline-core/azureml.pipeline.core.graph.inputportbinding), [DataReference](#data-reference), [PortDataReference](/python/api/azureml-pipeline-core/azureml.pipeline.core.portdatareference), [PipelineData](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedata), [Dataset](/python/api/azureml-core/azureml.core.dataset%28class%29), [DatasetDefinition](/python/api/azureml-core/azureml.data.dataset_definition.datasetdefinition), or [PipelineDataset](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedataset). |
+| `outputs` | Outputs can be either [PipelineData](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedata) or [OutputPortBinding](/python/api/azureml-pipeline-core/azureml.pipeline.core.graph.outputportbinding). |
+| `run_name` | The name in Databricks for this run. |
+| `source_directory` | Directory that contains the script and other files. |
+| `num_workers` | The static number of workers for the Databricks run cluster. |
+| `runconfig` | The path to a `.runconfig` file. This file is a YAML representation of the [RunConfiguration](/python/api/azureml-core/azureml.core.runconfiguration) class. For more information on the structure of this file, see [runconfigschema.json](https://github.com/microsoft/MLOps/blob/b4bdcf8c369d188e83f40be8b748b49821f71cf2/infra-as-code/runconfigschema.json). |
+| `allow_reuse` | Determines whether the step should reuse previous results when run again with the same settings. |
+
+The following example contains a Databricks step:
+
+```yaml
+pipeline:
+ name: SamplePipelineFromYaml
+ parameters:
+ PipelineParam1:
+ type: int
+ default: 3
+ data_references:
+ adls_test_data:
+ datastore: adftestadla
+ path_on_datastore: "testdata"
+ blob_test_data:
+ datastore: workspaceblobstore
+ path_on_datastore: "dbtest"
+ default_compute: mydatabricks
+ steps:
+ Step1:
+ runconfig: "D:\\Yaml\\default_runconfig.yml"
+ parameters:
+ NUM_ITERATIONS_2:
+ source: PipelineParam1
+ NUM_ITERATIONS_1: 7
+ type: "DatabricksStep"
+ name: "MyDatabrickStep"
+ run_name: "DatabricksRun"
+ python_script_name: "train-db-local.py"
+ source_directory: "D:\\scripts\\Databricks"
+ num_workers: 1
+ allow_reuse: true
+ inputs:
+ blob_test_data:
+ source: blob_test_data
+ outputs:
+ OutputData:
+ destination: Output4
+ datastore: workspaceblobstore
+ bind_mode: mount
+```
+
+### Data transfer step
+
+| YAML key | Description |
+| -- | -- |
+| `compute` | The Azure Data Factory compute target to use for this step. |
+| `source_data_reference` | Input connection that serves as the source of data transfer operations. Supported values are [InputPortBinding](/python/api/azureml-pipeline-core/azureml.pipeline.core.graph.inputportbinding), [DataReference](#data-reference), [PortDataReference](/python/api/azureml-pipeline-core/azureml.pipeline.core.portdatareference), [PipelineData](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedata), [Dataset](/python/api/azureml-core/azureml.core.dataset%28class%29), [DatasetDefinition](/python/api/azureml-core/azureml.data.dataset_definition.datasetdefinition), or [PipelineDataset](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedataset). |
+| `destination_data_reference` | Input connection that serves as the destination of data transfer operations. Supported values are [PipelineData](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedata) and [OutputPortBinding](/python/api/azureml-pipeline-core/azureml.pipeline.core.graph.outputportbinding). |
+| `allow_reuse` | Determines whether the step should reuse previous results when run again with the same settings. |
+
+The following example contains a data transfer step:
+
+```yaml
+pipeline:
+ name: SamplePipelineFromYaml
+ parameters:
+ PipelineParam1:
+ type: int
+ default: 3
+ data_references:
+ adls_test_data:
+ datastore: adftestadla
+ path_on_datastore: "testdata"
+ blob_test_data:
+ datastore: workspaceblobstore
+ path_on_datastore: "testdata"
+ default_compute: adftest
+ steps:
+ Step1:
+ runconfig: "D:\\Yaml\\default_runconfig.yml"
+ parameters:
+ NUM_ITERATIONS_2:
+ source: PipelineParam1
+ NUM_ITERATIONS_1: 7
+ type: "DataTransferStep"
+ name: "MyDataTransferStep"
+ adla_compute_name: adftest
+ source_data_reference:
+ adls_test_data:
+ source: adls_test_data
+ destination_data_reference:
+ blob_test_data:
+ source: blob_test_data
+```
+
+### Python script step
+
+| YAML key | Description |
+| -- | -- |
+| `inputs` | Inputs can be [InputPortBinding](/python/api/azureml-pipeline-core/azureml.pipeline.core.graph.inputportbinding), [DataReference](#data-reference), [PortDataReference](/python/api/azureml-pipeline-core/azureml.pipeline.core.portdatareference), [PipelineData](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedata), [Dataset](/python/api/azureml-core/azureml.core.dataset%28class%29), [DatasetDefinition](/python/api/azureml-core/azureml.data.dataset_definition.datasetdefinition), or [PipelineDataset](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedataset). |
+| `outputs` | Outputs can be either [PipelineData](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedata) or [OutputPortBinding](/python/api/azureml-pipeline-core/azureml.pipeline.core.graph.outputportbinding). |
+| `script_name` | The name of the Python script (relative to `source_directory`). |
+| `source_directory` | Directory that contains the script, Conda environment, etc. |
+| `runconfig` | The path to a `.runconfig` file. This file is a YAML representation of the [RunConfiguration](/python/api/azureml-core/azureml.core.runconfiguration) class. For more information on the structure of this file, see [runconfigschema.json](https://github.com/microsoft/MLOps/blob/b4bdcf8c369d188e83f40be8b748b49821f71cf2/infra-as-code/runconfigschema.json). |
+| `allow_reuse` | Determines whether the step should reuse previous results when run again with the same settings. |
+
+The following example contains a Python script step:
+
+```yaml
+pipeline:
+ name: SamplePipelineFromYaml
+ parameters:
+ PipelineParam1:
+ type: int
+ default: 3
+ data_references:
+ DataReference1:
+ datastore: workspaceblobstore
+ path_on_datastore: testfolder/sample.txt
+ default_compute: cpu-cluster
+ steps:
+ Step1:
+ runconfig: "D:\\Yaml\\default_runconfig.yml"
+ parameters:
+ NUM_ITERATIONS_2:
+ source: PipelineParam1
+ NUM_ITERATIONS_1: 7
+ type: "PythonScriptStep"
+ name: "MyPythonScriptStep"
+ script_name: "train.py"
+ allow_reuse: True
+ source_directory: "D:\\scripts\\PythonScript"
+ inputs:
+ InputData:
+ source: DataReference1
+ outputs:
+ OutputData:
+ destination: Output4
+ datastore: workspaceblobstore
+ bind_mode: mount
+```
+
+### Parallel run step
+
+| YAML key | Description |
+| -- | -- |
+| `inputs` | Inputs can be [Dataset](/python/api/azureml-core/azureml.core.dataset%28class%29), [DatasetDefinition](/python/api/azureml-core/azureml.data.dataset_definition.datasetdefinition), or [PipelineDataset](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedataset). |
+| `outputs` | Outputs can be either [PipelineData](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedata) or [OutputPortBinding](/python/api/azureml-pipeline-core/azureml.pipeline.core.graph.outputportbinding). |
+| `script_name` | The name of the Python script (relative to `source_directory`). |
+| `source_directory` | Directory that contains the script, Conda environment, etc. |
+| `parallel_run_config` | The path to a `parallel_run_config.yml` file. This file is a YAML representation of the [ParallelRunConfig](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.parallelrunconfig) class. |
+| `allow_reuse` | Determines whether the step should reuse previous results when run again with the same settings. |
+
+The following example contains a Parallel run step:
+
+```yaml
+pipeline:
+ description: SamplePipelineFromYaml
+ default_compute: cpu-cluster
+ data_references:
+ MyMinistInput:
+ dataset_name: mnist_sample_data
+ parameters:
+ PipelineParamTimeout:
+ type: int
+ default: 600
+ steps:
+ Step1:
+ parallel_run_config: "yaml/parallel_run_config.yml"
+ type: "ParallelRunStep"
+ name: "parallel-run-step-1"
+ allow_reuse: True
+ arguments:
+ - "--progress_update_timeout"
+ - parameter:timeout_parameter
+ - "--side_input"
+ - side_input:SideInputData
+ parameters:
+ timeout_parameter:
+ source: PipelineParamTimeout
+ inputs:
+ InputData:
+ source: MyMinistInput
+ side_inputs:
+ SideInputData:
+ source: Output4
+ bind_mode: mount
+ outputs:
+ OutputDataStep2:
+ destination: Output5
+ datastore: workspaceblobstore
+ bind_mode: mount
+```
+
+### Pipeline with multiple steps
+
+| YAML key | Description |
+| -- | -- |
+| `steps` | Sequence of one or more PipelineStep definitions. Note that the `destination` keys of one step's `outputs` become the `source` keys to the `inputs` of the next step.|
+
+```yaml
+pipeline:
+ name: SamplePipelineFromYAML
+ description: Sample multistep YAML pipeline
+ data_references:
+ TitanicDS:
+ dataset_name: 'titanic_ds'
+ bind_mode: download
+ default_compute: cpu-cluster
+ steps:
+ Dataprep:
+ type: "PythonScriptStep"
+ name: "DataPrep Step"
+ compute: cpu-cluster
+ runconfig: ".\\default_runconfig.yml"
+ script_name: "prep.py"
+ arguments:
+ - '--train_path'
+ - output:train_path
+ - '--test_path'
+ - output:test_path
+ allow_reuse: True
+ inputs:
+ titanic_ds:
+ source: TitanicDS
+ bind_mode: download
+ outputs:
+ train_path:
+ destination: train_csv
+ datastore: workspaceblobstore
+ test_path:
+ destination: test_csv
+ Training:
+ type: "PythonScriptStep"
+ name: "Training Step"
+ compute: cpu-cluster
+ runconfig: ".\\default_runconfig.yml"
+ script_name: "train.py"
+ arguments:
+ - "--train_path"
+ - input:train_path
+ - "--test_path"
+ - input:test_path
+ inputs:
+ train_path:
+ source: train_csv
+ bind_mode: download
+ test_path:
+ source: test_csv
+ bind_mode: download
+
+```
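+
+The YAML definitions in this article are typically consumed by the Azure Machine Learning CLI extension (see the Next steps section). They can also be loaded from the Python SDK (v1) with `Pipeline.load_yaml`. The following is a minimal sketch, assuming the multistep definition above is saved as `pipeline.yml` (the file name and experiment name are assumptions) and a workspace configuration file is available locally:
+
+```python
+from azureml.core import Experiment, Workspace
+from azureml.pipeline.core import Pipeline
+
+# Load the workspace from a config.json in the current directory or a parent
+workspace = Workspace.from_config()
+
+# Build a Pipeline object from the YAML definition (file name is an assumption)
+pipeline = Pipeline.load_yaml(workspace, filename="pipeline.yml")
+
+# Submit the pipeline as an experiment run and wait for it to finish
+run = Experiment(workspace, "yaml-pipeline-sample").submit(pipeline)
+run.wait_for_completion(show_output=True)
+```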
+
+## Schedules
+
+A pipeline schedule can be either datastore-triggered or recurring based on a time interval. The following keys are used to define a schedule:
+
+| YAML key | Description |
+| -- | -- |
+| `description` | A description of the schedule. |
+| `recurrence` | Contains recurrence settings, if the schedule is recurring. |
+| `pipeline_parameters` | Any parameters that are required by the pipeline. |
+| `wait_for_provisioning` | Whether to wait for provisioning of the schedule to complete. |
+| `wait_timeout` | The number of seconds to wait before timing out. |
+| `datastore_name` | The datastore to monitor for modified/added blobs. |
+| `polling_interval` | How long, in minutes, between polling for modified/added blobs. Default value: 5 minutes. Only supported for datastore schedules. |
+| `data_path_parameter_name` | The name of the data path pipeline parameter to set with the changed blob path. Only supported for datastore schedules. |
+| `continue_on_step_failure` | Whether to continue execution of other steps in the submitted PipelineRun if a step fails. If provided, will override the `continue_on_step_failure` setting of the pipeline. |
+| `path_on_datastore` | Optional. The path on the datastore to monitor for modified/added blobs. The path is under the container for the datastore, so the actual path the schedule monitors is container/`path_on_datastore`. If none, the datastore container is monitored. Additions/modifications made in a subfolder of the `path_on_datastore` are not monitored. Only supported for datastore schedules. |
+
+The following example contains the definition for a datastore-triggered schedule:
+
+```yaml
+Schedule:
+ description: "Test create with datastore"
+ recurrence: ~
+ pipeline_parameters: {}
+ wait_for_provisioning: True
+ wait_timeout: 3600
+ datastore_name: "workspaceblobstore"
+ polling_interval: 5
+ data_path_parameter_name: "input_data"
+ continue_on_step_failure: None
+ path_on_datastore: "file/path"
+```
+
+When defining a **recurring schedule**, use the following keys under `recurrence`:
+
+| YAML key | Description |
+| -- | -- |
+| `frequency` | How often the schedule recurs. Valid values are `"Minute"`, `"Hour"`, `"Day"`, `"Week"`, or `"Month"`. |
+| `interval` | How often the schedule fires. The integer value is the number of time units to wait until the schedule fires again. |
+| `start_time` | The start time for the schedule. The string format of the value is `YYYY-MM-DDThh:mm:ss`. If no start time is provided, the first workload is run instantly and future workloads are run based on the schedule. If the start time is in the past, the first workload is run at the next calculated run time. |
+| `time_zone` | The time zone for the start time. If no time zone is provided, UTC is used. |
+| `hours` | If `frequency` is `"Day"` or `"Week"`, you can specify one or more integers from 0 to 23, separated by commas, as the hours of the day when the pipeline should run. Only `time_of_day` or `hours` and `minutes` can be used. |
+| `minutes` | If `frequency` is `"Day"` or `"Week"`, you can specify one or more integers from 0 to 59, separated by commas, as the minutes of the hour when the pipeline should run. Only `time_of_day` or `hours` and `minutes` can be used. |
+| `time_of_day` | If `frequency` is `"Day"` or `"Week"`, you can specify a time of day for the schedule to run. The string format of the value is `hh:mm`. Only `time_of_day` or `hours` and `minutes` can be used. |
+| `week_days` | If `frequency` is `"Week"`, you can specify one or more days, separated by commas, when the schedule should run. Valid values are `"Monday"`, `"Tuesday"`, `"Wednesday"`, `"Thursday"`, `"Friday"`, `"Saturday"`, and `"Sunday"`. |
+
+The following example contains the definition for a recurring schedule:
+
+```yaml
+Schedule:
+ description: "Test create with recurrence"
+ recurrence:
+ frequency: Week # Can be "Minute", "Hour", "Day", "Week", or "Month".
+ interval: 1 # how often fires
+ start_time: 2019-06-07T10:50:00
+ time_zone: UTC
+ hours:
+ - 1
+ minutes:
+ - 0
+ time_of_day: null
+ week_days:
+ - Friday
+ pipeline_parameters:
+ 'a': 1
+ wait_for_provisioning: True
+ wait_timeout: 3600
+ datastore_name: ~
+ polling_interval: ~
+ data_path_parameter_name: ~
+ continue_on_step_failure: None
+ path_on_datastore: ~
+```
+
+## Next steps
+
+Learn how to [use the CLI extension for Azure Machine Learning](reference-azure-machine-learning-cli.md).
machine-learning Tutorial Auto Train Image Models V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/tutorial-auto-train-image-models-v1.md
+
+ Title: 'Tutorial: AutoML- train object detection model (v1)'
+
+description: Train an object detection model to identify if an image contains certain objects with automated ML and the Azure Machine Learning Python SDK automated ML. (v1)
+++++++ Last updated : 10/06/2021+++
+# Tutorial: Train an object detection model (preview) with AutoML and Python (v1)
+
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
+> * [v1](tutorial-auto-train-image-models-v1.md)
+> * [v2 (current version)](../tutorial-auto-train-image-models.md)
+
++
+>[!IMPORTANT]
+> The features presented in this article are in preview. They should be considered [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview features that might change at any time.
+
+In this tutorial, you learn how to train an object detection model using Azure Machine Learning automated ML with the Azure Machine Learning Python SDK. This object detection model identifies whether the image contains objects, such as a can, carton, milk bottle, or water bottle.
+
+Automated ML accepts training data and configuration settings, and automatically iterates through combinations of different feature normalization/standardization methods, models, and hyperparameter settings to arrive at the best model.
++
+You'll write code using the Python SDK in this tutorial and learn the following tasks:
+
+> [!div class="checklist"]
+> * Download and transform data
+> * Train an automated machine learning object detection model
+> * Specify hyperparameter values for your model
+> * Perform a hyperparameter sweep
+> * Deploy your model
+> * Visualize detections
+
+## Prerequisites
+
+* If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version](https://azure.microsoft.com/free/) of Azure Machine Learning today.
+
+* Python 3.6 or 3.7 is supported for this feature.
+
+* Complete the [Quickstart: Get started with Azure Machine Learning](../quickstart-create-resources.md#create-the-workspace) if you don't already have an Azure Machine Learning workspace.
+
+* Download and unzip the [**odFridgeObjects.zip**](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) data file. The dataset is annotated in Pascal VOC format, where each image corresponds to an XML file. Each XML file contains information about where its corresponding image file is located, the bounding boxes, and the object labels. In order to use this data, you first need to convert it to the required JSONL format, as seen in the [Convert the downloaded data to JSONL](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb) section of the notebook. A rough sketch of the JSONL format follows this list.
+
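+The conversion itself is done in the linked notebook. As a rough, hypothetical illustration of what one JSONL annotation line might contain (the field names match what the plotting helpers later in this tutorial read; the datastore path and label value are assumptions):
+
+```python
+import json
+
+# One illustrative annotation: an image URL plus a list of normalized bounding boxes
+annotation = {
+    "image_url": "AmlDatastore://workspaceblobstore/odFridgeObjects/images/31.jpg",
+    "label": [
+        {
+            "label": "milk_bottle",  # class name (assumption)
+            "topX": 0.1,             # normalized left x
+            "topY": 0.2,             # normalized top y
+            "bottomX": 0.8,          # normalized right x
+            "bottomY": 0.9,          # normalized bottom y
+        }
+    ],
+}
+
+# Each line of the .jsonl file is one such JSON object
+with open("train_annotations.jsonl", "a") as fp:
+    fp.write(json.dumps(annotation) + "\n")
+```
+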
+This tutorial is also available in the [azureml-examples repository on GitHub](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml/image-object-detection) if you wish to run it in your own [local environment](../how-to-configure-environment.md#local). To get the required packages:
+* Run `pip install azureml`
+* [Install the full `automl` client](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment)
+
+## Compute target setup
+
+You first need to set up a compute target to use for your automated ML model training. Automated ML models for image tasks require GPU SKUs.
+
+This tutorial uses the NCsv3-series (with V100 GPUs) as this type of compute target leverages multiple GPUs to speed up training. Additionally, you can set up multiple nodes to take advantage of parallelism when tuning hyperparameters for your model.
+
+The following code creates a GPU compute of size Standard_NC24s_v3 with four nodes that are attached to the workspace, `ws`.
+
+> [!WARNING]
+> Ensure your subscription has sufficient quota for the compute target you wish to use.
+
+```python
+from azureml.core.compute import AmlCompute, ComputeTarget
+from azureml.exceptions import ComputeTargetException
+
+cluster_name = "gpu-nc24sv3"
+
+try:
+ compute_target = ComputeTarget(workspace=ws, name=cluster_name)
+ print('Found existing compute target.')
+except ComputeTargetException:
+ print('Creating a new compute target...')
+ compute_config = AmlCompute.provisioning_configuration(vm_size='Standard_NC24s_v3',
+ idle_seconds_before_scaledown=1800,
+ min_nodes=0,
+ max_nodes=4)
+
+ compute_target = ComputeTarget.create(ws, cluster_name, compute_config)
+
+#If no min_node_count is provided, the scale settings are used for the cluster.
+compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)
+```
+
+## Experiment setup
+Next, create an `Experiment` in your workspace to track your model training runs.
+
+```python
+
+from azureml.core import Experiment
+
+experiment_name = 'automl-image-object-detection'
+experiment = Experiment(ws, name=experiment_name)
+```
+
+## Visualize input data
+
+Once you have the input image data prepared in [JSONL](https://jsonlines.org/) (JSON Lines) format, you can visualize the ground truth bounding boxes for an image. To do so, be sure you have `matplotlib` installed.
+
+```
+%pip install --upgrade matplotlib
+```
+```python
+
+%matplotlib inline
+import matplotlib.pyplot as plt
+import matplotlib.image as mpimg
+import matplotlib.patches as patches
+from PIL import Image as pil_image
+import numpy as np
+import json
+import os
+
+def plot_ground_truth_boxes(image_file, ground_truth_boxes):
+ # Display the image
+ plt.figure()
+ img_np = mpimg.imread(image_file)
+ img = pil_image.fromarray(img_np.astype("uint8"), "RGB")
+ img_w, img_h = img.size
+
+ fig,ax = plt.subplots(figsize=(12, 16))
+ ax.imshow(img_np)
+ ax.axis("off")
+
+ label_to_color_mapping = {}
+
+ for gt in ground_truth_boxes:
+ label = gt["label"]
+
+ xmin, ymin, xmax, ymax = gt["topX"], gt["topY"], gt["bottomX"], gt["bottomY"]
+ topleft_x, topleft_y = img_w * xmin, img_h * ymin
+ width, height = img_w * (xmax - xmin), img_h * (ymax - ymin)
+
+ if label in label_to_color_mapping:
+ color = label_to_color_mapping[label]
+ else:
+ # Generate a random color. If you want to use a specific color, you can use something like "red".
+ color = np.random.rand(3)
+ label_to_color_mapping[label] = color
+
+ # Display bounding box
+ rect = patches.Rectangle((topleft_x, topleft_y), width, height,
+ linewidth=2, edgecolor=color, facecolor="none")
+ ax.add_patch(rect)
+
+ # Display label
+ ax.text(topleft_x, topleft_y - 10, label, color=color, fontsize=20)
+
+ plt.show()
+
+def plot_ground_truth_boxes_jsonl(image_file, jsonl_file):
+ image_base_name = os.path.basename(image_file)
+ ground_truth_data_found = False
+ with open(jsonl_file) as fp:
+ for line in fp.readlines():
+ line_json = json.loads(line)
+ filename = line_json["image_url"]
+ if image_base_name in filename:
+ ground_truth_data_found = True
+ plot_ground_truth_boxes(image_file, line_json["label"])
+ break
+ if not ground_truth_data_found:
+ print("Unable to find ground truth information for image: {}".format(image_file))
+
+def plot_ground_truth_boxes_dataset(image_file, dataset_pd):
+ image_base_name = os.path.basename(image_file)
+ image_pd = dataset_pd[dataset_pd['portable_path'].str.contains(image_base_name)]
+ if not image_pd.empty:
+ ground_truth_boxes = image_pd.iloc[0]["label"]
+ plot_ground_truth_boxes(image_file, ground_truth_boxes)
+ else:
+ print("Unable to find ground truth information for image: {}".format(image_file))
+```
+
+Using the above helper functions, for any given image, you can run the following code to display the bounding boxes.
+
+```python
+image_file = "./odFridgeObjects/images/31.jpg"
+jsonl_file = "./odFridgeObjects/train_annotations.jsonl"
+
+plot_ground_truth_boxes_jsonl(image_file, jsonl_file)
+```
+
+## Upload data and create dataset
+
+In order to use the data for training, upload it to your workspace via a datastore. The datastore provides a mechanism for you to upload or download data, and interact with it from your remote compute targets.
+
+```python
+ds = ws.get_default_datastore()
+ds.upload(src_dir='./odFridgeObjects', target_path='odFridgeObjects')
+```
+
+Once uploaded to the datastore, you can create an Azure Machine Learning dataset from the data. Datasets package your data into a consumable object for training.
+
+The following code creates a dataset for training. Since no validation dataset is specified, by default 20% of your training data is used for validation.
+
+``` python
+from azureml.core import Dataset
+from azureml.data import DataType
+
+training_dataset_name = 'odFridgeObjectsTrainingDataset'
+if training_dataset_name in ws.datasets:
+ training_dataset = ws.datasets.get(training_dataset_name)
+ print('Found the training dataset', training_dataset_name)
+else:
+ # create training dataset
+ training_dataset = Dataset.Tabular.from_json_lines_files(
+ path=ds.path('odFridgeObjects/train_annotations.jsonl'),
+ set_column_types={"image_url": DataType.to_stream(ds.workspace)},
+ )
+ training_dataset = training_dataset.register(workspace=ws, name=training_dataset_name)
+
+print("Training dataset name: " + training_dataset.name)
+```
+
+### Visualize dataset
+
+You can also visualize the ground truth bounding boxes for an image from this dataset.
+
+Load the dataset into a pandas dataframe.
+
+```python
+import azureml.dataprep as dprep
+
+from azureml.dataprep.api.functions import get_portable_path
+
+# Get pandas dataframe from the dataset
+dflow = training_dataset._dataflow.add_column(get_portable_path(dprep.col("image_url")),
+ "portable_path", "image_url")
+dataset_pd = dflow.to_pandas_dataframe(extended_types=True)
+```
+
+For any given image, you can run the following code to display the bounding boxes.
+
+```python
+image_file = "./odFridgeObjects/images/31.jpg"
+plot_ground_truth_boxes_dataset(image_file, dataset_pd)
+```
+
+## Configure your object detection experiment
+
+To configure automated ML runs for image-related tasks, use the `AutoMLImageConfig` object. In your `AutoMLImageConfig`, you can specify the model algorithms with the `model_name` parameter and configure the settings to perform a hyperparameter sweep over a defined parameter space to find the optimal model.
+
+In this example, we use the `AutoMLImageConfig` to train an object detection model with `yolov5` and `fasterrcnn_resnet50_fpn`, both of which are pretrained on COCO, a large-scale object detection, segmentation, and captioning dataset that contains thousands of labeled images with more than 80 label categories.
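+
+For instance, a minimal configuration that trains a single algorithm with its default hyperparameters might look like the following sketch (the `iterations` value and the choice of `yolov5` here are illustrative; the full sweep this tutorial actually submits is defined in the next section):
+
+```python
+from azureml.train.automl import AutoMLImageConfig
+from azureml.train.hyperdrive import GridParameterSampling, choice
+
+# Sketch: train only yolov5 with its default hyperparameters (not the configuration submitted later)
+minimal_image_config = AutoMLImageConfig(
+    task='image-object-detection',
+    compute_target=compute_target,
+    training_data=training_dataset,
+    hyperparameter_sampling=GridParameterSampling({'model_name': choice('yolov5')}),
+    iterations=1,
+)
+```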
+
+### Hyperparameter sweeping for image tasks
+
+You can perform a hyperparameter sweep over a defined parameter space to find the optimal model.
+
+The following code defines the parameter space in preparation for the hyperparameter sweep for each defined algorithm, `yolov5` and `fasterrcnn_resnet50_fpn`. In the parameter space, specify the range of values for `learning_rate`, `optimizer`, `lr_scheduler`, and so on, for AutoML to choose from as it attempts to generate a model with the optimal primary metric. If hyperparameter values aren't specified, then default values are used for each algorithm.
+
+For the tuning settings, use random sampling to pick samples from this parameter space with the `RandomParameterSampling` class (`GridParameterSampling` and `BayesianParameterSampling` are also available). Doing so tells automated ML to try a total of 20 iterations with these different samples, running four iterations at a time on our compute target, which was set up with four nodes. The more parameters the space has, the more iterations you need to find optimal models.
+
+The Bandit early termination policy is also used. This policy terminates poorly performing configurations, that is, configurations that aren't within 20% slack of the best-performing configuration, which significantly saves compute resources.
+
+```python
+from azureml.train.hyperdrive import RandomParameterSampling
+from azureml.train.hyperdrive import BanditPolicy, HyperDriveConfig
+from azureml.train.hyperdrive import choice, uniform
+
+parameter_space = {
+ 'model': choice(
+ {
+ 'model_name': choice('yolov5'),
+ 'learning_rate': uniform(0.0001, 0.01),
+ #'model_size': choice('small', 'medium'), # model-specific
+ 'img_size': choice(640, 704, 768), # model-specific
+ },
+ {
+ 'model_name': choice('fasterrcnn_resnet50_fpn'),
+ 'learning_rate': uniform(0.0001, 0.001),
+ #'warmup_cosine_lr_warmup_epochs': choice(0, 3),
+ 'optimizer': choice('sgd', 'adam', 'adamw'),
+ 'min_size': choice(600, 800), # model-specific
+ }
+ )
+}
+
+tuning_settings = {
+ 'iterations': 20,
+ 'max_concurrent_iterations': 4,
+ 'hyperparameter_sampling': RandomParameterSampling(parameter_space),
+ 'policy': BanditPolicy(evaluation_interval=2, slack_factor=0.2, delay_evaluation=6)
+}
+```
+
+Once the parameter space and tuning settings are defined, you can pass them into your `AutoMLImageConfig` object and then submit the experiment to train an image model using your training dataset. In the following code, `validation_dataset` is an optional validation dataset created in the same way as `training_dataset`; if you don't have one, omit the `validation_data` parameter.
+
+```python
+from azureml.train.automl import AutoMLImageConfig
+automl_image_config = AutoMLImageConfig(task='image-object-detection',
+ compute_target=compute_target,
+ training_data=training_dataset,
+ validation_data=validation_dataset,
+ primary_metric='mean_average_precision',
+ **tuning_settings)
+
+automl_image_run = experiment.submit(automl_image_config)
+automl_image_run.wait_for_completion(wait_post_processing=True)
+```
+
+When doing a hyperparameter sweep, it can be useful to visualize the different configurations that were tried using the HyperDrive UI. You can navigate to this UI from the 'Child runs' tab of the main `automl_image_run` above, which leads to the HyperDrive parent run; from there, open its 'Child runs' tab. Alternatively, you can get the HyperDrive parent run directly and navigate to its 'Child runs' tab:
+
+```python
+from azureml.core import Run
+hyperdrive_run = Run(experiment=experiment, run_id=automl_image_run.id + '_HD')
+hyperdrive_run
+```
+
+## Register the best model
+
+Once the run completes, we can register the model that was created from the best run.
+
+```python
+best_child_run = automl_image_run.get_best_child()
+model_name = best_child_run.properties['model_name']
+model = best_child_run.register_model(model_name = model_name, model_path='outputs/model.pt')
+```
+
+## Deploy model as a web service
+
+Once you have your trained model, you can deploy the model on Azure. You can deploy your trained model as a web service on Azure Container Instances (ACI) or Azure Kubernetes Service (AKS). ACI is the perfect option for testing deployments, while AKS is better suited for high-scale, production usage.
+
+In this tutorial, we deploy the model as a web service in AKS.
+
+1. Create an AKS compute cluster. In this example, a GPU virtual machine SKU is used for the deployment cluster.
+
+ ```python
+ from azureml.core.compute import ComputeTarget, AksCompute
+ from azureml.exceptions import ComputeTargetException
+
+ # Choose a name for your cluster
+ aks_name = "cluster-aks-gpu"
+
+ # Check to see if the cluster already exists
+ try:
+ aks_target = ComputeTarget(workspace=ws, name=aks_name)
+ print('Found existing compute target')
+ except ComputeTargetException:
+ print('Creating a new compute target...')
+ # Provision AKS cluster with GPU machine
+ prov_config = AksCompute.provisioning_configuration(vm_size="STANDARD_NC6",
+ location="eastus2")
+ # Create the cluster
+ aks_target = ComputeTarget.create(workspace=ws,
+ name=aks_name,
+ provisioning_configuration=prov_config)
+ aks_target.wait_for_completion(show_output=True)
+ ```
+
+1. Define the inference configuration that describes how to set up the web service that contains your model. You can use the scoring script and the environment from the training run in your inference config.
+
+ > [!NOTE]
+ > To change the model's settings, open the downloaded scoring script and modify the model_settings variable before deploying the model.
+
+ ```python
+ from azureml.core.model import InferenceConfig
+
+ best_child_run.download_file('outputs/scoring_file_v_1_0_0.py', output_file_path='score.py')
+ environment = best_child_run.get_environment()
+ inference_config = InferenceConfig(entry_script='score.py', environment=environment)
+ ```
+
+1. You can then deploy the model as an AKS web service.
+
+ ```python
+
+ from azureml.core.webservice import AksWebservice
+ from azureml.core.webservice import Webservice
+ from azureml.core.model import Model
+ from azureml.core.environment import Environment
+
+ aks_config = AksWebservice.deploy_configuration(autoscale_enabled=True,
+ cpu_cores=1,
+ memory_gb=50,
+ enable_app_insights=True)
+
+ aks_service = Model.deploy(ws,
+ models=[model],
+ inference_config=inference_config,
+ deployment_config=aks_config,
+ deployment_target=aks_target,
+ name='automl-image-test',
+ overwrite=True)
+ aks_service.wait_for_deployment(show_output=True)
+ print(aks_service.state)
+ ```
+
+## Test the web service
+
+You can test the deployed web service to predict new images. For this tutorial, take a random image from the dataset and pass it to the scoring URI.
+
+```python
+import requests
+
+# URL for the web service
+scoring_uri = aks_service.scoring_uri
+
+# If the service is authenticated, set the key or token
+key, _ = aks_service.get_keys()
+
+sample_image = './test_image.jpg'
+
+# Load image data
+data = open(sample_image, 'rb').read()
+
+# Set the content type
+headers = {'Content-Type': 'application/octet-stream'}
+
+# If authentication is enabled, set the authorization header
+headers['Authorization'] = f'Bearer {key}'
+
+# Make the request and display the response
+resp = requests.post(scoring_uri, data, headers=headers)
+print(resp.text)
+```
+## Visualize detections
+Now that you have scored a test image, you can visualize the bounding boxes for this image. To do so, be sure you have matplotlib installed.
+
+```
+%pip install --upgrade matplotlib
+```
+
+```python
+%matplotlib inline
+import matplotlib.pyplot as plt
+import matplotlib.image as mpimg
+import matplotlib.patches as patches
+from PIL import Image
+import numpy as np
+import json
+
+IMAGE_SIZE = (18,12)
+plt.figure(figsize=IMAGE_SIZE)
+img_np=mpimg.imread(sample_image)
+img = Image.fromarray(img_np.astype('uint8'),'RGB')
+x, y = img.size
+
+fig,ax = plt.subplots(1, figsize=(15,15))
+# Display the image
+ax.imshow(img_np)
+
+# draw box and label for each detection
+detections = json.loads(resp.text)
+for detect in detections['boxes']:
+ label = detect['label']
+ box = detect['box']
+ conf_score = detect['score']
+ if conf_score > 0.6:
+ ymin, xmin, ymax, xmax = box['topY'],box['topX'], box['bottomY'],box['bottomX']
+ topleft_x, topleft_y = x * xmin, y * ymin
+ width, height = x * (xmax - xmin), y * (ymax - ymin)
+ print('{}: [{}, {}, {}, {}], {}'.format(detect['label'], round(topleft_x, 3),
+ round(topleft_y, 3), round(width, 3),
+ round(height, 3), round(conf_score, 3)))
+
+ color = np.random.rand(3) #'red'
+ rect = patches.Rectangle((topleft_x, topleft_y), width, height,
+ linewidth=3, edgecolor=color,facecolor='none')
+
+ ax.add_patch(rect)
+ plt.text(topleft_x, topleft_y - 10, label, color=color, fontsize=20)
+
+plt.show()
+```
+
+## Clean up resources
+
+Do not complete this section if you plan on running other Azure Machine Learning tutorials.
+
+If you don't plan to use the resources you created, delete them, so you don't incur any charges.
+
+1. In the Azure portal, select **Resource groups** on the far left.
+1. From the list, select the resource group you created.
+1. Select **Delete resource group**.
+1. Enter the resource group name. Then select **Delete**.
+
+You can also keep the resource group but delete a single workspace. Display the workspace properties and select **Delete**.
+
+## Next steps
+
+In this automated machine learning tutorial, you did the following tasks:
+
+> [!div class="checklist"]
+> * Configured a workspace and prepared data for an experiment.
+> * Trained an automated object detection model
+> * Specified hyperparameter values for your model
+> * Performed a hyperparameter sweep
+> * Deployed your model
+> * Visualized detections
+
+* [Learn more about computer vision in automated ML (preview)](../concept-automated-ml.md#computer-vision-preview).
+* [Learn how to set up AutoML to train computer vision models with Python (preview)](../how-to-auto-train-image-models.md).
+* [Learn how to configure incremental training on computer vision models](../how-to-auto-train-image-models.md#incremental-training-optional).
+* See [what hyperparameters are available for computer vision tasks](../reference-automl-images-hyperparameters.md).
+* Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml). Check the folders with the 'image-' prefix for samples specific to building computer vision models.
+
+> [!NOTE]
+> The fridge objects dataset is available under the [MIT License](https://github.com/microsoft/computervision-recipes/blob/master/LICENSE).
machine-learning Tutorial Pipeline Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/tutorial-pipeline-python-sdk.md
+
+ Title: 'Tutorial: ML pipelines for training'
+
+description: In this tutorial, you build a machine learning pipeline for image classification with SDK v1. Focus on machine learning instead of infrastructure and automation.
++++++ Last updated : 01/28/2022+++
+# Tutorial: Build an Azure Machine Learning pipeline for image classification
++
+> [!NOTE]
+> For a tutorial that uses SDK v2 to build a pipeline, see [Tutorial: Use ML pipelines for production ML workflows with Python SDK v2 (preview) in a Jupyter Notebook](../tutorial-pipeline-python-sdk.md).
+
+In this tutorial, you learn how to build an [Azure Machine Learning pipeline](../concept-ml-pipelines.md) to prepare data and train a machine learning model. Machine learning pipelines optimize your workflow with speed, portability, and reuse, so you can focus on machine learning instead of infrastructure and automation.
+
+The example trains a small [Keras](https://keras.io/) convolutional neural network to classify images in the [Fashion MNIST](https://github.com/zalandoresearch/fashion-mnist) dataset.
+
+In this tutorial, you complete the following tasks:
+
+> [!div class="checklist"]
+> * Configure workspace
+> * Create an Experiment to hold your work
+> * Provision a ComputeTarget to do the work
+> * Create a Dataset in which to store compressed data
+> * Create a pipeline step to prepare the data for training
+> * Define a runtime Environment in which to perform training
+> * Create a pipeline step to define the neural network and perform the training
+> * Compose a Pipeline from the pipeline steps
+> * Run the pipeline in the experiment
+> * Review the output of the steps and the trained neural network
+> * Register the model for further use
+
+If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
+
+## Prerequisites
+
+* Complete the [Quickstart: Get started with Azure Machine Learning](../quickstart-create-resources.md) if you don't already have an Azure Machine Learning workspace.
+* A Python environment in which you've installed both the `azureml-core` and `azureml-pipeline` packages. This environment is for defining and controlling your Azure Machine Learning resources and is separate from the environment used at runtime for training.
+
+> [!Important]
+> Currently, the most recent Python release compatible with `azureml-pipeline` is Python 3.8. If you have difficulty installing the `azureml-pipeline` package, ensure that `python --version` reports a compatible release. Consult the documentation of your Python virtual environment manager (`venv`, `conda`, and so on) for instructions.
+
+## Start an interactive Python session
+
+This tutorial uses the Python SDK for Azure ML to create and control an Azure Machine Learning pipeline. The tutorial assumes that you'll be running the code snippets interactively in either a Python REPL environment or a Jupyter notebook.
+
+* This tutorial is based on the `image-classification.ipynb` notebook found in the `python-sdk/tutorials/using-pipelines` directory of the [AzureML Examples](https://github.com/azure/azureml-examples) repository. The source code for the steps themselves is in the `keras-mnist-fashion` subdirectory.
++
+## Import types
+
+Import all the Azure Machine Learning types that you'll need for this tutorial:
+
+```python
+import os
+import azureml.core
+from azureml.core import (
+ Workspace,
+ Experiment,
+ Dataset,
+ Datastore,
+ ComputeTarget,
+ Environment,
+ ScriptRunConfig
+)
+from azureml.data import OutputFileDatasetConfig
+from azureml.core.compute import AmlCompute
+from azureml.core.compute_target import ComputeTargetException
+from azureml.pipeline.steps import PythonScriptStep
+from azureml.pipeline.core import Pipeline
+
+# check core SDK version number
+print("Azure ML SDK Version: ", azureml.core.VERSION)
+```
+
+The Azure ML SDK version should be 1.37 or greater. If it isn't, upgrade with `pip install --upgrade azureml-core`.
+
+## Configure workspace
+
+Create a workspace object from the existing Azure Machine Learning workspace.
+
+```python
+workspace = Workspace.from_config()
+```
+
+> [!IMPORTANT]
+> This code snippet expects the workspace configuration to be saved in the current directory or its parent. For more information on creating a workspace, see [Create and manage Azure Machine Learning workspaces](../how-to-manage-workspace.md). For more information on saving the configuration to file, see [Create a workspace configuration file](../how-to-configure-environment.md#workspace).
+
+## Create the infrastructure for your pipeline
+
+Create an `Experiment` object to hold the results of your pipeline runs:
+
+```python
+exp = Experiment(workspace=workspace, name="keras-mnist-fashion")
+```
+
+Create a `ComputeTarget` that represents the machine resource on which your pipeline will run. The simple neural network used in this tutorial trains in just a few minutes even on a CPU-based machine. If you wish to use a GPU for training, set `use_gpu` to `True`. Provisioning a compute target generally takes about five minutes.
+
+```python
+use_gpu = False
+
+# choose a name for your cluster
+cluster_name = "gpu-cluster" if use_gpu else "cpu-cluster"
+
+found = False
+# Check if this compute target already exists in the workspace.
+cts = workspace.compute_targets
+if cluster_name in cts and cts[cluster_name].type == "AmlCompute":
+ found = True
+ print("Found existing compute target.")
+ compute_target = cts[cluster_name]
+if not found:
+ print("Creating a new compute target...")
+ compute_config = AmlCompute.provisioning_configuration(
+ vm_size= "STANDARD_NC6" if use_gpu else "STANDARD_D2_V2"
+ # vm_priority = 'lowpriority', # optional
+ max_nodes=4,
+ )
+
+ # Create the cluster.
+ compute_target = ComputeTarget.create(workspace, cluster_name, compute_config)
+
+ # Can poll for a minimum number of nodes and for a specific timeout.
+ # If no min_node_count is provided, it will use the scale settings for the cluster.
+ compute_target.wait_for_completion(
+ show_output=True, min_node_count=None, timeout_in_minutes=10
+ )
+# For a more detailed view of current AmlCompute status, use get_status().print(compute_target.get_status().serialize())
+```
+
+> [!Note]
+> GPU availability depends on the quota of your Azure subscription and upon Azure capacity. See [Manage and increase quotas for resources with Azure Machine Learning](../how-to-manage-quotas.md).
+
+### Create a dataset for the Azure-stored data
+
+Fashion-MNIST is a dataset of fashion images divided into 10 classes. Each image is a 28x28 grayscale image and there are 60,000 training and 10,000 test images. As an image classification problem, Fashion-MNIST is harder than the classic MNIST handwritten digit database. It's distributed in the same compressed binary form as the original [handwritten digit database](http://yann.lecun.com/exdb/mnist/).
+
+To create a `Dataset` that references the Web-based data, run:
+
+```python
+data_urls = ["https://data4mldemo6150520719.blob.core.windows.net/demo/mnist-fashion"]
+fashion_ds = Dataset.File.from_files(data_urls)
+
+# list the files referenced by fashion_ds
+print(fashion_ds.to_path())
+```
+
+This code completes quickly. The underlying data remains in the Azure storage resource specified in the `data_urls` array.
+
+## Create the data-preparation pipeline step
+
+The first step in this pipeline will convert the compressed data files of `fashion_ds` into a dataset in your own workspace consisting of CSV files ready for use in training. Once registered with the workspace, your collaborators can access this data for their own analysis, training, and so on.
+
+```python
+datastore = workspace.get_default_datastore()
+prepared_fashion_ds = OutputFileDatasetConfig(
+ destination=(datastore, "outputdataset/{run-id}")
+).register_on_complete(name="prepared_fashion_ds")
+```
+
+The above code specifies a dataset that is based on the output of a pipeline step. The underlying processed files will be put in the workspace's default datastore's blob storage at the path specified in `destination`. The dataset will be registered in the workspace with the name `prepared_fashion_ds`.
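+
+Once a pipeline run has completed and the output dataset has been registered, a collaborator could retrieve it by name. The following is a small sketch, assuming the registration above has already happened in the same workspace:
+
+```python
+from azureml.core import Dataset, Workspace
+
+workspace = Workspace.from_config()
+
+# Retrieve the latest registered version of the prepared dataset by name
+prepared = Dataset.get_by_name(workspace, name="prepared_fashion_ds")
+print(prepared.name, prepared.version)
+```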
+
+### Create the pipeline step's source
+
+The code that you've executed so far has created and controlled Azure resources. Now it's time to write code that does the first step in the domain.
+
+If you're following along with the example in the [AzureML Examples repo](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/using-pipelines), the source file is already available as `keras-mnist-fashion/prepare.py`.
+
+If you're working from scratch, create a subdirectory called `keras-mnist-fashion/`. Create a new file, add the following code to it, and name the file `prepare.py`.
+
+```python
+# prepare.py
+# Converts MNIST-formatted files at the passed-in input path to a passed-in output path
+import os
+import sys
+
+# Conversion routine for MNIST binary format
+def convert(imgf, labelf, outf, n):
+ f = open(imgf, "rb")
+ l = open(labelf, "rb")
+ o = open(outf, "w")
+
+ f.read(16)
+ l.read(8)
+ images = []
+
+ for i in range(n):
+ image = [ord(l.read(1))]
+ for j in range(28 * 28):
+ image.append(ord(f.read(1)))
+ images.append(image)
+
+ for image in images:
+ o.write(",".join(str(pix) for pix in image) + "\n")
+ f.close()
+ o.close()
+ l.close()
+
+# The MNIST-formatted source
+mounted_input_path = sys.argv[1]
+# The output directory at which the outputs will be written
+mounted_output_path = sys.argv[2]
+
+# Create the output directory
+os.makedirs(mounted_output_path, exist_ok=True)
+
+# Convert the training data
+convert(
+ os.path.join(mounted_input_path, "mnist-fashion/train-images-idx3-ubyte"),
+ os.path.join(mounted_input_path, "mnist-fashion/train-labels-idx1-ubyte"),
+ os.path.join(mounted_output_path, "mnist_train.csv"),
+ 60000,
+)
+
+# Convert the test data
+convert(
+ os.path.join(mounted_input_path, "mnist-fashion/t10k-images-idx3-ubyte"),
+ os.path.join(mounted_input_path, "mnist-fashion/t10k-labels-idx1-ubyte"),
+ os.path.join(mounted_output_path, "mnist_test.csv"),
+ 10000,
+)
+```
+
+The code in `prepare.py` takes two command-line arguments: the first is assigned to `mounted_input_path` and the second to `mounted_output_path`. If the output directory doesn't exist, the call to `os.makedirs` creates it. Then, the program converts the training and testing data and outputs the comma-separated files to the `mounted_output_path`.
+
+### Specify the pipeline step
+
+Back in the Python environment you're using to specify the pipeline, run this code to create a `PythonScriptStep` for your preparation code:
+
+```python
+script_folder = "./keras-mnist-fashion"
+
+prep_step = PythonScriptStep(
+ name="prepare step",
+ script_name="prepare.py",
+ # On the compute target, mount fashion_ds dataset as input, prepared_fashion_ds as output
+ arguments=[fashion_ds.as_named_input("fashion_ds").as_mount(), prepared_fashion_ds],
+ source_directory=script_folder,
+ compute_target=compute_target,
+ allow_reuse=True,
+)
+```
+
+The call to `PythonScriptStep` specifies that, when the pipeline step is run:
+
+* All the files in the `script_folder` directory are uploaded to the `compute_target`
+* Among those uploaded source files, the file `prepare.py` will be run
+* The `fashion_ds` and `prepared_fashion_ds` datasets will be mounted on the `compute_target` and appear as directories
+* The path to the `fashion_ds` files will be the first argument to `prepare.py`. In `prepare.py`, this argument is assigned to `mounted_input_path`
+* The path to the `prepared_fashion_ds` will be the second argument to `prepare.py`. In `prepare.py`, this argument is assigned to `mounted_output_path`
+* Because `allow_reuse` is `True`, it won't be rerun until its source files or inputs change
+* This `PythonScriptStep` will be named `prepare step`
+
+Modularity and reuse are key benefits of pipelines. Azure Machine Learning can automatically determine source code or Dataset changes. The output of a step that isn't affected will be reused without rerunning the step if `allow_reuse` is `True`. If a step relies on a data source external to Azure Machine Learning that may change (for instance, a URL that contains sales data), set `allow_reuse` to `False` and the pipeline step will run every time the pipeline is run.
+
+## Create the training step
+
+Once the data has been converted from the compressed format to CSV files, it can be used for training a convolutional neural network.
+
+### Create the training step's source
+
+With larger pipelines, it's a good practice to put each step's source code in a separate directory (`src/prepare/`, `src/train/`, and so on) but for this tutorial, just use or create the file `train.py` in the same `keras-mnist-fashion/` source directory.
++
+Most of this code should be familiar to ML developers:
+
+* The data is partitioned into train and validation sets for training, and a separate test subset for final scoring
+* The input shape is 28x28x1 (only 1 because the input is grayscale), there will be 256 inputs in a batch, and there are 10 classes
+* The number of training epochs will be 10
+* The model has three convolutional layers, with max pooling and dropout, followed by a dense layer and softmax head
+* The model is fitted for 10 epochs and then evaluated
+* The model architecture is written to `outputs/model/model.json` and the weights to `outputs/model/model.h5`
+
+Some of the code, though, is specific to Azure Machine Learning. `run = Run.get_context()` retrieves a [`Run`](/python/api/azureml-core/azureml.core.run(class)?view=azure-ml-py&preserve-view=True) object, which contains the current service context. The `train.py` source uses this `run` object to retrieve the input dataset via its name (an alternative to the code in `prepare.py` that retrieved the dataset via the `argv` array of script arguments).
+
+The `run` object is also used to log the training progress at the end of every epoch and, at the end of training, to log the graph of loss and accuracy over time.
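+
+The full `train.py` is in the `keras-mnist-fashion` directory of the examples repository. As a hedged sketch of just the Azure Machine Learning-specific parts described above (the metric name and the DataFrame variable are illustrative, not the script's exact code):
+
+```python
+# Sketch of the AzureML-specific pieces of train.py, not the full training script
+from azureml.core import Run
+
+run = Run.get_context()
+
+# The input dataset is available under the name given to as_input(...) in the pipeline step
+prepared_ds = run.input_datasets["prepared_fashion_ds"]
+train_df = prepared_ds.to_pandas_dataframe()  # delimited files -> pandas DataFrame
+
+# ... build and fit the Keras model on train_df here ...
+
+# Log a scalar metric (for example, at the end of each epoch)
+run.log("training_acc", 0.91)
+
+# At the end of training, a matplotlib figure can be logged with run.log_image(name, plot=fig)
+```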
+
+### Create the training pipeline step
+
+The training step has a slightly more complex configuration than the preparation step. The preparation step used only standard Python libraries. More commonly, you'll need to modify the runtime environment in which your source code runs.
+
+Create a file `conda_dependencies.yml` with the following contents:
+
+```yml
+dependencies:
+- python=3.6.2
+- pip:
+ - azureml-core
+ - azureml-dataset-runtime
+ - keras==2.4.3
+ - tensorflow==2.4.3
+ - numpy
+ - scikit-learn
+ - pandas
+ - matplotlib
+```
+
+The `Environment` class represents the runtime environment in which a machine learning task runs. Associate the above specification with the training code with:
+
+```python
+keras_env = Environment.from_conda_specification(
+ name="keras-env", file_path="./conda_dependencies.yml"
+)
+
+train_cfg = ScriptRunConfig(
+ source_directory=script_folder,
+ script="train.py",
+ compute_target=compute_target,
+ environment=keras_env,
+)
+```
+
+Creating the training step itself uses code similar to the code used to create the preparation step:
+
+```python
+train_step = PythonScriptStep(
+ name="train step",
+ arguments=[
+ prepared_fashion_ds.read_delimited_files().as_input(name="prepared_fashion_ds")
+ ],
+ source_directory=train_cfg.source_directory,
+ script_name=train_cfg.script,
+ runconfig=train_cfg.run_config,
+)
+```
+
+## Create and run the pipeline
+
+Now that you've specified data inputs and outputs and created your pipeline's steps, you can compose them into a pipeline and run it:
+
+```python
+pipeline = Pipeline(workspace, steps=[prep_step, train_step])
+run = exp.submit(pipeline)
+```
+
+The `Pipeline` object you create runs in your `workspace` and is composed of the preparation and training steps you've specified.
+
+> [!Note]
+> This pipeline has a simple dependency graph: the training step relies on the preparation step and the preparation step relies on the `fashion_ds` dataset. Production pipelines will often have much more complex dependencies. Steps may rely on multiple upstream steps, a source code change in an early step may have far-reaching consequences, and so on. Azure Machine Learning tracks these concerns for you. You need only pass in the array of `steps` and Azure Machine Learning takes care of calculating the execution graph.
+
+The call to `submit` the `Experiment` completes quickly, and produces output similar to:
+
+```dotnetcli
+Submitted PipelineRun 5968530a-abcd-1234-9cc1-46168951b5eb
+Link to Azure Machine Learning Portal: https://ml.azure.com/runs/abc-xyz...
+```
+
+You can monitor the pipeline run by opening the link or you can block until it completes by running:
+
+```python
+run.wait_for_completion(show_output=True)
+```
+
+> [!IMPORTANT]
+> The first pipeline run takes roughly *15 minutes*. All dependencies must be downloaded, a Docker image is created, and the Python environment is provisioned and created. Running the pipeline again takes significantly less time because those resources are reused instead of created. However, total run time for the pipeline depends on the workload of your scripts and the processes that are running in each pipeline step.
+
+Once the pipeline completes, you can retrieve the metrics you logged in the training step:
+
+```python
+run.find_step_run("train step")[0].get_metrics()
+```
+
+If you're satisfied with the metrics, you can register the model in your workspace:
+
+```python
+run.find_step_run("train step")[0].register_model(
+ model_name="keras-model",
+ model_path="outputs/model/",
+ datasets=[("train test data", fashion_ds)],
+)
+```
+
+## Clean up resources
+
+Don't complete this section if you plan to run other Azure Machine Learning tutorials.
+
+### Stop the compute instance
++
+### Delete everything
+
+If you don't plan to use the resources you created, delete them, so you don't incur any charges:
+
+1. In the Azure portal, in the left menu, select **Resource groups**.
+1. In the list of resource groups, select the resource group you created.
+1. Select **Delete resource group**.
+1. Enter the resource group name. Then, select **Delete**.
+
+You can also keep the resource group but delete a single workspace. Display the workspace properties, and then select **Delete**.
+
+## Next steps
+
+In this tutorial, you used the following types:
+
+* The `Workspace` represents your Azure Machine Learning workspace. It contained:
+ * The `Experiment` that contains the results of training runs of your pipeline
+ * The `Dataset` that lazily loaded the data held in the Fashion-MNIST datastore
+ * The `ComputeTarget` that represents the machine(s) on which the pipeline steps run
+ * The `Environment` that is the runtime environment in which the pipeline steps run
+ * The `Pipeline` that composes the `PythonScriptStep` steps into a whole
+ * The `Model` that you registered after being satisfied with the training process
+
+The `Workspace` object contains references to other resources (notebooks, endpoints, and so on) that weren't used in this tutorial. For more, see [What is an Azure Machine Learning workspace?](../concept-workspace.md).
+
+The `OutputFileDatasetConfig` promotes the output of a run to a file-based dataset. For more information on datasets and working with data, see [How to access data](../how-to-access-data.md).
+
+For more on compute targets and environments, see [What are compute targets in Azure Machine Learning?](../concept-compute-target.md) and [What are Azure Machine Learning environments?](../concept-environments.md)
+
+The `ScriptRunConfig` associates a `ComputeTarget` and `Environment` with Python source files. A `PythonScriptStep` takes that `ScriptRunConfig` and defines its inputs and outputs, which in this pipeline included the file dataset built by the `OutputFileDatasetConfig`.
+
+For more examples of how to build pipelines by using the machine learning SDK, see the [example repository](https://github.com/Azure/azureml-examples).
managed-grafana Quickstart Managed Grafana Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/quickstart-managed-grafana-cli.md
+
+ Title: 'Quickstart: create a workspace in Azure Managed Grafana Preview using the Azure CLI'
+description: Learn how to create a Managed Grafana workspace using the Azure CLI
++++ Last updated : 05/11/2022
+ms.devlang: azurecli
+
+
+# Quickstart: Create a workspace in Azure Managed Grafana Preview using the Azure CLI
+
+This quickstart describes how to use the Azure Command-Line Interface (CLI) to create a new workspace in Azure Managed Grafana Preview.
+
+> [!NOTE]
+> The CLI experience for Azure Managed Grafana Preview is part of the `amg` extension for the Azure CLI (version 2.30.0 or higher). The extension is installed automatically the first time you run an `az grafana` command.
+
+## Prerequisite
+
+An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet).
+
+## Sign in to Azure
+
+Open your CLI and run the `az login` command:
+
+```azurecli
+az login
+```
+
+This command prompts your web browser to launch and load an Azure sign-in page. If the browser fails to open, use the device code flow with `az login --use-device-code`. For more sign-in options, go to [sign in with the Azure CLI](/cli/azure/authenticate-azure-cli).
+
+## Create a resource group
+
+Run the command below to create a resource group to organize the Azure resources needed to complete this quickstart. Skip this step if you already have a resource group you want to use.
+
+| Parameter | Description | Example |
+|--|--|-|
+| --name | Choose a unique name for your new resource group. | *grafana-rg* |
+| --location | Choose an Azure region where Managed Grafana is available. For more info, go to [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=managed-grafana).| *eastus* |
+
+```azurecli
+az group create --location <location> --name <resource-group-name>
+```
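+
+For example, using the sample values from the table above (illustrative only):
+
+```azurecli
+az group create --location eastus --name grafana-rg
+```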
+
+## Create an Azure Managed Grafana workspace
+
+Run the command below to create an Azure Managed Grafana workspace.
+
+| Parameter | Description | Example |
+|--|--|-|
+| --name | Choose a unique name for your new Managed Grafana workspace. | *grafana-test* |
+| --location | Choose an Azure Region where Managed Grafana is available. | *eastus* |
+
+```azurecli
+az grafana create --name <managed-grafana-resource-name> --resource-group <resource-group-name>
+```
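+
+For example, with the sample names shown above (illustrative only):
+
+```azurecli
+az grafana create --name grafana-test --resource-group grafana-rg
+```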
+
+Once the deployment is complete, you'll see a note in the command-line output stating that the instance was successfully created, along with additional information about the deployment.
+
+## Open your new Managed Grafana dashboard
+
+Now let's check if you can access your new Managed Grafana dashboard.
+
+1. Take note of the **endpoint** URL ending in `eus.grafana.azure.com`, listed in the CLI output. If you need to retrieve it later, you can also query it with the CLI, as shown after these steps.
+
+1. Open a browser and enter the endpoint URL. You should now see your Azure Managed Grafana Dashboard. From there, you can finish setting up your Grafana installation.
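+
+A sketch of querying the endpoint again later with the CLI, using the sample names from this quickstart and assuming the `az grafana show` command in the amg extension exposes the endpoint under `properties.endpoint`:
+
+```azurecli
+az grafana show --name grafana-test --resource-group grafana-rg --query properties.endpoint --output tsv
+```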
++
+> [!NOTE]
+> If creating a Grafana workspace fails the first time, please try again. The failure might be due to a limitation in our backend, and we're actively working to fix it.
+
+## Clean up resources
+
+If you're not going to continue to use this workspace, delete the Azure resources you created.
+
+`az group delete -n <resource-group-name> --yes`
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [How to configure data sources for Azure Managed Grafana](./how-to-data-source-plugins-managed-identity.md)
+
managed-instance-apache-cassandra Create Cluster Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/create-cluster-cli.md
This quickstart demonstrates how to use the Azure CLI commands to create a clust
--location $location \ --delegated-management-subnet-id $delegatedManagementSubnetId \ --initial-cassandra-admin-password $initialCassandraAdminPassword \
- -cassandra-version $cassandraVersion \
+ --cassandra-version $cassandraVersion \
--debug ```
managed-instance-apache-cassandra Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/faq.md
# Frequently asked questions about Azure Managed Instance for Apache Cassandra
-This article addresses frequently asked questions about Azure Managed Instance for Apache Cassandra. You will learn when to use managed instances, their benefits, throughput limits, supported regions, and their configuration details.
+This article addresses frequently asked questions about Azure Managed Instance for Apache Cassandra. You'll learn when to use managed instances, their benefits, throughput limits, supported regions, and their configuration details.
## General FAQ
It can be used either entirely in the cloud or as a part of a hybrid cloud and o
### Why should I use this service instead of Azure Cosmos DB Cassandra API?
-Azure Managed Instance for Apache Cassandra is delivered by the Azure Cosmos DB team. It is a standalone managed service for deploying, maintaining, and scaling open-source Apache Cassandra data-centers and clusters. [Azure Cosmos DB Cassandra API](../cosmos-db/cassandra-introduction.md) on the other hand is a Platform-as-a-Service, providing an interoperability layer for the Apache Cassandra wire protocol. If your expectation is for the platform to behave in exactly the same way as any Apache Cassandra cluster, you should choose the managed instance service. To learn more, see the [Azure Managed Instance for Apache Cassandra Vs Azure Cosmos DB Cassandra API](compare-cosmosdb-managed-instance.md) article.
+Azure Managed Instance for Apache Cassandra is delivered by the Azure Cosmos DB team. It's a standalone managed service for deploying, maintaining, and scaling open-source Apache Cassandra data-centers and clusters. [Azure Cosmos DB Cassandra API](../cosmos-db/cassandra-introduction.md) on the other hand is a Platform-as-a-Service, providing an interoperability layer for the Apache Cassandra wire protocol. If your expectation is for the platform to behave in exactly the same way as any Apache Cassandra cluster, you should choose the managed instance service. To learn more, see the [Azure Managed Instance for Apache Cassandra Vs Azure Cosmos DB Cassandra API](compare-cosmosdb-managed-instance.md) article.
### Is Azure Managed Instance for Apache Cassandra dependent on Azure Cosmos DB?
-No, there is no architectural dependency between Azure Managed Instance for Apache Cassandra and the Azure Cosmos DB backend.
+No, there's no architectural dependency between Azure Managed Instance for Apache Cassandra and the Azure Cosmos DB backend.
### Does Azure Managed Instance for Apache Cassandra have an SLA?
These limits depend on the Virtual Machine SKUs you choose.
### How are Cassandra repairs carried out in Azure Managed Instance for Apache Cassandra?
-We use [cassandra-reaper.io](http://cassandra-reaper.io/). It is set up to run automatically for you.
+We use [cassandra-reaper.io](http://cassandra-reaper.io/). It's set up to run automatically for you.
### What is the cost of Azure Managed Instance for Apache Cassandra?
To fix an issue with your account, file a [support request](https://portal.azure
### Will the managed instance support node addition, cluster status, and node status commands?
-All the *read-only* Nodetool commands such as `status` are available through Azure CLI. However, operations such as *node addition* are not available, because we manage the health of nodes in the managed instance. In the Hybrid mode, you can connect to the cluster with *Nodetool*. However, using Nodetool is not recommended, as it could destabilize the cluster. It may also invalidate any production support SLA relating to the health of the managed instance datacenters in the cluster.
+All the *read-only* `nodetool` commands such as `status` are available through Azure CLI. However, operations such as *node addition* aren't available, because we manage the health of nodes in the managed instance. In the Hybrid mode, you can connect to the cluster with *`nodetool`*. However, using `nodetool` isn't recommended, as it could destabilize the cluster. It may also invalidate any production support SLA relating to the health of the managed instance datacenters in the cluster.
### What happens with various settings for table metadata? The settings for table metadata such as bloom filter, caching, read repair chance, gc_grace, and compression memtable_flush_period are fully supported as with any self-hosted Apache Cassandra environment.
+### Can I deploy a managed instance cluster using Terraform?
+
+Yes. You can find a sample for deploying a cluster with a datacenter [here](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/cosmosdb_cassandra_datacenter).
+ ## Next steps To learn about frequently asked questions in other APIs, see:
managed-instance-apache-cassandra Visualize Prometheus Grafana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/visualize-prometheus-grafana.md
The following tasks are required to visualize metrics:
* Deploy an Ubuntu Virtual Machine inside the Azure Virtual Network where the managed instance is present. * Install the [Prometheus Dashboards](https://github.com/datastax/metric-collector-for-apache-cassandra#installing-the-prometheus-dashboards) onto the VM.
+>[!WARNING]
+> Prometheus and Grafana are open-source software and aren't supported as part of the Azure Managed Instance for Apache Cassandra service. Visualizing metrics in the way described below requires you to host and maintain a virtual machine as the server for both Prometheus and Grafana. The instructions below were tested only on Ubuntu Server 18.04; there is no guarantee that they will work with other Linux distributions. Following this approach means you'll need to support any issues that arise, such as running out of space or keeping the server available. For a fully supported and hosted metrics experience, consider using [Azure Monitor metrics](monitor-clusters.md#azure-metrics), or alternatively [Azure Monitor partner integrations](/azure/azure-monitor/partners).
+ ## Deploy an Ubuntu server 1. Sign in to the [Azure portal](https://portal.azure.com/).
marketplace Azure Ad Transactable Saas Landing Page https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-ad-transactable-saas-landing-page.md
Most apps that are registered with Azure AD grant delegated permissions to read
## Next steps - [How to create a SaaS offer in the commercial marketplace](create-new-saas-offer.md)+
+**Video tutorials**
+
+- [Building a Simple SaaS Landing Page in .NET](https://go.microsoft.com/fwlink/?linkid=2196323)
marketplace Create New Saas Offer Plans https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/create-new-saas-offer-plans.md
If you haven't already done so, create a development and test (DEV) offer to tes
- [Sell your SaaS offer](create-new-saas-offer-marketing.md) through the **Co-sell with Microsoft** and **Resell through CSPs** programs. - [Test and publish a SaaS offer](test-publish-saas-offer.md).+
+**Video tutorials**
+
+- [Publishing a Private SaaS plan](https://go.microsoft.com/fwlink/?linkid=2196256)
marketplace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/overview.md
Previously updated : 3/7/2022 Last updated : 5/24/2022 # What is the Microsoft commercial marketplace?
When you create a commercial marketplace offer in Partner Center, it may be list
## Next steps - Get an [Introduction to the Microsoft commercial marketplace](/learn/modules/intro-commercial-marketplace/) on Microsoft Learn.
+- Find videos and hands-on labs at [Mastering the marketplace](https://go.microsoft.com/fwlink/?linkid=2195692).
- For new Microsoft partners who are interested in publishing to the commercial marketplace, see [Create a commercial marketplace account in Partner Center](create-account.md). - To learn more about recent and future releases, join the conversation in the [Microsoft Partner Community](https://www.microsoftpartnercommunity.com/).
marketplace Pc Saas Fulfillment Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/partner-center-portal/pc-saas-fulfillment-apis.md
For more information about CSP, refer to https://partner.microsoft.com/licensing
## Next steps - If you have not already done so, register your SaaS application in the [Azure portal](https://portal.azure.com) as explained in [Register an Azure AD Application](./pc-saas-registration.md). Afterwards, use the most current version of this interface for development: [SaaS fulfillment Subscription APIs v2](pc-saas-fulfillment-subscription-api.md) and [SaaS fulfillment Operations APIs v2](pc-saas-fulfillment-operations-api.md).+
+**Video tutorials**
+
+- [The SaaS Client Library for .NET](https://go.microsoft.com/fwlink/?linkid=2196324)
marketplace Pc Saas Fulfillment Life Cycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/partner-center-portal/pc-saas-fulfillment-life-cycle.md
A SaaS subscription can be canceled at any point in its life cycle. After a subs
- [SaaS fulfillment Subscription APIs v2](pc-saas-fulfillment-subscription-api.md) - [SaaS fulfillment operations APIs v2](pc-saas-fulfillment-operations-api.md)+
+**Video tutorials**
+
+- [Building a Simple SaaS Publisher Portal in .NET](https://go.microsoft.com/fwlink/?linkid=2196257)
+- [Using the SaaS Offer REST Fulfillment API](https://go.microsoft.com/fwlink/?linkid=2196320)
marketplace Pc Saas Fulfillment Webhook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/partner-center-portal/pc-saas-fulfillment-webhook.md
See [Support for the commercial marketplace program in Partner Center](../suppor
See the [commercial marketplace metering service APIs](../marketplace-metering-service-apis.md) for more options for SaaS offers in the commercial marketplace. Review and use the [clients for different programming languages and samples](https://github.com/microsoft/commercial-marketplace-samples).+
+**Video tutorials**
+
+- [SaaS Webhook Overview](https://go.microsoft.com/fwlink/?linkid=2196258)
+- [Implementing a Simple SaaS Webhook in .NET](https://go.microsoft.com/fwlink/?linkid=2196159)
+- [Azure AD Application Registrations](https://go.microsoft.com/fwlink/?linkid=2196262)
marketplace Pc Saas Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/partner-center-portal/pc-saas-registration.md
Sample response:
## Next steps Your Azure AD-secured app can now use the [SaaS Fulfillment Subscription APIs Version 2](pc-saas-fulfillment-subscription-api.md) and [SaaS Fulfillment Operations APIs Version 2](pc-saas-fulfillment-operations-api.md).+
+**Video tutorials**
+
+- [Azure AD Application Registrations](https://go.microsoft.com/fwlink/?linkid=2196262)
marketplace Saas Metered Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/partner-center-portal/saas-metered-billing.md
To understand publisher support options and open a support ticket with Microsoft
## Next steps - [Marketplace metered billing APIs](../marketplace-metering-service-apis.md)+
+**Video tutorials**
+
+- [SaaS Metered Billing Overview](https://go.microsoft.com/fwlink/?linkid=2196314)
+- [The SaaS Metered Billing API with REST](https://go.microsoft.com/fwlink/?linkid=2196418)
marketplace Plan Saas Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/plan-saas-offer.md
You can choose to opt into Microsoft-supported marketing and sales channels. Whe
- [Plan a test SaaS offer](plan-saas-dev-test-offer.md) - [Offer listing best practices](gtm-offer-listing-best-practices.md) - [Create a SaaS offer](create-new-saas-offer.md)+
+**Video tutorials**
+
+- [SaaS offer overview](https://go.microsoft.com/fwlink/?linkid=2196417)
+- [SaaS Offer Technical Overview](https://go.microsoft.com/fwlink/?linkid=2196315)
+- [Publishing a SaaS offer](https://go.microsoft.com/fwlink/?linkid=2196318)
+- [A SaaS Accelerator Hands-on Tour - The Basics](https://go.microsoft.com/fwlink/?linkid=2196164)
+- [SaaS Accelerator Architecture](https://go.microsoft.com/fwlink/?linkid=2196167)
+- [Installing the SaaS Accelerator With the Install Script](https://go.microsoft.com/fwlink/?linkid=2196326)
+- [Invoking Metered Billing with the SaaS Accelerator](https://go.microsoft.com/fwlink/?linkid=2196161)
+- [Configuring Email in the SaaS Accelerator](https://go.microsoft.com/fwlink/?linkid=2196165)
+- [Custom Landing Page Fields with the SaaS Accelerator](https://go.microsoft.com/fwlink/?linkid=2196166)
marketplace Private Plans https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/private-plans.md
Private plans will also appear in search results and can be deployed via command
[![[Private offers appearing in search results.]](media/marketplace-publishers-guide/private-product.png)](media/marketplace-publishers-guide/private-product.png#lightbox)
+## Next steps
+
+**Video tutorials**
+
+- [Publishing a Private SaaS plan](https://go.microsoft.com/fwlink/?linkid=2196256)
+ <! ## Next steps
migrate Migrate Replication Appliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-replication-appliance.md
ms. Previously updated : 01/30/2020 Last updated : 05/17/2022
The replication appliance needs access to these URLs in the Azure public cloud.
https://management.azure.com | Used for replication management operations and coordination. *.services.visualstudio.com | Used for logging purposes. (It is optional) time.windows.com | Used to check time synchronization between system and global time.
-https://login.microsoftonline.com <br> https://secure.aadcdn.microsoftonline-p.com <br> https://login.live.com <br> https://graph.windows.net <br> https://login.windows.net <br> https://www.live.com <br> https://www.microsoft.com | Appliance setup needs access to these URLs. They are used for access control and identity management by Azure Active Directory.
+https://login.microsoftonline.com <br> https://login.live.com <br> https://graph.windows.net <br> https://login.windows.net <br> https://www.live.com <br> https://www.microsoft.com | Appliance setup needs access to these URLs. They are used for access control and identity management by Azure Active Directory.
https://dev.mysql.com/get/Downloads/MySQLInstaller/mysql-installer-community-5.7.20.0.msi | To complete MySQL download. In a few regions, the download might be redirected to the CDN URL. Ensure that the CDN URL is also allowed if needed. ## Azure Government URL access
The replication appliance needs access to these URLs in Azure Government.
https://management.usgovcloudapi.net | Used for replication management operations and coordination *.services.visualstudio.com | Used for logging purposes (It is optional) time.nist.gov | Used to check time synchronization between system and global time.
-https://login.microsoftonline.com <br> https://secure.aadcdn.microsoftonline-p.com <br> https://login.live.com <br> https://graph.windows.net <br> https://login.windows.net <br> https://www.live.com <br> https://www.microsoft.com | Appliance setup with OVA needs access to these URLs. They are used for access control and identity management by Azure Active Directory.
+https://login.microsoftonline.com <br> https://login.live.com <br> https://graph.windows.net <br> https://login.windows.net <br> https://www.live.com <br> https://www.microsoft.com | Appliance setup with OVA needs access to these URLs. They are used for access control and identity management by Azure Active Directory.
https://dev.mysql.com/get/Downloads/MySQLInstaller/mysql-installer-community-5.7.20.0.msi | To complete MySQL download. In a few regions, the download might be redirected to the CDN URL. Ensure that the CDN URL is also allowed if needed. >[!Note]
migrate Tutorial Discover Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-vmware.md
To create a project and register the Azure Migrate appliance, you must have an A
- Contributor or Owner permissions in the Azure subscription. - Permissions to register Azure Active Directory (Azure AD) apps.-- Owner or Contributor and User Access Administrator permissions in the Azure subscription to create an instance of Azure Key Vault, which is used during agentless server migration.
+- Owner or Contributor and User Access Administrator permissions at the subscription level to create an instance of Azure Key Vault, which is used during agentless server migration.
If you created a free Azure account, by default, you're the owner of the Azure subscription. If you're not the subscription owner, work with the owner to assign permissions.
mysql Concept Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concept-servers.md
+ Previously updated : 09/21/2020 Last updated : 05/24/2022 # Server concepts in Azure Database for MySQL Flexible Server
Within an Azure Database for MySQL Flexible Server, you can create one or multip
Azure Database for MySQL Flexible Server gives you the ability to **Stop** the server when not in use and **Start** the server when you resume activity. This is essentially done to save costs on the database servers and only pay for the resource when in use. This becomes even more important for dev-test workloads and when you are only using the server for part of the day. When you stop the server, all active connections will be dropped. Later, when you want to bring the server back online, you can either use the [Azure portal](how-to-stop-start-server-portal.md) or CLI.
-When the server is in the **Stopped** state, the server's compute is not billed. However, storage continues to to be billed as the server's storage remains to ensure that data files are available when the server is started again.
+When the server is in the **Stopped** state, the server's compute is not billed. However, storage continues to be billed as the server's storage remains to ensure that data files are available when the server is started again.
> [!IMPORTANT] > When you **Stop** the server, it remains in that state for up to 30 days. If you do not manually **Start** it during this time, the server will automatically be started at the end of the 30 days. You can choose to **Stop** it again if you are not using the server.
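
As a minimal sketch, stopping and starting a server from the Azure CLI looks like the following (placeholder names):

```azurecli
# Stop the server to pause compute billing; storage continues to be billed.
az mysql flexible-server stop --resource-group <resource-group-name> --name <server-name>

# Start the server again when you resume activity.
az mysql flexible-server start --resource-group <resource-group-name> --name <server-name>
```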
You can manage the creation, deletion, server parameter configuration (my.cnf),
- Learn about [Create Server](./quickstart-create-server-portal.md) - Learn about [Monitoring and Alerts](./how-to-alert-on-metric.md)-
mysql Concepts Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-backup-restore.md
+ Previously updated : 09/21/2020 Last updated : 05/24/2022 # Backup and restore in Azure Database for MySQL Flexible Server
If a scheduled backup fails, our backup service tries every 20 minutes to take a
## Backup redundancy options
-Azure Database for MySQL stores multiple copies of your backups so that your data is protected from planned and unplanned events, including transient hardware failures, network or power outages, and massive natural disasters. Azure Database for MySQL provides the flexibility to choose between locally redundant, zone-redundant or geo-redundant backup storage in Basic, General Purpose and Memory Optimized tiers. By default, Azure Database for MySQL server backup storage is locally redundant for servers with same-zone high availability (HA) or no high availability configuration, and zone redundant for servers with zone-redundant HA configuration.
+Azure Database for MySQL stores multiple copies of your backups so that your data is protected from planned and unplanned events, including transient hardware failures, network or power outages, and massive natural disasters. Azure Database for MySQL provides the flexibility to choose between locally redundant, zone-redundant or geo-redundant backup storage in Basic, General Purpose and Business Critical tiers. By default, Azure Database for MySQL server backup storage is locally redundant for servers with same-zone high availability (HA) or no high availability configuration, and zone redundant for servers with zone-redundant HA configuration.
Backup redundancy ensures that your database meets its availability and durability targets even in the face of failures and Azure Database for MySQL extends three options to users -
You can restore a server to it's [geo-paired region](overview.md#azure-regions)
Geo-restore is the default recovery option when your server is unavailable because of an incident in the region where the server is hosted. If a large-scale incident in a region results in unavailability of your database application, you can restore a server from the geo-redundant backups to a server in any other region. Geo-restore utilizes the most recent backup of the server. There is a delay between when a backup is taken and when it is replicated to a different region. This delay can be up to an hour, so if a disaster occurs, there can be up to one hour of data loss.
-During geo-restore, the server configurations that can be changed include only security configuration (firewall rules and virtual network settings). Changing other server configurations such as compute, storage or pricing tier (Basic, General Purpose, or Memory Optimized) during geo-restore is not supported.
+During geo-restore, the server configurations that can be changed include only security configuration (firewall rules and virtual network settings). Changing other server configurations such as compute, storage or pricing tier (Basic, General Purpose, or Business Critical) during geo-restore is not supported.
Geo-restore can also be performed on a stopped server leveraging Azure CLI. Read [Restore Azure Database for MySQL - Flexible Server with Azure CLI](how-to-restore-server-cli.md) to learn more about geo-restoring a server with Azure CLI.
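+
+A minimal sketch of a geo-restore with the Azure CLI (placeholder names and region; assumes the `geo-restore` subcommand available in recent CLI versions):
+
+```azurecli
+az mysql flexible-server geo-restore \
+  --resource-group <resource-group-name> \
+  --name <new-server-name> \
+  --source-server <source-server-name> \
+  --location <target-paired-region>
+```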
mysql Concepts Business Continuity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-business-continuity.md
+ Previously updated : 09/21/2020 Last updated : 05/24/2022 # Overview of business continuity with Azure Database for MySQL - Flexible Server
The table below illustrates the features that Flexible server offers.
| **Backup & Recovery** | Flexible server automatically performs daily backups of your database files and continuously backs up transaction logs. Backups can be retained for any period between 1 to 35 days. You'll be able to restore your database server to any point in time within your backup retention period. Recovery time will be dependent on the size of the data to restore + the time to perform log recovery. Refer to [Concepts - Backup and Restore](./concepts-backup-restore.md) for more details. |Backup data remains within the region | | **Local redundant backup** | Flexible server backups are automatically and securely stored in a local redundant storage within a region and in same availability zone. The locally redundant backups replicate the server backup data files three times within a single physical location in the primary region. Locally redundant backup storage provides at least 99.999999999% (11 nines) durability of objects over a given year. Refer to [Concepts - Backup and Restore](./concepts-backup-restore.md) for more details.| Applicable in all regions | | **Geo-redundant backup** | Flexible server backups can be configured as geo-redundant at create time. Enabling Geo-redundancy replicates the server backup data files in the primary region’s paired region to provide regional resiliency. Geo-redundant backup storage provides at least 99.99999999999999% (16 nines) durability of objects over a given year. Refer to [Concepts - Backup and Restore](./concepts-backup-restore.md) for more details.| Available in all [Azure paired regions](overview.md#azure-regions) |
-| **Zone redundant high availability** | Flexible server can be deployed in high availability mode, which deploys primary and standby servers in two different availability zones within a region. This protects from zone-level failures and also helps with reducing application downtime during planned and unplanned downtime events. Data from the primary server is synchronously replicated to the standby replica. During any downtime event, the database server is automatically failed over to the standby replica. Refer to [Concepts - High availability](./concepts-high-availability.md) for more details. | Supported in general purpose and memory optimized compute tiers. Available only in regions where multiple zones are available.|
+| **Zone redundant high availability** | Flexible server can be deployed in high availability mode, which deploys primary and standby servers in two different availability zones within a region. This protects from zone-level failures and also helps with reducing application downtime during planned and unplanned downtime events. Data from the primary server is synchronously replicated to the standby replica. During any downtime event, the database server is automatically failed over to the standby replica. Refer to [Concepts - High availability](./concepts-high-availability.md) for more details. | Supported in general purpose and Business Critical compute tiers. Available only in regions where multiple zones are available.|
| **Premium file shares** | Database files are stored in a highly durable and reliable Azure premium file shares that provide data redundancy with three copies of replica stored within an availability zone with automatic data recovery capabilities. Refer to [Premium File shares](../../storage/files/storage-how-to-create-file-share.md) for more details. | Data stored within an availability zone | ## Planned downtime mitigation
Here are some unplanned failure scenarios and the recovery process:
## Next steps - Learn about [zone redundant high availability](./concepts-high-availability.md)-- Learn about [backup and recovery](./concepts-backup-restore.md)
+- Learn about [backup and recovery](./concepts-backup-restore.md)
mysql Concepts Compute Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-compute-storage.md
+ Previously updated : 1/28/2021 Last updated : 05/24/2022 # Compute and storage options in Azure Database for MySQL - Flexible Server
Last updated 1/28/2021
[!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)]
-You can create an Azure Database for MySQL Flexible Server in one of three different compute tiers: Burstable, General Purpose, and Memory Optimized. The compute tiers are differentiated by the underlying VM SKU used B-series, D-series, and E-series. The choice of compute tier and size determines the memory and vCores available on the server. The same storage technology is used across all compute tiers. All resources are provisioned at the MySQL server level. A server can have one or many databases.
+You can create an Azure Database for MySQL Flexible Server in one of three different compute tiers: Burstable, General Purpose, and Business Critical. The compute tiers are differentiated by the underlying VM SKU used B-series, D-series, and E-series. The choice of compute tier and size determines the memory and vCores available on the server. The same storage technology is used across all compute tiers. All resources are provisioned at the MySQL server level. A server can have one or many databases.
-| Resource / Tier | **Burstable** | **General Purpose** | **Memory Optimized** |
+| Resource / Tier | **Burstable** | **General Purpose** | **Business Critical** |
|:-|:-|:--|:-|
-| VM series| B-series | Ddsv4-series | Edsv4-series|
-| vCores | 1, 2 | 2, 4, 8, 16, 32, 48, 64 | 2, 4, 8, 16, 32, 48, 64 |
+| VM series| B-series | Ddsv4-series | Edsv4/v5-series*|
+| vCores | 1, 2, 4, 8, 12, 16, 20 | 2, 4, 8, 16, 32, 48, 64 | 2, 4, 8, 16, 32, 48, 64, 80, 96 |
| Memory per vCore | Variable | 4 GiB | 8 GiB * | | Storage size | 20 GiB to 16 TiB | 20 GiB to 16 TiB | 20 GiB to 16 TiB | | Database backup retention period | 1 to 35 days | 1 to 35 days | 1 to 35 days |
-\* With the exception of E64ds_v4 (Memory Optimized) SKU, which has 504 GB of memory
+\* With the exception of the E64ds_v4 (Business Critical) SKU, which has 504 GB of memory.
+
+\* Only a few regions have Edsv5 compute availability.
To choose a compute tier, use the following table as a starting point.
To choose a compute tier, use the following table as a starting point.
|:-|:--| | Burstable | Best for workloads that don't need the full CPU continuously. | | General Purpose | Most business workloads that require balanced compute and memory with scalable I/O throughput. Examples include servers for hosting web and mobile apps and other enterprise applications.|
-| Memory Optimized | High-performance database workloads that require in-memory performance for faster transaction processing and higher concurrency. Examples include servers for processing real-time data and high-performance transactional or analytical apps.|
+| Business Critical | High-performance database workloads that require in-memory performance for faster transaction processing and higher concurrency. Examples include servers for processing real-time data and high-performance transactional or analytical apps.|
After you create a server, the compute tier, compute size, and storage size can be changed. Compute scaling requires a restart and takes between 60-120 seconds, while storage scaling does not require restart. You also can independently adjust the backup retention period up or down. For more information, see the [Scale resources](#scale-resources) section.
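
For example, a sketch of scaling the compute tier and size in place with the Azure CLI (placeholder names; the SKU shown is illustrative):

```azurecli
az mysql flexible-server update \
  --resource-group <resource-group-name> \
  --name <server-name> \
  --tier GeneralPurpose \
  --sku-name Standard_D4ds_v4
```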
Compute resources can be selected based on the tier and size. This determines th
The detailed specifications of the available server types are as follows:
-| Compute size | vCores | Memory Size (GiB) | Max Supported IOPS | Max Supported I/O bandwidth (MBps)| Max Connections
-|-|--|-| |--|
-| **Burstable** | | |
-| Standard_B1s | 1 | 1 | 320 | 10 | 171
-| Standard_B1ms | 1 | 2 | 640 | 10 | 341
-| Standard_B2s | 2 | 4 | 1280 | 15 | 683
-| **General Purpose** | | | | |
-| Standard_D2ds_v4 | 2 | 8 | 3200 | 48 | 1365
-| Standard_D4ds_v4 | 4 | 16 | 6400 | 96 | 2731
-| Standard_D8ds_v4 | 8 | 32 | 12800 | 192 | 5461
-| Standard_D16ds_v4 | 16 | 64 | 20000 | 384 | 10923
-| Standard_D32ds_v4 | 32 | 128 | 20000 | 768 | 21845
-| Standard_D48ds_v4 | 48 | 192 | 20000 | 1152 | 32768
-| Standard_D64ds_v4 | 64 | 256 | 20000 | 1200 | 43691
-| **Memory Optimized** | | | | |
-| Standard_E2ds_v4 | 2 | 16 | 3200 | 48 | 2731
-| Standard_E4ds_v4 | 4 | 32 | 6400 | 96 | 5461
-| Standard_E8ds_v4 | 8 | 64 | 12800 | 192 | 10923
-| Standard_E16ds_v4 | 16 | 128 | 20000 | 384 | 21845
-| Standard_E32ds_v4 | 32 | 256 | 20000 | 768 | 43691
-| Standard_E48ds_v4 | 48 | 384 | 20000 | 1152 | 65536
-| Standard_E64ds_v4 | 64 | 504 | 20000 | 1200 | 86016
-
-To get more details about the compute series available, refer to Azure VM documentation for [Burstable (B-series)](../../virtual-machines/sizes-b-series-burstable.md), [General Purpose (Ddsv4-series)](../../virtual-machines/ddv4-ddsv4-series.md), and [Memory Optimized (Edsv4-series)](../../virtual-machines/edv4-edsv4-series.md).
+| Compute size | vCores | Memory Size (GiB) | Max Supported IOPS | Max Connections |
+|-|--|-|--|--|
+|**Burstable**
+|Standard_B1s | 1 | 1 | 320 | 171
+|Standard_B1ms | 1 | 2 | 640 | 341
+|Standard_B2s | 2 | 4 | 1280 | 683
+|Standard_B2ms | 2 | 8 | 1700 | 1365
+|Standard_B4ms | 4 | 16 | 2400 | 2731
+|Standard_B8ms | 8 | 32 | 3100 | 5461
+|Standard_B12ms | 12 | 48 | 3800 | 8193
+|Standard_B16ms | 16 | 64 | 4300 | 10923
+|Standard_B20ms | 20 | 80 | 5000 | 13653
+|**General Purpose**|
+|Standard_D2ds_v4 |2 |8 |3200 |1365
+|Standard_D4ds_v4 |4 |16 |6400 |2731
+|Standard_D8ds_v4 |8 |32 |12800 |5461
+|Standard_D16ds_v4 |16 |64 |20000 |10923
+|Standard_D32ds_v4 |32 |128 |20000 |21845
+|Standard_D48ds_v4 |48 |192 |20000 |32768
+|Standard_D64ds_v4 |64 |256 |20000 |43691
+|**Business Critical** |
+|Standard_E2ds_v4 | 2 | 16 | 5000 | 2731
+|Standard_E4ds_v4 | 4 | 32 | 10000 | 5461
+|Standard_E8ds_v4 | 8 | 64 | 18000 | 10923
+|Standard_E16ds_v4 | 16 | 128 | 28000 | 21845
+|Standard_E32ds_v4 | 32 | 256 | 38000 | 43691
+|Standard_E48ds_v4 | 48 | 384 | 48000 | 65536
+|Standard_E64ds_v4 | 64 | 504 | 48000 | 86016
+|Standard_E80ids_v4 | 80 | 504 | 48000 | 86016
+|Standard_E2ds_v5 | 2 | 16 | 5000 | 2731
+|Standard_E4ds_v5 | 4 | 32 | 10000 | 5461
+|Standard_E8ds_v5 | 8 | 64 | 18000 | 10923
+|Standard_E16ds_v5 | 16 | 128 | 28000 | 21845
+|Standard_E32ds_v5 | 32 | 256 | 38000 | 43691
+|Standard_E48ds_v5 | 48 | 384 | 48000 | 65536
+|Standard_E64ds_v5 | 64 | 512 | 48000 | 87383
+|Standard_E96ds_v5 | 96 | 672 | 48000 | 100000
+
+To get more details about the compute series available, refer to Azure VM documentation for [Burstable (B-series)](../../virtual-machines/sizes-b-series-burstable.md), [General Purpose (Ddsv4-series)](../../virtual-machines/ddv4-ddsv4-series.md), and Business Critical ([Edsv4-series](../../virtual-machines/edv4-edsv4-series.md) and [Edsv5-series](../../virtual-machines/edv5-edsv5-series.md)).
>[!NOTE] >For [Burstable (B-series) compute tier](../../virtual-machines/sizes-b-series-burstable.md) if the VM is started/stopped or restarted, the credits may be lost. For more information, see [Burstable (B-Series) FAQ](../../virtual-machines/sizes-b-series-burstable.md#q-why-is-my-remaining-credit-set-to-0-after-a-redeploy-or-a-stopstart).
Remember that storage once auto-scaled up, cannot be scaled down.
Azure Database for MySQL - Flexible Server supports the provisioning of additional IOPS. This feature enables you to provision additional IOPS above the complimentary IOPS limit. Using this feature, you can increase or decrease the number of IOPS provisioned based on your workload requirements at any time.
-The minimum IOPS is 360 across all compute sizes and the maximum IOPS is determined by the selected compute size. To learn more about the maximum IOPS per compute size is shown below:
-
-| Compute size | Maximum IOPS |
-|-||
-| **Burstable** | |
-| Standard_B1s | 320 |
-| Standard_B1ms | 640 |
-| Standard_B2s | 1280 |
-| **General Purpose** | |
-| Standard_D2ds_v4 | 3200 |
-| Standard_D4ds_v4 | 6400 |
-| Standard_D8ds_v4 | 12800 |
-| Standard_D16ds_v4 | 20000 |
-| Standard_D32ds_v4 | 20000 |
-| Standard_D48ds_v4 | 20000 |
-| Standard_D64ds_v4 | 20000 |
-| **Memory Optimized** | |
-| Standard_E2ds_v4 | 3200 |
-| Standard_E4ds_v4 | 6400 |
-| Standard_ E8ds_v4 | 12800 |
-| Standard_ E16ds_v4 | 20000 |
-| Standard_E32ds_v4 | 20000 |
-| Standard_E48ds_v4 | 20000 |
-| Standard_E64ds_v4 | 20000 |
-
-The maximum IOPS is dependent on the maximum available IOPS per compute size. Refer to the column *Max uncached disk throughput: IOPS/MBps* in the [B-series](../../virtual-machines/sizes-b-series-burstable.md), [Ddsv4-series](../../virtual-machines/ddv4-ddsv4-series.md), and [Edsv4-series](../../virtual-machines/edv4-edsv4-series.md) documentation.
+The minimum IOPS is 360 across all compute sizes and the maximum IOPS is determined by the selected compute size. To learn more about the maximum IOPS per compute size, refer to the [table above](#compute-tiers-size-and-server-types).
+
+The maximum IOPS is dependent on the maximum available IOPS per compute size. Refer to the column *Max uncached disk throughput: IOPS/MBps* in the [B-series](../../virtual-machines/sizes-b-series-burstable.md), [Ddsv4-series](../../virtual-machines/ddv4-ddsv4-series.md), [Edsv4-series](../../virtual-machines/edv4-edsv4-series.md), and [Edsv5-series](../../virtual-machines/edv5-edsv5-series.md) documentation.
> [!Important] > **Complimentary IOPS** are equal to MINIMUM("Max uncached disk throughput: IOPS/MBps" of compute size, 300 + storage provisioned in GiB * 3)<br>
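
A sketch of adjusting provisioned IOPS from the Azure CLI (the value is illustrative; assumes your CLI version exposes the `--iops` parameter on `az mysql flexible-server update`):

```azurecli
az mysql flexible-server update \
  --resource-group <resource-group-name> \
  --name <server-name> \
  --iops 600
```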
For the most up-to-date pricing information, see the service [pricing page](http
If you would like to optimize server cost, you can consider following tips: - Scale down your compute tier or compute size (vCores) if compute is underutilized.-- Consider switching to the Burstable compute tier if your workload doesn't need the full compute capacity continuously from the General Purpose and Memory Optimized tiers.
+- Consider switching to the Burstable compute tier if your workload doesn't need the full compute capacity continuously from the General Purpose and Business Critical tiers.
- Stop the server when not in use. - Reduce the backup retention period if a longer retention of backup is not required.
mysql Concepts Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-maintenance.md
+ Previously updated : 09/21/2020 Last updated : 05/24/2022 # Scheduled maintenance in Azure Database for MySQL - Flexible server
mysql Concepts Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-read-replicas.md
+ Previously updated : 06/17/2021 Last updated : 05/24/2022 # Read replicas in Azure Database for MySQL - Flexible Server
The read replica feature allows you to replicate data from an Azure Database for
Replicas are new servers that you manage similar to your source Azure Database for MySQL flexible servers. You will incur billing charges for each read replica based on the provisioned compute in vCores and storage in GB/ month. For more information, see [pricing](./concepts-compute-storage.md#pricing). > [!NOTE]
-> The read replica feature is only available for Azure Database for MySQL - Flexible servers in the General Purpose or Memory Optimized pricing tiers. Ensure the source server is in one of these pricing tiers.
+> The read replica feature is only available for Azure Database for MySQL - Flexible servers in the General Purpose or Business Critical pricing tiers. Ensure the source server is in one of these pricing tiers.
To learn more about MySQL replication features and issues, see the [MySQL replication documentation](https://dev.mysql.com/doc/refman/5.7/en/replication-features.html).
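
A minimal sketch of creating a read replica with the Azure CLI (placeholder names):

```azurecli
az mysql flexible-server replica create \
  --resource-group <resource-group-name> \
  --source-server <source-server-name> \
  --replica-name <replica-server-name>
```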
mysql Concepts Server Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-server-parameters.md
+ Previously updated : 11/10/2020 Last updated : 05/24/2022 # Server parameters in Azure Database for MySQL - Flexible Server
Review the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/innodb-
|General Purpose|32|128|103079215104|134217728|103079215104| |General Purpose|48|192|154618822656|134217728|154618822656| |General Purpose|64|256|206158430208|134217728|206158430208|
-|Memory Optimized|2|16|12884901888|134217728|12884901888|
-|Memory Optimized|4|32|25769803776|134217728|25769803776|
-|Memory Optimized|8|64|51539607552|134217728|51539607552|
-|Memory Optimized|16|128|103079215104|134217728|103079215104|
-|Memory Optimized|32|256|206158430208|134217728|206158430208|
-|Memory Optimized|48|384|309237645312|134217728|309237645312|
-|Memory Optimized|64|504|405874409472|134217728|405874409472|
+|Business Critical|2|16|12884901888|134217728|12884901888|
+|Business Critical|4|32|25769803776|134217728|25769803776|
+|Business Critical|8|64|51539607552|134217728|51539607552|
+|Business Critical|16|128|103079215104|134217728|103079215104|
+|Business Critical|32|256|206158430208|134217728|206158430208|
+|Business Critical|48|384|309237645312|134217728|309237645312|
+|Business Critical|64|504|405874409472|134217728|405874409472|
### innodb_file_per_table
The value of max_connection is determined by the memory size of the server.
|General Purpose|32|128|10923|10|21845| |General Purpose|48|192|16384|10|32768| |General Purpose|64|256|21845|10|43691|
-|Memory Optimized|2|16|1365|10|2731|
-|Memory Optimized|4|32|2731|10|5461|
-|Memory Optimized|8|64|5461|10|10923|
-|Memory Optimized|16|128|10923|10|21845|
-|Memory Optimized|32|256|21845|10|43691|
-|Memory Optimized|48|384|32768|10|65536|
-|Memory Optimized|64|504|43008|10|86016|
+|Business Critical|2|16|1365|10|2731|
+|Business Critical|4|32|2731|10|5461|
+|Business Critical|8|64|5461|10|10923|
+|Business Critical|16|128|10923|10|21845|
+|Business Critical|32|256|21845|10|43691|
+|Business Critical|48|384|32768|10|65536|
+|Business Critical|64|504|43008|10|86016|
When connections exceed the limit, you may receive the following error: > ERROR 1040 (08004): Too many connections
Upon initial deployment, an Azure for MySQL Flexible Server includes system tabl
+In Azure Database for MySQL, this parameter specifies the number of seconds the service waits before purging the binary log file.
-The binary log contains ΓÇ£eventsΓÇ¥ that describe database changes such as table creation operations or changes to table data. It also contains events for statements that potentially could have made changes. The binary log are used mainly for two purposes , replication and data recovery operations. Usually, the binary logs are purged as soon as the handle is free from service, backup or the replica set. In case of multiple replica, it would wait for the slowest replica to read the changes before it is been purged. If you want to persist binary logs for a more duration of time you can configure the parameter binlog_expire_logs_seconds. If the binlog_expire_logs_seconds is set to 0 which is the default value, it will purge as soon as the handle to the binary log is freed. if binlog_expire_logs_seconds > 0 then it would wait for the until the seconds configured before it purges. For Azure database for MySQL, managed features like backup and read replica purging of binary files are handled internally . When you replicate the data-out from the Azure Database for MySQL service, this parameter needs to be set in primary to avoid purging of binary logs before the replica reads from the changes from the primary. If you set the binlog_expire_logs_seconds to a higher value, then the binary logs will not get purged soon enough and can lead to increase in the storage billing.
+The binary log contains "events" that describe database changes, such as table creation operations or changes to table data. It also contains events for statements that potentially could have made changes. The binary logs are used mainly for two purposes: replication and data recovery operations. Usually, the binary logs are purged as soon as the handle is free from the service, backup, or the replica set. If there are multiple replicas, the logs are purged only after the slowest replica has read the changes. If you want to persist binary logs for a longer duration, you can configure the parameter binlog_expire_logs_seconds. If binlog_expire_logs_seconds is set to 0, which is the default value, the log is purged as soon as the handle to the binary log is freed. If binlog_expire_logs_seconds > 0, the log is purged only after the configured number of seconds. For Azure Database for MySQL, managed features like backup and read replicas handle the purging of binary files internally. When you replicate data out from the Azure Database for MySQL service, this parameter needs to be set on the primary to avoid purging of binary logs before the replica reads the changes from the primary. If you set binlog_expire_logs_seconds to a higher value, the binary logs won't be purged soon enough, which can lead to an increase in storage billing.
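+
+For example, a sketch of raising this parameter with the Azure CLI (the one-day value is illustrative):
+
+```azurecli
+az mysql flexible-server parameter set \
+  --resource-group <resource-group-name> \
+  --server-name <server-name> \
+  --name binlog_expire_logs_seconds \
+  --value 86400
+```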
## Non-modifiable server parameters
mysql Concepts Supported Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-supported-versions.md
+ Previously updated : 09/21/2020 Last updated : 05/24/2022 # Supported versions for Azure Database for MySQL - Flexible Server
The service automatically manages patching for bug fix version updates. For exam
> [!div class="nextstepaction"] >[Build a PHP app on Windows with MySQL](../../app-service/tutorial-php-mysql-app.md)<br/> >[Build PHP app on Linux with MySQL](../../app-service/tutorial-php-mysql-app.md?pivots=platform-linux%253fpivots%253dplatform-linux)<br/>
->[Build Java based Spring App with MySQL](/azure/developer/java/spring-framework/spring-app-service-e2e?tabs=bash)<br/>
+>[Build Java based Spring App with MySQL](/azure/developer/java/spring-framework/spring-app-service-e2e?tabs=bash)<br/>
mysql How To Configure High Availability Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-configure-high-availability-cli.md
Previously updated : 04/1/2021-+ Last updated : 05/24/2022 # Manage zone redundant high availability in Azure Database for MySQL Flexible Server with Azure CLI
High availability feature provisions physically separate primary and standby rep
## Enable high availability during server creation
-You can only create server using General purpose or Memory optimized pricing tiers with high availability. You can enable Zone redundant high availability for a server only during create time.
+You can only create a server with high availability using the General Purpose or Business Critical pricing tiers. You can enable zone-redundant high availability for a server only at create time.
**Usage:**
az mysql flexible-server update [--high-availability {Disabled, SameZone, ZoneRe
## Next steps - Learn about [business continuity](./concepts-business-continuity.md)-- Learn about [zone redundant high availability](./concepts-high-availability.md)
+- Learn about [zone redundant high availability](./concepts-high-availability.md)
mysql How To Configure High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-configure-high-availability.md
Previously updated : 09/21/2020-+ Last updated : 05/24/2022 # Manage zone redundant high availability in Azure Database for MySQL Flexible Server
This section provides details specifically for HA-related fields. You can follow
3. If you want to change the default compute and storage, Select **Configure server**. 4. If high availability option is checked, the burstable tier will not be available to choose. You can choose either
- **General purpose** or **Memory Optimized** compute tiers.
+ **General purpose** or **Business Critical** compute tiers.
> [!IMPORTANT]
- > We only support zone redundant high availability for the ***General purpose*** and ***Memory optimized*** pricing tier.
+ > We only support zone redundant high availability for the ***General purpose*** and ***Business Critical*** pricing tiers.
5. Select the **Compute size** for your choice from the dropdown.
mysql How To Configure Server Parameters Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-configure-server-parameters-cli.md
ms.devlang: azurecli + Last updated 11/10/2020- # Configure server parameters in Azure Database for MySQL Flexible Server using the Azure CLI
mysql How To Connect Tls Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-connect-tls-ssl.md
Previously updated : 09/21/2020 Last updated : 05/24/2022 ms.devlang: csharp, golang, java, javascript, php, python, ruby+ # Connect to Azure Database for MySQL - Flexible Server with encrypted connections
mysql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/overview.md
- Previously updated : 03/23/2022+ Last updated : 05/24/2022 # Azure Database for MySQL - Flexible Server
In this article, we'll provide an overview and introduction to core concepts of
Azure Database for MySQL Flexible Server is a fully managed production-ready database service designed for more granular control and flexibility over database management functions and configuration settings. The flexible server architecture allows users to opt for high availability within a single availability zone and across multiple availability zones. Flexible servers provide better cost optimization controls with the ability to stop/start the server and a burstable compute tier, ideal for workloads that don't need full-compute capacity continuously. Flexible Server also supports reserved instances, allowing you to save up to 63% cost, ideal for production workloads with predictable compute capacity requirements. The service supports the community versions of MySQL 5.7 and 8.0. The service is generally available today in a wide variety of [Azure regions](overview.md#azure-regions).
-The Flexible Server deployment option offers three compute tiers: Burstable, General Purpose, and Memory Optimized. Each tier offers different compute and memory capacity to support your database workloads. You can build your first app on a burstable tier for a few dollars a month, and then adjust the scale to meet the needs of your solution. Dynamic scalability enables your database to transparently respond to rapidly changing resource requirements. You only pay for the resources you need, and only when you need them. See [Compute and Storage](concepts-compute-storage.md) for details.
+The Flexible Server deployment option offers three compute tiers: Burstable, General Purpose, and Business Critical. Each tier offers different compute and memory capacity to support your database workloads. You can build your first app on a burstable tier for a few dollars a month, and then adjust the scale to meet the needs of your solution. Dynamic scalability enables your database to transparently respond to rapidly changing resource requirements. You only pay for the resources you need, and only when you need them. See [Compute and Storage](concepts-compute-storage.md) for details.
Flexible servers are best suited for - Ease of deployments, simplified scaling, and low database management overhead for functions like backups, high availability, security, and monitoring
See [Networking concepts](concepts-networking.md) to learn more.
## Adjust performance and scale within seconds
-The flexible server service is available in three SKU tiers: Burstable, General Purpose, and Memory Optimized. The Burstable tier is best suited for low-cost development and low concurrency workloads that don't need full-compute capacity continuously. The General Purpose and Memory Optimized are better suited for production workloads requiring high concurrency, scale, and predictable performance. You can build your first app on a small database for a few dollars a month, and then seamlessly adjust the scale to meet the needs of your solution. The storage scaling is online and supports storage autogrowth. Flexible Server enables you to provision additional IOPS up to 20 K IOPs above the complimentary IOPS limit independent of storage. Using this feature, you can increase or decrease the number of IOPS provisioned based on your workload requirements at any time. Dynamic scalability enables your database to transparently respond to rapidly changing resource requirements. You only pay for the resources you consume.
+The flexible server service is available in three SKU tiers: Burstable, General Purpose, and Business Critical. The Burstable tier is best suited for low-cost development and low concurrency workloads that don't need full-compute capacity continuously. The General Purpose and Business Critical tiers are better suited for production workloads requiring high concurrency, scale, and predictable performance. You can build your first app on a small database for a few dollars a month, and then seamlessly adjust the scale to meet the needs of your solution. Storage scaling is online and supports storage autogrowth. Flexible Server enables you to provision additional IOPS up to 20K IOPS above the complimentary IOPS limit, independent of storage. Using this feature, you can increase or decrease the number of IOPS provisioned based on your workload requirements at any time. Dynamic scalability enables your database to transparently respond to rapidly changing resource requirements. You only pay for the resources you consume.
See [Compute and Storage concepts](concepts-compute-storage.md) to learn more.
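For illustration, here's a hedged Azure CLI sketch of adjusting compute and provisioned IOPS on an existing flexible server. The resource group and server names are placeholders, and the flags are assumed from the `az mysql flexible-server update` command; verify them with `--help` before use.

```azurecli
# Scale the compute size and provisioned IOPS of an existing flexible server.
az mysql flexible-server update \
  --resource-group <resource-group> \
  --name <server-name> \
  --sku-name Standard_D4ds_v4 \
  --iops 600
```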
Now that you've read an introduction to Azure Database for MySQL - Single-Server
- Build your first app using your preferred language: - [Python](connect-python.md)
- - [PHP](connect-php.md)
+ - [PHP](connect-php.md)
mysql Sample Scripts Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/sample-scripts-azure-cli.md
ms.devlang: azurecli - Previously updated : 09/15/2021+ Last updated : 05/24/2022 # Azure CLI samples for Azure Database for MySQL - Flexible Server
The following table includes links to sample Azure CLI scripts for Azure Databas
|**Configure logs**|| | [Configure audit logs](scripts/sample-cli-audit-logs.md) | Configures audit logs on a single Azure Database for MySQL - Flexible Server. | | [Configure slow-query logs](scripts/sample-cli-slow-query-logs.md) | Configures slow-query logs on a single Azure Database for MySQL - Flexible Server. |-
mysql Sample Cli Same Zone Ha https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-same-zone-ha.md
ms.devlang: azurecli - Previously updated : 02/10/2022+ Last updated : 05/24/2022 # Configure same-zone high availability in an Azure Database for MySQL - Flexible Server using Azure CLI This sample CLI script configures and manages [Same-Zone high availability](../concepts-high-availability.md) in an Azure Database for MySQL - Flexible Server.
-Currently, Same-Zone high availability is supported only for the General purpose and Memory optimized pricing tiers.
+Currently, Same-Zone high availability is supported only for the General purpose and Business Critical pricing tiers.
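As a hedged sketch (not the sample script itself), same-zone high availability can be requested at creation time with the Azure CLI. Names are placeholders, and the `--high-availability` value is assumed from the `az mysql flexible-server create` command.

```azurecli
# Create a General Purpose flexible server with same-zone high availability.
# Flag values are assumptions; verify with `az mysql flexible-server create --help`.
az mysql flexible-server create \
  --resource-group <resource-group> \
  --name <server-name> \
  --location eastus \
  --tier GeneralPurpose \
  --sku-name Standard_D2ds_v4 \
  --high-availability SameZone
```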
[!INCLUDE [quickstarts-free-trial-note](../../includes/flexible-server-free-trial-note.md)]
mysql Sample Cli Zone Redundant Ha https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-zone-redundant-ha.md
ms.devlang: azurecli - Previously updated : 02/10/2022+ Last updated : 05/24/2022 # Configure zone-redundant high availability in an Azure Database for MySQL - Flexible Server using Azure CLI
Last updated 02/10/2022
This sample CLI script configures and manages [Zone-Redundant high availability](../concepts-high-availability.md) in an Azure Database for MySQL - Flexible Server. You can enable Zone-Redundant high availability only during Flexible Server creation, and can disable it anytime. You can also choose the availability zone for the primary and the standby replica.
-Currently, Zone-Redundant high availability is supported only for the General purpose and Memory optimized pricing tiers.
+Currently, Zone-Redundant high availability is supported only for the General purpose and Business Critical pricing tiers.
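As a hedged sketch of the commands the sample script builds on, zone-redundant high availability is enabled at creation time (with a choice of primary and standby zones) and can be disabled later. Names and zone numbers are placeholders, and the flag values are assumptions to verify with `--help`.

```azurecli
# Create a flexible server with zone-redundant HA, choosing the primary and standby zones.
az mysql flexible-server create \
  --resource-group <resource-group> \
  --name <server-name> \
  --location eastus \
  --tier GeneralPurpose \
  --sku-name Standard_D2ds_v4 \
  --high-availability ZoneRedundant \
  --zone 1 \
  --standby-zone 2

# Disable high availability at any time after creation.
az mysql flexible-server update \
  --resource-group <resource-group> \
  --name <server-name> \
  --high-availability Disabled
```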
[!INCLUDE [quickstarts-free-trial-note](../../includes/flexible-server-free-trial-note.md)]
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/whats-new.md
-+ Previously updated : 10/12/2021 Last updated : 05/24/2022 # What's new in Azure Database for MySQL - Flexible Server?
This release of Azure Database for MySQL - Flexible Server includes the followin
You won't be able to create new or maintain existing read replicas on Burstable tier servers. In the interest of providing a good query and development experience for Burstable SKU tiers, support for creating and maintaining read replicas for servers in the Burstable pricing tier will be discontinued.
- If you have an existing Azure Database for MySQL - Flexible Server with read replica enabled, youΓÇÖll have to scale up your server to either General Purpose or Memory Optimized pricing tiers or delete the read replica within 60 days. After the 60-day period, while you can continue to use the primary server for your read-write operations, replication to read replica servers will be stopped. For newly created servers, read replica option will be available only for the General Purpose and Memory Optimized pricing tiers.
+ If you have an existing Azure Database for MySQL - Flexible Server with read replica enabled, you'll have to scale up your server to either the General Purpose or Business Critical pricing tier or delete the read replica within 60 days. After the 60-day period, while you can continue to use the primary server for your read-write operations, replication to read replica servers will be stopped. For newly created servers, the read replica option will be available only for the General Purpose and Business Critical pricing tiers.
- **Monitoring Azure Database for MySQL - Flexible Server with Azure Monitor Workbooks**
This release of Azure Database for MySQL - Flexible Server includes the followin
Flexible Server now supports [Data-in Replication](concepts-data-in-replication.md). Use this feature to synchronize and migrate data from a MySQL server running on-premises, in virtual machines, on Azure Database for MySQL Single Server, or on database services outside Azure to Azure Database for MySQL - Flexible Server. Learn more about [How to configure Data-in Replication](how-to-data-in-replication.md). -- **GitHub actions support with Azure CLI**
+- **GitHub Actions support with Azure CLI**
- Flexible Server CLI now allows customers to automate workflows to deploy updates with GitHub actions. This feature helps set up and deploy database updates with MySQL GitHub Actions workflow. These CLI commands assist with setting up a repository to enable continuous deployment for ease of development. [Learn more](/cli/azure/mysql/flexible-server/deploy).
+ Flexible Server CLI now allows customers to automate workflows to deploy updates with GitHub Actions. This feature helps set up and deploy database updates with the MySQL GitHub Actions workflow. These CLI commands assist with setting up a repository to enable continuous deployment for ease of development. [Learn more](/cli/azure/mysql/flexible-server/deploy).
- **Zone redundant HA forced failover fixes**
mysql Quickstart Mysql Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/quickstart-mysql-github-actions.md
The output is a JSON object with the role assignment credentials that provide ac
# [OpenID Connect](#tab/openid)
-You need to provide your application's **Client ID**, **Tenant ID**, and **Subscription ID** to the login action. These values can either be provided directly in the workflow or can be stored in GitHub secrets and referenced in your workflow. Saving the values as GitHub secrets is the more secure option.
+OpenID Connect is an authentication method that uses short-lived tokens. Setting up [OpenID Connect with GitHub Actions](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect) is a more complex process that offers hardened security.
-1. Open your GitHub repository and go to **Settings**.
+1. If you do not have an existing application, register a [new Active Directory application and service principal that can access resources](../../active-directory/develop/howto-create-service-principal-portal.md). Create the Active Directory application.
-1. Select **Settings > Secrets > New secret**.
+ ```azurecli-interactive
+ az ad app create --display-name myApp
+ ```
-1. Create secrets for `AZURE_CLIENT_ID`, `AZURE_TENANT_ID`, and `AZURE_SUBSCRIPTION_ID`. Use these values from your Active Directory application for your GitHub secrets:
+ This command will output JSON with an `appId` that is your `client-id`. Save the value to use as the `AZURE_CLIENT_ID` GitHub secret later.
- |GitHub Secret | Active Directory Application |
- |||
- |AZURE_CLIENT_ID | Application (client) ID |
- |AZURE_TENANT_ID | Directory (tenant) ID |
- |AZURE_SUBSCRIPTION_ID | Subscription ID |
+ You'll use the `objectId` value when creating federated credentials with Graph API and reference it as the `APPLICATION-OBJECT-ID`.
-1. Save each secret by selecting **Add secret**.
+1. Create a service principal. Replace the `$appId` with the `appId` from your JSON output.
+ This command generates JSON output with a different `objectId` that will be used in the next step. The new `objectId` is the `assignee-object-id`.
+
+ Copy the `appOwnerTenantId` to use as a GitHub secret for `AZURE_TENANT_ID` later.
+
+ ```azurecli-interactive
+ az ad sp create --id $appId
+ ```
+
+1. Create a new role assignment by subscription and object. By default, the role assignment will be tied to your default subscription. Replace `$subscriptionId` with your subscription ID, `$resourceGroupName` with your resource group name, and `$assigneeObjectId` with the generated `assignee-object-id`. Learn [how to manage Azure subscriptions with the Azure CLI](/cli/azure/manage-azure-subscriptions-azure-cli).
+
+ ```azurecli-interactive
+ az role assignment create --role contributor --subscription $subscriptionId --assignee-object-id $assigneeObjectId --assignee-principal-type ServicePrincipal --scopes /subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.Web/sites/
+ ```
+
+1. Run the following command to [create a new federated identity credential](/graph/api/application-post-federatedidentitycredentials?view=graph-rest-beta&preserve-view=true) for your active directory application.
+
+ * Replace `APPLICATION-OBJECT-ID` with the **objectId (generated while creating app)** for your Active Directory application.
+ * Set a value for `CREDENTIAL-NAME` to reference later.
+ * Set the `subject`. The value of this is defined by GitHub depending on your workflow:
+ * Jobs in your GitHub Actions environment: `repo:< Organization/Repository >:environment:< Name >`
+ * For Jobs not tied to an environment, include the ref path for branch/tag based on the ref path used for triggering the workflow: `repo:< Organization/Repository >:ref:< ref path>`. For example, `repo:n-username/ node_express:ref:refs/heads/my-branch` or `repo:n-username/ node_express:ref:refs/tags/my-tag`.
+ * For workflows triggered by a pull request event: `repo:< Organization/Repository >:pull_request`.
+
+ ```azurecli
+ az rest --method POST --uri 'https://graph.microsoft.com/beta/applications/<APPLICATION-OBJECT-ID>/federatedIdentityCredentials' --body '{"name":"<CREDENTIAL-NAME>","issuer":"https://token.actions.githubusercontent.com","subject":"repo:organization/repository:ref:refs/heads/main","description":"Testing","audiences":["api://AzureADTokenExchange"]}'
+ ```
+
+ To learn how to create an Active Directory application, service principal, and federated credentials in the Azure portal, see [Connect GitHub and Azure](/azure/developer/github/connect-from-azure#use-the-azure-login-action-with-openid-connect).
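As a hedged convenience sketch (not part of the documented steps), the three values you'll store as GitHub secrets can be captured into shell variables. `myApp` is the display name used earlier, and the query paths assume the standard `az` JSON output.

```azurecli
# Look up the app created above and capture the values needed for the GitHub secrets.
appId=$(az ad app list --display-name myApp --query "[0].appId" --output tsv)   # AZURE_CLIENT_ID
tenantId=$(az account show --query tenantId --output tsv)                       # AZURE_TENANT_ID
subscriptionId=$(az account show --query id --output tsv)                       # AZURE_SUBSCRIPTION_ID
```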
+
## Copy the MySQL connection string
You need to provide your application's **Client ID**, **Tenant ID**, and **Subsc
2. Open the first result to see detailed logs of your workflow's run.
- :::image type="content" source="media/quickstart-mysql-github-actions/github-actions-run-mysql.png" alt-text="Log of GitHub actions run":::
+ :::image type="content" source="media/quickstart-mysql-github-actions/github-actions-run-mysql.png" alt-text="Log of GitHub Actions run":::
## Clean up resources
notification-hubs Create Notification Hub Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/create-notification-hub-bicep.md
+
+ Title: Create an Azure notification hub using Bicep
+description: Learn how to create an Azure notification hub using Bicep.
+++ Last updated : 05/24/2022+++++
+# Quickstart: Create a notification hub using Bicep
+
+Azure Notification Hubs provides an easy-to-use and scaled-out push engine that enables you to send notifications to any platform (iOS, Android, Windows, Kindle, etc.) from any backend (cloud or on-premises). For more information about the service, see [What is Azure Notification Hubs](notification-hubs-push-notification-overview.md).
++
+This quickstart uses Bicep to create an Azure Notification Hubs namespace, and a notification hub named **MyHub** within that namespace.
+
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/notification-hub/).
++
+The Bicep file creates the following two Azure resources:
+
+* [Microsoft.NotificationHubs/namespaces](/azure/templates/microsoft.notificationhubs/namespaces)
+* [Microsoft.NotificationHubs/namespaces/notificationHubs](/azure/templates/microsoft.notificationhubs/namespaces/notificationhubs)
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters namespaceName=<namespace-name>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -namespaceName "<namespace-name>"
+ ```
+
+
+
+ >[!NOTE]
+ > Replace **\<namespace-name\>** with the name of the Notification Hubs namespace.
+
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Review deployed resources
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+## Clean up resources
+
+When you no longer need the notification hub, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and its resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+For a step-by-step tutorial that guides you through the process of creating a Bicep file, see:
+
+> [!div class="nextstepaction"]
+> [Quickstart: Create Bicep files with Visual Studio Code](../azure-resource-manager/bicep/quickstart-create-bicep-use-visual-studio-code.md)
object-anchors Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/object-anchors/faq.md
Title: Frequently asked questions
-description: FAQs about the Azure Object Anchors service.
+description: Answers to frequently asked questions about the Azure Object Anchors service, which enables an application to detect an object in the world using a 3D model.
-+ Previously updated : 09/10/2021- Last updated : 05/20/2022+ #Customer intent: Address frequently asked questions regarding Azure Object Anchors.
Azure Object Anchors enables an application to detect an object in the physical
For more information, see [Azure Object Anchors overview](overview.md). ## Product FAQ+ **Q: What recommendations do you have for the objects that should be used?** **A:** We recommend the following properties for objects:
For more information, see [Azure Object Anchors overview](overview.md).
**Q: What is the gravity direction and unit required by the model conversion service?**
-**A:** The gravity direction is the down vector pointing to the earth and the unit of measurement represents the scale
- of the model. When converting a model, it's important to
- [ensure the gravity direction and asset dimension unit are correct](./troubleshoot/object-detection.md#ensure-the-gravity-direction-and-asset-dimension-unit-are-correct).
+**A:** The gravity direction is the down vector pointing to the earth and the unit of measurement represents the scale of the model. When converting a model, it's important to [ensure the gravity direction and asset dimension unit are correct](./troubleshoot/object-detection.md#ensure-the-gravity-direction-and-asset-dimension-unit-are-correct).
**Q: How long does it take to convert a CAD model?**
For more information, see [Azure Object Anchors overview](overview.md).
**A:** HoloLens 2.
-**Q: Which OS build should my HoloLens run?**
-
-**A:** OS Build 18363.720 or newer, released after March 12, 2020.
+**Q: Which version of Windows Holographic should my HoloLens 2 have installed?**
- More details at [Windows 10 March 12, 2020 update](https://support.microsoft.com/help/4551762).
+**A:** We recommend the most recent release from Windows Update. See the Windows Holographic [release notes](/hololens/hololens-release-notes) and [update instructions](/hololens/hololens-update-hololens).
**Q: How long does it take to detect an object on HoloLens?**
For smaller objects within 2 meters in each dimension, detection can occur withi
**Q: Which version of the Mixed Reality Toolkit (MRTK) should my HoloLens Unity application use to be able to work with the Object Anchors Unity SDK?**
-**A:** The Azure Object Anchors Unity SDK doesn't depend on the Mixed Reality Toolkit in any way, which means you are free to use any version you like. For more information, see [Introducing MRTK for Unity](/windows/mixed-reality/develop/unity/mrtk-getting-started).
+**A:** The Azure Object Anchors Unity SDK doesn't depend on the Mixed Reality Toolkit in any way, which means you're free to use any version you like. For more information, see [Introducing MRTK for Unity](/windows/mixed-reality/develop/unity/mrtk-getting-started).
**Q: How accurate is an estimated pose?**
-**A:** It depends on object size, material, environment, etc. For small objects, the estimated pose can be within 2-cm error. For large objects, like a car, the error can be up to 2-8 cm.
+**A:** It depends on object size, material, environment, and other factors. For small objects, the estimated pose can be within a 2-cm error. For large objects, like a car, the error can range from 2 cm to 8 cm.
**Q: Can Object Anchors handle moving objects?**
-**A:** We don't support **continuously moving** or **dynamic** objects. We do support objects in an entirely new position in the space once they have been physically moved there, but cannot track it while it is being moved.
+**A:** We don't support *continuously moving* or *dynamic* objects. We do support objects in an entirely new position in the space once they've been physically moved there, but can't track it while it's being moved.
**Q: Can Object Anchors handle deformation or articulations?**
For smaller objects within 2 meters in each dimension, detection can occur withi
**Q: How many different models can Object Anchors detect at the same time?**
-**A:** We currently support detecting three models at a time to ensure the best user experience, but we don't enforce a
- limit.
+**A:** We currently support detecting three models at a time to ensure the best user experience, but we don't enforce a limit.
**Q: Can Object Anchors detect multiple instances of the same object model?**
-**A:** Yes, we support detecting up to three instances of the same model type to ensure the best user experience, but we
- don't enforce a limit. You can detect one object instance per search area. By calling
- `ObjectQuery.SearchAreas.Add`, you can add more search areas to a query to detect more instances. You can call
- `ObjectObserver.DetectAsync` with multiple queries to detect multiple models.
+**A:** Yes, we support detecting up to three instances of the same model type to ensure the best user experience, but we don't enforce a limit. You can detect one object instance per search area. By calling `ObjectQuery.SearchAreas.Add`, you can add more search areas to a query to detect more instances. You can call `ObjectObserver.DetectAsync` with multiple queries to detect multiple models.
**Q: What should I do if the Object Anchors runtime cannot detect my object?**
For smaller objects within 2 meters in each dimension, detection can occur withi
**Q: How to choose object query parameters?**
-**A:** Here's some [general guidance](./troubleshoot/object-detection.md#adjust-object-query-values) and a more
- detailed guide for [difficult to detect objects](./detect-difficult-object.md).
+**A:** Here's some [general guidance](./troubleshoot/object-detection.md#adjust-object-query-values) and a more detailed guide for [difficult to detect objects](./detect-difficult-object.md).
**Q: How do I get Object Anchors diagnostics data from the HoloLens?**
For smaller objects within 2 meters in each dimension, detection can occur withi
**Q: Why does the source model not align with the physical object when using the pose returned by the Object Anchors Unity SDK?**
-**A:** Unity may change the coordinate system when importing an object model. For example, the Object Anchors Unity SDK inverts the Z axis when converting from a right-handed to left-handed coordinate system, but Unity may apply an additional rotation about either the X or Y axis. A developer can determine this additional rotation by visualizing and comparing the coordinate systems.
+**A:** Unity may change the coordinate system when importing an object model. For example, the Object Anchors Unity SDK inverts the Z axis when it converts from a right-handed to left-handed coordinate system. Unity may apply another rotation about either the X or Y axis. A developer can determine this other rotation by visualizing and comparing the coordinate systems.
**Q: Do you support 2D?**
-**A:** Since we are geometry based, we only support 3D.
+**A:** Since we're geometry based, we only support 3D.
**Q: Can you differentiate between the same model in different colors?**
For smaller objects within 2 meters in each dimension, detection can occur withi
**Q: Can I use Object Anchors without internet connectivity?** **A:**
-* For model conversion and training, connectivity is required as this occurs in the cloud.
-* Runtime sessions are fully on-device and do not require connectivity as all computations occur on the HoloLens 2.
+
+* For model conversion and training, connectivity is required because these actions occur in the cloud.
+* Runtime sessions are fully on-device and don't require connectivity because all computations occur on the HoloLens 2.
## Privacy FAQ+ **Q: How does Azure Object Anchors store data?** **A:** We only store System Metadata, which is encrypted at rest with a Microsoft managed data encryption key.
object-anchors Get Started Hololens Directx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/object-anchors/quickstarts/get-started-hololens-directx.md
Title: 'Quickstart: Create a HoloLens app with DirectX'
-description: In this quickstart, you learn how to build a HoloLens app using Object Anchors.
+ Title: 'Quickstart: HoloLens app with DirectX'
+description: In this quickstart, you learn how to build a HoloLens app using the Azure Object Anchors service and DirectX.
Previously updated : 09/08/2021 Last updated : 05/20/2022 -+
+- mode-api
+- kr2b-contr-experiment
+ # Quickstart: Create a HoloLens app with Azure Object Anchors, in C++/WinRT and DirectX
-This quickstart covers how to create a HoloLens app using [Azure Object Anchors](../overview.md) in C++/WinRT and
-DirectX. Azure Object Anchors is a managed cloud service that converts 3D assets into AI models that enable
-object-aware mixed reality experiences for the HoloLens. When you're finished, you'll have a HoloLens app that can detect
-an object and its pose in a Holographic DirectX 11 (Universal Windows) application.
+This quickstart covers how to create a HoloLens app using [Azure Object Anchors](../overview.md) in C++/WinRT and DirectX.
+Object Anchors is a managed cloud service that converts 3D assets into AI models that enable object-aware mixed reality experiences for the HoloLens.
+When you're finished, you'll have a HoloLens app that can detect an object and its pose in a Holographic DirectX 11 (Universal Windows) application.
You'll learn how to:
You'll learn how to:
To complete this quickstart, make sure you have:
-* A physical object in your environment and its 3D model (either CAD or scanned).
-* A Windows machine with the following installed:
+* A physical object in your environment and its 3D model, either CAD or scanned.
+* A Windows computer with the following installed:
* <a href="https://git-scm.com" target="_blank">Git for Windows</a> * <a href="https://www.visualstudio.com/downloads/" target="_blank">Visual Studio 2019</a> with the **Universal Windows Platform development** workload and the **Windows 10 SDK (10.0.18362.0 or newer)** component * A HoloLens 2 device that is up to date and has [developer mode](/windows/mixed-reality/using-visual-studio#enabling-developer-mode) enabled.
To complete this quickstart, make sure you have:
[!INCLUDE [Clone Sample Repo](../../../includes/object-anchors-clone-sample-repository.md)]
-Open `quickstarts/apps/directx/DirectXAoaSampleApp.sln` in Visual Studio.
+Open *quickstarts/apps/directx/DirectXAoaSampleApp.sln* in Visual Studio.
Change the **Solution Configuration** to **Release**, change **Solution Platform** to **ARM64**, and select **Device** from the deployment target options.
Change the **Solution Configuration** to **Release**, change **Solution Platform
The next step is to configure the app to use your account information. You took note of the **Account Key**, **Account ID**, and **Account Domain** values in the ["Create an Object Anchors account"](#create-an-object-anchors-account) section.
-Open `Assets\ObjectAnchorsConfig.json`.
+Open *Assets\ObjectAnchorsConfig.json*.
Locate the `AccountId` field and replace `Set me` with your Account ID.
Locate the `AccountDomain` field and replace `Set me` with your Account Domain.
Now, build the **AoaSampleApp** project by right-clicking the project and selecting **Build**. ## Deploy the app to HoloLens After compiling the sample project successfully, you can deploy the app to HoloLens.
-Ensure the HoloLens device is powered on and connected to the PC through a USB cable. Make sure **Device** is the chosen deployment target (see above).
+Ensure the HoloLens device is powered on and connected to the PC through a USB cable. Make sure **Device** is the chosen deployment target, as above.
-Right-click **AoaSampleApp** project, then click **Deploy** from the pop-up menu to install the app. If no error shows up in Visual Studio's **Output Window**, the app will be installed on HoloLens.
+Right-click **AoaSampleApp** project, then select **Deploy** from the context menu to install the app. If no error shows up in Visual Studio's **Output Window**, the app will be installed on HoloLens.
-Before launching the app, you ought to have uploaded an object model, **chair.ou** for example, to the **3D Objects** folder on your HoloLens. If you haven't, follow the instructions in the ["Upload your model"](#upload-your-model) section.
+Before launching the app, you ought to have uploaded an object model, *chair.ou* for example, to the *3D Objects* folder on your HoloLens. If you haven't, follow the instructions in the [Upload your model](#upload-your-model) section.
-To launch and debug the app, select **Debug > Start debugging**.
+To launch and debug the app, select **Debug** > **Start debugging**.
## Ingest object model and detect its instance
-The **AoaSampleApp** app is now running on your HoloLens device. Walk close (within 2-meter distance) to the target object (chair) and scan it by looking at it from multiple perspectives. You should see a pink bounding box around the object with some yellow points rendered close to object's surface, which indicates that it was detected.
-
+The **AoaSampleApp** app is now running on your HoloLens device. Walk close, within a 2-meter distance, to the target object (chair) and scan it by looking at it from multiple perspectives. You should see a pink bounding box around the object with some yellow points rendered close to the object's surface, which indicates that it was detected. You should also see a yellow box that indicates the search area.
-Figure: a detected chair rendered with its bounding box (pink), point cloud (yellow), and a search area (large yellow box).
-You can define a search space for the object in the app by finger clicking in the air with either your right or left hand. The search space will switch among a sphere of 2-meters radius, a 4 m^3 bounding box and a view frustum. For larger objects such as cars, the best choice will typically be to use the view frustum selection while standing facing a corner of the object at about a 2-meter distance.
-Each time the search area changes, the app will remove instances currently being tracked, and then try to find them again in the new search area.
+You can define a search space for the object in the app by finger clicking in the air with either your right or left hand. The search space will switch among a sphere with a 2-meter radius, a 4 m^3 bounding box, and a view frustum. For larger objects such as cars, the best choice is usually to use the view frustum selection while standing facing a corner of the object at about a 2-meter distance. Each time the search area changes, the app removes instances currently being tracked. It then tries to find them again in the new search area.
-This app can track multiple objects at one time. To do that, upload multiple models to the **3D Objects** folder of your device and set a search area that covers all the target objects. It may take longer to detect and track multiple objects.
+This app can track multiple objects at one time. To do that, upload multiple models to the *3D Objects* folder of your device and set a search area that covers all the target objects. It may take longer to detect and track multiple objects.
-The app aligns a 3D model to its physical counterpart closely. A user can air tap using their left hand to turn on the high precision tracking mode, which computes a more accurate pose. This is still an experimental feature, which consumes more system resources, and could result in higher jitter in the estimated pose. Air tap again with the left hand to switch back to the normal tracking mode.
+The app aligns a 3D model to its physical counterpart closely. A user can air tap using their left hand to turn on the high precision tracking mode, which computes a more accurate pose. This feature is still experimental. It consumes more system resources and could result in higher jitter in the estimated pose. Air tap again with the left hand to switch back to the normal tracking mode.
## Next steps
open-datasets Dataset Mnist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/open-datasets/dataset-mnist.md
Title: MNIST database of handwritten digits description: Learn how to use the MNIST database of handwritten digits dataset in Azure Open Datasets. + Last updated 04/16/2021
Four files are available in the container directly:
> **[Download the notebook instead](https://opendatasets-api.azure.com/discoveryapi/OpenDataset/DownloadNotebook?serviceType=AzureNotebooks&package=azureml-opendatasets&registryId=mnist)**. ### Load MNIST into a data frame using Azure Machine Learning tabular datasets.
-For more information on Azure Machine Learning datasets, see [Create Azure Machine Learning datasets](../machine-learning/how-to-create-register-datasets.md).
+For more information on Azure Machine Learning datasets, see [Create Azure Machine Learning datasets](../machine-learning/v1/how-to-create-register-datasets.md).
#### Get complete dataset into a data frame
plt.show()
``` ### Download or mount MNIST raw files using Azure Machine Learning file datasets.
-This works only for Linux based compute. For more information on Azure Machine Learning datasets, see [Create Azure Machine Learning datasets](../machine-learning/how-to-create-register-datasets.md).
+This works only for Linux-based compute. For more information on Azure Machine Learning datasets, see [Create Azure Machine Learning datasets](../machine-learning/v1/how-to-create-register-datasets.md).
```python mnist_file = MNIST.get_file_dataset()
plt.show()
> **[Download the notebook instead](https://opendatasets-api.azure.com/discoveryapi/OpenDataset/DownloadNotebook?serviceType=AzureDatabricks&package=azureml-opendatasets&registryId=mnist)**. ### Load MNIST into a data frame using Azure Machine Learning tabular datasets.
-For more information on Azure Machine Learning datasets, see [Create Azure Machine Learning datasets](../machine-learning/how-to-create-register-datasets.md).
+For more information on Azure Machine Learning datasets, see [Create Azure Machine Learning datasets](../machine-learning/v1/how-to-create-register-datasets.md).
#### Get complete dataset into a data frame
display(mnist_df.limit(5))
``` ### Download or mount MNIST raw files using Azure Machine Learning file datasets.
-This works only for Linux based compute. For more information on Azure Machine Learning datasets, see [Create Azure Machine Learning datasets](../machine-learning/how-to-create-register-datasets.md).
+This works only for Linux-based compute. For more information on Azure Machine Learning datasets, see [Create Azure Machine Learning datasets](../machine-learning/v1/how-to-create-register-datasets.md).
```python mnist_file = MNIST.get_file_dataset()
Sample not available for this platform/package combination.
## Next steps
-View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
+View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
open-datasets How To Create Azure Machine Learning Dataset From Open Dataset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/open-datasets/how-to-create-azure-machine-learning-dataset-from-open-dataset.md
Last updated 08/05/2020--
-# Customer intent: As an experienced Python developer, I want to use Azure Open Datasets in my ML workflows for improved model accuracy.
+
+#Customer intent: As an experienced Python developer, I want to use Azure Open Datasets in my ML workflows for improved model accuracy.
# Create Azure Machine Learning datasets from Azure Open Datasets In this article, you learn how to bring curated enrichment data into your local or remote machine learning experiments with [Azure Machine Learning](../machine-learning/overview-what-is-azure-machine-learning.md) datasets and [Azure Open Datasets](./index.yml).
-By creating an [Azure Machine Learning dataset](../machine-learning/how-to-create-register-datasets.md), you create a reference to the data source location, along with a copy of its metadata. Because datasets are lazily evaluated, and the data remains in its existing location, you
+By creating an [Azure Machine Learning dataset](../machine-learning/v1/how-to-create-register-datasets.md), you create a reference to the data source location, along with a copy of its metadata. Because datasets are lazily evaluated, and the data remains in its existing location, you
* Incur no extra storage cost. * Don't risk unintentionally changing your original data sources. * Improve ML workflow performance speeds.
-To understand where datasets fit in Azure Machine Learning's overall data access workflow, see the [Securely access data](../machine-learning/concept-data.md#data-workflow) article.
+To understand where datasets fit in Azure Machine Learning's overall data access workflow, see the [Securely access data](../machine-learning/v1/concept-data.md#data-workflow) article.
Azure Open Datasets are curated public datasets that you can use to add scenario-specific features to enrich your predictive solutions and improve their accuracy. See the [Open Datasets catalog](https://azure.microsoft.com/services/open-datasets/catalog/) for public-domain data that can help you train machine learning models, like:
For this article, you need:
## Create datasets with the SDK
-To create Azure Machine Learning datasets via Azure Open Datasets classes in the Python SDK, make sure you've installed the package with `pip install azureml-opendatasets`. Each discrete data set is represented by its own class in the SDK, and certain classes are available as either an Azure Machine Learning [`TabularDataset`, `FileDataset`](../machine-learning/how-to-create-register-datasets.md#dataset-types), or both. See the [reference documentation](/python/api/azureml-opendatasets/azureml.opendatasets) for a full list of `opendatasets` classes.
+To create Azure Machine Learning datasets via Azure Open Datasets classes in the Python SDK, make sure you've installed the package with `pip install azureml-opendatasets`. Each discrete data set is represented by its own class in the SDK, and certain classes are available as either an Azure Machine Learning [`TabularDataset`, `FileDataset`](../machine-learning/v1/how-to-create-register-datasets.md#dataset-types), or both. See the [reference documentation](/python/api/azureml-opendatasets/azureml.opendatasets) for a full list of `opendatasets` classes.
You can retrieve certain `opendatasets` classes as either a `TabularDataset` or `FileDataset`, which allows you to manipulate and/or download the files directly. Other classes can get a dataset **only** by using the `get_tabular_dataset()` or `get_file_dataset()` functions from the `Dataset` class in the Python SDK.
For examples and demonstrations of Open Datasets functionality, see these [samp
* [Train with datasets](../machine-learning/how-to-train-with-datasets.md).
-* [Create an Azure machine learning dataset](../machine-learning/how-to-create-register-datasets.md).
+* [Create an Azure machine learning dataset](../machine-learning/v1/how-to-create-register-datasets.md).
partner-solutions Add Connectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/apache-kafka-confluent-cloud/add-connectors.md
Title: Add connectors for Confluent Cloud - Azure partner solutions
description: This article describes how to install connectors for Confluent Cloud that you use with Azure resources. Last updated 09/03/2021++ # Add connectors for Confluent Cloud
partner-solutions Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/apache-kafka-confluent-cloud/create-cli.md
Title: Create Apache Kafka for Confluent Cloud through Azure CLI - Azure partner
description: This article describes how to use the Azure CLI to create an instance of Apache Kafka for Confluent Cloud. Last updated 06/07/2021++ ms.devlang: azurecli
partner-solutions Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/apache-kafka-confluent-cloud/create-powershell.md
Title: Create Apache Kafka for Confluent Cloud through Azure PowerShell - Azure
description: This article describes how to use Azure PowerShell to create an instance of Apache Kafka for Confluent Cloud. Last updated 11/03/2021++
partner-solutions Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/apache-kafka-confluent-cloud/create.md
Title: Create Apache Kafka for Confluent Cloud through Azure portal - Azure part
description: This article describes how to use the Azure portal to create an instance of Apache Kafka for Confluent Cloud. Last updated 12/14/2021++
partner-solutions Get Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/apache-kafka-confluent-cloud/get-support.md
Title: Contact support for Confluent Cloud - Azure partner solutions
description: This article describes how to contact support for Confluent Cloud on the Azure portal. Last updated 06/07/2021++ # Get support for Confluent Cloud resource
partner-solutions Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/apache-kafka-confluent-cloud/manage.md
Title: Manage a Confluent Cloud - Azure partner solutions
description: This article describes management of a Confluent Cloud on the Azure portal. How to set up single sign-on, delete a Confluent organization, and get support. Last updated 06/07/2021++ # Manage the Confluent Cloud resource
partner-solutions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/apache-kafka-confluent-cloud/overview.md
Title: Apache Kafka on Confluent Cloud overview - Azure partner solutions
description: Learn about using Apache Kafka on Confluent Cloud in the Azure Marketplace. Last updated 02/22/2022++ # What is Apache Kafka for Confluent Cloud?
partner-solutions Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/apache-kafka-confluent-cloud/troubleshoot.md
Title: Troubleshooting Apache Kafka for Confluent Cloud - Azure partner solution
description: This article provides information about troubleshooting and frequently asked questions (FAQ) for Confluent Cloud on Azure. Last updated 02/18/2021++ # Troubleshooting Apache Kafka for Confluent Cloud solutions
partner-solutions Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/datadog/create.md
Title: Create Datadog - Azure partner solutions
description: This article describes how to use the Azure portal to create an instance of Datadog. Last updated 05/28/2021++
partner-solutions Get Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/datadog/get-support.md
Title: Get support for Datadog resource - Azure partner solutions
description: This article describes how to contact support for a Datadog resource. Last updated 05/28/2021++ # Get support for Datadog resource
partner-solutions Link To Existing Organization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/datadog/link-to-existing-organization.md
Title: Link to existing Datadog - Azure partner solutions
description: This article describes how to use the Azure portal to link to an existing instance of Datadog. Last updated 05/28/2021++
partner-solutions Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/datadog/manage.md
Title: Manage a Datadog resource - Azure partner solutions
description: This article describes management of a Datadog resource in the Azure portal. How to set up single sign-on, delete the resource, and get support. Last updated 05/28/2021++ # Manage the Datadog resource
partner-solutions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/datadog/overview.md
Title: Datadog overview - Azure partner solutions
description: Learn about using Datadog in the Azure Marketplace. Last updated 05/28/2021++ # What is Datadog?
partner-solutions Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/datadog/prerequisites.md
Title: Prerequisites for Datadog on Azure - Azure partner solutions
description: This article describes how to configure your Azure environment to create an instance of Datadog. Last updated 05/28/2021++ # Configure environment before Datadog deployment
partner-solutions Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/datadog/troubleshoot.md
Title: Troubleshooting for Datadog - Azure partner solutions
description: This article provides information about troubleshooting for Datadog on Azure. Last updated 05/28/2021++ # Fix common errors for Datadog on Azure
partner-solutions Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/elastic/create.md
Title: Create Elastic application - Azure partner solutions
description: This article describes how to use the Azure portal to create an instance of Elastic. Last updated 09/02/2021++
partner-solutions Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/elastic/manage.md
Title: Manage an Elastic integration with Azure - Azure partner solutions
description: This article describes management of Elastic on the Azure portal. How to configure diagnostic settings and delete the resource. Last updated 09/02/2021++ # Manage the Elastic integration with Azure
partner-solutions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/elastic/overview.md
Title: Elastic integration overview - Azure partner solutions
description: Learn about using the Elastic Cloud-Native Observability Platform in the Azure Marketplace. Last updated 09/02/2021++ # What is Elastic integration with Azure?
partner-solutions Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/elastic/troubleshoot.md
Title: Troubleshooting Elastic - Azure partner solutions
description: This article provides information about troubleshooting Elastic integration with Azure Last updated 09/02/2021++ # Troubleshooting Elastic integration with Azure
partner-solutions Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/logzio/create.md
Title: Create a Logz.io resource - Azure partner solutions
description: Quickstart article that describes how to create a Logz.io resource in Azure. Last updated 10/25/2021++
partner-solutions Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/logzio/manage.md
Title: Manage the Azure integration with Logz.io - Azure partner solutions
description: Learn how to manage the Azure integration with Logz.io. Last updated 10/25/2021++ # Manage the Logz.io integration in Azure
partner-solutions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/logzio/overview.md
Title: Logz.io overview - Azure partner solutions
description: Learn about Azure integration using Logz.io in Azure Marketplace. Last updated 10/25/2021++ # What is Logz.io integration with Azure?
partner-solutions Setup Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/logzio/setup-sso.md
Title: Single sign-on for Azure integration with Logz.io - Azure partner solutio
description: Learn about how to set up single sign-on for Azure integration with Logz.io. Last updated 10/25/2021++ # Set up Logz.io single sign-on
partner-solutions Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/logzio/troubleshoot.md
Title: Logz.io troubleshooting - Azure partner solutions
-description: Learn about how to troubleshoot the Azure integration with Logz.io.
+ Title: Troubleshooting Logz.io - Azure partner solutions
+description: This article describes how to troubleshoot Logz.io integration with Azure.
Previously updated : 10/25/2021 Last updated : 05/24/2022++
-# Troubleshoot Logz.io integration with Azure
+# Troubleshooting Logz.io integration with Azure
-This article describes how to troubleshoot the Azure integration with Logz.io.
+This article describes how to troubleshoot the Logz.io integration with Azure.
## Owner role needed to create resource
Use the following patterns to add new values:
- **Identifier**: `urn:auth0:logzio:<Application ID>` - **Reply URL**: `https://logzio.auth0.com/login/callback?connection=<Application ID>` ## Logs not being sent to Logz.io
To verify whether a resource is sending logs to Logz.io:
1. Go to [Azure diagnostic setting](../../azure-monitor/essentials/diagnostic-settings.md) for the specific resource. 1. Verify that there's a Logz.io diagnostic setting. +
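A hedged way to make the same check from the Azure CLI (the resource ID is a placeholder):

```azurecli
# List the diagnostic settings on a resource and confirm one of them targets Logz.io.
az monitor diagnostic-settings list --resource <resource-id> --output table
```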
+## Register resource provider
+
+You must register `Microsoft.Logz` in the Azure subscription that contains the Logz.io resource, and any subscriptions with resources that send data to Logz.io. For more information about troubleshooting resource provider registration, see [Resolve errors for resource provider registration](../../azure-resource-manager/troubleshooting/error-register-resource-provider.md).
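A minimal sketch of registering the provider with the Azure CLI, assuming you're signed in to the subscription that contains the Logz.io resource:

```azurecli
# Register the Microsoft.Logz resource provider and check its registration state.
az provider register --namespace Microsoft.Logz
az provider show --namespace Microsoft.Logz --query registrationState --output tsv
```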
## Limit reached in monitored resources Azure Monitor Diagnostics supports a maximum of five diagnostic settings on single resource or subscription. When you reach that limit, the resource will show **Limit reached** in **Monitored resources**. You can't add monitoring with Logz.io. ## VM extension installation failed A virtual machine (VM) can only be monitored by a single Logz.io account (main or sub). If you try to install the agent on a VM that is already monitored by another account, you see the following error: ## Purchase errors
partner-solutions Nginx Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/nginx/nginx-create.md
+
+ Title: Create an NGINX for Azure deployment
+description: This article describes how to use the Azure portal to create an instance of NGINX.
++++ Last updated : 05/12/2022+++
+# QuickStart: Get started with NGINX
+
+In this quickstart, you'll use the Azure Marketplace to find and create an instance of **NGINX for Azure**.
+
+## Create new NGINX deployment
+
+### Basics
+
+1. To create an NGINX deployment using the Marketplace, subscribe to **NGINX for Azure** in the Azure portal.
+
+1. Set the following values in the **Create NGINX Deployment** pane.
+
+ :::image type="content" source="media/nginx-create/nginx-create.png" alt-text="Screenshot of basics pane of the NGINX create experience.":::
+
+ | Property | Description |
+ |||
+ | Subscription | From the drop-down, select your Azure subscription where you have owner access |
+ | Resource group | Specify whether you want to create a new resource group or use an existing one. A resource group is a container that holds related resources for an Azure solution. For more information, see Azure Resource Group overview. |
+ | NGINX account name | Enter the name of the NGINX account you want to create |
+ | Location | Select West Central US. West Central US is the only Azure region supported by NGINX during preview |
+ | Plan | Specified based on the selected NGINX plan |
+ | Price | Pay As You Go |
+
+> [!NOTE]
+> West Central US is the only Azure region supported by NGINX during preview.
+
+### Networking
+
+1. After filling in the proper values, select **Next: Networking** to see the **Networking** screen. Specify the VNet and Subnet that are associated with the NGINX deployment.
+
+ :::image type="content" source="media/nginx-create/nginx-networking.png" alt-text="Screenshot of the networking pane in the NGINX create experience.":::
+
+1. Select the checkbox **I allow NGINX service provider to access the above virtual network for deployment** to indicate that you acknowledge access to your tenant to ensure VNet and NIC association.
+
+1. Select either public or private endpoints for the IP address selection.
+
+### Tags
+
+You can specify custom tags for the new NGINX resource in Azure by adding custom key-value pairs.
+
+1. Select **Tags**.
+
+ :::image type="content" source="media/nginx-create/nginx-custom-tags.png" alt-text="Screenshot showing the tags pane in the NGINX create experience.":::
+
+ | Property | Description |
+ |-| -|
+ |Name | Name of the tag corresponding to the Azure NGINX resource. |
+ | Value | Value of the tag corresponding to the Azure NGINX resource. |
+
+### Review and create
+
+1. Select **Next: Review + Create** to navigate to the final step for resource creation. When you get to the **Review + Create** page, all validations are run. At this point, review all the selections made in the Basics, Networking, and optionally Tags panes. You can also review the NGINX and Azure Marketplace terms and conditions.
+
+    :::image type="content" source="media/nginx-create/nginx-review-and-create.png" alt-text="Screenshot of the Review + Create pane for the NGINX resource.":::
+
+1. Once you've reviewed all the information, select **Create**. Azure now deploys the NGINX for Azure resource.
+
+ :::image type="content" source="media/nginx-create/nginx-deploy.png" alt-text="Screenshot showing NGINX deployment in process.":::
+
+## Deployment completed
+
+1. Once the create process is complete, select **Go to Resource** to navigate to the specific NGINX resource.
+
+ :::image type="content" source="media/nginx-create/nginx-overview-pane.png" alt-text="Screenshot of a completed NGINX deployment.":::
+
+1. Select **Overview** in the Resource menu to see information on the deployed resources.
+
+ :::image type="content" source="media/nginx-create/nginx-overview-pane.png" alt-text="Screenshot of information on the NGINX resource overview.":::
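If you prefer to verify from the command line, here's a hedged sketch using the Azure CLI; the resource group name is a placeholder.

```azurecli
# List the resources in the resource group to confirm the NGINX deployment exists.
az resource list --resource-group <resource-group> --output table
```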
+
+## Next steps
+
+- [Manage the NGINX resource](nginx-manage.md)
partner-solutions Nginx Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/nginx/nginx-manage.md
+
+ Title: Manage an NGINX resource through the Azure portal
+description: This article describes management functions for NGINX on the Azure portal.
+++++ Last updated : 05/12/2022++
+# Manage your NGINX for Azure (preview) integration through the portal
+
+Once your NGINX resource is created in the Azure portal, you might need to get information about it or change it. Here's a list of ways to manage your NGINX resource.
+
+- [Configure managed identity](#configure-managed-identity)
+- [Changing the configuration](#changing-the-configuration)
+- [Adding certificates](#adding-certificates)
+- [Send metrics to monitoring](#send-metrics-to-monitoring)
+- [Delete an NGINX deployment](#delete-an-nginx-deployment)
+- [GitHub integration](#github-integration)
+
+## Configure managed identity
+
+Add a new User Assigned Managed Identity.
+
+1. From the Resource menu, select your NGINX deployment.
+
+1. From **Settings** on the left, select **Identity**.
+
+ :::image type="content" source="media/nginx-manage/nginx-identity.png" alt-text="Screenshot showing how to add a managed identity to NGINX resource.":::
+
+1. To add a User Assigned identity, select **Add** in the working pane. You see a new pane on the right for adding **User assigned managed identities** that are part of the subscription. Select an identity and select **Add**.
+
+ :::image type="content" source="media/nginx-manage/nginx-user-assigned.png" alt-text="Screenshot after user assigned managed identity is added.":::
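The identity you assign must already exist. As a hedged sketch, a user-assigned managed identity can be created with the Azure CLI; the resource group and identity names are placeholders.

```azurecli
# Create a user-assigned managed identity that can then be added to the NGINX deployment.
az identity create --resource-group <resource-group> --name <identity-name>
```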
+
+## Changing the configuration
+
+1. From the Resource menu, select your NGINX deployment.
+
+1. Select **NGINX configuration** on the left.
+
+ :::image type="content" source="media/nginx-manage/nginx-configuration.png" alt-text="Screenshot resources for NGINX configuration settings.":::
+
+1. To upload an existing **NGINX config package**, type the path of the appropriate `.conf` file in **File path** in the working pane and select the **+** button for the config package.
+
+ :::image type="content" source="media/nginx-manage/nginx-config-path.png" alt-text="Screenshot of config (. C O N F) file for uploading.":::
+
+1. You see the contents of the file in the working pane. Select **Confirm** if correct.
+
+ :::image type="content" source="media/nginx-manage/nginx-config-upload.png" alt-text="Screenshot of upload confirmation for config file.":::
+
+1. To edit the config file within the Editor, select the pencil icon. When you're done editing, select **Submit**.
+
+   :::image type="content" source="media/nginx-manage/nginx-config-editor.png" alt-text="Screenshot of editor for config file with IntelliSense displayed.":::
+
+## Adding certificates
+
+You can add a certificate by uploading it to Azure Key Vault, and then associating the certificate with your deployment.
+
+1. From the Resource menu, select your NGINX deployment.
+
+1. Select **NGINX certificates** in **Settings** on the left.
+
+ :::image type="content" source="media/nginx-manage/nginx-certificates.png" alt-text="Screenshot of NGINX certificate uploading.":::
+
+1. Select **Add certificate**. You see an **Add certificate** pane on the right. Add the appropriate information.
+
+ :::image type="content" source="media/nginx-manage/nginx-add-certificate.png" alt-text="Screenshot of the add certificate pane.":::
+
+1. When you've added the needed information, select **Save**.
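+
+If the certificate isn't in Key Vault yet, you can upload it first with the Azure CLI. This is a hedged sketch; the vault name, certificate name, and PFX file path are placeholders.
+
+```azurecli-interactive
+# Import an existing PFX certificate into Azure Key Vault so it can be associated with the deployment.
+az keyvault certificate import \
+  --vault-name myKeyVault \
+  --name my-nginx-cert \
+  --file ./my-nginx-cert.pfx
+```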
+
+## Send metrics to monitoring
+
+1. From the Resource menu, select your NGINX deployment.
+
+1. Select **NGINX Monitoring** under **Settings** on the left.
+
+ :::image type="content" source="media/nginx-manage/nginx-monitoring.png" alt-text="Screenshot of NGINX monitoring in Azure metrics.":::
+
+1. Select **Send metrics to Azure Monitor** to enable metrics and select **Save**.
+
+   :::image type="content" source="media/nginx-manage/nginx-send-to-monitor.png" alt-text="Screenshot of NGINX metrics being sent to Azure Monitor.":::
+
+## Delete an NGINX deployment
+
+To delete a deployment of NGINX for Azure (preview):
+
+1. From the Resource menu, select your NGINX deployment.
+
+1. Select **Overview** on the left.
+
+1. Select **Delete**.
+
+   :::image type="content" source="media/nginx-manage/nginx-delete-deployment.png" alt-text="Screenshot showing how to delete an NGINX resource.":::
+
+1. Confirm that you want to delete the NGINX resource.
+
+ :::image type="content" source="media/nginx-manage/nginx-confirm-delete.png" alt-text="Screenshot showing the final confirmation of delete for NGINX resource.":::
+
+1. Select **Delete**.
+
+After the account is deleted, logs are no longer sent to NGINX, and all billing stops for NGINX through Azure Marketplace.
+
+> [!NOTE]
+> The delete button on the main account is only activated after all the sub-accounts mapped to the main account are deleted. Delete any sub-accounts before you delete the main account.
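+
+If you prefer to script the cleanup, the following Azure CLI sketch deletes the deployment with the generic resource command. It's hedged: the `Nginx.NginxPlus/nginxDeployments` resource type and the resource names are assumptions, not values from this article.
+
+```azurecli-interactive
+# Delete an NGINX for Azure deployment (resource type and names assumed).
+az resource delete \
+  --resource-group myResourceGroup \
+  --resource-type "Nginx.NginxPlus/nginxDeployments" \
+  --name myNginxDeployment
+```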
+
+## GitHub integration
+
+Enable CI/CD deployments via the GitHub Actions integration.
+
+<!-- <<Add screenshot for GitHub integration>> -->
+
+## Next steps
+
+For help with troubleshooting, see [Troubleshooting NGINX integration with Azure](nginx-troubleshoot.md).
partner-solutions Nginx Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/nginx/nginx-overview.md
+
+ Title: What is NGINX for Azure
+description: Learn about using NGINX for Azure (preview) in the Azure Marketplace.
+++++ Last updated : 05/12/2022++
+# What is NGINX for Azure (preview)?
+
+In this article, you learn how to enable deeper integration of the **NGINX** SaaS service with Azure.
+
+NGINX for Azure lets you run NGINX as a reverse proxy within your Azure environment, so you can more easily deliver and manage traffic for your Azure workloads.
+
+The NGINX for Azure (preview) offering in the Azure Marketplace allows you to manage NGINX in the Azure portal as an integrated service. You can implement NGINX as a reverse proxy for your cloud workloads through a streamlined workflow.
+
+You can set up the NGINX resources through a resource provider named Nginx.NginxPlus. You can create and manage NGINX resources through the Azure portal. NGINX owns and runs the software as a service (SaaS) application including the NGINX accounts created.
+
+Here are the key capabilities provided by the NGINX for Azure (preview) integration:
+
+- **Seamless onboarding** of NGINX SaaS software as an integrated service on Azure
+- **Unified billing** of NGINX SaaS through the Azure monthly bill
+- **Single sign-on to NGINX** - No separate sign-up needed from the NGINX portal
+- **Lift and shift config files** - Ability to use existing configuration (`.conf`) files for the SaaS deployment
+
+## Pre-requisites
+
+### Subscription owner
+
+The NGINX for Azure (preview) integration can only be set up by users who have Owner access on the Azure subscription. Ensure you have the appropriate Owner access before starting to set up this integration.
+
+## Find NGINX for Azure (preview) in the Azure Marketplace
+
+1. Navigate to the Azure Marketplace page.
+
+1. Search for _NGINX for Azure_ in the listings.
+
+1. In the plan overview pane, select **Setup and Subscribe**. The **Create new NGINX account** window opens.
+
+## Next steps
+
+To create an instance of NGINX, see [QuickStart: Get started with NGINX](nginx-create.md).
partner-solutions Nginx Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/nginx/nginx-troubleshoot.md
+
+ Title: Troubleshooting your NGINX for Azure deployment
+description: This article provides information about getting support and troubleshooting an NGINX integration with Azure.
+++++ Last updated : 05/12/2022++
+# Troubleshooting NGINX integration with Azure
+
+You can get support for your NGINX deployment through a **New Support request**. The procedure for creating the request is described below. In addition, we've included troubleshooting guidance for other problems you might experience in creating and using an NGINX deployment.
+
+## Getting support
+
+1. To contact support about an Azure NGINX integration, open your NGINX Deployment in the portal.
+
+1. Select **New Support request** in the Resource menu on the left.
+
+1. Select **Raise a support ticket** and fill out the details.
+
+   :::image type="content" source="media/nginx-troubleshoot/nginx-support-request.png" alt-text="Screenshot of a new NGINX support ticket.":::
+
+## Troubleshooting
+
+### Unable to create an NGINX resource as not a subscription owner
+
+The NGINX for Azure integration can only be set up by users who have Owner access on the Azure subscription. Ensure you have the appropriate Owner access before starting to set up this integration.
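+
+To check whether an account already has the Owner role before starting, you can run a role assignment query from the Azure CLI. This is a hedged sketch; the user name and subscription ID are placeholders.
+
+```azurecli-interactive
+# List Owner role assignments for an account at the subscription scope (values are placeholders).
+az role assignment list \
+  --assignee "user@contoso.com" \
+  --role "Owner" \
+  --scope "/subscriptions/<subscription-id>" \
+  --output table
+```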
+
+## Next steps
+
+Learn about [managing your instance](nginx-manage.md) of NGINX.
partner-solutions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/overview.md
Title: Offerings from partners - Azure partner solutions description: Learn about solutions offered by partners on Azure. Previously updated : 05/25/2021+ Last updated : 05/12/2022++ # Extend Azure with solutions from partners
Partner solutions are available through the Marketplace.
| [Datadog](./datadog/overview.md) | Monitor your servers, clouds, metrics, and apps in one place. | | [Elastic](./elastic/overview.md) | Monitor the health and performance of your Azure environment. | | [Logz.io](./logzio/overview.md) | Monitor the health and performance of your Azure environment. |
+| [NGINX for Azure (preview)](./nginx/nginx-overview.md) | Use NGINX for Azure (preview) as a reverse proxy within your Azure environment. |
postgresql Quickstart Create Server Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/quickstart-create-server-arm-template.md
Previously updated : 11/30/2021 Last updated : 05/12/2022 # Quickstart: Use an ARM template to create an Azure Database for PostgreSQL - Flexible Server -- Flexible server is a managed service that you use to run, manage, and scale highly available PostgreSQL databases in the cloud. You can use an Azure Resource Manager template (ARM template) to provision a PostgreSQL Flexible Server to deploy multiple servers or multiple databases on a server. [!INCLUDE [About Azure Resource Manager](../../../includes/resource-manager-quickstart-introduction.md)]
Create a _postgres-flexible-server-template.json_ file and copy the following JS
```json {
- "$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
- "parameters": {
- "administratorLogin": {
- "defaultValue": "csadmin",
- "type": "String"
- },
- "administratorLoginPassword": {
- "type": "SecureString"
- },
- "location": {
- "defaultValue": "eastus",
- "type": "String"
- },
- "serverName": {
- "type": "String"
- },
- "serverEdition": {
- "defaultValue": "GeneralPurpose",
- "type": "String"
- },
- "skuSizeGB": {
- "defaultValue": 128,
- "type": "Int"
- },
- "dbInstanceType": {
- "defaultValue": "Standard_D4ds_v4",
- "type": "String"
- },
- "haMode": {
- "defaultValue": "ZoneRedundant",
- "type": "string"
- },
- "availabilityZone": {
- "defaultValue": "1",
- "type": "String"
- },
- "version": {
- "defaultValue": "12",
- "type": "String"
- },
- "tags": {
- "defaultValue": {},
- "type": "Object"
- },
- "firewallRules": {
- "defaultValue": {},
- "type": "Object"
- },
- "backupRetentionDays": {
- "defaultValue": 14,
- "type": "Int"
- },
- "geoRedundantBackup": {
- "defaultValue": "Disabled",
- "type": "String"
- },
- "virtualNetworkExternalId": {
- "defaultValue": "",
- "type": "String"
- },
- "subnetName": {
- "defaultValue": "",
- "type": "String"
- },
- "privateDnsZoneArmResourceId": {
- "defaultValue": "",
- "type": "String"
- },
- },
- "variables": {
- "api": "2021-06-01",
- "publicNetworkAccess": "[if(empty(parameters('virtualNetworkExternalId')), 'Enabled', 'Disabled')]"
- },
+ "parameters": {
+ "administratorLogin": {
+ "type": "string",
+ },
+ "administratorLoginPassword": {
+ "type": "secureString"
+ },
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]"
+ },
+ "serverName": {
+ "type": "string"
+ },
+ "serverEdition": {
+ "type": "string",
+ "defaultValue": "GeneralPurpose"
+ },
+ "skuSizeGB": {
+ "type": "int",
+ "defaultValue": 128
+ },
+ "dbInstanceType": {
+ "type": "string",
+ "defaultValue": "Standard_D4ds_v4"
+ },
+ "haMode": {
+ "type": "string",
+ "defaultValue": "ZoneRedundant"
+ },
+ "availabilityZone": {
+ "type": "string",
+ "defaultValue": "1"
+ },
+ "version": {
+ "type": "string",
+ "defaultValue": "12"
+ },
+ "virtualNetworkExternalId": {
+ "type": "string",
+ "defaultValue": ""
+ },
+ "subnetName": {
+ "type": "string",
+ "defaultValue": ""
+ },
+ "privateDnsZoneArmResourceId": {
+ "type": "string",
+ "defaultValue": ""
+ }
+ },
"resources": [ { "type": "Microsoft.DBforPostgreSQL/flexibleServers",
- "apiVersion": "[variables('api')]",
+ "apiVersion": "2021-06-01",
"name": "[parameters('serverName')]", "location": "[parameters('location')]",
- "sku": {
- "name": "[parameters('dbInstanceType')]",
- "tier": "[parameters('serverEdition')]"
- },
- "tags": "[parameters('tags')]",
- "properties": {
- "version": "[parameters('version')]",
- "administratorLogin": "[parameters('administratorLogin')]",
- "administratorLoginPassword": "[parameters('administratorLoginPassword')]",
- "network": {
- "publicNetworkAccess": "[variables('publicNetworkAccess')]",
- "delegatedSubnetResourceId": "[if(empty(parameters('virtualNetworkExternalId')), json('null'), json(concat(parameters('virtualNetworkExternalId'), '/subnets/' , parameters('subnetName'))))]",
- "privateDnsZoneArmResourceId": "[if(empty(parameters('virtualNetworkExternalId')), json('null'), parameters('privateDnsZoneArmResourceId'))]"
- },
- "highAvailability": {
- "mode": "[parameters('haMode')]"
- },
- "storage": {
- "storageSizeGB": "[parameters('skuSizeGB')]"
- },
- "backup": {
- "backupRetentionDays": 7,
- "geoRedundantBackup": "Disabled"
- },
- "availabilityZone": "[parameters('availabilityZone')]"
- }
+ "sku": {
+ "name": "[parameters('dbInstanceType')]",
+ "tier": "[parameters('serverEdition')]"
+ },
+ "properties": {
+ "version": "[parameters('version')]",
+ "administratorLogin": "[parameters('administratorLogin')]",
+ "administratorLoginPassword": "[parameters('administratorLoginPassword')]",
+ "network": {
+ "delegatedSubnetResourceId": "[if(empty(parameters('virtualNetworkExternalId')), json('null'), json(format('{0}/subnets/{1}', parameters('virtualNetworkExternalId'), parameters('subnetName'))))]",
+ "privateDnsZoneArmResourceId": "[if(empty(parameters('virtualNetworkExternalId')), json('null'), parameters('privateDnsZoneArmResourceId'))]"
+ },
+ "highAvailability": {
+ "mode": "[parameters('haMode')]"
+ },
+ "storage": {
+ "storageSizeGB": "[parameters('skuSizeGB')]"
+ },
+ "backup": {
+ "backupRetentionDays": 7,
+ "geoRedundantBackup": "Disabled"
+ },
+ "availabilityZone": "[parameters('availabilityZone')]"
+ }
} ] }- ``` These resources are defined in the template: -- Microsoft.DBforPostgreSQL/flexibleServers
+- [Microsoft.DBforPostgreSQL/flexibleServers](/azure/templates/microsoft.dbforpostgresql/flexibleservers?tabs=json)
## Deploy the template
$adminPassword = Read-Host -Prompt "Enter the administrator password" -AsSecureS
New-AzResourceGroup -Name $resourceGroupName -Location $location # Use this command when you need to create a new resource group for your deployment New-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName `
- -TemplateFile "D:\Azure\Templates\EngineeringSite.json
+ -TemplateFile "postgres-flexible-server-template.json" `
-serverName $serverName ` -administratorLogin $adminUser ` -administratorLoginPassword $adminPassword Read-Host -Prompt "Press [ENTER] to continue ..." ```- ## Review deployed resources
az resource show --resource-group $resourcegroupName --name $serverName --resour
- ## Clean up resources Keep this resource group, server, and single database if you want to go to the [Next steps](#next-steps). The next steps show you how to connect and query your database using different methods.
Remove-AzResourceGroup -Name ExampleResourceGroup
```azurecli-interactive az group delete --name ExampleResourceGroup ```--++ ## Next steps
postgresql Quickstart Create Server Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/quickstart-create-server-bicep.md
+
+ Title: 'Quickstart: Create an Azure DB for PostgreSQL Flexible Server - Bicep'
+description: In this Quickstart, learn how to create an Azure Database for PostgreSQL Flexible server using Bicep.
+++++ Last updated : 05/12/2022++
+# Quickstart: Use Bicep to create an Azure Database for PostgreSQL - Flexible Server
+
+Flexible server is a managed service that you use to run, manage, and scale highly available PostgreSQL databases in the cloud. You can use [Bicep](../../azure-resource-manager/bicep/overview.md) to provision a PostgreSQL Flexible Server to deploy multiple servers or multiple databases on a server.
++
+## Prerequisites
+
+An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
+
+## Review the Bicep
+
+An Azure Database for PostgreSQL server is the parent resource for one or more databases within a region. It provides the scope for management policies that apply to its databases: login, firewall, users, roles, and configurations.
+
+Create a _postgres-flexible-server.bicep_ file and copy the following Bicep into it.
+
+```bicep
+param administratorLogin string
+
+@secure()
+param administratorLoginPassword string
+param location string = resourceGroup().location
+param serverName string
+param serverEdition string = 'GeneralPurpose'
+param skuSizeGB int = 128
+param dbInstanceType string = 'Standard_D4ds_v4'
+param haMode string = 'ZoneRedundant'
+param availabilityZone string = '1'
+param version string = '12'
+param virtualNetworkExternalId string = ''
+param subnetName string = ''
+param privateDnsZoneArmResourceId string = ''
+
+resource serverName_resource 'Microsoft.DBforPostgreSQL/flexibleServers@2021-06-01' = {
+ name: serverName
+ location: location
+ sku: {
+ name: dbInstanceType
+ tier: serverEdition
+ }
+ properties: {
+ version: version
+ administratorLogin: administratorLogin
+ administratorLoginPassword: administratorLoginPassword
+ network: {
+ delegatedSubnetResourceId: (empty(virtualNetworkExternalId) ? json('null') : json('${virtualNetworkExternalId}/subnets/${subnetName}'))
+ privateDnsZoneArmResourceId: (empty(virtualNetworkExternalId) ? json('null') : privateDnsZoneArmResourceId)
+ }
+ highAvailability: {
+ mode: haMode
+ }
+ storage: {
+ storageSizeGB: skuSizeGB
+ }
+ backup: {
+ backupRetentionDays: 7
+ geoRedundantBackup: 'Disabled'
+ }
+ availabilityZone: availabilityZone
+ }
+}
+```
+
+These resources are defined in the Bicep file:
+
+- [Microsoft.DBforPostgreSQL/flexibleServers](/azure/templates/microsoft.dbforpostgresql/flexibleservers?tabs=bicep)
+
+## Deploy the Bicep file
+
+Select **Try it** from the following PowerShell code block to open Azure Cloud Shell.
+
+```azurepowershell-interactive
+$serverName = Read-Host -Prompt "Enter a name for the new Azure Database for PostgreSQL server"
+$resourceGroupName = Read-Host -Prompt "Enter a name for the new resource group where the server will exist"
+$location = Read-Host -Prompt "Enter an Azure region (for example, centralus) for the resource group"
+$adminUser = Read-Host -Prompt "Enter the Azure Database for PostgreSQL server's administrator account name"
+$adminPassword = Read-Host -Prompt "Enter the administrator password" -AsSecureString
+
+New-AzResourceGroup -Name $resourceGroupName -Location $location # Use this command when you need to create a new resource group for your deployment
+New-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName `
+ -TemplateFile "postgres-flexible-server.bicep" `
+ -serverName $serverName `
+ -administratorLogin $adminUser `
+ -administratorLoginPassword $adminPassword
+
+Read-Host -Prompt "Press [ENTER] to continue ..."
+```
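+
+If you'd rather use Azure CLI than PowerShell, a hedged equivalent is sketched below. The parameter names come from the Bicep file above; the resource group name, location, server name, and credential values are placeholders you should replace.
+
+```azurecli-interactive
+# Create a resource group and deploy the Bicep file (placeholder values).
+az group create --name exampleRG --location eastus
+
+az deployment group create \
+  --resource-group exampleRG \
+  --template-file postgres-flexible-server.bicep \
+  --parameters serverName=exampledemoserver administratorLogin=exampleadmin administratorLoginPassword='<secure-password>'
+```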
+
+## Review deployed resources
+
+Follow these steps to verify if your server was created in Azure.
+
+# [Azure portal](#tab/portal)
+
+1. In the [Azure portal](https://portal.azure.com), search for and select **Azure Database for PostgreSQL Flexible Servers**.
+1. In the database list, select your new server to view the **Overview** page to manage the server.
+
+# [PowerShell](#tab/PowerShell)
+
+You'll have to enter the name of the new server to view the details of your Azure Database for PostgreSQL Flexible server.
+
+```azurepowershell-interactive
+$serverName = Read-Host -Prompt "Enter the name of your Azure Database for PostgreSQL server"
+Get-AzResource -ResourceType "Microsoft.DBforPostgreSQL/flexibleServers" -Name $serverName | ft
+Write-Host "Press [ENTER] to continue..."
+```
+
+# [CLI](#tab/CLI)
+
+You'll have to enter the name and the resource group of the new server to view details about your Azure Database for PostgreSQL Flexible Server.
+
+```azurecli-interactive
+echo "Enter your Azure Database for PostgreSQL Flexible Server name:" &&
+read serverName &&
+echo "Enter the resource group where the Azure Database for PostgreSQL Flexible Server exists:" &&
+read resourcegroupName &&
+az resource show --resource-group $resourcegroupName --name $serverName --resource-type "Microsoft.DBforPostgreSQL/flexibleServers"
+```
+++
+## Clean up resources
+
+Keep this resource group, server, and single database if you want to go to the [Next steps](#next-steps). The next steps show you how to connect and query your database using different methods.
+
+To delete the resource group:
+
+# [Portal](#tab/azure-portal)
+
+In the [portal](https://portal.azure.com), select the resource group you want to delete.
+
+1. Select **Delete resource group**.
+1. To confirm the deletion, type the name of the resource group.
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name ExampleResourceGroup
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli-interactive
+az group delete --name ExampleResourceGroup
+```
+++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Migrate your database using dump and restore](../howto-migrate-using-dump-and-restore.md)
postgresql Howto Restart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-restart.md
Title: Restart server - Hyperscale (Citus) - Azure Database for PostgreSQL
-description: How to restart the database in Azure Database for PostgreSQL - Hyperscale (Citus)
+description: Learn how to restart all nodes in a Hyperscale (Citus) server group from the Azure portal.
+
Last updated 05/06/2022
# Restart Azure Database for PostgreSQL - Hyperscale (Citus)
-If you'd like to restart your Hyperscale (Citus) server group, you can do it
-from the group's **Overview** page in the Azure portal. Select the **Restart**
-button on the top bar. A confirmation dialog will appear. Select **Restart
-all** to continue.
+You can restart your Hyperscale (Citus) server group from the Azure portal. Restarting the server group applies to all nodes; you can't selectively restart
+individual nodes. The restart applies to all PostgreSQL server processes in the nodes. Any applications attempting to use the database will experience
+connectivity downtime while the restart happens.
-> [!NOTE]
-> If the Restart button is not yet present for your server group, please open
-> an Azure support request to restart the server group.
+1. In the Azure portal, navigate to the server group's **Overview** page.
-Restarting the server group applies to all nodes; you can't selectively restart
-individual nodes. The restart applies to the PostgreSQL server processes in the
-nodes. Any applications attempting to use the database will experience
-connectivity downtime while the restart happens.
+1. Select **Restart** on the top bar.
+ > [!NOTE]
+ > If the Restart button is not yet present for your server group, please open
+ > an Azure support request to restart the server group.
+
+1. In the confirmation dialog, select **Restart all** to continue.
**Next steps**
postgresql Howto Scale Rebalance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-scale-rebalance.md
Title: Rebalance shards - Hyperscale (Citus) - Azure Database for PostgreSQL
-description: Distribute shards evenly across servers for better performance
+description: Learn how to use the Azure portal to rebalance data in a server group using the Shard rebalancer.
+
Last updated 07/20/2021
# Rebalance shards in Hyperscale (Citus) server group
-To take advantage of newly added nodes you must rebalance distributed table
-[shards](concepts-distributed-data.md#shards), which means moving
-some shards from existing nodes to the new ones. Hyperscale (Citus) offers
-zero-downtime rebalancing, meaning queries can run without interruption during
+To take advantage of newly added nodes, rebalance distributed table
+[shards](concepts-distributed-data.md#shards). Rebalancing moves shards from existing nodes to the new ones. Hyperscale (Citus) offers
+zero-downtime rebalancing, meaning queries continue without interruption during
shard rebalancing.
-## Determine if the server group needs a rebalance
+## Determine if the server group is balanced
-The Azure portal can show you whether data is distributed equally between
-worker nodes in a server group. To see it, go to the **Shard rebalancer** page
-in the **Server group management** menu. If data is skewed between workers,
-you'll see the message **Rebalancing is recommended**, along with a list of the
-size of each node.
+The Azure portal shows whether data is distributed equally between
+worker nodes in a server group. From the **Server group management** menu, select **Shard rebalancer**.
-If data is already balanced, you'll see the message **Rebalancing is not
-recommended at this time**.
+- If data is skewed between workers: You'll see the message, **Rebalancing is recommended** and a list of the size of each node.
-## Run the shard rebalancer
+- If data is balanced: You'll see the message, **Rebalancing is not recommended at this time**.
-To start the shard rebalancer, you need to connect to the coordinator node of
-the server group and run the
-[rebalance_table_shards](reference-functions.md#rebalance_table_shards)
-SQL function on distributed tables. The function rebalances all tables in the
+## Run the Shard rebalancer
+
+To start the Shard rebalancer, connect to the coordinator node of the server group and then run the [rebalance_table_shards](reference-functions.md#rebalance_table_shards) SQL function on distributed tables.
+
+The function rebalances all tables in the
[colocation](concepts-colocation.md) group of the table named in its
-argument. Thus you do not have to call the function for every distributed
-table, just call it on a representative table from each colocation group.
+argument. You don't have to call the function for every distributed
+table. Instead, call it on a representative table from each colocation group.
```sql SELECT rebalance_table_shards('distributed_table_name');
SELECT rebalance_table_shards('distributed_table_name');
## Monitor rebalance progress
-To watch the rebalancer after you start it, go back to the Azure portal. Open
-the **Shard rebalancer** page in **Server group management**. It will show the
-message **Rebalancing is underway** along with two tables.
+You can view the rebalance progress from the Azure portal. From the **Server group management** menu, select **Shard rebalancer**. The
+message **Rebalancing is underway** displays with two tables:
-The first table shows the number of shards moving into or out of a node, for
-example, "6 of 24 moved in." The second table shows progress per database
-table: name, shard count affected, data size affected, and rebalancing status.
+- The first table shows the number of shards moving into or out of a node. For
+example, "6 of 24 moved in."
+- The second table shows progress per database table: name, shard count affected, data size affected, and rebalancing status.
-Select the **Refresh** button to update the page. When rebalancing is complete,
-it will again say **Rebalancing is not recommended at this time**.
+Select **Refresh** to update the page. When rebalancing is complete, you'll see the message **Rebalancing is not recommended at this time**.
## Next steps
postgresql Concepts Single To Flexible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-single-to-flexible.md
+
+ Title: "Migrate from Azure Database for PostgreSQL Single Server to Flexible Server - Concepts"
+
+description: Concepts about migrating your Single server to Azure database for PostgreSQL Flexible server.
++++ Last updated : 05/11/2022+++
+# Migrate from Azure Database for PostgreSQL Single Server to Flexible Server (Preview)
+
+>[!NOTE]
+> Single Server to Flexible Server migration feature is in public preview.
+
+Azure Database for PostgreSQL Flexible Server provides zone-redundant high availability, control over price, and control over maintenance windows. The Single to Flexible Server migration feature enables customers to migrate their databases from Single Server to Flexible Server. See this [documentation](../flexible-server/concepts-compare-single-server-flexible-server.md) to understand the differences between Single and Flexible servers. Customers can initiate migrations for multiple servers and databases in a repeatable fashion using this migration feature. The feature automates most of the steps needed to do the migration, making the migration journey across Azure platforms as seamless as possible. The feature is provided to customers at no cost.
+
+Single to Flexible server migration is enabled in **Preview** in Australia Southeast, Canada Central, Canada East, East Asia, North Central US, South Central US, Switzerland North, UAE North, UK South, UK West, West US, and Central US.
+
+## Overview
+
+Single to Flexible server migration feature provides an inline experience to migrate databases from Single Server (source) to Flexible Server (target).
+
+You choose the source server and can select up to **8** databases from it. This limitation is per migration task. The migration feature automates the following steps:
+
+1. Creates the migration infrastructure in the region of the target flexible server
+2. Creates public IP address and attaches it to the migration infrastructure
+3. Allow-lists the migration infrastructure's IP address on the firewall rules of both source and target servers
+4. Creates a migration project with both source and target types as Azure database for PostgreSQL
+5. Creates a migration activity to migrate the databases specified by the user from source to target.
+6. Migrates schema from source to target
+7. Creates databases with the same name on the target Flexible server
+8. Migrates data from source to target
+
+The following is the flow of the Single to Flexible migration feature.
+
+**Steps:**
+1. Create a Flex PG server
+2. Invoke migration
+3. Migration infrastructure provisioned (DMS)
+4. Initiates migration - (4a) Initial dump/restore (online & offline) (4b) streaming the changes (online only)
+5. Cutover to the target
+
+The migration feature is exposed through the **Azure portal** and via easy-to-use **Azure CLI** commands. It allows you to create migrations, list migrations, display migration details, modify the state of a migration, and delete migrations.
+
+## Migration modes comparison
+
+Single to Flexible Server migration supports online and offline modes of migration. The online option provides reduced-downtime migration with logical replication restrictions, while the offline option offers a simpler migration but may incur extended downtime depending on the size of the databases.
+
+The following table summarizes the differences between these two modes of migration.
+
+| Capability | Online | Offline |
+|:|:-|:--|
+| Database availability for reads during migration | Available | Available |
+| Database availability for writes during migration | Available | Generally, not recommended. Any writes initiated after the migration starts aren't captured or migrated |
+| Application Suitability | Applications that need maximum uptime | Applications that can afford a planned downtime window |
+| Environment Suitability | Production environments | Usually Development, Testing environments and some production that can afford downtime |
+| Suitability for Write-heavy workloads | Suitable but expected to reduce the workload during migration | Not Applicable. Writes at source after migration begins are not replicated to target. |
+| Manual Cutover | Required | Not required |
+| Downtime Required | Less | More |
+| Logical replication limitations | Applicable | Not Applicable |
+| Migration time required | Depends on Database size and the write activity until cutover | Depends on Database size |
+
+**Migration steps involved for Offline mode** = Dump of the source Single Server database followed by the Restore at the target Flexible server.
+
+The following table shows the approximate time taken to perform offline migrations for databases of various sizes.
+
+>[!NOTE]
+> Add ~15 minutes for the migration infrastructure to get deployed for each migration task, where each task can migrate up to 8 databases.
+
+| Database Size | Approximate Time Taken (HH:MM) |
+|:|:-|
+| 1 GB | 00:01 |
+| 5 GB | 00:05 |
+| 10 GB | 00:10 |
+| 50 GB | 00:45 |
+| 100 GB | 06:00 |
+| 500 GB | 08:00 |
+| 1000 GB | 09:30 |
+
+**Migration steps involved for Online mode** = Dump of the source Single Server database(s), Restore of that dump in the target Flexible server, followed by Replication of ongoing changes (change data capture using logical decoding).
+
+The time taken for an online migration to complete is dependent on the incoming writes to the source server. The higher the write workload is on the source, the more time it takes for the data to be replicated to the target flexible server.
+
+Based on the above differences, pick the mode that best works for your workloads.
+++
+## Migration steps
+
+### Pre-requisites
+
+Follow the steps provided in this section before you get started with the single to flexible server migration feature.
+
+- **Target Server Creation** - You need to create the target PostgreSQL flexible server before using the migration feature. Use the creation [QuickStart guide](../flexible-server/quickstart-create-server-portal.md) to create one.
+
+- **Source Server pre-requisites** - You must [enable logical replication](./concepts-logical.md) on the source server. A CLI sketch for this step follows this list.
+
+ :::image type="content" source="./media/concepts-single-to-flexible/logical-replication-support.png" alt-text="Screenshot of logical replication support in Azure portal." lightbox="./media/concepts-single-to-flexible/logical-replication-support.png":::
+
+>[!NOTE]
+> Enabling logical replication will require a server reboot for the change to take effect.
+
+- **Azure Active Directory App set up** - The Azure AD app is a critical component of the migration feature. It helps with role-based access control, as the migration feature needs access to both the source and target servers. See [How to setup and configure Azure AD App](./how-to-setup-azure-ad-app-portal.md) for the step-by-step process.
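+
+For the source server pre-requisite above, one way to enable logical replication from the Azure CLI is sketched below. This is a hedged example: it assumes the Single Server `azure.replication_support` server parameter described in the logical replication article linked in the list, and the server and resource group names are placeholders. The restart causes a short downtime.
+
+```azurecli-interactive
+# Set the Single Server replication support parameter to logical, then restart for it to take effect.
+az postgres server configuration set \
+  --resource-group myResourceGroup \
+  --server-name mysingleserver \
+  --name azure.replication_support \
+  --value logical
+
+az postgres server restart \
+  --resource-group myResourceGroup \
+  --name mysingleserver
+```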
+
+### Data and schema migration
+
+Once all these pre-requisites are taken care of, you can do the migration. This automated step involves schema and data migration using Azure portal or Azure CLI.
+
+- [Migrate using Azure portal](./how-to-migrate-single-to-flexible-portal.md)
+- [Migrate using Azure CLI](./how-to-migrate-single-to-flexible-cli.md)
+
+### Post migration
+
+- All the resources created by this migration tool will be automatically cleaned up irrespective of whether the migration has **succeeded/failed/cancelled**. There is no action required from you.
+
+- If your migration has failed and you want to retry the migration, then you need to create a new migration task with a different name and retry the operation.
+
+- If you have more than eight databases on your single server and if you want to migrate them all, then it is recommended to create multiple migration tasks with each task migrating up to eight databases.
+
+- The migration doesn't move the database users and roles of the source server. These have to be manually created and applied to the target server after migration.
+
+- For security reasons, it is highly recommended to delete the Azure Active Directory app once the migration completes.
+
+- After validating the data and making your application point to the flexible server, you can consider deleting your single server.
+
+## Limitations
+
+### Size limitations
+
+- Databases of sizes up to 1 TB can be migrated using this feature. To migrate larger databases or heavy write workloads, reach out to your account team or reach us at AskAzureDBforPGS2F@microsoft.com.
+
+- In one migration attempt, you can migrate up to eight user databases from a single server to flexible server. In case you have more databases to migrate, you can create multiple migrations between the same single and flexible servers.
+
+### Performance limitations
+
+- The migration infrastructure is deployed on a 4 vCore VM which may limit the migration performance.
+
+- The deployment of migration infrastructure takes ~10-15 minutes before the actual data migration starts - irrespective of the size of data or the migration mode (online or offline).
+
+### Replication limitations
+
+- Single to Flexible Server migration feature uses logical decoding feature of PostgreSQL to perform the online migration and it comes with the following limitations. See PostgreSQL documentation for [logical replication limitations](https://www.postgresql.org/docs/10/logical-replication-restrictions.html).
+ - **DDL commands** are not replicated.
+ - **Sequence** data is not replicated.
+  - **Truncate** commands are not replicated. (**Workaround**: use DELETE instead of TRUNCATE. To avoid accidental TRUNCATE invocations, you can revoke the TRUNCATE privilege from tables.)
+
+ - Views, Materialized views, partition root tables and foreign tables will not be migrated.
+
+- Logical decoding will use resources in the source single server. Consider reducing the workload or plan to scale CPU/memory resources at the Source Single Server during the migration.
+
+### Other limitations
+
+- The migration feature migrates only data and schema of the single server databases to flexible server. It does not migrate other features such as server parameters, connection security details, firewall rules, users, roles and permissions. In other words, everything except data and schema must be manually configured in the target flexible server.
+
+- It does not validate the data in flexible server post migration. The customers must manually do this.
+
+- The migration tool migrates only user databases, including the postgres database, and not system/maintenance databases.
+
+- For failed migrations, there's no option to retry the same migration task. A new migration task with a unique name has to be created.
+
+- The migration feature does not include assessment of your single server.
+
+## Best practices
+
+- As part of discovery and assessment, take the server SKU, CPU usage, storage, database sizes, and extensions usage as some of the critical data to help with migrations.
+- Plan the mode of migration for each database. For less complex migrations and smaller databases, consider offline mode of migrations.
+- Batch similar sized databases in a migration task.
+- Perform large database migrations with one or two databases at a time to avoid source-side load and migration failures.
+- Perform test migrations before migrating for production.
+  - **Testing migrations** is a very important aspect of database migration to ensure that all aspects of the migration are taken care of, including application testing. The best practice is to begin by running a migration entirely for testing purposes. Start a migration, and after it enters the continuous replication (CDC) phase with minimal lag, make your flexible server the primary database server and use it for testing the application to ensure expected performance and results. If you're migrating to a higher Postgres version, test your application for compatibility.
+
+ - **Production migrations** - Once testing is completed, you can migrate the production databases. At this point you need to finalize the day and time of production migration. Ideally, there is low application use at this time. In addition, all stakeholders that need to be involved should be available and ready. The production migration would require close monitoring. It is important that for an online migration, the replication is completed before performing the cutover to prevent data loss.
+
+- Cut over all dependent applications to access the new primary database and open the applications for production usage.
+- Once the application starts running on flexible server, monitor the database performance closely to see if performance tuning is required.
+
+## Next steps
+
+- [Migrate to Flexible Server using Azure portal](./how-to-migrate-single-to-flexible-portal.md).
+- [Migrate to Flexible Server using Azure CLI](./how-to-migrate-single-to-flexible-cli.md)
postgresql Connect Rust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/connect-rust.md
Title: 'Quickstart: Connect with Rust - Azure Database for PostgreSQL - Single Server'
-description: This quickstart provides Rust code samples that you can use to connect and query data from Azure Database for PostgreSQL - Single Server.
+ Title: Use Rust to interact with Azure Database for PostgreSQL
+description: Learn to connect and query data in Azure Database for PostgreSQL Single Server using Rust code samples.
ms.devlang: rust- Previously updated : 03/26/2021+ Last updated : 05/17/2022
-# Quickstart: Use Rust to connect and query data in Azure Database for PostgreSQL - Single Server
+# Quickstart: Use Rust to interact with Azure Database for PostgreSQL - Single Server
-In this article, you will learn how to use the [PostgreSQL driver for Rust](https://github.com/sfackler/rust-postgres) to interact with Azure Database for PostgreSQL by exploring CRUD (create, read, update, delete) operations implemented in the sample code. Finally, you can run the application locally to see it in action.
+In this article, you will learn how to use the [PostgreSQL driver for Rust](https://github.com/sfackler/rust-postgres) to connect and query data in Azure Database for PostgreSQL. You can explore CRUD (create, read, update, delete) operations implemented in sample code, and run the application locally to see it in action.
## Prerequisites
-For this quickstart you need:
+
+For this quickstart, you need:
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free). - A recent version of [Rust](https://www.rust-lang.org/tools/install) installed.-- An Azure Database for PostgreSQL single server - create one using [Azure portal](./quickstart-create-server-database-portal.md) <br/> or [Azure CLI](./quickstart-create-server-database-azure-cli.md).
+- An Azure Database for PostgreSQL single server. Create one using [Azure portal](./quickstart-create-server-database-portal.md) <br/> or [Azure CLI](./quickstart-create-server-database-azure-cli.md).
- Based on whether you are using public or private access, complete **ONE** of the actions below to enable connectivity. |Action| Connectivity method|How-to guide| |: |: |: | | **Configure firewall rules** | Public | [Portal](./how-to-manage-firewall-using-portal.md) <br/> [CLI](./quickstart-create-server-database-azure-cli.md#configure-a-server-based-firewall-rule)|
- | **Configure Service Endpoint** | Public | [Portal](./how-to-manage-vnet-using-portal.md) <br/> [CLI](./how-to-manage-vnet-using-cli.md)|
+ | **Configure service endpoint** | Public | [Portal](./how-to-manage-vnet-using-portal.md) <br/> [CLI](./how-to-manage-vnet-using-cli.md)|
| **Configure private link** | Private | [Portal](./how-to-configure-privatelink-portal.md) <br/> [CLI](./how-to-configure-privatelink-cli.md) | - [Git](https://git-scm.com/downloads) installed. ## Get database connection information
-Connecting to an Azure Database for PostgreSQL database requires the fully qualified server name and login credentials. You can get this information from the Azure portal.
+Connecting to an Azure Database for PostgreSQL database requires a fully qualified server name and login credentials. You can get this information from the Azure portal.
1. In the [Azure portal](https://portal.azure.com/), search for and select your Azure Database for PostgreSQL server name. 1. On the server's **Overview** page, copy the fully qualified **Server name** and the **Admin username**. The fully qualified **Server name** is always of the form *\<my-server-name>.postgres.database.azure.com*, and the **Admin username** is always of the form *\<my-admin-username>@\<my-server-name>*.
postgresql How To Auto Grow Storage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-auto-grow-storage-powershell.md
Title: Auto grow storage - Azure PowerShell - Azure Database for PostgreSQL
-description: This article describes how you can enable auto grow storage using PowerShell in Azure Database for PostgreSQL.
+ Title: Auto grow storage in Azure Database for PostgreSQL using PowerShell
+description: Learn how to auto grow storage using PowerShell in Azure Database for PostgreSQL.
Previously updated : 06/08/2020 - Last updated : 05/17/2022 +
-# Auto grow storage in Azure Database for PostgreSQL server using PowerShell
+# Auto grow Azure Database for PostgreSQL storage using PowerShell
-This article describes how you can configure an Azure Database for PostgreSQL server storage to grow
-without impacting the workload.
+This article describes how you can use PowerShell to configure Azure Database for PostgreSQL server storage to scale up automatically without impacting the workload.
-Storage auto grow prevents your server from
-[reaching the storage limit](./concepts-pricing-tiers.md#reaching-the-storage-limit) and
-becoming read-only. For servers with 100 GB or less of provisioned storage, the size is increased by
-5 GB when the free space is below 10%. For servers with more than 100 GB of provisioned storage, the
-size is increased by 5% when the free space is below 10 GB. Maximum storage limits apply as
-specified in the storage section of the
-[Azure Database for PostgreSQL pricing tiers](./concepts-pricing-tiers.md#storage).
+Storage auto grow prevents your server from [reaching the storage limit](./concepts-pricing-tiers.md#reaching-the-storage-limit) and
+becoming read-only. For servers with 100 GB or less of provisioned storage, the size increases by 5 GB when the free space is below 10%. For servers with more than 100 GB of provisioned storage, the size increases by 5% when the free space is below 10 GB. Maximum storage limits apply as
+specified in the storage section of the [Azure Database for PostgreSQL pricing tiers](./concepts-pricing-tiers.md#storage).
> [!IMPORTANT] > Remember that storage can only be scaled up, not down.
specified in the storage section of the
To complete this how-to guide, you need: -- The [Az PowerShell module](/powershell/azure/install-az-ps) installed locally or
- [Azure Cloud Shell](https://shell.azure.com/) in the browser
+- The [Az PowerShell module](/powershell/azure/install-az-ps) installed locally or [Azure Cloud Shell](https://shell.azure.com/) in the browser
- An [Azure Database for PostgreSQL server](quickstart-create-postgresql-server-database-using-azure-powershell.md)
-> [!IMPORTANT]
-> While the Az.PostgreSql PowerShell module is in preview, you must install it separately from the Az
-> PowerShell module using the following command: `Install-Module -Name Az.PostgreSql -AllowPrerelease`.
-> Once the Az.PostgreSql PowerShell module is generally available, it becomes part of future Az
-> PowerShell module releases and available natively from within Azure Cloud Shell.
- If you choose to use PowerShell locally, connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet. ## Enable PostgreSQL server storage auto grow
-Enable server auto grow storage on an existing server with the following command:
+Enable auto grow storage on an existing server with the following command:
```azurepowershell-interactive Update-AzPostgreSqlServer -Name mydemoserver -ResourceGroupName myresourcegroup -StorageAutogrow Enabled ```
-Enable server auto grow storage while creating a new server with the following command:
+Enable auto grow storage while creating a new server with the following command:
```azurepowershell-interactive $Password = Read-Host -Prompt 'Please enter your password' -AsSecureString
postgresql How To Migrate Single To Flexible Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-migrate-single-to-flexible-cli.md
+
+ Title: "Migrate PostgreSQL Single Server to Flexible Server using the Azure CLI"
+
+description: Learn about migrating your Single server databases to Azure database for PostgreSQL Flexible server using CLI.
++++ Last updated : 05/09/2022++
+# Migrate Single Server to Flexible Server PostgreSQL using Azure CLI
+
+>[!NOTE]
+> Single Server to Flexible Server migration feature is in public preview.
+
+This quickstart shows you how to use the Single to Flexible Server migration feature to migrate databases from Azure Database for PostgreSQL Single Server to Flexible Server.
+
+## Before you begin
+
+1. If you are new to Microsoft Azure, [create an account](https://azure.microsoft.com/free/) to evaluate our offerings.
+2. Register your subscription for Azure Database Migration Service (DMS). If you have already done it, you can skip this step. Go to Azure portal homepage and navigate to your subscription as shown below.
+
+ :::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-cli-dms.png" alt-text="Screenshot of C L I Database Migration Service." lightbox="./media/concepts-single-to-flexible/single-to-flex-cli-dms.png":::
+
+3. In your subscription, navigate to **Resource Providers** from the left navigation menu. Search for **Microsoft.DataMigration** as shown below, and then select **Register**.
+
+ :::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-cli-dms-register.png" alt-text="Screenshot of C L I Database Migration Service register button." lightbox="./media/concepts-single-to-flexible/single-to-flex-cli-dms-register.png":::
+
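+Alternatively, you can register the resource provider from the Azure CLI. This is a hedged equivalent of the portal steps above.
+
+```azurecli-interactive
+# Register the Microsoft.DataMigration resource provider on the current subscription.
+az provider register --namespace Microsoft.DataMigration
+
+# Check the registration state until it shows "Registered".
+az provider show --namespace Microsoft.DataMigration --query registrationState
+```
+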
+## Pre-requisites
+
+### Setup Azure CLI
+
+1. Install the latest Azure CLI for your corresponding operating system from the [Azure CLI install page](/cli/azure/install-azure-cli)
+2. If Azure CLI is already installed, check the version by running the **az version** command. The version should be **2.28.0 or above** to use the migration CLI commands. If not, update your Azure CLI using this [link](/cli/azure/update-azure-cli.md).
+3. Once you have the right Azure CLI version, run the **az login** command. A browser page opens with the Azure sign-in page to authenticate. Provide your Azure credentials to complete the authentication. For other ways to sign in with Azure CLI, visit this [link](/cli/azure/authenticate-azure-cli.md).
+
+ ```bash
+ az login
+ ```
+4. Take care of the pre-requisites listed in this [**document**](./concepts-single-to-flexible.md#pre-requisites) which are necessary to get started with the Single to Flexible migration feature.
+
+## Migration CLI commands
+
+Single to Flexible Server migration feature comes with a list of easy-to-use CLI commands to do migration-related tasks. All the CLI commands start with **az postgres flexible-server migration**. You can use the **--help** parameter to understand the options associated with a command and to frame the right syntax for it.
+
+```azurecli-interactive
+az postgres flexible-server migration --help
+```
+
+This command gives you the following output.
+
+ :::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-cli-help.png" alt-text="Screenshot of C L I help." lightbox="./media/concepts-single-to-flexible/single-to-flex-cli-help.png":::
+
+It lists the set of migration commands that are supported along with their actions. Let us look into these commands in detail.
+
+### Create migration
+
+The **create migration** command helps in creating a migration from a source server to a target server.
+
+```azurecli-interactive
+az postgres flexible-server migration create --help
+```
+
+This command gives the following result.
++
+The output calls out the expected arguments and an example syntax to use for creating a successful migration from the source to the target server. The CLI command to create a migration is given below:
+
+```azurecli
+az postgres flexible-server migration create [--subscription]
+ [--resource-group]
+ [--name]
+ [--migration-name]
+ [--properties]
+```
+
+| Parameter | Description |
+| - | - |
+|**subscription** | Subscription ID of the target flexible server |
+| **resource-group** | Resource group of the target flexible server |
+| **name** | Name of the target flexible server |
+| **migration-name** | Unique identifier to migrations attempted to the flexible server. This field accepts only alphanumeric characters and does not accept any special characters except **-**. The name cannot start with a **-** and no two migrations to a flexible server can have the same name. |
+| **properties** | Absolute path to a JSON file that has the information about the source single server |
+
+**For example:**
+
+```azurecli-interactive
+az postgres flexible-server migration create --subscription 5c5037e5-d3f1-4e7b-b3a9-f6bf9asd2nkh0 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1 --properties "C:\Users\Administrator\Documents\migrationBody.JSON"
+```
+
+The **migration-name** argument used in **create migration** command will be used in other CLI commands such as **update, delete, show** to uniquely identify the migration attempt and to perform the corresponding actions.
+
+The migration feature offers online and offline mode of migration. To know more about the migration modes and their differences, visit this [link](./concepts-single-to-flexible.md)
+
+Create a migration between a source and target server with a migration mode of your choice. The **create** command needs a JSON file to be passed as part of its **properties** argument.
+
+The structure of the JSON is given below.
+
+```json
+{
+  "properties": {
+    "SourceDBServerResourceId": "subscriptions/<subscriptionid>/resourceGroups/<src_rg_name>/providers/Microsoft.DBforPostgreSQL/servers/<source server name>",
+    "SourceDBServerFullyQualifiedDomainName": "fqdn of the source server as per the custom DNS server",
+    "TargetDBServerFullyQualifiedDomainName": "fqdn of the target server as per the custom DNS server",
+    "SecretParameters": {
+      "AdminCredentials": {
+        "SourceServerPassword": "<password>",
+        "TargetServerPassword": "<password>"
+      },
+      "AADApp": {
+        "ClientId": "<client id>",
+        "TenantId": "<tenant id>",
+        "AadSecret": "<secret>"
+      }
+    },
+    "MigrationResourceGroup": {
+      "ResourceId": "subscriptions/<subscriptionid>/resourceGroups/<temp_rg_name>",
+      "SubnetResourceId": "/subscriptions/<subscriptionid>/resourceGroups/<rg_name>/providers/Microsoft.Network/virtualNetworks/<Vnet_name>/subnets/<subnet_name>"
+    },
+    "DBsToMigrate": [
+      "<db1>", "<db2>"
+    ],
+    "SetupLogicalReplicationOnSourceDBIfNeeded": "true",
+    "OverwriteDBsInTarget": "true"
+  }
+}
+```
+
+Create migration parameters:
+
+| Parameter | Type | Description |
+| - | - | - |
+| **SourceDBServerResourceId** | Required | Resource ID of the single server and is mandatory. |
+| **SourceDBServerFullyQualifiedDomainName** | optional | Used when a custom DNS server is used for name resolution for a virtual network. The FQDN of the single server as per the custom DNS server should be provided for this property. |
+| **TargetDBServerFullyQualifiedDomainName** | optional | Used when a custom DNS server is used for name resolution inside a virtual network. The FQDN of the flexible server as per the custom DNS server should be provided for this property. <br> **_SourceDBServerFullyQualifiedDomainName_**, **_TargetDBServerFullyQualifiedDomainName_** should be included as a part of the JSON only in the rare scenario of a custom DNS server being used for name resolution instead of Azure provided DNS. Otherwise, these parameters should not be included as a part of the JSON file. |
+| **SecretParameters** | Required | Passwords for admin user for both single server and flexible server along with the Azure AD app credentials. They help to authenticate against the source and target servers and help in checking proper authorization access to the resources. |
+| **MigrationResourceGroup** | optional | This section consists of two properties. <br> **ResourceID (optional)** : The migration infrastructure and other network infrastructure components are created to migrate data and schema from the source to target. By default, all the components created by this feature are provisioned under the resource group of the target server. If you wish to deploy them under a different resource group, then you can assign the resource ID of that resource group to this property. <br> **SubnetResourceID (optional)** : In case if your source has public access turned OFF or if your target server is deployed inside a VNet, then specify a subnet under which migration infrastructure needs to be created so that it can connect to both source and target servers. |
+| **DBsToMigrate** | Required | Specify the list of databases you want to migrate to the flexible server. You can include a maximum of 8 database names at a time. |
+| **SetupLogicalReplicationOnSourceDBIfNeeded** | Optional | Logical replication can be enabled on the source server automatically by setting this property to **true**. This change in the server settings requires a server restart with a downtime of few minutes (~ 2-3 mins). |
+| **OverwriteDBsinTarget** | Optional | If the target server happens to have an existing database with the same name as the one you are trying to migrate, the migration will pause until you acknowledge that overwrites in the target DBs are allowed. This pause can be avoided by giving the migration feature permission to automatically overwrite databases by setting the value of this property to **true** |
+
+### Mode of migrations
+
+The default migration mode for migrations created using CLI commands is **online**. With the above properties filled out in your JSON file, an online migration would be created from your single server to flexible server.
+
+If you want to migrate in **offline** mode, add the property **"TriggerCutover":"true"** to your properties JSON file before initiating the create command, as shown in the sketch below.
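+
+For illustration, here's a minimal sketch of how the extra property could sit alongside the existing properties in the JSON file shown above (the values are placeholders):
+
+```
+"SetupLogicalReplicationOnSourceDBIfNeeded": "true",
+
+"OverwriteDBsInTarget": "true",
+
+"TriggerCutover": "true"
+```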
+
+### List migrations
+
+The **list** command shows the migration attempts that were made to a flexible server. The CLI command to list migrations is given below:
+
+```azurecli
+az postgres flexible-server migration list [--subscription]
+ [--resource-group]
+ [--name]
+ [--filter]
+```
+
+The **filter** parameter accepts two values: **Active** and **All**.
+
+- **Active** – Lists the current active migration attempts for the target server. It does not include migrations that have reached a failed, canceled, or succeeded state.
+- **All** – Lists all the migration attempts to the target server. This includes both active and past migrations, irrespective of the state.
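+
+**For example** (a sketch that reuses the sample values from the other examples in this article):
+
+```azurecli-interactive
+az postgres flexible-server migration list --subscription 5c5037e5-d3f1-4e7b-b3a9-f6bf9asd2nkh0 --resource-group my-learning-rg --name myflexibleserver --filter All
+```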
+
+For any additional information about the **list** command, run:
+
+```azurecli-interactive
+az postgres flexible-server migration list --help
+```
+
+### Show Details
+
+The **show** command gets the details of a specific migration. This includes information on the current state and substate of the migration. The CLI command to show the details of a migration is given below:
+
+```azurecli
+az postgres flexible-server migration show [--subscription]
+ [--resource-group]
+ [--name]
+ [--migration-name]
+```
+
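+**For example** (a sketch that reuses the sample values from the other examples in this article):
+
+```azurecli-interactive
+az postgres flexible-server migration show --subscription 5c5037e5-d3f1-4e7b-b3a9-f6bf9asd2nkh0 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1
+```
+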
+The **migration_name** is the name assigned to the migration during the **create migration** command. Here is a snapshot of the sample response from the **Show Details** CLI command.
++
+Some important points to note on the command response:
+
+- As soon as the **create** migration command is triggered, the migration moves to the **InProgress** state and **PerformingPreRequisiteSteps** substate. It takes up to 15 minutes for the migration workflow to deploy the migration infrastructure, configure firewall rules with source and target servers, and to perform a few maintenance tasks.
+- After the **PerformingPreRequisiteSteps** substate is completed, the migration moves to the substate of **Migrating Data** where the dump and restore of the databases take place.
+- Each DB being migrated has its own section with all migration details such as table count, incremental inserts, deletes, pending bytes, etc.
+- The time taken for **Migrating Data** substate to complete is dependent on the size of databases that are being migrated.
+- For **Offline** mode, the migration moves to the **Succeeded** state as soon as the **Migrating Data** substate completes successfully. If there is an issue at the **Migrating Data** substate, the migration moves into a **Failed** state.
+- For **Online** mode, the migration moves to the **WaitingForUserAction** state and the **WaitingForCutoverTrigger** substate after the **Migrating Data** substate completes successfully. The details of the **WaitingForUserAction** state are covered in the next section.
+
+For any additional information about the **show** command, run:
+
+```azurecli-interactive
+az postgres flexible-server migration show --help
+```
+
+### Update migration
+
+After the infrastructure setup is complete, the migration pauses, with appropriate messages shown in the **show details** CLI command response, if any pre-requisites are missing or if the migration is ready to perform a cutover. At this point, the migration goes into the **WaitingForUserAction** state. The **update migration** command is used to set parameter values that move the migration to the next stage in the process. Let's look at each of the substates.
+
+- **WaitingForLogicalReplicationSetupRequestOnSourceDB** - If logical replication is not enabled on the source server, or if it was not included as part of the JSON file, the migration waits for logical replication to be enabled at the source. You can enable the logical replication setting manually by changing the replication flag to **Logical** in the portal, which requires a server restart. It can also be enabled by the following CLI command:
+
+```azurecli
+az postgres flexible-server migration update [--subscription]
+ [--resource-group]
+ [--name]
+ [--migration-name]
+ [--initiate-data-migration]
+```
+
+You need to pass the value **true** to the **initiate-data-migration** property to set logical replication on your source server.
+
+**For example:**
+
+```azurecli-interactive
+az postgres flexible-server migration update --subscription 5c5037e5-d3f1-4e7b-b3a9-f6bf9asd2nkh0 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1 --initiate-data-migration true
+```
+
+If you have already enabled it manually, **you still need to issue the above update command** for the migration to move out of the **WaitingForUserAction** state. The server does not need another restart, because it was already restarted by the portal action.
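+
+If you prefer to enable the setting on the source single server from the CLI instead of the portal, here's a sketch. It assumes the single server parameter `azure.replication_support` controls logical replication and uses placeholder server names; verify the parameter name on your server before running it.
+
+```azurecli-interactive
+# Set the replication support parameter to LOGICAL on the source single server (assumed parameter name)
+az postgres server configuration set --resource-group my-learning-rg --server-name <single_server_name> --name azure.replication_support --value logical
+
+# A restart is required for the change to take effect
+az postgres server restart --resource-group my-learning-rg --name <single_server_name>
+```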
+
+- **WaitingForTargetDBOverwriteConfirmation** - This is the state where the migration is waiting for confirmation on target overwrite, because data is already present in the target server for a database that is being migrated. You can provide this confirmation by using the following CLI command.
+
+```azurecli
+az postgres flexible-server migration update [--subscription]
+ [--resource-group]
+ [--name]
+ [--migration-name]
+ [--overwrite-dbs]
+```
+
+You need to pass the value **true** to the **overwrite-dbs** property to give the migration permission to overwrite any existing data in the target server.
+
+**For example:**
+
+```azurecli-interactive
+az postgres flexible-server migration update --subscription 5c5037e5-d3f1-4e7b-b3a9-f6bf9asd2nkh0 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1 --overwrite-dbs true
+```
+
+- **WaitingForCutoverTrigger** - The migration reaches this state when the dump and restore of the databases have been completed and the ongoing writes at your source single server are being replicated to the target flexible server. You should wait for the replication to catch up so that the target is in sync with the source. You can monitor the replication lag by using the response from the **show migration** command. A metric called **Pending Bytes** is associated with each database being migrated, and it gives you an indication of the difference between the source and target databases in bytes. This value should near zero over time. Once it reaches zero for all the databases, stop any further writes to your single server. Then validate the data and schema on your flexible server to make sure they match the source server exactly. After completing the above steps, you can trigger **cutover** by using the following CLI command.
+
+```azurecli
+az postgres flexible-server migration update [--subscription]
+ [--resource-group]
+ [--name]
+ [--migration-name]
+ [--cutover]
+```
+
+**For example:**
+
+```azurecli-interactive
+az postgres flexible-server migration update --subscription 5c5037e5-d3f1-4e7b-b3a9-f6bf9asd2nkh0 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1 --cutover
+```
+
+After issuing the above command, use the **show details** command to monitor if the cutover has completed successfully. Upon successful cutover, migration will move to **Succeeded** state. Update your application to point to the new target flexible server.
+
+For any additional information about the **update** command, run:
+
+```azurecli-interactive
+az postgres flexible-server migration update --help
+```
+
+### Delete/Cancel Migration
+
+Any ongoing migration attempts can be deleted or canceled using the **delete migration** command. This command stops all migration activities in that task, but it does not drop or roll back any changes on your target server. Below is the CLI command to delete a migration:
+
+```azurecli
+az postgres flexible-server migration delete [--subscription]
+ [--resource-group]
+ [--name]
+ [--migration-name]
+```
+
+**For example:**
+
+```azurecli-interactive
+az postgres flexible-server migration delete --subscription 5c5037e5-d3f1-4e7b-b3a9-f6bf9asd2nkh0 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1
+```
+
+For any additional information about the **delete** command, run:
+
+```azurecli-interactive
+az postgres flexible-server migration delete --help
+```
+
+## Monitoring Migration
+
+The **create migration** command starts a migration between the source and target servers. The migration goes through a set of states and substates before eventually moving into a **completed** state. The **show** command helps you monitor ongoing migrations, since it gives the current state and substate of the migration.
+
+Migration **states**:
+
+| Migration State | Description |
+| - | - |
+| **InProgress** | The migration infrastructure is being set up, or the actual data migration is in progress. |
+| **Canceled** | The migration has been canceled or deleted. |
+| **Failed** | The migration has failed. |
+| **Succeeded** | The migration has succeeded and is complete. |
+| **WaitingForUserAction** | Migration is waiting on a user action. This state has a list of substates that were discussed in detail in the previous section. |
+
+Migration **substates**:
+
+| Migration substates | Description |
+| - | - |
+| **PerformingPreRequisiteSteps** | Infrastructure is being set up and is being prepped for data migration. |
+| **MigratingData** | Data is being migrated. |
+| **CompletingMigration** | Migration cutover in progress. |
+| **WaitingForLogicalReplicationSetupRequestOnSourceDB** | Waiting for logical replication enablement. You can enable this manually, or via the **update migration** CLI command covered earlier in this article. |
+| **WaitingForCutoverTrigger** | Migration is ready for cutover. You can start the cutover when ready. |
+| **WaitingForTargetDBOverwriteConfirmation** | Waiting for confirmation on target overwrite as data is present in the target server being migrated into. <br> You can enable this via the **update migration** CLI command. |
+| **Completed** | Cutover was successful, and migration is complete. |
++
+## How to find if custom DNS is used for name resolution?
+Navigate to the virtual network where you deployed your source or target server and select **DNS servers**. It indicates whether the virtual network uses a custom DNS server or the default Azure-provided DNS.
++
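+
+If you prefer the CLI, here's a sketch of an equivalent check (the resource group and VNet names are placeholders); an empty result typically means the default Azure-provided DNS is in use.
+
+```azurecli-interactive
+# Show any custom DNS servers configured on the virtual network
+az network vnet show --resource-group <rg_name> --name <Vnet_name> --query "dhcpOptions.dnsServers"
+```
+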
+## Post Migration Steps
+
+Make sure the post migration steps listed [here](./concepts-single-to-flexible.md) are followed for a successful end-to-end migration.
+
+## Next steps
+
+- [Single Server to Flexible migration concepts](./concepts-single-to-flexible.md)
postgresql How To Migrate Single To Flexible Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-migrate-single-to-flexible-portal.md
+
+ Title: "Migrate PostgreSQL Single Server to Flexible Server using the Azure portal"
+
+description: Learn about migrating your Single Server databases to Azure Database for PostgreSQL Flexible Server using the Azure portal.
++++ Last updated : 05/09/2022++
+# Migrate Single Server to Flexible Server PostgreSQL using the Azure portal
+
+This guide shows you how to use the Single to Flexible Server migration feature to migrate databases from Azure Database for PostgreSQL Single Server to Flexible Server.
+
+## Before you begin
+
+1. If you are new to Microsoft Azure, [create an account](https://azure.microsoft.com/free/) to evaluate our offerings.
+2. Register your subscription for the Azure Database Migration Service.
+
+Go to the Azure portal home page and navigate to your subscription, as shown below.
++
+In your subscription, navigate to **Resource Providers** from the left navigation menu. Search for **Microsoft.DataMigration**, as shown below, and click **Register**.
++
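+
+If you prefer to register the resource provider from the CLI instead of the portal, here's a minimal sketch:
+
+```azurecli-interactive
+# Register the Microsoft.DataMigration resource provider on the current subscription
+az provider register --namespace Microsoft.DataMigration
+
+# Check the registration state
+az provider show --namespace Microsoft.DataMigration --query "registrationState"
+```
+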
+## Pre-requisites
+
+Take care of the pre-requisites listed [here](./concepts-single-to-flexible.md#pre-requisites) to get started with the migration feature.
+
+## Configure migration task
+
+The Single to Flexible Server migration feature comes with a simple, wizard-based portal experience. Let's walk through the steps needed to use the tool from the portal.
+
+1. **Sign into the Azure portal -** Open your web browser and go to the [portal](https://portal.azure.com/). Enter your credentials to sign in. The default view is your service dashboard.
+
+2. Navigate to your Azure Database for PostgreSQL flexible server. If you have not created an Azure Database for PostgreSQL flexible server, create one using this [link](../flexible-server/quickstart-create-server-portal.md).
+
+3. In the **Overview** tab of your flexible server, use the left navigation menu, scroll down to **Migration (preview)**, and select it.
+
+ :::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-migration-preview.png" alt-text="Screenshot of Migration Preview Tab details." lightbox="./media/concepts-single-to-flexible/single-to-flex-migration-preview.png":::
+
+4. Click the **Migrate from Single Server** button to start a migration from Single Server to Flexible Server. If this is the first time you are using the migration feature, you will see an empty grid with a prompt to begin your first migration.
+
+ :::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-migrate-single-server.png" alt-text="Screenshot of Migrate from Single Server tab." lightbox="./media/concepts-single-to-flexible/single-to-flex-migrate-single-server.png":::
+
+ If you have already created migrations to your flexible server, you should see the grid populated with information of the list of migrations that were attempted to this flexible server from single servers.
+
+5. Click on the **Migrate from Single Server** button. You will be taken through a wizard-based user interface to create a migration to this flexible server from any single server.
+
+### Setup tab
+
+The first tab is the setup tab, which has basic information about the migration and the list of pre-requisites that need to be taken care of before getting started with migrations. The list of pre-requisites is the same as the one listed in the pre-requisites section [here](./concepts-single-to-flexible.md); follow the provided link to learn more.
++
+- The **Migration name** is the unique identifier for each migration to this flexible server. This field accepts only alphanumeric characters and hyphens (**-**). The name cannot start with a hyphen and must be unique for a target server. No two migrations to the same flexible server can have the same name.
+- The **Migration resource group** is where all the migration-related components will be created by this migration feature.
+
+By default, it is the resource group of the target flexible server, and all the components are cleaned up automatically once the migration completes. If you want to create a temporary resource group for migration-related purposes, create a resource group and select it from the dropdown.
+
+- For the **Azure Active Directory App**, click the **select** option and pick the app that was created as a part of the pre-requisite step. Once the Azure AD App is chosen, paste the client secret that was generated for the Azure AD app to the **Azure Active Directory Client Secret** field.
++
+Click on the **Next** button.
+
+### Source tab
++
+The source tab prompts you to give details related to the source single server from which databases need to be migrated. As soon as you pick the **Subscription** and **Resource Group**, the dropdown for server names shows the single servers under that resource group across regions. It is recommended to migrate databases from a single server to a flexible server in the same region.
+
+In the dropdown, choose the single server from which you want to migrate databases.
+
+Once the single server is chosen, the fields such as **Location, PostgreSQL version, Server admin login name** are automatically pre-populated. The server admin login name is the admin username that was used to create the single server. Enter the password for the **server admin login name**. This is required for the migration feature to sign in to the single server and initiate the dump and migration.
+
+You should also see the list of user databases inside the single server that you can pick for migration. You can select up to eight databases that can be migrated in a single migration attempt. If there are more than eight user databases, create multiple migrations using the same experience between the source and target servers.
+
+The final property in the source tab is migration mode. The migration feature offers online and offline mode of migration. To know more about the migration modes and their differences, please visit this [link](./concepts-single-to-flexible.md).
+
+Once you pick the migration mode, the restrictions associated with the mode are displayed.
+
+After filling out all the fields, please click the **Next** button.
+
+### Target tab
++
+This tab displays metadata of the flexible server like the **Subscription**, **Resource Group**, **Server name**, **Location**, and **PostgreSQL version**. It displays the **server admin login name**, which is the admin username that was used during the creation of the flexible server. Enter the corresponding password for the admin user. This is required for the migration feature to sign in to the flexible server and perform restore operations.
+
+Choose an option **yes/no** for **Authorize DB overwrite**.
+
+- If you set the option to **Yes**, you give this migration service permission to overwrite existing data if a database that is being migrated already exists on the flexible server.
+- If set to **No**, it goes into a waiting state and asks you for permission either to overwrite the data or to cancel the migration.
+
+Click on the **Next** button.
+
+### Networking tab
+
+The content on the Networking tab depends on the networking topology of your source and target servers.
+
+- If both source and target servers are in public access, then you are going to see the message below.
++
+In this case, you need not do anything and can just click on the **Next** button.
+
+- If either the source or target server is configured in private access, then the content of the networking tab is different. Let's understand what private access means for single server and flexible server:
+
+- **Single Server Private Access** – **Deny public network access** set to **Yes** and a private endpoint configured
+- **Flexible Server Private Access** – When the flexible server is deployed inside a VNet.
+
+If either the source or target is configured in private access, then the networking tab looks like the following:
++
+All the fields will be automatically populated with subnet details. This is the subnet in which the migration feature will deploy Azure DMS to move data between the source and target.
+
+You can go ahead with the suggested subnet or choose a different subnet. But make sure that the selected subnet can connect to both the source and target servers.
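+
+If you want to look up the available subnets before choosing one, here's a sketch with placeholder names:
+
+```azurecli-interactive
+# List the subnets in the virtual network that hosts the source or target server
+az network vnet subnet list --resource-group <rg_name> --vnet-name <Vnet_name> --output table
+```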
+
+After picking a subnet, click the **Next** button.
+
+### Review + create tab
+
+This tab gives a summary of all the details for creating the migration. Review the details and click on the **Create** button to start the migration.
++
+## Monitoring migrations
+
+After clicking on the **Create** button, you should see a notification in a few seconds saying the migration was successfully created.
++
+You should automatically be redirected to the **Migrations (Preview)** page of the flexible server, which will have a new entry for the recently created migration.
++
+The grid displaying the migrations has various columns including **Name**, **Status**, **Source server name**, **Region**, **Version**, **Database names**, and the **Migration start time**. By default, the grid shows the list of migrations in the decreasing order of migration start time. In other words, recent migrations appear on top of the grid.
+
+You can use the refresh button to refresh the status of the migrations.
+
+You can click on the migration name in the grid to see the details of that migration.
++
+- As soon as the migration is created, the migration moves to the **InProgress** state and the **PerformingPreRequisiteSteps** substate. It takes up to 10 minutes for the migration workflow to move out of this substate, since it takes time to create and deploy DMS, add its IP address to the firewall lists of the source and target servers, and perform a few maintenance tasks.
+- After the **PerformingPreRequisiteSteps** substate is completed, the migration moves to the substate of **Migrating Data** where the dump and restore of the databases take place.
+- The time taken for **Migrating Data** substate to complete is dependent on the size of databases that are being migrated.
+- You can click on each of the DBs that are being migrated and a fan-out blade appears that has all migration details such as table count, incremental inserts, deletes, pending bytes, etc.
+- For **Offline** mode, the migration moves to **Succeeded** state as soon as the **Migrating Data** state completes successfully. If there is an issue at the **Migrating Data** state, the migration moves into a **Failed** state.
+- For **Online** mode, the migration moves to the state of **WaitingForUserAction** and **WaitingForCutOver** substate after the **Migrating Data** substate completes successfully.
++
+You can click on the migration name to go into the migration details page and should see the substate of **WaitingForCutover**.
++
+At this stage, the ongoing writes at your source single server are replicated to the target flexible server using the logical decoding feature of PostgreSQL. You should wait until the replication reaches a state where the target is almost in sync with the source. You can monitor the replication lag by clicking on each of the databases being migrated. This opens a fan-out blade with several metrics. Look for the **Pending Bytes** metric, which should near zero over time. Once it reaches a few MB for all the databases, stop any further writes to your single server and wait until the metric reaches 0. Then validate the data and schema on your flexible server to make sure they match the source server exactly.
+
+After completing the above steps, click the **Cutover** button. You should see the following message:
++
+Click on the **Yes** button to start cutover.
+
+A few seconds after starting the cutover, you should see the following notification:
++
+Once the cutover is complete, the migration moves to the **Succeeded** state, and the migration of schema and data from your single server to the flexible server is now complete. You can use the refresh button on the page to check whether the cutover was successful.
+
+After completing the above steps, you can make changes to your application code to point database connection strings to the flexible server and start using it as the primary database server.
+
+Possible migration states include:
+
+- **InProgress**: The migration infrastructure is being set up, or the actual data migration is in progress.
+- **Canceled**: The migration has been canceled or deleted.
+- **Failed**: The migration has failed.
+- **Succeeded**: The migration has succeeded and is complete.
+- **WaitingForUserAction**: Migration is waiting on a user action.
+
+Possible migration substates include:
+
+- **PerformingPreRequisiteSteps**: Infrastructure is being set up and is being prepped for data migration.
+- **MigratingData**: Data is being migrated.
+- **CompletingMigration**: Migration cutover in progress.
+- **WaitingForLogicalReplicationSetupRequestOnSourceDB**: Waiting for logical replication enablement.
+- **WaitingForCutoverTrigger**: Migration is ready for cutover.
+- **WaitingForTargetDBOverwriteConfirmation**: Waiting for confirmation on target overwrite as data is present in the target server being migrated into.
+- **Completed**: Cutover was successful, and migration is complete.
+
+## Cancel migrations
+
+You also have the option to cancel any ongoing migrations. For a migration to be canceled, it must be in **InProgress** or **WaitingForUserAction** state. You cannot cancel a migration that has either already **Succeeded** or **Failed**.
+
+You can select multiple ongoing migrations at once and cancel them.
++
+Note that **cancel migration** just stops any further migration activity on your target server. It will not drop or roll back any changes on your target server that were made by the migration attempts. Make sure to drop the databases involved in a canceled migration on your target server.
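+
+If you need to clean up a database left behind by a canceled migration, one option is the flexible server CLI. Here's a sketch with placeholder names; double-check the database name before deleting.
+
+```azurecli-interactive
+# Delete a partially migrated database from the target flexible server
+az postgres flexible-server db delete --resource-group my-learning-rg --server-name myflexibleserver --database-name <db1> --yes
+```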
+
+## Post migration steps
+
+Make sure the post migration steps listed [here](./concepts-single-to-flexible.md) are followed for a successful end-to-end migration.
+
+## Next steps
+- [Single Server to Flexible migration concepts](./concepts-single-to-flexible.md)
postgresql How To Restart Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-restart-server-powershell.md
Title: Restart server - Azure PowerShell - Azure Database for PostgreSQL
-description: This article describes how you can restart an Azure Database for PostgreSQL server using PowerShell.
+ Title: Restart Azure Database for PostgreSQL using PowerShell
+description: Learn how to restart an Azure Database for PostgreSQL server using PowerShell.
Previously updated : 06/08/2020 - Last updated : 05/17/2022 +
-# Restart Azure Database for PostgreSQL server using PowerShell
+# Restart an Azure Database for PostgreSQL server using PowerShell
-This topic describes how you can restart an Azure Database for PostgreSQL server. You may need to restart
-your server for maintenance reasons, which causes a short outage during the operation.
+This topic describes how you can restart an Azure Database for PostgreSQL server. You may need to restart your server for maintenance reasons, which causes a short outage during the operation.
-The server restart is blocked if the service is busy. For example, the service may be processing a
-previously requested operation such as scaling vCores.
+The server restart is blocked if the service is busy. For example, the service may be processing a previously requested operation such as scaling vCores.
> [!NOTE]
-> The time required to complete a restart depends on the PostgreSQL recovery process. To decrease the restart time, we recommend you minimize the amount of activity occurring on the server prior to the restart. You may also want to increase the checkpoint frequency. You can also tune checkpoint related parameter values including `max_wal_size`. It is also recommended to run `CHECKPOINT` command prior to restarting the server.
+> The time required to complete a restart depends on the PostgreSQL recovery process. To decrease the restart time, we recommend that you minimize the amount of activity on the server prior to the restart. You may also want to increase the checkpoint frequency, and you can tune checkpoint-related parameter values, including `max_wal_size`. It is also recommended to run the `CHECKPOINT` command prior to restarting the server.
## Prerequisites
To complete this how-to guide, you need:
[Azure Cloud Shell](https://shell.azure.com/) in the browser - An [Azure Database for PostgreSQL server](quickstart-create-postgresql-server-database-using-azure-powershell.md)
-> [!IMPORTANT]
-> While the Az.PostgreSql PowerShell module is in preview, you must install it separately from the Az
-> PowerShell module using the following command: `Install-Module -Name Az.PostgreSql -AllowPrerelease`.
-> Once the Az.PostgreSql PowerShell module is generally available, it becomes part of future Az
-> PowerShell module releases and available natively from within Azure Cloud Shell.
- If you choose to use PowerShell locally, connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet.
If you choose to use PowerShell locally, connect to your Azure account using the
## Restart the server
-Restart the server with the following command:
+Enter the following command to restart the server:
```azurepowershell-interactive Restart-AzPostgreSqlServer -Name mydemoserver -ResourceGroupName myresourcegroup
postgresql How To Setup Azure Ad App Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-setup-azure-ad-app-portal.md
+
+ Title: "Set up Azure AD app to use with Single to Flexible migration"
+
+description: Learn about setting up Azure AD App to be used with Single to Flexible Server migration feature.
++++ Last updated : 05/09/2022++
+# Set up Azure AD app to use with Single to Flexible server Migration
+
+This quickstart article shows you how to set up an Azure Active Directory (Azure AD) app to use with Single to Flexible Server migration. It's an important component of the Single to Flexible migration feature. See [Azure Active Directory app](../../active-directory/develop/howto-create-service-principal-portal.md) for details. The Azure AD app helps with role-based access control (RBAC), because the migration infrastructure requires access to both the source and target servers and is restricted by the roles assigned to the Azure AD app. Once created, the Azure AD app instance can be used to manage multiple migrations. To get started, create a new Azure Active Directory app by doing the following steps:
+
+## Create Azure AD App
+
+1. If you're new to Microsoft Azure, [create an account](https://azure.microsoft.com/free/) to evaluate our offerings.
+2. Search for Azure Active Directory in the search bar at the top of the portal.
+3. Within the Azure Active Directory portal, under **Manage** on the left, choose **App Registrations**.
+4. Click on **New registration**.
+ :::image type="content" source="./media/concepts-single-to-flexible/azure-ad-new-registration.png" alt-text="New Registration for Azure Active Directory App." lightbox="./media/concepts-single-to-flexible/azure-ad-new-registration.png":::
+
+5. Give the app registration a name, choose an option that suits your needs for account types, and click **Register**.
+ :::image type="content" source="./media/concepts-single-to-flexible/azure-ad-application-registration.png" alt-text="Azure AD App Name screen." lightbox="./media/concepts-single-to-flexible/azure-ad-application-registration.png":::
+
+6. Once the app is created, you can copy the client ID and tenant ID required for later steps in the migration. Next, click on **Add a certificate or secret**.
+ :::image type="content" source="./media/concepts-single-to-flexible/azure-ad-add-secret-screen.png" alt-text="Add a certificate screen." lightbox="./media/concepts-single-to-flexible/azure-ad-add-secret-screen.png":::
+
+7. In the next screen, click on **New client secret**.
+ :::image type="content" source="./media/concepts-single-to-flexible/azure-ad-add-new-client-secret.png" alt-text="New Client Secret screen." lightbox="./media/concepts-single-to-flexible/azure-ad-add-new-client-secret.png":::
+
+8. In the fan-out blade that opens, add a description, and use the drop-down to pick the life span of the client secret. Once all the migrations are complete, the Azure Active Directory App that was created for role-based access control can be deleted. The default option is six months. If you don't need the Azure Active Directory App for six months, choose three months and click **Add**.
+ :::image type="content" source="./media/concepts-single-to-flexible/azure-ad-add-client-secret-description.png" alt-text="Client Secret Description." lightbox="./media/concepts-single-to-flexible/azure-ad-add-client-secret-description.png":::
+
+9. In the next screen, copy the **Value** column, which contains the Azure Active Directory App secret. The value can be copied only at the time of creation. If you miss copying the secret, you will need to delete it and create a new one for future attempts.
+ :::image type="content" source="./media/concepts-single-to-flexible/azure-ad-client-secret-value.png" alt-text="Copying client secret." lightbox="./media/concepts-single-to-flexible/azure-ad-client-secret-value.png":::
+
+10. Once the Azure Active Directory App is created, you will need to add contributor privileges for this Azure Active Directory app to the following resources:
+
+ | Resource | Type | Description |
+ | - | - | - |
+ | Single Server | Required | Source single server you're migrating from. |
+ | Flexible Server | Required | Target flexible server you're migrating into. |
+ | Azure Resource Group | Required | Resource group for the migration. By default, this is the target flexible server resource group. If you're using a temporary resource group to create the migration infrastructure, the Azure Active Directory App will require contributor privileges to this resource group. |
+ | VNET | Required (if used) | If the source or the target happens to have private access, then the Azure Active Directory App will require contributor privileges to corresponding VNet. If you're using public access, you can skip this step. |
++
+## Add contributor privileges to an Azure resource
+
+Repeat the steps listed below for the source single server, the target flexible server, the resource group, and the VNet (if used).
+
+1. For the target flexible server, select the target flexible server in the Azure portal. Click on Access Control (IAM) on the top left.
+ :::image type="content" source="./media/concepts-single-to-flexible/azure-ad-iam-screen.png" alt-text="Access Control I A M screen." lightbox="./media/concepts-single-to-flexible/azure-ad-iam-screen.png":::
+
+2. Click **Add** and choose **Add role assignment**.
+ :::image type="content" source="./media/concepts-single-to-flexible/azure-ad-add-role-assignment.png" alt-text="Add role assignment here." lightbox="./media/concepts-single-to-flexible/azure-ad-add-role-assignment.png":::
+
+> [!NOTE]
+> The Add role assignment capability is only enabled for users in the subscription with role type as **Owners**. Users with other roles do not have permission to add role assignments.
+
+3. Under the **Role** tab, select **Contributor** and click the **Next** button.
+ :::image type="content" source="./media/concepts-single-to-flexible/azure-ad-contributor-privileges.png" alt-text="Choosing Contributor Screen." lightbox="./media/concepts-single-to-flexible/azure-ad-contributor-privileges.png":::
+
+4. Under the Members tab, keep the default option of **Assign access to** User, group or service principal and click **Select Members**. Search for your Azure Active Directory App and click on **Select**.
+ :::image type="content" source="./media/concepts-single-to-flexible/azure-ad-review-and-assign.png" alt-text="Review and Assign Screen." lightbox="./media/concepts-single-to-flexible/azure-ad-review-and-assign.png":::
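+
+If you'd rather assign the role from the CLI than the portal, here's a sketch; the app's client ID and the resource scope are placeholders, and you can repeat the command for each resource in the table above.
+
+```azurecli-interactive
+# Grant the Azure AD app Contributor access on a resource (repeat for each required resource)
+az role assignment create --assignee "<client id>" --role "Contributor" --scope "/subscriptions/<subscriptionid>/resourceGroups/<rg_name>"
+```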
+
+
+## Next steps
+
+- [Single Server to Flexible migration concepts](./concepts-single-to-flexible.md)
+- [Migrate to Flexible server using Azure portal](./how-to-migrate-single-to-flexible-portal.md)
+- [Migrate to Flexible server using Azure CLI](./how-to-migrate-single-to-flexible-cli.md)
postgresql Quickstart Create Server Database Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/quickstart-create-server-database-azure-cli.md
Create a server with the [az postgres server create](/cli/azure/postgres/server#
## Configure a server-based firewall rule
-Create a firewall rule with the [az postgres server firewall-rule create](/cli/azure/mysql/server/firewall-rule) command to give your local environment access to connect to the server.
+Create a firewall rule with the [az postgres server firewall-rule create](/cli/azure/postgres/server/firewall-rule) command to give your local environment access to connect to the server.
:::code language="azurecli" source="~/azure_cli_scripts/postgresql/create-postgresql-server-and-firewall-rule/create-postgresql-server-and-firewall-rule.sh" id="CreateFirewallRule":::
private-link Private Endpoint Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-dns.md
For Azure services, use the recommended zone names as described in the following
| Azure HDInsight (Microsoft.HDInsight) | privatelink.azurehdinsight.net | azurehdinsight.net | | Azure Arc (Microsoft.HybridCompute) / hybridcompute | privatelink.his.arc.azure.com<br />privatelink.guestconfiguration.azure.com | his.arc.azure.com<br />guestconfiguration.azure.com | | Azure Media Services (Microsoft.Media) / keydelivery, liveevent, streamingendpoint | privatelink.media.azure.net | media.azure.net |
+| Azure Data Explorer (Microsoft.Kusto) | privatelink.{region}.kusto.windows.net | {region}.kusto.windows.net |
+| Azure Static Web Apps (Microsoft.Web/Staticsites) / staticSites | privatelink.1.azurestaticapps.net | 1.azurestaticapps.net |
<sup>1</sup>To use with IoT Hub's built-in Event Hub compatible endpoint. To learn more, see [private link support for IoT Hub's built-in endpoint](../iot-hub/virtual-network-support.md#built-in-event-hub-compatible-endpoint)
For Azure services, use the recommended zone names as described in the following
| Azure Data Factory (Microsoft.DataFactory/factories) / portal | privatelink.adf.azure.cn | adf.azure.cn | | Azure Cache for Redis (Microsoft.Cache/Redis) / redisCache | privatelink.redis.cache.chinacloudapi.cn | redis.cache.chinacloudapi.cn | | Azure HDInsight (Microsoft.HDInsight) | privatelink.azurehdinsight.cn | azurehdinsight.cn |
+| Azure Data Explorer (Microsoft.Kusto) | privatelink.{region}.kusto.windows.cn | {region}.kusto.windows.cn |
<sup>1</sup>To use with IoT Hub's built-in Event Hub compatible endpoint. To learn more, see [private link support for IoT Hub's built-in endpoint](../iot-hub/virtual-network-support.md#built-in-event-hub-compatible-endpoint)
private-link Private Endpoint Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-overview.md
A private-link resource is the destination target of a specified private endpoin
| Azure Container Registry | Microsoft.ContainerRegistry/registries | registry | | Azure Kubernetes Service - Kubernetes API | Microsoft.ContainerService/managedClusters | management | | Azure Data Factory | Microsoft.DataFactory/factories | dataFactory |
+| Azure Data Explorer | Microsoft.Kusto/clusters | cluster |
| Azure Database for MariaDB | Microsoft.DBforMariaDB/servers | mariadbServer | | Azure Database for MySQL | Microsoft.DBforMySQL/servers | mysqlServer | | Azure Database for PostgreSQL - Single server | Microsoft.DBforPostgreSQL/servers | postgresqlServer |
purview Asset Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/asset-insights.md
+ Previously updated : 09/27/2021 Last updated : 05/16/2022 # Asset insights on your data in Microsoft Purview
-This how-to guide describes how to access, view, and filter Microsoft Purview Asset insight reports for your data.
+This guide describes how to access, view, and filter Microsoft Purview asset insight reports for your data.
-> [!IMPORTANT]
-> Microsoft Purview Data Estate Insights are currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-In this how-to guide, you'll learn how to:
+In this guide, you'll learn how to:
> [!div class="checklist"] > * View data estate insights from your Microsoft Purview account.
In this how-to guide, you'll learn how to:
Before getting started with Microsoft Purview Data Estate Insights, make sure that you've completed the following steps:
-* Set up your Azure resources and populate the account with data.
+* Set up a storage resource and populated the account with data.
-* Set up and complete a scan on the source type.
+* Set up and completed a scan of your storage source.
-For more information, see [Manage data sources in Microsoft Purview](manage-data-sources.md).
+For more information on creating and completing a scan, see [the manage data sources in Microsoft Purview article](manage-data-sources.md).
-## Use Microsoft Purview Asset Insights
+## Understand your asset inventory in Data Estate Insights
-In Microsoft Purview, you can register and scan source types. Once the scan is complete, you can view the asset distribution in Asset Insights, which tells you the state of your data estate by classification and resource sets. It also tells you if there is any change in data size.
+In Microsoft Purview Data Estate Insights, you can get an overview of the assets that have been scanned into the Data Map and view key gaps that can be closed by governance stakeholders, for better governance of the data estate.
> [!NOTE]
-> After you have scanned your source types, give Asset Insights 3-8 hours to reflect the new assets. The delay may be due to high traffic in deployment region or size of your workload. For further information, please contact the field support team.
+> After you have scanned your source types, give asset insights 3-8 hours to reflect the new assets. The delay may be due to high traffic in deployment region or size of your workload. For further information, please contact support.
1. Navigate to your Microsoft Purview account in the Azure portal. 1. On the **Overview** page, in the **Get Started** section, select the **Open Microsoft Purview governance portal** tile.
- :::image type="content" source="./media/asset-insights/portal-access.png" alt-text="Launch Microsoft Purview from the Azure portal":::
+ :::image type="content" source="./media/asset-insights/portal-access.png" alt-text="Screenshot of Microsoft Purview account in Azure portal with the Microsoft Purview governance portal button highlighted.":::
1. On the Microsoft Purview **Home** page, select **Data Estate Insights** on the left menu.
- :::image type="content" source="./media/asset-insights/view-insights.png" alt-text="View your data estate insights in the Azure portal":::
+ :::image type="content" source="./media/asset-insights/view-insights.png" alt-text="Screenshot of the Microsoft Purview governance portal with the Data Estate Insights button highlighted in the left menu.":::
-1. In the **Data Estate Insights** area, select **Assets** to display the Microsoft Purview **Asset insights** report.
+1. In the **Data Estate Insights** area, look for **Assets** in the **Inventory and Ownership** section.
-### View Asset Insights
+ :::image type="content" source="./media/asset-insights/asset-insights-table-of-contents.png" alt-text="Screenshot of the Microsoft Purview governance portal Insights menu with Assets highlighted.":::
-1. The main **Asset Insights** page displays the following areas:
-2. High level KPI's to show source types, classified assets, and discovered assets
-
-3. The first graph shows **assets per source type**.
+### View asset summary
-4. View your asset distribution by source type. Pick a classification or an entire classification category to see asset distribution by source type.
-
-5. To view more, select **View more**, which displays a tabular form of the source types and asset counts. The classification filters are carried to this page.
+1. The **Assets Summary** report provides several high-level KPIs, with these graphs:
- :::image type="content" source="./media/asset-insights/highlight-kpis.png" alt-text="View KPIs and graph in Asset Insights":::
-
-6. Select a specific source for which you'd like to see top folders with asset counts.
+ * **Unclassified assets**: Assets with no system or custom classification on the entity or its columns.
+ * **Unassigned data owner**: Assets that have the owner attribute within "Contacts" tab as blank.
+ * **Net new assets in last 30 days**: Assets that were added to the Purview account, via data scan or Atlas API pushes.
+ * **Deleted assets in last 30 days**: Assets that were deleted from the Purview account, as a result of deletion from data sources.
- :::image type="content" source="./media/asset-insights/select-data-source.png" alt-text="Select source type":::
-
-7. Select the total assets against the top folder within the source type you selected above.
+ :::image type="content" source="./media/asset-insights/asset-insights-summary-report-small.png" alt-text="Screenshot of the insights assets summary graphs, showing the four main KPI charts." lightbox="media/asset-insights/asset-insights-summary-report.png":::
+
+1. Below these KPIs, you can also view your data asset distribution by collection.
+
+ :::image type="content" source="./media/asset-insights/assets-by-collection-small.png" alt-text="Screenshot of the insights assets by collection section, showing a graphic that summarizes the number of assets by collection." lightbox="media/asset-insights/assets-by-collection.png":::
+
+1. Using filters you can drill down to assets within a specific collection or classification category.
+
+ :::image type="content" source="./media/asset-insights/filter.png" alt-text="Screenshot of the insights assets by collection section, with the filter at the top selected, showing available collections.":::
+
+ > [!NOTE]
+ > ***Each classification filter has some common values:***
+ > * **Applied**: Any filter value is applied
+ > * **Not Applied**: No filter value is applied. For example, if you pick a classification filter with value as "Not Applied", the graph will show all assets with no classification.
+ > * **All**: Filter values are cleared. Meaning the graph will show all assets, with or without classification.
+ > * **Specific**: You have picked a specific classification from the filter, and only that classification will be shown.
+
+1. To learn more about which specific assets are shown in the graph, select **View details**.
+
+ :::image type="content" source="./media/asset-insights/view-details.png" alt-text="Screenshot of the insights assets by collection section, with the view-details button at the bottom highlighted.":::
+
+ :::image type="content" source="./media/asset-insights/details-view.png" alt-text="Screenshot of the asset details view screen, which is still within the Data Estate Insights application.":::
- :::image type="content" source="./media/asset-insights/file-path.png" alt-text="View file paths":::
+1. You can select any collection to view the collection's asset list.
-8. View the list of files within the folder. Navigate back to Data Estate Insights using the bread crumbs.
+ :::image type="content" source="./media/asset-insights/select-collection.png" alt-text="Screenshot of the asset details view screen, with one of the collections highlighted.":::
- :::image type="content" source="./media/asset-insights/list-page.png" alt-text="View list of assets":::
+ :::image type="content" source="./media/asset-insights/asset-list.png" alt-text="Screenshot of the asset list screen, showing all assets within the selected collection.":::
+
+1. You can also select an asset to edit without leaving the Data Estate Insights App.
+
+ :::image type="content" source="./media/asset-insights/edit-asset.png" alt-text="Screenshot of the asset list screen, with an asset selected for editing and the asset edit screen open within the Data Estate Insights application.":::
+
### File-based source types
-The next couple of graphs in Asset Insights show a distribution of file based source types. The first graph, called **Size trend (GB) of file type within source types**, shows top file type size trend over the last 30 days.
+
+The next graphs in asset insights show a distribution of file-based source types. The first graph, called **Size trend (GB) of file type within source types**, shows top file type size trends over the last 30 days.
1. Pick your source type to view the file type within the source.
-1. Select **View more** to see the current data size, change in size, current asset count and change in asset count.
+1. Select **View details** to see the current data size, change in size, current asset count and change in asset count.
> [!NOTE] > If the scan has run only once in last 30 days or any catalog change like classification addition/removed happened only once in 30 days, then the change information above appears blank.
The next couple of graphs in Asset Insights show a distribution of file based so
1. Select the path to see the asset list.
-The second graph in file-based source types is ***Files not associated with a resource set***. If you expect that all files should roll up into a resource set, this graph can help you understand which assets have not been rolled up. Missing assets can be an indication of the wrong file-pattern in the folder. Follow the same steps as in other graphs to view more details on the files.
+The second graph in file-based source types is **Files not associated with a resource set**. If you expect that all files should roll up into a resource set, this graph can help you understand which assets haven't been rolled up. Missing assets can be an indication of the wrong file-pattern in the folder. You can select **View details** below the graph for more information.
- :::image type="content" source="./media/asset-insights/file-based-assets.png" alt-text="View file based assets":::
+ :::image type="content" source="./media/asset-insights/file-based-assets-inline.png" alt-text="View file based assets" lightbox="./media/asset-insights/file-based-assets.png":::
## Next steps
-Learn more about Microsoft Purview insight reports with
+Learn how to use Data Estate Insights with resources below:
-- [Classification insights](./classification-insights.md)-- [Glossary insights](glossary-insights.md)
+* [Learn how to use data stewardship insights](data-stewardship.md)
+* [Learn how to use classification insights](classification-insights.md)
+* [Learn how to use glossary insights](glossary-insights.md)
+* [Learn how to use label insights](sensitivity-insights.md)
purview Azure Purview Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/azure-purview-connector-overview.md
The table below shows the supported capabilities for each data source. Select th
|**Category**| **Data Store** |**Technical metadata** |**Classification** |**Lineage** | **Access Policy** | |||||||
-| Azure | [Azure Blob Storage](register-scan-azure-blob-storage-source.md)| [Yes](register-scan-azure-blob-storage-source.md#register) | [Yes](register-scan-azure-blob-storage-source.md#scan)| Limited* | [Yes](how-to-access-policies-storage.md) |
+| Azure | [Azure Blob Storage](register-scan-azure-blob-storage-source.md)| [Yes](register-scan-azure-blob-storage-source.md#register) | [Yes](register-scan-azure-blob-storage-source.md#scan)| Limited* | [Yes (Preview)](how-to-data-owner-policies-storage.md) |
|| [Azure Cosmos DB](register-scan-azure-cosmos-database.md)| [Yes](register-scan-azure-cosmos-database.md#register) | [Yes](register-scan-azure-cosmos-database.md#scan)|No*|No| || [Azure Data Explorer](register-scan-azure-data-explorer.md)| [Yes](register-scan-azure-data-explorer.md#register) | [Yes](register-scan-azure-data-explorer.md#scan)| No* | No | || [Azure Data Factory](how-to-link-azure-data-factory.md) | [Yes](how-to-link-azure-data-factory.md) | No | [Yes](how-to-link-azure-data-factory.md) | No | || [Azure Data Lake Storage Gen1](register-scan-adls-gen1.md)| [Yes](register-scan-adls-gen1.md#register) | [Yes](register-scan-adls-gen1.md#scan)| Limited* | No |
-|| [Azure Data Lake Storage Gen2](register-scan-adls-gen2.md)| [Yes](register-scan-adls-gen2.md#register) | [Yes](register-scan-adls-gen2.md#scan)| Limited* | [Yes](how-to-access-policies-storage.md) |
+|| [Azure Data Lake Storage Gen2](register-scan-adls-gen2.md)| [Yes](register-scan-adls-gen2.md#register) | [Yes](register-scan-adls-gen2.md#scan)| Limited* | [Yes (Preview)](how-to-data-owner-policies-storage.md) |
|| [Azure Data Share](how-to-link-azure-data-share.md) | [Yes](how-to-link-azure-data-share.md) | No | [Yes](how-to-link-azure-data-share.md) | No | || [Azure Database for MySQL](register-scan-azure-mysql-database.md) | [Yes](register-scan-azure-mysql-database.md#register) | [Yes](register-scan-azure-mysql-database.md#scan) | No* | No | || [Azure Database for PostgreSQL](register-scan-azure-postgresql.md) | [Yes](register-scan-azure-postgresql.md#register) | [Yes](register-scan-azure-postgresql.md#scan) | No* | No | || [Azure Dedicated SQL pool (formerly SQL DW)](register-scan-azure-synapse-analytics.md)| [Yes](register-scan-azure-synapse-analytics.md#register) | [Yes](register-scan-azure-synapse-analytics.md#scan)| No* | No | || [Azure Files](register-scan-azure-files-storage-source.md)|[Yes](register-scan-azure-files-storage-source.md#register) | [Yes](register-scan-azure-files-storage-source.md#scan) | Limited* | No |
-|| [Azure SQL Database](register-scan-azure-sql-database.md)| [Yes](register-scan-azure-sql-database.md#register) |[Yes](register-scan-azure-sql-database.md#scan)| [Yes (Preview)](register-scan-azure-sql-database.md#lineagepreview) | No |
+|| [Azure SQL Database](register-scan-azure-sql-database.md)| [Yes](register-scan-azure-sql-database.md#register) |[Yes](register-scan-azure-sql-database.md#scan)| [Yes (Preview)](register-scan-azure-sql-database.md#lineagepreview) | [Yes (Preview)](how-to-data-owner-policies-azure-sql-db.md) |
|| [Azure SQL Managed Instance](register-scan-azure-sql-database-managed-instance.md)| [Yes](register-scan-azure-sql-database-managed-instance.md#scan) | [Yes](register-scan-azure-sql-database-managed-instance.md#scan) | No* | No | || [Azure Synapse Analytics (Workspace)](register-scan-synapse-workspace.md)| [Yes](register-scan-synapse-workspace.md#register) | [Yes](register-scan-synapse-workspace.md#scan)| [Yes - Synapse pipelines](how-to-lineage-azure-synapse-analytics.md)| No| |Database| [Amazon RDS](register-scan-amazon-rds.md) | [Yes](register-scan-amazon-rds.md#register-an-amazon-rds-data-source) | [Yes](register-scan-amazon-rds.md#scan-an-amazon-rds-database) | No | No |
The table below shows the supported capabilities for each data source. Select th
|| [SAP HANA](register-scan-sap-hana.md) | [Yes](register-scan-sap-hana.md#register) | No | No | No | || [Snowflake](register-scan-snowflake.md) | [Yes](register-scan-snowflake.md#register) | No | [Yes](register-scan-snowflake.md#lineage) | No | || [SQL Server](register-scan-on-premises-sql-server.md)| [Yes](register-scan-on-premises-sql-server.md#register) |[Yes](register-scan-on-premises-sql-server.md#scan) | No* | No|
+|| SQL Server on Azure-Arc| No |No | No |[Yes (Preview)](how-to-data-owner-policies-arc-sql-server.md) |
|| [Teradata](register-scan-teradata-source.md)| [Yes](register-scan-teradata-source.md#register)| No | [Yes*](register-scan-teradata-source.md#lineage) | No| |File|[Amazon S3](register-scan-amazon-s3.md)|[Yes](register-scan-amazon-s3.md)| [Yes](register-scan-amazon-s3.md)| Limited* | No| |Services and apps| [Erwin](register-scan-erwin-source.md)| [Yes](register-scan-erwin-source.md#register)| No | [Yes](register-scan-erwin-source.md#lineage)| No|
For all structured file formats, Microsoft Purview scanner samples files in the
- For structured file types, it samples the top 128 rows in each column or the first 1 MB, whichever is lower. - For document file formats, it samples the first 20 MB of each file. - If a document file is larger than 20 MB, then it is not subject to a deep scan (subject to classification). In that case, Microsoft Purview captures only basic meta data like file name and fully qualified name.-- For **tabular data sources (SQL, CosmosDB)**, it samples the top 128 rows.
+- For **tabular data sources (SQL)**, it samples the top 128 rows.
+- For **Azure Cosmos DB (SQL API)**, up to 300 distinct properties from the first 10 documents in a container will be collected for the schema and for each property, values from up to 128 documents or the first 1 MB will be sampled.
## Resource set file sampling
purview Catalog Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-permissions.md
description: This article gives an overview permission, access control, and coll
+ Previously updated : 03/09/2022 Last updated : 05/16/2022 # Access control in the Microsoft Purview Data Map
The Microsoft Purview Data Map uses **Collections** to organize and manage acces
A collection is a tool Microsoft Purview uses to group assets, sources, and other artifacts into a hierarchy for discoverability and to manage access control. All accesses to Microsoft Purview's resources are managed from collections in the Microsoft Purview account itself.
-> [!NOTE]
-> As of November 8th, 2021, ***Microsoft Purview Data Estate Insights*** is accessible to Data Curators. Data Readers do not have access to Data Estate Insights.
- ## Roles Microsoft Purview uses a set of predefined roles to control who can access what within the account. These roles are currently:
Microsoft Purview uses a set of predefined roles to control who can access what
- **Data curators** - a role that provides access to the data catalog to manage assets, configure custom classifications, set up glossary terms, and view data estate insights. Data curators can create, read, modify, move, and delete assets. They can also apply annotations to assets.
- **Data readers** - a role that provides read-only access to data assets, classifications, classification rules, collections, and glossary terms.
- **Data source administrator** - a role that allows a user to manage data sources and scans. If a user is granted only the **Data source admin** role on a given data source, they can run new scans using an existing scan rule. To create new scan rules, the user must also be granted either the **Data reader** or **Data curator** role.
+- **Insights reader** - a role that provides read-only access to insights reports for collections where the insights reader also has at least the **Data reader** role. For more information, see [insights permissions.](insights-permissions.md)
- **Policy author (Preview)** - a role that allows a user to view, update, and delete Microsoft Purview policies through the policy management app within Microsoft Purview. - **Workflow administrator** - a role that allows a user to access the workflow authoring page in the Microsoft Purview governance portal, and publish workflows on collections where they have access permissions. Workflow administrator only has access to authoring, and so will need at least Data reader permission on a collection to be able to access the Purview governance portal.
Microsoft Purview uses a set of predefined roles to control who can access what
|I need to put users into roles in Microsoft Purview | Collection administrator |
|I need to create and publish access policies | Data source administrator and policy author |
|I need to create workflows for my Microsoft Purview account | Workflow administrator |
+|I need to view insights for collections I'm a part of | Insights reader **or** data curator |
:::image type="content" source="media/catalog-permissions/catalog-permission-role.svg" alt-text="Chart showing Microsoft Purview roles" lightbox="media/catalog-permissions/catalog-permission-role.svg"::: >[!NOTE]
purview Classification Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/classification-insights.md
Title: Classification reporting on your data in Microsoft Purview using Microsoft Purview Data Estate Insights description: This how-to guide describes how to view and use Microsoft Purview classification reporting on your data.--++ Previously updated : 09/27/2021
-# Customer intent: As a security officer, I need to understand how to use Microsoft Purview Data Estate Insights to learn about sensitive data identified and classified and labeled during scanning.
- Last updated : 05/16/2022+
+#Customer intent: As a security officer, I need to understand how to use Microsoft Purview Data Estate Insights to learn about sensitive data identified and classified and labeled during scanning.
-# Classification insights about your data from Microsoft Purview
+# Classification insights about your data in Microsoft Purview
-This how-to guide describes how to access, view, and filter Microsoft Purview Classification insight reports for your data.
+This guide describes how to access, view, and filter Microsoft Purview Classification insight reports for your data.
-> [!IMPORTANT]
-> Microsoft Purview Data Estate Insights are currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-Supported data sources include: Azure Blob Storage, Azure Data Lake Storage (ADLS) GEN 1, Azure Data Lake Storage (ADLS) GEN 2, Azure Cosmos DB (SQL API), Azure Synapse Analytics (formerly SQL DW), Azure SQL Database, Azure SQL Managed Instance, SQL Server, Amazon S3 buckets, and Amazon RDS databases (public preview), Power BI
-
-In this how-to guide, you'll learn how to:
+In this guide, you'll learn how to:
> [!div class="checklist"] > - Launch your Microsoft Purview account from Azure
In this how-to guide, you'll learn how to:
Before getting started with Microsoft Purview Data Estate Insights, make sure that you've completed the following steps: -- Set up your Azure resources and populated the relevant accounts with test data
+* Set up a storage resource and populated the account with data.
-- Set up and completed a scan on the test data in each data source. For more information, see [Manage data sources in Microsoft Purview](manage-data-sources.md) and [Create a scan rule set](create-a-scan-rule-set.md).
+* Set up and completed a scan on the data in each data source. For more information, see [Manage data sources in Microsoft Purview](manage-data-sources.md) and [Create a scan rule set](create-a-scan-rule-set.md).
-- Signed in to Microsoft Purview with account with a [Data Reader or Data Curator role](catalog-permissions.md#roles).
+* Signed in to Microsoft Purview with an account that has a [Data Curator or Insights Reader role](catalog-permissions.md#roles).
-For more information, see [Manage data sources in Microsoft Purview](manage-data-sources.md).
## Use Microsoft Purview Data Estate Insights for classifications
In Microsoft Purview, classifications are similar to subject tags, and are used
Microsoft Purview uses the same sensitive information types as Microsoft 365, allowing you to stretch your existing security policies and protection across your entire data estate. > [!NOTE]
-> After you have scanned your source types, give **Classification** Insights a couple of hours to reflect the new assets.
+> After you have scanned your source types, give **classification insights** a couple of hours to reflect the new assets.
**To view classification insights:**
Microsoft Purview uses the same sensitive information types as Microsoft 365, al
1. In Microsoft Purview, select the **Data Estate Insights** :::image type="icon" source="media/insights/ico-insights.png" border="false"::: menu item on the left to access your **Data Estate Insights** area.
-1. In the **Data Estate Insights** :::image type="icon" source="media/insights/ico-insights.png" border="false"::: area, select **Classification** to display the Microsoft Purview **Classification insights** report.
+1. In the **Data Estate Insights** :::image type="icon" source="media/insights/ico-insights.png" border="false"::: area, select **Classifications** to display the Microsoft Purview **Classification insights** report.
- :::image type="content" source="./media/insights/select-classification-labeling.png" alt-text="Classification insights report" lightbox="media/insights/select-classification-labeling.png":::
+ :::image type="content" source="./media/insights/select-classification-labeling.png" alt-text="Screenshot of the classification insights report." lightbox="media/insights/select-classification-labeling.png":::
- The main **Classification insights** page displays the following areas:
+ The main **classification insights** page displays the following areas:
|Area |Description | |||
Microsoft Purview uses the same sensitive information types as Microsoft 365, al
## Classification insights drilldown
-In any of the following **Classification insights** graphs, select the **View more** link to drill down for more details:
+In any of the following **Classification insights** graphs, select the **View details** link to drill down for more details:
- **Top classification categories by sources** - **Top classifications for files**
In any of the following **Classification insights** graphs, select the **View mo
For example: Do any of the following to learn more:
Do any of the following to learn more:
## Next steps
-Learn more about Microsoft Purview insight reports
-> [!div class="nextstepaction"]
-> [Glossary insights](glossary-insights.md)
+Learn how to use Data Estate Insights with resources below:
-> [!div class="nextstepaction"]
-> [Sensitivity labeling insights](./sensitivity-insights.md)
+* [Learn how to use Asset insights](asset-insights.md)
+* [Learn how to use Data Stewardship](data-stewardship.md)
+* [Learn how to use Classification insights](classification-insights.md)
+* [Learn how to use Glossary insights](glossary-insights.md)
+* [Learn how to use Label insights](sensitivity-insights.md)
purview Concept Best Practices Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-best-practices-automation.md
Previously updated : 11/23/2021 Last updated : 05/17/2022 # Microsoft Purview automation best practices
When to use?
* Custom application development or process automation. ## Streaming (Atlas Kafka)
-Each Microsoft Purview account comes with an optional fully managed event hub, accessible via the Atlas Kafka endpoint found via the Azure portal > Microsoft Purview Account > Properties. Microsoft Purview events can be monitored by consuming messages from the event hub. External systems can also use the event hub to publish events to Microsoft Purview as they occur.
+Each Microsoft Purview account comes with a fully managed event hub, accessible via the Atlas Kafka endpoint found via the Azure portal > Microsoft Purview Account > Properties. Microsoft Purview events can be monitored by consuming messages from the event hub. External systems can also use the event hub to publish events to Microsoft Purview as they occur.
* **Consume Events** - Microsoft Purview will send notifications about metadata changes to Kafka topic **ATLAS_ENTITIES**. Applications interested in metadata changes can monitor for these notifications. Supported operations include: `ENTITY_CREATE`, `ENTITY_UPDATE`, `ENTITY_DELETE`, `CLASSIFICATION_ADD`, `CLASSIFICATION_UPDATE`, `CLASSIFICATION_DELETE`.
* **Publish Events** - Microsoft Purview can be notified of metadata changes via notifications to Kafka topic **ATLAS_HOOK**. Supported operations include: `ENTITY_CREATE_V2`, `ENTITY_PARTIAL_UPDATE_V2`, `ENTITY_FULL_UPDATE_V2`, `ENTITY_DELETE_V2`.
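As a rough illustration of the consume pattern described in the list above (not part of the original article), the following Python sketch reads notifications from the **ATLAS_ENTITIES** topic through the Azure Event Hubs SDK. The connection string is a placeholder you copy from your account's Atlas Kafka endpoint, and the event hub name used here is an assumption for the example.

```python
# Illustrative sketch only: consume Microsoft Purview metadata-change notifications
# from the ATLAS_ENTITIES topic via the account's Atlas Kafka (Event Hubs) endpoint.
import json
from azure.eventhub import EventHubConsumerClient

CONNECTION_STR = "<atlas-kafka-endpoint-connection-string>"  # placeholder from the portal

def on_event(partition_context, event):
    # Each event body is a JSON notification, for example ENTITY_CREATE or ENTITY_UPDATE.
    notification = json.loads(event.body_as_str())
    print(partition_context.partition_id, notification)
    partition_context.update_checkpoint(event)

client = EventHubConsumerClient.from_connection_string(
    CONNECTION_STR,
    consumer_group="$Default",
    eventhub_name="atlas_entities",  # assumed hub name for the ATLAS_ENTITIES topic
)

with client:
    # Blocks and invokes on_event for each notification, starting from the beginning of the stream.
    client.receive(on_event=on_event, starting_position="-1")
```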
purview Concept Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-insights.md
Title: Understand Data Estate Insights reports in Microsoft Purview
-description: This article explains what Data Estate Insights are in Microsoft Purview.
+ Title: Understand Insights reports in Microsoft Purview
+description: This article explains what Insights are in Microsoft Purview.
+ Previously updated : 12/02/2020 Last updated : 05/16/2022
-# Understand Data Estate Insights in Microsoft Purview
+# Understand the Microsoft Purview Data Estate Insights application
-This article provides an overview of the Data Estate Insights feature in Microsoft Purview.
+This article provides an overview of the Data Estate Insights application in Microsoft Purview.
-Data Estate Insights are one of the key pillars of Microsoft Purview. The feature provides customers, a single pane of glass view into their catalog and further aims to provide specific insights to the data source administrators, business users, data stewards, data officer and, security administrators. Currently, Microsoft Purview has the following Data Estate Insights reports that will be available to customers during Insight's public preview.
-> [!IMPORTANT]
-> Microsoft Purview Data Estate Insights are currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+The Data Estate Insights application is purpose-built for governance stakeholders, primarily for roles focused on data management, compliance, and data use, like a Chief Data Officer. The application provides actionable insights into the organization's data estate, catalog usage, adoption, and processes. As organizations scan and populate their Microsoft Purview Data Map, the Data Estate Insights application automatically identifies governance gaps and highlights them in its top metrics. It also provides a drill-down experience that enables all stakeholders, such as data owners and data stewards, to take appropriate action to close the gaps.
-## Asset Insights
+All the reports within the Data Estate Insights application are automatically generated and populated, so governance stakeholders can focus on the information itself, rather than building the reports.
-This report gives a bird's eye view of your data estate, and its distribution by source type, by classification and by file size as some of the dimensions. This report caters to different types of stakeholder in the data governance and cataloging roles, who are interested to know state of their DataMap, by classification and file extensions.
+The dashboards and reports available within Microsoft Purview Data Estate Insights are categorized in three sections:
+* [Health](#health)
+* [Inventory and Ownership](#inventory-and-ownership)
+* [Curation and governance](#curation-and-governance)
-The report provides broad insights through graphs and KPIs and later deep dive into specific anomalies such as misplaced files. The report also supports an end-to-end customer experience, where customer can view count of assets with a specific classification, can break down the information by source types and top folders, and can also view the list of assets for further investigation.
+ :::image type="content" source="./media/insights/table-of-contents.png" alt-text="Screenshot of table of contents for Microsoft Purview Data Estate Insights.":::
-> [!NOTE]
-> File Extension Insights has been merged into Asset Insights with richer trend report showing growth in data size by file extension. Learn more by exploring [Asset Insights](asset-insights.md).
+## Health
-## Glossary Insights
+Data, governance, and quality focused users like chief data officers and data stewards can start at the health dashboards to understand the current health status of their data estate, current return on investment on their catalog, and begin to address any outstanding issues.
-This report gives the Data Stewards a status report on glossary. Data Stewards can view this report to understand distribution of glossary terms by status, learn how many glossary terms are attached to assets and how many aren't yet attached to any asset. Business users can also learn about completeness of their glossary terms.
+ :::image type="content" source="./media/insights/data-stewardship-small.png" alt-text="Screenshot of health insights report dashboard." lightbox="media/insights/data-stewardship-large.png":::
-This report summarizes top items that a Data Steward needs to focus on, to create a complete and usable glossary for their organization. Stewards can also navigate into the "Glossary" experience from "Glossary Insights" experience, to make changes on a specific glossary term.
+### Data stewardship
-## Classification Insights
+The data stewardship dashboard highlights key performing indicators that the governance stakeholders need to focus on, to attain a clean and governance-ready data estate. Information like asset curation rates, data ownership rates, and classification rates are calculated out of the box and trended over time.
-This report provides details about where classified data is located, the classifications found during a scan, and a drill-down to the classified files themselves. It enables Stewards, Curators and Security Administrators to understand the types of information found in their organization's data estate.
+Management-focused users, like a Chief Data Officer, can also get a high-level understanding of weekly and monthly active users of their catalog, and information about how the catalog is being used, so they can see whether the catalog is being adopted across their organization. Better adoption leads to better overall governance penetration in the organization.
+
+For more information about these dashboards, see the [data stewardship documentation](data-stewardship.md).
+
+## Inventory and ownership
+
+This area focuses on summarizing data estate inventory for data quality and management focused users, like data stewards and data curators. These dashboards provide key metrics and overviews to give users the ability to find and resolve gaps in their assets, all from within the data estate insights application.
+
+ :::image type="content" source="./media/insights/asset-insights-small.png" alt-text="Screenshot of inventory and ownership insights report dashboard." lightbox="media/insights/asset-insights-large.png":::
+
+### Assets
+
+This report provides a summary of your data estate and its distribution by collection and source type. You can also view new assets, deleted assets, updated assets, and stale assets from the last 30 days.
+
+Explore your data by classification, investigate why assets didn't get classified, and see how many assets exist without a data owner assigned. To take action, the report provides a "View Detail" button to view and edit the specific assets that need treatment.
+
+You can also view data asset trends by asset count and data size, as we record this metadata during the data map scanning process.
+
+For more information, see the [asset insights documentation](asset-insights.md).
+
+## Curation and governance
+
+This area focuses on giving a summary of how curated your assets are by several curation contexts. Currently we focus on showcasing assets with glossary, classification, and sensitivity labels.
+
+ :::image type="content" source="./media/insights/curation-and-governance-small.png" alt-text="Screenshot of example curation and governance insights report dashboard." lightbox="media/insights/curation-and-governance-large.png":::
+
+### Glossary
+
+Data, governance, and quality focused users like chief data officers and data stewards get a status check on their business glossary. Data maintenance and collection focused users like data stewards can view this report to understand the distribution of glossary terms by status, learn how many glossary terms are attached to assets, and how many aren't yet attached to any asset. Business users can also learn about the completeness of their glossary terms.
+
+This report summarizes the top items that a user needs to focus on to create a complete and usable glossary for their organization. Users can also navigate into the "Glossary" experience from the "Glossary Insights" experience, to make changes to a specific glossary term.
+
+For more information, see the [glossary insights in Microsoft Purview documentation](glossary-insights.md).
+
+### Classifications
+
+This report provides details about where classified data is located, the classifications found during a scan, and a drill-down to the classified files themselves. It enables data quality and data security focused users like data stewards, data curators, and security administrators to understand the types of information found in their organization's data estate.
In Microsoft Purview, classifications are similar to subject tags, and are used to mark and identify content of a specific type in your data estate.
-Use the Classification Insights report to identify content with specific classifications and understand required actions, such as adding more security to the repositories, or moving content to a more secure location.
+Use the classification insights report to identify content with specific classifications and understand required actions, such as adding extra security to the repositories, or moving content to a more secure location.
-For more information, see [Classification insights about your data from Microsoft Purview](classification-insights.md).
+For more information, see the [classification insights about your data from Microsoft Purview documentation](classification-insights.md).
-## Sensitivity Labeling Insights
+### Sensitivity Labels
-This report provides details about the sensitivity labels found during a scan, and a drill-down to the labeled files themselves. It enables security administrators to ensure the security of information found in their organization's data estate.
+This report provides details about the sensitivity labels found during a scan and a drill-down to the labeled files themselves. It enables security administrators to ensure the security of the data found in their organization's data estate by identifying where sensitive data is stored.
In Microsoft Purview, sensitivity labels are used to identify classification type categories, and the group security policies that you want to apply to each category.
-Use the Labeling Insights report to identify the sensitivity labels found in your content and understand required actions, such as managing access to specific repositories or files.
+Use the labeling insights report to identify the sensitivity labels found in your content and understand required actions, such as managing access to specific repositories or files.
-For more information, see [Sensitivity label insights about your data in Microsoft Purview](sensitivity-insights.md).
+For more information, see the [sensitivity label insights about your data in Microsoft Purview documentation](sensitivity-insights.md).
## Next steps
-* [Asset insights](asset-insights.md)
-* [Glossary insights](glossary-insights.md)
-* [Classification insights](./classification-insights.md)
+Learn how to use Data Estate Insights with resources below:
+
+* [Learn how to use Asset insights](asset-insights.md)
+* [Learn how to use Classification insights](classification-insights.md)
+* [Learn how to use Glossary insights](glossary-insights.md)
+* [Learn how to use Label insights](sensitivity-insights.md)
purview Data Stewardship https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/data-stewardship.md
+
+ Title: Data Stewardship Insights in Microsoft Purview
+description: This article describes the data stewardship dashboards in Microsoft Purview, and how they can be used to govern and manage your data estate.
++++++ Last updated : 05/16/2022++
+# Get insights into data stewardship from Microsoft Purview
+
+As described in the [insights concepts](concept-insights.md), data stewardship is a report that is part of the "Health" section of the Data Estate Insights App. This report offers a one-stop shop experience for data, governance, and quality focused users like chief data officers and data stewards to get actionable insights into key gaps in their data estate, for better governance.
+
+In this guide, you'll learn how to:
+
+> [!div class="checklist"]
+> * Navigate to and view the data stewardship report from your Microsoft Purview account.
+> * Drill down for more asset count details.
+
+## Prerequisites
+
+Before getting started with Microsoft Purview Data Estate Insights, make sure that you've completed the following steps:
+
+* Set up a storage resource and populated the account with data.
+
+* Set up and completed a scan of your storage source.
+
+For more information on how to create and complete a scan, see [the manage data sources in Microsoft Purview article](manage-data-sources.md).
+
+## Understand your data estate and catalog health in Data Estate Insights
+
+In Microsoft Purview Data Estate Insights, you can get an overview of all assets inventoried in the Data Map, and any key gaps that can be closed by governance stakeholders, for better governance of the data estate.
+
+> [!NOTE]
+> After you have scanned your source types, give asset insights 3-8 hours to reflect the new assets. The delay may be due to high traffic in the deployment region or the size of your workload. For further information, please contact support.
+
+1. Navigate to your Microsoft Purview account in the Azure portal.
+
+1. On the **Overview** page, in the **Get Started** section, select the **Open Microsoft Purview governance portal** tile.
+
+ :::image type="content" source="./media/data-stewardship/portal-access.png" alt-text="Screenshot of Microsoft Purview account in Azure portal with the Microsoft Purview governance portal button highlighted.":::
+
+1. On the Microsoft Purview **Home** page, select **Data Estate Insights** on the left menu.
+
+ :::image type="content" source="./media/data-stewardship/view-insights.png" alt-text="Screenshot of the Microsoft Purview governance portal with the Data Estate Insights button highlighted in the left menu.":::
+
+1. In the **Data Estate Insights** area, look for **Data Stewardship** in the **Health** section.
+
+ :::image type="content" source="./media/data-stewardship/data-stewardship-table-of-contents.png" alt-text="Screenshot of the Microsoft Purview governance portal Data Estate Insights menu with Data Stewardship highlighted under the Health section.":::
++
+### View data stewardship dashboard
+
+The dashboard is purpose-built for governance and quality focused users, like data stewards and chief data officers, to understand the data estate health and catalog adoption health of their organization. The dashboard shows high-level KPIs that help reduce governance risks:
+
+ * **Asset curation**: All data assets are categorized into three buckets - "Fully curated", "Partially curated" and "Not curated", based on certain attributes of assets being present. An asset is "Fully curated" if it has at least one classification tag, an assigned Data Owner and a description. If any of these attributes is missing, but not all, then the asset is categorized as "Partially curated" and if all of them are missing, then it's "Not curated".
+ * **Asset data ownership**: Assets that have a blank owner attribute within the "Contacts" tab are categorized as "No owner"; otherwise, they're categorized as "Owner assigned".
+ * **Catalog usage and adoption**: This KPI shows a sum of monthly active users of the catalog across different pages.
+
+ :::image type="content" source="./media/data-stewardship/kpis-small.png" alt-text="Screenshot of the data stewardship insights summary graphs, showing the three main KPI charts." lightbox="media/data-stewardship/data-stewardship-kpis-large.png":::
+
+
+As users look at the main dashboard layout, it's divided into two tabs - [**Data estate**](#data-estate) and [**Catalog adoption**](#catalog-adoption).
+
+#### Data estate
+
+This section of **data stewardship** gives governance and quality focused users, like data stewards and chief data officers, an overview of their data estate, as well as running trends.
++
+##### Data estate health
+Data estate health is a scorecard view that helps management and governance focused users, like chief data officers, understand critical governance metrics that can be looked at by collection hierarchy.
++
+You can view the following metrics:
+* **Total asset**: Count of assets by collection drill-down
+* **With sensitive classifications**: Count of assets with any system classification applied
+* **Fully curated assets**: Count of assets that have a data owner, at least one classification and a description.
+* **Owners assigned**: Count of assets with data owner assigned on them
+* **No classifications**: Count of assets with no classification tag
+* **Net new assets**: Count of new assets pushed in the Data Map in the last 30 days
+* **Deleted assets**: Count of deleted assets from the Data Map in the last 30 days
+
+You can also drill down by collection paths. As you hover over each column name, it provides a description of the column and takes you to the detailed graph for further drill-down.
+++
+##### Asset curation
+All data assets are categorized into three buckets - ***"Fully curated"***, ***"Partially curated"*** and ***"Not curated"***, based on whether assets have been given certain attributes.
++
+An asset is ***"Fully curated"*** if it has at least one classification tag, an assigned data owner, and a description.
+
+If any of these attributes is missing, but not all, then the asset is categorized as ***"Partially curated"***. If all of them are missing, then it's listed as ***"Not curated"***.
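+The rule above can be summarized with a small illustrative sketch (assumptions only; the asset attribute names below are hypothetical examples, not a Microsoft Purview API):
+
+```python
+# Illustrative sketch of the curation buckets described above. The asset
+# attribute names are hypothetical examples, not a Microsoft Purview API.
+def curation_bucket(asset: dict) -> str:
+    checks = [
+        bool(asset.get("classifications")),  # at least one classification tag
+        bool(asset.get("owner")),            # an assigned data owner
+        bool(asset.get("description")),      # a description
+    ]
+    if all(checks):
+        return "Fully curated"
+    if any(checks):
+        return "Partially curated"
+    return "Not curated"
+
+# Example: owner and description present, but no classification tag
+print(curation_bucket({"owner": "owner@contoso.com", "description": "Sales table"}))
+# -> Partially curated
+```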
+
+You can drill down by collection hierarchy.
++
+For further information about which assets aren't fully curated, you can select the ***"View details"*** link, which takes you into the deeper view.
++
+In the ***"View details"*** page, if you select a specific collection, it will list all assets with attribute values or blanks, that make up the ***"fully curated"*** assets.
++
+The detail page shows two key pieces of information:
+
+First, if the asset is classified, it tells you the ***classification source***. It's **Manual** if a curator/data owner applied the classification manually. It's **Automatic** if it was classified during scanning. This page only provides the last applied classification state.
+
+Second, if an asset is unclassified, it tells you why it's not classified, in the column ***Reasons for unclassified***.
+Currently, Data Estate Insights can give one of the following reasons:
+* No match found
+* Low confidence score
+* Not applicable
++
+You can select any asset and add missing attributes, without leaving the **Data estate insights** app.
++
+##### Trends and gap analysis
+
+This graph shows how the assets and key metrics have been trending over:
+* Last 30 days: The graph takes last run of the day or recording of the last run across days as a data point.
+* Last six weeks: The graph takes last run of the week where week ends on Sunday. If there was no run on Sunday, then it takes the last recorded run.
+* Last 12 months: The graph takes last run of the month.
+* Last four quarters: The graph takes last run of the calendar quarter.
++
+#### Catalog adoption
+
+This tab of the **data stewardship** insight gives management-focused users, like chief data officers, a view of what activity is happening in the catalog. The hypothesis is that the more activity there is in the catalog, the better its usage, and therefore the better the chances that the governance program delivers a high return on investment.
++
+##### Active users trend by catalog features
+
+Active users are trended by area of the catalog; the graph focuses on activity in **search and browse** and **asset edits**.
+
+If a user has searched or browsed, meaning they typed a search keyword and pressed Enter, or selected browse by assets, we count them as an active user of "search and browse".
+
+If a user has edited an asset by selecting "save" after making changes, we consider that user as an active user of "asset edits".
++
+##### Most viewed assets in last 30 days
+
+You can see the most viewed assets in the catalog, their current curation level, and number of views. This list is currently limited to five items.
++
+##### Most searched keywords in last 30 days
+
+You can view the count of the top five searches that returned a result. The table also shows which keywords were searched without any results in the catalog.
++
+## Next steps
+
+Learn more about Microsoft Purview Data estate insights through:
+* [Concepts](concept-insights.md)
purview Glossary Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/glossary-insights.md
Title: Glossary report on your data using Microsoft Purview Data Estate Insights
-description: This how-to guide describes how to view and use Microsoft Purview Data Estate Insights glossary reporting on your data.
+description: This guide describes how to view and use Microsoft Purview Data Estate Insights glossary reporting on your data.
+ Previously updated : 09/27/2021- Last updated : 05/16/2022
-# Glossary insights on your data in Microsoft Purview
+# Insights for your business glossary in Microsoft Purview
-This how-to guide describes how to access, view, and filter Microsoft Purview Glossary insight reports for your data.
+This guide describes how to access, view, and filter Microsoft Purview glossary insight reports for your data.
-> [!IMPORTANT]
-> Microsoft Purview Data Estate Insights are currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
In this how-to guide, you'll learn how to: > [!div class="checklist"]
-> - Go to Data Estate Insights from your Microsoft Purview account
-> - Get a bird's eye view of your data
+> - Find Data Estate Insights from your Microsoft Purview account
+> - Get a bird's eye view of your data.
## Prerequisites
-Before getting started with Microsoft Purview Data Estate Insights, make sure that you've completed the following steps:
--- Set up your Azure resources and populate the account with data
+Before getting started with Microsoft Purview Data Estate Insights glossary insights, make sure that you've completed the following steps:
-- Set up and complete a scan on the source type
+* Set up a storage resource and populated the account with data.
-- Set up a glossary and attach assets to glossary terms
+* [Set up and complete a scan of your storage source](manage-data-sources.md).
-For more information, see [Manage data sources in Microsoft Purview](manage-data-sources.md).
+* Set up at least one [business glossary term](how-to-create-import-export-glossary.md) and attach it to an asset.
-## Use Microsoft Purview Glossary Insights
+## Use Microsoft Purview glossary insights
-In Microsoft Purview, you can create glossary terms and attach them to assets. Later, you can view the glossary distribution in Glossary Insights. This tells you the state of your glossary by terms attached to assets. It also tells you terms by status and distribution of roles by number of users.
+In Microsoft Purview, you can [create glossary terms and attach them to assets](how-to-create-import-export-glossary.md). As you make use of these terms in your data map, you can view the glossary distribution in glossary insights. These insights will give you the state of your glossary based on:
+* Number of terms attached to assets
+* Status of terms
+* Distribution of roles by users
**To view Glossary Insights:**
-1. Go to the **Microsoft Purview** [instance screen in the Azure portal](https://aka.ms/purviewportal) and select your Microsoft Purview account.
+1. Go to the **Microsoft Purview** [account screen in the Azure portal](https://aka.ms/purviewportal) and select your Microsoft Purview account.
1. On the **Overview** page, in the **Get Started** section, select **Open Microsoft Purview governance portal** account tile.
- :::image type="content" source="./media/glossary-insights/portal-access.png" alt-text="Launch Microsoft Purview from the Azure portal":::
+ :::image type="content" source="./media/glossary-insights/portal-access.png" alt-text="Screenshot showing the Open Microsoft Purview governance portal button on the account page.":::
1. On the Microsoft Purview **Home** page, select **Data Estate Insights** on the left menu.
- :::image type="content" source="./media/glossary-insights/view-insights.png" alt-text="View your data estate insights in the Azure portal":::
-
-1. In the **Data Estate Insights** area, select **Glossary** to display the Microsoft Purview **Glossary insights** report.
+ :::image type="content" source="./media/glossary-insights/view-insights.png" alt-text="Screenshot showing Data Estate Insights in left menu of the Microsoft Purview governance portal.":::
-**Glossary Insights** provides you as a business user, valuable information to maintain a well-defined glossary for your organization.
+1. In the **Data Estate Insights** area, select **Glossary** to display the Microsoft Purview **glossary insights** report.
-1. The report starts with **High-level KPIs** that shows ***Total terms*** in your Microsoft Purview account, ***Approved terms without assets*** and ***Expired terms with assets***. Each of these values will help you identify the health of your Glossary.
+1. The report starts with **High-level KPIs** that show ***Total terms*** in your Microsoft Purview account, ***Approved terms without assets***, and ***Expired terms with assets***. Each of these values will help you understand the current health of your Glossary.
- :::image type="content" source="./media/glossary-insights/glossary-kpi.png" alt-text="View glossary insights KPI":::
+ :::image type="content" source="./media/glossary-insights/glossary-kpi.png" alt-text="Screenshot showing glossary KPI charts.":::
-2. **Snapshot of terms** section (displayed above) shows you term status as ***Draft***, ***Approved***, ***Alert***, and ***Expired*** for terms with assets and terms without assets.
+1. The **Snapshot of terms** section (displayed above) shows the term status as ***Draft***, ***Approved***, ***Alert***, and ***Expired*** for terms with assets and terms without assets.
-3. Select **View more** to see the term names with various status and more details about ***Stewards*** and ***Experts***.
+1. Select **View details** to see the term names with various status and more details about ***Stewards*** and ***Experts***.
- :::image type="content" source="./media/glossary-insights/glossary-view-more.png" alt-text="Snapshot of terms with and without assets":::
+ :::image type="content" source="./media/glossary-insights/glossary-view-more.png" alt-text="Screenshot of terms with and without assets.":::
-4. When you select "View more" for ***Approved terms with assets***, Data Estate Insights allow you to navigate to the **Glossary** term detail page, from where you can further navigate to the list of assets with the attached terms.
+1. When you select "View more" for ***Approved terms with assets***, Data Estate Insights allows you to navigate to the **Glossary** term detail page, from where you can further navigate to the list of assets with the attached terms.
- :::image type="content" source="./media/glossary-insights/navigate-to-glossary-detail.png" alt-text="Data Estate Insights to glossary":::
+ :::image type="content" source="./media/glossary-insights/navigate-to-glossary-detail.png" alt-text="Screenshot of Data Estate Insights to glossary.":::
-4. In Glossary insights page, view a distribution of **Incomplete terms** by type of information missing. The graph shows count of terms with ***Missing definition***, ***Missing expert***, ***Missing steward*** and ***Missing multiple*** fields.
+1. On the Glossary insights page, view a distribution of **Incomplete terms** by the type of missing information. The graph shows the count of terms with ***Missing definition***, ***Missing expert***, ***Missing steward***, and ***Missing multiple*** fields.
1. Select ***View more*** from **Incomplete terms** to view the terms that have missing information. You can navigate to the Glossary term detail page to input the missing information and ensure the glossary term is complete.

## Next steps
-Learn more about how to create a glossary term through [Glossary](./how-to-create-import-export-glossary.md)
+Learn more about how to create a glossary term through the [glossary documentation](./how-to-create-import-export-glossary.md).
purview How To Create And Manage Collections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-create-and-manage-collections.md
The following guide will discuss the roles, how to manage them, and permissions
### Roles All assigned roles apply to sources, assets, and other objects within the collection where the role is applied.
+A few of the main roles are:
-* **Collection admins** can edit the collection, its details, and add subcollections. They can also add data curators, data readers, and other Microsoft Purview roles to a collection scope. Collection admins that are automatically inherited from a parent collection can't be removed.
-* **Data source admins** can manage data sources and data scans. They can also enter the policy management app to view and publish policies.
-* **Data curators** can perform create, read, modify, and delete actions on catalog data objects and establish relationships between objects. They can also enter the policy management app to view policies.
-* **Data readers** can access but not modify catalog data objects.
-* **Policy Authors** can enter the policy management app and create/edit policy statements.
+- **Collection administrator** - a role for users that will need to assign roles to other users in the Microsoft Purview governance portal or manage collections. Collection admins can add users to roles on collections where they're admins. They can also edit collections, their details, and add subcollections.
+- **Data curators** - a role that provides access to the data catalog to manage assets, configure custom classifications, set up glossary terms, and view data estate insights. Data curators can create, read, modify, move, and delete assets. They can also apply annotations to assets.
+- **Data readers** - a role that provides read-only access to data assets, classifications, classification rules, collections and glossary terms.
+- **Data source administrator** - a role that allows a user to manage data sources and scans. If a user is granted only the **Data source admin** role on a given data source, they can run new scans using an existing scan rule. To create new scan rules, the user must also be granted either the **Data reader** or **Data curator** role.
+
+> [!IMPORTANT]
+> For a list of all available roles, and more information about roles, see the [permissions documentation](catalog-permissions.md#roles).
### Add role assignments
purview How To Data Owner Policies Arc Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-data-owner-policies-arc-sql-server.md
+
+ Title: Provision access by data owner for SQL Server on Azure Arc-enabled servers (preview)
+description: Step-by-step guide on how data owners can configure access to Arc-enabled SQL servers through Microsoft Purview access policies.
+++++ Last updated : 05/24/2022++
+# Provision access by data owner for SQL Server on Azure Arc-enabled servers (preview)
++
+[Access policies](concept-data-owner-policies.md) allow you to manage access from Microsoft Purview to data sources that have been registered for *Data Use Management*.
+
+This how-to guide describes how a data owner can delegate authoring policies in Microsoft Purview to enable access to SQL Server on Azure Arc-enabled servers. The following actions are currently enabled: *SQL Performance Monitoring*, *SQL Security Auditing* and *Read*. *Read* is only supported for policies at server level. *Modify* is not supported at this point.
+
+## Prerequisites
+- SQL Server version 2022 CTP 2.0 or later
+- Complete the process to onboard that SQL Server with Azure Arc and enable Azure AD authentication. [Follow this guide to learn how](https://aka.ms/sql-on-arc-AADauth).
+
+**Enforcement of policies is available only in the following regions for Microsoft Purview**
+- East US
+- UK South
+
+## Security considerations
+- The Server admin can turn off the Microsoft Purview policy enforcement.
+- Arc Admin/Server admin permissions empower the Arc admin or Server admin with the ability to change the ARM path of the given server. Given that mappings in Microsoft Purview use ARM paths, this can lead to wrong policy enforcements.
+- SQL Admin (DBA) can gain the power of Server admin and can tamper with the cached policies from Microsoft Purview.
+- The recommended configuration is to create a separate App Registration per SQL server instance. This prevents SQL server2 from reading the policies meant for SQL server1, in case a rogue admin in SQL server2 tampers with the ARM path.
+
+## Configuration
+
+> [!Important]
+> You can assign the data source side permission (i.e., *IAM Owner*) **only** by entering Azure portal through this [special link](https://portal.azure.com/?feature.canmodifystamps=true&Microsoft_Azure_HybridData_Platform=sqlrbacmain#blade/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/sqlServers). Alternatively, you can configure this permission at the parent resource group level so that it gets inherited by the "SQL Server - Azure Arc" data source.
+
+### SQL Server on Azure Arc-enabled server configuration
+This section describes the steps to configure the SQL Server on Azure Arc to use Microsoft Purview.
+
+1. Sign in to Azure portal with a [special link](https://portal.azure.com/?feature.canmodifystamps=true&Microsoft_Azure_HybridData_Platform=sqlrbacmain#blade/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/sqlServers) that contains feature flags to list SQL Servers on Azure Arc
+
+1. Navigate to a SQL Server you want to configure
+
+1. Navigate to **Azure Active Directory** feature on the left pane
+
+1. Verify that Azure Active Directory Authentication is configured and scroll down.
+![Screenshot shows how to configure Microsoft Purview endpoint in Azure AD section.](./media/how-to-data-owner-policies-sql/setup-sql-on-arc-for-purview.png)
+
+1. Set **External Policy Based Authorization** to enabled
+
+1. Enter **Microsoft Purview Endpoint** in the format *https://\<purview-account-name\>.purview.azure.com*. You can see the names of Microsoft Purview accounts in your tenant through [this link](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.Purview%2FAccounts). Optionally, you can confirm the endpoint by navigating to the Microsoft Purview account, then to the Properties section on the left menu and scrolling down until you see "Scan endpoint". The full endpoint path will be the one listed without the "/Scan" at the end.
+
+1. Make a note of the **App registration ID**, as you will need it when you register and enable this data source for *Data use Management* in Microsoft Purview.
+
+1. Select the **Save** button to save the configuration.
+
+### Register data sources in Microsoft Purview
+Register each data source with Microsoft Purview to later define access policies.
+
+1. Sign in to Microsoft Purview Studio.
+
+1. Navigate to the **Data map** feature on the left pane, select **Sources**, then select **Register**. Type "Azure Arc" in the search box and select **SQL Server on Azure Arc**. Then select **Continue**
+![Screenshot shows how to select a source for registration.](./media/how-to-data-owner-policies-sql/select-arc-sql-server-for-registration.png)
+
+1. Enter a **Name** for this registration. It is best practice to make the name of the registration the same as the server name in the next step.
+
+1. Select an **Azure subscription**, **Server name**, and **Server endpoint**.
+
+1. **Select a collection** to put this registration in.
+
+1. Turn the switch **Data Use Management** to **Enabled**. This switch enables the access-policies to be used with the given Arc-enabled SQL server. Note: Data Use Management can affect the security of your data, as it delegates to certain Microsoft Purview roles managing access to the data sources. Secure practices related to Data Use Management are described in this guide: [registering a data resource for Data Use Management](./how-to-enable-data-use-management.md)
+
+1. Enter the **Application ID** from the App Registration related to this Arc-enabled SQL server.
+
+1. Select **Register** or **Apply** at the bottom
+
+Once your data source has the **Data Use Management** toggle *Enabled*, it will look like this picture.
+![Screenshot shows how to register a data source for policy.](./media/how-to-data-owner-policies-sql/register-data-source-for-policy-arc-sql.png)
+
+> [!Note]
+> - If you want to create a policy on a resource group or subscription and have it enforced in Arc-enabled SQL servers, you will need to also register those servers independently for *Data use management* to provide their App ID. See this document on how to create policies at resource group or subscription level: [Enable Microsoft Purview data owner policies on all data sources in a subscription or a resource group](./how-to-data-owner-policies-resource-group.md).
+
+## Create and publish a data owner policy
+
+Execute the steps in the **Create a new policy** and **Publish a policy** sections of the [data-owner policy authoring tutorial](./how-to-data-owner-policy-authoring-generic.md#create-a-new-policy). The result will be a data owner policy similar to one of the examples shown in the images.
+
+**Example #1: SQL Performance Monitor policy**. This policy assigns the Azure AD principal 'Christie Cline' to the *SQL Performance monitoring* role, in the scope of Arc-enabled SQL server *DESKTOP-xxx*. This policy has also been published to that server.
+
+![Screenshot shows a sample data owner policy giving SQL Performance Monitor access to an Azure SQL Database.](./media/how-to-data-owner-policies-sql/data-owner-policy-example-arc-sql-server-performance-monitor.png)
+
+**Example #2: SQL Security Auditor policy**. Similar to example 1, but choose the *SQL Security auditing* action (instead of *SQL Performance monitoring*), when authoring the policy.
+
+**Example #3: Read policy**. This policy assigns the Azure AD principal 'sg-Finance' to the *SQL Data reader* role, in the scope of SQL server *DESKTOP-xxx*. This policy has also been published to that server.
+
+![Screenshot shows a sample data owner policy giving Data Reader access to an Azure SQL Database.](./media/how-to-data-owner-policies-sql/data-owner-policy-example-arc-sql-server-data-reader.png)
+
+> [!Note]
+> - Given that scan is not currently available for this data source, data reader policies can only be created at server level. Use the **Data sources** box instead of the Asset box when authoring the **data resources** part of the policy.
+> - There is a known issue with SQL Server Management Studio that prevents right-clicking on a table and choosing the option "Select Top 1000 rows".
++
+>[!Important]
+> - Publish is a background operation. It can take up to **4 minutes** for the changes to be reflected in this data source.
+> - Changing a policy does not require a new publish operation. The changes will be picked up with the next pull.
+
+### Test the policy
+
+The Azure AD Accounts referenced in the access policies should now be able to connect to any database in the server to which the policies are published.
+
+#### Force policy download
+It is possible to force an immediate download of the latest published policies to the current SQL database by running the following command. The minimal permission required to run it is membership in the ##MS_ServerStateManager## server role.
+
+```sql
+-- Force immediate download of latest published policies
+exec sp_external_policy_refresh reload
+```
+
+#### Analyze downloaded policy state from SQL
+The following DMVs can be used to analyze which policies have been downloaded and are currently assigned to Azure AD accounts. The minimal permission required to run them is VIEW DATABASE SECURITY STATE - or assigned Action Group *SQL Security Auditor*.
+
+```sql
+
+-- Lists generally supported actions
+SELECT * FROM sys.dm_server_external_policy_actions
+
+-- Lists the roles that are part of a policy published to this server
+SELECT * FROM sys.dm_server_external_policy_roles
+
+-- Lists Azure AD principals assigned to a given role on a given resource scope
+SELECT * FROM sys.dm_server_external_policy_role_members
+```
+
+## Additional information
+
+### Policy action mapping
+
+This section contains a reference of how actions in Microsoft Purview data policies map to specific actions in SQL Server on Azure Arc-enabled servers.
+
+| **Microsoft Purview policy action** | **Data source specific actions** |
+|-|--|
+| | |
+| *Read* |Microsoft.Sql/sqlservers/Connect |
+||Microsoft.Sql/sqlservers/databases/Connect |
+||Microsoft.Sql/Sqlservers/Databases/Schemas/Tables/Rows|
+||Microsoft.Sql/Sqlservers/Databases/Schemas/Views/Rows |
+|||
+| *SQL Performance Monitor* |Microsoft.Sql/sqlservers/Connect |
+||Microsoft.Sql/sqlservers/databases/Connect |
+||Microsoft.Sql/sqlservers/SystemViewsAndFunctions/ServerMetadata/rows/select |
+||Microsoft.Sql/sqlservers/databases/SystemViewsAndFunctions/DatabaseMetadata/rows/select |
+||Microsoft.Sql/sqlservers/SystemViewsAndFunctions/ServerState/rows/select |
+||Microsoft.Sql/sqlservers/databases/SystemViewsAndFunctions/DatabaseState/rows/select |
+|||
+| *SQL Security Auditor* |Microsoft.Sql/sqlservers/Connect |
+||Microsoft.Sql/sqlservers/databases/Connect |
+||Microsoft.Sql/sqlservers/SystemViewsAndFunctions/ServerSecurityState/rows/select |
+||Microsoft.Sql/sqlservers/databases/SystemViewsAndFunctions/DatabaseSecurityState/rows/select |
+||Microsoft.Sql/sqlservers/SystemViewsAndFunctions/ServerSecurityMetadata/rows/select |
+||Microsoft.Sql/sqlservers/databases/SystemViewsAndFunctions/DatabaseSecurityMetadata/rows/select |
+|||
+
+## Next steps
+Check blog, demo and related how-to guides
+* [Demo of access policy for Azure Storage](/video/media/8ce7c554-0d48-430f-8f63-edf94946947c/purview-policy-storage-dataowner-scenario_mid.mp4)
+* [Concepts for Microsoft Purview data owner policies](./concept-data-owner-policies.md)
+* Blog: [Private preview: controlling access to Azure SQL at scale with policies in Purview](https://techcommunity.microsoft.com/t5/azure-sql-blog/private-preview-controlling-access-to-azure-sql-at-scale-with/ba-p/2945491)
+* [Enable Microsoft Purview data owner policies on all data sources in a subscription or a resource group](./how-to-data-owner-policies-resource-group.md)
+* [Enable Microsoft Purview data owner policies on an Azure SQL DB](./how-to-data-owner-policies-azure-sql-db.md)
purview How To Data Owner Policies Azure Sql Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-data-owner-policies-azure-sql-db.md
+
+ Title: Provision access by data owner for Azure SQL DB (preview)
+description: Step-by-step guide on how data owners can configure access for Azure SQL DB through Microsoft Purview access policies.
+++++ Last updated : 05/10/2022++
+# Provision access by data owner for Azure SQL DB (preview)
++
+[Access policies](concept-data-owner-policies.md) allow you to manage access from Microsoft Purview to data sources that have been registered for *Data Use Management*.
+
+This how-to guide describes how a data owner can delegate authoring policies in Microsoft Purview to enable access to Azure SQL DB. The following actions are currently enabled: *SQL Performance Monitoring*, *SQL Security Auditing* and *Read*. *Modify* is not supported at this point.
+
+## Prerequisites
+- Create a new Azure SQL DB or use an existing one in one of the currently available regions for this preview feature. You can [follow this guide to create a new Azure SQL DB](/azure/azure-sql/database/single-database-create-quickstart).
+
+**Enforcement of Microsoft Purview policies is available only in the following regions for Azure SQL DB**
+- East US
+- East US2
+- West US
+- West US3
+- West Central US
+- Canada Central
+- West Europe
+- UK South
+- Central India
+- Australia East
+
+## Configuration
+
+### Azure SQL Database configuration
+Each Azure SQL Database server needs a Managed Identity assigned to it.
+You can use the following PowerShell script:
+```powershell
+# Sign in and select the subscription that contains the Azure SQL logical server
+Connect-AzAccount
+
+$context = Get-AzSubscription -SubscriptionId xxxx-xxxx-xxxx-xxxx
+Set-AzContext $context
+
+# Assign a system-assigned managed identity to the server
+Set-AzSqlServer -ResourceGroupName "RESOURCEGROUPNAME" -ServerName "SERVERNAME" -AssignIdentity
+```
+You will also need to enable external policy based authorization on the server.
+
+```powershell
+$server = Get-AzSqlServer -ResourceGroupName "RESOURCEGROUPNAME" -ServerName "SERVERNAME"
+
+#Initiate the call to the REST API to set externalPolicyBasedAuthorization to true
+Invoke-AzRestMethod -Method PUT -Path "$($server.ResourceId)/externalPolicyBasedAuthorizations/MicrosoftPurview?api-version=2021-11-01-preview" -Payload '{"properties":{"externalPolicyBasedAuthorization":true}}'
+
+#Verify that the property has been set
+Invoke-AzRestMethod -Method GET -Path "$($server.ResourceId)/externalPolicyBasedAuthorizations/MicrosoftPurview?api-version=2021-11-01-preview"
+```
+
+### Register the data sources in Microsoft Purview
+The Azure SQL DB resources need to be registered first with Microsoft Purview before you can define access policies. You can follow this guide:
+
+[Register and scan Azure SQL DB](./register-scan-azure-sql-database.md)
+
+After you've registered your resources, you'll need to enable *Data Use Management*. Data Use Management can affect the security of your data, as it delegates the management of access to the data sources to certain Microsoft Purview roles. Secure practices related to Data Use Management are described in this guide:
+
+[How to enable Data Use Management](./how-to-enable-data-use-management.md)
+
+Once your data source has the **Data Use Management** toggle set to *Enabled*, it will look like the following screenshot. This setting enables access policies to be used with the given SQL server and all the databases it contains.
+![Screenshot shows how to register a data source for policy.](./media/how-to-data-owner-policies-sql/register-data-source-for-policy-azure-sql-db.png)
++
+## Create and publish a data owner policy
+
+Execute the steps in the **Create a new policy** and **Publish a policy** sections of the [data-owner policy authoring tutorial](./how-to-data-owner-policy-authoring-generic.md#create-a-new-policy). The result will be a data owner policy similar to one of the examples shown in the images.
+
+**Example #1: SQL Performance Monitor policy**. This policy assigns the Azure AD principal 'Mateo Gomez' to the *SQL Performance monitoring* role, in the scope of SQL server *relecloud-sql-srv2*. This policy has also been published to that server.
+
+![Screenshot shows a sample data owner policy giving SQL Performance Monitor access to an Azure SQL Database.](./media/how-to-data-owner-policies-sql/data-owner-policy-example-azure-sql-db-performance-monitor.png)
+
+**Example #2: SQL Security Auditor policy**. Similar to example 1, but choose the *SQL Security auditing* action (instead of *SQL Performance monitoring*) when authoring the policy.
+
+**Example #3: Read policy**. This policy assigns the Azure AD principal 'Robert Murphy' to the *SQL Data reader* role, in the scope of SQL server *relecloud-sql-srv2*. This policy has also been published to that server.
+
+![Screenshot shows a sample data owner policy giving Data Reader access to an Azure SQL Database.](./media/how-to-data-owner-policies-sql/data-owner-policy-example-azure-sql-db-data-reader.png)
++
+>[!Important]
+> - Publish is a background operation. It can take up to **4 minutes** for the changes to be reflected in this data source.
+> - Changing a policy does not require a new publish operation. The changes will be picked up with the next pull.
+
+### Test the policy
+
+The Azure AD accounts referenced in the access policies should now be able to connect to any database in the server to which the policies are published.
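+
+As a quick, optional check, the sketch below connects with an Azure AD access token for one of those accounts and returns the signed-in principal. It assumes the Az and SqlServer PowerShell modules are available; the server and database names are placeholders.
+
+```powershell
+# Sign in as the Azure AD account referenced in the published policy, then acquire a token for Azure SQL.
+$token = (Get-AzAccessToken -ResourceUrl "https://database.windows.net/").Token
+
+# Placeholder server/database names; any database on the published server should accept the connection.
+Invoke-Sqlcmd -ServerInstance "relecloud-sql-srv2.database.windows.net" -Database "your-database" -AccessToken $token -Query "SELECT SUSER_SNAME() AS signed_in_as;"
+```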
+
+#### Force policy download
+You can force an immediate download of the latest published policies to the current SQL database by running the following command. The minimum permission required to run it is membership in the ##MS_ServerStateManager## server role.
+
+```sql
+-- Force immediate download of latest published policies
+exec sp_external_policy_refresh reload
+```
+
+#### Analyze downloaded policy state from SQL
+The following DMVs can be used to analyze which policies have been downloaded and are currently assigned to Azure AD accounts. The minimum permission required to run them is VIEW DATABASE SECURITY STATE, or the assigned action group *SQL Security Auditor*.
+
+```sql
+
+-- Lists generally supported actions
+SELECT * FROM sys.dm_server_external_policy_actions
+
+-- Lists the roles that are part of a policy published to this server
+SELECT * FROM sys.dm_server_external_policy_roles
+
+-- Lists Azure AD principals assigned to a given role on a given resource scope
+SELECT * FROM sys.dm_server_external_policy_role_members
+```
+
+## Additional information
+
+### Policy action mapping
+
+This section contains a reference of how actions in Microsoft Purview data policies map to specific actions in Azure SQL DB.
+
+| **Microsoft Purview policy action** | **Data source specific actions** |
+|-|--|
+| | |
+| *Read* |Microsoft.Sql/sqlservers/Connect |
+||Microsoft.Sql/sqlservers/databases/Connect |
+||Microsoft.Sql/Sqlservers/Databases/Schemas/Tables/Rows|
+||Microsoft.Sql/Sqlservers/Databases/Schemas/Views/Rows |
+|||
+| *SQL Performance Monitor* |Microsoft.Sql/sqlservers/Connect |
+||Microsoft.Sql/sqlservers/databases/Connect |
+||Microsoft.Sql/sqlservers/SystemViewsAndFunctions/ServerMetadata/rows/select |
+||Microsoft.Sql/sqlservers/databases/SystemViewsAndFunctions/DatabaseMetadata/rows/select |
+||Microsoft.Sql/sqlservers/SystemViewsAndFunctions/ServerState/rows/select |
+||Microsoft.Sql/sqlservers/databases/SystemViewsAndFunctions/DatabaseState/rows/select |
+|||
+| *SQL Security Auditor* |Microsoft.Sql/sqlservers/Connect |
+||Microsoft.Sql/sqlservers/databases/Connect |
+||Microsoft.Sql/sqlservers/SystemViewsAndFunctions/ServerSecurityState/rows/select |
+||Microsoft.Sql/sqlservers/databases/SystemViewsAndFunctions/DatabaseSecurityState/rows/select |
+||Microsoft.Sql/sqlservers/SystemViewsAndFunctions/ServerSecurityMetadata/rows/select |
+||Microsoft.Sql/sqlservers/databases/SystemViewsAndFunctions/DatabaseSecurityMetadata/rows/select |
+|||
+
+## Next steps
+Check the blog, demo, and related how-to guides:
+* [Demo of access policy for Azure Storage](/video/media/8ce7c554-0d48-430f-8f63-edf94946947c/purview-policy-storage-dataowner-scenario_mid.mp4)
+* [Concepts for Microsoft Purview data owner policies](./concept-data-owner-policies.md)
+* Blog: [Private preview: controlling access to Azure SQL at scale with policies in Purview](https://techcommunity.microsoft.com/t5/azure-sql-blog/private-preview-controlling-access-to-azure-sql-at-scale-with/ba-p/2945491)
+* [Enable Microsoft Purview data owner policies on all data sources in a subscription or a resource group](./how-to-data-owner-policies-resource-group.md)
+* [Enable Microsoft Purview data owner policies on an Arc-enabled SQL Server](./how-to-data-owner-policies-arc-sql-server.md)
purview How To Data Owner Policies Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-data-owner-policies-resource-group.md
Previously updated : 4/15/2022- Last updated : 05/10/2022+ # Resource group and subscription access provisioning by data owner (preview) [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)]
-[Policies](concept-data-owner-policies.md) allow you to enable access to data sources that have been registered for *Data Use Management* in Microsoft Purview.
+[Access policies](concept-data-owner-policies.md) allow you to manage access from Microsoft Purview to data sources that have been registered for *Data Use Management*.
-You can also [register an entire resource group or subscription](register-scan-azure-multiple-sources.md), and create a single policy that will manage access to **all** data sources in that resource group or subscription. That single policy will cover all existing data sources and any data sources that are created afterwards.
-This article describes how this is done.
-
-> [!IMPORTANT]
-> Currently, these are the available data sources for access policies in Public Preview:
-> - Blob storage
-> - Azure Data Lake Storage (ADLS) Gen2
+You can also [register an entire resource group or subscription](register-scan-azure-multiple-sources.md), and create a single policy that will manage access to **all** data sources in that resource group or subscription. That single policy will cover all existing data sources and any data sources that are created afterwards. This article describes how this is done.
## Prerequisites [!INCLUDE [Access policies generic pre-requisites](./includes/access-policies-prerequisites-generic.md)]
+**Only these data sources are enabled for access policies on a resource group or subscription**. Follow the **Prerequisites** section that is specific to the data source(s) in these guides:
+* [Data owner policies on an Azure Storage account](./how-to-data-owner-policies-storage.md#prerequisites)
+* [Data owner policies on an Azure SQL Database](./how-to-data-owner-policies-azure-sql-db.md#prerequisites)*
+* [Data owner policies on an Arc-enabled SQL Server](./how-to-data-owner-policies-arc-sql-server.md#prerequisites)*
+
+(*) Only the *SQL Performance monitoring* and *Security auditing* actions are fully supported for SQL-type data sources. The *Read* action needs a workaround described later in this guide. The *Modify* action is not currently supported for SQL-type data sources.
## Configuration [!INCLUDE [Access policies generic configuration](./includes/access-policies-configuration-generic.md)]
This article describes how this is done.
### Register the subscription or resource group for Data Use Management The subscription or resource group needs to be registered with Microsoft Purview to later define access policies.
-To register your resource, follow the **Prerequisites** and **Register** sections of this guide:
+To register your subscription or resource group, follow the **Prerequisites** and **Register** sections of this guide:
- [Register multiple sources in Microsoft Purview](register-scan-azure-multiple-sources.md#prerequisites)
To ensure you securely enable Data Use Management, and follow best practices, fo
- [How to enable Data Use Management](./how-to-enable-data-use-management.md)
-In the end, your resource will have the **Data Use Management** toggle to **Enabled**, as shown in the picture:
+In the end, your resource will have the **Data Use Management** toggle set to **Enabled**, as shown in the following screenshot:
+![Screenshot shows how to register a resource group or subscription for policy by toggling the enable tab in the resource editor.](./media/how-to-data-owner-policies-resource-group/register-resource-group-for-policy.png)
+
+>[!Important]
+> - If you want to create a policy on a resource group or subscription and have it enforced in Arc-enabled SQL servers, you'll also need to register those servers independently for *Data Use Management* to provide their App ID.
## Create and publish a data owner policy
-Execute the steps in the [data-owner policy authoring tutorial](how-to-data-owner-policy-authoring-generic.md) to create and publish a policy similar to the example shown in the image: a policy that provides security group *sg-Finance* *modify* access to resource group *finance-rg*:
+Execute the steps in the **Create a new policy** and **Publish a policy** sections of the [data-owner policy authoring tutorial](./how-to-data-owner-policy-authoring-generic.md#create-a-new-policy). The result will be a data owner policy similar to the example shown in the image: a policy that provides security group *sg-Finance* *modify* access to resource group *finance-rg*. Use the **Data source** box in the policy user experience.
+![Screenshot shows a sample data owner policy giving access to a resource group.](./media/how-to-data-owner-policies-resource-group/data-owner-policy-example-resource-group.png)
>[!Important]
-> - Publish is a background operation. It can take up to **2 hours** for the changes to be reflected in Storage account(s).
+> - Publish is a background operation. For example, Azure Storage accounts can take up to **2 hours** to reflect the changes.
+> - Changing a policy does not require a new publish operation. The changes will be picked up with the next pull.
+
+>[!Warning]
+> **Known Issues**
+> - No implicit connect permission is provided to SQL-type data sources (for example, Azure SQL DB, SQL Server on Azure Arc-enabled servers) when creating a policy with the *Read* action on a resource group or subscription. To support this scenario, provide the connect permission to the Azure AD principals locally, that is, directly in the SQL-type data sources (see the sketch after this note).
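+
+The following is a minimal sketch of that local workaround for an Azure SQL database, assuming you can connect with rights sufficient to create users; the principal, server, and database names are placeholders. For SQL Server on Azure Arc-enabled servers, the equivalent T-SQL would be run directly on that instance.
+
+```powershell
+# Hypothetical workaround: create the Azure AD user locally and grant CONNECT,
+# so a resource-group or subscription *Read* policy can take effect for this SQL source.
+$token = (Get-AzAccessToken -ResourceUrl "https://database.windows.net/").Token
+
+Invoke-Sqlcmd -ServerInstance "your-sql-server.database.windows.net" -Database "your-database" -AccessToken $token -Query "CREATE USER [reader@contoso.com] FROM EXTERNAL PROVIDER; GRANT CONNECT TO [reader@contoso.com];"
+```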
## Additional information-- Creating a policy at subscription or resource group level will enable the Subjects to access Azure Storage system containers, for example, *$logs*. If this is undesired, first scan the data source and then create finer-grained policies for each (that is, at container or subcontainer level).
+- Creating a policy at subscription or resource group level will enable the Subjects to access Azure Storage system containers, for example, *$logs*. If this is undesired, first scan the data source and then create finer-grained policies for each (that is, at container or sub-container level).
### Limits The limit for Microsoft Purview policies that can be enforced by Storage accounts is 100 MB per subscription, which roughly equates to 5000 policies.
The limit for Microsoft Purview policies that can be enforced by Storage account
Check blog, demo and related tutorials: * [Concepts for Microsoft Purview data owner policies](./concept-data-owner-policies.md)
-* [Data owner policies on an Azure Storage account](./how-to-data-owner-policies-storage.md)
* [Blog: resource group-level governance can significantly reduce effort](https://techcommunity.microsoft.com/t5/azure-purview-blog/data-policy-features-resource-group-level-governance-can/ba-p/3096314) * [Video: Demo of data owner access policies for Azure Storage](https://www.youtube.com/watch?v=CFE8ltT19Ss)
purview How To Data Owner Policies Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-data-owner-policies-storage.md
+ Last updated 05/12/2022- # Access provisioning by data owner to Azure Storage datasets (preview) [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)]
-[Access policies](concept-data-owner-policies.md) allow you to enable access to data sources that have been registered for *Data Use Management* in Microsoft Purview.
+[Access policies](concept-data-owner-policies.md) allow you to manage access from Microsoft Purview to data sources that have been registered for *Data Use Management*.
+ This article describes how a data owner can delegate, in Microsoft Purview, the management of access to Azure Storage datasets. Currently, these two Azure Storage sources are supported: - Blob storage
Once your data source has the **Data Use Management** toggle **Enabled**, it wi
:::image type="content" source="./media/how-to-data-owner-policies-storage/register-data-source-for-policy-storage.png" alt-text="Screenshot that shows how to register a data source for policy by toggling the enable tab in the resource editor."::: ## Create and publish a data owner policy
-Execute the steps in the [data-owner policy authoring tutorial](how-to-data-owner-policy-authoring-generic.md) to create and publish an access policy similar to the example shown in the image: a policy that provides group *Contoso Team* *read* access to Storage account *marketinglake1*:
+Execute the steps in the **Create a new policy** and **Publish a policy** sections of the [data-owner policy authoring tutorial](./how-to-data-owner-policy-authoring-generic.md#create-a-new-policy). The result will be a data owner policy similar to the example shown in the image: a policy that provides group *Contoso Team* *read* access to Storage account *marketinglake1*:
:::image type="content" source="./media/how-to-data-owner-policies-storage/data-owner-policy-example-storage.png" alt-text="Screenshot that shows a sample data owner policy giving access to an Azure Storage account."::: >[!Important]
-> - Publish is a background operation. It can take up to **2 hours** for the changes to be reflected in Storage account(s).
+> - Publish is a background operation. Azure Storage accounts can take up to **2 hours** to reflect the changes.
## Additional information - Policy statements set below container level on a Storage account are supported. If no access has been provided at Storage account level or container level, then the App that requests the data must execute a direct access by providing a fully qualified name to the data object. If the App attempts to crawl down the hierarchy starting from the Storage account or Container (like Storage Explorer does), and there's no access at that level, the request will fail. The following documents show examples of how to perform a direct access. See also the blogs in the *Next steps* section of this how-to-guide.
This section contains a reference of how actions in Microsoft Purview data polic
| **Microsoft Purview policy action** | **Data source specific actions** | ||--| |||
-| *Read* |<sub>Microsoft.Storage/storageAccounts/blobServices/containers/read |
-| |<sub>Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read |
+| *Read* |Microsoft.Storage/storageAccounts/blobServices/containers/read |
+| |Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read |
|||
-| *Modify* |<sub>Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read |
-| |<sub>Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write |
-| |<sub>Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action |
-| |<sub>Microsoft.Storage/storageAccounts/blobServices/containers/blobs/move/action |
-| |<sub>Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete |
-| |<sub>Microsoft.Storage/storageAccounts/blobServices/containers/read |
-| |<sub>Microsoft.Storage/storageAccounts/blobServices/containers/write |
-| |<sub>Microsoft.Storage/storageAccounts/blobServices/containers/delete |
+| *Modify* |Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read |
+| |Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write |
+| |Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action |
+| |Microsoft.Storage/storageAccounts/blobServices/containers/blobs/move/action |
+| |Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete |
+| |Microsoft.Storage/storageAccounts/blobServices/containers/read |
+| |Microsoft.Storage/storageAccounts/blobServices/containers/write |
+| |Microsoft.Storage/storageAccounts/blobServices/containers/delete |
|||
purview How To Data Owner Policy Authoring Generic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-data-owner-policy-authoring-generic.md
+ Last updated 4/18/2022
This section describes the steps to create a new policy in Microsoft Purview.
1. Select the **Data Resources** button to bring up the window to enter Data resource information, which will open to the right.
-1. Under the **Data Resources** Panel do one of two things depending on the granularity of the policy:
+1. Under the **Data Resources** panel, do **one of two things**, depending on the granularity of the policy:
- To create a broad policy statement that covers an entire data source, resource group, or subscription that was previously registered, use the **Data sources** box and select its **Type**. - To create a fine-grained policy, use the **Assets** box instead. Enter the **Data Source Type** and the **Name** of a previously registered and scanned data source. See example in the image.
purview How To Lineage Powerbi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-lineage-powerbi.md
Last updated 03/30/2021
This article elaborates on the data lineage aspects of Power BI source in Microsoft Purview. The prerequisite to see data lineage in Microsoft Purview for Power BI is to [scan your Power BI.](../purview/register-scan-power-bi-tenant.md)
+>[!IMPORTANT]
+> Currently, supported sources for Power BI Lineage are:
+> * Azure SQL
+> * Azure Storage
+> * Azure Data Lake Store Gen1
+> * Azure Data Lake Store Gen2
+ ## Common scenarios 1. After the Power BI source is scanned, data consumers can perform root cause analysis of a report or dashboard from Microsoft Purview. For any data discrepancy in a report, users can easily identify the upstream datasets and contact their owners if necessary.
purview Insights Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/insights-permissions.md
+
+ Title: Permissions for Data Estate Insights in Microsoft Purview
+description: This article describes what permissions are needed to access and manage Data Estate Insights in Microsoft Purview.
++++++ Last updated : 05/16/2022++
+# Access control in Data Estate Insights within Microsoft Purview
+
+Like all other permissions in Microsoft Purview, Data Estate Insights access is given through collections. This article describes what permissions are needed to access Data Estate Insights in Microsoft Purview.
++
+## Insights reader role
+
+The insights reader role gives users read permission to the Data Estate Insights application in Microsoft Purview. However, a user with this role will only have access to information for collections that they also have at least data reader access to.
+
+Because the Data Estate Insights application gives a bird's-eye view of your data estate and catalog usage from a governance and risk perspective, it's intended for users who need to manage and report on this high-level information, like a Chief Data Officer. You may not want, or need, all your data readers to have access to the Data Estate Insights dashboards.
++
+## Role assignment
+
+* The **Insights Reader** role can be assigned to any Data Map user by the **Data Curator of the root collection**. Users assigned the Data Curator role on subcollections don't have the privilege of assigning the Insights Reader role.
+
+ :::image type="content" source="media/insights-permissions/insights-reader.png" alt-text="Screenshot of root collection, showing the role assignments tab, with the add user button selected next to Insights reader.":::
+
+* A **Data Curator** of any collection also has read permission to the Data Estate Insights application. Their scope of insights is limited to the metadata assigned to their collections. In other words, a Data Curator at a subcollection will only view KPIs and aggregations for the collections they have access to. A Data Curator can still view and edit assets from the Data Estate Insights app, without any extra permissions.
+
+* A **Data Reader** at any collection node can see the Data Estate Insights app on the left navigation bar. However, when they hover over the icon, they'll receive a message saying they need to contact the Data Curator at the root collection for access. Once a Data Reader has been assigned the Insights Reader role, they can view KPIs and aggregations based on the collections they have Data Reader permission on.
+A Data Reader can't edit assets or select ***"Export to CSV"*** from the app.
+
+> [!NOTE]
+> All roles other than Data Curator will need an explicit **Insights Reader** role assignment to be able to open the Data Estate Insights app.
+
+## Next steps
+
+Learn how to use Data Estate Insights with the guides below:
+
+* [Learn how to use Asset insights](asset-insights.md)
+* [Learn how to use Data Stewardship](data-stewardship.md)
+* [Learn how to use Classification insights](classification-insights.md)
+* [Learn how to use Glossary insights](glossary-insights.md)
+* [Learn how to use Label insights](sensitivity-insights.md)
purview Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/overview.md
description: This article provides an overview of Microsoft Purview, including i
+ Previously updated : 12/06/2021 Last updated : 05/16/2022 # What is Microsoft Purview?
With the Microsoft Purview Data Catalog, business and technical users can quickl
For more information, see our [introduction to search using Data Catalog](how-to-search-catalog.md). ## Data Estate Insights
-With the Microsoft Purview Data Estate Insights, data officers and security officers can get a bird's eye view and at a glance understand what data is actively scanned, where sensitive data is and how it moves.
+With Microsoft Purview Data Estate Insights, chief data officers and other governance stakeholders can get a bird's-eye view of their data estate and gain actionable insights into governance gaps that can be resolved from the experience itself.
For more information, see our [introduction to Data Estate Insights](concept-insights.md).
purview Reference Azure Purview Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/reference-azure-purview-glossary.md
description: A glossary defining the terminology used throughout Microsoft Purvi
+ Last updated 04/14/2022
An entry in the Business glossary that defines a concept specific to an organiza
A scan that detects and processes assets that have been created, modified, or deleted since the previous successful scan. To run an incremental scan, at least one full scan must be completed on the source. ## Ingested asset An asset that has been scanned, classified (when applicable), and added to the Microsoft Purview data map. Ingested assets are discoverable and consumable within the data catalog through automated scanning or external connections, such as Azure Data Factory and Azure Synapse.
+## Insights reader
+A role that provides read-only access to insights reports for collections where the insights reader also has the **Data reader** role.
## Data Estate Insights An area within Microsoft Purview where you can view reports that summarize information about your data. ## Integration runtime
purview Register Scan Adls Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-adls-gen2.md
This article outlines the process to register an Azure Data Lake Storage Gen2 da
|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**| ||||||||
-| [Yes](#register) | [Yes](#scan)|[Yes](#scan) | [Yes](#scan)|[Yes](#scan)| [Yes](#access-policy) | Limited** |
+| [Yes](#register) | [Yes](#scan)|[Yes](#scan) | [Yes](#scan)|[Yes](#scan)| [Yes (Preview)](#access-policy) | Limited** |
\** Lineage is supported if dataset is used as a source/sink in [Data Factory Copy activity](how-to-link-azure-data-factory.md)
This article outlines the process to register an Azure Data Lake Storage Gen2 da
* An active [Microsoft Purview account](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+
+* You need to have at least [Reader permission on the ADLS Gen 2 account](../storage/blobs/data-lake-storage-access-control-model.md#role-based-access-control-azure-rbac) to be able to register it.
## Register
This section will enable you to register the ADLS Gen2 data source and set up an
### Steps to register
-It is important to register the data source in Microsoft Purview prior to setting up a scan for the data source.
+It's important to register the data source in Microsoft Purview prior to setting up a scan.
1. Go to the [Azure portal](https://portal.azure.com), and navigate to the **Microsoft Purview accounts** page and select your _Purview account_
It is important to register the data source in Microsoft Purview prior to settin
## Scan
+> [!TIP]
+> To troubleshoot any issues with scanning:
+> 1. Confirm you have followed all [**prerequisites for scanning**](#prerequisites-for-scan).
+> 1. Review our [**scan troubleshooting documentation**](troubleshoot-connections.md).
+ ### Prerequisites for scan In order to have access to scan the data source, an authentication method in the ADLS Gen2 Storage account needs to be configured.
The following options are supported:
### Authentication for a scan
+# [System or user assigned managed identity](#tab/MI)
+ #### Using a system or user assigned managed identity for scanning
-It is important to give your Microsoft Purview account or user-assigned managed identity (UAMI) the permission to scan the ADLS Gen2 data source. You can add your Microsoft Purview account's system-assigned managed identity (which has the same name as your Microsoft Purview account) or UAMI at the Subscription, Resource Group, or Resource level, depending on what level scan permissions are needed.
+It's important to give your Microsoft Purview account or user-assigned managed identity (UAMI) the permission to scan the ADLS Gen2 data source. You can add your Microsoft Purview account's system-assigned managed identity (which has the same name as your Microsoft Purview account) or UAMI at the Subscription, Resource Group, or Resource level, depending on what level scan permissions are needed.
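+
+If you'd rather script the role assignment than use the portal, here's a minimal Az PowerShell sketch. It assumes the **Storage Blob Data Reader** role commonly used for scanning and uses placeholder identifiers; verify the role and scope level against your own requirements.
+
+```powershell
+# Hypothetical sketch: grant the Purview managed identity read access to the storage account.
+# Replace the object ID and scope with your own values.
+New-AzRoleAssignment -ObjectId "<purview-managed-identity-object-id>" -RoleDefinitionName "Storage Blob Data Reader" -Scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
+```
+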
> [!Note] > You need to be an owner of the subscription to be able to add a managed identity on an Azure resource.
It is important to give your Microsoft Purview account or user-assigned managed
:::image type="content" source="media/register-scan-adls-gen2/register-adls-gen2-permission-microsoft-services.png" alt-text="Screenshot that shows the exceptions to allow trusted Microsoft services to access the storage account":::
+# [Account Key](#tab/AK)
+ #### Using Account Key for scanning When authentication method selected is **Account Key**, you need to get your access key and store in the key vault:
When authentication method selected is **Account Key**, you need to get your acc
:::image type="content" source="media/register-scan-adls-gen2/register-adls-gen2-secret.png" alt-text="Screenshot that shows the key vault option to create a secret":::
-1. If your key vault is not connected to Microsoft Purview yet, you will need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-microsoft-purview-account)
+1. If your key vault isn't connected to Microsoft Purview yet, you'll need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-microsoft-purview-account)
1. Finally, [create a new credential](manage-credentials.md#create-a-new-credential) using the key to set up your scan
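+
+As an optional alternative to the portal steps above, a minimal Az PowerShell sketch for copying the account key into Key Vault might look like the following; the storage account, resource group, vault, and secret names are placeholders.
+
+```powershell
+# Hypothetical sketch: read the first storage account key and store it as a Key Vault secret.
+$key = (Get-AzStorageAccountKey -ResourceGroupName "<resource-group>" -Name "<storage-account>")[0].Value
+Set-AzKeyVaultSecret -VaultName "<key-vault-name>" -Name "adls-account-key" -SecretValue (ConvertTo-SecureString $key -AsPlainText -Force)
+```
+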
+# [Service Principal](#tab/SP)
+ #### Using Service Principal for scanning ##### Creating a new service principal
-If you need to [Create a new service principal](./create-service-principal-azure.md), it is required to register an application in your Azure AD tenant and provide access to Service Principal in your data sources. Your Azure AD Global Administrator or other roles such as Application Administrator can perform this operation.
+If you need to [Create a new service principal](./create-service-principal-azure.md), it's required to register an application in your Azure AD tenant and provide access to Service Principal in your data sources. Your Azure AD Global Administrator or other roles such as Application Administrator can perform this operation.
##### Getting the Service Principal's Application ID
If you need to [Create a new service principal](./create-service-principal-azure
##### Granting the Service Principal access to your ADLS Gen2 account
-It is important to give your service principal the permission to scan the ADLS Gen2 data source. You can add access for the service principal at the Subscription, Resource Group, or Resource level, depending on what level scan permissions are needed.
+It's important to give your service principal the permission to scan the ADLS Gen2 data source. You can add access for the service principal at the Subscription, Resource Group, or Resource level, depending on what level scan permissions are needed.
> [!Note] > You need to be an owner of the subscription to be able to add a service principal on an Azure resource.
It is important to give your service principal the permission to scan the ADLS G
:::image type="content" source="media/register-scan-adls-gen2/register-adls-gen2-sp-permission.png" alt-text="Screenshot that shows the details to provide storage account permissions to the service principal"::: ++ ### Create the scan 1. Open your **Microsoft Purview account** and select the **Open Microsoft Purview governance portal**
It is important to give your service principal the permission to scan the ADLS G
:::image type="content" source="media/register-scan-adls-gen2/register-adls-gen2-new-scan.png" alt-text="Screenshot that shows the screen to create a new scan":::
+# [System or user assigned managed identity](#tab/MI)
+ #### If using a system or user assigned managed identity 1. Provide a **Name** for the scan, select the system-assigned or user-assigned managed identity under **Credential**, choose the appropriate collection for the scan, and select **Test connection**. On a successful connection, select **Continue**. :::image type="content" source="media/register-scan-adls-gen2/register-adls-gen2-managed-identity.png" alt-text="Screenshot that shows the managed identity option to run the scan":::
+# [Account Key](#tab/AK)
+ #### If using Account Key 1. Provide a **Name** for the scan, choose the appropriate collection for the scan, and select **Authentication method** as _Account Key_ :::image type="content" source="media/register-scan-adls-gen2/register-adls-gen2-acct-key.png" alt-text="Screenshot that shows the Account Key option for scanning":::
+# [Service Principal](#tab/SP)
+ #### If using Service Principal 1. Provide a **Name** for the scan, choose the appropriate collection for the scan, and select the **+ New** under **Credential**
It is important to give your service principal the permission to scan the ADLS G
1. Select **Test connection**. On a successful connection, select **Continue** ++ ### Scope and run the scan 1. You can scope your scan to specific folders and subfolders by choosing the appropriate items in the list.
It is important to give your service principal the permission to scan the ADLS G
## Access policy
-Access policies allow data owners to manage access to datasets from Microsoft Purview. Owners can monitor and manage data use from within the Microsoft Purview governance portal, without directly modifying the storage account where the data is housed.
--
-To create an access policy for Azure Data Lake Storage Gen 2, follow the guidelines below.
--
-### Enable Data Use Management
-
-Data Use Management is an option on your Microsoft Purview sources that will allow you to manage access for that source from within Microsoft Purview.
-To enable Data Use Management, follow [the Data Use Management guide](how-to-enable-data-use-management.md#enable-data-use-management).
-
-### Create an access policy
-
-Now that you've prepared your storage account and environment for access policies, you can follow one of these configuration guides to create your policies:
-
-* [Single storage account](./how-to-data-owner-policies-storage.md) - This guide will allow you to enable access policies on a single storage account in your subscription.
+To create an access policy for Azure Data Lake Storage Gen 2, follow these guides:
+* [Single storage account](./how-to-data-owner-policies-storage.md) - This guide will allow you to enable access policies on a single Azure Storage account in your subscription.
* [All sources in a subscription or resource group](./how-to-data-owner-policies-resource-group.md) - This guide will allow you to enable access policies on all enabled and available sources in a resource group, or across an Azure subscription.
-Or you can follow the [generic guide for creating data access policies](how-to-data-owner-policy-authoring-generic.md).
- ## Next steps
-Now that you have registered your source, follow the below guides to learn more about Microsoft Purview and your data.
+Now that you've registered your source, follow the guides below to learn more about Microsoft Purview and your data.
- [Data Estate Insights in Microsoft Purview](concept-insights.md) - [Lineage in Microsoft Purview](catalog-lineage-user-guide.md)
purview Register Scan Azure Blob Storage Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-blob-storage-source.md
This article outlines the process to register an Azure Blob Storage account in M
|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**| ||||||||
-| [Yes](#register) | [Yes](#scan)|[Yes](#scan) | [Yes](#scan)|[Yes](#scan)| [Yes](#access-policy) | Limited** |
+| [Yes](#register) | [Yes](#scan)|[Yes](#scan) | [Yes](#scan)|[Yes](#scan)| [Yes (Preview)](#access-policy) | Limited** |
\** Lineage is supported if dataset is used as a source/sink in [Data Factory Copy activity](how-to-link-azure-data-factory.md)
Scans can be managed or run again on completion
## Access policy
-Access policies allow data owners to manage access to datasets from Microsoft Purview. Owners can monitor and manage data use from within the Microsoft Purview governance portal, without directly modifying the storage account where the data is housed.
--
-To create an access policy for an Azure Storage account, follow the guidelines below.
--
-### Enable Data Use Management
-
-Data Use Management is an option on your Microsoft Purview sources that will allow you to manage access for that source from within Microsoft Purview.
-To enable Data Use Management, follow [the Data Use Management guide](how-to-enable-data-use-management.md#enable-data-use-management).
-
-### Create an access policy
-
-Now that you've prepared your storage account and environment for access policies, you can follow one of these configuration guides to create your policies:
-
-* [Single storage account](./how-to-data-owner-policies-storage.md) - This guide will allow you to enable access policies on a single storage account in your subscription.
+To create an access policy for Azure Blob Storage, follow these guides:
+* [Single storage account](./how-to-data-owner-policies-storage.md) - This guide will allow you to enable access policies on a single Azure Storage account in your subscription.
* [All sources in a subscription or resource group](./how-to-data-owner-policies-resource-group.md) - This guide will allow you to enable access policies on all enabled and available sources in a resource group, or across an Azure subscription.
-Or you can follow the [generic guide for creating data access policies](how-to-data-owner-policy-authoring-generic.md).
## Next steps
purview Register Scan Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-sql-database.md
This article outlines the process to register an Azure SQL data source in Micros
|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**| ||||||||
-| [Yes](#register) | [Yes](#scan)|[Yes](#scan) | [Yes](#scan)|[Yes](#scan)| No | [Yes](#lineagepreview)(Preview)** |
+| [Yes](#register) | [Yes](#scan)|[Yes](#scan) | [Yes](#scan)|[Yes](#scan)| [Yes (Preview)](#access-policy) | [Yes](#lineagepreview)(Preview)** |
\** Lineage is also supported if Azure SQL tables/views used as source/sink in [Data Factory Copy and Data Flow activities](how-to-link-azure-data-factory.md)
It's important to register the data source in Microsoft Purview before setting u
1. Select the **Azure SQL Database** data source and select **Continue**
- :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-select-ds.png" alt-text="Screenshot that allows selection of the data source.":::
- 1. Provide a suitable **Name** for the data source, select the relevant **Azure subscription**, **Server name** for the SQL server and the **collection** and select on **Apply** :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-ds-details.png" alt-text="Screenshot that shows the details to be entered in order to register the data source.":::
It's important to register the data source in Microsoft Purview before setting u
## Scan
+> [!TIP]
+> To troubleshoot any issues with scanning:
+> 1. Confirm you have followed all [**prerequisites**](#prerequisites).
+> 1. Check network by confirming [firewall](#firewall-settings), [Azure connections](#allow-azure-connections), or [integration runtime](#self-hosted-integration-runtime) settings.
+> 1. Confirm [authentication](#authentication-for-a-scan) is properly set up.
+> 1. Review our [**scan troubleshooting documentation**](troubleshoot-connections.md).
+ ### Firewall settings If your database server has a firewall enabled, you'll need to update the firewall to allow access in one of two ways:
The following options are supported:
* **User-assigned managed identity** (preview) - Similar to a SAMI, a user-assigned managed identity (UAMI) is a credential resource that allows Microsoft Purview to authenticate against Azure Active Directory. The **user-assigned** identity is managed by users in Azure, rather than by Azure itself, which gives you more control over security. The UAMI can't currently be used with a self-hosted integration runtime for Azure SQL. For more information, see our [guide for user-assigned managed identities.](manage-credentials.md#create-a-user-assigned-managed-identity)
-* **Service Principal**- A service principal is an application that can be assigned permissions like any other group or user, without being associated directly with a person. Their authentication has an expiration date, and so can be useful for temporary projects. For more information, see the [service principal documenatation](/azure/active-directory/develop/app-objects-and-service-principals).
+* **Service Principal**- A service principal is an application that can be assigned permissions like any other group or user, without being associated directly with a person. Their authentication has an expiration date, and so can be useful for temporary projects. For more information, see the [service principal documentation](/azure/active-directory/develop/app-objects-and-service-principals).
-* **SQL Authentication** - connect to the SQL database with a username and password. For more information about SQL Authentication, you can [follow the SQL authentication documentation](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-sql-server-authentication).If you need to create a login, follow this [guide to query an Azure SQL database](/azure/azure-sql/database/connect-query-portal), and use [this guide to create a login using T-SQL.](/sql/t-sql/statements/create-login-transact-sql)
+* **SQL Authentication** - connect to the SQL database with a username and password. For more information about SQL Authentication, you can [follow the SQL authentication documentation](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-sql-server-authentication). If you need to create a login, follow this [guide to query an Azure SQL database](/azure/azure-sql/database/connect-query-portal), and use [this guide to create a login using T-SQL.](/sql/t-sql/statements/create-login-transact-sql)
> [!NOTE] > Be sure to select the Azure SQL Database option on the page.
Select your chosen method of authentication from the tabs below for steps to aut
1. Navigate to your key vault in the Azure portal.
- :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-key-vault.png" alt-text="Screenshot that shows the key vault.":::
- 1. Select **Settings > Secrets** and select **+ Generate/Import** :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-secret.png" alt-text="Screenshot that shows the key vault option to generate a secret."::: 1. Enter the **Name** and **Value** as the *password* from your Azure SQL Database.
- :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-secret-sql.png" alt-text="Screenshot that shows the key vault option to enter the sql secret values.":::
- 1. Select **Create** to complete 1. If your key vault isn't connected to Microsoft Purview yet, you'll need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-microsoft-purview-account)
The service principal needs permission to get metadata for the database, schemas
1. Navigate to your key vault in the Azure portal
- :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-key-vault.png" alt-text="Screenshot that shows the key vault to add a secret for Service Principal.":::
- 1. Select **Settings > Secrets** and select **+ Generate/Import** :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-secret.png" alt-text="Screenshot that shows the key vault option to generate a secret for Service Principal."::: 1. Give the secret a **Name** of your choice.
- :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-create-secret.png" alt-text="Screenshot that shows the key vault option to enter the secret values.":::
- 1. The secret's **Value** will be the Service Principal's **Secret Value**. If you've already created a secret for your service principal, you can find its value in **Client credentials** on your secret's overview page. If you need to create a secret, you can follow the steps in the [service principal guide](create-service-principal-azure.md#adding-a-secret-to-the-client-credentials).
The service principal needs permission to get metadata for the database, schemas
1. Select **Create** to create the secret.
- :::image type="content" source="media/register-scan-azure-sql-database/select-create.png" alt-text="Screenshot that shows the Key Vault Create a secret menu, with the Create button highlighted.":::
- 1. If your key vault isn't connected to Microsoft Purview yet, you'll need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-microsoft-purview-account) 1. Then, [create a new credential](manage-credentials.md#create-a-new-credential).
Select your method of authentication from the tabs below for scanning steps.
1. Choose your scan trigger. You can set up a schedule or run the scan once.
- :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-scan-trigger.png" alt-text="scan trigger.":::
- 1. Review your scan and select **Save and run**.
- :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-review-scan.png" alt-text="review scan.":::
- ### View Scan 1. Navigate to the _data source_ in the _Collection_ and select **View Details** to check the status of the scan
Scans can be managed or run again on completion
:::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-full-inc.png" alt-text="full or incremental scan.":::
+## Access policy
+
+To create an access policy for Azure SQL Database, follow these guides:
+* [Single SQL account](./how-to-data-owner-policies-azure-sql-db.md) - This guide will allow you to enable access policies on a single Azure SQL Database account in your subscription.
+* [All sources in a subscription or resource group](./how-to-data-owner-policies-resource-group.md) - This guide will allow you to enable access policies on all enabled and available sources in a resource group, or across an Azure subscription.
+ ## Lineage (Preview) <a id="lineagepreview"></a>
Microsoft Purview supports lineage from Azure SQL Database. At the time of setti
1. Follow steps under [authentication for a scan using Managed Identity](#authentication-for-a-scan) section to authorize Microsoft Purview scan your Azure SQL Database
-2. Sign in to Azure SQL Database with Azure AD account and assign proper permission (for example: db_owner) to Purview Managed identity. Use below example SQL syntax to create user and grant permission by replacing 'purview-account' with your Account name:
+2. Sign in to Azure SQL Database with an Azure AD account and assign db_owner permissions to the Microsoft Purview managed identity. Use the following example SQL syntax to create the user and grant permission, replacing 'purview-account' with your account name:
```sql Create user <purview-account> FROM EXTERNAL PROVIDER
You can [browse data catalog](how-to-browse-catalog.md) or [search data catalog]
* Lineage is captured for stored procedure runs that happened after a successful scan is set up. Lineage from past Stored procedure runs isn't captured. * If your database is processing heavy workloads with lots of stored procedure runs, lineage extraction will filter only the most recent runs. Stored procedure runs early in the 6 hour window or the run instances that create heavy query load won't be extracted. Contact support if you're missing lineage from any stored procedure runs. - ## Next steps Now that you've registered your source, follow the below guides to learn more about Microsoft Purview and your data.
purview Register Scan Hive Metastore Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-hive-metastore-source.md
When setting up scan, you can choose to scan an entire Hive metastore database,
* Ensure that Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the machine where the self-hosted integration runtime is running. If you don't have this update installed, [download it now](https://www.microsoft.com/download/details.aspx?id=30679).
- * Download the Hive Metastore database's JDBC driver on the machine where your self-hosted integration runtime is running. For example, if the database is *mssql*, download [Microsoft's JDBC driver for SQL Server](/sql/connect/jdbc/download-microsoft-jdbc-driver-for-sql-server). If you scan Azure Databricks's Hive Metastore, download the MariaDB Connector/J version 2.7.5 from [here](https://dlm.mariadb.com/1965742/Connectors/java/connector-java-2.7.5/mariadb-java-client-2.7.5.jar); version 3.0.3 isn't supported. Note down the folder path which you will use to set up the scan.
+ * Download the Hive Metastore database's JDBC driver on the machine where your self-hosted integration runtime is running. For example, if the database is *mssql*, download [Microsoft's JDBC driver for SQL Server](/sql/connect/jdbc/download-microsoft-jdbc-driver-for-sql-server). If you scan Azure Databricks's Hive Metastore, download the MariaDB Connector/J version 2.7.5 from [here](https://dlm.mariadb.com/1965742/Connectors/java/connector-java-2.7.5/mariadb-java-client-2.7.5.jar); version 3.0.3 isn't supported. Note down the folder path that you'll use to set up the scan.
> [!Note] > The driver should be accessible by the self-hosted integration runtime. By default, self-hosted integration runtime uses [local service account "NT SERVICE\DIAHostService"](manage-integration-runtimes.md#service-account-for-self-hosted-integration-runtime). Make sure it has "Read and execute" and "List folder contents" permission to the driver folder.
The only supported authentication for a Hive Metastore database is Basic Authent
## Scan
+> [!TIP]
+> To troubleshoot any issues with scanning:
+> 1. Confirm you have followed all [**prerequisites**](#prerequisites).
+> 1. Review our [**scan troubleshooting documentation**](troubleshoot-connections.md).
+ Use the following steps to scan Hive Metastore databases to automatically identify assets and classify your data. For more information about scanning in general, see [Scans and ingestion in Microsoft Purview](concept-scans-and-ingestion.md). 1. In the Management Center, select integration runtimes. Make sure that a self-hosted integration runtime is set up. If it isn't set up, use the steps in [Create and manage a self-hosted integration runtime](./manage-integration-runtimes.md).
Use the following steps to scan Hive Metastore databases to automatically identi
:::image type="content" source="media/register-scan-hive-metastore-source/databricks-credentials.png" alt-text="Screenshot that shows Azure Databricks username and password examples as property values." border="true":::
- 1. **Metastore JDBC Driver Location**: Specify the path to the JDBC driver location in your machine where self-host integration runtime is running, e.g. `D:\Drivers\HiveMetastore`. It's the path to valid JAR folder location. Make sure the driver is accessible by the self-hosted integration runtime, learn more from [prerequisites section](#prerequisites).
+ 1. **Metastore JDBC Driver Location**: Specify the path to the JDBC driver location on the machine where the self-hosted integration runtime is running, for example, `D:\Drivers\HiveMetastore`. It's the path to a valid JAR folder location. Make sure the driver is accessible by the self-hosted integration runtime; learn more in the [prerequisites section](#prerequisites).
> [!Note] > If you scan Azure Databricks's Hive Metastore, download the MariaDB Connector/J version 2.7.5 from [here](https://dlm.mariadb.com/1965742/Connectors/java/connector-java-2.7.5/mariadb-java-client-2.7.5.jar). Version 3.0.3 is not supported.
purview Register Scan Oracle Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-oracle-source.md
Currently, the Oracle service name isn't captured in the metadata or hierarchy.
* Ensure Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](https://www.microsoft.com/download/details.aspx?id=30679).
- * Download the [Oracle JDBC driver](https://www.oracle.com/database/technologies/appdev/jdbc-downloads.html) on the machine where your self-hosted integration runtime is running. Note down the folder path which you will use to set up the scan.
+ * Download the [Oracle JDBC driver](https://www.oracle.com/database/technologies/appdev/jdbc-downloads.html) on the machine where your self-hosted integration runtime is running. Note down the folder path that you'll use to set up the scan.
> [!Note] > The driver should be accessible by the self-hosted integration runtime. By default, self-hosted integration runtime uses [local service account "NT SERVICE\DIAHostService"](manage-integration-runtimes.md#service-account-for-self-hosted-integration-runtime). Make sure it has "Read and execute" and "List folder contents" permission to the driver folder.
On the **Register sources (Oracle)** screen, do the following:
Follow the steps below to scan Oracle to automatically identify assets and classify your data. For more information about scanning in general, see our [introduction to scans and ingestion](concept-scans-and-ingestion.md).
+> [!TIP]
+> To troubleshoot any issues with scanning:
+> 1. Confirm you have followed all [**prerequisites**](#prerequisites).
+> 1. Review our [**scan troubleshooting documentation**](troubleshoot-connections.md).
+ ### Create and run scan To create and run a new scan, do the following:
To create and run a new scan, do the following:
Usage of NOT and special characters aren't acceptable.
- 1. **Driver location**: Specify the path to the JDBC driver location in your machine where self-host integration runtime is running, e.g. `D:\Drivers\Oracle`. It's the path to valid JAR folder location. Make sure the driver is accessible by the self-hosted integration runtime, learn more from [prerequisites section](#prerequisites).
+ 1. **Driver location**: Specify the path to the JDBC driver location on the machine where the self-hosted integration runtime is running, for example, `D:\Drivers\Oracle`. It's the path to a valid JAR folder location. Make sure the driver is accessible by the self-hosted integration runtime; learn more in the [prerequisites section](#prerequisites).
1. **Stored procedure details**: Controls the number of details imported from stored procedures:
purview Register Scan Power Bi Tenant Cross Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-power-bi-tenant-cross-tenant.md
This article outlines how to register a Power BI tenant in a cross-tenant scenar
- For cross-tenant scenario, delegated authentication is only supported option for scanning. - You can create only one scan for a Power BI data source that is registered in your Microsoft Purview account.-- If Power BI dataset schema is not shown after scan, it is due to one of the current limitations with [Power BI Metadata scanner](/power-bi/admin/service-admin-metadata-scanning).
+- If Power BI dataset schema isn't shown after scan, it's due to one of the current limitations with [Power BI Metadata scanner](/power-bi/admin/service-admin-metadata-scanning).
## Prerequisites
Use any of the following deployment checklists during the setup or for troublesh
1. Make sure Power BI and Microsoft Purview accounts are in cross-tenant.
-2. Make sure Power BI tenant Id is entered correctly during the registration. By default, Power BI tenant ID that exists in the same Azure Active Directory as Microsoft Purview will be populated.
+2. Make sure the Power BI tenant ID is entered correctly during the registration. By default, the Power BI tenant ID that exists in the same Azure Active Directory as Microsoft Purview will be populated.
3. From Azure portal, validate if Microsoft Purview account Network is set to public access.
Use any of the following deployment checklists during the setup or for troublesh
7. In Power BI Azure AD tenant, validate Power BI admin user settings to make sure: 1. User is assigned to Power BI Administrator role. 2. At least one [Power BI license](/power-bi/admin/service-admin-licensing-organization#subscription-license-types) is assigned to the user.
- 3. If user is recently created, login with the user at least once to make sure password is reset successfully and user can successfully initiate the session.
- 4. There is no MFA or Conditional Access Policies are enforced on the user.
+    3. If the user was recently created, sign in with the user at least once to make sure the password is reset successfully and the user can successfully initiate a session.
+    4. No MFA or Conditional Access policies are enforced on the user.
8. In Power BI Azure AD tenant, validate App registration settings to make sure: 1. App registration exists in your Azure Active Directory tenant where Power BI tenant is located.
Use any of the following deployment checklists during the setup or for troublesh
1. Make sure Power BI and Microsoft Purview accounts are in cross-tenant.
-2. Make sure Power BI tenant Id is entered correctly during the registration.By default, Power BI tenant ID that exists in the same Azure Active Directory as Microsoft Purview will be populated.
+2. Make sure the Power BI tenant ID is entered correctly during the registration. By default, the Power BI tenant ID that exists in the same Azure Active Directory as Microsoft Purview will be populated.
3. From Azure portal, validate if Microsoft Purview account Network is set to public access.
Use any of the following deployment checklists during the setup or for troublesh
8. In Power BI Azure AD tenant, validate Power BI admin user settings to make sure: 1. User is assigned to Power BI Administrator role. 2. At least one [Power BI license](/power-bi/admin/service-admin-licensing-organization#subscription-license-types) is assigned to the user.
- 3. If user is recently created, login with the user at least once to make sure password is reset successfully and user can successfully initiate the session.
- 4. There is no MFA or Conditional Access Policies are enforced on the user.
+    3. If the user was recently created, sign in with the user at least once to make sure the password is reset successfully and the user can successfully initiate a session.
+    4. No MFA or Conditional Access policies are enforced on the user.
9. In Power BI Azure AD tenant, validate App registration settings to make sure: 5. App registration exists in your Azure Active Directory tenant where Power BI tenant is located.
Use any of the following deployment checklists during the setup or for troublesh
## Scan cross-tenant Power BI
+> [!TIP]
+> To troubleshoot any issues with scanning:
+> 1. Confirm you have completed the [**deployment checklist for your scenario**](#deployment-checklist).
+> 1. Review our [**scan troubleshooting documentation**](register-scan-power-bi-tenant-troubleshoot.md).
+ ### Scan cross-tenant Power BI using Delegated authentication Delegated authentication is the only supported option for cross-tenant scans; however, you can use either the Azure runtime or a self-hosted integration runtime to run the scan. To create and run a new scan using the Azure runtime, perform the following steps:
-1. Create a user account in Azure Active Directory tenant where Power BI tenant is located and assign the user to Azure Active Directory role, **Power BI Administrator**. Take note of username and login to change the password.
+1. Create a user account in the Azure Active Directory tenant where the Power BI tenant is located, and assign the user the Azure Active Directory **Power BI Administrator** role. Take note of the username, and sign in to change the password.
2. Assign proper Power BI license to the user.
To create and run a new scan using Azure runtime, perform the following steps:
:::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-key-vault-secret.png" alt-text="Screenshot how to generate an Azure Key Vault secret.":::
-6. If your key vault is not connected to Microsoft Purview yet, you will need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-microsoft-purview-account)
+6. If your key vault isn't connected to Microsoft Purview yet, you'll need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-microsoft-purview-account)
7. Create an App Registration in your Azure Active Directory tenant where Power BI is located. Provide a web URL in the **Redirect URI**. Take note of the Client ID (App ID).
To create and run a new scan using Azure runtime, perform the following steps:
:::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-scan-cross-tenant.png" alt-text="Image showing Power BI scan setup using Azure IR for cross tenant.":::
-17. For the **Credential**, select **Delegated authentication** and click **+ New** to create a new credential.
+17. For the **Credential**, select **Delegated authentication** and select **+ New** to create a new credential.
18. Create a new credential and provide required parameters:
To create and run a new scan using Azure runtime, perform the following steps:
If **Test Connection** failed, select **View Report** to see the detailed status and troubleshoot the problem:
- 1. Access - Failed status means the user authentication failed: Validate if username and password is correct. review if the Credential contains correct Client (App) ID from the App Registration.
+     1. Access - Failed status means the user authentication failed: Validate that the username and password are correct, and review whether the Credential contains the correct Client (App) ID from the App Registration.
     2. Assets (+ lineage) - Failed status means the Microsoft Purview - Power BI authorization has failed. Make sure the user is added to the Power BI Administrator role and has a proper Power BI license assigned. 3. Detailed metadata (Enhanced) - Failed status means the Power BI admin portal is disabled for the following setting - **Enhance admin APIs responses with detailed metadata**
To create and run a new scan using Azure runtime, perform the following steps:
## Next steps
-Now that you have registered your source, follow the below guides to learn more about Microsoft Purview and your data.
+Now that you've registered your source, follow the guides below to learn more about Microsoft Purview and your data.
- [Data Estate Insights in Microsoft Purview](concept-insights.md) - [Lineage in Microsoft Purview](catalog-lineage-user-guide.md)
purview Register Scan Power Bi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-power-bi-tenant.md
This article outlines how to register a Power BI tenant in a **same-tenant scena
|Private access |Denied |Allowed* |Self-hosted runtime |Delegated Authentication | [Review deployment checklist](#deployment-checklist) | |Private access |Denied |Denied |Self-hosted runtime |Delegated Authentication | [Review deployment checklist](#deployment-checklist) |
-\* Power BI tenant must have a private endpoint which is deployed in a Virtual Network accessible from the self-hosted integration runtime VM. For more information, see [private endpoint for Power BI tenant](/power-bi/enterprise/service-security-private-links).
+\* Power BI tenant must have a private endpoint that is deployed in a Virtual Network accessible from the self-hosted integration runtime VM. For more information, see [private endpoint for Power BI tenant](/power-bi/enterprise/service-security-private-links).
### Known limitations - If Microsoft Purview or Power BI tenant is protected behind a private endpoint, Self-hosted runtime is the only option to scan. - Delegated authentication is the only supported authentication option if self-hosted integration runtime is used during the scan. - You can create only one scan for a Power BI data source that is registered in your Microsoft Purview account.-- If Power BI dataset schema is not shown after scan, it is due to one of the current limitations with [Power BI Metadata scanner](/power-bi/admin/service-admin-metadata-scanning).
+- If Power BI dataset schema isn't shown after scan, it's due to one of the current limitations with [Power BI Metadata scanner](/power-bi/admin/service-admin-metadata-scanning).
## Prerequisites
Use any of the following deployment checklists during the setup or for troublesh
### Scan same-tenant Power BI using Azure IR and Managed Identity in public network 1. Make sure Power BI and Microsoft Purview accounts are in the same tenant.
-2. Make sure Power BI tenant Id is entered correctly during the registration.
+2. Make sure Power BI tenant ID is entered correctly during the registration.
3. From Azure portal, validate if Microsoft Purview account Network is set to public access. 4. From Power BI tenant Admin Portal, make sure Power BI tenant is configured to allow public network. 5. In Azure Active Directory tenant, create a security group.
Use any of the following deployment checklists during the setup or for troublesh
### Scan same-tenant Power BI using self-hosted IR and Delegated Authentication in public network 1. Make sure Power BI and Microsoft Purview accounts are in the same tenant.
-2. Make sure Power BI tenant Id is entered correctly during the registration.
+2. Make sure Power BI tenant ID is entered correctly during the registration.
3. From Azure portal, validate if Microsoft Purview account Network is set to public access. 4. From Power BI tenant Admin Portal, make sure Power BI tenant is configured to allow public network. 5. Check your Azure Key Vault to make sure:
Use any of the following deployment checklists during the setup or for troublesh
8. Validate Power BI admin user settings to make sure: 1. User is assigned to Power BI Administrator role. 2. At least one [Power BI license](/power-bi/admin/service-admin-licensing-organization#subscription-license-types) is assigned to the user.
- 3. If user is recently created, login with the user at least once to make sure password is reset successfully and user can successfully initiate the session.
- 4. There is no MFA or Conditional Access Policies are enforced on the user.
+    3. If the user was recently created, sign in with the user at least once to make sure the password is reset successfully and the user can successfully initiate a session.
+    4. No MFA or Conditional Access policies are enforced on the user.
9. Validate App registration settings to make sure: 5. App registration exists in your Azure Active Directory tenant. 6. Under **API permissions**, the following **delegated permissions** and **grant admin consent for the tenant** is set up with read for the following APIs:
Use any of the following deployment checklists during the setup or for troublesh
### Scan same-tenant Power BI using self-hosted IR and Delegated Authentication in a private network 1. Make sure Power BI and Microsoft Purview accounts are in the same tenant.
-2. Make sure Power BI tenant Id is entered correctly during the registration.
+2. Make sure Power BI tenant ID is entered correctly during the registration.
3. Check your Azure Key Vault to make sure: 1. There are no typos in the password. 2. Microsoft Purview Managed Identity has get/list access to secrets.
Use any of the following deployment checklists during the setup or for troublesh
5. Validate Power BI admin user to make sure: 1. User is assigned to Power BI Administrator role. 2. At least one [Power BI license](/power-bi/admin/service-admin-licensing-organization#subscription-license-types) is assigned to the user.
- 3. If user is recently created, login with the user at least once to make sure password is reset successfully and user can successfully initiate the session.
- 4. There is no MFA or Conditional Access Policies are enforced on the user.
+    3. If the user was recently created, sign in with the user at least once to make sure the password is reset successfully and the user can successfully initiate a session.
+    4. No MFA or Conditional Access policies are enforced on the user.
6. Validate Self-hosted runtime settings: 1. Latest version of [Self-hosted runtime](https://www.microsoft.com/download/details.aspx?id=39717) is installed on the VM. 2. [JDK 8 or later](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html) is installed.
This section describes how to register a Power BI tenant in Microsoft Purview fo
## Scan same-tenant Power BI
+> [!TIP]
+> To troubleshoot any issues with scanning:
+> 1. Confirm you have completed the [**deployment checklist for your scenario**](#deployment-checklist).
+> 1. Review our [**scan troubleshooting documentation**](register-scan-power-bi-tenant-troubleshoot.md).
+ ### Scan same-tenant Power BI using Azure IR and Managed Identity This scenario is suitable if both the Microsoft Purview account and the Power BI tenant are configured to allow public access in the network settings.
This scenario can be used when Microsoft Purview and Power BI tenant or both, ar
To create and run a new scan, do the following:
-1. Create a user account in Azure Active Directory tenant and assign the user to Azure Active Directory role, **Power BI Administrator**. Take note of username and login to change the password.
+1. Create a user account in the Azure Active Directory tenant and assign the user the Azure Active Directory **Power BI Administrator** role. Take note of the username, and sign in to change the password.
1. Assign proper Power BI license to the user.
To create and run a new scan, do the following:
:::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-key-vault-secret.png" alt-text="Screenshot how to generate an Azure Key Vault secret.":::
-1. If your key vault is not connected to Microsoft Purview yet, you will need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-microsoft-purview-account)
+1. If your key vault isn't connected to Microsoft Purview yet, you'll need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-microsoft-purview-account)
1. Create an App Registration in your Azure Active Directory tenant. Provide a web URL in the **Redirect URI**. Take note of the Client ID (App ID).
To create and run a new scan, do the following:
:::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-scan-shir.png" alt-text="Image showing Power BI scan setup using SHIR for same tenant.":::
-1. For the **Credential**, select **Delegated authentication** and click **+ New** to create a new credential.
+1. For the **Credential**, select **Delegated authentication** and select **+ New** to create a new credential.
1. Create a new credential and provide required parameters:
To create and run a new scan, do the following:
## Next steps
-Now that you have registered your source, follow the below guides to learn more about Microsoft Purview and your data.
+Now that you've registered your source, follow the guides below to learn more about Microsoft Purview and your data.
- [Data Estate Insights in Microsoft Purview](concept-insights.md) - [Lineage in Microsoft Purview](catalog-lineage-user-guide.md)
purview Register Scan Synapse Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-synapse-workspace.md
This article outlines how to register Azure Synapse Analytics workspaces and how
>[!NOTE] >Currently, Azure Synapse lake databases are not supported.
-<!-- 4. Prerequisites
-Required. Add any relevant/source-specific prerequisites for connecting with this source. Authentication/Registration should be covered by the sections below and does not need to be covered here.
>- ## Prerequisites * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). * An active [Microsoft Purview account](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
## Register
Only a user with at least a *Reader* role on the Azure Synapse workspace and who
Follow the steps below to scan Azure Synapse Analytics workspaces to automatically identify assets and classify your data. For more information about scanning in general, see our [introduction to scans and ingestion](concept-scans-and-ingestion.md).
-You will first need to set up authentication for enumerating for either your [dedicated](#authentication-for-enumerating-dedicated-sql-database-resources) or [serverless](#authentication-for-enumerating-serverless-sql-database-resources) resources. This will allow Microsoft Purview to enumerate your workspace assets and perform scans.
+1. You'll first need to set up authentication for enumerating either your [dedicated](#authentication-for-enumerating-dedicated-sql-database-resources) or [serverless](#authentication-for-enumerating-serverless-sql-database-resources) resources. This will allow Microsoft Purview to enumerate your workspace assets and perform scans.
+1. Then, you'll need to [apply permissions to scan the contents of the workspace](#apply-permissions-to-scan-the-contents-of-the-workspace).
+1. Lastly, confirm your [network is set up to allow access for Microsoft Purview](#set-up-azure-synapse-workspace-firewall-access).
+
+> [!TIP]
+> To troubleshoot any issues with scanning:
+> 1. Confirm you have followed all [**prerequisites**](#prerequisites).
+> 1. Confirm you have set up [enumeration authentication](#enumeration-authentication) for your resources.
+> 1. Confirm [authentication](#apply-permissions-to-scan-the-contents-of-the-workspace) is properly set up.
+> 1. Check network by confirming [firewall settings](#set-up-azure-synapse-workspace-firewall-access).
+> 1. Review our [**scan troubleshooting documentation**](troubleshoot-connections.md).
-Then, you will need to [apply permissions to scan the contents of the workspace](#apply-permissions-to-scan-the-contents-of-the-workspace).
+### Enumeration authentication
-### Authentication for enumerating dedicated SQL database resources
+#### Authentication for enumerating dedicated SQL database resources
1. In the Azure portal, go to the Azure Synapse workspace resource. 1. On the left pane, select **Access Control (IAM)**.
Then, you will need to [apply permissions to scan the contents of the workspace]
> [!NOTE] > If you're planning to register and scan multiple Azure Synapse workspaces in your Microsoft Purview account, you can also assign the role from a higher level, such as a resource group or a subscription.
-### Authentication for enumerating serverless SQL database resources
+#### Authentication for enumerating serverless SQL database resources
+
+There are three places you'll need to set authentication to allow Microsoft Purview to enumerate your serverless SQL database resources:
+* [The Azure Synapse workspace](#azure-synapse-workspace)
+* [The associated storage](#storage-account)
+* [The Azure Synapse serverless databases](#azure-synapse-serverless-database)
-There are three places you will need to set authentication to allow Microsoft Purview to enumerate your serverless SQL database resources: The Azure Synapse workspace, the associated storage, and the Azure Synapse serverless databases. The steps below will set permissions for all three.
+The steps below will set permissions for all three.
-#### Azure Synapse workspace
+##### Azure Synapse workspace
1. In the Azure portal, go to the Azure Synapse workspace resource. 1. On the left pane, select **Access Control (IAM)**.
There are three places you will need to set authentication to allow Microsoft Pu
1. Set the **Reader** role and enter your Microsoft Purview account name, which represents its managed service identity (MSI). 1. Select **Save** to finish assigning the role.
-#### Storage account
+##### Storage account
1. In the Azure portal, go to the **Resource group** or **Subscription** that the storage account associated with the Azure Synapse workspace is in. 1. On the left pane, select **Access Control (IAM)**.
There are three places you will need to set authentication to allow Microsoft Pu
1. Set the **Storage blob data reader** role and enter your Microsoft Purview account name (which represents its MSI) in the **Select** box. 1. Select **Save** to finish assigning the role.
-#### Azure Synapse serverless database
+##### Azure Synapse serverless database
1. Go to your Azure Synapse workspace and open the Synapse Studio. 1. Select the **Data** tab on the left menu.
There are three places you will need to set authentication to allow Microsoft Pu
### Apply permissions to scan the contents of the workspace
-You can set up authentication for an Azure Synapse source in either of two ways:
+You can set up authentication for an Azure Synapse source in either of two ways. Select your scenario below for steps to apply permissions.
- Use a managed identity - Use a service principal
You can set up authentication for an Azure Synapse source in either of two ways:
> [!NOTE] > You must set up authentication on each SQL database that you intend to register and scan from your Azure Synapse workspace.
+# [Managed identity](#tab/MI)
+ #### Use a managed identity for dedicated SQL databases 1. Go to your Azure Synapse workspace.
If the Azure Synapse workspace has any external tables, the Microsoft Purview ma
```sql GRANT REFERENCES ON DATABASE SCOPED CREDENTIAL::[scoped_credential] TO [PurviewAccountName]; ```
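If you'd rather script the per-database permissions than run the T-SQL by hand, here's a minimal sketch using `pyodbc`. It assumes the usual managed-identity pattern of creating a contained database user for the Purview account and adding it to `db_datareader`; the server, database, and account names are placeholders, the exact statements your scenario needs may differ, and it requires the Microsoft ODBC Driver for SQL Server plus rights to create users in the database.

```python
# Hedged sketch (not from the article): apply database-level permissions for the
# Microsoft Purview managed identity on a dedicated SQL database.
import pyodbc

PURVIEW_ACCOUNT = "contoso-purview"                  # placeholder Purview account name
SERVER = "contoso-synapse.sql.azuresynapse.net"      # placeholder dedicated SQL endpoint
DATABASE = "dedicated_sql_db"                        # placeholder database name

conn_str = (
    "Driver={ODBC Driver 18 for SQL Server};"
    f"Server={SERVER};Database={DATABASE};"
    "Authentication=ActiveDirectoryInteractive;"     # prompts for an Azure AD sign-in
)

statements = [
    f"CREATE USER [{PURVIEW_ACCOUNT}] FROM EXTERNAL PROVIDER;",
    f"ALTER ROLE db_datareader ADD MEMBER [{PURVIEW_ACCOUNT}];",
]

with pyodbc.connect(conn_str, autocommit=True) as conn:
    cursor = conn.cursor()
    for statement in statements:
        cursor.execute(statement)  # run each permission statement in turn
```

Run it once per dedicated SQL database you plan to register and scan.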
+# [Service principal](#tab/SP)
#### Use a service principal for dedicated SQL databases
GRANT REFERENCES ON DATABASE SCOPED CREDENTIAL::[scoped_credential] TO [PurviewA
ALTER ROLE db_datareader ADD MEMBER [ServicePrincipalID]; ``` ++ ### Set up Azure Synapse workspace firewall access 1. In the Azure portal, go to the Azure Synapse workspace.
To create and run a new scan, do the following:
## Next steps
-Now that you have registered your source, follow the below guides to learn more about Microsoft Purview and your data.
+Now that you've registered your source, follow the guides below to learn more about Microsoft Purview and your data.
- [Data Estate Insights in Microsoft Purview](concept-insights.md) - [Lineage in Microsoft Purview](catalog-lineage-user-guide.md)
purview Sensitivity Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/sensitivity-insights.md
Last updated 04/22/2022
-# Customer intent: As a security officer, I need to understand how to use Microsoft Purview Data Estate Insights to learn about sensitive data identified and classified and labeled during scanning.
-+
+#Customer intent: As a security officer, I need to understand how to use Microsoft Purview Data Estate Insights to learn about sensitive data identified and classified and labeled during scanning.
# Sensitivity label insights about your data in Microsoft Purview This how-to guide describes how to access, view, and filter security insights provided by sensitivity labels applied to your data.
-> [!IMPORTANT]
-> Sensitivity labels in Microsoft Purview Data Estate Insights are currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
Supported data sources include: Azure Blob Storage, Azure Data Lake Storage (ADLS) GEN 1, Azure Data Lake Storage (ADLS) GEN 2, SQL Server, Azure SQL Database, Azure SQL Managed Instance, Amazon S3 buckets, Amazon RDS databases (public preview), Power BI
For more information, see [How to automatically apply sensitivity labels to your
## Next steps
-Learn more about these Microsoft Purview insight reports:
+Learn how to use Data Estate Insights with sources below:
-- [Glossary insights](glossary-insights.md)-- [Classification insights](./classification-insights.md)
+* [Learn how to use Asset insights](asset-insights.md)
+* [Learn how to use Data Stewardship](data-stewardship.md)
+* [Learn how to use Classification insights](classification-insights.md)
+* [Learn how to use Glossary insights](glossary-insights.md)
purview Tutorial Using Rest Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-using-rest-apis.md
Sample response token:
} ```
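As a reference, here's a minimal Python sketch of the same flow: request a token with the client credentials grant, then call a data plane endpoint with it. The tenant, client, and account values are placeholders, and the Atlas `typedefs` call is only an illustrative read-only example; substitute whichever data plane API you actually need.

```python
# Hedged sketch: acquire a bearer token and call one Purview data plane endpoint.
import requests

TENANT_ID = "<tenant-id>"
CLIENT_ID = "<client-id>"
CLIENT_SECRET = "<client-secret>"
PURVIEW_ACCOUNT = "<purview-account-name>"

# Request a token for the Purview resource using the client credentials flow.
token_response = requests.post(
    f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token",
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "scope": "https://purview.azure.net/.default",
    },
)
token_response.raise_for_status()
access_token = token_response.json()["access_token"]

# Call a data plane API with the token (illustrative read-only Atlas call).
api_response = requests.get(
    f"https://{PURVIEW_ACCOUNT}.purview.azure.com/catalog/api/atlas/v2/types/typedefs",
    headers={"Authorization": f"Bearer {access_token}"},
)
api_response.raise_for_status()
print(api_response.json())
```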
+> [!TIP]
+> If you get an error message that reads: *Cross-origin token redemption is permitted only for the 'Single-Page Application' client-type.*
+> * Check your request headers and confirm that your request **doesn't** contain the 'origin' header.
+> * Confirm that your redirect URI is set to **web** in your service principal.
+> * If you are using an application like Postman, make sure your software is up to date.
+ Use the access token above to call the Data plane APIs. ## Next steps
role-based-access-control Built In Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles.md
Previously updated : 04/25/2022 Last updated : 05/20/2022
The following table provides a brief description of each built-in role. Click th
> | [Data Box Reader](#data-box-reader) | Lets you manage Data Box Service except creating order or editing order details and giving access to others. | 028f4ed7-e2a9-465e-a8f4-9c0ffdfdc027 | > | [Data Lake Analytics Developer](#data-lake-analytics-developer) | Lets you submit, monitor, and manage your own jobs but not create or delete Data Lake Analytics accounts. | 47b7735b-770e-4598-a7da-8b91488b4c88 | > | [Reader and Data Access](#reader-and-data-access) | Lets you view everything but will not let you delete or create a storage account or contained resource. It will also allow read/write access to all data contained in a storage account via access to storage account keys. | c12c1c16-33a1-487b-954d-41c89c60f349 |
+> | [Storage Account Backup Contributor](#storage-account-backup-contributor) | Lets you perform backup and restore operations using Azure Backup on the storage account. | e5e2a7ff-d759-4cd2-bb51-3152d37e2eb1 |
> | [Storage Account Contributor](#storage-account-contributor) | Permits management of storage accounts. Provides access to the account key, which can be used to access data via Shared Key authorization. | 17d1049b-9a84-46fb-8f53-869881c3d3ab | > | [Storage Account Key Operator Service Role](#storage-account-key-operator-service-role) | Permits listing and regenerating storage account access keys. | 81a9662b-bebf-436f-a333-f67b29880f12 | > | [Storage Blob Data Contributor](#storage-blob-data-contributor) | Read, write, and delete Azure Storage containers and blobs. To learn which actions are required for a given data operation, see [Permissions for calling blob and queue data operations](/rest/api/storageservices/authenticate-with-azure-active-directory#permissions-for-calling-blob-and-queue-data-operations). | ba92f5b4-2d11-453d-a403-e96b0029c9fe |
The following table provides a brief description of each built-in role. Click th
> | [Microsoft Sentinel Contributor](#microsoft-sentinel-contributor) | Microsoft Sentinel Contributor | ab8e14d6-4a74-4a29-9ba8-549422addade | > | [Microsoft Sentinel Reader](#microsoft-sentinel-reader) | Microsoft Sentinel Reader | 8d289c81-5878-46d4-8554-54e1e3d8b5cb | > | [Microsoft Sentinel Responder](#microsoft-sentinel-responder) | Microsoft Sentinel Responder | 3e150937-b8fe-4cfb-8069-0eaf05ecd056 |
-> | [Security Admin](#security-admin) | View and update permissions for Security Center. Same permissions as the Security Reader role and can also update the security policy and dismiss alerts and recommendations. | fb1c8493-542b-48eb-b624-b4c8fea62acd |
-> | [Security Assessment Contributor](#security-assessment-contributor) | Lets you push assessments to Security Center | 612c2aa1-cb24-443b-ac28-3ab7272de6f5 |
+> | [Security Admin](#security-admin) | View and update permissions for Microsoft Defender for Cloud. Same permissions as the Security Reader role and can also update the security policy and dismiss alerts and recommendations. | fb1c8493-542b-48eb-b624-b4c8fea62acd |
+> | [Security Assessment Contributor](#security-assessment-contributor) | Lets you push assessments to Microsoft Defender for Cloud | 612c2aa1-cb24-443b-ac28-3ab7272de6f5 |
> | [Security Manager (Legacy)](#security-manager-legacy) | This is a legacy role. Please use Security Admin instead. | e3d13bf0-dd5a-482e-ba6b-9b8433878d10 |
-> | [Security Reader](#security-reader) | View permissions for Security Center. Can view recommendations, alerts, a security policy, and security states, but cannot make changes. | 39bc4728-0917-49c7-9d2c-d95423bc2eb4 |
+> | [Security Reader](#security-reader) | View permissions for Microsoft Defender for Cloud. Can view recommendations, alerts, a security policy, and security states, but cannot make changes. | 39bc4728-0917-49c7-9d2c-d95423bc2eb4 |
> | **DevOps** | | | > | [DevTest Labs User](#devtest-labs-user) | Lets you connect, start, restart, and shutdown your virtual machines in your Azure DevTest Labs. | 76283e04-6283-4c54-8f91-bcf1374a3c64 | > | [Lab Creator](#lab-creator) | Lets you create new labs under your Azure Lab Accounts. | b97fb8bc-a8b2-4522-a38b-dd33c7e65ead |
Lets you view everything but will not let you delete or create a storage account
} ```
+### Storage Account Backup Contributor
+
+Lets you perform backup and restore operations using Azure Backup on the storage account. [Learn more](../backup/blob-backup-configure-manage.md)
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/locks/read | Gets locks at the specified scope. |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/locks/write | Add locks at the specified scope. |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/locks/delete | Delete locks at the specified scope. |
+> | [Microsoft.Features](resource-provider-operations.md#microsoftfeatures)/features/read | Gets the features of a subscription. |
+> | [Microsoft.Features](resource-provider-operations.md#microsoftfeatures)/providers/features/read | Gets the feature of a subscription in a given resource provider. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
+> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/operations/read | Polls the status of an asynchronous operation. |
+> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/objectReplicationPolicies/delete | Delete object replication policy |
+> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/objectReplicationPolicies/read | List object replication policies |
+> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/objectReplicationPolicies/write | Create or update object replication policy |
+> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/objectReplicationPolicies/restorePointMarkers/write | |
+> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/blobServices/containers/read | Returns list of containers |
+> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/blobServices/containers/write | Returns the result of put blob container |
+> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/blobServices/read | Returns blob service properties or statistics |
+> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/blobServices/write | Returns the result of put blob service properties |
+> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/read | Returns the list of storage accounts or gets the properties for the specified storage account. |
+> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/restoreBlobRanges/action | Restore blob ranges to the state of the specified time |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Lets you perform backup and restore operations using Azure Backup on the storage account.",
+ "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/e5e2a7ff-d759-4cd2-bb51-3152d37e2eb1",
+ "name": "e5e2a7ff-d759-4cd2-bb51-3152d37e2eb1",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.Authorization/*/read",
+ "Microsoft.Authorization/locks/read",
+ "Microsoft.Authorization/locks/write",
+ "Microsoft.Authorization/locks/delete",
+ "Microsoft.Features/features/read",
+ "Microsoft.Features/providers/features/read",
+ "Microsoft.Resources/subscriptions/resourceGroups/read",
+ "Microsoft.Storage/operations/read",
+ "Microsoft.Storage/storageAccounts/objectReplicationPolicies/delete",
+ "Microsoft.Storage/storageAccounts/objectReplicationPolicies/read",
+ "Microsoft.Storage/storageAccounts/objectReplicationPolicies/write",
+ "Microsoft.Storage/storageAccounts/objectReplicationPolicies/restorePointMarkers/write",
+ "Microsoft.Storage/storageAccounts/blobServices/containers/read",
+ "Microsoft.Storage/storageAccounts/blobServices/containers/write",
+ "Microsoft.Storage/storageAccounts/blobServices/read",
+ "Microsoft.Storage/storageAccounts/blobServices/write",
+ "Microsoft.Storage/storageAccounts/read",
+ "Microsoft.Storage/storageAccounts/restoreBlobRanges/action"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "Storage Account Backup Contributor",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
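To assign this role at storage account scope, you can use the portal, the CLI, or the Role Assignments REST API. The sketch below shows the REST approach in Python; the subscription, resource group, account, and principal IDs are placeholders, the `api-version` may need adjusting, and it assumes the `azure-identity` and `requests` packages plus permission to create role assignments (for example, Owner or User Access Administrator).

```python
# Hedged sketch: assign Storage Account Backup Contributor at storage account scope.
import uuid
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
STORAGE_ACCOUNT = "<storage-account>"
PRINCIPAL_ID = "<object-id-of-user-or-identity>"            # who receives the role
ROLE_DEFINITION_ID = "e5e2a7ff-d759-4cd2-bb51-3152d37e2eb1"  # Storage Account Backup Contributor

scope = (
    f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}"
    f"/providers/Microsoft.Storage/storageAccounts/{STORAGE_ACCOUNT}"
)

# Get an ARM token; DefaultAzureCredential picks up az login, env vars, or MSI.
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

assignment_name = str(uuid.uuid4())  # role assignment names are GUIDs
url = (
    f"https://management.azure.com{scope}/providers/Microsoft.Authorization"
    f"/roleAssignments/{assignment_name}?api-version=2022-04-01"
)
body = {
    "properties": {
        "roleDefinitionId": (
            f"/subscriptions/{SUBSCRIPTION_ID}/providers/Microsoft.Authorization"
            f"/roleDefinitions/{ROLE_DEFINITION_ID}"
        ),
        "principalId": PRINCIPAL_ID,
    }
}

response = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()
print(response.json()["id"])  # resource ID of the new role assignment
```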
+ ### Storage Account Contributor Permits management of storage accounts. Provides access to the account key, which can be used to access data via Shared Key authorization. [Learn more](../storage/common/storage-auth-aad.md)
List cluster admin credential action. [Learn more](../aks/control-kubeconfig-acc
> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/listClusterAdminCredential/action | List the clusterAdmin credential of a managed cluster | > | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/accessProfiles/listCredential/action | Get a managed cluster access profile by role name using list credential | > | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/read | Get a managed cluster |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/runcommand/action | Run user issued command against managed kubernetes server. |
> | **NotActions** | | > | *none* | | > | **DataActions** | |
List cluster admin credential action. [Learn more](../aks/control-kubeconfig-acc
"actions": [ "Microsoft.ContainerService/managedClusters/listClusterAdminCredential/action", "Microsoft.ContainerService/managedClusters/accessProfiles/listCredential/action",
- "Microsoft.ContainerService/managedClusters/read"
+ "Microsoft.ContainerService/managedClusters/read",
+ "Microsoft.ContainerService/managedClusters/runcommand/action"
], "notActions": [], "dataActions": [],
Lets you manage the security-related policies of SQL servers and databases, but
> | [Microsoft.Security](resource-provider-operations.md#microsoftsecurity)/sqlVulnerabilityAssessments/* | | > | [Microsoft.Sql](resource-provider-operations.md#microsoftsql)/managedInstances/administrators/read | Gets a list of managed instance administrators. | > | [Microsoft.Sql](resource-provider-operations.md#microsoftsql)/servers/administrators/read | Gets a specific Azure Active Directory administrator object |
+> | [Microsoft.Sql](resource-provider-operations.md#microsoftsql)/servers/externalPolicyBasedAuthorizations/* | |
> | **NotActions** | | > | *none* | | > | **DataActions** | |
Lets you manage the security-related policies of SQL servers and databases, but
"Microsoft.Sql/managedInstances/azureADOnlyAuthentications/*", "Microsoft.Security/sqlVulnerabilityAssessments/*", "Microsoft.Sql/managedInstances/administrators/read",
- "Microsoft.Sql/servers/administrators/read"
+ "Microsoft.Sql/servers/administrators/read",
+ "Microsoft.Sql/servers/externalPolicyBasedAuthorizations/*"
], "notActions": [], "dataActions": [],
Lets you manage SQL servers and databases, but not access to them, and not their
> | [Microsoft.Sql](resource-provider-operations.md#microsoftsql)/servers/vulnerabilityAssessments/* | | > | [Microsoft.Sql](resource-provider-operations.md#microsoftsql)/servers/azureADOnlyAuthentications/delete | Deletes a specific server Azure Active Directory only authentication object | > | [Microsoft.Sql](resource-provider-operations.md#microsoftsql)/servers/azureADOnlyAuthentications/write | Adds or updates a specific server Azure Active Directory only authentication object |
+> | [Microsoft.Sql](resource-provider-operations.md#microsoftsql)/servers/externalPolicyBasedAuthorizations/delete | Deletes a specific server external policy based authorization property |
+> | [Microsoft.Sql](resource-provider-operations.md#microsoftsql)/servers/externalPolicyBasedAuthorizations/write | Adds or updates a specific server external policy based authorization property |
> | **DataActions** | | > | *none* | | > | **NotDataActions** | |
Lets you manage SQL servers and databases, but not access to them, and not their
"Microsoft.Sql/servers/securityAlertPolicies/*", "Microsoft.Sql/servers/vulnerabilityAssessments/*", "Microsoft.Sql/servers/azureADOnlyAuthentications/delete",
- "Microsoft.Sql/servers/azureADOnlyAuthentications/write"
+ "Microsoft.Sql/servers/azureADOnlyAuthentications/write",
+ "Microsoft.Sql/servers/externalPolicyBasedAuthorizations/delete",
+ "Microsoft.Sql/servers/externalPolicyBasedAuthorizations/write"
], "dataActions": [], "notDataActions": []
Microsoft Sentinel Reader [Learn more](../sentinel/roles.md)
> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/alertRules/* | Create and manage a classic metric alert | > | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/* | Create and manage a deployment | > | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/templateSpecs/*/read | |
> | [Microsoft.Support](resource-provider-operations.md#microsoftsupport)/* | Create and update a support ticket | > | **NotActions** | | > | *none* | |
Microsoft Sentinel Reader [Learn more](../sentinel/roles.md)
"Microsoft.Insights/alertRules/*", "Microsoft.Resources/deployments/*", "Microsoft.Resources/subscriptions/resourceGroups/read",
+ "Microsoft.Resources/templateSpecs/*/read",
"Microsoft.Support/*" ], "notActions": [],
Microsoft Sentinel Responder [Learn more](../sentinel/roles.md)
### Security Admin
-View and update permissions for Security Center. Same permissions as the Security Reader role and can also update the security policy and dismiss alerts and recommendations. [Learn more](../security-center/security-center-permissions.md)
+View and update permissions for Microsoft Defender for Cloud. Same permissions as the Security Reader role and can also update the security policy and dismiss alerts and recommendations. [Learn more](../defender-for-cloud/permissions.md)
> [!div class="mx-tableFixed"] > | Actions | Description |
View and update permissions for Security Center. Same permissions as the Securit
### Security Assessment Contributor
-Lets you push assessments to Security Center
+Lets you push assessments to Microsoft Defender for Cloud
> [!div class="mx-tableFixed"] > | Actions | Description |
Lets you push assessments to Security Center
"assignableScopes": [ "/" ],
- "description": "Lets you push assessments to Security Center",
+ "description": "Lets you push assessments to Microsoft Defender for Cloud",
"id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/612c2aa1-cb24-443b-ac28-3ab7272de6f5", "name": "612c2aa1-cb24-443b-ac28-3ab7272de6f5", "permissions": [
This is a legacy role. Please use Security Admin instead.
### Security Reader
-View permissions for Security Center. Can view recommendations, alerts, a security policy, and security states, but cannot make changes. [Learn more](../security-center/security-center-permissions.md)
+View permissions for Microsoft Defender for Cloud. Can view recommendations, alerts, a security policy, and security states, but cannot make changes. [Learn more](../defender-for-cloud/permissions.md)
> [!div class="mx-tableFixed"] > | Actions | Description |
Services Hub Operator allows you to perform all read, write, and deletion operat
- [Assign Azure roles using the Azure portal](role-assignments-portal.md) - [Azure custom roles](custom-roles.md)-- [Permissions in Azure Security Center](../security-center/security-center-permissions.md)
+- [Permissions in Microsoft Defender for Cloud](../defender-for-cloud/permissions.md)
role-based-access-control Conditions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-overview.md
Previously updated : 05/16/2022 Last updated : 05/24/2022 #Customer intent: As a dev, devops, or it admin, I want to learn how to constrain access within a role assignment by using conditions.
Here are some of the [blob storage attributes](../storage/common/storage-auth-ab
- Blob prefix - Container name - Encryption scope name
+- Is Current Version
- Is hierarchical namespace enabled - Snapshot - Version ID
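As an illustration of how these attributes are used, the sketch below builds a condition that limits blob reads to a single container and shows the `condition`/`conditionVersion` properties you would attach to a role assignment. Treat the action name and attribute key as assumptions and check the storage attributes reference for the exact keys you need.

```python
# Hedged illustration of an ABAC condition attached to a role assignment.
import json

CONTAINER_NAME = "contoso-container"  # placeholder container name

condition = (
    "("
    " ("
    "  !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})"
    " )"
    " OR "
    " ("
    "  @Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name]"
    f" StringEquals '{CONTAINER_NAME}'"
    " )"
    ")"
)

# These properties would be merged into a role assignment create/update request
# body alongside roleDefinitionId and principalId.
assignment_properties = {
    "condition": condition,
    "conditionVersion": "2.0",
}

print(json.dumps(assignment_properties, indent=2))
```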
Here's a list of the primary features of conditions:
| Feature | Status | Date | | | | |
-| Use the following [attributes](../storage/common/storage-auth-abac-attributes.md#azure-blob-storage-attributes) in a condition: Account name, Blob prefix, Encryption scope name, Is hierarchical namespace enabled, Snapshot, Version ID | Preview | May 2022 |
+| Use the following [attributes](../storage/common/storage-auth-abac-attributes.md#azure-blob-storage-attributes) in a condition: Account name, Blob prefix, Encryption scope name, Is Current Version, Is hierarchical namespace enabled, Snapshot, Version ID | Preview | May 2022 |
| Use [custom security attributes on a principal in a condition](conditions-format.md#principal-attributes) | Preview | November 2021 | | Add conditions to blob storage data role assignments | Preview | May 2021 | | Use attributes on a resource in a condition | Preview | May 2021 |
role-based-access-control Resource Provider Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/resource-provider-operations.md
Previously updated : 04/27/2022 Last updated : 05/20/2022
Azure service: core
> | Microsoft.Marketplace/privateStores/listStopSellOffersPlansNotifications/action | List stop sell offers plans notifications | > | Microsoft.Marketplace/privateStores/listSubscriptionsContext/action | List the subscription in private store context | > | Microsoft.Marketplace/privateStores/listNewPlansNotifications/action | List new plans notifications |
+> | Microsoft.Marketplace/privateStores/queryUserOffers/action | Fetch the approved offers from the offers ids and the user subscriptions in the payload |
+> | Microsoft.Marketplace/privateStores/anyExistingOffersInTheStore/action | Return true if there is an existing offer for at least one enabled collection |
> | Microsoft.Marketplace/privateStores/adminRequestApprovals/read | Read all request approvals details, only admins | > | Microsoft.Marketplace/privateStores/adminRequestApprovals/write | Admin update the request with decision on the request |
+> | Microsoft.Marketplace/privateStores/collections/approveAllItems/action | Delete all specific approved items and set collection to allItemsApproved |
+> | Microsoft.Marketplace/privateStores/collections/disableApproveAllItems/action | Set approve all items property to false for the collection |
> | Microsoft.Marketplace/privateStores/offers/write | Creates offer in PrivateStore. | > | Microsoft.Marketplace/privateStores/offers/delete | Deletes offer from PrivateStore. | > | Microsoft.Marketplace/privateStores/offers/read | Reads PrivateStore offers. |
Azure service: [Azure Search](../search/index.yml)
> | Microsoft.Search/searchServices/regenerateAdminKey/action | Regenerates the admin key. | > | Microsoft.Search/searchServices/listQueryKeys/action | Returns the list of query API keys for the given Azure Search service. | > | Microsoft.Search/searchServices/createQueryKey/action | Creates the query key. |
+> | Microsoft.Search/searchServices/aliases/read | Return an alias or a list of aliases. |
+> | Microsoft.Search/searchServices/aliases/write | Create an alias or modify its properties. |
+> | Microsoft.Search/searchServices/aliases/delete | Delete an alias. |
> | Microsoft.Search/searchServices/dataSources/read | Return a data source or a list of data sources. | > | Microsoft.Search/searchServices/dataSources/write | Create a data source or modify its properties. | > | Microsoft.Search/searchServices/dataSources/delete | Delete a data source. |
Azure service: [Azure SignalR Service](../azure-signalr/index.yml)
> | Microsoft.SignalRService/SignalR/eventGridFilters/delete | Delete an event grid filter from a SignalR resource. | > | Microsoft.SignalRService/SignalR/operationResults/read | | > | Microsoft.SignalRService/SignalR/operationStatuses/read | |
+> | Microsoft.SignalRService/SignalR/privateEndpointConnectionProxies/updatePrivateEndpointProperties/action | |
> | Microsoft.SignalRService/SignalR/privateEndpointConnectionProxies/validate/action | Validate Private Endpoint Connection Proxy | > | Microsoft.SignalRService/SignalR/privateEndpointConnectionProxies/write | Write Private Endpoint Connection Proxy | > | Microsoft.SignalRService/SignalR/privateEndpointConnectionProxies/read | Read Private Endpoint Connection Proxy |
Azure service: [Azure SignalR Service](../azure-signalr/index.yml)
> | Microsoft.SignalRService/WebPubSub/hubs/delete | Delete hub settings | > | Microsoft.SignalRService/WebPubSub/operationResults/read | | > | Microsoft.SignalRService/WebPubSub/operationStatuses/read | |
+> | Microsoft.SignalRService/WebPubSub/privateEndpointConnectionProxies/updatePrivateEndpointProperties/action | |
> | Microsoft.SignalRService/WebPubSub/privateEndpointConnectionProxies/validate/action | Validate Private Endpoint Connection Proxy | > | Microsoft.SignalRService/WebPubSub/privateEndpointConnectionProxies/write | Write Private Endpoint Connection Proxy | > | Microsoft.SignalRService/WebPubSub/privateEndpointConnectionProxies/read | Read Private Endpoint Connection Proxy |
Azure service: [Azure SignalR Service](../azure-signalr/index.yml)
> | Microsoft.SignalRService/SignalR/group/read | Check group existence or user existence in group. | > | Microsoft.SignalRService/SignalR/group/write | Join / Leave group. | > | Microsoft.SignalRService/SignalR/hub/send/action | Broadcast messages to all client connections in hub. |
+> | Microsoft.SignalRService/SignalR/livetrace/read | Read live trace tool results |
+> | Microsoft.SignalRService/SignalR/livetrace/write | Create live trace connections |
> | Microsoft.SignalRService/SignalR/serverConnection/write | Start a server connection. | > | Microsoft.SignalRService/SignalR/user/send/action | Send messages to user, who may consist of multiple client connections. | > | Microsoft.SignalRService/SignalR/user/read | Check user existence. |
Azure service: [Azure SignalR Service](../azure-signalr/index.yml)
> | Microsoft.SignalRService/WebPubSub/group/read | Check group existence or user existence in group. | > | Microsoft.SignalRService/WebPubSub/group/write | Join / Leave group. | > | Microsoft.SignalRService/WebPubSub/hub/send/action | Broadcast messages to all client connections in hub. |
+> | Microsoft.SignalRService/WebPubSub/livetrace/read | Read live trace tool results |
+> | Microsoft.SignalRService/WebPubSub/livetrace/write | Create live trace connections |
> | Microsoft.SignalRService/WebPubSub/user/send/action | Send messages to user, who may consist of multiple client connections. | > | Microsoft.SignalRService/WebPubSub/user/read | Check user existence. |
Azure service: [Container Instances](../container-instances/index.yml)
> | Action | Description | > | | | > | Microsoft.ContainerInstance/register/action | Registers the subscription for the container instance resource provider and enables the creation of container groups. |
+> | Microsoft.ContainerInstance/containerGroupProfiles/read | Get all container group profiles. |
+> | Microsoft.ContainerInstance/containerGroupProfiles/write | Create or update a specific container group profile. |
+> | Microsoft.ContainerInstance/containerGroupProfiles/delete | Delete the specific container group profile. |
> | Microsoft.ContainerInstance/containerGroups/read | Get all container groups. | > | Microsoft.ContainerInstance/containerGroups/write | Create or update a specific container group. | > | Microsoft.ContainerInstance/containerGroups/delete | Delete the specific container group. |
Azure service: [Azure Database Migration Service](../dms/index.yml)
> | Action | Description | > | | | > | Microsoft.DataMigration/register/action | Registers the subscription with the Azure Database Migration Service provider |
+> | Microsoft.DataMigration/databaseMigrations/write | Create or Update Database Migration resource |
+> | Microsoft.DataMigration/databaseMigrations/delete | Delete Database Migration resource |
+> | Microsoft.DataMigration/databaseMigrations/read | Retrieve the Database Migration resource |
+> | Microsoft.DataMigration/databaseMigrations/cancel/action | Stop ongoing migration for the database |
+> | Microsoft.DataMigration/databaseMigrations/cutover/action | Cutover online migration operation for the database |
> | Microsoft.DataMigration/locations/operationResults/read | Get the status of a long-running operation related to a 202 Accepted response | > | Microsoft.DataMigration/locations/operationStatuses/read | Get the status of a long-running operation related to a 202 Accepted response | > | Microsoft.DataMigration/locations/sqlMigrationServiceOperationResults/read | Retrieve Service Operation Results |
Azure service: [Azure Database for MySQL](../mysql/index.yml)
> | Microsoft.DBforMySQL/flexibleServers/restart/action | Restarts a specific server. | > | Microsoft.DBforMySQL/flexibleServers/start/action | Starts a specific server. | > | Microsoft.DBforMySQL/flexibleServers/stop/action | Stops a specific server. |
+> | Microsoft.DBforMySQL/flexibleServers/backups/write | |
> | Microsoft.DBforMySQL/flexibleServers/backups/read | Returns the list of backups for a server or gets the properties for the specified backup. | > | Microsoft.DBforMySQL/flexibleServers/configurations/read | Returns the list of MySQL server configurations or gets the configurations for the specified server. | > | Microsoft.DBforMySQL/flexibleServers/configurations/write | Updates the configuration of a MySQL server. |
Azure service: [Azure Database for MySQL](../mysql/index.yml)
> | Microsoft.DBforMySQL/flexibleServers/firewallRules/write | Creates a firewall rule with the specified parameters or updates an existing rule. | > | Microsoft.DBforMySQL/flexibleServers/firewallRules/read | Returns the list of firewall rules for a server or gets the properties for the specified firewall rule. | > | Microsoft.DBforMySQL/flexibleServers/firewallRules/delete | Deletes an existing firewall rule. |
+> | Microsoft.DBforMySQL/flexibleServers/logFiles/read | Return a list of server log files for a server with file download links |
> | Microsoft.DBforMySQL/flexibleServers/providers/Microsoft.Insights/diagnosticSettings/read | Gets the disagnostic setting for the resource | > | Microsoft.DBforMySQL/flexibleServers/providers/Microsoft.Insights/diagnosticSettings/write | Creates or updates the diagnostic setting for the resource | > | Microsoft.DBforMySQL/flexibleServers/providers/Microsoft.Insights/logDefinitions/read | Gets the available logs for MySQL servers |
Azure service: [Azure SQL Database](/azure/azure-sql/database/index), [Azure SQL
> | | | > | Microsoft.Sql/checkNameAvailability/action | Verify whether given server name is available for provisioning worldwide for a given subscription. | > | Microsoft.Sql/register/action | Registers the subscription for the Microsoft SQL Database resource provider and enables the creation of Microsoft SQL Databases. |
-> | Microsoft.Sql/unregister/action | UnRegisters the subscription for the Microsoft SQL Database resource provider and enables the creation of Microsoft SQL Databases. |
+> | Microsoft.Sql/unregister/action | UnRegisters the subscription for the Microsoft SQL Database resource provider and disables the creation of Microsoft SQL Databases. |
> | Microsoft.Sql/privateEndpointConnectionsApproval/action | Determines if user is allowed to approve a private endpoint connection | > | Microsoft.Sql/instancePools/read | Gets an instance pool | > | Microsoft.Sql/instancePools/write | Creates or updates an instance pool |
Azure service: [Azure SQL Database](/azure/azure-sql/database/index), [Azure SQL
> | Microsoft.Sql/locations/read | Gets the available locations for a given subscription | > | Microsoft.Sql/locations/administratorAzureAsyncOperation/read | Gets the Managed instance azure async administrator operations result. | > | Microsoft.Sql/locations/administratorOperationResults/read | Gets the Managed instance administrator operations result. |
+> | Microsoft.Sql/locations/advancedThreatProtectionAzureAsyncOperation/read | Retrieve results of the server Advanced Threat Protection settings write operation |
+> | Microsoft.Sql/locations/advancedThreatProtectionOperationResults/read | Retrieve results of the server Advanced Threat Protection settings write operation |
> | Microsoft.Sql/locations/auditingSettingsAzureAsyncOperation/read | Retrieve result of the extended server blob auditing policy Set operation | > | Microsoft.Sql/locations/auditingSettingsOperationResults/read | Retrieve result of the server blob auditing policy Set operation | > | Microsoft.Sql/locations/capabilities/read | Gets the capabilities for this subscription in a given location |
Azure service: [Azure SQL Database](/azure/azure-sql/database/index), [Azure SQL
> | Microsoft.Sql/locations/managedDatabaseMoveAzureAsyncOperation/read | Gets Managed Instance database move Azure async operation. | > | Microsoft.Sql/locations/managedDatabaseMoveOperationResults/read | Gets Managed Instance database move operation result. | > | Microsoft.Sql/locations/managedDatabaseRestoreAzureAsyncOperation/completeRestore/action | Completes managed database restore operation |
+> | Microsoft.Sql/locations/managedInstanceAdvancedThreatProtectionAzureAsyncOperation/read | Retrieve results of the managed instance Advanced Threat Protection settings write operation |
+> | Microsoft.Sql/locations/managedInstanceAdvancedThreatProtectionOperationResults/read | Retrieve results of the managed instance Advanced Threat Protection settings write operation |
> | Microsoft.Sql/locations/managedInstanceEncryptionProtectorAzureAsyncOperation/read | Gets in-progress operations on transparent data encryption managed instance encryption protector | > | Microsoft.Sql/locations/managedInstanceEncryptionProtectorOperationResults/read | Gets in-progress operations on transparent data encryption managed instance encryption protector | > | Microsoft.Sql/locations/managedInstanceKeyAzureAsyncOperation/read | Gets in-progress operations on transparent data encryption managed instance keys |
Azure service: [Azure SQL Database](/azure/azure-sql/database/index), [Azure SQL
> | Microsoft.Sql/managedInstances/administrators/read | Gets a list of managed instance administrators. | > | Microsoft.Sql/managedInstances/administrators/write | Creates or updates managed instance administrator with the specified parameters. | > | Microsoft.Sql/managedInstances/administrators/delete | Deletes an existing administrator of managed instance. |
+> | Microsoft.Sql/managedInstances/advancedThreatProtectionSettings/write | Change the managed instance Advanced Threat Protection settings for a given managed instance |
+> | Microsoft.Sql/managedInstances/advancedThreatProtectionSettings/read | Retrieve a list of managed instance Advanced Threat Protection settings configured for a given instance |
> | Microsoft.Sql/managedInstances/azureADOnlyAuthentications/read | Reads a specific managed server Azure Active Directory only authentication object | > | Microsoft.Sql/managedInstances/azureADOnlyAuthentications/write | Adds or updates a specific managed server Azure Active Directory only authentication object | > | Microsoft.Sql/managedInstances/azureADOnlyAuthentications/delete | Deletes a specific managed server Azure Active Directory only authentication object |
Azure service: [Azure SQL Database](/azure/azure-sql/database/index), [Azure SQL
> | Microsoft.Sql/managedInstances/databases/delete | Deletes an existing managed database | > | Microsoft.Sql/managedInstances/databases/write | Creates a new database or updates an existing database. | > | Microsoft.Sql/managedInstances/databases/completeRestore/action | Completes managed database restore operation |
+> | Microsoft.Sql/managedInstances/databases/advancedThreatProtectionSettings/write | Change the database Advanced Threat Protection settings for a given managed database |
+> | Microsoft.Sql/managedInstances/databases/advancedThreatProtectionSettings/read | Retrieve a list of the managed database Advanced Threat Protection settings configured for a given managed database |
> | Microsoft.Sql/managedInstances/databases/backupLongTermRetentionPolicies/write | Updates a long term retention policy for a managed database | > | Microsoft.Sql/managedInstances/databases/backupLongTermRetentionPolicies/read | Gets a long term retention policy for a managed database | > | Microsoft.Sql/managedInstances/databases/backupShortTermRetentionPolicies/read | Gets a short term retention policy for a managed database |
Azure service: [Azure SQL Database](/azure/azure-sql/database/index), [Azure SQL
> | Microsoft.Sql/servers/databases/recommendedSensitivityLabels/read | List the recommended sensitivity labels for a given database | > | Microsoft.Sql/servers/databases/recommendedSensitivityLabels/write | Batch update recommended sensitivity labels | > | Microsoft.Sql/servers/databases/replicationLinks/read | Return the list of replication links or gets the properties for the specified replication links. |
-> | Microsoft.Sql/servers/databases/replicationLinks/delete | Terminate the replication relationship forcefully and with potential data loss |
+> | Microsoft.Sql/servers/databases/replicationLinks/delete | Execute deletion of an existing replication link. |
> | Microsoft.Sql/servers/databases/replicationLinks/failover/action | Execute planned failover of an existing replication link. | > | Microsoft.Sql/servers/databases/replicationLinks/forceFailoverAllowDataLoss/action | Execute forced failover of an existing replication link. | > | Microsoft.Sql/servers/databases/replicationLinks/updateReplicationMode/action | Update replication mode for link to synchronous or asynchronous mode |
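Operation strings like the ones in this table are the values you reference under `Actions` in an Azure custom role. As a minimal, illustrative sketch (the role name, description, and subscription ID below are placeholders, not values taken from this article), a custom role limited to two of the replication-link operations shown above could be created with the Azure CLI like this:

```bash
# Illustrative sketch only: grant read and planned-failover rights on
# replication links. Role name and subscription ID are placeholders.
az role definition create --role-definition '{
  "Name": "Replication Link Operator (example)",
  "IsCustom": true,
  "Description": "Read replication links and execute planned failover.",
  "Actions": [
    "Microsoft.Sql/servers/databases/replicationLinks/read",
    "Microsoft.Sql/servers/databases/replicationLinks/failover/action"
  ],
  "NotActions": [],
  "AssignableScopes": [ "/subscriptions/<subscription-id>" ]
}'
```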
Azure service: [Event Hubs](../event-hubs/index.yml)
> | Microsoft.EventHub/namespaces/networkrulesets/read | Gets NetworkRuleSet Resource | > | Microsoft.EventHub/namespaces/networkrulesets/write | Create VNET Rule Resource | > | Microsoft.EventHub/namespaces/networkrulesets/delete | Delete VNET Rule Resource |
+> | Microsoft.EventHub/namespaces/networkSecurityPerimeterAssociationProxies/write | Create NetworkSecurityPerimeterAssociationProxies |
+> | Microsoft.EventHub/namespaces/networkSecurityPerimeterAssociationProxies/read | Get NetworkSecurityPerimeterAssociationProxies |
+> | Microsoft.EventHub/namespaces/networkSecurityPerimeterAssociationProxies/delete | Delete NetworkSecurityPerimeterAssociationProxies |
+> | Microsoft.EventHub/namespaces/networkSecurityPerimeterAssociationProxies/reconcile/action | Reconcile NetworkSecurityPerimeterAssociationProxies |
+> | Microsoft.EventHub/namespaces/networkSecurityPerimeterConfigurations/read | Get Network Security Perimeter Configurations |
+> | Microsoft.EventHub/namespaces/networkSecurityPerimeterConfigurations/reconcile/action | Reconcile Network Security Perimeter Configurations |
> | Microsoft.EventHub/namespaces/operationresults/read | Get the status of Namespace operation | > | Microsoft.EventHub/namespaces/privateEndpointConnectionProxies/validate/action | Validate Private Endpoint Connection Proxy | > | Microsoft.EventHub/namespaces/privateEndpointConnectionProxies/read | Get Private Endpoint Connection Proxy |
Azure service: [Azure Data Explorer](/azure/data-explorer/)
> | Microsoft.Kusto/Clusters/DataConnections/read | Reads a cluster's data connections resource. | > | Microsoft.Kusto/Clusters/DataConnections/write | Writes a cluster's data connections resource. | > | Microsoft.Kusto/Clusters/DataConnections/delete | Deletes a cluster's data connections resource. |
-> | Microsoft.Kusto/Clusters/ManagedPrivateEndpoints/read | Reads an attached database configuration resource. |
-> | Microsoft.Kusto/Clusters/ManagedPrivateEndpoints/write | |
-> | Microsoft.Kusto/Clusters/ManagedPrivateEndpoints/delete | |
+> | Microsoft.Kusto/Clusters/ManagedPrivateEndpoints/read | Reads a managed private endpoint |
+> | Microsoft.Kusto/Clusters/ManagedPrivateEndpoints/write | Writes a managed private endpoint |
+> | Microsoft.Kusto/Clusters/ManagedPrivateEndpoints/delete | Deletes a managed private endpoint |
> | Microsoft.Kusto/Clusters/OutboundNetworkDependenciesEndpoints/read | Reads outbound network dependencies endpoints for a resource | > | Microsoft.Kusto/Clusters/PrincipalAssignments/read | Reads a Cluster principal assignments resource. | > | Microsoft.Kusto/Clusters/PrincipalAssignments/write | Writes a Cluster principal assignments resource. |
Azure service: [Cognitive Services](../cognitive-services/index.yml)
> | Microsoft.CognitiveServices/accounts/Language/analyze-conversation/jobscancel/action | Cancel a long-running analysis job on conversation. | > | Microsoft.CognitiveServices/accounts/Language/analyze-conversation/jobs/action | Submit a long conversation for analysis. Specify one or more unique tasks to be executed as a long-running operation. | > | Microsoft.CognitiveServices/accounts/Language/analyze-conversation/jobs/read | Get the status of an analysis job. A job may consist of one or more tasks. Once all tasks have succeeded, the job will transition to the succeeded state and results will be available for each task. |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-conversations/jobscancel/action | Cancel a long-running analysis job on conversation. |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-conversations/jobs/action | Submit a long conversation for analysis. Specify one or more unique tasks to be executed as a long-running operation. |
> | Microsoft.CognitiveServices/accounts/Language/analyze-conversations/internal/projects/export/jobs/result/read | Get export job result details. | > | Microsoft.CognitiveServices/accounts/Language/analyze-conversations/internal/projects/models/read | Get a trained model info. Get trained models info.* |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-conversations/jobs/read | Get the status of an analysis job. A job may consist of one or more tasks. Once all tasks have succeeded, the job will transition to the succeeded state and results will be available for each task. |
> | Microsoft.CognitiveServices/accounts/Language/analyze-conversations/projects/write | Creates a new or update a project. | > | Microsoft.CognitiveServices/accounts/Language/analyze-conversations/projects/delete | Deletes a project. | > | Microsoft.CognitiveServices/accounts/Language/analyze-conversations/projects/read | Gets a project info. Returns the list of projects.* |
Azure service: [Machine Learning](../machine-learning/index.yml)
> | Microsoft.MachineLearningServices/locations/deleteVirtualNetworkOrSubnets/action | Deleted the references to virtual networks/subnets associated with Machine Learning Service Workspaces. | > | Microsoft.MachineLearningServices/locations/updateQuotas/action | Update quota for each VM family at a subscription or a workspace level. | > | Microsoft.MachineLearningServices/locations/computeoperationsstatus/read | Gets the status of a particular compute operation |
+> | Microsoft.MachineLearningServices/locations/mfeOperationResults/read | Gets the result of a particular MFE operation |
+> | Microsoft.MachineLearningServices/locations/mfeOperationsStatus/read | Gets the status of a particular MFE operation |
> | Microsoft.MachineLearningServices/locations/quotas/read | Gets the currently assigned Workspace Quotas based on VMFamily. | > | Microsoft.MachineLearningServices/locations/usages/read | Usage report for aml compute resources in a subscription | > | Microsoft.MachineLearningServices/locations/vmsizes/read | Get supported vm sizes |
Azure service: [Machine Learning](../machine-learning/index.yml)
> | Microsoft.MachineLearningServices/workspaces/connections/write | Creates or updates a Machine Learning Services connection(s) | > | Microsoft.MachineLearningServices/workspaces/connections/delete | Deletes the Machine Learning Services connection(s) | > | Microsoft.MachineLearningServices/workspaces/data/read | Reads Data container in Machine Learning Services Workspace(s) |
+> | Microsoft.MachineLearningServices/workspaces/data/write | Writes Data container in Machine Learning Services Workspace(s) |
> | Microsoft.MachineLearningServices/workspaces/data/delete | Deletes Data container in Machine Learning Services Workspace(s) | > | Microsoft.MachineLearningServices/workspaces/data/versions/read | Reads Data Versions in Machine Learning Services Workspace(s) | > | Microsoft.MachineLearningServices/workspaces/data/versions/write | Create or Update Data Versions in Machine Learning Services Workspace(s) |
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | Microsoft.Insights/Components/WorkItemConfigs/Delete | Deleting an Application Insights ALM integration configuration | > | Microsoft.Insights/Components/WorkItemConfigs/Read | Reading an Application Insights ALM integration configuration | > | Microsoft.Insights/Components/WorkItemConfigs/Write | Writing an Application Insights ALM integration configuration |
+> | Microsoft.Insights/CreateNotifications/Write | Send test notifications to the provided receiver list |
> | Microsoft.Insights/DataCollectionEndpoints/Read | Read a data collection endpoint | > | Microsoft.Insights/DataCollectionEndpoints/Write | Create or update a data collection endpoint | > | Microsoft.Insights/DataCollectionEndpoints/Delete | Delete a data collection endpoint |
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | Microsoft.Insights/MyWorkbooks/Read | Read a private Workbook | > | Microsoft.Insights/MyWorkbooks/Write | Create or update a private workbook | > | Microsoft.Insights/MyWorkbooks/Delete | Delete a private workbook |
+> | Microsoft.Insights/NotificationStatus/Read | Get the test notification status/detail |
> | Microsoft.Insights/Operations/Read | Read operations | > | Microsoft.Insights/PrivateLinkScopeOperationStatuses/Read | Read a private link scoped operation status | > | Microsoft.Insights/PrivateLinkScopes/Read | Read a private link scope |
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | Microsoft.OperationalInsights/workspaces/query/AADDomainServicesAccountLogon/read | Read data from the AADDomainServicesAccountLogon table | > | Microsoft.OperationalInsights/workspaces/query/AADDomainServicesAccountManagement/read | Read data from the AADDomainServicesAccountManagement table | > | Microsoft.OperationalInsights/workspaces/query/AADDomainServicesDirectoryServiceAccess/read | Read data from the AADDomainServicesDirectoryServiceAccess table |
+> | Microsoft.OperationalInsights/workspaces/query/AADDomainServicesLogonLogoff/read | Read data from the AADDomainServicesLogonLogoff table |
> | Microsoft.OperationalInsights/workspaces/query/AADDomainServicesPolicyChange/read | Read data from the AADDomainServicesPolicyChange table | > | Microsoft.OperationalInsights/workspaces/query/AADDomainServicesPrivilegeUse/read | Read data from the AADDomainServicesPrivilegeUse table | > | Microsoft.OperationalInsights/workspaces/query/AADManagedIdentitySignInLogs/read | Read data from the AADManagedIdentitySignInLogs table |
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | Microsoft.OperationalInsights/workspaces/query/AzureDiagnostics/read | Read data from the AzureDiagnostics table | > | Microsoft.OperationalInsights/workspaces/query/AzureLoadTestingOperation/read | Read data from the AzureLoadTestingOperation table | > | Microsoft.OperationalInsights/workspaces/query/AzureMetrics/read | Read data from the AzureMetrics table |
+> | Microsoft.OperationalInsights/workspaces/query/BaiClusterEvent/read | Read data from the BaiClusterEvent table |
> | Microsoft.OperationalInsights/workspaces/query/BaiClusterNodeEvent/read | Read data from the BaiClusterNodeEvent table | > | Microsoft.OperationalInsights/workspaces/query/BaiJobEvent/read | Read data from the BaiJobEvent table | > | Microsoft.OperationalInsights/workspaces/query/BehaviorAnalytics/read | Read data from the BehaviorAnalytics table |
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | Microsoft.OperationalInsights/workspaces/query/MicrosoftDynamicsTelemetryPerformanceLogs/read | Read data from the MicrosoftDynamicsTelemetryPerformanceLogs table | > | Microsoft.OperationalInsights/workspaces/query/MicrosoftDynamicsTelemetrySystemMetricsLogs/read | Read data from the MicrosoftDynamicsTelemetrySystemMetricsLogs table | > | Microsoft.OperationalInsights/workspaces/query/MicrosoftHealthcareApisAuditLogs/read | Read data from the MicrosoftHealthcareApisAuditLogs table |
+> | Microsoft.OperationalInsights/workspaces/query/NetworkAccessTraffic/read | Read data from the NetworkAccessTraffic table |
> | Microsoft.OperationalInsights/workspaces/query/NetworkMonitoring/read | Read data from the NetworkMonitoring table | > | Microsoft.OperationalInsights/workspaces/query/NetworkSessions/read | Read data from the NetworkSessions table | > | Microsoft.OperationalInsights/workspaces/query/NWConnectionMonitorDestinationListenerResult/read | Read data from the NWConnectionMonitorDestinationListenerResult table |
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | Microsoft.OperationalInsights/workspaces/query/NWConnectionMonitorPathResult/read | Read data from the NWConnectionMonitorPathResult table | > | Microsoft.OperationalInsights/workspaces/query/NWConnectionMonitorTestResult/read | Read data from the NWConnectionMonitorTestResult table | > | Microsoft.OperationalInsights/workspaces/query/OEPAirFlowTask/read | Read data from the OEPAirFlowTask table |
+> | Microsoft.OperationalInsights/workspaces/query/OEPElasticOperator/read | Read data from the OEPElasticOperator table |
+> | Microsoft.OperationalInsights/workspaces/query/OEPElasticsearch/read | Read data from the OEPElasticsearch table |
> | Microsoft.OperationalInsights/workspaces/query/OfficeActivity/read | Read data from the OfficeActivity table | > | Microsoft.OperationalInsights/workspaces/query/OLPSupplyChainEntityOperations/read | Read data from the OLPSupplyChainEntityOperations table | > | Microsoft.OperationalInsights/workspaces/query/OLPSupplyChainEvents/read | Read data from the OLPSupplyChainEvents table |
Azure service: [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.m
> | Microsoft.Kubernetes/connectedClusters/pods/read | Reads pods | > | Microsoft.Kubernetes/connectedClusters/pods/write | Writes pods | > | Microsoft.Kubernetes/connectedClusters/pods/delete | Deletes pods |
+> | Microsoft.Kubernetes/connectedClusters/pods/exec/action | Exec into a pod |
> | Microsoft.Kubernetes/connectedClusters/podtemplates/read | Reads podtemplates | > | Microsoft.Kubernetes/connectedClusters/podtemplates/write | Writes podtemplates | > | Microsoft.Kubernetes/connectedClusters/podtemplates/delete | Deletes podtemplates |
Azure service: [Site Recovery](../site-recovery/index.yml)
> | Action | Description | > | | | > | Microsoft.RecoveryServices/register/action | Registers subscription for given Resource Provider |
-> | MICROSOFT.RECOVERYSERVICES/Locations/backupCrossRegionRestore/action | Trigger Cross region restore. |
-> | MICROSOFT.RECOVERYSERVICES/Locations/backupCrrJob/action | Get Cross Region Restore Job Details in the secondary region for Recovery Services Vault. |
-> | MICROSOFT.RECOVERYSERVICES/Locations/backupCrrJobs/action | List Cross Region Restore Jobs in the secondary region for Recovery Services Vault. |
-> | MICROSOFT.RECOVERYSERVICES/Locations/backupPreValidateProtection/action | |
-> | MICROSOFT.RECOVERYSERVICES/Locations/backupStatus/action | Check Backup Status for Recovery Services Vaults |
-> | MICROSOFT.RECOVERYSERVICES/Locations/backupValidateFeatures/action | Validate Features |
+> | Microsoft.RecoveryServices/Locations/backupCrossRegionRestore/action | Trigger Cross region restore. |
+> | Microsoft.RecoveryServices/Locations/backupCrrJob/action | Get Cross Region Restore Job Details in the secondary region for Recovery Services Vault. |
+> | Microsoft.RecoveryServices/Locations/backupCrrJobs/action | List Cross Region Restore Jobs in the secondary region for Recovery Services Vault. |
+> | Microsoft.RecoveryServices/Locations/backupPreValidateProtection/action | |
+> | Microsoft.RecoveryServices/Locations/backupStatus/action | Check Backup Status for Recovery Services Vaults |
+> | Microsoft.RecoveryServices/Locations/backupValidateFeatures/action | Validate Features |
> | Microsoft.RecoveryServices/locations/allocateStamp/action | AllocateStamp is internal operation used by service | > | Microsoft.RecoveryServices/locations/checkNameAvailability/action | Check Resource Name Availability is an API to check if resource name is available | > | Microsoft.RecoveryServices/locations/allocatedStamp/read | GetAllocatedStamp is internal operation used by service |
-> | MICROSOFT.RECOVERYSERVICES/Locations/backupAadProperties/read | Get AAD Properties for authentication in the third region for Cross Region Restore. |
-> | MICROSOFT.RECOVERYSERVICES/Locations/backupCrrOperationResults/read | Returns CRR Operation Result for Recovery Services Vault. |
-> | MICROSOFT.RECOVERYSERVICES/Locations/backupCrrOperationsStatus/read | Returns CRR Operation Status for Recovery Services Vault. |
-> | MICROSOFT.RECOVERYSERVICES/Locations/backupProtectedItem/write | Create a backup Protected Item |
-> | MICROSOFT.RECOVERYSERVICES/Locations/backupProtectedItems/read | Returns the list of all Protected Items. |
+> | Microsoft.RecoveryServices/Locations/backupAadProperties/read | Get AAD Properties for authentication in the third region for Cross Region Restore. |
+> | Microsoft.RecoveryServices/Locations/backupCrrOperationResults/read | Returns CRR Operation Result for Recovery Services Vault. |
+> | Microsoft.RecoveryServices/Locations/backupCrrOperationsStatus/read | Returns CRR Operation Status for Recovery Services Vault. |
+> | Microsoft.RecoveryServices/Locations/backupProtectedItem/write | Create a backup Protected Item |
+> | Microsoft.RecoveryServices/Locations/backupProtectedItems/read | Returns the list of all Protected Items. |
> | Microsoft.RecoveryServices/locations/operationStatus/read | Gets Operation Status for a given Operation | > | Microsoft.RecoveryServices/operations/read | Operation returns the list of Operations for a Resource Provider |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupJobsExport/action | Export Jobs |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupSecurityPIN/action | Returns Security PIN Information for Recovery Services Vault. |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupTriggerValidateOperation/action | Validate Operation on Protected Item |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupValidateOperation/action | Validate Operation on Protected Item |
+> | Microsoft.RecoveryServices/Vaults/backupJobsExport/action | Export Jobs |
+> | Microsoft.RecoveryServices/Vaults/backupSecurityPIN/action | Returns Security PIN Information for Recovery Services Vault. |
+> | Microsoft.RecoveryServices/Vaults/backupTriggerValidateOperation/action | Validate Operation on Protected Item |
+> | Microsoft.RecoveryServices/Vaults/backupValidateOperation/action | Validate Operation on Protected Item |
> | Microsoft.RecoveryServices/Vaults/write | Create Vault operation creates an Azure resource of type 'vault' | > | Microsoft.RecoveryServices/Vaults/read | The Get Vault operation gets an object representing the Azure resource of type 'vault' | > | Microsoft.RecoveryServices/Vaults/delete | The Delete Vault operation deletes the specified Azure resource of type 'vault' |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupconfig/read | Returns Configuration for Recovery Services Vault. |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupconfig/write | Updates Configuration for Recovery Services Vault. |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupEncryptionConfigs/read | Gets Backup Resource Encryption Configuration. |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupEncryptionConfigs/write | Updates Backup Resource Encryption Configuration |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupEngines/read | Returns all the backup management servers registered with vault. |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/refreshContainers/action | Refreshes the container list |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/backupProtectionIntent/delete | Delete a backup Protection Intent |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/backupProtectionIntent/read | Get a backup Protection Intent |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/backupProtectionIntent/write | Create a backup Protection Intent |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/operationResults/read | Returns status of the operation |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/operationsStatus/read | Returns status of the operation |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectableContainers/read | Get all protectable containers |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/delete | Deletes the registered Container |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/inquire/action | Do inquiry for workloads within a container |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/read | Returns all registered containers |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/write | Creates a registered container |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/items/read | Get all items in a container |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/operationResults/read | Gets result of Operation performed on Protection Container. |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/operationsStatus/read | Gets status of Operation performed on Protection Container. |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/protectedItems/backup/action | Performs Backup for Protected Item. |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/protectedItems/delete | Deletes Protected Item |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/protectedItems/read | Returns object details of the Protected Item |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPointsRecommendedForMove/action | Get Recovery points recommended for move to another tier |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/protectedItems/write | Create a backup Protected Item |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/protectedItems/operationResults/read | Gets Result of Operation Performed on Protected Items. |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/protectedItems/operationsStatus/read | Returns the status of Operation performed on Protected Items. |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/accessToken/action | Get AccessToken for Cross Region Restore. |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/move/action | Move Recovery point to another tier |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/provisionInstantItemRecovery/action | Provision Instant Item Recovery for Protected Item |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/read | Get Recovery Points for Protected Items. |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/restore/action | Restore Recovery Points for Protected Items. |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/revokeInstantItemRecovery/action | Revoke Instant Item Recovery for Protected Item |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupJobs/cancel/action | Cancel the Job |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupJobs/read | Returns all Job Objects |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupJobs/operationResults/read | Returns the Result of Job Operation. |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupJobs/operationsStatus/read | Returns the status of Job Operation. |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupOperationResults/read | Returns Backup Operation Result for Recovery Services Vault. |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupOperations/read | Returns Backup Operation Status for Recovery Services Vault. |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupPolicies/delete | Delete a Protection Policy |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupPolicies/read | Returns all Protection Policies |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupPolicies/write | Creates Protection Policy |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupPolicies/operationResults/read | Get Results of Policy Operation. |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupPolicies/operations/read | Get Status of Policy Operation. |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupProtectableItems/read | Returns list of all Protectable Items. |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupProtectedItems/read | Returns the list of all Protected Items. |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupProtectionContainers/read | Returns all containers belonging to the subscription |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupProtectionIntents/read | List all backup Protection Intents |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupResourceGuardProxies/delete | The Delete ResourceGuard proxy operation deletes the specified Azure resource of type 'ResourceGuard proxy' |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupResourceGuardProxies/read | Get the list of ResourceGuard proxies for a resource |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupResourceGuardProxies/read | Get ResourceGuard proxy operation gets an object representing the Azure resource of type 'ResourceGuard proxy' |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupResourceGuardProxies/unlockDelete/action | Unlock delete ResourceGuard proxy operation unlocks the next delete critical operation |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupResourceGuardProxies/write | Create ResourceGuard proxy operation creates an Azure resource of type 'ResourceGuard Proxy' |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupstorageconfig/read | Returns Storage Configuration for Recovery Services Vault. |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupstorageconfig/write | Updates Storage Configuration for Recovery Services Vault. |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupUsageSummaries/read | Returns summaries for Protected Items and Protected Servers for a Recovery Services . |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupValidateOperationResults/read | Validate Operation on Protected Item |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/backupValidateOperationsStatuses/read | Validate Operation on Protected Item |
+> | Microsoft.RecoveryServices/Vaults/backupconfig/read | Returns Configuration for Recovery Services Vault. |
+> | Microsoft.RecoveryServices/Vaults/backupconfig/write | Updates Configuration for Recovery Services Vault. |
+> | Microsoft.RecoveryServices/Vaults/backupEncryptionConfigs/read | Gets Backup Resource Encryption Configuration. |
+> | Microsoft.RecoveryServices/Vaults/backupEncryptionConfigs/write | Updates Backup Resource Encryption Configuration |
+> | Microsoft.RecoveryServices/Vaults/backupEngines/read | Returns all the backup management servers registered with vault. |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/refreshContainers/action | Refreshes the container list |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/backupProtectionIntent/delete | Delete a backup Protection Intent |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/backupProtectionIntent/read | Get a backup Protection Intent |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/backupProtectionIntent/write | Create a backup Protection Intent |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/operationResults/read | Returns status of the operation |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/operationsStatus/read | Returns status of the operation |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectableContainers/read | Get all protectable containers |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/delete | Deletes the registered Container |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/inquire/action | Do inquiry for workloads within a container |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/read | Returns all registered containers |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/write | Creates a registered container |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/items/read | Get all items in a container |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/operationResults/read | Gets result of Operation performed on Protection Container. |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/operationsStatus/read | Gets status of Operation performed on Protection Container. |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/backup/action | Performs Backup for Protected Item. |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/delete | Deletes Protected Item |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/read | Returns object details of the Protected Item |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPointsRecommendedForMove/action | Get Recovery points recommended for move to another tier |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/write | Create a backup Protected Item |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/operationResults/read | Gets Result of Operation Performed on Protected Items. |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/operationsStatus/read | Returns the status of Operation performed on Protected Items. |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/accessToken/action | Get AccessToken for Cross Region Restore. |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/move/action | Move Recovery point to another tier |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/provisionInstantItemRecovery/action | Provision Instant Item Recovery for Protected Item |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/read | Get Recovery Points for Protected Items. |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/restore/action | Restore Recovery Points for Protected Items. |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/revokeInstantItemRecovery/action | Revoke Instant Item Recovery for Protected Item |
+> | Microsoft.RecoveryServices/Vaults/backupJobs/cancel/action | Cancel the Job |
+> | Microsoft.RecoveryServices/Vaults/backupJobs/read | Returns all Job Objects |
+> | Microsoft.RecoveryServices/Vaults/backupJobs/operationResults/read | Returns the Result of Job Operation. |
+> | Microsoft.RecoveryServices/Vaults/backupJobs/operationsStatus/read | Returns the status of Job Operation. |
+> | Microsoft.RecoveryServices/Vaults/backupOperationResults/read | Returns Backup Operation Result for Recovery Services Vault. |
+> | Microsoft.RecoveryServices/Vaults/backupOperations/read | Returns Backup Operation Status for Recovery Services Vault. |
+> | Microsoft.RecoveryServices/Vaults/backupPolicies/delete | Delete a Protection Policy |
+> | Microsoft.RecoveryServices/Vaults/backupPolicies/read | Returns all Protection Policies |
+> | Microsoft.RecoveryServices/Vaults/backupPolicies/write | Creates Protection Policy |
+> | Microsoft.RecoveryServices/Vaults/backupPolicies/operationResults/read | Get Results of Policy Operation. |
+> | Microsoft.RecoveryServices/Vaults/backupPolicies/operations/read | Get Status of Policy Operation. |
+> | Microsoft.RecoveryServices/Vaults/backupProtectableItems/read | Returns list of all Protectable Items. |
+> | Microsoft.RecoveryServices/Vaults/backupProtectedItems/read | Returns the list of all Protected Items. |
+> | Microsoft.RecoveryServices/Vaults/backupProtectionContainers/read | Returns all containers belonging to the subscription |
+> | Microsoft.RecoveryServices/Vaults/backupProtectionIntents/read | List all backup Protection Intents |
+> | Microsoft.RecoveryServices/Vaults/backupResourceGuardProxies/delete | The Delete ResourceGuard proxy operation deletes the specified Azure resource of type 'ResourceGuard proxy' |
+> | Microsoft.RecoveryServices/Vaults/backupResourceGuardProxies/read | Get the list of ResourceGuard proxies for a resource |
+> | Microsoft.RecoveryServices/Vaults/backupResourceGuardProxies/read | Get ResourceGuard proxy operation gets an object representing the Azure resource of type 'ResourceGuard proxy' |
+> | Microsoft.RecoveryServices/Vaults/backupResourceGuardProxies/unlockDelete/action | Unlock delete ResourceGuard proxy operation unlocks the next delete critical operation |
+> | Microsoft.RecoveryServices/Vaults/backupResourceGuardProxies/write | Create ResourceGuard proxy operation creates an Azure resource of type 'ResourceGuard Proxy' |
+> | Microsoft.RecoveryServices/Vaults/backupstorageconfig/read | Returns Storage Configuration for Recovery Services Vault. |
+> | Microsoft.RecoveryServices/Vaults/backupstorageconfig/write | Updates Storage Configuration for Recovery Services Vault. |
+> | Microsoft.RecoveryServices/Vaults/backupUsageSummaries/read | Returns summaries for Protected Items and Protected Servers for a Recovery Services Vault. |
+> | Microsoft.RecoveryServices/Vaults/backupValidateOperationResults/read | Validate Operation on Protected Item |
+> | Microsoft.RecoveryServices/Vaults/backupValidateOperationsStatuses/read | Validate Operation on Protected Item |
> | Microsoft.RecoveryServices/Vaults/certificates/write | The Update Resource Certificate operation updates the resource/vault credential certificate. | > | Microsoft.RecoveryServices/Vaults/extendedInformation/read | The Get Extended Info operation gets an object's Extended Info representing the Azure resource of type 'vault' | > | Microsoft.RecoveryServices/Vaults/extendedInformation/write | The Get Extended Info operation gets an object's Extended Info representing the Azure resource of type 'vault' |
Azure service: [Site Recovery](../site-recovery/index.yml)
> | Microsoft.RecoveryServices/Vaults/monitoringAlerts/write | Resolves the alert. | > | Microsoft.RecoveryServices/Vaults/monitoringConfigurations/read | Gets the Recovery services vault notification configuration. | > | Microsoft.RecoveryServices/Vaults/monitoringConfigurations/write | Configures e-mail notifications to Recovery services vault. |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/privateEndpointConnectionProxies/delete | Wait for a few minutes and then try the operation again. If the issue persists, please contact Microsoft support. |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/privateEndpointConnectionProxies/read | Get all protectable containers |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/privateEndpointConnectionProxies/validate/action | Get all protectable containers |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/privateEndpointConnectionProxies/write | Get all protectable containers |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/privateEndpointConnectionProxies/operationsStatus/read | Get all protectable containers |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/privateEndpointConnections/delete | Delete Private Endpoint requests. This call is made by Backup Admin. |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/privateEndpointConnections/write | Approve or Reject Private Endpoint requests. This call is made by Backup Admin. |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/privateEndpointConnections/operationsStatus/read | Returns the operation status for a private endpoint connection. |
+> | Microsoft.RecoveryServices/Vaults/privateEndpointConnectionProxies/delete | Wait for a few minutes and then try the operation again. If the issue persists, please contact Microsoft support. |
+> | Microsoft.RecoveryServices/Vaults/privateEndpointConnectionProxies/read | Get all protectable containers |
+> | Microsoft.RecoveryServices/Vaults/privateEndpointConnectionProxies/validate/action | Get all protectable containers |
+> | Microsoft.RecoveryServices/Vaults/privateEndpointConnectionProxies/write | Get all protectable containers |
+> | Microsoft.RecoveryServices/Vaults/privateEndpointConnectionProxies/operationsStatus/read | Get all protectable containers |
+> | Microsoft.RecoveryServices/Vaults/privateEndpointConnections/delete | Delete Private Endpoint requests. This call is made by Backup Admin. |
+> | Microsoft.RecoveryServices/Vaults/privateEndpointConnections/write | Approve or Reject Private Endpoint requests. This call is made by Backup Admin. |
+> | Microsoft.RecoveryServices/Vaults/privateEndpointConnections/operationsStatus/read | Returns the operation status for a private endpoint connection. |
> | Microsoft.RecoveryServices/Vaults/providers/Microsoft.Insights/diagnosticSettings/read | Azure Backup Diagnostics | > | Microsoft.RecoveryServices/Vaults/providers/Microsoft.Insights/diagnosticSettings/write | Azure Backup Diagnostics | > | Microsoft.RecoveryServices/Vaults/providers/Microsoft.Insights/logDefinitions/read | Azure Backup Logs |
Azure service: [Site Recovery](../site-recovery/index.yml)
> | Microsoft.RecoveryServices/vaults/replicationVaultSettings/read | Read any | > | Microsoft.RecoveryServices/vaults/replicationVaultSettings/write | Create or Update any | > | Microsoft.RecoveryServices/vaults/replicationvCenters/read | Read any vCenters |
-> | MICROSOFT.RECOVERYSERVICES/Vaults/usages/read | Returns usage details for a Recovery Services Vault. |
+> | Microsoft.RecoveryServices/Vaults/usages/read | Returns usage details for a Recovery Services Vault. |
> | Microsoft.RecoveryServices/vaults/usages/read | Read any Vault Usages | > | Microsoft.RecoveryServices/Vaults/vaultTokens/read | The Vault Token operation can be used to get Vault Token for vault level backend operations. |
route-server Vmware Solution Default Route https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/vmware-solution-default-route.md
[Azure VMware Solution](../azure-vmware/introduction.md) is an Azure service where native VMware vSphere workloads run and communicate with other Azure services. This communication happens over ExpressRoute, and Azure Route Server can be used to modify the default behavior of Azure VMware Solution networking. For example, a default route can be injected from a Network Virtual Appliance (NVA) in Azure to attract traffic from AVS and inspect it before sending it out to the public Internet, or to analyze traffic between AVS and the on-premises network.
-Additionally, similar designs can be used to interconnect AVS and on-premises networks sending traffic through an NVA, either because traffic inspection is not required or because ExpressRoute Global Reach is not available in the relevant regions.
+Additionally, similar designs can be used to interconnect AVS and on-premises networks sending traffic through an NVA, either because traffic inspection isn't required or because ExpressRoute Global Reach isn't available in the relevant regions.
## Topology
There are two main scenarios for this pattern:
- ExpressRoute Global Reach might not be available on a particular region to interconnect the ExpressRoute circuits of AVS and the on-premises network. - Some organizations might have the requirement to send traffic between AVS and the on-premises network through an NVA (typically a firewall).
-If both ExpressRoute circuits (to AVS and to on-premises) are terminated in the same ExpressRoute gateway, you could think that the gateway is going to route packets across them. However, an ExpressRoute gateway is not designed to do that. Instead, you need to hairpin the traffic to a Network Virtual Appliance that is able to route the traffic. To that purpose, the NVA should advertise a superset of the AVS and on-premises prefixes, as the following diagram shows:
+If both ExpressRoute circuits (to AVS and to on-premises) are terminated in the same ExpressRoute gateway, you might expect the gateway to route packets across them. However, an ExpressRoute gateway isn't designed to do that. Instead, you need to hairpin the traffic to a Network Virtual Appliance that can route it. For that purpose, two actions are required:
+
+- The NVA should advertise a supernet for the AVS and on-premises prefixes, as the diagram below shows. You could use a single supernet that includes both AVS and on-premises prefixes, or individual prefixes for AVS and on-premises (always less specific than the actual prefixes advertised over ExpressRoute).
+- UDRs in the GatewaySubnet that exactly match the prefixes advertised from AVS and on-premises will hairpin traffic from the GatewaySubnet to the Network Virtual Appliance, as the Azure CLI sketch after this list shows.
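A minimal sketch of that second action with the Azure CLI follows; the resource group, virtual network, address prefix, and NVA IP address are placeholders for illustration only:

```bash
# Illustrative sketch: hairpin GatewaySubnet traffic destined to the AVS prefix
# through the NVA. All names, prefixes, and IP addresses are placeholders.
az network route-table create --resource-group rg-hub --name rt-gatewaysubnet

az network route-table route create --resource-group rg-hub \
  --route-table-name rt-gatewaysubnet --name to-avs \
  --address-prefix 10.100.0.0/16 \
  --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.1.4

az network vnet subnet update --resource-group rg-hub \
  --vnet-name vnet-hub --name GatewaySubnet \
  --route-table rt-gatewaysubnet
```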
:::image type="content" source="./media/scenarios/vmware-solution-to-on-premises-hairpin.png" alt-text="Diagram of AVS to on-premises communication with Route Server in a single region.":::
-As the diagram shows, the NVA needs to advertise a more generic (less specific) prefix that both the on-premises network and AVS. You need to be careful with this approach, since the NVA might be potentially attracting traffic that it should not (since it is advertising wider ranges, in the example above the whole `10.0.0.0/8` network).
+As the diagram above shows, the NVA needs to advertise more generic (less specific) prefixes that include the networks from on-premises and AVS. Be careful with this approach, since the NVA might attract traffic that it shouldn't, because it advertises wider ranges (in the example above, the whole `10.0.0.0/8` network).
-If two regions are involved, you would need an NVA in each region, and both NVAs would exchange the routes they learn from their respective Azure Route Servers via BGP and some sort of encapsulation protocol such as VXLAN or IPsec, as the following diagram shows.
+If advertising less specific prefixes isn't possible, you could instead implement an alternative design using two separate VNets. In this design, instead of propagating less specific routes to attract traffic to the ExpressRoute gateway, two different NVAs in separate VNets exchange routes with each other and propagate them to their respective ExpressRoute circuits via BGP and Azure Route Server, as the following diagram shows:
:::image type="content" source="./media/scenarios/vmware-solution-to-on-premises.png" alt-text="Diagram of AVS to on-premises communication with Route Server in two regions.":::
-The reason why encapsulation is needed is because the NVA NICs would learn the routes from ExpressRoute or from the Route Server, so they would send packets that need to be routed to the other NVA in the wrong direction (potentially creating a routing loop returning the packets to the local NVA).
+Note that an encapsulation protocol such as VXLAN or IPsec is required between the NVAs. Encapsulation is needed because the NVA NICs would learn the routes from ExpressRoute and from the Route Server, so they would send packets that need to be routed to the other NVA in the wrong direction, creating a routing loop that returns the packets to the local NVA.
+
+The main difference between this dual-VNet design and the single-VNet design described earlier in this document is that with two VNets you actually interconnect both ExpressRoute circuits (AVS and on-premises) from a routing perspective, meaning that whatever is learned from one circuit is advertised to the other. In the single-VNet design, a common set of supernets or less specific prefixes is instead sent down both circuits to attract traffic to the VNet.
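As a minimal sketch of the BGP adjacency on the Azure side (the resource group, Route Server name, peer name, peer IP, and ASN are placeholders), each NVA would be registered as a peer of its local Azure Route Server roughly like this:

```bash
# Illustrative sketch: register the local NVA as a BGP peer of the regional
# Azure Route Server. Names, peer IP, and ASN are placeholders.
az network routeserver peering create --resource-group rg-region1 \
  --routeserver rs-region1 --name nva-region1 \
  --peer-ip 10.0.1.4 --peer-asn 65001
```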
## Next steps
search Cognitive Search Aml Skill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-aml-skill.md
+ Last updated 06/12/2020
Like built-in skills, an **AML** skill has inputs and outputs. The inputs are se
## Prerequisites * An [AML workspace](../machine-learning/concept-workspace.md)
-* An [Azure Kubernetes Service AML compute target](../machine-learning/concept-compute-target.md) in this workspace with a [deployed model](../machine-learning/how-to-deploy-azure-kubernetes-service.md)
+* An [Azure Kubernetes Service AML compute target](../machine-learning/concept-compute-target.md) in this workspace with a [deployed model](../machine-learning/v1/how-to-deploy-azure-kubernetes-service.md)
* The [compute target should have SSL enabled](../machine-learning/how-to-secure-web-service.md#deploy-on-azure-kubernetes-service). Azure Cognitive Search only allows access to **https** endpoints * Self-signed certificates may not be used.
For cases when the AML service is unavailable or returns an HTTP error, a friend
## See also + [How to define a skillset](cognitive-search-defining-skillset.md)
-+ [AML Service troubleshooting](../machine-learning/how-to-troubleshoot-deployment.md)
++ [AML Service troubleshooting](../machine-learning/how-to-troubleshoot-deployment.md)
search Cognitive Search Common Errors Warnings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-common-errors-warnings.md
Previously updated : 12/09/2021 Last updated : 05/24/2022 # Troubleshooting common indexer errors and warnings in Azure Cognitive Search
Beginning with API version `2019-05-06`, item-level Indexer errors and warnings
| Property | Description | Example | | | | |
-| Key | The document ID of the document impacted by the error or warning. | https:\//coromsearch.blob.core.windows.net/jfk-1k/docid-32112954.pdf |
-| Name | The operation name describing where the error or warning occurred. This is generated by the following structure: [category].[subcategory].[resourceType].[resourceName] | DocumentExtraction.azureblob.myBlobContainerName Enrichment.WebApiSkill.mySkillName Projection.SearchIndex.OutputFieldMapping.myOutputFieldName Projection.SearchIndex.MergeOrUpload.myIndexName Projection.KnowledgeStore.Table.myTableName |
-| Message | A high-level description of the error or warning. | Could not execute skill because the Web Api request failed. |
-| Details | Any additional details which may be helpful to diagnose the issue, such as the WebApi response if executing a custom skill failed. | `link-cryptonyms-list - Error processing the request record : System.ArgumentNullException: Value cannot be null. Parameter name: source at System.Linq.Enumerable.All[TSource](IEnumerable`1 source, Func`2 predicate) at Microsoft.CognitiveSearch.WebApiSkills.JfkWebApiSkills.` ...rest of stack trace... |
-| DocumentationLink | A link to relevant documentation with detailed information to debug and resolve the issue. This link will often point to one of the below sections on this page. | https://go.microsoft.com/fwlink/?linkid=2106475 |
+| Key | The document ID of the document impacted by the error or warning. | `https://<storageaccount>.blob.core.windows.net/jfk-1k/docid-32112954.pdf`|
+| Name | The operation name describing where the error or warning occurred. This is generated by the following structure: `[category]`.`[subcategory]`.`[resourceType]`.`[resourceName]` | `DocumentExtraction.azureblob.myBlobContainerName` `Enrichment.WebApiSkill.mySkillName` `Projection.SearchIndex.OutputFieldMapping.myOutputFieldName` `Projection.SearchIndex.MergeOrUpload.myIndexName` `Projection.KnowledgeStore.Table.myTableName` |
+| Message | A high-level description of the error or warning. | `Could not execute skill because the Web Api request failed.` |
+| Details | Any additional details which may be helpful to diagnose the issue, such as the WebApi response if executing a custom skill failed. | `link-cryptonyms-list - Error processing the request record : System.ArgumentNullException: Value cannot be null. Parameter name: source at System.Linq.Enumerable.All[TSource](IEnumerable 1 source, Func 2 predicate) at Microsoft.CognitiveSearch.WebApiSkills.JfkWebApiSkills. ...rest of stack trace...` |
+| DocumentationLink | A link to relevant documentation with detailed information to debug and resolve the issue. This link will often point to one of the below sections on this page. | `https://go.microsoft.com/fwlink/?linkid=2106475` |
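These properties are returned as part of the indexer execution history. As a minimal sketch (the service name, indexer name, API version, and admin key are placeholders), you can retrieve them with a REST call such as:

```bash
# Illustrative sketch: item-level errors and warnings appear in the indexer
# execution history returned by the status API. Placeholders throughout.
curl -s -H "api-key: <admin-api-key>" \
  "https://<service-name>.search.windows.net/indexers/<indexer-name>/status?api-version=2020-06-30"
```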
<a name="could-not-read-document"></a>
-## Error: Could not read document
+## `Error: Could not read document`
Indexer was unable to read the document from the data source. This can happen due to: | Reason | Details/Example | Resolution | | | | |
-| Inconsistent field types across different documents | "Type of value has a mismatch with column type. Couldn't store `'{47.6,-122.1}'` in authors column. Expected type is `JArray`." "Error converting data type nvarchar to float." "Conversion failed when converting the nvarchar value '12 months' to data type int." "Arithmetic overflow error converting expression to data type int." | Ensure that the type of each field is the same across different documents. For example, if the first document `'startTime'` field is a DateTime, and in the second document it's a string, this error will be hit. |
-| Errors from the data source's underlying service | (from Cosmos DB) `{"Errors":["Request rate is large"]}` | Check your storage instance to ensure it's healthy. You may need to adjust your scaling/partitioning. |
-| Transient issues | A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - An existing connection was forcibly closed by the remote host | Occasionally there are unexpected connectivity issues. Try running the document through your indexer again later. |
+| Inconsistent field types across different documents | `Type of value has a mismatch with column type. Couldn't store '{47.6,-122.1}' in authors column. Expected type is JArray.` `Error converting data type nvarchar to float.` `Conversion failed when converting the nvarchar value '12 months' to data type int.` `Arithmetic overflow error converting expression to data type int.` | Ensure that the type of each field is the same across different documents. For example, if the first document `'startTime'` field is a DateTime, and in the second document it's a string, this error will be hit. |
+| Errors from the data source's underlying service | From Cosmos DB: `{"Errors":["Request rate is large"]}` | Check your storage instance to ensure it's healthy. You may need to adjust your scaling/partitioning. |
+| Transient issues | `A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - An existing connection was forcibly closed by the remote host` | Occasionally there are unexpected connectivity issues. Try running the document through your indexer again later. |
<a name="could-not-extract-document-content"></a>
-## Error: Could not extract content or metadata from your document
+## `Error: Could not extract content or metadata from your document`
Indexer with a Blob data source was unable to extract the content or metadata from the document (for example, a PDF file). This can happen due to: | Reason | Details/Example | Resolution | | | | |
-| Blob is over the size limit | Document is `'150441598'` bytes, which exceeds the maximum size `'134217728'` bytes for document extraction for your current service tier. | [blob indexing errors](search-howto-indexing-azure-blob-storage.md#DealingWithErrors) |
-| Blob has unsupported content type | Document has unsupported content type `'image/png'` | [blob indexing errors](search-howto-indexing-azure-blob-storage.md#DealingWithErrors) |
-| Blob is encrypted | Document could not be processed - it may be encrypted or password protected. | You can skip the blob with [blob settings](search-howto-indexing-azure-blob-storage.md#PartsOfBlobToIndex). |
-| Transient issues | "Error processing blob: The request was aborted: The request was canceled." "Document timed out during processing." | Occasionally there are unexpected connectivity issues. Try running the document through your indexer again later. |
+| Blob is over the size limit | `Document is '150441598' bytes, which exceeds the maximum size '134217728' bytes for document extraction for your current service tier.` | [Blob indexing errors](search-howto-indexing-azure-blob-storage.md#DealingWithErrors) |
+| Blob has unsupported content type | `Document has unsupported content type 'image/png'` | [Blob indexing errors](search-howto-indexing-azure-blob-storage.md#DealingWithErrors) |
+| Blob is encrypted | `Document could not be processed - it may be encrypted or password protected.` | You can skip the blob with [blob settings](search-howto-indexing-azure-blob-storage.md#PartsOfBlobToIndex). |
+| Transient issues | `Error processing blob: The request was aborted: The request was canceled.` `Document timed out during processing.` | Occasionally there are unexpected connectivity issues. Try running the document through your indexer again later. |
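Several of these resolutions come down to blob indexer configuration. As a minimal, illustrative sketch (the service, indexer, data source, and index names and the API key are placeholders), the following settings skip unsupported or unprocessable blobs instead of failing the document:

```bash
# Illustrative sketch: configure a blob indexer to tolerate problem blobs.
# All names and the API key are placeholders.
curl -s -X PUT -H "api-key: <admin-api-key>" -H "Content-Type: application/json" \
  "https://<service-name>.search.windows.net/indexers/<indexer-name>?api-version=2020-06-30" \
  -d '{
    "dataSourceName": "<data-source-name>",
    "targetIndexName": "<index-name>",
    "parameters": {
      "configuration": {
        "failOnUnsupportedContentType": false,
        "failOnUnprocessableDocument": false,
        "indexStorageMetadataOnlyForOversizedDocuments": true
      }
    }
  }'
```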
<a name="could-not-parse-document"></a>
-## Error: Could not parse document
+## `Error: Could not parse document`
Indexer read the document from the data source, but there was an issue converting the document content into the specified field mapping schema. This can happen due to: | Reason | Details/Example | Resolution | | | | |
-| The document key is missing | Document key cannot be missing or empty | Ensure all documents have valid document keys. The document key is determined by setting the 'key' property as part of the [index definition](/rest/api/searchservice/create-index#request-body). Indexers will emit this error when the property flagged as the 'key' cannot be found on a particular document. |
-| The document key is invalid | Document key cannot be longer than 1024 characters | Modify the document key to meet the validation requirements. |
-| Could not apply field mapping to a field | Could not apply mapping function `'functionName'` to field `'fieldName'`. Array cannot be null. Parameter name: bytes | Double check the [field mappings](search-indexer-field-mappings.md) defined on the indexer, and compare with the data of the specified field of the failed document. It may be necessary to modify the field mappings or the document data. |
-| Could not read field value | Could not read the value of column `'fieldName'` at index `'fieldIndex'`. A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - An existing connection was forcibly closed by the remote host.) | These errors are typically due to unexpected connectivity issues with the data source's underlying service. Try running the document through your indexer again later. |
+| The document key is missing | `Document key cannot be missing or empty` | Ensure all documents have valid document keys. The document key is determined by setting the 'key' property as part of the [index definition](/rest/api/searchservice/create-index#request-body). Indexers will emit this error when the property flagged as the 'key' cannot be found on a particular document. |
+| The document key is invalid | `Document key cannot be longer than 1024 characters` | Modify the document key to meet the validation requirements. |
+| Could not apply field mapping to a field | `Could not apply mapping function 'functionName' to field 'fieldName'. Array cannot be null. Parameter name: bytes` | Double check the [field mappings](search-indexer-field-mappings.md) defined on the indexer, and compare with the data of the specified field of the failed document. It may be necessary to modify the field mappings or the document data. |
+| Could not read field value | `Could not read the value of column 'fieldName' at index 'fieldIndex'. A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - An existing connection was forcibly closed by the remote host.)` | These errors are typically due to unexpected connectivity issues with the data source's underlying service. Try running the document through your indexer again later. |
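When reviewing field mappings, it can help to compare against a known-good shape. A minimal sketch of a field mapping that applies a mapping function (the field names here are placeholders, not taken from the failed document):

```json
{
  "fieldMappings": [
    {
      "sourceFieldName": "metadata_storage_path",
      "targetFieldName": "id",
      "mappingFunction": { "name": "base64Encode" }
    }
  ]
}
```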
-<a name="Could not map output field '`xyz`' to search index due to deserialization problem while applying mapping function '`abc`'"></a>
-
-## Error: Could not map output field '`xyz`' to search index due to deserialization problem while applying mapping function '`abc`'
+## `Error: Could not map output field 'xyz' to search index due to deserialization problem while applying mapping function 'abc'`
The output mapping might have failed because the output data is in the wrong format for the mapping function you are using. For example, applying Base64Encode mapping function on binary data would generate this error. To resolve the issue, either rerun indexer without specifying mapping function or ensure that the mapping function is compatible with the output field data type. See [Output field mapping](cognitive-search-output-field-mapping.md) for details.
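For comparison, a minimal sketch of an output field mapping without a mapping function; if you do attach one such as `base64Encode`, the source node must contain data the function can handle (the paths below are placeholders):

```json
{
  "outputFieldMappings": [
    {
      "sourceFieldName": "/document/content/keyPhrases",
      "targetFieldName": "keyPhrases"
    }
  ]
}
```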
-<a name="could-not-execute-skill"></a>
-## Error: Could not execute skill
-Indexer was not able to run a skill in the skillset.
+## `Error: Could not execute skill`
+The indexer was not able to run a skill in the skillset.
| Reason | Details/Example | Resolution |
| --- | --- | --- |
| Transient connectivity issues | A transient error occurred. Please try again later. | Occasionally there are unexpected connectivity issues. Try running the document through your indexer again later. |
-| Potential product bug | An unexpected error occurred. | This indicates an unknown class of failure and may mean there is a product bug. Please file a [support ticket](https://portal.azure.com/#create/Microsoft.Support) to get help. |
+| Potential product bug | An unexpected error occurred. | This indicates an unknown class of failure and may mean there is a product bug. File a [support ticket](https://portal.azure.com/#create/Microsoft.Support) to get help. |
| A skill has encountered an error during execution | (From Merge Skill) One or more offset values were invalid and could not be parsed. Items were inserted at the end of the text | Use the information in the error message to fix the issue. This kind of failure will require action to resolve. |
-<a name="could-not-execute-skill-because-the-web-api-request-failed"></a>
-## Error: Could not execute skill because the Web API request failed
-Skill execution failed because the call to the Web API failed. Typically, this class of failure occurs when custom skills are used, in which case you will need to debug your custom code to resolve the issue. If instead the failure is from a built-in skill, refer to the error message for help in fixing the issue.
+
+## `Error: Could not execute skill because the Web API request failed`
+The skill execution failed because the call to the Web API failed. Typically, this class of failure occurs when custom skills are used, in which case you will need to debug your custom code to resolve the issue. If instead the failure is from a built-in skill, refer to the error message for help in fixing the issue.
While debugging this issue, be sure to pay attention to any [skill input warnings](#warning-skill-input-was-invalid) for this skill. Your Web API endpoint may be failing because the indexer is passing it unexpected input.
-<a name="could-not-execute-skill-because-web-api-skill-response-is-invalid"></a>
-## Error: Could not execute skill because Web API skill response is invalid
-Skill execution failed because the call to the Web API returned an invalid response. Typically, this class of failure occurs when custom skills are used, in which case you will need to debug your custom code to resolve the issue. If instead the failure is from a built-in skill, please file a [support ticket](https://portal.azure.com/#create/Microsoft.Support) to get assistance.
+## `Error: Could not execute skill because Web API skill response is invalid`
+The skill execution failed because the call to the Web API returned an invalid response. Typically, this class of failure occurs when custom skills are used, in which case you will need to debug your custom code to resolve the issue. If instead the failure is from a built-in skill, file a [support ticket](https://portal.azure.com/#create/Microsoft.Support) to get assistance.
+
-<a name="skill-did-not-execute-within-the-time-limit"></a>
+## `Error: Type of value has a mismatch with column type. Couldn't store in 'xyz' column. Expected type is 'abc'`
+If your data source has a field with a different data type than the field you are trying to map in your index, you may encounter this error. Check your data source field data types and make sure they are [mapped correctly to your index data types](/rest/api/searchservice/data-type-map-for-indexers-in-azure-search).
-## Error: Skill did not execute within the time limit
-There are two cases under which you may encounter this error message, each of which should be treated differently. Please follow the instructions below depending on what skill returned this error for you.
+
+## `Error: Skill did not execute within the time limit`
+There are two cases under which you may encounter this error message, each of which should be treated differently. Follow the instructions below depending on what skill returned this error for you.
### Built-in Cognitive Service skills

Many of the built-in cognitive skills, such as language detection, entity recognition, or OCR, are backed by a Cognitive Service API endpoint. Sometimes there are transient issues with these endpoints and a request will time out. For transient issues, there is no remedy except to wait and try again. As a mitigation, consider setting your indexer to [run on a schedule](search-howto-schedule-indexers.md). Scheduled indexing picks up where it left off. Assuming transient issues are resolved, indexing and cognitive skill processing should be able to continue on the next scheduled run.
-If you continue to see this error on the same document for a built-in cognitive skill, please file a [support ticket](https://portal.azure.com/#create/Microsoft.Support) to get assistance, as this is not expected.
+If you continue to see this error on the same document for a built-in cognitive skill, file a [support ticket](https://portal.azure.com/#create/Microsoft.Support) to get assistance, as this is not expected.
### Custom skills

If you encounter a timeout error with a custom skill you have created, there are a couple of things you can try. First, review your custom skill and ensure that it is not getting stuck in an infinite loop and that it is returning a result consistently. Once you have confirmed that is the case, determine what the execution time of your skill is. If you didn't explicitly set a `timeout` value on your custom skill definition, then the default `timeout` is 30 seconds. If 30 seconds is not long enough for your skill to execute, you may specify a higher `timeout` value on your custom skill definition. Here is an example of a custom skill definition where the timeout is set to 90 seconds:
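The full sample isn't reproduced in this change view. A minimal sketch of a custom Web API skill with `timeout` set to 90 seconds would look like the following (the URI, inputs, and outputs are placeholders):

```json
{
  "@odata.type": "#Microsoft.Skills.Custom.WebApiSkill",
  "description": "Example custom skill with an extended timeout (placeholder URI and fields)",
  "uri": "https://contoso-function.azurewebsites.net/api/process",
  "timeout": "PT90S",
  "batchSize": 1,
  "context": "/document",
  "inputs": [
    { "name": "text", "source": "/document/content" }
  ],
  "outputs": [
    { "name": "processedText", "targetName": "processedText" }
  ]
}
```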
If you encounter a timeout error with a custom skill you have created, there are
The maximum value that you can set for the `timeout` parameter is 230 seconds. If your custom skill is unable to execute consistently within 230 seconds, you may consider reducing the `batchSize` of your custom skill so that it will have fewer documents to process within a single execution. If you have already set your `batchSize` to 1, you will need to rewrite the skill to be able to execute in under 230 seconds or otherwise split it into multiple custom skills so that the execution time for any single custom skill is a maximum of 230 seconds. Review the [custom skill documentation](cognitive-search-custom-skill-web-api.md) for more information.
-<a name="could-not-mergeorupload--delete-document-to-the-search-index"></a>
-## Error: Could not '`MergeOrUpload`' | '`Delete`' document to the search index
+## `Error: Could not 'MergeOrUpload' | 'Delete' document to the search index`
The document was read and processed, but the indexer could not add it to the search index. This can happen due to:

| Reason | Details/Example | Resolution |
| --- | --- | --- |
| A field contains a term that is too large | A term in your document is larger than the [32 KB limit](search-limits-quotas-capacity.md#api-request-limits) | You can avoid this restriction by ensuring the field is not configured as filterable, facetable, or sortable. |
| Document is too large to be indexed | A document is larger than the [maximum api request size](search-limits-quotas-capacity.md#api-request-limits) | [How to index large data sets](search-howto-large-index.md) |
-| Document contains too many objects in collection | A collection in your document exceeds the [maximum elements across all complex collections limit](search-limits-quotas-capacity.md#index-limits) "The document with key `'1000052'` has `'4303'` objects in collections (JSON arrays). At most `'3000'` objects are allowed to be in collections across the entire document. Please remove objects from collections and try indexing the document again." | We recommend reducing the size of the complex collection in the document to below the limit and avoid high storage utilization.
+| Document contains too many objects in collection | A collection in your document exceeds the [maximum elements across all complex collections limit](search-limits-quotas-capacity.md#index-limits). `The document with key '1000052' has '4303' objects in collections (JSON arrays). At most '3000' objects are allowed to be in collections across the entire document. Remove objects from collections and try indexing the document again.` | We recommend reducing the size of the complex collection in the document to below the limit and avoid high storage utilization.
| Trouble connecting to the target index (that persists after retries) because the service is under other load, such as querying or indexing. | Failed to establish connection to update index. Search service is under heavy load. | [Scale up your search service](search-capacity-planning.md) |
| Search service is being patched for service update, or is in the middle of a topology reconfiguration. | Failed to establish connection to update index. Search service is currently down/Search service is undergoing a transition. | Configure service with at least 3 replicas for 99.9% availability per [SLA documentation](https://azure.microsoft.com/support/legal/sla/search/v1_0/) |
| Failure in the underlying compute/networking resource (rare) | Failed to establish connection to update index. An unknown failure occurred. | Configure indexers to [run on a schedule](search-howto-schedule-indexers.md) to pick up from a failed state. |
| An indexing request made to the target index was not acknowledged within a timeout period due to network issues. | Could not establish connection to the search index in a timely manner. | Configure indexers to [run on a schedule](search-howto-schedule-indexers.md) to pick up from a failed state. Additionally, try lowering the indexer [batch size](/rest/api/searchservice/create-indexer#parameters) if this error condition persists. |
-<a name="could-not-index-document-because-the-indexer-data-to-index-was-invalid"></a>
-## Error: Could not index document because some of the document's data was not valid
+## `Error: Could not index document because some of the document's data was not valid`
The document was read and processed by the indexer, but due to a mismatch in the configuration of the index fields and the data extracted and processed by the indexer, it could not be added to the search index. This can happen due to:

| Reason | Details/Example |
| --- | --- |
-| Data type of the field(s) extracted by the indexer is incompatible with the data model of the corresponding target index field. | The data field '_data_' in the document with key '888' has an invalid value 'of type 'Edm.String''. The expected type was 'Collection(Edm.String)'. |
-| Failed to extract any JSON entity from a string value. | Could not parse value 'of type 'Edm.String'' of field '_data_' as a JSON object. Error:'After parsing a value an unexpected character was encountered: ''. Path '_path_', line 1, position 3162.' |
-| Failed to extract a collection of JSON entities from a string value. | Could not parse value 'of type 'Edm.String'' of field '_data_' as a JSON array. Error:'After parsing a value an unexpected character was encountered: ''. Path '[0]', line 1, position 27.' |
-| An unknown type was discovered in the source document. | Unknown type '_unknown_' cannot be indexed |
-| An incompatible notation for geography points was used in the source document. | WKT POINT string literals are not supported. Please use GeoJson point literals instead |
+| Data type of the field(s) extracted by the indexer is incompatible with the data model of the corresponding target index field. | `The data field '_data_' in the document with key '888' has an invalid value 'of type 'Edm.String''. The expected type was 'Collection(Edm.String)'.` |
+| Failed to extract any JSON entity from a string value. | `Could not parse value 'of type 'Edm.String'' of field '_data_' as a JSON object.` `Error:'After parsing a value an unexpected character was encountered: ''. Path '_path_', line 1, position 3162.'` |
+| Failed to extract a collection of JSON entities from a string value. | `Could not parse value 'of type 'Edm.String'' of field '_data_' as a JSON array.` `Error:'After parsing a value an unexpected character was encountered: ''. Path '[0]', line 1, position 27.'` |
+| An unknown type was discovered in the source document. | `Unknown type '_unknown_' cannot be indexed` |
+| An incompatible notation for geography points was used in the source document. | `WKT POINT string literals are not supported. Use GeoJson point literals instead` |
In all these cases, refer to [Supported Data types](/rest/api/searchservice/supported-data-types) and [Data type map for indexers](/rest/api/searchservice/data-type-map-for-indexers-in-azure-search) to make sure that you build the index schema correctly and have set up appropriate [indexer field mappings](search-indexer-field-mappings.md). The error message will include details that can help track down the source of the mismatch.
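For example, if the indexer delivers an array but the index field is a single string (or the reverse), you'll see the first error in the table above. A hypothetical field definition that accepts an array of strings:

```json
{
  "name": "data",
  "type": "Collection(Edm.String)",
  "searchable": true,
  "filterable": false,
  "retrievable": true
}
```

A source value shaped like `["red", "blue"]` maps cleanly to this field, whereas a single string value would need the field type to be `Edm.String` instead.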
-## Error: Integrated change tracking policy cannot be used because table has a composite primary key
-
+## `Error: Integrated change tracking policy cannot be used because table has a composite primary key`
This applies to SQL tables, and usually happens when the key is either defined as a composite key or when the table has defined a unique clustered index (as in a SQL index, not an Azure Search index). The main reason is that the key attribute is modified to be a composite primary key in the case of a [unique clustered index](/sql/relational-databases/indexes/clustered-and-nonclustered-indexes-described). In that case, make sure that your SQL table does not have a unique clustered index, or that you map the key field to a field that is guaranteed not to have duplicate values.
-<a name="could-not-process-document-within-indexer-max-run-time"></a>
-## Error: Could not process document within indexer max run time
+## `Error: Could not process document within indexer max run time`
This error occurs when the indexer is unable to finish processing a single document from the data source within the allowed execution time. [Maximum running time](search-limits-quotas-capacity.md#indexer-limits) is shorter when skillsets are used. When this error occurs, if you have maxFailedItems set to a value other than 0, the indexer bypasses the document on future runs so that indexing can progress. If you cannot afford to skip any document, or if you are seeing this error consistently, consider breaking documents into smaller documents so that partial progress can be made within a single indexer execution.
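A sketch of indexer `parameters` that tolerate skipped documents so the run can continue (values are illustrative; `-1` means failures are not counted against the run):

```json
{
  "parameters": {
    "batchSize": 50,
    "maxFailedItems": -1,
    "maxFailedItemsPerBatch": -1
  }
}
```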
-<a name="could-not-project-document></a>
-## Error: Could not project document
+## `Error: Could not project document`
This error occurs when the indexer is attempting to [project data into a knowledge store](knowledge-store-projection-overview.md) and there was a failure on the attempt. This failure could be consistent and fixable, or it could be a transient failure with the projection output sink that you may need to wait and retry in order to resolve. Here is a set of known failure states and possible resolutions.

| Reason | Details/Example | Resolution |
| --- | --- | --- |
| Could not update projection blob `'blobUri'` in container `'containerName'` | The specified container does not exist. | The indexer will check if the specified container has been previously created and will create it if necessary, but it only performs this check once per indexer run. This error means that something deleted the container after this step. To resolve this error, try this: leave your storage account information alone, wait for the indexer to finish, and then rerun the indexer. |
-| Could not update projection blob `'blobUri'` in container `'containerName'` |Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host. | This is expected to be a transient failure with Azure Storage and thus should be resolved by rerunning the indexer. If you encounter this error consistently, please file a [support ticket](https://portal.azure.com/#create/Microsoft.Support) so it can be investigated further. |
-| Could not update row `'projectionRow'` in table `'tableName'` | The server is busy. | This is expected to be a transient failure with Azure Storage and thus should be resolved by rerunning the indexer. If you encounter this error consistently, please file a [support ticket](https://portal.azure.com/#create/Microsoft.Support) so it can be investigated further. |
+| Could not update projection blob `'blobUri'` in container `'containerName'` |Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host. | This is expected to be a transient failure with Azure Storage and thus should be resolved by rerunning the indexer. If you encounter this error consistently, file a [support ticket](https://portal.azure.com/#create/Microsoft.Support) so it can be investigated further. |
+| Could not update row `'projectionRow'` in table `'tableName'` | The server is busy. | This is expected to be a transient failure with Azure Storage and thus should be resolved by rerunning the indexer. If you encounter this error consistently, file a [support ticket](https://portal.azure.com/#create/Microsoft.Support) so it can be investigated further. |
-<a name="could-not-execute-skill-because-a-skill-input-was-invalid"></a>
-## Warning: Skill input was invalid
-An input to the skill was missing, the wrong type, or otherwise invalid. The warning message will indicate the impact:
-1) Could not execute skill
-2) Skill executed but may have unexpected results
+## `Warning: Skill input was invalid`
+An input to the skill was missing, of the wrong type, or otherwise invalid. The warning message will indicate the impact:
+1) `Could not execute skill`
+2) `Skill executed but may have unexpected results`
-Cognitive skills have required inputs and optional inputs. For example the [Key phrase extraction skill](cognitive-search-skill-keyphrases.md) has two required inputs `text`, `languageCode`, and no optional inputs. Custom skill inputs are all considered optional inputs.
+Cognitive skills have required inputs and optional inputs. For example, the [Key phrase extraction skill](cognitive-search-skill-keyphrases.md) has two required inputs, `text` and `languageCode`, and no optional inputs. Custom skill inputs are all considered optional inputs.
If any required inputs are missing or if any input is not the right type, the skill gets skipped and generates a warning. Skipped skills do not generate any outputs, so if other skills use outputs of the skipped skill they may generate additional warnings.
If you want to provide a default value in case of missing input, you can use the
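A minimal sketch of that Conditional skill approach, supplying a default language value when the detected one is missing (the paths and the `'en'` default are placeholders for your own data):

```json
{
  "@odata.type": "#Microsoft.Skills.Util.ConditionalSkill",
  "context": "/document",
  "inputs": [
    { "name": "condition", "source": "= $(/document/language) == null" },
    { "name": "whenTrue", "source": "= 'en'" },
    { "name": "whenFalse", "source": "= $(/document/language)" }
  ],
  "outputs": [
    { "name": "output", "targetName": "languageWithDefault" }
  ]
}
```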
| Reason | Details/Example | Resolution |
| --- | --- | --- |
| Skill input is the wrong type | "Required skill input was not of the expected type `String`. Name: `text`, Source: `/document/merged_content`." "Required skill input was not of the expected format. Name: `text`, Source: `/document/merged_content`." "Cannot iterate over non-array `/document/normalized_images/0/imageCelebrities/0/detail/celebrities`." "Unable to select `0` in non-array `/document/normalized_images/0/imageCelebrities/0/detail/celebrities`" | Certain skills expect inputs of particular types, for example [Sentiment skill](cognitive-search-skill-sentiment-v3.md) expects `text` to be a string. If the input specifies a non-string value, then the skill doesn't execute and generates no outputs. Ensure your data set has input values uniform in type, or use a [Custom Web API skill](cognitive-search-custom-skill-web-api.md) to preprocess the input. If you're iterating the skill over an array, check the skill context and input have `*` in the correct positions. Usually both the context and input source should end with `*` for arrays. |
-| Skill input is missing | "Required skill input is missing. Name: `text`, Source: `/document/merged_content`" "Missing value `/document/normalized_images/0/imageTags`." "Unable to select `0` in array `/document/pages` of length `0`." | If all your documents get this warning, most likely there is a typo in the input paths and you should double check property name casing, extra or missing `*` in the path, and make sure that the documents from the data source provide the required inputs. |
-| Skill language code input is invalid | Skill input `languageCode` has the following language codes `X,Y,Z`, at least one of which is invalid. | See more details [below](cognitive-search-common-errors-warnings.md#skill-input-languagecode-has-the-following-language-codes-xyz-at-least-one-of-which-is-invalid) |
+| Skill input is missing | `Required skill input is missing. Name: text, Source: /document/merged_content` `Missing value /document/normalized_images/0/imageTags.` `Unable to select 0 in array /document/pages of length 0.` | If all your documents get this warning, most likely there is a typo in the input paths and you should double check property name casing, extra or missing `*` in the path, and make sure that the documents from the data source provide the required inputs. |
+| Skill language code input is invalid | Skill input `languageCode` has the following language codes `X,Y,Z`, at least one of which is invalid. | See more details below. |
-<a name="skill-input-languagecode-has-the-following-language-codes-xyz-at-least-one-of-which-is-invalid"></a>
-## Warning: Skill input 'languageCode' has the following language codes 'X,Y,Z', at least one of which is invalid.
+## `Warning: Skill input 'languageCode' has the following language codes 'X,Y,Z', at least one of which is invalid.`
One or more of the values passed into the optional `languageCode` input of a downstream skill is not supported. This can occur if you are passing the output of the [LanguageDetectionSkill](cognitive-search-skill-language-detection.md) to subsequent skills, and the output consists of more languages than are supported in those downstream skills.
-Note you may also get a warning similar to this one if an invalid `countryHint` input gets passed to the LanguageDetectionSkill. If that happens, please validate that the field you are using from your data source for that input contains valid ISO 3166-1 alpha-2 two letter country codes. If some are valid and some are invalid, continue with the following guidance but replace `languageCode` with `countryHint` and `defaultLanguageCode` with `defaultCountryHint` to match your use case.
+Note that you may also get a warning similar to this one if an invalid `countryHint` input gets passed to the LanguageDetectionSkill. If that happens, validate that the field you are using from your data source for that input contains valid ISO 3166-1 alpha-2 two letter country codes. If some are valid and some are invalid, continue with the following guidance but replace `languageCode` with `countryHint` and `defaultLanguageCode` with `defaultCountryHint` to match your use case.
If you know that your data set is all in one language, you should remove the [LanguageDetectionSkill](cognitive-search-skill-language-detection.md) and the `languageCode` skill input and use the `defaultLanguageCode` skill parameter for that skill instead, assuming the language is supported for that skill.
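For instance, a minimal sketch of a key phrase skill that relies on `defaultLanguageCode` rather than a `languageCode` input (the source path is a placeholder):

```json
{
  "@odata.type": "#Microsoft.Skills.Text.KeyPhraseExtractionSkill",
  "defaultLanguageCode": "en",
  "context": "/document",
  "inputs": [
    { "name": "text", "source": "/document/content" }
  ],
  "outputs": [
    { "name": "keyPhrases", "targetName": "keyPhrases" }
  ]
}
```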
Here are some references for the currently supported languages for each of the s
* [Translator supported languages](../cognitive-services/translator/language-support.md)
* [Text SplitSkill](cognitive-search-skill-textsplit.md) supported languages: `da, de, en, es, fi, fr, it, ko, pt`
-<a name="skill-input-was-truncated"></a>
-## Warning: Skill input was truncated
+## `Warning: Skill input was truncated`
Cognitive skills have limits to the length of text that can be analyzed at once. If the text input of these skills is over that limit, we will truncate the text to meet the limit, and then perform the enrichment on that truncated text. This means that the skill is executed, but not over all of your data.
-In the example LanguageDetectionSkill below, the `'text'` input field may trigger this warning if it is over the character limit. You can find the skill input limits in the [skills documentation](cognitive-search-predefined-skills.md).
+In the example `LanguageDetectionSkill` below, the `'text'` input field may trigger this warning if it is over the character limit. You can find the skill input limits in the [skills documentation](cognitive-search-predefined-skills.md).
```json {
In the example LanguageDetectionSkill below, the `'text'` input field may trigge
If you want to ensure that all text is analyzed, consider using the [Split skill](cognitive-search-skill-textsplit.md).
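A minimal sketch of a Split skill that chunks long content before downstream analysis (the page length and paths are illustrative):

```json
{
  "@odata.type": "#Microsoft.Skills.Text.SplitSkill",
  "textSplitMode": "pages",
  "maximumPageLength": 4000,
  "context": "/document",
  "inputs": [
    { "name": "text", "source": "/document/content" }
  ],
  "outputs": [
    { "name": "textItems", "targetName": "pages" }
  ]
}
```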
-<a name="web-api-skill-response-contains-warnings"></a>
-## Warning: Web API skill response contains warnings
-Indexer was able to run a skill in the skillset, but the response from the Web API request indicated there were warnings during execution. Review the warnings to understand how your data is impacted and whether or not action is required.
-<a name="the-current-indexer-configuration-does-not-support-incremental-progress"></a>
+## `Warning: Web API skill response contains warnings`
+The indexer was able to run a skill in the skillset, but the response from the Web API request indicated there were warnings during execution. Review the warnings to understand how your data is impacted and whether or not action is required.
-## Warning: The current indexer configuration does not support incremental progress
+
+## `Warning: The current indexer configuration does not support incremental progress`
This warning only occurs for Cosmos DB data sources. Incremental progress during indexing ensures that if indexer execution is interrupted by transient failures or an execution time limit, the indexer can pick up where it left off next time it runs, instead of having to re-index the entire collection from scratch. This is especially important when indexing large collections.
It is possible to override this behavior, enabling incremental progress and supp
For more information, see [Incremental progress and custom queries](search-howto-index-cosmosdb.md#IncrementalProgress).
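As a rough sketch, and assuming your custom query orders results by `_ts`, the override is typically a single configuration property on the indexer; verify the exact placement in the linked article before relying on it:

```json
{
  "parameters": {
    "configuration": {
      "assumeOrderByHighWaterMarkColumn": true
    }
  }
}
```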
-<a name="some-data-was-lost-during projection-row-x-in-table-y-has-string-property-z-which-was-too-long"></a>
-## Warning: Some data was lost during projection. Row 'X' in table 'Y' has string property 'Z' which was too long.
+## `Warning: Some data was lost during projection. Row 'X' in table 'Y' has string property 'Z' which was too long.`
The [Table Storage service](https://azure.microsoft.com/services/storage/tables) has limits on how large [entity properties](/rest/api/storageservices/understanding-the-table-service-data-model#property-types) can be. Strings can have 32,000 characters or less. If a row with a string property longer than 32,000 characters is being projected, only the first 32,000 characters are preserved. To work around this issue, avoid projecting rows with string properties longer than 32,000 characters.
-<a name="truncated-extracted-text-to-x-characters"></a>
-## Warning: Truncated extracted text to X characters
+## `Warning: Truncated extracted text to X characters`
Indexers limit how much text can be extracted from any one document. This limit depends on the pricing tier: 32,000 characters for Free tier, 64,000 for Basic, 4 million for Standard, 8 million for Standard S2, and 16 million for Standard S3. Text that was truncated will not be indexed. To avoid this warning, try breaking apart documents with large amounts of text into multiple, smaller documents. For more information, see [Indexer limits](search-limits-quotas-capacity.md#indexer-limits).
-<a name="could-not-map-output-field-x-to-search-index"></a>
-## Warning: Could not map output field 'X' to search index
+## `Warning: Could not map output field 'X' to search index`
Output field mappings that reference non-existent/null data will produce warnings for each document and result in an empty index field. To work around this issue, double-check your output field-mapping source paths for possible typos, or set a default value using the [Conditional skill](cognitive-search-skill-conditional.md#sample-skill-definition-2-set-a-default-value-for-a-value-that-doesnt-exist). See [Output field mapping](cognitive-search-output-field-mapping.md) for details.

| Reason | Details/Example | Resolution |
| --- | --- | --- |
| Cannot iterate over non-array | "Cannot iterate over non-array `/document/normalized_images/0/imageCelebrities/0/detail/celebrities`." | This error occurs when the output is not an array. If you think the output should be an array, check the indicated output source field path for errors. For example, you might have a missing or extra `*` in the source field name. It's also possible that the input to this skill is null, resulting in an empty array. Find similar details in [Skill Input was Invalid](cognitive-search-common-errors-warnings.md#warning-skill-input-was-invalid) section. |
-| Unable to select `0` in non-array | "Unable to select `0` in non-array `/document/pages`." | This could happen if the skills output does not produce an array and the output source field name has array index or `*` in its path. Please double check the paths provided in the output source field names and the field value for the indicated field name. Find similar details in [Skill Input was Invalid](cognitive-search-common-errors-warnings.md#warning-skill-input-was-invalid) section. |
+| Unable to select `0` in non-array | "Unable to select `0` in non-array `/document/pages`." | This could happen if the skills output does not produce an array and the output source field name has array index or `*` in its path. Double check the paths provided in the output source field names and the field value for the indicated field name. Find similar details in [Skill Input was Invalid](cognitive-search-common-errors-warnings.md#warning-skill-input-was-invalid) section. |
-<a name="the-data-change-detection-policy-is-configured-to-use-key-column-x"></a>
-## Warning: The data change detection policy is configured to use key column 'X'
+## `Warning: The data change detection policy is configured to use key column 'X'`
[Data change detection policies](/rest/api/searchservice/create-data-source#data-change-detection-policies) have specific requirements for the columns they use to detect change. One of these requirements is that this column is updated every time the source item is changed. Another requirement is that the new value for this column is greater than the previous value. Key columns don't fulfill this requirement because they don't change on every update. To work around this issue, select a different column for the change detection policy.
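For example, a data source sketch that points a high water mark policy at a column that increases on every update (names and connection details are placeholders):

```json
{
  "name": "my-sql-datasource",
  "type": "azuresql",
  "credentials": { "connectionString": "<connection-string>" },
  "container": { "name": "MyTable" },
  "dataChangeDetectionPolicy": {
    "@odata.type": "#Microsoft.Azure.Search.HighWaterMarkChangeDetectionPolicy",
    "highWaterMarkColumnName": "LastUpdatedDateTime"
  }
}
```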
-<a name="document-text-appears-to-be-utf-16-encoded-but-is-missing-a-byte-order-mark"></a>
-
-## Warning: Document text appears to be UTF-16 encoded, but is missing a byte order mark
+## `Warning: Document text appears to be UTF-16 encoded, but is missing a byte order mark`
The [indexer parsing modes](/rest/api/searchservice/create-indexer#blob-configuration-parameters) need to know how text is encoded before parsing it. The two most common ways of encoding text are UTF-16 and UTF-8. UTF-8 is a variable-length encoding where each character is between 1 byte and 4 bytes long. UTF-16 is a fixed-length encoding where each character is 2 bytes long. UTF-16 has two different variants, "big endian" and "little endian". Text encoding is determined by a "byte order mark", a series of bytes before the text.

| Encoding | Byte Order Mark |
| --- | --- |
If no byte order mark is present, the text is assumed to be encoded as UTF-8.
To work around this warning, determine what the text encoding for this blob is and add the appropriate byte order mark.
-<a name="cosmos-db-collection-has-a-lazy-indexing-policy"></a>
-
-## Warning: Cosmos DB collection 'X' has a Lazy indexing policy. Some data may be lost
+## `Warning: Cosmos DB collection 'X' has a Lazy indexing policy. Some data may be lost`
Collections with [Lazy](../cosmos-db/index-policy.md#indexing-mode) indexing policies can't be queried consistently, resulting in your indexer missing data. To work around this warning, change your indexing policy to Consistent.
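A minimal sketch of a Cosmos DB indexing policy set to Consistent mode (the included and excluded paths are illustrative):

```json
{
  "indexingMode": "consistent",
  "automatic": true,
  "includedPaths": [
    { "path": "/*" }
  ],
  "excludedPaths": []
}
```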
-## Warning: The document contains very long words (longer than 64 characters). These words may result in truncated and/or unreliable model predictions.
-
+## `Warning: The document contains very long words (longer than 64 characters). These words may result in truncated and/or unreliable model predictions.`
This warning is passed from the Language service of Azure Cognitive Services. In some cases, it is safe to ignore this warning, such as when your document contains a long URL (which likely isn't a key phrase or driving sentiment, etc.). Be aware that when a word is longer than 64 characters, it will be truncated to 64 characters which can affect model predictions.
search Cognitive Search Concept Annotations Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-concept-annotations-syntax.md
Before reviewing the syntax, let's revisit a few important concepts to better un
| Term | Description |
| --- | --- |
-| Enriched Document | An enriched document is an internal structure created and used by the pipeline to hold all annotations related to a document. Think of an enriched document as a tree of annotations. Generally, an annotation created from a previous annotation becomes its child.<p/>Enriched documents only exist for the duration of skillset execution. Once content is mapped to the search index, the enriched document is no longer needed. Although you don't interact with enriched documents directly, it's useful to have a mental model of the documents when creating a skillset. |
-| Enrichment Context | The context in which the enrichment takes place, in terms of which element is enriched. By default, the enrichment context is at the `"/document"` level, scoped to individual documents. When a skill runs, the outputs of that skill become [properties of the defined context](#example-2).|
+| "enriched document" | An enriched document is an internal structure that collects skill output as it's created and it holds all annotations related to a document. Think of an enriched document as a tree of annotations. Generally, an annotation created from a previous annotation becomes its child. </p>Enriched documents only exist for the duration of skillset execution. Once content is mapped to the search index, the enriched document is no longer needed. Although you don't interact with enriched documents directly, it's useful to have a mental model of the documents when creating a skillset. |
+| "annotation" | Within an enriched document, a node that is created and populated by a skill, such as "text" and "layoutText" in the OCR skill, is called an annotation. An enriched document is populated with both annotations and unchanged field values or metadata copied from the source. |
+| "context" | The context in which the enrichment takes place, in terms of which element or component of the document is enriched. By default, the enrichment context is at the `"/document"` level, scoped to individual documents contained in the data source. When a skill runs, the outputs of that skill become [properties of the defined context](#example-2). |
<a name="example-1"></a> ## Example 1: Simple annotation reference
Notice that the cardinality of `"/document/people/*/lastname"` is larger than th
+ [How to integrate a custom skill into an enrichment pipeline](cognitive-search-custom-skill-interface.md)
+ [How to define a skillset](cognitive-search-defining-skillset.md)
+ [Create Skillset (REST)](/rest/api/searchservice/create-skillset)
-+ [How to map enriched fields to an index](cognitive-search-output-field-mapping.md)
++ [How to map enriched fields to an index](cognitive-search-output-field-mapping.md)
search Cognitive Search Output Field Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-output-field-mapping.md
This operation will simply "flatten" each of the names of the customEntities
```

## Next steps

Once you have mapped your enriched fields to searchable fields, you can set the field attributes for each of the searchable fields [as part of the index definition](search-what-is-an-index.md). For more information about field mapping, see [Field mappings in Azure Cognitive Search indexers](search-indexer-field-mappings.md).
search Cognitive Search Skill Custom Entity Lookup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-custom-entity-lookup.md
Previously updated : 03/22/2022 Last updated : 05/19/2022 # Custom Entity Lookup cognitive skill
-The **Custom Entity Lookup** skill looks for text from a custom, user-defined list of words and phrases. Using this list, it labels all documents with any matching entities. The skill also supports a degree of fuzzy matching that can be applied to find matches that are similar but not quite exact.
+The **Custom Entity Lookup** skill is used to detect or recognize entities that you define. During skillset execution, the skill looks for text from a custom, user-defined list of words and phrases. The skill uses this list to label any matching entities found within source documents. The skill also supports a degree of fuzzy matching that can be applied to find matches that are similar but not exact.
> [!NOTE]
-> This skill is not bound to a Cognitive Services API but requires a Cognitive Services key to allow more than 20 transactions. This skill is [metered by Cognitive Search](https://azure.microsoft.com/pricing/details/search/#pricing).
+> This skill isn't bound to a Cognitive Services API but requires a Cognitive Services key to allow more than 20 transactions. This skill is [metered by Cognitive Search](https://azure.microsoft.com/pricing/details/search/#pricing).
## @odata.type

Microsoft.Skills.Text.CustomEntityLookupSkill

## Data limits

+ The maximum input record size supported is 256 MB. If you need to break up your data before sending it to the custom entity lookup skill, consider using the [Text Split skill](cognitive-search-skill-textsplit.md).
-+ The maximum entities definition table supported is 10 MB if it is provided using the *entitiesDefinitionUri* parameter.
-+ If the entities are defined inline, using the *inlineEntitiesDefinition* parameter, the maximum supported size is 10 KB.
++ The maximum size of the custom entity definition is 10 MB if it's provided as an external file, specified through the "entitiesDefinitionUri" parameter.
++ If the entities are defined inline using the "inlineEntitiesDefinition" parameter, the maximum size is 10 KB.

## Skill parameters
Parameters are case-sensitive.
| Parameter name | Description |
| --- | --- |
-| `entitiesDefinitionUri` | Path to a JSON or CSV file containing all the target text to match against. This entity definition is read at the beginning of an indexer run; any updates to this file mid-run won't be realized until subsequent runs. This config must be accessible over HTTPS. See [Custom Entity Definition](#custom-entity-definition-format) Format" below for expected CSV or JSON schema.|
+| `entitiesDefinitionUri` | Path to an external JSON or CSV file containing all the target text to match against. This entity definition is read at the beginning of an indexer run; any updates to this file mid-run won't be realized until subsequent runs. This file must be accessible over HTTPS. See [Custom Entity Definition Format](#custom-entity-definition-format) below for expected CSV or JSON schema.|
|`inlineEntitiesDefinition` | Inline JSON entity definitions. This parameter supersedes the entitiesDefinitionUri parameter if present. No more than 10 KB of configuration may be provided inline. See [Custom Entity Definition](#custom-entity-definition-format) below for expected JSON schema. |
-|`defaultLanguageCode` | (Optional) Language code of the input text used to tokenize and delineate input text. The following languages are supported: `da, de, en, es, fi, fr, it, ko, pt`. The default is English (`en`). If you pass a languagecode-countrycode format, only the languagecode part of the format is used. |
-|`globalDefaultCaseSensitive` | (Optional) Default case sensitive value for the skill. If `defaultCaseSensitive` value of an entity is not specified, this value will become the `defaultCaseSensitive` value for that entity. |
-|`globalDefaultAccentSensitive` | (Optional) Default accent sensitive value for the skill. If `defaultAccentSensitive` value of an entity is not specified, this value will become the `defaultAccentSensitive` value for that entity. |
-|`globalDefaultFuzzyEditDistance` | (Optional) Default fuzzy edit distance value for the skill. If `defaultFuzzyEditDistance` value of an entity is not specified, this value will become the `defaultFuzzyEditDistance` value for that entity. |
+|`defaultLanguageCode` | (Optional) Language code of the input text used to tokenize and delineate input text. The following languages are supported: `da, de, en, es, fi, fr, it, ko, pt`. The default is English (`en`). If you pass a `languagecode-countrycode` format, only the `languagecode` part of the format is used. |
+|`globalDefaultCaseSensitive` | (Optional) Default case sensitive value for the skill. If `defaultCaseSensitive` value of an entity isn't specified, this value will become the `defaultCaseSensitive` value for that entity. |
+|`globalDefaultAccentSensitive` | (Optional) Default accent sensitive value for the skill. If `defaultAccentSensitive` value of an entity isn't specified, this value will become the `defaultAccentSensitive` value for that entity. |
+|`globalDefaultFuzzyEditDistance` | (Optional) Default fuzzy edit distance value for the skill. If `defaultFuzzyEditDistance` value of an entity isn't specified, this value will become the `defaultFuzzyEditDistance` value for that entity. |
## Skill inputs
Parameters are case-sensitive.
| `text` | The text to analyze. |
| `languageCode` | Optional. Default is `"en"`. |

## Skill outputs
-| Output name | Description |
+| Output name | Description |
||-|
-| `entities` | An array of objects that contain information about the matches that were found, and related metadata. Each of the entities identified may contain the following fields: <ul> <li> *name*: The top-level entity identified. The entity represents the "normalized" form. </li> <li> *id*: A unique identifier for the entity as defined by the user in the "Custom Entity Definition Format".</li> <li> *description*: Entity description as defined by the user in the "Custom Entity Definition Format". </li> <li> *type:* Entity type as defined by the user in the "Custom Entity Definition Format".</li> <li> *subtype:* Entity subtype as defined by the user in the "Custom Entity Definition Format".</li> <li> *matches*: Collection that describes each of the matches for that entity on the source text. Each match will have the following members: </li> <ul> <li> *text*: The raw text match from the source document. </li> <li> *offset*: The location where the match was found in the text. </li> <li> *length*: The length of the matched text. </li> <li> *matchDistance*: The number of characters different this match was from original entity name or alias. </li> </ul> </ul>
- |
+| `entities` | An array of complex types that contains the following fields: <ul><li>`"name"`: The top-level entity; it represents the "normalized" form. </li><li>`"id"`: A unique identifier for the entity as defined in the "Custom Entity Definition". </li> <li>`"description"`: Entity description as defined by the user in the "Custom Entity Definition Format". </li> <li>`"type"`: Entity type as defined by the user in the "Custom Entity Definition Format".</li> <li> `"subtype"`: Entity subtype as defined by the user in the "Custom Entity Definition Format".</li> <li>`"matches"`: An array of complex types that contain: <ul><li>`"text"` from the source document </li><li>`"offset"` location where the match was found, </li><li>`"length"` of the text measured in characters <li>`"matchDistance"` or the number of characters that differ between the match and the entity `"name"`. </li></li></ul></ul> |
-## Custom Entity Definition Format
+## Custom entity definition format
-There are 3 different ways to provide the list of custom entities to the Custom Entity Lookup skill. You can provide the list in a .CSV file, a .JSON file or as an inline definition as part of the skill definition.
+There are three approaches for providing the list of custom entities to the Custom Entity Lookup skill:
-If the definition file is a .CSV or .JSON file, the path of the file needs to be provided as part of the *entitiesDefinitionUri* parameter. In this case, the file is downloaded once at the beginning of each indexer run. The file must be accessible as long as the indexer is intended to run. Also, the file must be encoded UTF-8.
++ .CSV file (UTF-8 encoded)
++ .JSON file (UTF-8 encoded)
++ Inline within the skill definition
-If the definition is provided inline, it should be provided as inline as the content of the *inlineEntitiesDefinition* skill parameter.
+If the definition file is in a .CSV or .JSON file, provide the full path in the "entitiesDefinitionUri" parameter. The file is downloaded at the start of each indexer run. It must remain accessible until the indexer stops.
+
+If you're using an inline definition, specify it under the "inlineEntitiesDefinition" skill parameter.
+
+> [!NOTE]
+> Indexers support specialized parsing modes for JSON and CSV files. When using the custom entity lookup skill, keep "parsingMode" set to "default". The skill expects JSON and CSV in an unparsed state.
### CSV format
-You can provide the definition of the custom entities to look for in a Comma-Separated Value (CSV) file by providing the path to the file and setting it in the *entitiesDefinitionUri* skill parameter. The path should be at an https location. The definition file can be up to 10 MB in size.
+You can provide the definition of the custom entities to look for in a Comma-Separated Value (CSV) file by providing the path to the file and setting it in the "entitiesDefinitionUri" skill parameter. The path should be at an https location. The definition file can be up to 10 MB in size.
The CSV format is simple. Each line represents a unique entity, as shown below:
Microsoft, MSFT
Satya Nadella
```
-In this case, there are three entities that can be returned as entities found (Bill Gates, Satya Nadella, Microsoft), but they will be identified if any of the terms on the line (aliases) are matched on the text. For instance, if the string "William H. Gates" is found in a document, a match for the "Bill Gates" entity will be returned.
+In this case, there are three entities that can be returned (Bill Gates, Satya Nadella, Microsoft). Aliases follow after the main entity. A match on an alias is bundled under the primary entity. For example, if the string "William H. Gates" is found in a document, a match for the "Bill Gates" entity will be returned.
### JSON format

You can provide the definition of the custom entities to look for in a JSON file as well. The JSON format gives you a bit more flexibility since it allows you to define matching rules per term. For instance, you can specify the fuzzy matching distance (Damerau-Levenshtein distance) for each term or whether the matching should be case-sensitive or not.
- Just like with CSV files, you need to provide the path to the JSON file and set it in the *entitiesDefinitionUri* skill parameter. The path should be at an https location. The definition file can be up to 10 MB in size.
+ Just like with CSV files, you need to provide the path to the JSON file and set it in the "entitiesDefinitionUri" skill parameter. The path should be at an https location. The definition file can be up to 10 MB in size.
The most basic JSON custom entity list definition can be a list of entities to match:
The most basic JSON custom entity list definition can be a list of entities to m
] ```
-A more complex example of a JSON definition can optionally provide the id, description, type and subtype of each entity -- as well as other *aliases*. If an alias term is matched, the entity will be returned as well:
+More complex definitions can provide a user-defined ID, description, type, subtype, and aliases. If an alias term is matched, the entity will be returned as well:
```json [
A more complex example of a JSON definition can optionally provide the id, descr
}, { "name" : "Xbox One",
- "type": "Harware",
+ "type": "Hardware",
"subtype" : "Gaming Device", "id" : "4e36bf9d-5550-4396-8647-8e43d7564a76", "description" : "The Xbox One product"
A more complex example of a JSON definition can optionally provide the id, descr
] ```
-The tables below describe in more details the different configuration parameters you can set when defining the entities to match:
+The tables below describe the configuration parameters you can set when defining custom entities:
| Field name | Description |
| --- | --- |
The tables below describe in more details the different configuration parameters
### Inline format
-In some cases, it may be more convenient to provide the list of custom entities to match inline directly into the skill definition. In that case you can use a similar JSON format to the one described above, but it is inlined in the skill definition.
-Only configurations that are less than 10 KB in size (serialized size) can be defined inline.
+In some cases, it may be more convenient to embed the custom entity definition so that it's inline with the skill definition. You can use the same JSON format as the one described above, except that it's included within the skill definition. Only configurations that are less than 10 KB in size (serialized size) can be defined inline.
-## Sample definition
+## Sample skill definition
A sample skill definition using an inline format is shown below:
A sample skill definition using an inline format is shown below:
} ```
-Alternatively, if you decide to provide a pointer to the entities definition file, a sample skill definition using the `entitiesDefinitionUri` format is shown below:
+Alternatively, you can point to an external entities definition file. A sample skill definition using the `entitiesDefinitionUri` format is shown below:
```json {
Alternatively, if you decide to provide a pointer to the entities definition fil
} ] }
+```
+
+## Sample index definition
+This section provides a sample index definition. Both "entities" and "matches" are arrays of complex types. You can have multiple entities per document, and multiple matches for each entity.
+
+```json
+{
+ "name": "entities",
+ "type": "Collection(Edm.ComplexType)",
+ "fields": [
+ {
+ "name": "name",
+ "type": "Edm.String",
+ "facetable": false,
+ "filterable": false,
+ "retrievable": true,
+ "searchable": true,
+ "sortable": false,
+ },
+ {
+ "name": "id",
+ "type": "Edm.String",
+ "facetable": false,
+ "filterable": false,
+ "retrievable": true,
+ "searchable": false,
+ "sortable": false,
+ },
+ {
+ "name": "description",
+ "type": "Edm.String",
+ "facetable": false,
+ "filterable": false,
+ "retrievable": true,
+ "searchable": true,
+ "sortable": false,
+ },
+ {
+ "name": "type",
+ "type": "Edm.String",
+ "facetable": true,
+ "filterable": true,
+ "retrievable": true,
+ "searchable": false,
+ "sortable": false,
+ },
+ {
+ "name": "subtype",
+ "type": "Edm.String",
+ "facetable": true,
+ "filterable": true,
+ "retrievable": true,
+ "searchable": false,
+ "sortable": false,
+ },
+ {
+ "name": "matches",
+ "type": "Collection(Edm.ComplexType)",
+ "fields": [
+ {
+ "name": "text",
+ "type": "Edm.String",
+ "facetable": false,
+ "filterable": false,
+ "retrievable": true,
+ "searchable": true,
+ "sortable": false,
+ },
+ {
+ "name": "offset",
+ "type": "Edm.Int32",
+ "facetable": true,
+ "filterable": true,
+ "retrievable": true,
+ "sortable": false,
+ },
+ {
+ "name": "length",
+ "type": "Edm.Int32",
+ "facetable": true,
+ "filterable": true,
+ "retrievable": true,
+ "sortable": false,
+ },
+ {
+ "name": "matchDistance",
+ "type": "Edm.Double",
+ "facetable": true,
+ "filterable": true,
+ "retrievable": true,
+ "sortable": false,
+ }
+ ]
+ }
+ ]
+}
```
-## Sample input
+## Sample input data
```json {
Alternatively, if you decide to provide a pointer to the entities definition fil
} ```
-## Errors and warnings
+## Warnings
-### Warning: Reached maximum capacity for matches, skipping all further duplicate matches.
+`"Reached maximum capacity for matches, skipping all further duplicate matches."`
-This warning will be emitted if the number of matches detected is greater than the maximum allowed. In this case, we will stop including duplicate matches. If this is unacceptable to you, please file a [support ticket](https://portal.azure.com/#create/Microsoft.Support) so we can assist you with your individual use case.
+This warning will be emitted if the number of matches detected is greater than the maximum allowed. No more duplicate matches will be returned. If you need a higher threshold, you can file a [support ticket](https://portal.azure.com/#create/Microsoft.Support) for assistance with your individual use case.
## See also ++ [Custom Entity Lookup sample and readme](https://github.com/Azure-Samples/azure-search-postman-samples/tree/master/skill-examples/custom-entity-lookup-skill) + [Built-in skills](cognitive-search-predefined-skills.md) + [How to define a skillset](cognitive-search-defining-skillset.md) + [Entity Recognition skill (to search for well known entities)](cognitive-search-skill-entity-recognition-v3.md)
search Search Howto Index Sharepoint Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-sharepoint-online.md
The SharePoint indexer supports both [delegated and application](/graph/auth/aut
+ Application permissions, where the indexer runs under the identity of the SharePoint tenant with access to all sites and files within the SharePoint tenant. The indexer requires a [client secret](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md) to access the SharePoint tenant. The indexer will also require [tenant admin approval](../active-directory/manage-apps/grant-admin-consent.md) before it can index any content.
+Note that if your Azure Active Directory organization has [Conditional Access enabled](../active-directory/conditional-access/overview.md) and your administrator can't grant any device access for Delegated permissions, you should consider Application permissions instead. For more information, see [SharePoint Conditional Access policies](search-indexer-troubleshooting.md#sharepoint-conditional-access-policies).
+ ### Step 3: Create an Azure AD application The SharePoint indexer will use this Azure Active Directory (Azure AD) application for authentication.
search Search Indexer Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-troubleshooting.md
Previously updated : 11/17/2021 Last updated : 05/23/2022 # Indexer troubleshooting guidance for Azure Cognitive Search Occasionally, indexers run into problems and there is no error to help with diagnosis. This article covers problems and potential resolutions when indexer results are unexpected and there is limited information to go on. If you have an error to investigate, see [Troubleshooting common indexer errors and warnings](cognitive-search-common-errors-warnings.md) instead. + <a name="connection-errors"></a> ## Troubleshoot connections to restricted resources
For data sources that are secured by Azure network security mechanisms, indexers
### Firewall rules
-Azure Storage, Cosmos DB and Azure SQL provide a configurable firewall. There's no specific error message when the firewall is enabled. Typically, firewall errors are generic and look like `The remote server returned an error: (403) Forbidden` or `Credentials provided in the connection string are invalid or have expired`.
+Azure Storage, Cosmos DB and Azure SQL provide a configurable firewall. There's no specific error message when the firewall is enabled. Typically, firewall errors are generic. Some common errors include:
+* `The remote server returned an error: (403) Forbidden`
+* `This request is not authorized to perform this operation`
+* `Credentials provided in the connection string are invalid or have expired`
+ There are two options for allowing indexers to access these resources in such an instance:
Details for configuring IP address range restrictions for each data source type
Azure functions (that could be used as a [Custom Web Api skill](cognitive-search-custom-skill-web-api.md)) also support [IP address restrictions](../azure-functions/ip-addresses.md#ip-address-restrictions). The list of IP addresses to configure would be the IP address of your search service and the IP address range of `AzureCognitiveSearch` service tag.
-For more information about connecting to a virtual machine, see [Configure a connection to SQL Server on an Azure VM](search-howto-connecting-azure-sql-iaas-to-azure-search-using-indexers.md)
+For more information about connecting to a virtual machine, see [Configure a connection to SQL Server on an Azure VM](search-howto-connecting-azure-sql-iaas-to-azure-search-using-indexers.md).
### Configure network security group (NSG) rules
In such cases, the Azure VM, or the SQL managed instance can be configured to re
The `AzureCognitiveSearch` service tag can be directly used in the inbound [NSG rules](../virtual-network/manage-network-security-group.md#work-with-security-rules) without needing to look up its IP address range.
-More details for accessing data in a SQL managed instance are outlined [here](search-howto-connecting-azure-sql-mi-to-azure-search-using-indexers.md)
+More details for accessing data in a SQL managed instance are outlined [here](search-howto-connecting-azure-sql-mi-to-azure-search-using-indexers.md).
+
+### Network errors
+
+Usually, network errors are generic. Some common errors include:
+* `A network-related or instance-specific error occurred while establishing a connection to the server`
+* `The server was not found or was not accessible`
+* `Verify that the instance name is correct and that the source is configured to allow remote connections`
+
+If you're receiving any of these errors:
+
+* Make sure your source is accessible by trying to connect to it directly and not through the search service
+* Check your source in the Azure portal for any current errors or outages
+* Check for any network outages in [Azure Status](https://status.azure.com/status)
+* Check that you're using public DNS for name resolution and not [Azure Private DNS](../dns/private-dns-overview.md)
+ ## Azure SQL Database serverless indexing (error code 40613)
The blob indexer [finds and extracts text from blobs in a container](search-howt
* The blob indexer is configured to only index metadata. To extract content, the blob indexer must be configured to [extract both content and metadata](search-howto-indexing-azure-blob-storage.md#PartsOfBlobToIndex): + ```http PUT https://[service name].search.windows.net/indexers/[indexer name]?api-version=2020-06-30 Content-Type: application/json
Indexers leverage a conservative buffering strategy to ensure that every new and
Indexers are not intended to be invoked multiple times in quick succession. If you need updates quickly, the supported approach is to push updates to the index while simultaneously updating the data source. For on-demand processing, we recommend that you pace your requests in five-minute intervals or more, and run the indexer on a schedule. ### Example of duplicate document processing with 30 second buffer+ Conditions under which a document is processed twice is explained below in a timeline that notes each action and counter action. The following timeline illustrates the issue: | Timeline (hh:mm:ss) | Event | Indexer High Water Mark | Comment |
Conditions under which a document is processed twice is explained below in a tim
In practice, this scenario only happens when on-demand indexers are manually invoked within minutes of each other, for certain data sources. It may result in mismatched numbers (like the indexer processed 345 documents total according to the indexer execution stats, but there are 340 documents in the data source and index) or potentially increased billing if you are running the same skills for the same document multiple times. Running an indexer using a schedule is the preferred recommendation. +
+## Indexing documents with sensitivity labels
+
+If you have [sensitivity labels set on documents](/microsoft-365/compliance/sensitivity-labels), you might not be able to index them. If you're getting errors, remove the labels prior to indexing.
++ ## See also * [Troubleshooting common indexer errors and warnings](cognitive-search-common-errors-warnings.md)
-* [Monitor indexer-based indexing](search-howto-monitor-indexers.md)
+* [Monitor indexer-based indexing](search-howto-monitor-indexers.md)
search Search Manage Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-manage-azure-cli.md
ms.devlang: azurecli- Previously updated : 08/03/2021+ Last updated : 05/23/2022 # Manage your Azure Cognitive Search service with the Azure CLI
Last updated 08/03/2021
> * [Portal](search-manage.md) > * [PowerShell](search-manage-powershell.md) > * [Azure CLI](search-manage-azure-cli.md)
-> * [REST API](/rest/api/searchmanagement/)
+> * [REST API](search-manage-rest.md)
> * [.NET SDK](/dotnet/api/microsoft.azure.management.search) > * [Python](https://pypi.python.org/pypi/azure-mgmt-search/0.1.0)
search Search Manage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-manage-powershell.md
ms.devlang: powershell- Previously updated : 04/19/2022 + Last updated : 05/23/2022
> * [Portal](search-manage.md) > * [PowerShell](search-manage-powershell.md) > * [Azure CLI](search-manage-azure-cli.md)
-> * [REST API](/rest/api/searchmanagement/)
+> * [REST API](search-manage-rest.md)
> * [.NET SDK](/dotnet/api/microsoft.azure.management.search) > * [Python](https://pypi.python.org/pypi/azure-mgmt-search/0.1.0)
search Search Manage Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-manage-rest.md
+
+ Title: REST APIs for search management
+
+description: Create and configure an Azure Cognitive Search service with the Management REST API. The Management REST API is comprehensive in scope, with access to generally available and preview features.
+++++ Last updated : 05/23/2022++
+# Manage your Azure Cognitive Search service with REST APIs
+
+> [!div class="op_single_selector"]
+> * [Portal](search-manage.md)
+> * [PowerShell](search-manage-powershell.md)
+> * [Azure CLI](search-manage-azure-cli.md)
+> * [REST API](search-manage-rest.md)
+> * [.NET SDK](/dotnet/api/microsoft.azure.management.search)
+> * [Python](https://pypi.python.org/pypi/azure-mgmt-search/0.1.0)
+
+In this article, learn how to create and configure an Azure Cognitive Search service using the [Management REST APIs](/rest/api/searchmanagement/). Only the Management REST APIs are guaranteed to provide early access to [preview features](/rest/api/searchmanagement/management-api-versions#2021-04-01-preview).
+
+> [!div class="checklist"]
+> * [List search services](#list-search-services)
+> * [Create or update a service](#create-or-update-services)
+> * [(preview) Enable Azure role-based access control for data plane](#enable-rbac)
+> * [(preview) Enforce a customer-managed key policy](#enforce-cmk)
+> * [(preview) Disable semantic search](#disable-semantic-search)
+> * [(preview) Disable workloads that push data to external resources](#disable-external-access)
+
+All of the Management REST APIs have examples. If a task isn't covered in this article, see the [API reference](/rest/api/searchmanagement/) instead.
+
+## Prerequisites
+
+* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-search/)
+
+* [Postman](https://www.postman.com/downloads/) or another REST client that sends HTTP requests
+
+* Azure Active Directory (Azure AD) to obtain a bearer token for request authentication
+
+## Create a security principal
+
+Management REST API calls are authenticated through Azure Active Directory (Azure AD). You'll need a security principal for your client, along with permissions to create and configure a resource. This section explains how to create a security principal and assign a role.
+
+The following steps are from ["How to call REST APIs with Postman"](/rest/api/azure/#how-to-call-azure-rest-apis-with-postman).
+
+An easy way to generate the required client ID and password is to use the **Try It** feature in the [Create a service principal](/cli/azure/create-an-azure-service-principal-azure-cli#1-create-a-service-principal) article.
+
+1. In [Create a service principal](/cli/azure/create-an-azure-service-principal-azure-cli#1-create-a-service-principal), select **Try It**. Sign in to your Azure subscription.
+
+1. Get your subscription ID. In the console, enter the following command:
+
+ ```azurecli
+ az account show --query id -o tsv
+    ```
+
+1. Create a resource group for your security principal:
+
+ ```azurecli
+ az group create -l 'westus2' -n 'MyResourceGroup'
+ ```
+
+1. Paste in the following command. Replace the placeholder values with valid values: a descriptive security principal name, your subscription ID, and the resource group name. Press Enter to run the command. Notice that the security principal has "owner" permissions, which are necessary for creating or updating an Azure resource.
+
+ ```azurecli
+ az ad sp create-for-rbac --name mySecurityPrincipalName \
+ --role owner \
+ --scopes /subscriptions/mySubscriptionID/resourceGroups/myResourceGroupName
+ ```
+
+ You'll use "appId", "password", and "tenantId" for the variables "clientId", "clientSecret", and "tenantId" in the next section.
+
+## Set up Postman
+
+The following steps are from [this blog post](https://blog.jongallant.com/2021/02/azure-rest-apis-postman-2021/), which you can refer to if you need more detail.
+
+1. Start a new Postman collection and edit its properties. In the Variables tab, create the following variables:
+
+ | Variable | Description |
+ |-|-|
+    | clientId | Provide the "appId" that was generated when you created the security principal in Azure AD. |
+ | clientSecret | Provide the "password" that was created for your client. |
+ | tenantId | Provide the "tenant" that was returned in the previous step. |
+ | subscriptionId | Provide the subscription ID for your subscription. |
+ | resource | Enter `https://management.azure.com/`. This Azure resource is used for all control plane operations. |
+ | bearerToken | (leave blank; the token is generated programmatically) |
+
+1. In the Authorization tab, select **Bearer Token** as the type.
+
+1. In the **Token** field, specify the variable placeholder `{{bearerToken}}`.
+
+1. In the Pre-request Script tab, paste in the following script:
+
+ ```javascript
+ pm.test("Check for collectionVariables", function () {
+ let vars = ['clientId', 'clientSecret', 'tenantId', 'subscriptionId'];
+ vars.forEach(function (item, index, array) {
+ console.log(item, index);
+ pm.expect(pm.collectionVariables.get(item), item + " variable not set").to.not.be.undefined;
+ pm.expect(pm.collectionVariables.get(item), item + " variable not set").to.not.be.empty;
+ });
+
+ if (!pm.collectionVariables.get("bearerToken") || Date.now() > new Date(pm.collectionVariables.get("bearerTokenExpiresOn") * 1000)) {
+ pm.sendRequest({
+ url: 'https://login.microsoftonline.com/' + pm.collectionVariables.get("tenantId") + '/oauth2/token',
+ method: 'POST',
+ header: 'Content-Type: application/x-www-form-urlencoded',
+ body: {
+ mode: 'urlencoded',
+ urlencoded: [
+ { key: "grant_type", value: "client_credentials", disabled: false },
+ { key: "client_id", value: pm.collectionVariables.get("clientId"), disabled: false },
+ { key: "client_secret", value: pm.collectionVariables.get("clientSecret"), disabled: false },
+ { key: "resource", value: pm.collectionVariables.get("resource") || "https://management.azure.com/", disabled: false }
+ ]
+ }
+ }, function (err, res) {
+ if (err) {
+ console.log(err);
+ } else {
+ let resJson = res.json();
+ pm.collectionVariables.set("bearerTokenExpiresOn", resJson.expires_on);
+ pm.collectionVariables.set("bearerToken", resJson.access_token);
+ }
+ });
+ }
+ });
+ ```
+
+1. Save the collection.
+
+Now that Postman is set up, you can send REST calls similar to the ones described in this article. You'll update the endpoint and the request body where applicable.
+
+## List search services
+
+Returns all search services under the current subscription, including detailed service information:
+
+```rest
+GET https://management.azure.com/subscriptions/{{subscriptionId}}/providers/Microsoft.Search/searchServices?api-version=2021-04-01-preview
+```
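+
+To check a single service rather than list the whole subscription, you can scope the request to the service name. This is a minimal sketch that assumes the same `2021-04-01-preview` API version and the `{{resource-group}}` and `{{search-service-name}}` variables used in later examples:
+
+```rest
+GET https://management.azure.com/subscriptions/{{subscriptionId}}/resourceGroups/{{resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}?api-version=2021-04-01-preview
+```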
+
+## Create or update services
+
+Creates or updates a search service under the current subscription:
+
+```rest
+PUT https://management.azure.com/subscriptions/{{subscriptionId}}/resourceGroups/{{resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}?api-version=2021-04-01-preview
+{
+ "location": "{{region}}",
+ "sku": {
+ "name": "basic"
+ },
+ "properties": {
+ "replicaCount": 1,
+ "partitionCount": 1,
+ "hostingMode": "default"
+ }
+}
+```
+
+<a name="enable-rbac"></a>
+
+## (preview) Enable Azure role-based access control for data plane
+
+To use Azure role-based access control (Azure RBAC), set "authOptions" to "aadOrApiKey" and then send the request.
+
+If you want to use Azure RBAC exclusively, [turn off API key authentication](search-security-rbac.md#disable-api-key-authentication) by following up with a second request, this time setting "disableLocalAuth" to true.
+
+```rest
+PUT https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}?api-version=2021-04-01-preview
+{
+ "location": "{{region}}",
+ "tags": {
+ "app-name": "My e-commerce app"
+ },
+ "sku": {
+ "name": "standard"
+ },
+ "properties": {
+ "replicaCount": 1,
+ "partitionCount": 1,
+ "hostingMode": "default",
+ "disableLocalAuth": false,
+ "authOptions": "aadOrApiKey"
+ }
+}
+```
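+
+As a minimal sketch of that second request, the same PUT can be resent with "disableLocalAuth" set to true. This assumes that "authOptions" is omitted, since it only applies while key-based authentication is still allowed:
+
+```rest
+PUT https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}?api-version=2021-04-01-preview
+{
+    "location": "{{region}}",
+    "sku": {
+        "name": "standard"
+    },
+    "properties": {
+        "replicaCount": 1,
+        "partitionCount": 1,
+        "hostingMode": "default",
+        "disableLocalAuth": true
+    }
+}
+```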
+
+<a name="enforce-cmk"></a>
+
+## (preview) Enforce a customer-managed key policy
+
+If you're using [customer-managed encryption](search-security-manage-encryption-keys.md), you can enable "encryptionWithCmk" with "enforcement" set to "Enabled" if you want the search service to report its compliance status.
+
+When you enable this policy, calls that create objects with sensitive data, such as the connection string within a data source, will fail if an encryption key isn't provided: `"Error creating Data Source: "CannotCreateNonEncryptedResource: The creation of non-encrypted DataSources is not allowed when encryption policy is enforced."`
+
+```rest
+PUT https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}?api-version=2021-04-01-preview
+{
+ "location": "westus",
+ "sku": {
+ "name": "standard"
+ },
+ "properties": {
+ "replicaCount": 1,
+ "partitionCount": 1,
+ "hostingMode": "default",
+ "encryptionWithCmk": {
+ "enforcement": "Enabled",
+ "encryptionComplianceStatus": "Compliant"
+    }
+ }
+}
+```
+
+<a name="disable-semantic-search"></a>
+
+## (preview) Disable semantic search
+
+Although [semantic search is not enabled](semantic-search-overview.md#enable-semantic-search) by default, you can lock down the feature at the service level.
+
+```rest
+PUT https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}?api-version=2021-04-01-Preview
+ {
+ "location": "{{region}}",
+ "sku": {
+ "name": "standard"
+ },
+ "properties": {
+ "semanticSearch": "disabled"
+ }
+ }
+```
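+
+To re-enable the feature, the same request can be resent with "semanticSearch" set to "free" (the default) or "standard". A minimal sketch of the re-enable request:
+
+```rest
+PUT https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}?api-version=2021-04-01-Preview
+    {
+        "location": "{{region}}",
+        "sku": {
+            "name": "standard"
+        },
+        "properties": {
+            "semanticSearch": "free"
+        }
+    }
+```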
+
+<a name="disable-external-access"></a>
+
+## (preview) Disable workloads that push data to external resources
+
+Azure Cognitive Search [writes to external data sources](search-indexer-securing-resources.md) when updating a knowledge store, saving debug session state, or caching enrichments. The following example disables these workloads at the service level.
+
+```rest
+PUT https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}?api-version=2021-04-01-preview
+{
+ "location": "{{region}}",
+ "sku": {
+ "name": "standard"
+ },
+ "properties": {
+ "replicaCount": 1,
+ "partitionCount": 1,
+ "hostingMode": "default",
+ "disabledDataExfiltrationOptions": [
+ "All"
+ ]
+ }
+}
+```
+
+## Next steps
+
+After a search service is configured, next steps include [creating an index](search-how-to-create-search-index.md) or [querying an index](search-query-overview.md) using the portal, REST APIs, or the .NET SDK.
+
+* [Create an Azure Cognitive Search index in the Azure portal](search-get-started-portal.md)
+* [Set up an indexer to load data from other services](search-indexer-overview.md)
+* [Query an Azure Cognitive Search index using Search explorer in the Azure portal](search-explorer.md)
+* [How to use Azure Cognitive Search in .NET](search-howto-dotnet-sdk.md)
search Search Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-manage.md
tags: azure-portal Previously updated : 04/06/2021 Last updated : 05/23/2022 # Service administration for Azure Cognitive Search in the Azure portal
Last updated 04/06/2021
> > * [PowerShell](search-manage-powershell.md) > * [Azure CLI](search-manage-azure-cli.md)
-> * [REST API](/rest/api/searchmanagement/)
+> * [REST API](search-manage-rest.md)
> * [.NET SDK](/dotnet/api/microsoft.azure.management.search) > * [Portal](search-manage.md) > * [Python](https://pypi.python.org/pypi/azure-mgmt-search/0.1.0)>
-Azure Cognitive Search is a fully managed, cloud-based search service used for building a rich search experience into custom apps. This article covers the administration tasks that you can perform in the [Azure portal](https://portal.azure.com) for a search service that you've already created. The portal allows you to perform all [management tasks](#management-tasks) related to the service, as well as content management and exploration. As such, the portal provides broad spectrum access to all aspects of search service operations.
+Azure Cognitive Search is a fully managed, cloud-based search service used for building a rich search experience into custom apps. This article covers the administration tasks that you can perform in the [Azure portal](https://portal.azure.com) for a search service that you've already created.
-Each search service is managed as a standalone resource. The following image shows the portal pages for a single free search service called "demo-search-svc". Although you might be accustomed to using Azure PowerShell or Azure CLI for service management, it makes sense to become familiar with the tools and capabilities that the portal pages provide. Some tasks are just easier and faster to perform in the portal than through code.
+Depending on your permission level, the portal covers virtually all aspects of search service operations, including:
+
+* [Service administration](#management-tasks)
+* Content management
+* Content exploration
+
+Each search service is managed as a standalone resource. The following image shows the portal pages for a single free search service called "demo-search-svc".
## Overview (home) page
The overview page is the "home" page of each service. Below, the areas on the sc
| Area | Description | ||-|
-| 1 | The **Essentials** section shows you service properties including the endpoint used when setting up connections. It also shows you tier, replica, and partition counts at a glance. |
-| 2 | At the top of the page are a series of commands for invoking interactive tools or managing the service. Both [Import data](search-get-started-portal.md) and [Search explorer](search-explorer.md) are commonly used for prototyping and exploration. |
-| 3 | Below the **Essentials** section, are a series of tabbed subpages for quick access to usage statistics, service health metrics, and access to all of the existing indexes, indexers, data sources, and skillsets on your service. If you select an index or another object, additional pages become available to show object composition, settings, and status (if applicable). |
-| 4 | To the left, you can access links that open additional pages for accessing API keys used on connections, configuring the service, configuring security, monitoring operations, automating tasks, and getting support. |
+| 1 | The **Essentials** section lists service properties, such as the service endpoint, service tier, and replica and partition counts. |
+| 2 | A command bar at the top of the page includes [Import data](search-get-started-portal.md) and [Search explorer](search-explorer.md), used for prototyping and exploration. |
+| 3 | Tabbed pages in the center provide quick access to usage statistics, service health metrics, and access to all of the existing indexes, indexers, data sources, and skillsets.|
+| 4 | Navigation links are to the left. |
### Read-only service properties
-Several aspects of a search service are determined when the service is provisioned and cannot be changed:
+Several aspects of a search service are determined when the service is provisioned and can't be easily changed:
+
+* Service name
+* Service location <sup>1</sup>
+* Service tier <sup>2</sup>
-* Service name (you cannot rename a search service)
-* Service location (you cannot easily move an intact search service to another region. Although there is a template, moving the content is a manual process)
-* Service tier (you cannot switch from S1 to S2, for example, without creating a new service)
+<sup>1</sup> Although there are ARM and Bicep templates for service deployment, moving content is a manual process.
+
+<sup>2</sup> Switching tiers requires creating a new service or filing a support ticket to request a tier upgrade.
## Management tasks
-Service administration is lightweight by design, and is often defined by the tasks you can perform relative to the service itself:
+Service administration includes the following tasks:
* [Adjust capacity](search-capacity-planning.md) by adding or removing replicas and partitions
-* [Manage API keys](search-security-api-keys.md) used for admin and query operations
+* [Rotate API keys](search-security-api-keys.md) used for admin and query operations
* [Control access to admin operations](search-security-rbac.md) through role-based security * [Configure IP firewall rules](service-configure-firewall.md) to restrict access by IP address * [Configure a private endpoint](service-create-private-endpoint.md) using Azure Private Link and a private virtual network * [Monitor service health and operations](monitor-azure-cognitive-search.md): storage, query volumes, and latency
-You can also enumerate all of the objects created on the service: indexes, indexers, data sources, skillsets, and so forth. The portal's overview page shows you all of the content that exists on your service. The vast majority of operations on a search service are content-related.
+There is feature parity across all modalities and languages except for preview management features. In general, preview management features are released through the Management REST API first. Programmatic support for service administration can be found in the following APIs and modules:
+
+* [Management REST API reference](/rest/api/searchmanagement/)
+* [Az.Search PowerShell module](search-manage-powershell.md)
+* [az search Azure CLI module](search-manage-azure-cli.md)
-The same management tasks performed in the portal can also be handled programmatically through the [Management REST APIs](/rest/api/searchmanagement/), [Az.Search PowerShell module](search-manage-powershell.md), [az search Azure CLI module](search-manage-azure-cli.md), and the Azure SDKs for .NET, Python, Java, and JavaScript. Administrative tasks are fully represented across portal and all programmatic interfaces. There is no specific administrative task that is available in only one modality.
+You can also use the management client libraries in the Azure SDKs for .NET, Python, Java, and JavaScript.
-Cognitive Search leverages other Azure services for deeper monitoring and management. By itself, the only data stored within the search service is object content (indexes, indexer and data source definitions, and other objects). Metrics reported out to portal pages are pulled from internal logs on a rolling 30-day cycle. For user-controlled log retention and additional events, you will need [Azure Monitor](../azure-monitor/index.yml). For more information about setting up diagnostic logging for a search service, see [Collect and analyze log data](monitor-azure-cognitive-search.md).
+## Data collection and retention
+
+Cognitive Search uses other Azure services for deeper monitoring and management. By itself, the only persistent data stored within the search service are the structures that support indexing, enrichment, and queries. These structures include indexes, indexers, data sources, skillsets, and synonym maps. All other saved data, including debug session state and caching, is placed in Azure Storage.
+
+Metrics reported out to portal pages are pulled from internal logs on a rolling 30-day cycle. For user-controlled log retention and more events, you will need [Azure Monitor](../azure-monitor/index.yml) and a supported approach for retaining log data. For more information about setting up diagnostic logging for a search service, see [Collect and analyze log data](monitor-azure-cognitive-search.md).
## Administrator permissions
-When you open the search service overview page, the permissions assigned to your account determine what pages are available to you. The overview page at the beginning of the article shows the portal pages an administrator or contributor will see.
+When you open the search service overview page, the Azure role assigned to your account determines what portal content is available to you. The overview page at the beginning of the article shows the portal content available to an Owner or Contributor.
+
+Control plane roles include the following:
+
+* Owner
+* Contributor (same as Owner, minus the ability to assign roles)
+* Reader (access to service information and the Monitoring tab)
-In Azure resource, administrative rights are granted through role assignments. In the context of Azure Cognitive Search, [role assignments](search-security-rbac.md) will determine who can allocate replicas and partitions or manage API keys, regardless of whether they are using the portal, [PowerShell](search-manage-powershell.md), [Azure CLI](search-manage-azure-cli.md),or the [Management REST APIs](/rest/api/searchmanagement):
+If you want a combination of control plane and data plane permissions, consider Search Service Contributor. For more information, see [Built-in roles](search-security-rbac.md#built-in-roles-used-in-search).
> [!TIP]
-> Provisioning or decommissioning the service itself can be done by an Azure subscription administrator or co-administrator. Using Azure-wide mechanisms, you can lock a subscription or resource to prevent accidental or unauthorized deletion of your search service by users with admin rights. For more information, see [Lock resources to prevent unexpected deletion](../azure-resource-manager/management/lock-resources.md).
+> By default, any Owner or Co-owner can create or delete services. To prevent accidental deletions, you can [lock resources](../azure-resource-manager/management/lock-resources.md).
## Next steps
search Search Security Manage Encryption Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-manage-encryption-keys.md
Skip key generation if you already have a key in Azure Key Vault that you want t
:::image type="content" source="media/search-manage-encryption-keys/cmk-key-identifier.png" alt-text="Create a new key vault key" border="true":::
-## 3 - Create a security principle
+## 3 - Create a security principal
You have several options for accessing the encryption key at run time. The simplest approach is to retrieve the key using the managed identity and permissions of your search service. You can use either a system or user-managed identity. Doing so allows you to omit the steps for application registration and application secrets, and simplifies the encryption key definition.
Access permissions could be revoked at any given time. Once revoked, any search
1. Select **Next**.
-1. On the **Principle** page, find and select the security principle used by the search service to access the encryption key. This will either be the system-managed or user-managed identity of the search service, or the registered application.
+1. On the **Principal** page, find and select the security principal used by the search service to access the encryption key. This will either be the system-managed or user-managed identity of the search service, or the registered application.
1. Select **Next** and **Create**.
search Search Security Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-rbac.md
Previously updated : 04/26/2022 Last updated : 05/24/2022
Built-in roles include generally available and preview roles. If these roles are
| - | - | | [Owner](../role-based-access-control/built-in-roles.md#owner) | (Generally available) Full access to the search resource, including the ability to assign Azure roles. Subscription administrators are members by default. | | [Contributor](../role-based-access-control/built-in-roles.md#contributor) | (Generally available) Same level of access as Owner, minus the ability to assign roles or change authorization options. |
-| [Reader](../role-based-access-control/built-in-roles.md#reader) | (Generally available) Limited access to partial service information. In the portal, the Reader role can access information in the service Overview page, in the Essentials section and under the Monitoring tab. All other tabs and pages are off limits. </br></br>This role has access to service information: resource group, service status, location, subscription name and ID, tags, URL, pricing tier, replicas, partitions, and search units. This role also has access to service metrics: search latency, percentage of throttled requests, average queries per second. </br></br>There is no access to API keys, role assignments, content (indexes or synonym maps), or content metrics (storage consumed, number of objects). |
+| [Reader](../role-based-access-control/built-in-roles.md#reader) | (Generally available) Limited access to partial service information. In the portal, the Reader role can access information in the service Overview page, in the Essentials section and under the Monitoring tab. All other tabs and pages are off limits. </br></br>This role has access to service information: service name, resource group, service status, location, subscription name and ID, tags, URL, pricing tier, replicas, partitions, and search units. This role also has access to service metrics: search latency, percentage of throttled requests, average queries per second. </br></br>There is no access to API keys, role assignments, content (indexes or synonym maps), or content metrics (storage consumed, number of objects). |
| [Search Service Contributor](../role-based-access-control/built-in-roles.md#search-service-contributor) | (Generally available) This role is identical to the Contributor role and applies to control plane operations. </br></br>(Preview) When you enable the RBAC preview for the data plane, this role also provides full access to all data plane actions on indexes, synonym maps, indexers, data sources, and skillsets as defined by [`Microsoft.Search/searchServices/*`](../role-based-access-control/resource-provider-operations.md#microsoftsearch). This role is for search service administrators who need to fully manage both the service and its content. </br></br>Like Contributor, members of this role cannot make or manage role assignments or change authorization options. To use the preview capabilities of this role, your service must have the preview feature enabled, as described in this article. | | [Search Index Data Contributor](../role-based-access-control/built-in-roles.md#search-index-data-contributor) | (Preview) Provides full data plane access to content in all indexes on the search service. This role is for developers or index owners who need to import, refresh, or query the documents collection of an index. | | [Search Index Data Reader](../role-based-access-control/built-in-roles.md#search-index-data-reader) | (Preview) Provides read-only data plane access to search indexes on the search service. This role is for apps and users who run queries. |
If you are using Postman or another web testing tool, see the Tip below for help
1. [Assign roles](#step-3-assign-roles) on the service and verify they are working correctly against the data plane. > [!TIP]
-> Management REST API calls are authenticated through Azure Active Directory. For guidance on setting up a security principle and a request, see this blog post [Azure REST APIs with Postman (2021)](https://blog.jongallant.com/2021/02/azure-rest-apis-postman-2021/). The previous example was tested using the instructions and Postman collection provided in the blog post.
+> Management REST API calls are authenticated through Azure Active Directory. For guidance on setting up a security principal and a request, see this blog post [Azure REST APIs with Postman (2021)](https://blog.jongallant.com/2021/02/azure-rest-apis-postman-2021/). The previous example was tested using the instructions and Postman collection provided in the blog post.
You cannot combine steps one and two. In step one, "disableLocalAuth" must be fa
To re-enable key authentication, rerun the last request, setting "disableLocalAuth" to false. The search service will resume acceptance of API keys on the request automatically (assuming they are specified). > [!TIP]
-> Management REST API calls are authenticated through Azure Active Directory. For guidance on setting up a security principle and a request, see this blog post [Azure REST APIs with Postman (2021)](https://blog.jongallant.com/2021/02/azure-rest-apis-postman-2021/). The previous example was tested using the instructions and Postman collection provided in the blog post.
+> Management REST API calls are authenticated through Azure Active Directory. For guidance on setting up a security principal and a request, see this blog post [Azure REST APIs with Postman (2021)](https://blog.jongallant.com/2021/02/azure-rest-apis-postman-2021/). The previous example was tested using the instructions and Postman collection provided in the blog post.
## Conditional Access
search Semantic Search Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-search-overview.md
PUT https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups
To re-enable semantic search, rerun the above request, setting "semanticSearch" to either "free" (default) or "standard". > [!TIP]
-> Management REST API calls are authenticated through Azure Active Directory. For guidance on setting up a security principle and a request, see this blog post [Azure REST APIs with Postman (2021)](https://blog.jongallant.com/2021/02/azure-rest-apis-postman-2021/). The previous example was tested using the instructions and Postman collection provided in the blog post.
+> Management REST API calls are authenticated through Azure Active Directory. For guidance on setting up a security principal and a request, see this blog post [Azure REST APIs with Postman (2021)](https://blog.jongallant.com/2021/02/azure-rest-apis-postman-2021/). The previous example was tested using the instructions and Postman collection provided in the blog post.
## Next steps
security Secure Dev Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/secure-dev-overview.md
architects, developers, and testers who build and deploy secure Azure
solutions. [Security and Compliance Blueprints on
-Azure](https://servicetrust.microsoft.com/ViewPage/BlueprintOverview) –
+Azure](../../governance/blueprints/samples/azure-security-benchmark-foundation/index.md) –
Azure Security and Compliance Blueprints are resources that can help you build and launch cloud-powered applications that comply with stringent regulations and standards. ++ ## Next steps In the following articles, we recommend security controls and activities that can help you design, develop, and deploy secure applications.
security Azure CA Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/azure-CA-details.md
Title: Azure Certificate Authority details
-description: Create, deploy, and manage a cloud-native Public Key Infrastructure with Azure PKI.
+description: Certificate Authority details for Azure services that utilize x509 certs and TLS encryption.
Looking for CA details specific to Azure Active Directory? See the [Certificate
## Root Certificate Authorities
-| Certificate Authority | Expiry Date | Serial Number /<br>Thumbprint | Downloads |
+| Certificate Authority | Expiry Date | Serial Number /<br>Thumbprint | Download |
|- |- |- |- |
-| DigiCert Global Root CA | Nov 10, 2031 | 0x083be056904246b1a1756ac95991c74a<br>A8985D3A65E5E5C4B2D7D66D40C6DD2FB19C5436 | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/digicert/digicertglobalrootca2031-11-10der.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/digicert/digicertglobalrootca2031-11-10pem.crt) |
-| DigiCert Global Root G2 | Jan 15 2038 | 0x033af1e6a711a9a0bb2864b11d09fae5<br>DF3C24F9BFD666761B268073FE06D1CC8D4F82A4 | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220427T0114263671Z/media/cafiles/digicert/digicertglobalrootg22038-01-15der.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220427T0114263671Z/media/cafiles/digicert/digicertglobalrootg22038-01-15pem.crt) |
-| DigiCert Global Root G3 | Jan 15, 2038 | 0x055556bcf25ea43535c3a40fd5ab4572<br>7E04DE896A3E666D00E687D33FFAD93BE83D349E | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/digicert/digicertglobalrootg32038-01-15der.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/digicert/digicertglobalrootg32038-01-15pem.crt) |
-| Baltimore CyberTrust Root | May 12, 2025 | 0x20000b9<br>D4DE20D05E66FC53FE1A50882C78DB2852CAE474 | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/digicert/baltimorecybertrustroot2025-05-12der.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/digicert/baltimorecybertrustroot2025-05-12pem.crt) |
-| Microsoft ECC Root Certificate Authority 2017 | Jul 18, 2042 | 0x66f23daf87de8bb14aea0c573101c2ec<br>999A64C37FF47D9FAB95F14769891460EEC4C3C5 | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsofteccrootcertificateauthority20172042-07-18der.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsofteccrootcertificateauthority20172042-07-18pem.crt) |
-| Microsoft RSA Root Certificate Authority 2017 | Jul 18, 2042 | 0x1ed397095fd8b4b347701eaabe7f45b3<br>73A5E64A3BFF8316FF0EDCCC618A906E4EAE4D74 | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftrsarootcertificateauthority20172042-07-18der.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftrsarootcertificateauthority20172042-07-18pem.crt) |
+| Baltimore CyberTrust Root | May 12, 2025 | 0x20000b9<br>D4DE20D05E66FC53FE1A50882C78DB2852CAE474 | [PEM](https://crt.sh/?d=76) |
+| DigiCert Global Root CA | Nov 10, 2031 | 0x083be056904246b1a1756ac95991c74a<br>A8985D3A65E5E5C4B2D7D66D40C6DD2FB19C5436 | [PEM](https://crt.sh/?d=853428) |
+| DigiCert Global Root G2 | Jan 15 2038 | 0x033af1e6a711a9a0bb2864b11d09fae5<br>DF3C24F9BFD666761B268073FE06D1CC8D4F82A4 | [PEM](https://crt.sh/?d=8656329) |
+| DigiCert Global Root G3 | Jan 15, 2038 | 0x055556bcf25ea43535c3a40fd5ab4572<br>7E04DE896A3E666D00E687D33FFAD93BE83D349E | [PEM](https://crt.sh/?d=8568700) |
+| Microsoft ECC Root Certificate Authority 2017 | Jul 18, 2042 | 0x66f23daf87de8bb14aea0c573101c2ec<br>999A64C37FF47D9FAB95F14769891460EEC4C3C5 | [PEM](https://crt.sh/?d=2565145421) |
+| Microsoft RSA Root Certificate Authority 2017 | Jul 18, 2042 | 0x1ed397095fd8b4b347701eaabe7f45b3<br>73A5E64A3BFF8316FF0EDCCC618A906E4EAE4D74 | [PEM](https://crt.sh/?d=2565151295) |
## Subordinate Certificate Authorities | Certificate Authority | Expiry Date | Serial Number<br>Thumbprint | Downloads | |- |- |- |- |
-| DigiCert SHA2 Secure Server CA | Sep 22, 2030 | 0x02742eaa17ca8e21c717bb1ffcfd0ca0<br>626D44E704D1CEABE3BF0D53397464AC8080142C | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/digicert/digicertsha2secureserverca2030-09-22der.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/digicert/digicertsha2secureserverca2030-09-22pem.crt) |
-| DigiCert TLS Hybrid ECC SHA384 2020 CA1 | Sep 22, 2030 | 0x0a275fe704d6eecb23d5cd5b4b1a4e04<br>51E39A8BDB08878C52D6186588A0FA266A69CF28 | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/digicert/digicerttlshybrideccsha3842020ca12030-09-22der.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/digicert/digicerttlshybrideccsha3842020ca12030-09-22pem.crt) |
-| DigiCert Cloud Services CA-1 | Aug 4, 2030 | 0x019ec1c6bd3f597bb20c3338e551d877<br>81B68D6CD2F221F8F534E677523BB236BBA1DC56 | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/digicert/digicertcloudservicesca-12030-08-04der.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/digicert/digicertcloudservicesca-12030-08-04pem.crt) |
-| DigiCert Basic RSA CN CA G2 | Mar 4, 2030 | 0x02f7e1f982bad009aff47dc95741b2f6<br>4D1FA5D1FB1AC3917C08E43F65015E6AEA571179 | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/digicert/digicertbasicrsacncag22030-03-04der.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/digicert/digicertbasicrsacncag22030-03-04pem.crt) |
-| DigiCert TLS RSA SHA256 2020 CA1 | Apr 13, 2031 | 0x06d8d904d5584346f68a2fa754227ec4<br>1C58A3A8518E8759BF075B76B750D4F2DF264FCD | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/digicert/digicerttlsrsasha2562020ca12031-04-13der.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/digicert/digicerttlsrsasha2562020ca12031-04-13pem.crt) |
-| GeoTrust RSA CA 2018 | Nov 6, 2027 | 0x0546fe1823f7e1941da39fce14c46173<br>7CCC2A87E3949F20572B18482980505FA90CAC3B | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/digicert/geotrustrsaca20182027-11-06der.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/digicert/geotrustrsaca20182027-11-06pem.crt) |
-| Microsoft Azure TLS Issuing CA 01 | Jun 27, 2024 | 0x0aafa6c5ca63c45141ea3be1f7c75317<br>2F2877C5D778C31E0F29C7E371DF5471BD673173 | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220427T0114263671Z/media/cafiles/mspki/microsoftazuretlsissuingca012024-06-27-xsignder.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazuretlsissuingca012024-06-27-xsignpem.crt) |
-| Microsoft Azure TLS Issuing CA 01 | Jun 27, 2024 | 0x1dbe9496f3db8b8de700000000001d<br>B9ED88EB05C15C79639493016200FDAB08137AF3 | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazuretlsissuingca012024-06-27der.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazuretlsissuingca012024-06-27pem.crt) |
-| Microsoft Azure TLS Issuing CA 02 | Jun 27, 2024 | 0x0c6ae97cced599838690a00a9ea53214<br>E7EEA674CA718E3BEFD90858E09F8372AD0AE2AA | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazuretlsissuingca022024-06-27-xsignder.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazuretlsissuingca022024-06-27-xsignpem.crt) |
-| Microsoft Azure TLS Issuing CA 02 | Jun 27, 2024 | 0x330000001ec6749f058517b4d000000000001e<br>C5FB956A0E7672E9857B402008E7CCAD031F9B08 | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazuretlsissuingca022024-06-27der.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazuretlsissuingca022024-06-27pem.crt) |
-| Microsoft Azure TLS Issuing CA 05 | Jun 27, 2024 | 0x0d7bede97d8209967a52631b8bdd18bd<br>6C3AF02E7F269AA73AFD0EFF2A88A4A1F04ED1E5 | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220427T0114263671Z/media/cafiles/mspki/microsoftazuretlsissuingca052024-06-27-xsignder.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220427T0114263671Z/media/cafiles/mspki/microsoftazuretlsissuingca052024-06-27-xsignpem.crt) |
-| Microsoft Azure TLS Issuing CA 05 | Jun 27, 2024 | 0x330000001f9f1fa2043bc28db900000000001f<br>56F1CA470BB94E274B516A330494C792C419CF87 | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220427T0114263671Z/media/cafiles/mspki/microsoftazuretlsissuingca052024-06-27der.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220427T0114263671Z/media/cafiles/mspki/microsoftazuretlsissuingca052024-06-27pem.crt) |
-| Microsoft Azure TLS Issuing CA 06 | Jun 27, 2024 | 0x02e79171fb8021e93fe2d983834c50c0<br>30E01761AB97E59A06B41EF20AF6F2DE7EF4F7B0 | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazuretlsissuingca062024-06-27-xsignder.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazuretlsissuingca062024-06-27-xsignpem.crt) |
-| Microsoft Azure TLS Issuing CA 06 | Jun 27, 2024 | 0x3300000020a2f1491a37fbd31f000000000020<br>8F1FD57F27C828D7BE29743B4D02CD7E6E5F43E6 | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazuretlsissuingca062024-06-27der.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazuretlsissuingca062024-06-27pem.crt) |
-| Microsoft Azure ECC TLS Issuing CA 01 | Jun 27, 2024 | 0x09dc42a5f574ff3a389ee06d5d4de440<br>92503D0D74A7D3708197B6EE13082D52117A6AB0 | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazureecctlsissuingca012024-06-27-xsignder.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazureecctlsissuingca012024-06-27-xsignpem.crt) |
-| Microsoft Azure ECC TLS Issuing CA 01 | Jun 27, 2024 | 0x330000001aa9564f44321c54b900000000001a<br>CDA57423EC5E7192901CA1BF6169DBE48E8D1268 | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazureecctlsissuingca012024-06-27der.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazureecctlsissuingca012024-06-27pem.crt) |
-| Microsoft Azure ECC TLS Issuing CA 02 | Jun 27, 2024 | 0x0e8dbe5ea610e6cbb569c736f6d7004b<br>1E981CCDDC69102A45C6693EE84389C3CF2329F1 | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazureecctlsissuingca022024-06-27-xsignder.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazureecctlsissuingca022024-06-27-xsignpem.crt) |
-| Microsoft Azure ECC TLS Issuing CA 02 | Jun 27, 2024 | 0x330000001b498d6736ed5612c200000000001b<br>489FF5765030EB28342477693EB183A4DED4D2A6 | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazureecctlsissuingca022024-06-27der.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazureecctlsissuingca022024-06-27pem.crt) |
-| Microsoft Azure ECC TLS Issuing CA 05 | Jun 27, 2024 | 0x0ce59c30fd7a83532e2d0146b332f965<br>C6363570AF8303CDF31C1D5AD81E19DBFE172531 | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazureecctlsissuingca052024-06-27-xsignder.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazureecctlsissuingca052024-06-27-xsignpem.crt) |
-| Microsoft Azure ECC TLS Issuing CA 05 | Jun 27, 2024 | 0x330000001cc0d2a3cd78cf2c1000000000001c<br>4C15BC8D7AA5089A84F2AC4750F040D064040CD4 | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazureecctlsissuingca052024-06-27der.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazureecctlsissuingca052024-06-27pem.crt) |
-| Microsoft Azure ECC TLS Issuing CA 06 | Jun 27, 2024 | 0x066e79cd7624c63130c77abeb6a8bb94<br>7365ADAEDFEA4909C1BAADBAB68719AD0C381163 | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazureecctlsissuingca062024-06-27-xsignder.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazureecctlsissuingca062024-06-27-xsignpem.crt) |
-| Microsoft Azure ECC TLS Issuing CA 06 | Jun 27, 2024 | 0x330000001d0913c309da3f05a600000000001d<br>DFEB65E575D03D0CC59FD60066C6D39421E65483 | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazureecctlsissuingca062024-06-27der.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazureecctlsissuingca062024-06-27pem.crt) |
-| Microsoft RSA TLS CA 01 | Oct 8, 2024 | 0x0f14965f202069994fd5c7ac788941e2<br>703D7A8F0EBF55AAA59F98EAF4A206004EB2516A | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/ssladmin/microsoftrsatlsca012024-10-08der.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/ssladmin/microsoftrsatlsca012024-10-08pem.crt) |
-| Microsoft RSA TLS CA 02 | Oct 8, 2024 | 0x0fa74722c53d88c80f589efb1f9d4a3a<br>B0C2D2D13CDD56CDAA6AB6E2C04440BE4A429C75 | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/ssladmin/microsoftrsatlsca022024-10-08der.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/ssladmin/microsoftrsatlsca022024-10-08pem.crt) |
+| DigiCert Basic RSA CN CA G2 | Mar 4, 2030 | 0x02f7e1f982bad009aff47dc95741b2f6<br>4D1FA5D1FB1AC3917C08E43F65015E6AEA571179 | [PEM](https://crt.sh/?d=2545289014) |
+| DigiCert Cloud Services CA-1 | Aug 4, 2030 | 0x019ec1c6bd3f597bb20c3338e551d877<br>81B68D6CD2F221F8F534E677523BB236BBA1DC56 | [PEM](https://crt.sh/?d=12624881) |
+| DigiCert SHA2 Secure Server CA | Sep 22, 2030 | 0x02742eaa17ca8e21c717bb1ffcfd0ca0<br>626D44E704D1CEABE3BF0D53397464AC8080142C | [PEM](https://crt.sh/?d=3422153451) |
+| DigiCert TLS Hybrid ECC SHA384 2020 CA1 | Sep 22, 2030 | 0x0a275fe704d6eecb23d5cd5b4b1a4e04<br>51E39A8BDB08878C52D6186588A0FA266A69CF28 | [PEM](https://crt.sh/?d=3422153452) |
+| DigiCert TLS RSA SHA256 2020 CA1 | Apr 13, 2031 | 0x06d8d904d5584346f68a2fa754227ec4<br>1C58A3A8518E8759BF075B76B750D4F2DF264FCD | [PEM](https://crt.sh/?d=4385364571) |
+| GeoTrust Global TLS RSA4096 SHA256 2022 CA1 | Nov 09, 2031 | 0x0f622f6f21c2ff5d521f723a1d47d62d<br>7E6DB7B7584D8CF2003E0931E6CFC41A3A62D3DF | [PEM](https://crt.sh/?d=6670931375)|
+| GeoTrust TLS DV RSA Mixed SHA256 2020 CA-1 | May 31, 2023 | 0x0c08966535b942a9735265e4f97540bc<br>2F7AA2D86056A8775796F798C481A079E538E004 | [PEM](https://crt.sh/?d=3112858728)|
+| Microsoft Azure TLS Issuing CA 01 | Jun 27, 2024 | 0x0aafa6c5ca63c45141ea3be1f7c75317<br>2F2877C5D778C31E0F29C7E371DF5471BD673173 | [PEM](https://crt.sh/?d=3163654574) |
+| Microsoft Azure TLS Issuing CA 01 | Jun 27, 2024 | 0x1dbe9496f3db8b8de700000000001d<br>B9ED88EB05C15C79639493016200FDAB08137AF3 | [PEM](https://crt.sh/?d=2616326024) |
+| Microsoft Azure TLS Issuing CA 02 | Jun 27, 2024 | 0x0c6ae97cced599838690a00a9ea53214<br>E7EEA674CA718E3BEFD90858E09F8372AD0AE2AA | [PEM](https://crt.sh/?d=3163546037) |
+| Microsoft Azure TLS Issuing CA 02 | Jun 27, 2024 | 0x330000001ec6749f058517b4d000000000001e<br>C5FB956A0E7672E9857B402008E7CCAD031F9B08 | [PEM](https://crt.sh/?d=2616326032) |
+| Microsoft Azure TLS Issuing CA 05 | Jun 27, 2024 | 0x0d7bede97d8209967a52631b8bdd18bd<br>6C3AF02E7F269AA73AFD0EFF2A88A4A1F04ED1E5 | [PEM](https://crt.sh/?d=3163600408) |
+| Microsoft Azure TLS Issuing CA 05 | Jun 27, 2024 | 0x330000001f9f1fa2043bc28db900000000001f<br>56F1CA470BB94E274B516A330494C792C419CF87 | [PEM](https://crt.sh/?d=2616326057) |
+| Microsoft Azure TLS Issuing CA 06 | Jun 27, 2024 | 0x02e79171fb8021e93fe2d983834c50c0<br>30E01761AB97E59A06B41EF20AF6F2DE7EF4F7B0 | [PEM](https://crt.sh/?d=3163654575) |
+| Microsoft Azure TLS Issuing CA 06 | Jun 27, 2024 | 0x3300000020a2f1491a37fbd31f000000000020<br>8F1FD57F27C828D7BE29743B4D02CD7E6E5F43E6 | [PEM](https://crt.sh/?d=2616330106) |
+| Microsoft Azure ECC TLS Issuing CA 01 | Jun 27, 2024 | 0x09dc42a5f574ff3a389ee06d5d4de440<br>92503D0D74A7D3708197B6EE13082D52117A6AB0 | [PEM](https://crt.sh/?d=3232541596) |
+| Microsoft Azure ECC TLS Issuing CA 01 | Jun 27, 2024 | 0x330000001aa9564f44321c54b900000000001a<br>CDA57423EC5E7192901CA1BF6169DBE48E8D1268 | [PEM](https://crt.sh/?d=2616305805) |
+| Microsoft Azure ECC TLS Issuing CA 02 | Jun 27, 2024 | 0x0e8dbe5ea610e6cbb569c736f6d7004b<br>1E981CCDDC69102A45C6693EE84389C3CF2329F1 | [PEM](https://crt.sh/?d=3232541597) |
+| Microsoft Azure ECC TLS Issuing CA 02 | Jun 27, 2024 | 0x330000001b498d6736ed5612c200000000001b<br>489FF5765030EB28342477693EB183A4DED4D2A6 | [PEM](https://crt.sh/?d=2616326233) |
+| Microsoft Azure ECC TLS Issuing CA 05 | Jun 27, 2024 | 0x0ce59c30fd7a83532e2d0146b332f965<br>C6363570AF8303CDF31C1D5AD81E19DBFE172531 | [PEM](https://crt.sh/?d=3232541594) |
+| Microsoft Azure ECC TLS Issuing CA 05 | Jun 27, 2024 | 0x330000001cc0d2a3cd78cf2c1000000000001c<br>4C15BC8D7AA5089A84F2AC4750F040D064040CD4 | [PEM](https://crt.sh/?d=2616326161) |
+| Microsoft Azure ECC TLS Issuing CA 06 | Jun 27, 2024 | 0x066e79cd7624c63130c77abeb6a8bb94<br>7365ADAEDFEA4909C1BAADBAB68719AD0C381163 | [PEM](https://crt.sh/?d=3232541595) |
+| Microsoft Azure ECC TLS Issuing CA 06 | Jun 27, 2024 | 0x330000001d0913c309da3f05a600000000001d<br>DFEB65E575D03D0CC59FD60066C6D39421E65483 | [PEM](https://crt.sh/?d=2616326228) |
+| Microsoft RSA TLS CA 01 | Oct 8, 2024 | 0x0f14965f202069994fd5c7ac788941e2<br>703D7A8F0EBF55AAA59F98EAF4A206004EB2516A | [PEM](https://crt.sh/?d=3124375355) |
+| Microsoft RSA TLS CA 02 | Oct 8, 2024 | 0x0fa74722c53d88c80f589efb1f9d4a3a<br>B0C2D2D13CDD56CDAA6AB6E2C04440BE4A429C75 | [PEM](https://crt.sh/?d=3124375356) |
## Client compatibility for public PKIs
security Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/feature-availability.md
The following table displays the current Microsoft Defender for IoT feature avai
| [Threat detection with IoT, and OT behavioral analytics](../../defender-for-iot/how-to-work-with-alerts-on-your-sensor.md) | GA | GA |
| [Manual and automatic threat intelligence updates](../../defender-for-iot/how-to-work-with-threat-intelligence-packages.md) | GA | GA |
| **Unify IT, and OT security with SIEM, SOAR and XDR** | | |
-| [Active Directory](../../defender-for-iot/organizations/how-to-create-and-manage-users.md#integrate-with-active-directory-servers) | GA | GA |
+| [Active Directory](../../defender-for-iot/organizations/integrate-with-active-directory.md) | GA | GA |
| [ArcSight](../../defender-for-iot/organizations/how-to-accelerate-alert-incident-response.md#accelerate-incident-workflows-by-using-alert-groups) | GA | GA |
| [ClearPass (Alerts & Inventory)](../../defender-for-iot/organizations/tutorial-clearpass.md) | GA | GA |
| [CyberArk PSM](../../defender-for-iot/organizations/tutorial-cyberark.md) | GA | GA |
sentinel Notebooks Hunt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/notebooks-hunt.md
description: Launch and run notebooks with the Microsoft Sentinel hunting capabi
-+ Last updated 04/04/2022 #Customer intent: As a security analyst, I want to deploy and launch a Jupyter notebook to hunt for security threats.
After you've created an AML workspace, start launching your notebooks in your Az
1. At the top of the page, select a **Compute** instance to use for your notebook server.
- If you don't have a compute instance, [create a new one](../machine-learning/how-to-create-manage-compute-instance.md?tabs=#use-the-script-in-the-studio). If your compute instance is stopped, make sure to start it. For more information, see [Run a notebook in the Azure Machine Learning studio](../machine-learning/how-to-run-jupyter-notebooks.md).
+ If you don't have a compute instance, [create a new one](../machine-learning/how-to-create-manage-compute-instance.md?tabs=#create). If your compute instance is stopped, make sure to start it. For more information, see [Run a notebook in the Azure Machine Learning studio](../machine-learning/how-to-run-jupyter-notebooks.md).
Only you can see and use the compute instances you create. Your user files are stored separately from the VM and are shared among all compute instances in the workspace.
sentinel Threat Intelligence Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/threat-intelligence-integration.md
To connect to TAXII threat intelligence feeds, follow the instructions to [conne
- [Learn more about STIX and TAXII @ThreatConnect](https://threatconnect.com/stix-taxii/)
- [TAXII Services documentation @ThreatConnect](https://docs.threatconnect.com/en/latest/rest_api/taxii/taxii_2.1.html)
+### ReversingLabs
+
+- [Learn about ReversingLabs TAXII integration with Microsoft Sentinel](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/import-reversinglab-s-ransomware-feed-into-microsoft-sentinel/ba-p/3423937)
+
### Sectrio

- [Learn more about Sectrio integration](https://sectrio.com/threat-intelligence/)
sentinel Work With Threat Indicators https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/work-with-threat-indicators.md
According to the default settings, each time the rule runs on its schedule, any
In Microsoft Sentinel, the alerts generated from analytics rules also generate security incidents, which can be found in **Incidents** under **Threat Management** on the Microsoft Sentinel menu. Incidents are what your security operations teams will triage and investigate to determine the appropriate response actions. You can find detailed information in this [Tutorial: Investigate incidents with Microsoft Sentinel](./investigate-cases.md).
+IMPORTANT: Microsoft Sentinel refreshes indicators every 14 days to make sure that they're available for matching through analytics rules.
+
## Detect threats using matching analytics (Public preview)

> [!IMPORTANT]
service-bus-messaging Automate Update Messaging Units https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/automate-update-messaging-units.md
Title: Azure Service Bus - Automatically update messaging units description: This article shows you how you can use automatically update messaging units of a Service Bus namespace. Previously updated : 03/03/2021 Last updated : 05/16/2022 # Automatically update messaging units of an Azure Service Bus namespace
For example, you can implement the following scaling scenarios for Service Bus n
- Decrease messaging units for a Service Bus namespace when the CPU usage of the namespace goes below 25%.
- Use more messaging units during business hours and fewer during off hours.
-This article shows you how you can automatically scale a Service Bus namespace (update [messaging units](service-bus-premium-messaging.md)) in the Azure portal.
+This article shows you how you can automatically scale a Service Bus namespace (update [messaging units](service-bus-premium-messaging.md)) using the Azure portal and an Azure Resource Manager template.
> [!IMPORTANT] > This article applies to only the **premium** tier of Azure Service Bus.
+## Configure using the Azure portal
+In this section, you learn how to use the Azure portal to configure auto-scaling of messaging units for a Service Bus namespace.
+
## Autoscale setting page
First, follow these steps to navigate to the **Autoscale settings** page for your Service Bus namespace.
The previous section shows you how to add a default condition for the autoscale
> > - If you see failures due to lack of capacity (no messaging units available), raise a support ticket with us.
+## Run history
+Switch to the **Run history** tab on the **Scale** page to see a chart that plots the number of messaging units as observed by the autoscale engine. If the chart is empty, autoscale either isn't configured, is configured but disabled, or is in a cooldown period.
++
+## Notifications
+Switch to the **Notify** tab on the **Scale** page to:
+
+- Enable sending notification emails to administrators, co-administrators, and any additional administrators.
+- Enable sending notifications to HTTP or HTTPS endpoints exposed by webhooks.
+
+ :::image type="content" source="./media/automate-update-messaging-units/notify-page.png" alt-text="Screenshot showing the **Notify** tab of the **Scale** page.":::
+
+## Configure using a Resource Manager template
+You can use the following sample Resource Manager template to create a Service Bus namespace with a queue, and to configure autoscale settings for the namespace. In this example, two scale conditions are specified.
+
+- Default scale condition: increase messaging units when the average CPU usage goes above 75% and decrease messaging units when the average CPU usage goes below 25%.
+- Assign two messaging units to the namespace on weekends.
+
+### Template
+
+```json
+{
+ "$schema": "https: //schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "serviceBusNamespaceName": {
+ "type": "String",
+ "metadata": {
+ "description": "Name of the Service Bus namespace"
+ }
+ },
+ "serviceBusQueueName": {
+ "type": "String",
+ "metadata": {
+ "description": "Name of the Queue"
+ }
+ },
+ "autoScaleSettingName": {
+ "type": "String",
+ "metadata": {
+ "description": "Name of the auto scale setting."
+ }
+ },
+ "location": {
+ "defaultValue": "[resourceGroup().location]",
+ "type": "String",
+ "metadata": {
+ "description": "Location for all resources."
+ }
+ }
+ },
+ "resources": [{
+ "type": "Microsoft.ServiceBus/namespaces",
+ "apiVersion": "2021-11-01",
+ "name": "[parameters('serviceBusNamespaceName')]",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "Premium"
+ },
+ "properties": {}
+ },
+ {
+ "type": "Microsoft.ServiceBus/namespaces/queues",
+ "apiVersion": "2021-11-01",
+ "name": "[format('{0}/{1}', parameters('serviceBusNamespaceName'), parameters('serviceBusQueueName'))]",
+ "dependsOn": [
+ "[resourceId('Microsoft.ServiceBus/namespaces', parameters('serviceBusNamespaceName'))]"
+ ],
+ "properties": {
+ "lockDuration": "PT5M",
+ "maxSizeInMegabytes": 1024,
+ "requiresDuplicateDetection": false,
+ "requiresSession": false,
+ "defaultMessageTimeToLive": "P10675199DT2H48M5.4775807S",
+ "deadLetteringOnMessageExpiration": false,
+ "duplicateDetectionHistoryTimeWindow": "PT10M",
+ "maxDeliveryCount": 10,
+ "autoDeleteOnIdle": "P10675199DT2H48M5.4775807S",
+ "enablePartitioning": false,
+ "enableExpress": false
+ }
+ },
+ {
+ "type": "Microsoft.Insights/autoscaleSettings",
+ "apiVersion": "2021-05-01-preview",
+ "name": "[parameters('autoScaleSettingName')]",
+ "location": "East US",
+ "dependsOn": [
+ "[resourceId('Microsoft.ServiceBus/namespaces', parameters('serviceBusNamespaceName'))]"
+ ],
+ "tags": {},
+ "properties": {
+ "name": "[parameters('autoScaleSettingName')]",
+ "enabled": true,
+ "predictiveAutoscalePolicy": {
+ "scaleMode": "Disabled",
+ "scaleLookAheadTime": null
+ },
+ "targetResourceUri": "[resourceId('Microsoft.ServiceBus/namespaces', parameters('serviceBusNamespaceName'))]",
+ "profiles": [{
+ "name": "Increase messaging units to 2 on weekends",
+ "capacity": {
+ "minimum": "2",
+ "maximum": "2",
+ "default": "2"
+ },
+ "rules": [],
+ "recurrence": {
+ "frequency": "Week",
+ "schedule": {
+ "timeZone": "Eastern Standard Time",
+ "days": [
+ "Saturday",
+ "Sunday"
+ ],
+ "hours": [
+ 6
+ ],
+ "minutes": [
+ 0
+ ]
+ }
+ }
+ },
+ {
+ "name": "{\"name\":\"Scale Out at 75% CPU and Scale In at 25% CPU\",\"for\":\"Increase messaging units to 4 on weekends\"}",
+ "capacity": {
+ "minimum": "1",
+ "maximum": "8",
+ "default": "2"
+ },
+ "rules": [{
+ "scaleAction": {
+ "direction": "Increase",
+ "type": "ServiceAllowedNextValue",
+ "value": "1",
+ "cooldown": "PT5M"
+ },
+ "metricTrigger": {
+ "metricName": "NamespaceCpuUsage",
+ "metricNamespace": "microsoft.servicebus/namespaces",
+ "metricResourceUri": "[resourceId('Microsoft.ServiceBus/namespaces', parameters('serviceBusNamespaceName'))]",
+ "operator": "GreaterThan",
+ "statistic": "Average",
+ "threshold": 75,
+ "timeAggregation": "Average",
+ "timeGrain": "PT1M",
+ "timeWindow": "PT10M",
+ "Dimensions": [],
+ "dividePerInstance": false
+ }
+ },
+ {
+ "scaleAction": {
+ "direction": "Decrease",
+ "type": "ServiceAllowedNextValue",
+ "value": "1",
+ "cooldown": "PT5M"
+ },
+ "metricTrigger": {
+ "metricName": "NamespaceCpuUsage",
+ "metricNamespace": "microsoft.servicebus/namespaces",
+ "metricResourceUri": "[resourceId('Microsoft.ServiceBus/namespaces', parameters('serviceBusNamespaceName'))]",
+ "operator": "LessThan",
+ "statistic": "Average",
+ "threshold": 25,
+ "timeAggregation": "Average",
+ "timeGrain": "PT1M",
+ "timeWindow": "PT10M",
+ "Dimensions": [],
+ "dividePerInstance": false
+ }
+ }
+ ],
+ "recurrence": {
+ "frequency": "Week",
+ "schedule": {
+ "timeZone": "Eastern Standard Time",
+ "days": [
+ "Saturday",
+ "Sunday"
+ ],
+ "hours": [
+ 18
+ ],
+ "minutes": [
+ 0
+ ]
+ }
+ }
+ }
+ ],
+ "notifications": [],
+ "targetResourceLocation": "East US"
+ }
+ }
+ ]
+}
+```
+
+You can also generate a JSON example for an autoscale setting resource from the Azure portal. After you configure autoscale settings in the Azure portal, select **JSON** on the command bar of the **Scale** page.
++
+Then, include the JSON in the `resources` section of a Resource Manager template as shown in the preceding example.
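+The article doesn't prescribe how to deploy this template. As one hedged option, the Go sketch below deploys it with the `armresources` package from the Azure SDK for Go; the file name `autoscale-template.json`, the deployment name, and the placeholder subscription, resource group, and parameter values are illustrative assumptions, not part of the original article.
+
+```go
+package main
+
+import (
+	"context"
+	"encoding/json"
+	"log"
+	"os"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
+	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
+	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/resources/armresources"
+)
+
+func main() {
+	// Read the template shown above from a local file (assumed name).
+	raw, err := os.ReadFile("autoscale-template.json")
+	if err != nil {
+		log.Fatal(err)
+	}
+	var template map[string]interface{}
+	if err := json.Unmarshal(raw, &template); err != nil {
+		log.Fatal(err)
+	}
+
+	cred, err := azidentity.NewDefaultAzureCredential(nil)
+	if err != nil {
+		log.Fatal(err)
+	}
+
+	// Placeholder subscription ID; replace with your own.
+	client, err := armresources.NewDeploymentsClient("<subscription-id>", cred, nil)
+	if err != nil {
+		log.Fatal(err)
+	}
+
+	// Deploy the template with placeholder parameter values.
+	poller, err := client.BeginCreateOrUpdate(context.TODO(), "<resource-group>", "servicebus-autoscale-deployment",
+		armresources.Deployment{
+			Properties: &armresources.DeploymentProperties{
+				Template: template,
+				Parameters: map[string]interface{}{
+					"serviceBusNamespaceName": map[string]interface{}{"value": "<namespace-name>"},
+					"serviceBusQueueName":     map[string]interface{}{"value": "<queue-name>"},
+					"autoScaleSettingName":    map[string]interface{}{"value": "<autoscale-setting-name>"},
+				},
+				Mode: to.Ptr(armresources.DeploymentModeIncremental),
+			},
+		}, nil)
+	if err != nil {
+		log.Fatal(err)
+	}
+	if _, err := poller.PollUntilDone(context.TODO(), nil); err != nil {
+		log.Fatal(err)
+	}
+	log.Println("autoscale template deployed")
+}
+```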
## Next steps To learn about messaging units, see the [Premium messaging](service-bus-premium-messaging.md)
service-bus-messaging Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/explorer.md
Title: Use Azure Service Bus Explorer to run data operations on Service Bus (Preview)
+ Title: Use Azure Service Bus Explorer to run data operations (Preview)
description: This article provides information on how to use the portal-based Azure Service Bus Explorer to access Azure Service Bus data. Previously updated : 12/02/2021-+ Last updated : 05/24/2022+ # Use Service Bus Explorer to run data operations on Service Bus (Preview)
Azure Service Bus allows sender and receiver client applications to decouple the
> [!NOTE] > This article highlights the functionality of the Azure Service Bus Explorer that's part of the Azure portal. >
-> The community owned [open source Service Bus Explorer](https://github.com/paolosalvatori/ServiceBusExplorer) is a standalone application and is different from this one, which is part of the Azure portal.
+> The community owned [open source Service Bus Explorer](https://github.com/paolosalvatori/ServiceBusExplorer) is a standalone application and is different from this one.
Operations run on an Azure Service Bus namespace are of two kinds
Operations run on an Azure Service Bus namespace are of two kinds
* **Data operations** - Send to and receive messages from queues, topics, and subscriptions. > [!IMPORTANT]
-> Service Bus Explorer doesn't support **sessions**.
-
+> Service Bus Explorer doesn't support **management operations**.
## Prerequisites
To use the Service Bus Explorer tool, you'll need to do the following tasks:
- [Quickstart - Create queues](service-bus-quickstart-portal.md)
- [Quickstart - Create topics](service-bus-quickstart-topics-subscriptions-portal.md)
-
> [!NOTE]
> If it's not the namespace you created, ensure that you're a member of one of these roles on the namespace:
> - [Service Bus Data Owner](../role-based-access-control/built-in-roles.md#azure-service-bus-data-owner)
> - [Contributor](../role-based-access-control/built-in-roles.md#contributor)
> - [Owner](../role-based-access-control/built-in-roles.md#owner)
+## Use the Service Bus Explorer
-## Using the Service Bus Explorer
-
-To use the Service Bus Explorer, navigate to the Service Bus namespace on which you want to do send, peek, and receive operations.
-
+To use the Service Bus Explorer, navigate to the Service Bus namespace on which you want to do data operations.
1. If you're looking to run operations against a queue, select **Queues** from the navigation menu. If you're looking to run operations against a topic (and it's related subscriptions), select **Topics**.
- :::image type="content" source="./media/service-bus-explorer/queue-topics-left-navigation.png" alt-text="Entity select":::
-2. After selecting **Queues** or **Topics**, select the specific queue or topic.
+ :::image type="content" source="./media/service-bus-explorer/queue-topics-left-navigation.png" alt-text="Screenshot of left side navigation, where entity can be selected." lightbox="./media/service-bus-explorer/queue-topics-left-navigation.png":::
+
+1. After selecting **Queues** or **Topics**, select the specific queue or topic.
1. Select the **Service Bus Explorer (preview)** from the left navigation menu
- :::image type="content" source="./media/service-bus-explorer/left-navigation-menu-selected.png" alt-text="SB Explorer Left nav menu":::
+ :::image type="content" source="./media/service-bus-explorer/left-navigation-menu-selected.png" alt-text="Screenshot of queue blade where Service Bus Explorer can be selected." lightbox="./media/service-bus-explorer/left-navigation-menu-selected.png":::
> [!NOTE]
- > Service Bus Explorer supports messages of size up to 1 MB.
+ > When peeking or receiving from a subscription, first select the specific **Subscription** from the dropdown selector.
+ > :::image type="content" source="./media/service-bus-explorer/subscription-selected.png" alt-text="Screenshot of dropdown for topic subscriptions." lightbox="./media/service-bus-explorer/subscription-selected.png":::
-### Sending a message to a queue or topic
+## Peek a message
-To send a message to a **queue** or a **topic**, select the **Send** tab of the Service Bus Explorer.
+With the peek functionality, you can use the Service Bus Explorer to view the top 100 messages in a queue, subscription or dead-letter queue.
-To compose a message here -
+1. To peek messages, select **Peek Mode** in the Service Bus Explorer dropdown.
-1. Select the **Content Type** to be either **Text/Plain**, **Application/Xml** or **Application/Json**.
-2. For **Content**, add the message content. Ensure that it matches the **Content Type** set earlier.
-3. Set the **Advanced Properties** (optional) - these include Correlation ID, Message ID, Label, ReplyTo, Time to Live (TTL) and Scheduled Enqueue Time (for Scheduled Messages).
-4. Set the **Custom Properties** - can be any user properties set against a dictionary key.
-1. Once the message has been composed, select **Send**.
+ :::image type="content" source="./media/service-bus-explorer/peek-mode-selected.png" alt-text="Screenshot of dropdown with Peek Mode selected." lightbox="./media/service-bus-explorer/peek-mode-selected.png":::
- :::image type="content" source="./media/service-bus-explorer/send-experience.png" alt-text="Compose Message":::
-1. When the send operation is completed successfully, do one of the following actions:
+1. Check the metrics to see if there are **Active Messages** or **Dead-lettered Messages** to peek and select either **Queue / Subscription** or **DeadLetter** sub-queue.
- - If sending to the queue, **Active Messages** metrics counter will increment.
+ :::image type="content" source="./media/service-bus-explorer/queue-after-send-metrics.png" alt-text="Screenshot of queue and dead-letter sub-queue tabs with message metrics displayed." lightbox="./media/service-bus-explorer/queue-after-send-metrics.png":::
- :::image type="content" source="./media/service-bus-explorer/queue-after-send-metrics.png" alt-text="QueueAfterSendMetrics":::
- - If sending to the topic, **Active Messages** metrics counter will increment on the Subscription where the message was routed to.
+1. Select the **Peek from start** button.
- :::image type="content" source="./media/service-bus-explorer/topic-after-send-metrics.png" alt-text="TopicAfterSendMetrics":::
+ :::image type="content" source="./media/service-bus-explorer/queue-peek-from-start.png" alt-text="Screenshot indicating the Peek from start button." lightbox="./media/service-bus-explorer/queue-peek-from-start.png":::
-### Receiving a message from a Queue
+1. Once the peek operation completes, up to 100 messages will show up on the grid as below. To view the details of a particular message, select it from the grid. You can choose to view the body or the message properties.
-The receive function on the Service Bus Explorer permits receiving a single message at a time. The receive operation is performed using the **ReceiveAndDelete** mode.
+ :::image type="content" source="./media/service-bus-explorer/peek-message-from-queue.png" alt-text="Screenshot with overview of peeked messages and message body content shown for peeked messages." lightbox="./media/service-bus-explorer/peek-message-from-queue.png":::
-> [!IMPORTANT]
-> Please note that the receive operation performed by the Service Bus explorer is a *destructive receive*, i.e. the message is removed from the queue when it is displayed on the Service Bus Explorer tool.
->
-> To browse messages without removing them from the queue, consider using the ***Peek*** functionality.
->
+ :::image type="content" source="./media/service-bus-explorer/peek-message-from-queue-2.png" alt-text="Screenshot with overview of peeked messages and message properties shown for peeked messages." lightbox="./media/service-bus-explorer/peek-message-from-queue-2.png":::
+
+ > [!NOTE]
+ > Since peek is not a destructive operation the message **won't** be removed from the entity.
+
+ > [!NOTE]
+ > For performance reasons, when you peek messages from a queue or subscription that has its maximum message size set over 1 MB, the message body isn't retrieved by default. Instead, you can load the message body for a specific message by selecting the **Load message body** button. If the message body is over 1 MB, it's truncated before being displayed.
+ > :::image type="content" source="./media/service-bus-explorer/peek-message-from-queue-with-large-message-support.png" alt-text="Screenshot with overview of peeked messages and button to load message body shown." lightbox="./media/service-bus-explorer/peek-message-from-queue-with-large-message-support.png":::
-To receive a message from a Queue (or its DeadLetter subqueue)
+### Peek a message with advanced options
-1. Click on the ***Receive*** tab on the Service Bus Explorer.
-2. Check the metrics to see if there are **Active Messages** or **Dead-lettered Messages** to receive.
+With the peek with options functionality, you can use the Service Bus Explorer to view the top messages in a queue, subscription, or dead-letter queue, and specify the number of messages to peek and the sequence number to start peeking from.
- :::image type="content" source="./media/service-bus-explorer/queue-after-send-metrics.png" alt-text="QueueAfterSendMetrics":::
+1. To peek messages with advanced options, select **Peek Mode** in the Service Bus Explorer dropdown.
-3. Select either **Queue** or **DeadLetter**.
+ :::image type="content" source="./media/service-bus-explorer/peek-mode-selected.png" alt-text="Screenshot of dropdown with Peek Mode selected for peek with advanced options." lightbox="./media/service-bus-explorer/peek-mode-selected.png":::
- :::image type="content" source="./media/service-bus-explorer/queue-or-deadletter.png" alt-text="QueueOrDeadLetter":::
+1. Check the metrics to see if there are **Active Messages** or **Dead-lettered Messages** to peek and select either **Queue / Subscription** or **DeadLetter** sub-queue.
-4. Select **Receive** button, followed by **Yes** to confirm the operation.
-1. When the receive operation is successful, the message details will display on the grid as below. You can select the message from the grid to display its details.
+ :::image type="content" source="./media/service-bus-explorer/queue-after-send-metrics.png" alt-text="Screenshot of queue and dead-letter sub-queue tabs with message metrics displayed for peek with advanced options." lightbox="./media/service-bus-explorer/queue-after-send-metrics.png":::
- :::image type="content" source="./media/service-bus-explorer/receive-message-from-queue-2.png" alt-text="Screenshot of the Queues window in Service Bus Explorer with message details.":::
+1. Select the **Peek with options** button. Provide the number of messages to peek and the sequence number to start peeking from, and then select the **Peek** button.
+ :::image type="content" source="./media/service-bus-explorer/queue-peek-with-options.png" alt-text="Screenshot indicating the Peek with options button, and a blade where the options can be set." lightbox="./media/service-bus-explorer/queue-peek-with-options.png":::
-### Peeking a message from a Queue
+1. Once the peek operation completes, the messages will show up on the grid as below. To view the details of a particular message, select it from the grid. You can choose to view the body or the message properties.
-With the peek functionality, you can use the Service Bus Explorer to view the top 32 messages in a queue or the dead-letter queue.
+ :::image type="content" source="./media/service-bus-explorer/peek-message-from-queue-3.png" alt-text="Screenshot with overview of peeked messages and message body content shown for peek with advanced options." lightbox="./media/service-bus-explorer/peek-message-from-queue-3.png":::
-1. To peek messages in a queue, select the ***Peek*** tab in the Service Bus Explorer.
+ :::image type="content" source="./media/service-bus-explorer/peek-message-from-queue-4.png" alt-text="Screenshot with overview of peeked messages and message properties shown for peek with advanced options." lightbox="./media/service-bus-explorer/peek-message-from-queue-4.png":::
- :::image type="content" source="./media/service-bus-explorer/peek-tab-selected.png" alt-text="PeekTab":::
-2. Check the metrics to see if there are **Active Messages** or **Dead-lettered Messages** to peek.
+ > [!NOTE]
+ > Since peek is not a destructive operation the message **won't** be removed from the queue.
+
+## Receive a message
+
+The receive function on the Service Bus Explorer permits receiving messages from a queue or subscription.
+
+1. To receive messages, select **Receive Mode** in the Service Bus Explorer dropdown.
+
+ :::image type="content" source="./media/service-bus-explorer/receive-mode-selected.png" alt-text="Screenshot of dropdown with Receive Mode selected." lightbox="./media/service-bus-explorer/receive-mode-selected.png":::
+
+1. Check the metrics to see if there are **Active Messages** or **Dead-lettered Messages** to receive, and select either **Queue / Subscription** or **DeadLetter**.
- :::image type="content" source="./media/service-bus-explorer/queue-after-send-metrics.png" alt-text="QueueAfterSendMetrics":::
-3. Then select either **Queue** or **DeadLetter** subqueue.
+ :::image type="content" source="./media/service-bus-explorer/queue-after-send-metrics-2.png" alt-text="Screenshot of queue and dead-letter sub-queue tabs with message metrics displayed for receive mode." lightbox="./media/service-bus-explorer/queue-after-send-metrics-2.png":::
- :::image type="content" source="./media/service-bus-explorer/queue-or-deadletter.png" alt-text="QueueOrDeadLetter":::
-4. Select ***Peek***.
-1. Once the peek operation completes, up to 32 messages will show up on the grid as below. To view the details of a particular message, select it from the grid.
+1. Select the **Receive messages** button, specify the receive mode, the number of messages to receive, and the maximum time to wait for a message, and then select the **Receive** button.
- :::image type="content" source="./media/service-bus-explorer/peek-message-from-queue-2.png" alt-text="PeekMessageFromQueue":::
+ :::image type="content" source="./media/service-bus-explorer/receive-message-from-queue.png" alt-text="Screenshot indicating the Receive button, and a blade where the options can be set." lightbox="./media/service-bus-explorer/receive-message-from-queue.png":::
+
+ > [!IMPORTANT]
+ > The **ReceiveAndDelete** mode is a ***destructive receive***; that is, the message is removed from the queue when it's displayed in the Service Bus Explorer tool.
+ >
+ > To browse messages without removing them from the queue, consider using the **Peek** functionality or the **PeekLock** receive mode.
+
+1. Once the receive operation completes, the messages will show up on the grid as below. To view the details of a particular message, select it in the grid.
+
+ :::image type="content" source="./media/service-bus-explorer/receive-message-from-queue-2.png" alt-text="Screenshot with overview of received messages and message body content shown." lightbox="./media/service-bus-explorer/receive-message-from-queue-2.png":::
+
+ :::image type="content" source="./media/service-bus-explorer/receive-message-from-queue-3.png" alt-text="Screenshot with overview of received messages and message properties shown." lightbox="./media/service-bus-explorer/receive-message-from-queue-3.png":::
> [!NOTE]
- > Since peek is not a destructive operation the message **won't** be removed from the queue.
-
+ > For performance reasons, when you receive messages from a queue or subscription that has its maximum message size set over 1 MB, only one message is received at a time. If the message body is over 1 MB, it's truncated before being displayed.
-### Receiving a message from a Subscription
+After a message has been received in **PeekLock** mode, there are various actions we can take on it.
-Just like with a queue, the **Receive** operation can be performed against a subscription (or its dead-letter entity).
+> [!NOTE]
+> We can only take these actions as long as we have a lock on the message.
-> [!IMPORTANT]
-> Please note that the receive operation performed by the Service Bus explorer is a ***destructive receive***, that is, the message is removed from the queue when it's displayed in the Service Bus Explorer tool.
->
-> To browse messages without removing them from the queue, use the **Peek** functionality.
->
+### Complete a message
-1. Select the ***Receive*** tab, and select the specific **Subscription** from the dropdown selector.
+1. In the grid, select the received message(s) we want to complete.
+1. Select the **Complete** button.
- :::image type="content" source="./media/service-bus-explorer/receive-subscription-tab-selected.png" alt-text="ReceiveTabSelected":::
-2. Select either **Subscription** or **DeadLetter** subentity.
+ :::image type="content" source="./media/service-bus-explorer/receive-message-from-queue-complete.png" alt-text="Screenshot indicating the Complete button." lightbox="./media/service-bus-explorer/receive-message-from-queue-complete.png":::
- :::image type="content" source="./media/service-bus-explorer/subscription-or-deadletter.png" alt-text="SubscriptionOrDeadLetter":::
-3. Select **Receive**, and then select **Yes** to confirm the receive and delete operation.
-1. When the receive operation is successful, the received message will display on the grid as below. To view the message details, select the message.
+ > [!IMPORTANT]
+ > Completing a message is a ***destructive receive***; that is, the message is removed from the queue when you select **Complete** in the Service Bus Explorer tool.
- :::image type="content" source="./media/service-bus-explorer/receive-message-from-subscription.png" alt-text="Screenshot of the Receive tab in Service Bus Explorer with message details.":::
+### Defer a message
-### Peeking a message from a Subscription
+1. In the grid, select the received message(s) we want to [defer](./message-deferral.md).
+1. Select the **Defer** button.
-To browse messages on a subscription or its deadLetter subentity, the **Peek** functionality can be utilized on the subscription as well.
+ :::image type="content" source="./media/service-bus-explorer/receive-message-from-queue-defer.png" alt-text="Screenshot indicating the Defer button." lightbox="./media/service-bus-explorer/receive-message-from-queue-defer.png":::
-1. Click on the **Peek** tab and select the specific **Subscription** from the dropdown selector.
+### Abandon lock
- :::image type="content" source="./media/service-bus-explorer/peek-subscription-tab-selected.png" alt-text="PeekTabSelected":::
-2. Pick between the **Subscription** or the **DeadLetter** subentity.
+1. In the grid, select the received message(s) for which we want to abandon the lock.
+1. Select the **Abandon lock** button.
- :::image type="content" source="./media/service-bus-explorer/subscription-or-deadletter.png" alt-text="SubscriptionOrDeadLetter":::
-3. Select **Peek** button.
-1. Once the peek operation completes, up to 32 messages will show up on the grid as below. To view the details of a particular message, select it in the grid.
+ :::image type="content" source="./media/service-bus-explorer/receive-message-from-queue-abandon-lock.png" alt-text="Screenshot indicating the Abandon Lock button." lightbox="./media/service-bus-explorer/receive-message-from-queue-abandon-lock.png":::
- :::image type="content" source="./media/service-bus-explorer/peek-message-from-subscription.png" alt-text="PeekMessageFromSubscription":::
+After the lock has been abandoned, the message will be available for receive operations again.
- > [!NOTE]
- >
- > - Since peek is not a destructive operation the message **will not** be removed from the queue.
-
+### Dead-letter
+
+1. In the grid, select the received message(s) we want to [dead-letter](./service-bus-dead-letter-queues.md).
+1. Select the **Dead-letter** button.
+
+ :::image type="content" source="./media/service-bus-explorer/receive-message-from-queue-dead-letter.png" alt-text="Screenshot indicating the Dead-letter button." lightbox="./media/service-bus-explorer/receive-message-from-queue-dead-letter.png":::
+
+After a message has been dead-lettered, it will be available from the **Dead-letter** sub-queue.
+
+## Send a message to a queue or topic
+
+To send a message to a **queue** or a **topic**, select the **Send messages** button of the Service Bus Explorer.
+
+1. Select the **Content Type** to be either **Text/Plain**, **Application/Xml** or **Application/Json**.
+1. For **Message body**, add the message content. Ensure that it matches the **Content Type** set earlier.
+1. Set the **Broker properties** (optional) - these include Correlation ID, Message ID, ReplyTo, Label/Subject, Time to Live (TTL) and Scheduled Enqueue Time (for Scheduled Messages).
+1. Set the **Custom Properties** (optional) - these can be any user properties set against a dictionary key.
+1. Check **Repeat send** to send the same message multiple times. If no Message ID was set, this will be automatically populated with sequential values.
+1. Once the message has been composed, select the **Send** button.
+
+ :::image type="content" source="./media/service-bus-explorer/send-experience.png" alt-text="Screenshot showing the compose message experience." lightbox="./media/service-bus-explorer/send-experience.png":::
+
+1. When the send operation is completed successfully, one of the following will happen:
+
+ - If sending to a queue, **Active Messages** metrics counter will increment.
+ - If sending to a topic, **Active Messages** metrics counter will increment on the Subscriptions where the message was routed to.
+
+## Re-send a message
+
+After peeking or receiving a message, we can re-send it, which sends a copy of the message to the same entity while allowing us to update its content and properties.
+
+1. In the grid, select the message(s) we want to re-send.
+1. Select the **Re-send selected messages** button.
+
+ :::image type="content" source="./media/service-bus-explorer/queue-select-messages-for-resend.png" alt-text="Screenshot indicating the Re-send selected messages button." lightbox="./media/service-bus-explorer/queue-select-messages-for-resend.png":::
+
+1. Optionally, select any message for which we want to update its details and make the desired changes.
+1. Select the **Send** button to send the messages to the entity.
+
+ :::image type="content" source="./media/service-bus-explorer/queue-resend-selected-messages.png" alt-text="Screenshot showing the re-send messages experience." lightbox="./media/service-bus-explorer/queue-resend-selected-messages.png":::
+
+## Switch authentication type
+
+When working with Service Bus Explorer, it's possible to use either **Access Key** or **Azure Active Directory** authentication.
+
+1. Select the **Settings** button.
+1. Choose the desired authentication method, and select the **Save** button.
+
+ :::image type="content" source="./media/service-bus-explorer/queue-select-authentication-type.png" alt-text="Screenshot indicating the Settings button and a blade showing the different authentication types." lightbox="./media/service-bus-explorer/queue-select-authentication-type.png":::
## Next Steps
service-bus-messaging Service Bus Go How To Use Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-go-how-to-use-queues.md
+
+ Title: Get started with Azure Service Bus queues (Go)
+description: This tutorial shows you how to send messages to and receive messages from Azure Service Bus queues using the Go programming language.
+documentationcenter: go
++ Last updated : 04/19/2022+
+ms.devlang: golang
++
+# Send messages to and receive messages from Azure Service Bus queues (Go)
+> [!div class="op_single_selector" title1="Select the programming language:"]
+> * [Go](service-bus-go-how-to-use-queues.md)
+> * [C#](service-bus-dotnet-get-started-with-queues.md)
+> * [Java](service-bus-java-how-to-use-queues.md)
+> * [JavaScript](service-bus-nodejs-how-to-use-queues.md)
+> * [Python](service-bus-python-how-to-use-queues.md)
+
+In this tutorial, you'll learn how to send messages to and receive messages from Azure Service Bus queues using the Go programming language.
+
+Azure Service Bus is a fully managed enterprise message broker with message queues and publish/subscribe capabilities. Service Bus is used to decouple applications and services from each other, providing a distributed, reliable, and high performance message transport.
+
+The Azure SDK for Go's [azservicebus](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/messaging/azservicebus) package allows you to send messages to and receive messages from Azure Service Bus using the Go programming language.
+
+By the end of this tutorial, you'll be able to send a single message or a batch of messages to a queue, receive messages, and dead-letter messages that aren't processed.
+
+## Prerequisites
+
+- An Azure subscription. You can activate your [Visual Studio or MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/?WT.mc_id=A85619ABF) or sign-up for a [free account](https://azure.microsoft.com/free/?WT.mc_id=A85619ABF).
+- If you don't have a queue to work with, follow steps in the [Use Azure portal to create a Service Bus queue](service-bus-quickstart-portal.md) article to create a queue.
+- Go version 1.18 or [later](https://go.dev/dl/)
++
+## Create the sample app
+
+To begin, create a new Go module.
+
+1. Create a new directory for the module named `service-bus-go-how-to-use-queues`.
+1. In the `service-bus-go-how-to-use-queues` directory, initialize the module and install the required packages.
+
+ ```bash
+ go mod init service-bus-go-how-to-use-queues
+
+ go get github.com/Azure/azure-sdk-for-go/sdk/azidentity
+
+ go get github.com/Azure/azure-sdk-for-go/sdk/messaging/azservicebus
+ ```
+1. Create a new file named `main.go`.
+
+## Authenticate and create a client
+
+In the `main.go` file, create a new function named `GetClient` and add the following code:
+
+```go
+func GetClient() *azservicebus.Client {
+ namespace, ok := os.LookupEnv("AZURE_SERVICEBUS_HOSTNAME") //ex: myservicebus.servicebus.windows.net
+ if !ok {
+ panic("AZURE_SERVICEBUS_HOSTNAME environment variable not found")
+ }
+
+ cred, err := azidentity.NewDefaultAzureCredential(nil)
+ if err != nil {
+ panic(err)
+ }
+
+ client, err := azservicebus.NewClient(namespace, cred, nil)
+ if err != nil {
+ panic(err)
+ }
+ return client
+}
+```
+
+The `GetClient` function returns a new `azservicebus.Client` object that's created by using an Azure Service Bus namespace and a credential. The namespace is provided by the `AZURE_SERVICEBUS_HOSTNAME` environment variable. And the credential is created by using the `azidentity.NewDefaultAzureCredential` function.
+
+For local development, `DefaultAzureCredential` uses the access token from the Azure CLI, which you can obtain by running the `az login` command to authenticate to Azure.
+
+> [!TIP]
+> To authenticate with a connection string, use the [NewClientFromConnectionString](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/messaging/azservicebus#NewClientFromConnectionString) function.
+
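+As a minimal sketch of that option (not part of the original walkthrough), the client could be built from a connection string stored in a hypothetical `AZURE_SERVICEBUS_CONNECTION_STRING` environment variable. The snippet reuses the `os` and `azservicebus` imports from the surrounding samples:
+
+```go
+// GetClientFromConnectionString is an illustrative alternative to GetClient.
+// It assumes the namespace connection string is stored in an environment
+// variable named AZURE_SERVICEBUS_CONNECTION_STRING (a name chosen for this sketch).
+func GetClientFromConnectionString() *azservicebus.Client {
+	connectionString, ok := os.LookupEnv("AZURE_SERVICEBUS_CONNECTION_STRING")
+	if !ok {
+		panic("AZURE_SERVICEBUS_CONNECTION_STRING environment variable not found")
+	}
+
+	client, err := azservicebus.NewClientFromConnectionString(connectionString, nil)
+	if err != nil {
+		panic(err)
+	}
+	return client
+}
+```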
+## Send messages to a queue
+
+In the `main.go` file, create a new function named `SendMessage` and add the following code:
+
+```go
+func SendMessage(message string, client *azservicebus.Client) {
+ sender, err := client.NewSender("myqueue", nil)
+ if err != nil {
+ panic(err)
+ }
+ defer sender.Close(context.TODO())
+
+ sbMessage := &azservicebus.Message{
+ Body: []byte(message),
+ }
+ err = sender.SendMessage(context.TODO(), sbMessage, nil)
+ if err != nil {
+ panic(err)
+ }
+}
+```
+
+`SendMessage` takes two parameters: a message string and an `azservicebus.Client` object. It then creates a new `azservicebus.Sender` object and sends the message to the queue. To send messages in bulk, add the `SendMessageBatch` function to your `main.go` file.
+
+```go
+func SendMessageBatch(messages []string, client *azservicebus.Client) {
+ sender, err := client.NewSender("myqueue", nil)
+ if err != nil {
+ panic(err)
+ }
+ defer sender.Close(context.TODO())
+
+ batch, err := sender.NewMessageBatch(context.TODO(), nil)
+ if err != nil {
+ panic(err)
+ }
+
+ for _, message := range messages {
+ if err := batch.AddMessage(&azservicebus.Message{Body: []byte(message)}, nil); err != nil {
+ panic(err)
+ }
+ }
+ if err := sender.SendMessageBatch(context.TODO(), batch, nil); err != nil {
+ panic(err)
+ }
+}
+```
+
+`SendMessageBatch` takes two parameters: a slice of messages and an `azservicebus.Client` object. It then creates a new `azservicebus.Sender` object and sends the messages to the queue as a single batch. A sketch of handling a batch that fills up follows below.
+++
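+A message batch has a maximum size, and `MessageBatch.AddMessage` returns `azservicebus.ErrMessageTooLarge` when the next message doesn't fit. The following sketch, which isn't part of the original article and assumes the same `myqueue` queue and a non-empty `messages` slice, sends the full batch and retries the message in a fresh batch when that happens:
+
+```go
+func SendMessagesInBatches(messages []string, client *azservicebus.Client) {
+	sender, err := client.NewSender("myqueue", nil)
+	if err != nil {
+		panic(err)
+	}
+	defer sender.Close(context.TODO())
+
+	batch, err := sender.NewMessageBatch(context.TODO(), nil)
+	if err != nil {
+		panic(err)
+	}
+
+	for _, message := range messages {
+		err := batch.AddMessage(&azservicebus.Message{Body: []byte(message)}, nil)
+		if errors.Is(err, azservicebus.ErrMessageTooLarge) {
+			// The current batch is full: send it, then retry this message in a new batch.
+			if err := sender.SendMessageBatch(context.TODO(), batch, nil); err != nil {
+				panic(err)
+			}
+			batch, err = sender.NewMessageBatch(context.TODO(), nil)
+			if err != nil {
+				panic(err)
+			}
+			// If a single message is larger than the batch limit, this call fails too
+			// and the sketch simply panics.
+			if err := batch.AddMessage(&azservicebus.Message{Body: []byte(message)}, nil); err != nil {
+				panic(err)
+			}
+		} else if err != nil {
+			panic(err)
+		}
+	}
+
+	// Send whatever remains in the final batch.
+	if err := sender.SendMessageBatch(context.TODO(), batch, nil); err != nil {
+		panic(err)
+	}
+}
+```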
+## Receive messages from a queue
+
+After you've sent messages to the queue, you can receive them with the `azservicebus.Receiver` type. To receive messages from a queue, add the `GetMessage` function to your `main.go` file.
+
+```go
+func GetMessage(count int, client *azservicebus.Client) {
+ receiver, err := client.NewReceiverForQueue("myqueue", nil)
+ if err != nil {
+ panic(err)
+ }
+ defer receiver.Close(context.TODO())
+
+ messages, err := receiver.ReceiveMessages(context.TODO(), count, nil)
+ if err != nil {
+ panic(err)
+ }
+
+ for _, message := range messages {
+ body, err := message.Body()
+ if err != nil {
+ panic(err)
+ }
+ fmt.Printf("%s\n", string(body))
+
+ err = receiver.CompleteMessage(context.TODO(), message, nil)
+ if err != nil {
+ panic(err)
+ }
+ }
+}
+```
+
+`GetMessage` takes a message count and an `azservicebus.Client` object, and creates a new `azservicebus.Receiver` object. It then receives the messages from the queue. The `Receiver.ReceiveMessages` function takes a context, the maximum number of messages to receive, and an options parameter, and it returns a slice of `azservicebus.ReceivedMessage` objects.
+
+Next, a `for` loop iterates through the messages and prints the message body. Then the `CompleteMessage` function is called to complete the message, removing it from the queue.
+
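+If processing a message fails, one option (not shown in the original sample) is to abandon it instead of completing it, so the lock is released and the message becomes available to receivers again. A minimal sketch, assuming the same `myqueue` queue and a caller-supplied `process` function:
+
+```go
+// ProcessOrAbandon receives one message, completes it if processing succeeds,
+// and abandons it (releasing the lock) if processing fails. The process
+// callback is hypothetical and supplied by the caller.
+func ProcessOrAbandon(client *azservicebus.Client, process func([]byte) error) {
+	receiver, err := client.NewReceiverForQueue("myqueue", nil)
+	if err != nil {
+		panic(err)
+	}
+	defer receiver.Close(context.TODO())
+
+	messages, err := receiver.ReceiveMessages(context.TODO(), 1, nil)
+	if err != nil {
+		panic(err)
+	}
+
+	for _, message := range messages {
+		body, err := message.Body()
+		if err != nil {
+			panic(err)
+		}
+		if processErr := process(body); processErr != nil {
+			// Abandon the message; it stays in the queue for another attempt.
+			if err := receiver.AbandonMessage(context.TODO(), message, nil); err != nil {
+				panic(err)
+			}
+			continue
+		}
+		if err := receiver.CompleteMessage(context.TODO(), message, nil); err != nil {
+			panic(err)
+		}
+	}
+}
+```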
+Messages that exceed length limits, are sent to an invalid queue, or aren't successfully processed can be sent to the dead letter queue. To send messages to the dead letter queue, add the `DeadLetterMessage` function to your `main.go` file.
++
+```go
+func DeadLetterMessage(client *azservicebus.Client) {
+ deadLetterOptions := &azservicebus.DeadLetterOptions{
+ ErrorDescription: to.Ptr("exampleErrorDescription"),
+ Reason: to.Ptr("exampleReason"),
+ }
+
+ receiver, err := client.NewReceiverForQueue("myqueue", nil)
+ if err != nil {
+ panic(err)
+ }
+ defer receiver.Close(context.TODO())
+
+ messages, err := receiver.ReceiveMessages(context.TODO(), 1, nil)
+ if err != nil {
+ panic(err)
+ }
+
+ if len(messages) == 1 {
+ err := receiver.DeadLetterMessage(context.TODO(), messages[0], deadLetterOptions)
+ if err != nil {
+ panic(err)
+ }
+ }
+}
+```
+
+`DeadLetterMessage` takes an `azservicebus.Client` object, receives a single message from the queue, and then sends it to the dead letter queue by calling `Receiver.DeadLetterMessage`. That function takes a context, the received message, and an `azservicebus.DeadLetterOptions` object, and it returns an error if the message fails to be sent to the dead letter queue.
+
+To receive messages from the dead letter queue, add the `GetDeadLetterMessage` function to your `main.go` file.
+
+```go
+func GetDeadLetterMessage(client *azservicebus.Client) {
+ receiver, err := client.NewReceiverForQueue(
+ "myqueue",
+ &azservicebus.ReceiverOptions{
+ SubQueue: azservicebus.SubQueueDeadLetter,
+ },
+ )
+ if err != nil {
+ panic(err)
+ }
+ defer receiver.Close(context.TODO())
+
+ messages, err := receiver.ReceiveMessages(context.TODO(), 1, nil)
+ if err != nil {
+ panic(err)
+ }
+
+ for _, message := range messages {
+ fmt.Printf("DeadLetter Reason: %s\nDeadLetter Description: %s\n", *message.DeadLetterReason, *message.DeadLetterErrorDescription) //change to struct an unmarshal into it
+ err := receiver.CompleteMessage(context.TODO(), message, nil)
+ if err != nil {
+ panic(err)
+ }
+ }
+}
+```
+
+`GetDeadLetterMessage` takes an `azservicebus.Client` object and creates a new `azservicebus.Receiver` object with options for the dead letter queue. It then receives one message from the dead letter queue, prints that message's dead letter reason and description, and completes it.
+
+## Sample code
+
+```go
+package main
+
+import (
+ "context"
+ "errors"
+ "fmt"
+ "os"
+
+ "github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
+ "github.com/Azure/azure-sdk-for-go/sdk/azidentity"
+ "github.com/Azure/azure-sdk-for-go/sdk/messaging/azservicebus"
+)
+
+func GetClient() *azservicebus.Client {
+ namespace, ok := os.LookupEnv("AZURE_SERVICEBUS_HOSTNAME") //ex: myservicebus.servicebus.windows.net
+ if !ok {
+ panic("AZURE_SERVICEBUS_HOSTNAME environment variable not found")
+ }
+
+ cred, err := azidentity.NewDefaultAzureCredential(nil)
+ if err != nil {
+ panic(err)
+ }
+
+ client, err := azservicebus.NewClient(namespace, cred, nil)
+ if err != nil {
+ panic(err)
+ }
+ return client
+}
+
+func SendMessage(message string, client *azservicebus.Client) {
+ sender, err := client.NewSender("myqueue", nil)
+ if err != nil {
+ panic(err)
+ }
+ defer sender.Close(context.TODO())
+
+ sbMessage := &azservicebus.Message{
+ Body: []byte(message),
+ }
+ err = sender.SendMessage(context.TODO(), sbMessage, nil)
+ if err != nil {
+ panic(err)
+ }
+}
+
+func SendMessageBatch(messages []string, client *azservicebus.Client) {
+ sender, err := client.NewSender("myqueue", nil)
+ if err != nil {
+ panic(err)
+ }
+ defer sender.Close(context.TODO())
+
+ batch, err := sender.NewMessageBatch(context.TODO(), nil)
+ if err != nil {
+ panic(err)
+ }
+
+ for _, message := range messages {
+ err := batch.AddMessage(&azservicebus.Message{Body: []byte(message)}, nil)
+ if errors.Is(err, azservicebus.ErrMessageTooLarge) {
+ fmt.Printf("Message batch is full. We should send it and create a new one.\n")
+ }
+ }
+
+ if err := sender.SendMessageBatch(context.TODO(), batch, nil); err != nil {
+ panic(err)
+ }
+}
+
+func GetMessage(count int, client *azservicebus.Client) {
+ receiver, err := client.NewReceiverForQueue("myqueue", nil)
+ if err != nil {
+ panic(err)
+ }
+ defer receiver.Close(context.TODO())
+
+ messages, err := receiver.ReceiveMessages(context.TODO(), count, nil)
+ if err != nil {
+ panic(err)
+ }
+
+ for _, message := range messages {
+ body, err := message.Body()
+ if err != nil {
+ panic(err)
+ }
+ fmt.Printf("%s\n", string(body))
+
+ err = receiver.CompleteMessage(context.TODO(), message, nil)
+ if err != nil {
+ panic(err)
+ }
+ }
+}
+
+func DeadLetterMessage(client *azservicebus.Client) {
+ deadLetterOptions := &azservicebus.DeadLetterOptions{
+ ErrorDescription: to.Ptr("exampleErrorDescription"),
+ Reason: to.Ptr("exampleReason"),
+ }
+
+ receiver, err := client.NewReceiverForQueue("myqueue", nil)
+ if err != nil {
+ panic(err)
+ }
+ defer receiver.Close(context.TODO())
+
+ messages, err := receiver.ReceiveMessages(context.TODO(), 1, nil)
+ if err != nil {
+ panic(err)
+ }
+
+ if len(messages) == 1 {
+ err := receiver.DeadLetterMessage(context.TODO(), messages[0], deadLetterOptions)
+ if err != nil {
+ panic(err)
+ }
+ }
+}
+
+func GetDeadLetterMessage(client *azservicebus.Client) {
+ receiver, err := client.NewReceiverForQueue(
+ "myqueue",
+ &azservicebus.ReceiverOptions{
+ SubQueue: azservicebus.SubQueueDeadLetter,
+ },
+ )
+ if err != nil {
+ panic(err)
+ }
+ defer receiver.Close(context.TODO())
+
+ messages, err := receiver.ReceiveMessages(context.TODO(), 1, nil)
+ if err != nil {
+ panic(err)
+ }
+
+ for _, message := range messages {
+ fmt.Printf("DeadLetter Reason: %s\nDeadLetter Description: %s\n", *message.DeadLetterReason, *message.DeadLetterErrorDescription)
+ err := receiver.CompleteMessage(context.TODO(), message, nil)
+ if err != nil {
+ panic(err)
+ }
+ }
+}
+
+func main() {
+ client := GetClient()
+
+ fmt.Println("send a single message...")
+ SendMessage("firstMessage", client)
+
+ fmt.Println("send two messages as a batch...")
+ messages := [2]string{"secondMessage", "thirdMessage"}
+ SendMessageBatch(messages[:], client)
+
+ fmt.Println("\nget all three messages:")
+ GetMessage(3, client)
+
+ fmt.Println("\nsend a message to the Dead Letter Queue:")
+ SendMessage("Send message to Dead Letter", client)
+ DeadLetterMessage(client)
+ GetDeadLetterMessage(client)
+}
+
+```
+
+## Run the code
+
+Before you run the code, create an environment variable named `AZURE_SERVICEBUS_HOSTNAME`. Set the environment variable's value to the fully qualified host name of your Service Bus namespace (for example, `myservicebus.servicebus.windows.net`).
+
+# [Bash](#tab/bash)
+
+```bash
+export AZURE_SERVICEBUS_HOSTNAME=<YourServiceBusHostName>
+```
+
+# [PowerShell](#tab/powershell)
+
+```powershell
+$env:AZURE_SERVICEBUS_HOSTNAME="<YourServiceBusHostName>"
+```
+++
+Next, run the following `go run` command to run the app:
+
+```bash
+go run main.go
+```
+
+## Next steps
+For more information, check out the following links:
+
+- [Azure Service Bus SDK for Go](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/messaging/azservicebus)
+- [Azure Service Bus SDK for Go on GitHub](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/messaging/azservicebus)
service-bus-messaging Service Bus Performance Improvements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-performance-improvements.md
To maximize throughput, follow these guidelines:
* If each receiver is in a different process, use only a single factory per process. * Receivers can use synchronous or asynchronous operations. Given the moderate receive rate of an individual receiver, client-side batching of a Complete request doesn't affect receiver throughput.
-* Leave batched store access enabled. This access reduces the overall load of the entity. It also reduces the overall rate at which messages can be written into the queue or topic.
+* Leave batched store access enabled. This access reduces the overall load of the entity. It also increases the overall rate at which messages can be written into the queue or topic.
* Set the prefetch count to a small value (for example, PrefetchCount = 10). This count prevents receivers from being idle while other receivers have large numbers of messages cached. ### Topic with a few subscriptions
service-bus-messaging Service Bus Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-quickstart-portal.md
In this article, you created a Service Bus namespace and a queue in the namespac
- [Java](service-bus-java-how-to-use-queues.md)
- [JavaScript](service-bus-nodejs-how-to-use-queues.md)
- [Python](service-bus-python-how-to-use-queues.md)
+- [Go](service-bus-go-how-to-use-queues.md)
- [PHP](service-bus-php-how-to-use-queues.md)

[free account]: https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio
service-bus-messaging Service Bus Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-samples.md
The Service Bus messaging samples demonstrate key features in [Service Bus messa
## Go samples

| Package | Samples location |
| - | - |
-| azure-service-bus-go | [GitHub location](https://github.com/Azure/azure-service-bus-go/) |
+| azservicebus | [GitHub location](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/messaging/azservicebus) |
## Management samples

You can find management samples on GitHub at https://github.com/Azure/azure-service-bus/tree/master/samples/Management.
service-bus-messaging Transport Layer Security Configure Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/transport-layer-security-configure-minimum-version.md
To check the minimum required TLS version for your Service Bus namespace, you ca
.\ARMClient.exe token <your-subscription-id> ```
-Once you have your bearer token, you can use the script below in combination with something like [Rest Client](https://marketplace.visualstudio.com/items?itemName=humao.rest-client) to query the API.
+Once you have your bearer token, you can use the script below in combination with something like [REST Client](https://marketplace.visualstudio.com/items?itemName=humao.rest-client) to query the API.
```http @token = Bearer <Token received from ARMClient>
service-connector Concept Region Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/concept-region-support.md
Previously updated : 02/17/2021- Last updated : 05/03/2022+
-# Service Connector Preview region support
+# Service Connector region support
-When you create a service connection with Service Connector, the conceptual connection resource is provisioned into the same region as your compute service instance by default. This page shows the region support information and corresponding behavior of Service Connector Public Preview.
+When you create a service connection with Service Connector, the conceptual connection resource is provisioned into the same region as your compute service instance by default. This page shows the region support information and corresponding behavior of Service Connector.
## Supported regions with regional endpoint
If your compute service instance is located in one of the regions that Service C
## Supported regions with geographical endpoint
-Your compute service instance might be created in a region where Service Connector has geographical region support. It means that your service connection will be created in a different region from your compute instance. In such cases you will see a banner providing some details about the region when you create a service connection. The region difference may impact your compliance, data residency, and data latency.
+Your compute service instance might be created in a region where Service Connector has geographical region support. It means that your service connection will be created in a different region from your compute instance. In such cases, you'll see a banner providing some details about the region when you create a service connection. The region difference may impact your compliance, data residency, and data latency.
|Region | Support Region| |-||
Your compute service instance might be created in a region where Service Connect
|West US 3 |West US 2 | |South Central US |West US 2 |
-## Regions not supported in the public preview
-
-In regions where Service Connector isn't supported, you will still find Service Connector CLI commands and the portal node, but you won't be able to create or manage service connections. The product team is working actively to enable more regions.
+## Regions not supported
+In regions where Service Connector isn't supported, you'll still find Service Connector CLI commands and the portal node, but you won't be able to create or manage service connections. The product team is working actively to enable more regions.
service-connector Concept Service Connector Internals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/concept-service-connector-internals.md
description: Learn about Service Connector internals, the architecture, the conn
+ Previously updated : 10/29/2021- Last updated : 05/03/2022 # Service Connector internals Service Connector is an extension resource provider that provides an easy way to create and manage connections between services. - Support major databases, storage, real-time services, state, and secret stores that are used together with your cloud native application (the list is actively expanding).-- Configure network settings, authentication, and manage connection environment variables or properties by creating a service connection with just a single command or a few clicks.
+- Configure network settings and authentication, and manage connection environment variables or properties, by creating a service connection with just a single command or a few steps.
- Validate connections and find corresponding suggestions to fix a service connection. ## Service connection overview
Service connection is the key concept in the resource model of Service Connector
| Connection Name | The unique name of the service connection. | | Source Service Type | Source services are usually Azure compute services. Service Connector functionalities can be found in supported compute services by extending these Azure compute service providers. | | Target Service Type | Target services are backing services or dependency services that your compute services connect to. Service Connector supports various target service types including major databases, storage, real-time services, state, and secret stores. |
-| Client Type | Client type refers your compute runtime stack, development framework, or specific client library type, which accepts the specific format of the connection environment variables or properties. |
+| Client Type | Client type refers to your compute runtime stack, development framework, or specific client library type, which accepts the specific format of the connection environment variables or properties. |
| Authentication Type | The authentication type used by the service connection. It could be pure secret/connection string, Managed Identity, or Service Principal. |
-You can create multiple service connections from one source service instance if your instance needs to connect multiple target resources. And the same target resource can be connected from multiple source instances. Service Connector will manage all connections in the properties of their source instance. ThatΓÇÖs means you can create, get, update, and delete the connections in portal or CLI command of their source service instance.
+You can create multiple service connections from one source service instance if your instance needs to connect to multiple target resources. The same target resource can also be connected from multiple source instances. Service Connector manages all connections in the properties of their source instance, which means you can create, get, update, and delete the connections in the Azure portal or by using the CLI commands of the source service instance.
-The connection support cross subscription or tenant. Source and target service can belong to different subscriptions or tenants. When you create a new service connection, the connection resource is in the same region with your compute service instance by default.
+Connections can be made across subscriptions or tenants. Source and target services can belong to different subscriptions or tenants. When you create a new service connection, the connection resource is in the same region as your compute service instance by default.
## Create or update a service connection
-Service Connector will do multiple steps while creating or updating a connection, including:
+Service Connector will run multiple tasks while creating or updating a connection, including:
-- Configure target resource network and firewall settings, making sure source and target services can talk to each other in network level.
+- Configure target resource network and firewall settings, making sure the source and target services can talk to each other at the network level.
- Configure connection information on source resource - Configure authentication information on source and target if needed-- Create or update connection support rollback if failure.
+- Support rollback of the connection creation or update in case of failure.
-Creating and updating a connection contains multiple steps. If any step failed, Service Connector will roll back all previous steps to keep the initial settings in source and target instances.
+Creating or updating a connection involves multiple steps. If a step fails, Service Connector rolls back all previous steps to keep the initial settings in the source and target instances.
## Connection configurations Once a service connection is created, the connection configuration will be set to the source service.
-In portal, navigate to **Service Connector (Preview)** page. You can expand each connection and view the connection configurations.
+
+In the Azure portal, navigate to **Service Connector**. You can expand each connection and view the connection configurations.
:::image type="content" source="media/tutorial-java-spring-confluent-kafka/portal-list-connection-config.png" alt-text="List portal configuration":::
-In CLI, you can use `list-configuration` command to view the connection configuration.
+In the CLI, you can use the `list-configuration` command to view the connection configuration.
```azurecli az webapp connection list-configuration -g {webapp_rg} -n {webapp_name} --connection {service_connection_name}
az spring-cloud connection list-configuration -g {spring_cloud_rg} -n {spring_cl
## Configuration naming convention
-Service Connector sets configuration (environment variables or Spring Boot configurations) when creating a connection. The environment variable key-value pair(s) are determined by your client type and authentication type. E.g., Using Azure SDK with managed identity requires client ID, client secret, etc. Using JDBC driver requires database connection string. The naming rule of the configuration are as following.
+Service Connector sets the configuration (environment variables or Spring Boot configurations) when creating a connection. The environment variable key-value pairs are determined by your client type and authentication type. For example, using the Azure SDK with managed identity requires a client ID, client secret, and so on. Using a JDBC driver requires a database connection string. The configuration is named according to the following convention:
-If you are using **Spring Boot** as the client type:
+If you're using **Spring Boot** as the client type:
-* Spring Boot library for each target service has its own naming convention. E.g., MySQL connection settings would be `spring.datasource.url`, `spring.datasource.username`, `spring.datasource.password`. Kafka connection settings would be `spring.kafka.properties.bootstrap.servers`.
+* The Spring Boot library for each target service has its own naming convention. For example, MySQL connection settings would be `spring.datasource.url`, `spring.datasource.username`, and `spring.datasource.password`. Kafka connection settings would be `spring.kafka.properties.bootstrap.servers`.
-If you using **other client types** except Spring Boot:
+If you're using a client type **other than Spring Boot**:
-* When connect to a target service, the key name of the first connection configuration is in format as `{Cloud}_{Type}_{Name}`. E.g., `AZURE_STORAGEBLOB_RESOURCEENDPOINT`, `CONFLUENTCLOUD_KAFKA_BOOTSTRAPSERVER`.
-* For the same type of target resource, the key name of the second connection configuration will be format as `{Cloud}_{Type}_{Connection Name}_{Name}`. E.g., `AZURE_STORAGEBLOB_CONN2_RESOURCEENDPOINT`, `CONFLUENTCLOUD_KAFKA_CONN2_BOOTSTRAPSERVER`.
+* When connecting to a target service, the key name of the first connection configuration uses the format `{Cloud}_{Type}_{Name}`. For example, `AZURE_STORAGEBLOB_RESOURCEENDPOINT`, `CONFLUENTCLOUD_KAFKA_BOOTSTRAPSERVER`.
+* For the same type of target resource, the key name of the second connection configuration uses the format `{Cloud}_{Type}_{Connection Name}_{Name}`. For example, `AZURE_STORAGEBLOB_CONN2_RESOURCEENDPOINT`, `CONFLUENTCLOUD_KAFKA_CONN2_BOOTSTRAPSERVER`. Your application reads these keys from its environment (see the sketch after this list).
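A non-Spring application simply reads these generated keys from its process environment. The following is a minimal sketch using the two Blob Storage key names shown above; Go is used only as an example client type, and the values are whatever Service Connector injected for your connection.

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// First connection to a Blob Storage target: {Cloud}_{Type}_{Name}.
	primary := os.Getenv("AZURE_STORAGEBLOB_RESOURCEENDPOINT")

	// Second connection to the same target type: {Cloud}_{Type}_{Connection Name}_{Name}.
	secondary := os.Getenv("AZURE_STORAGEBLOB_CONN2_RESOURCEENDPOINT")

	fmt.Println("first blob connection endpoint: ", primary)
	fmt.Println("second blob connection endpoint:", secondary)
}
```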
## Validate a service connection The following items will be checked while validating the connection:
service-connector How To Integrate Confluent Kafka https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-confluent-kafka.md
description: Integrate Apache kafka on Confluent Cloud into your application wit
+ Previously updated : 10/29/2021- Last updated : 05/03/2022 # Integrate Apache kafka on Confluent Cloud with Service Connector
-This page shows the supported authentication types and client types of Apache kafka on Confluent Cloud with Service using Service Connector. You might still be able to connect to Apache kafka on Confluent Cloud in other programming languages without using Service Connector. This page also shows default environment variable name and value (or Spring Boot configuration) you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
+This page shows the supported authentication types and client types of Apache Kafka on Confluent Cloud using Service Connector. You might still be able to connect to Apache Kafka on Confluent Cloud in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
## Supported compute service
This page shows the supported authentication types and client types of Apache ka
| Client Type | System-assigned Managed Identity | User-assigned Managed Identity | Secret/ConnectionString | Service Principal | | | | | | |
-| .Net | | | ![yes icon](./media/green-check.png) | |
+| .NET | | | ![yes icon](./media/green-check.png) | |
| Java | | | ![yes icon](./media/green-check.png) | | | Java - Spring Boot | | | ![yes icon](./media/green-check.png) | | | Node.js | | | ![yes icon](./media/green-check.png) | |
service-connector How To Integrate Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-cosmos-db.md
description: Integrate Azure Cosmos DB into your application with Service Connec
+ Previously updated : 11/11/2021 Last updated : 05/03/2022 # Integrate Azure Cosmos DB with Service Connector
-This page shows the supported authentication types and client types of Azure Cosmos DB using Service Connector. You might still be able to connect to Azure Cosmos DB in other programming languages without using Service Connector. This page also shows default environment variable name and value (or Spring Boot configuration) you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
+This page shows the supported authentication types and client types of Azure Cosmos DB using Service Connector. You might still be able to connect to Azure Cosmos DB in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
## Supported compute service
service-connector How To Integrate Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-event-hubs.md
description: Integrate Azure Event Hubs into your application with Service Conne
+ Previously updated : 02/21/2022 Last updated : 05/03/2022 # Integrate Azure Event Hubs with Service Connector
service-connector How To Integrate Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-key-vault.md
description: Integrate Azure Key Vault into your application with Service Connec
+ Previously updated : 10/29/2021- Last updated : 05/03/2022 # Integrate Azure Key Vault with Service Connector
> [!NOTE] > When you use Service Connector to connect your key vault or manage key vault connections, Service Connector will be using your token to perform the corresponding operations.
-This page shows the supported authentication types and client types of Azure Key Vault using Service Connector. You might still be able to connect to Azure Key Vault in other programming languages without using Service Connector. This page also shows default environment variable name and value (or Spring Boot configuration) you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
+This page shows the supported authentication types and client types of Azure Key Vault using Service Connector. You might still be able to connect to Azure Key Vault in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
## Supported compute service
This page shows the supported authentication types and client types of Azure Key
| Client Type | System-assigned Managed Identity | User-assigned Managed Identity | Secret/ConnectionString | Service Principal | | | | | | |
-| .Net | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) |
+| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) |
| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | | Java - Spring Boot | | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | | Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) |
service-connector How To Integrate Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-mysql.md
description: Integrate Azure Database for MySQL into your application with Servi
+ Previously updated : 10/29/2021- Last updated : 05/03/2022 # Integrate Azure Database for MySQL with Service Connector
-This page shows the supported authentication types and client types of Azure Database for MySQL using Service Connector. You might still be able to connect to Azure Database for MySQL in other programming languages without using Service Connector. This page also shows default environment variable name and value (or Spring Boot configuration) you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
+This page shows the supported authentication types and client types of Azure Database for MySQL using Service Connector. You might still be able to connect to Azure Database for MySQL in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
## Supported compute service
This page shows the supported authentication types and client types of Azure Dat
| Client Type | System-assigned Managed Identity | User-assigned Managed Identity | Secret/ConnectionString | Service Principal | | | | | | |
-| .Net (MySqlConnector) | | | ![yes icon](./media/green-check.png) | |
+| .NET (MySqlConnector) | | | ![yes icon](./media/green-check.png) | |
| Java (JDBC) | | | ![yes icon](./media/green-check.png) | | | Java - Spring Boot (JDBC) | | | ![yes icon](./media/green-check.png) | | | Node.js (mysql) | | | ![yes icon](./media/green-check.png) | |
service-connector How To Integrate Postgres https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-postgres.md
description: Integrate Azure Database for PostgreSQL into your application with
+ Previously updated : 10/29/2021- Last updated : 05/03/2022 # Integrate Azure Database for PostgreSQL with Service Connector
-This page shows the supported authentication types and client types of Azure Database for PostgreSQL using Service Connector. You might still be able to connect to Azure Database for PostgreSQL in other programming languages without using Service Connector. This page also shows default environment variable name and value (or Spring Boot configuration) you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
+This page shows the supported authentication types and client types of Azure Database for PostgreSQL using Service Connector. You might still be able to connect to Azure Database for PostgreSQL in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
## Supported compute service
This page shows the supported authentication types and client types of Azure Dat
| Client Type | System-assigned Managed Identity | User-assigned Managed Identity | Secret/ConnectionString | Service Principal | | | | | | |
-| .Net (ADO.NET) | | | ![yes icon](./media/green-check.png) | |
+| .NET (ADO.NET) | | | ![yes icon](./media/green-check.png) | |
| Java (JDBC) | | | ![yes icon](./media/green-check.png) | | | Java - Spring Boot (JDBC) | | | ![yes icon](./media/green-check.png) | | | Node.js (pg) | | | ![yes icon](./media/green-check.png) | |
service-connector How To Integrate Redis Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-redis-cache.md
description: Integrate Azure Cache for Redis and Azure Cache Redis Enterprise in
+ Previously updated : 1/3/2022 Last updated : 05/03/2022 # Integrate Azure Cache for Redis with Service Connector
-This page shows the supported authentication types and client types of Azure Cache for Redis using Service Connector. You might still be able to connect to Azure Cache for Redis in other programming languages without using Service Connector. This page also shows default environment variable name and value (or Spring Boot configuration) you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
+This page shows the supported authentication types and client types of Azure Cache for Redis using Service Connector. You might still be able to connect to Azure Cache for Redis in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
## Supported compute service
service-connector How To Integrate Service Bus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-service-bus.md
description: Integrate Service Bus into your application with Service Connector
+ Previously updated : 02/21/2022 Last updated : 05/03/2022 # Integrate Service Bus with Service Connector
This page shows the supported authentication types and client types of Azure Ser
| Default environment variable name | Description | Sample value | | | | - | | spring.cloud.azure.servicebus.namespace | Service Bus namespace | `{ServiceBusNamespace}.servicebus.windows.net` |
-| spring.cloud.azure.client-id | Your client ID | `{yourClientID} ` |
+| spring.cloud.azure.client-id | Your client ID | `{yourClientID}` |
#### Spring Boot service principal
service-connector How To Integrate Signalr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-signalr.md
Last updated 10/29/2021-+ # Integrate Azure SignalR Service with Service Connector
This page shows the supported authentication types and client types of Azure Sig
| Client Type | System-assigned Managed Identity | User-assigned Managed Identity | Secret/ConnectionString | Service Principal | | | | | | |
-| .Net | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
## Default environment variable names or application properties
service-connector How To Integrate Storage Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-storage-blob.md
description: Integrate Azure Blob Storage into your application with Service Con
+ Previously updated : 10/29/2021- Last updated : 05/03/2022 # Integrate Azure Blob Storage with Service Connector
-This page shows the supported authentication types and client types of Azure Blob Storage using Service Connector. You might still be able to connect to Azure Blob Storage in other programming languages without using Service Connector. This page also shows default environment variable name and value (or Spring Boot configuration) you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
+This page shows the supported authentication types and client types of Azure Blob Storage using Service Connector. You might still be able to connect to Azure Blob Storage in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
## Supported compute service
This page shows the supported authentication types and client types of Azure Blo
| Client Type | System-assigned Managed Identity | User-assigned Managed Identity | Secret/ConnectionString | Service Principal | | | | | | |
-| .Net | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | Java - Spring Boot | | | ![yes icon](./media/green-check.png) | | | Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
service-connector How To Integrate Storage File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-storage-file.md
description: Integrate Azure File Storage into your application with Service Con
+ Previously updated : 10/29/2021- Last updated : 05/03/2022 # Integrate Azure File Storage with Service Connector
-This page shows the supported authentication types and client types of Azure File Storage using Service Connector. You might still be able to connect to Azure File Storage in other programming languages without using Service Connector. This page also shows default environment variable name and value (or Spring Boot configuration) you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
+This page shows the supported authentication types and client types of Azure File Storage using Service Connector. You might still be able to connect to Azure File Storage in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
## Supported compute service
This page shows the supported authentication types and client types of Azure Fil
| Client Type | System-assigned Managed Identity | User-assigned Managed Identity | Secret/ConnectionString | Service Principal | | | | | | |
-| .Net | | | ![yes icon](./media/green-check.png) | |
+| .NET | | | ![yes icon](./media/green-check.png) | |
| Java | | | ![yes icon](./media/green-check.png) | | | Java - Spring Boot | | | ![yes icon](./media/green-check.png) | | | Node.js | | | ![yes icon](./media/green-check.png) | |
service-connector How To Integrate Storage Queue https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-storage-queue.md
description: Integrate Azure Queue Storage into your application with Service Co
+ Previously updated : 10/29/2021- Last updated : 05/03/2022 # Integrate Azure Queue Storage with Service Connector
-This page shows the supported authentication types and client types of Azure Queue Storage using Service Connector. You might still be able to connect to Azure Queue Storage in other programming languages without using Service Connector. This page also shows default environment variable name and value (or Spring Boot configuration) you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
+This page shows the supported authentication types and client types of Azure Queue Storage using Service Connector. You might still be able to connect to Azure Queue Storage in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
## Supported compute service
This page shows the supported authentication types and client types of Azure Que
| Client Type | System-assigned Managed Identity | User-assigned Managed Identity | Secret/ConnectionString | Service Principal | | | | | | |
-| .Net | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | Java - Spring Boot | ![yes icon](./media/green-check.png) | | | | | Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
service-connector How To Integrate Storage Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-storage-table.md
description: Integrate Azure Table Storage into your application with Service Co
+ Previously updated : 10/29/2021- Last updated : 05/03/2022 # Integrate Azure Table Storage with Service Connector
-This page shows the supported authentication types and client types of Azure Table Storage using Service Connector. You might still be able to connect to Azure Table Storage in other programming languages without using Service Connector. This page also shows default environment variable name and value (or Spring Boot configuration) you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
+This page shows the supported authentication types and client types of Azure Table Storage using Service Connector. You might still be able to connect to Azure Table Storage in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
## Supported compute service
This page shows the supported authentication types and client types of Azure Tab
| Client Type | System-assigned Managed Identity | User-assigned Managed Identity | Secret/ConnectionString | Service Principal | | | | | | |
-| .Net | | | ![yes icon](./media/green-check.png) | |
+| .NET | | | ![yes icon](./media/green-check.png) | |
| Java | | | ![yes icon](./media/green-check.png) | | | Node.js | | | ![yes icon](./media/green-check.png) | | | Python | | | ![yes icon](./media/green-check.png) | |
service-connector How To Troubleshoot Front End Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-troubleshoot-front-end-error.md
description: Error list and suggested actions of Service Connector
+ Previously updated : 10/29/2021- Last updated : 05/03/2022 # How to troubleshoot with Service Connector
-If you come across an issue, you can refer to the error message to find suggested actions or fixes. This how-to guide shows you the options to troubleshoot Service Connector.
+If you come across an issue, you can refer to the error message to find suggested actions or fixes. This how-to guide shows you several options to troubleshoot Service Connector.
-## Error message and suggested actions from Portal
+## Troubleshooting from the Azure portal
| Error message | Suggested Action | | | |
If you come across an issue, you can refer to the error message to find suggeste
| Unknown resource type | <ul><li>Check whether the target resource exists.</li><li>Check the correctness of the target resource ID.</li></ul> | | Unsupported resource | <ul><li>Check whether the authentication type is supported by the specified source-target connection combination.</li></ul> |
-### Error type,Error message and suggested actions using Azure CLI
+### Troubleshooting using the Azure CLI
-### InvalidArgumentValueError
+#### InvalidArgumentValueError
| Error message | Suggested Action |
If you come across an issue, you can refer to the error message to find suggeste
| Connection ID is invalid: `{ConnectionId}` | <ul><li>Check the correctness of the connection ID.</li></ul> |
-### RequiredArgumentMissingError
+#### RequiredArgumentMissingError
| Error message | Suggested Action | | | |
-| `{Argument}` should not be blank | User should provide argument value for interactive input |
+| `{Argument}` shouldn't be blank | Provide a value for the argument in the interactive input. |
| Required keys missing for parameter `{Parameter}`. All possible keys are: `{Keys}` | Provide value for the auth info parameter, usually in the form of `--param key1=val1 key2=val2`. | | Required argument is missing, please provide the arguments: `{Arguments}` | Provide the required argument. |
-### ValidationError
+#### ValidationError
| Error message | Suggested Action | | | |
-| Only one auth info is needed | User can only provide one auth info parameter, check whether auth info is not provided or multiple auth info parameters are provided. |
-| Auth info argument should be provided when updating the connection: `{ConnectionName}` | When updating a secret type connection, auth info parameter should be provided. (This is because user's secret can not be accessed through ARM api) |
-| Either client type or auth info should be specified to update | When updating a connection, either client type or auth info should be provided. |
+| Only one auth info is needed | You can only provide one auth info parameter. Check whether the auth info is missing or whether multiple auth info parameters are provided. |
+| Auth info argument should be provided when updating the connection: `{ConnectionName}` | When you update a secret type connection, the auth info parameter should be provided. This error occurs because the user's secret can't be accessed through the ARM API. |
+| Either client type or auth info should be specified to update | When you update a connection, either the client type or the auth info should be provided. |
| Usage error: {} [KEY=VALUE ...] | Check the available keys and provide values for the auth info parameter, usually in the form of `--param key1=val1 key2=val2`. | | Unsupported Key `{Key}` is provided for parameter `{Parameter}`. All possible keys are: `{Keys}` | Check the available keys and provide values for the auth info parameter, usually in the form of `--param key1=val1 key2=val2`. | | Provision failed, please create the target resource manually and then create the connection. Error details: `{ErrorTrace}` | <ul><li>Retry.</li><li>Create the target resource manually and then create the connection.</li></ul> |
service-connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/overview.md
Title: What is Service Connector?
-description: Better understand what typical use case scenarios to use Service Connector, and learn the key benefits of Service Connector.
+description: Understand typical use case scenarios for Service Connector, and learn the key benefits of Service Connector.
+ Previously updated : 10/29/2021- Last updated : 05/03/2022 # What is Service Connector?
-The Service Connector service helps you connect Azure compute service to other backing services easily. This service configures the network settings and connection information (for example, generating environment variables) between compute service and target backing service in management plane. Developers just use preferred SDK or library that consumes the connection information to do data plane operations against target backing service.
+The Service Connector service helps you connect Azure compute services to other backing services. It configures the network settings and connection information (for example, generating environment variables) between compute services and target backing services in the management plane. Developers then use their preferred SDK or library, which consumes the connection information, to perform data plane operations against the target backing service.
-This article provides an overview of Service Connector service.
+This article provides an overview of Service Connector.
## What is Service Connector used for?
-Any application that runs on Azure compute services and requires a backing service, can use Service Connector. We list some examples that can use Service Connector to simplify service-to-service connection experience.
+Any application that runs on Azure compute services and requires a backing service can use Service Connector. Below are some examples that use Service Connector to simplify the service-to-service connection experience.
* **WebApp + DB:** Use Service Connector to connect PostgreSQL, MySQL, or Cosmos DB to your App Service.
-* **WebApp + Storage:** Use Service Connector to connect to Azure Storage Accounts and use your preferred storage products easily in your App Service.
+* **WebApp + Storage:** Use Service Connector to connect to Azure Storage accounts and use your preferred storage products easily in your App Service.
* **Spring Cloud + Database:** Use Service Connector to connect PostgreSQL, MySQL, SQL DB or Cosmos DB to your Spring Cloud application. * **Spring Cloud + Apache Kafka:** Service Connector can help you connect your Spring Cloud application to Apache Kafka on Confluent Cloud.
-See [What services are supported in Service Connector](#what-services-are-supported-in-service-connector) to see more supported services and application patterns.
+See [what services are supported in Service Connector](#what-services-are-supported-in-service-connector) for more supported services and application patterns.
## What are the benefits using Service Connector?
-**Connect to target backing service with just single command or a few clicks:**
+**Connect to target backing service with just a single command or a few clicks:**
-Service Connector is designed for your ease of use. It asks three required parameters including target service instance, authentication type between compute service and target service and your application client type to create a connection. Developers can use Azure Connection CLI or guided Azure portal experience to create connections easily.
+Service Connector is designed for your ease of use. To create a connection, you'll need three required parameters: a target service instance, an authentication type between the compute service and the target service, and your application client type. Developers can use the Azure CLI or the guided Azure portal experience to create connections.
**Use Connection Status to monitor or identify connection issue:**
-Once a service connection is created. Developers can validate and check connection health status. Service Connector can suggest actions to fix broken connections.
+Once a service connection is created, developers can validate and check the health status of their connections. Service Connector can suggest some actions to take to fix broken connections.
## What services are supported in Service Connector?
-> [!NOTE]
-> Service Connector is in Public Preview. The product team is actively adding more supported service types in the list.
-
-**Compute Service:**
+**Compute
* Azure App Service * Azure Spring Cloud
-**Target Service:**
+**Target
* Azure App Configuration * Azure Cache for Redis (Basic, Standard and Premium and Enterprise tiers)
Once a service connection is created. Developers can validate and check connecti
There are two major ways to use Service Connector for your Azure application:
-* **Azure Connection CLI:** Create, list, validate and delete service-to-service connections with connection command group in Azure CLI.
-* **Service Connector experience on Azure portal:** Use guided portal experience to create service-to-service connections and manage connections with a hierarchy list.
+* **Azure CLI:** Create, list, validate and delete service-to-service connections with connection commands in the Azure CLI.
+* **Azure portal:** Use the guided portal experience to create service-to-service connections and manage connections with a hierarchy list.
## Next steps Follow the tutorials listed below to start building your own application with Service Connector. > [!div class="nextstepaction"]
-> [Quickstart: Service Connector in App Service using Azure CLI](./quickstart-cli-app-service-connection.md)
-
-> [!div class="nextstepaction"]
-> [Quickstart: Service Connector in App Service using Azure portal](./quickstart-portal-app-service-connection.md)
-
-> [!div class="nextstepaction"]
-> [Quickstart: Service Connector in Spring Cloud Service using Azure CLI](./quickstart-cli-spring-cloud-connection.md)
-
-> [!div class="nextstepaction"]
-> [Quickstart: Service Connector in Spring Cloud using Azure portal](./quickstart-portal-spring-cloud-connection.md)
-
-> [!div class="nextstepaction"]
-> [Learn about Service Connector concepts](./concept-service-connector-internals.md)
+> - [Quickstart: Service Connector in App Service using Azure CLI](./quickstart-cli-app-service-connection.md)
+> - [Quickstart: Service Connector in App Service using Azure portal](./quickstart-portal-app-service-connection.md)
+> - [Quickstart: Service Connector in Spring Cloud Service using Azure CLI](./quickstart-cli-spring-cloud-connection.md)
+> - [Quickstart: Service Connector in Spring Cloud using Azure portal](./quickstart-portal-spring-cloud-connection.md)
+> - [Learn about Service Connector concepts](./concept-service-connector-internals.md)
service-connector Quickstart Cli App Service Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/quickstart-cli-app-service-connection.md
Previously updated : 10/29/2021- Last updated : 05/03/2022 ms.devlang: azurecli+ # Quickstart: Create a service connection in App Service with the Azure CLI
az webapp connection list-support-types --output table
#### [Using Access Key](#tab/Using-access-key)
-Use the Azure CLI [az webapp connection](/cli/azure/webapp/connection) command to create a service connection to a blob storage with access key, providing the following information:
+Use the Azure CLI [az webapp connection](/cli/azure/webapp/connection) command to create a service connection to Azure Blob Storage using an access key, providing the following information:
-- **Source compute service resource group name:** The resource group name of the App Service.-- **App Service name:** The name of your App Service that connects to the target service.-- **Target service resource group name:** The resource group name of the blob storage.-- **Storage account name:** The account name of your blob storage.
+- **Source compute service resource group name:** the resource group name of the App Service.
+- **App Service name:** the name of your App Service that connects to the target service.
+- **Target service resource group name:** the resource group name of the Blob Storage.
+- **Storage account name:** the account name of your Blob Storage.
```azurecli-interactive az webapp connection create storage-blob --secret ``` > [!NOTE]
-> If you don't have a blob storage, you can run `az webapp connection create storage-blob --new --secret` to provision a new one and directly get connected to your app service.
+> If you don't have a Blob Storage account, you can run `az webapp connection create storage-blob --new --secret` to provision one and connect it directly to your App Service.
#### [Using Managed Identity](#tab/Using-Managed-Identity) > [!IMPORTANT] > Using Managed Identity requires you have the permission to [Azure AD role assignment](../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md). If you don't have the permission, your connection creation would fail. You can ask your subscription owner for the permission or using access key to create the connection.
-Use the Azure CLI [az webapp connection](/cli/azure/webapp/connection) command to create a service connection to a blob storage with System-assigned Managed Identity, providing the following information:
+Use the Azure CLI [az webapp connection](/cli/azure/webapp/connection) command to create a service connection to Azure Blob Storage with a System-assigned Managed Identity, providing the following information:
-- **Source compute service resource group name:** The resource group name of the App Service.-- **App Service name:** The name of your App Service that connects to the target service.-- **Target service resource group name:** The resource group name of the blob storage.-- **Storage account name:** The account name of your blob storage.
+- **Source compute service resource group name:** the resource group name of the App Service.
+- **App Service name:** the name of your App Service that connects to the target service.
+- **Target service resource group name:** the resource group name of the Blob Storage.
+- **Storage account name:** the account name of your Blob Storage.
```azurecli-interactive az webapp connection create storage-blob --system-identity ``` > [!NOTE]
-> If you don't have a blob storage, you can run `az webapp connection create storage-blob --new --system-identity` to provision a new one and directly get connected to your app service.
+> If you don't have a Blob Storage account, you can run `az webapp connection create storage-blob --new --system-identity` to provision one and connect it directly to your App Service.
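Once the connection exists, the app itself carries no secret: Service Connector injects the storage endpoint (for non-Spring client types, a key such as `AZURE_STORAGEBLOB_RESOURCEENDPOINT`, per the naming convention described earlier in this digest), and the system-assigned identity authenticates the SDK call. The following is a minimal, hypothetical Go sketch, not quickstart code; it assumes a recent `azblob` module, the `azidentity` module, and placeholder names `mycontainer` and `hello.txt`.

```go
package main

import (
	"context"
	"log"
	"os"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
)

func main() {
	// Endpoint injected by Service Connector, for example "https://<account>.blob.core.windows.net/".
	endpoint := os.Getenv("AZURE_STORAGEBLOB_RESOURCEENDPOINT")
	if endpoint == "" {
		log.Fatal("AZURE_STORAGEBLOB_RESOURCEENDPOINT is not set")
	}

	// With a system-assigned managed identity there is no secret in the app;
	// DefaultAzureCredential picks up the identity on App Service.
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatalf("credential: %v", err)
	}

	client, err := azblob.NewClient(endpoint, cred, nil)
	if err != nil {
		log.Fatalf("blob client: %v", err)
	}

	// "mycontainer" and "hello.txt" are placeholders, not names from the quickstart.
	if _, err := client.UploadBuffer(context.TODO(), "mycontainer", "hello.txt", []byte("hello"), nil); err != nil {
		log.Fatalf("upload: %v", err)
	}
}
```

If you created the connection with an access key instead, the injected configuration is a connection string, and the SDK's connection-string constructor is the usual counterpart.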
## View connections
-Use the Azure CLI [az webapp connection](/cli/azure/webapp/connection) command to list connection to your App Service, providing the following information:
+Use the Azure CLI [az webapp connection](/cli/azure/webapp/connection) command to list connections to your App Service, providing the following information:
-- **Source compute service resource group name:** The resource group name of the App Service.-- **App Service name:** The name of your App Service that connects to the target service.
+- **Source compute service resource group name:** the resource group name of the App Service.
+- **App Service name:** the name of your App Service that connects to the target service.
```azurecli-interactive az webapp connection list -g "<your-app-service-resource-group>" --webapp "<your-app-service-name>"
az webapp connection list -g "<your-app-service-resource-group>" --webapp "<your
Follow the tutorials listed below to start building your own application with Service Connector. > [!div class="nextstepaction"]
-> [Tutorial: WebApp + Storage with Azure CLI](./tutorial-csharp-webapp-storage-cli.md)
-
-> [!div class="nextstepaction"]
-> [Tutorial: WebApp + PostgreSQL with Azure CLI](./tutorial-django-webapp-postgres-cli.md)
+> - [Tutorial: WebApp + Storage with Azure CLI](./tutorial-csharp-webapp-storage-cli.md)
+> - [Tutorial: WebApp + PostgreSQL with Azure CLI](./tutorial-django-webapp-postgres-cli.md)
service-connector Quickstart Cli Spring Cloud Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/quickstart-cli-spring-cloud-connection.md
Previously updated : 10/29/2021- Last updated : 05/03/2022 ms.devlang: azurecli+ # Quickstart: Create a service connection in Spring Cloud with the Azure CLI
-The [Azure CLI](/cli/azure) is a set of commands used to create and manage Azure resources. The Azure CLI is available across Azure services and is designed to get you working quickly with Azure, with an emphasis on automation. This quickstart shows you the options to create Azure Web PubSub instance with the Azure CLI.
+The [Azure CLI](/cli/azure) is a set of commands used to create and manage Azure resources. The Azure CLI is available across Azure services and is designed to get you working quickly with Azure, with an emphasis on automation. This quickstart shows you several options to create a service connection in Spring Cloud with the Azure CLI.
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] [!INCLUDE [azure-cli-prepare-your-environment.md](../../includes/azure-cli-prepare-your-environment.md)] -- This quickstart requires version 2.30.0 or higher of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+- Version 2.30.0 or higher of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
-- This quickstart assumes that you already have at least a Spring Cloud application running on Azure. If you don't have a Spring Cloud application, [create one](../spring-cloud/quickstart.md).
+- At least one Spring Cloud application running on Azure. If you don't have a Spring Cloud application, [create one](../spring-cloud/quickstart.md).
## View supported target service types
-Use the Azure CLI [az spring-cloud connection]() command create and manage service connections to your Spring Cloud application.
+Use the Azure CLI [az spring-cloud connection](quickstart-cli-spring-cloud-connection.md) command to create and manage service connections to your Spring Cloud application.
```azurecli-interactive az provider register -n Microsoft.ServiceLinker
az spring-cloud connection list-support-types --output table
## Create a service connection
-#### [Using Access Key](#tab/Using-access-key)
+#### [Using an access key](#tab/Using-access-key)
-Use the Azure CLI [az spring-cloud connection]() command to create a service connection to a blob storage with access key, providing the following information:
+Use the Azure CLI [az spring-cloud connection](quickstart-cli-spring-cloud-connection.md) command to create a service connection to Azure Blob Storage using an access key, providing the following information:
-- **Spring Cloud resource group name:** The resource group name of the Spring Cloud.-- **Spring Cloud name:** The name of your Spring Cloud.-- **Spring Cloud app name:** The name of your Spring Cloud app that connects to the target service.-- **Target service resource group name:** The resource group name of the blob storage.-- **Storage account name:** The account name of your blob storage.
+- **Spring Cloud resource group name:** the resource group name of your Spring Cloud instance.
+- **Spring Cloud name:** the name of your Spring Cloud instance.
+- **Spring Cloud app name:** the name of your Spring Cloud app that connects to the target service.
+- **Target service resource group name:** the resource group name of the Blob Storage.
+- **Storage account name:** the account name of your Blob Storage.
```azurecli-interactive az spring-cloud connection create storage-blob --secret ``` > [!NOTE]
-> If you don't have a blob storage, you can run `az spring-cloud connection create storage-blob --new --secret` to provision a new one and directly get connected to your app service.
+> If you don't have a Blob Storage account, you can run `az spring-cloud connection create storage-blob --new --secret` to provision one and connect it directly to your Spring Cloud app.
#### [Using Managed Identity](#tab/Using-Managed-Identity) > [!IMPORTANT]
-> Using Managed Identity requires you have the permission to [Azure AD role assignment](../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md). If you don't have the permission, your connection creation would fail. You can ask your subscription owner for the permission or using access key to create the connection.
+> To use Managed Identity, you must have permission to manage [role assignments in Azure Active Directory](../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md). If you don't have this permission, your connection creation will fail. You can ask your subscription owner to grant you a role assignment permission or use an access key to create the connection.
-Use the Azure CLI [az spring-cloud connection]() command to create a service connection to a blob storage with System-assigned Managed Identity, providing the following information:
+Use the Azure CLI [az spring-cloud connection](quickstart-cli-spring-cloud-connection.md) command to create a service connection to Azure Blob Storage with a System-assigned Managed Identity, providing the following information:
-- **Spring Cloud resource group name:** The resource group name of the Spring Cloud.-- **Spring Cloud name:** The name of your Spring Cloud.-- **Spring Cloud app name:** The name of your Spring Cloud app that connects to the target service.-- **Target service resource group name:** The resource group name of the blob storage.-- **Storage account name:** The account name of your blob storage.
+- **Spring Cloud resource group name:** the resource group name of your Spring Cloud instance.
+- **Spring Cloud name:** the name of your Spring Cloud instance.
+- **Spring Cloud app name:** the name of your Spring Cloud app that connects to the target service.
+- **Target service resource group name:** the resource group name of the Blob Storage.
+- **Storage account name:** the account name of your Blob Storage.
```azurecli-interactive az spring-cloud connection create storage-blob --system-identity ``` > [!NOTE]
-> If you don't have a blob storage, you can run `az spring-cloud connection create --system-identity --new --secret` to provision a new one and directly get connected to your app service.
+> If you don't have a Blob Storage account, you can run `az spring-cloud connection create storage-blob --new --system-identity` to provision one and connect it directly to your Spring Cloud app.
## View connections
-Use the Azure CLI [az spring-cloud connection]() command to list connection to your Spring Cloud application, providing the following information:
+Use the Azure CLI [az spring-cloud connection](quickstart-cli-spring-cloud-connection.md) command to list connections to your Spring Cloud application, providing the following information:
```azurecli-interactive az spring-cloud connection list -g <your-spring-cloud-resource-group> --spring-cloud <your-spring-cloud-name>
az spring-cloud connection list -g <your-spring-cloud-resource-group> --spring-c
Follow the tutorials listed below to start building your own application with Service Connector. > [!div class="nextstepaction"]
-> [Tutorial: Spring Cloud + MySQL](./tutorial-java-spring-mysql.md)
-
-> [!div class="nextstepaction"]
-> [Tutorial: Spring Cloud + Apache Kafka on Confluent Cloud](./tutorial-java-spring-confluent-kafka.md)
+> - [Tutorial: Spring Cloud + MySQL](./tutorial-java-spring-mysql.md)
+> - [Tutorial: Spring Cloud + Apache Kafka on Confluent Cloud](./tutorial-java-spring-confluent-kafka.md)
service-connector Quickstart Portal App Service Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/quickstart-portal-app-service-connection.md
description: Quickstart showing how to create a service connection in App Servic
+ Previously updated : 01/27/2022-
-# Customer intent: As an app developer, I want to connect several services together so that I can ensure I have the right connectivity to access my Azure resources.
Last updated : 05/03/2022
+#Customer intent: As an app developer, I want to connect several services together so that I can ensure I have the right connectivity to access my Azure resources.
# Quickstart: Create a service connection in App Service from the Azure portal
Get started with Service Connector by using the Azure portal to create a new ser
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet).-- An application deployed to App Service in a [Region supported by Service Connector](./concept-region-support.md). If you don't have one yet, [create and deploy an app to App Service](../app-service/quickstart-dotnetcore.md).
+- An application deployed to App Service in a [region supported by Service Connector](./concept-region-support.md). If you don't have one yet, [create and deploy an app to App Service](../app-service/quickstart-dotnetcore.md).
## Sign in to Azure
Sign in to the Azure portal at [https://portal.azure.com/](https://portal.azure.
You'll use Service Connector to create a new service connection in App Service. 1. Select the **All resources** button on the left of the Azure portal. Type **App Service** in the filter and select the name of the App Service you want to use in the list.
-2. Select **Service Connector (Preview)** from the left table of contents. Then select **Create**.
+2. Select **Service Connector** from the left table of contents. Then select **Create**.
3. Select or enter the following settings. | Setting | Suggested value | Description | | | - | -- | | **Service type** | Blob Storage | Target service type. If you don't have a Storage Blob container, you can [create one](../storage/blobs/storage-quickstart-blobs-portal.md) or use another service type. |
- | **Subscription** | One of your subscriptions | The subscription where your target service (the service you want to connect to) is. The default value is the subscription that this App Service is in. |
+ | **Subscription** | One of your subscriptions | The subscription where your target service (the service you want to connect to) is located. The default value is the subscription that this App Service is in. |
| **Connection name** | Generated unique name | The connection name that identifies the connection between your App Service and target service | | **Storage account** | Your storage account | The target storage account you want to connect to. If you choose a different service type, select the corresponding target service instance. | | **Client type** | The same app stack on this App Service | Your application stack that works with the target service you selected. The default value comes from the App Service runtime stack. |
-4. Select **Next: Authentication** to select the authentication type. Then select **Connection string** to use access key to connect your Blob storage account.
+4. Select **Next: Authentication** to select the authentication type. Then select **Connection string** to use an access key to connect to your Blob Storage account.
5. Then select **Next: Review + Create** to review the provided information. Then select **Create** to create the service connection. It might take 1 minute to complete the operation. ## View service connections in App Service
-1. In **Service Connector (Preview)**, you see an App Service connection to the target service.
+1. In **Service Connector**, you see an App Service connection to the target service.
1. Select the **>** button to expand the list. You can see the environment variables required by your application code.
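If you prefer the CLI, a similar connection can likely be created with the `az webapp connection create storage-blob` command used in the Blob Storage tutorial in this article set. The sketch below is illustrative only: the resource names are placeholders, and the `--secret` flag (connection-string authentication) is assumed by analogy with the other commands shown in these docs.

```azurecli
# Sketch: connect an App Service app to Blob Storage with connection-string (secret) authentication
az webapp connection create storage-blob \
    -g <app-resource-group> \
    -n <app-name> \
    --tg <storage-resource-group> \
    --account <storage-account-name> \
    --secret
```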
You'll use Service Connector to create a new service connection in App Service.
Follow the tutorials listed below to start building your own application with Service Connector. > [!div class="nextstepaction"]
-> [Tutorial: WebApp + Storage with Azure CLI](./tutorial-csharp-webapp-storage-cli.md)
-
-> [!div class="nextstepaction"]
-> [Tutorial: WebApp + PostgreSQL with Azure CLI](./tutorial-django-webapp-postgres-cli.md)
+> - [Tutorial: WebApp + Storage with Azure CLI](./tutorial-csharp-webapp-storage-cli.md)
+> - [Tutorial: WebApp + PostgreSQL with Azure CLI](./tutorial-django-webapp-postgres-cli.md)
service-connector Quickstart Portal Container Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/quickstart-portal-container-apps.md
+
+ Title: Quickstart - Create a service connection in Container Apps from the Azure portal
+description: Quickstart showing how to create a service connection in Azure Container Apps from the Azure portal
+++++ Last updated : 05/23/2022
+#Customer intent: As an app developer, I want to connect a containerized app to a storage account in the Azure portal using Service Connector.
++
+# Quickstart: Create a service connection in Container Apps from the Azure portal
+
+Get started with Service Connector by using the Azure portal to create a new service connection in Azure Container Apps.
+
+> [!IMPORTANT]
+> This feature in Container Apps is currently in preview.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet).
+- An application deployed to Container Apps in a [region supported by Service Connector](./concept-region-support.md). If you don't have one yet, [create and deploy a container to Container Apps](/container-apps/quickstart-portal).
+
+## Sign in to Azure
+
+Sign in to the Azure portal at [https://portal.azure.com/](https://portal.azure.com/) with your Azure account.
+
+## Create a new service connection in Container Apps
+
+You'll use Service Connector to create a new service connection in Container Apps.
+
+1. Select the **All resources** button on the left of the Azure portal. Type **Container Apps** in the filter and select the name of the container app you want to use in the list.
+2. Select **Service Connector** from the left table of contents. Then select **Create**.
+3. Select or enter the following settings.
+
+ | Setting | Suggested value | Description |
+ | | - | -- |
+ | **Container** | Your container | Select your container app. |
+ | **Service type** | Blob Storage | Target service type. If you don't have a Storage Blob container, you can [create one](../storage/blobs/storage-quickstart-blobs-portal.md) or use another service type. |
+ | **Subscription** | One of your subscriptions | The subscription where your target service (the service you want to connect to) is located. The default value is the subscription that this container app is in. |
+ | **Connection name** | Generated unique name | The connection name that identifies the connection between your container app and target service |
+ | **Storage account** | Your storage account | The target storage account you want to connect to. If you choose a different service type, select the corresponding target service instance. |
+ | **Client type** | The app stack in your selected container | Your application stack that works with the target service you selected. The default value is **none**, which will generate a list of configurations. If you know about the app stack or the client SDK in the container you selected, select the same app stack for the client type. |
+
+4. Select **Next: Authentication** to select the authentication type. Then select **Connection string** to use an access key to connect to your Blob Storage account.
+
+5. Select **Next: Network** to select the network configuration. Then select **Enable firewall settings** to update the firewall allowlist in Blob Storage so that your container app can reach the Blob Storage.
+
+6. Then select **Next: Review + Create** to review the provided information. Running the final validation takes a few seconds. Then select **Create** to create the service connection. It might take one minute to complete the operation.
+
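A comparable connection can probably also be created from the CLI. The sketch below assumes that an `az containerapp connection create` command group mirrors the `az webapp connection` and `az spring-cloud connection` commands shown elsewhere in these articles; the command group, flags, and values are all assumptions to confirm with `--help`.

```azurecli
# Sketch only: assumes an az containerapp connection command group analogous to webapp/spring-cloud
az containerapp connection create storage-blob \
    -g <container-app-resource-group> \
    -n <container-app-name> \
    --tg <storage-resource-group> \
    --account <storage-account-name> \
    --secret
```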
+## View service connections in Container Apps
+
+1. In **Service Connector**, select **Refresh** and you'll see a Container Apps connection displayed.
+
+1. Select **>** to expand the list. You can see the environment variables required by your application code.
+
+1. Select **...** and then **Validate**. You can see the connection validation details in the pop-up panel on the right.
+
+## Next steps
+
+Follow the documentation below to learn more about Service Connector:
+
+> [!div class="nextstepaction"]
+> [Service Connector internals](./concept-service-connector-internals.md)
service-connector Quickstart Portal Spring Cloud Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/quickstart-portal-spring-cloud-connection.md
Title: Quickstart - Create a service connection in Spring Cloud from Azure portal
+ Title: Quickstart - Create a service connection in Spring Cloud from the Azure portal
description: Quickstart showing how to create a service connection in Spring Cloud from Azure portal + Previously updated : 10/29/2021- Last updated : 05/03/2022
-# Quickstart: Create a service connection in Spring Cloud from Azure portal
+# Quickstart: Create a service connection in Spring Cloud from the Azure portal
-This quickstart shows you how to create a new service connection with Service Connector in Spring Cloud from Azure portal.
+This quickstart shows you how to create a new service connection with Service Connector in Spring Cloud from the Azure portal.
+## Prerequisites
-This quickstart assumes that you already have at least a Spring Cloud application running on Azure. If you don't have a Spring Cloud application, [create one](../spring-cloud/quickstart.md).
+- An Azure account with an active subscription. [Create an Azure account for free](https://azure.microsoft.com/free/dotnet).
+- A Spring Cloud application running on Azure. If you don't have one yet, [create a Spring Cloud application](../spring-cloud/quickstart.md).
## Sign in to Azure Sign in to the Azure portal at [https://portal.azure.com/](https://portal.azure.com/) with your Azure account.
-## Create a new service connection in Spring Cloud
+## Create a new service connection in Azure Spring Cloud
-You will use Service Connector to create a new service connection in Spring Cloud.
-
-1. Select **All resource** button found on the left of the Azure portal. Type **Spring Cloud** in the filter and click the name of the Spring Cloud you want to use in the list.
+1. Select the **All resources** button from the left menu. Type **Azure Spring Cloud** in the filter and select the name of the Spring Cloud resource you want to use from the list.
1. Select **Apps** and select the application name from the list.
-1. Select **Service Connector (Preview)** from the left table of contents. Then select **Create**.
+1. Select **Service Connector** from the left table of contents. Then select **Create**.
1. Select or enter the following settings. | Setting | Suggested value | Description | | | - | -- |
- | **Subscription** | One of your subscriptions | The subscription where your target service (the service you want to connect to) is. The default value is the subscription that this App Service is in. |
- | **Service Type** | Blob Storage | Target service type. If you don't have a Blob storage, you can [create one](../storage/blobs/storage-quickstart-blobs-portal.md) or use an other service type. |
+ | **Subscription** | One of your subscriptions | The subscription where your target service is located. The target service is the service you want to connect to. The default value is the subscription that contains your Spring Cloud app. |
+ | **Service Type** | Azure Blob Storage | Target service type. If you don't have an Azure Blob storage, you can [create one](../storage/blobs/storage-quickstart-blobs-portal.md) or use another service type. |
| **Connection Name** | Generated unique name | The connection name that identifies the connection between your App Service and target service | | **Storage account** | Your storage account | The target storage account you want to connect to. If you choose a different service type, select the corresponding target service instance. |
You will use Service Connector to create a new service connection in Spring Clou
## View service connections in Spring Cloud
-1. In **Service Connector (Preview)**, you see a Spring Cloud connection to the target service.
+1. Select **Service Connector** to view the Spring Cloud connection to the target service.
-1. Click **>** button to expand the list, you can see the properties required by your Spring boot application.
+1. Select **>** to expand the list and access the properties required by your Spring Boot application.
-1. Click **...** button and select **Validate**, you can see the connection validation details in the pop-up blade from right.
+1. Select the ellipsis **...** and **Validate**. You can see the connection validation details in the pop-up blade on the right.
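The same checks can be run from the CLI. The `list` command below matches the one shown in the CLI quickstart for Spring Cloud connections; the `validate` subcommand and its `--connection` flag are assumptions to confirm with `az spring-cloud connection --help`.

```azurecli
# List connections for the Spring Cloud app; the validate line is a sketch with assumed flags
az spring-cloud connection list -g <your-spring-cloud-resource-group> --spring-cloud <your-spring-cloud-name>
az spring-cloud connection validate -g <your-spring-cloud-resource-group> --spring-cloud <your-spring-cloud-name> --connection <connection-name>
```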
## Next steps Follow the tutorials listed below to start building your own application with Service Connector. > [!div class="nextstepaction"]
-> [Tutorial: Spring Cloud + MySQL](./tutorial-java-spring-mysql.md)
-
-> [!div class="nextstepaction"]
-> [Tutorial: Spring Cloud + Apache Kafka on Confluent Cloud](./tutorial-java-spring-confluent-kafka.md)
+> - [Tutorial: Spring Cloud + MySQL](./tutorial-java-spring-mysql.md)
+> - [Tutorial: Spring Cloud + Apache Kafka on Confluent Cloud](./tutorial-java-spring-confluent-kafka.md)
service-connector Tutorial Csharp Webapp Storage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-csharp-webapp-storage-cli.md
Title: 'Tutorial: Deploy Web Application Connected to Azure Storage Blob with Service Connector'
-description: Create a web app connected to Azure Storage Blob with Service Connector.
+ Title: 'Tutorial: Deploy a web application connected to Azure Blob Storage with Service Connector'
+description: Create a web app connected to Azure Blob Storage with Service Connector.
Previously updated : 10/28/2021- Last updated : 05/03/2022 ms.devlang: azurecli+
-# Tutorial: Deploy Web Application Connected to Azure Storage Blob with Service Connector
+# Tutorial: Deploy a web application connected to Azure Blob Storage with Service Connector
-Learn how to access Azure Storage for a web app (not a signed-in user) running on Azure App Service by using managed identities. In this tutorial, you use the Azure CLI to complete the following tasks:
+Learn how to access Azure Blob Storage for a web app (not a signed-in user) running on Azure App Service by using managed identities. In this tutorial, you'll use the Azure CLI to complete the following tasks:
> [!div class="checklist"] > * Set up your initial environment with the Azure CLI > * Create a storage account and an Azure Blob Storage container. > * Deploy code to Azure App Service and connect to storage with managed identity using Service Connector
-## 1. Set up your initial environment
+## Prerequisites
-1. Have an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
-2. Install the <a href="/cli/azure/install-azure-cli" target="_blank">Azure CLI</a> 2.30.0 or higher, with which you run commands in any shell to provision and configure Azure resources.
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet).
+- The <a href="/cli/azure/install-azure-cli" target="_blank">Azure CLI</a> 2.30.0 or higher. You'll use it to run commands in any shell to provision and configure Azure resources.
+## Set up your initial environment
+1. Check that your Azure CLI version is 2.30.0 or higher:
-Check that your Azure CLI version is 2.30.0 or higher:
+ ```azurecli
+ az --version
+ ```
+ If you need to upgrade, try the `az upgrade` command (requires version 2.11+) or see <a href="/cli/azure/install-azure-cli" target="_blank">Install the Azure CLI</a>.
-```Azure CLI
-az --version
-```
+1. Sign in to Azure using the CLI:
-If you need to upgrade, try the `az upgrade` command (requires version 2.11+) or see <a href="/cli/azure/install-azure-cli" target="_blank">Install the Azure CLI</a>.
+ ```azurecli
+ az login
+ ```
+ This command opens a browser to gather your credentials. When the command finishes, it shows a JSON output containing information about your subscriptions.
-Then sign in to Azure through the CLI:
+ Once signed in, you can run Azure commands with the Azure CLI to work with resources in your subscription.
-```Azure CLI
-az login
-```
+## Clone or download the sample app
-This command opens a browser to gather your credentials. When the command finishes, it shows JSON output containing information about your subscriptions.
+1. Clone the sample repository:
+ ```Bash
+ git clone https://github.com/Azure-Samples/serviceconnector-webapp-storageblob-dotnet.git
+ ```
-Once signed in, you can run Azure commands with the Azure CLI to work with resources in your subscription.
+1. Go to the repository's root folder:
+ ```Bash
+ cd serviceconnector-webapp-storageblob-dotnet
+ ```
-## 2. Clone or download the sample app
+## Create the App Service app
-Clone the sample repository:
-```Bash
-git clone https://github.com/Azure-Samples/serviceconnector-webapp-storageblob-dotnet.git
-```
+1. In the terminal, make sure you're in the *WebAppStorageMISample* repository folder that contains the app code.
-and go to the root folder of repository:
-```Bash
-cd serviceconnector-webapp-storageblob-dotnet
-```
+1. Create an App Service app (the host process) with the [`az webapp up`](/cli/azure/webapp#az-webapp-up) command below.
+
+ ```azurecli
+ az webapp up --name <app-name> --sku B1 --location eastus --resource-group ServiceConnector-tutorial-rg
+ ```
-## 3. Create the App Service app
+ Replace the following placeholders with your own values:
-In the terminal, make sure you're in the *WebAppStorageMISample* repository folder that contains the app code.
+ - For the *`--location`* argument, make sure to use a [region supported by Service Connector](concept-region-support.md).
+ - Replace *`<app-name>`* with a unique name across all Azure (the server endpoint is `https://<app-name>.azurewebsites.net`). Allowed characters for *`<app-name>`* are `A`-`Z`, `0`-`9`, and `-`. A good pattern is to use a combination of your company name and an app identifier.
-Create an App Service app (the host process) with the [`az webapp up`](/cli/azure/webapp#az-webapp-up) command:
+## Create a storage account and a Blob Storage container
-```Azure CLI
-az webapp up --name <app-name> --sku B1 --location eastus --resource-group ServiceConnector-tutorial-rg
-```
--- For the `--location` argument, make sure you use the location that [Service Connector supports](concept-region-support.md).-- **Replace** *\<app-name>* with a unique name across all Azure (the server endpoint is `https://<app-name>.azurewebsites.net`). Allowed characters for *\<app-name>* are `A`-`Z`, `0`-`9`, and `-`. A good pattern is to use a combination of your company name and an app identifier.-
-## 4. Create a storage account and Blob Storage container
+In the terminal, run the following command to create a general purpose v2 storage account and a Blob Storage container.
-In the terminal, run the following command to create general-purpose v2 storage account and Blob Storage container. **Replace** *\<storage-name>* with a unique name. The container name must be lowercase, must start with a letter or number, and can include only letters, numbers, and the dash (-) character.
-
-```Azure CLI
+```azurecli
az storage account create --name <storage-name> --resource-group ServiceConnector-tutorial-rg --sku Standard_RAGRS --https-only ```
+Replace *`<storage-name>`* with a unique name. The name of the container must be in lowercase, start with a letter or a number, and can include only letters, numbers, and the dash (-) character.
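The heading also mentions a Blob Storage container. If the sample doesn't create the container for you, one way to create it is with `az storage container create`; the container name below is a placeholder.

```azurecli
# Create a blob container in the storage account (container name is a placeholder)
az storage container create --account-name <storage-name> --name <container-name> --auth-mode login
```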
-## 5. Connect App Service app to Blob Storage container with managed identity
+## Connect an App Service app to a Blob Storage container with a managed identity
-In the terminal, run the following command to connect your web app to blob storage with managed identity.
+In the terminal, run the following command to connect your web app to blob storage with a managed identity.
-```Azure CLI
+```azurecli
az webapp connection create storage-blob -g ServiceConnector-tutorial-rg -n <app-name> --tg ServiceConnector-tutorial-rg --account <storage-name> --system-identity ``` -- **Replace** *\<app-name>* with your web app name you used in step 3.-- **Replace** *\<storage-name>* with your storage app name you used in step 4.
+ Replace the following placeholders with your own values:
+- Replace *`<app-name>`* with the web app name you used when you created the App Service app.
+- Replace *`<storage-name>`* with the storage account name you used when you created the storage account.
> [!NOTE] > If you see the error message "The subscription is not registered to use Microsoft.ServiceLinker", please run `az provider register -n Microsoft.ServiceLinker` to register the Service Connector resource provider and run the connection command again.
-## 6. Run sample code
+## Run sample code
-In the terminal, run the following command to open the sample application in your browser. Replace *\<app-name>* with your web app name you used in step 3.
+In the terminal, run the following command to open the sample application in your browser. Replace *`<app-name>`* with the web app name you used earlier.
```Azure CLI az webapp browse --name <app-name> ```
-The sample code is a web application. Each time you refresh the index page, it will create or update a blob with the text `Hello Service Connector! Current is {UTC Time Now}` to the storage container and read back to show it in the index page.
+The sample code is a web application. Each time you refresh the index page, the application creates or updates a blob with the text `Hello Service Connector! Current is {UTC Time Now}` in the storage container, reads it back, and displays it on the index page.
## Next steps
service-connector Tutorial Django Webapp Postgres Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-django-webapp-postgres-cli.md
Title: 'Tutorial: Using Service Connector to build a Django app with Postgres on Azure App Service' description: Create a Python web app with a PostgreSQL database and deploy it to Azure. The tutorial uses the Django framework, the app is hosted on Azure App Service on Linux, and the App Service and Database is connected with Service Connector. ms.devlang: python+ Previously updated : 11/30/2021 Last updated : 05/03/2022 zone_pivot_groups: postgres-server-options-
-# Tutorial: Using Service Connector (Preview) to build a Django app with Postgres on Azure App Service
+# Tutorial: Using Service Connector to build a Django app with Postgres on Azure App Service
> [!NOTE]
-> You are using Service Connector (preview) that makes it easier to connect your web app to database service in this tutorial. The tutorial here is a modification of the [App Service tutorial](../app-service/tutorial-python-postgresql-app.md) to use this preview feature so you will see similarities. Look into section [4.2 Configure environment variables to connect the database](#42-configure-environment-variables-to-connect-the-database) in this tutorial to see where service connector comes into play and simplifies the connection process given in the App Service tutorial.
+> In this tutorial, you use Service Connector, which makes it easier to connect your web app to a database service. This tutorial is a modification of the [App Service tutorial](../app-service/tutorial-python-postgresql-app.md), so you'll see similarities. Look at the section [Configure environment variables to connect the database](#configure-environment-variables-to-connect-the-database) in this tutorial to see where Service Connector comes into play and simplifies the connection process described in the App Service tutorial.
::: zone pivot="postgres-single-server"
-This tutorial shows how to deploy a data-driven Python [Django](https://www.djangoproject.com/) web app to [Azure App Service](overview.md) and connect it to an Azure Database for Postgres database. You can also try the PostgresSQL Flexible server by selecting the option above. Flexible server provides a simpler deployment mechanism and lower ongoing costs.
+This tutorial shows how to deploy a data-driven Python [Django](https://www.djangoproject.com/) web app to [Azure App Service](overview.md) and connect it to an Azure Database for PostgreSQL database. You can also try PostgreSQL Flexible Server by selecting the option above. Flexible Server provides a simpler deployment mechanism and lower ongoing costs.
In this tutorial, you use the Azure CLI to complete the following tasks:
In this tutorial, you use the Azure CLI to complete the following tasks:
::: zone pivot="postgres-flexible-server"
-This tutorial shows how to deploy a data-driven Python [Django](https://www.djangoproject.com/) web app to [Azure App Service](overview.md) and connect it to an [Azure Database for PostgreSQL Flexible server (Preview)](../postgresql/flexible-server/index.yml) database. If you cannot use PostgreSQL Flexible server, then select the Single Server option above.
+This tutorial shows how to deploy a data-driven Python [Django](https://www.djangoproject.com/) web app to [Azure App Service](overview.md) and connect it to an [Azure Database for PostgreSQL Flexible server](../postgresql/flexible-server/index.yml) database. If you can't use PostgreSQL Flexible server, then select the Single Server option above.
-In this tutorial, you use the Azure CLI to complete the following tasks:
+In this tutorial, you'll use the Azure CLI to complete the following tasks:
> [!div class="checklist"] > * Set up your initial environment with Python and the Azure CLI
In this tutorial, you use the Azure CLI to complete the following tasks:
:::zone-end
-## 1. Set up your initial environment
+## Set up your initial environment
1. Have an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). 1. Install <a href="https://www.python.org/downloads/" target="_blank">Python 3.6 or higher</a>.
Once signed in, you can run Azure commands with the Azure CLI to work with resou
Having issues? [Let us know](https://aka.ms/DjangoCLITutorialHelp).
-## 2. Clone or download the sample app
+## Clone or download the sample app
# [Git clone](#tab/clone)
cd serviceconnector-webapp-postgresql-django
::: zone pivot="postgres-flexible-server"
-For Flexible server (Preview), use the flexible-server branch of the sample, which contains a few necessary changes, such as how the database server URL is set and adding `'OPTIONS': {'sslmode': 'require'}` to the Django database configuration as required by Azure PostgreSQL Flexible server.
+For Flexible server, use the flexible-server branch of the sample, which contains a few necessary changes, such as how the database server URL is set and adding `'OPTIONS': {'sslmode': 'require'}` to the Django database configuration as required by Azure PostgreSQL Flexible server.
```terminal git checkout flexible-server
git checkout flexible-server
Visit [https://github.com/Azure-Samples/djangoapp](https://github.com/Azure-Samples/djangoapp). ::: zone pivot="postgres-flexible-server"
-For Flexible server, select the branches control that says "master" and select the flexible-server branch instead.
+For Flexible server, select the branches control that says "master" and then select the **flexible-server** branch.
::: zone-end
-Select **Clone**, and then select **Download ZIP**.
+Select **Code**, and then select **Download ZIP**.
Unpack the ZIP file into a folder named *djangoapp*.
The production settings are specific to configuring Django to run in any product
Having issues? [Let us know](https://aka.ms/DjangoCLITutorialHelp).
-## 3. Create Postgres database in Azure
+## Create Postgres database in Azure
::: zone pivot="postgres-single-server" <!-- > [!NOTE]
Install the `db-up` extension for the Azure CLI:
az extension add --name db-up ```
-If the `az` command is not recognized, be sure you have the Azure CLI installed as described in [Set up your initial environment](#1-set-up-your-initial-environment).
+If the `az` command isn't recognized, be sure you have the Azure CLI installed as described in [Set up your initial environment](#set-up-your-initial-environment).
Then create the Postgres database in Azure with the [`az postgres up`](/cli/azure/postgres#az-postgres-up) command:
Then create the Postgres database in Azure with the [`az postgres up`](/cli/azur
az postgres up --resource-group ServiceConnector-tutorial-rg --location eastus --sku-name B_Gen5_1 --server-name <postgres-server-name> --database-name pollsdb --admin-user <admin-username> --admin-password <admin-password> --ssl-enforcement Enabled ``` -- **Replace** *\<postgres-server-name>* with a name that's **unique across all Azure** (the server endpoint becomes `https://<postgres-server-name>.postgres.database.azure.com`). A good pattern is to use a combination of your company name and another unique value.-- For *\<admin-username>* and *\<admin-password>*, specify credentials to create an administrator user for this Postgres server. The admin username can't be *azure_superuser*, *azure_pg_admin*, *admin*, *administrator*, *root*, *guest*, or *public*. It can't start with *pg_*. The password must contain **8 to 128 characters** from three of the following categories: English uppercase letters, English lowercase letters, numbers (0 through 9), and non-alphanumeric characters (for example, !, #, %). The password cannot contain username.-- Do not use the `$` character in the username or password. Later you create environment variables with these values where the `$` character has special meaning within the Linux container used to run Python apps.-- The B_Gen5_1 (Basic, Gen5, 1 core) [pricing tier](../postgresql/concepts-pricing-tiers.md) used here is the least expensive. For production databases, omit the `--sku-name` argument to use the GP_Gen5_2 (General Purpose, Gen 5, 2 cores) tier instead.
+Replace the following placeholders with your own values:
+- **Replace** *`<postgres-server-name>`* with a name that's **unique across all Azure** (the server endpoint becomes `https://<postgres-server-name>.postgres.database.azure.com`). A good pattern is to use a combination of your company name and another unique value.
+- For *`<admin-username>`* and *`<admin-password>`*, specify credentials to create an administrator user for this Postgres server. The admin username can't be *azure_superuser*, *azure_pg_admin*, *admin*, *administrator*, *root*, *guest*, or *public*. It can't start with *pg_*. The password must contain **8 to 128 characters** from three of the following categories: English uppercase letters, English lowercase letters, numbers (0 through 9), and non-alphanumeric characters (for example, *!*, *#*, *%*). The password can't contain the username.
+- Don't use the `$` character in the username or password. You'll later create environment variables with these values where the `$` character has special meaning within the Linux container used to run Python apps.
+- The *B_Gen5_1* (Basic, Gen5, 1 core) [pricing tier](../postgresql/concepts-pricing-tiers.md) used here is the least expensive. For production databases, omit the `--sku-name` argument to use the GP_Gen5_2 (General Purpose, Gen 5, 2 cores) tier instead.
This command performs the following actions, which may take a few minutes:
This command performs the following actions, which may take a few minutes:
You can do all the steps separately with other `az postgres` and `psql` commands, but `az postgres up` does all the steps together.
-When the command completes, it outputs a JSON object that contains different connection strings for the database along with the server URL, a generated user name (such as "joyfulKoala@msdocs-djangodb-12345"), and a GUID password. **Copy the user name and password to a temporary text file** as you need them later in this tutorial.
+When the command completes, it outputs a JSON object that contains different connection strings for the database along with the server URL, a generated user name (such as "joyfulKoala@msdocs-djangodb-12345"), and a GUID password.
+
+> [!IMPORTANT]
+> Copy the user name and password to a temporary text file as you will need them later in this tutorial.
<!-- not all locations support az postgres up --> > [!TIP]
-> `-l <location-name>`, can be set to any one of the [Azure regions](https://azure.microsoft.com/global-infrastructure/regions/). You can get the regions available to your subscription with the [`az account list-locations`](/cli/azure/account#az-account-list-locations) command. For production apps, put your database and your app in the same location.
+> `-l <location-name>` can be set to any of the [Azure regions](https://azure.microsoft.com/global-infrastructure/regions/). You can get the regions available to your subscription with the [`az account list-locations`](/cli/azure/account#az-account-list-locations) command. For production apps, put your database and your app in the same location.
::: zone-end
When the command completes, it outputs a JSON object that contains different con
az postgres flexible-server create --sku-name Standard_B1ms --public-access all ```
- If the `az` command is not recognized, be sure you have the Azure CLI installed as described in [Set up your initial environment](#1-set-up-your-initial-environment).
+ If the `az` command isn't recognized, be sure you have the Azure CLI installed as described in [Set up your initial environment](#set-up-your-initial-environment).
The [az postgres flexible-server create](/cli/azure/postgres/flexible-server#az-postgres-flexible-server-create) command performs the following actions, which take a few minutes:
When the command completes, it outputs a JSON object that contains different con
Having issues? [Let us know](https://aka.ms/DjangoCLITutorialHelp).
-## 4. Deploy the code to Azure App Service
+## Deploy the code to Azure App Service
In this section, you create an App Service app to host your code, connect this app to the Postgres database, and then deploy your code to that host.
-### 4.1 Create the App Service app
+### Create the App Service app
::: zone pivot="postgres-single-server"
Upon successful deployment, the command generates JSON output like the following
Having issues? Refer first to the [Troubleshooting guide](../app-service/configure-language-python.md#troubleshooting), otherwise, [let us know](https://aka.ms/DjangoCLITutorialHelp).
-### 4.2 Configure environment variables to connect the database
+### Configure environment variables to connect the database
With the code now deployed to App Service, the next step is to connect the app to the Postgres database in Azure.
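The connection itself is created with Service Connector. As a rough sketch only, a command along the following lines could be used; the target type `postgres`, the `--server` and `--database` flags, and the `--secret name=... secret=...` form are assumptions here, and the full tutorial contains the authoritative command.

```azurecli
# Sketch only: connect the App Service app to the Postgres database with Service Connector.
# Target type and flag names are assumptions; see the full tutorial for the exact command.
az webapp connection create postgres \
    -g ServiceConnector-tutorial-rg \
    -n <app-name> \
    --tg ServiceConnector-tutorial-rg \
    --server <postgres-server-name> \
    --database pollsdb \
    --secret name=<admin-username> secret=<admin-password>
```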
In your Python code, you access these settings as environment variables with sta
Having issues? Refer first to the [Troubleshooting guide](../app-service/configure-language-python.md#troubleshooting), otherwise, [let us know](https://aka.ms/DjangoCLITutorialHelp).
-### 4.3 Run Django database migrations
+### Run Django database migrations
-Django database migrations ensure that the schema in the PostgreSQL on Azure database match those described in your code.
+Django database migrations ensure that the schema in the PostgreSQL on Azure database matches the schema described in your code.
1. Run `az webapp ssh` to open an SSH session for the web app in the browser:
Django database migrations ensure that the schema in the PostgreSQL on Azure dat
1. The `createsuperuser` command prompts you for superuser credentials. For the purposes of this tutorial, use the default username `root`, press **Enter** for the email address to leave it blank, and enter `Pollsdb1` for the password.
-1. If you see an error that the database is locked, make sure that you ran the `az webapp settings` command in the previous section. Without those settings, the migrate command cannot communicate with the database, resulting in the error.
+1. If you see an error that the database is locked, make sure that you ran the `az webapp settings` command in the previous section. Without those settings, the migrate command can't communicate with the database, resulting in the error.
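For reference, the commands run inside the SSH session are the standard Django management commands referenced above; this is a sketch, and the full tutorial lists the exact steps.

```bash
# Inside the SSH session opened with `az webapp ssh` (sketch; see the full tutorial for the exact steps)
python manage.py migrate
python manage.py createsuperuser
```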
Having issues? Refer first to the [Troubleshooting guide](../app-service/configure-language-python.md#troubleshooting), otherwise, [let us know](https://aka.ms/DjangoCLITutorialHelp).
-### 4.4 Create a poll question in the app
+### Create a poll question in the app
1. Open the app website. The app should display the message "Polls app" and "No polls are available" because there are no specific polls yet in the database.
Having issues? Refer first to the [Troubleshooting guide](../app-service/configu
az webapp browse ```
- If you see "Application Error", then it's likely that you either didn't create the required settings in the previous step, [Configure environment variables to connect the database](#42-configure-environment-variables-to-connect-the-database), or that those value contain errors. Run the command `az webapp config appsettings list` to check the settings.
+ If you see "Application Error", then it's likely that you either didn't create the required settings in the previous step "[Configure environment variables to connect the database](#configure-environment-variables-to-connect-the-database)", or that these values contain errors. Run the command `az webapp config appsettings list` to check the settings.
After updating the settings to correct any errors, give the app a minute to restart, then refresh the browser. 1. Browse to the web app's admin page by appending `/admin` to the URL, for example, `http://<app-name>.azurewebsites.net/admin`. Sign in using Django superuser credentials from the previous section (`root` and `Pollsdb1`). Under **Polls**, select **Add** next to **Questions** and create a poll question with some choices.
-1. Return to the main the website (`http://<app-name>.azurewebsites.net`) to confirm that the questions are now presented to the user. Answer questions however you like to generate some data in the database.
+1. Return to the main website (`http://<app-name>.azurewebsites.net`) to confirm that the questions are now presented to the user. Answer questions however you like to generate some data in the database.
**Congratulations!** You're running a Python Django web app in Azure App Service for Linux, with an active Postgres database.
Having issues? Refer first to the [Troubleshooting guide](../app-service/configu
> App Service detects a Django project by looking for a *wsgi.py* file in each subfolder, which `manage.py startproject` creates by default. When App Service finds that file, it loads the Django web app. For more information, see [Configure built-in Python image](../app-service/configure-language-python.md).
-## 5. Clean up resources
+## Clean up resources
-If you'd like to keep the app or continue to the additional tutorials, skip ahead to [Next steps](#next-steps). Otherwise, to avoid incurring ongoing charges you can delete the resource group created for this tutorial:
+If you'd like to keep the app or continue to additional tutorials, skip ahead to [Next steps](#next-steps). Otherwise, to avoid incurring ongoing charges you can delete the resource group created for this tutorial:
```azurecli az group delete --name ServiceConnector-tutorial-rg --no-wait
service-connector Tutorial Java Spring Confluent Kafka https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-java-spring-confluent-kafka.md
Title: 'Tutorial: Deploy a Spring Boot app connected to Apache Kafka on Confluent Cloud with Service Connector in Azure Spring Cloud' description: Create a Spring Boot app connected to Apache Kafka on Confluent Cloud with Service Connector in Azure Spring Cloud. ms.devlang: java+ Previously updated : 10/28/2021- Last updated : 05/03/2022 # Tutorial: Deploy a Spring Boot app connected to Apache Kafka on Confluent Cloud with Service Connector in Azure Spring Cloud
-Learn how to access Apache Kafka on Confluent Cloud for a spring boot application running on Azure Spring Cloud. In this tutorial, you complete the following tasks:
+Learn how to access Apache Kafka on Confluent Cloud for a Spring Boot application running on Azure Spring Cloud. In this tutorial, you complete the following tasks:
> [!div class="checklist"] > * Create Apache Kafka on Confluent Cloud
Learn how to access Apache Kafka on Confluent Cloud for a spring boot applicatio
> * Build and deploy the Spring Boot app > * Connect Apache Kafka on Confluent Cloud to Azure Spring Cloud using Service Connector
-## 1. Set up your initial environment
+## Set up your initial environment
1. Have an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). 2. Install Java 8 or 11. 3. Install the <a href="/cli/azure/install-azure-cli" target="_blank">Azure CLI</a> 2.18.0 or higher, with which you run commands in any shell to provision and configure Azure resources.
-## 2. Clone or download the sample app
+## Clone or download the sample app
-Clone the sample repository:
+1. Clone the sample repository:
-```Bash
-git clone https://github.com/Azure-Samples/serviceconnector-springcloud-confluent-springboot/
-```
+ ```Bash
+ git clone https://github.com/Azure-Samples/serviceconnector-springcloud-confluent-springboot/
+ ```
-Then navigate into that folder:
+1. Navigate into that folder:
-```Bash
-cd serviceconnector-springcloud-confluent-springboot
-```
+ ```Bash
+ cd serviceconnector-springcloud-confluent-springboot
+ ```
-## 3. Prepare cloud services
+## Prepare cloud services
-### 3.1 Create an instance of Apache Kafka for Confluent Cloud
+### Create an instance of Apache Kafka for Confluent Cloud
Create an instance of Apache Kafka for Confluent Cloud by following [this guidance](../partner-solutions/apache-kafka-confluent-cloud/create.md).
-### 3.2 Create Kafka cluster and schema registry on Confluent Cloud
+### Create Kafka cluster and schema registry on Confluent Cloud
-1. Login to Confluent Cloud by SSO provided by Azure
+1. Sign in to Confluent Cloud using the SSO provided by Azure
:::image type="content" source="media/tutorial-java-spring-confluent-kafka/azure-confluent-sso-login.png" alt-text="The link of Confluent cloud SSO login using Azure portal" lightbox="media/tutorial-java-spring-confluent-kafka/azure-confluent-sso-login.png":::
Create an instance of Apache Kafka for Confluent Cloud by following [this guidan
* Region/zones: eastus(Virginia), Single Zone * Cluster name: `cluster_1` or any other name.
-1. In **Cluster overview** -> **Cluster settings**, get the Kafka **bootstrap server url** and take a note.
+1. In **Cluster overview** -> **Cluster settings**, get the Kafka **bootstrap server url** and note it down.
:::image type="content" source="media/tutorial-java-spring-confluent-kafka/confluent-cluster-setting.png" alt-text="Cluster settings of Apache Kafka on Confluent Cloud" lightbox="media/tutorial-java-spring-confluent-kafka/confluent-cluster-setting.png":::
-1. Create API-keys for the cluster in **Data integration** -> **API Keys** -> **+ Add Key** with **Global access**. Take a note of the key and secret.
+1. Create API keys for the cluster in **Data integration** -> **API Keys** -> **+ Add Key** with **Global access**. Note down the key and secret.
1. Create a topic named `test` with 6 partitions in **Topics** -> **+ Add topic**
-1. In default environment, click **Schema Registry** tab. Enable the Schema Registry and take a note of the **API endpoint**.
-1. Create API-keys for schema registry. Take a note of the key and secret.
+1. Under **default environment**, select the **Schema Registry** tab. Enable the Schema Registry and note down the **API endpoint**.
+1. Create API keys for schema registry. Save the key and secret.
-### 3.3 Create a Spring Cloud instance
+### Create a Spring Cloud instance
-Create an instance of Azure Spring Cloud by following [this guidance](../spring-cloud/quickstart.md) in Java. Make sure your Spring Cloud instance is created in [the region that has Service Connector support](concept-region-support.md).
+Create an instance of Azure Spring Cloud by following [the Spring Cloud quickstart](../spring-cloud/quickstart.md) in Java. Make sure your Spring Cloud instance is created in [the region that has Service Connector support](concept-region-support.md).
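For reference, provisioning from the CLI looks roughly like the commands used in the MySQL tutorial later in this article set; the resource group name, location, and instance name below are placeholders.

```azurecli
# Sketch: create a resource group and an Azure Spring Cloud instance (placeholder values)
az group create --name <your-resource-group-name> --location <a-supported-region>
az spring-cloud create -n <service-instance-name> -g <your-resource-group-name>
```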
-## 4. Build and deploy the app
+## Build and deploy the app
-### 4.1 build the sample app and create a new spring app
+### Build the sample app and create a new Spring app
1. Sign in to Azure and choose your subscription.
-```azurecli
-az login
+ ```azurecli
+ az login
-az account set --subscription <Name or ID of your subscription>
-```
+ az account set --subscription <Name or ID of your subscription>
+ ```
1. Build the project using gradle
-```Bash
-./gradlew build
-```
+ ```Bash
+ ./gradlew build
+ ```
-1. Create the app with public endpoint assigned. If you selected Java version 11 when generating the Spring Cloud project, include the --runtime-version=Java_11 switch.
+1. Create the app with a public endpoint assigned. If you selected Java version 11 when generating the Spring Cloud project, include the `--runtime-version=Java_11` switch.
-```azurecli
-az spring-cloud app create -n hellospring -s <service-instance-name> -g <your-resource-group-name> --assign-endpoint true
-```
+ ```azurecli
+ az spring-cloud app create -n hellospring -s <service-instance-name> -g <your-resource-group-name> --assign-endpoint true
+ ```
-## 4.2 Create service connection using Service Connector
+## Create a service connection using Service Connector
#### [CLI](#tab/Azure-CLI)
-Run the following command to connect your Apache Kafka on Confluent cloud to your spring cloud app.
+Run the following command to connect your Apache Kafka on Confluent Cloud to your Spring Cloud app.
```azurecli az spring-cloud connection create confluent-cloud -g <your-spring-cloud-resource-group> --service <your-spring-cloud-service> --app <your-spring-cloud-app> --deployment <your-spring-cloud-deployment> --bootstrap-server <kafka-bootstrap-server-url> --kafka-key <cluster-api-key> --kafka-secret <cluster-api-secret> --schema-registry <kafka-schema-registry-endpoint> --schema-key <registry-api-key> --schema-secret <registry-api-secret> ```
-* **Replace** *\<your-resource-group-name>* with the resource group name that you created your Spring Cloud instance.
-* **Replace** *\<kafka-bootstrap-server-url>* with your kafka bootstrap server url (the value should be like `pkc-xxxx.eastus.azure.confluent.cloud:9092`)
-* **Replace** *\<cluster-api-key>* and *\<cluster-api-secret>* with your cluster API key and secret.
-* **Replace** *\<kafka-schema-registry-endpoint>* with your kafka Schema Registry endpoint (the value should be like `https://psrc-xxxx.westus2.azure.confluent.cloud`)
-* **Replace** *\<registry-api-key>* and *\<registry-api-secret>* with your kafka Schema Registry API key and secret.
+Replace the following placeholders with your own values:
+* Replace *`<your-resource-group-name>`* with the name of the resource group in which you created your Spring Cloud instance.
+* Replace *`<kafka-bootstrap-server-url>`* with your Kafka bootstrap server URL (for example, `pkc-xxxx.eastus.azure.confluent.cloud:9092`).
+* Replace *`<cluster-api-key>`* and *`<cluster-api-secret>`* with your cluster API key and secret.
+* Replace *`<kafka-schema-registry-endpoint>`* with your Kafka Schema Registry endpoint (for example, `https://psrc-xxxx.westus2.azure.confluent.cloud`).
+* Replace *`<registry-api-key>`* and *`<registry-api-secret>`* with your Kafka Schema Registry API key and secret.
> [!NOTE] > If you see the error message "The subscription is not registered to use Microsoft.ServiceLinker", please run `az provider register -n Microsoft.ServiceLinker` to register the Service Connector resource provider and run the connection command again. #### [Portal](#tab/Azure-portal)
-Click **Service Connector (Preview)** Select or enter the following settings.
+Select **Service Connector** and enter the following settings.
| Setting | Suggested value | Description | | | - | -- |
-| **Service Type** | Apache Kafka on Confluent cloud | Target service type. If you don't have a Apache Kafka on Confluent cloud, please complete the previous steps in this tutorial. |
-| **Name** | Generated unique name | The connection name that identifies the connection between your Spring Cloud and target service |
-| **Kafka bootstrap server url** | Your kafka bootstrap server url | You get this value from step 3.2 |
-| **Cluster API Key** | Your cluster API key | |
-| **Cluster API Secret** | Your cluster API secret | |
-| **Create connection for schema registry** | checked | Also create a connection to the schema registry |
-| **Schema Registry endpoint** | Your kafka Schema Registry endpoint | |
-| **Schema Registry API Key** | Your kafka Schema Registry API Key | |
-| **Schema Registry API Secret** | Your kafka Schema Registry API Secret | |
+| **Service Type** | Apache Kafka on Confluent cloud | Target service type. If you don't have an Apache Kafka on Confluent Cloud target service, complete the previous steps in this tutorial. |
+| **Name** | Generated unique name | The connection name that identifies the connection between your Spring Cloud and target service. |
+| **Kafka bootstrap server url** | Your Kafka bootstrap server url. | Enter the value from earlier step: "Create Kafka cluster and schema registry on Confluent Cloud". |
+| **Cluster API Key** | Your cluster API key. | The API key you created for the Kafka cluster. |
+| **Cluster API Secret** | Your cluster API secret. | The API secret you created for the Kafka cluster. |
+| **Create connection for schema registry** | Checked | Also create a connection to the schema registry. |
+| **Schema Registry endpoint** | Your Kafka Schema Registry endpoint. | The Schema Registry API endpoint you noted down earlier. |
+| **Schema Registry API Key** | Your Kafka Schema Registry API Key. | The API key you created for the Schema Registry. |
+| **Schema Registry API Secret** | Your Kafka Schema Registry API Secret. | The API secret you created for the Schema Registry. |
Select **Review + Create** to review the connection settings. Then select **Create** to start creating the service connection.
-## 4.3 Deploy the Jar file for the app
+## Deploy the JAR file
-Run the following command to upload the jar file (`build/libs/java-springboot-0.0.1-SNAPSHOT.jar`) to your Spring Cloud app
+Run the following command to upload the JAR file (`build/libs/java-springboot-0.0.1-SNAPSHOT.jar`) to your Spring Cloud app.
```azurecli az spring-cloud app deploy -n hellospring -s <service-instance-name> -g <your-resource-group-name> --artifact-path build/libs/java-springboot-0.0.1-SNAPSHOT.jar ```
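Before validating in the browser, you can also stream the application logs to confirm the app started and connected. This is a sketch that assumes the `az spring-cloud app logs` command and its `-f` (follow) flag; verify with `--help`.

```azurecli
# Sketch: stream application logs (assumes the az spring-cloud app logs command)
az spring-cloud app logs -n hellospring -s <service-instance-name> -g <your-resource-group-name> -f
```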
-## 5. Validate the Kafka data ingestion
+## Validate the Kafka data ingestion
-Navigate to your Spring Cloud app's endpoint from Azure portal, click the application URL. You will see "10 messages were produced to topic test".
+Navigate to your Spring Cloud app's endpoint from the Azure portal and select the application URL. You'll see "10 messages were produced to topic test".
Then go to the Confluent portal and the topic's page will show production throughput.
service-connector Tutorial Java Spring Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-java-spring-mysql.md
Title: 'Tutorial: Deploy Spring Cloud Application Connected to Azure Database for MySQL with Service Connector'
+ Title: 'Tutorial: Deploy a Spring Cloud Application Connected to Azure Database for MySQL with Service Connector'
description: Create a Spring Boot application connected to Azure Database for MySQL with Service Connector. Previously updated : 10/28/2021- Last updated : 05/03/2022+ ms.devlang: azurecli # Tutorial: Deploy Spring Cloud Application Connected to Azure Database for MySQL with Service Connector
-In this tutorial, you complete the following tasks using the Azure portal or the Azure CLI. Both methods are explained in the following procedures.
+In this tutorial, you will complete the following tasks using the Azure portal or the Azure CLI. Both methods are explained in the following procedures.
> [!div class="checklist"] > * Provision an instance of Azure Spring Cloud > * Build and deploy apps to Azure Spring Cloud > * Integrate Azure Spring Cloud with Azure Database for MySQL with Service Connector
-## 1. Prerequisites
+## Prerequisites
* [Install JDK 8 or JDK 11](/azure/developer/java/fundamentals/java-jdk-install) * [Sign up for an Azure subscription](https://azure.microsoft.com/free/) * [Install the Azure CLI version 2.0.67 or higher](/cli/azure/install-azure-cli) and install the Azure Spring Cloud extension with the command: `az extension add --name spring-cloud`
-## 2. Provision an instance of Azure Spring Cloud
+## Provision an instance of Azure Spring Cloud
The following procedure uses the Azure CLI extension to provision an instance of Azure Spring Cloud.
-1. Update Azure CLI with Azure Spring Cloud extension.
+1. Update Azure CLI with the Azure Spring Cloud extension.
```azurecli az extension update --name spring-cloud
The following procedure uses the Azure CLI extension to provision an instance of
1. Prepare a name for your Azure Spring Cloud service. The name must be between 4 and 32 characters long and can contain only lowercase letters, numbers, and hyphens. The first character of the service name must be a letter and the last character must be either a letter or a number.
-1. Create a resource group to contain your Azure Spring Cloud service. Create in instance of the Azure Spring Cloud service.
+1. Create a resource group to contain your Azure Spring Cloud service and an instance of the Azure Spring Cloud service.
```azurecli az group create --name ServiceConnector-tutorial-rg az spring-cloud create -n <service instance name> -g ServiceConnector-tutorial-rg ```
-## 3. Create an Azure Database for MySQL
+## Create an Azure Database for MySQL
The following procedure uses the Azure CLI extension to provision an instance of Azure Database for MySQL. 1. Install the [db-up](/cli/azure/ext/db-up/mysql) extension.
-```azurecli
-az extension add --name db-up
-```
+ ```azurecli
+ az extension add --name db-up
+ ```
-Create an Azure Database for MySQL server using the following command:
+1. Create an Azure Database for MySQL server using the following command:
-```azurecli
-az mysql up --resource-group ServiceConnector-tutorial-rg --admin-user <admin-username> --admin-password <admin-password>
-```
+ ```azurecli
+ az mysql up --resource-group ServiceConnector-tutorial-rg --admin-user <admin-username> --admin-password <admin-password>
+ ```
-- For *\<admin-username>* and *\<admin-password>*, specify credentials to create an administrator user for this MySQL server. The admin username can't be *azure_superuser*, *azure_pg_admin*, *admin*, *administrator*, *root*, *guest*, or *public*. It can't start with *pg_*. The password must contain **8 to 128 characters** from three of the following categories: English uppercase letters, English lowercase letters, numbers (0 through 9), and non-alphanumeric characters (for example, !, #, %). The password cannot contain username.
+ For *`<admin-username>`* and *`<admin-password>`*, specify credentials to create an administrator user for this MySQL server. The admin username can't be *azure_superuser*, *azure_pg_admin*, *admin*, *administrator*, *root*, *guest*, or *public*. It can't start with *pg_*. The password must contain **8 to 128 characters** from three of the following categories: English uppercase letters, English lowercase letters, numbers (0 through 9), and non-alphanumeric characters (for example, !, #, %). The password can't contain the username.
-The server is created with the following default values (unless you manually override them):
+   The server is created with the following default values, unless you manually override them (a sketch of overriding them with the corresponding flags follows this section):
-**Setting** | **Default value** | **Description**
-||
-server-name | System generated | A unique name that identifies your Azure Database for MySQL server.
-sku-name | GP_Gen5_2 | The name of the sku. Follows the convention {pricing tier}\_{compute generation}\_{vCores} in shorthand. The default is a General Purpose Gen5 server with 2 vCores. See our [pricing page](https://azure.microsoft.com/pricing/details/mysql/) for more information about the tiers.
-backup-retention | 7 | How long a backup should be retained. Unit is days.
-geo-redundant-backup | Disabled | Whether geo-redundant backups should be enabled for this server or not.
-location | westus2 | The Azure location for the server.
-ssl-enforcement | Enabled | Whether SSL should be enabled or not for this server.
-storage-size | 5120 | The storage capacity of the server (unit is megabytes).
-version | 5.7 | The MySQL major version.
+ **Setting** | **Default value** | **Description**
+   ---|---|---
+ server-name | System generated | A unique name that identifies your Azure Database for MySQL server.
+ sku-name | GP_Gen5_2 | The name of the sku. Follows the convention {pricing tier}\_{compute generation}\_{vCores} in shorthand. The default is a General Purpose Gen5 server with 2 vCores. See our [pricing page](https://azure.microsoft.com/pricing/details/mysql/) for more information about the tiers.
+ backup-retention | 7 | How long a backup should be retained. Unit is days.
+ geo-redundant-backup | Disabled | Whether geo-redundant backups should be enabled for this server or not.
+ location | westus2 | The Azure location for the server.
+ ssl-enforcement | Enabled | Whether SSL should be enabled or not for this server.
+ storage-size | 5120 | The storage capacity of the server (unit is megabytes).
+ version | 5.7 | The MySQL major version.
> [!NOTE]
Once your server is created, it comes with the following settings:
- An empty database named `sampledb` is created - A new user named "root" with privileges to `sampledb` is created
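
The setting names in the table above also correspond to `az mysql up` flags, so the defaults can be overridden at creation time. This is a hedged sketch, assuming the db-up extension exposes flags matching those setting names; adjust the values to your needs:

```azurecli
# Sketch: override a few of the defaults listed above when creating the server.
# Flag names are assumed to match the setting names in the defaults table.
az mysql up \
    --resource-group ServiceConnector-tutorial-rg \
    --admin-user <admin-username> \
    --admin-password <admin-password> \
    --location eastus \
    --sku-name GP_Gen5_2 \
    --version 5.7 \
    --storage-size 5120
```
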
-## 4. Build and deploy the app
+## Build and deploy the app
1. Create the app with a public endpoint assigned (see the sketch below). If you selected Java version 11 when generating the Spring Cloud project, include the `--runtime-version=Java_11` switch.
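
   A minimal sketch of the create command, assuming the `hellospring` app name used later in this tutorial and the spring-cloud extension's `--assign-endpoint` flag:

   ```azurecli
   # Sketch: create the app and expose a public endpoint (flag names assumed).
   # Append --runtime-version=Java_11 if the project targets Java 11.
   az spring-cloud app create -n hellospring -s <service instance name> -g ServiceConnector-tutorial-rg --assign-endpoint true
   ```
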
Once your server is created, it comes with the following settings:
   ```bash
   git clone https://github.com/Azure-Samples/serviceconnector-springcloud-mysql-springboot.git
   ```
-1. Build the project using maven.
+1. Build the project using Maven.
   ```bash
   cd serviceconnector-springcloud-mysql-springboot
   mvn clean package -DskipTests
   ```
-1. Deploy the Jar file for the app (`target/demo-0.0.1-SNAPSHOT.jar`).
+1. Deploy the JAR file for the app (`target/demo-0.0.1-SNAPSHOT.jar`).
   ```azurecli
   az spring-cloud app deploy -n hellospring -s <service instance name> -g ServiceConnector-tutorial-rg --artifact-path target/demo-0.0.1-SNAPSHOT.jar
   ```
-1. Query app status after deployments with the following command.
+1. Query app status after deployment with the following command.
   ```azurecli
   az spring-cloud app list -o table
   ```
service-connector Tutorial Portal Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-portal-key-vault.md
+
+ Title: Tutorial - Create a service connection and store secrets into Key Vault
+description: Tutorial showing how to create a service connection and store secrets into Key Vault
+++++ Last updated : 05/23/2022++
+# Quickstart: Create a service connection and store secrets into Key Vault
+
+Azure Key Vault is a cloud service that provides a secure store for secrets. You can securely store keys, passwords, certificates, and other secrets. When you create a service connection, you can securely store access keys and secrets in a connected key vault. In this tutorial, you'll complete the following tasks using the Azure portal. Both authentication methods (connection string and service principal) are covered in the procedures that follow.
+
+> [!div class="checklist"]
+> * Create a service connection to Azure Key Vault in Azure App Service
+> * Create a service connection to Azure Blob Storage and store secrets in Key Vault
+> * View secrets in Key Vault
+
+## Prerequisites
+
+To create a service connection and store secrets in Key Vault with Service Connector, you need:
+
+* Basic knowledge of [using Service Connector](./quickstart-portal-app-service-connection.md)
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet).
+* An app hosted on App Service. If you don't have one yet, [create and deploy an app to App Service](../app-service/quickstart-dotnetcore.md)
+* An Azure Key Vault. If you don't have one, [create an Azure Key Vault](../key-vault/general/quick-create-portal.md)
+* Another target service instance supported by Service Connector. In this tutorial, you'll use [Azure Blob Storage](../storage/blobs/storage-quickstart-blobs-portal.md)
+* Read and write access to the App Service, Key Vault and the target service.
+
+## Create a Key Vault connection in App Service
+
+To store your connection access keys and secrets in a key vault, start by connecting your App Service to a key vault. An equivalent CLI sketch follows the steps below.
+
+1. Select the **All resources** button on the left of the Azure portal. Type **App Service** in the filter and select the name of the App Service you want to use from the list.
+1. Select **Service Connector** from the left table of contents. Then select **Create**.
+1. Select or enter the following settings.
+
+ | Setting | Suggested value | Description |
+ | | - | -- |
+   | **Service type** | Key Vault | Target service type. If you don't have a Key Vault, you need to [create one](../key-vault/general/quick-create-portal.md). |
+   | **Subscription** | One of your subscriptions. | The subscription in which your target service is deployed. The target service is the service you want to connect to. The default value is the subscription listed for the App Service. |
+   | **Connection name** | Generated unique name | The connection name that identifies the connection between your App Service and target service. |
+   | **Key Vault name** | Your key vault name | The target Key Vault you want to connect to. |
+ | **Client type** | The same app stack on this App Service | Your application stack that works with the target service you selected. The default value comes from the App Service runtime stack. |
+
+1. Select **Next: Authentication** to select the authentication type. Then select **System assigned managed identity** to connect your Key Vault.
+
+1. Select **Next: Network** to select the network configuration. Then select **Enable firewall settings** to update the firewall allowlist in Key Vault so that your App Service can reach the Key Vault.
+
+1. Then select **Next: Review + Create** to review the provided information. Select **Create** to create the service connection. It can take one minute to complete the operation.
+
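If you prefer the Azure CLI over the portal, a connection like the one above can be created with the Service Connector CLI commands. This is a hedged sketch; it assumes the `az webapp connection create keyvault` command and placeholder resource names:

```azurecli
# Sketch: connect an App Service app to a key vault using a
# system-assigned managed identity (all names are placeholders).
az webapp connection create keyvault \
    --resource-group <app-resource-group> \
    --name <app-name> \
    --target-resource-group <vault-resource-group> \
    --vault <key-vault-name> \
    --system-identity \
    --client-type dotnet
```
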
+## Create a Blob Storage connection in App Service and store access keys into Key Vault
+
+Now you can create a service connection to another target service and directly store access keys in a connected key vault when you use a connection string/access key or a service principal for authentication. We'll use Blob Storage as an example below; follow the same process for other target services. An equivalent CLI sketch follows the two portal procedures.
+
+### [Connection string](#tab/connectionstring)
+
+1. Select the **All resources** button on the left of the Azure portal. Type **App Service** in the filter and select the name of the App Service you want to use from the list.
+1. Select **Service Connector** from the left table of contents. Then select **Create**.
+1. Select or enter the following settings.
+
+ | Setting | Suggested value | Description |
+ | | - | -- |
+ | **Service type** | Blob Storage | Target service type. If you don't have a Storage Blob container, you can [create one](../storage/blobs/storage-quickstart-blobs-portal.md) or use another service type. |
+   | **Subscription** | One of your subscriptions | The subscription in which your target service is deployed. The target service is the service you want to connect to. The default value is the subscription listed for the App Service. |
+ | **Connection name** | Generated unique name | The connection name that identifies the connection between your App Service and target service. |
+ | **Storage account** | Your storage account | The target storage account you want to connect to. If you choose a different service type, select the corresponding target service instance. |
+ | **Client type** | The same app stack on this App Service | Your application stack that works with the target service you selected. The default value comes from the App Service runtime stack. |
+
+1. Select **Next: Authentication** to select the authentication type. Then select **Connection string** to use an access key to connect your Blob storage account.
+
+ | Setting | Suggested value | Description |
+ | | - | -- |
+ | **Store Secret to Key Vault** | Check | This option lets Service Connector store the connection string/access key into your Key Vault. |
+ | **Key Vault connection** | One of your Key Vault connections | Select the Key Vault in which you want to store your connection string/access key. |
+
+1. Select **Next: Network** to select the network configuration. Then select **Enable firewall settings** to update the firewall allowlist in Key Vault so that your App Service can reach the Key Vault.
+
+1. Then select **Next: Review + Create** to review the provided information. Then select **Create** to create the service connection. It might take one minute to complete the operation.
+
+### [Service principal](#tab/serviceprincipal)
+
+1. Select the **All resources** button on the left of the Azure portal. Type **App Service** in the filter and select the name of the App Service you want to use from the list.
+1. Select **Service Connector** from the left table of contents. Then select **Create**.
+1. Select or enter the following settings.
+
+ | Setting | Suggested value | Description |
+ | | - | -- |
+ | **Service type** | Blob Storage | Target service type. If you don't have a Storage Blob container, you can [create one](../storage/blobs/storage-quickstart-blobs-portal.md) or use another service type. |
+ | **Subscription** | One of your subscriptions | The subscription in which your target service is deployed. The target service is the service you want to connect to. The default value is the subscription listed for the App Service. |
+ | **Connection name** | Generated unique name | The connection name that identifies the connection between your App Service and target service. |
+ | **Storage account** | Your storage account | The target storage account you want to connect to. If you choose a different service type, select the corresponding target service instance. |
+ | **Client type** | The same app stack for this App Service | Your application stack that works with the target service you selected. The default value comes from the App Service runtime stack. |
+
+1. Select **Next: Authentication** to select the authentication type. Then select **Service Principal** to use a service principal to connect to your Blob Storage account.
+
+ | Setting | Suggested value | Description |
+ | | - | -- |
+   | **Service Principal object ID or name** | Choose the Service Principal you want to use to connect to Blob Storage from the list | The Service Principal in your subscription that is used to connect to the target service. |
+ | **Store Secret to Key Vault** | Check | This option lets Service Connector store the service principal ID and secret into Key Vault. |
+ | **Key Vault connection** | One of your key vault connections | Select the Key Vault in which you want to store your service principal ID and secret. |
+
+1. Select **Next: Network** to select the network configuration. Then select **Enable firewall settings** to update the firewall allowlist in Key Vault so that your App Service can reach the Key Vault.
+
+1. Then select **Next: Review + Create** to review the provided information. Then select **Create** to create the service connection. It might take one minute to complete the operation.
+++
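
The equivalent CLI flow for either tab is sketched below. It assumes the `az webapp connection create storage-blob` command with a `--secret` authentication option and a `--vault-id` parameter for storing the generated secret in the connected key vault; verify the exact flag names in your CLI version before relying on them:

```azurecli
# Sketch: connect the app to Blob Storage with access-key (secret) authentication
# and store the secret in the connected key vault (all IDs are placeholders).
az webapp connection create storage-blob \
    --resource-group <app-resource-group> \
    --name <app-name> \
    --target-resource-group <storage-resource-group> \
    --account <storage-account-name> \
    --secret \
    --vault-id "/subscriptions/<subscription-id>/resourceGroups/<vault-resource-group>/providers/Microsoft.KeyVault/vaults/<key-vault-name>" \
    --client-type dotnet
```
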
+## View your configuration in Key Vault
+
+1. Expand the Blob Storage connection, select **Hidden value. Click to show value**. You can see that the value is a Key Vault reference.
+
+1. Select the **Key Vault** in the Service Type column of your Key Vault connection. You will be redirected to the Key Vault portal page.
+
+1. Select **Secrets** in the Key Vault's left table of contents, and select the blob storage secret name.
+
+ > [!TIP]
+ > Don't have permission to list secrets? Refer to [troubleshooting](/azure/key-vault/general/troubleshooting-access-issues#i-am-not-able-to-list-or-get-secretskeyscertificate-i-am-seeing-something-went-wrong-error).
+
+4. Select a version ID from the Current Version list.
+
+5. Select the **Show Secret Value** button to see the actual connection string of this Blob Storage connection.
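
You can also read the stored value from the command line with the Azure CLI key vault commands; the secret name is whatever Service Connector generated in the previous steps:

```azurecli
# List the secrets Service Connector created, then read one value.
az keyvault secret list --vault-name <key-vault-name> --output table
az keyvault secret show --vault-name <key-vault-name> --name <secret-name> --query value --output tsv
```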
+
+## Clean up resources
+
+When no longer needed, delete the resource group and all related resources created for this tutorial. To do so, select a resource group or the individual resources you created and select **Delete**.
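
For example, to delete a resource group and everything in it from the Azure CLI:

```azurecli
az group delete --name <resource-group-name> --yes --no-wait
```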
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Service Connector internals](./concept-service-connector-internals.md)
service-fabric How To Managed Cluster Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-availability-zones.md
The recommended topology for managed cluster requires the resources outlined bel
>Only 3 Availability Zone deployments are supported. >[!NOTE]
-> It is not possible to do an in-place change of a managed cluster from non-spanning to a spanned cluster.
+> It is not possible to do an in-place change of the virtual machine scale sets in a managed cluster from non-zone-spanning to zone-spanning.
Diagram that shows the Azure Service Fabric Availability Zone architecture ![Azure Service Fabric Availability Zone Architecture][sf-multi-az-arch]
Sample node list depicting FD/UD formats in a virtual machine scale set spanning
![Sample node list depicting FD/UD formats in a virtual machine scale set spanning zones.][sfmc-multi-az-nodes] **Distribution of Service replicas across zones**:
-When a service is deployed on the nodeTypes that are spanning zones, the replicas are placed to ensure they land up in separate zones. This separation is ensured as the fault domainΓÇÖs on the nodes present in each of these nodeTypes are configured with the zone information (i.e FD = fd:/zone1/1 etc.). For example: for five replicas or instances of a service the distribution will be 2-2-1 and runtime will try to ensure equal distribution across AZs.
+When a service is deployed on node types that span zones, the replicas are placed to ensure they land in separate zones. This separation is ensured because the fault domains on the nodes in each of these node types are configured with the zone information (for example, FD = fd:/zone1/1). For example, for five replicas or instances of a service, the distribution will be 2-2-1, and the runtime will try to ensure equal distribution across availability zones.
**User Service Replica Configuration**:
-Stateful user services deployed on the cross availability zone nodeTypes should be configured with this configuration: replica count with target = 9, min = 5. This configuration will help the service to be working even when one zone goes down since 6 replicas will be still up in the other two zones. An application upgrade in such a scenario will also go through.
+Stateful user services deployed on the cross-availability-zone node types should be configured with a target replica count of 9 and a minimum of 5. This configuration keeps the service working even when one zone goes down, because six replicas will still be up in the other two zones. An application upgrade in such a scenario will also go through.
**Zone down scenario**: When a zone goes down, all the nodes in that zone will appear as down. Service replicas on these nodes will also be down. Since there are replicas in the other zones, the service continues to be responsive with primary replicas failing over to the zones which are functioning. The services will appear in warning state as the target replica count is not met and the VM count is still more than the defined min target replica size. As a result, Service Fabric load balancer will bring up replicas in the working zones to match the configured target replica count. At this point, the services will appear healthy. When the zone which was down comes back up, the load balancer will again spread all the service replicas evenly across all the zones.
To enable a zone resilient Azure Service Fabric managed cluster, you must includ
} ```
+## Migrate an existing non-zone resilient cluster to Zone Resilient (Preview)
+Existing Service Fabric managed clusters that aren't spanned across availability zones can now be migrated in place to span availability zones. Supported scenarios include clusters created in regions that have three availability zones, as well as clusters in regions where three availability zones are made available post-deployment.
+
+Requirements:
+* Standard SKU cluster
+* Three [availability zones in the region](../availability-zones/az-overview.md#azure-regions-with-availability-zones).
+
+>[!NOTE]
+>Migration to a zone resilient configuration can cause a brief loss of external connectivity through the load balancer, but won't affect cluster health. This loss occurs when a new public IP needs to be created to make the networking resilient to zone failures. Plan the migration accordingly.
+
+1) Start by determining whether a new IP will be required and which resources need to be migrated to become zone resilient. To get the current availability zone resiliency state for the resources of the managed cluster, use the following API call:
+
+ ```http
+ POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ServiceFabric/managedClusters/{clusterName}/getazresiliencystatus?api-version=2022-02-01-preview
+ ```
+   Or you can use the Az PowerShell module as follows:
+   ```azurepowershell
+ Select-AzSubscription -SubscriptionId {subscriptionId}
+ Invoke-AzResourceAction -ResourceId /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ServiceFabric/managedClusters/{clusterName} -Action getazresiliencystatus -ApiVersion 2022-02-01-preview
+ ```
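
   A CLI-only alternative is `az rest`, which signs the same management-plane call for you (same placeholders as above):

   ```azurecli
   az rest --method post --url "https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ServiceFabric/managedClusters/{clusterName}/getazresiliencystatus?api-version=2022-02-01-preview"
   ```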
+   This should provide a response similar to:
+ ```json
+   {
+     "baseResourceStatus": [
+       {
+         "resourceName": "sfmccluster1",
+         "resourceType": "Microsoft.Storage/storageAccounts",
+         "isZoneResilient": false
+       },
+       {
+         "resourceName": "PublicIP-sfmccluster1",
+         "resourceType": "Microsoft.Network/publicIPAddresses",
+         "isZoneResilient": false
+       },
+       {
+         "resourceName": "primary",
+         "resourceType": "Microsoft.Compute/virtualMachineScaleSets",
+         "isZoneResilient": false
+       }
+     ],
+     "isClusterZoneResilient": false
+   }
+ ```
+
+   If the public IP resource is not zone resilient, migration of the cluster will cause a brief loss of external connectivity. This loss is due to the migration setting up a new public IP and updating the cluster FQDN to the new IP. If the public IP resource is zone resilient, migration will not modify the public IP resource or FQDN, and there will be no external connectivity impact.
+
+2) Initiate migration of the underlying storage account created for the managed cluster from LRS to ZRS using [live migration](../storage/common/redundancy-migration.md#request-a-live-migration-to-zrs-gzrs-or-ra-gzrs). The resource group of the storage account that needs to be migrated is of the form "SFC_ClusterId" (for example, SFC_9240df2f-71ab-4733-a641-53a8464d992d) under the same subscription as the managed cluster resource.
+
+3) Add a new primary node type that spans availability zones.
+
+   This step will trigger the resource provider to migrate the primary node type and the public IP, along with a cluster FQDN DNS update if needed, to become zone resilient. Use the API above to understand the implications of this step.
+
+* Use apiVersion 2022-02-01-preview or higher.
+* Add a new primary node type to the cluster with the zones parameter set to ["1", "2", "3"], as shown below:
+```json
+{
+ "apiVersion": "2022-02-01-preview",
+ "type": "Microsoft.ServiceFabric/managedclusters/nodetypes",
+ "name": "[concat(parameters('clusterName'), '/', parameters('nodeTypeName'))]",
+ "location": "[resourcegroup().location]",
+ "dependsOn": [
+ "[concat('Microsoft.ServiceFabric/managedclusters/', parameters('clusterName'))]"
+ ],
+ "properties": {
+ ...
+ "isPrimary": true,
+ "zones": ["1", "2", "3"]
+ ...
+ }
+}
+```
+
+4) Add a secondary node type that spans availability zones.
+   This step adds a secondary node type that spans availability zones, similar to the primary node type. Once it's created, migrate existing services from the old node types to the new ones by [using placement properties](./service-fabric-cluster-resource-manager-cluster-description.md).
+
+* Use apiVersion 2022-02-01-preview or higher.
+* Add a new secondary node type to the cluster with the zones parameter set to ["1", "2", "3"], as shown below:
+
+ ```json
+ {
+ "apiVersion": "2022-02-01-preview",
+ "type": "Microsoft.ServiceFabric/managedclusters/nodetypes",
+ "name": "[concat(parameters('clusterName'), '/', parameters('nodeTypeName'))]",
+ "location": "[resourcegroup().location]",
+ "dependsOn": [
+ "[concat('Microsoft.ServiceFabric/managedclusters/', parameters('clusterName'))]"
+ ],
+ "properties": {
+ ...
+ "isPrimary": false,
+ "zones": ["1", "2", "3"]
+ ...
+ }
+ }
+ ```
+
+5) Start removing the older node types that don't span availability zones from the cluster.
+
+   Once none of your services remain on the node types that don't span zones, remove the old node types. Start by [removing the old node types from the cluster](./how-to-managed-cluster-modify-node-type.md) using the portal or a cmdlet. As a last step, remove any old node types from your template.
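
   A hedged sketch of removing a node type from the command line, assuming the `az sf managed-node-type delete` command and placeholder names:

   ```azurecli
   # Remove an old, non-zone-spanning node type once no services remain on it.
   az sf managed-node-type delete --resource-group {resourceGroupName} --cluster-name {clusterName} --name <old-node-type-name>
   ```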
+
+6) Mark the cluster resilient to zone failures
+
+   This step helps future deployments, because it ensures that all future node type deployments span availability zones and that the cluster remains tolerant to availability zone failures. Set `zonalResiliency: true` in the cluster ARM template and run a deployment to mark the cluster as zone resilient and ensure all new node type deployments span availability zones.
+
+ ```json
+ {
+ "apiVersion": "2022-02-01-preview",
+ "type": "Microsoft.ServiceFabric/managedclusters",
+ "zonalResiliency": "true"
+ }
+ ```
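
   The deployment itself is a standard ARM template deployment; for example, with the Azure CLI (the template file name is a placeholder):

   ```azurecli
   az deployment group create --resource-group {resourceGroupName} --template-file cluster-zone-resilient.json
   ```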
+   Once complete, you can also see the updated status in the portal under **Overview** > **Properties**, similar to `Zonal resiliency True`.
+
+7) Validate that all the resources are zone resilient.
+
+   To validate the availability zone resiliency state for the resources of the managed cluster, use the following API call:
+
+ ```http
+ POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ServiceFabric/managedClusters/{clusterName}/getazresiliencystatus?api-version=2022-02-01-preview
+ ```
+   This should provide a response similar to:
+ ```json
+   {
+     "baseResourceStatus": [
+       {
+         "resourceName": "sfmccluster1",
+         "resourceType": "Microsoft.Storage/storageAccounts",
+         "isZoneResilient": true
+       },
+       {
+         "resourceName": "PublicIP-sfmccluster1",
+         "resourceType": "Microsoft.Network/publicIPAddresses",
+         "isZoneResilient": true
+       },
+       {
+         "resourceName": "primary",
+         "resourceType": "Microsoft.Compute/virtualMachineScaleSets",
+         "isZoneResilient": true
+       }
+     ],
+     "isClusterZoneResilient": true
+   }
+ ```
+   If you run into any problems, reach out to support for assistance.
[sf-architecture]: ./media/service-fabric-cross-availability-zones/sf-cross-az-topology.png
service-fabric Service Fabric Backuprestoreservice Quickstart Azurecluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-backuprestoreservice-quickstart-azurecluster.md
Title: Periodic backup and restore in Azure Service Fabric
description: Use Service Fabric's periodic backup and restore feature for enabling periodic data backup of your application data. Previously updated : 5/24/2019 Last updated : 5/20/2022 # Periodic backup and restore in an Azure Service Fabric cluster > [!div class="op_single_selector"]
Invoke-WebRequest -Uri $url -Method Post -Body $body -ContentType 'application/j
![Create Backup Policy][6]
-2. Fill out the information. For Azure clusters, AzureBlobStore should be selected.
+2. Fill out the information. For details on how to specify a frequency-based interval, see the [TimeGrain property](/dotnet/api/microsoft.azure.management.monitor.models.metricavailability.timegrain?view=azure-dotnet&preserve-view=true). For Azure clusters, AzureBlobStore should be selected.
![Create Backup Policy Azure Blob Storage][7]
Invoke-WebRequest -Uri $url -Method Post -Body $body -ContentType 'application/j
#### Using Service Fabric Explorer Make sure the [advanced mode](service-fabric-visualizing-your-cluster.md#backup-and-restore) for Service Fabric Explorer is enabled
-1. Select an application and go to action. Click Enable/Update Application Backup.
+1. Click the gear icon at the top right of the Service Fabric Explorer window.
+2. Check the box for "Advanced mode" and refresh the Service Fabric Explorer page.
+3. Select an application and go to action. Click Enable/Update Application Backup.
+ ![Enable Application Backup][3]
service-fabric Service Fabric Cluster Creation Setup Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-creation-setup-aad.md
Title: Set up Azure Active Directory for client authentication description: Learn how to set up Azure Active Directory (Azure AD) to authenticate clients for Service Fabric clusters. Previously updated : 6/28/2019 Last updated : 5/18/2022
In this article, the term "application" will be used to refer to [Azure Active D
A Service Fabric cluster offers several entry points to its management functionality, including the web-based [Service Fabric Explorer][service-fabric-visualizing-your-cluster] and [Visual Studio][service-fabric-manage-application-in-visual-studio]. As a result, you will create two Azure AD applications to control access to the cluster: one web application and one native application. After the applications are created, you will assign users to read-only and admin roles.
+> [!NOTE]
+> At this time, Service Fabric doesn't support Azure AD authentication for storage.
+ > [!NOTE] > It is a [known issue](https://github.com/microsoft/service-fabric/issues/399) that applications and nodes on Linux AAD-enabled clusters cannot be viewed in Azure Portal.
service-fabric Service Fabric Cluster Fabric Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-fabric-settings.md
The following is a list of Fabric settings that you can customize, organized by
| **Parameter** | **Allowed Values** | **Upgrade Policy** | **Guidance or Short Description** | | | | | | |DeployedState |wstring, default is L"Disabled" |Static |2-stage removal of CSS. |
+|EnableSecretMonitoring|bool, default is FALSE |Static |Must be enabled to use Managed KeyVaultReferences. Default may become true in the future. For more information, see [KeyVaultReference support for Azure-deployed Service Fabric Applications](https://docs.microsoft.com/azure/service-fabric/service-fabric-keyvault-references)|
+|SecretMonitoringInterval|TimeSpan, default is Common::TimeSpan::FromMinutes(15) |Static |The rate at which Service Fabric will poll Key Vault for changes when using Managed KeyVaultReferences. This rate is a best effort, and changes in Key Vault may be reflected in the cluster earlier or later than the interval. For more information, see [KeyVaultReference support for Azure-deployed Service Fabric Applications](https://docs.microsoft.com/azure/service-fabric/service-fabric-keyvault-references) |
+ |UpdateEncryptionCertificateTimeout |TimeSpan, default is Common::TimeSpan::MaxValue |Static |Specify timespan in seconds. The default has changed to TimeSpan::MaxValue; but overrides are still respected. May be deprecated in the future. |
+## CentralSecretService/Replication
+
+| **Parameter** | **Allowed Values** | **Upgrade Policy** | **Guidance or Short Description** |
+| | | | |
+|ReplicationBatchSendInterval|TimeSpan, default is Common::TimeSpan::FromSeconds(15)|Static|Specify timespan in seconds. Determines the amount of time that the replicator waits after receiving an operation before force sending a batch.|
+|ReplicationBatchSize|uint, default is 1|Static|Specifies the number of operations to be sent between primary and secondary replicas. If zero the primary sends one record per operation to the secondary. Otherwise the primary replica aggregates log records until the config value is reached. This will reduce network traffic.|
+ ## ClusterManager | **Parameter** | **Allowed Values** | **Upgrade Policy** | **Guidance or Short Description** |
The following is a list of Fabric settings that you can customize, organized by
|UpgradeStatusPollInterval |Time in seconds, default is 60 |Dynamic|The frequency of polling for application upgrade status. This value determines the rate of update for any GetApplicationUpgradeProgress call | |CompleteClientRequest | Bool, default is false |Dynamic| Complete client request when accepted by CM. |
+## ClusterManager/Replication
+
+| **Parameter** | **Allowed Values** | **Upgrade Policy** | **Guidance or Short Description** |
+| | | | |
+|ReplicationBatchSendInterval|TimeSpan, default is Common::TimeSpan::FromSeconds(15)|Static|Specify timespan in seconds. Determines the amount of time that the replicator waits after receiving an operation before force sending a batch.|
+|ReplicationBatchSize|uint, default is 1|Static|Specifies the number of operations to be sent between primary and secondary replicas. If zero the primary sends one record per operation to the secondary. Otherwise the primary replica aggregates log records until the config value is reached. This will reduce network traffic.|
+ ## Common | **Parameter** | **Allowed Values** | **Upgrade Policy** | **Guidance or Short Description** |
The following is a list of Fabric settings that you can customize, organized by
|IsEnabled|bool, default is FALSE|Static|Enables/Disables DnsService. DnsService is disabled by default and this config needs to be set to enable it. | |PartitionPrefix|string, default is "--"|Static|Controls the partition prefix string value in DNS queries for partitioned services. The value : <ul><li>Should be RFC-compliant as it will be part of a DNS query.</li><li>Should not contain a dot, '.', as dot interferes with DNS suffix behavior.</li><li>Should not be longer than 5 characters.</li><li>Cannot be an empty string.</li><li>If the PartitionPrefix setting is overridden, then PartitionSuffix must be overridden, and vice-versa.</li></ul>For more information, see [Service Fabric DNS Service.](service-fabric-dnsservice.md).| |PartitionSuffix|string, default is ""|Static|Controls the partition suffix string value in DNS queries for partitioned services.The value : <ul><li>Should be RFC-compliant as it will be part of a DNS query.</li><li>Should not contain a dot, '.', as dot interferes with DNS suffix behavior.</li><li>Should not be longer than 5 characters.</li><li>If the PartitionPrefix setting is overridden, then PartitionSuffix must be overridden, and vice-versa.</li></ul>For more information, see [Service Fabric DNS Service.](service-fabric-dnsservice.md). |
+|RecursiveQueryParallelMaxAttempts|Int, default is 0|Static|The number of times parallel queries will be attempted. Parallel queries are executed after the max attempts for serial queries have been exhausted.|
+|RecursiveQueryParallelTimeout|TimeSpan, default is Common::TimeSpan::FromSeconds(5)|Static|The timeout value in seconds for each attempted parallel query.|
+|RecursiveQuerySerialMaxAttempts|Int, default is 2|Static|The number of serial queries that will be attempted, at most. If this number is higher than the amount of forwarding DNS servers, querying will stop once all the servers have been attempted exactly once.|
+|RecursiveQuerySerialTimeout|TimeSpan, default is Common::TimeSpan::FromSeconds(5)|Static|The timeout value in seconds for each attempted serial query.|
|TransientErrorMaxRetryCount|Int, default is 3|Static|Controls the number of times SF DNS will retry when a transient error occurs while calling SF APIs (e.g. when retrieving names and endpoints).| |TransientErrorRetryIntervalInMillis|Int, default is 0|Static|Sets the delay in milliseconds between retries for when SF DNS calls SF APIs.|
The following is a list of Fabric settings that you can customize, organized by
|UserRoleClientX509FindValueSecondary |string, default is "" |Dynamic|Search filter value used to locate certificate for default user role FabricClient. | |UserRoleClientX509StoreName |string, default is "My" |Dynamic|Name of the X.509 certificate store that contains certificate for default user role FabricClient. |
+## Failover/Replication
+
+| **Parameter** | **Allowed Values** | **Upgrade Policy** | **Guidance or Short Description** |
+| | | | |
+|ReplicationBatchSendInterval|TimeSpan, default is Common::TimeSpan::FromSeconds(15)|Static|Specify timespan in seconds. Determines the amount of time that the replicator waits after receiving an operation before force sending a batch.|
+|ReplicationBatchSize|uint, default is 1|Static|Specifies the number of operations to be sent between primary and secondary replicas. If zero the primary sends one record per operation to the secondary. Otherwise the primary replica aggregates log records until the config value is reached. This will reduce network traffic.|
+ ## FailoverManager | **Parameter** | **Allowed Values** | **Upgrade Policy** | **Guidance or Short Description** |
The following is a list of Fabric settings that you can customize, organized by
|ExpectedNodeDeactivationDuration|TimeSpan, default is Common::TimeSpan::FromSeconds(60.0 \* 30)|Dynamic|Specify timespan in seconds. This is the expected duration for a node to complete deactivation in. | |ExpectedNodeFabricUpgradeDuration|TimeSpan, default is Common::TimeSpan::FromSeconds(60.0 \* 30)|Dynamic|Specify timespan in seconds. This is the expected duration for a node to be upgraded during Windows Fabric upgrade. | |ExpectedReplicaUpgradeDuration|TimeSpan, default is Common::TimeSpan::FromSeconds(60.0 \* 30)|Dynamic|Specify timespan in seconds. This is the expected duration for all the replicas to be upgraded on a node during application upgrade. |
+|IgnoreReplicaRestartWaitDurationWhenBelowMinReplicaSetSize|bool, default is FALSE|Dynamic|If IgnoreReplicaRestartWaitDurationWhenBelowMinReplicaSetSize is set to:<br>- false : Windows Fabric will wait for fixed time specified in ReplicaRestartWaitDuration for a replica to come back up.<br>- true : Windows Fabric will wait for fixed time specified in ReplicaRestartWaitDuration for a replica to come back up if partition is above or at Min Replica Set Size. If partition is below Min Replica Set Size new replica will be created right away.|
|IsSingletonReplicaMoveAllowedDuringUpgrade|bool, default is TRUE|Dynamic|If set to true; replicas with a target replica set size of 1 will be permitted to move during upgrade. |
+|MaxInstanceCloseDelayDurationInSeconds|uint, default is 1800|Dynamic|Maximum value of InstanceCloseDelay that can be configured to be used for FabricUpgrade/ApplicationUpgrade/NodeDeactivations |
|MinReplicaSetSize|int, default is 3|Not Allowed|This is the minimum replica set size for the FM. If the number of active FM replicas drops below this value; the FM will reject changes to the cluster until at least the min number of replicas is recovered | |PlacementConstraints|string, default is ""|Not Allowed|Any placement constraints for the failover manager replicas | |PlacementTimeLimit|TimeSpan, default is Common::TimeSpan::FromSeconds(600)|Dynamic|Specify timespan in seconds. The time limit for reaching target replica count; after which a warning health report will be initiated |
The following is a list of Fabric settings that you can customize, organized by
|SecondaryFileCopyRetryDelayMilliseconds|uint, default is 500|Dynamic|The file copy retry delay (in milliseconds).| |UseChunkContentInTransportMessage|bool, default is TRUE|Dynamic|The flag for using the new version of the upload protocol introduced in v6.4. This protocol version uses service fabric transport to upload files to image store which provides better performance than SMB protocol used in previous versions. |
+## FileStoreService/Replication
+
+| **Parameter** | **Allowed Values** | **Upgrade Policy** | **Guidance or Short Description** |
+| | | | |
+|ReplicationBatchSendInterval|TimeSpan, default is Common::TimeSpan::FromSeconds(15)|Static|Specify timespan in seconds. Determines the amount of time that the replicator waits after receiving an operation before force sending a batch.|
+|ReplicationBatchSize|uint, default is 1|Static|Specifies the number of operations to be sent between primary and secondary replicas. If zero the primary sends one record per operation to the secondary. Otherwise the primary replica aggregates log records until the config value is reached. This will reduce network traffic.|
+ ## HealthManager | **Parameter** | **Allowed Values** | **Upgrade Policy** | **Guidance or Short Description** |
The following is a list of Fabric settings that you can customize, organized by
| | | | | |PropertyGroup|KeyDoubleValueMap, default is None|Dynamic|Determines the part of the load that sticks with replica when swapped It takes value between 0 (load doesn't stick with replica) and 1 (load sticks with replica - default) |
+## Naming/Replication
+
+| **Parameter** | **Allowed Values** | **Upgrade Policy** | **Guidance or Short Description** |
+| | | | |
+|ReplicationBatchSendInterval|TimeSpan, default is Common::TimeSpan::FromSeconds(15)|Static|Specify timespan in seconds. Determines the amount of time that the replicator waits after receiving an operation before force sending a batch.|
+|ReplicationBatchSize|uint, default is 1|Static|Specifies the number of operations to be sent between primary and secondary replicas. If zero the primary sends one record per operation to the secondary. Otherwise the primary replica aggregates log records until the config value is reached. This will reduce network traffic.|
+ ## NamingService | **Parameter** | **Allowed Values** | **Upgrade Policy** | **Guidance or Short Description** |
The following is a list of Fabric settings that you can customize, organized by
|ServiceApiHealthDuration | Time in seconds, default is 30 minutes |Dynamic| Specify timespan in seconds. ServiceApiHealthDuration defines how long do we wait for a service API to run before we report it unhealthy. | |ServiceReconfigurationApiHealthDuration | Time in seconds, default is 30 |Dynamic| Specify timespan in seconds. ServiceReconfigurationApiHealthDuration defines how long do we wait for a service API to run before we report unhealthy. This applies to API calls that impact availability.|
+## RepairManager/Replication
+| **Parameter** | **Allowed Values** | **Upgrade Policy**| **Guidance or Short Description** |
+| | | | |
+|ReplicationBatchSendInterval|TimeSpan, default is Common::TimeSpan::FromSeconds(15)|Static|Specify timespan in seconds. Determines the amount of time that the replicator waits after receiving an operation before force sending a batch.|
+|ReplicationBatchSize|uint, default is 1|Static|Specifies the number of operations to be sent between primary and secondary replicas. If zero the primary sends one record per operation to the secondary. Otherwise the primary replica aggregates log records until the config value is reached. This will reduce network traffic.|
+ ## Replication
+<i> **Warning Note**: Changing Replication/TransactionalReplicator settings at the cluster level changes settings for all stateful services, including system services. This is generally not recommended. See [Configure Azure Service Fabric Reliable Services - Azure Service Fabric | Microsoft Docs](https://docs.microsoft.com/azure/service-fabric/service-fabric-reliable-services-configuration) to configure services at the app level.</i>
++ | **Parameter** | **Allowed Values** | **Upgrade Policy**| **Guidance or Short Description** | | | | | | |BatchAcknowledgementInterval|TimeSpan, default is Common::TimeSpan::FromMilliseconds(15)|Static|Specify timespan in seconds. Determines the amount of time that the replicator waits after receiving an operation before sending back an acknowledgement. Other operations received during this time period will have their acknowledgements sent back in a single message-> reducing network traffic but potentially reducing the throughput of the replicator.|
The following is a list of Fabric settings that you can customize, organized by
|QueueHealthMonitoringInterval|TimeSpan, default is Common::TimeSpan::FromSeconds(30)|Static|Specify timespan in seconds. This value determines the time period used by the Replicator to monitor any warning/error health events in the replication operation queues. A value of '0' disables health monitoring | |QueueHealthWarningAtUsagePercent|uint, default is 80|Static|This value determines the replication queue usage(in percentage) after which we report warning about high queue usage. We do so after a grace interval of QueueHealthMonitoringInterval. If the queue usage falls below this percentage in the grace interval| |ReplicatorAddress|string, default is "localhost:0"|Static|The endpoint in form of a string -'IP:Port' which is used by the Windows Fabric Replicator to establish connections with other replicas in order to send/receive operations.|
+|ReplicationBatchSendInterval|TimeSpan, default is Common::TimeSpan::FromSeconds(15)|Static|Specify timespan in seconds. Determines the amount of time that the replicator waits after receiving an operation before force sending a batch.|
+|ReplicationBatchSize|uint, default is 1|Static|Specifies the number of operations to be sent between primary and secondary replicas. If zero the primary sends one record per operation to the secondary. Otherwise the primary replica aggregates log records until the config value is reached. This will reduce network traffic.|
|ReplicatorListenAddress|string, default is "localhost:0"|Static|The endpoint in form of a string -'IP:Port' which is used by the Windows Fabric Replicator to receive operations from other replicas.| |ReplicatorPublishAddress|string, default is "localhost:0"|Static|The endpoint in form of a string -'IP:Port' which is used by the Windows Fabric Replicator to send operations to other replicas.| |RetryInterval|TimeSpan, default is Common::TimeSpan::FromSeconds(5)|Static|Specify timespan in seconds. When an operation is lost or rejected this timer determines how often the replicator will retry sending the operation.|
The following is a list of Fabric settings that you can customize, organized by
|Level |Int, default is 4 | Dynamic |Trace etw level can take values 1, 2, 3, 4. To be supported you must keep the trace level at 4 | ## TransactionalReplicator
+<i> **Warning Note**: Changing Replication/TransactionalReplicator settings at the cluster level changes settings for all stateful services, including system services. This is generally not recommended. See [Configure Azure Service Fabric Reliable Services - Azure Service Fabric | Microsoft Docs](https://docs.microsoft.com/azure/service-fabric/service-fabric-reliable-services-configuration) to configure services at the app level.</i>
| **Parameter** | **Allowed Values** | **Upgrade Policy** | **Guidance or Short Description** | | | | | |
The following is a list of Fabric settings that you can customize, organized by
|MaxSecondaryReplicationQueueMemorySize |Uint, default is 0 | Static |This is the maximum value of the secondary replication queue in bytes. | |MaxSecondaryReplicationQueueSize |Uint, default is 16384 | Static |This is the maximum number of operations that could exist in the secondary replication queue. Note that it must be a power of 2. | |ReplicatorAddress |string, default is "localhost:0" | Static | The endpoint in form of a string -'IP:Port' which is used by the Windows Fabric Replicator to establish connections with other replicas in order to send/receive operations. |
+|ReplicationBatchSendInterval|TimeSpan, default is Common::TimeSpan::FromMilliseconds(15) | Static | Specify timespan in seconds. Determines the amount of time that the replicator waits after receiving an operation before force sending a batch.|
|ShouldAbortCopyForTruncation |bool, default is FALSE | Static | Allow pending log truncation to go through during copy. With this enabled the copy stage of builds can be cancelled if the log is full and they are block truncation. | ## Transport
service-fabric Service Fabric Cluster Standalone Deployment Preparation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-standalone-deployment-preparation.md
Title: Standalone Cluster Deployment Preparation description: Documentation related to preparing the environment and creating the cluster configuration, to be considered prior to deploying a cluster intended for handling a production workload. Previously updated : 9/11/2018 Last updated : 5/19/2022 # Plan and prepare your Service Fabric Standalone cluster deployment
Here are recommended specs for machines in a Service Fabric cluster:
* The [RemoteRegistry service](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc754820(v=ws.11)) should be running on all the machines * **Service Fabric installation drive must be NTFS File System** * **Windows services *Performance Logs & Alerts* and *Windows Event Log* must [be enabled](/previous-versions/windows/it-pro/windows-server-2008-r2-and-2008/cc755249(v=ws.11))**.
+* **Remote User Account Control must be disabled**
+ > [!IMPORTANT] > The cluster administrator deploying and configuring the cluster must have [administrator privileges](https://social.technet.microsoft.com/wiki/contents/articles/13436.windows-server-2012-how-to-add-an-account-to-a-local-administrator-group.aspx) on each of the machines. You cannot install Service Fabric on a domain controller.
service-fabric Service Fabric Concept Resource Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-concept-resource-model.md
Title: Azure Service Fabric application resource model description: This article provides an overview of managing an Azure Service Fabric application by using Azure Resource Manager. Previously updated : 10/21/2019 Last updated : 5/18/2022
After the storage account is created, you create a blob container where the appl
Resources in your cluster can be secured by setting the public access level to **private**. You can grant access in multiple ways:
-* Authorize access to blobs and queues by using [Azure Active Directory](../storage/common/storage-auth-aad-app.md).
* Grant access to Azure blob and queue data by using [Azure RBAC in the Azure portal](../storage/blobs/assign-azure-role-data-access.md). * Delegate access by using a [shared access signature](/rest/api/storageservices/delegate-access-with-shared-access-signature).
service-fabric Service Fabric Managed Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-managed-disk.md
-# Deploy an Azure Service Fabric cluster node type with managed data disks (preview)
-
->[!NOTE]
-> Support for managed data disks is only in preview right now and should not be used with production workloads.
-
+# Deploy an Azure Service Fabric cluster node type with managed data disks
Azure Service Fabric node types, by default, use the temporary disk on each virtual machine (VM) in the underlying virtual machine scale set for data storage. However, because the temporary disk is not persistent, and the size of the temporary disk is bound to a given VM SKU, this can be too restrictive for some scenarios.
This article provides the steps for how to use native support from Service Fabri
## Prerequisites * The required minimum disk size for the managed data disk is 50 GB.
-* In scenarios where more than one managed data disk is attached, the customer needs to manage the data disks themselves.
+* The data disk drive letter should be set to a character lexicographically greater than all drive letters present in the virtual machine scale set SKU.
+* Only one managed data disk per VM is supported. For scenarios involving more than one data disk, you need to manage the data disks yourself.
## Configure the virtual machine scale set to use managed data disks in Service Fabric To use managed data disks on a node type, configure the underlying virtual machine scale set resource with the following: * Add a managed disk in data disks section of the template for the virtual machine scale set.
-* Update the Service Fabric extension with following settings:
+* Update the Service Fabric extension for the virtual machine scale set with the following settings:
* For Windows: **useManagedDataDisk: true** and **dataPath: 'K:\\\\SvcFab'**. Note that drive K is just a representation. You can use any drive letter lexicographically greater than all the drive letters present in the virtual machine scale set SKU. * For Linux: **useManagedDataDisk:true** and **dataPath: '\mnt\sfdataroot'**.
->[!NOTE]
-> Support for managed data disks for Linux Service Fabric clusters is currently not available.
- Here's an Azure Resource Manager template for a Service Fabric extension: ```json
Here's an Azure Resource Manager template for a Service Fabric extension:
``` ## Migrate to using managed data disks for Service Fabric node types-
-For all migration scenarios:
+For all migration scenarios, new node types with managed data disks need to be added. Existing node types cannot be converted to use managed data disks.
1. Add a new node type that's configured to use managed data disks as specified earlier.-
-1. Migrate any required workloads to the new node type.
-
-1. Disable and remove the old node type from the cluster.
---
+2. Migrate any required workloads to the new node type.
+3. Disable and remove the old node type from the cluster.
## Next steps
service-fabric Service Fabric Reliable Services Communication Remoting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-services-communication-remoting.md
Title: Service remoting by using C# in Service Fabric description: Service Fabric remoting allows clients and services to communicate with C# services by using a remote procedure call. Previously updated : 09/20/2017 Last updated : 05/17/2022 # Service remoting in C# with Reliable Services
This step makes sure that the service is listening only on the V2 listener.
## Use the remoting V2 (interface compatible) stack
- The remoting V2 (interface compatible, known as V2_1) stack has all the features of the V2 remoting stack. Its interface stack is compatible with the remoting V1 stack, but it is not backward compatible with V2 and V1. To upgrade from V1 to V2_1 without affecting service availability, follow the steps in the article Upgrade from V1 to V2 (interface compatible).
+ The remoting V2 (interface compatible) stack is known as V2_1 and is the most up-to-date version. It has all the features of the V2 remoting stack. Its interface stack is compatible with the remoting V1 stack, but it is not backward compatible with V2 and V1. To upgrade from V1 to V2_1 without affecting service availability, follow the steps in the article Upgrade from V1 to V2 (interface compatible).
### Use an assembly attribute to use the remoting V2 (interface compatible) stack
service-fabric Service Fabric Run Script At Service Startup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-run-script-at-service-startup.md
description: Learn how to configure a policy for a Service Fabric service setup
Previously updated : 03/21/2018 Last updated : 05/19/2022 # Run a service startup script as a local user or system account
In the PowerShell file, add the following to set a system environment variable:
``` ## Debug a startup script locally using console redirection
-Occasionally, it's useful for debugging purposes to see the console output from running a setup script. You can set a console redirection policy on the setup entry point in the service manifest, which writes the output to a file. The file output is written to the application folder called **log** on the cluster node where the application is deployed and run.
+Occasionally, it's useful for debugging purposes to see the console output from running a setup script. You can set a console redirection policy on the setup entry point in the service manifest, which writes the output to a file. The file output is written to the application folder called **log** on the cluster node where the application is deployed and run, found in `C:\SfDeployCluster\_App\{application-name}\log`. You may see a number after your application's name in the path; this number increments on each deployment. The files written to the log folder include Code_{service-name}Pkg_S_0.err, which is the standard error output, and Code_{service-name}Pkg_S_0.out, which is the standard output. You may see more than one set of files depending on service activation attempts.
> [!WARNING] > Never use the console redirection policy in an application that is deployed in production because this can affect the application failover. *Only* use this for local development and debugging purposes.
service-health Alerts Activity Log Service Notifications Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/alerts-activity-log-service-notifications-arm.md
Title: Receive activity log alerts on Azure service notifications using Resource Manager template description: Get notified via SMS, email, or webhook when Azure service occurs. Previously updated : 06/29/2020 Last updated : 05/13/2022
The following template creates an action group with an email target and enables
"contentVersion": "1.0.0.0", "parameters": { "actionGroups_name": {
- "type": "String",
+ "type": "string",
"defaultValue": "SubHealth" }, "activityLogAlerts_name": {
- "type": "String",
+ "type": "string",
"defaultValue": "ServiceHealthActivityLogAlert" }, "emailAddress": {
The following template creates an action group with an email target and enables
} }, "variables": {
- "alertScope": "[concat('/','subscriptions','/',subscription().subscriptionId)]"
+ "alertScope": "[format('/subscriptions/{0}', subscription().subscriptionId)]"
}, "resources": [ {
- "comments": "Action Group",
"type": "microsoft.insights/actionGroups", "apiVersion": "2019-06-01", "name": "[parameters('actionGroups_name')]", "location": "Global",
- "scale": null,
- "dependsOn": [],
- "tags": {},
"properties": { "groupShortName": "[parameters('actionGroups_name')]", "enabled": true,
The following template creates an action group with an email target and enables
} }, {
- "comments": "Service Health Activity Log Alert",
"type": "microsoft.insights/activityLogAlerts", "apiVersion": "2017-04-01", "name": "[parameters('activityLogAlerts_name')]", "location": "Global",
- "scale": null,
- "dependsOn": [
- "[resourceId('microsoft.insights/actionGroups', parameters('actionGroups_name'))]"
- ],
- "tags": {},
"properties": { "scopes": [ "[variables('alertScope')]"
The following template creates an action group with an email target and enables
} ] },
- "enabled": true,
- "description": ""
- }
+ "enabled": true
+ },
+ "dependsOn": [
+ "[resourceId('microsoft.insights/actionGroups', parameters('actionGroups_name'))]"
+ ]
} ] }
service-health Alerts Activity Log Service Notifications Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/alerts-activity-log-service-notifications-bicep.md
+
+ Title: Receive activity log alerts on Azure service notifications using Bicep
+description: Get notified via SMS, email, or webhook when an Azure service health event occurs.
Last updated : 05/13/2022++++
+# Quickstart: Create activity log alerts on service notifications using a Bicep file
+
+This article shows you how to set up activity log alerts for service health notifications by using a Bicep file.
++
+Service health notifications are stored in the [Azure activity log](../azure-monitor/essentials/platform-logs-overview.md). Given the possibly large volume of information stored in the activity log, there is a separate user interface to make it easier to view and set up alerts on service health notifications.
+
+You can receive an alert when Azure sends service health notifications to your Azure subscription. You can configure the alert based on:
+
+- The class of service health notification (Service issues, Planned maintenance, Health advisories).
+- The subscription affected.
+- The service(s) affected.
+- The region(s) affected.
+
+> [!NOTE]
+> Service health notifications don't send alerts about resource health events.
+
+You can also configure who the alert should be sent to:
+
+- Select an existing action group.
+- Create a new action group (that can be used for future alerts).
+
+To learn more about action groups, see [Create and manage action groups](../azure-monitor/alerts/action-groups.md).
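
If you prefer to manage the action group outside of the Bicep file, it can also be created with the Azure CLI. The following is a hedged sketch; the group name, resource group, and email receiver are illustrative placeholders, not values required by this quickstart.

```azurecli
# Create an action group with a single email receiver (illustrative values).
az monitor action-group create \
  --resource-group my-resource-group \
  --name SubHealth \
  --short-name SubHealth \
  --action email admin user@contoso.com
```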
+
+## Prerequisites
+
+- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- To run the commands from your local computer, install Azure CLI or the Azure PowerShell modules. For more information, see [Install the Azure CLI](/cli/azure/install-azure-cli) and [Install Azure PowerShell](/powershell/azure/install-az-ps).
+
+## Review the Bicep file
+
+The following Bicep file creates an action group with an email target and enables all service health notifications for the target subscription. Save this Bicep file as *CreateServiceHealthAlert.bicep*.
+
+```bicep
+param actionGroups_name string = 'SubHealth'
+param activityLogAlerts_name string = 'ServiceHealthActivityLogAlert'
+param emailAddress string
+
+var alertScope = '/subscriptions/${subscription().subscriptionId}'
+
+resource actionGroups_name_resource 'microsoft.insights/actionGroups@2019-06-01' = {
+ name: actionGroups_name
+ location: 'Global'
+ properties: {
+ groupShortName: actionGroups_name
+ enabled: true
+ emailReceivers: [
+ {
+ name: actionGroups_name
+ emailAddress: emailAddress
+ }
+ ]
+ smsReceivers: []
+ webhookReceivers: []
+ }
+}
+
+resource activityLogAlerts_name_resource 'microsoft.insights/activityLogAlerts@2017-04-01' = {
+ name: activityLogAlerts_name
+ location: 'Global'
+ properties: {
+ scopes: [
+ alertScope
+ ]
+ condition: {
+ allOf: [
+ {
+ field: 'category'
+ equals: 'ServiceHealth'
+ }
+ {
+ field: 'properties.incidentType'
+ equals: 'Incident'
+ }
+ ]
+ }
+ actions: {
+ actionGroups: [
+ {
+ actionGroupId: actionGroups_name_resource.id
+ webhookProperties: {}
+ }
+ ]
+ }
+ enabled: true
+ }
+}
+
+```
+
+The Bicep file defines two resources:
+
+- [Microsoft.Insights/actionGroups](/azure/templates/microsoft.insights/actiongroups)
+- [Microsoft.Insights/activityLogAlerts](/azure/templates/microsoft.insights/activityLogAlerts)
+
+## Deploy the Bicep file
+
+Deploy the Bicep file using either Azure CLI or Azure PowerShell. Replace the sample values for **Resource Group** and **emailAddress** with appropriate values for your environment.
+
+# [CLI](#tab/CLI)
+
+```azurecli
+az login
+az deployment group create --name CreateServiceHealthAlert --resource-group my-resource-group --template-file CreateServiceHealthAlert.bicep --parameters emailAddress='user@contoso.com'
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```powershell
+Connect-AzAccount
+Select-AzSubscription -SubscriptionName my-subscription
+New-AzResourceGroupDeployment -Name CreateServiceHealthAlert -ResourceGroupName my-resource-group -TemplateFile CreateServiceHealthAlert.bicep -emailAddress user@contoso.com
+```
+++
+## Validate the deployment
+
+Verify that the alert rule has been created by using one of the following commands. Replace the sample value for **Resource Group** with the value you used above.
+
+# [CLI](#tab/CLI)
+
+```azurecli
+az monitor activity-log alert show --resource-group my-resource-group --name ServiceHealthActivityLogAlert
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```powershell
+Get-AzActivityLogAlert -ResourceGroupName my-resource-group -Name ServiceHealthActivityLogAlert
+```
+++
+## Clean up resources
+
+If you plan to continue working with subsequent quickstarts and tutorials, you might want to leave these resources in place. When no longer needed, delete the resource group, which deletes the alert rule and the related resources. To delete the resource group, use Azure CLI or Azure PowerShell:
+
+# [CLI](#tab/CLI)
+
+```azurecli
+az group delete --name my-resource-group
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```powershell
+Remove-AzResourceGroup -Name my-resource-group
+```
+++
+## Next steps
+
+- Learn about [best practices for setting up Azure Service Health alerts](https://www.microsoft.com/en-us/videoplayer/embed/RE2OtUa).
+- Learn how to [set up mobile push notifications for Azure Service Health](https://www.microsoft.com/en-us/videoplayer/embed/RE2OtUw).
+- Learn how to [configure webhook notifications for existing problem management systems](service-health-alert-webhook-guide.md).
+- Learn about [service health notifications](service-notifications.md).
+- Learn about [notification rate limiting](../azure-monitor/alerts/alerts-rate-limiting.md).
+- Review the [activity log alert webhook schema](../azure-monitor/alerts/activity-log-alerts-webhook.md).
+- Get an [overview of activity log alerts](../azure-monitor/alerts/alerts-overview.md), and learn how to receive alerts.
+- Learn more about [action groups](../azure-monitor/alerts/action-groups.md).
spring-cloud Access App Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/access-app-virtual-network.md
Title: "Azure Spring Cloud access app in virtual network"
-description: Access app in Azure Spring Cloud in a virtual network.
+ Title: "Azure Spring Apps access app in virtual network"
+description: Access app in Azure Spring Apps in a virtual network.
Last updated 11/30/2021-+ ms.devlang: azurecli # Access your application in a private network
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+ **This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier This article explains how to access an endpoint for your application in a private network.
-When **Assign Endpoint** on applications in an Azure Spring Cloud service instance is deployed in your virtual network, the endpoint is a private fully qualified domain name (FQDN). The domain is only accessible in the private network. Apps and services use the application endpoint. They include the *Test Endpoint* described in [View apps and deployments](./how-to-staging-environment.md#view-apps-and-deployments). *Log streaming*, described in [Stream Azure Spring Cloud app logs in real-time](./how-to-log-streaming.md), also works only within the private network.
+When you select **Assign Endpoint** on an application in an Azure Spring Apps service instance deployed in your virtual network, the endpoint is a private fully qualified domain name (FQDN). The domain is only accessible in the private network. Apps and services use the application endpoint. They include the *Test Endpoint* described in [View apps and deployments](./how-to-staging-environment.md#view-apps-and-deployments). *Log streaming*, described in [Stream Azure Spring Apps app logs in real-time](./how-to-log-streaming.md), also works only within the private network.
## Find the IP for your application #### [Portal](#tab/azure-portal)
-1. Select the virtual network resource you created as explained in [Deploy Azure Spring Cloud in your Azure virtual network (VNet injection)](./how-to-deploy-in-azure-virtual-network.md).
+1. Select the virtual network resource you created as explained in [Deploy Azure Spring Apps in your Azure virtual network (VNet injection)](./how-to-deploy-in-azure-virtual-network.md).
2. In the **Connected devices** search box, enter *kubernetes-internal*.
When **Assign Endpoint** on applications in an Azure Spring Cloud service instan
#### [CLI](#tab/azure-CLI)
-Find the IP Address for your Spring Cloud services. Customize the value of your spring cloud name based on your real environment.
+Find the IP address for your Azure Spring Apps service. Customize the value of your Azure Spring Apps instance name based on your real environment.
```azurecli SPRING_CLOUD_NAME='spring-cloud-name'
- SERVICE_RUNTIME_RG=`az spring-cloud show \
+ SERVICE_RUNTIME_RG=`az spring show \
--resource-group $RESOURCE_GROUP \ --name $SPRING_CLOUD_NAME \ --query "properties.networkProfile.serviceRuntimeNetworkResourceGroup" \
The following procedure creates a private DNS zone for an application in the pri
#### [CLI](#tab/azure-CLI)
-1. Define variables for your subscription, resource group, and Azure Spring Cloud instance. Customize the values based on your real environment.
+1. Define variables for your subscription, resource group, and Azure Spring Apps instance. Customize the values based on your real environment.
```azurecli SUBSCRIPTION='subscription-id' RESOURCE_GROUP='my-resource-group'
- VIRTUAL_NETWORK_NAME='azure-spring-cloud-vnet'
+ VIRTUAL_NETWORK_NAME='azure-spring-apps-vnet'
``` 1. Sign in to the Azure CLI and choose your active subscription.
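
As a hedged sketch of those steps, the sign-in and subscription selection typically look like the following; the private DNS zone creation that usually comes next is included as an assumption, reusing the `private.azuremicroservices.io` zone name referenced later in this article.

```azurecli
# Sign in and select the subscription defined above.
az login
az account set --subscription $SUBSCRIPTION

# Assumed next step: create the private DNS zone used for Azure Spring Apps apps.
az network private-dns zone create \
    --resource-group $RESOURCE_GROUP \
    --name private.azuremicroservices.io
```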
To link the private DNS zone to the virtual network, you need to create a virtua
2. On the left pane, select **Virtual network links**, then select **Add**.
-4. Enter *azure-spring-cloud-dns-link* for the **Link name**.
+4. Enter *azure-spring-apps-dns-link* for the **Link name**.
-5. For **Virtual network**, select the virtual network you created as explained in [Deploy Azure Spring Cloud in your Azure virtual network (VNet injection)](./how-to-deploy-in-azure-virtual-network.md).
+5. For **Virtual network**, select the virtual network you created as explained in [Deploy Azure Spring Apps in your Azure virtual network (VNet injection)](./how-to-deploy-in-azure-virtual-network.md).
![Add virtual network link](media/spring-cloud-access-app-vnet/add-virtual-network-link.png)
To link the private DNS zone to the virtual network, you need to create a virtua
#### [CLI](#tab/azure-CLI)
-Link the private DNS zone you created to the virtual network holding your Azure Spring Cloud service.
+Link the private DNS zone you created to the virtual network holding your Azure Spring Apps service.
```azurecli az network private-dns link vnet create \ --resource-group $RESOURCE_GROUP \
- --name azure-spring-cloud-dns-link \
+ --name azure-spring-apps-dns-link \
--zone-name private.azuremicroservices.io \ --virtual-network $VIRTUAL_NETWORK_NAME \ --registration-enabled false
Use the [IP address](#find-the-ip-for-your-application) to create the A record i
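
As a hedged CLI sketch of that step, the IP found earlier can be placed in a wildcard A record in the zone. The `kubernetes-internal` load balancer lookup and the wildcard record name are assumptions based on the surrounding steps; `$SERVICE_RUNTIME_RG` and `$RESOURCE_GROUP` come from the earlier variable definitions.

```azurecli
# Assumption: read the internal load balancer IP from the service runtime resource group.
SERVICE_RUNTIME_IP=$(az network lb frontend-ip list \
    --lb-name kubernetes-internal \
    --resource-group $SERVICE_RUNTIME_RG \
    --query "[0].privateIpAddress" \
    --output tsv)

# Create a wildcard A record in the private DNS zone that points to that IP.
az network private-dns record-set a add-record \
    --resource-group $RESOURCE_GROUP \
    --zone-name private.azuremicroservices.io \
    --record-set-name '*' \
    --ipv4-address $SERVICE_RUNTIME_IP
```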
## Assign private FQDN for your application
-After following the procedure in [Deploy Azure Spring Cloud in a virtual network](./how-to-deploy-in-azure-virtual-network.md), you can assign a private FQDN for your application.
+After following the procedure in [Deploy Azure Spring Apps in a virtual network](./how-to-deploy-in-azure-virtual-network.md), you can assign a private FQDN for your application.
#### [Portal](#tab/azure-portal)
-1. Select the Azure Spring Cloud service instance deployed in your virtual network, and open the **Apps** tab in the menu on the left.
+1. Select the Azure Spring Apps service instance deployed in your virtual network, and open the **Apps** tab in the menu on the left.
2. Select the application to show the **Overview** page.
Update your app to assign an endpoint to it. Customize the value of your app nam
```azurecli SPRING_CLOUD_APP='your spring cloud app'
-az spring-cloud app update \
+az spring app update \
--resource-group $RESOURCE_GROUP \ --name $SPRING_CLOUD_APP \ --service $SPRING_CLOUD_NAME \
az group delete --name $RESOURCE_GROUP
## Next steps - [Expose applications with end-to-end TLS in a virtual network](./expose-apps-gateway-end-to-end-tls.md)-- [Troubleshooting Azure Spring Cloud in VNET](./troubleshooting-vnet.md)-- [Customer Responsibilities for Running Azure Spring Cloud in VNET](./vnet-customer-responsibilities.md)
+- [Troubleshooting Azure Spring Apps in VNET](./troubleshooting-vnet.md)
+- [Customer Responsibilities for Running Azure Spring Apps in VNET](./vnet-customer-responsibilities.md)
spring-cloud Concept App Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/concept-app-status.md
Title: App status in Azure Spring Cloud
-description: Learn the app status categories in Azure Spring Cloud
+ Title: App status in Azure Spring Apps
+description: Learn the app status categories in Azure Spring Apps
Last updated 03/30/2022 -+
-# App status in Azure Spring Cloud
+# App status in Azure Spring Apps
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Java ✔️ C# **This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This article shows you how to view app status for Azure Spring Cloud.
+This article shows you how to view app status for Azure Spring Apps.
-The Azure Spring Cloud UI delivers information about the status of running applications. There's an **Apps** option for each resource group in a subscription that displays general status of application types. For each application type, there's a display of **Application instances**.
+The Azure Spring Apps UI delivers information about the status of running applications. There's an **Apps** option for each resource group in a subscription that displays general status of application types. For each application type, there's a display of **Application instances**.
## Apps status
The instance status is reported as one of the following values:
| Value | Definition | |-||
-| Starting | The binary is successfully deployed to the given instance. The instance booting the jar file may fail because the jar can't run properly. Azure Spring Cloud will restart the app instance in 60 seconds if it detects that the app instance is still in the *Starting* state. |
-| Running | The instance works. The instance can serve requests from inside Azure Spring Cloud. |
+| Starting | The binary is successfully deployed to the given instance. The instance booting the jar file may fail because the jar can't run properly. Azure Spring Apps will restart the app instance in 60 seconds if it detects that the app instance is still in the *Starting* state. |
+| Running | The instance works. The instance can serve requests from inside Azure Spring Apps. |
| Failed | The app instance failed to start the user's binary after several retries. The app instance may be in one of the following states:<br/>- The app may stay in the *Starting* status and never be ready for serving requests.<br/>- The app may boot up but crash within a few seconds. | | Terminating | The app instance is shutting down. The app may not serve requests and the app instance will be removed. |
The discovery status of the instance is reported as one of the following values:
## App registration status
-The *app registration* status shows the state in service discovery. Azure Spring Cloud uses Eureka for service discovery. For more information on how the Eureka client calculates the state, see [Eureka's health checks](https://cloud.spring.io/spring-cloud-static/Greenwich.RELEASE/multi/multi__service_discovery_eureka_clients.html#_eureka_s_health_checks).
+The *app registration* status shows the state in service discovery. Azure Spring Apps uses Eureka for service discovery. For more information on how the Eureka client calculates the state, see [Eureka's health checks](https://cloud.spring.io/spring-cloud-static/Greenwich.RELEASE/multi/multi__service_discovery_eureka_clients.html#_eureka_s_health_checks).
## Next steps
-* [Prepare a Spring or Steeltoe application for deployment in Azure Spring Cloud](how-to-prepare-app-deployment.md)
+* [Prepare a Spring or Steeltoe application for deployment in Azure Spring Apps](how-to-prepare-app-deployment.md)
spring-cloud Concept Manage Monitor App Spring Boot Actuator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/concept-manage-monitor-app-spring-boot-actuator.md
Last updated 05/06/2022-+ # Manage and monitor app with Spring Boot Actuator
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+ **This article applies to:** ✔️ Java ❌ C# **This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-After deploying new binary to your app, you may want to check the functionality and see information about your running application. This article explains how to access the API from a test endpoint provided by Azure Spring Cloud and expose the production-ready features for your app.
+After deploying a new binary to your app, you may want to check the functionality and see information about your running application. This article explains how to access the API from a test endpoint provided by Azure Spring Apps and expose the production-ready features for your app.
## Prerequisites
-This article assumes that you have a Spring Boot 2.x application that can be successfully deployed and booted on Azure Spring Cloud service. See [Quickstart: Launch an existing application in Azure Spring Cloud using the Azure portal](./quickstart.md)
+This article assumes that you have a Spring Boot 2.x application that can be successfully deployed and booted on the Azure Spring Apps service. See [Quickstart: Launch an existing application in Azure Spring Apps using the Azure portal](./quickstart.md).
## Verify app through test endpoint
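
The test endpoint URLs can usually be retrieved with the Azure CLI as well. The following is a hedged sketch: the exact parameter names may differ by CLI version, and the service instance and resource group names are placeholders.

```azurecli
# List the test endpoints (including keys) for the service instance.
az spring test-endpoint list \
    --resource-group my-resource-group \
    --name my-spring-apps-instance
```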
To view all the endpoints built-in, see [Exposing Endpoints](https://docs.spring
## Next steps
-* [Understand metrics for Azure Spring Cloud](./concept-metrics.md)
-* [Understanding app status in Azure Spring Cloud](./concept-app-status.md)
+* [Understand metrics for Azure Spring Apps](./concept-metrics.md)
+* [Understanding app status in Azure Spring Apps](./concept-app-status.md)
spring-cloud Concept Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/concept-metrics.md
Title: Metrics for Azure Spring Cloud
-description: Learn how to review metrics in Azure Spring Cloud
+ Title: Metrics for Azure Spring Apps
+description: Learn how to review metrics in Azure Spring Apps
Last updated 09/08/2020 -+
-# Metrics for Azure Spring Cloud
+# Metrics for Azure Spring Apps
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier Azure Metrics explorer is a component of the Microsoft Azure portal that allows plotting charts, visually correlating trends, and investigating spikes and dips in metrics. Use the metrics explorer to investigate the health and utilization of your resources.
-In Azure Spring Cloud, there are two viewpoints for metrics.
+In Azure Spring Apps, there are two viewpoints for metrics.
* Charts in each application overview page * Common metrics page
Each application's **Application Overview** page presents a metrics chart that a
![Application Metrics Overview](media/metrics/metrics-3.png)
-Azure Spring Cloud provides these five charts with metrics that are updated every minute:
+Azure Spring Apps provides these five charts with metrics that are updated every minute:
* **Http Server Errors**: Error count for HTTP requests to your app * **Data In**: Bytes received by your app
The time range can also be adjusted from last 30 minutes to last 30 days or a cu
![Metric Modification](media/metrics/metrics-6.png)
-The default view includes all of an Azure Spring Cloud service's application's metrics together. Metrics of one app or instance can be filtered in the display. Select **Add filter**, set the property to **App**, and select the target application you want to monitor in the **Values** text box.
+The default view includes all of an Azure Spring Apps service's application's metrics together. Metrics of one app or instance can be filtered in the display. Select **Add filter**, set the property to **App**, and select the target application you want to monitor in the **Values** text box.
You can use two kinds of filters (properties):
For more information, see [dotnet counters](/dotnet/core/diagnostics/dotnet-coun
>[!div class="mx-tdCol2BreakAll"] >| Display Name | Azure Metric Name | Unit | Details | >|--|--|-|-|
->| Bytes Received | IngressBytesReceived | Bytes | Count of bytes received by Azure Spring Cloud from the clients |
->| Bytes Sent | IngressBytesSent | Bytes | Count of bytes sent by Azure Spring Cloud to the clients |
->| Requests | IngressRequests | Count | Count of requests by Azure Spring Cloud from the clients |
->| Failed Requests | IngressFailedRequests | Count | Count of failed requests by Azure Spring Cloud from the clients |
->| Response Status | IngressResponseStatus | Count | HTTP response status returned by Azure Spring Cloud. The response status code distribution can be further categorized to show responses in 2xx, 3xx, 4xx, and 5xx categories |
->| Response Time | IngressResponseTime | Seconds | Http response time return by Azure Spring Cloud |
->| Throughput In (bytes/s) | IngressBytesReceivedRate | BytesPerSecond | Bytes received per second by Azure Spring Cloud from the clients |
->| Throughput Out (bytes/s) | IngressBytesSentRate | BytesPerSecond | Bytes sent per second by Azure Spring Cloud to the clients |
+>| Bytes Received | IngressBytesReceived | Bytes | Count of bytes received by Azure Spring Apps from the clients |
+>| Bytes Sent | IngressBytesSent | Bytes | Count of bytes sent by Azure Spring Apps to the clients |
+>| Requests | IngressRequests | Count | Count of requests by Azure Spring Apps from the clients |
+>| Failed Requests | IngressFailedRequests | Count | Count of failed requests by Azure Spring Apps from the clients |
+>| Response Status | IngressResponseStatus | Count | HTTP response status returned by Azure Spring Apps. The response status code distribution can be further categorized to show responses in 2xx, 3xx, 4xx, and 5xx categories |
+>| Response Time | IngressResponseTime | Seconds | HTTP response time returned by Azure Spring Apps |
+>| Throughput In (bytes/s) | IngressBytesReceivedRate | BytesPerSecond | Bytes received per second by Azure Spring Apps from the clients |
+>| Throughput Out (bytes/s) | IngressBytesSentRate | BytesPerSecond | Bytes sent per second by Azure Spring Apps to the clients |
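
These ingress metrics can also be queried programmatically. The following is a hedged Azure CLI sketch using `az monitor metrics list`; the resource names are placeholders and the metric name is taken from the table above.

```azurecli
# Query the IngressRequests metric for the service instance at one-minute granularity.
SPRING_APPS_ID=$(az spring show \
    --resource-group my-resource-group \
    --name my-spring-apps-instance \
    --query id --output tsv)

az monitor metrics list \
    --resource $SPRING_APPS_ID \
    --metric IngressRequests \
    --interval PT1M
```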
## Next steps
-* [Quickstart: Monitoring Azure Spring Cloud apps with logs, metrics, and tracing](./quickstart-logs-metrics-tracing.md)
+* [Quickstart: Monitoring Azure Spring Apps apps with logs, metrics, and tracing](./quickstart-logs-metrics-tracing.md)
* [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) * [Analyze logs and metrics with diagnostics settings](./diagnostic-services.md)
-* [Tutorial: Monitor Spring Cloud resources using alerts and action groups](./tutorial-alerts-action-groups.md)
-* [Quotas and Service Plans for Azure Spring Cloud](./quotas.md)
+* [Tutorial: Monitor Spring app resources using alerts and action groups](./tutorial-alerts-action-groups.md)
+* [Quotas and Service Plans for Azure Spring Apps](./quotas.md)
spring-cloud Concept Security Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/concept-security-controls.md
Title: Security controls for Azure Spring Cloud Service
-description: Use security controls built in into Azure Spring Cloud Service.
+ Title: Security controls for Azure Spring Apps Service
+description: Use security controls built into Azure Spring Apps Service.
Last updated 04/23/2020-+
-# Security controls for Azure Spring Cloud Service
+# Security controls for Azure Spring Apps Service
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Java ✔️ C# **This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-Security controls are built in into Azure Spring Cloud Service.
+Security controls are built into Azure Spring Apps Service.
A security control is a quality or feature of an Azure service that contributes to the service's ability to prevent, detect, and respond to security vulnerabilities. For each control, we use *Yes* or *No* to indicate whether it is currently in place for the service. We use *N/A* for a control that is not applicable to the service.
A security control is a quality or feature of an Azure service that contributes
|:-|:-|:-|:-| | Server-side encryption at rest: Microsoft-managed keys | Yes | User uploaded source and artifacts, config server settings, app settings, and data in persistent storage are stored in Azure Storage, which automatically encrypts the content at rest.<br><br>Config server cache, runtime binaries built from uploaded source, and application logs during the application lifetime are saved to Azure managed disk, which automatically encrypts the content at rest.<br><br>Container images built from user uploaded source are saved in Azure Container Registry, which automatically encrypts the image content at rest. | [Azure Storage encryption for data at rest](../storage/common/storage-service-encryption.md)<br><br>[Server-side encryption of Azure managed disks](../virtual-machines/disk-encryption.md)<br><br>[Container image storage in Azure Container Registry](../container-registry/container-registry-storage.md) | | Encryption in transit | Yes | User app public endpoints use HTTPS for inbound traffic by default. | |
-| API calls encrypted | Yes | Management calls to configure Azure Spring Cloud service occur via Azure Resource Manager calls over HTTPS. | [Azure Resource Manager](../azure-resource-manager/index.yml) |
+| API calls encrypted | Yes | Management calls to configure Azure Spring Apps service occur via Azure Resource Manager calls over HTTPS. | [Azure Resource Manager](../azure-resource-manager/index.yml) |
| Customer Lockbox | Yes | Provide Microsoft with access to relevant customer data during support scenarios. | [Customer Lockbox for Microsoft Azure](../security/fundamentals/customer-lockbox-overview.md) **Network access security controls** | Security control | Yes/No | Notes | Documentation | |:-|:-|:-|:-|
-| Service Tag | Yes | Use **AzureSpringCloud** service tag to define outbound network access controls on [network security groups](../virtual-network/network-security-groups-overview.md#security-rules) or [Azure Firewall](../firewall/service-tags.md), to allow traffic to applications in Azure Spring Cloud. | [Service tags](../virtual-network/service-tags-overview.md) |
+| Service Tag | Yes | Use **AzureSpringCloud** service tag to define outbound network access controls on [network security groups](../virtual-network/network-security-groups-overview.md#security-rules) or [Azure Firewall](../firewall/service-tags.md), to allow traffic to applications in Azure Spring Apps. | [Service tags](../virtual-network/service-tags-overview.md) |
## Next steps
-* [Quickstart: Deploy your first Spring Boot app in Azure Spring Cloud](./quickstart.md)
+* [Quickstart: Deploy your first Spring Boot app in Azure Spring Apps](./quickstart.md)
spring-cloud Concept Understand App And Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/concept-understand-app-and-deployment.md
Title: "App and deployment in Azure Spring Cloud"
-description: This topic explains the distinction between application and deployment in Azure Spring Cloud.
+ Title: "App and deployment in Azure Spring Apps"
+description: This topic explains the distinction between application and deployment in Azure Spring Apps.
Last updated 07/23/2020-+
-# App and deployment in Azure Spring Cloud
+# App and deployment in Azure Spring Apps
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Java ✔️ C# **This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-**App** and **Deployment** are the two key concepts in the resource model of Azure Spring Cloud. In Azure Spring Cloud, an *App* is an abstraction of one business app. One version of code or binary deployed as the *App* runs in a *Deployment*. Apps run in an *Azure Spring Cloud Service Instance*, or simply *service instance*, as shown next.
+**App** and **Deployment** are the two key concepts in the resource model of Azure Spring Apps. In Azure Spring Apps, an *App* is an abstraction of one business app. One version of code or binary deployed as the *App* runs in a *Deployment*. Apps run in an *Azure Spring Apps Service Instance*, or simply *service instance*, as shown next.
![Apps and Deployments](./media/spring-cloud-app-and-deployment/app-deployment-rev.png)
-You can have multiple service instances within a single Azure subscription, but the Azure Spring Cloud Service is easiest to use when all of the Apps that make up a business app reside within a single service instance.
+You can have multiple service instances within a single Azure subscription, but the Azure Spring Apps Service is easiest to use when all of the Apps that make up a business app reside within a single service instance.
-Azure Spring Cloud standard tier allows one App to have one production deployment and one staging deployment, so that you can do blue/green deployment on it easily.
+Azure Spring Apps standard tier allows one App to have one production deployment and one staging deployment, so that you can do blue/green deployment on it easily.
## App
The following features/properties are defined on Deployment level, and will be e
## Next steps
-* [Set up a staging environment in Azure Spring Cloud](./how-to-staging-environment.md)
+* [Set up a staging environment in Azure Spring Apps](./how-to-staging-environment.md)
spring-cloud Concepts Blue Green Deployment Strategies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/concepts-blue-green-deployment-strategies.md
Title: "Blue-green deployment strategies in Azure Spring Cloud"
-description: This topic explains two approaches to blue-green deployments in Azure Spring Cloud.
+ Title: "Blue-green deployment strategies in Azure Spring Apps"
+description: This topic explains two approaches to blue-green deployments in Azure Spring Apps.
Last updated 11/12/2021-+
-# Blue-green deployment strategies in Azure Spring Cloud
+# Blue-green deployment strategies in Azure Spring Apps
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This article describes the blue-green deployment support in Azure Spring Cloud.
+This article describes the blue-green deployment support in Azure Spring Apps.
-Azure Spring Cloud (Standard tier and higher) permits two deployments for every app, only one of which receives production traffic. This pattern is commonly known as blue-green deployment. Azure Spring Cloud's support for blue-green deployment, together with a [Continuous Delivery (CD)](/devops/deliver/what-is-continuous-delivery) pipeline and rigorous automated testing, allows agile application deployments with high confidence.
+Azure Spring Apps (Standard tier and higher) permits two deployments for every app, only one of which receives production traffic. This pattern is commonly known as blue-green deployment. Azure Spring Apps support for blue-green deployment, together with a [Continuous Delivery (CD)](/devops/deliver/what-is-continuous-delivery) pipeline and rigorous automated testing, allows agile application deployments with high confidence.
## Alternating deployments
-The simplest way to implement blue-green deployment with Azure Spring Cloud is to create two fixed deployments and always deploy to the deployment that isn't receiving production traffic. With the [Azure Spring Cloud task for Azure Pipelines](/azure/devops/pipelines/tasks/deploy/azure-spring-cloud), you can deploy this way just by setting the `UseStagingDeployment` flag to `true`.
+The simplest way to implement blue-green deployment with Azure Spring Apps is to create two fixed deployments and always deploy to the deployment that isn't receiving production traffic. With the [Azure Spring Apps task for Azure Pipelines](/azure/devops/pipelines/tasks/deploy/azure-spring-cloud), you can deploy this way just by setting the `UseStagingDeployment` flag to `true`.
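
Outside of Azure Pipelines, roughly the same flow can be scripted with the Azure CLI. The following is a hedged sketch, not the pipeline task itself; the `staging` deployment name, artifact path, and resource names are placeholders.

```azurecli
# Deploy the new build to the staging deployment, then promote it to production.
az spring app deploy \
    --resource-group my-resource-group \
    --service my-spring-apps-instance \
    --name my-app \
    --deployment staging \
    --artifact-path target/app.jar

az spring app set-deployment \
    --resource-group my-resource-group \
    --service my-spring-apps-instance \
    --name my-app \
    --deployment staging
```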
Here's how the alternating deployments approach works in practice:
The alternating deployments approach is simple and fast, as it doesn't require t
#### Persistent staging deployment
-The staging deployment always remains running, and thus consuming resources of the Azure Spring Cloud instance. This effectively doubles the resource requirements of each application on Azure Spring Cloud.
+The staging deployment always remains running, and thus consumes resources of the Azure Spring Apps instance. This effectively doubles the resource requirements of each application on Azure Spring Apps.
#### The approval race condition
In the illustration below, version `v5` is running on the deployment `deployment
![Deploying new version on a named deployment](media/spring-cloud-blue-green-patterns/named-deployment-1.png)
-There's no risk of another version being deployed in parallel. First, Azure Spring Cloud doesn't allow the creation of a third deployment while two deployments already exist. Second, even if it was possible to have more than two deployments, each deployment is identified by the version of the application it contains. Thus, the pipeline orchestrating the deployment of `v6` would only attempt to set `deployment-v6` as the production deployment.
+There's no risk of another version being deployed in parallel. First, Azure Spring Apps doesn't allow the creation of a third deployment while two deployments already exist. Second, even if it was possible to have more than two deployments, each deployment is identified by the version of the application it contains. Thus, the pipeline orchestrating the deployment of `v6` would only attempt to set `deployment-v6` as the production deployment.
![New version receives production traffic named deployment](media/spring-cloud-blue-green-patterns/named-deployment-2.png)
However, there are drawbacks as well, as described in the following section.
#### Deployment pipeline failures
-Between the time a deployment starts and the time the staging deployment is deleted, any additional attempts to run the deployment pipeline will fail. The pipeline will attempt to create a new deployment, which will result in an error because only two deployments are permitted per application in Azure Spring Cloud.
+Between the time a deployment starts and the time the staging deployment is deleted, any additional attempts to run the deployment pipeline will fail. The pipeline will attempt to create a new deployment, which will result in an error because only two deployments are permitted per application in Azure Spring Apps.
Therefore, the deployment orchestration must either have the means to retry a failed deployment process at a later time, or the means to ensure that the deployment flows for each version will remain queued until the flow is completed for all previous versions. ## Next steps
-* [Automate application deployments to Azure Spring Cloud](./how-to-cicd.md)
+* [Automate application deployments to Azure Spring Apps](./how-to-cicd.md)
spring-cloud Connect Managed Identity To Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/connect-managed-identity-to-azure-sql.md
Title: Use Managed identity to connect Azure SQL to Azure Spring Cloud app
-description: Set up managed identity to connect Azure SQL to an Azure Spring Cloud app.
+ Title: Use Managed identity to connect Azure SQL to Azure Spring Apps app
+description: Set up managed identity to connect Azure SQL to an Azure Spring Apps app.
Last updated 03/25/2021-+
-# Use a managed identity to connect Azure SQL Database to an Azure Spring Cloud app
+# Use a managed identity to connect Azure SQL Database to an Azure Spring Apps app
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Java ❌ C# **This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This article shows you how to create a managed identity for an Azure Spring Cloud app and use it to access Azure SQL Database.
+This article shows you how to create a managed identity for an Azure Spring Apps app and use it to access Azure SQL Database.
[Azure SQL Database](https://azure.microsoft.com/services/sql-database/) is the intelligent, scalable, relational database service built for the cloud. It's always up to date, with AI-powered and automated features that optimize performance and durability. Serverless compute and Hyperscale storage options automatically scale resources on demand, so you can focus on building new applications without worrying about storage size or resource management. ## Prerequisites * Follow the [Spring Data JPA tutorial](/azure/developer/java/spring-framework/configure-spring-data-jpa-with-azure-sql-server) to provision an Azure SQL Database and get it working with a Java app locally
-* Follow the [Azure Spring Cloud system-assigned managed identity tutorial](./how-to-enable-system-assigned-managed-identity.md) to provision an Azure Spring Cloud app with MI enabled
+* Follow the [Azure Spring Apps system-assigned managed identity tutorial](./how-to-enable-system-assigned-managed-identity.md) to provision an Azure Spring Apps app with MI enabled
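
If the system-assigned identity isn't enabled yet, it can typically be turned on from the CLI as well. This is a hedged sketch; the flag names vary between CLI versions and the resource names are placeholders.

```azurecli
# Enable the system-assigned managed identity on the app (illustrative names).
az spring app identity assign \
    --resource-group my-resource-group \
    --service my-spring-apps-instance \
    --name my-app \
    --system-assigned
```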
## Grant permission to the Managed Identity
Open the *src/main/resources/application.properties* file, and add `Authenticati
spring.datasource.url=jdbc:sqlserver://$AZ_DATABASE_NAME.database.windows.net:1433;database=demo;encrypt=true;trustServerCertificate=false;hostNameInCertificate=*.database.windows.net;loginTimeout=30;Authentication=ActiveDirectoryMSI; ```
-## Build and deploy the app to Azure Spring Cloud
+## Build and deploy the app to Azure Spring Apps
-Rebuild the app and deploy it to the Azure Spring Cloud app provisioned in the second bullet point under Prerequisites. Now you have a Spring Boot application, authenticated by a Managed Identity, that uses JPA to store and retrieve data from an Azure SQL Database in Azure Spring Cloud.
+Rebuild the app and deploy it to the Azure Spring Apps app provisioned in the second bullet point under Prerequisites. Now you have a Spring Boot application, authenticated by a Managed Identity, that uses JPA to store and retrieve data from an Azure SQL Database in Azure Spring Apps.
## Next steps
-* [How to access Storage blob with managed identity in Azure Spring Cloud](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples/tree/master/managed-identity-storage-blob)
-* [How to enable system-assigned managed identity for applications in Azure Spring Cloud](./how-to-enable-system-assigned-managed-identity.md)
+* [How to access Storage blob with managed identity in Azure Spring Apps](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples/tree/master/managed-identity-storage-blob)
+* [How to enable system-assigned managed identity for applications in Azure Spring Apps](./how-to-enable-system-assigned-managed-identity.md)
* [Learn more about managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md)
-* [Authenticate Azure Spring Cloud with Key Vault in GitHub Actions](./github-actions-key-vault.md)
+* [Authenticate Azure Spring Apps with Key Vault in GitHub Actions](./github-actions-key-vault.md)
spring-cloud Diagnostic Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/diagnostic-services.md
Title: Analyze logs and metrics in Azure Spring Cloud | Microsoft Docs
-description: Learn how to analyze diagnostics data in Azure Spring Cloud
+ Title: Analyze logs and metrics in Azure Spring Apps | Microsoft Docs
+description: Learn how to analyze diagnostics data in Azure Spring Apps
Last updated 01/06/2020 -+ # Analyze logs and metrics with diagnostics settings
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+ **This article applies to:** ✔️ Java ✔️ C# **This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This article shows you how to analyze diagnostics data in Azure Spring Cloud.
+This article shows you how to analyze diagnostics data in Azure Spring Apps.
-Using the diagnostics functionality of Azure Spring Cloud, you can analyze logs and metrics with any of the following
+Using the diagnostics functionality of Azure Spring Apps, you can analyze logs and metrics with any of the following:
* Use Azure Log Analytics, where the data is written to Azure Storage. There is a delay when exporting logs to Log Analytics. * Save logs to a storage account for auditing or manual inspection. You can specify the retention time (in days).
Using the diagnostics functionality of Azure Spring Cloud, you can analyze logs
Choose the log category and metric category you want to monitor. > [!TIP]
-> Just want to stream your logs? Check out this [Azure CLI command](/cli/azure/spring-cloud/app#az-spring-cloud-app-logs)!
+> Just want to stream your logs? Check out this [Azure CLI command](/cli/azure/spring/app#az-spring-cloud-app-logs)!
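
For reference, a hedged sketch of that streaming command (resource names are placeholders):

```azurecli
# Stream (follow) the console logs of a running app.
az spring app logs \
    --resource-group my-resource-group \
    --service my-spring-apps-instance \
    --name my-app \
    --follow
```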
## Logs
Choose the log category and metric category you want to monitor.
## Metrics
-For a complete list of metrics, see [Spring Cloud Metrics](./concept-metrics.md#user-metrics-options).
+For a complete list of metrics, see the [User metrics options](./concept-metrics.md#user-metrics-options) section of [Metrics for Azure Spring Apps](concept-metrics.md).
To get started, enable one of these services to receive the data. To learn about configuring Log Analytics, see [Get started with Log Analytics in Azure Monitor](../azure-monitor/logs/log-analytics-tutorial.md). ## Configure diagnostics settings
-1. In the Azure portal, go to your Azure Spring Cloud instance.
+1. In the Azure portal, go to your Azure Spring Apps instance.
1. Select **diagnostics settings** option, and then select **Add diagnostics setting**. 1. Enter a name for the setting, and then choose where you want to send the logs. You can select any combination of the following three options: * **Archive to a storage account**
To get started, enable one of these services to receive the data. To learn about
> [!NOTE] > There might be a gap of up to 15 minutes between when logs or metrics are emitted and when they appear in your storage account, your event hub, or Log Analytics.
-> If the Azure Spring Cloud instance is deleted or moved, the operation won't cascade to the **diagnostics settings** resources. The **diagnostics settings** resources have to be deleted manually before the operation against its parent, the Azure Spring Cloud instance. Otherwise, if a new Azure Spring Cloud instance is provisioned with the same resource ID as the deleted one, or if the Azure Spring Cloud instance is moved back, the previous **diagnostics settings** resources continue extending it.
+> If the Azure Spring Apps instance is deleted or moved, the operation won't cascade to the **diagnostics settings** resources. The **diagnostics settings** resources have to be deleted manually before you perform the operation against their parent, the Azure Spring Apps instance. Otherwise, if a new Azure Spring Apps instance is provisioned with the same resource ID as the deleted one, or if the Azure Spring Apps instance is moved back, the previous **diagnostics settings** resources continue to extend it.
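
As an alternative to the portal steps above, the diagnostic setting can usually be created with the Azure CLI. The following is a hedged sketch: the setting name, workspace ID variable, and the `ApplicationConsole` log category are assumptions, and the categories available may differ for your instance.

```azurecli
# Send the application console logs of the service instance to a Log Analytics workspace.
SPRING_APPS_ID=$(az spring show \
    --resource-group my-resource-group \
    --name my-spring-apps-instance \
    --query id --output tsv)

az monitor diagnostic-settings create \
    --name send-logs-to-law \
    --resource $SPRING_APPS_ID \
    --workspace $LOG_ANALYTICS_WORKSPACE_ID \
    --logs '[{"category":"ApplicationConsole","enabled":true}]'
```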
## View the logs and metrics
There are various methods to view logs and metrics as described under the follow
### Use the Logs blade
-1. In the Azure portal, go to your Azure Spring Cloud instance.
+1. In the Azure portal, go to your Azure Spring Apps instance.
1. To open the **Log Search** pane, select **Logs**. 1. In the **Tables** search box * To view logs, enter a simple query such as:
Azure Log Analytics is running with a Kusto engine so you can query your logs fo
Application logs provide critical information and verbose logs about your application's health, performance, and more. In the next sections are some simple queries to help you understand your application's current and past states.
-### Show application logs from Azure Spring Cloud
+### Show application logs from Azure Spring Apps
-To review a list of application logs from Azure Spring Cloud, sorted by time with the most recent logs shown first, run the following query:
+To review a list of application logs from Azure Spring Apps, sorted by time with the most recent logs shown first, run the following query:
```sql AppPlatformLogsforSpring
You may be able to use the same strategy for other Java log libraries.
## Next steps
-* [Quickstart: Deploy your first Spring Boot app in Azure Spring Cloud](./quickstart.md)
+* [Quickstart: Deploy your first Spring Boot app in Azure Spring Apps](./quickstart.md)
spring-cloud Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/disaster-recovery.md
Title: Azure Spring Cloud geo-disaster recovery | Microsoft Docs
-description: Learn how to protect your Spring Cloud application from regional outages
+ Title: Azure Spring Apps geo-disaster recovery | Microsoft Docs
+description: Learn how to protect your Spring application from regional outages
Last updated 10/24/2019 -+
-# Azure Spring Cloud disaster recovery
+# Azure Spring Apps disaster recovery
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Java ✔️ C# **This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This article explains some strategies you can use to protect your applications in Azure Spring Cloud from experiencing downtime. Any region or data center may experience downtime caused by regional disasters, but careful planning can mitigate impact on your customers.
+This article explains some strategies you can use to protect your applications in Azure Spring Apps from experiencing downtime. Any region or data center may experience downtime caused by regional disasters, but careful planning can mitigate impact on your customers.
## Plan your application deployment
-Applications in Azure Spring Cloud run in a specific region. Azure operates in multiple geographies around the world. An Azure geography is a defined area of the world that contains at least one Azure Region. An Azure region is an area within a geography, containing one or more data centers. Each Azure region is paired with another region within the same geography, together making a regional pair. Azure serializes platform updates (planned maintenance) across regional pairs, ensuring that only one region in each pair is updated at a time. In the event of an outage affecting multiple regions, at least one region in each pair will be prioritized for recovery.
+Applications in Azure Spring Apps run in a specific region. Azure operates in multiple geographies around the world. An Azure geography is a defined area of the world that contains at least one Azure Region. An Azure region is an area within a geography, containing one or more data centers. Each Azure region is paired with another region within the same geography, together making a regional pair. Azure serializes platform updates (planned maintenance) across regional pairs, ensuring that only one region in each pair is updated at a time. In the event of an outage affecting multiple regions, at least one region in each pair will be prioritized for recovery.
-Ensuring high availability and protection from disasters requires that you deploy your Spring Cloud applications to multiple regions. Azure provides a list of [paired regions](../availability-zones/cross-region-replication-azure.md) so that you can plan your Spring Cloud deployments to regional pairs. We recommend that you consider three key factors when designing your architecture: region availability, Azure paired regions, and service availability.
+Ensuring high availability and protection from disasters requires that you deploy your Spring applications to multiple regions. Azure provides a list of [paired regions](../availability-zones/cross-region-replication-azure.md) so that you can plan your Spring app deployments to regional pairs. We recommend that you consider three key factors when designing your architecture: region availability, Azure paired regions, and service availability.
* Region availability: Choose a geographic area close to your users to minimize network lag and transmission time. * Azure paired regions: Choose paired regions within your chosen geographic area to ensure coordinated platform updates and prioritized recovery efforts if needed.
Ensuring high availability and protection from disasters requires that you deplo
## Use Azure Traffic Manager to route traffic
-[Azure Traffic Manager](../traffic-manager/traffic-manager-overview.md) provides DNS-based traffic load-balancing and can distribute network traffic across multiple regions. Use Azure Traffic Manager to direct customers to the closest Azure Spring Cloud service instance to them. For best performance and redundancy, direct all application traffic through Azure Traffic Manager before sending it to your Azure Spring Cloud service.
+[Azure Traffic Manager](../traffic-manager/traffic-manager-overview.md) provides DNS-based traffic load-balancing and can distribute network traffic across multiple regions. Use Azure Traffic Manager to direct customers to the closest Azure Spring Apps service instance to them. For best performance and redundancy, direct all application traffic through Azure Traffic Manager before sending it to your Azure Spring Apps service.
-If you have applications in Azure Spring Cloud running in multiple regions, use Azure Traffic Manager to control the flow of traffic to your applications in each region. Define an Azure Traffic Manager endpoint for each service using the service IP. Customers should connect to an Azure Traffic Manager DNS name pointing to the Azure Spring Cloud service. Azure Traffic Manager load balances traffic across the defined endpoints. If a disaster strikes a data center, Azure Traffic Manager will direct traffic from that region to its pair, ensuring service continuity.
+If you have applications in Azure Spring Apps running in multiple regions, use Azure Traffic Manager to control the flow of traffic to your applications in each region. Define an Azure Traffic Manager endpoint for each service using the service IP. Customers should connect to an Azure Traffic Manager DNS name pointing to the Azure Spring Apps service. Azure Traffic Manager load balances traffic across the defined endpoints. If a disaster strikes a data center, Azure Traffic Manager will direct traffic from that region to its pair, ensuring service continuity.
-## Create Azure Traffic Manager for Azure Spring Cloud
+## Create Azure Traffic Manager for Azure Spring Apps
-1. Create Azure Spring Cloud in two different regions.
-You will need two service instances of Azure Spring Cloud deployed in two different regions (East US and West Europe). Launch an existing application in Azure Spring Cloud using the Azure portal to create two service instances. Each will serve as primary and fail-over endpoint for Traffic.
+1. Create Azure Spring Apps in two different regions.
+You will need two service instances of Azure Spring Apps deployed in two different regions (East US and West Europe). Launch an existing application in Azure Spring Apps using the Azure portal to create two service instances. Each will serve as the primary or failover endpoint for traffic.
**Two service instances info:**
Follow [Custom Domain Document](./tutorial-custom-domain.md) to set up custom do
3. Create a traffic manager and two endpoints: [Create a Traffic Manager profile using the Azure portal](../traffic-manager/quickstart-create-traffic-manager-profile.md). Here is the traffic manager profile:
-* Traffic Manager DNS Name: `http://asc-bcdr.trafficmanager.net`
+* Traffic Manager DNS Name: `http://asa-bcdr.trafficmanager.net`
* Endpoint Profiles: | Profile | Type | Target | Priority | Custom Header Settings | |--|--|--|--|--|
-| Endpoint A Profile | External Endpoint | service-sample-a.asc-test.net | 1 | host: bcdr-test.contoso.com |
-| Endpoint B Profile | External Endpoint | service-sample-b.asc-test.net | 2 | host: bcdr-test.contoso.com |
+| Endpoint A Profile | External Endpoint | service-sample-a.azuremicroservices.io | 1 | host: bcdr-test.contoso.com |
+| Endpoint B Profile | External Endpoint | service-sample-b.azuremicroservices.io | 2 | host: bcdr-test.contoso.com |
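
As an alternative to the portal experience, a profile like this can be approximated with the Azure CLI. The following is a hedged sketch; the profile name, resource group, and header values mirror the table above but are otherwise illustrative.

```azurecli
# Create a priority-based Traffic Manager profile and its two external endpoints.
az network traffic-manager profile create \
    --resource-group my-resource-group \
    --name asa-bcdr \
    --routing-method Priority \
    --unique-dns-name asa-bcdr

az network traffic-manager endpoint create \
    --resource-group my-resource-group \
    --profile-name asa-bcdr \
    --name endpoint-a \
    --type externalEndpoints \
    --target service-sample-a.azuremicroservices.io \
    --priority 1 \
    --custom-headers host=bcdr-test.contoso.com

az network traffic-manager endpoint create \
    --resource-group my-resource-group \
    --profile-name asa-bcdr \
    --name endpoint-b \
    --type externalEndpoints \
    --target service-sample-b.azuremicroservices.io \
    --priority 2 \
    --custom-headers host=bcdr-test.contoso.com
```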
-4. Create a CNAME record in DNS Zone: bcdr-test.contoso.com CNAME asc-bcdr.trafficmanager.net.
+4. Create a CNAME record in DNS Zone: bcdr-test.contoso.com CNAME asa-bcdr.trafficmanager.net.
5. Now, the environment is completely set up. Customers should be able to access the app via: bcdr-test.contoso.com ## Next steps
-* [Quickstart: Deploy your first Spring Boot app in Azure Spring Cloud](./quickstart.md)
+* [Quickstart: Deploy your first Spring Boot app in Azure Spring Apps](./quickstart.md)
spring-cloud Expose Apps Gateway End To End Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/expose-apps-gateway-end-to-end-tls.md
Title: Expose applications with end-to-end TLS in a virtual network using Application Gateway-+ description: How to expose applications to the internet using Application Gateway Last updated 02/28/2022-+ ms.devlang: java, azurecli # Expose applications with end-to-end TLS in a virtual network
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+ **This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This article explains how to expose applications to the internet using Application Gateway. When an Azure Spring Cloud service instance is deployed in your virtual network, applications on the service instance are only accessible in the private network. To make the applications accessible on the Internet, you need to integrate with Azure Application Gateway.
+This article explains how to expose applications to the internet using Application Gateway. When an Azure Spring Apps service instance is deployed in your virtual network, applications on the service instance are only accessible in the private network. To make the applications accessible on the Internet, you need to integrate with Azure Application Gateway.
## Prerequisites - [Azure CLI version 2.0.4 or later](/cli/azure/install-azure-cli).-- An Azure Spring Cloud service instance deployed in a virtual network with an application accessible over the private network using the default `.private.azuremicroservices.io` domain suffix. For more information, see [Deploy Azure Spring Cloud in a virtual network](./how-to-deploy-in-azure-virtual-network.md)
+- An Azure Spring Apps service instance deployed in a virtual network with an application accessible over the private network using the default `.private.azuremicroservices.io` domain suffix. For more information, see [Deploy Azure Spring Apps in a virtual network](./how-to-deploy-in-azure-virtual-network.md)
- A custom domain to be used to access the application. - A certificate, stored in Key Vault, which matches the custom domain to be used to establish the HTTPS listener. For more information, see [Tutorial: Import a certificate in Azure Key Vault](../key-vault/certificates/tutorial-import-certificate.md).
-## Configure Application Gateway for Azure Spring Cloud
+## Configure Application Gateway for Azure Spring Apps
-We recommend that the domain name, as seen by the browser, is the same as the host name which Application Gateway uses to direct traffic to the Azure Spring Cloud back end. This recommendation provides the best experience when using Application Gateway to expose applications hosted in Azure Spring Cloud and residing in a virtual network. If the domain exposed by Application Gateway is different from the domain accepted by Azure Spring Cloud, cookies and generated redirect URLs (for example) can be broken.
+We recommend that the domain name, as seen by the browser, be the same as the host name that Application Gateway uses to direct traffic to the Azure Spring Apps back end. This recommendation provides the best experience when using Application Gateway to expose applications hosted in Azure Spring Apps and residing in a virtual network. If the domain exposed by Application Gateway is different from the domain accepted by Azure Spring Apps, cookies and generated redirect URLs (for example) can be broken.
-To configure Application Gateway in front of Azure Spring Cloud, use the following steps.
+To configure Application Gateway in front of Azure Spring Apps, use the following steps.
-1. Follow the instructions in [Deploy Azure Spring Cloud in a virtual network](./how-to-deploy-in-azure-virtual-network.md).
+1. Follow the instructions in [Deploy Azure Spring Apps in a virtual network](./how-to-deploy-in-azure-virtual-network.md).
1. Follow the instructions in [Access your application in a private network](./access-app-virtual-network.md). 1. Acquire a certificate for your domain of choice and store that in Key Vault. For more information, see [Tutorial: Import a certificate in Azure Key Vault](../key-vault/certificates/tutorial-import-certificate.md).
-1. Configure a custom domain and corresponding certificate from Key Vault on an app deployed onto Azure Spring Cloud. For more information, see [Tutorial: Map an existing custom domain to Azure Spring Cloud](./tutorial-custom-domain.md).
+1. Configure a custom domain and corresponding certificate from Key Vault on an app deployed onto Azure Spring Apps. For more information, see [Tutorial: Map an existing custom domain to Azure Spring Apps](./tutorial-custom-domain.md).
1. Deploy Application Gateway in a virtual network configured according to the following list:
- - Use Azure Spring Cloud in the backend pool, referenced by the domain suffixed with `private.azuremicroservices.io`.
+ - Use Azure Spring Apps in the backend pool, referenced by the domain suffixed with `private.azuremicroservices.io`.
- Include an HTTPS listener using the same certificate from Key Vault.
- - Configure the virtual network with HTTP settings that use the custom domain name configured on Azure Spring Cloud instead of the domain suffixed with `private.azuremicroservices.io`.
+ - Configure the virtual network with HTTP settings that use the custom domain name configured on Azure Spring Apps instead of the domain suffixed with `private.azuremicroservices.io`.
1. Configure your public DNS to point to Application Gateway. ## Define variables
-Next, use the following commands to define variables for the resource group and virtual network you created as directed in [Deploy Azure Spring Cloud in a virtual network](./how-to-deploy-in-azure-virtual-network.md). Customize the values based on your real environment. When you define `SPRING_APP_PRIVATE_FQDN`, remove `https://` from the URI.
+Next, use the following commands to define variables for the resource group and virtual network you created as directed in [Deploy Azure Spring Apps in a virtual network](./how-to-deploy-in-azure-virtual-network.md). Customize the values based on your real environment. When you define `SPRING_APP_PRIVATE_FQDN`, remove `https://` from the URI.
```bash
SUBSCRIPTION='subscription-id'
RESOURCE_GROUP='my-resource-group'
LOCATION='eastus'
SPRING_CLOUD_NAME='name-of-spring-cloud-instance'
-APPNAME='name-of-app-in-azure-spring-cloud'
+APPNAME='name-of-app-in-azure-spring-apps'
SPRING_APP_PRIVATE_FQDN='$APPNAME.private.azuremicroservices.io'
-VIRTUAL_NETWORK_NAME='azure-spring-cloud-vnet'
+VIRTUAL_NETWORK_NAME='azure-spring-apps-vnet'
APPLICATION_GATEWAY_SUBNET_NAME='app-gw-subnet'
APPLICATION_GATEWAY_SUBNET_CIDR='10.1.2.0/24'
```
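With the variables defined, sign in to the Azure CLI and select the target subscription before running the remaining commands (a standard step, shown here for completeness):

```azurecli
az login
az account set --subscription $SUBSCRIPTION
```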
After you've finished updating the policy JSON (see [Update Certificate Policy](
```azurecli
KV_NAME='name-of-key-vault'
-CERT_NAME_IN_KV='name-of-certificate-in-key-vault'
+CERT_NAME_IN_KEY_VAULT='name-of-certificate-in-key-vault'
az keyvault certificate create \
    --vault-name $KV_NAME \
- --name $CERT_NAME_IN_KV \
+ --name $CERT_NAME_IN_KEY_VAULT \
    --policy "$KV_CERT_POLICY"
```
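Optionally, confirm that the certificate finished provisioning by reading it back from the vault. This is a sketch; it only inspects the certificate and has no side effects:

```azurecli
az keyvault certificate show \
    --vault-name $KV_NAME \
    --name $CERT_NAME_IN_KEY_VAULT \
    --query "{enabled:attributes.enabled, secretId:sid}" \
    --output table
```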
-## Configure the public domain name on Azure Spring Cloud
+## Configure the public domain name on Azure Spring Apps
-Traffic will enter the application deployed on Azure Spring Cloud using the public domain name. To configure your application to listen to this host name and do so over HTTPS, use the following commands to add a custom domain to your app:
+Traffic will enter the application deployed on Azure Spring Apps using the public domain name. To configure your application to listen to this host name and do so over HTTPS, use the following commands to add a custom domain to your app:
```azurecli KV_NAME='name-of-key-vault' KV_RG='resource-group-name-of-key-vault'
-CERT_NAME_IN_ASC='name-of-certificate-in-Azure-Spring-Cloud'
-CERT_NAME_IN_KV='name-of-certificate-with-intermediaries-in-key-vault'
+CERT_NAME_IN_AZURE_SPRING_APPS='name-of-certificate-in-Azure-Spring-Apps'
+CERT_NAME_IN_KEY_VAULT='name-of-certificate-with-intermediaries-in-key-vault'
DOMAIN_NAME=myapp.mydomain.com
-# provide permissions to ASC to read the certificate from Key Vault:
+# provide permissions to Azure Spring Apps to read the certificate from Key Vault:
VAULTURI=$(az keyvault show -n $KV_NAME -g $KV_RG --query properties.vaultUri -o tsv)
-# get the object id for the Azure Spring Cloud Domain-Management Service Principal:
-ASCDM_OID=$(az ad sp show --id 03b39d0f-4213-4864-a245-b1476ec03169 --query objectId --output tsv)
+# get the object id for the Azure Spring Apps Domain-Management Service Principal:
+ASADM_OID=$(az ad sp show --id 03b39d0f-4213-4864-a245-b1476ec03169 --query objectId --output tsv)
# allow this Service Principal to read and list certificates and secrets from Key Vault:
-az keyvault set-policy -g $KV_RG -n $KV_NAME --object-id $ASCDM_OID --certificate-permissions get list --secret-permissions get list
+az keyvault set-policy -g $KV_RG -n $KV_NAME --object-id $ASADM_OID --certificate-permissions get list --secret-permissions get list
# add custom domain name and configure TLS using the certificate:
-az spring-cloud certificate add \
+az spring certificate add \
--resource-group $RESOURCE_GROUP \ --service $SPRING_CLOUD_NAME \
- --name $CERT_NAME_IN_ASC \
- --vault-certificate-name $CERT_NAME_IN_KV \
+ --name $CERT_NAME_IN_AZURE_SPRING_APPS \
+ --vault-certificate-name $CERT_NAME_IN_KEY_VAULT \
--vault-uri $VAULTURI
-az spring-cloud app custom-domain bind \
+az spring app custom-domain bind \
--resource-group $RESOURCE_GROUP \ --service $SPRING_CLOUD_NAME \ --domain-name $DOMAIN_NAME \
- --certificate $CERT_NAME_IN_ASC \
+ --certificate $CERT_NAME_IN_AZURE_SPRING_APPS \
--app $APPNAME ``` ## Create network resources
-The Azure Application Gateway to be created will join the same virtual network as--or peered virtual network to--the Azure Spring Cloud service instance. First create a new subnet for the Application Gateway in the virtual network using `az network vnet subnet create`, and also create a Public IP address as the Frontend of the Application Gateway using `az network public-ip create`.
+The Azure Application Gateway to be created will join the same virtual network as the Azure Spring Apps service instance, or a virtual network peered to it. First, create a new subnet for the Application Gateway in the virtual network by using `az network vnet subnet create`, and then create a public IP address as the frontend of the Application Gateway by using `az network public-ip create`, as in the sketch that follows.
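A minimal sketch of those two commands, assuming the variables defined earlier and a hypothetical name for the public IP address:

```azurecli
# Subnet that the application gateway will be deployed into
az network vnet subnet create \
    --resource-group $RESOURCE_GROUP \
    --vnet-name $VIRTUAL_NETWORK_NAME \
    --name $APPLICATION_GATEWAY_SUBNET_NAME \
    --address-prefixes $APPLICATION_GATEWAY_SUBNET_CIDR

# Static public IP address used as the application gateway frontend
az network public-ip create \
    --resource-group $RESOURCE_GROUP \
    --location $LOCATION \
    --name app-gw-public-ip \
    --allocation-method Static \
    --sku Standard
```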
```azurecli APPLICATION_GATEWAY_PUBLIC_IP_NAME='app-gw-public-ip'
Create an application gateway using `az network application-gateway create` and
```azurecli APPGW_NAME='name-for-application-gateway'
-KEYVAULT_SECRET_ID_FOR_CERT=$(az keyvault certificate show --name $CERT_NAME_IN_KV --vault-name $KV_NAME --query sid --output tsv)
+KEYVAULT_SECRET_ID_FOR_CERT=$(az keyvault certificate show --name $CERT_NAME_IN_KEY_VAULT --vault-name $KV_NAME --query sid --output tsv)
az network application-gateway create \ --name $APPGW_NAME \
It can take up to 30 minutes for Azure to create the application gateway.
#### [Use a publicly signed certificate](#tab/public-cert-2)
-Update the HTTP settings to use the public domain name as the hostname instead of the domain suffixed with ".private.azuremicroservices.io" to send traffic to Azure Spring Cloud with.
+Update the HTTP settings so that traffic is sent to Azure Spring Apps using the public domain name as the hostname, instead of the domain suffixed with ".private.azuremicroservices.io".
```azurecli az network application-gateway http-settings update \
az network application-gateway http-settings update \
#### [Use a self-signed certificate](#tab/self-signed-cert-2)
-Update the HTTP settings to use the public domain name as the hostname instead of the domain suffixed with ".private.azuremicroservices.io" to send traffic to Azure Spring Cloud with. Given that a self-signed certificate is used, it will need to be allow-listed on the HTTP Settings of Application Gateway.
+Update the HTTP settings so that traffic is sent to Azure Spring Apps using the public domain name as the hostname, instead of the domain suffixed with ".private.azuremicroservices.io". Because a self-signed certificate is used, it must be allow-listed on the HTTP settings of Application Gateway.
To allowlist the certificate, first fetch the public portion of it from Key Vault by using the following command:

```azurecli
az keyvault certificate download \
    --vault-name $KV_NAME \
- --name $CERT_NAME_IN_KV \
+ --name $CERT_NAME_IN_KEY_VAULT \
    --file ./selfsignedcert.crt \
    --encoding DER
```
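From there, the downloaded certificate can be converted and registered as a trusted root certificate on the application gateway. The following is a sketch of those follow-up steps; the certificate name is a placeholder and OpenSSL is assumed to be available:

```azurecli
# Convert the DER file to a Base64-encoded .cer file
openssl x509 -inform DER -in ./selfsignedcert.crt -out ./selfsignedcert.cer

# Register it as a trusted root certificate on the application gateway
az network application-gateway root-cert create \
    --resource-group $RESOURCE_GROUP \
    --gateway-name $APPGW_NAME \
    --name MySelfSignedTrustedRootCert \
    --cert-file ./selfsignedcert.cer
```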
The output indicates the healthy status of backend pool, as shown in the followi
{ "servers": [ {
- "address": "my-azure-spring-cloud-hello-vnet.private.azuremicroservices.io",
+ "address": "my-azure-spring-apps-hello-vnet.private.azuremicroservices.io",
"health": "Healthy", "healthProbeLog": "Success. Received 200 status code", "ipConfiguration": null
You can now access the application using the public domain name.
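As a quick check, a request like the following should return the application's response over HTTPS. This assumes public DNS for the custom domain already points at the application gateway; add `--insecure` only when testing with a self-signed certificate:

```bash
curl --verbose https://myapp.mydomain.com
```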
## Next steps -- [Troubleshooting Azure Spring Cloud in VNET](./troubleshooting-vnet.md)-- [Customer Responsibilities for Running Azure Spring Cloud in VNET](./vnet-customer-responsibilities.md)
+- [Troubleshooting Azure Spring Apps in VNET](./troubleshooting-vnet.md)
+- [Customer Responsibilities for Running Azure Spring Apps in VNET](./vnet-customer-responsibilities.md)
spring-cloud Expose Apps Gateway Tls Termination https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/expose-apps-gateway-tls-termination.md
Title: "Expose applications to the internet using Application Gateway with TLS termination"-+ description: How to expose applications to internet using Application Gateway with TLS termination Last updated 11/09/2021-+ # Expose applications to the internet with TLS Termination at Application Gateway
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+ This article explains how to expose applications to the internet using Application Gateway.
-When an Azure Spring Cloud service instance is deployed in your virtual network (VNET), applications on the service instance are only accessible in the private network. To make the applications accessible on the Internet, you need to integrate with Azure Application Gateway. The incoming encrypted traffic can be decrypted at the application gateway or it can be passed to Azure Spring Cloud encrypted to achieve end-to-end TLS/SSL. For dev and test purposes, you can start with SSL termination at the application gateway, which is covered in this guide. For production, we recommend end-to-end TLS/SSL with private certificate, as described in [Expose applications with end-to-end TLS in a virtual network](expose-apps-gateway-end-to-end-tls.md).
+When an Azure Spring Apps service instance is deployed in your virtual network (VNET), applications on the service instance are only accessible in the private network. To make the applications accessible on the Internet, you need to integrate with Azure Application Gateway. The incoming encrypted traffic can be decrypted at the application gateway, or it can be passed to Azure Spring Apps encrypted to achieve end-to-end TLS/SSL. For dev and test purposes, you can start with SSL termination at the application gateway, which is covered in this guide. For production, we recommend end-to-end TLS/SSL with a private certificate, as described in [Expose applications with end-to-end TLS in a virtual network](expose-apps-gateway-end-to-end-tls.md).
## Prerequisites - [Azure CLI version 2.0.4 or later](/cli/azure/install-azure-cli).-- An Azure Spring Cloud service instance deployed in a virtual network with an application accessible over the private network using the default `.private.azuremicroservices.io` domain suffix. For more information, see [Deploy Azure Spring Cloud in a virtual network](how-to-deploy-in-azure-virtual-network.md)
+- An Azure Spring Apps service instance deployed in a virtual network with an application accessible over the private network using the default `.private.azuremicroservices.io` domain suffix. For more information, see [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md)
- A custom domain to be used to access the application. - A certificate, stored in Key Vault, which matches the custom domain to be used to establish the HTTPS listener. For more information, see [Tutorial: Import a certificate in Azure Key Vault](../key-vault/certificates/tutorial-import-certificate.md).
-## Configure Application Gateway for Azure Spring Cloud
+## Configure Application Gateway for Azure Spring Apps
-We recommend that the domain name, as seen by the browser, is the same as the host name which Application Gateway uses to direct traffic to the Azure Spring Cloud back end. This recommendation provides the best experience when using Application Gateway to expose applications hosted in Azure Spring Cloud and residing in a virtual network. If the domain exposed by Application Gateway is different from the domain accepted by Azure Spring Cloud, cookies and generated redirect URLs (for example) can be broken.
+We recommend that the domain name, as seen by the browser, be the same as the host name that Application Gateway uses to direct traffic to the Azure Spring Apps back end. This recommendation provides the best experience when using Application Gateway to expose applications hosted in Azure Spring Apps and residing in a virtual network. If the domain exposed by Application Gateway is different from the domain accepted by Azure Spring Apps, cookies and generated redirect URLs (for example) can be broken.
-To configure Application Gateway in front of Azure Spring Cloud in a private VNET, use the following steps.
+To configure Application Gateway in front of Azure Spring Apps in a private VNET, use the following steps.
-1. Follow the instructions in [Deploy Azure Spring Cloud in a virtual network](how-to-deploy-in-azure-virtual-network.md).
+1. Follow the instructions in [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md).
1. Follow the instructions in [Access your application in a private network](access-app-virtual-network.md). 1. Acquire a certificate for your domain of choice and store that in Key Vault. For more information, see [Tutorial: Import a certificate in Azure Key Vault](../key-vault/certificates/tutorial-import-certificate.md).
-1. Configure a custom domain and corresponding certificate from Key Vault on an app deployed onto Azure Spring Cloud. For more information, see [Tutorial: Map an existing custom domain to Azure Spring Cloud](tutorial-custom-domain.md).
+1. Configure a custom domain and corresponding certificate from Key Vault on an app deployed onto Azure Spring Apps. For more information, see [Tutorial: Map an existing custom domain to Azure Spring Apps](tutorial-custom-domain.md).
1. Deploy Application Gateway in a virtual network configured according to the following list:
- - Use Azure Spring Cloud in the backend pool, referenced by the domain suffixed with `private.azuremicroservices.io`.
+ - Use Azure Spring Apps in the backend pool, referenced by the domain suffixed with `private.azuremicroservices.io`.
- Include an HTTPS listener using the same certificate from Key Vault.
- - Configure the virtual network with HTTP settings that use the custom domain name configured on Azure Spring Cloud instead of the domain suffixed with `private.azuremicroservices.io`.
+ - Configure the virtual network with HTTP settings that use the custom domain name configured on Azure Spring Apps instead of the domain suffixed with `private.azuremicroservices.io`.
1. Configure your public DNS to point to the application gateway. ## Define variables
-Next, use the following commands to define variables for the resource group and virtual network you created as directed in [Deploy Azure Spring Cloud in a virtual network](how-to-deploy-in-azure-virtual-network.md). Replace the *\<...>* placeholders with real values based on your actual environment. When you define `SPRING_APP_PRIVATE_FQDN`, remove `https://` from the URI.
+Next, use the following commands to define variables for the resource group and virtual network you created as directed in [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md). Replace the *\<...>* placeholders with real values based on your actual environment. When you define `SPRING_APP_PRIVATE_FQDN`, remove `https://` from the URI.
```bash
SUBSCRIPTION='<subscription-id>'
RESOURCE_GROUP='<resource-group-name>'
LOCATION='eastus'
-SPRING_CLOUD_NAME='<name-of-azure-spring-cloud-instance>'
-APPNAME='<name-of-app-in-azure-spring-cloud>'
+SPRING_CLOUD_NAME='<name-of-Azure-Spring-Apps-instance>'
+APPNAME='<name-of-app-in-Azure-Spring-Apps>'
SPRING_APP_PRIVATE_FQDN='$APPNAME.private.azuremicroservices.io'
-VIRTUAL_NETWORK_NAME='azure-spring-cloud-vnet'
+VIRTUAL_NETWORK_NAME='azure-spring-apps-vnet'
APPLICATION_GATEWAY_SUBNET_NAME='app-gw-subnet'
APPLICATION_GATEWAY_SUBNET_CIDR='10.1.2.0/24'
```
az login
az account set --subscription $SUBSCRIPTION ```
-## Configure the public domain name on Azure Spring Cloud
+## Configure the public domain name on Azure Spring Apps
-Traffic will enter the application deployed on Azure Spring Cloud using the public domain name. To configure your application to listen to this host name over HTTP, use the following commands to add a custom domain to your app, replacing the *\<...>* placeholders with real values:
+Traffic will enter the application deployed on Azure Spring Apps using the public domain name. To configure your application to listen to this host name over HTTP, use the following commands to add a custom domain to your app, replacing the *\<...>* placeholders with real values:
```azurecli KV_NAME='<name-of-key-vault>'
KV_RG='<resource-group-name-of-key-vault>'
CERT_NAME_IN_KV='<name-of-certificate-with-intermediaries-in-key-vault>' DOMAIN_NAME=myapp.mydomain.com
-az spring-cloud app custom-domain bind \
+az spring app custom-domain bind \
--resource-group $RESOURCE_GROUP \ --service $SPRING_CLOUD_NAME \ --domain-name $DOMAIN_NAME \
az spring-cloud app custom-domain bind \
## Create network resources
-The application gateway to be created will join the same virtual network as the Azure Spring Cloud service instance. First, create a new subnet for the application gateway in the virtual network, then create a public IP address as the frontend of the application gateway, as shown in the following example.
+The application gateway to be created will join the same virtual network as the Azure Spring Apps service instance. First, create a new subnet for the application gateway in the virtual network, then create a public IP address as the frontend of the application gateway, as shown in the following example.
```azurecli APPLICATION_GATEWAY_PUBLIC_IP_NAME='app-gw-public-ip'
Create an application gateway using the following steps to enable SSL terminatio
:::image type="content" source="media/expose-apps-gateway-tls-termination/create-frontend-ip.png" alt-text="Screenshot of Azure portal showing Frontends tab of 'Create application gateway' page.":::
-1. Create a backend pool for the application gateway. Select **Target** as your FQDN of the application deployed in Azure Spring Cloud.
+1. Create a backend pool for the application gateway. Select **Target** as your FQDN of the application deployed in Azure Spring Apps.
:::image type="content" source="media/expose-apps-gateway-tls-termination/create-backend-pool.png" alt-text="Screenshot of Azure portal 'Add a backend pool' page.":::
It can take up to 30 minutes for Azure to create the application gateway.
### Update HTTP settings to use the domain name towards the backend
-Update the HTTP settings to use the public domain name as the hostname instead of the domain suffixed with `.private.azuremicroservices.io` to send traffic to Azure Spring Cloud with.
+Update the HTTP settings so that traffic is sent to Azure Spring Apps using the public domain name as the hostname, instead of the domain suffixed with `.private.azuremicroservices.io`.
```azurecli az network application-gateway http-settings update \
The output indicates the healthy status of backend pool, as shown in the followi
{ "servers": [ {
- "address": "my-azure-spring-cloud-hello-vnet.private.azuremicroservices.io",
+ "address": "my-azure-spring-apps-hello-vnet.private.azuremicroservices.io",
"health": "Healthy", "healthProbeLog": "Success. Received 200 status code", "ipConfiguration": null
az group delete --name $RESOURCE_GROUP
## Next steps - [Exposing applications with end-to-end TLS in a virtual network](./expose-apps-gateway-end-to-end-tls.md)-- [Troubleshooting Azure Spring Cloud in VNET](./troubleshooting-vnet.md)-- [Customer Responsibilities for Running Azure Spring Cloud in VNET](./vnet-customer-responsibilities.md)
+- [Troubleshooting Azure Spring Apps in VNET](./troubleshooting-vnet.md)
+- [Customer Responsibilities for Running Azure Spring Apps in VNET](./vnet-customer-responsibilities.md)
spring-cloud Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/faq.md
Title: Frequently asked questions about Azure Spring Cloud | Microsoft Docs
-description: This article answers frequently asked questions about Azure Spring Cloud.
+ Title: Frequently asked questions about Azure Spring Apps | Microsoft Docs
+description: This article answers frequently asked questions about Azure Spring Apps.
Last updated 09/08/2020 -+ zone_pivot_groups: programming-languages-spring-cloud
-# Azure Spring Cloud FAQ
+# Azure Spring Apps FAQ
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This article answers frequently asked questions about Azure Spring Cloud.
+This article answers frequently asked questions about Azure Spring Apps.
## General
-### Why Azure Spring Cloud?
+### Why Azure Spring Apps?
-Azure Spring Cloud provides a platform as a service (PaaS) for Spring Cloud developers. Azure Spring Cloud manages your application infrastructure so that you can focus on application code and business logic. Core features built into Azure Spring Cloud include Eureka, Config Server, Service Registry Server, VMware Tanzu® Build Service™, Blue-green deployment, and more. This service also enables developers to bind their applications with other Azure services, such as Azure Cosmos DB, Azure Database for MySQL, and Azure Cache for Redis.
+Azure Spring Apps provides a platform as a service (PaaS) for Spring developers. Azure Spring Apps manages your application infrastructure so that you can focus on application code and business logic. Core features built into Azure Spring Apps include Eureka, Config Server, Service Registry Server, VMware Tanzu® Build Service™, Blue-green deployment, and more. This service also enables developers to bind their applications with other Azure services, such as Azure Cosmos DB, Azure Database for MySQL, and Azure Cache for Redis.
-Azure Spring Cloud enhances the application diagnostics experience for developers and operators by integrating Azure Monitor, Application Insights, and Log Analytics.
+Azure Spring Apps enhances the application diagnostics experience for developers and operators by integrating Azure Monitor, Application Insights, and Log Analytics.
-### How secure is Azure Spring Cloud?
+### How secure is Azure Spring Apps?
-Security and privacy are among the top priorities for Azure and Azure Spring Cloud customers. Azure helps ensure that only customers have access to application data, logs, or configurations by securely encrypting all of this data.
+Security and privacy are among the top priorities for Azure and Azure Spring Apps customers. Azure helps ensure that only customers have access to application data, logs, or configurations by securely encrypting all of this data.
-* The service instances in Azure Spring Cloud are isolated from each other.
-* Azure Spring Cloud provides complete TLS/SSL and certificate management.
-* Critical security patches for OpenJDK and Spring Cloud runtimes are applied to Azure Spring Cloud as soon as possible.
+* The service instances in Azure Spring Apps are isolated from each other.
+* Azure Spring Apps provides complete TLS/SSL and certificate management.
+* Critical security patches for OpenJDK and Spring runtimes are applied to Azure Spring Apps as soon as possible.
-### How does Azure Spring Cloud host my applications?
+### How does Azure Spring Apps host my applications?
-Each service instance in Azure Spring Cloud is backed by a fully dedicated Kubernetes cluster with multiple worker nodes. Azure Spring Cloud manages the underlying Kubernetes cluster for you, including high availability, scalability, Kubernetes version upgrade, and so on.
+Each service instance in Azure Spring Apps is backed by a fully dedicated Kubernetes cluster with multiple worker nodes. Azure Spring Apps manages the underlying Kubernetes cluster for you, including high availability, scalability, Kubernetes version upgrade, and so on.
-Azure Spring Cloud intelligently schedules your applications on the underlying Kubernetes worker nodes. To provide high availability, Azure Spring Cloud distributes applications with 2 or more instances on different nodes.
+Azure Spring Apps intelligently schedules your applications on the underlying Kubernetes worker nodes. To provide high availability, Azure Spring Apps distributes applications with 2 or more instances on different nodes.
-### In which regions is Azure Spring Cloud available?
+### In which regions is Azure Spring Apps available?
East US, East US 2, Central US, South Central US, North Central US, West US, West US 2, West US 3, West Europe, North Europe, UK South, Southeast Asia, Australia East, Canada Central, UAE North, Central India, Korea Central, East Asia, Japan East, South Africa North, Brazil South, France Central, China East 2 (Mooncake), and China North 2 (Mooncake). [Learn More](https://azure.microsoft.com/global-infrastructure/services/?products=spring-cloud) ### Is any customer data stored outside of the specified region?
-Azure Spring Cloud is a regional service. All customer data in Azure Spring Cloud is stored to a single, specified region. To learn more about geo and region, see [Data residency in Azure](https://azure.microsoft.com/global-infrastructure/data-residency/).
+Azure Spring Apps is a regional service. All customer data in Azure Spring Apps is stored in a single, specified region. To learn more about geo and region, see [Data residency in Azure](https://azure.microsoft.com/global-infrastructure/data-residency/).
-### What are the known limitations of Azure Spring Cloud?
+### What are the known limitations of Azure Spring Apps?
-Azure Spring Cloud has the following known limitations:
+Azure Spring Apps has the following known limitations:
* `spring.application.name` will be overridden by the application name that's used to create each application. * `server.port` defaults to port 1025. If any other value is applied, it will be overridden. Respect this setting and don't specify a server port in your code.
-* The Azure portal, Azure Resource Manager templates, and Terraform do not support uploading application packages. You can upload application packages by deploying the application using the Azure CLI, Azure DevOps, Maven Plugin for Azure Spring Cloud, Azure Toolkit for IntelliJ, and the Visual Studio Code extension for Azure Spring Cloud.
+* The Azure portal, Azure Resource Manager templates, and Terraform do not support uploading application packages. You can upload application packages by deploying the application using the Azure CLI, Azure DevOps, Maven Plugin for Azure Spring Apps, Azure Toolkit for IntelliJ, and the Visual Studio Code extension for Azure Spring Apps.
### What pricing tiers are available? Which one should I use and what are the limits within each tier?
-* Azure Spring Cloud offers two pricing tiers: Basic and Standard. The Basic tier is targeted for Dev/Test and trying out Azure Spring Cloud. The Standard tier is optimized to run general purpose production traffic. See [Azure Spring Cloud pricing details](https://azure.microsoft.com/pricing/details/spring-cloud/) for limits and feature level comparison.
+* Azure Spring Apps offers two pricing tiers: Basic and Standard. The Basic tier is targeted for Dev/Test and trying out Azure Spring Apps. The Standard tier is optimized to run general purpose production traffic. See [Azure Spring Apps pricing details](https://azure.microsoft.com/pricing/details/spring-cloud/) for limits and feature level comparison.
### What's the difference between Service Binding and Service Connector?
We are not actively developing additional capabilities for Service Binding in fa
### How can I provide feedback and report issues?
-If you encounter any issues with Azure Spring Cloud, create an [Azure Support Request](../azure-portal/supportability/how-to-create-azure-support-request.md). To submit a feature request or provide feedback, go to [Azure Feedback](https://feedback.azure.com/d365community/forum/79b1327d-d925-ec11-b6e6-000d3a4f06a4).
+If you encounter any issues with Azure Spring Apps, create an [Azure Support Request](../azure-portal/supportability/how-to-create-azure-support-request.md). To submit a feature request or provide feedback, go to [Azure Feedback](https://feedback.azure.com/d365community/forum/79b1327d-d925-ec11-b6e6-000d3a4f06a4).
### How do I get VMware Spring Runtime support (Enterprise tier only)?
Enterprise tier has built-in VMware Spring Runtime Support, so you can open supp
## Development
-### I am a Spring Cloud developer but new to Azure. What is the quickest way for me to learn how to develop an application in Azure Spring Cloud?
+### I am a Spring developer but new to Azure. What is the quickest way for me to learn how to develop an application in Azure Spring Apps?
-For the quickest way to get started with Azure Spring Cloud, follow the instructions in [Quickstart: Launch an application in Azure Spring Cloud by using the Azure portal](./quickstart.md).
+For the quickest way to get started with Azure Spring Apps, follow the instructions in [Quickstart: Launch an application in Azure Spring Apps by using the Azure portal](./quickstart.md).
::: zone pivot="programming-language-java" ### Is Spring Boot 2.4.x supported?
We've identified an issue with Spring Boot 2.4 and are currently working with th
::: zone-end
-### Where can I view my Spring Cloud application logs and metrics?
+### Where can I view my Spring application logs and metrics?
Find metrics in the App Overview tab and the [Azure Monitor](../azure-monitor/essentials/data-platform-metrics.md#metrics-explorer) tab.
-Azure Spring Cloud supports exporting Spring Cloud application logs and metrics to Azure Storage, Event Hub, and [Log Analytics](../azure-monitor/logs/data-platform-logs.md). The table name in Log Analytics is *AppPlatformLogsforSpring*. To learn how to enable it, see [Diagnostic services](diagnostic-services.md).
+Azure Spring Apps supports exporting Spring application logs and metrics to Azure Storage, Event Hub, and [Log Analytics](../azure-monitor/logs/data-platform-logs.md). The table name in Log Analytics is *AppPlatformLogsforSpring*. To learn how to enable it, see [Diagnostic services](diagnostic-services.md).
-### Does Azure Spring Cloud support distributed tracing?
+### Does Azure Spring Apps support distributed tracing?
-Yes. For more information, see [Tutorial: Use Distributed Tracing with Azure Spring Cloud](./how-to-distributed-tracing.md).
+Yes. For more information, see [Tutorial: Use Distributed Tracing with Azure Spring Apps](./how-to-distributed-tracing.md).
::: zone pivot="programming-language-java" ### What resource types does Service Binding support?
Three services are currently supported:
Yes.
-### How many outbound public IP addresses does an Azure Spring Cloud instance have?
+### How many outbound public IP addresses does an Azure Spring Apps instance have?
The number of outbound public IP addresses may vary according to the tiers and other factors.
-| Azure Spring Cloud instance type | Default number of outbound public IP addresses |
+| Azure Spring Apps instance type | Default number of outbound public IP addresses |
| -- | - |
| Basic Tier instances | 1 |
| Standard Tier instances | 2 |
The number of outbound public IP addresses may vary according to the tiers and o
Yes, you can open a [support ticket](https://azure.microsoft.com/support/faq/) to request for more outbound public IP addresses.
-### When I delete/move an Azure Spring Cloud service instance, will its extension resources be deleted/moved as well?
+### When I delete/move an Azure Spring Apps service instance, will its extension resources be deleted/moved as well?
-It depends on the logic of resource providers that own the extension resources. The extension resources of a `Microsoft.AppPlatform` instance do not belong to the same namespace, so the behavior varies by resource provider. For example, the delete/move operation won't cascade to the **diagnostics settings** resources. If a new Azure Spring Cloud instance is provisioned with the same resource ID as the deleted one, or if the previous Azure Spring Cloud instance is moved back, the previous **diagnostics settings** resources continue extending it.
+It depends on the logic of resource providers that own the extension resources. The extension resources of a `Microsoft.AppPlatform` instance do not belong to the same namespace, so the behavior varies by resource provider. For example, the delete/move operation won't cascade to the **diagnostics settings** resources. If a new Azure Spring Apps instance is provisioned with the same resource ID as the deleted one, or if the previous Azure Spring Apps instance is moved back, the previous **diagnostics settings** resources continue extending it.
-You can delete Spring Cloud's diagnostic settings by using Azure CLI:
+You can delete the Azure Spring Apps diagnostic settings by using Azure CLI:
```azurecli
- az monitor diagnostic-settings delete --name $diagnosticSettingName --resource $azureSpringCloudResourceId
+ az monitor diagnostic-settings delete --name $DIAGNOSTIC_SETTINGS_NAME --resource $AZURE_SPRING_APPS_RESOURCE_ID
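 # Hypothetical lookup of the resource ID (assumes the Azure Spring Apps 'spring' CLI extension is installed):
 # AZURE_SPRING_APPS_RESOURCE_ID=$(az spring show --name <instance-name> --resource-group <resource-group> --query id --output tsv)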
``` ::: zone pivot="programming-language-java" ## Java runtime and OS versions
-### Which versions of Java runtime are supported in Azure Spring Cloud?
+### Which versions of Java runtime are supported in Azure Spring Apps?
-Azure Spring Cloud supports Java LTS versions with the most recent builds, currently Java 8, Java 11, and Java17 are supported. For more information, see [Install the JDK for Azure and Azure Stack](/azure/developer/java/fundamentals/java-jdk-install).
+Azure Spring Apps supports Java LTS versions with the most recent builds; currently, Java 8, Java 11, and Java 17 are supported. For more information, see [Install the JDK for Azure and Azure Stack](/azure/developer/java/fundamentals/java-jdk-install).
### Who built these Java runtimes?
The most recent Ubuntu LTS version is used, currently [Ubuntu 20.04 LTS (Focal F
### How often are OS security patches applied?
-Security patches applicable to Azure Spring Cloud are rolled out to production on a monthly basis.
-Critical security patches (CVE score >= 9) applicable to Azure Spring Cloud are rolled out as soon as possible.
+Security patches applicable to Azure Spring Apps are rolled out to production on a monthly basis.
+Critical security patches (CVE score >= 9) applicable to Azure Spring Apps are rolled out as soon as possible.
::: zone-end ## Deployment
-### Does Azure Spring Cloud support blue-green deployment?
+### Does Azure Spring Apps support blue-green deployment?
Yes. For more information, see [Set up a staging environment](./how-to-staging-environment.md). ### Can I access Kubernetes to manipulate my application containers?
-No. Azure Spring Cloud abstracts the developer from the underlying architecture, allowing you to concentrate on application code and business logic.
+No. Azure Spring Apps abstracts the developer from the underlying architecture, allowing you to concentrate on application code and business logic.
-### Does Azure Spring Cloud support building containers from source?
+### Does Azure Spring Apps support building containers from source?
-Yes. For more information, see [Launch your Spring Cloud application from source code](./quickstart.md).
+Yes. For more information, see [Quickstart: Deploy your first application to Azure Spring Apps](./quickstart.md).
-### Does Azure Spring Cloud support autoscaling in app instances?
+### Does Azure Spring Apps support autoscaling in app instances?
Yes. For more information, see [Set up autoscale for applications](./how-to-setup-autoscale.md).
-### How does Azure Spring Cloud monitor the health status of my application?
+### How does Azure Spring Apps monitor the health status of my application?
-Azure Spring Cloud continuously probes port 1025 for customer's applications. These probes determine whether the application container is ready to start accepting traffic and whether Azure Spring Cloud needs to restart the application container. Internally, Azure Spring Cloud uses Kubernetes liveness and readiness probes to achieve the status monitoring.
+Azure Spring Apps continuously probes port 1025 of customer applications. These probes determine whether the application container is ready to start accepting traffic and whether Azure Spring Apps needs to restart the application container. Internally, Azure Spring Apps uses Kubernetes liveness and readiness probes to achieve the status monitoring.
>[!NOTE]
-> Because of these probes, you currently can't launch applications in Azure Spring Cloud without exposing port 1025.
+> Because of these probes, you currently can't launch applications in Azure Spring Apps without exposing port 1025.
### Will my application be restarted, and if so, when? Yes. For more information, see [Monitor app lifecycle events using Azure Activity log and Azure Service Health](./monitor-app-lifecycle-events.md). ::: zone pivot="programming-language-java"
-### What are the best practices for migrating existing Spring Cloud applications to Azure Spring Cloud?
+### What are the best practices for migrating existing Spring applications to Azure Spring Apps?
-For more information, see [Migrate Spring Cloud applications to Azure Spring Cloud](/azure/developer/java/migration/migrate-spring-cloud-to-azure-spring-cloud).
+For more information, see [Migrate Spring applications to Azure Spring Apps](/azure/developer/java/migration/migrate-spring-cloud-to-azure-spring-cloud).
::: zone-end ::: zone pivot="programming-language-csharp"
We will enhance this part and avoid this error from users' applications in sho
## Next steps
-If you have further questions, see the [Azure Spring Cloud troubleshooting guide](./troubleshoot.md).
+If you have further questions, see the [Azure Spring Apps troubleshooting guide](./troubleshoot.md).
spring-cloud Github Actions Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/github-actions-key-vault.md
Title: Authenticate Azure Spring Cloud with Key Vault in GitHub Actions
-description: How to use Azure Key Vault with a CI/CD workflow for Azure Spring Cloud with GitHub Actions
+ Title: Authenticate Azure Spring Apps with Key Vault in GitHub Actions
+description: How to use Azure Key Vault with a CI/CD workflow for Azure Spring Apps with GitHub Actions
Last updated 09/08/2020-+
-# Authenticate Azure Spring Cloud with Azure Key Vault in GitHub Actions
+# Authenticate Azure Spring Apps with Azure Key Vault in GitHub Actions
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Java ✔️ C# **This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This article shows you how to use Key Vault with a CI/CD workflow for Azure Spring Cloud with GitHub Actions.
+This article shows you how to use Key Vault with a CI/CD workflow for Azure Spring Apps with GitHub Actions.
Key Vault is a secure place to store keys and secrets. Enterprise users need to store credentials for CI/CD environments in a scope that they control. The credential used to retrieve secrets from the key vault should be limited to the resource scope: it has access only to the key vault scope, not the entire Azure scope. It's like a key that can open only a single strongbox, not a master key that can open every door in a building. It's a way to get one key with another key, which is useful in a CI/CD workflow.
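For example, credentials limited to the key vault's scope, rather than the whole subscription, could be created along these lines. This is a sketch with hypothetical names; the JSON output is the format the `azure/login` GitHub Action expects:

```azurecli
# Look up the key vault's resource ID
KEY_VAULT_ID=$(az keyvault show --name my-key-vault --query id --output tsv)

# Create a service principal whose role assignment is scoped to the key vault only
az ad sp create-for-rbac \
    --name github-actions-key-vault \
    --role Contributor \
    --scopes $KEY_VAULT_ID \
    --sdk-auth
```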
jobs:
with: azcliversion: 2.0.75 inlineScript: |
- az extension add --name spring-cloud # Spring CLI commands from here
- az spring-cloud list
+ az extension add --name spring # Spring CLI commands from here
+ az spring list
``` ## Next steps
-* [Spring Cloud GitHub Actions](./how-to-github-actions.md)
+* [Use Azure Spring Apps CI/CD with GitHub Actions](./how-to-github-actions.md)
spring-cloud How To Access Data Plane Azure Ad Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-access-data-plane-azure-ad-rbac.md
Title: "Access Config Server and Service Registry"-+ description: How to access Config Server and Service Registry Endpoints with Azure Active Directory role-based access control. Last updated 08/25/2021-+ # Access Config Server and Service Registry
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+ **This article applies to:** ✔️ Basic/Standard tier ❌ Enterprise tier
-This article explains how to access the Spring Cloud Config Server and Spring Cloud Service Registry managed by Azure Spring Cloud using Azure Active Directory (Azure AD) role-based access control (RBAC).
+This article explains how to access the Spring Cloud Config Server and Spring Cloud Service Registry managed by Azure Spring Apps using Azure Active Directory (Azure AD) role-based access control (RBAC).
> [!NOTE]
-> Applications deployed and running inside the Azure Spring Cloud service are automatically wired up with certificate-based authentication and authorization when accessing the managed Spring Cloud Config Server and Service Registry. You don't need to follow this guidance for these applications. The related certificates are fully managed by the Azure Spring Cloud platform, and are automatically injected in your application when connected to Config Server and Service Registry.
+> Applications deployed and running inside the Azure Spring Apps service are automatically wired up with certificate-based authentication and authorization when accessing the managed Spring Cloud Config Server and Service Registry. You don't need to follow this guidance for these applications. The related certificates are fully managed by the Azure Spring Apps platform, and are automatically injected in your application when connected to Config Server and Service Registry.
## Assign role to Azure AD user/group, MSI, or service principal
Assign the role to the [user | group | service-principal | managed-identity] at
| Role name | Description |
|-|-|
-| Azure Spring Cloud Config Server Reader | Allow read access to Azure Spring Cloud Config Server. |
-| Azure Spring Cloud Config Server Contributor | Allow read, write, and delete access to Azure Spring Cloud Config Server. |
-| Azure Spring Cloud Service Registry Reader | Allow read access to Azure Spring Cloud Service Registry. |
-| Azure Spring Cloud Service Registry Contributor | Allow read, write, and delete access to Azure Spring Cloud Service Registry. |
+| Azure Spring Apps Config Server Reader | Allow read access to Azure Spring Apps Config Server. |
+| Azure Spring Apps Config Server Contributor | Allow read, write, and delete access to Azure Spring Apps Config Server. |
+| Azure Spring Apps Service Registry Reader | Allow read access to Azure Spring Apps Service Registry. |
+| Azure Spring Apps Service Registry Contributor | Allow read, write, and delete access to Azure Spring Apps Service Registry. |
For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
After the role is assigned, the assignee can access the Spring Cloud Config Serv
az account get-access-token ```
-1. Compose the endpoint. We support the default endpoints of the Spring Cloud Config Server and Spring Cloud Service Registry managed by Azure Spring Cloud.
+1. Compose the endpoint. We support the default endpoints of the Spring Cloud Config Server and Spring Cloud Service Registry managed by Azure Spring Apps.
* *'https://SERVICE_NAME.svc.azuremicroservices.io/eureka/{path}'*
* *'https://SERVICE_NAME.svc.azuremicroservices.io/config/{path}'*
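Putting the two steps together, a call to the managed Service Registry could look like the following sketch. The service name is a placeholder, and `/eureka/apps` is the standard Eureka path that lists registered applications:

```bash
# Acquire an Azure AD access token and call the managed Service Registry endpoint
TOKEN=$(az account get-access-token --query accessToken --output tsv)
curl -H "Authorization: Bearer $TOKEN" \
    "https://SERVICE_NAME.svc.azuremicroservices.io/eureka/apps"
```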
For Eureka endpoints, see [Eureka-REST-operations](https://github.com/Netflix/eu
For config server endpoints and detailed path information, see [ResourceController.java](https://github.com/spring-cloud/spring-cloud-config/blob/main/spring-cloud-config-server/src/main/java/org/springframework/cloud/config/server/resource/ResourceController.java) and [EncryptionController.java](https://github.com/spring-cloud/spring-cloud-config/blob/main/spring-cloud-config-server/src/main/java/org/springframework/cloud/config/server/encryption/EncryptionController.java).
-## Register Spring Boot apps to Spring Cloud Config Server and Service Registry managed by Azure Spring Cloud
+## Register Spring Boot apps to Spring Cloud Config Server and Service Registry managed by Azure Spring Apps
-After the role is assigned, you can register Spring Boot apps to Spring Cloud Config Server and Service Registry managed by Azure Spring Cloud with Azure AD token authentication. Both Config Server and Service Registry support [custom REST template](https://cloud.spring.io/spring-cloud-config/reference/html/#custom-rest-template) to inject the bearer token for authentication.
+After the role is assigned, you can register Spring Boot apps to Spring Cloud Config Server and Service Registry managed by Azure Spring Apps with Azure AD token authentication. Both Config Server and Service Registry support [custom REST template](https://cloud.spring.io/spring-cloud-config/reference/html/#custom-rest-template) to inject the bearer token for authentication.
-For more information, see the samples [Access Azure Spring Cloud managed Config Server](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples/tree/master/custom-config-server-client) and [Access Azure Spring Cloud managed Spring Cloud Service Registry](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples/tree/master/custom-eureka-client). The following sections explain some important details in these samples.
+For more information, see the samples [Access Azure Spring Apps managed Config Server](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples/tree/master/custom-config-server-client) and [Access Azure Spring Apps managed Service Registry](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples/tree/master/custom-eureka-client). The following sections explain some important details in these samples.
**In *AccessTokenManager.java*:**
spring-cloud How To Appdynamics Java Agent Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-appdynamics-java-agent-monitor.md
Title: "How to monitor Spring Boot apps with the AppDynamics Java Agent (Preview)"-
-description: How to use the AppDynamics Java agent to monitor Spring Boot applications in Azure Spring Cloud.
+
+description: How to use the AppDynamics Java agent to monitor Spring Boot applications in Azure Spring Apps.
Last updated 10/19/2021-+ ms.devlang: azurecli # How to monitor Spring Boot apps with the AppDynamics Java Agent (Preview)
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+ **This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This article explains how to use the AppDynamics Java Agent to monitor Spring Boot applications in Azure Spring Cloud.
+This article explains how to use the AppDynamics Java Agent to monitor Spring Boot applications in Azure Spring Apps.
With the AppDynamics Java Agent, you can:
The following video introduces the AppDynamics Java in-process agent.
For the whole workflow, you need to:
-* Activate the AppDynamics Java in-process agent in Azure Spring Cloud to generate application metrics data.
+* Activate the AppDynamics Java in-process agent in Azure Spring Apps to generate application metrics data.
* Connect the AppDynamics Agent to the AppDynamics Controller to collect and visualize the data in the controller.
-![Diagram showing a Spring Boot application in 'Azure Spring Cloud' box with a two-directional arrow connecting it to an 'AppDynamics Agent' box, which also has an arrow pointing to an 'AppDynamics Controller' box](media/how-to-appdynamics-java-agent-monitor/appdynamics-activation.jpg)
+![Diagram showing a Spring Boot application in 'Azure Spring Apps' box with a two-directional arrow connecting it to an 'AppDynamics Agent' box, which also has an arrow pointing to an 'AppDynamics Controller' box](media/how-to-appdynamics-java-agent-monitor/appdynamics-activation.jpg)
### Activate an application with the AppDynamics Agent using the Azure CLI To activate an application through the Azure CLI, use the following steps. 1. Create a resource group.
-1. Create an instance of Azure Spring Cloud.
+1. Create an instance of Azure Spring Apps.
1. Create an application using the following command. Replace the placeholders *\<...>* with your own values. ```azurecli
- az spring-cloud app create \
+ az spring app create \
--resource-group "<your-resource-group-name>" \
- --service "<your-Azure-Spring-Cloud-instance-name>" \
+ --service "<your-Azure-Spring-Apps-instance-name>" \
--name "<your-app-name>" \ --is-public true ```
To activate an application through the Azure CLI, use the following steps.
1. Create a deployment with the AppDynamics Agent using environment variables. ```azurecli
- az spring-cloud app deploy \
+ az spring app deploy \
--resource-group "<your-resource-group-name>" \
- --service "<your-Azure-Spring-Cloud-instance-name>" \
+ --service "<your-Azure-Spring-Apps-instance-name>" \
--name "<your-app-name>" \ --jar-path app.jar \ --jvm-options="-javaagent:/opt/agents/appdynamics/java/javaagent.jar" \
To activate an application through the Azure CLI, use the following steps.
APPDYNAMICS_CONTROLLER_PORT=443 ```
-Azure Spring Cloud pre-installs the AppDynamics Java agent to the path */opt/agents/appdynamics/java/javaagent.jar*. You can activate the agent from your applications' JVM options, then configure the agent using environment variables. You can find values for these variables at [Monitor Azure Spring Cloud with Java Agent](https://docs.appdynamics.com/21.11/en/application-monitoring/install-app-server-agents/java-agent/monitor-azure-spring-cloud-with-java-agent). For more information about how these variables help to view and organize reports in the AppDynamics UI, see [Tiers and Nodes](https://docs.appdynamics.com/21.9/en/application-monitoring/tiers-and-nodes).
+Azure Spring Apps pre-installs the AppDynamics Java agent to the path */opt/agents/appdynamics/java/javaagent.jar*. You can activate the agent from your applications' JVM options, then configure the agent using environment variables. You can find values for these variables at [Monitor Azure Spring Apps with Java Agent](https://docs.appdynamics.com/21.11/en/application-monitoring/install-app-server-agents/java-agent/monitor-azure-spring-cloud-with-java-agent). For more information about how these variables help to view and organize reports in the AppDynamics UI, see [Tiers and Nodes](https://docs.appdynamics.com/21.9/en/application-monitoring/tiers-and-nodes).
### Activate an application with the AppDynamics Agent using the Azure portal To activate an application through the Azure portal, use the following steps.
-1. Navigate to your Azure Spring Cloud instance in the Azure portal.
+1. Navigate to your Azure Spring Apps instance in the Azure portal.
1. Select **Apps** from the **Settings** section of the left navigation pane.
You can also run a provisioning automation pipeline using Terraform or an Azure
### Automate provisioning using Terraform
-To configure the environment variables in a Terraform template, add the following code to the template, replacing the *\<...>* placeholders with your own values. For more information, see [Manages an Active Azure Spring Cloud Deployment](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/spring_cloud_active_deployment).
+To configure the environment variables in a Terraform template, add the following code to the template, replacing the *\<...>* placeholders with your own values. For more information, see [Manages an Active Azure Spring Apps Deployment](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/spring_cloud_active_deployment).
```terraform resource "azurerm_spring_cloud_java_deployment" "example" {
You can define more metrics for the JVM, as shown in this screenshot of the **Me
## View AppDynamics Agent logs
-By default, Azure Spring Cloud will print the *info* level logs of the AppDynamics Agent to `STDOUT`. The logs will be mixed with the application logs. You can find the explicit agent version from the application logs.
+By default, Azure Spring Apps will print the *info* level logs of the AppDynamics Agent to `STDOUT`. The logs will be mixed with the application logs. You can find the explicit agent version from the application logs.
You can also get the logs of the AppDynamics Agent from the following locations:
-* Azure Spring Cloud logs
-* Azure Spring Cloud Application Insights
-* Azure Spring Cloud LogStream
+* Azure Spring Apps logs
+* Azure Spring Apps Application Insights
+* Azure Spring Apps LogStream
## Learn about AppDynamics Agent upgrade
The AppDynamics Agent will be upgraded regularly with JDK (quarterly). Agent upg
## Configure VNet injection instance outbound traffic
-For VNet injection instances of Azure Spring Cloud, make sure the outbound traffic is configured correctly for AppDynamics Agent. For details, see [SaaS Domains and IP Ranges](https://docs.appdynamics.com/display/PA).
+For VNet injection instances of Azure Spring Apps, make sure the outbound traffic is configured correctly for AppDynamics Agent. For details, see [SaaS Domains and IP Ranges](https://docs.appdynamics.com/display/PA).
## Understand the limitations
-To understand the limitations of the AppDynamics Agent, see [Monitor Azure Spring Cloud with Java Agent](https://docs.appdynamics.com/21.11/en/application-monitoring/install-app-server-agents/java-agent/monitor-azure-spring-cloud-with-java-agent).
+To understand the limitations of the AppDynamics Agent, see [Monitor Azure Spring Apps with Java Agent](https://docs.appdynamics.com/21.11/en/application-monitoring/install-app-server-agents/java-agent/monitor-azure-spring-cloud-with-java-agent).
## Next steps
-* [Use Application Insights Java In-Process Agent in Azure Spring Cloud](./how-to-application-insights.md)
+* [Use Application Insights Java In-Process Agent in Azure Spring Apps](./how-to-application-insights.md)
spring-cloud How To Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-application-insights.md
Title: How to use Application Insights Java In-Process Agent in Azure Spring Cloud
-description: How to monitor apps using Application Insights Java In-Process Agent in Azure Spring Cloud.
+ Title: How to use Application Insights Java In-Process Agent in Azure Spring Apps
+description: How to monitor apps using Application Insights Java In-Process Agent in Azure Spring Apps.
Last updated 02/09/2022-+ zone_pivot_groups: spring-cloud-tier-selection
-# Use Application Insights Java In-Process Agent in Azure Spring Cloud
+# Use Application Insights Java In-Process Agent in Azure Spring Apps
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This article explains how to monitor applications by using the Application Insights Java agent in Azure Spring Cloud.
+This article explains how to monitor applications by using the Application Insights Java agent in Azure Spring Apps.
With this feature you can:
When the **Application Insights** feature is enabled, you can:
Enable the Java In-Process Agent by using the following procedure.

1. Go to the **service | Overview** page of your service instance, then select **Application Insights** in the **Monitoring** section.
-1. Select **Enable Application Insights** to enable Application Insights in Azure Spring Cloud.
+1. Select **Enable Application Insights** to enable Application Insights in Azure Spring Apps.
1. Select an existing instance of Application Insights or create a new one.
1. When **Application Insights** is enabled, you can configure one optional sampling rate (default 10.0%).
- :::image type="content" source="media/spring-cloud-application-insights/insights-process-agent.png" alt-text="Screenshot of Azure portal Azure Spring Cloud instance with Application Insights page showing and 'Enable Application Insights' checkbox highlighted." lightbox="media/spring-cloud-application-insights/insights-process-agent.png":::
+ :::image type="content" source="media/spring-cloud-application-insights/insights-process-agent.png" alt-text="Screenshot of Azure portal Azure Spring Apps instance with Application Insights page showing and 'Enable Application Insights' checkbox highlighted." lightbox="media/spring-cloud-application-insights/insights-process-agent.png":::
1. Select **Save** to save the change.

> [!Note]
-> Do not use the same Application Insights instance in different Azure Spring Cloud instances, or you'll see mixed data.
+> Do not use the same Application Insights instance in different Azure Spring Apps instances, or you'll see mixed data.
::: zone-end
You can use the Portal to check or update the current settings in Application In
1. Select **Application Insights**.
1. Enable Application Insights by selecting **Edit binding**, or the **Unbound** hyperlink.
- :::image type="content" source="media/enterprise/how-to-application-insights/application-insights-binding-enable.png" alt-text="Screenshot of Azure portal Azure Spring Cloud instance with Application Insights page showing and drop-down menu visible with 'Edit binding' option.":::
+ :::image type="content" source="media/enterprise/how-to-application-insights/application-insights-binding-enable.png" alt-text="Screenshot of Azure portal Azure Spring Apps instance with Application Insights page showing and drop-down menu visible with 'Edit binding' option.":::
1. Edit **Application Insights** or **Sampling rate**, then select **Save**.
You can use the Portal to check or update the current settings in Application In
1. Select **Application Insights**.
1. Select **Unbind binding** to disable Application Insights.
- :::image type="content" source="media/enterprise/how-to-application-insights/application-insights-unbind-binding.png" alt-text="Screenshot of Azure portal Azure Spring Cloud instance with Application Insights page showing and drop-down menu visible with 'Unbind binding' option.":::
+ :::image type="content" source="media/enterprise/how-to-application-insights/application-insights-unbind-binding.png" alt-text="Screenshot of Azure portal Azure Spring Apps instance with Application Insights page showing and drop-down menu visible with 'Unbind binding' option.":::
### Change Application Insights Settings

Select the name under the *Application Insights* column to open the Application Insights section.

### Edit Application Insights buildpack bindings in Build Service
Application Insights settings are found in the *ApplicationInsights* item listed
## Manage Application Insights using Azure CLI
-You can manage Application Insights using Azure CLI commands. In the following commands, be sure to replace the *\<placeholder>* text with the values described. The *\<service-instance-name>* placeholder refers to the name of your Azure Spring Cloud instance.
+You can manage Application Insights using Azure CLI commands. In the following commands, be sure to replace the *\<placeholder>* text with the values described. The *\<service-instance-name>* placeholder refers to the name of your Azure Spring Apps instance.
### Enable Application Insights
-To configure Application Insights when creating an Azure Spring Cloud instance, use the following command. For the `app-insights` argument, you can specify an Application Insights name or resource ID.
+To configure Application Insights when creating an Azure Spring Apps instance, use the following command. For the `app-insights` argument, you can specify an Application Insights name or resource ID.
::: zone pivot="sc-standard-tier" ```azurecli
-az spring-cloud create \
+az spring create \
--resource-group <resource-group-name> \ --name "service-instance-name" \ --app-insights <name-or-resource-ID> \
az spring-cloud create \
::: zone pivot="sc-enterprise-tier" ```azurecli
-az spring-cloud create \
+az spring create \
--resource-group <resource-group-name> \ --name "service-instance-name" \ --app-insights <name-or-resource-ID> \
You can also use an Application Insights connection string (preferred) or instru
::: zone pivot="sc-standard-tier" ```azurecli
-az spring-cloud create \
+az spring create \
    --resource-group <resource-group-name> \
    --name <service-instance-name> \
    --app-insights-key <connection-string-or-instrumentation-key> \
az spring-cloud create \
::: zone pivot="sc-enterprise-tier" ```azurecli
-az spring-cloud create \
+az spring create \
    --resource-group <resource-group-name> \
    --name <service-instance-name> \
    --app-insights-key <connection-string-or-instrumentation-key> \
az spring-cloud create \
### Disable Application Insights
-To disable Application Insights when creating an Azure Spring Cloud instance, use the following command:
+To disable Application Insights when creating an Azure Spring Apps instance, use the following command:
::: zone pivot="sc-standard-tier" ```azurecli
-az spring-cloud create \
+az spring create \
    --resource-group <resource-group-name> \
    --name <service-instance-name> \
    --disable-app-insights
az spring-cloud create \
::: zone pivot="sc-enterprise-tier" ```azurecli
-az spring-cloud create \
+az spring create \
    --resource-group <resource-group-name> \
    --name <service-instance-name> \
    --disable-app-insights
az spring-cloud create \
### Check Application Insights settings
-To check the Application Insights settings of an existing Azure Spring Cloud instance, use the following command:
+To check the Application Insights settings of an existing Azure Spring Apps instance, use the following command:
```azurecli
-az spring-cloud app-insights show \
+az spring app-insights show \
    --resource-group <resource-group-name> \
    --name <service-instance-name>
```
az spring-cloud app-insights show \
To update Application Insights to use a connection string (preferred) or instrumentation key, use the following command:

```azurecli
-az spring-cloud app-insights update \
+az spring app-insights update \
    --resource-group <resource-group-name> \
    --name <service-instance-name> \
    --app-insights-key <connection-string-or-instrumentation-key> \
az spring-cloud app-insights update \
To update Application Insights to use the resource name or ID, use the following command:

```azurecli
-az spring-cloud app-insights update \
+az spring app-insights update \
    --resource-group <resource-group-name> \
    --name <service-instance-name> \
    --app-insights <name-or-resource-ID> \
az spring-cloud app-insights update \
### Disable Application Insights with the update command
-To disable Application Insights on an existing Azure Spring Cloud instance, use the following command:
+To disable Application Insights on an existing Azure Spring Apps instance, use the following command:
```azurecli
-az spring-cloud app-insights update \
+az spring app-insights update \
    --resource-group <resource-group-name> \
    --name <service-instance-name> \
    --disable
Azure Enterprise tier uses [Buildpack Bindings](./how-to-enterprise-build-servic
To create an Application Insights buildpack binding, use the following command:

```azurecli
-az spring-cloud build-service builder buildpack-binding create \
+az spring build-service builder buildpack-binding create \
    --resource-group <your-resource-group-name> \
    --service <your-service-instance-name> \
    --name <your-binding-name> \
az spring-cloud build-service builder buildpack-binding create \
To list all buildpack bindings, and find Application Insights bindings of the type `ApplicationInsights`, use the following command:

```azurecli
-az spring-cloud build-service builder buildpack-binding list \
+az spring build-service builder buildpack-binding list \
    --resource-group <your-resource-group-name> \
    --service <your-service-resource-name> \
    --builder-name <your-builder-name>
az spring-cloud build-service builder buildpack-binding list \
To replace an Application Insights buildpack binding, use the following command:

```azurecli
-az spring-cloud build-service builder buildpack-binding set \
+az spring build-service builder buildpack-binding set \
    --resource-group <your-resource-group-name> \
    --service <your-service-instance-name> \
    --name <your-binding-name> \
az spring-cloud build-service builder buildpack-binding set \
To get an Application Insights buildpack binding, use the following command:

```azurecli
-az spring-cloud build-service builder buildpack-binding show \
+az spring build-service builder buildpack-binding show \
    --resource-group <your-resource-group-name> \
    --service <your-service-instance-name> \
    --name <your-binding-name> \
az spring-cloud build-service builder buildpack-binding show \
To delete an Application Insights buildpack binding, use the following command:

```azurecli
-az spring-cloud build-service builder buildpack-binding delete \
+az spring build-service builder buildpack-binding delete \
    --resource-group <your-resource-group-name> \
    --service <your-service-instance-name> \
    --name <your-binding-name> \
The Java agent will be updated/upgraded when the buildpack is updated.
## Java agent configuration hot-loading
-Azure Spring Cloud has enabled a hot-loading mechanism to adjust the settings of agent configuration without restart of applications.
+Azure Spring Apps provides a hot-loading mechanism that adjusts the agent configuration settings without requiring an application restart.
> [!Note]
> The hot-loading mechanism has a delay in minutes.
Azure Spring Cloud has enabled a hot-loading mechanism to adjust the settings of
::: zone-end
-## Concept matching between Azure Spring Cloud and Application Insights
+## Concept matching between Azure Spring Apps and Application Insights
-| Azure Spring Cloud | Application Insights |
+| Azure Spring Apps | Application Insights |
| --- | --- |
| `App` | * __Application Map__/Role<br />* __Live Metrics__/Role<br />* __Failures__/Roles/Cloud Role<br />* __Performance__/Roles/Cloud Role |
| `App Instance` | * __Application Map__/Role Instance<br />* __Live Metrics__/Service Name<br />* __Failures__/Roles/Cloud Instance<br />* __Performance__/Roles/Cloud Instance |
-The name `App Instance` from Azure Spring Cloud will be changed or generated in the following scenarios:
+The name `App Instance` from Azure Spring Apps will be changed or generated in the following scenarios:
* You create a new application.
* You deploy a JAR file or source code to an existing application.
The name `App Instance` from Azure Spring Cloud will be changed or generated in
* You restart the application.
* You stop the deployment of an application, and then restart it.
-When data is stored in Application Insights, it contains the history of Azure Spring Cloud app instances created or deployed since the Java agent was enabled. For example, in the Application Insights portal, you can see application data created yesterday, but then deleted within a specific time range, like the last 24 hours. The following scenarios show how this works:
+When data is stored in Application Insights, it contains the history of Azure Spring Apps app instances created or deployed since the Java agent was enabled. For example, in the Application Insights portal, you can see application data created yesterday, but then deleted within a specific time range, like the last 24 hours. The following scenarios show how this works:
-* You created an application around 8:00 AM today from Azure Spring Cloud with the Java agent enabled, and then you deployed a JAR file to this application around 8:10 AM today. After some testing, you change the code and deploy a new JAR file to this application at 8:30 AM today. Then, you take a break, and when you come back around 11:00 AM, you check some data from Application Insights. You'll see:
+* You created an application around 8:00 AM today from Azure Spring Apps with the Java agent enabled, and then you deployed a JAR file to this application around 8:10 AM today. After some testing, you change the code and deploy a new JAR file to this application at 8:30 AM today. Then, you take a break, and when you come back around 11:00 AM, you check some data from Application Insights. You'll see:
  * Three instances in Application Map with time ranges in the last 24 hours, and Failures, Performance, and Metrics.
  * One instance in Application Map with a time range in the last hour, and Failures, Performance, and Metrics.
  * One instance in Live Metrics.
-* You created an application around 8:00 AM today from Azure Spring Cloud with the Java agent enabled, and then you deployed a JAR file to this application around 8:10 AM today. Around 8:30 AM today, you try a blue/green deployment with another JAR file. Currently, you have two deployments for this application. After a break around 11:00 AM today, you want to check some data from Application Insights. You'll see:
+* You created an application around 8:00 AM today from Azure Spring Apps with the Java agent enabled, and then you deployed a JAR file to this application around 8:10 AM today. Around 8:30 AM today, you try a blue/green deployment with another JAR file. Currently, you have two deployments for this application. After a break around 11:00 AM today, you want to check some data from Application Insights. You'll see:
  * Three instances in Application Map with time ranges in the last 24 hours, and Failures, Performance, and Metrics.
  * Two instances in Application Map with time ranges in the last hour, and Failures, Performance, and Metrics.
  * Two instances in Live Metrics.

## Next steps
-* [Use distributed tracing with Azure Spring Cloud](./how-to-distributed-tracing.md)
+* [Use distributed tracing with Azure Spring Apps](./how-to-distributed-tracing.md)
* [Analyze logs and metrics](diagnostic-services.md)
* [Stream logs in real time](./how-to-log-streaming.md)
* [Application Map](../azure-monitor/app/app-map.md)
spring-cloud How To Bind Cosmos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-bind-cosmos.md
Title: Bind an Azure Cosmos DB to your application in Azure Spring Cloud
-description: Learn how to bind Azure Cosmos DB to your application in Azure Spring Cloud
+ Title: Bind an Azure Cosmos DB to your application in Azure Spring Apps
+description: Learn how to bind Azure Cosmos DB to your application in Azure Spring Apps
Last updated 10/06/2019 -+
-# Bind an Azure Cosmos DB database to your application in Azure Spring Cloud
+# Bind an Azure Cosmos DB database to your application in Azure Spring Apps
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Java ❌ C#

**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-Instead of manually configuring your Spring Boot applications, you can automatically bind select Azure services to your applications by using Azure Spring Cloud. This article demonstrates how to bind your application to an Azure Cosmos DB database.
+Instead of manually configuring your Spring Boot applications, you can automatically bind select Azure services to your applications by using Azure Spring Apps. This article demonstrates how to bind your application to an Azure Cosmos DB database.
Prerequisites:
-* A deployed Azure Spring Cloud instance. Follow our [quickstart on deploying via the Azure CLI](./quickstart.md) to get started.
+* A deployed Azure Spring Apps instance. Follow our [quickstart on deploying via the Azure CLI](./quickstart.md) to get started.
* An Azure Cosmos DB account with a minimum permission level of Contributor.

## Prepare your Java project
Prerequisites:
</dependency>
```
-1. Update the current app by running `az spring-cloud app deploy`, or create a new deployment for this change by running `az spring-cloud app deployment create`.
+1. Update the current app by running `az spring app deploy`, or create a new deployment for this change by running `az spring app deployment create`.
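For reference, with the renamed CLI those two options might look like the following sketch; the resource group, service instance, app, deployment, and JAR path values are all placeholders.

```azurecli
# Redeploy the built JAR to the app's existing deployment
az spring app deploy \
    --resource-group <resource-group-name> \
    --service <Azure-Spring-Apps-instance-name> \
    --name <app-name> \
    --jar-path target/<your-app>.jar

# Or create a separate deployment that carries the change
az spring app deployment create \
    --resource-group <resource-group-name> \
    --service <Azure-Spring-Apps-instance-name> \
    --app <app-name> \
    --name <new-deployment-name> \
    --jar-path target/<your-app>.jar
```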
## Bind your app to the Azure Cosmos DB
Azure Cosmos DB has five different API types that support binding. The following
1. Record the name of your database. For this procedure, the database name is **testdb**.
-1. Go to your Azure Spring Cloud service page in the Azure portal. Go to **Application Dashboard** and select the application to bind to Azure Cosmos DB. This application is the same one you updated or deployed in the previous step.
+1. Go to your Azure Spring Apps service page in the Azure portal. Go to **Application Dashboard** and select the application to bind to Azure Cosmos DB. This application is the same one you updated or deployed in the previous step.
1. Select **Service binding**, and select **Create service binding**. To fill out the form, select:

   * The **Binding type** value **Azure Cosmos DB**.
Azure Cosmos DB has five different API types that support binding. The following
```

#### [Terraform](#tab/Terraform)
-The following Terraform script shows how to set up an Azure Spring Cloud app with Azure Cosmos DB MongoDB API.
+The following Terraform script shows how to set up an Azure Spring Apps app with Azure Cosmos DB MongoDB API.
```terraform
provider "azurerm" {
resource "azurerm_spring_cloud_active_deployment" "example" {
## Next steps
-In this article, you learned how to bind your application in Azure Spring Cloud to an Azure Cosmos DB database. To learn more about binding services to your application, see [Bind to an Azure Cache for Redis cache](./how-to-bind-redis.md).
+In this article, you learned how to bind your application in Azure Spring Apps to an Azure Cosmos DB database. To learn more about binding services to your application, see [Bind to an Azure Cache for Redis cache](./how-to-bind-redis.md).
spring-cloud How To Bind Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-bind-mysql.md
Title: How to bind an Azure Database for MySQL instance to your application in Azure Spring Cloud
-description: Learn how to bind an Azure Database for MySQL instance to your application in Azure Spring Cloud
+ Title: How to bind an Azure Database for MySQL instance to your application in Azure Spring Apps
+description: Learn how to bind an Azure Database for MySQL instance to your application in Azure Spring Apps
Last updated 11/04/2019 -+
-# Bind an Azure Database for MySQL instance to your application in Azure Spring Cloud
+# Bind an Azure Database for MySQL instance to your application in Azure Spring Apps
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Java ❌ C#

**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-With Azure Spring Cloud, you can bind select Azure services to your applications automatically, instead of having to configure your Spring Boot application manually. This article shows you how to bind your application to your Azure Database for MySQL instance.
+With Azure Spring Apps, you can bind select Azure services to your applications automatically, instead of having to configure your Spring Boot application manually. This article shows you how to bind your application to your Azure Database for MySQL instance.
## Prerequisites
-* A deployed Azure Spring Cloud instance
+* A deployed Azure Spring Apps instance
* An Azure Database for MySQL account
* Azure CLI
-If you don't have a deployed Azure Spring Cloud instance, follow the instructions in [Quickstart: Launch an application in Azure Spring Cloud by using the Azure portal](./quickstart.md) to deploy your first Spring Cloud app.
+If you don't have a deployed Azure Spring Apps instance, follow the instructions in [Quickstart: Launch an application in Azure Spring Apps by using the Azure portal](./quickstart.md) to deploy your first Spring app.
## Prepare your Java project
If you don't have a deployed Azure Spring Cloud instance, follow the instruction
1. In the *application.properties* file, remove any `spring.datasource.*` properties.
-1. Update the current app by running `az spring-cloud app deploy`, or create a new deployment for this change by running `az spring-cloud app deployment create`.
+1. Update the current app by running `az spring app deploy`, or create a new deployment for this change by running `az spring app deployment create`.
## Bind your app to the Azure Database for MySQL instance
If you don't have a deployed Azure Spring Cloud instance, follow the instruction
1. Connect to the server, create a database named **testdb** from a MySQL client, and then create a new non-admin account.
-1. In the Azure portal, on your **Azure Spring Cloud** service page, look for the **Application Dashboard**, and then select the application to bind to your Azure Database for MySQL instance. This is the same application that you updated or deployed in the previous step.
+1. In the Azure portal, on your **Azure Spring Apps** service page, look for the **Application Dashboard**, and then select the application to bind to your Azure Database for MySQL instance. This is the same application that you updated or deployed in the previous step.
1. Select **Service binding**, and then select the **Create service binding** button.
If you don't have a deployed Azure Spring Cloud instance, follow the instruction
#### [Terraform](#tab/Terraform)
-The following Terraform script shows how to set up an Azure Spring Cloud app with Azure Database for MySQL.
+The following Terraform script shows how to set up an Azure Spring Apps app with Azure Database for MySQL.
```terraform
provider "azurerm" {
resource "azurerm_spring_cloud_active_deployment" "example" {
## Next steps
-In this article, you learned how to bind an application in Azure Spring Cloud to an Azure Database for MySQL instance. To learn more about binding services to an application, see [Bind an Azure Cosmos DB database to an application in Azure Spring Cloud](./how-to-bind-cosmos.md).
+In this article, you learned how to bind an application in Azure Spring Apps to an Azure Database for MySQL instance. To learn more about binding services to an application, see [Bind an Azure Cosmos DB database to an application in Azure Spring Apps](./how-to-bind-cosmos.md).
spring-cloud How To Bind Redis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-bind-redis.md
Title: Bind Azure Cache for Redis to your application in Azure Spring Cloud
-description: Learn how to bind Azure Cache for Redis to your application in Azure Spring Cloud
+ Title: Bind Azure Cache for Redis to your application in Azure Spring Apps
+description: Learn how to bind Azure Cache for Redis to your application in Azure Spring Apps
Last updated 10/31/2019 -+
-# Bind Azure Cache for Redis to your application in Azure Spring Cloud
+# Bind Azure Cache for Redis to your application in Azure Spring Apps
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Java ❌ C#

**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-Instead of manually configuring your Spring Boot applications, you can automatically bind select Azure services to your applications by using Azure Spring Cloud. This article shows how to bind your application to Azure Cache for Redis.
+Instead of manually configuring your Spring Boot applications, you can automatically bind select Azure services to your applications by using Azure Spring Apps. This article shows how to bind your application to Azure Cache for Redis.
## Prerequisites
-* A deployed Azure Spring Cloud instance
+* A deployed Azure Spring Apps instance
* An Azure Cache for Redis service instance
-* The Azure Spring Cloud extension for the Azure CLI
+* The Azure Spring Apps extension for the Azure CLI
-If you don't have a deployed Azure Spring Cloud instance, follow the steps in the [quickstart on deploying an Azure Spring Cloud app](./quickstart.md).
+If you don't have a deployed Azure Spring Apps instance, follow the steps in the [quickstart on deploying an Azure Spring Apps app](./quickstart.md).
## Prepare your Java project
If you don't have a deployed Azure Spring Cloud instance, follow the steps in th
1. Remove any `spring.redis.*` properties from the `application.properties` file
-1. Update the current deployment using `az spring-cloud app update` or create a new deployment using `az spring-cloud app deployment create`.
+1. Update the current deployment using `az spring app update` or create a new deployment using `az spring app deployment create`.
## Bind your app to the Azure Cache for Redis

#### [Service Binding](#tab/Service-Binding)
-1. Go to your Azure Spring Cloud service page in the Azure portal. Go to **Application Dashboard** and select the application to bind to Azure Cache for Redis. This application is the same one you updated or deployed in the previous step.
+1. Go to your Azure Spring Apps service page in the Azure portal. Go to **Application Dashboard** and select the application to bind to Azure Cache for Redis. This application is the same one you updated or deployed in the previous step.
1. Select **Service binding** and select **Create service binding**. Fill out the form, being sure to select the **Binding type** value **Azure Cache for Redis**, your Azure Cache for Redis server, and the **Primary** key option.
If you don't have a deployed Azure Spring Cloud instance, follow the steps in th
#### [Terraform](#tab/Terraform)
-The following Terraform script shows how to set up an Azure Spring Cloud app with Azure Cache for Redis.
+The following Terraform script shows how to set up an Azure Spring Apps app with Azure Cache for Redis.
```terraform
provider "azurerm" {
resource "azurerm_spring_cloud_active_deployment" "example" {
## Next steps
-In this article, you learned how to bind your application in Azure Spring Cloud to Azure Cache for Redis. To learn more about binding services to your application, see [Bind to an Azure Database for MySQL instance](./how-to-bind-mysql.md).
+In this article, you learned how to bind your application in Azure Spring Apps to Azure Cache for Redis. To learn more about binding services to your application, see [Bind to an Azure Database for MySQL instance](./how-to-bind-mysql.md).
spring-cloud How To Built In Persistent Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-built-in-persistent-storage.md
Title: How to use built-in persistent storage in Azure Spring Cloud | Microsoft Docs
-description: How to use built-in persistent storage in Azure Spring Cloud
+ Title: How to use built-in persistent storage in Azure Spring Apps | Microsoft Docs
+description: How to use built-in persistent storage in Azure Spring Apps
Last updated 10/28/2021 -+
-# Use built-in persistent storage in Azure Spring Cloud
+# Use built-in persistent storage in Azure Spring Apps
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Java ✔️ C#

**This article applies to:** ✔️ Basic/Standard tier ❌ Enterprise tier
-Azure Spring Cloud provides two types of built-in storage for your application: persistent and temporary.
+Azure Spring Apps provides two types of built-in storage for your application: persistent and temporary.
-By default, Azure Spring Cloud provides temporary storage for each application instance. Temporary storage is limited to 5 GB per instance with the default mount path /tmp.
+By default, Azure Spring Apps provides temporary storage for each application instance. Temporary storage is limited to 5 GB per instance with the default mount path /tmp.
> [!WARNING]
> If you restart an application instance, the associated temporary storage is permanently deleted.
-Persistent storage is a file-share container managed by Azure and allocated per application. Data stored in persistent storage is shared by all instances of an application. An Azure Spring Cloud instance can have a maximum of 10 applications with persistent storage enabled. Each application is allocated 50 GB of persistent storage. The default mount path for persistent storage is /persistent.
+Persistent storage is a file-share container managed by Azure and allocated per application. Data stored in persistent storage is shared by all instances of an application. An Azure Spring Apps instance can have a maximum of 10 applications with persistent storage enabled. Each application is allocated 50 GB of persistent storage. The default mount path for persistent storage is /persistent.
> [!WARNING]
> If you disable an application's persistent storage, all of that storage is deallocated and all of the stored data is lost.
The portal can be used to enable or disable built-in persistent storage.
>![Locate the All Resources icon](media/portal-all-resources.jpg)
-1. Select the Azure Spring Cloud resource that needs persistent storage. In this example, the selected application is called **upspring**.
+1. Select the Azure Spring Apps resource that needs persistent storage. In this example, the selected application is called **upspring**.
> ![Select your application](media/select-service.jpg)

1. Under the **Settings** heading, select **Apps**.
-1. Your Azure Spring Cloud services appear in a table. Select the service that you want to add persistent storage to. In this example, the **gateway** service is selected.
+1. Your Azure Spring Apps services appear in a table. Select the service that you want to add persistent storage to. In this example, the **gateway** service is selected.
> ![Select your service](media/select-gateway.jpg)
If persistent storage is enabled, its size and path are shown on the **Persisten
#### [Azure CLI](#tab/azure-cli)

## Use the Azure CLI to enable or disable built-in persistent storage
-If necessary, install the Spring Cloud extension for the Azure CLI using this command:
+If necessary, install the Azure Spring Apps extension for the Azure CLI using this command:
```azurecli
-az extension add --name spring-cloud
+az extension add --name spring
```

Other operations:
Other operations:
* To create an app with built-in persistent storage enabled:

```azurecli
- az spring-cloud app create -n <app> -g <resource-group> -s <service-name> --enable-persistent-storage true
+ az spring app create -n <app> -g <resource-group> -s <service-name> --enable-persistent-storage true
```

* To enable built-in persistent storage for an existing app:

```azurecli
- az spring-cloud app update -n <app> -g <resource-group> -s <service-name> --enable-persistent-storage true
+ az spring app update -n <app> -g <resource-group> -s <service-name> --enable-persistent-storage true
```

* To disable built-in persistent storage in an existing app:

```azurecli
- az spring-cloud app update -n <app> -g <resource-group> -s <service-name> --enable-persistent-storage false
+ az spring app update -n <app> -g <resource-group> -s <service-name> --enable-persistent-storage false
```
spring-cloud How To Capture Dumps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-capture-dumps.md
Title: Capture heap dump and thread dump manually and use Java Flight Recorder in Azure Spring Cloud
+ Title: Capture heap dump and thread dump manually and use Java Flight Recorder in Azure Spring Apps
description: Learn how to manually capture a heap dump, a thread dump, or start Java Flight Recorder. Last updated 01/21/2022-+
-# Capture heap dump and thread dump manually and use Java Flight Recorder in Azure Spring Cloud
+# Capture heap dump and thread dump manually and use Java Flight Recorder in Azure Spring Apps
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Java ❌ C#
This article describes how to manually generate a heap dump or thread dump, and how to start Java Flight Recorder (JFR).
-Effective troubleshooting is critical to ensure you can fix issues in production environments and keep your business online. Azure Spring Cloud provides application log streaming and query, rich metrics emitting, alerts, distributed tracing, and so forth. However, when you get alerts about requests with high latency, JVM heap leak, or high CPU usage, there's no last-mile solution. For this reason, we've enabled you to manually generate a heap dump, generate a thread dump, and start JFR.
+Effective troubleshooting is critical to ensure you can fix issues in production environments and keep your business online. Azure Spring Apps provides application log streaming and query, rich metrics emitting, alerts, distributed tracing, and so forth. However, when you get alerts about requests with high latency, JVM heap leak, or high CPU usage, there's no last-mile solution. For this reason, we've enabled you to manually generate a heap dump, generate a thread dump, and start JFR.
## Prerequisites
-* A deployed Azure Spring Cloud service instance. To get started, see [Quickstart: Deploy your first application to Azure Spring Cloud](quickstart.md).
+* A deployed Azure Spring Apps service instance. To get started, see [Quickstart: Deploy your first application to Azure Spring Apps](quickstart.md).
* At least one application already created in your service instance.
-* Your own persistent storage as described in [How to enable your own persistent storage in Azure Spring Cloud](how-to-custom-persistent-storage.md). This storage is used to save generated diagnostic files. The paths you provide in the parameter values below should be under the mount path of the persistent storage bound to your app. If you want to use a path under the mount path, be sure to create the subpath beforehand.
+* Your own persistent storage as described in [How to enable your own persistent storage in Azure Spring Apps](how-to-custom-persistent-storage.md). This storage is used to save generated diagnostic files. The paths you provide in the parameter values below should be under the mount path of the persistent storage bound to your app. If you want to use a path under the mount path, be sure to create the subpath beforehand.
## Generate a heap dump
-Use the following command to generate a heap dump of your app in Azure Spring Cloud.
+Use the following command to generate a heap dump of your app in Azure Spring Apps.
```azurecli
-az spring-cloud app deployment generate-heap-dump \
+az spring app deployment generate-heap-dump \
--resource-group <resource-group-name> \
- --service <Azure-Spring-Cloud-instance-name> \
+ --service <Azure-Spring-Apps-instance-name> \
    --app <app-name> \
    --deployment <deployment-name> \
    --app-instance <app-instance name> \
az spring-cloud app deployment generate-heap-dump \
## Generate a thread dump
-Use the following command to generate a thread dump of your app in Azure Spring Cloud.
+Use the following command to generate a thread dump of your app in Azure Spring Apps.
```azurecli
-az spring-cloud app deployment generate-thread-dump \
+az spring app deployment generate-thread-dump \
--resource-group <resource-group-name> \
- --service <Azure-Spring-Cloud-instance-name> \
+ --service <Azure-Spring-Apps-instance-name> \
    --app <app-name> \
    --deployment <deployment-name> \
    --app-instance <app-instance name> \
az spring-cloud app deployment generate-thread-dump \
## Start JFR
-Use the following command to start JFR for your app in Azure Spring Cloud.
+Use the following command to start JFR for your app in Azure Spring Apps.
```azurecli
-az spring-cloud app deployment start-jfr \
+az spring app deployment start-jfr \
--resource-group <resource-group-name> \
- --service <Azure-Spring-Cloud-instance-name> \
+ --service <Azure-Spring-Apps-instance-name> \
    --app <app-name> \
    --deployment <deployment-name> \
    --app-instance <app-instance name> \
The default value for `duration` is 60 seconds.
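Putting the parameters shown above together, a longer recording might look like the following sketch. This example is assumption-heavy: the `--duration` flag name and the `120s` value format are inferred from the `duration` parameter mentioned here, and the command also takes a file path (not shown in this excerpt) that must point under the mount path of your persistent storage.

```azurecli
az spring app deployment start-jfr \
    --resource-group <resource-group-name> \
    --service <Azure-Spring-Apps-instance-name> \
    --app <app-name> \
    --deployment <deployment-name> \
    --app-instance <app-instance-name> \
    --duration 120s
```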
## Generate a dump using the Azure portal
-Use the following steps to generate a heap or thread dump of your app in Azure Spring Cloud.
+Use the following steps to generate a heap or thread dump of your app in Azure Spring Apps.
1. In the Azure portal, navigate to your target app, then select **Troubleshooting**.
2. In the **Troubleshooting** pane, select the app instance and the type of dump you'd like to collect.
Navigate to the target file path in your persistent storage and find your dump/J
## Next steps
-* [Use the diagnostic settings of JVM options for advanced troubleshooting in Azure Spring Cloud](how-to-dump-jvm-options.md)
+* [Use the diagnostic settings of JVM options for advanced troubleshooting in Azure Spring Apps](how-to-dump-jvm-options.md)
spring-cloud How To Cicd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-cicd.md
Title: Automate application deployments to Azure Spring Cloud
-description: Describes how to use the Azure Spring Cloud task for Azure Pipelines.
+ Title: Automate application deployments to Azure Spring Apps
+description: Describes how to use the Azure Spring Apps task for Azure Pipelines.
Last updated 09/13/2021 -+ zone_pivot_groups: programming-languages-spring-cloud
-# Automate application deployments to Azure Spring Cloud
+# Automate application deployments to Azure Spring Apps
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This article shows you how to use the [Azure Spring Cloud task for Azure Pipelines](/azure/devops/pipelines/tasks/deploy/azure-spring-cloud) to deploy applications.
+This article shows you how to use the [Azure Spring Apps task for Azure Pipelines](/azure/devops/pipelines/tasks/deploy/azure-spring-cloud) to deploy applications.
Continuous integration and continuous delivery tools let you quickly deploy updates to existing applications with minimal effort and risk. Azure DevOps helps you organize and control these key jobs.
The following video describes end-to-end automation using tools of your choice,
## Create an Azure Resource Manager service connection
-First, create an Azure Resource Manager service connection to your Azure DevOps project. For instructions, see [Connect to Microsoft Azure](/azure/devops/pipelines/library/connect-to-azure). Be sure to select the same subscription you're using for your Azure Spring Cloud service instance.
+First, create an Azure Resource Manager service connection to your Azure DevOps project. For instructions, see [Connect to Microsoft Azure](/azure/devops/pipelines/library/connect-to-azure). Be sure to select the same subscription you're using for your Azure Spring Apps service instance.
## Build and deploy apps
-You can now build and deploy your projects using a series of tasks. The following Azure Pipelines template defines variables, a .NET Core task to build the application, and an Azure Spring Cloud task to deploy the application.
+You can now build and deploy your projects using a series of tasks. The following Azure Pipelines template defines variables, a .NET Core task to build the application, and an Azure Spring Apps task to deploy the application.
```yaml
variables:
steps:
::: zone-end

::: zone pivot="programming-language-java"
-## Set up an Azure Spring Cloud instance and an Azure DevOps project
+## Set up an Azure Spring Apps instance and an Azure DevOps project
-First, use the following steps to set up an existing Azure Spring Cloud instance for use with Azure DevOps.
+First, use the following steps to set up an existing Azure Spring Apps instance for use with Azure DevOps.
-1. Go to your Azure Spring Cloud instance, then create a new app.
+1. Go to your Azure Spring Apps instance, then create a new app.
1. Go to the Azure DevOps portal, then create a new project under your chosen organization. If you don't have an Azure DevOps organization, you can create one for free.
1. Select **Repos**, then import the [Spring Boot demo code](https://github.com/spring-guides/gs-spring-boot) to the repository.

## Create an Azure Resource Manager service connection
-Next, create an Azure Resource Manager service connection to your Azure DevOps project. For instructions, see [Connect to Microsoft Azure](/azure/devops/pipelines/library/connect-to-azure). Be sure to select the same subscription you're using for your Azure Spring Cloud service instance.
+Next, create an Azure Resource Manager service connection to your Azure DevOps project. For instructions, see [Connect to Microsoft Azure](/azure/devops/pipelines/library/connect-to-azure). Be sure to select the same subscription you're using for your Azure Spring Apps service instance.
## Build and deploy apps
To deploy using a pipeline, follow these steps:
1. Select **Pipelines**, then create a new pipeline with a Maven template.
1. Edit the *azure-pipelines.yml* file to set the `mavenPomFile` field to *'complete/pom.xml'*.
-1. Select **Show assistant** on the right side, then select the **Azure Spring Cloud** template.
-1. Select the service connection you created for your Azure Subscription, then select your Spring Cloud instance and app instance.
+1. Select **Show assistant** on the right side, then select the **Azure Spring Apps** template.
+1. Select the service connection you created for your Azure Subscription, then select your Azure Spring Apps instance and app instance.
1. Disable **Use Staging Deployment**.
1. Set **Package or folder** to *complete/target/spring-boot-complete-0.0.1-SNAPSHOT.jar*.
1. Select **Add** to add this task to your pipeline.
To deploy using a pipeline, follow these steps:
:::image type="content" source="media/spring-cloud-how-to-cicd/pipeline-task-setting.jpg" alt-text="Screenshot of pipeline settings." lightbox="media/spring-cloud-how-to-cicd/pipeline-task-setting.jpg":::
- You can also build and deploy your projects using following pipeline template. This example first defines a Maven task to build the application, followed by a second task that deploys the JAR file using the Azure Spring Cloud task for Azure Pipelines.
+ You can also build and deploy your projects using the following pipeline template. This example first defines a Maven task to build the application, followed by a second task that deploys the JAR file using the Azure Spring Apps task for Azure Pipelines.
```yaml
steps:
To deploy using a pipeline, follow these steps:
inputs:
azureSubscription: '<your service connection name>'
Action: 'Deploy'
- AzureSpringCloud: <your Azure Spring Cloud service>
+ AzureSpringCloud: <your Azure Spring Apps service>
AppName: <app-name>
UseStagingDeployment: false
DeploymentName: 'default'
steps:
inputs:
azureSubscription: '<your service connection name>'
Action: 'Deploy'
- AzureSpringCloud: <your Azure Spring Cloud service>
+ AzureSpringCloud: <your Azure Spring Apps service>
AppName: <app-name>
UseStagingDeployment: true
Package: ./target/your-result-jar.jar
steps:
inputs:
azureSubscription: '<your service connection name>'
Action: 'Set Production'
- AzureSpringCloud: <your Azure Spring Cloud service>
+ AzureSpringCloud: <your Azure Spring Apps service>
AppName: <app-name>
UseStagingDeployment: true
```
The following steps show you how to enable a blue-green deployment from the **Re
:::image type="content" source="media/spring-cloud-how-to-cicd/create-new-job.jpg" alt-text="Screenshot of where to select to add a task to a job."::: 1. Select the **+** to add a task to the job.
- 1. Search for the **Azure Spring Cloud** template, then select **Add** to add the task to the job.
- 1. Select **Azure Spring Cloud Deploy:** to edit the task.
+ 1. Search for the **Azure Spring Apps** template, then select **Add** to add the task to the job.
+ 1. Select **Azure Spring Apps Deploy:** to edit the task.
1. Fill this task with your app's information, then disable **Use Staging Deployment**.
1. Enable **Create a new staging deployment if one does not exist**, then enter a name in **Deployment**.
1. Select **Save** to save this task.
The following steps show you how to enable a blue-green deployment from the **Re
1. Under **Source (build pipeline)** select the pipeline created previously.
1. Select **Add**, then **Save**.
1. Select **1 job, 1 task** under **Stages**.
-1. Navigate to the **Azure Spring Cloud Deploy** task in **Stage 1**, then select the ellipsis next to **Package or folder**.
+1. Navigate to the **Azure Spring Apps Deploy** task in **Stage 1**, then select the ellipsis next to **Package or folder**.
1. Select *spring-boot-complete-0.0.1-SNAPSHOT.jar* in the dialog, then select **OK**.

   :::image type="content" source="media/spring-cloud-how-to-cicd/change-artifact-path.jpg" alt-text="Screenshot of the 'Select a file or folder' dialog box.":::
-1. Select the **+** to add another **Azure Spring Cloud** task to the job.
+1. Select the **+** to add another **Azure Spring Apps** task to the job.
2. Change the action to **Set Production Deployment**.
3. Select **Save**, then **Create release** to automatically start the deployment.
To deploy directly to Azure without a separate build step, use the following pip
inputs:
azureSubscription: '<your service connection name>'
Action: 'Deploy'
- AzureSpringCloud: <your Azure Spring Cloud service>
+ AzureSpringCloud: <your Azure Spring Apps service>
AppName: <app-name>
UseStagingDeployment: false
DeploymentName: 'default'
To deploy directly to Azure without a separate build step, use the following pip
## Next steps
-* [Quickstart: Deploy your first Spring Boot app in Azure Spring Cloud](./quickstart.md)
+* [Quickstart: Deploy your first Spring Boot app in Azure Spring Apps](./quickstart.md)
spring-cloud How To Circuit Breaker Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-circuit-breaker-metrics.md
Title: Collect Spring Cloud Resilience4J Circuit Breaker Metrics with Micrometer
-description: How to collect Spring Cloud Resilience4J Circuit Breaker Metrics with Micrometer in Azure Spring Cloud.
+description: How to collect Spring Cloud Resilience4J Circuit Breaker Metrics with Micrometer in Azure Spring Apps.
Last updated 12/15/2020-+

# Collect Spring Cloud Resilience4J Circuit Breaker Metrics with Micrometer (Preview)
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+
**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier

This article shows you how to collect Spring Cloud Resilience4j Circuit Breaker Metrics with the Application Insights Java in-process agent. With this feature, you can monitor the metrics of a Resilience4j circuit breaker from Application Insights with Micrometer.
cd spring-cloud-circuitbreaker-demo && mvn clean package -DskipTests
2. Create applications with endpoints.

```azurecli
-az spring-cloud app create --name resilience4j --assign-endpoint \
- -s ${asc-service-name} -g ${asc-resource-group}
-az spring-cloud app create --name reactive-resilience4j --assign-endpoint \
- -s ${asc-service-name} -g ${asc-resource-group}
+az spring app create \
+ --resource-group ${resource-group-name} \
+ --name resilience4j \
+ --service ${Azure-Spring-Apps-instance-name} \
+ --assign-endpoint
+az spring app create \
+ --resource-group ${resource-group-name} \
+ --service ${Azure-Spring-Apps-instance-name} \
+ --name reactive-resilience4j \
+ --assign-endpoint
```

3. Deploy applications.

```azurecli
-az spring-cloud app deploy -n resilience4j \
+az spring app deploy -n resilience4j \
    --jar-path ./spring-cloud-circuitbreaker-demo-resilience4j/target/spring-cloud-circuitbreaker-demo-resilience4j-0.0.1.BUILD-SNAPSHOT.jar \
    -s ${service_name} -g ${resource_group}
-az spring-cloud app deploy -n reactive-resilience4j \
+az spring app deploy -n reactive-resilience4j \
    --jar-path ./spring-cloud-circuitbreaker-demo-reactive-resilience4j/target/spring-cloud-circuitbreaker-demo-reactive-resilience4j-0.0.1.BUILD-SNAPSHOT.jar \
    -s ${service_name} -g ${resource_group}
```
az spring-cloud app deploy -n reactive-resilience4j \
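Once both apps are running, you can send a little traffic through them so the circuit breaker has something to record; a sketch, assuming the demo exposes a `/get` endpoint and that assigned endpoints use the default `<service-name>-<app-name>.azuremicroservices.io` URL pattern:

```azurecli
# Generate a few requests against the resilience4j demo app
for i in $(seq 1 20); do
  curl "https://${service_name}-resilience4j.azuremicroservices.io/get"
done
```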
## Locate Resilience4j Metrics from the Portal
-1. Select the **Application Insights** Blade from Azure Spring Cloud portal, and select **Application Insights**.
+1. Select the **Application Insights** Blade from Azure Spring Apps portal, and select **Application Insights**.
[ ![resilience4J 0](media/spring-cloud-resilience4j/resilience4J-0.png)](media/spring-cloud-resilience4j/resilience4J-0.PNG)
spring-cloud How To Config Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-config-server.md
Title: Configure your managed Spring Cloud Config Server in Azure Spring Cloud
-description: Learn how to configure a managed Spring Cloud Config Server in Azure Spring Cloud on the Azure portal
+ Title: Configure your managed Spring Cloud Config Server in Azure Spring Apps
+description: Learn how to configure a managed Spring Cloud Config Server in Azure Spring Apps on the Azure portal
Last updated 12/10/2021-+
-# Configure a managed Spring Cloud Config Server in Azure Spring Cloud
+# Configure a managed Spring Cloud Config Server in Azure Spring Apps
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Java ✔️ C#

**This article applies to:** ✔️ Basic/Standard tier ❌ Enterprise tier
-This article shows you how to configure a managed Spring Cloud Config Server in Azure Spring Cloud service.
+This article shows you how to configure a managed Spring Cloud Config Server in Azure Spring Apps service.
Spring Cloud Config Server provides server and client-side support for an externalized configuration in a distributed system. The Config Server instance provides a central place to manage external properties for applications across all environments. For more information, see the [Spring Cloud Config Server reference](https://spring.io/projects/spring-cloud-config).

## Prerequisites

* An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-* An already provisioned and running Azure Spring Cloud service of basic or standard tier. To set up and launch an Azure Spring Cloud service, see [Quickstart: Launch a Java Spring application by using the Azure CLI](./quickstart.md). Spring Cloud Config Server is not applicable to enterprise tier.
+* An already provisioned and running Azure Spring Apps service of basic or standard tier. To set up and launch an Azure Spring Apps service, see [Quickstart: Launch a Java Spring application by using the Azure CLI](./quickstart.md). Spring Cloud Config Server is not applicable to enterprise tier.
## Restriction
spring.jmx.enabled
## Create your Config Server files
-Azure Spring Cloud supports Azure DevOps, GitHub, GitLab, and Bitbucket for storing your Config Server files. When you've your repository ready, create the configuration files with the following instructions and store them there.
+Azure Spring Apps supports Azure DevOps, GitHub, GitLab, and Bitbucket for storing your Config Server files. When your repository is ready, create the configuration files with the following instructions and store them there.
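As a concrete illustration of what such a file looks like (a sketch; the app name, property, and commit message are hypothetical), each application gets a YAML or properties file in the repository named after the app's `spring.application.name`:

```azurecli
# In a local clone of your configuration repository
cat > gateway.yml <<'EOF'
welcome:
  message: hello from Config Server
EOF
git add gateway.yml
git commit -m "Add configuration for the gateway app"
git push
```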
Additionally, some configurable properties are available only for certain types. The following subsections list the properties for each repository type.
All configurable properties used to set up private Git repository with SSH are l
| `strict-host-key-checking` | No | Indicates whether the Config Server instance will fail to start when using the private `host-key`. Should be *true* (default value) or *false*. |

> [!NOTE]
-> Config Server takes `master` (om Git itself) as the default label if you don't specify one. But GitHub has changed the default branch from `master` to `main` recently. To avoid Azure Spring Cloud Config Server failure, be sure to pay attention to the default label when setting up Config Server with GitHub, especially for newly-created repositories.
+> Config Server takes `master` (on Git itself) as the default label if you don't specify one. However, GitHub has recently changed the default branch from `master` to `main`. To avoid Azure Spring Apps Config Server failures, be sure to pay attention to the default label when setting up Config Server with GitHub, especially for newly created repositories.
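If you manage Config Server from the Azure CLI rather than the portal, one way to avoid the default-label pitfall is to set the label explicitly. The following is a hedged sketch that assumes the `az spring config-server git set` command and placeholder values; verify the exact parameter names against your CLI version:

```azurecli
# Point Config Server at a Git repository and pin the label to 'main'
# so the default label ('master') is never assumed.
az spring config-server git set \
    --resource-group <resource-group-name> \
    --name <service-instance-name> \
    --uri https://github.com/<your-org>/<your-config-repo> \
    --label main
```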
### Private repository with basic authentication
All configurable properties used to set up private Git repository with basic aut
| `password` | No | The password or personal access token used to access the Git repository server, _required_ when the Git repository server supports `Http Basic Authentication`. | > [!NOTE]
-> Many `Git` repository servers support the use of tokens rather than passwords for HTTP Basic Authentication. Some repositories allow tokens to persist indefinitely. However, some Git repository servers, including Azure DevOps Server, force tokens to expire in a few hours. Repositories that cause tokens to expire shouldn't use token-based authentication with Azure Spring Cloud.
+> Many `Git` repository servers support the use of tokens rather than passwords for HTTP Basic Authentication. Some repositories allow tokens to persist indefinitely. However, some Git repository servers, including Azure DevOps Server, force tokens to expire in a few hours. Repositories that cause tokens to expire shouldn't use token-based authentication with Azure Spring Apps.
> GitHub has removed support for password authentication, so you'll need to use a personal access token instead of password authentication for GitHub. For more information, see [Token authentication](https://github.blog/2020-12-15-token-authentication-requirements-for-git-operations/). ### Other Git repositories
The following table shows some examples for the **Additional repositories** sect
:::image type="content" source="media/spring-cloud-tutorial-config-server/additional-repositories.png" lightbox="media/spring-cloud-tutorial-config-server/additional-repositories.png" alt-text="Screenshot of Azure portal showing the Config Server page with the Patterns column of the 'Additional repositories' table highlighted.":::
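For automation scenarios, additional repositories can also be registered from the CLI. The following is a hedged sketch that assumes the `az spring config-server git repo add` command and its `--pattern` parameter; the repository name, URI, and pattern are placeholders, and parameter spellings may differ between CLI versions:

```azurecli
# Register an extra repository that only serves applications matching the given pattern.
az spring config-server git repo add \
    --resource-group <resource-group-name> \
    --name <service-instance-name> \
    --repo-name team-a-config \
    --uri https://github.com/<your-org>/<team-a-config-repo> \
    --pattern "team-a*"
```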
-## Attach your Config Server repository to Azure Spring Cloud
+## Attach your Config Server repository to Azure Spring Apps
-Now that your configuration files are saved in a repository, you need to connect Azure Spring Cloud to it.
+Now that your configuration files are saved in a repository, you need to connect Azure Spring Apps to it.
1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Go to your Azure Spring Cloud **Overview** page.
+2. Go to your Azure Spring Apps **Overview** page.
3. Select **Config Server** in the left navigation pane.
Now that your configuration files are saved in a repository, you need to connect
* **Public repository**: In the **Default repository** section, in the **Uri** box, paste the repository URI. Set the **Label** to **config**. Ensure that the **Authentication** setting is **Public**, and then select **Apply** to finish.
-* **Private repository**: Azure Spring Cloud supports basic password/token-based authentication and SSH.
+* **Private repository**: Azure Spring Apps supports basic password/token-based authentication and SSH.
- * **Basic Authentication**: In the **Default repository** section, in the **Uri** box, paste the repository URI, and then select the **Authentication** ("pencil" icon) button. In the **Edit Authentication** pane, in the **Authentication type** drop-down list, select **HTTP Basic**, and then enter your username and password/token to grant access to Azure Spring Cloud. Select **OK**, and then select **Apply** to finish setting up your Config Server instance.
+ * **Basic Authentication**: In the **Default repository** section, in the **Uri** box, paste the repository URI, and then select the **Authentication** ("pencil" icon) button. In the **Edit Authentication** pane, in the **Authentication type** drop-down list, select **HTTP Basic**, and then enter your username and password/token to grant access to Azure Spring Apps. Select **OK**, and then select **Apply** to finish setting up your Config Server instance.
![The Edit Authentication pane basic auth](media/spring-cloud-tutorial-config-server/basic-auth.png) > [!CAUTION]
- > Some Git repository servers use a *personal-token* or an *access-token*, such as a password, for **Basic Authentication**. You can use that kind of token as a password in Azure Spring Cloud, because it will never expire. But for other Git repository servers, such as Bitbucket and Azure DevOps Server, the *access-token* expires in one or two hours. This means that the option isn't viable when you use those repository servers with Azure Spring Cloud.
+ > Some Git repository servers use a *personal-token* or an *access-token*, such as a password, for **Basic Authentication**. You can use that kind of token as a password in Azure Spring Apps, because it will never expire. But for other Git repository servers, such as Bitbucket and Azure DevOps Server, the *access-token* expires in one or two hours. This means that the option isn't viable when you use those repository servers with Azure Spring Apps.
> GitHub has removed support for password authentication, so you'll need to use a personal access token instead of password authentication for GitHub. For more information, see [Token authentication](https://github.blog/2020-12-15-token-authentication-requirements-for-git-operations/). * **SSH**: In the **Default repository** section, in the **Uri** box, paste the repository URI, and then select the **Authentication** ("pencil" icon) button. In the **Edit Authentication** pane, in the **Authentication type** drop-down list, select **SSH**, and then enter your **Private key**. Optionally, specify your **Host key** and **Host key algorithm**. Be sure to include your public key in your Config Server repository. Select **OK**, and then select **Apply** to finish setting up your Config Server instance.
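As an alternative to the portal steps in the list above, the default repository can be attached from the CLI. The following is a hedged sketch for the basic authentication case, assuming the `az spring config-server git set` command, a personal access token, and placeholder values:

```azurecli
# Attach a private repository using HTTP basic authentication.
# For GitHub, pass a personal access token as the password.
az spring config-server git set \
    --resource-group <resource-group-name> \
    --name <service-instance-name> \
    --uri https://github.com/<your-org>/<private-config-repo> \
    --username <git-username> \
    --password <personal-access-token>
```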
If you want to use an optional **Additional repositories** to configure your ser
### Enter repository information into a YAML file
-If you've written a YAML file with your repository settings, you can import the file directly from your local machine to Azure Spring Cloud. A simple YAML file for a private repository with basic authentication would look like this:
+If you've written a YAML file with your repository settings, you can import the file directly from your local machine to Azure Spring Apps. A simple YAML file for a private repository with basic authentication would look like this:
```yaml spring:
Select the **Import settings** button, and then select the YAML file from your p
The information from your YAML file should be displayed in the Azure portal. Select **Apply** to finish.
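If you prefer scripting over the portal import, the same settings can typically be applied with the CLI. The following is a hedged sketch that assumes the `az spring config-server set` command accepts a local YAML file; the file name and parameter spelling should be verified against your CLI version:

```azurecli
# Apply Config Server settings from a local YAML file instead of using the portal's Import settings button.
az spring config-server set \
    --resource-group <resource-group-name> \
    --name <service-instance-name> \
    --config-file application.yml
```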
-## Using Azure Repos for Azure Spring Cloud Configuration
+## Using Azure Repos for Azure Spring Apps Configuration
-Azure Spring Cloud can access Git repositories that are public, secured by SSH, or secured using HTTP basic authentication. We'll use that last option, as its easier to create and manage with Azure Repos.
+Azure Spring Apps can access Git repositories that are public, secured by SSH, or secured using HTTP basic authentication. We'll use that last option, as it's easier to create and manage with Azure Repos.
### Get repo URL and credentials
Azure Spring Cloud can access Git repositories that are public, secured by SSH,
1. Select **Generate Git Credentials**. A username and password will appear and should be saved for use in the next section.
-### Configure Azure Spring Cloud to access the Git repository
+### Configure Azure Spring Apps to access the Git repository
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Go to your Azure Spring Cloud **Overview** page.
+1. Go to your Azure Spring Apps **Overview** page.
1. Select the service to configure.
When properties are changed, services consuming those properties need to be noti
## Next steps
-In this article, you learned how to enable and configure your Spring Cloud Config Server instance. To learn more about managing your application, see [Scale an application in Azure Spring Cloud](./how-to-scale-manual.md).
+In this article, you learned how to enable and configure your Spring Cloud Config Server instance. To learn more about managing your application, see [Scale an application in Azure Spring Apps](./how-to-scale-manual.md).
spring-cloud How To Configure Palo Alto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-configure-palo-alto.md
Title: How to configure Palo Alto for Azure Spring Cloud
-description: How to configure Palo Alto for Azure Spring Cloud
+ Title: How to configure Palo Alto for Azure Spring Apps
+description: How to configure Palo Alto for Azure Spring Apps
Last updated 09/17/2021-+
-# How to configure Palo Alto for Azure Spring Cloud
+# How to configure Palo Alto for Azure Spring Apps
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Java ✔️ C# **This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This article describes how to use Azure Spring Cloud with a Palo Alto firewall.
+This article describes how to use Azure Spring Apps with a Palo Alto firewall.
-For example, the [Azure Spring Cloud reference architecture](./reference-architecture.md) includes an Azure Firewall to secure your applications. However, if your current deployments include a Palo Alto firewall, you can omit the Azure Firewall from the Azure Spring Cloud deployment and use Palo Alto instead, as described in this article.
+For example, the [Azure Spring Apps reference architecture](./reference-architecture.md) includes an Azure Firewall to secure your applications. However, if your current deployments include a Palo Alto firewall, you can omit the Azure Firewall from the Azure Spring Apps deployment and use Palo Alto instead, as described in this article.
-You should keep configuration information, such as rules and address wildcards, in CSV files in a Git repository. This article shows you how to use automation to apply these files to Palo Alto. To understand the configuration to be applied to Palo Alto, see [Customer responsibilities for running Azure Spring Cloud in VNET](./vnet-customer-responsibilities.md).
+You should keep configuration information, such as rules and address wildcards, in CSV files in a Git repository. This article shows you how to use automation to apply these files to Palo Alto. To understand the configuration to be applied to Palo Alto, see [Customer responsibilities for running Azure Spring Apps in VNET](./vnet-customer-responsibilities.md).
> [!Note] > In describing the use of REST APIs, this article uses the PowerShell variable syntax to indicate names and values that are left to your discretion. Be sure to use the same values in all the steps.
The [Reference Architecture Guide for Azure](https://www.paloaltonetworks.com/re
The rest of this article assumes you have the following two pre-configured network zones:
-* `Trust`, containing the interface connected to a virtual network peered with the Azure Spring Cloud virtual network.
+* `Trust`, containing the interface connected to a virtual network peered with the Azure Spring Apps virtual network.
* `UnTrust`, containing the interface to the public internet created earlier in the VM-Series deployment guide. ## Prepare CSV files Next, create three CSV files.
-Name the first file *AzureSpringCloudServices.csv*. This file should contain ingress ports for Azure Spring Cloud. The values in the following example are for demonstration purposes only. For all of the required values, see the [Azure Spring Cloud network requirements](./vnet-customer-responsibilities.md#azure-spring-cloud-network-requirements) section of [Customer responsibilities for running Azure Spring Cloud in VNET](./vnet-customer-responsibilities.md).
+Name the first file *AzureSpringAppsServices.csv*. This file should contain ingress ports for Azure Spring Apps. The values in the following example are for demonstration purposes only. For all of the required values, see the [Azure Spring Apps network requirements](./vnet-customer-responsibilities.md#azure-spring-apps-network-requirements) section of [Customer responsibilities for running Azure Spring Apps in VNET](./vnet-customer-responsibilities.md).
```CSV name,protocol,port,tag
-ASC_1194,udp,1194,AzureSpringCloud
-ASC_443,tcp,443,AzureSpringCloud
-ASC_9000,tcp,9000,AzureSpringCloud
-ASC_445,tcp,445,AzureSpringCloud
-ASC_123,udp,123,AzureSpringCloud
+ASC_1194,udp,1194,AzureSpringApps
+ASC_443,tcp,443,AzureSpringApps
+ASC_9000,tcp,9000,AzureSpringApps
+ASC_445,tcp,445,AzureSpringApps
+ASC_123,udp,123,AzureSpringApps
```
-Name the second file *AzureSpringCloudUrlCategories.csv*. This file should contain the addresses (with wildcards) that should be available for egress from Azure Spring Cloud. The values in the following example are for demonstration purposes only. For up-to-date values, see [Azure Spring Cloud FQDN requirements/application rules](./vnet-customer-responsibilities.md#azure-spring-cloud-fqdn-requirementsapplication-rules).
+Name the second file *AzureSpringAppsUrlCategories.csv*. This file should contain the addresses (with wildcards) that should be available for egress from Azure Spring Apps. The values in the following example are for demonstration purposes only. For up-to-date values, see [Azure Spring Apps FQDN requirements/application rules](./vnet-customer-responsibilities.md#azure-spring-apps-fqdn-requirementsapplication-rules).
```CSV name,description
$url = "https://${PaloAltoIpAddress}/restapi/v9.1/Objects/ServiceGroups?location
Invoke-RestMethod -Method Delete -Uri $url -Headers $paloAltoHeaders -SkipCertificateCheck ```
-Delete each Palo Alto service (as defined in *AzureSpringCloudServices.csv*) as shown in the following example:
+Delete each Palo Alto service (as defined in *AzureSpringAppsServices.csv*) as shown in the following example:
```powershell
-Get-Content .\AzureSpringCloudServices.csv | ConvertFrom-Csv | select name | ForEach-Object {
+Get-Content .\AzureSpringAppsServices.csv | ConvertFrom-Csv | select name | ForEach-Object {
$url = "https://${PaloAltoIpAddress}/restapi/v9.1/Objects/Services?location=vsys&vsys=vsys1&name=${_}" Invoke-RestMethod -Method Delete -Uri $url -Headers $paloAltoHeaders -SkipCertificateCheck }
Get-Content .\AzureSpringCloudServices.csv | ConvertFrom-Csv | select name | For
## Create a service and service group
-To automate the creation of services based on the *AzureSpringCloudServices.csv* file you created earlier, use the following example.
+To automate the creation of services based on the *AzureSpringAppsServices.csv* file you created earlier, use the following example.
```powershell # Define a function to create and submit a Palo Alto service creation request
function New-PaloAltoService {
} }
-# Now invoke that function for every row in AzureSpringCloudServices.csv
-Get-Content ./AzureSpringCloudServices.csv | ConvertFrom-Csv | New-PaloAltoService
+# Now invoke that function for every row in AzureSpringAppsServices.csv
+Get-Content ./AzureSpringAppsServices.csv | ConvertFrom-Csv | New-PaloAltoService
``` Next, create a Service Group for these services, as shown in the following example:
function New-PaloAltoServiceGroup {
$requestBody = @{ 'entry' = [ordered] @{ '@name' = $ServiceGroupName 'members' = @{ 'member' = $names }
- 'tag' = @{ 'member' = 'AzureSpringCloud' }
+ 'tag' = @{ 'member' = 'AzureSpringApps' }
} }
function New-PaloAltoServiceGroup {
} }
-# Run that function for all services in AzureSpringCloudServices.csv.
-Get-Content ./AzureSpringCloudServices.csv | ConvertFrom-Csv | New-PaloAltoServiceGroup -ServiceGroupName 'AzureSpringCloud_SG'
+# Run that function for all services in AzureSpringAppsServices.csv.
+Get-Content ./AzureSpringAppsServices.csv | ConvertFrom-Csv | New-PaloAltoServiceGroup -ServiceGroupName 'AzureSpringApps_SG'
``` ## Create custom URL categories
-Next, define custom URL categories for the service group to enable egress from Azure Spring Cloud, as shown in the following example.
+Next, define custom URL categories for the service group to enable egress from Azure Spring Apps, as shown in the following example.
```powershell # Read Service entries from CSV to enter into Palo Alto
-$csvImport = Get-Content ${PSScriptRoot}/AzureSpringCloudUrls.csv | ConvertFrom-Csv
+$csvImport = Get-Content ${PSScriptRoot}/AzureSpringAppsUrls.csv | ConvertFrom-Csv
# Convert name column of CSV to add to the Custom URL Group in Palo Alto $requestBody = @{ 'entry' = [ordered] @{
- '@name' = 'AzureSpringCloud_SG'
+ '@name' = 'AzureSpringApps_SG'
'list' = @{ 'member' = $csvImport.name } 'type' = 'URL List' } } | ConvertTo-Json -Depth 9
-$url = "https://${PaloAltoIpAddress}/restapi/v9.1/Objects/CustomURLCategories?location=vsys&vsys=vsys1&name=AzureSpringCloud_SG"
+$url = "https://${PaloAltoIpAddress}/restapi/v9.1/Objects/CustomURLCategories?location=vsys&vsys=vsys1&name=AzureSpringApps_SG"
try { $existingObject = Invoke-RestMethod -Method Get -Uri $url -SkipCertificateCheck -Headers $paloAltoHeaders
Next, create a JSON file to contain a security rule. Name the file *SecurityRule
{ "entry": [ {
- "@name": "azureSpringCloudRule",
+ "@name": "AzureSpringAppsRule",
"@location": "vsys", "@vsys": "vsys1", "to": {
Next, create a JSON file to contain a security rule. Name the file *SecurityRule
}, "service": { "member": [
- "AzureSpringCloud_SG"
+ "AzureSpringApps_SG"
] }, "hip-profiles": {
Next, create a JSON file to contain a security rule. Name the file *SecurityRule
Now, apply this rule to Palo Alto, as shown in the following example. ```powershell
-$url = "https://${PaloAltoIpAddress}/restapi/v9.1/Policies/SecurityRules?location=vsys&vsys=vsys1&name=azureSpringCloudRule"
+$url = "https://${PaloAltoIpAddress}/restapi/v9.1/Policies/SecurityRules?location=vsys&vsys=vsys1&name=AzureSpringAppsRule"
# Delete the rule if it already exists try {
$url = "https://${PaloAltoIpAddress}/api/?type=commit&cmd=<commit></commit>"
Invoke-RestMethod -Method Get -Uri $url -SkipCertificateCheck -Headers $paloAltoHeaders ```
-## Configure the Security Rules for Azure Spring Cloud subnets
+## Configure the Security Rules for Azure Spring Apps subnets
-Next, add network security rules to enable traffic from Palo Alto to access Azure Spring Cloud. The following examples reference the spoke Network Security Groups (NSGs) created by the Reference Architecture: `nsg-spokeapp` and `nsg-spokeruntime`.
+Next, add network security rules to enable traffic from Palo Alto to access Azure Spring Apps. The following examples reference the spoke Network Security Groups (NSGs) created by the Reference Architecture: `nsg-spokeapp` and `nsg-spokeruntime`.
Run the following Azure CLI commands in a PowerShell window to create the necessary network security rule for each of these NSGs, where `$PaloAltoAddressPrefix` is the Classless Inter-Domain Routing (CIDR) address of Palo Alto's private IPs.
az network nsg rule create `
## Configure the next hop
-After you've configured Palo Alto, configure Azure Spring Cloud to have Palo Alto as its next hop for outbound internet access. You can use the following Azure CLI commands in a PowerShell window for this configuration. Be sure to provide values for the following variables:
+After you've configured Palo Alto, configure Azure Spring Apps to have Palo Alto as its next hop for outbound internet access. You can use the following Azure CLI commands in a PowerShell window for this configuration. Be sure to provide values for the following variables:
-* `$AppResourceGroupName`: The name of the resource group containing your Azure Spring Cloud.
-* `$AzureSpringCloudServiceSubnetRouteTableName`: The name of the Azure Spring Cloud service/runtime subnet route table. In the reference architecture, this is set to `rt-spokeruntime`.
-* `$AzureSpringCloudAppSubnetRouteTableName`: The name of the Azure Spring Cloud app subnet route table. In the reference architecture, this is set to `rt-spokeapp`.
+* `$AppResourceGroupName`: The name of the resource group containing your Azure Spring Apps instance.
+* `$AzureSpringAppsServiceSubnetRouteTableName`: The name of the Azure Spring Apps service/runtime subnet route table. In the reference architecture, this is set to `rt-spokeruntime`.
+* `$AzureSpringAppsAppSubnetRouteTableName`: The name of the Azure Spring Apps app subnet route table. In the reference architecture, this is set to `rt-spokeapp`.
```azurecli az network route-table route create ` --resource-group ${AppResourceGroupName} ` --name default `
- --route-table-name ${AzureSpringCloudServiceSubnetRouteTableName} `
+ --route-table-name ${AzureSpringAppsServiceSubnetRouteTableName} `
--address-prefix 0.0.0.0/0 ` --next-hop-type VirtualAppliance ` --next-hop-ip-address ${PaloAltoIpAddress} `
az network route-table route create `
az network route-table route create ` --resource-group ${AppResourceGroupName} ` --name default `
- --route-table-name ${AzureSpringCloudAppSubnetRouteTableName} `
+ --route-table-name ${AzureSpringAppsAppSubnetRouteTableName} `
--address-prefix 0.0.0.0/0 ` --next-hop-type VirtualAppliance ` --next-hop-ip-address ${PaloAltoIpAddress} `
Your configuration is now complete.
## Next steps
-* [Stream Azure Spring Cloud app logs in real-time](./how-to-log-streaming.md)
-* [Application Insights Java In-Process Agent in Azure Spring Cloud](./how-to-application-insights.md)
-* [Automate application deployments to Azure Spring Cloud](./how-to-cicd.md)
+* [Stream Azure Spring Apps app logs in real-time](./how-to-log-streaming.md)
+* [Application Insights Java In-Process Agent in Azure Spring Apps](./how-to-application-insights.md)
+* [Automate application deployments to Azure Spring Apps](./how-to-cicd.md)
spring-cloud How To Custom Persistent Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-custom-persistent-storage.md
Title: How to enable your own persistent storage in Azure Spring Cloud | Microsoft Docs
-description: How to bring your own storage as persistent storages in Azure Spring Cloud
+ Title: How to enable your own persistent storage in Azure Spring Apps | Microsoft Docs
+description: How to bring your own storage as persistent storages in Azure Spring Apps
Last updated 2/18/2022 -+
-# How to enable your own persistent storage in Azure Spring Cloud
+# How to enable your own persistent storage in Azure Spring Apps
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Java ✔️ C# **This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This article shows you how to enable your own persistent storage in Azure Spring Cloud.
+This article shows you how to enable your own persistent storage in Azure Spring Apps.
-When you use the built-in persistent storage in Azure Spring Cloud, artifacts generated by your application are uploaded into Azure Storage Accounts. Microsoft controls the encryption-at-rest and lifetime management policies for those artifacts.
+When you use the built-in persistent storage in Azure Spring Apps, artifacts generated by your application are uploaded into Azure Storage Accounts. Microsoft controls the encryption-at-rest and lifetime management policies for those artifacts.
With Bring Your Own Storage, these artifacts are uploaded into a storage account that you control. That means you control the encryption-at-rest policy, the lifetime management policy and network access. You will, however, be responsible for the costs associated with that storage account. ## Prerequisites * An existing Azure Storage Account and a pre-created Azure File Share. If you need to create a storage account and file share in Azure, see [Create an Azure file share](../storage/files/storage-how-to-create-file-share.md).
-* The [Azure Spring Cloud extension](/cli/azure/azure-cli-extensions-overview) for the Azure CLI
+* The [Azure Spring Apps extension](/cli/azure/azure-cli-extensions-overview) for the Azure CLI
> [!IMPORTANT]
-> If you deployed your Azure Spring Cloud in your own virtual network and you want the storage account to be accessed only from the virtual network, consult the following guidance:
+> If you deployed your Azure Spring Apps in your own virtual network and you want the storage account to be accessed only from the virtual network, consult the following guidance:
> - [Use private endpoints for Azure Storage](../storage/common/storage-private-endpoints.md) > - [Configure Azure Storage firewalls and virtual networks](../storage/common/storage-network-security.md), especially the [Grant access from a virtual network using service endpoint](../storage/common/storage-network-security.md#grant-access-from-a-virtual-network) section
With Bring Your Own Storage, these artifacts are uploaded into a storage account
> [!NOTE] > Updating persistent storage will result in the restart of applications.
-# [Portal](#tab/Azure-portal)
+### [Portal](#tab/Azure-portal)
-Use the following steps to bind an Azure Storage account as a storage resource in your Azure Spring Cloud and create an app with your own persistent storage.
+Use the following steps to bind an Azure Storage account as a storage resource in your Azure Spring Apps instance and create an app with your own persistent storage.
1. Go to the service **Overview** page, then select **Storage** in the left-hand navigation pane.
Use the following steps to bind an Azure Storage account as a storage resource i
| Setting | Value | |--|--|
- | Storage name | The name of the storage resource, which is a service-level resource in Azure Spring Cloud. |
+ | Storage name | The name of the storage resource, which is a service-level resource in Azure Spring Apps. |
| Account name | The name of the storage account. | | Account key | The storage account key. |
Use the following steps to bind an Azure Storage account as a storage resource i
:::image type="content" source="media/how-to-custom-persistent-storage/save-persistent-storage-changes.png" alt-text="Screenshot of Azure portal Persistent Storage section of the Configuration page." lightbox="media/how-to-custom-persistent-storage/save-persistent-storage-changes.png":::
-# [CLI](#tab/Azure-CLI)
+### [CLI](#tab/Azure-CLI)
You can enable your own storage with the Azure CLI by using the following steps.
-1. Use the following command to bind your Azure Storage account as a storage resource in your Azure Spring Cloud instance:
+1. Use the following command to bind your Azure Storage account as a storage resource in your Azure Spring Apps instance:
```azurecli
- az spring-cloud storage add \
+ az spring storage add \
--resource-group <resource-group-name> \
- --service <Azure-Spring-Cloud-instance-name> \
+ --service <Azure-Spring-Apps-instance-name> \
--name <storage-resource-name> \ --storage-type StorageAccount \ --account-name <account-name> \
You can enable your own storage with the Azure CLI by using the following steps.
1. Use the following command to create an app with your own persistent storage. ```azurecli
- az spring-cloud app create \
+ az spring app create \
--resource-group <resource-group-name> \
- --service <Azure-Spring-Cloud-instance-name> \
+ --service <Azure-Spring-Apps-instance-name> \
--name <app-name> \ --persistent-storage <path-to-JSON-file> ```
You can enable your own storage with the Azure CLI by using the following steps.
1. Optionally, add extra persistent storage to an existing app using the following command: ```azurecli
- az spring-cloud app append-persistent-storage \
+ az spring app append-persistent-storage \
--resource-group <resource-group-name> \
- --service <Azure-Spring-Cloud-instance-name> \
+ --service <Azure-Spring-Apps-instance-name> \
--name <app-name> \ --persistent-storage-type AzureFileVolume \ --share-name <Azure-file-share-name> \
You can enable your own storage with the Azure CLI by using the following steps.
1. Optionally, list all existing persistent storage of a specific storage resource using the following command: ```azurecli
- az spring-cloud storage list-persistent-storage \
+ az spring storage list-persistent-storage \
--resource-group <resource-group-name> \
- --service <Azure-Spring-Cloud-instance-name> \
+ --service <Azure-Spring-Apps-instance-name> \
--name <storage-resource-name> ```
You can enable your own storage with the Azure CLI by using the following steps.
## Use best practices
-Use the following best practices when adding your own persistent storage to Azure Spring Cloud.
+Use the following best practices when adding your own persistent storage to Azure Spring Apps.
-* To avoid potential latency issues, place the Azure Spring Cloud instance and the Azure Storage Account in the same Azure region.
+* To avoid potential latency issues, place the Azure Spring Apps instance and the Azure Storage Account in the same Azure region.
* In the Azure Storage Account, avoid regenerating the account key that's being used. The storage account contains two different keys. Use a step-by-step approach to ensure that the persistent storage remains available to the applications during key regeneration.
- For example, assuming that you used key1 to bind a storage account to Azure Spring Cloud, you would use the following steps:
+ For example, assuming that you used key1 to bind a storage account to Azure Spring Apps, you would use the following steps:
1. Regenerate key2. 1. Update the account key of the storage resource to use the regenerated key2.
- 1. Restart the applications that mount the persistent storage from this storage resource. (You can use `az spring-cloud storage list-persistent-storage` to list all related applications.)
+ 1. Restart the applications that mount the persistent storage from this storage resource. (You can use `az spring storage list-persistent-storage` to list all related applications.)
1. Regenerate key1. * If you delete an Azure Storage Account or Azure File Share, remove the corresponding storage resource or persistent storage in the applications to avoid possible errors. ## FAQs
-The following are frequently asked questions (FAQ) about using your own persistent storage with Azure Spring Cloud.
+The following are frequently asked questions (FAQ) about using your own persistent storage with Azure Spring Apps.
* If I have built-in persistent storage enabled, and then I enabled my own storage as extra persistent storage, will my data be migrated into my Storage Account?
The following are frequently asked questions (FAQ) about using your own persiste
* What are the reserved mount paths?
- *These mount paths are reserved by the Azure Spring Cloud service:*
+ *These mount paths are reserved by the Azure Spring Apps service:*
* */tmp* * */persistent*
The following are frequently asked questions (FAQ) about using your own persiste
* I'm using the service endpoint to configure the storage account to allow access only from my own virtual network. Why did I receive *Permission Denied* while trying to mount custom persistent storage to my applications?
- *A service endpoint provides network access on a subnet level only. Be sure you've added both subnets used by the Azure Spring Cloud instance to the scope of the service endpoint.*
+ *A service endpoint provides network access on a subnet level only. Be sure you've added both subnets used by the Azure Spring Apps instance to the scope of the service endpoint.*
## Next steps * [How to use Logback to write logs to custom persistent storage](how-to-write-log-to-custom-persistent-storage.md).
-* [Scale an application in Azure Spring Cloud](how-to-scale-manual.md).
+* [Scale an application in Azure Spring Apps](how-to-scale-manual.md).
spring-cloud How To Deploy In Azure Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-deploy-in-azure-virtual-network.md
Title: "Deploy Azure Spring Cloud in a virtual network"
-description: Deploy Azure Spring Cloud in a virtual network (VNet injection).
+ Title: "Deploy Azure Spring Apps in a virtual network"
+description: Deploy Azure Spring Apps in a virtual network (VNet injection).
Last updated 07/21/2020-+
-# Deploy Azure Spring Cloud in a virtual network
+# Deploy Azure Spring Apps in a virtual network
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Java ✔️ C# **This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This tutorial explains how to deploy an Azure Spring Cloud instance in your virtual network. This deployment is sometimes called VNet injection.
+This tutorial explains how to deploy an Azure Spring Apps instance in your virtual network. This deployment is sometimes called VNet injection.
The deployment enables:
-* Isolation of Azure Spring Cloud apps and service runtime from the internet on your corporate network.
-* Azure Spring Cloud interaction with systems in on-premises data centers or Azure services in other virtual networks.
-* Empowerment of customers to control inbound and outbound network communications for Azure Spring Cloud.
+* Isolation of Azure Spring Apps apps and service runtime from the internet on your corporate network.
+* Azure Spring Apps interaction with systems in on-premises data centers or Azure services in other virtual networks.
+* Empowerment of customers to control inbound and outbound network communications for Azure Spring Apps.
The following video describes how to secure Spring Boot applications using managed virtual networks.
The following video describes how to secure Spring Boot applications using manag
> [!VIDEO https://www.youtube.com/embed/LbHD0jd8DTQ?list=PLPeZXlCR7ew8LlhnSH63KcM0XhMKxT1k_] > [!Note]
-> You can select your Azure virtual network only when you create a new Azure Spring Cloud service instance. You cannot change to use another virtual network after Azure Spring Cloud has been created.
+> You can select your Azure virtual network only when you create a new Azure Spring Apps service instance. You can't switch to a different virtual network after the Azure Spring Apps instance has been created.
## Prerequisites
-Register the Azure Spring Cloud resource provider **Microsoft.AppPlatform** and **Microsoft.ContainerService** according to the instructions in [Register resource provider on Azure portal](../azure-resource-manager/management/resource-providers-and-types.md#azure-portal) or by running the following Azure CLI command:
+Register the Azure Spring Apps resource provider **Microsoft.AppPlatform** and **Microsoft.ContainerService** according to the instructions in [Register resource provider on Azure portal](../azure-resource-manager/management/resource-providers-and-types.md#azure-portal) or by running the following Azure CLI command:
```azurecli az provider register --namespace Microsoft.AppPlatform
az provider register --namespace Microsoft.ContainerService
## Virtual network requirements
-The virtual network to which you deploy your Azure Spring Cloud instance must meet the following requirements:
+The virtual network to which you deploy your Azure Spring Apps instance must meet the following requirements:
-* **Location**: The virtual network must reside in the same location as the Azure Spring Cloud instance.
-* **Subscription**: The virtual network must be in the same subscription as the Azure Spring Cloud instance.
-* **Subnets**: The virtual network must include two subnets dedicated to an Azure Spring Cloud instance:
+* **Location**: The virtual network must reside in the same location as the Azure Spring Apps instance.
+* **Subscription**: The virtual network must be in the same subscription as the Azure Spring Apps instance.
+* **Subnets**: The virtual network must include two subnets dedicated to an Azure Spring Apps instance:
* One for the service runtime. * One for your Spring applications.
- * There's a one-to-one relationship between these subnets and an Azure Spring Cloud instance. Use a new subnet for each service instance you deploy. Each subnet can only include a single service instance.
+ * There's a one-to-one relationship between these subnets and an Azure Spring Apps instance. Use a new subnet for each service instance you deploy. Each subnet can only include a single service instance.
* **Address space**: CIDR blocks up to */28* for both the service runtime subnet and the Spring applications subnet. * **Route table**: By default the subnets do not need existing route tables associated. You can [bring your own route table](#bring-your-own-route-table).
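For orientation, a network that satisfies these requirements could be sketched with the Azure CLI as follows. This is only an illustrative sketch; the virtual network and subnet names match those used later in this article, while the resource group and address ranges are placeholders:

```azurecli
# Create a virtual network with two dedicated subnets:
# one for the service runtime and one for the Spring applications.
az network vnet create \
    --resource-group <resource-group-name> \
    --name azure-spring-apps-vnet \
    --location eastus \
    --address-prefixes 10.1.0.0/16 \
    --subnet-name service-runtime-subnet \
    --subnet-prefixes 10.1.0.0/28

az network vnet subnet create \
    --resource-group <resource-group-name> \
    --vnet-name azure-spring-apps-vnet \
    --name apps-subnet \
    --address-prefixes 10.1.1.0/24
```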
-The following procedures describe setup of the virtual network to contain the instance of Azure Spring Cloud.
+The following procedures describe setup of the virtual network to contain the instance of Azure Spring Apps.
## Create a virtual network #### [Portal](#tab/azure-portal)
-If you already have a virtual network to host an Azure Spring Cloud instance, skip steps 1, 2, and 3. You can start from step 4 to prepare subnets for the virtual network.
+If you already have a virtual network to host an Azure Spring Apps instance, skip steps 1, 2, and 3. You can start from step 4 to prepare subnets for the virtual network.
1. On the Azure portal menu, select **Create a resource**. From Azure Marketplace, select **Networking** > **Virtual network**.
If you already have a virtual network to host an Azure Spring Cloud instance, sk
|--|--| | Subscription | Select your subscription. | | Resource group | Select your resource group, or create a new one. |
- | Name | Enter **azure-spring-cloud-vnet**. |
+ | Name | Enter **azure-spring-apps-vnet**. |
| Location | Select **East US**. | 1. Select **Next: IP Addresses**.
If you already have a virtual network to host an Azure Spring Cloud instance, sk
#### [CLI](#tab/azure-CLI)
-If you already have a virtual network to host an Azure Spring Cloud instance, skip steps 1, 2, 3 and 4. You can start from step 5 to prepare subnets for the virtual network.
+If you already have a virtual network to host an Azure Spring Apps instance, skip steps 1, 2, 3 and 4. You can start from step 5 to prepare subnets for the virtual network.
-1. Define variables for your subscription, resource group, and Azure Spring Cloud instance. Customize the values based on your real environment.
+1. Define variables for your subscription, resource group, and Azure Spring Apps instance. Customize the values based on your real environment.
```azurecli SUBSCRIPTION='subscription-id' RESOURCE_GROUP='my-resource-group' LOCATION='eastus' SPRING_CLOUD_NAME='spring-cloud-name'
- VIRTUAL_NETWORK_NAME='azure-spring-cloud-vnet'
+ VIRTUAL_NETWORK_NAME='azure-spring-apps-vnet'
``` 1. Sign in to the Azure CLI and choose your active subscription.
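   A minimal sketch of that sign-in step, reusing the `SUBSCRIPTION` variable defined above:

   ```azurecli
   # Sign in and select the subscription defined in the SUBSCRIPTION variable.
   az login
   az account set --subscription ${SUBSCRIPTION}
   ```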
If you already have a virtual network to host an Azure Spring Cloud instance, sk
## Grant service permission to the virtual network
-Azure Spring Cloud requires **Owner** permission to your virtual network, in order to grant a dedicated and dynamic service principal on the virtual network for further deployment and maintenance.
+Azure Spring Apps requires **Owner** permission to your virtual network, in order to grant a dedicated and dynamic service principal on the virtual network for further deployment and maintenance.
#### [Portal](#tab/azure-portal)
-Select the virtual network **azure-spring-cloud-vnet** you previously created.
+Select the virtual network **azure-spring-apps-vnet** you previously created.
1. Select **Access control (IAM)**, and then select **Add** > **Add role assignment**. ![Screenshot that shows the Access control screen.](./media/spring-cloud-v-net-injection/access-control.png)
-1. Assign the *Owner* role to the **Azure Spring Cloud Resource Provider**. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md#step-2-open-the-add-role-assignment-page).
+1. Assign the *Owner* role to the **Azure Spring Apps Resource Provider**. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md#step-2-open-the-add-role-assignment-page).
![Screenshot that shows owner assignment to resource provider.](./media/spring-cloud-v-net-injection/assign-owner-resource-provider.png)
az role assignment create \
-## Deploy an Azure Spring Cloud instance
+## Deploy an Azure Spring Apps instance
#### [Portal](#tab/azure-portal)
-To deploy an Azure Spring Cloud instance in the virtual network:
+To deploy an Azure Spring Apps instance in the virtual network:
1. Open the [Azure portal](https://portal.azure.com).
-1. In the top search box, search for **Azure Spring Cloud**. Select **Azure Spring Cloud** from the result.
+1. In the top search box, search for **Azure Spring Apps**. Select **Azure Spring Apps** from the result.
-1. On the **Azure Spring Cloud** page, select **Add**.
+1. On the **Azure Spring Apps** page, select **Add**.
-1. Fill out the form on the Azure Spring Cloud **Create** page.
+1. Fill out the form on the Azure Spring Apps **Create** page.
1. Select the same resource group and region as the virtual network.
-1. For **Name** under **Service Details**, select **azure-spring-cloud-vnet**.
+1. For **Name** under **Service Details**, select **azure-spring-apps-vnet**.
1. Select the **Networking** tab, and select the following values: | Setting | Value | ||-| | Deploy in your own virtual network | Select **Yes**. |
- | Virtual network | Select **azure-spring-cloud-vnet**. |
+ | Virtual network | Select **azure-spring-apps-vnet**. |
| Service runtime subnet | Select **service-runtime-subnet**. | | Spring apps subnet | Select **apps-subnet**. |
- ![Screenshot that shows the Networking tab on the Azure Spring Cloud Create page.](./media/spring-cloud-v-net-injection/creation-blade-networking-tab.png)
+ ![Screenshot that shows the Networking tab on the Azure Spring Apps Create page.](./media/spring-cloud-v-net-injection/creation-blade-networking-tab.png)
1. Select **Review and create**.
To deploy an Azure Spring Cloud instance in the virtual network:
#### [CLI](#tab/azure-CLI)
-To deploy an Azure Spring Cloud instance in the virtual network:
+To deploy an Azure Spring Apps instance in the virtual network:
-Create your Azure Spring Cloud instance by specifying the virtual network and subnets you just created,
+Create your Azure Spring Apps instance by specifying the virtual network and subnets you just created,
```azurecli
- az spring-cloud create \
+ az spring create \
--resource-group "$RESOURCE_GROUP" \ --name "$SPRING_CLOUD_NAME" \ --vnet $VIRTUAL_NETWORK_NAME \
Create your Azure Spring Cloud instance by specifying the virtual network and su
-After the deployment, two additional resource groups will be created in your subscription to host the network resources for the Azure Spring Cloud instance. Go to **Home**, and then select **Resource groups** from the top menu items to find the following new resource groups.
+After the deployment, two additional resource groups will be created in your subscription to host the network resources for the Azure Spring Apps instance. Go to **Home**, and then select **Resource groups** from the top menu items to find the following new resource groups.
The resource group named **ap-svc-rt_{service instance name}_{service instance region}** contains network resources for the service runtime of the service instance.
Those network resources are connected to your virtual network created in the pre
![Screenshot that shows the virtual network with connected devices.](./media/spring-cloud-v-net-injection/vnet-with-connected-device.png) > [!Important]
- > The resource groups are fully managed by the Azure Spring Cloud service. Do *not* manually delete or modify any resource inside.
+ > The resource groups are fully managed by the Azure Spring Apps service. Do *not* manually delete or modify any resource inside.
## Using smaller subnet ranges
-This table shows the maximum number of app instances Azure Spring Cloud supports using smaller subnet ranges.
+This table shows the maximum number of app instances Azure Spring Apps supports using smaller subnet ranges.
| App subnet CIDR | Total IPs | Available IPs | Maximum app instances | | | | - | |
This table shows the maximum number of app instances Azure Spring Cloud supports
| /25 | 128 | 120 | <p>App with 0.5 core: 500 <br/> App with one core: 500<br/> App with two cores: 500<br/> App with three cores: 480<br> App with four cores: 360</p> | | /24 | 256 | 248 | <p>App with 0.5 core: 500 <br/> App with one core: 500<br/> App with two cores: 500<br/> App with three cores: 500<br/> App with four cores: 500</p> |
-For subnets, five IP addresses are reserved by Azure, and at least three IP addresses are required by Azure Spring Cloud. At least eight IP addresses are required, so /29 and /30 are nonoperational.
+For subnets, five IP addresses are reserved by Azure, and at least three IP addresses are required by Azure Spring Apps. This means at least eight IP addresses are required, so /29 and /30 subnets are nonoperational.
For a service runtime subnet, the minimum size is /28. This size has no bearing on the number of app instances. ## Bring your own route table
-Azure Spring Cloud supports using existing subnets and route tables.
+Azure Spring Apps supports using existing subnets and route tables.
-If your custom subnets do not contain route tables, Azure Spring Cloud creates them for each of the subnets and adds rules to them throughout the instance lifecycle. If your custom subnets contain route tables, Azure Spring Cloud acknowledges the existing route tables during instance operations and adds/updates and/or rules accordingly for operations.
+If your custom subnets do not contain route tables, Azure Spring Apps creates them for each of the subnets and adds rules to them throughout the instance lifecycle. If your custom subnets do contain route tables, Azure Spring Apps acknowledges the existing route tables during instance operations and adds or updates rules accordingly for those operations.
> [!Warning]
-> Custom rules can be added to the custom route tables and updated. However, rules are added by Azure Spring Cloud and these must not be updated or removed. Rules such as 0.0.0.0/0 must always exist on a given route table and map to the target of your internet gateway, such as an NVA or other egress gateway. Use caution when updating rules when only your custom rules are being modified.
+> Custom rules can be added to the custom route tables and updated. However, rules added by Azure Spring Apps must not be updated or removed. Rules such as 0.0.0.0/0 must always exist on a given route table and map to the target of your internet gateway, such as an NVA or other egress gateway. Use caution when updating rules to ensure that only your custom rules are modified.
### Route table requirements The route tables to which your custom vnet is associated must meet the following requirements:
-* You can associate your Azure route tables with your vnet only when you create a new Azure Spring Cloud service instance. You cannot change to use another route table after Azure Spring Cloud has been created.
+* You can associate your Azure route tables with your vnet only when you create a new Azure Spring Apps service instance. You can't switch to a different route table after the Azure Spring Apps instance has been created.
* Both the Spring application subnet and the service runtime subnet must associate with different route tables or neither of them.
-* Permissions must be assigned before instance creation. Be sure to grant **Azure Spring Cloud Resource Provider** the *Owner* permission to your route tables.
+* Permissions must be assigned before instance creation. Be sure to grant **Azure Spring Apps Resource Provider** the *Owner* permission to your route tables.
* The associated route table resource cannot be updated after cluster creation. While the route table resource cannot be updated, custom rules can be modified on the route table. * You cannot reuse a route table with multiple instances due to potential conflicting routing rules. ## Next steps
-* [Troubleshooting Azure Spring Cloud in VNET](troubleshooting-vnet.md)
-* [Customer Responsibilities for Running Azure Spring Cloud in VNET](vnet-customer-responsibilities.md)
+* [Troubleshooting Azure Spring Apps in VNET](troubleshooting-vnet.md)
+* [Customer Responsibilities for Running Azure Spring Apps in VNET](vnet-customer-responsibilities.md)
spring-cloud How To Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-deploy-powershell.md
Title: Create and deploy applications in Azure Spring Cloud by using PowerShell
-description: How to create and deploy applications in Azure Spring Cloud by using PowerShell
+ Title: Create and deploy applications in Azure Spring Apps by using PowerShell
+description: How to create and deploy applications in Azure Spring Apps by using PowerShell
ms.devlang: azurepowershell Last updated 2/15/2022-+ # Create and deploy applications by using PowerShell
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+ **This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This article describes how you can create an instance of Azure Spring Cloud by using the [Az.SpringCloud](/powershell/module/Az.SpringCloud) PowerShell module.
+This article describes how you can create an instance of Azure Spring Apps by using the [Az.SpringCloud](/powershell/module/Az.SpringCloud) PowerShell module.
## Requirements
New-AzResourceGroup -Name <resource group name> -Location eastus
## Provision a new instance
-To create a new instance of Azure Spring Cloud, you use the
+To create a new instance of Azure Spring Apps, you use the
[New-AzSpringCloud](/powershell/module/az.springcloud/new-azspringcloud) cmdlet. The following
-example creates an Azure Spring Cloud service, with the name that you specified in the resource group you created previously.
+example creates an Azure Spring Apps service with the name that you specified, in the resource group you created previously.
```azurepowershell-interactive New-AzSpringCloud -ResourceGroupName <resource group name> -name <service instance name> -Location eastus
New-AzSpringCloud -ResourceGroupName <resource group name> -name <service instan
## Create a new application To create a new app, you use the
-[New-AzSpringCloudApp](/powershell/module/az.springcloud/new-azspringcloudapp) cmdlet. The following example creates an app in Azure Spring Cloud named `gateway`.
+[New-AzSpringCloudApp](/powershell/module/az.springcloud/new-azspringcloudapp) cmdlet. The following example creates an app in Azure Spring Apps named `gateway`.
```azurepowershell-interactive New-AzSpringCloudApp -ResourceGroupName <resource group name> -ServiceName <service instance name> -AppName gateway
New-AzSpringCloudApp -ResourceGroupName <resource group name> -ServiceName <serv
To create a new app deployment, you use the [New-AzSpringCloudAppDeployment](/powershell/module/az.springcloud/new-azspringcloudappdeployment)
-cmdlet. The following example creates an app deployment in Azure Spring Cloud named `default`, for the `gateway` app.
+cmdlet. The following example creates an app deployment in Azure Spring Apps named `default`, for the `gateway` app.
```azurepowershell-interactive New-AzSpringCloudAppDeployment -ResourceGroupName <resource group name> -Name <service instance name> -AppName gateway -DeploymentName default
New-AzSpringCloudAppDeployment -ResourceGroupName <resource group name> -Name <s
## Get a service and its properties
-To get an Azure Spring Cloud service and its properties, you use the
+To get an Azure Spring Apps service and its properties, you use the
[Get-AzSpringCloud](/powershell/module/az.springcloud/get-azspringcloud) cmdlet. The following
-example retrieves information about the specified Azure Spring Cloud service.
+example retrieves information about the specified Azure Spring Apps service.
```azurepowershell-interactive Get-AzSpringCloud -ResourceGroupName <resource group name> -ServiceName <service instance name>
Get-AzSpringCloud -ResourceGroupName <resource group name> -ServiceName <service
## Get an application
-To get an app and its properties in Azure Spring Cloud, you use the
+To get an app and its properties in Azure Spring Apps, you use the
[Get-AzSpringCloudApp](/powershell/module/az.springcloud/get-azspringcloudapp) cmdlet. The following example retrieves information about the app `gateway`. ```azurepowershell-interactive
Get-AzSpringCloudApp -ResourceGroupName <resource group name> -ServiceName <serv
## Get an app deployment
-To get an app deployment and its properties in Azure Spring Cloud, you use the
-[Get-AzSpringCloudAppDeployment](/powershell/module/az.springcloud/get-azspringcloudappdeployment) cmdlet. The following example retrieves information about the `default` Azure Spring Cloud deployment.
+To get an app deployment and its properties in Azure Spring Apps, you use the
+[Get-AzSpringCloudAppDeployment](/powershell/module/az.springcloud/get-azspringcloudappdeployment) cmdlet. The following example retrieves information about the `default` Azure Spring Apps deployment.
```azurepowershell-interactive Get-AzSpringCloudAppDeployment -ResourceGroupName <resource group name> -ServiceName <service instance name> -AppName gateway -DeploymentName default
If the resources created in this article aren't needed, you can delete them by r
### Delete an app deployment
-To remove an app deployment in Azure Spring Cloud, you use the
-[Remove-AzSpringCloudAppDeployment](/powershell/module/az.springcloud/remove-azspringcloudappdeployment) cmdlet. The following example deletes an app deployed in Azure Spring Cloud named `default`, for the specified service and app.
+To remove an app deployment in Azure Spring Apps, you use the
+[Remove-AzSpringCloudAppDeployment](/powershell/module/az.springcloud/remove-azspringcloudappdeployment) cmdlet. The following example deletes the deployment named `default` for the specified service and app.
```azurepowershell-interactive Remove-AzSpringCloudAppDeployment -ResourceGroupName <resource group name> -ServiceName <service instance name> -AppName gateway -DeploymentName default
Remove-AzSpringCloudAppDeployment -ResourceGroupName <resource group name> -Serv
### Delete an app
-To remove an app in Azure Spring Cloud, you use the
+To remove an app in Azure Spring Apps, you use the
[Remove-AzSpringCloudApp](/powershell/module/Az.SpringCloud/remove-azspringcloudapp) cmdlet. The following example deletes the `gateway` app in the specified service and resource group. ```azurepowershell
Remove-AzSpringCloudApp -ResourceGroupName <resource group name> -ServiceName <s
### Delete a service
-To remove an Azure Spring Cloud service, you use the
-[Remove-AzSpringCloud](/powershell/module/Az.SpringCloud/remove-azspringcloud) cmdlet. The following example deletes the specified Azure Spring Cloud service.
+To remove an Azure Spring Apps service, you use the
+[Remove-AzSpringCloud](/powershell/module/Az.SpringCloud/remove-azspringcloud) cmdlet. The following example deletes the specified Azure Spring Apps service.
```azurepowershell Remove-AzSpringCloud -ResourceGroupName <resource group name> -ServiceName <service instance name>
Remove-AzResourceGroup -Name <resource group name>
## Next steps
-[Azure Spring Cloud developer resources](./resources.md)
+[Azure Spring Apps developer resources](./resources.md)
spring-cloud How To Deploy With Custom Container Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-deploy-with-custom-container-image.md
Title: How to deploy applications in Azure Spring Cloud with a custom container image (Preview)
-description: How to deploy applications in Azure Spring Cloud with a custom container image
+ Title: How to deploy applications in Azure Spring Apps with a custom container image (Preview)
+description: How to deploy applications in Azure Spring Apps with a custom container image
+ Last updated 4/28/2022 # Deploy an application with a custom container image (Preview)
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+ **This article applies to:** ✔️ Standard tier ✔️ Enterprise tier
-This article explains how to deploy Spring Boot applications in Azure Spring Cloud using a custom container image. Deploying an application with a custom container supports most features as when deploying a JAR application. Other Java and non-Java applications can also be deployed with the container image.
+This article explains how to deploy Spring Boot applications in Azure Spring Apps using a custom container image. Deploying an application with a custom container supports most of the same features as deploying a JAR application. Other Java and non-Java applications can also be deployed with a container image.
## Prerequisites
This article explains how to deploy Spring Boot applications in Azure Spring Clo
To deploy an application to a custom container image, use the following steps:
-# [Azure CLI](#tab/azure-cli)
+### [Azure CLI](#tab/azure-cli)
To deploy a container image, use one of the following commands: * To deploy a container image to the public Docker Hub to an app, use the following command: ```azurecli
- az spring-cloud app deploy \
+ az spring app deploy \
--resource-group <your-resource-group> \ --name <your-app-name> \ --container-image <your-container-image> \
To deploy a container image, use one of the following commands:
* To deploy a container image from ACR to an app, or from another private registry to an app, use the following command: ```azurecli
- az spring-cloud app deploy \
+ az spring app deploy \
--resource-group <your-resource-group> \ --name <your-app-name> \ --container-image <your-container-image> \
To disable listening on a port for images that aren't web applications, add the
--disable-probe true ```
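For context, a complete deploy command that includes this flag might look like the following sketch. The resource group, service, app, and image names are placeholders, not values taken from this article.

```azurecli
# Sketch only: deploy a container image and disable the probe for a workload that doesn't listen on a port.
az spring app deploy \
    --resource-group <your-resource-group> \
    --service <your-service-name> \
    --name <your-app-name> \
    --container-image <your-container-image> \
    --disable-probe true
```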
-# [Portal](#tab/azure-portal)
+### [Portal](#tab/azure-portal)
1. Open the [Azure portal](https://portal.azure.com).
-1. Open your existing Spring Cloud service instance.
+1. Open your existing Azure Spring Apps service instance.
1. Select **Apps** from the left menu, then select **Create App**. 1. Name your app, and in the **Runtime platform** pulldown list, select **Custom Container**.
AppDynamics:
To view the console logs of your container application, use the following CLI command: ```azurecli
-az spring-cloud app logs \
+az spring app logs \
--resource-group <your-resource-group> \ --name <your-app-name> \ --service <your-service-name> \
We recommend that you use Microsoft Defender for Cloud with ACR to prevent your
You can switch the deployment type directly by redeploying using the following command: ```azurecli
-az spring-cloud app deploy \
+az spring app deploy \
--resource-group <your-resource-group> \ --name <your-app-name> \ --container-image <your-container-image> \
az spring-cloud app deploy \
You can create another deployment from an existing JAR deployment by using the following command: ```azurecli
-az spring-cloud app deployment create \
+az spring app deployment create \
--resource-group <your-resource-group> \ --name <your-deployment-name> \ --app <your-app-name> \
spring-cloud How To Distributed Tracing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-distributed-tracing.md
Title: "Use Distributed Tracing with Azure Spring Cloud"
-description: Learn how to use Spring Cloud's distributed tracing through Azure Application Insights
+ Title: "Use Distributed Tracing with Azure Spring Apps"
+description: Learn how to use Azure Spring Apps distributed tracing through Azure Application Insights
Last updated 10/06/2019 -+ zone_pivot_groups: programming-languages-spring-cloud
-# Use distributed tracing with Azure Spring Cloud (deprecated)
+# Use distributed tracing with Azure Spring Apps (deprecated)
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
> [!NOTE]
-> Distributed Tracing is deprecated. For more information, see [Application Insights Java In-Process Agent in Azure Spring Cloud](./how-to-application-insights.md).
+> Distributed Tracing is deprecated. For more information, see [Application Insights Java In-Process Agent in Azure Spring Apps](./how-to-application-insights.md).
-With the distributed tracing tools in Azure Spring Cloud, you can easily debug and monitor complex issues. Azure Spring Cloud integrates [Spring Cloud Sleuth](https://spring.io/projects/spring-cloud-sleuth) with Azure's [Application Insights](../azure-monitor/app/app-insights-overview.md). This integration provides powerful distributed tracing capability from the Azure portal.
+With the distributed tracing tools in Azure Spring Apps, you can easily debug and monitor complex issues. Azure Spring Apps integrates [Spring Cloud Sleuth](https://spring.io/projects/spring-cloud-sleuth) with Azure's [Application Insights](../azure-monitor/app/app-insights-overview.md). This integration provides powerful distributed tracing capability from the Azure portal.
::: zone pivot="programming-language-csharp" In this article, you learn how to enable a .NET Core Steeltoe app to use distributed tracing. ## Prerequisites
-To follow these procedures, you need a Steeltoe app that is already [prepared for deployment to Azure Spring Cloud](how-to-prepare-app-deployment.md).
+To follow these procedures, you need a Steeltoe app that is already [prepared for deployment to Azure Spring Apps](how-to-prepare-app-deployment.md).
## Dependencies
For Steeltoe 3.0.0, add the following NuGet package:
## Update configuration
-Add the following settings to the configuration source that will be used when the app runs in Azure Spring Cloud:
+Add the following settings to the configuration source that will be used when the app runs in Azure Spring Apps:
1. Set `management.tracing.alwaysSample` to true.
In this article, you learn how to:
## Prerequisites
-To follow these procedures, you need an Azure Spring Cloud service that is already provisioned and running. Complete the [Deploy your first Spring Boot app in Azure Spring Cloud](./quickstart.md) quickstart to provision and run an Azure Spring Cloud service.
+To follow these procedures, you need an Azure Spring Apps service that is already provisioned and running. Complete the [Deploy your first Spring Boot app in Azure Spring Apps](./quickstart.md) quickstart to provision and run an Azure Spring Apps service.
## Add dependencies
To follow these procedures, you need an Azure Spring Cloud service that is alrea
After this change, the Zipkin sender can send to the web.
-1. Skip this step if you followed our [guide to preparing an application in Azure Spring Cloud](how-to-prepare-app-deployment.md). Otherwise, go to your local development environment and edit your pom.xml file to include the following Spring Cloud Sleuth dependency:
+1. Skip this step if you followed our [guide to preparing an application in Azure Spring Apps](how-to-prepare-app-deployment.md). Otherwise, go to your local development environment and edit your pom.xml file to include the following Spring Cloud Sleuth dependency:
* Spring boot version < 2.4.x.
To follow these procedures, you need an Azure Spring Cloud service that is alrea
</dependencies> ```
-1. Build and deploy again for your Azure Spring Cloud service to reflect these changes.
+1. Build and deploy again for your Azure Spring Apps service to reflect these changes.
## Modify the sample rate
If you have already built and deployed an application, you can modify the sample
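As one possible sketch of adjusting sampling on an app that's already deployed, the Spring Cloud Sleuth probability property can be passed through JVM options without rebuilding; the resource names are placeholders and `0.5` is just an example value.

```azurecli
# Sketch only: pass the Sleuth sampling probability as a system property on an existing deployment.
az spring app update \
    --resource-group <your-resource-group-name> \
    --service <your-Azure-Spring-Apps-instance-name> \
    --name <your-app-name> \
    --jvm-options='-Dspring.sleuth.sampler.probability=0.5'
```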
## Enable Application Insights
-1. Go to your Azure Spring Cloud service page in the Azure portal.
+1. Go to your Azure Spring Apps service page in the Azure portal.
1. On the **Monitoring** page, select **Distributed Tracing**. 1. Select **Edit setting** to edit or add a new setting. 1. Create a new Application Insights query, or select an existing one.
Application Insights provides monitoring capabilities in addition to the applica
## Disable Application Insights
-1. Go to your Azure Spring Cloud service page in the Azure portal.
+1. Go to your Azure Spring Apps service page in the Azure portal.
1. On **Monitoring**, select **Distributed Tracing**. 1. Select **Disable** to disable Application Insights. ## Next steps
-In this article, you learned how to enable and understand distributed tracing in Azure Spring Cloud. To learn about binding services to an application, see [Bind an Azure Cosmos DB database to an application in Azure Spring Cloud](./how-to-bind-cosmos.md).
+In this article, you learned how to enable and understand distributed tracing in Azure Spring Apps. To learn about binding services to an application, see [Bind an Azure Cosmos DB database to an application in Azure Spring Apps](./how-to-bind-cosmos.md).
spring-cloud How To Dump Jvm Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-dump-jvm-options.md
Title: Use the diagnostic settings of JVM options for advanced troubleshooting in Azure Spring Cloud
+ Title: Use the diagnostic settings of JVM options for advanced troubleshooting in Azure Spring Apps
description: Describes several best practices with JVM configuration to set heap dump, JFR, and GC logs. Last updated 01/21/2022-+
-# Use the diagnostic settings of JVM options for advanced troubleshooting in Azure Spring Cloud
+# Use the diagnostic settings of JVM options for advanced troubleshooting in Azure Spring Apps
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Java ❌ C# **This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This article shows you how to use diagnostic settings through JVM options to conduct advanced troubleshooting in Azure Spring Cloud.
+This article shows you how to use diagnostic settings through JVM options to conduct advanced troubleshooting in Azure Spring Apps.
-There are several JVM-based application startup parameters related to heap dump, Java Flight Recorder (JFR), and garbage collection (GC) logs. In Azure Spring Cloud, we support JVM configuration using JVM options.
+There are several JVM-based application startup parameters related to heap dump, Java Flight Recorder (JFR), and garbage collection (GC) logs. In Azure Spring Apps, we support JVM configuration using JVM options.
-For more information on configuring JVM-based application startup parameters, see [az spring-cloud app deployment](/cli/azure/spring-cloud/app/deployment) in the Azure CLI reference documentation. The following sections provide several examples of useful values for the `--jvm-options` parameter.
+For more information on configuring JVM-based application startup parameters, see [az spring app deployment](/cli/azure/spring/app/deployment) in the Azure CLI reference documentation. The following sections provide several examples of useful values for the `--jvm-options` parameter.
## Prerequisites
-* A deployed Azure Spring Cloud service instance. Follow our [quickstart on deploying an app via the Azure CLI](./quickstart.md) to get started.
+* A deployed Azure Spring Apps service instance. Follow our [quickstart on deploying an app via the Azure CLI](./quickstart.md) to get started.
* At least one application already created in your service instance.
-* Your own persistent storage as described in [How to enable your own persistent storage in Azure Spring Cloud](how-to-custom-persistent-storage.md). This storage is used to save generated diagnostic files. The paths you provide in the parameter values below should be under the mount path of the persistent storage bound to your app. If you want to use a path under the mount path, be sure to create the subpath beforehand.
+* Your own persistent storage as described in [How to enable your own persistent storage in Azure Spring Apps](how-to-custom-persistent-storage.md). This storage is used to save generated diagnostic files. The paths you provide in the parameter values below should be under the mount path of the persistent storage bound to your app. If you want to use a path under the mount path, be sure to create the subpath beforehand.
## Generate a heap dump when out of memory
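As a sketch, the standard HotSpot flags below produce a heap dump on `OutOfMemoryError`; the dump path is a placeholder and must point under the mount path of the persistent storage bound to your app.

```azurecli
--jvm-options='-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=<mount-path-of-persistent-storage>/dumps'
```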
Use the following `--jvm-options` parameter to generate a JFR file. For more in
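As an illustrative sketch, the standard JDK flight-recorder flag can start a timed recording; the file name and path are placeholders under your persistent storage mount.

```azurecli
--jvm-options='-XX:StartFlightRecording=duration=60s,filename=<mount-path-of-persistent-storage>/recording.jfr'
```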
## Configure the path for generated files
-To ensure that you can access your files, be sure that the target path of your generated file is in the persistent storage bound to your app. For example, you can use JSON similar to the following example when you create your persistent storage in Azure Spring Cloud.
+To ensure that you can access your files, be sure that the target path of your generated file is in the persistent storage bound to your app. For example, you can use JSON similar to the following example when you create your persistent storage in Azure Spring Apps.
```json {
To ensure that you can access your files, be sure that the target path of your g
Alternately, you can use the following command to append to persistent storage. ```azurecli
-az spring-cloud app append-persistent-storage \
+az spring app append-persistent-storage \
--resource-group <resource-group-name> \
- --service <Azure-Spring-Cloud-instance-name> \
+ --service <Azure-Spring-Apps-instance-name> \
--name <app-name> \ --persistent-storage-type AzureFileVolume \ --storage-name <storage-resource-name> \
az spring-cloud app append-persistent-storage \
## Next steps
-* [Capture heap dump and thread dump manually and use Java Flight Recorder in Azure Spring Cloud](how-to-capture-dumps.md)
+* [Capture heap dump and thread dump manually and use Java Flight Recorder in Azure Spring Apps](how-to-capture-dumps.md)
spring-cloud How To Dynatrace One Agent Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-dynatrace-one-agent-monitor.md
Title: "How to monitor Spring Boot apps with Dynatrace Java OneAgent"
-description: How to use Dynatrace Java OneAgent to monitor Spring Boot applications in Azure Spring Cloud
+description: How to use Dynatrace Java OneAgent to monitor Spring Boot applications in Azure Spring Apps
Last updated 08/31/2021-+ ms.devlang: azurecli # How to monitor Spring Boot apps with Dynatrace Java OneAgent
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+ **This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This article shows you how to use Dynatrace OneAgent to monitor Spring Boot applications in Azure Spring Cloud.
+This article shows you how to use Dynatrace OneAgent to monitor Spring Boot applications in Azure Spring Apps.
With the Dynatrace OneAgent, you can:
The following video introduces Dynatrace OneAgent.
The following sections describe how to activate Dynatrace OneAgent.
-### Prepare your Azure Spring Cloud environment
+### Prepare your Azure Spring Apps environment
-1. Create an instance of Azure Spring Cloud.
+1. Create an instance of Azure Spring Apps.
1. Create an application that you want to report to Dynatrace by running the following command. Replace the placeholders *\<...>* with your own values. ```azurecli
- az spring-cloud app create \
+ az spring app create \
--resource-group <your-resource-group-name> \
- --service <your-Azure-Spring-Cloud-name> \
+ --service <your-Azure-Spring-Apps-name> \
--name <your-application-name> \ --is-public true ``` ### Determine the values for the required environment variables
-To activate Dynatrace OneAgent on your Azure Spring Cloud instance, you need to configure four environment variables: `DT_TENANT`, `DT_TENANTTOKEN`, `DT_CONNECTION_POINT`, and `DT_CLUSTER_ID`. For more information, see [Integrate OneAgent with Azure Spring Cloud](https://www.dynatrace.com/support/help/shortlink/azure-spring).
+To activate Dynatrace OneAgent on your Azure Spring Apps instance, you need to configure four environment variables: `DT_TENANT`, `DT_TENANTTOKEN`, `DT_CONNECTION_POINT`, and `DT_CLUSTER_ID`. For more information, see [Integrate OneAgent with Azure Spring Apps](https://www.dynatrace.com/support/help/shortlink/azure-spring).
For applications with multiple instances, Dynatrace has several ways to group them. `DT_CLUSTER_ID` is one of the ways. For more information, see [Process group detection](https://www.dynatrace.com/support/help/how-to-use-dynatrace/process-groups/configuration/pg-detection).
You can add the environment variable key/value pairs to your application using e
To add the key/value pairs using the Azure CLI, run the following command, replacing the placeholders *\<...>* with the values determined in the previous steps. ```azurecli
-az spring-cloud app deploy \
+az spring app deploy \
--resource-group <your-resource-group-name> \
- --service <your-Azure-Spring-Cloud-name> \
+ --service <your-Azure-Spring-Apps-name> \
--name <your-application-name> \ --jar-path app.jar \ --env \
To add the key/value pairs using the Azure portal, use the following steps:
1. Navigate to the list of your existing applications.
- :::image type="content" source="media/dynatrace-oneagent/existing-applications.png" alt-text="Screenshot of the Azure portal showing the Azure Spring Cloud Apps section." lightbox="media/dynatrace-oneagent/existing-applications.png":::
+ :::image type="content" source="media/dynatrace-oneagent/existing-applications.png" alt-text="Screenshot of the Azure portal showing the Apps section of an Azure Spring Apps instance." lightbox="media/dynatrace-oneagent/existing-applications.png":::
1. Select an application to navigate to the **Overview** page of the application.
Using Terraform or an Azure Resource Manager template (ARM template), you can al
### Automate provisioning using Terraform
-To configure the environment variables in a Terraform template, add the following code to the template, replacing the *\<...>* placeholders with your own values. For more information, see [Manages an Active Azure Spring Cloud Deployment](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/spring_cloud_active_deployment).
+To configure the environment variables in a Terraform template, add the following code to the template, replacing the *\<...>* placeholders with your own values. For more information, see [Manages an Active Azure Spring Apps Deployment](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/spring_cloud_active_deployment).
```terraform environment_variables = {
You can find **Backtrace** from **Databases/Details/Backtrace**:
## View Dynatrace OneAgent logs
-By default, Azure Spring Cloud will print the *info* level logs of the Dynatrace OneAgent to `STDOUT`. The logs will be mixed with the application logs. You can find the explicit agent version from the application logs.
+By default, Azure Spring Apps prints the *info*-level logs of the Dynatrace OneAgent to `STDOUT`. These logs are mixed with the application logs, and you can find the exact agent version in the application logs.
You can also get the logs of the Dynatrace agent from the following locations:
-* Azure Spring Cloud logs
-* Azure Spring Cloud Application Insights
-* Azure Spring Cloud LogStream
+* Azure Spring Apps logs
+* Azure Spring Apps Application Insights
+* Azure Spring Apps LogStream
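For the LogStream option above, one way to stream the mixed application and agent output is the CLI log command shown in this sketch; all names are placeholders.

```azurecli
# Sketch only: stream console logs, which include the Dynatrace OneAgent info-level entries.
az spring app logs \
    --resource-group <your-resource-group-name> \
    --service <your-Azure-Spring-Apps-name> \
    --name <your-application-name> \
    --follow
```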
You can apply some environment variables provided by Dynatrace to configure logging for the Dynatrace OneAgent. For example, `DT_LOGLEVELCON` controls the level of logs. > [!CAUTION]
-> We strongly recommend that you do not override the default logging behavior provided by Azure Spring Cloud for Dynatrace. If you do, the logging scenarios above will be blocked, and the log file(s) may be lost. For example, you should not output the `DT_LOGLEVELFILE` environment variable to your applications.
+> We strongly recommend that you do not override the default logging behavior provided by Azure Spring Apps for Dynatrace. If you do, the logging scenarios above will be blocked and the log files may be lost. For example, you should not pass the `DT_LOGLEVELFILE` environment variable to your applications.
## Dynatrace OneAgent upgrade
The Dynatrace OneAgent auto-upgrade is disabled and will be upgraded quarterly w
## VNet injection instance outbound traffic configuration
-For a VNet injection instance of Azure Spring Cloud, you need to make sure the outbound traffic for Dynatrace communication endpoints is configured correctly for Dynatrace OneAgent. For information about how to get `communicationEndpoints`, see [Deployment API - GET connectivity information for OneAgent](https://www.dynatrace.com/support/help/dynatrace-api/environment-api/deployment/oneagent/get-connectivity-info/). For more information, see [Customer responsibilities for running Azure Spring Cloud in VNET](vnet-customer-responsibilities.md).
+For a VNet injection instance of Azure Spring Apps, you need to make sure the outbound traffic for Dynatrace communication endpoints is configured correctly for Dynatrace OneAgent. For information about how to get `communicationEndpoints`, see [Deployment API - GET connectivity information for OneAgent](https://www.dynatrace.com/support/help/dynatrace-api/environment-api/deployment/oneagent/get-connectivity-info/). For more information, see [Customer responsibilities for running Azure Spring Apps in VNET](vnet-customer-responsibilities.md).
## Dynatrace support model
For information about limitations when deploying Dynatrace OneAgent in applicati
## Next steps
-* [Use distributed tracing with Azure Spring Cloud](how-to-distributed-tracing.md)
+* [Use distributed tracing with Azure Spring Apps](how-to-distributed-tracing.md)
spring-cloud How To Elastic Apm Java Agent Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-elastic-apm-java-agent-monitor.md
Title: How to monitor Spring Boot apps with Elastic APM Java Agent
-description: How to use Elastic APM Java Agent to monitor Spring Boot applications running in Azure Spring Cloud
+description: How to use Elastic APM Java Agent to monitor Spring Boot applications running in Azure Spring Apps
Last updated 12/07/2021-+ # How to monitor Spring Boot apps with Elastic APM Java Agent
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+ **This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This article explains how to use Elastic APM Agent to monitor Spring Boot applications running in Azure Spring Cloud.
+This article explains how to use Elastic APM Agent to monitor Spring Boot applications running in Azure Spring Apps.
With the Elastic Observability Solution, you can achieve unified observability to:
-* Monitor apps using the Elastic APM Java Agent and using persistent storage with Azure Spring Cloud.
-* Use diagnostic settings to ship Azure Spring Cloud logs to Elastic. For more information, see [Analyze logs with Elastic (ELK) using diagnostics settings](how-to-elastic-diagnostic-settings.md).
+* Monitor apps using the Elastic APM Java Agent and using persistent storage with Azure Spring Apps.
+* Use diagnostic settings to ship Azure Spring Apps logs to Elastic. For more information, see [Analyze logs with Elastic (ELK) using diagnostics settings](how-to-elastic-diagnostic-settings.md).
The following video introduces unified observability for Spring Boot applications using Elastic.
The following video introduces unified observability for Spring Boot application
This article uses the Spring Petclinic sample to walk through the required steps. Use the following steps to deploy the sample application:
-1. Follow the steps in [Deploy Spring Boot apps using Azure Spring Cloud and MySQL](https://github.com/Azure-Samples/spring-petclinic-microservices#readme) until you reach the [Deploy Spring Boot applications and set environment variables](https://github.com/Azure-Samples/spring-petclinic-microservices#deploy-spring-boot-applications-and-set-environment-variables) section.
+1. Follow the steps in [Deploy Spring Boot apps using Azure Spring Apps and MySQL](https://github.com/Azure-Samples/spring-petclinic-microservices#readme) until you reach the [Deploy Spring Boot applications and set environment variables](https://github.com/Azure-Samples/spring-petclinic-microservices#deploy-spring-boot-applications-and-set-environment-variables) section.
-1. Use the Azure Spring Cloud extension for Azure CLI with the following command to create an application to run in Azure Spring Cloud:
+1. Use the Azure Spring Apps extension for Azure CLI with the following command to create an application to run in Azure Spring Apps:
```azurecli
- az spring-cloud app create \
+ az spring app create \
--resource-group <your-resource-group-name> \
- --service <your-Azure-Spring-Cloud-instance-name> \
+ --service <your-Azure-Spring-Apps-instance-name> \
--name <your-app-name> \ --is-public true ```
-## Enable custom persistent storage for Azure Spring Cloud
+## Enable custom persistent storage for Azure Spring Apps
Use the following steps to enable custom persistent storage:
-1. Follow the steps in [How to enable your own persistent storage in Azure Spring Cloud](how-to-custom-persistent-storage.md).
+1. Follow the steps in [How to enable your own persistent storage in Azure Spring Apps](how-to-custom-persistent-storage.md).
-1. Use the following Azure CLI command to add persistent storage for your Azure Spring Cloud apps.
+1. Use the following Azure CLI command to add persistent storage for your Azure Spring Apps applications.
```azurecli
- az spring-cloud app append-persistent-storage \
+ az spring app append-persistent-storage \
--resource-group <your-resource-group-name> \
- --service <your-Azure-Spring-Cloud-instance-name> \
+ --service <your-Azure-Spring-Apps-instance-name> \
--name <your-app-name> \ --persistent-storage-type AzureFileVolume \ --share-name <your-Azure-file-share-name> \
Before proceeding, you'll need your Elastic APM server connectivity information
1. After you have the Elastic APM endpoint and secret token, use the following command to activate Elastic APM Java agent when deploying applications. The placeholder *`<agent-location>`* refers to the mounted storage location of the Elastic APM Java Agent. ```azurecli
- az spring-cloud app deploy \
+ az spring app deploy \
--name <your-app-name> \ --artifact-path <unique-path-to-your-app-jar-on-custom-storage> \
- --jvm-options='-javaagent:<agent-location>' \
+ --jvm-options='-javaagent:<elastic-agent-location>' \
--env ELASTIC_APM_SERVICE_NAME=<your-app-name> \ ELASTIC_APM_APPLICATION_PACKAGES='<your-app-package-name>' \ ELASTIC_APM_SERVER_URL='<your-Elastic-APM-server-URL>' \
You can also run a provisioning automation pipeline using Terraform or an Azure
### Automate provisioning using Terraform
-To configure the environment variables in a Terraform template, add the following code to the template, replacing the *\<...>* placeholders with your own values. For more information, see [Manages an Active Azure Spring Cloud Deployment](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/spring_cloud_active_deployment).
+To configure the environment variables in a Terraform template, add the following code to the template, replacing the *\<...>* placeholders with your own values. For more information, see [Manages an Active Azure Spring Apps Deployment](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/spring_cloud_active_deployment).
```terraform resource "azurerm_spring_cloud_java_deployment" "example" { ...
- jvm_options = "-javaagent:<unique-path-to-your-app-jar-on-custom-storage>"
+ jvm_options = "-javaagent:<elastic-agent-location>"
... environment_variables = { "ELASTIC_APM_SERVICE_NAME"="<your-app-name>",
To configure the environment variables in an ARM template, add the following cod
"ELASTIC_APM_SERVER_URL"="<your-Elastic-APM-server-URL>", "ELASTIC_APM_SECRET_TOKEN"="<your-Elastic-APM-secret-token>" },
- "jvmOptions": "-javaagent:<unique-path-to-your-app-jar-on-custom-storage>",
+ "jvmOptions": "-javaagent:<elastic-agent-location>",
... } ```
You can drill down in a specific transaction to understand the transaction-speci
:::image type="content" source="media/how-to-elastic-apm-java-agent-monitor/elastic-apm-customer-service-latency-distribution.png" alt-text="Elastic / Kibana screenshot showing A P M Services Transactions page." lightbox="media/how-to-elastic-apm-java-agent-monitor/elastic-apm-customer-service-latency-distribution.png":::
-Elastic APM Java agent also captures the JVM metrics from the Azure Spring Cloud apps that are available with Kibana App for users for troubleshooting.
+The Elastic APM Java agent also captures JVM metrics from your Azure Spring Apps applications, which are available in the Kibana app for troubleshooting.
:::image type="content" source="media/how-to-elastic-apm-java-agent-monitor/elastic-apm-customer-service-jvm-metrics.png" alt-text="Elastic / Kibana screenshot showing A P M Services J V M page." lightbox="media/how-to-elastic-apm-java-agent-monitor/elastic-apm-customer-service-jvm-metrics.png":::
-Using the inbuilt AI engine in the Elastic solution, you can also enable Anomaly Detection on the Azure Spring Cloud Services and choose an appropriate action - such as Teams notification, creation of a JIRA issue, a webhook-based API call, and others.
+Using the built-in AI engine in the Elastic solution, you can also enable Anomaly Detection on Azure Spring Apps services and choose an appropriate action, such as a Teams notification, creation of a JIRA issue, a webhook-based API call, and others.
:::image type="content" source="media/how-to-elastic-apm-java-agent-monitor/elastic-apm-alert-anomaly.png" alt-text="Elastic / Kibana screenshot showing A P M Services page with 'Create rule' pane showing and Actions highlighted." lightbox="media/how-to-elastic-apm-java-agent-monitor/elastic-apm-alert-anomaly.png"::: ## Next steps
-* [Quickstart: Deploy your first Spring Boot app in Azure Spring Cloud](./quickstart.md)
+* [Quickstart: Deploy your first Spring Boot app in Azure Spring Apps](./quickstart.md)
* [Deploy Elastic on Azure](https://www.elastic.co/blog/getting-started-with-the-azure-integration-enhancement)
spring-cloud How To Elastic Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-elastic-diagnostic-settings.md
Title: Analyze logs with Elastic Cloud from Azure Spring Cloud
-description: Learn how to analyze diagnostics logs in Azure Spring Cloud using Elastic
+ Title: Analyze logs with Elastic Cloud from Azure Spring Apps
+description: Learn how to analyze diagnostics logs in Azure Spring Apps using Elastic
Last updated 12/07/2021 -+ # Analyze logs with Elastic (ELK) using diagnostics settings
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+ **This article applies to:** ✔️ Java ✔️ C# **This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This article shows you how to use the diagnostics functionality of Azure Spring Cloud to analyze logs with Elastic (ELK).
+This article shows you how to use the diagnostics functionality of Azure Spring Apps to analyze logs with Elastic (ELK).
The following video introduces unified observability for Spring Boot applications using Elastic.
The following video introduces unified observability for Spring Boot application
To configure diagnostics settings, use the following steps:
-1. In the Azure portal, go to your Azure Spring Cloud instance.
+1. In the Azure portal, go to your Azure Spring Apps instance.
1. Select **diagnostics settings** option, then select **Add diagnostics setting**. 1. Enter a name for the setting, choose **Send to partner solution**, then select **Elastic** and an Elastic deployment where you want to send the logs. 1. Select **Save**.
To configure diagnostics settings, use the following steps:
> [!NOTE] > There might be a gap of up to 15 minutes between when logs are emitted and when they appear in your Elastic deployment.
-> If the Azure Spring Cloud instance is deleted or moved, the operation won't cascade to the diagnostics settings resources. You have to manually delete the diagnostics settings resources before you perform the operation against its parent, the Azure Spring Cloud instance. Otherwise, if you provision a new Azure Spring Cloud instance with the same resource ID as the deleted one, or if you move the Azure Spring Cloud instance back, the previous diagnostics settings resources will continue to extend it.
+> If the Azure Spring Apps instance is deleted or moved, the operation won't cascade to the diagnostics settings resources. You have to manually delete the diagnostics settings resources before you perform the operation against its parent, the Azure Spring Apps instance. Otherwise, if you provision a new Azure Spring Apps instance with the same resource ID as the deleted one, or if you move the Azure Spring Apps instance back, the previous diagnostics settings resources will continue to extend it.
## Analyze the logs with Elastic
Use the following steps to analyze the logs:
:::image type="content" source="media/how-to-elastic-diagnostic-settings/elastic-kibana-spring-cloud-dashboard.png" alt-text="Elastic / Kibana screenshot showing 'Spring Cloud type:dashboard' search results." lightbox="media/how-to-elastic-diagnostic-settings/elastic-kibana-spring-cloud-dashboard.png":::
-1. Select **[Logs Azure] Azure Spring Cloud logs Overview** from the results.
+1. Select **[Logs Azure] Azure Spring Apps logs Overview** from the results.
- :::image type="content" source="media/how-to-elastic-diagnostic-settings/elastic-kibana-asc-dashboard-full.png" alt-text="Elastic / Kibana screenshot showing Azure Spring Cloud Application Console Logs." lightbox="media/how-to-elastic-diagnostic-settings/elastic-kibana-asc-dashboard-full.png":::
+ :::image type="content" source="media/how-to-elastic-diagnostic-settings/elastic-kibana-asc-dashboard-full.png" alt-text="Elastic / Kibana screenshot showing Azure Spring Apps Application Console Logs." lightbox="media/how-to-elastic-diagnostic-settings/elastic-kibana-asc-dashboard-full.png":::
-1. Search on out-of-the-box Azure Spring Cloud dashboards by using the queries such as the following:
+1. Search the out-of-the-box Azure Spring Apps dashboards by using queries such as the following:
```query azure.springcloudlogs.properties.app_name : "visits-service"
Application logs provide critical information and verbose logs about your applic
For more information about different queries, see [Guide to Kibana Query Language](https://www.elastic.co/guide/en/kibana/current/kuery-query.html).
-### Show all logs from Azure Spring Cloud
+### Show all logs from Azure Spring Apps
-To review a list of application logs from Azure Spring Cloud, sorted by time with the most recent logs shown first, run the following query in the **Search** box:
+To review a list of application logs from Azure Spring Apps, sorted by time with the most recent logs shown first, run the following query in the **Search** box:
```query azure_log_forwarder.resource_type : "Microsoft.AppPlatform/Spring"
azure_log_forwarder.resource_type : "Microsoft.AppPlatform/Spring"
:::image type="content" source="media/how-to-elastic-diagnostic-settings/elastic-kibana-kql-asc-logs.png" alt-text="Elastic / Kibana screenshot showing Discover app with all logs displayed." lightbox="media/how-to-elastic-diagnostic-settings/elastic-kibana-kql-asc-logs.png":::
-### Show specific log types from Azure Spring Cloud
+### Show specific log types from Azure Spring Apps
-To review a list of application logs from Azure Spring Cloud, sorted by time with the most recent logs shown first, run the following query in the **Search** box:
+To review application logs of a specific category from Azure Spring Apps, sorted by time with the most recent logs shown first, run the following query in the **Search** box:
```query azure.springcloudlogs.category : "ApplicationConsole"
azure.springcloudlogs.properties.type : "ServiceRegistry"
:::image type="content" source="media/how-to-elastic-diagnostic-settings/elastic-kibana-kql-service-registry.png" alt-text="Elastic / Kibana screenshot showing Discover app with Service Registry logs displayed." lightbox="media/how-to-elastic-diagnostic-settings/elastic-kibana-kql-service-registry.png":::
-## Visualizing logs from Azure Spring Cloud with Elastic
+## Visualizing logs from Azure Spring Apps with Elastic
Kibana allows you to visualize data with Dashboards and a rich ecosystem of visualizations. For more information, see [Dashboard and Visualization](https://www.elastic.co/guide/en/kibana/current/dashboard.html).
Use the following steps to show the various log levels in your logs so you can a
## Next steps
-* [Quickstart: Deploy your first Spring Boot app in Azure Spring Cloud](quickstart.md)
+* [Quickstart: Deploy your first Spring Boot app in Azure Spring Apps](quickstart.md)
* [Deploy Elastic on Azure](https://www.elastic.co/blog/getting-started-with-the-azure-integration-enhancement)
spring-cloud How To Enable Availability Zone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-enable-availability-zone.md
Title: Create an Azure Spring Cloud instance with availability zone enabled-
-description: How to create an Azure Spring Cloud instance with availability zone enabled.
+ Title: Create an Azure Spring Apps instance with availability zone enabled
+
+description: How to create an Azure Spring Apps instance with availability zone enabled.
Last updated 04/14/2022-+
-# Create Azure Spring Cloud instance with availability zone enabled
+# Create Azure Spring Apps instance with availability zone enabled
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Standard tier ✔️ Enterprise tier
> [!NOTE] > This feature is not available in Basic tier.
-This article explains availability zones in Azure Spring Cloud, and how to enable them.
+This article explains availability zones in Azure Spring Apps, and how to enable them.
In Microsoft Azure, [Availability Zones (AZ)](../availability-zones/az-overview.md) are unique physical locations within an Azure region. Each zone is made up of one or more data centers that are equipped with independent power, cooling, and networking. Availability zones protect your applications and data from data center failures.
-When an Azure Spring Cloud service instance is created with availability zone enabled, Azure Spring Cloud will automatically distribute fundamental resources across logical sections of underlying Azure infrastructure. This distribution provides a higher level of availability to protect against a hardware failure or a planned maintenance event.
+When an Azure Spring Apps service instance is created with availability zone enabled, Azure Spring Apps will automatically distribute fundamental resources across logical sections of underlying Azure infrastructure. This distribution provides a higher level of availability to protect against a hardware failure or a planned maintenance event.
-## How to create an instance in Azure Spring Cloud with availability zone enabled
+## How to create an instance in Azure Spring Apps with availability zone enabled
>[!NOTE] > You can only enable availability zone when creating your instance. You can't enable or disable availability zone after creation of the service instance.
-You can enable availability zone in Azure Spring Cloud using the [Azure CLI](/cli/azure/install-azure-cli) or [Azure portal](https://portal.azure.com).
+You can enable availability zone in Azure Spring Apps using the [Azure CLI](/cli/azure/install-azure-cli) or [Azure portal](https://portal.azure.com).
-# [Azure CLI](#tab/azure-cli)
+### [Azure CLI](#tab/azure-cli)
-To create a service in Azure Spring Cloud with availability zone enabled using the Azure CLI, include the `--zone-redundant` parameter when you create your service in Azure Spring Cloud.
+To create a service in Azure Spring Apps with availability zone enabled using the Azure CLI, include the `--zone-redundant` parameter when you create the service instance.
```azurecli
-az spring-cloud create \
+az spring create \
--resource-group <your-resource-group-name> \
- --name <your-Azure-Spring-Cloud-instance-name> \
+ --name <your-Azure-Spring-Apps-instance-name> \
--location <location> \ --zone-redundant true ```
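To confirm the setting after creation, you can query the service instance; this sketch assumes the flag is surfaced as `properties.zoneRedundant` in the resource properties.

```azurecli
# Sketch only: check whether zone redundancy was enabled on the instance.
az spring show \
    --resource-group <your-resource-group-name> \
    --name <your-Azure-Spring-Apps-instance-name> \
    --query properties.zoneRedundant
```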
-# [Azure portal](#tab/portal)
+### [Azure portal](#tab/portal)
-To create a service in Azure Spring Cloud with availability zone enabled using the Azure portal, enable the Zone Redundant option when creating the instance.
+To create a service in Azure Spring Apps with availability zone enabled using the Azure portal, enable the Zone Redundant option when creating the instance.
![Image of where to enable availability zone using the portal.](media/spring-cloud-availability-zone/availability-zone-portal.png)
To create a service in Azure Spring Cloud with availability zone enabled using t
## Region availability
-Azure Spring Cloud currently supports availability zones in the following regions:
+Azure Spring Apps currently supports availability zones in the following regions:
- Australia East - Brazil South
spring-cloud How To Enable Ingress To App Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-enable-ingress-to-app-tls.md
Title: Enable ingress-to-app Transport Layer Security in Azure Spring Cloud-
+ Title: Enable ingress-to-app Transport Layer Security in Azure Spring Apps
+ description: How to enable ingress-to-app Transport Layer Security for an application. Last updated 04/12/2022-+ # Enable ingress-to-app TLS for an application
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+ **This article applies to:** ✔️ Standard tier ✔️ Enterprise tier > [!NOTE] > This feature is not available in Basic tier.
-This article describes secure communications in Azure Spring Cloud. The article also explains how to enable ingress-to-app SSL/TLS to secure traffic from an ingress controller to applications that support HTTPS.
+This article describes secure communications in Azure Spring Apps. The article also explains how to enable ingress-to-app SSL/TLS to secure traffic from an ingress controller to applications that support HTTPS.
-The following picture shows the overall secure communication support in Azure Spring Cloud.
+The following picture shows the overall secure communication support in Azure Spring Apps.
-## Secure communication model within Azure Spring Cloud
+## Secure communication model within Azure Spring Apps
This section explains the secure communication model shown in the overview diagram above.
-1. The client request from the client to the application in Azure Spring Cloud comes into the ingress controller. The request can be either HTTP or HTTPS. The TLS certificate returned by the ingress controller is issued by the Microsoft Azure TLS issuing CA.
+1. The request from the client to the application in Azure Spring Apps first reaches the ingress controller. The request can be either HTTP or HTTPS. The TLS certificate returned by the ingress controller is issued by the Microsoft Azure TLS issuing CA.
If the app has been mapped to an existing custom domain and is configured as HTTPS only, the request to the ingress controller can only be HTTPS. The TLS certificate returned by the ingress controller is the SSL binding certificate for that custom domain. The server side SSL/TLS verification for the custom domain is done in the ingress controller.
-2. The secure communication between the ingress controller and the applications in Azure Spring Cloud are controlled by the ingress-to-app TLS. You can also control the communication through the portal or CLI, which will be explained later in this article. If ingress-to-app TLS is disabled, the communication between the ingress controller and the apps in Azure Spring Cloud is HTTP. If ingress-to-app TLS is enabled, the communication will be HTTPS and has no relation to the communication between the clients and the ingress controller. The ingress controller won't verify the certificate returned from the apps because the ingress-to-app TLS encrypts the communication.
+2. The secure communication between the ingress controller and the applications in Azure Spring Apps is controlled by ingress-to-app TLS. You can also control this communication through the portal or CLI, as explained later in this article. If ingress-to-app TLS is disabled, the communication between the ingress controller and the apps in Azure Spring Apps is HTTP. If ingress-to-app TLS is enabled, the communication is HTTPS and is independent of the communication between the clients and the ingress controller. The ingress controller won't verify the certificate returned from the apps because ingress-to-app TLS encrypts the communication.
-3. Communication between the apps and the Azure Spring Cloud services is always HTTPS and handled by Azure Spring Cloud. Such services include config server, service registry, and Eureka server.
+3. Communication between the apps and the Azure Spring Apps services is always HTTPS and handled by Azure Spring Apps. Such services include config server, service registry, and Eureka server.
-4. You manage the communication between the applications. You can also take advantage of Azure Spring Cloud features to load certificates into the application's trust store. For more information, see [Use TLS/SSL certificates in an application](./how-to-use-tls-certificate.md).
+4. You manage the communication between the applications. You can also take advantage of Azure Spring Apps features to load certificates into the application's trust store. For more information, see [Use TLS/SSL certificates in an application](./how-to-use-tls-certificate.md).
-5. You manage the communication between applications and external services. To reduce your development effort, Azure Spring Cloud helps you manage your public certificates and loads them into your application's trust store. For more information, see [Use TLS/SSL certificates in an application](./how-to-use-tls-certificate.md).
+5. You manage the communication between applications and external services. To reduce your development effort, Azure Spring Apps helps you manage your public certificates and loads them into your application's trust store. For more information, see [Use TLS/SSL certificates in an application](./how-to-use-tls-certificate.md).
## Enable ingress-to-app TLS for an application
The following section shows you how to enable ingress-to-app SSL/TLS to secure t
### Prerequisites -- A deployed Azure Spring Cloud instance. Follow our [quickstart on deploying via the Azure CLI](./quickstart.md) to get started.
+- A deployed Azure Spring Apps instance. Follow our [quickstart on deploying via the Azure CLI](./quickstart.md) to get started.
- If you're unfamiliar with ingress-to-app TLS, see the [end-to-end TLS sample](https://github.com/Azure-Samples/spring-boot-secure-communications-using-end-to-end-tls-ssl). - To securely load the required certificates into Spring Boot apps, you can use [spring-cloud-azure-starter-keyvault-certificates](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/spring/spring-cloud-azure-starter-keyvault-certificates). ### Enable ingress-to-app TLS on an existing app
-Use the command `az spring-cloud app update --enable-ingress-to-app-tls` to enable or disable ingress-to-app TLS for an app.
+Use the command `az spring app update --enable-ingress-to-app-tls` to enable or disable ingress-to-app TLS for an app.
```azurecli
-az spring-cloud app update --enable-ingress-to-app-tls -n app_name -s service_name -g resource_group_name
-az spring-cloud app update --enable-ingress-to-app-tls false -n app_name -s service_name -g resource_group_name
+az spring app update --enable-ingress-to-app-tls -n app_name -s service_name -g resource_group_name
+az spring app update --enable-ingress-to-app-tls false -n app_name -s service_name -g resource_group_name
``` ### Enable ingress-to-app TLS when you bind a custom domain
-Use the command `az spring-cloud app custom-domain update --enable-ingress-to-app-tls` or `az spring-cloud app custom-domain bind --enable-ingress-to-app-tls` to enable or disable ingress-to-app TLS for an app.
+Use the command `az spring app custom-domain update --enable-ingress-to-app-tls` or `az spring app custom-domain bind --enable-ingress-to-app-tls` to enable or disable ingress-to-app TLS for an app.
```azurecli
-az spring-cloud app custom-domain update --enable-ingress-to-app-tls -n app_name -s service_name -g resource_group_name
-az spring-cloud app custom-domain bind --enable-ingress-to-app-tls -n app_name -s service_name -g resource_group_name
+az spring app custom-domain update --enable-ingress-to-app-tls -n app_name -s service_name -g resource_group_name
+az spring app custom-domain bind --enable-ingress-to-app-tls -n app_name -s service_name -g resource_group_name
``` ### Enable ingress-to-app TLS using the Azure portal
To enable ingress-to-app TLS in the [Azure portal](https://portal.azure.com/), f
### Verify ingress-to-app TLS status
-Use the command `az spring-cloud app show` to check the value of `enableEndToEndTls`.
+Use the command `az spring app show` to check the value of `enableEndToEndTls`.
```azurecli
-az spring-cloud app show -n app_name -s service_name -g resource_group_name
+az spring app show -n app_name -s service_name -g resource_group_name
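# Optional sketch: narrow the output to the TLS flag; assumes the value is returned as properties.enableEndToEndTls
az spring app show -n app_name -s service_name -g resource_group_name --query properties.enableEndToEndTls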
``` ## Next steps
spring-cloud How To Enable System Assigned Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-enable-system-assigned-managed-identity.md
Title: Enable system-assigned managed identity for applications in Azure Spring Cloud-
+ Title: Enable system-assigned managed identity for applications in Azure Spring Apps
+ description: How to enable system-assigned managed identity for applications. Last updated 04/15/2022-+ zone_pivot_groups: spring-cloud-tier-selection
-# Enable system-assigned managed identity for an application in Azure Spring Cloud
+# Enable system-assigned managed identity for an application in Azure Spring Apps
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This article shows you how to enable and disable system-assigned managed identities for an application in Azure Spring Cloud, using the Azure portal and CLI.
+This article shows you how to enable and disable system-assigned managed identities for an application in Azure Spring Apps, using the Azure portal and CLI.
-Managed identities for Azure resources provide an automatically managed identity in Azure Active Directory to an Azure resource such as your application in Azure Spring Cloud. You can use this identity to authenticate to any service that supports Azure AD authentication, without having credentials in your code.
+Managed identities for Azure resources provide an automatically managed identity in Azure Active Directory to an Azure resource such as your application in Azure Spring Apps. You can use this identity to authenticate to any service that supports Azure AD authentication, without having credentials in your code.
## Prerequisites
If you're unfamiliar with managed identities for Azure resources, see the [Manag
::: zone pivot="sc-enterprise-tier" -- An already provisioned Azure Spring Cloud Enterprise tier instance. For more information, see [Quickstart: Provision an Azure Spring Cloud service instance using the Enterprise tier](quickstart-provision-service-instance-enterprise.md).
+- An already provisioned Azure Spring Apps Enterprise tier instance. For more information, see [Quickstart: Provision an Azure Spring Apps service instance using the Enterprise tier](quickstart-provision-service-instance-enterprise.md).
- [Azure CLI version 2.30.0 or higher](/cli/azure/install-azure-cli). - [!INCLUDE [install-app-user-identity-extension](includes/install-app-user-identity-extension.md)]
If you're unfamiliar with managed identities for Azure resources, see the [Manag
::: zone pivot="sc-standard-tier" -- An already provisioned Azure Spring Cloud instance. For more information, see [Quickstart: Deploy your first application to Azure Spring Cloud](./quickstart.md).
+- An already provisioned Azure Spring Apps instance. For more information, see [Quickstart: Deploy your first application to Azure Spring Apps](./quickstart.md).
- [Azure CLI version 2.30.0 or higher](/cli/azure/install-azure-cli). - [!INCLUDE [install-app-user-identity-extension](includes/install-app-user-identity-extension.md)]
You can enable system-assigned managed identity during app creation or on an exi
The following example creates an app named *app_name* with a system-assigned managed identity, as requested by the `--assign-identity` parameter. ```azurecli
-az spring-cloud app create \
+az spring app create \
--resource-group <resource-group-name> \ --name <app-name> \ --service <service-instance-name> \
az spring-cloud app create \
### Enable system-assigned managed identity on an existing app
-Use `az spring-cloud app identity assign` command to enable the system-assigned identity on an existing app.
+Use the `az spring app identity assign` command to enable the system-assigned identity on an existing app.
```azurecli
-az spring-cloud app identity assign \
+az spring app identity assign \
--resource-group <resource-group-name> \ --name <app-name> \ --service <service-instance-name> \
An app can use its managed identity to get tokens to access other resources prot
You may need to [configure the target resource to allow access from your application](../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md). For example, if you request a token to access Key Vault, make sure you have added an access policy that includes your application's identity. Otherwise, your calls to Key Vault will be rejected, even if they include the token. To learn more about which resources support Azure Active Directory tokens, see [Azure services that support Azure AD authentication](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication).
-Azure Spring Cloud shares the same endpoint for token acquisition with Azure Virtual Machine. We recommend using Java SDK or spring boot starters to acquire a token. See [How to use VM token](../active-directory/managed-identities-azure-resources/how-to-use-vm-token.md) for various code and script examples and guidance on important topics such as handling token expiration and HTTP errors.
+Azure Spring Apps shares the same endpoint for token acquisition with Azure Virtual Machine. We recommend using the Java SDK or Spring Boot starters to acquire a token. See [How to use VM token](../active-directory/managed-identities-azure-resources/how-to-use-vm-token.md) for various code and script examples and guidance on important topics such as handling token expiration and HTTP errors.
## Disable system-assigned identity from an app
Removing a system-assigned identity will also delete it from Azure AD. Deleting
To remove system-assigned managed identity from an app that no longer needs it:
-1. Sign in to the portal using an account associated with the Azure subscription that contains the Azure Spring Cloud instance.
+1. Sign in to the portal using an account associated with the Azure subscription that contains the Azure Spring Apps instance.
1. Navigate to the desired application and select **Identity**. 1. Under **System assigned**/**Status**, select **Off** and then select **Save**:
To remove system-assigned managed identity from an app that no longer needs it:
To remove system-assigned managed identity from an app that no longer needs it, use the following command: ```azurecli
-az spring-cloud app identity remove \
+az spring app identity remove \
--resource-group <resource-group-name> \ --name <app-name> \ --service <service-instance-name> \
spring-cloud How To Enterprise Application Configuration Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-enterprise-application-configuration-service.md
Title: Use Application Configuration Service for Tanzu with Azure Spring Cloud Enterprise Tier-
-description: How to use Application Configuration Service for Tanzu with Azure Spring Cloud Enterprise Tier.
+ Title: Use Application Configuration Service for Tanzu with Azure Spring Apps Enterprise Tier
+
+description: How to use Application Configuration Service for Tanzu with Azure Spring Apps Enterprise Tier.
Last updated 02/09/2022-+ # Use Application Configuration Service for Tanzu
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+ **This article applies to:** ❌ Basic/Standard tier ✔️ Enterprise tier
-This article shows you how to use Application Configuration Service for VMware Tanzu® with Azure Spring Cloud Enterprise Tier.
+This article shows you how to use Application Configuration Service for VMware Tanzu® with Azure Spring Apps Enterprise Tier.
[Application Configuration Service for Tanzu](https://docs.pivotal.io/tcs-k8s/0-1/) is one of the commercial VMware Tanzu components. It enables the management of Kubernetes-native ConfigMap resources that are populated from properties defined in one or more Git repositories.
With Application Configuration Service for Tanzu, you have a central place to ma
## Prerequisites -- An already provisioned Azure Spring Cloud Enterprise tier instance with Application Configuration Service for Tanzu enabled. For more information, see [Quickstart: Provision an Azure Spring Cloud service instance using the Enterprise tier](quickstart-provision-service-instance-enterprise.md).
+- An already provisioned Azure Spring Apps Enterprise tier instance with Application Configuration Service for Tanzu enabled. For more information, see [Quickstart: Provision an Azure Spring Apps service instance using the Enterprise tier](quickstart-provision-service-instance-enterprise.md).
> [!NOTE]
- > To use Application Configuration Service for Tanzu, you must enable it when you provision your Azure Spring Cloud service instance. You cannot enable it after provisioning at this time.
+ > To use Application Configuration Service for Tanzu, you must enable it when you provision your Azure Spring Apps service instance. You cannot enable it after provisioning at this time.
## Manage Application Configuration Service for Tanzu settings
Use the following steps to refresh your application configuration after you upda
1. Load the configuration to Application Configuration Service for Tanzu.
- The refresh frequency is managed by Azure Spring Cloud and fixed to 60 seconds.
+ The refresh frequency is managed by Azure Spring Apps and fixed to 60 seconds.
1. Load the configuration to your application.
You can configure Application Configuration Service for Tanzu using the portal b
You can configure Application Configuration Service for Tanzu using the CLI, by following these steps: ```azurecli
-az spring-cloud application-configuration-service git repo add \
+az spring application-configuration-service git repo add \
--name <entry-name> \ --patterns <patterns> \ --uri <git-backend-uri> \
To use the centralized configurations, you must bind the app to Application Conf
You can use Application Configuration Service for Tanzu with applications, by using this command: ```azurecli
-az spring-cloud application-configuration-service bind --app <app-name>
-az spring-cloud app deploy \
+az spring application-configuration-service bind --app <app-name>
+az spring app deploy \
--name <app-name> \ --artifact-path <path-to-your-JAR-file> \ --config-file-pattern <config-file-pattern>
az spring-cloud app deploy \
## Next steps -- [Azure Spring Cloud](index.yml)
+- [Azure Spring Apps](index.yml)
spring-cloud How To Enterprise Build Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-enterprise-build-service.md
Title: How to Use Tanzu Build Service in Azure Spring Cloud Enterprise Tier-
-description: How to Use Tanzu Build Service in Azure Spring Cloud Enterprise Tier
+ Title: How to Use Tanzu Build Service in Azure Spring Apps Enterprise Tier
+
+description: How to Use Tanzu Build Service in Azure Spring Apps Enterprise Tier
Last updated 02/09/2022-+ # Use Tanzu Build Service
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+ **This article applies to:** ❌ Basic/Standard tier ✔️ Enterprise tier
-This article describes the extra configuration and functionality included in VMware Tanzu® Build Service™ with Azure Spring Cloud Enterprise Tier.
+This article describes the extra configuration and functionality included in VMware Tanzu® Build Service™ with Azure Spring Apps Enterprise Tier.
-In Azure Spring Cloud, the existing Standard tier already supports compiling user source code into [OCI images](https://opencontainers.org/) through [Kpack](https://github.com/pivotal/kpack). Kpack is a Kubernetes (K8s) implementation of [Cloud Native Buildpacks (CNB)](https://buildpacks.io/) provided by VMware. This article provides details about the extra configurations and functionality exposed in the Azure Spring Cloud Enterprise tier.
+In Azure Spring Apps, the existing Standard tier already supports compiling user source code into [OCI images](https://opencontainers.org/) through [Kpack](https://github.com/pivotal/kpack). Kpack is a Kubernetes (K8s) implementation of [Cloud Native Buildpacks (CNB)](https://buildpacks.io/) provided by VMware. This article provides details about the extra configurations and functionality exposed in the Azure Spring Apps Enterprise tier.
## Build Agent Pool
-Tanzu Build Service in the Enterprise tier is the entry point to containerize user applications from both source code and artifacts. There's a dedicated build agent pool that reserves compute resources for a given number of concurrent build tasks. The build agent pool prevents resource contention with your running apps. You can configure the number of resources given to the build agent pool during or after creating a new service instance of Azure Spring Cloud using the **VMware Tanzu settings**.
+Tanzu Build Service in the Enterprise tier is the entry point to containerize user applications from both source code and artifacts. There's a dedicated build agent pool that reserves compute resources for a given number of concurrent build tasks. The build agent pool prevents resource contention with your running apps. You can configure the number of resources given to the build agent pool during or after creating a new service instance of Azure Spring Apps using the **VMware Tanzu settings**.
The Build Agent Pool scale set sizes available are:
The Build Agent Pool scale set sizes available are:
The following image shows the resources given to the Tanzu Build Service Agent Pool after you've successfully provisioned the service instance. You can also update the configured agent pool size. ## Default Builder and Tanzu Buildpacks
In the Enterprise Tier, a default builder is provided within Tanzu Build Service
Tanzu Buildpacks make it easier to integrate with other software like New Relic. They're configured as optional and will only run with proper configuration. For more information, see the [Buildpack bindings](#buildpack-bindings) section.
-The following list shows the Tanzu Buildpacks available in Azure Spring Cloud Enterprise edition:
+The following list shows the Tanzu Buildpacks available in the Azure Spring Apps Enterprise tier:
- tanzu-buildpacks/java-azure - tanzu-buildpacks/dotnet-core
You can delete any custom builder, but the `default` builder is read only.
When you deploy an app, you can build the app by specifying a specific builder in the command: ```azurecli
-az spring-cloud app deploy \
+az spring app deploy \
--name <app-name> \ --builder <builder-name> \ --artifact-path <path-to-your-JAR-file>
A build task will be triggered when an app is deployed from an Azure CLI command
## Buildpack bindings
-You can configure Kpack Images with Service Bindings as described in the [Cloud Native Buildpacks Bindings specification](https://github.com/buildpacks/spec/blob/adbc70f5672e474e984b77921c708e1475e163c1/extensions/bindings.md). Azure Spring Cloud Enterprise tier uses Service Bindings to integrate with [Tanzu Partner Buildpacks](https://docs.pivotal.io/tanzu-buildpacks/partner-integrations/partner-integration-buildpacks.html). For example, we use Binding to integrate [Azure Application Insights](../azure-monitor/app/app-insights-overview.md) using the [Paketo Azure Application Insights Buildpack](https://github.com/paketo-buildpacks/azure-application-insights).
+You can configure Kpack Images with Service Bindings as described in the [Cloud Native Buildpacks Bindings specification](https://github.com/buildpacks/spec/blob/adbc70f5672e474e984b77921c708e1475e163c1/extensions/bindings.md). Azure Spring Apps Enterprise tier uses Service Bindings to integrate with [Tanzu Partner Buildpacks](https://docs.pivotal.io/tanzu-buildpacks/partner-integrations/partner-integration-buildpacks.html). For example, we use Binding to integrate [Azure Application Insights](../azure-monitor/app/app-insights-overview.md) using the [Paketo Azure Application Insights Buildpack](https://github.com/paketo-buildpacks/azure-application-insights).
Currently, buildpack binding only supports binding the buildpacks listed below. Follow the documentation links listed under each type to configure the properties and secrets for buildpack binding.
Currently, buildpack binding only supports binding the buildpacks listed below.
You can manage buildpack bindings with the Azure portal or the Azure CLI.
-# [Portal](#tab/azure-portal)
+### [Portal](#tab/azure-portal)
-## View buildpack bindings using the Azure portal
+### View buildpack bindings using the Azure portal
Follow these steps to view the current buildpack bindings:
Follow these steps to view the current buildpack bindings:
1. Select **Build Service**. 1. Select **Edit** under the **Bindings** column to view the bindings configured under a builder.
-## Unbind a buildpack binding
+### Unbind a buildpack binding
There are two ways to unbind a buildpack binding. You can either select the **Bound** hyperlink and then select **Unbind binding**, or select **Edit Binding** and then select **Unbind**. If you unbind a binding, the bind status will change from **Bound** to **Unbound**.
-# [Azure CLI](#tab/azure-cli)
+### [Azure CLI](#tab/azure-cli)
-## View buildpack bindings using the Azure CLI
+### View buildpack bindings using the Azure CLI
View the current buildpack bindings using the following command: ```azurecli
-az spring-cloud build-service builder buildpack-binding list \
+az spring build-service builder buildpack-binding list \
--resource-group <your-resource-group-name> \ --service <your-service-instance-name> \ --builder-name <your-builder-name> ```
-## Create a binding
+### Create a binding
Use this command to change the binding from **Unbound** to **Bound** status: ```azurecli
-az spring-cloud build-service builder buildpack-binding create \
+az spring build-service builder buildpack-binding create \
--resource-group <your-resource-group-name> \ --service <your-service-instance-name> \ --name <your-buildpack-binding-name> \
az spring-cloud build-service builder buildpack-binding create \
For information on the `properties` and `secrets` parameters for your buildpack, see the [Buildpack bindings](#buildpack-bindings) section.
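As an illustration, a binding of type `ApplicationInsights` might be created as follows; the property and secret key names shown are hypothetical examples that depend on the buildpack you're binding:

```azurecli
# Sketch: create an ApplicationInsights buildpack binding.
# The property and secret key names below are examples, not a definitive list.
az spring build-service builder buildpack-binding create \
    --resource-group <your-resource-group-name> \
    --service <your-service-instance-name> \
    --builder-name <your-builder-name> \
    --name <your-buildpack-binding-name> \
    --type ApplicationInsights \
    --properties sampling-percentage=10 \
    --secrets connection-string=<your-connection-string>
```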
-## Show the details for a specific binding
+### Show the details for a specific binding
You can view the details of a specific binding using the following command: ```azurecli
-az spring-cloud build-service builder buildpack-binding show \
+az spring build-service builder buildpack-binding show \
--resource-group <your-resource-group-name> \ --service <your-service-instance-name> \ --name <your-buildpack-binding-name> \ --builder-name <your-builder-name> ```
-## Edit the properties of a binding
+### Edit the properties of a binding
You can change a binding's properties using the following command: ```azurecli
-az spring-cloud build-service builder buildpack-binding set \
+az spring build-service builder buildpack-binding set \
--resource-group <your-resource-group-name> \ --service <your-service-instance-name> \ --name <your-buildpack-binding-name> \
az spring-cloud build-service builder buildpack-binding set \
For more information on the `properties` and `secrets` parameters for your buildpack, see the [Buildpack bindings](#buildpack-bindings) section.
-#### Delete a binding
+### Delete a binding
Use the following command to change the binding status from **Bound** to **Unbound**. ```azurecli
-az spring-cloud build-service builder buildpack-binding delete \
+az spring build-service builder buildpack-binding delete \
--resource-group <your-resource-group-name> \ --service <your-service-instance-name> \ --name <your-buildpack-binding-name> \
az spring-cloud build-service builder buildpack-binding delete \
## Next steps -- [Azure Spring Cloud](index.yml)
+- [Azure Spring Apps](index.yml)
spring-cloud How To Enterprise Deploy Non Java Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-enterprise-deploy-non-java-apps.md
Title: How to Deploy Non-Java Applications in Azure Spring Cloud Enterprise Tier-
-description: How to Deploy Non-Java Applications in Azure Spring Cloud Enterprise Tier
+ Title: How to Deploy Non-Java Applications in Azure Spring Apps Enterprise Tier
+
+description: How to Deploy Non-Java Applications in Azure Spring Apps Enterprise Tier
Last updated 02/09/2022-+
-# How to deploy non-Java applications in Azure Spring Cloud
+# How to deploy non-Java applications in Azure Spring Apps
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ❌ Basic/Standard tier ✔️ Enterprise tier
-This article shows you how to deploy your non-java application to Azure Spring Cloud Enterprise tier.
+This article shows you how to deploy your non-Java application to the Azure Spring Apps Enterprise tier.
## Prerequisites -- An already provisioned Azure Spring Cloud Enterprise tier instance. For more information, see [Quickstart: Provision an Azure Spring Cloud service instance using the Enterprise tier](quickstart-provision-service-instance-enterprise.md).-- One or more applications running in Azure Spring Cloud. For more information on creating apps, see [Launch your Spring Cloud application from source code](./how-to-launch-from-source.md).
+- An already provisioned Azure Spring Apps Enterprise tier instance. For more information, see [Quickstart: Provision an Azure Spring Apps service instance using the Enterprise tier](quickstart-provision-service-instance-enterprise.md).
+- One or more applications running in Azure Spring Apps. For more information on creating apps, see [How to Deploy Spring Boot applications from Azure CLI](./how-to-launch-from-source.md).
- [Azure CLI](/cli/azure/install-azure-cli), version 2.0.67 or higher. - Your application source code.
To deploy from a source code folder on your local machine, see [Non-Java applicatio
To deploy the source code folder to an active deployment, use the following command: ```azurecli
-az spring-cloud app deploy
+az spring app deploy
--resource-group <your-resource-group-name> \
- --service <your-Azure-Spring-Cloud-name> \
+ --service <your-Azure-Spring-Apps-name> \
--name <your-app-name> \ --source-path <path-to-source-code> ```
az spring-cloud app deploy
Your application must conform to the following restrictions: - Your application must listen on port 8080. The service checks the port on TCP for readiness and liveness.-- If your source code contains a package management folder, such as *node_modules*, ensure the folder contains all the dependencies. Otherwise, remove it and let Azure Spring Cloud install it.
+- If your source code contains a package management folder, such as *node_modules*, ensure the folder contains all the dependencies. Otherwise, remove it and let Azure Spring Apps install it.
- To see whether your source code language is supported and the feature is provided, see the [Support Matrix](#support-matrix) section. ## Support matrix
The following table indicates the features supported for each language.
## Next steps -- [Azure Spring Cloud](index.yml)
+- [Azure Spring Apps](index.yml)
spring-cloud How To Enterprise Marketplace Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-enterprise-marketplace-offer.md
Title: How to view the Azure Spring Cloud Enterprise Tier offering from Azure Marketplace
-description: How to view the Azure Spring Cloud Enterprise Tier offering from Azure Marketplace.
+ Title: How to view the Azure Spring Apps Enterprise Tier offering from Azure Marketplace
+description: How to view the Azure Spring Apps Enterprise Tier offering from Azure Marketplace.
Last updated 02/09/2022-+
-# View Azure Spring Cloud Enterprise Tier offering in Azure Marketplace
+# View Azure Spring Apps Enterprise Tier offering in Azure Marketplace
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ❌ Basic/Standard tier ✔️ Enterprise tier
-This article shows you how to view the Azure Spring Cloud Enterprise Tier with VMware Tanzu offering through Azure Marketplace and how to redirect to the Azure Spring Cloud Enterprise tier creation page from Azure Marketplace.
+This article shows you how to view the Azure Spring Apps Enterprise Tier with VMware Tanzu offering in Azure Marketplace and how to navigate from Azure Marketplace to the Azure Spring Apps Enterprise tier creation page.
-Azure Spring Cloud Enterprise Tier is optimized for the needs of enterprise Spring developers through advanced configurability, flexibility, portability, and enterprise-ready VMware Spring Runtime 24x7 support. Developers also benefit from commercial Tanzu components, such as VMware Tanzu® Build Service™, Application Configuration Service for VMware Tanzu®, and VMware Tanzu® Service Registry, and access to Spring experts.
+Azure Spring Apps Enterprise Tier is optimized for the needs of enterprise Spring developers through advanced configurability, flexibility, portability, and enterprise-ready VMware Spring Runtime 24x7 support. Developers also benefit from commercial Tanzu components, such as VMware Tanzu® Build Service™, Application Configuration Service for VMware Tanzu®, and VMware Tanzu® Service Registry, and access to Spring experts.
-You can obtain and pay for a license to Tanzu components through an [Azure Marketplace offering](https://aka.ms/ascmpoffer). Azure Spring Cloud manages the license acquisition so you won't have to do it yourself.
+You can obtain and pay for a license to Tanzu components through an [Azure Marketplace offering](https://aka.ms/ascmpoffer). Azure Spring Apps manages the license acquisition so you won't have to do it yourself.
## Prerequisites
To purchase in the Azure Marketplace, you must meet the following prerequisites:
- Your organization allows [Azure Marketplace purchases](../cost-management-billing/manage/ea-azure-marketplace.md#enabling-azure-marketplace-purchases). - Your organization allows acquiring any Azure Marketplace software application listed in [Purchase policy management](/marketplace/azure-purchasing-invoicing#purchase-policy-management).
-## View Azure Spring Cloud Enterprise Tier offering from Azure Marketplace
+## View Azure Spring Apps Enterprise Tier offering from Azure Marketplace
-To see the offering and read a detailed description, see [Azure Spring Cloud Enterprise Tier](https://aka.ms/ascmpoffer).
+To see the offering and read a detailed description, see [Azure Spring Apps Enterprise Tier](https://aka.ms/ascmpoffer).
To see the supported plans in your market, select **Plans + Pricing**. > [!NOTE] > If you see "No plans are available for market '\<Location>'", that means none of your Azure subscriptions can purchase the SaaS offering. For more information, see [No plans are available for market '\<Location>'](./troubleshoot.md#no-plans-are-available-for-market-location) in [Troubleshooting](./troubleshoot.md). To see the Enterprise Tier creation page, select **Subscribe** ## Next steps -- [Azure Spring Cloud](index.yml)
+- [Azure Spring Apps](index.yml)
spring-cloud How To Enterprise Service Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-enterprise-service-registry.md
Title: How to Use Tanzu Service Registry with Azure Spring Cloud Enterprise Tier-
-description: How to use Tanzu Service Registry with Azure Spring Cloud Enterprise Tier.
+ Title: How to Use Tanzu Service Registry with Azure Spring Apps Enterprise Tier
+
+description: How to use Tanzu Service Registry with Azure Spring Apps Enterprise Tier.
Last updated 02/09/2022-+ # Use Tanzu Service Registry
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+ **This article applies to:** ❌ Basic/Standard tier ✔️ Enterprise tier
-This article shows you how to use VMware Tanzu® Service Registry with Azure Spring Cloud Enterprise Tier.
+This article shows you how to use VMware Tanzu® Service Registry with Azure Spring Apps Enterprise Tier.
[Tanzu Service Registry](https://docs.vmware.com/en/Spring-Cloud-Services-for-VMware-Tanzu/2.1/spring-cloud-services/GUID-service-registry-https://docsupdatetracker.net/index.html) is one of the commercial VMware Tanzu components. It provides your apps with an implementation of the Service Discovery pattern, one of the key tenets of a Spring-based architecture. It can be difficult, and brittle in production, to hand-configure each client of a service or adopt some form of access convention. Instead, your apps can use Tanzu Service Registry to dynamically discover and call registered services. ## Prerequisites -- An already provisioned Azure Spring Cloud Enterprise tier instance with Tanzu Service Registry enabled. For more information, see [Quickstart: Provision an Azure Spring Cloud service instance using the Enterprise tier](quickstart-provision-service-instance-enterprise.md).
+- An already provisioned Azure Spring Apps Enterprise tier instance with Tanzu Service Registry enabled. For more information, see [Quickstart: Provision an Azure Spring Apps service instance using the Enterprise tier](quickstart-provision-service-instance-enterprise.md).
> [!NOTE]
- > To use Tanzu Service Registry, you must enable it when you provision your Azure Spring Cloud service instance. You cannot enable it after provisioning at this time.
+ > To use Tanzu Service Registry, you must enable it when you provision your Azure Spring Apps service instance. You cannot enable it after provisioning at this time.
## Use Service Registry with apps
Use the following steps to bind an application to Tanzu Service Registry.
1. Select **Bind app** and choose one app in the dropdown, then select **Apply** to bind.
- :::image type="content" source="media/enterprise/how-to-enterprise-service-registry/service-reg-app-bind-dropdown.png" alt-text="Screenshot of Azure portal showing Azure Spring Cloud Service Registry page and 'App binding' section with 'Bind app' dropdown showing.":::
+ :::image type="content" source="media/enterprise/how-to-enterprise-service-registry/service-reg-app-bind-dropdown.png" alt-text="Screenshot of Azure portal showing Azure Spring Apps Service Registry page and 'App binding' section with 'Bind app' dropdown showing.":::
> [!NOTE] > When you change the bind/unbind status, you must restart or redeploy the app to make the change take effect.
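If you prefer scripting, the `spring` CLI extension is expected to expose a corresponding bind command, analogous to the `application-configuration-service bind` command shown earlier in this digest. A sketch, assuming that command exists; placeholder names throughout:

```azurecli
# Sketch (assumed command): bind an app to Tanzu Service Registry,
# then restart it so the binding change takes effect.
az spring service-registry bind \
    --resource-group <resource-group-name> \
    --service <service-instance-name> \
    --app <app-name>

az spring app restart \
    --resource-group <resource-group-name> \
    --service <service-instance-name> \
    --name <app-name>
```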
spring-cloud How To Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-github-actions.md
Title: Use Azure Spring Cloud CI/CD with GitHub Actions
-description: How to build up a CI/CD workflow for Azure Spring Cloud with GitHub Actions
+ Title: Use Azure Spring Apps CI/CD with GitHub Actions
+description: How to build up a CI/CD workflow for Azure Spring Apps with GitHub Actions
Last updated 09/08/2020-+ zone_pivot_groups: programming-languages-spring-cloud
-# Use Azure Spring Cloud CI/CD with GitHub Actions
+# Use Azure Spring Apps CI/CD with GitHub Actions
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This article shows you how to build up a CI/CD workflow for Azure Spring Cloud with GitHub Actions.
+This article shows you how to build up a CI/CD workflow for Azure Spring Apps with GitHub Actions.
-GitHub Actions support an automated software development lifecycle workflow. With GitHub Actions for Azure Spring Cloud you can create workflows in your repository to build, test, package, release, and deploy to Azure.
+GitHub Actions supports an automated software development lifecycle workflow. With GitHub Actions for Azure Spring Apps, you can create workflows in your repository to build, test, package, release, and deploy to Azure.
## Prerequisites
You need an Azure service principal credential to authorize Azure login action.
```azurecli az login
-az ad sp create-for-rbac --role contributor --scopes /subscriptions/<SUBSCRIPTION_ID> --sdk-auth
+az ad sp create-for-rbac \
+ --role contributor \
+ --scopes /subscriptions/<SUBSCRIPTION_ID> \
+ --sdk-auth
``` To access a specific resource group, you can reduce the scope: ```azurecli
-az ad sp create-for-rbac --role contributor --scopes /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP> --sdk-auth
+az ad sp create-for-rbac \
+ --role contributor \
+ --scopes /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP> \
+ --sdk-auth
``` The command should output a JSON object:
Set the secret name to `AZURE_CREDENTIALS` and its value to the JSON string that
![Set secret data](./media/github-actions/actions2.png)
-You can also get the Azure login credential from Key Vault in GitHub actions as explained in [Authenticate Azure Spring with Key Vault in GitHub Actions](./github-actions-key-vault.md).
+You can also get the Azure login credential from Key Vault in GitHub Actions as explained in [Authenticate Azure Spring with Key Vault in GitHub Actions](./github-actions-key-vault.md).
## Provision service instance
-To provision your Azure Spring Cloud service instance, run the following commands using the Azure CLI.
+To provision your Azure Spring Apps service instance, run the following commands using the Azure CLI.
```azurecli
-az extension add --name spring-cloud
-az group create --location eastus --name <resource group name>
-az spring-cloud create -n <service instance name> -g <resource group name>
-az spring-cloud config-server git set -n <service instance name> --uri https://github.com/xxx/Azure-Spring-Cloud-Samples --label main --search-paths steeltoe-sample/config
+az extension add --name spring
+az group create \
+ --name <resource-group-name> \
+ --location eastus
+az spring create \
+ --resource-group <resource-group-name> \
+ --name <service-instance-name>
+az spring config-server git set \
+ --name <service-instance-name> \
+ --uri https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples \
+ --label main \
+ --search-paths steeltoe-sample/config
``` ## Build the workflow
The workflow is defined using the following options.
### Prepare for deployment with Azure CLI
-The command `az spring-cloud app create` is currently not idempotent. After you run it once, you'll get an error if you run the same command again. We recommend this workflow on existing Azure Spring Cloud apps and instances.
+The command `az spring app create` is currently not idempotent. After you run it once, you'll get an error if you run the same command again. We recommend using this workflow with existing Azure Spring Apps apps and instances.
Use the following Azure CLI commands for preparation: ```azurecli
-az config set defaults.group=<service group name>
-az config set defaults.spring-cloud=<service instance name>
-az spring-cloud app create --name planet-weather-provider
-az spring-cloud app create --name solar-system-weather
+az config set defaults.group=<service-group-name>
+az config set defaults.spring-cloud=<service-instance-name>
+az spring app create --name planet-weather-provider
+az spring app create --name solar-system-weather
``` ### Deploy with Azure CLI directly
jobs:
- name: install Azure CLI extension run: |
- az extension add --name spring-cloud --yes
+ az extension add --name spring --yes
- name: Build and package planet-weather-provider app working-directory: ${{env.working-directory}}/src/planet-weather-provider run: | dotnet publish
- az spring-cloud app deploy -n planet-weather-provider --runtime-version NetCore_31 --main-entry Microsoft.Azure.SpringCloud.Sample.PlanetWeatherProvider.dll --artifact-path ./publish-deploy-planet.zip -s ${{ env.service-name }} -g ${{ env.resource-group-name }}
+ az spring app deploy -n planet-weather-provider --runtime-version NetCore_31 --main-entry Microsoft.Azure.SpringCloud.Sample.PlanetWeatherProvider.dll --artifact-path ./publish-deploy-planet.zip -s ${{ env.service-name }} -g ${{ env.resource-group-name }}
- name: Build solar-system-weather app working-directory: ${{env.working-directory}}/src/solar-system-weather run: | dotnet publish
- az spring-cloud app deploy -n solar-system-weather --runtime-version NetCore_31 --main-entry Microsoft.Azure.SpringCloud.Sample.SolarSystemWeather.dll --artifact-path ./publish-deploy-solar.zip -s ${{ env.service-name }} -g ${{ env.resource-group-name }}
+ az spring app deploy -n solar-system-weather --runtime-version NetCore_31 --main-entry Microsoft.Azure.SpringCloud.Sample.SolarSystemWeather.dll --artifact-path ./publish-deploy-solar.zip -s ${{ env.service-name }} -g ${{ env.resource-group-name }}
``` ::: zone-end
You need an Azure service principal credential to authorize Azure login action.
```azurecli az login
-az ad sp create-for-rbac --role contributor --scopes /subscriptions/<SUBSCRIPTION_ID> --sdk-auth
+az ad sp create-for-rbac \
+ --role contributor \
+ --scopes /subscriptions/<SUBSCRIPTION_ID> \
+ --sdk-auth
``` To access a specific resource group, you can reduce the scope: ```azurecli
-az ad sp create-for-rbac --role contributor --scopes /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP> --sdk-auth
+az ad sp create-for-rbac \
+ --role contributor \
+ --scopes /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP> \
+ --sdk-auth
``` The command should output a JSON object:
Set the secret name to `AZURE_CREDENTIALS` and its value to the JSON string that
![Set secret data](./media/github-actions/actions2.png)
-You can also get the Azure login credential from Key Vault in GitHub actions as explained in [Authenticate Azure Spring with Key Vault in GitHub Actions](./github-actions-key-vault.md).
+You can also get the Azure login credential from Key Vault in GitHub Actions as explained in [Authenticate Azure Spring with Key Vault in GitHub Actions](./github-actions-key-vault.md).
## Provision service instance
-To provision your Azure Spring Cloud service instance, run the following commands using the Azure CLI.
+To provision your Azure Spring Apps service instance, run the following commands using the Azure CLI.
```azurecli
-az extension add --name spring-cloud
+az extension add --name spring
az group create --location eastus --name <resource group name>
-az spring-cloud create -n <service instance name> -g <resource group name>
-az spring-cloud config-server git set -n <service instance name> --uri https://github.com/xxx/piggymetrics --label config
+az spring create -n <service instance name> -g <resource group name>
+az spring config-server git set -n <service instance name> --uri https://github.com/xxx/piggymetrics --label config
``` ## End-to-end sample workflows
The following sections show you various options for deploying your app.
#### To production
-Azure Spring Cloud supports deploying to deployments with built artifacts (e.g., JAR or .NET Core ZIP) or source code archive.
-The following example deploys to the default production deployment in Azure Spring Cloud using JAR file built by Maven. This is the only possible deployment scenario when using the Basic SKU:
+Azure Spring Apps supports deploying to a deployment using built artifacts (for example, a JAR or .NET Core ZIP file) or a source code archive.
+The following example deploys to the default production deployment in Azure Spring Apps using a JAR file built by Maven. This is the only deployment scenario possible with the Basic SKU:
```yml name: AzureSpringCloud
jobs:
package: ${{ env.ASC_PACKAGE_PATH }}/**/*.jar ```
-The following example deploys to the default production deployment in Azure Spring Cloud using source code.
+The following example deploys to the default production deployment in Azure Spring Apps using source code.
```yml name: AzureSpringCloud
The "Delete Staging Deployment" action allows you to delete the deployment not r
## Deploy with Maven Plugin
-Another option is to use the [Maven Plugin](./quickstart.md) for deploying the Jar and updating App settings. The command `mvn azure-spring-cloud:deploy` is idempotent and will automatically create Apps if needed. You don't need to create corresponding apps in advance.
+Another option is to use the [Maven Plugin](./quickstart.md) to deploy the JAR and update app settings. The command `mvn azure-spring-apps:deploy` is idempotent and automatically creates apps if needed. You don't need to create corresponding apps in advance.
```yaml name: AzureSpringCloud
jobs:
creds: ${{ secrets.AZURE_CREDENTIALS }} # Maven deploy, make sure you have correct configurations in your pom.xml
- - name: deploy to Azure Spring Cloud using Maven
+ - name: deploy to Azure Spring Apps using Maven
run: |
- mvn azure-spring-cloud:deploy
+ mvn azure-spring-apps:deploy
``` ::: zone-end
If your action runs in error, for example, if you haven't set the Azure credenti
## Next steps
-* [Key Vault for Spring Cloud GitHub actions](./github-actions-key-vault.md)
+* [Authenticate Azure Spring Apps with Azure Key Vault in GitHub Actions](./github-actions-key-vault.md)
* [Azure Active Directory service principals](/cli/azure/ad/sp#az-ad-sp-create-for-rbac) * [GitHub Actions for Azure](https://github.com/Azure/actions/)
spring-cloud How To Integrate Azure Load Balancers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-integrate-azure-load-balancers.md
Title: Tutorial - Integrate Azure Spring Cloud with Azure Load Balance Solutions
-description: How to integrate Azure Spring Cloud with Azure Load Balance Solutions
+ Title: Tutorial - Integrate Azure Spring Apps with Azure Load Balance Solutions
+description: How to integrate Azure Spring Apps with Azure Load Balance Solutions
Last updated 04/20/2020-+
-# Integrate Azure Spring Cloud with Azure Load Balance Solutions
+# Integrate Azure Spring Apps with Azure Load Balance Solutions
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Java ✔️ C# **This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-Azure Spring Cloud supports Spring applications on Azure. Increasing business can require multiple data centers with management of multiple instances of Azure Spring Cloud.
+Azure Spring Apps supports Spring applications on Azure. As your business grows, you may need multiple data centers and need to manage multiple instances of Azure Spring Apps.
-Azure already provides different load-balance solutions. There are three options to integrate Azure Spring Cloud with Azure load-balance solutions:
+Azure already provides different load-balancing solutions. There are three options to integrate Azure Spring Apps with Azure load-balancing solutions:
-1. Integrate Azure Spring Cloud with Azure Traffic Manager
-2. Integrate Azure Spring Cloud with Azure App Gateway
-3. Integrate Azure Spring Cloud with Azure Front Door
+1. Integrate Azure Spring Apps with Azure Traffic Manager
+2. Integrate Azure Spring Apps with Azure App Gateway
+3. Integrate Azure Spring Apps with Azure Front Door
## Prerequisites
-* Azure Spring Cloud: [How to create an Azure spring cloud service](./quickstart.md)
+* Azure Spring Apps: [How to create an Azure Spring Apps service](./quickstart.md)
* Azure Traffic * Azure App Gateway: [How to create an application gateway](../application-gateway/quick-create-portal.md) * Azure Front Door: [How to create a front door](../frontdoor/quickstart-create-front-door.md)
-## Integrate Azure Spring Cloud with Azure Traffic Manager
+## Integrate Azure Spring Apps with Azure Traffic Manager
-To integrate Azure spring cloud with Traffic Manager, add its public endpoints as traffic managerΓÇÖs endpoints and then configure custom domain for both traffic manager and Azure spring cloud.
+To integrate Azure Spring Apps with Traffic Manager, add its public endpoints as Traffic Manager's endpoints, and then configure a custom domain for both Traffic Manager and Azure Spring Apps.
### Add Endpoint in Traffic Manager Add endpoints in traffic 1. Specify **Type** to be *External endpoint*.
-1. Input fully qualified domain name (FQDN) of each Azure spring cloud public endpoint.
+1. Enter the fully qualified domain name (FQDN) of each Azure Spring Apps public endpoint.
1. Select **OK**. ![Traffic Manager 1](media/spring-cloud-load-balancers/traffic-manager-1.png)
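If you script this step instead of using the portal, the external endpoint can be added with the Azure CLI; a sketch with placeholder profile, endpoint, and FQDN values:

```azurecli
# Sketch: register an Azure Spring Apps public endpoint as an external Traffic Manager endpoint.
az network traffic-manager endpoint create \
    --resource-group <resource-group-name> \
    --profile-name <traffic-manager-profile-name> \
    --name <endpoint-name> \
    --type externalEndpoints \
    --target <azure-spring-apps-fqdn> \
    --endpoint-status Enabled
```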
Add endpoints in traffic
To finish the configuration: 1. Sign in to the website of your domain provider, and create a CNAME record mapping from your custom domain to Traffic Manager's Azure default domain name.
-1. Follow instructions [How to add custom domain to Azure Spring Cloud](./tutorial-custom-domain.md).
-1. Add above custom domain binding to traffic manager to Azure spring cloud corresponding app service and upload SSL certificate there.
+1. Follow the instructions in [How to add custom domain to Azure Spring Apps](./tutorial-custom-domain.md).
+1. Add the custom domain binding used by Traffic Manager to the corresponding Azure Spring Apps app and upload the SSL certificate there, as sketched in the CLI example below.
![Traffic Manager 3](media/spring-cloud-load-balancers/traffic-manager-3.png)
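The custom domain binding in the last step can also be done from the CLI; a minimal sketch, assuming the custom domain is already CNAME-mapped to Traffic Manager and all names are placeholders:

```azurecli
# Sketch: bind the custom domain to the app. Attach the SSL certificate separately,
# for example with `az spring certificate add` and `az spring app custom-domain update`.
az spring app custom-domain bind \
    --resource-group <resource-group-name> \
    --service <service-instance-name> \
    --app <app-name> \
    --domain-name <your-custom-domain>
```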
-## Integrate Azure Spring Cloud with Azure App Gateway
+## Integrate Azure Spring Apps with Azure App Gateway
-To integrate with Azure Spring Cloud service, complete the following configurations:
+To integrate with the Azure Spring Apps service, complete the following configurations:
### Configure Backend Pool 1. Specify **Target type** as *IP address* or *FQDN*.
-1. Enter your Azure spring cloud public endpoints.
+1. Enter your Azure Spring Apps public endpoints.
![App Gateway 1](media/spring-cloud-load-balancers/app-gateway-1.png)
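The equivalent backend pool can also be created from the CLI; a sketch with placeholder gateway, pool, and FQDN values:

```azurecli
# Sketch: create an Application Gateway backend pool whose member is the
# Azure Spring Apps public endpoint FQDN.
az network application-gateway address-pool create \
    --resource-group <resource-group-name> \
    --gateway-name <app-gateway-name> \
    --name <backend-pool-name> \
    --servers <azure-spring-apps-fqdn>
```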
To integrate with Azure Spring Cloud service, complete the following configurati
### Configure Rewrite Set 1. Select **Rewrites** then **Rewrite set** to add a rewrite set.
-1. Select the routing rules that route requests to Azure Spring Cloud public endpoints.
+1. Select the routing rules that route requests to Azure Spring Apps public endpoints.
1. On **Rewrite rule configuration** tab, select **Add rewrite rule**. 1. **Rewrite type**: select **Request Header** 1. **Action type**: select **Delete**
To integrate with Azure Spring Cloud service, complete the following configurati
![App Gateway 4](media/spring-cloud-load-balancers/app-gateway-4.png)
-## Integrate Azure Spring Cloud with Azure Front Door
+## Integrate Azure Spring Apps with Azure Front Door
-To integrate with Azure Spring Cloud service and configure backend pool, use the following steps:
+To integrate with the Azure Spring Apps service and configure a backend pool, use the following steps:
1. **Add backend pool**. 1. Specify the backend endpoint by adding host.
To integrate with Azure Spring Cloud service and configure backend pool, use the
![Front Door 1](media/spring-cloud-load-balancers/front-door-1.png) 1. Specify **backend host type** as *custom host*.
-1. Input FQDN of your Azure Spring Cloud public endpoints in **backend host name**.
+1. Enter the FQDN of your Azure Spring Apps public endpoints in **backend host name**.
1. Accept the **backend host header** default, which is the same as **backend host name**. ![Front Door 2](media/spring-cloud-load-balancers/front-door-2.png)
spring-cloud How To Intellij Deploy Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-intellij-deploy-apps.md
Title: "Tutorial: Deploy Spring Boot applications using IntelliJ"
-description: Use IntelliJ to deploy applications to Azure Spring Cloud.
+description: Use IntelliJ to deploy applications to Azure Spring Apps.
Last updated 11/03/2021-+ # Deploy Spring Boot applications using IntelliJ
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+ **This article applies to:** ✔️ Java ❌ C# **This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-The IntelliJ plug-in for Azure Spring Cloud supports application deployment from IntelliJ IDEA.
+The IntelliJ plug-in for Azure Spring Apps supports application deployment from IntelliJ IDEA.
Before running this example, you can try the [basic quickstart](./quickstart.md).
You can add the Azure Toolkit for IntelliJ IDEA 3.51.0 from the IntelliJ **Plugi
The following procedures deploy a Hello World application using IntelliJ IDEA. * Open the gs-spring-boot project
-* Deploy to Azure Spring Cloud
+* Deploy to Azure Spring Apps
* Show streaming logs ## Open gs-spring-boot project
The following procedures deploy a Hello World application using IntelliJ IDEA.
![Import Project](media/spring-cloud-intellij-howto/import-project-1.png)
-## Deploy to Azure Spring Cloud
+## Deploy to Azure Spring Apps
To deploy to Azure, you must sign in with your Azure account and choose your subscription. For sign-in details, see [Installation and sign-in](/azure/developer/java/toolkit-for-intellij/create-hello-world-web-app#installation-and-sign-in).
-1. Right-click your project in IntelliJ project explorer, and select **Azure** -> **Deploy to Azure Spring Cloud**.
+1. Right-click your project in IntelliJ project explorer, and select **Azure** -> **Deploy to Azure Spring Apps**.
![Deploy to Azure 1](media/spring-cloud-intellij-howto/deploy-to-azure-1.png)
In order to deploy to Azure you must sign-in with your Azure account, and choose
1. The plug-in will run the command `mvn package` on the project and then create the new app and deploy the jar generated by the `package` command.
-1. If the app URL is not shown in the output window, get it from the Azure portal. Navigate from your resource group to the instance of Azure Spring Cloud. Then select **Apps**. The running app will be listed. Select the app, then copy the **URL** or **Test Endpoint**.
+1. If the app URL is not shown in the output window, get it from the Azure portal. Navigate from your resource group to the instance of Azure Spring Apps. Then select **Apps**. The running app will be listed. Select the app, then copy the **URL** or **Test Endpoint**.
![Get test URL](media/spring-cloud-intellij-howto/get-test-url.png)
To get the logs:
## Next steps
-* [Prepare Spring application for Azure Spring Cloud](how-to-prepare-app-deployment.md)
+* [Prepare Spring application for Azure Spring Apps](how-to-prepare-app-deployment.md)
* [Learn more about Azure Toolkit for IntelliJ](/azure/developer/java/toolkit-for-intellij/)
spring-cloud How To Launch From Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-launch-from-source.md
Title: "How to - Launch your Spring Cloud application from source code"
-description: In this quickstart, learn how to launch your application in Azure Spring Cloud directly from your source code
+ Title: How to Deploy Spring Boot applications from Azure CLI
+description: In this quickstart, learn how to launch your application in Azure Spring Apps directly from your source code
Last updated 11/12/2021 -+ # How to Deploy Spring Boot applications from Azure CLI
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+ **This article applies to:** ✔️ Java ❌ C# **This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-Azure Spring Cloud enables Spring Boot applications on Azure.
+Azure Spring Apps lets you run Spring Boot applications on Azure.
You can launch applications directly from Java source code or from a pre-built JAR. This article explains the deployment procedures.
Before you begin, ensure that your Azure subscription has the required dependenc
## Install the Azure CLI extension
-Install the Azure Spring Cloud extension for the Azure CLI with the following command
+Install the Azure Spring Apps extension for the Azure CLI with the following command:
```azurecli
-az extension add --name spring-cloud
+az extension add --name spring
``` ## Provision a service instance using the Azure CLI
az account list -o table
az account set --subscription ```
-Create a resource group to contain your service in Azure Spring Cloud. You can learn more about [Azure Resource Groups](../azure-resource-manager/management/overview.md).
+Create a resource group to contain your service in Azure Spring Apps. You can learn more about [Azure Resource Groups](../azure-resource-manager/management/overview.md).
```azurecli az group create --location eastus --name <resource-group-name> ```
-Run the following commands to provision an instance of Azure Spring Cloud. Prepare a name for your service in Azure Spring Cloud. The name must be between 4 and 32 characters and can contain only lowercase letters, numbers, and hyphens. The first character of the service name must be a letter and the last character must be either a letter or a number.
+Run the following commands to provision an instance of Azure Spring Apps. Prepare a name for your service in Azure Spring Apps. The name must be between 4 and 32 characters and can contain only lowercase letters, numbers, and hyphens. The first character of the service name must be a letter and the last character must be either a letter or a number.
```azurecli
-az spring-cloud create --resource-group <resource-group-name> --name <resource-name>
+az spring create --resource-group <resource-group-name> --name <resource-name>
``` The service instance will take about five minutes to deploy.
-Set your default resource group name and Azure Spring Cloud instance name using the following commands:
+Set your default resource group name and Azure Spring Apps instance name using the following commands:
```azurecli az config set defaults.group=<service-group-name> az config set defaults.spring-cloud=<service-instance-name> ```
-## Create the application in Azure Spring Cloud
+## Create the application in Azure Spring Apps
-The following command creates an application in Azure Spring Cloud in your subscription. This creates an empty service to which you can upload your application.
+The following command creates an application in Azure Spring Apps in your subscription. This creates an empty app to which you can upload your application.
```azurecli
-az spring-cloud app create --name <app-name>
+az spring app create --name <app-name>
``` ## Deploy your Spring Boot application
To deploy from a JAR built on your local machine, ensure that your build produce
To deploy the fat-JAR to an active deployment, use the following command: ```azurecli
-az spring-cloud app deploy --name <app-name> --jar-path <path-to-fat-JAR>
+az spring app deploy --name <app-name> --jar-path <path-to-fat-JAR>
``` To deploy the fat-JAR to a specific deployment, use the following command: ```azurecli
-az spring-cloud app deployment create --app <app-name> \
+az spring app deployment create --app <app-name> \
--name <deployment-name> \ --jar-path <path-to-fat-JAR> ``` ### Deploy from source code
-Azure Spring Cloud uses [kpack](https://github.com/pivotal/kpack) to build your project. You can use Azure CLI to upload your source code, build your project using kpack, and deploy it to the target application.
+Azure Spring Apps uses [kpack](https://github.com/pivotal/kpack) to build your project. You can use Azure CLI to upload your source code, build your project using kpack, and deploy it to the target application.
> [!WARNING] > The project must produce only one JAR file with a `main-class` entry in the `MANIFEST.MF` in `target` (for Maven deployments) or `build/libs` (for Gradle deployments). Multiple JAR files with `main-class` entries will cause the deployment to fail.
For single module Maven / Gradle projects:
```azurecli cd <path-to-maven-or-gradle-source-root>
-az spring-cloud app deploy --name <app-name>
+az spring app deploy --name <app-name>
``` For Maven / Gradle projects with multiple modules, repeat for each module: ```azurecli cd <path-to-maven-or-gradle-source-root>
-az spring-cloud app deploy --name <app-name> \
+az spring app deploy --name <app-name> \
--target-module <relative-path-to-module> ```
az spring-cloud app deploy --name <app-name> \
Review the kpack build logs using the following command: ```azurecli
-az spring-cloud app show-deploy-log --name <app-name>
+az spring app show-deploy-log --name <app-name>
``` > [!NOTE]
In this quickstart, you learned how to:
> * Assign public IP for your application gateway > [!div class="nextstepaction"]
-> [Spring Cloud logs, metrics, tracing](./quickstart-logs-metrics-tracing.md)
+> [Quickstart: Monitoring Azure Spring Apps with logs, metrics, and tracing](./quickstart-logs-metrics-tracing.md)
-More samples are available on GitHub: [Azure Spring Cloud Samples](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples/tree/master/service-binding-cosmosdb-sql).
+More samples are available on GitHub: [Azure Spring Apps Samples](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples/tree/master/service-binding-cosmosdb-sql).
spring-cloud How To Log Streaming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-log-streaming.md
Title: Stream Azure Spring Cloud app logs in real-time
+ Title: Stream Azure Spring Apps app logs in real-time
description: How to use log streaming to view application logs instantly Last updated 01/14/2019-+
-# Stream Azure Spring Cloud app logs in real-time
+# Stream Azure Spring Apps app logs in real-time
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Java ✔️ C# **This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-Azure Spring Cloud enables log streaming in Azure CLI to get real-time application console logs for troubleshooting. You can also [Analyze logs and metrics with diagnostics settings](./diagnostic-services.md).
+Azure Spring Apps enables log streaming in the Azure CLI to get real-time application console logs for troubleshooting. You can also [Analyze logs and metrics with diagnostics settings](./diagnostic-services.md).
## Prerequisites
-* Install [Azure CLI extension](/cli/azure/install-azure-cli) for Spring Cloud, minimum version 0.2.0 .
-* An instance of **Azure Spring Cloud** with a running application, for example [Spring Cloud app](./quickstart.md).
-
-> [!NOTE]
-> The Azure Spring Cloud CLI extension is updated from version 0.2.0 to 0.2.1. This change affects the syntax of the command for log streaming: `az spring-cloud app log tail` is replaced by `az spring-cloud app logs`. The command: `az spring-cloud app log tail` will be deprecated in a future release. If you have been using version 0.2.0, you can upgrade to 0.2.1. First, remove the old version with the command: `az extension remove --name spring-cloud`. Then, install 0.2.1 by the command: `az extension add --name spring-cloud`.
+* [Azure CLI](/cli/azure/install-azure-cli) with the Azure Spring Apps extension, minimum version 1.0.0. You can install the extension by using the following command: `az extension add --name spring`
+* An instance of **Azure Spring Apps** with a running application. For more information, see [Quickstart: Deploy your first application to Azure Spring Apps](./quickstart.md).
## Use CLI to tail logs
In following examples, the resource group and service name will be omitted in th
If an app named auth-service has only one instance, you can view the log of the app instance with the following command: ```azurecli
-az spring-cloud app logs --name <application name>
+az spring app logs --name <application name>
``` This will return logs similar to the following examples, where `auth-service` is the application name.
If multiple instances exist for the app named `auth-service`, you can view the i
First, you can get the app instance names with following command. ```azurecli
-az spring-cloud app show --name auth-service --query properties.activeDeployment.properties.instances --output table
+az spring app show --name auth-service --query properties.activeDeployment.properties.instances --output table
``` This command produces results similar to the following output:
auth-service-default-12-75cc4577fc-n25mh Running UP
Then, you can stream logs of an app instance with the `-i/--instance` option: ```azurecli
-az spring-cloud app logs --name auth-service --instance auth-service-default-12-75cc4577fc-pw7hb
+az spring app logs --name auth-service --instance auth-service-default-12-75cc4577fc-pw7hb
```
-You can also get details of app instances from the Azure portal. After selecting **Apps** in the left navigation pane of your Azure Spring Cloud service, select **App Instances**.
+You can also get details of app instances from the Azure portal. After selecting **Apps** in the left navigation pane of your Azure Spring Apps service, select **App Instances**.
### Continuously stream new logs
-By default, `az spring-cloud app logs` prints only existing logs streamed to the app console and then exits. If you want to stream new logs, add `-f/--follow`:
+By default, `az spring app logs` prints only existing logs streamed to the app console and then exits. If you want to stream new logs, add `-f/--follow`:
```azurecli
-az spring-cloud app logs --name auth-service --follow
+az spring app logs --name auth-service --follow
```
-When you use `--follow` to tail instant logs, the Azure Spring Cloud log streaming service will send heartbeat logs to the client every minute unless your application is writing logs constantly. These heartbeat log messages look like `2020-01-15 04:27:13.473: No log from server`.
+When you use `--follow` to tail instant logs, the Azure Spring Apps log streaming service will send heartbeat logs to the client every minute unless your application is writing logs constantly. These heartbeat log messages look like `2020-01-15 04:27:13.473: No log from server`.
To check all the supported logging options, use the following command:

```azurecli
-az spring-cloud app logs --help
+az spring app logs --help
```

### Format JSON structured logs

> [!NOTE]
-> Requires spring-cloud extension version 2.4.0 or later.
+> Requires spring extension version 2.4.0 or later.
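To check which version of the `spring` extension you have installed, and to upgrade it if needed, you can use commands along the following lines (a minimal sketch; the `--query` path assumes the default JSON output of `az extension show`):

```azurecli
# Show the installed version of the spring extension.
az extension show --name spring --query version --output tsv

# Upgrade the extension if the reported version is older than 2.4.0.
az extension update --name spring
```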
When the [Structured application log](./structured-app-log.md) is enabled for the app, the logs are printed in JSON format, which makes them difficult to read. The `--format-json` argument can be used to format the JSON logs into a human-readable format.

```azurecli
# Raw JSON log
-$ az spring-cloud app logs --name auth-service
+$ az spring app logs --name auth-service
{"timestamp":"2021-05-26T03:35:27.533Z","logger":"com.netflix.discovery.DiscoveryClient","level":"INFO","thread":"main","mdc":{},"message":"Disable delta property : false"} {"timestamp":"2021-05-26T03:35:27.533Z","logger":"com.netflix.discovery.DiscoveryClient","level":"INFO","thread":"main","mdc":{},"message":"Single vip registry refresh property : null"} # Formatted JSON log
-$ az spring-cloud app logs --name auth-service --format-json
+$ az spring app logs --name auth-service --format-json
2021-05-26T03:35:27.533Z INFO [ main] com.netflix.discovery.DiscoveryClient : Disable delta property : false
2021-05-26T03:35:27.533Z INFO [ main] com.netflix.discovery.DiscoveryClient : Single vip registry refresh property : null
```
The `--format-json` argument also takes optional customized format, using the ke
```azurecli
# Custom format
-$ az spring-cloud app logs --name auth-service --format-json="{message}{n}"
+$ az spring app logs --name auth-service --format-json="{message}{n}"
Disable delta property : false
Single vip registry refresh property : null
```
Single vip registry refresh property : null
## Next steps
-* [Quickstart: Monitoring Azure Spring Cloud apps with logs, metrics, and tracing](./quickstart-logs-metrics-tracing.md)
+* [Quickstart: Monitoring Azure Spring Apps apps with logs, metrics, and tracing](./quickstart-logs-metrics-tracing.md)
* [Analyze logs and metrics with diagnostics settings](./diagnostic-services.md)
spring-cloud How To Manage User Assigned Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-manage-user-assigned-managed-identities.md
Title: Manage user-assigned managed identities for an application in Azure Spring Cloud (preview)
+ Title: Manage user-assigned managed identities for an application in Azure Spring Apps (preview)
description: How to manage user-assigned managed identities for applications. Last updated 03/31/2022-+ zone_pivot_groups: spring-cloud-tier-selection
-# Manage user-assigned managed identities for an application in Azure Spring Cloud (preview)
+# Manage user-assigned managed identities for an application in Azure Spring Apps (preview)
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This article shows you how to assign or remove user-assigned managed identities for an application in Azure Spring Cloud, using the Azure portal and Azure CLI.
+This article shows you how to assign or remove user-assigned managed identities for an application in Azure Spring Apps, using the Azure portal and Azure CLI.
-Managed identities for Azure resources provide an automatically managed identity in Azure Active Directory (Azure AD) to an Azure resource such as your application in Azure Spring Cloud. You can use this identity to authenticate to any service that supports Azure AD authentication, without having credentials in your code.
+Managed identities for Azure resources provide an automatically managed identity in Azure Active Directory (Azure AD) to an Azure resource such as your application in Azure Spring Apps. You can use this identity to authenticate to any service that supports Azure AD authentication, without having credentials in your code.
## Prerequisites
Managed identities for Azure resources provide an automatically managed identity
::: zone pivot="sc-enterprise-tier" -- An already provisioned Azure Spring Cloud Enterprise tier instance. For more information, see [Quickstart: Provision an Azure Spring Cloud service instance using the Enterprise tier](quickstart-provision-service-instance-enterprise.md).
+- An already provisioned Azure Spring Apps Enterprise tier instance. For more information, see [Quickstart: Provision an Azure Spring Apps service instance using the Enterprise tier](quickstart-provision-service-instance-enterprise.md).
- [Azure CLI version 2.30.0 or higher](/cli/azure/install-azure-cli).
- [!INCLUDE [install-app-user-identity-extension](includes/install-app-user-identity-extension.md)]
- At least one already provisioned user-assigned managed identity. For more information, see [Manage user-assigned managed identities](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md).
Managed identities for Azure resources provide an automatically managed identity
::: zone pivot="sc-standard-tier" -- An already provisioned Azure Spring Cloud instance. For more information, see [Quickstart: Deploy your first application to Azure Spring Cloud](./quickstart.md).
+- An already provisioned Azure Spring Apps instance. For more information, see [Quickstart: Deploy your first application to Azure Spring Apps](./quickstart.md).
- [Azure CLI version 2.30.0 or higher](/cli/azure/install-azure-cli).
- [!INCLUDE [install-app-user-identity-extension](includes/install-app-user-identity-extension.md)]
- At least one already provisioned user-assigned managed identity. For more information, see [Manage user-assigned managed identities](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md).
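If you don't already have a user-assigned managed identity, you can create one with the Azure CLI, for example as in the following sketch (the resource group and identity names are placeholders to replace with your own values):

```azurecli
# Create a user-assigned managed identity that you can later assign to the app.
az identity create \
    --resource-group <resource-group-name> \
    --name <user-assigned-identity-name>
```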
Managed identities for Azure resources provide an automatically managed identity
Create an application and assign a user-assigned managed identity at the same time by using the following command:

```azurecli
-az spring-cloud app create \
+az spring app create \
    --resource-group <resource-group-name> \
    --name <app-name> \
    --service <service-instance-name> \
To assign user-assigned managed identity to an existing application in the Azure
Use the following command to assign one or more user-assigned managed identities to an existing app:

```azurecli
-az spring-cloud app identity assign \
+az spring app identity assign \
    --resource-group <resource-group-name> \
    --name <app-name> \
    --service <service-instance-name> \
An application can use its managed identity to get tokens to access other resour
You may need to configure the target resource to allow access from your application. For more information, see [Assign a managed identity access to a resource by using the Azure portal](../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md). For example, if you request a token to access Key Vault, be sure you have added an access policy that includes your application's identity. Otherwise, your calls to Key Vault will be rejected, even if they include the token. To learn more about which resources support Azure Active Directory tokens, see [Azure services that support Azure AD authentication](../active-directory/managed-identities-azure-resources/services-azure-active-directory-support.md)
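For example, granting the identity read access to secrets in a key vault could look like the following sketch (the vault name and the identity's object ID are placeholders you'd substitute with your own values):

```azurecli
# Allow the app's managed identity, identified by its object ID, to read secrets.
az keyvault set-policy \
    --name <key-vault-name> \
    --object-id <managed-identity-object-id> \
    --secret-permissions get list
```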
-Azure Spring Cloud shares the same endpoint for token acquisition with Azure Virtual Machines. We recommend using Java SDK or Spring Boot starters to acquire a token. For various code and script examples and guidance on important topics such as handling token expiration and HTTP errors, see [How to use managed identities for Azure resources on an Azure VM to acquire an access token](../active-directory/managed-identities-azure-resources/how-to-use-vm-token.md).
+Azure Spring Apps shares the same endpoint for token acquisition with Azure Virtual Machines. We recommend using Java SDK or Spring Boot starters to acquire a token. For various code and script examples and guidance on important topics such as handling token expiration and HTTP errors, see [How to use managed identities for Azure resources on an Azure VM to acquire an access token](../active-directory/managed-identities-azure-resources/how-to-use-vm-token.md).
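As a rough illustration only, and assuming the VM-style instance metadata endpoint mentioned above is reachable from inside your application's container, a raw token request could look like the following sketch (the resource URI targets Key Vault; in real code, prefer the Java SDK or Spring Boot starters):

```azurecli
# Hedged sketch: request an access token from the VM-style metadata endpoint.
# Run this from inside the application container, not from your local machine.
# For a user-assigned identity, you may also need to append &client_id=<client-id>.
curl -s -H "Metadata: true" \
    "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fvault.azure.net"
```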
## Remove user-assigned managed identities from an existing app
Removing user-assigned managed identities will remove the assignment between the
To remove user-assigned managed identities from an application that no longer needs them, follow these steps:
-1. Sign in to the Azure portal using an account associated with the Azure subscription that contains the Azure Spring Cloud instance.
+1. Sign in to the Azure portal using an account associated with the Azure subscription that contains the Azure Spring Apps instance.
1. Navigate to the desired application and select **Identity**.
1. Under **User assigned**, select target identities and then select **Remove**.
To remove user-assigned managed identities from an application that no longer ne
To remove user-assigned managed identities from an application that no longer needs them, use the following command:

```azurecli
-az spring-cloud app identity remove \
+az spring app identity remove \
    --resource-group <resource-group-name> \
    --name <app-name> \
    --service <service-instance-name> \
az spring-cloud app identity remove \
## Limitations
-For user-assigned managed identity limitations, see [Quotas and service plans for Azure Spring Cloud](./quotas.md).
+For user-assigned managed identity limitations, see [Quotas and service plans for Azure Spring Apps](./quotas.md).
## Next steps
spring-cloud How To Maven Deploy Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-maven-deploy-apps.md
Title: "Tutorial: Deploy Spring Boot applications using Maven"-
-description: Use Maven to deploy applications to Azure Spring Cloud.
+
+description: Use Maven to deploy applications to Azure Spring Apps.
Last updated 04/07/2022-+ # Deploy Spring Boot applications using Maven
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+ **This article applies to:** ✔️ Java ❌ C#

**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This article shows you how to use the Azure Spring Cloud Maven plugin to configure and deploy applications to Azure Spring Cloud.
+This article shows you how to use the Azure Spring Apps Maven plugin to configure and deploy applications to Azure Spring Apps.
## Prerequisites

* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An already provisioned Azure Spring Cloud instance.
+* An already provisioned Azure Spring Apps instance.
* [JDK 8 or JDK 11](/azure/developer/java/fundamentals/java-jdk-install) * [Apache Maven](https://maven.apache.org/download.cgi)
-* [Azure CLI version 2.0.67 or higher](/cli/azure/install-azure-cli) with the Azure Spring Cloud extension. You can install the extension by using the following command: `az extension add --name spring-cloud`
+* [Azure CLI version 2.0.67 or higher](/cli/azure/install-azure-cli) with the Azure Spring Apps extension. You can install the extension by using the following command: `az extension add --name spring`
-## Generate a Spring Cloud project
+## Generate a Spring project
-To create a Spring Cloud project for use in this article, use the following steps:
+To create a Spring project for use in this article, use the following steps:
-1. Navigate to [Spring Initializr](https://start.spring.io/#!type=maven-project&language=java&platformVersion=2.5.7&packaging=jar&jvmVersion=1.8&groupId=com.example&artifactId=hellospring&name=hellospring&description=Demo%20project%20for%20Spring%20Boot&packageName=com.example.hellospring&dependencies=web,cloud-eureka,actuator,cloud-config-client) to generate a sample project with the recommended dependencies for Azure Spring Cloud. This link uses the following URL to provide default settings for you.
+1. Navigate to [Spring Initializr](https://start.spring.io/#!type=maven-project&language=java&platformVersion=2.5.7&packaging=jar&jvmVersion=1.8&groupId=com.example&artifactId=hellospring&name=hellospring&description=Demo%20project%20for%20Spring%20Boot&packageName=com.example.hellospring&dependencies=web,cloud-eureka,actuator,cloud-config-client) to generate a sample project with the recommended dependencies for Azure Spring Apps. This link uses the following URL to provide default settings for you.
```url https://start.spring.io/#!type=maven-project&language=java&platformVersion=2.5.7&packaging=jar&jvmVersion=1.8&groupId=com.example&artifactId=hellospring&name=hellospring&description=Demo%20project%20for%20Spring%20Boot&packageName=com.example.hellospring&dependencies=web,cloud-eureka,actuator,cloud-config-client
To create a Spring Cloud project for use in this article, use the following step
@RequestMapping("/") public String index() {
- return "Greetings from Azure Spring Cloud!";
+ return "Greetings from Azure Spring Apps!";
} }
mvn clean package -DskipTests -Denv=cloud
Compiling the project takes several minutes. After it's completed, you should have individual JAR files for each service in their respective folders.
-## Provision an instance of Azure Spring Cloud
+## Provision an instance of Azure Spring Apps
-The following procedure creates an instance of Azure Spring Cloud using the Azure portal.
+The following procedure creates an instance of Azure Spring Apps using the Azure portal.
1. In a new tab, open the [Azure portal](https://portal.azure.com/).
-2. From the top search box, search for **Azure Spring Cloud**.
+2. From the top search box, search for **Azure Spring Apps**.
-3. Select **Azure Spring Cloud** from the results.
+3. Select **Azure Spring Apps** from the results.
- ![ASC icon start](media/spring-cloud-quickstart-launch-app-portal/find-spring-cloud-start.png)
+ :::image type="content" source="media/spring-cloud-quickstart-launch-app-portal/find-spring-cloud-start.png" alt-text="Screenshot of Azure portal showing Azure Spring Apps service in search results.":::
-4. On the Azure Spring Cloud page, select **Create**.
+4. On the Azure Spring Apps page, select **Create**.
- ![ASC icon add](media/spring-cloud-quickstart-launch-app-portal/spring-cloud-create.png)
+ :::image type="content" source="media/spring-cloud-quickstart-launch-app-portal/spring-cloud-create.png" alt-text="Screenshot of Azure portal showing Azure Spring Apps resource with Create button highlighted.":::
-5. Fill out the form on the Azure Spring Cloud **Create** page. Consider the following guidelines:
+5. Fill out the form on the Azure Spring Apps **Create** page. Consider the following guidelines:
- **Subscription**: Select the subscription you want to be billed for this resource.
- **Resource group**: Creating new resource groups for new resources is a best practice. You will use this resource group in later steps as **\<resource group name\>**.
- **Service Details/Name**: Specify the **\<service instance name\>**. The name must be between 4 and 32 characters long and can contain only lowercase letters, numbers, and hyphens. The first character of the service name must be a letter and the last character must be either a letter or a number.
- **Location**: Select the region for your service instance.
- ![ASC portal start](media/spring-cloud-quickstart-launch-app-portal/portal-start.png)
+ :::image type="content" source="media/spring-cloud-quickstart-launch-app-portal/portal-start.png" alt-text="Screenshot of Azure portal showing Azure Spring Apps Create page.":::
6. Select **Review and create**.
-## Generate configurations and deploy to the Azure Spring Cloud
+## Generate configurations and deploy to the Azure Spring Apps
To generate configurations and deploy the app, follow these steps:

1. Run the following command from the *hellospring* root folder, which contains the POM file. If you've already signed in with Azure CLI, the command will automatically pick up the credentials. Otherwise, the command will prompt you with sign-in instructions. For more information, see [Authentication](https://github.com/microsoft/azure-maven-plugins/wiki/Authentication) in the [azure-maven-plugins](https://github.com/microsoft/azure-maven-plugins) repository on GitHub.

```azurecli
- mvn com.microsoft.azure:azure-spring-cloud-maven-plugin:1.7.0:config
+ mvn com.microsoft.azure:azure-spring-apps-maven-plugin:1.10.0:config
```

You'll be asked to select:
- * **Subscription ID** - the subscription you used to create an Azure Spring Cloud instance.
- * **Service instance** - the name of your Azure Spring Cloud instance.
+ * **Subscription ID** - the subscription you used to create an Azure Spring Apps instance.
+ * **Service instance** - the name of your Azure Spring Apps instance.
* **App name** - an app name of your choice, or use the default value `artifactId`.
* **Public endpoint** - *true* to expose the app to public access; otherwise, *false*.
To generate configurations and deploy the app, follow these steps:
<plugins> <plugin> <groupId>com.microsoft.azure</groupId>
- <artifactId>azure-spring-cloud-maven-plugin</artifactId>
- <version>1.7.0</version>
+ <artifactId>azure-spring-apps-maven-plugin</artifactId>
+ <version>1.10.0</version>
<configuration> <subscriptionId>xxxxxxxxx-xxxx-xxxx-xxxxxxxxxxxx</subscriptionId> <clusterName>v-spr-cld</clusterName>
To generate configurations and deploy the app, follow these steps:
1. Deploy the app using the following command.

```azurecli
- mvn azure-spring-cloud:deploy
+ mvn azure-spring-apps:deploy
```

## Verify the services
After deployment has completed, you can access the app at `https://<service inst
## Clean up resources
-If you plan to continue working with the example application, you might want to leave the resources in place. When no longer needed, delete the resource group containing your Azure Spring Cloud instance. To delete the resource group by using Azure CLI, use the following commands:
+If you plan to continue working with the example application, you might want to leave the resources in place. When no longer needed, delete the resource group containing your Azure Spring Apps instance. To delete the resource group by using Azure CLI, use the following commands:
```azurecli echo "Enter the Resource Group name:" &&
echo "Press [ENTER] to continue ..."
## Next steps
-* [Prepare Spring application for Azure Spring Cloud](how-to-prepare-app-deployment.md)
-* [Learn more about Azure Spring Cloud Maven Plugin](https://github.com/microsoft/azure-maven-plugins/wiki/Azure-Spring-Cloud)
+* [Prepare Spring application for Azure Spring Apps](how-to-prepare-app-deployment.md)
+* [Learn more about Azure Spring Apps Maven Plugin](https://github.com/microsoft/azure-maven-plugins/wiki/Azure-Spring-Cloud)
spring-cloud How To Migrate Standard Tier To Enterprise Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-migrate-standard-tier-to-enterprise-tier.md
Title: How to migrate an Azure Spring Cloud Basic or Standard tier instance to Enterprise tier-
-description: How to migrate an Azure Spring Cloud Basic or Standard tier instance to Enterprise tier
+ Title: How to migrate an Azure Spring Apps Basic or Standard tier instance to Enterprise tier
+
+description: How to migrate an Azure Spring Apps Basic or Standard tier instance to Enterprise tier
Last updated 05/09/2022-+
-# Migrate an Azure Spring Cloud Basic or Standard tier instance to Enterprise tier
+# Migrate an Azure Spring Apps Basic or Standard tier instance to Enterprise tier
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
This article shows you how to migrate an existing application in Basic or Standa
## Prerequisites -- An already provisioned Azure Spring Cloud Enterprise tier service instance with Spring Cloud Gateway for Tanzu enabled. For more information, see [Quickstart: Provision an Azure Spring Cloud service instance using Enterprise tier](./quickstart-provision-service-instance-enterprise.md). However, you won't need to change any code in your applications.
+- An already provisioned Azure Spring Apps Enterprise tier service instance with Spring Cloud Gateway for Tanzu enabled. For more information, see [Quickstart: Provision an Azure Spring Apps service instance using Enterprise tier](./quickstart-provision-service-instance-enterprise.md). However, you won't need to change any code in your applications.
- [Azure CLI version 2.0.67 or later](/cli/azure/install-azure-cli).

## Using Application Configuration Service for configuration
Use the following steps to create and configure an application using Spring Clou
### Create an app for Spring Cloud Gateway to route traffic to
-1. Create an app which Spring Cloud Gateway for Tanzu will route traffic to by following the instructions in [Quickstart: Build and deploy apps to Azure Spring Cloud using the Enterprise tier](quickstart-deploy-apps-enterprise.md).
+1. Create an app that Spring Cloud Gateway for Tanzu will route traffic to by following the instructions in [Quickstart: Build and deploy apps to Azure Spring Apps using the Enterprise tier](quickstart-deploy-apps-enterprise.md).
1. Assign a public endpoint to the gateway to access it.
For more information, see [Use Spring Cloud Gateway for Tanzu](./how-to-use-ente
## Next steps -- [Azure Spring Cloud](index.yml)
+- [Azure Spring Apps](index.yml)
spring-cloud How To Move Across Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-move-across-regions.md
Title: How to move an Azure Spring Cloud service instance to another region
-description: Describes how to move an Azure Spring Cloud service instance to another region
+ Title: How to move an Azure Spring Apps service instance to another region
+description: Describes how to move an Azure Spring Apps service instance to another region
Last updated 01/27/2022-+
-# Move an Azure Spring Cloud service instance to another region
+# Move an Azure Spring Apps service instance to another region
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This article shows you how to move your Azure Spring Cloud service instance to another region. Moving your instance is useful, for example, as part of a disaster recovery plan or to create a duplicate testing environment.
+This article shows you how to move your Azure Spring Apps service instance to another region. Moving your instance is useful, for example, as part of a disaster recovery plan or to create a duplicate testing environment.
-You can't move an Azure Spring Cloud instance from one region to another directly, but you can use an Azure Resource Manager template (ARM template) to deploy to a new region. For more information about using Azure Resource Manager and templates, see [Quickstart: Create and deploy Azure Resource Manager templates by using the Azure portal](../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md).
+You can't move an Azure Spring Apps instance from one region to another directly, but you can use an Azure Resource Manager template (ARM template) to deploy to a new region. For more information about using Azure Resource Manager and templates, see [Quickstart: Create and deploy Azure Resource Manager templates by using the Azure portal](../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md).
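At a high level, the flow can also be scripted with the Azure CLI, along the lines of the following sketch (it assumes you export the template to *template.json*, modify it as described in the sections below, and then deploy it into a new resource group in the target region):

```azurecli
# Export the existing instance's template, then deploy the modified copy
# into a resource group created in the target region.
az group export --resource-group <source-resource-group> > template.json
az group create --name <target-resource-group> --location <target-region>
az deployment group create \
    --resource-group <target-resource-group> \
    --template-file template.json
```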
Before you move your service instance, you should be aware of the following limitations: - Different feature sets are supported by different pricing tiers (SKUs). If you change the SKU, you may need to change the template to include only features supported by the target SKU.-- You might not be able to move all sub-resources in Azure Spring Cloud using the template. Your move may require extra setup after the template is deployed. For more information, see the [Configure the new Azure Spring Cloud service instance](#configure-the-new-azure-spring-cloud-service-instance) section.-- When you move a virtual network (VNet) instance (see [Deploy Azure Spring Cloud in a virtual network](how-to-deploy-in-azure-virtual-network.md)), you'll need to create new network resources.
+- You might not be able to move all sub-resources in Azure Spring Apps using the template. Your move may require extra setup after the template is deployed. For more information, see the [Configure the new Azure Spring Apps service instance](#configure-the-new-azure-spring-apps-service-instance) section.
+- When you move a virtual network (VNet) instance (see [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md)), you'll need to create new network resources.
## Prerequisites -- A running Azure Spring Cloud instance.-- A target region that supports Azure Spring Cloud and its related features.
+- A running Azure Spring Apps instance.
+- A target region that supports Azure Spring Apps and its related features.
- [Azure CLI](/cli/azure/install-azure-cli) if you aren't using the Azure portal. ## Export the template
Before you move your service instance, you should be aware of the following limi
First, use the following steps to export the template: 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Select **All resources** in the left menu, then select your Azure Spring Cloud instance.
+1. Select **All resources** in the left menu, then select your Azure Spring Apps instance.
1. Under **Automation**, select **Export template**. 1. Select **Download** on the **Export template** pane. 1. Locate the *.zip* file, unzip it, and get the *template.json* file. This file contains the resource template.
az group export --resource-group <resource-group> --resource-ids <resource-id>
## Modify the template
-Next, use the following steps to modify the *template.json* file. In the examples shown here, the new Azure Spring Cloud instance name is *new-service-name*, and the previous instance name is *old-service-name*.
+Next, use the following steps to modify the *template.json* file. In the examples shown here, the new Azure Spring Apps instance name is *new-service-name*, and the previous instance name is *old-service-name*.
1. Change all `name` instances in the template from *old-service-name* to *new-service-name*, as shown in the following example:
Next, use the following steps to modify the *template.json* file. In the example
} ```
-1. If any custom domain resources are configured, you need to create the CNAME records as described in [Tutorial: Map an existing custom domain to Azure Spring Cloud](tutorial-custom-domain.md). Be sure the record name is expected for the new service name.
+1. If any custom domain resources are configured, you need to create the CNAME records as described in [Tutorial: Map an existing custom domain to Azure Spring Apps](tutorial-custom-domain.md). Be sure the record name is expected for the new service name.
1. Change all `relativePath` instances in the template `properties` for all app resources to `<default>`, as shown in the following example:
Next, use the following steps to modify the *template.json* file. In the example
} ```
- After the app is created, it uses a default banner application. You'LL need to deploy the JAR files again using the Azure CLI. For more information, see the [Configure the new Azure Spring Cloud service instance](#configure-the-new-azure-spring-cloud-service-instance) section below.
+ After the app is created, it uses a default banner application. You'll need to deploy the JAR files again using the Azure CLI. For more information, see the [Configure the new Azure Spring Apps service instance](#configure-the-new-azure-spring-apps-service-instance) section below.
1. If service binding was used and you want to import it to the new service instance, add the `key` property for the target bound resource. In the following example, a bound MySQL database would be included:
Wait until the template has deployed successfully. If the deployment fails, view
-## Configure the new Azure Spring Cloud service instance
+## Configure the new Azure Spring Apps service instance
-Some features aren't exported to the template, or can't be imported with a template. You must manually set up some Azure Spring Cloud items on the new instance after the template deployment completes successfully. The following guidelines describe these requirements:
+Some features aren't exported to the template, or can't be imported with a template. You must manually set up some Azure Spring Apps items on the new instance after the template deployment completes successfully. The following guidelines describe these requirements:
-- The JAR files for the previous service aren't deployed directly to the new service instance. To deploy all apps, follow the instructions in [Quickstart: Build and deploy apps to Azure Spring Cloud](quickstart-deploy-apps.md). If there's no active deployment configured automatically, you must configure a production deployment. For more information, see [Set up a staging environment in Azure Spring Cloud](how-to-staging-environment.md).
+- The JAR files for the previous service aren't deployed directly to the new service instance. To deploy all apps, follow the instructions in [Quickstart: Build and deploy apps to Azure Spring Apps](quickstart-deploy-apps.md). If there's no active deployment configured automatically, you must configure a production deployment. For more information, see [Set up a staging environment in Azure Spring Apps](how-to-staging-environment.md).
- Config Server won't be imported automatically. To set up Config Server on your new instance, see [Set up a Spring Cloud Config Server instance for your service](how-to-config-server.md).-- Managed identity will be created automatically for the new service instance, but the object ID will be different from the previous instance. For managed identity to work in the new service instance, follow the instructions in [How to enable system-assigned managed identity for applications in Azure Spring Cloud](how-to-enable-system-assigned-managed-identity.md).-- For Monitoring -> Metrics, see [Metrics for Azure Spring Cloud](concept-metrics.md). To avoid mixing the data, we recommend that you create a new Log Analytics instance to collect the new data. You should also create a new instance for other monitoring configurations.
+- Managed identity will be created automatically for the new service instance, but the object ID will be different from the previous instance. For managed identity to work in the new service instance, follow the instructions in [How to enable system-assigned managed identity for applications in Azure Spring Apps](how-to-enable-system-assigned-managed-identity.md).
+- For Monitoring -> Metrics, see [Metrics for Azure Spring Apps](concept-metrics.md). To avoid mixing the data, we recommend that you create a new Log Analytics instance to collect the new data. You should also create a new instance for other monitoring configurations.
- For Monitoring -> Diagnostic settings and logs, see [Analyze logs and metrics with diagnostics settings](diagnostic-services.md).-- For Monitoring -> Application Insights, see [Application Insights Java In-Process Agent in Azure Spring Cloud](how-to-application-insights.md).
+- For Monitoring -> Application Insights, see [Application Insights Java In-Process Agent in Azure Spring Apps](how-to-application-insights.md).
## Next steps -- [Quickstart: Build and deploy apps to Azure Spring Cloud](quickstart-deploy-apps.md)-- [Quickstart: Set up Azure Spring Cloud Config Server](quickstart-setup-config-server.md)
+- [Quickstart: Build and deploy apps to Azure Spring Apps](quickstart-deploy-apps.md)
+- [Quickstart: Set up Azure Spring Apps Config Server](quickstart-setup-config-server.md)
- [Quickstart: Set up a Log Analytics workspace](quickstart-setup-log-analytics.md)
spring-cloud How To New Relic Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-new-relic-monitor.md
Title: "How to monitor Spring Boot apps using New Relic Java agent"-+ description: Learn how to monitor Spring Boot applications using the New Relic Java agent. Last updated 04/07/2021-+ ms.devlang: azurecli # How to monitor Spring Boot apps using New Relic Java agent (Preview)
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+ **This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This article shows you how to monitor of Spring Boot applications in Azure Spring Cloud with the New Relic Java agent.
+This article shows you how to monitor Spring Boot applications in Azure Spring Apps with the New Relic Java agent.
With the New Relic Java agent, you can:
With the New Relic Java agent, you can:
* Configure the New Relic Java agent using environment variables. * Check all monitoring data from the New Relic dashboard.
-The following video describes how to activate and monitor Spring Boot applications in Azure Spring Cloud using New Relic One.
+The following video describes how to activate and monitor Spring Boot applications in Azure Spring Apps using New Relic One.
<br>
The following video describes how to activate and monitor Spring Boot applicatio
Use the following procedure to access the agent:
-1. Create an instance of Azure Spring Cloud.
+1. Create an instance of Azure Spring Apps.
2. Create an application.

   ```azurecli
- az spring-cloud app create --name "appName" --is-public true \
+ az spring app create --name "appName" --is-public true \
-s "resourceName" -g "resourceGroupName" ``` 3. Create a deployment with the New Relic agent and environment variables. ```azurecli
- az spring-cloud app deploy --name "appName" --jar-path app.jar \
+ az spring app deploy --name "appName" --jar-path app.jar \
-s "resourceName" -g "resourceGroupName" \ --jvm-options="-javaagent:/opt/agents/newrelic/java/newrelic-agent.jar" \ --env NEW_RELIC_APP_NAME=appName NEW_RELIC_LICENSE_KEY=newRelicLicenseKey ```
-Azure Spring Cloud pre-installs the New Relic Java agent to */opt/agents/newrelic/java/newrelic-agent.jar*. Customers can activate the agent from applications' **JVM options**, as well as configure the agent using the [New Relic Java agent environment variables](https://docs.newrelic.com/docs/agents/java-agent/configuration/java-agent-configuration-config-file/#Environment_Variables).
+Azure Spring Apps pre-installs the New Relic Java agent to */opt/agents/newrelic/java/newrelic-agent.jar*. Customers can activate the agent from applications' **JVM options**, as well as configure the agent using the [New Relic Java agent environment variables](https://docs.newrelic.com/docs/agents/java-agent/configuration/java-agent-configuration-config-file/#Environment_Variables).
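If the app is already deployed, you may be able to activate the agent without redeploying the JAR by updating the JVM options and environment variables on the existing app, as in the following sketch (the parameter names assume the standard `az spring app update` options; the app, service, and resource group names are placeholders):

```azurecli
# Activate the pre-installed New Relic agent on an existing app by updating
# its JVM options and environment variables.
az spring app update --name "appName" \
    -s "resourceName" -g "resourceGroupName" \
    --jvm-options="-javaagent:/opt/agents/newrelic/java/newrelic-agent.jar" \
    --env NEW_RELIC_APP_NAME=appName NEW_RELIC_LICENSE_KEY=newRelicLicenseKey
```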
## Portal
You can also run a provisioning automation pipeline using Terraform or an Azure
### Automate provisioning using Terraform
-To configure the environment variables in a Terraform template, add the following code to the template, replacing the *\<...>* placeholders with your own values. For more information, see [Manages an Active Azure Spring Cloud Deployment](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/spring_cloud_active_deployment).
+To configure the environment variables in a Terraform template, add the following code to the template, replacing the *\<...>* placeholders with your own values. For more information, see [Manages an Active Azure Spring Apps Deployment](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/spring_cloud_active_deployment).
```terraform resource "azurerm_spring_cloud_java_deployment" "example" {
To configure the environment variables in an ARM template, add the following cod
## View New Relic Java Agent logs
-By default, Azure Spring Cloud will print the logs of the New Relic Java agent to `STDOUT`. The logs will be mixed with the application logs. You can find the explicit agent version from the application logs.
+By default, Azure Spring Apps will print the logs of the New Relic Java agent to `STDOUT`. The logs will be mixed with the application logs. You can find the explicit agent version from the application logs.
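For example, you could scan the streamed application logs for agent entries with something like the following sketch (the filter pattern is only illustrative):

```azurecli
# Stream the app logs and filter for New Relic entries, such as the agent
# version reported at startup.
az spring app logs --name "appName" \
    -s "resourceName" -g "resourceGroupName" | grep -i "new relic"
```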
You can also get the logs of the New Relic agent from the following locations:
-* Azure Spring Cloud logs
-* Azure Spring Cloud Application Insights
-* Azure Spring Cloud LogStream
+* Azure Spring Apps logs
+* Azure Spring Apps Application Insights
+* Azure Spring Apps LogStream
You can leverage some environment variables provided by New Relic to configure the logging of the agent, such as `NEW_RELIC_LOG_LEVEL` to control the level of logs. For more information, see [New Relic Environment Variables](https://docs.newrelic.com/docs/agents/java-agent/configuration/java-agent-configuration-config-file/#Environment_Variables).

> [!CAUTION]
-> We strongly recommend that you *do not* override the logging default behavior provided by Azure Spring Cloud for New Relic. If you do, the logging scenarios in above scenarios will be blocked, and the log file(s) may be lost. For example, you should not pass the following environment variables to your applications. Log file(s) may be lost after restart or redeployment of application(s).
+> We strongly recommend that you *do not* override the default logging behavior provided by Azure Spring Apps for New Relic. If you do, the logging scenarios described above will be blocked, and the log file(s) may be lost. For example, you should not pass the following environment variables to your applications. Log file(s) may be lost after restart or redeployment of application(s).
> > * NEW_RELIC_LOG > * NEW_RELIC_LOG_FILE_PATH
The New Relic Java agent will update/upgrade the JDK regularly. The agent update
## Vnet Injection Instance Outbound Traffic Configuration
-For a vnet injection instance of Azure Spring Cloud, you need to make sure the outbound traffic is configured correctly for the New Relic Java agent. For more information, see [Networks of New Relic](https://docs.newrelic.com/docs/using-new-relic/cross-product-functions/install-configure/networks/#agents).
+For a vnet injection instance of Azure Spring Apps, you need to make sure the outbound traffic is configured correctly for the New Relic Java agent. For more information, see [Networks of New Relic](https://docs.newrelic.com/docs/using-new-relic/cross-product-functions/install-configure/networks/#agents).
## Next steps
spring-cloud How To Outbound Public Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-outbound-public-ip.md
Title: How - to identify outbound public IP addresses in Azure Spring Cloud
+ Title: How to identify outbound public IP addresses in Azure Spring Apps
description: How to view the static outbound public IP addresses to communicate with external resources, such as Database, Storage, Key Vault, etc. Last updated 09/17/2020-+
-# How to identify outbound public IP addresses in Azure Spring Cloud
+# How to identify outbound public IP addresses in Azure Spring Apps
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This article explains how to view static outbound public IP addresses of applications in Azure Spring Cloud. Public IPs are used to communicate with external resources, such as databases, storage, and key vaults.
+This article explains how to view static outbound public IP addresses of applications in Azure Spring Apps. Public IPs are used to communicate with external resources, such as databases, storage, and key vaults.
> [!IMPORTANT]
-> If the Azure Spring Cloud instance is deployed in your own virtual network, you can leverage either Network Security Group or Azure Firewall to fully control the egress traffic.
+> If the Azure Spring Apps instance is deployed in your own virtual network, you can leverage either Network Security Group or Azure Firewall to fully control the egress traffic.
-## How IP addresses work in Azure Spring Cloud
+## How IP addresses work in Azure Spring Apps
-An Azure Spring Cloud service has one or more outbound public IP addresses. The number of outbound public IP addresses may vary according to the tiers and other factors.
+An Azure Spring Apps service has one or more outbound public IP addresses. The number of outbound public IP addresses may vary according to the tiers and other factors.
The outbound public IP addresses are usually constant and remain the same, but there are exceptions.

## When outbound IPs change
-Each Azure Spring Cloud instance has a set number of outbound public IP addresses at any given time. Any outbound connection from the applications, such as to a back-end database, uses one of the outbound public IP addresses as the origin IP address. The IP address is selected randomly at runtime, so your back-end service must open its firewall to all the outbound IP addresses.
+Each Azure Spring Apps instance has a set number of outbound public IP addresses at any given time. Any outbound connection from the applications, such as to a back-end database, uses one of the outbound public IP addresses as the origin IP address. The IP address is selected randomly at runtime, so your back-end service must open its firewall to all the outbound IP addresses.
The number of outbound public IPs changes when you perform one of the following actions: -- Upgrade your Azure Spring Cloud instance between tiers.
+- Upgrade your Azure Spring Apps instance between tiers.
- Raise a support ticket for more outbound public IPs for business needs. ## Find outbound IPs
To find the outbound public IP addresses currently used by your service instance
You can find the same information by running the following command in the Cloud Shell:

```azurecli
-az spring-cloud show --resource-group <group_name> --name <service_name> --query properties.networkProfile.outboundIps.publicIps --output tsv
+az spring show --resource-group <group_name> --name <service_name> --query properties.networkProfile.outboundIps.publicIps --output tsv
```

## Next steps

* [Learn more about managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md)
-* [Learn more about key vault in Azure Spring Cloud](./tutorial-managed-identities-key-vault.md)
+* [Learn more about key vault in Azure Spring Apps](./tutorial-managed-identities-key-vault.md)
spring-cloud How To Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-permissions.md
Title: "Use permissions in Azure Spring Cloud"
-description: This article shows you how to create custom roles that delegate permissions to Azure Spring Cloud resources.
+ Title: "Use permissions in Azure Spring Apps"
+description: This article shows you how to create custom roles that delegate permissions to Azure Spring Apps resources.
Last updated 09/04/2020-+
-# How to use permissions in Azure Spring Cloud
+# How to use permissions in Azure Spring Apps
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This article shows you how to create custom roles that delegate permissions to Azure Spring Cloud resources. Custom roles extend [Azure built-in roles](../role-based-access-control/built-in-roles.md) with various stock permissions.
+This article shows you how to create custom roles that delegate permissions to Azure Spring Apps resources. Custom roles extend [Azure built-in roles](../role-based-access-control/built-in-roles.md) with various stock permissions.
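As an alternative to the portal steps that follow, a custom role can also be created from a JSON definition with the Azure CLI. The following is a hedged sketch only; the role name, description, and the specific `Actions` entries are illustrative placeholders that you'd replace with the operations you select in the procedures below:

```azurecli
# Create a custom role from an inline JSON definition (illustrative actions only).
az role definition create --role-definition '{
  "Name": "Azure Spring Apps Developer (example)",
  "IsCustom": true,
  "Description": "Example custom role for Azure Spring Apps.",
  "Actions": [
    "Microsoft.AppPlatform/Spring/read",
    "Microsoft.AppPlatform/Spring/apps/read",
    "Microsoft.AppPlatform/Spring/apps/deployments/read"
  ],
  "AssignableScopes": ["/subscriptions/<subscription-id>"]
}'
```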
We'll implement the following custom roles.
We'll implement the following custom roles.
* **DevOps Engineer role**:
- * Create, read, update, and delete everything in Azure Spring Cloud
+ * Create, read, update, and delete everything in Azure Spring Apps
* **Ops - Site Reliability Engineering role**:
We'll implement the following custom roles.
* **Azure Pipelines / Jenkins / GitHub Actions role**: * Perform create, read, update, and delete operations
- * Use Terraform or ARM templates to create and configure everything in Azure Spring Cloud and apps within a service instance: Azure Pipelines, Jenkins, and GitHub Actions
+ * Use Terraform or ARM templates to create and configure everything in Azure Spring Apps and apps within a service instance: Azure Pipelines, Jenkins, and GitHub Actions
## Define the Developer role
The Developer role includes permissions to restart apps and see their log stream
![Screenshot that shows the Add permissions button.](media/spring-cloud-permissions/add-permissions.png)
-7. In the search box, search for **Microsoft.app**. Select **Microsoft Azure Spring Cloud**:
+7. In the search box, search for **Microsoft.app**. Select **Microsoft Azure Spring Apps**:
![Screenshot that shows the results of searching for Microsoft.app.](media/spring-cloud-permissions/spring-cloud-permissions.png)
The Developer role includes permissions to restart apps and see their log stream
Under **Microsoft.AppPlatform/Spring**, select:
- * **Write : Create or Update Azure Spring Cloud service instance**
- * **Read : Get Azure Spring Cloud service instance**
- * **Other : List Azure Spring Cloud service instance test keys**
+ * **Write : Create or Update Azure Spring Apps service instance**
+ * **Read : Get Azure Spring Apps service instance**
+ * **Other : List Azure Spring Apps service instance test keys**
(For Enterprise tier only) Under **Microsoft.AppPlatform/Spring/buildServices**, select:
- * **Read : Read Microsoft Azure Spring Cloud Build Services**
- * **Other : Get an Upload URL in Azure Spring Cloud**
+ * **Read : Read Microsoft Azure Spring Apps Build Services**
+ * **Other : Get an Upload URL in Azure Spring Apps**
(For Enterprise tier only) Under **Microsoft.AppPlatform/Spring/buildServices/builds**, select:
- * **Read : Read Microsoft Azure Spring Cloud Builds**
- * **Write : Write Microsoft Azure Spring Cloud Builds**
+ * **Read : Read Microsoft Azure Spring Apps Builds**
+ * **Write : Write Microsoft Azure Spring Apps Builds**
(For Enterprise tier only) Under **Microsoft.AppPlatform/Spring/buildServices/builds/results**, select:
- * **Read : Read Microsoft Azure Spring Cloud Build Results**
- * **Other : Get an Log File URL in Azure Spring Cloud**
+ * **Read : Read Microsoft Azure Spring Apps Build Results**
+ * **Other : Get an Log File URL in Azure Spring Apps**
(For Enterprise tier only) Under **Microsoft.AppPlatform/Spring/buildServices/builders**, select:
- * **Read : Read Microsoft Azure Spring Cloud Builders**
- * **Write : Write Microsoft Azure Spring Cloud Builders**
- * **Delete : Delete Microsoft Azure Spring Cloud Builders**
+ * **Read : Read Microsoft Azure Spring Apps Builders**
+ * **Write : Write Microsoft Azure Spring Apps Builders**
+ * **Delete : Delete Microsoft Azure Spring Apps Builders**
(For Enterprise tier only) Under **Microsoft.AppPlatform/Spring/buildServices/builders/buildpackBindings**, select:
- * **Read : Read Microsoft Azure Spring Cloud Builder BuildpackBinding**
- * **Write : Write Microsoft Azure Spring Cloud Builder BuildpackBinding**
- * **Delete : Delete Microsoft Azure Spring Cloud Builder BuildpackBinding**
+ * **Read : Read Microsoft Azure Spring Apps Builder BuildpackBinding**
+ * **Write : Write Microsoft Azure Spring Apps Builder BuildpackBinding**
+ * **Delete : Delete Microsoft Azure Spring Apps Builder BuildpackBinding**
(For Enterprise tier only) Under **Microsoft.AppPlatform/Spring/buildServices/supportedBuildpacks**, select:
- * **Read : Read Microsoft Azure Spring Cloud Supported Buildpacks**
+ * **Read : Read Microsoft Azure Spring Apps Supported Buildpacks**
(For Enterprise tier only) Under **Microsoft.AppPlatform/Spring/buildServices/supportedStacks**, select:
- * **Read : Read Microsoft Azure Spring Cloud Supported Stacks**
+ * **Read : Read Microsoft Azure Spring Apps Supported Stacks**
Under **Microsoft.AppPlatform/Spring/apps**, select:
- * **Read : Read Microsoft Azure Spring Cloud application**
- * **Other : Get Microsoft Azure Spring Cloud application resource upload URL**
+ * **Read : Read Microsoft Azure Spring Apps application**
+ * **Other : Get Microsoft Azure Spring Apps application resource upload URL**
Under **Microsoft.AppPlatform/Spring/apps/bindings**, select:
- * **Read : Read Microsoft Azure Spring Cloud application binding**
+ * **Read : Read Microsoft Azure Spring Apps application binding**
Under **Microsoft.AppPlatform/Spring/apps/deployments**, select:
- * **Write : Write Microsoft Azure Spring Cloud application deployment**
- * **Read : Read Microsoft Azure Spring Cloud application deployment**
- * **Other : Start Microsoft Azure Spring Cloud application deployment**
- * **Other : Stop Microsoft Azure Spring Cloud application deployment**
- * **Other : Restart Microsoft Azure Spring Cloud application deployment**
- * **Other : Get Microsoft Azure Spring Cloud application deployment log file URL**
+ * **Write : Write Microsoft Azure Spring Apps application deployment**
+ * **Read : Read Microsoft Azure Spring Apps application deployment**
+ * **Other : Start Microsoft Azure Spring Apps application deployment**
+ * **Other : Stop Microsoft Azure Spring Apps application deployment**
+ * **Other : Restart Microsoft Azure Spring Apps application deployment**
+ * **Other : Get Microsoft Azure Spring Apps application deployment log file URL**
Under **Microsoft.AppPlatform/Spring/apps/domains**, select:
- * **Read : Read Microsoft Azure Spring Cloud application custom domain**
+ * **Read : Read Microsoft Azure Spring Apps application custom domain**
Under **Microsoft.AppPlatform/Spring/certificates**, select:
- * **Read : Read Microsoft Azure Spring Cloud certificate**
+ * **Read : Read Microsoft Azure Spring Apps certificate**
Under **Microsoft.AppPlatform/locations/operationResults/Spring**, select:
The Developer role includes permissions to restart apps and see their log stream
## Define the DevOps Engineer role
-This procedure defines a role that has permissions to deploy, test, and restart Azure Spring Cloud apps.
+This procedure defines a role that has permissions to deploy, test, and restart Azure Spring Apps apps.
### [Portal](#tab/Azure-portal)
This procedure defines a role that has permissions to deploy, test, and restart
Under **Microsoft.AppPlatform/Spring**, select:
- * **Write : Create or Update Azure Spring Cloud service instance**
- * **Delete : Delete Azure Spring Cloud service instance**
- * **Read : Get Azure Spring Cloud service instance**
- * **Other : Enable Azure Spring Cloud service instance test endpoint**
- * **Other : Disable Azure Spring Cloud service instance test endpoint**
- * **Other : List Azure Spring Cloud service instance test keys**
- * **Other : Regenerate Azure Spring Cloud service instance test key**
+ * **Write : Create or Update Azure Spring Apps service instance**
+ * **Delete : Delete Azure Spring Apps service instance**
+ * **Read : Get Azure Spring Apps service instance**
+ * **Other : Enable Azure Spring Apps service instance test endpoint**
+ * **Other : Disable Azure Spring Apps service instance test endpoint**
+ * **Other : List Azure Spring Apps service instance test keys**
+ * **Other : Regenerate Azure Spring Apps service instance test key**
(For Enterprise tier only) Under **Microsoft.AppPlatform/Spring/buildServices**, select:
- * **Read : Read Microsoft Azure Spring Cloud Build Services**
- * **Other : Get an Upload URL in Azure Spring Cloud**
+ * **Read : Read Microsoft Azure Spring Apps Build Services**
+ * **Other : Get an Upload URL in Azure Spring Apps**
(For Enterprise tier only) Under **Microsoft.AppPlatform/Spring/buildServices/agentPools**, select:
- * **Read : Read Microsoft Azure Spring Cloud Agent Pools**
- * **Write : Write Microsoft Azure Spring Cloud Agent Pools**
+ * **Read : Read Microsoft Azure Spring Apps Agent Pools**
+ * **Write : Write Microsoft Azure Spring Apps Agent Pools**
(For Enterprise tier only) Under **Microsoft.AppPlatform/Spring/buildServices/builds**, select:
- * **Read : Read Microsoft Azure Spring Cloud Builds**
- * **Write : Write Microsoft Azure Spring Cloud Builds**
+ * **Read : Read Microsoft Azure Spring Apps Builds**
+ * **Write : Write Microsoft Azure Spring Apps Builds**
(For Enterprise tier only) Under **Microsoft.AppPlatform/Spring/buildServices/builds/results**, select:
- * **Read : Read Microsoft Azure Spring Cloud Build Results**
- * **Other : Get an Log File URL in Azure Spring Cloud**
+ * **Read : Read Microsoft Azure Spring Apps Build Results**
+ * **Other : Get an Log File URL in Azure Spring Apps**
(For Enterprise tier only) Under **Microsoft.AppPlatform/Spring/buildServices/builders**, select:
- * **Read : Read Microsoft Azure Spring Cloud Builders**
- * **Write : Write Microsoft Azure Spring Cloud Builders**
- * **Delete : Delete Microsoft Azure Spring Cloud Builders**
+ * **Read : Read Microsoft Azure Spring Apps Builders**
+ * **Write : Write Microsoft Azure Spring Apps Builders**
+ * **Delete : Delete Microsoft Azure Spring Apps Builders**
(For Enterprise tier only) Under **Microsoft.AppPlatform/Spring/buildServices/builders/buildpackBindings**, select:
- * **Read : Read Microsoft Azure Spring Cloud Builder BuildpackBinding**
- * **Write : Write Microsoft Azure Spring Cloud Builder BuildpackBinding**
- * **Delete : Delete Microsoft Azure Spring Cloud Builder BuildpackBinding**
+ * **Read : Read Microsoft Azure Spring Apps Builder BuildpackBinding**
+ * **Write : Write Microsoft Azure Spring Apps Builder BuildpackBinding**
+ * **Delete : Delete Microsoft Azure Spring Apps Builder BuildpackBinding**
(For Enterprise tier only) Under **Microsoft.AppPlatform/Spring/buildServices/supportedBuildpacks**, select:
- * **Read : Read Microsoft Azure Spring Cloud Supported Buildpacks**
+ * **Read : Read Microsoft Azure Spring Apps Supported Buildpacks**
(For Enterprise tier only) Under **Microsoft.AppPlatform/Spring/buildServices/supportedStacks**, select:
- * **Read : Read Microsoft Azure Spring Cloud Supported Stacks**
+ * **Read : Read Microsoft Azure Spring Apps Supported Stacks**
Under **Microsoft.AppPlatform/Spring/apps**, select:
- * **Write : Write Microsoft Azure Spring Cloud application**
- * **Delete : Delete Microsoft Azure Spring Cloud application**
- * **Read : Read Microsoft Azure Spring Cloud application**
- * **Other : Get Microsoft Azure Spring Cloud application resource upload URL**
- * **Other : Validate Microsoft Azure Spring Cloud application custom domain**
+ * **Write : Write Microsoft Azure Spring Apps application**
+ * **Delete : Delete Microsoft Azure Spring Apps application**
+ * **Read : Read Microsoft Azure Spring Apps application**
+ * **Other : Get Microsoft Azure Spring Apps application resource upload URL**
+ * **Other : Validate Microsoft Azure Spring Apps application custom domain**
Under **Microsoft.AppPlatform/Spring/apps/bindings**, select:
- * **Write : Write Microsoft Azure Spring Cloud application binding**
- * **Delete : Delete Microsoft Azure Spring Cloud application binding**
- * **Read : Read Microsoft Azure Spring Cloud application binding**
+ * **Write : Write Microsoft Azure Spring Apps application binding**
+ * **Delete : Delete Microsoft Azure Spring Apps application binding**
+ * **Read : Read Microsoft Azure Spring Apps application binding**
Under **Microsoft.AppPlatform/Spring/apps/deployments**, select:
- * **Write : Write Microsoft Azure Spring Cloud application deployment**
- * **Delete : Delete Azure Spring Cloud application deployment**
- * **Read : Read Microsoft Azure Spring Cloud application deployment**
- * **Other : Start Microsoft Azure Spring Cloud application deployment**
- * **Other : Stop Microsoft Azure Spring Cloud application deployment**
- * **Other : Restart Microsoft Azure Spring Cloud application deployment**
- * **Other : Get Microsoft Azure Spring Cloud application deployment log file URL**
+ * **Write : Write Microsoft Azure Spring Apps application deployment**
+ * **Delete : Delete Azure Spring Apps application deployment**
+ * **Read : Read Microsoft Azure Spring Apps application deployment**
+ * **Other : Start Microsoft Azure Spring Apps application deployment**
+ * **Other : Stop Microsoft Azure Spring Apps application deployment**
+ * **Other : Restart Microsoft Azure Spring Apps application deployment**
+ * **Other : Get Microsoft Azure Spring Apps application deployment log file URL**
Under **Microsoft.AppPlatform/Spring/apps/deployments/skus**, select:
This procedure defines a role that has permissions to deploy, test, and restart
## Define the Ops - Site Reliability Engineering role
-This procedure defines a role that has permissions to deploy, test, and restart Azure Spring Cloud apps.
+This procedure defines a role that has permissions to deploy, test, and restart Azure Spring Apps apps.
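As an alternative to the portal steps that follow, a custom role of a similar shape can be sketched as a JSON definition for the CLI. The role name and the `Microsoft.AppPlatform` action strings below are assumptions that mirror the portal selections, so verify them with `az provider operation show --namespace Microsoft.AppPlatform` before relying on them:

```azurecli
# Sketch only: the action names and scope are assumptions; verify them before use.
az role definition create --role-definition '{
    "Name": "Ops - Site Reliability Engineering (custom)",
    "Description": "Test and restart Azure Spring Apps apps.",
    "Actions": [
        "Microsoft.AppPlatform/Spring/read",
        "Microsoft.AppPlatform/Spring/listTestKeys/action",
        "Microsoft.AppPlatform/Spring/apps/read",
        "Microsoft.AppPlatform/Spring/apps/deployments/read",
        "Microsoft.AppPlatform/Spring/apps/deployments/start/action",
        "Microsoft.AppPlatform/Spring/apps/deployments/stop/action",
        "Microsoft.AppPlatform/Spring/apps/deployments/restart/action"
    ],
    "AssignableScopes": ["/subscriptions/<subscription-id>"]
}'
```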
### [Portal](#tab/Azure-portal)
This procedure defines a role that has permissions to deploy, test, and restart
Under **Microsoft.AppPlatform/Spring**, select:
- * **Read : Get Azure Spring Cloud service instance**
- * **Other : List Azure Spring Cloud service instance test keys**
+ * **Read : Get Azure Spring Apps service instance**
+ * **Other : List Azure Spring Apps service instance test keys**
Under **Microsoft.AppPlatform/Spring/apps**, select:
- * **Read : Read Microsoft Azure Spring Cloud application**
+ * **Read : Read Microsoft Azure Spring Apps application**
Under **Microsoft.AppPlatform/Spring/apps/deployments**, select:
- * **Read : Read Microsoft Azure Spring Cloud application deployment**
- * **Other : Start Microsoft Azure Spring Cloud application deployment**
- * **Other : Stop Microsoft Azure Spring Cloud application deployment**
- * **Other : Restart Microsoft Azure Spring Cloud application deployment**
+ * **Read : Read Microsoft Azure Spring Apps application deployment**
+ * **Other : Start Microsoft Azure Spring Apps application deployment**
+ * **Other : Stop Microsoft Azure Spring Apps application deployment**
+ * **Other : Restart Microsoft Azure Spring Apps application deployment**
Under **Microsoft.AppPlatform/locations/operationResults/Spring**, select:
This procedure defines a role that has permissions to deploy, test, and restart
## Define the Azure Pipelines / Jenkins / GitHub Actions role
-This role can create and configure everything in Azure Spring Cloud and apps with a service instance. This role is for releasing or deploying code.
+This role can create and configure everything in Azure Spring Apps and apps with a service instance. This role is for releasing or deploying code.
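In practice, this role is usually granted to the service principal that the pipeline signs in with. A minimal sketch, assuming the custom role has already been created and using placeholder values for the assignee and scope:

```azurecli
# Assign the custom role to the pipeline's service principal, scoped to one service instance.
# The role name, service principal ID, and scope are placeholders.
az role assignment create \
    --assignee <service-principal-app-id> \
    --role "<custom-role-name>" \
    --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.AppPlatform/Spring/<service-instance-name>
```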
### [Portal](#tab/Azure-portal)
This role can create and configure everything in Azure Spring Cloud and apps wit
Under **Microsoft.AppPlatform/Spring**, select:
- * **Write : Create or Update Azure Spring Cloud service instance**
- * **Delete : Delete Azure Spring Cloud service instance**
- * **Read : Get Azure Spring Cloud service instance**
- * **Other : Enable Azure Spring Cloud service instance test endpoint**
- * **Other : Disable Azure Spring Cloud service instance test endpoint**
- * **Other : List Azure Spring Cloud service instance test keys**
- * **Other : Regenerate Azure Spring Cloud service instance test key**
+ * **Write : Create or Update Azure Spring Apps service instance**
+ * **Delete : Delete Azure Spring Apps service instance**
+ * **Read : Get Azure Spring Apps service instance**
+ * **Other : Enable Azure Spring Apps service instance test endpoint**
+ * **Other : Disable Azure Spring Apps service instance test endpoint**
+ * **Other : List Azure Spring Apps service instance test keys**
+ * **Other : Regenerate Azure Spring Apps service instance test key**
(For Enterprise tier only) Under **Microsoft.AppPlatform/Spring/buildServices**, select:
- * **Read : Read Microsoft Azure Spring Cloud Build Services**
- * **Other : Get an Upload URL in Azure Spring Cloud**
+ * **Read : Read Microsoft Azure Spring Apps Build Services**
+ * **Other : Get an Upload URL in Azure Spring Apps**
(For Enterprise tier only) Under **Microsoft.AppPlatform/Spring/buildServices/builds**, select:
- * **Read : Read Microsoft Azure Spring Cloud Builds**
- * **Write : Write Microsoft Azure Spring Cloud Builds**
+ * **Read : Read Microsoft Azure Spring Apps Builds**
+ * **Write : Write Microsoft Azure Spring Apps Builds**
(For Enterprise tier only) Under **Microsoft.AppPlatform/Spring/buildServices/builds/results**, select:
- * **Read : Read Microsoft Azure Spring Cloud Build Results**
- * **Other : Get an Log File URL in Azure Spring Cloud**
+ * **Read : Read Microsoft Azure Spring Apps Build Results**
+ * **Other : Get an Log File URL in Azure Spring Apps**
(For Enterprise tier only) Under **Microsoft.AppPlatform/Spring/buildServices/builders**, select:
- * **Read : Read Microsoft Azure Spring Cloud Builders**
- * **Write : Write Microsoft Azure Spring Cloud Builders**
- * **Delete : Delete Microsoft Azure Spring Cloud Builders**
+ * **Read : Read Microsoft Azure Spring Apps Builders**
+ * **Write : Write Microsoft Azure Spring Apps Builders**
+ * **Delete : Delete Microsoft Azure Spring Apps Builders**
(For Enterprise tier only) Under **Microsoft.AppPlatform/Spring/buildServices/builders/buildpackBindings**, select:
- * **Read : Read Microsoft Azure Spring Cloud Builder BuildpackBinding**
- * **Write : Write Microsoft Azure Spring Cloud Builder BuildpackBinding**
- * **Delete : Delete Microsoft Azure Spring Cloud Builder BuildpackBinding**
+ * **Read : Read Microsoft Azure Spring Apps Builder BuildpackBinding**
+ * **Write : Write Microsoft Azure Spring Apps Builder BuildpackBinding**
+ * **Delete : Delete Microsoft Azure Spring Apps Builder BuildpackBinding**
(For Enterprise tier only) Under **Microsoft.AppPlatform/Spring/buildServices/supportedBuildpacks**, select:
- * **Read : Read Microsoft Azure Spring Cloud Supported Buildpacks**
+ * **Read : Read Microsoft Azure Spring Apps Supported Buildpacks**
(For Enterprise tier only) Under **Microsoft.AppPlatform/Spring/buildServices/supportedStacks**, select:
- * **Read : Read Microsoft Azure Spring Cloud Supported Stacks**
+ * **Read : Read Microsoft Azure Spring Apps Supported Stacks**
Under **Microsoft.AppPlatform/Spring/apps**, select:
- * **Write : Write Microsoft Azure Spring Cloud application**
- * **Delete : Delete Microsoft Azure Spring Cloud application**
- * **Read : Read Microsoft Azure Spring Cloud application**
- * **Other : Get Microsoft Azure Spring Cloud application resource upload URL**
- * **Other : Validate Microsoft Azure Spring Cloud application custom domain**
+ * **Write : Write Microsoft Azure Spring Apps application**
+ * **Delete : Delete Microsoft Azure Spring Apps application**
+ * **Read : Read Microsoft Azure Spring Apps application**
+ * **Other : Get Microsoft Azure Spring Apps application resource upload URL**
+ * **Other : Validate Microsoft Azure Spring Apps application custom domain**
Under **Microsoft.AppPlatform/Spring/apps/bindings**, select:
- * **Write : Write Microsoft Azure Spring Cloud application binding**
- * **Delete : Delete Microsoft Azure Spring Cloud application binding**
- * **Read : Read Microsoft Azure Spring Cloud application binding**
+ * **Write : Write Microsoft Azure Spring Apps application binding**
+ * **Delete : Delete Microsoft Azure Spring Apps application binding**
+ * **Read : Read Microsoft Azure Spring Apps application binding**
Under **Microsoft.AppPlatform/Spring/apps/deployments**, select:
- * **Write : Write Microsoft Azure Spring Cloud application deployment**
- * **Delete : Delete Azure Spring Cloud application deployment**
- * **Read : Read Microsoft Azure Spring Cloud application deployment**
- * **Other : Start Microsoft Azure Spring Cloud application deployment**
- * **Other : Stop Microsoft Azure Spring Cloud application deployment**
- * **Other : Restart Microsoft Azure Spring Cloud application deployment**
- * **Other : Get Microsoft Azure Spring Cloud application deployment log file URL**
+ * **Write : Write Microsoft Azure Spring Apps application deployment**
+ * **Delete : Delete Azure Spring Apps application deployment**
+ * **Read : Read Microsoft Azure Spring Apps application deployment**
+ * **Other : Start Microsoft Azure Spring Apps application deployment**
+ * **Other : Stop Microsoft Azure Spring Apps application deployment**
+ * **Other : Restart Microsoft Azure Spring Apps application deployment**
+ * **Other : Get Microsoft Azure Spring Apps application deployment log file URL**
Under **Microsoft.AppPlatform/Spring/apps/deployments/skus**, select:
spring-cloud How To Prepare App Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-prepare-app-deployment.md
Title: How to prepare an application for deployment in Azure Spring Cloud
-description: Learn how to prepare an application for deployment to Azure Spring Cloud.
+ Title: How to prepare an application for deployment in Azure Spring Apps
+description: Learn how to prepare an application for deployment to Azure Spring Apps.
Last updated 07/06/2021 -+ zone_pivot_groups: programming-languages-spring-cloud
-# Prepare an application for deployment in Azure Spring Cloud
+# Prepare an application for deployment in Azure Spring Apps
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier ::: zone pivot="programming-language-csharp"
-This article shows how to prepare an existing Steeltoe application for deployment to Azure Spring Cloud. Azure Spring Cloud provides robust services to host, monitor, scale, and update a Steeltoe app.
+This article shows how to prepare an existing Steeltoe application for deployment to Azure Spring Apps. Azure Spring Apps provides robust services to host, monitor, scale, and update a Steeltoe app.
-This article explains the dependencies, configuration, and code that are required to run a .NET Core Steeltoe app in Azure Spring Cloud. For information about how to deploy an application to Azure Spring Cloud, see [Deploy your first Spring Boot app in Azure Spring Cloud](./quickstart.md).
+This article explains the dependencies, configuration, and code that are required to run a .NET Core Steeltoe app in Azure Spring Apps. For information about how to deploy an application to Azure Spring Apps, see [Deploy your first Spring Boot app in Azure Spring Apps](./quickstart.md).
>[!Note]
-> Steeltoe support for Azure Spring Cloud is currently offered as a public preview. Public preview offerings allow customers to experiment with new features prior to their official release. Public preview features and services are not meant for production use. For more information about support during previews, see the [FAQ](https://azure.microsoft.com/support/faq/) or file a [Support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
+> Steeltoe support for Azure Spring Apps is currently offered as a public preview. Public preview offerings allow customers to experiment with new features prior to their official release. Public preview features and services are not meant for production use. For more information about support during previews, see the [FAQ](https://azure.microsoft.com/support/faq/) or file a [Support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
## Supported versions
-Azure Spring Cloud supports:
+Azure Spring Apps supports:
* .NET Core 3.1
* Steeltoe 2.4 and 3.0
public static IHostBuilder CreateHostBuilder(string[] args) =>
> [!NOTE] > Eureka is not applicable to enterprise tier. If you're using enterprise tier, see [Use Service Registry](how-to-enterprise-service-registry.md).
-In the configuration source that will be used when the app runs in Azure Spring Cloud, set `spring.application.name` to the same name as the Azure Spring Cloud app to which the project will be deployed.
+In the configuration source that will be used when the app runs in Azure Spring Apps, set `spring.application.name` to the same name as the Azure Spring Apps app to which the project will be deployed.
-For example, if you deploy a .NET project named `EurekaDataProvider` to an Azure Spring Cloud app named `planet-weather-provider` the *appSettings.json* file should include the following JSON:
+For example, if you deploy a .NET project named `EurekaDataProvider` to an Azure Spring Apps app named `planet-weather-provider`, the *appSettings.json* file should include the following JSON:
```json "spring": {
using (var client = new HttpClient(discoveryHandler, false))
::: zone-end ::: zone pivot="programming-language-java"
-This article shows how to prepare an existing Java Spring application for deployment to Azure Spring Cloud. If configured properly, Azure Spring Cloud provides robust services to monitor, scale, and update your Java Spring Cloud application.
+This article shows how to prepare an existing Java Spring application for deployment to Azure Spring Apps. If configured properly, Azure Spring Apps provides robust services to monitor, scale, and update your Java Spring application.
Before running this example, you can try the [basic quickstart](./quickstart.md).
-Other examples explain how to deploy an application to Azure Spring Cloud when the POM file is configured.
+Other examples explain how to deploy an application to Azure Spring Apps when the POM file is configured.
* [Launch your first App](./quickstart.md) * [Introduction to the sample app](./quickstart-sample-app-introduction.md)
This article explains the required dependencies and how to add them to the POM f
## Java Runtime version
-For details, see the [Java runtime and OS versions](./faq.md?pivots=programming-language-java#java-runtime-and-os-versions) section of the [Azure Spring Cloud FAQ](./faq.md).
+For details, see the [Java runtime and OS versions](./faq.md?pivots=programming-language-java#java-runtime-and-os-versions) section of the [Azure Spring Apps FAQ](./faq.md).
## Spring Boot and Spring Cloud versions
-To prepare an existing Spring Boot application for deployment to Azure Spring Cloud, include the Spring Boot and Spring Cloud dependencies in the application POM file as shown in the following sections.
+To prepare an existing Spring Boot application for deployment to Azure Spring Apps, include the Spring Boot and Spring Cloud dependencies in the application POM file as shown in the following sections.
-Azure Spring Cloud will support the latest Spring Boot or Spring Cloud major version starting from 30 days after its release. The latest minor version will be supported as soon as it is released. You can get supported Spring Boot versions from [Spring Boot Releases](https://github.com/spring-projects/spring-boot/wiki/Supported-Versions#releases) and Spring Cloud versions from [Spring Cloud Releases](https://github.com/spring-cloud/spring-cloud-release/wiki).
+Azure Spring Apps will support the latest Spring Boot or Spring Cloud major version starting from 30 days after its release. The latest minor version will be supported as soon as it's released. You can get supported Spring Boot versions from [Spring Boot Releases](https://github.com/spring-projects/spring-boot/wiki/Supported-Versions#releases) and Spring Cloud versions from [Spring Cloud Releases](https://github.com/spring-cloud/spring-cloud-release/wiki).
The following table lists the supported Spring Boot and Spring Cloud combinations:
For Spring Boot version 2.6, add the following dependencies to the application P
``` > [!WARNING]
-> Don't specify `server.port` in your configuration. Azure Spring Cloud will override this setting to a fixed port number. You must also respect this setting and not specify a server port in your code.
+> Don't specify `server.port` in your configuration. Azure Spring Apps will override this setting to a fixed port number. You must also respect this setting and not specify a server port in your code.
-## Other recommended dependencies to enable Azure Spring Cloud features
+## Other recommended dependencies to enable Azure Spring Apps features
-To enable the built-in features of Azure Spring Cloud from service registry to distributed tracing, you need to also include the following dependencies in your application. You can drop some of these dependencies if you don't need corresponding features for the specific apps.
+To enable the built-in features of Azure Spring Apps, from service registry to distributed tracing, you also need to include the following dependencies in your application. You can drop any of these dependencies if you don't need the corresponding feature for a specific app.
### Service Registry
To use Application Configuration Service for Tanzu, do the following steps for e
Another option is to set the config file patterns at the same time as your app deployment, as shown in the following example: ```azurecli
- az spring-cloud app deploy \
+ az spring app deploy \
--name <app-name> \ --artifact-path <path-to-your-JAR-file> \ --config-file-pattern <config-file-pattern>
Include the `spring-boot-starter-actuator` dependency in the dependencies sectio
## Next steps
-In this article, you learned how to configure your Java Spring application for deployment to Azure Spring Cloud. To learn how to set up a Config Server instance, see [Set up a Config Server instance](./how-to-config-server.md).
+In this article, you learned how to configure your Java Spring application for deployment to Azure Spring Apps. To learn how to set up a Config Server instance, see [Set up a Config Server instance](./how-to-config-server.md).
-More samples are available on GitHub: [Azure Spring Cloud Samples](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples).
+More samples are available on GitHub: [Azure Spring Apps Samples](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples).
::: zone-end
spring-cloud How To Scale Manual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-scale-manual.md
Title: "Scale an application in Azure Spring Cloud | Microsoft Docs"
-description: Learn how to scale an application with Azure Spring Cloud in the Azure portal
+ Title: "Scale an application in Azure Spring Apps | Microsoft Docs"
+description: Learn how to scale an application with Azure Spring Apps in the Azure portal
Last updated 10/06/2019-+
-# Scale an application in Azure Spring Cloud
+# Scale an application in Azure Spring Apps
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Java ✔️ C# **This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This article demonstrates how to scale any Spring application using the Azure Spring Cloud dashboard in the Azure portal.
+This article demonstrates how to scale any Spring application using the Azure Spring Apps dashboard in the Azure portal.
Scale your application up and down by modifying its number of virtual CPUs (vCPUs) and amount of memory. Scale your application in and out by modifying the number of application instances.
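The same scale operations are also available from the Azure CLI. A minimal sketch, assuming the `spring` CLI extension is installed and an app named `demo` exists; the resource group and service instance names are placeholders:

```azurecli
# Scale up: give the app's active deployment 2 vCPUs and 4 Gi of memory.
az spring app scale \
    --name demo \
    --resource-group <resource-group-name> \
    --service <service-instance-name> \
    --cpu 2 \
    --memory 4Gi

# Scale out: run three instances of the app.
az spring app scale \
    --name demo \
    --resource-group <resource-group-name> \
    --service <service-instance-name> \
    --instance-count 3
```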
After you finish, you'll know how to make quick manual changes to each applicati
To follow these procedures, you need: * An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-* A deployed Azure Spring Cloud service instance. Follow the [quickstart on deploying an app via the Azure CLI](./quickstart.md) to get started.
+* A deployed Azure Spring Apps service instance. Follow the [quickstart on deploying an app via the Azure CLI](./quickstart.md) to get started.
* At least one application already created in your service instance. ## Navigate to the Scale page in the Azure portal 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Go to your Azure Spring Cloud **Overview** page.
+1. Go to your Azure Spring Apps **Overview** page.
1. Select the resource group that contains your service.
If you are on the Basic tier and constrained by one or more of these [limits](./
## Next steps
-This example explained how to manually scale an application in Azure Spring Cloud. To learn how to monitor an application by setting up alerts, see [Set-up autoscale](./how-to-setup-autoscale.md).
+This example explained how to manually scale an application in Azure Spring Apps. To learn how to scale applications automatically based on metrics, see [Set up autoscale](./how-to-setup-autoscale.md).
> [!div class="nextstepaction"] > [Learn how to set up alerts](./tutorial-alerts-action-groups.md)
spring-cloud How To Self Diagnose Running In Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-self-diagnose-running-in-vnet.md
Title: "How to self-diagnose Azure Spring Cloud VNET"
-description: Learn how to self-diagnose and solve problems in Azure Spring Cloud running in VNET.
+ Title: "How to self-diagnose Azure Spring Apps VNET"
+description: Learn how to self-diagnose and solve problems in Azure Spring Apps running in VNET.
Last updated 01/25/2021-+
-# Self-diagnose running Azure Spring Cloud in VNET
+# Self-diagnose running Azure Spring Apps in VNET
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This article shows you how to use Azure Spring Cloud diagnostics to diagnose and solve problems in Azure Spring Cloud running in VNET.
+This article shows you how to use Azure Spring Apps diagnostics to diagnose and solve problems in Azure Spring Apps running in VNET.
-Azure Spring Cloud diagnostics supports interactive troubleshooting apps running in virtual networks without configuration. Azure Spring Cloud diagnostics identifies problems and guides you to information that helps troubleshoot and resolve them.
+Azure Spring Apps diagnostics supports interactive troubleshooting of apps running in virtual networks without requiring any configuration. Azure Spring Apps diagnostics identifies problems and guides you to information that helps you troubleshoot and resolve them.
## Navigate to the diagnostics page The following procedure starts diagnostics for networked applications. 1. Sign in to the Azure portal.
-1. Go to your Azure Spring Cloud Overview page.
+1. Go to your Azure Spring Apps Overview page.
1. Select **Diagnose and solve problems** in the menu on the left navigation pane. 1. Select the third category, **Networking**.
The following procedure starts diagnostics for networked applications.
## View a diagnostic report
-After you select the **Networking** category, you can view two issues related to Networking specific to your VNet injected Azure Spring Cloud: **DNS Resolution** and **Required Outbound Traffic**.
+After you select the **Networking** category, you can view two networking issues specific to your VNet-injected Azure Spring Apps instance: **DNS Resolution** and **Required Outbound Traffic**.
![Self diagnostic options](media/spring-cloud-self-diagnose-vnet/self-diagostic-dns-req-outbound-options.png)
Maybe your network is blocked or the log service is down.
## Next steps
-* [How to self diagnose Azure Spring Cloud](./how-to-self-diagnose-solve.md)
+* [How to self diagnose Azure Spring Apps](./how-to-self-diagnose-solve.md)
spring-cloud How To Self Diagnose Solve https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-self-diagnose-solve.md
Title: "How to self-diagnose and solve problems in Azure Spring Cloud"
-description: Learn how to self-diagnose and solve problems in Azure Spring Cloud.
+ Title: "How to self-diagnose and solve problems in Azure Spring Apps"
+description: Learn how to self-diagnose and solve problems in Azure Spring Apps.
Last updated 05/29/2020-+
-# Self-diagnose and solve problems in Azure Spring Cloud
+# Self-diagnose and solve problems in Azure Spring Apps
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Java ✔️ C# **This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This article shows you how to use Azure Spring Cloud diagnostics.
+This article shows you how to use Azure Spring Apps diagnostics.
-Azure Spring Cloud diagnostics is an interactive experience to troubleshoot your app without configuration. Azure Spring Cloud diagnostics identifies problems and guides you to information that helps troubleshoot and resolve issues.
+Azure Spring Apps diagnostics is an interactive experience to troubleshoot your app without configuration. Azure Spring Apps diagnostics identifies problems and guides you to information that helps troubleshoot and resolve issues.
## Prerequisites To complete this exercise, you need: * An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-* A deployed Azure Spring Cloud service instance. Follow our [quickstart on deploying an app via the Azure CLI](./quickstart.md) to get started.
+* A deployed Azure Spring Apps service instance. Follow our [quickstart on deploying an app via the Azure CLI](./quickstart.md) to get started.
* At least one application already created in your service instance. ## Navigate to the diagnostics page 1. Sign in to the Azure portal.
-2. Go to your Azure Spring Cloud **Overview** page.
+2. Go to your Azure Spring Apps **Overview** page.
3. Select **Diagnose and solve problems** in the left navigation pane. ![Diagnose, solve dialog](media/spring-cloud-diagnose/diagnose-solve-dialog.png)
Some results contain related documentation.
## Next steps
-* [Monitor Spring Cloud resources using alerts and action groups](./tutorial-alerts-action-groups.md)
-* [Security controls for Azure Spring Cloud Service](./concept-security-controls.md)
+* [Monitor Spring app resources using alerts and action groups](./tutorial-alerts-action-groups.md)
+* [Security controls for Azure Spring Apps Service](./concept-security-controls.md)
spring-cloud How To Service Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-service-registration.md
Title: Discover and register your Spring Boot applications in Azure Spring Cloud
-description: Discover and register your Spring Boot applications with managed Spring Cloud Service Registry (OSS) in Azure Spring Cloud
+ Title: Discover and register your Spring Boot applications in Azure Spring Apps
+description: Discover and register your Spring Boot applications with managed Spring Cloud Service Registry (OSS) in Azure Spring Apps
Last updated 05/09/2022-+ zone_pivot_groups: programming-languages-spring-cloud # Discover and register your Spring Boot applications
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+ **This article applies to:** ✔️ Basic/Standard tier ❌ Enterprise tier This article shows you how to register your application using Spring Cloud Service Registry.
-Service registration and discovery are key requirements for maintaining a list of live app instances to call, and routing and load balancing inbound requests. Configuring each client manually takes time and introduces the possibility of human error. Azure Spring Cloud provides two options for you to solve this problem:
+Service registration and discovery are key requirements for maintaining a list of live app instances to call, and routing and load balancing inbound requests. Configuring each client manually takes time and introduces the possibility of human error. Azure Spring Apps provides two options for you to solve this problem:
* Use Kubernetes Service Discovery approach to invoke calls among your apps.
- Azure Spring Cloud creates a corresponding kubernetes service for every app running in it using app name as the kubernetes service name. So you can invoke calls in one app to another app by using app name in a http/https request like http(s)://{app name}/path. And this approach is also suitable for Enterprise tier.
+ Azure Spring Apps creates a corresponding Kubernetes service for every app running in it, using the app name as the Kubernetes service name. One app can then call another by using the app name in an HTTP or HTTPS request, such as `http(s)://{app name}/path`. This approach is also suitable for the Enterprise tier.
-* Use Managed Spring Cloud Service Registry (OSS) in Azure Spring Cloud.
+* Use Managed Spring Cloud Service Registry (OSS) in Azure Spring Apps.
After configuration, a Service Registry server will control service registration and discovery for your applications. The Service Registry server maintains a registry of live app instances, enables client-side load-balancing, and decouples service providers from clients without relying on DNS. ::: zone pivot="programming-language-csharp"
-For information about how to set up service registration for a Steeltoe app, see [Prepare a Java Spring application for deployment in Azure Spring Cloud](how-to-prepare-app-deployment.md).
+For information about how to set up service registration for a Steeltoe app, see [Prepare a Java Spring application for deployment in Azure Spring Apps](how-to-prepare-app-deployment.md).
::: zone-end
spring-cloud How To Setup Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-setup-autoscale.md
Last updated 11/03/2021-+ # Set up autoscale for applications
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+ **This article applies to:** ✔️ Java ✔️ C# **This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier This article describes how to set up Autoscale settings for your applications using the Microsoft Azure portal or the Azure CLI.
-Autoscale is a built-in feature of Azure Spring Cloud that helps applications perform their best when demand changes. Azure Spring Cloud supports scale-out and scale-in, which includes modifying the number of app instances and load balancing.
+Autoscale is a built-in feature of Azure Spring Apps that helps applications perform their best when demand changes. Azure Spring Apps supports scale-out and scale-in, which includes modifying the number of app instances and load balancing.
## Prerequisites To follow these procedures, you need: * An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-* A deployed Azure Spring Cloud service instance. Follow the [quickstart on deploying an app via the Azure CLI](./quickstart.md) to get started.
+* A deployed Azure Spring Apps service instance. Follow the [quickstart on deploying an app via the Azure CLI](./quickstart.md) to get started.
* At least one application already created in your service instance. ## Navigate to the Autoscale page in the Azure portal 1. Sign in to the [Azure portal](https://portal.azure.com/).
-2. Go to the Azure Spring Cloud **Overview** page.
+2. Go to the Azure Spring Apps **Overview** page.
3. Select the resource group that contains your service. 4. Select the **Apps** tab under **Settings** in the menu on the left navigation pane. 5. Select the application for which you want to set up Autoscale. In this example, select the application named **demo**. You should then see the application's **Overview** page.
You can also set Autoscale modes using the Azure CLI. The following commands cre
--condition "tomcat.global.request.total.count > 100 avg 1m where AppName == demo and Deployment == default" ```
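The truncated `--condition` fragment above belongs to an autoscale rule. A fuller sketch of the CLI flow, assuming the standard `az monitor autoscale` commands and a deployment resource ID in the format shown (verify the exact path for your instance):

```azurecli
# Create an autoscale setting that targets the app's deployment.
# The resource ID format below is an assumption; confirm it for your service instance.
az monitor autoscale create \
    --resource-group <resource-group-name> \
    --name demo-autoscale \
    --resource /subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.AppPlatform/Spring/<service-instance-name>/apps/demo/deployments/default \
    --min-count 1 --max-count 5 --count 1

# Add a scale-out rule using the condition from the fragment above.
az monitor autoscale rule create \
    --resource-group <resource-group-name> \
    --autoscale-name demo-autoscale \
    --scale out 1 \
    --condition "tomcat.global.request.total.count > 100 avg 1m where AppName == demo and Deployment == default"
```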
-For information on the available metrics, see the [User metrics options](./concept-metrics.md#user-metrics-options) section of [Metrics for Azure Spring Cloud](./concept-metrics.md).
+For information on the available metrics, see the [User metrics options](./concept-metrics.md#user-metrics-options) section of [Metrics for Azure Spring Apps](./concept-metrics.md).
## Upgrade to the Standard tier
If you're on the Basic tier and constrained by one or more of these limits, you
## Next steps * [Overview of autoscale in Microsoft Azure](../azure-monitor/autoscale/autoscale-overview.md)
-* [Azure CLI Monitoring autoscale](/cli/azure/monitor/autoscale)
+* [Azure CLI Monitoring autoscale](/cli/azure/monitor/autoscale)
spring-cloud How To Staging Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-staging-environment.md
Title: Set up a staging environment in Azure Spring Cloud | Microsoft Docs
-description: Learn how to use blue-green deployment with Azure Spring Cloud
+ Title: Set up a staging environment in Azure Spring Apps | Microsoft Docs
+description: Learn how to use blue-green deployment with Azure Spring Apps
Last updated 01/14/2021 -+
-# Set up a staging environment in Azure Spring Cloud
+# Set up a staging environment in Azure Spring Apps
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Java ❌ C# **This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This article explains how to set up a staging deployment by using the blue-green deployment pattern in Azure Spring Cloud. Blue-green deployment is an Azure DevOps continuous delivery pattern that relies on keeping an existing (blue) version live while a new (green) one is deployed. This article shows you how to put that staging deployment into production without changing the production deployment.
+This article explains how to set up a staging deployment by using the blue-green deployment pattern in Azure Spring Apps. Blue-green deployment is an Azure DevOps continuous delivery pattern that relies on keeping an existing (blue) version live while a new (green) one is deployed. This article shows you how to put that staging deployment into production without changing the production deployment.
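Once the staging (green) deployment has been verified, the swap into production is typically a single CLI call. A minimal sketch, assuming the `demo` app and `green` deployment names used later in these steps:

```azurecli
# Promote the green deployment so it receives production traffic.
az spring app set-deployment \
    --deployment green \
    --name demo \
    --resource-group <resource-group-name> \
    --service <service-instance-name>
```

The previously active (blue) deployment stays in place, so you can swap back the same way if a problem shows up.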
## Prerequisites
-* Azure Spring Cloud instance on a Standard pricing tier
-* [Azure Spring Cloud extension](/cli/azure/azure-cli-extensions-overview) for the Azure CLI
+* Azure Spring Apps instance on a Standard pricing tier
+* [Azure Spring Apps extension](/cli/azure/azure-cli-extensions-overview) for the Azure CLI
This article uses an application built from Spring Initializr. If you want to use a different application for this example, you'll need to make a simple change in a public-facing portion of the application to differentiate your staging deployment from production. > [!TIP] > [Azure Cloud Shell](https://shell.azure.com) is a free interactive shell that you can use to run the instructions in this article. It has common, preinstalled Azure tools, including the latest versions of Git, JDK, Maven, and the Azure CLI. If you're signed in to your Azure subscription, start your Cloud Shell instance. To learn more, see [Overview of Azure Cloud Shell](../cloud-shell/overview.md).
-To set up blue-green deployment in Azure Spring Cloud, follow the instructions in the next sections.
+To set up blue-green deployment in Azure Spring Apps, follow the instructions in the next sections.
## Install the Azure CLI extension
-Install the Azure Spring Cloud extension for the Azure CLI by using the following command:
+Install the Azure Spring Apps extension for the Azure CLI by using the following command:
```azurecli
-az extension add --name spring-cloud
+az extension add --name spring
``` ## Prepare the app and deployments
To build the application, follow these steps:
@RequestMapping("/") public String index() {
- return "Greetings from Azure Spring Cloud!";
+ return "Greetings from Azure Spring Apps!";
} }
To build the application, follow these steps:
mvn clean package -DskipTests ```
-5. Create the app in your Azure Spring Cloud instance:
+5. Create the app in your Azure Spring Apps instance:
```azurecli
- az spring-cloud app create -n demo -g <resourceGroup> -s <Azure Spring Cloud instance> --assign-endpoint
+ az spring app create -n demo -g <resourceGroup> -s <Azure Spring Apps instance> --assign-endpoint
```
-6. Deploy the app to Azure Spring Cloud:
+6. Deploy the app to Azure Spring Apps:
```azurecli
- az spring-cloud app deploy -n demo -g <resourceGroup> -s <Azure Spring Cloud instance> --jar-path target\hellospring-0.0.1-SNAPSHOT.jar
+ az spring app deploy -n demo -g <resourceGroup> -s <Azure Spring Apps instance> --jar-path target\hellospring-0.0.1-SNAPSHOT.jar
``` 7. Modify the code for your staging deployment:
To build the application, follow these steps:
@RequestMapping("/") public String index() {
- return "Greetings from Azure Spring Cloud! THIS IS THE GREEN DEPLOYMENT";
+ return "Greetings from Azure Spring Apps! THIS IS THE GREEN DEPLOYMENT";
} }
To build the application, follow these steps:
9. Create the green deployment: ```azurecli
- az spring-cloud app deployment create -n green --app demo -g <resourceGroup> -s <Azure Spring Cloud instance> --jar-path target\hellospring-0.0.1-SNAPSHOT.jar
+ az spring app deployment create -n green --app demo -g <resourceGroup> -s <Azure Spring Apps instance> --jar-path target\hellospring-0.0.1-SNAPSHOT.jar
``` ## View apps and deployments View deployed apps by using the following procedure:
-1. Go to your Azure Spring Cloud instance in the Azure portal.
+1. Go to your Azure Spring Apps instance in the Azure portal.
1. From the left pane, open the **Apps** pane to view apps for your service instance.
If you visit your public-facing app gateway at this point, you should see the ol
If you're not satisfied with your change, you can modify your application code, build a new .jar package, and upload it to your green deployment by using the Azure CLI: ```azurecli
-az spring-cloud app deploy -g <resource-group-name> -s <service-instance-name> -n gateway -d green --jar-path gateway.jar
+az spring app deploy -g <resource-group-name> -s <service-instance-name> -n gateway -d green --jar-path gateway.jar
``` ## Delete the staging deployment
To delete your staging deployment from the Azure portal, go to the page for your
Alternatively, delete your staging deployment from the Azure CLI by running the following command: ```azurecli
-az spring-cloud app deployment delete -n <staging-deployment-name> -g <resource-group-name> -s <service-instance-name> --app gateway
+az spring app deployment delete -n <staging-deployment-name> -g <resource-group-name> -s <service-instance-name> --app gateway
``` ## Next steps
-* [CI/CD for Azure Spring Cloud](./how-to-cicd.md?pivots=programming-language-java)
+* [CI/CD for Azure Spring Apps](./how-to-cicd.md?pivots=programming-language-java)
spring-cloud How To Start Stop Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-start-stop-delete.md
Title: Start, stop, and delete an application in Azure Spring Cloud | Microsoft Docs
-description: How to start, stop, and delete an application in Azure Spring Cloud
+ Title: Start, stop, and delete an application in Azure Spring Apps | Microsoft Docs
+description: How to start, stop, and delete an application in Azure Spring Apps
Last updated 10/31/2019 -+
-# Start, stop, and delete an application in Azure Spring Cloud
+# Start, stop, and delete an application in Azure Spring Apps
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Java ✔️ C# **This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This guide explains how to change an application's state in Azure Spring Cloud by using either the Azure portal or the Azure CLI.
+This guide explains how to change an application's state in Azure Spring Apps by using either the Azure portal or the Azure CLI.
## Using the Azure portal After you deploy an application, you can start, stop, and delete it by using the Azure portal.
-1. Go to your Azure Spring Cloud service instance in the Azure portal.
+1. Go to your Azure Spring Apps service instance in the Azure portal.
1. Select the **Application Dashboard** tab. 1. Select the application whose state you want to change. 1. On the **Overview** page for that application, select **Start/Stop**, **Restart**, or **Delete**.
After you deploy an application, you can start, stop, and delete it by using the
## Using the Azure CLI > [!NOTE]
-> You can use optional parameters and configure defaults with the Azure CLI. Learn more about the Azure CLI by reading [our reference documentation](/cli/azure/spring-cloud).
+> You can use optional parameters and configure defaults with the Azure CLI. Learn more about the Azure CLI by reading [our reference documentation](/cli/azure/spring).
-First, install the Azure Spring Cloud extension for the Azure CLI as follows:
+First, install the Azure Spring Apps extension for the Azure CLI as follows:
```azurecli
-az extension add --name spring-cloud
+az extension add --name spring
``` Next, select any of these Azure CLI operations:
Next, select any of these Azure CLI operations:
* To start your application: ```azurecli
- az spring-cloud app start -n <application name> -g <resource group> -s <Azure Spring Cloud name>
+ az spring app start -n <application name> -g <resource group> -s <Azure Spring Apps name>
``` * To stop your application: ```azurecli
- az spring-cloud app stop -n <application name> -g <resource group> -s <Azure Spring Cloud name>
+ az spring app stop -n <application name> -g <resource group> -s <Azure Spring Apps name>
``` * To restart your application: ```azurecli
- az spring-cloud app restart -n <application name> -g <resource group> -s <Azure Spring Cloud name>
+ az spring app restart -n <application name> -g <resource group> -s <Azure Spring Apps name>
``` * To delete your application: ```azurecli
- az spring-cloud app delete -n <application name> -g <resource group> -s <Azure Spring Cloud name>
+ az spring app delete -n <application name> -g <resource group> -s <Azure Spring Apps name>
```
spring-cloud How To Start Stop Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-start-stop-service.md
Title: How to start or stop an Azure Spring Cloud service instance
-description: Describes how to start or stop an Azure Spring Cloud service instance
+ Title: How to start or stop an Azure Spring Apps service instance
+description: Describes how to start or stop an Azure Spring Apps service instance
Last updated 11/04/2021-+
-# Start or stop your Azure Spring Cloud service instance
+# Start or stop your Azure Spring Apps service instance
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This article shows you how to start or stop your Azure Spring Cloud service instance.
+This article shows you how to start or stop your Azure Spring Apps service instance.
> [!NOTE] > Stop and start is currently under preview and we do not recommend this feature for production.
-Your applications running in Azure Spring Cloud may not need to run continuously - for example, if you have a service instance that's used only during business hours. At these times, Azure Spring Cloud may be idle, and running only the system components.
+Your applications running in Azure Spring Apps may not need to run continuously. For example, you might have a service instance that's used only during business hours. At these times, Azure Spring Apps may be idle, running only the system components.
-You can reduce the active footprint of Azure Spring Cloud by reducing the running instances and ensuring costs for compute resources are reduced.
+You can reduce the active footprint of Azure Spring Apps by reducing the number of running instances, which in turn reduces the cost of the compute resources.
-To reduce your costs further, you can completely stop your Azure Spring Cloud service instance. All user apps and system components will be stopped. However, all your objects and network settings will be saved so you can restart your service instance and pick up right where you left off.
+To reduce your costs further, you can completely stop your Azure Spring Apps service instance. All user apps and system components will be stopped. However, all your objects and network settings will be saved so you can restart your service instance and pick up right where you left off.
> [!NOTE]
-> The state of a stopped Azure Spring Cloud service instance is preserved for up to 90 days during preview. If your cluster is stopped for more than 90 days, the cluster state cannot be recovered.
+> The state of a stopped Azure Spring Apps service instance is preserved for up to 90 days during preview. If your cluster is stopped for more than 90 days, the cluster state cannot be recovered.
> The maximum stop time may change after preview.
-You can only start, view, or delete a stopped Azure Spring Cloud service instance. You must start your service instance before performing any update operation, such as creating or scaling an app.
+You can only start, view, or delete a stopped Azure Spring Apps service instance. You must start your service instance before performing any update operation, such as creating or scaling an app.
## Prerequisites -- An existing service instance in Azure Spring Cloud. To create a new service instance, see [Quickstart: Deploy your first application in Azure Spring Cloud](./quickstart.md).
+- An existing service instance in Azure Spring Apps. To create a new service instance, see [Quickstart: Deploy your first application in Azure Spring Apps](./quickstart.md).
- (Optional) [Azure CLI](/cli/azure/install-azure-cli) version 2.11.2 or later.
-# [Portal](#tab/azure-portal)
+## [Portal](#tab/azure-portal)
## Stop a running instance
-In the Azure portal, use the following steps to stop a running Azure Spring Cloud instance:
+In the Azure portal, use the following steps to stop a running Azure Spring Apps instance:
-1. Go to the Azure Spring Cloud service overview page.
+1. Go to the Azure Spring Apps service overview page.
2. Select **Stop** to stop a running instance.
- :::image type="content" source="media/stop-start-service/spring-cloud-stop-service.png" alt-text="Screenshot of Azure portal showing the Azure Spring Cloud Overview page with the Stop button and Status value highlighted.":::
+ :::image type="content" source="media/stop-start-service/spring-cloud-stop-service.png" alt-text="Screenshot of Azure portal showing the Azure Spring Apps Overview page with the Stop button and Status value highlighted.":::
3. After the instance stops, the status will show **Succeeded (Stopped)**. ## Start a stopped instance
-In the Azure portal, use the following steps to start a stopped Azure Spring Cloud instance:
+In the Azure portal, use the following steps to start a stopped Azure Spring Apps instance:
-1. Go to Azure Spring Cloud service overview page.
+1. Go to the Azure Spring Apps service overview page.
2. Select **Start** to start a stopped instance.
- :::image type="content" source="media/stop-start-service/spring-cloud-start-service.png" alt-text="Screenshot of Azure portal showing the Azure Spring Cloud Overview page with the Start button and Status value highlighted.":::
+ :::image type="content" source="media/stop-start-service/spring-cloud-start-service.png" alt-text="Screenshot of Azure portal showing the Azure Spring Apps Overview page with the Start button and Status value highlighted.":::
3. After the instance starts, the status will show **Succeeded (Running)**.
-# [Azure CLI](#tab/azure-cli)
+## [Azure CLI](#tab/azure-cli)
## Stop a running instance
-With the Azure CLI, use the following command to stop a running Azure Spring Cloud instance:
+With the Azure CLI, use the following command to stop a running Azure Spring Apps instance:
```azurecli
-az spring-cloud stop \
+az spring stop \
--name <service-instance-name> \ --resource-group <resource-group-name> ``` ## Start a stopped instance
-With the Azure CLI, use the following command to start a stopped Azure Spring Cloud instance:
+With the Azure CLI, use the following command to start a stopped Azure Spring Apps instance:
```azurecli
-az spring-cloud start \
+az spring start \
--name <service-instance-name> \ --resource-group <resource-group-name> ```
az spring-cloud start \
After the instance stops or starts, use the following command to check the power state: ```azurecli
-az spring-cloud show \
+az spring show \
--name <service-instance-name> \ --resource-group <resource-group-name> ```
az spring-cloud show \
## Next steps
-* [Monitor app lifecycle events using Azure Activity log and Azure Service Health](./monitor-app-lifecycle-events.md)
-* [Monitor usage and estimated costs in Azure Monitor](../azure-monitor/usage-estimated-costs.md)
+- [Monitor app lifecycle events using Azure Activity log and Azure Service Health](./monitor-app-lifecycle-events.md)
+- [Monitor usage and estimated costs in Azure Monitor](../azure-monitor/usage-estimated-costs.md)
spring-cloud How To Use Enterprise Api Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-use-enterprise-api-portal.md
Title: How to use API portal for VMware Tanzu with Azure Spring Cloud Enterprise Tier-
-description: How to use API portal for VMware Tanzu with Azure Spring Cloud Enterprise Tier.
+ Title: How to use API portal for VMware Tanzu with Azure Spring Apps Enterprise Tier
+
+description: How to use API portal for VMware Tanzu with Azure Spring Apps Enterprise Tier.
Last updated 02/09/2022-+ # Use API portal for VMware Tanzu
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+ **This article applies to:** ❌ Basic/Standard tier ✔️ Enterprise tier
-This article shows you how to use API portal for VMware Tanzu® with Azure Spring Cloud Enterprise Tier.
+This article shows you how to use API portal for VMware Tanzu® with Azure Spring Apps Enterprise Tier.
[API portal](https://docs.vmware.com/en/API-portal-for-VMware-Tanzu/1.0/api-portal/GUID-https://docsupdatetracker.net/index.html) is one of the commercial VMware Tanzu components. API portal supports viewing API definitions from [Spring Cloud Gateway for VMware Tanzu®](./how-to-use-enterprise-spring-cloud-gateway.md) and testing of specific API routes from the browser. It also supports enabling Single Sign-On authentication via configuration. ## Prerequisites -- An already provisioned Azure Spring Cloud Enterprise tier instance with API portal enabled. For more information, see [Quickstart: Provision an Azure Spring Cloud service instance using the Enterprise tier](quickstart-provision-service-instance-enterprise.md).
+- An already provisioned Azure Spring Apps Enterprise tier instance with API portal enabled. For more information, see [Quickstart: Provision an Azure Spring Apps service instance using the Enterprise tier](quickstart-provision-service-instance-enterprise.md).
> [!NOTE]
- > To use API portal, you must enable it when you provision your Azure Spring Cloud service instance. You cannot enable it after provisioning at this time.
+ > To use API portal, you must enable it when you provision your Azure Spring Apps service instance. You cannot enable it after provisioning at this time.
- [Spring Cloud Gateway for Tanzu](./how-to-use-enterprise-spring-cloud-gateway.md) is enabled during provisioning and the corresponding API metadata is configured.
To access API portal, use the following steps to assign a public endpoint:
You can also use the Azure CLI to assign a public endpoint with the following command: ```azurecli
-az spring-cloud api-portal update --assign-endpoint
+az spring api-portal update --assign-endpoint
``` ## View the route information through API portal
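After assigning the endpoint, you need its URL to open API portal in a browser. One way to read it back from the CLI, assuming `az spring api-portal show` exposes the URL under `properties.url` (inspect the full output if this query returns nothing):

```azurecli
# Retrieve the assigned API portal endpoint URL; the property path is an assumption.
az spring api-portal show \
    --resource-group <resource-group-name> \
    --service <service-instance-name> \
    --query properties.url --output tsv
```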
Select the `endpoint URL` to go to API portal. You'll see all the routes configu
## Next steps -- [Azure Spring Cloud](index.yml)
+- [Azure Spring Apps](index.yml)
spring-cloud How To Use Enterprise Spring Cloud Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-use-enterprise-spring-cloud-gateway.md
Title: How to use Spring Cloud Gateway for Tanzu with Azure Spring Cloud Enterprise Tier-
-description: How to use Spring Cloud Gateway for Tanzu with Azure Spring Cloud Enterprise Tier.
+ Title: How to use Spring Cloud Gateway for Tanzu with Azure Spring Apps Enterprise Tier
+
+description: How to use Spring Cloud Gateway for Tanzu with Azure Spring Apps Enterprise Tier.
Last updated 02/09/2022-+ # Use Spring Cloud Gateway for Tanzu
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+ **This article applies to:** ❌ Basic/Standard tier ✔️ Enterprise tier
-This article shows you how to use Spring Cloud Gateway for VMware Tanzu® with Azure Spring Cloud Enterprise Tier.
+This article shows you how to use Spring Cloud Gateway for VMware Tanzu® with Azure Spring Apps Enterprise Tier.
[Spring Cloud Gateway for Tanzu](https://docs.vmware.com/en/VMware-Spring-Cloud-Gateway-for-Kubernetes/https://docsupdatetracker.net/index.html) is one of the commercial VMware Tanzu components. It's based on the open-source Spring Cloud Gateway project. Spring Cloud Gateway for Tanzu handles cross-cutting concerns for API development teams, such as Single Sign-On (SSO), access control, rate-limiting, resiliency, security, and more. You can accelerate API delivery using modern cloud native patterns, and any programming language you choose for API development.
To integrate with [API portal for VMware Tanzu®](./how-to-use-enterprise-api-po
## Prerequisites -- An already provisioned Azure Spring Cloud Enterprise tier service instance with Spring Cloud Gateway for Tanzu enabled. For more information, see [Quickstart: Provision an Azure Spring Cloud service instance using the Enterprise tier](quickstart-provision-service-instance-enterprise.md).
+- An already provisioned Azure Spring Apps Enterprise tier service instance with Spring Cloud Gateway for Tanzu enabled. For more information, see [Quickstart: Provision an Azure Spring Apps service instance using the Enterprise tier](quickstart-provision-service-instance-enterprise.md).
> [!NOTE]
- > To use Spring Cloud Gateway for Tanzu, you must enable it when you provision your Azure Spring Cloud service instance. You cannot enable it after provisioning at this time.
+ > To use Spring Cloud Gateway for Tanzu, you must enable it when you provision your Azure Spring Apps service instance. You cannot enable it after provisioning at this time.
- [Azure CLI version 2.0.67 or later](/cli/azure/install-azure-cli). ## How Spring Cloud Gateway for Tanzu works
-Spring Cloud Gateway for Tanzu has two components: Spring Cloud Gateway for Tanzu operator and Spring Cloud Gateway for Tanzu instance. The operator is responsible for the lifecycle of Spring Cloud Gateway for Tanzu instances and routing rules. It's transparent to the developer and Azure Spring Cloud will manage it.
+Spring Cloud Gateway for Tanzu has two components: Spring Cloud Gateway for Tanzu operator and Spring Cloud Gateway for Tanzu instance. The operator is responsible for the lifecycle of Spring Cloud Gateway for Tanzu instances and routing rules. It's transparent to the developer and Azure Spring Apps will manage it.
The Spring Cloud Gateway for Tanzu instance routes traffic according to rules. It supports rich features that you can customize by using the sections below. Both scale in/out and scale up/down are supported to meet dynamic traffic loads.
The following tables list the route definitions. All the properties are optional
| order | Route processing order, same as Spring Cloud Gateway for Tanzu |
| tags | Classification tags, will be applied to methods in the generated OpenAPI documentation |
-Not all the filters/predicates are supported in Azure Spring Cloud because of security/compatible reasons. The following are not supported:
+Not all of the filters/predicates are supported in Azure Spring Apps, for security and compatibility reasons. The following aren't supported:
- BasicAuth
- JWTKey
Not all the filters/predicates are supported in Azure Spring Cloud because of se
Use the following steps to create an example application using Spring Cloud Gateway for Tanzu.
-1. To create an app in Azure Spring Cloud which the Spring Cloud Gateway for Tanzu would route traffic to, follow the instructions in [Quickstart: Build and deploy apps to Azure Spring Cloud using the Enterprise tier](quickstart-deploy-apps-enterprise.md). Select `customers-service` for this example.
+1. To create an app in Azure Spring Apps that Spring Cloud Gateway for Tanzu will route traffic to, follow the instructions in [Quickstart: Build and deploy apps to Azure Spring Apps using the Enterprise tier](quickstart-deploy-apps-enterprise.md). Select `customers-service` for this example.
1. Assign a public endpoint to the gateway to access it.
Use the following steps to create an example application using Spring Cloud Gate
Select **Yes** next to *Assign endpoint* to assign a public endpoint. You'll get a URL in a few minutes. Save the URL to use later.
- :::image type="content" source="media/enterprise/getting-started-enterprise/gateway-overview.png" alt-text="Screenshot of Azure portal Azure Spring Cloud overview page with 'Assign endpoint' highlighted.":::
+ :::image type="content" source="media/enterprise/getting-started-enterprise/gateway-overview.png" alt-text="Screenshot of Azure portal Azure Spring Apps overview page with 'Assign endpoint' highlighted.":::
You can also use CLI to do it, as shown in the following command: ```azurecli
- az spring-cloud gateway update --assign-endpoint
+ az spring gateway update --assign-endpoint
``` 1. Use the following command to configure Spring Cloud Gateway for Tanzu properties: ```azurecli
- az spring-cloud gateway update \
+ az spring gateway update \
--api-description "<api-description>" \ --api-title "<api-title>" \ --api-version "v0.1" \
Use the following steps to create an example application using Spring Cloud Gate
You can also view those properties in the portal.
- :::image type="content" source="media/enterprise/how-to-use-enterprise-spring-cloud-gateway/gateway-configuration.png" alt-text="Screenshot of Azure portal showing Azure Spring Cloud Spring Cloud Gateway page with Configuration pane showing.":::
+ :::image type="content" source="media/enterprise/how-to-use-enterprise-spring-cloud-gateway/gateway-configuration.png" alt-text="Screenshot of Azure portal showing Azure Spring Apps Spring Cloud Gateway page with Configuration pane showing.":::
1. Configure routing rules to apps.
Use the following steps to create an example application using Spring Cloud Gate
Use the following command to apply the rule to the app `customers-service`: ```azurecli
- az spring-cloud gateway route-config create \
+ az spring gateway route-config create \
--name customers-service-rule \ --app-name customers-service \ --routes-file customers-service.json
Use the following steps to create an example application using Spring Cloud Gate
You can also view the routes in the portal.
- :::image type="content" source="media/enterprise/how-to-use-enterprise-spring-cloud-gateway/gateway-route.png" alt-text="Screenshot of Azure portal Azure Spring Cloud Spring Cloud Gateway page showing 'Routing rules' pane.":::
+ :::image type="content" source="media/enterprise/how-to-use-enterprise-spring-cloud-gateway/gateway-route.png" alt-text="Screenshot of Azure portal Azure Spring Apps Spring Cloud Gateway page showing 'Routing rules' pane.":::
1. Use the following command to access the `customers service` and `owners` APIs through the gateway endpoint:
Use the following steps to create an example application using Spring Cloud Gate
```azurecli az configure --defaults group=<resource group name> spring-cloud=<service name>
- az spring-cloud gateway route-config show \
+ az spring gateway route-config show \
--name customers-service-rule \ --query '{appResourceId:properties.appResourceId, routes:properties.routes}'
- az spring-cloud gateway route-config list \
+ az spring gateway route-config list \
--query '[].{name:name, appResourceId:properties.appResourceId, routes:properties.routes}' ``` ## Next steps -- [Azure Spring Cloud](index.yml)
+- [Azure Spring Apps](index.yml)
spring-cloud How To Use Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-use-managed-identities.md
Title: Managed identities for applications in Azure Spring Cloud-
+ Title: Managed identities for applications in Azure Spring Apps
+ description: Home page for managed identities for applications. Last updated 04/15/2022-+ zone_pivot_groups: spring-cloud-tier-selection
-# Use managed identities for applications in Azure Spring Cloud
+# Use managed identities for applications in Azure Spring Apps
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This article shows you how to use system-assigned and user-assigned managed identities for applications in Azure Spring Cloud.
+This article shows you how to use system-assigned and user-assigned managed identities for applications in Azure Spring Apps.
-Managed identities for Azure resources provide an automatically managed identity in Azure Active Directory (Azure AD) to an Azure resource such as your application in Azure Spring Cloud. You can use this identity to authenticate to any service that supports Azure AD authentication, without having credentials in your code.
+Managed identities for Azure resources provide an automatically managed identity in Azure Active Directory (Azure AD) to an Azure resource such as your application in Azure Spring Apps. You can use this identity to authenticate to any service that supports Azure AD authentication, without having credentials in your code.
## Feature status
An application can use its managed identity to get tokens to access other resour
You may need to configure the target resource to allow access from your application. For more information, see [Assign a managed identity access to a resource by using the Azure portal](../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md). For example, if you request a token to access Key Vault, be sure you have added an access policy that includes your application's identity. Otherwise, your calls to Key Vault will be rejected, even if they include the token. To learn more about which resources support Azure Active Directory tokens, see [Azure services that support Azure AD authentication](../active-directory/managed-identities-azure-resources/services-azure-active-directory-support.md).
-Azure Spring Cloud shares the same endpoint for token acquisition with Azure Virtual Machines. We recommend using Java SDK or Spring Boot starters to acquire a token. For various code and script examples and guidance on important topics such as handling token expiration and HTTP errors, see [How to use managed identities for Azure resources on an Azure VM to acquire an access token](../active-directory/managed-identities-azure-resources/how-to-use-vm-token.md).
+Azure Spring Apps shares the same endpoint for token acquisition with Azure Virtual Machines. We recommend using the Java SDK or Spring Boot starters to acquire a token. For various code and script examples and guidance on important topics such as handling token expiration and HTTP errors, see [How to use managed identities for Azure resources on an Azure VM to acquire an access token](../active-directory/managed-identities-azure-resources/how-to-use-vm-token.md).
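As a rough illustration of that recommendation (a minimal sketch, not taken from the linked article), the following Java snippet acquires a token with the app's managed identity, assuming the Azure Identity client library (`azure-identity`) is on the classpath. The Key Vault scope and the blocking call are simplifications chosen for brevity.

```java
import com.azure.core.credential.AccessToken;
import com.azure.core.credential.TokenRequestContext;
import com.azure.identity.ManagedIdentityCredential;
import com.azure.identity.ManagedIdentityCredentialBuilder;

public class ManagedIdentityTokenSample {
    public static void main(String[] args) {
        // Uses the system-assigned identity; call .clientId("<client-id>") on the builder
        // to pick a specific user-assigned identity instead.
        ManagedIdentityCredential credential = new ManagedIdentityCredentialBuilder().build();

        // Request a token for Azure Key Vault; other Azure AD protected services use their own scopes.
        TokenRequestContext context = new TokenRequestContext()
                .addScopes("https://vault.azure.net/.default");

        // getToken is asynchronous; blocking here keeps the sketch short.
        AccessToken token = credential.getToken(context).block();
        System.out.println("Token expires at: " + token.getExpiresAt());
    }
}
```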
## Examples of connecting Azure services in application code
The following table provides links to articles that contain examples:
| Azure service | tutorial |
|--|--|
-| Key Vault | [Tutorial: Use a managed identity to connect Key Vault to an Azure Spring Cloud app](tutorial-managed-identities-key-vault.md) |
-| Azure Functions | [Tutorial: Use a managed identity to invoke Azure Functions from an Azure Spring Cloud app](tutorial-managed-identities-functions.md) |
-| Azure SQL | [Use a managed identity to connect Azure SQL Database to an Azure Spring Cloud app](connect-managed-identity-to-azure-sql.md) |
+| Key Vault | [Tutorial: Use a managed identity to connect Key Vault to an Azure Spring Apps app](tutorial-managed-identities-key-vault.md) |
+| Azure Functions | [Tutorial: Use a managed identity to invoke Azure Functions from an Azure Spring Apps app](tutorial-managed-identities-functions.md) |
+| Azure SQL | [Use a managed identity to connect Azure SQL Database to an Azure Spring Apps app](connect-managed-identity-to-azure-sql.md) |
## Best practices when using managed identities
We highly recommend that you use system-assigned and user-assigned managed ident
### Maximum number of user-assigned managed identities per application
-For the maximum number of user-assigned managed identities per application, see [Quotas and Service Plans for Azure Spring Cloud](./quotas.md).
+For the maximum number of user-assigned managed identities per application, see [Quotas and Service Plans for Azure Spring Apps](./quotas.md).
### Azure services that aren't supported
spring-cloud How To Use Tls Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-use-tls-certificate.md
Title: Use TLS/SSL certificates in your application in Azure Spring Cloud-
+ Title: Use TLS/SSL certificates in your application in Azure Spring Apps
+ description: Use TLS/SSL certificates in an application. Last updated 10/08/2021-+
-# Use TLS/SSL certificates in your application in Azure Spring Cloud
+# Use TLS/SSL certificates in your application in Azure Spring Apps
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This article shows you how to use public certificates in Azure Spring Cloud for your application. Your app may act as a client and access an external service that requires certificate authentication, or it may need to perform cryptographic tasks.
+This article shows you how to use public certificates in Azure Spring Apps for your application. Your app may act as a client and access an external service that requires certificate authentication, or it may need to perform cryptographic tasks.
-When you let Azure Spring Cloud manage your TLS/SSL certificates, you can maintain the certificates and your application code separately to safeguard your sensitive data. Your app code can access the public certificates you add to your Azure Spring Cloud instance.
+When you let Azure Spring Apps manage your TLS/SSL certificates, you can maintain the certificates and your application code separately to safeguard your sensitive data. Your app code can access the public certificates you add to your Azure Spring Apps instance.
> [!NOTE] > Azure CLI and Terraform support and samples will be coming soon to this article. ## Prerequisites -- An application deployed to Azure Spring Cloud. See [Quickstart: Deploy your first application in Azure Spring Cloud](./quickstart.md), or use an existing app.
+- An application deployed to Azure Spring Apps. See [Quickstart: Deploy your first application in Azure Spring Apps](./quickstart.md), or use an existing app.
- Either a certificate file with *.crt*, *.cer*, *.pem*, or *.der* extension, or a deployed instance of Azure Key Vault with a private certificate. ## Import a certificate
-You can choose to import your certificate into your Azure Spring Cloud instance from either Key Vault or use a local certificate file.
+You can import your certificate into your Azure Spring Apps instance either from Key Vault or from a local certificate file.
### Import a certificate from Key Vault
-You need to grant Azure Spring Cloud access to your key vault before you import your certificate using these steps:
+Before you import your certificate, use the following steps to grant Azure Spring Apps access to your key vault:
1. Sign in to the [Azure portal](https://portal.azure.com). 1. Select **Key vaults**, then select the Key Vault you'll import your certificate from.
You need to grant Azure Spring Cloud access to your key vault before you import
:::image type="content" source="media/use-tls-certificates/grant-key-vault-permission.png" alt-text="Screenshot of Azure portal 'Create an access policy' page with Permission pane showing and Get and List permissions highlighted." lightbox="media/use-tls-certificates/grant-key-vault-permission.png":::
-1. Under **Principal**, select your **Azure Spring Cloud Resource Provider**.
+1. Under **Principal**, select your **Azure Spring Apps Resource Provider**.
- :::image type="content" source="media/use-tls-certificates/select-service-principal.png" alt-text="Screenshot of Azure portal 'Create an access policy' page with Principal pane showing and Azure Spring Cloud Resource Provider highlighted." lightbox="media/use-tls-certificates/select-service-principal.png":::
+ :::image type="content" source="media/use-tls-certificates/select-service-principal.png" alt-text="Screenshot of Azure portal 'Create an access policy' page with Principal pane showing and Azure Spring Apps Resource Provider highlighted." lightbox="media/use-tls-certificates/select-service-principal.png":::
1. Select **Review + Create**, then select **Create**.
You can import a certificate file stored locally using these steps:
## Load a certificate
-To load a certificate into your application in Azure Spring Cloud, start with these steps:
+To load a certificate into your application in Azure Spring Apps, start with these steps:
1. Go to your application instance. 1. From the left navigation pane of your app, select **Certificate management**.
To load a certificate into your application in Azure Spring Cloud, start with th
### Load a certificate from code
-Your loaded certificates are available in the */etc/azure-spring-cloud/certs/public* folder. Use the following Java code to load a public certificate in an application in Azure Spring Cloud.
+Your loaded certificates are available in the */etc/azure-spring-cloud/certs/public* folder. Use the following Java code to load a public certificate in an application in Azure Spring Apps.
```java
CertificateFactory factory = CertificateFactory.getInstance("X509");
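// The rest of this block is an illustrative sketch rather than the article's full sample.
// It assumes a certificate named "my-cert.crt" was added to the instance, so it appears
// under the well-known public-certificate path mentioned above.
// (Imports needed: java.io.InputStream, java.nio.file.Files, java.nio.file.Paths,
//  java.security.cert.CertificateFactory, java.security.cert.X509Certificate.)
try (InputStream in = Files.newInputStream(Paths.get("/etc/azure-spring-cloud/certs/public/my-cert.crt"))) {
    X509Certificate certificate = (X509Certificate) factory.generateCertificate(in);
    System.out.println("Loaded certificate for " + certificate.getSubjectX500Principal());
}
```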
spring-cloud How To Write Log To Custom Persistent Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-write-log-to-custom-persistent-storage.md
Title: How to use Logback to write logs to custom persistent storage in Azure Spring Cloud | Microsoft Docs
-description: How to use Logback to write logs to custom persistent storage in Azure Spring Cloud.
+ Title: How to use Logback to write logs to custom persistent storage in Azure Spring Apps | Microsoft Docs
+description: How to use Logback to write logs to custom persistent storage in Azure Spring Apps.
Last updated 11/17/2021-+ # How to use Logback to write logs to custom persistent storage
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+ **This article applies to:** ✔️ Java ❌ C# **This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This article shows you how to load Logback and write logs to custom persistent storage in Azure Spring Cloud.
+This article shows you how to load Logback and write logs to custom persistent storage in Azure Spring Apps.
> [!NOTE] > When a file in the application's classpath has one of the following names, Spring Boot will automatically load it over the default configuration for Logback:
This article shows you how to load Logback and write logs to custom persistent s
## Prerequisites
-* An existing storage resource bound to an Azure Spring Cloud instance. If you need to bind a storage resource, see [How to enable your own persistent storage in Azure Spring Cloud](./how-to-custom-persistent-storage.md).
+* An existing storage resource bound to an Azure Spring Apps instance. If you need to bind a storage resource, see [How to enable your own persistent storage in Azure Spring Apps](./how-to-custom-persistent-storage.md).
* The Logback dependency included in your application. For more information on Logback, see [A Guide To Logback](https://www.baeldung.com/logback).
-* The [Azure Spring Cloud extension](/cli/azure/azure-cli-extensions-overview) for the Azure CLI
+* The [Azure Spring Apps extension](/cli/azure/azure-cli-extensions-overview) for the Azure CLI
## Edit the Logback configuration to write logs into a specific path
In the preceding example, there are two placeholders named `{LOGS}` in the path
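For orientation, here is a minimal sketch (not from the article) of the application side: the code keeps logging through SLF4J as usual, and the Logback file appender configured with the `{LOGS}` placeholder described above is what directs those messages to the mounted persistent storage. The class and message below are placeholders.

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class PersistentLogDemo {
    private static final Logger LOGGER = LoggerFactory.getLogger(PersistentLogDemo.class);

    public static void main(String[] args) {
        // Written to the console appender and, through the file appender, to persistent storage.
        LOGGER.info("Hello from Logback on custom persistent storage");
    }
}
```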
## Use the Azure CLI to create and deploy a new app with Logback on persistent storage
-1. Use the following command to create an application in Azure Spring Cloud with persistent storage enabled and the environment variable set:
+1. Use the following command to create an application in Azure Spring Apps with persistent storage enabled and the environment variable set:
```azurecli
- az spring-cloud app create \
+ az spring app create \
--resource-group <resource-group-name> \ --name <app-name> \ --service <spring-instance-name> \
In the preceding example, there are two placeholders named `{LOGS}` in the path
1. Use the following command to deploy your application: ```azurecli
- az spring-cloud app deploy \
+ az spring app deploy \
--resource-group <resource-group-name> \ --name <app-name> \ --service <spring-instance-name> \
In the preceding example, there are two placeholders named `{LOGS}` in the path
1. Use the following command to check your application's console log: ```azurecli
- az spring-cloud app logs \
+ az spring app logs \
--resource-group <resource-group-name> \ --name <app-name> \ --service <spring-instance-name>
In the preceding example, there are two placeholders named `{LOGS}` in the path
The path or persistent storage where the logs are saved can be changed at any time. The application will restart when changes are made to either environment variables or persistent storage. ```azurecli
- az spring-cloud app update \
+ az spring app update \
--resource-group <resource-group-name> \ --name <app-name> \ --service <spring-instance-name> \
In the preceding example, there are two placeholders named `{LOGS}` in the path
## Next steps
-* [Structured application log for Azure Spring Cloud](./structured-app-log.md)
+* [Structured application log for Azure Spring Apps](./structured-app-log.md)
* [Analyzing logs and metrics with diagnostic settings](./diagnostic-services.md)
spring-cloud Monitor App Lifecycle Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/monitor-app-lifecycle-events.md
Last updated 08/19/2021-+ # Monitor app lifecycle events using Azure Activity log and Azure Service Health
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+ **This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier This article shows you how to monitor app lifecycle events and set up alerts with Azure Activity log and Azure Service Health.
-Azure Spring Cloud provides built-in tools to monitor the status and health of your applications. App lifecycle events help you understand any changes that were made to your applications so you can take action as necessary.
+Azure Spring Apps provides built-in tools to monitor the status and health of your applications. App lifecycle events help you understand any changes that were made to your applications so you can take action as necessary.
## Prerequisites - An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.-- A deployed Azure Spring Cloud service instance and at least one application already created in your service instance. For more information, see [Quickstart: Deploy your first Spring Boot app in Azure Spring Cloud](quickstart.md).
+- A deployed Azure Spring Apps service instance and at least one application already created in your service instance. For more information, see [Quickstart: Deploy your first Spring Boot app in Azure Spring Apps](quickstart.md).
## Monitor app lifecycle events triggered by users in Azure Activity logs
For example, when you restart your app, you can find the affected instances from
### Monitor unplanned app lifecycle events
-When your app is restarted because of unplanned events, your Azure Spring Cloud instance will show a status of **degraded** in the **Resource health** section of the Azure portal. Degraded means that your resource detected a potential loss in performance, although it's still available for use. Examples of unplanned events include app crash, health check failure, and system outage.
+When your app is restarted because of unplanned events, your Azure Spring Apps instance will show a status of **degraded** in the **Resource health** section of the Azure portal. Degraded means that your resource detected a potential loss in performance, although it's still available for use. Examples of unplanned events include app crash, health check failure, and system outage.
:::image type="content" source="media/monitor-app-lifecycle-events/resource-health-detail.png" alt-text="Screenshot of the resource health pane.":::
Your app may be restarted during platform maintenance. You can receive a mainten
:::image type="content" source="media/monitor-app-lifecycle-events/planned-maintenance-notification.png" lightbox="media/monitor-app-lifecycle-events/planned-maintenance-notification.png" alt-text="Screenshot of Azure portal example notification for planned maintenance.":::
-When platform maintenance happens, your Azure Spring Cloud instance will also show a status of **degraded**. If restarting is needed during platform maintenance, Azure Spring Cloud will perform a rolling update to incrementally update your applications. Rolling updates are designed to update your workloads without downtime. You can find the latest status in the health history page.
+When platform maintenance happens, your Azure Spring Apps instance will also show a status of **degraded**. If restarting is needed during platform maintenance, Azure Spring Apps will perform a rolling update to incrementally update your applications. Rolling updates are designed to update your workloads without downtime. You can find the latest status in the health history page.
:::image type="content" source="media/monitor-app-lifecycle-events/planned-maintenance-in-progress.png" lightbox="media/monitor-app-lifecycle-events/planned-maintenance-in-progress.png" alt-text="Screenshot of Azure portal example log for planned maintenance in progress."::: >[!NOTE]
-> Currently, Azure Spring Cloud performs one regular planned maintenance to upgrade the underlying Kubernetes version every 2-4 months. For a detailed maintenance timeline, check the notifications on the Azure Service Health page.
+> Currently, Azure Spring Apps performs one regular planned maintenance to upgrade the underlying Kubernetes version every 2-4 months. For a detailed maintenance timeline, check the notifications on the Azure Service Health page.
## Set up alerts
The following steps show you how to create an alert rule for planned maintenance
## Next steps
-[Self-diagnose and solve problems in Azure Spring Cloud](how-to-self-diagnose-solve.md)
+[Self-diagnose and solve problems in Azure Spring Apps](how-to-self-diagnose-solve.md)
spring-cloud Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/overview.md
Title: Introduction to Azure Spring Cloud
-description: Learn the features and benefits of Azure Spring Cloud to deploy and manage Java Spring applications in Azure.
+ Title: Introduction to Azure Spring Apps
+description: Learn the features and benefits of Azure Spring Apps to deploy and manage Java Spring applications in Azure.
Last updated 03/09/2021 -+ #Customer intent: As an Azure Cloud user, I want to deploy, run, and monitor Spring applications.
-# What is Azure Spring Cloud?
+# What is Azure Spring Apps?
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-Azure Spring Cloud makes it easy to deploy Spring Boot applications to Azure without any code changes. The service manages the infrastructure of Spring Cloud applications so developers can focus on their code. Azure Spring Cloud provides lifecycle management using comprehensive monitoring and diagnostics, configuration management, service discovery, CI/CD integration, blue-green deployments, and more.
+Azure Spring Apps makes it easy to deploy Spring Boot applications to Azure without any code changes. The service manages the infrastructure of Spring applications so developers can focus on their code. Azure Spring Apps provides lifecycle management using comprehensive monitoring and diagnostics, configuration management, service discovery, CI/CD integration, blue-green deployments, and more.
-The following video shows an app composed of Spring Boot applications running on Azure using Azure Spring Cloud.
+The following video shows an app composed of Spring Boot applications running on Azure using Azure Spring Apps.
<br> > [!VIDEO https://www.youtube.com/embed/1jOXMFc1oRg]
-## Why use Azure Spring Cloud?
+## Why use Azure Spring Apps?
-Deployment of applications to Azure Spring Cloud has many benefits. You can:
+Deployment of applications to Azure Spring Apps has many benefits. You can:
* Efficiently migrate existing Spring apps and manage cloud scaling and costs. * Modernize apps with Spring Cloud patterns to improve agility and speed of delivery.
Deployment of applications to Azure Spring Cloud has many benefits. You can:
* Develop and deploy rapidly without containerization dependencies. * Monitor production workloads efficiently and effortlessly.
-Azure Spring Cloud supports both Java [Spring Boot](https://spring.io/projects/spring-boot) and ASP.NET Core [Steeltoe](https://steeltoe.io/) apps. Steeltoe support is currently offered as a public preview. Public preview offerings let you experiment with new features prior to their official release.
+Azure Spring Apps supports both Java [Spring Boot](https://spring.io/projects/spring-boot) and ASP.NET Core [Steeltoe](https://steeltoe.io/) apps. Steeltoe support is currently offered as a public preview. Public preview offerings let you experiment with new features prior to their official release.
## Service overview
-As part of the Azure ecosystem, Azure Spring Cloud allows easy binding to other Azure services including storage, databases, monitoring, and more.
+As part of the Azure ecosystem, Azure Spring Apps allows easy binding to other Azure services including storage, databases, monitoring, and more.
-![Azure Spring Cloud overview](media/spring-cloud-principles/azure-spring-cloud-overview.png)
+![Azure Spring Apps overview](media/spring-cloud-principles/azure-spring-cloud-overview.png)
-* Azure Spring Cloud is a fully managed service for Spring Boot apps that lets you focus on building and running apps without the hassle of managing infrastructure.
+* Azure Spring Apps is a fully managed service for Spring Boot apps that lets you focus on building and running apps without the hassle of managing infrastructure.
-* Simply deploy your JARs or code for your Spring Boot app or Zip for your Steeltoe app, and Azure Spring Cloud will automatically wire your apps with Spring service runtime and built-in app lifecycle.
+* Simply deploy your JARs or code for your Spring Boot app or Zip for your Steeltoe app, and Azure Spring Apps will automatically wire your apps with Spring service runtime and built-in app lifecycle.
* Monitoring is simple. After deployment, you can monitor app performance, fix errors, and rapidly improve applications.
* Full integration with Azure's ecosystems and services.
-* Azure Spring Cloud is enterprise ready with fully managed infrastructure, built-in lifecycle management, and ease of monitoring.
+* Azure Spring Apps is enterprise ready with fully managed infrastructure, built-in lifecycle management, and ease of monitoring.
-### Get started with Azure Spring Cloud
+### Get started with Azure Spring Apps
The following quickstarts will help you get started:
The following quickstarts will help you get started:
The following quickstarts apply to Basic/Standard tier only. For Enterprise tier quickstarts, see the next section.
-* [Provision an Azure Spring Cloud service instance](quickstart-provision-service-instance.md)
+* [Provision an Azure Spring Apps service instance](quickstart-provision-service-instance.md)
* [Set up the configuration server](quickstart-setup-config-server.md) * [Build and deploy apps](quickstart-deploy-apps.md) ## Enterprise Tier overview
-Based on our learnings from customer engagements, we built Azure Spring Cloud Enterprise tier with commercially supported Spring runtime components to help enterprise customers to ship faster and unlock Spring's full potential, including feature parity and region parity with Standard tier.
+Based on our learnings from customer engagements, we built Azure Spring Apps Enterprise tier with commercially supported Spring runtime components to help enterprise customers to ship faster and unlock Spring's full potential, including feature parity and region parity with Standard tier.
-The following video introduces Azure Spring Cloud Enterprise tier.
+The following video introduces Azure Spring Apps Enterprise tier.
<br>
The following video introduces Azure Spring Cloud Enterprise tier.
### Deploy and manage Spring and polyglot applications
-The fully managed VMware Tanzu® Build Service™ in Azure Spring Cloud Enterprise tier automates container creation, management and governance at enterprise scale using open-source [Cloud Native Buildpacks](https://buildpacks.io/) and commercial [VMware Tanzu® Buildpacks](https://docs.pivotal.io/tanzu-buildpacks/). Tanzu Build Service offers a higher-level abstraction for building apps and provides a balance of control that reduces the operational burden on developers and supports enterprise IT operators who manage applications at scale. You can configure what Buildpacks to apply and build Spring applications and polyglot applications that run alongside Spring applications on Azure Spring Cloud.
+The fully managed VMware Tanzu® Build Service™ in Azure Spring Apps Enterprise tier automates container creation, management and governance at enterprise scale using open-source [Cloud Native Buildpacks](https://buildpacks.io/) and commercial [VMware Tanzu® Buildpacks](https://docs.pivotal.io/tanzu-buildpacks/). Tanzu Build Service offers a higher-level abstraction for building apps and provides a balance of control that reduces the operational burden on developers and supports enterprise IT operators who manage applications at scale. You can configure what Buildpacks to apply and build Spring applications and polyglot applications that run alongside Spring applications on Azure Spring Apps.
Tanzu Buildpacks makes it easier to build Spring, Java, NodeJS, Python, Go and .NET Core applications and configure application performance monitoring agents such as Application Insights, New Relic, Dynatrace, AppDynamics, and Elastic.
Tanzu Buildpacks makes it easier to build Spring, Java, NodeJS, Python, Go and .
You can manage and discover request routes and APIs exposed by applications using the fully managed Spring Cloud Gateway for VMware Tanzu® and API portal for VMware Tanzu®.
-Spring Cloud Gateway for Tanzu effectively routes diverse client requests to applications in Azure Spring Cloud, Azure, and on-premises, and addresses cross-cutting considerations for applications behind the Gateway such as securing, routing, rate limiting, caching, monitoring, resiliency and hiding applications. You can configure:
+Spring Cloud Gateway for Tanzu effectively routes diverse client requests to applications in Azure Spring Apps, Azure, and on-premises, and addresses cross-cutting considerations for applications behind the Gateway such as securing, routing, rate limiting, caching, monitoring, resiliency and hiding applications. You can configure:
* Single sign-on integration with your preferred identity provider without any additional code or dependencies. * Dynamic routing rules to applications without any application redeployment.
API Portal for VMware Tanzu provides API consumers with the ability to find and
### Use flexible and configurable VMware Tanzu components
-With Azure Spring Cloud Enterprise tier, you can use fully managed VMware Tanzu components on Azure. You can select which VMware Tanzu components you want to use in your environment during Enterprise instance creation. Tanzu Build Service, Spring Cloud Gateway for Tanzu, API Portal for VMware Tanzu, Application Configuration Service for VMware Tanzu®, and VMware Tanzu® Service Registry are available during public preview.
+With Azure Spring Apps Enterprise tier, you can use fully managed VMware Tanzu components on Azure. You can select which VMware Tanzu components you want to use in your environment during Enterprise instance creation. Tanzu Build Service, Spring Cloud Gateway for Tanzu, API Portal for VMware Tanzu, Application Configuration Service for VMware Tanzu®, and VMware Tanzu® Service Registry are available during public preview.
VMware Tanzu components deliver increased value so you can: * Grow your enterprise grade application portfolio from a few applications to thousands with end-to-end observability while delegating operational complexity to Microsoft and VMware.
-* Lift and shift Spring applications across Azure Spring Cloud and any other compute environment.
+* Lift and shift Spring applications across Azure Spring Apps and any other compute environment.
* Control your build dependencies, deploy polyglot applications, and deploy Spring Cloud middleware components as needed.
-Microsoft and VMware will continue to add more enterprise-grade features, including Tanzu components such as Application Live View for VMware Tanzu®, Application Accelerator for VMware Tanzu®, and Spring Cloud Data Flow for VMware Tanzu®, although the Azure Spring Cloud Enterprise tier roadmap is not confirmed and is subject to change.
+Microsoft and VMware will continue to add more enterprise-grade features, including Tanzu components such as Application Live View for VMware Tanzu®, Application Accelerator for VMware Tanzu®, and Spring Cloud Data Flow for VMware Tanzu®, although the Azure Spring Apps Enterprise tier roadmap is not confirmed and is subject to change.
### Unlock Spring's full potential with Long-Term Support (LTS)
-Azure Spring Cloud Enterprise tier includes VMware Spring Runtime Support for application development and deployments. This support gives you access to Spring experts, enabling you to unlock the full potential of the Spring ecosystem to develop and deploy applications faster.
+Azure Spring Apps Enterprise tier includes VMware Spring Runtime Support for application development and deployments. This support gives you access to Spring experts, enabling you to unlock the full potential of the Spring ecosystem to develop and deploy applications faster.
-Typically, open-source Spring project minor releases are supported for a minimum of 12 months from the date of initial release. In Azure Spring Cloud Enterprise, Spring project minor releases will receive commercial support for a minimum of 24 months from the date of initial release through the VMware Spring Runtime Support entitlement. This extended support ensures the security and stability of your Spring application portfolio even after the open source end of life dates. For more information, see [Spring Boot support](https://spring.io/projects/spring-boot#support).
+Typically, open-source Spring project minor releases are supported for a minimum of 12 months from the date of initial release. In Azure Spring Apps Enterprise, Spring project minor releases will receive commercial support for a minimum of 24 months from the date of initial release through the VMware Spring Runtime Support entitlement. This extended support ensures the security and stability of your Spring application portfolio even after the open source end of life dates. For more information, see [Spring Boot support](https://spring.io/projects/spring-boot#support).
### Fully integrate into the Azure and Java ecosystems
-Azure Spring Cloud, including Enterprise tier, runs on Azure in a fully managed environment. You get all the benefits of Azure and the Java ecosystem, and the experience is familiar and intuitive, as shown in the following table:
+Azure Spring Apps, including Enterprise tier, runs on Azure in a fully managed environment. You get all the benefits of Azure and the Java ecosystem, and the experience is familiar and intuitive, as shown in the following table:
| Best practice | Ecosystem |
|--|--|
After you create your Enterprise tier service instance and deploy your applicati
The following quickstarts will help you get started using the Enterprise tier: * [View Enterprise Tier offering](how-to-enterprise-marketplace-offer.md)
-* [Provision an Azure Spring Cloud instance using the Enterprise tier](quickstart-provision-service-instance-enterprise.md)
+* [Provision an Azure Spring Apps instance using the Enterprise tier](quickstart-provision-service-instance-enterprise.md)
* [Set up Application Configuration Service for Tanzu](quickstart-setup-application-configuration-service-enterprise.md) * [Build and deploy applications](quickstart-deploy-apps-enterprise.md)
-Most of the Azure Spring Cloud documentation applies to all tiers. Some articles apply only to Enterprise tier or only to Basic/Standard tier, as indicated at the beginning of each article.
+Most of the Azure Spring Apps documentation applies to all tiers. Some articles apply only to Enterprise tier or only to Basic/Standard tier, as indicated at the beginning of each article.
As a quick reference, the articles listed above and the articles in the following list apply to Enterprise tier only, or contain significant content that applies only to Enterprise tier:
As a quick reference, the articles listed above and the articles in the followin
## Next steps > [!div class="nextstepaction"]
-> [Spring Cloud quickstart](quickstart.md)
+> [Quickstart: Deploy your first application to Azure Spring Apps](quickstart.md)
-Samples are available on GitHub. See [Azure Spring Cloud Samples](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples/tree/master/).
+Samples are available on GitHub. See [Azure Spring Apps Samples](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples/tree/master/).
spring-cloud Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/policy-reference.md
Title: Built-in policy definitions for Azure Spring Cloud
-description: Lists Azure Policy built-in policy definitions for Azure Spring Cloud. These built-in policy definitions provide common approaches to managing your Azure resources.
+ Title: Built-in policy definitions for Azure Spring Apps
+description: Lists Azure Policy built-in policy definitions for Azure Spring Apps. These built-in policy definitions provide common approaches to managing your Azure resources.
Last updated 05/11/2022 -+
-# Azure Policy built-in definitions for Azure Spring Cloud
+# Azure Policy built-in definitions for Azure Spring Apps
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Java ✔️ C# **This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier This page is an index of [Azure Policy](../governance/policy/overview.md) built-in policy
-definitions for Azure Spring Cloud. For additional Azure Policy built-ins for other services, see
+definitions for Azure Spring Apps. For additional Azure Policy built-ins for other services, see
[Azure Policy built-in definitions](../governance/policy/samples/built-in-policies.md). The name of each built-in policy definition links to the policy definition in the Azure portal. Use the link in the **Version** column to view the source on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
-## Azure Spring Cloud
+## Azure Spring Apps
[!INCLUDE [azure-policy-reference-service-spring-cloud](../../includes/policy/reference/byrp/microsoft.appplatform.md)]
spring-cloud Principles Microservice Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/principles-microservice-apps.md
Title: Java and base OS for Azure Spring Cloud apps
-description: Principles for maintaining healthy Java and base operating system for Azure Spring Cloud apps
+ Title: Java and base OS for Azure Spring Apps apps
+description: Principles for maintaining healthy Java and base operating system for Azure Spring Apps apps
Last updated 10/12/2021-+
-# Java and Base OS for Azure Spring Cloud apps
+# Java and Base OS for Azure Spring Apps apps
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Java
-The following are principles for maintaining healthy Java and base operating system for Azure Spring Cloud apps.
+The following are principles for maintaining healthy Java and base operating system for Azure Spring Apps apps.
## Principles for healthy Java and Base OS * Shall be the same base operating system across tiers - Basic | Standard | Premium.
- * Currently, apps on Azure Spring Cloud use a mix of Debian 10 and Ubuntu 18.04.
+ * Currently, apps on Azure Spring Apps use a mix of Debian 10 and Ubuntu 18.04.
* VMware Tanzu® Build Service™ uses Ubuntu 18.04. * Shall be the same base operating system regardless of deployment starting points - source | JAR
- * Currently, apps on Azure Spring Cloud use a mix of Debian 10 and Ubuntu 18.04.
+ * Currently, apps on Azure Spring Apps use a mix of Debian 10 and Ubuntu 18.04.
* Base operating system shall be free of security vulnerabilities.
The following are principles for maintaining healthy Java and base operating sys
* Shall use JRE-headless.
- * Currently, apps on Azure Spring Cloud use JDK. JRE-headless is a smaller image.
+ * Currently, apps on Azure Spring Apps use JDK. JRE-headless is a smaller image.
* Shall use the most recent builds of Java.
- * Currently, apps on Azure Spring Cloud use Java 8 build 242. This is an outdated build.
+ * Currently, apps on Azure Spring Apps use Java 8 build 242. This is an outdated build.
-Azul Systems will continuously scan for changes to base operating systems and keep the last built images up to date. Azure Spring Cloud looks for changes to images and continuously updates them across deployments.
+Azul Systems will continuously scan for changes to base operating systems and keep the last built images up to date. Azure Spring Apps looks for changes to images and continuously updates them across deployments.
-## FAQ for Azure Spring Cloud
+## FAQ for Azure Spring Apps
* Which versions of Java are supported? Major version and build number.
Azul Systems will continuously scan for changes to base operating systems and ke
* Open a support ticket with Azure Support.
-## Default deployment on Azure Spring Cloud
+## Default deployment on Azure Spring Apps
> ![Default deployment](media/spring-cloud-principles/spring-cloud-default-deployment.png) ## Next steps
-* [Quickstart: Deploy your first Spring Boot app in Azure Spring Cloud](./quickstart.md)
+* [Quickstart: Deploy your first Spring Boot app in Azure Spring Apps](./quickstart.md)
* [Java long-term support for Azure and Azure Stack](/azure/developer/java/fundamentals/java-support-on-azure)
spring-cloud Quickstart Deploy Apps Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quickstart-deploy-apps-enterprise.md
Title: "Quickstart - Build and deploy apps to Azure Spring Cloud Enterprise tier"
-description: Describes app deployment to Azure Spring Cloud Enterprise tier.
+ Title: "Quickstart - Build and deploy apps to Azure Spring Apps Enterprise tier"
+description: Describes app deployment to Azure Spring Apps Enterprise tier.
Last updated 02/09/2022-+
-# Quickstart: Build and deploy apps to Azure Spring Cloud using the Enterprise tier
+# Quickstart: Build and deploy apps to Azure Spring Apps using the Enterprise tier
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ❌ Basic/Standard tier ✔️ Enterprise tier
-This quickstart shows you how to build and deploy applications to Azure Spring Cloud using the Enterprise tier.
+This quickstart shows you how to build and deploy applications to Azure Spring Apps using the Enterprise tier.
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- An already provisioned Azure Spring Cloud Enterprise tier instance. For more information, see [Quickstart: Provision an Azure Spring Cloud service using the Enterprise tier](quickstart-provision-service-instance-enterprise.md).
+- An already provisioned Azure Spring Apps Enterprise tier instance. For more information, see [Quickstart: Provision an Azure Spring Apps service using the Enterprise tier](quickstart-provision-service-instance-enterprise.md).
- [Apache Maven](https://maven.apache.org/download.cgi) - [The Azure CLI version 2.0.67 or higher](/cli/azure/install-azure-cli). - [!INCLUDE [install-enterprise-extension](includes/install-enterprise-extension.md)] ## Create and configure apps
-To create apps on Azure Spring Cloud, follow these steps:
+To create apps on Azure Spring Apps, follow these steps:
1. To set the CLI defaults, use the following commands. Be sure to replace the placeholders with your own values.
To create apps on Azure Spring Cloud, follow these steps:
1. To create the two core applications for PetClinic, `api-gateway` and `customers-service`, use the following commands: ```azurecli
- az spring-cloud app create --name api-gateway --instance-count 1 --memory 2Gi --assign-endpoint
- az spring-cloud app create --name customers-service --instance-count 1 --memory 2Gi
+ az spring app create --name api-gateway --instance-count 1 --memory 2Gi --assign-endpoint
+ az spring app create --name customers-service --instance-count 1 --memory 2Gi
``` ## Bind apps to Application Configuration Service for Tanzu and Tanzu Service Registry
To bind apps to Application Configuration Service for VMware Tanzu®, follow the
1. Select **App binding**, then select **Bind app**. 1. Choose one app in the dropdown and select **Apply** to bind the application to Application Configuration Service for Tanzu.
- ![Screenshot of Azure portal Azure Spring Cloud with Application Configuration Service page and 'App binding' section with 'Bind app' dialog showing.](./media/enterprise/getting-started-enterprise/config-service-app-bind-dropdown.png)
+ :::image type="content" source="media/enterprise/getting-started-enterprise/config-service-app-bind-dropdown.png" alt-text="Screenshot of Azure portal Azure Spring Apps with Application Configuration Service page and 'App binding' section with 'Bind app' dialog showing.":::
A list under **App name** shows the apps bound with Application Configuration Service for Tanzu, as shown in the following screenshot:
-![Screenshot of Azure portal Azure Spring Cloud with Application Configuration Service page and 'App binding' section with app list showing.](./media/enterprise/getting-started-enterprise/config-service-app-bind.png)
To bind apps to VMware Tanzu® Service Registry, follow these steps.
To bind apps to VMware Tanzu® Service Registry, follow these steps.
1. Select **App binding**, then select **Bind app**. 1. Choose one app in the dropdown, and then select **Apply** to bind the application to Tanzu Service Registry.
- :::image type="content" source="media/enterprise/getting-started-enterprise/service-reg-app-bind-dropdown.png" alt-text="Screenshot of Azure portal Azure Spring Cloud with Service Registry page and 'Bind app' dialog showing.":::
+ :::image type="content" source="media/enterprise/getting-started-enterprise/service-reg-app-bind-dropdown.png" alt-text="Screenshot of Azure portal Azure Spring Apps with Service Registry page and 'Bind app' dialog showing.":::
A list under **App name** shows the apps bound with Tanzu Service Registry, as shown in the following screenshot: ### [Azure CLI](#tab/azure-cli) To bind apps to Application Configuration Service for VMware Tanzu® and VMware Tanzu® Service Registry, use the following commands. ```azurecli
-az spring-cloud application-configuration-service bind --app api-gateway
-az spring-cloud application-configuration-service bind --app customers-service
-az spring-cloud service-registry bind --app api-gateway
-az spring-cloud service-registry bind --app customers-service
+az spring application-configuration-service bind --app api-gateway
+az spring application-configuration-service bind --app customers-service
+az spring service-registry bind --app api-gateway
+az spring service-registry bind --app customers-service
```
To build locally, use the following steps:
1. Deploy the JAR files built in the previous step using the following commands: ```azurecli
- az spring-cloud app deploy \
+ az spring app deploy \
--name api-gateway \ --artifact-path spring-petclinic-api-gateway/target/spring-petclinic-api-gateway-2.3.6.jar \ --config-file-patterns api-gateway
- az spring-cloud app deploy \
+ az spring app deploy \
--name customers-service \ --artifact-path spring-petclinic-customers-service/target/spring-petclinic-customers-service-2.3.6.jar \ --config-file-patterns customers-service
To build locally, use the following steps:
1. Query the application status after deployment by using the following command: ```azurecli
- az spring-cloud app list --output table
+ az spring app list --output table
``` This command produces output similar to the following example: ```output
- Name Location ResourceGroup Public Url Production Deployment Provisioning State CPU Memory Running Instance Registered Instance Persistent Storage Bind Service Registry Bind Application Configuration Service
- -- - -- -- -- -- -- -- -
- api-gateway eastus <resource group> https://<service_name>-api-gateway.asc-test.net default Succeeded 1 2Gi 1/1 1/1 - True True
- customers-service eastus <resource group> default Succeeded 1 2Gi 1/1 1/1 - True True
+ Name Location ResourceGroup Public Url Production Deployment Provisioning State CPU Memory Running Instance Registered Instance Persistent Storage Bind Service Registry Bind Application Configuration Service
+ -- - - - -- -- -- -- -- -- -
+ api-gateway eastus <resource group> https://<service_name>-api-gateway.azuremicroservices.io default Succeeded 1 2Gi 1/1 1/1 - True True
+ customers-service eastus <resource group> default Succeeded 1 2Gi 1/1 1/1 - True True
``` ### Verify the applications
spring-cloud Quickstart Deploy Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quickstart-deploy-apps.md
Title: "Quickstart - Build and deploy apps to Azure Spring Cloud"
-description: Describes app deployment to Azure Spring Cloud.
+ Title: "Quickstart - Build and deploy apps to Azure Spring Apps"
+description: Describes app deployment to Azure Spring Apps.
Last updated 11/15/2021-+ zone_pivot_groups: programming-languages-spring-cloud
-# Quickstart: Build and deploy apps to Azure Spring Cloud
+# Quickstart: Build and deploy apps to Azure Spring Apps
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Basic/Standard tier ❌ Enterprise tier ::: zone pivot="programming-language-csharp"
-In this quickstart, you build and deploy Spring applications to Azure Spring Cloud using the Azure CLI.
+In this quickstart, you build and deploy Spring applications to Azure Spring Apps using the Azure CLI.
## Prerequisites * Complete the previous quickstarts in this series:
- * [Provision Azure Spring Cloud service](./quickstart-provision-service-instance.md).
- * [Set up Azure Spring Cloud configuration server](./quickstart-setup-config-server.md).
+ * [Provision Azure Spring Apps service](./quickstart-provision-service-instance.md).
+ * [Set up Azure Spring Apps configuration server](./quickstart-setup-config-server.md).
## Download the sample app
If you've been using the Azure Cloud Shell up to this point, switch to a local c
## Deploy PlanetWeatherProvider
-1. Create an app for the PlanetWeatherProvider project in your Azure Spring Cloud instance.
+1. Create an app for the PlanetWeatherProvider project in your Azure Spring Apps instance.
```azurecli
- az spring-cloud app create --name planet-weather-provider --runtime-version NetCore_31
+ az spring app create --name planet-weather-provider --runtime-version NetCore_31
``` To enable automatic service registration, you have given the app the same name as the value of `spring.application.name` in the project's *appsettings.json* file:
If you've been using the Azure Cloud Shell up to this point, switch to a local c
Make sure that the command prompt is in the project folder before running the following command. ```azurecli
- az spring-cloud app deploy -n planet-weather-provider --runtime-version NetCore_31 --main-entry Microsoft.Azure.SpringCloud.Sample.PlanetWeatherProvider.dll --artifact-path ./publish-deploy-planet.zip
+ az spring app deploy -n planet-weather-provider --runtime-version NetCore_31 --main-entry Microsoft.Azure.SpringCloud.Sample.PlanetWeatherProvider.dll --artifact-path ./publish-deploy-planet.zip
``` The `--main-entry` option specifies the relative path from the *.zip* file's root folder to the *.dll* file that contains the application's entry point. After the service uploads the *.zip* file, it extracts all the files and folders and tries to execute the entry point in the specified *.dll* file.
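For illustration only (the *app.zip* name and the *publish/* folder below are hypothetical, not taken from the sample), the value of `--main-entry` is always interpreted relative to the root of the *.zip* file:

```azurecli
# If the entry-point .dll sits at the root of the .zip file, --main-entry is just the
# file name, as in the command above. If the publish output were zipped inside a
# subfolder instead (hypothetical layout):
#   app.zip
#   └── publish/
#       └── Microsoft.Azure.SpringCloud.Sample.PlanetWeatherProvider.dll
# then --main-entry would include that folder:
az spring app deploy \
    --name planet-weather-provider \
    --runtime-version NetCore_31 \
    --main-entry publish/Microsoft.Azure.SpringCloud.Sample.PlanetWeatherProvider.dll \
    --artifact-path ./app.zip
```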
If you've been using the Azure Cloud Shell up to this point, switch to a local c
## Deploy SolarSystemWeather
-1. Create another app in your Azure Spring Cloud instance, this time for the SolarSystemWeather project:
+1. Create another app in your Azure Spring Apps instance, this time for the SolarSystemWeather project:
```azurecli
- az spring-cloud app create --name solar-system-weather --runtime-version NetCore_31
+ az spring app create --name solar-system-weather --runtime-version NetCore_31
``` `solar-system-weather` is the name that is specified in the `SolarSystemWeather` project's *appsettings.json* file.
If you've been using the Azure Cloud Shell up to this point, switch to a local c
1. Deploy to Azure. ```azurecli
- az spring-cloud app deploy -n solar-system-weather --runtime-version NetCore_31 --main-entry Microsoft.Azure.SpringCloud.Sample.SolarSystemWeather.dll --artifact-path ./publish-deploy-solar.zip
+ az spring app deploy -n solar-system-weather --runtime-version NetCore_31 --main-entry Microsoft.Azure.SpringCloud.Sample.SolarSystemWeather.dll --artifact-path ./publish-deploy-solar.zip
``` This command may take several minutes to run.
To test the application, send an HTTP GET request to the `solar-system-weather`
1. To assign the endpoint, run the following command. ```azurecli
- az spring-cloud app update -n solar-system-weather --assign-endpoint true
+ az spring app update -n solar-system-weather --assign-endpoint true
``` 1. To get the URL of the endpoint, run the following command.
To test the application, send an HTTP GET request to the `solar-system-weather`
Windows: ```azurecli
- az spring-cloud app show -n solar-system-weather -o table
+ az spring app show -n solar-system-weather -o table
``` Linux: ```azurecli
- az spring-cloud app show --name solar-system-weather | grep url
+ az spring app show --name solar-system-weather | grep url
``` ## Test the application
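As a hedged sketch of this test (substitute `<url>` with the public URL returned by `az spring app show` in the previous step; the */weatherforecast* route is an assumption used for illustration):

```azurecli
# Sends an HTTP GET request to the solar-system-weather app's public endpoint.
curl https://<url>/weatherforecast
```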
This response shows that both Spring apps are working. The `SolarSystemWeather`
::: zone-end ::: zone pivot="programming-language-java"
-This document explains how to build and deploy Spring applications to Azure Spring Cloud using:
+This document explains how to build and deploy Spring applications to Azure Spring Apps using:
* Azure CLI * Maven Plugin * Intellij
-Before deployment using Azure CLI or Maven, complete the examples that [provision an instance of Azure Spring Cloud](./quickstart-provision-service-instance.md) and [set up the config server](./quickstart-setup-config-server.md). For enterprise tier, please follow [set up Application Configuration Service](./how-to-enterprise-application-configuration-service.md).
+Before deploying with the Azure CLI or Maven, complete the examples that [provision an instance of Azure Spring Apps](./quickstart-provision-service-instance.md) and [set up the config server](./quickstart-setup-config-server.md). For the Enterprise tier, follow [set up Application Configuration Service](./how-to-enterprise-application-configuration-service.md).
## Prerequisites * [Install JDK 8 or JDK 11](/azure/developer/java/fundamentals/java-jdk-install) * [Sign up for an Azure subscription](https://azure.microsoft.com/free/)
-* (Optional) [Install the Azure CLI version 2.0.67 or higher](/cli/azure/install-azure-cli) and install the Azure Spring Cloud extension with command: `az extension add --name spring-cloud`
+* (Optional) [Install the Azure CLI version 2.0.67 or higher](/cli/azure/install-azure-cli) and install the Azure Spring Apps extension with command: `az extension add --name spring`
* (Optional) [Install the Azure Toolkit for IntelliJ](https://plugins.jetbrains.com/plugin/8053-azure-toolkit-for-intellij/) and [sign in](/azure/developer/java/toolkit-for-intellij/create-hello-world-web-app#installation-and-sign-in) ## Deployment procedures
Before deployment using Azure CLI or Maven, complete the examples that [provisio
Compiling the project takes 5-10 minutes. Once completed, you should have individual JAR files for each service in their respective folders.
-## Create and deploy apps on Azure Spring Cloud
+## Create and deploy apps on Azure Spring Apps
1. If you didn't run the following commands in the previous quickstarts, set the CLI defaults.
Compiling the project takes 5-10 minutes. Once completed, you should have indivi
1. Create the 2 core Spring applications for PetClinic: API gateway and customers-service. ```azurecli
- az spring-cloud app create --name api-gateway --instance-count 1 --memory 2Gi --assign-endpoint
- az spring-cloud app create --name customers-service --instance-count 1 --memory 2Gi
+ az spring app create --name api-gateway --instance-count 1 --memory 2Gi --assign-endpoint
+ az spring app create --name customers-service --instance-count 1 --memory 2Gi
``` 1. Deploy the JAR files built in the previous step. ```azurecli
- az spring-cloud app deploy \
+ az spring app deploy \
--name api-gateway \ --jar-path spring-petclinic-api-gateway/target/spring-petclinic-api-gateway-2.5.1.jar \ --jvm-options="-Xms2048m -Xmx2048m"
- az spring-cloud app deploy \
+ az spring app deploy \
--name customers-service \ --jar-path spring-petclinic-customers-service/target/spring-petclinic-customers-service-2.5.1.jar \ --jvm-options="-Xms2048m -Xmx2048m"
Compiling the project takes 5-10 minutes. Once completed, you should have indivi
1. Query app status after deployments with the following command. ```azurecli
- az spring-cloud app list --output table
+ az spring app list --output table
``` This command produces output similar to the following example:
Access the app gateway and customers service from browser with the **Public Url*
![Access petclinic customers service](media/build-and-deploy/access-customers-service.png) > [!TIP]
-> To troubleshot deployments, you can use the following command to get logs streaming in real time whenever the app is running `az spring-cloud app logs --name <app name> -f`.
+> To troubleshoot deployments, you can use the following command to stream logs in real time while the app is running: `az spring app logs --name <app name> -f`.
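For example, to stream logs from the `customers-service` app deployed above (a minimal usage sketch of the same command):

```azurecli
az spring app logs --name customers-service -f
```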
## Deploy extra apps To get the PetClinic app functioning with all features, such as Admin Server, Visits, and Veterinarians, deploy the other apps with the following commands: ```azurecli
-az spring-cloud app create --name admin-server --instance-count 1 --memory 2Gi --assign-endpoint
-az spring-cloud app create --name vets-service --instance-count 1 --memory 2Gi
-az spring-cloud app create --name visits-service --instance-count 1 --memory 2Gi
-az spring-cloud app deploy --name admin-server --jar-path spring-petclinic-admin-server/target/spring-petclinic-admin-server-2.5.1.jar --jvm-options="-Xms2048m -Xmx2048m"
-az spring-cloud app deploy --name vets-service --jar-path spring-petclinic-vets-service/target/spring-petclinic-vets-service-2.5.1.jar --jvm-options="-Xms2048m -Xmx2048m"
-az spring-cloud app deploy --name visits-service --jar-path spring-petclinic-visits-service/target/spring-petclinic-visits-service-2.5.1.jar --jvm-options="-Xms2048m -Xmx2048m"
+az spring app create --name admin-server --instance-count 1 --memory 2Gi --assign-endpoint
+az spring app create --name vets-service --instance-count 1 --memory 2Gi
+az spring app create --name visits-service --instance-count 1 --memory 2Gi
+az spring app deploy --name admin-server --jar-path spring-petclinic-admin-server/target/spring-petclinic-admin-server-2.5.1.jar --jvm-options="-Xms2048m -Xmx2048m"
+az spring app deploy --name vets-service --jar-path spring-petclinic-vets-service/target/spring-petclinic-vets-service-2.5.1.jar --jvm-options="-Xms2048m -Xmx2048m"
+az spring app deploy --name visits-service --jar-path spring-petclinic-visits-service/target/spring-petclinic-visits-service-2.5.1.jar --jvm-options="-Xms2048m -Xmx2048m"
``` #### [Maven](#tab/Maven)
az spring-cloud app deploy --name visits-service --jar-path spring-petclinic-vis
Compiling the project takes 5-10 minutes. Once completed, you should have individual JAR files for each service in their respective folders.
-## Generate configurations and deploy to the Azure Spring Cloud
+## Generate configurations and deploy to the Azure Spring Apps
1. Generate configurations by running the following command in the root folder of Pet Clinic, which contains the parent POM. If you've already signed in with the Azure CLI, the command automatically picks up your credentials. Otherwise, it prompts you to sign in. For more information, see our [wiki page](https://github.com/microsoft/azure-maven-plugins/wiki/Authentication). ```azurecli
- mvn com.microsoft.azure:azure-spring-cloud-maven-plugin:1.7.0:config
+ mvn com.microsoft.azure:azure-spring-apps-maven-plugin:1.10.0:config
``` You will be asked to select: * **Modules:** Select `api-gateway` and `customers-service`.
- * **Subscription:** This is your subscription used to create an Azure Spring Cloud instance.
- * **Service Instance:** This is the name of your Azure Spring Cloud instance.
+ * **Subscription:** This is the subscription used to create the Azure Spring Apps instance.
+ * **Service Instance:** This is the name of your Azure Spring Apps instance.
* **Public endpoint:** In the list of provided projects, enter the number that corresponds with `api-gateway`. This gives it public access. 1. Verify the `appName` elements in the POM files are correct:
Compiling the project takes 5 -10 minutes. Once completed, you should have indiv
<plugins> <plugin> <groupId>com.microsoft.azure</groupId>
- <artifactId>azure-spring-cloud-maven-plugin</artifactId>
- <version>1.7.0</version>
+ <artifactId>azure-spring-apps-maven-plugin</artifactId>
+ <version>1.10.0</version>
<configuration> <subscriptionId>xxxxxxxxx-xxxx-xxxx-xxxxxxxxxxxx</subscriptionId> <clusterName>v-spr-cld</clusterName>
Compiling the project takes 5 -10 minutes. Once completed, you should have indiv
1. The POM now contains the plugin dependencies and configurations. Deploy the apps using the following command. ```azurecli
- mvn azure-spring-cloud:deploy
+ mvn azure-spring-apps:deploy
``` ## Verify the services
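One hedged way to verify the deployment (assuming the Azure CLI `spring` extension is installed and the resource group and service defaults are set, as in the Azure CLI tab) is to list the apps and their public URLs:

```azurecli
# Lists each app with its provisioning state and public URL (when an endpoint is assigned).
az spring app list --output table
```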
Correct app names in each `pom.xml` for above modules and then run the `deploy`
![Import Project](media/spring-cloud-intellij-howto/import-project-1-pet-clinic.png)
-### Deploy api-gateway app to Azure Spring Cloud
+### Deploy api-gateway app to Azure Spring Apps
In order to deploy to Azure you must sign in with your Azure account with Azure Toolkit for IntelliJ, and choose your subscription. For sign-in details, see [Installation and sign-in](/azure/developer/java/toolkit-for-intellij/create-hello-world-web-app#installation-and-sign-in).
-1. Right-click your project in IntelliJ project explorer, and select **Azure** -> **Deploy to Azure Spring Cloud**.
+1. Right-click your project in IntelliJ project explorer, and select **Azure** -> **Deploy to Azure Spring Apps**.
![Deploy to Azure 1](media/spring-cloud-intellij-howto/deploy-to-azure-1-pet-clinic.png) 1. In the **Name** field, append *:api-gateway* to the existing **Name**. 1. In the **Artifact** textbox, select *spring-petclinic-api-gateway-2.5.1*. 1. In the **Subscription** textbox, verify your subscription.
-1. In the **Spring Cloud** textbox, select the instance of Azure Spring Cloud that you created in [Provision Azure Spring Cloud instance](./quickstart-provision-service-instance.md).
+1. In the **Spring Cloud** textbox, select the instance of Azure Spring Apps that you created in [Provision Azure Spring Apps instance](./quickstart-provision-service-instance.md).
1. Set **Public Endpoint** to *Enable*. 1. In the **App:** textbox, select **Create app...**. 1. Enter *api-gateway*, then select **OK**.
In order to deploy to Azure you must sign in with your Azure account with Azure
![Deploy to Azure OK](media/spring-cloud-intellij-howto/deploy-to-azure-spring-cloud-2-pet-clinic.png)
-1. Start the deployment by selecting the **Run** button at the bottom of the **Deploy Azure Spring Cloud app** dialog. The plug-in will run the command `mvn package` on the `api-gateway` app and deploy the jar generated by the `package` command.
+1. Start the deployment by selecting the **Run** button at the bottom of the **Deploy Azure Spring Apps app** dialog. The plug-in will run the command `mvn package` on the `api-gateway` app and deploy the jar generated by the `package` command.
-### Deploy customers-service and other apps to Azure Spring Cloud
+### Deploy customers-service and other apps to Azure Spring Apps
-Repeat the steps above to deploy `customers-service` and other Pet Clinic apps to Azure Spring Cloud:
+Repeat the steps above to deploy `customers-service` and other Pet Clinic apps to Azure Spring Apps:
1. Modify the **Name** and **Artifact** to identify the `customers-service` app. 1. In the **App:** textbox, select **Create app...** to create `customers-service` app. 1. Verify that the **Public Endpoint** option is set to *Disabled*. 1. In the **Before launch** section of the dialog, switch the **Working directory** to the *petclinic/customers-service* folder.
-1. Start the deployment by selecting the **Run** button at the bottom of the **Deploy Azure Spring Cloud app** dialog.
+1. Start the deployment by selecting the **Run** button at the bottom of the **Deploy Azure Spring Apps app** dialog.
## Verify the services
spring-cloud Quickstart Deploy Infrastructure Vnet Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quickstart-deploy-infrastructure-vnet-azure-cli.md
Title: Quickstart - Provision Azure Spring Cloud using Azure CLI
-description: This quickstart shows you how to use Azure CLI to deploy a Spring Cloud cluster into an existing virtual network.
+ Title: Quickstart - Provision Azure Spring Apps using Azure CLI
+description: This quickstart shows you how to use Azure CLI to deploy an Azure Spring Apps cluster into an existing virtual network.
-+ Last updated 11/12/2021
-# Quickstart: Provision Azure Spring Cloud using Azure CLI
+# Quickstart: Provision Azure Spring Apps using Azure CLI
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This quickstart describes how to use Azure CLI to deploy an Azure Spring Cloud cluster into an existing virtual network.
+This quickstart describes how to use Azure CLI to deploy an Azure Spring Apps cluster into an existing virtual network.
-Azure Spring Cloud makes it easy to deploy Spring applications to Azure without any code changes. The service manages the infrastructure of Spring Cloud applications so developers can focus on their code. Azure Spring Cloud provides lifecycle management using comprehensive monitoring and diagnostics, configuration management, service discovery, CI/CD integration, blue-green deployments, and more.
+Azure Spring Apps makes it easy to deploy Spring applications to Azure without any code changes. The service manages the infrastructure of Spring applications so developers can focus on their code. Azure Spring Apps provides lifecycle management using comprehensive monitoring and diagnostics, configuration management, service discovery, CI/CD integration, blue-green deployments, and more.
## Prerequisites * An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-* Two dedicated subnets for the Azure Spring Cloud cluster, one for the service runtime and another for the Spring applications. For subnet and virtual network requirements, see the [Virtual network requirements](how-to-deploy-in-azure-virtual-network.md#virtual-network-requirements) section of [Deploy Azure Spring Cloud in a virtual network](how-to-deploy-in-azure-virtual-network.md).
-* An existing Log Analytics workspace for Azure Spring Cloud diagnostics settings and a workspace-based Application Insights resource. For more information, see [Analyze logs and metrics with diagnostics settings](diagnostic-services.md) and [Application Insights Java In-Process Agent in Azure Spring Cloud](how-to-application-insights.md).
-* Three internal Classless Inter-Domain Routing (CIDR) ranges (at least */16* each) that you've identified for use by the Azure Spring Cloud cluster. These CIDR ranges will not be directly routable and will be used only internally by the Azure Spring Cloud cluster. Clusters may not use *169.254.0.0/16*, *172.30.0.0/16*, *172.31.0.0/16*, or *192.0.2.0/24* for the internal Spring Cloud CIDR ranges, or any IP ranges included within the cluster virtual network address range.
-* Service permission granted to the virtual network. The Azure Spring Cloud Resource Provider requires Owner permission to your virtual network in order to grant a dedicated and dynamic service principal on the virtual network for further deployment and maintenance. For instructions and more information, see the [Grant service permission to the virtual network](how-to-deploy-in-azure-virtual-network.md#grant-service-permission-to-the-virtual-network) section of [Deploy Azure Spring Cloud in a virtual network](how-to-deploy-in-azure-virtual-network.md).
+* Two dedicated subnets for the Azure Spring Apps cluster, one for the service runtime and another for the Spring applications. For subnet and virtual network requirements, see the [Virtual network requirements](how-to-deploy-in-azure-virtual-network.md#virtual-network-requirements) section of [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md).
+* An existing Log Analytics workspace for Azure Spring Apps diagnostics settings and a workspace-based Application Insights resource. For more information, see [Analyze logs and metrics with diagnostics settings](diagnostic-services.md) and [Application Insights Java In-Process Agent in Azure Spring Apps](how-to-application-insights.md).
+* Three internal Classless Inter-Domain Routing (CIDR) ranges (at least */16* each) that you've identified for use by the Azure Spring Apps cluster. These CIDR ranges will not be directly routable and will be used only internally by the Azure Spring Apps cluster. Clusters may not use *169.254.0.0/16*, *172.30.0.0/16*, *172.31.0.0/16*, or *192.0.2.0/24* for the internal Spring app CIDR ranges, or any IP ranges included within the cluster virtual network address range.
+* Service permission granted to the virtual network. The Azure Spring Apps Resource Provider requires Owner permission to your virtual network in order to grant a dedicated and dynamic service principal on the virtual network for further deployment and maintenance. For instructions and more information, see the [Grant service permission to the virtual network](how-to-deploy-in-azure-virtual-network.md#grant-service-permission-to-the-virtual-network) section of [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md).
* If you're using Azure Firewall or a Network Virtual Appliance (NVA), you'll also need to satisfy the following prerequisites: * Network and fully qualified domain name (FQDN) rules. For more information, see [Virtual network requirements](how-to-deploy-in-azure-virtual-network.md#virtual-network-requirements).
- * A unique User Defined Route (UDR) applied to each of the service runtime and Spring application subnets. For more information about UDRs, see [Virtual network traffic routing](../virtual-network/virtual-networks-udr-overview.md). The UDR should be configured with a route for *0.0.0.0/0* with a destination of your NVA before deploying the Spring Cloud cluster. For more information, see the [Bring your own route table](how-to-deploy-in-azure-virtual-network.md#bring-your-own-route-table) section of [Deploy Azure Spring Cloud in a virtual network](how-to-deploy-in-azure-virtual-network.md).
+ * A unique User Defined Route (UDR) applied to each of the service runtime and Spring application subnets. For more information about UDRs, see [Virtual network traffic routing](../virtual-network/virtual-networks-udr-overview.md). The UDR should be configured with a route for *0.0.0.0/0* with a destination of your NVA before deploying the Azure Spring Apps cluster. For more information, see the [Bring your own route table](how-to-deploy-in-azure-virtual-network.md#bring-your-own-route-table) section of [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md).
* [Azure CLI](/cli/azure/install-azure-cli) ## Review the Azure CLI deployment script
-The deployment script used in this quickstart is from the [Azure Spring Cloud reference architecture](reference-architecture.md).
+The deployment script used in this quickstart is from the [Azure Spring Apps reference architecture](reference-architecture.md).
```azurecli #!/bin/bash
echo "Enter Azure region for resource deployment: "
read region location=$region
-echo "Enter Azure Spring cloud Resource Group Name: "
-read azurespringcloudrg
-azurespringcloud_resource_group_name=$azurespringcloudrg
+echo "Enter Azure Spring Apps Resource Group Name: "
+read azurespringappsrg
+azurespringapps_resource_group_name=$azurespringappsrg
-echo "Enter Azure Spring cloud VNet Resource Group Name: "
-read azurespringcloudvnetrg
-azurespringcloud_vnet_resource_group_name=$azurespringcloudvnetrg
+echo "Enter Azure Spring Apps VNet Resource Group Name: "
+read azurespringappsvnetrg
+azurespringapps_vnet_resource_group_name=$azurespringappsvnetrg
-echo "Enter Azure Spring cloud Spoke VNet : "
-read azurespringcloudappspokevnet
-azurespringcloudappspokevnet=$azurespringcloudappspokevnet
+echo "Enter Azure Spring Apps Spoke VNet : "
+read azurespringappsappspokevnet
+azurespringappsappspokevnet=$azurespringappsappspokevnet
-echo "Enter Azure Spring cloud App SubNet : "
-read azurespringcloudappsubnet
-azurespringcloud_app_subnet_name='/subscriptions/'$subscription'/resourcegroups/'$azurespringcloud_vnet_resource_group_name'/providers/Microsoft.Network/virtualNetworks/'$azurespringcloudappspokevnet'/subnets/'$azurespringcloudappsubnet
+echo "Enter Azure Spring Apps App SubNet : "
+read azurespringappsappsubnet
+azurespringapps_app_subnet_name='/subscriptions/'$subscription'/resourcegroups/'$azurespringapps_vnet_resource_group_name'/providers/Microsoft.Network/virtualNetworks/'$azurespringappsappspokevnet'/subnets/'$azurespringappsappsubnet
-echo "Enter Azure Spring cloud Service SubNet : "
-read azurespringcloudservicesubnet
-azurespringcloud_service_subnet_name='/subscriptions/'$subscription'/resourcegroups/'$azurespringcloud_vnet_resource_group_name'/providers/Microsoft.Network/virtualNetworks/'$azurespringcloudappspokevnet'/subnets/'$azurespringcloudservicesubnet
+echo "Enter Azure Spring Apps Service SubNet : "
+read azurespringappsservicesubnet
+azurespringapps_service_subnet_name='/subscriptions/'$subscription'/resourcegroups/'$azurespringapps_vnet_resource_group_name'/providers/Microsoft.Network/virtualNetworks/'$azurespringappsappspokevnet'/subnets/'$azurespringappsservicesubnet
echo "Enter Azure Log Analytics Workspace Resource Group Name: " read loganalyticsrg
echo "Enter Log Analytics Workspace Resource ID: "
read workspace workspaceID='/subscriptions/'$subscription'/resourcegroups/'$loganalyticsrg'/providers/microsoft.operationalinsights/workspaces/'$workspace
-echo "Enter Reserved CIDR Ranges for Azure Spring Cloud: "
+echo "Enter Reserved CIDR Ranges for Azure Spring Apps: "
read reservedcidrrange reservedcidrrange=$reservedcidrrange
read tag
tags=$tag randomstring=$(LC_ALL=C tr -dc 'a-z0-9' < /dev/urandom | fold -w 13 | head -n 1)
-azurespringcloud_service='spring-'$randomstring #Name of unique Spring Cloud resource
-azurespringcloud_appinsights=$azurespringcloud_service
-azurespringcloud_resourceid='/subscriptions/'$subscription'/resourceGroups/'$azurespringcloud_resource_group_name'/providers/Microsoft.AppPlatform/Spring/'$azurespringcloud_service
+azurespringapps_service='spring-'$randomstring #Name of unique Azure Spring Apps service instance
+azurespringapps_appinsights=$azurespringapps_service
+azurespringapps_resourceid='/subscriptions/'$subscription'/resourceGroups/'$azurespringapps_resource_group_name'/providers/Microsoft.AppPlatform/Spring/'$azurespringapps_service
# Create Application Insights az monitor app-insights component create \
- --app ${azurespringcloud_service} \
+ --app ${azurespringapps_service} \
--location ${location} \ --kind web \
- -g ${azurespringcloudrg} \
+ -g ${azurespringappsrg} \
--application-type web \ --workspace ${workspaceID}
-# Create Azure Spring Cloud Instance
-az spring-cloud create \
- -n ${azurespringcloud_service} \
- -g ${azurespringcloudrg} \
+# Create Azure Spring Apps Instance
+az spring create \
+ -n ${azurespringapps_service} \
+ -g ${azurespringappsrg} \
-l ${location} \ --enable-java-agent true \
- --app-insights ${azurespringcloud_service} \
+ --app-insights ${azurespringapps_service} \
--sku Standard \
- --app-subnet ${azurespringcloud_app_subnet_name} \
- --service-runtime-subnet ${azurespringcloud_service_subnet_name} \
+ --app-subnet ${azurespringapps_app_subnet_name} \
+ --service-runtime-subnet ${azurespringapps_service_subnet_name} \
--reserved-cidr-range ${reservedcidrrange} \ --tags ${tags}
-# Update diagnostic setting for Azure Spring Cloud instance
+# Update diagnostic setting for Azure Spring Apps instance
az monitor diagnostic-settings create \ --name monitoring \
- --resource ${azurespringcloud_resourceid} \
+ --resource ${azurespringapps_resourceid} \
--logs '[{"category": "ApplicationConsole","enabled": true}]' \ --workspace ${workspaceID} ``` ## Deploy the cluster
-To deploy the Azure Spring Cloud cluster using the Azure CLI script, follow these steps:
+To deploy the Azure Spring Apps cluster using the Azure CLI script, follow these steps:
1. Sign in to Azure by using the following command:
To deploy the Azure Spring Cloud cluster using the Azure CLI script, follow thes
az account set --subscription "<your subscription name>" ```
-1. Register the Azure Spring Cloud Resource Provider.
+1. Register the Azure Spring Apps Resource Provider.
```azurecli az provider register --namespace 'Microsoft.AppPlatform'
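# (Hedged addition, not part of the original steps.) Registration can take a few minutes;
# you can optionally check its state before continuing:
az provider show --namespace 'Microsoft.AppPlatform' --query registrationState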
To deploy the Azure Spring Cloud cluster using the Azure CLI script, follow thes
1. Add the required extensions to Azure CLI. ```azurecli
- az extension add --name spring-cloud
+ az extension add --name spring
```
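Optionally (a hedged check, not part of the original steps), confirm that the extension installed correctly:

```azurecli
# Prints the installed extension's metadata; the command fails if the extension is missing.
az extension show --name spring
```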
-1. Choose a deployment location from the regions where Azure Spring Cloud is available, as shown in [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=spring-cloud&regions=all).
+1. Choose a deployment location from the regions where Azure Spring Apps is available, as shown in [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=spring-cloud&regions=all).
1. Use the following command to generate a list of Azure locations. Take note of the short **Name** value for the region you selected in the previous step.
To deploy the Azure Spring Cloud cluster using the Azure CLI script, follow thes
* The name of the resource group that you created earlier. * The name of the virtual network resource group where you'll deploy your resources. * The name of the spoke virtual network (for example, *vnet-spoke*).
- * The name of the subnet to be used by the Spring Cloud App Service (for example, *snet-app*).
- * The name of the subnet to be used by the Spring Cloud runtime service (for example, *snet-runtime*).
+ * The name of the subnet to be used by the Azure Spring Apps service (for example, *snet-app*).
+ * The name of the subnet to be used by the Spring runtime service (for example, *snet-runtime*).
* The name of the resource group for the Azure Log Analytics workspace to be used for storing diagnostic logs. * The name of the Azure Log Analytics workspace (for example, *la-cb5sqq6574o2a*).
- * The CIDR ranges from your virtual network to be used by Azure Spring Cloud (for example, *XX.X.X.X/16,XX.X.X.X/16,XX.X.X.X/16*).
+ * The CIDR ranges from your virtual network to be used by Azure Spring Apps (for example, *XX.X.X.X/16,XX.X.X.X/16,XX.X.X.X/16*).
* The key/value pairs to be applied as tags on all resources that support tags. For more information, see [Use tags to organize your Azure resources and management hierarchy](../azure-resource-manager/management/tag-resources.md). Use a space-separated list to apply multiple tags (for example, *environment=Dev BusinessUnit=finance*). After you provide this information, the script will create and deploy the Azure resources.
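A minimal sketch of running the script (the file name *deploy-spring-apps.sh* is hypothetical; the script prompts interactively for each of the values listed above):

```azurecli
# Save the deployment script shown earlier to a local file, then run it in a Bash shell.
bash deploy-spring-apps.sh
```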
echo "Press [ENTER] to continue ..."
## Next steps
-In this quickstart, you deployed an Azure Spring Cloud instance into an existing virtual network using Azure CLI, and then validated the deployment. To learn more about Azure Spring Cloud, continue on to the resources below.
+In this quickstart, you deployed an Azure Spring Apps instance into an existing virtual network using Azure CLI, and then validated the deployment. To learn more about Azure Spring Apps, continue on to the resources below.
* Deploy one of the following sample applications from the locations below: * [Pet Clinic App with MySQL Integration](https://github.com/azure-samples/spring-petclinic-microservices) * [Simple Hello World](./quickstart.md?pivots=programming-language-java&tabs=Azure-CLI).
-* Use [custom domains](tutorial-custom-domain.md) with Azure Spring Cloud.
-* Expose applications in Azure Spring Cloud to the internet using [Azure Application Gateway](expose-apps-gateway-azure-firewall.md).
-* View the secure end-to-end [Azure Spring Cloud reference architecture](reference-architecture.md), which is based on the [Microsoft Azure Well-Architected Framework](/azure/architecture/framework/).
+* Use [custom domains](tutorial-custom-domain.md) with Azure Spring Apps.
+* Expose applications in Azure Spring Apps to the internet using [Azure Application Gateway](expose-apps-gateway-azure-firewall.md).
+* View the secure end-to-end [Azure Spring Apps reference architecture](reference-architecture.md), which is based on the [Microsoft Azure Well-Architected Framework](/azure/architecture/framework/).
spring-cloud Quickstart Deploy Infrastructure Vnet Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quickstart-deploy-infrastructure-vnet-bicep.md
Title: Quickstart - Provision Azure Spring Cloud using Bicep
-description: This quickstart shows you how to use Bicep to deploy a Spring Cloud cluster into an existing virtual network.
+ Title: Quickstart - Provision Azure Spring Apps using Bicep
+description: This quickstart shows you how to use Bicep to deploy an Azure Spring Apps cluster into an existing virtual network.
-+ Last updated 11/12/2021
-# Quickstart: Provision Azure Spring Cloud using Bicep
+# Quickstart: Provision Azure Spring Apps using Bicep
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This quickstart describes how to use a Bicep template to deploy an Azure Spring Cloud cluster into an existing virtual network.
+This quickstart describes how to use a Bicep template to deploy an Azure Spring Apps cluster into an existing virtual network.
-Azure Spring Cloud makes it easy to deploy Spring applications to Azure without any code changes. The service manages the infrastructure of Spring Cloud applications so developers can focus on their code. Azure Spring Cloud provides lifecycle management using comprehensive monitoring and diagnostics, configuration management, service discovery, CI/CD integration, blue-green deployments, and more.
+Azure Spring Apps makes it easy to deploy Spring applications to Azure without any code changes. The service manages the infrastructure of Spring applications so developers can focus on their code. Azure Spring Apps provides lifecycle management using comprehensive monitoring and diagnostics, configuration management, service discovery, CI/CD integration, blue-green deployments, and more.
## Prerequisites * An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-* Two dedicated subnets for the Azure Spring Cloud cluster, one for the service runtime and another for the Spring applications. For subnet and virtual network requirements, see the [Virtual network requirements](how-to-deploy-in-azure-virtual-network.md#virtual-network-requirements) section of [Deploy Azure Spring Cloud in a virtual network](how-to-deploy-in-azure-virtual-network.md).
-* An existing Log Analytics workspace for Azure Spring Cloud diagnostics settings. For more information, see [Analyze logs and metrics with diagnostics settings](diagnostic-services.md).
-* Three internal Classless Inter-Domain Routing (CIDR) ranges (at least */16* each) that you've identified for use by the Azure Spring Cloud cluster. These CIDR ranges will not be directly routable and will be used only internally by the Azure Spring Cloud cluster. Clusters may not use *169.254.0.0/16*, *172.30.0.0/16*, *172.31.0.0/16*, or *192.0.2.0/24* for the internal Spring Cloud CIDR ranges, or any IP ranges included within the cluster virtual network address range.
-* Service permission granted to the virtual network. The Azure Spring Cloud Resource Provider requires Owner permission to your virtual network in order to grant a dedicated and dynamic service principal on the virtual network for further deployment and maintenance. For instructions and more information, see the [Grant service permission to the virtual network](how-to-deploy-in-azure-virtual-network.md#grant-service-permission-to-the-virtual-network) section of [Deploy Azure Spring Cloud in a virtual network](how-to-deploy-in-azure-virtual-network.md).
+* Two dedicated subnets for the Azure Spring Apps cluster, one for the service runtime and another for the Spring applications. For subnet and virtual network requirements, see the [Virtual network requirements](how-to-deploy-in-azure-virtual-network.md#virtual-network-requirements) section of [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md).
+* An existing Log Analytics workspace for Azure Spring Apps diagnostics settings. For more information, see [Analyze logs and metrics with diagnostics settings](diagnostic-services.md).
+* Three internal Classless Inter-Domain Routing (CIDR) ranges (at least */16* each) that you've identified for use by the Azure Spring Apps cluster. These CIDR ranges will not be directly routable and will be used only internally by the Azure Spring Apps cluster. Clusters may not use *169.254.0.0/16*, *172.30.0.0/16*, *172.31.0.0/16*, or *192.0.2.0/24* for the internal Spring app CIDR ranges, or any IP ranges included within the cluster virtual network address range.
+* Service permission granted to the virtual network. The Azure Spring Apps Resource Provider requires Owner permission to your virtual network in order to grant a dedicated and dynamic service principal on the virtual network for further deployment and maintenance. For instructions and more information, see the [Grant service permission to the virtual network](how-to-deploy-in-azure-virtual-network.md#grant-service-permission-to-the-virtual-network) section of [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md).
* If you're using Azure Firewall or a Network Virtual Appliance (NVA), you'll also need to satisfy the following prerequisites: * Network and fully qualified domain name (FQDN) rules. For more information, see [Virtual network requirements](how-to-deploy-in-azure-virtual-network.md#virtual-network-requirements).
- * A unique User Defined Route (UDR) applied to each of the service runtime and Spring application subnets. For more information about UDRs, see [Virtual network traffic routing](../virtual-network/virtual-networks-udr-overview.md). The UDR should be configured with a route for *0.0.0.0/0* with a destination of your NVA before deploying the Spring Cloud cluster. For more information, see the [Bring your own route table](how-to-deploy-in-azure-virtual-network.md#bring-your-own-route-table) section of [Deploy Azure Spring Cloud in a virtual network](how-to-deploy-in-azure-virtual-network.md).
+ * A unique User Defined Route (UDR) applied to each of the service runtime and Spring application subnets. For more information about UDRs, see [Virtual network traffic routing](../virtual-network/virtual-networks-udr-overview.md). The UDR should be configured with a route for *0.0.0.0/0* with a destination of your NVA before deploying the Azure Spring Apps cluster. For more information, see the [Bring your own route table](how-to-deploy-in-azure-virtual-network.md#bring-your-own-route-table) section of [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md).
* [Azure CLI](/cli/azure/install-azure-cli) ## Deploy using Bicep
To deploy the cluster, follow these steps:
1. Create an *azuredeploy.bicep* file with the following contents: ```Bicep
- @description('The instance name of the Azure Spring Cloud resource')
- param springCloudInstanceName string
+ @description('The instance name of the Azure Spring Apps resource')
+ param springInstanceName string
- @description('The name of the Application Insights instance for Azure Spring Cloud')
+ @description('The name of the Application Insights instance for Azure Spring Apps')
param appInsightsName string @description('The resource ID of the existing Log Analytics workspace. This will be used for both diagnostics logs and Application Insights') param laWorkspaceResourceId string
- @description('The resourceID of the Azure Spring Cloud App Subnet')
- param springCloudAppSubnetID string
+ @description('The resourceID of the Azure Spring Apps App Subnet')
+ param springAppSubnetID string
- @description('The resourceID of the Azure Spring Cloud Runtime Subnet')
- param springCloudRuntimeSubnetID string
+ @description('The resourceID of the Azure Spring Apps Runtime Subnet')
+ param springRuntimeSubnetID string
- @description('Comma-separated list of IP address ranges in CIDR format. The IP ranges are reserved to host underlying Azure Spring Cloud infrastructure, which should be 3 at least /16 unused IP ranges, must not overlap with any Subnet IP ranges')
- param springCloudServiceCidrs string = '10.0.0.0/16,10.2.0.0/16,10.3.0.1/16'
+ @description('Comma-separated list of IP address ranges in CIDR format. The IP ranges are reserved to host underlying Azure Spring Apps infrastructure. Provide at least 3 unused /16 IP ranges that must not overlap with any subnet IP ranges')
+ param springServiceCidrs string = '10.0.0.0/16,10.2.0.0/16,10.3.0.1/16'
@description('The tags that will be associated to the Resources') param tags object = { environment: 'lab' }
- var springCloudSkuName = 'S0'
- var springCloudSkuTier = 'Standard'
+ var springSkuName = 'S0'
+ var springSkuTier = 'Standard'
var location = resourceGroup().location resource appInsights 'Microsoft.Insights/components@2020-02-02-preview' = {
To deploy the cluster, follow these steps:
} }
- resource springCloudInstance 'Microsoft.AppPlatform/Spring@2020-07-01' = {
- name: springCloudInstanceName
+ resource springInstance 'Microsoft.AppPlatform/Spring@2020-07-01' = {
+ name: springInstanceName
location: location tags: tags sku: {
- name: springCloudSkuName
- tier: springCloudSkuTier
+ name: springSkuName
+ tier: springSkuTier
} properties: { networkProfile: {
- serviceCidr: springCloudServiceCidrs
- serviceRuntimeSubnetId: springCloudRuntimeSubnetID
- appSubnetId: springCloudAppSubnetID
+ serviceCidr: springServiceCidrs
+ serviceRuntimeSubnetId: springRuntimeSubnetID
+ appSubnetId: springAppSubnetID
} } }
- resource springCloudMonitoringSettings 'Microsoft.AppPlatform/Spring/monitoringSettings@2020-07-01' = {
- name: '${springCloudInstance.name}/default'
+ resource springMonitoringSettings 'Microsoft.AppPlatform/Spring/monitoringSettings@2020-07-01' = {
+ name: '${springInstance.name}/default'
properties: { traceEnabled: true appInsightsInstrumentationKey: appInsights.properties.InstrumentationKey } }
- resource springCloudDiagnostics 'microsoft.insights/diagnosticSettings@2017-05-01-preview' = {
+ resource springDiagnostics 'microsoft.insights/diagnosticSettings@2017-05-01-preview' = {
name: 'monitoring'
- scope: springCloudInstance
+ scope: springInstance
properties: { workspaceId: laWorkspaceResourceId logs: [
To deploy the cluster, follow these steps:
1. Open a Bash window and run the following Azure CLI command, replacing the *\<value>* placeholders with the following values:
- * **resource-group:** The resource group name for deploying the Spring Cloud instance.
- * **springCloudInstanceName:** The name of the Azure Spring Cloud resource.
- * **appInsightsName:** The name of the Application Insights instance for Azure Spring Cloud.
+ * **resource-group:** The resource group name for deploying the Azure Spring Apps instance.
+ * **springInstanceName:** The name of the Azure Spring Apps resource.
+ * **appInsightsName:** The name of the Application Insights instance for Azure Spring Apps.
* **laWorkspaceResourceId:** The resource ID of the existing Log Analytics workspace (for example, */subscriptions/\<your subscription>/resourcegroups/\<your log analytics resource group>/providers/Microsoft.OperationalInsights/workspaces/\<your log analytics workspace name>*.)
- * **springCloudAppSubnetID:** The resourceID of the Azure Spring Cloud App Subnet.
- * **springCloudRuntimeSubnetID:** The resourceID of the Azure Spring Cloud Runtime Subnet.
- * **springCloudServiceCidrs:** A comma-separated list of IP address ranges (3 in total) in CIDR format. The IP ranges are reserved to host underlying Azure Spring Cloud infrastructure. These 3 ranges should be at least */16* unused IP ranges, and must not overlap with any routable subnet IP ranges used within the network.
+ * **springAppSubnetID:** The resource ID of the Azure Spring Apps App Subnet.
+ * **springRuntimeSubnetID:** The resource ID of the Azure Spring Apps Runtime Subnet.
+ * **springServiceCidrs:** A comma-separated list of IP address ranges (3 in total) in CIDR format. The IP ranges are reserved to host underlying Azure Spring Apps infrastructure. These 3 ranges should each be at least an unused */16* range, and must not overlap with any routable subnet IP ranges used within the network.
```azurecli az deployment group create \
To deploy the cluster, follow these steps:
springServiceCidrs=<value> ```
- This command uses the Bicep template to create an Azure Spring Cloud instance in an existing virtual network. The command also creates a workspace-based Application Insights instance in an existing Azure Monitor Log Analytics Workspace.
+ This command uses the Bicep template to create an Azure Spring Apps instance in an existing virtual network. The command also creates a workspace-based Application Insights instance in an existing Azure Monitor Log Analytics Workspace.
## Review deployed resources
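As a hedged starting point (the placeholders are the values you passed to the deployment), you can query the new service instance with the CLI:

```azurecli
# Shows the new Azure Spring Apps instance, including its provisioning state.
az spring show --name <springInstanceName> --resource-group <resource-group> --output table
```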
echo "Press [ENTER] to continue ..."
## Next steps
-In this quickstart, you deployed an Azure Spring Cloud instance into an existing virtual network using Bicep, and then validated the deployment. To learn more about Azure Spring Cloud, continue on to the resources below.
+In this quickstart, you deployed an Azure Spring Apps instance into an existing virtual network using Bicep, and then validated the deployment. To learn more about Azure Spring Apps, continue on to the resources below.
* Deploy one of the following sample applications from the locations below: * [Pet Clinic App with MySQL Integration](https://github.com/azure-samples/spring-petclinic-microservices) * [Simple Hello World](./quickstart.md?pivots=programming-language-java&tabs=Azure-CLI).
-* Use [custom domains](tutorial-custom-domain.md) with Azure Spring Cloud.
-* Expose applications in Azure Spring Cloud to the internet using [Azure Application Gateway](expose-apps-gateway-azure-firewall.md).
-* View the secure end-to-end [Azure Spring Cloud reference architecture](reference-architecture.md), which is based on the [Microsoft Azure Well-Architected Framework](/azure/architecture/framework/).
+* Use [custom domains](tutorial-custom-domain.md) with Azure Spring Apps.
+* Expose applications in Azure Spring Apps to the internet using [Azure Application Gateway](expose-apps-gateway-azure-firewall.md).
+* View the secure end-to-end [Azure Spring Apps reference architecture](reference-architecture.md), which is based on the [Microsoft Azure Well-Architected Framework](/azure/architecture/framework/).
spring-cloud Quickstart Deploy Infrastructure Vnet Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quickstart-deploy-infrastructure-vnet-terraform.md
Title: Quickstart - Provision Azure Spring Cloud using Terraform
-description: This quickstart shows you how to use Terraform to deploy a Spring Cloud cluster into an existing virtual network.
+ Title: Quickstart - Provision Azure Spring Apps using Terraform
+description: This quickstart shows you how to use Terraform to deploy an Azure Spring Apps cluster into an existing virtual network.
-+ Last updated 11/12/2021
-# Quickstart: Provision Azure Spring Cloud using Terraform
+# Quickstart: Provision Azure Spring Apps using Terraform
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This quickstart describes how to use Terraform to deploy an Azure Spring Cloud cluster into an existing virtual network.
+This quickstart describes how to use Terraform to deploy an Azure Spring Apps cluster into an existing virtual network.
-Azure Spring Cloud makes it easy to deploy Spring applications to Azure without any code changes. The service manages the infrastructure of Spring Cloud applications so developers can focus on their code. Azure Spring Cloud provides lifecycle management using comprehensive monitoring and diagnostics, configuration management, service discovery, CI/CD integration, blue-green deployments, and more.
+Azure Spring Apps makes it easy to deploy Spring applications to Azure without any code changes. The service manages the infrastructure of Spring applications so developers can focus on their code. Azure Spring Apps provides lifecycle management using comprehensive monitoring and diagnostics, configuration management, service discovery, CI/CD integration, blue-green deployments, and more.
## Prerequisites * An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. * [Hashicorp Terraform](https://www.terraform.io/downloads.html)
-* Two dedicated subnets for the Azure Spring Cloud cluster, one for the service runtime and another for the Spring applications. For subnet and virtual network requirements, see the [Virtual network requirements](how-to-deploy-in-azure-virtual-network.md#virtual-network-requirements) section of [Deploy Azure Spring Cloud in a virtual network](how-to-deploy-in-azure-virtual-network.md).
-* An existing Log Analytics workspace for Azure Spring Cloud diagnostics settings and a workspace-based Application Insights resource. For more information, see [Analyze logs and metrics with diagnostics settings](diagnostic-services.md) and [Application Insights Java In-Process Agent in Azure Spring Cloud](how-to-application-insights.md).
-* Three internal Classless Inter-Domain Routing (CIDR) ranges (at least */16* each) that you've identified for use by the Azure Spring Cloud cluster. These CIDR ranges will not be directly routable and will be used only internally by the Azure Spring Cloud cluster. Clusters may not use *169.254.0.0/16*, *172.30.0.0/16*, *172.31.0.0/16*, or *192.0.2.0/24* for the internal Spring Cloud CIDR ranges, or any IP ranges included within the cluster virtual network address range.
-* Service permission granted to the virtual network. The Azure Spring Cloud Resource Provider requires Owner permission to your virtual network in order to grant a dedicated and dynamic service principal on the virtual network for further deployment and maintenance. For instructions and more information, see the [Grant service permission to the virtual network](how-to-deploy-in-azure-virtual-network.md#grant-service-permission-to-the-virtual-network) section of [Deploy Azure Spring Cloud in a virtual network](how-to-deploy-in-azure-virtual-network.md).
+* Two dedicated subnets for the Azure Spring Apps cluster, one for the service runtime and another for the Spring applications. For subnet and virtual network requirements, see the [Virtual network requirements](how-to-deploy-in-azure-virtual-network.md#virtual-network-requirements) section of [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md).
+* An existing Log Analytics workspace for Azure Spring Apps diagnostics settings and a workspace-based Application Insights resource. For more information, see [Analyze logs and metrics with diagnostics settings](diagnostic-services.md) and [Application Insights Java In-Process Agent in Azure Spring Apps](how-to-application-insights.md).
+* Three internal Classless Inter-Domain Routing (CIDR) ranges (at least */16* each) that you've identified for use by the Azure Spring Apps cluster. These CIDR ranges will not be directly routable and will be used only internally by the Azure Spring Apps cluster. Clusters may not use *169.254.0.0/16*, *172.30.0.0/16*, *172.31.0.0/16*, or *192.0.2.0/24* for the internal Spring app CIDR ranges, or any IP ranges included within the cluster virtual network address range.
+* Service permission granted to the virtual network. The Azure Spring Apps Resource Provider requires Owner permission to your virtual network in order to grant a dedicated and dynamic service principal on the virtual network for further deployment and maintenance. For instructions and more information, see the [Grant service permission to the virtual network](how-to-deploy-in-azure-virtual-network.md#grant-service-permission-to-the-virtual-network) section of [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md).
* If you're using Azure Firewall or a Network Virtual Appliance (NVA), you'll also need to satisfy the following prerequisites: * Network and fully qualified domain name (FQDN) rules. For more information, see [Virtual network requirements](how-to-deploy-in-azure-virtual-network.md#virtual-network-requirements).
- * A unique User Defined Route (UDR) applied to each of the service runtime and Spring application subnets. For more information about UDRs, see [Virtual network traffic routing](../virtual-network/virtual-networks-udr-overview.md). The UDR should be configured with a route for *0.0.0.0/0* with a destination of your NVA before deploying the Spring Cloud cluster. For more information, see the [Bring your own route table](how-to-deploy-in-azure-virtual-network.md#bring-your-own-route-table) section of [Deploy Azure Spring Cloud in a virtual network](how-to-deploy-in-azure-virtual-network.md).
+ * A unique User Defined Route (UDR) applied to each of the service runtime and Spring application subnets. For more information about UDRs, see [Virtual network traffic routing](../virtual-network/virtual-networks-udr-overview.md). The UDR should be configured with a route for *0.0.0.0/0* with a destination of your NVA before deploying the Azure Spring Apps cluster. For more information, see the [Bring your own route table](how-to-deploy-in-azure-virtual-network.md#bring-your-own-route-table) section of [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md).
## Review the configuration file
-The configuration file used in this quickstart is from the [Azure Spring Cloud reference architecture](reference-architecture.md).
+The configuration file used in this quickstart is from the [Azure Spring Apps reference architecture](reference-architecture.md).
```hcl provider "azurerm" {
resource "azurerm_spring_cloud_service" "sc" {
location = var.location network {
- app_subnet_id = "/subscriptions/${var.subscription}/resourceGroups/${var.azurespringcloudvnetrg}/providers/Microsoft.Network/virtualNetworks/${var.vnet_spoke_name}/subnets/${var.app_subnet_id}"
- service_runtime_subnet_id = "/subscriptions/${var.subscription}/resourceGroups/${var.azurespringcloudvnetrg}/providers/Microsoft.Network/virtualNetworks/${var.vnet_spoke_name}/subnets/${var.service_runtime_subnet_id}"
+ app_subnet_id = "/subscriptions/${var.subscription}/resourceGroups/${var.azurespringappsvnetrg}/providers/Microsoft.Network/virtualNetworks/${var.vnet_spoke_name}/subnets/${var.app_subnet_id}"
+ service_runtime_subnet_id = "/subscriptions/${var.subscription}/resourceGroups/${var.azurespringappsvnetrg}/providers/Microsoft.Network/virtualNetworks/${var.vnet_spoke_name}/subnets/${var.service_runtime_subnet_id}"
cidr_ranges = var.sc_cidr }
resource "azurerm_spring_cloud_service" "sc" {
resource "azurerm_monitor_diagnostic_setting" "sc_diag" { name = "monitoring" target_resource_id = azurerm_spring_cloud_service.sc.id
- log_analytics_workspace_id = "/subscriptions/${var.subscription}/resourceGroups/${var.azurespringcloudvnetrg}/providers/Microsoft.OperationalInsights/workspaces/${var.sc_law_id}"
+ log_analytics_workspace_id = "/subscriptions/${var.subscription}/resourceGroups/${var.azurespringappsvnetrg}/providers/Microsoft.OperationalInsights/workspaces/${var.sc_law_id}"
log { category = "ApplicationConsole"
To apply the configuration, follow these steps:
* The subscription ID of the Azure account you'll be deploying to.
- * A deployment location from the regions where Azure Spring Cloud is available, as shown in [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=spring-cloud&regions=all). You'll need the short form of the location name. To get this value, use the following command to generate a list of Azure locations, then look up the **Name** value for the region you selected.
+ * A deployment location from the regions where Azure Spring Apps is available, as shown in [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=spring-cloud&regions=all). You'll need the short form of the location name. To get this value, use the following command to generate a list of Azure locations, then look up the **Name** value for the region you selected.
```azurecli az account list-locations --output table ``` * The name of the resource group you'll deploy to.
- * A name of your choice for the Spring Cloud Deployment.
+ * A name of your choice for the Spring app deployment.
* The name of the virtual network resource group where you'll deploy your resources. * The name of the spoke virtual network (for example, *vnet-spoke*).
- * The name of the subnet to be used by the Spring Cloud App Service (for example, *snet-app*).
- * The name of the subnet to be used by the Spring Cloud runtime service (for example, *snet-runtime*).
+ * The name of the subnet to be used by the Azure Spring Apps service (for example, *snet-app*).
+ * The name of the subnet to be used by the Spring runtime service (for example, *snet-runtime*).
* The name of the Azure Log Analytics workspace.
- * The CIDR ranges from your virtual network to be used by Azure Spring Cloud (for example, *XX.X.X.X/16,XX.X.X.X/16,XX.X.X.X/16*).
+ * The CIDR ranges from your virtual network to be used by Azure Spring Apps (for example, *XX.X.X.X/16,XX.X.X.X/16,XX.X.X.X/16*).
* The key/value pairs to be applied as tags on all resources that support tags. For more information, see [Use tags to organize your Azure resources and management hierarchy](../azure-resource-manager/management/tag-resources.md). 1. Run the following command to initialize the Terraform modules:
terraform destroy -auto-approve
## Next steps
-In this quickstart, you deployed an Azure Spring Cloud instance into an existing virtual network using Terraform, and then validated the deployment. To learn more about Azure Spring Cloud, continue on to the resources below.
+In this quickstart, you deployed an Azure Spring Apps instance into an existing virtual network using Terraform, and then validated the deployment. To learn more about Azure Spring Apps, continue on to the resources below.
* Deploy one of the following sample applications from the locations below: * [Pet Clinic App with MySQL Integration](https://github.com/azure-samples/spring-petclinic-microservices) * [Simple Hello World](./quickstart.md?pivots=programming-language-java&tabs=Azure-CLI)
-* Use [custom domains](tutorial-custom-domain.md) with Azure Spring Cloud.
-* Expose applications in Azure Spring Cloud to the internet using [Azure Application Gateway](expose-apps-gateway-azure-firewall.md).
-* View the secure end-to-end [Azure Spring Cloud reference architecture](reference-architecture.md), which is based on the [Microsoft Azure Well-Architected Framework](/azure/architecture/framework/).
+* Use [custom domains](tutorial-custom-domain.md) with Azure Spring Apps.
+* Expose applications in Azure Spring Apps to the internet using [Azure Application Gateway](expose-apps-gateway-azure-firewall.md).
+* View the secure end-to-end [Azure Spring Apps reference architecture](reference-architecture.md), which is based on the [Microsoft Azure Well-Architected Framework](/azure/architecture/framework/).
spring-cloud Quickstart Deploy Infrastructure Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quickstart-deploy-infrastructure-vnet.md
Title: Quickstart - Provision Azure Spring Cloud using an Azure Resource Manager template (ARM template)
-description: This quickstart shows you how to use an ARM template to deploy a Spring Cloud cluster into an existing virtual network.
+ Title: Quickstart - Provision Azure Spring Apps using an Azure Resource Manager template (ARM template)
+description: This quickstart shows you how to use an ARM template to deploy an Azure Spring Apps cluster into an existing virtual network.
-+ Last updated 11/12/2021
-# Quickstart: Provision Azure Spring Cloud using an ARM template
+# Quickstart: Provision Azure Spring Apps using an ARM template
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This quickstart describes how to use an Azure Resource Manager template (ARM template) to deploy an Azure Spring Cloud cluster into an existing virtual network.
+This quickstart describes how to use an Azure Resource Manager template (ARM template) to deploy an Azure Spring Apps cluster into an existing virtual network.
-Azure Spring Cloud makes it easy to deploy Spring applications to Azure without any code changes. The service manages the infrastructure of Spring Cloud applications so developers can focus on their code. Azure Spring Cloud provides lifecycle management using comprehensive monitoring and diagnostics, configuration management, service discovery, CI/CD integration, blue-green deployments, and more.
+Azure Spring Apps makes it easy to deploy Spring applications to Azure without any code changes. The service manages the infrastructure of Spring applications so developers can focus on their code. Azure Spring Apps provides lifecycle management using comprehensive monitoring and diagnostics, configuration management, service discovery, CI/CD integration, blue-green deployments, and more.
[!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)]
If your environment meets the prerequisites and you're familiar with using ARM t
## Prerequisites * An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-* Two dedicated subnets for the Azure Spring Cloud cluster, one for the service runtime and another for the Spring applications. For subnet and virtual network requirements, see the [Virtual network requirements](how-to-deploy-in-azure-virtual-network.md#virtual-network-requirements) section of [Deploy Azure Spring Cloud in a virtual network](how-to-deploy-in-azure-virtual-network.md).
-* An existing Log Analytics workspace for Azure Spring Cloud diagnostics settings and a workspace-based Application Insights resource. For more information, see [Analyze logs and metrics with diagnostics settings](diagnostic-services.md) and [Application Insights Java In-Process Agent in Azure Spring Cloud](how-to-application-insights.md).
-* Three internal Classless Inter-Domain Routing (CIDR) ranges (at least */16* each) that you've identified for use by the Azure Spring Cloud cluster. These CIDR ranges will not be directly routable and will be used only internally by the Azure Spring Cloud cluster. Clusters may not use *169.254.0.0/16*, *172.30.0.0/16*, *172.31.0.0/16*, or *192.0.2.0/24* for the internal Spring Cloud CIDR ranges, or any IP ranges included within the cluster virtual network address range.
-* Service permission granted to the virtual network. The Azure Spring Cloud Resource Provider requires Owner permission to your virtual network in order to grant a dedicated and dynamic service principal on the virtual network for further deployment and maintenance. For instructions and more information, see the [Grant service permission to the virtual network](how-to-deploy-in-azure-virtual-network.md#grant-service-permission-to-the-virtual-network) section of [Deploy Azure Spring Cloud in a virtual network](how-to-deploy-in-azure-virtual-network.md).
+* Two dedicated subnets for the Azure Spring Apps cluster, one for the service runtime and another for the Spring applications. For subnet and virtual network requirements, see the [Virtual network requirements](how-to-deploy-in-azure-virtual-network.md#virtual-network-requirements) section of [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md).
+* An existing Log Analytics workspace for Azure Spring Apps diagnostics settings and a workspace-based Application Insights resource. For more information, see [Analyze logs and metrics with diagnostics settings](diagnostic-services.md) and [Application Insights Java In-Process Agent in Azure Spring Apps](how-to-application-insights.md).
+* Three internal Classless Inter-Domain Routing (CIDR) ranges (at least */16* each) that you've identified for use by the Azure Spring Apps cluster. These CIDR ranges will not be directly routable and will be used only internally by the Azure Spring Apps cluster. Clusters may not use *169.254.0.0/16*, *172.30.0.0/16*, *172.31.0.0/16*, or *192.0.2.0/24* for the internal Spring app CIDR ranges, or any IP ranges included within the cluster virtual network address range.
+* Service permission granted to the virtual network. The Azure Spring Apps Resource Provider requires Owner permission to your virtual network in order to grant a dedicated and dynamic service principal on the virtual network for further deployment and maintenance. For instructions and more information, see the [Grant service permission to the virtual network](how-to-deploy-in-azure-virtual-network.md#grant-service-permission-to-the-virtual-network) section of [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md).
* If you're using Azure Firewall or a Network Virtual Appliance (NVA), you'll also need to satisfy the following prerequisites: * Network and fully qualified domain name (FQDN) rules. For more information, see [Virtual network requirements](how-to-deploy-in-azure-virtual-network.md#virtual-network-requirements).
- * A unique User Defined Route (UDR) applied to each of the service runtime and Spring application subnets. For more information about UDRs, see [Virtual network traffic routing](../virtual-network/virtual-networks-udr-overview.md). The UDR should be configured with a route for *0.0.0.0/0* with a destination of your NVA before deploying the Spring Cloud cluster. For more information, see the [Bring your own route table](how-to-deploy-in-azure-virtual-network.md#bring-your-own-route-table) section of [Deploy Azure Spring Cloud in a virtual network](how-to-deploy-in-azure-virtual-network.md).
+ * A unique User Defined Route (UDR) applied to each of the service runtime and Spring application subnets. For more information about UDRs, see [Virtual network traffic routing](../virtual-network/virtual-networks-udr-overview.md). The UDR should be configured with a route for *0.0.0.0/0* with a destination of your NVA before deploying the Azure Spring Apps cluster. For more information, see the [Bring your own route table](how-to-deploy-in-azure-virtual-network.md#bring-your-own-route-table) section of [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md).
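As an illustration of the networking prerequisites above, the two dedicated subnets and a default route pointing at an NVA could be created with the Azure CLI roughly as follows. The resource names, address prefixes, and NVA IP address are placeholders, and the service runtime subnet needs its own route table configured the same way:

```azurecli
# Create the dedicated subnets for the Spring applications and the service runtime (address prefixes are examples).
az network vnet subnet create --resource-group <vnet-resource-group> --vnet-name <vnet-name> \
    --name snet-app --address-prefixes 10.1.0.0/24
az network vnet subnet create --resource-group <vnet-resource-group> --vnet-name <vnet-name> \
    --name snet-runtime --address-prefixes 10.1.1.0/24

# Create a route table and a 0.0.0.0/0 route that forwards traffic to the NVA or Azure Firewall.
az network route-table create --resource-group <vnet-resource-group> --name rt-spring-apps-app
az network route-table route create --resource-group <vnet-resource-group> \
    --route-table-name rt-spring-apps-app --name default-to-nva \
    --address-prefix 0.0.0.0/0 --next-hop-type VirtualAppliance \
    --next-hop-ip-address <nva-private-ip>

# Associate the route table with the application subnet (repeat with a separate route table for snet-runtime).
az network vnet subnet update --resource-group <vnet-resource-group> --vnet-name <vnet-name> \
    --name snet-app --route-table rt-spring-apps-app
```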
## Review the template
-The template used in this quickstart is from the [Azure Spring Cloud reference architecture](reference-architecture.md).
+The template used in this quickstart is from the [Azure Spring Apps reference architecture](reference-architecture.md).
:::code language="json" source="~/azure-spring-cloud-reference-architecture/ARM/brownfield-deployment/azuredeploy.json"::: Two Azure resources are defined in the template:
-* [Microsoft.AppPlatform/Spring](/azure/templates/microsoft.appplatform/spring): Create an Azure Spring Cloud instance.
+* [Microsoft.AppPlatform/Spring](/azure/templates/microsoft.appplatform/spring): Create an Azure Spring Apps instance.
* [Microsoft.Insights/components](/azure/templates/microsoft.insights/components): Create an Application Insights workspace.
-For Azure CLI, Terraform, and Bicep deployments, see the [Azure Spring Cloud Reference Architecture](https://github.com/Azure/azure-spring-cloud-reference-architecture) repository on GitHub.
+For Azure CLI, Terraform, and Bicep deployments, see the [Azure Spring Apps Reference Architecture](https://github.com/Azure/azure-spring-cloud-reference-architecture) repository on GitHub.
## Deploy the template To deploy the template, follow these steps:
-1. Select the following image to sign in to Azure and open a template. The template creates an Azure Spring Cloud instance into an existing Virtual Network and a workspace-based Application Insights instance into an existing Azure Monitor Log Analytics Workspace.
+1. Select the following image to sign in to Azure and open a template. The template creates an Azure Spring Apps instance into an existing Virtual Network and a workspace-based Application Insights instance into an existing Azure Monitor Log Analytics Workspace.
[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg?sanitize=true)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-spring-cloud-reference-architecture%2Fmain%2FARM%2Fbrownfield-deployment%2fazuredeploy.json) 2. Enter values for the following fields: * **Resource Group:** select **Create new**, enter a unique name for the **resource group**, and then select **OK**.
- * **springCloudInstanceName:** Enter the name of the Azure Spring Cloud resource.
- * **appInsightsName:** Enter the name of the Application Insights instance for Azure Spring Cloud.
+ * **springCloudInstanceName:** Enter the name of the Azure Spring Apps resource.
+ * **appInsightsName:** Enter the name of the Application Insights instance for Azure Spring Apps.
* **laWorkspaceResourceId:** Enter the resource ID of the existing Log Analytics workspace (for example, */subscriptions/\<your subscription>/resourcegroups/\<your log analytics resource group>/providers/Microsoft.OperationalInsights/workspaces/\<your log analytics workspace name>*.)
- * **springCloudAppSubnetID:** Enter the resourceID of the Azure Spring Cloud App Subnet.
- * **springCloudRuntimeSubnetID:** Enter the resourceID of the Azure Spring Cloud Runtime Subnet.
- * **springCloudServiceCidrs:** Enter a comma-separated list of IP address ranges (3 in total) in CIDR format. The IP ranges are reserved to host underlying Azure Spring Cloud infrastructure. These 3 ranges should be at least */16* unused IP ranges, and must not overlap with any routable subnet IP ranges used within the network.
+ * **springCloudAppSubnetID:** Enter the resourceID of the Azure Spring Apps App Subnet.
+ * **springCloudRuntimeSubnetID:** Enter the resourceID of the Azure Spring Apps Runtime Subnet.
+ * **springCloudServiceCidrs:** Enter a comma-separated list of IP address ranges (3 in total) in CIDR format. The IP ranges are reserved to host underlying Azure Spring Apps infrastructure. These 3 ranges should be at least */16* unused IP ranges, and must not overlap with any routable subnet IP ranges used within the network.
* **tags:** Enter any custom tags. 3. Select **Review + Create** and then **Create**.
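If you prefer the command line to the **Deploy to Azure** button, the same template and parameters can be supplied through the Azure CLI. The following is a hedged sketch that uses the raw template URL from the reference architecture repository and placeholder parameter values:

```azurecli
az deployment group create \
    --resource-group <resource-group-name> \
    --template-uri "https://raw.githubusercontent.com/Azure/azure-spring-cloud-reference-architecture/main/ARM/brownfield-deployment/azuredeploy.json" \
    --parameters \
        springCloudInstanceName=<service-instance-name> \
        appInsightsName=<app-insights-name> \
        laWorkspaceResourceId=<log-analytics-workspace-resource-id> \
        springCloudAppSubnetID=<app-subnet-resource-id> \
        springCloudRuntimeSubnetID=<runtime-subnet-resource-id> \
        springCloudServiceCidrs="<cidr-1>,<cidr-2>,<cidr-3>"
```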
You can either use the Azure portal to check the deployed resources, or use Azur
If you plan to continue working with subsequent quickstarts and tutorials, you might want to leave these resources in place. When no longer needed, delete the resource group, which deletes the resources in the resource group. To delete the resource group by using Azure CLI or Azure PowerShell, use the following commands:
-# [CLI](#tab/azure-cli)
+### [CLI](#tab/azure-cli)
```azurecli echo "Enter the Resource Group name:" &&
az group delete --name $resourceGroupName &&
echo "Press [ENTER] to continue ..." ```
-# [PowerShell](#tab/azure-powershell)
+### [PowerShell](#tab/azure-powershell)
```azurepowershell-interactive $resourceGroupName = Read-Host -Prompt "Enter the Resource Group name"
Write-Host "Press [ENTER] to continue..."
## Next steps
-In this quickstart, you deployed an Azure Spring Cloud instance into an existing virtual network using an ARM template, and then validated the deployment. To learn more about Azure Spring Cloud and Azure Resource Manager, continue on to the resources below.
+In this quickstart, you deployed an Azure Spring Apps instance into an existing virtual network using an ARM template, and then validated the deployment. To learn more about Azure Spring Apps and Azure Resource Manager, continue on to the resources below.
* Deploy one of the following sample applications from the locations below: * [Pet Clinic App with MySQL Integration](https://github.com/azure-samples/spring-petclinic-microservices) * [Simple Hello World](./quickstart.md?pivots=programming-language-java&tabs=Azure-CLI)
-* Use [custom domains](tutorial-custom-domain.md) with Azure Spring Cloud.
-* Expose applications in Azure Spring Cloud to the internet using [Azure Application Gateway](expose-apps-gateway-azure-firewall.md).
-* View the secure end-to-end [Azure Spring Cloud reference architecture](reference-architecture.md), which is based on the [Microsoft Azure Well-Architected Framework](/azure/architecture/framework/).
+* Use [custom domains](tutorial-custom-domain.md) with Azure Spring Apps.
+* Expose applications in Azure Spring Apps to the internet using [Azure Application Gateway](expose-apps-gateway-azure-firewall.md).
+* View the secure end-to-end [Azure Spring Apps reference architecture](reference-architecture.md), which is based on the [Microsoft Azure Well-Architected Framework](/azure/architecture/framework/).
* Learn more about [Azure Resource Manager](../azure-resource-manager/management/overview.md).
spring-cloud Quickstart Integrate Azure Database Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quickstart-integrate-azure-database-mysql.md
Title: "Quickstart - Integrate with Azure Database for MySQL"
-description: Explains how to provision and prepare an Azure Database for MySQL instance, and then configure Pet Clinic on Azure Spring Cloud to use it as a persistent database with only one command.
+description: Explains how to provision and prepare an Azure Database for MySQL instance, and then configure Pet Clinic on Azure Spring Apps to use it as a persistent database with only one command.
Last updated 10/15/2021-+
-# Quickstart: Integrate Azure Spring Cloud with Azure Database for MySQL
+# Quickstart: Integrate Azure Spring Apps with Azure Database for MySQL
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-Pet Clinic, as deployed in the default configuration [Quickstart: Build and deploy apps to Azure Spring Cloud](quickstart-deploy-apps.md), uses an in-memory database (HSQLDB) that is populated with data at startup. This quickstart explains how to provision and prepare an Azure Database for MySQL instance and then configure Pet Clinic on Azure Spring Cloud to use it as a persistent database with only one command.
+Pet Clinic, as deployed in the default configuration [Quickstart: Build and deploy apps to Azure Spring Apps](quickstart-deploy-apps.md), uses an in-memory database (HSQLDB) that is populated with data at startup. This quickstart explains how to provision and prepare an Azure Database for MySQL instance and then configure Pet Clinic on Azure Spring Apps to use it as a persistent database with only one command.
## Prerequisites
export MYSQL_DATABASE_NAME=petclinic
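The commands in this section read the MySQL connection details from environment variables. The complete set they reference might be exported roughly as follows; every value shown here is a placeholder:

```bash
# Placeholder values - replace them with the details of your Azure Database for MySQL instance.
export MYSQL_SERVER_FULL_NAME=<mysql-server-name>.mysql.database.azure.com
export MYSQL_DATABASE_NAME=petclinic
export MYSQL_SERVER_ADMIN_LOGIN_NAME=<admin-login-name>
export MYSQL_SERVER_ADMIN_PASSWORD=<admin-password>
```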
To enable MySQL as the database for the sample app, update the *customers-service* app with the MySQL active profile and the database credentials as environment variables. ```azurecli
-az spring-cloud app update \
+az spring app update \
--name customers-service \ --jvm-options="-Xms2048m -Xmx2048m -Dspring.profiles.active=mysql" \ --env \
az spring-cloud app update \
## Update extra apps ```azcli
-az spring-cloud app update --name api-gateway \
+az spring app update --name api-gateway \
--jvm-options="-Xms2048m -Xmx2048m -Dspring.profiles.active=mysql"
-az spring-cloud app update --name admin-server \
+az spring app update --name admin-server \
--jvm-options="-Xms2048m -Xmx2048m -Dspring.profiles.active=mysql"
-az spring-cloud app update --name customers-service \
+az spring app update --name customers-service \
--jvm-options="-Xms2048m -Xmx2048m -Dspring.profiles.active=mysql" \ --env \ MYSQL_SERVER_FULL_NAME=${MYSQL_SERVER_FULL_NAME} \ MYSQL_DATABASE_NAME=${MYSQL_DATABASE_NAME} \ MYSQL_SERVER_ADMIN_LOGIN_NAME=${MYSQL_SERVER_ADMIN_LOGIN_NAME} \ MYSQL_SERVER_ADMIN_PASSWORD=${MYSQL_SERVER_ADMIN_PASSWORD}
-az spring-cloud app update --name vets-service \
+az spring app update --name vets-service \
--jvm-options="-Xms2048m -Xmx2048m -Dspring.profiles.active=mysql" \ --env \ MYSQL_SERVER_FULL_NAME=${MYSQL_SERVER_FULL_NAME} \ MYSQL_DATABASE_NAME=${MYSQL_DATABASE_NAME} \ MYSQL_SERVER_ADMIN_LOGIN_NAME=${MYSQL_SERVER_ADMIN_LOGIN_NAME} \ MYSQL_SERVER_ADMIN_PASSWORD=${MYSQL_SERVER_ADMIN_PASSWORD}
-az spring-cloud app update --name visits-service \
+az spring app update --name visits-service \
--jvm-options="-Xms2048m -Xmx2048m -Dspring.profiles.active=mysql" \ --env \ MYSQL_SERVER_FULL_NAME=${MYSQL_SERVER_FULL_NAME} \
echo "Press [ENTER] to continue ..."
## Next steps
-* [Bind an Azure Database for MySQL instance to your application in Azure Spring Cloud](how-to-bind-mysql.md)
-* [Use a managed identity to connect Azure SQL Database to an app in Azure Spring Cloud](./connect-managed-identity-to-azure-sql.md)
+* [Bind an Azure Database for MySQL instance to your application in Azure Spring Apps](how-to-bind-mysql.md)
+* [Use a managed identity to connect Azure SQL Database to an app in Azure Spring Apps](./connect-managed-identity-to-azure-sql.md)
spring-cloud Quickstart Logs Metrics Tracing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quickstart-logs-metrics-tracing.md
Title: "Quickstart - Monitoring Azure Spring Cloud apps with logs, metrics, and tracing"
-description: Use log streaming, log analytics, metrics, and tracing to monitor PetClinic sample apps on Azure Spring Cloud.
+ Title: "Quickstart - Monitoring Azure Spring Apps apps with logs, metrics, and tracing"
+description: Use log streaming, log analytics, metrics, and tracing to monitor PetClinic sample apps on Azure Spring Apps.
Last updated 10/12/2021-+ zone_pivot_groups: programming-languages-spring-cloud
-# Quickstart: Monitoring Azure Spring Cloud apps with logs, metrics, and tracing
+# Quickstart: Monitoring Azure Spring Apps apps with logs, metrics, and tracing
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier ::: zone pivot="programming-language-csharp"
-With the built-in monitoring capability in Azure Spring Cloud, you can debug and monitor complex issues. Azure Spring Cloud integrates Steeltoe [distributed tracing](https://docs.steeltoe.io/api/v3/tracing/) with Azure's [Application Insights](../azure-monitor/app/app-insights-overview.md). This integration provides powerful logs, metrics, and distributed tracing capability from the Azure portal.
+With the built-in monitoring capability in Azure Spring Apps, you can debug and monitor complex issues. Azure Spring Apps integrates Steeltoe [distributed tracing](https://docs.steeltoe.io/api/v3/tracing/) with Azure's [Application Insights](../azure-monitor/app/app-insights-overview.md). This integration provides powerful logs, metrics, and distributed tracing capability from the Azure portal.
The following procedures explain how to use Log Streaming, Log Analytics, Metrics, and Distributed Tracing with the sample app that you deployed in the preceding quickstarts.
The following procedures explain how to use Log Streaming, Log Analytics, Metric
* Complete the previous quickstarts in this series:
- * [Provision Azure Spring Cloud service](./quickstart-provision-service-instance.md).
- * [Set up Azure Spring Cloud configuration server](./quickstart-setup-config-server.md).
+ * [Provision Azure Spring Apps service](./quickstart-provision-service-instance.md).
+ * [Set up Azure Spring Apps configuration server](./quickstart-setup-config-server.md).
* [Build and deploy apps](./quickstart-deploy-apps.md). * [Set up Log Analytics workspace](./quickstart-setup-log-analytics.md). ## Logs
-There are two ways to see logs on Azure Spring Cloud: **Log Streaming** of real-time logs per app instance or **Log Analytics** for aggregated logs with advanced query capability.
+There are two ways to see logs on Azure Spring Apps: **Log Streaming** of real-time logs per app instance or **Log Analytics** for aggregated logs with advanced query capability.
### Log streaming You can use log streaming in the Azure CLI with the following command. ```azurecli
-az spring-cloud app logs -n solar-system-weather -f
+az spring app logs -n solar-system-weather -f
``` You will see output similar to the following example:
Executing ObjectResult, writing value of type 'System.Collections.Generic.KeyVal
``` > [!TIP]
-> Use `az spring-cloud app logs -h` to explore more parameters and log stream functionality.
+> Use `az spring app logs -h` to explore more parameters and log stream functionality.
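For example, here is a sketch of streaming a limited number of recent lines from a single app instance; the instance name is a placeholder, and the exact flags available depend on the installed CLI extension version:

```azurecli
# Stream the most recent 100 lines from one instance of the app, then follow new output.
az spring app logs \
    --name solar-system-weather \
    --instance <app-instance-name> \
    --lines 100 \
    --follow
```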
### Log Analytics
-1. In the Azure portal, go to the **service | Overview** page and select **Logs** in the **Monitoring** section. Select **Run** on one of the sample queries for Azure Spring Cloud.
+1. In the Azure portal, go to the **service | Overview** page and select **Logs** in the **Monitoring** section. Select **Run** on one of the sample queries for Azure Spring Apps.
[ ![Logs Analytics entry](media/spring-cloud-quickstart-logs-metrics-tracing/logs-entry.png) ](media/spring-cloud-quickstart-logs-metrics-tracing/logs-entry.png#lightbox)
Executing ObjectResult, writing value of type 'System.Collections.Generic.KeyVal
::: zone-end ::: zone pivot="programming-language-java"
-With the built-in monitoring capability in Azure Spring Cloud, you can debug and monitor complex issues. Azure Spring Cloud integrates [Spring Cloud Sleuth](https://spring.io/projects/spring-cloud-sleuth) with Azure's [Application Insights](../azure-monitor/app/app-insights-overview.md). This integration provides powerful logs, metrics, and distributed tracing capability from the Azure portal. The following procedures explain how to use Log Streaming, Log Analytics, Metrics, and Distributed tracing with deployed PetClinic apps.
+With the built-in monitoring capability in Azure Spring Apps, you can debug and monitor complex issues. Azure Spring Apps integrates [Spring Cloud Sleuth](https://spring.io/projects/spring-cloud-sleuth) with Azure's [Application Insights](../azure-monitor/app/app-insights-overview.md). This integration provides powerful logs, metrics, and distributed tracing capability from the Azure portal. The following procedures explain how to use Log Streaming, Log Analytics, Metrics, and Distributed tracing with deployed PetClinic apps.
## Prerequisites Complete previous steps:
-* [Provision an instance of Azure Spring Cloud](./quickstart-provision-service-instance.md)
+* [Provision an instance of Azure Spring Apps](./quickstart-provision-service-instance.md)
* [Set up the config server](./quickstart-setup-config-server.md). For enterprise tier, please follow [set up Application Configuration Service](./how-to-enterprise-application-configuration-service.md). * [Build and deploy apps](./quickstart-deploy-apps.md). * [Set up Log Analytics workspace](./quickstart-setup-log-analytics.md). ## Logs
-There are two ways to see logs on Azure Spring Cloud: **Log Streaming** of real-time logs per app instance or **Log Analytics** for aggregated logs with advanced query capability.
+There are two ways to see logs on Azure Spring Apps: **Log Streaming** of real-time logs per app instance or **Log Analytics** for aggregated logs with advanced query capability.
### Log streaming
There are two ways to see logs on Azure Spring Cloud: **Log Streaming** of real-
You can use log streaming in the Azure CLI with the following command. ```azurecli
-az spring-cloud app logs -s <service instance name> -g <resource group name> -n gateway -f
+az spring app logs -s <service instance name> -g <resource group name> -n gateway -f
``` You will see logs like this:
You will see logs like this:
[ ![Log Streaming from Azure CLI](media/spring-cloud-quickstart-logs-metrics-tracing/logs-streaming-cli.png) ](media/spring-cloud-quickstart-logs-metrics-tracing/logs-streaming-cli.png#lightbox) > [!TIP]
-> Use `az spring-cloud app logs -h` to explore more parameters and log stream functionalities.
+> Use `az spring app logs -h` to explore more parameters and log stream functionalities.
To learn more about the query language that's used in Log Analytics, see [Azure Monitor log queries](/azure/data-explorer/kusto/query/). To query all your Log Analytics logs from a centralized client, check out [Azure Data Explorer](/azure/data-explorer/query-monitor-data).
To get the logs using Azure Toolkit for IntelliJ:
### Log Analytics
-1. Go to the **service | Overview** page and select **Logs** in the **Monitoring** section. Select **Run** on one of the sample queries for Azure Spring Cloud.
+1. Go to the **service | Overview** page and select **Logs** in the **Monitoring** section. Select **Run** on one of the sample queries for Azure Spring Apps.
[ ![Logs Analytics portal entry](media/spring-cloud-quickstart-logs-metrics-tracing/update-logs-metrics-tracing/logs-entry.png) ](media/spring-cloud-quickstart-logs-metrics-tracing/update-logs-metrics-tracing/logs-entry.png#lightbox)
To get the logs using Azure Toolkit for IntelliJ:
## Metrics
-Navigate to the `Application insights` blade. Then, navigate to the `Metrics` blade - you can see metrics contributed by Spring Boot apps,
-Spring Cloud modules, and dependencies.
+Navigate to the `Application insights` blade. Then, navigate to the `Metrics` blade - you can see metrics contributed by Spring Boot apps, Spring modules, and dependencies.
The chart below shows `gateway-requests` (Spring Cloud Gateway), `hikaricp_connections` (JDBC Connections) and `http_client_requests`.
Navigate to the `Live Metrics` blade - you can see live metrics on screen with l
## Tracing
-Open the Application Insights created by Azure Spring Cloud and start monitoring Spring applications.
+Open the Application Insights created by Azure Spring Apps and start monitoring Spring applications.
Navigate to the `Application Map` blade: [ ![Application map](media/spring-cloud-quickstart-logs-metrics-tracing/update-logs-metrics-tracing/distributed-tracking-new-ai-agent.jpg) ](media/spring-cloud-quickstart-logs-metrics-tracing/update-logs-metrics-tracing/distributed-tracking-new-ai-agent.jpg#lightbox)
az config set defaults.group=
## Next steps
-To explore more monitoring capabilities of Azure Spring Cloud, see:
+To explore more monitoring capabilities of Azure Spring Apps, see:
> [!div class="nextstepaction"] > [Analyze logs and metrics with diagnostics settings](diagnostic-services.md)>
-> [Stream Azure Spring Cloud app logs in real-time](./how-to-log-streaming.md)
+> [Stream Azure Spring Apps app logs in real-time](./how-to-log-streaming.md)
spring-cloud Quickstart Provision Service Instance Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quickstart-provision-service-instance-enterprise.md
Title: "Quickstart - Provision an Azure Spring Cloud service instance using the Enterprise tier"
-description: Describes the creation of an Azure Spring Cloud service instance for app deployment using the Enterprise tier.
+ Title: "Quickstart - Provision an Azure Spring Apps service instance using the Enterprise tier"
+description: Describes the creation of an Azure Spring Apps service instance for app deployment using the Enterprise tier.
Last updated 02/09/2022-+
-# Quickstart: Provision an Azure Spring Cloud service instance using the Enterprise tier
+# Quickstart: Provision an Azure Spring Apps service instance using the Enterprise tier
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ❌ Basic/Standard tier ✔️ Enterprise tier
-This quickstart shows you how to create an Azure Spring Cloud service instance using the Enterprise tier.
+This quickstart shows you how to create an Azure Spring Apps service instance using the Enterprise tier.
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- A license for Azure Spring Cloud Enterprise Tier. For more information, see [View Azure Spring Cloud Enterprise Tier Offer in Azure Marketplace](./how-to-enterprise-marketplace-offer.md).
+- A license for Azure Spring Apps Enterprise Tier. For more information, see [View Azure Spring Apps Enterprise Tier Offer in Azure Marketplace](./how-to-enterprise-marketplace-offer.md).
- [The Azure CLI version 2.0.67 or higher](/cli/azure/install-azure-cli). - [!INCLUDE [install-enterprise-extension](includes/install-enterprise-extension.md)] ## Provision a service instance
-Use the following steps to provision an Azure Spring Cloud service instance:
+Use the following steps to provision an Azure Spring Apps service instance:
### [Portal](#tab/azure-portal) 1. Open the [Azure portal](https://ms.portal.azure.com/).
-1. In the top search box, search for *Azure Spring Cloud*.
+1. In the top search box, search for *Azure Spring Apps*.
-1. Select **Azure Spring Cloud** from the **Services** results.
+1. Select **Azure Spring Apps** from the **Services** results.
-1. On the **Azure Spring Cloud** page, select **Create**.
+1. On the **Azure Spring Apps** page, select **Create**.
-1. On the Azure Spring Cloud **Create** page, select **Change** next to the **Pricing** option, then select the **Enterprise** tier.
+1. On the Azure Spring Apps **Create** page, select **Change** next to the **Pricing** option, then select the **Enterprise** tier.
- :::image type="content" source="media/enterprise/getting-started-enterprise/choose-enterprise-tier.png" alt-text="Screenshot of Azure portal Azure Spring Cloud creation page with Basics section and 'Choose your pricing tier' pane showing." lightbox="media/enterprise/getting-started-enterprise/choose-enterprise-tier.png":::
+ :::image type="content" source="media/enterprise/getting-started-enterprise/choose-enterprise-tier.png" alt-text="Screenshot of Azure portal Azure Spring Apps creation page with Basics section and 'Choose your pricing tier' pane showing." lightbox="media/enterprise/getting-started-enterprise/choose-enterprise-tier.png":::
Select the **Terms** checkbox to agree to the legal terms and privacy statements of the Enterprise tier offering in the Azure Marketplace. 1. To configure VMware Tanzu components, select **Next: VMware Tanzu settings**. > [!NOTE]
- > All Tanzu components are enabled by default. Be sure to carefully consider which Tanzu components you want to use or enable during the provisioning phase. After provisioning the Azure Spring Cloud instance, you can't enable or disable Tanzu components.
+ > All Tanzu components are enabled by default. Be sure to carefully consider which Tanzu components you want to use or enable during the provisioning phase. After provisioning the Azure Spring Apps instance, you can't enable or disable Tanzu components.
- :::image type="content" source="media/enterprise/getting-started-enterprise/create-instance-tanzu-settings-public-preview.png" alt-text="Screenshot of Azure portal Azure Spring Cloud creation page with V M ware Tanzu Settings section showing." lightbox="media/enterprise/getting-started-enterprise/create-instance-tanzu-settings-public-preview.png":::
+ :::image type="content" source="media/enterprise/getting-started-enterprise/create-instance-tanzu-settings-public-preview.png" alt-text="Screenshot of Azure portal Azure Spring Apps creation page with V M ware Tanzu Settings section showing." lightbox="media/enterprise/getting-started-enterprise/create-instance-tanzu-settings-public-preview.png":::
-1. Select the **Application Insights** section, then select **Enable Application Insights**. You can also enable Application Insights after you provision the Azure Spring Cloud instance.
+1. Select the **Application Insights** section, then select **Enable Application Insights**. You can also enable Application Insights after you provision the Azure Spring Apps instance.
- Choose an existing Application Insights instance or create a new Application Insights instance. - Enter a **Sampling Rate** within the range of 0-100, or use the default value of 10. > [!NOTE]
- > You'll pay for the usage of Application Insights when integrated with Azure Spring Cloud. For more information about Application Insights pricing, see [Application Insights billing](../azure-monitor/logs/cost-logs.md#application-insights-billing).
+ > You'll pay for the usage of Application Insights when integrated with Azure Spring Apps. For more information about Application Insights pricing, see [Application Insights billing](../azure-monitor/logs/cost-logs.md#application-insights-billing).
- :::image type="content" source="media/enterprise/getting-started-enterprise/application-insights.png" alt-text="Screenshot of Azure portal Azure Spring Cloud creation page with Application Insights section showing." lightbox="media/enterprise/getting-started-enterprise/application-insights.png":::
+ :::image type="content" source="media/enterprise/getting-started-enterprise/application-insights.png" alt-text="Screenshot of Azure portal Azure Spring Apps creation page with Application Insights section showing." lightbox="media/enterprise/getting-started-enterprise/application-insights.png":::
1. Select **Review and create**. After validation completes successfully, select **Create** to start provisioning the service instance.
It takes about 5 minutes to finish the resource provisioning.
### [Azure CLI](#tab/azure-cli)
-1. Update Azure CLI with the Azure Spring Cloud extension by using the following command:
+1. Update Azure CLI with the Azure Spring Apps extension by using the following command:
```azurecli
- az extension update --name spring-cloud
+ az extension update --name spring
``` 1. Sign in to the Azure CLI and choose your active subscription by using the following command:
It takes about 5 minutes to finish the resource provisioning.
az account set --subscription <subscription-ID> ```
-1. Use the following command to accept the legal terms and privacy statements for the Enterprise tier. This step is necessary only if your subscription has never been used to create an Enterprise tier instance of Azure Spring Cloud.
+1. Use the following command to accept the legal terms and privacy statements for the Enterprise tier. This step is necessary only if your subscription has never been used to create an Enterprise tier instance of Azure Spring Apps.
```azurecli az provider register --namespace Microsoft.SaaS az term accept --publisher vmware-inc --product azure-spring-cloud-vmware-tanzu-2 --plan tanzu-asc-ent-mtr ```
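To double-check that the agreement was accepted, you can query it afterward. This is a hedged sketch that assumes the `az term show` command from the same CLI extension, using the publisher, product, and plan identifiers shown above:

```azurecli
# Show the current acceptance state of the Enterprise tier marketplace terms.
az term show \
    --publisher vmware-inc \
    --product azure-spring-cloud-vmware-tanzu-2 \
    --plan tanzu-asc-ent-mtr
```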
-1. Prepare a name for your Azure Spring Cloud service instance. The name must be between 4 and 32 characters long and can contain only lowercase letters, numbers, and hyphens. The first character of the service name must be a letter and the last character must be either a letter or a number.
+1. Prepare a name for your Azure Spring Apps service instance. The name must be between 4 and 32 characters long and can contain only lowercase letters, numbers, and hyphens. The first character of the service name must be a letter and the last character must be either a letter or a number.
-1. Create a resource group and an Azure Spring Cloud service instance using the following the command:
+1. Create a resource group and an Azure Spring Apps service instance using the following command:
```azurecli az group create --name <resource-group-name>
- az spring-cloud create \
+ az spring create \
--resource-group <resource-group-name> \ --name <service-instance-name> \ --sku enterprise
spring-cloud Quickstart Provision Service Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quickstart-provision-service-instance.md
Title: "Quickstart - Provision an Azure Spring Cloud service"
-description: Describes creation of an Azure Spring Cloud service instance for app deployment.
+ Title: "Quickstart - Provision an Azure Spring Apps service"
+description: Describes creation of an Azure Spring Apps service instance for app deployment.
Last updated 10/12/2021-+ zone_pivot_groups: programming-languages-spring-cloud
-# Quickstart: Provision an Azure Spring Cloud service instance
+# Quickstart: Provision an Azure Spring Apps service instance
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Basic/Standard tier ❌ Enterprise tier ::: zone pivot="programming-language-csharp"
-In this quickstart, you use the Azure CLI to provision an instance of the Azure Spring Cloud service.
+In this quickstart, you use the Azure CLI to provision an instance of the Azure Spring Apps service.
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- [.NET Core 3.1 SDK](https://dotnet.microsoft.com/download/dotnet-core/3.1). The Azure Spring Cloud service supports .NET Core 3.1 and later versions.
+- [.NET Core 3.1 SDK](https://dotnet.microsoft.com/download/dotnet-core/3.1). The Azure Spring Apps service supports .NET Core 3.1 and later versions.
- [The Azure CLI version 2.0.67 or higher](/cli/azure/install-azure-cli). - [Git](https://git-scm.com/).
Verify that your Azure CLI version is 2.0.67 or later:
az --version ```
-Install the Azure Spring Cloud extension for the Azure CLI using the following command:
+Install the Azure Spring Apps extension for the Azure CLI using the following command:
```azurecli
-az extension add --name spring-cloud
+az extension add --name spring
``` ## Sign in to Azure
az extension add --name spring-cloud
az account set --subscription <Name or ID of a subscription from the last step> ```
-## Provision an instance of Azure Spring Cloud
+## Provision an instance of Azure Spring Apps
-1. Create a [resource group](../azure-resource-manager/management/overview.md) to contain your Azure Spring Cloud service. The resource group name can include alphanumeric, underscore, parentheses, hyphen, period (except at end), and Unicode characters.
+1. Create a [resource group](../azure-resource-manager/management/overview.md) to contain your Azure Spring Apps service. The resource group name can include alphanumeric, underscore, parentheses, hyphen, period (except at end), and Unicode characters.
```azurecli az group create --location eastus --name <resource group name> ```
-1. Provision an instance of Azure Spring Cloud service. The service instance name must be unique, between 4 and 32 characters long, and can contain only lowercase letters, numbers, and hyphens. The first character of the service name must be a letter and the last character must be either a letter or a number.
+1. Provision an instance of Azure Spring Apps service. The service instance name must be unique, between 4 and 32 characters long, and can contain only lowercase letters, numbers, and hyphens. The first character of the service name must be a letter and the last character must be either a letter or a number.
```azurecli
- az spring-cloud create -n <service instance name> -g <resource group name>
+ az spring create -n <service instance name> -g <resource group name>
``` This command might take several minutes to complete.
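Once the command returns, you can confirm the instance is ready by querying its provisioning state; a small sketch using the same placeholder names:

```azurecli
# Check that the service instance finished provisioning successfully.
az spring show \
    --name <service instance name> \
    --resource-group <resource group name> \
    --query "properties.provisioningState" \
    --output tsv
```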
az extension add --name spring-cloud
::: zone-end ::: zone pivot="programming-language-java"
-You can provision an instance of the Azure Spring Cloud service using the Azure portal or the Azure CLI. Both methods are explained in the following procedures.
+You can provision an instance of the Azure Spring Apps service using the Azure portal or the Azure CLI. Both methods are explained in the following procedures.
## Prerequisites - [Install JDK 8 or JDK 11](/azure/developer/java/fundamentals/java-jdk-install) - [Sign up for an Azure subscription](https://azure.microsoft.com/free/)-- (Optional) [Install the Azure CLI version 2.0.67 or higher](/cli/azure/install-azure-cli) and install the Azure Spring Cloud extension with the command: `az extension add --name spring-cloud`
+- (Optional) [Install the Azure CLI version 2.0.67 or higher](/cli/azure/install-azure-cli) and install the Azure Spring Apps extension with the command: `az extension add --name spring`
- (Optional) [Install the Azure Toolkit for IntelliJ IDEA](https://plugins.jetbrains.com/plugin/8053-azure-toolkit-for-intellij/) and [sign-in](/azure/developer/java/toolkit-for-intellij/create-hello-world-web-app#installation-and-sign-in)
-## Provision an instance of Azure Spring Cloud
+## Provision an instance of Azure Spring Apps
#### [Portal](#tab/Azure-portal)
-The following procedure creates an instance of Azure Spring Cloud using the Azure portal.
+The following procedure creates an instance of Azure Spring Apps using the Azure portal.
1. In a new tab, open the [Azure portal](https://portal.azure.com/).
-2. From the top search box, search for **Azure Spring Cloud**.
+2. From the top search box, search for **Azure Spring Apps**.
-3. Select **Azure Spring Cloud** from the results.
+3. Select **Azure Spring Apps** from the results.
- ![ASC icon start](media/spring-cloud-quickstart-launch-app-portal/find-spring-cloud-start.png)
+ :::image type="content" source="media/spring-cloud-quickstart-launch-app-portal/find-spring-cloud-start.png" alt-text="Screenshot of Azure portal showing Azure Spring Apps service in search results.":::
-4. On the Azure Spring Cloud page, select **Create**.
+4. On the Azure Spring Apps page, select **Create**.
- ![ASC icon add](media/spring-cloud-quickstart-launch-app-portal/spring-cloud-create.png)
+ :::image type="content" source="media/spring-cloud-quickstart-launch-app-portal/spring-cloud-create.png" alt-text="Screenshot of Azure portal showing Azure Spring Apps resource with Create button highlighted.":::
-5. Fill out the form on the Azure Spring Cloud **Create** page. Consider the following guidelines:
+5. Fill out the form on the Azure Spring Apps **Create** page. Consider the following guidelines:
- **Subscription**: Select the subscription you want to be billed for this resource. - **Resource group**: Creating new resource groups for new resources is a best practice. You will use this value in later steps as **\<resource group name\>**.
The following procedure creates an instance of Azure Spring Cloud using the Azur
- **Location**: Select the location for your service instance. - Select **Standard** for the **Pricing tier** option.
- ![ASC portal start](media/spring-cloud-quickstart-launch-app-portal/portal-start.png)
+ :::image type="content" source="media/spring-cloud-quickstart-launch-app-portal/portal-start.png" alt-text="Screenshot of Azure portal showing Azure Spring Apps Create page.":::
6. Select **Review and create**.
The following procedure creates an instance of Azure Spring Cloud using the Azur
#### [CLI](#tab/Azure-CLI)
-The following procedure uses the Azure CLI extension to provision an instance of Azure Spring Cloud.
+The following procedure uses the Azure CLI extension to provision an instance of Azure Spring Apps.
-1. Update Azure CLI with Azure Spring Cloud extension.
+1. Update Azure CLI with Azure Spring Apps extension.
```azurecli
- az extension update --name spring-cloud
+ az extension update --name spring
``` 1. Sign in to the Azure CLI and choose your active subscription.
The following procedure uses the Azure CLI extension to provision an instance of
az account set --subscription <Name or ID of subscription, skip if you only have 1 subscription> ```
-1. Prepare a name for your Azure Spring Cloud service. The name must be between 4 and 32 characters long and can contain only lowercase letters, numbers, and hyphens. The first character of the service name must be a letter and the last character must be either a letter or a number.
+1. Prepare a name for your Azure Spring Apps service. The name must be between 4 and 32 characters long and can contain only lowercase letters, numbers, and hyphens. The first character of the service name must be a letter and the last character must be either a letter or a number.
-1. Create a resource group to contain your Azure Spring Cloud service. Create in instance of the Azure Spring Cloud service.
+1. Create a resource group to contain your Azure Spring Apps service, then create an instance of the Azure Spring Apps service.
```azurecli az group create --name <resource group name>
- az spring-cloud create -n <service instance name> -g <resource group name>
+ az spring create -n <service instance name> -g <resource group name>
``` Learn more about [Azure Resource Groups](../azure-resource-manager/management/overview.md).
echo "Press [ENTER] to continue ..."
## Next steps > [!div class="nextstepaction"]
-> [Quickstart: Set up Azure Spring Cloud Config Server](./quickstart-setup-config-server.md)
+> [Quickstart: Set up Azure Spring Apps Config Server](./quickstart-setup-config-server.md)
spring-cloud Quickstart Sample App Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quickstart-sample-app-introduction.md
Title: "Quickstart - Introduction to the sample app - Azure Spring Cloud"
-description: Describes the sample app used in this series of quickstarts for deployment to Azure Spring Cloud.
+ Title: "Quickstart - Introduction to the sample app - Azure Spring Apps"
+description: Describes the sample app used in this series of quickstarts for deployment to Azure Spring Apps.
Last updated 10/12/2021-+ zone_pivot_groups: programming-languages-spring-cloud # Introduction to the sample app
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+ **This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier ::: zone pivot="programming-language-csharp"
-This series of quickstarts uses a sample app composed of two Spring apps to show how to deploy a .NET Core Steeltoe app to the Azure Spring Cloud service. You'll use Azure Spring Cloud capabilities such as service discovery, config server, logs, metrics, and distributed tracing.
+This series of quickstarts uses a sample app composed of two Spring apps to show how to deploy a .NET Core Steeltoe app to the Azure Spring Apps service. You'll use Azure Spring Apps capabilities such as service discovery, config server, logs, metrics, and distributed tracing.
## Functional services
The following diagram illustrates the sample app architecture:
:::image type="content" source="media/spring-cloud-quickstart-sample-app-introduction/sample-app-diagram.png" alt-text="Diagram of sample app architecture."::: > [!NOTE]
-> When the application is hosted in Azure Spring Cloud Enterprise tier, the managed Application Configuration Service for VMware Tanzu® assumes the role of Spring Cloud Config Server and the managed VMware Tanzu® Service Registry assumes the role of Eureka Service Discovery without any code changes to the application. For more information, see [Use Application Configuration Service for Tanzu](how-to-enterprise-application-configuration-service.md) and [Use Tanzu Service Registry](how-to-enterprise-service-registry.md).
+> When the application is hosted in Azure Spring Apps Enterprise tier, the managed Application Configuration Service for VMware Tanzu® assumes the role of Spring Cloud Config Server and the managed VMware Tanzu® Service Registry assumes the role of Eureka Service Discovery without any code changes to the application. For more information, see [Use Application Configuration Service for Tanzu](how-to-enterprise-application-configuration-service.md) and [Use Tanzu Service Registry](how-to-enterprise-service-registry.md).
## Code repository
The instructions in the following quickstarts refer to the source code as needed
::: zone pivot="programming-language-java"
-In this quickstart, we use the well-known sample app [PetClinic](https://github.com/spring-petclinic/spring-petclinic-microservices) that will show you how to deploy apps to the Azure Spring Cloud service. The **Pet Clinic** sample demonstrates the microservice architecture pattern and highlights the services breakdown. You will see how services are deployed to Azure with Azure Spring Cloud capabilities, including service discovery, config server, logs, metrics, distributed tracing, and developer-friendly tooling support.
+In this quickstart, we use the well-known sample app [PetClinic](https://github.com/spring-petclinic/spring-petclinic-microservices) that will show you how to deploy apps to the Azure Spring Apps service. The **Pet Clinic** sample demonstrates the microservice architecture pattern and highlights the services breakdown. You will see how services are deployed to Azure with Azure Spring Apps capabilities, including service discovery, config server, logs, metrics, distributed tracing, and developer-friendly tooling support.
-To follow the Azure Spring Cloud deployment examples, you only need the location of the source code, which is provided as needed.
+To follow the Azure Spring Apps deployment examples, you only need the location of the source code, which is provided as needed.
The following diagram shows the architecture of the PetClinic application. ![Architecture of PetClinic](media/build-and-deploy/microservices-architecture-diagram.jpg) > [!NOTE]
-> When the application is hosted in Azure Spring Cloud Enterprise tier, the managed Application Configuration Service for VMware Tanzu® assumes the role of Spring Cloud Config Server and the managed VMware Tanzu® Service Registry assumes the role of Eureka Service Discovery without any code changes to the application. For more information, see the [Infrastructure services hosted by Azure Spring Cloud](#infrastructure-services-hosted-by-azure-spring-cloud) section later in this article.
+> When the application is hosted in Azure Spring Apps Enterprise tier, the managed Application Configuration Service for VMware Tanzu® assumes the role of Spring Cloud Config Server and the managed VMware Tanzu® Service Registry assumes the role of Eureka Service Discovery without any code changes to the application. For more information, see the [Infrastructure services hosted by Azure Spring Apps](#infrastructure-services-hosted-by-azure-spring-apps) section later in this article.
## Functional services to be deployed
PetClinic is decomposed into 4 core Spring apps. All of them are independently d
* **Vets service**: Stores and shows veterinarians' information, including names and specialties. * **API Gateway**: The API Gateway is a single entry point into the system, used to handle requests and route them to an appropriate service or to invoke multiple services and aggregate the results. The three core services expose an external API to clients. In real-world systems, the number of functions can grow very quickly with system complexity. Hundreds of services might be involved in rendering one complex webpage.
-## Infrastructure services hosted by Azure Spring Cloud
+## Infrastructure services hosted by Azure Spring Apps
-There are several common patterns in distributed systems that support core services. Azure Spring Cloud provides tools that enhance Spring Boot applications to implement the following patterns:
+There are several common patterns in distributed systems that support core services. Azure Spring Apps provides tools that enhance Spring Boot applications to implement the following patterns:
### [Basic/Standard tier](#tab/basic-standard-tier)
-* **Config service**: Azure Spring Cloud Config is a horizontally scalable centralized configuration service for distributed systems. It uses a pluggable repository that currently supports local storage, Git, and Subversion.
+* **Config service**: Azure Spring Apps Config is a horizontally scalable centralized configuration service for distributed systems. It uses a pluggable repository that currently supports local storage, Git, and Subversion.
* **Service discovery**: It allows automatic detection of network locations for service instances, which could have dynamically assigned addresses because of autoscaling, failures, and upgrades. ### [Enterprise tier](#tab/enterprise-tier)
For full implementation details, see our fork of [PetClinic](https://github.com/
### [Basic/Standard tier](#tab/basic-standard-tier) > [!div class="nextstepaction"]
-> [Quickstart: Provision an Azure Spring Cloud service instance](./quickstart-provision-service-instance.md)
+> [Quickstart: Provision an Azure Spring Apps service instance](./quickstart-provision-service-instance.md)
### [Enterprise tier](#tab/enterprise-tier) > [!div class="nextstepaction"]
-> [Quickstart: Provision an Azure Spring Cloud service instance using the Enterprise tier](./quickstart-provision-service-instance-enterprise.md)
+> [Quickstart: Provision an Azure Spring Apps service instance using the Enterprise tier](./quickstart-provision-service-instance-enterprise.md)
spring-cloud Quickstart Setup Application Configuration Service Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quickstart-setup-application-configuration-service-enterprise.md
Title: "Quickstart - Set up Application Configuration Service for Tanzu for Azure Spring Cloud Enterprise tier"
-description: Describes how to set up Application Configuration Service for Tanzu for Azure Spring Cloud Enterprise tier.
+ Title: "Quickstart - Set up Application Configuration Service for Tanzu for Azure Spring Apps Enterprise tier"
+description: Describes how to set up Application Configuration Service for Tanzu for Azure Spring Apps Enterprise tier.
Last updated 02/09/2022-+ # Quickstart: Set up Application Configuration Service for Tanzu
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+ **This article applies to:** ❌ Basic/Standard tier ✔️ Enterprise tier
-This quickstart shows you how to set up Application Configuration Service for VMware Tanzu® for use with Azure Spring Cloud Enterprise tier.
+This quickstart shows you how to set up Application Configuration Service for VMware Tanzu® for use with Azure Spring Apps Enterprise tier.
> [!NOTE]
-> To use Application Configuration Service for Tanzu, you must enable it when you provision your Azure Spring Cloud service instance. You cannot enable it after provisioning at this time.
+> To use Application Configuration Service for Tanzu, you must enable it when you provision your Azure Spring Apps service instance. You cannot enable it after provisioning at this time.
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- A license for Azure Spring Cloud Enterprise Tier. For more information, see [View Azure Spring Cloud Enterprise Tier offering from Azure Marketplace](./how-to-enterprise-marketplace-offer.md).
+- A license for Azure Spring Apps Enterprise Tier. For more information, see [View Azure Spring Apps Enterprise Tier offering from Azure Marketplace](./how-to-enterprise-marketplace-offer.md).
- [Azure CLI version 2.0.67 or higher](/cli/azure/install-azure-cli).
- [Apache Maven](https://maven.apache.org/download.cgi)
- [!INCLUDE [install-enterprise-extension](includes/install-enterprise-extension.md)]
To use Application Configuration Service for Tanzu, follow these steps.
1. Select **Application Configuration Service**.
1. Select **Overview** to view the running state and resources allocated to Application Configuration Service for Tanzu.
- ![Screenshot of Azure portal Azure Spring Cloud with Application Configuration Service page and Overview section showing.](./media/enterprise/getting-started-enterprise/config-service-overview.png)
+ :::image type="content" source="media/enterprise/getting-started-enterprise/config-service-overview.png" alt-text="Screenshot of Azure portal Azure Spring Apps with Application Configuration Service page and Overview section showing.":::
1. Select **Settings** and add a new entry in the **Repositories** section with the following information:
To use Application Configuration Service for Tanzu, follow these steps.
1. Select **Validate** to validate access to the target URI. After validation completes successfully, select **Apply** to update the configuration settings.
- ![Screenshot of Azure portal Azure Spring Cloud with Application Configuration Service page and Settings section showing.](./media/enterprise/getting-started-enterprise/config-service-settings.png)
+ :::image type="content" source="media/enterprise/getting-started-enterprise/config-service-settings.png" alt-text="Screenshot of Azure portal Azure Spring Apps with Application Configuration Service page and Settings section showing.":::
### [Azure CLI](#tab/azure-cli)

To set the default repository, use the following command:

```azurecli
-az spring-cloud application-configuration-service git repo add \
+az spring application-configuration-service git repo add \
    --name default \
    --patterns api-gateway,customers-service \
    --uri https://github.com/Azure-Samples/spring-petclinic-microservices-config.git \
echo "Press [ENTER] to continue ..."
## Next steps > [!div class="nextstepaction"]
-> [Quickstart: Build and deploy apps to Azure Spring Cloud using the Enterprise tier](quickstart-deploy-apps-enterprise.md)
+> [Quickstart: Build and deploy apps to Azure Spring Apps using the Enterprise tier](quickstart-deploy-apps-enterprise.md)
spring-cloud Quickstart Setup Config Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quickstart-setup-config-server.md
Title: "Quickstart - Set up Azure Spring Cloud Config Server"
-description: Describes the set up of Azure Spring Cloud Config Server for app deployment.
+ Title: "Quickstart - Set up Azure Spring Apps Config Server"
+description: Describes the set up of Azure Spring Apps Config Server for app deployment.
Last updated 10/12/2021-+ zone_pivot_groups: programming-languages-spring-cloud
-# Quickstart: Set up Azure Spring Cloud Config Server
+# Quickstart: Set up Azure Spring Apps Config Server
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Basic/Standard tier ❌ Enterprise tier
-Azure Spring Cloud Config Server is a centralized configuration service for distributed systems. It uses a pluggable repository layer that currently supports local storage, Git, and Subversion. In this quickstart, you set up the Config Server to get data from a Git repository.
+Azure Spring Apps Config Server is a centralized configuration service for distributed systems. It uses a pluggable repository layer that currently supports local storage, Git, and Subversion. In this quickstart, you set up the Config Server to get data from a Git repository.
::: zone pivot="programming-language-csharp" ## Prerequisites
-* Complete the previous quickstart in this series: [Provision Azure Spring Cloud service](./quickstart-provision-service-instance.md).
-* Azure Spring Cloud Config server is only applicable to basic or standard tier.
+* Complete the previous quickstart in this series: [Provision Azure Spring Apps service](./quickstart-provision-service-instance.md).
+* Azure Spring Apps Config Server applies only to the Basic and Standard tiers.
-## Azure Spring Cloud Config Server procedures
+## Azure Spring Apps Config Server procedures
Set up your Config Server with the location of the git repository for the project by running the following command. Replace *\<service instance name>* with the name of the service you created earlier. The default value for service instance name that you set in the preceding quickstart doesn't work with this command. ```azurecli
-az spring-cloud config-server git set -n <service instance name> --uri https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples --search-paths steeltoe-sample/config
+az spring config-server git set -n <service instance name> --uri https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples --search-paths steeltoe-sample/config
```

This command tells Config Server to find the configuration data in the [steeltoe-sample/config](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples/tree/master/steeltoe-sample/config) folder of the sample app repository. Since the name of the app that will get the configuration data is `planet-weather-provider`, the file that will be used is [planet-weather-provider.yml](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples/blob/master/steeltoe-sample/config/planet-weather-provider.yml).
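Before deploying, you can confirm what Config Server is pointing at by querying its current settings. The following command is a hedged sketch that isn't part of the original quickstart; the placeholder names are assumptions:

```azurecli
# Show the Git repository settings currently applied to Config Server (sketch).
az spring config-server show \
    --name <service instance name> \
    --resource-group <resource group name>
```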
This command tells Config Server to find the configuration data in the [steeltoe
::: zone-end ::: zone pivot="programming-language-java"
-Azure Spring Cloud Config Server is centralized configuration service for distributed systems. It uses a pluggable repository layer that currently supports local storage, Git, and Subversion. Set up the Config Server to deploy Spring apps to Azure Spring Cloud.
+Azure Spring Apps Config Server is a centralized configuration service for distributed systems. It uses a pluggable repository layer that currently supports local storage, Git, and Subversion. Set up the Config Server to deploy Spring apps to Azure Spring Apps.
## Prerequisites

* [Install JDK 8 or JDK 11](/azure/developer/java/fundamentals/java-jdk-install)
* [Sign up for an Azure subscription](https://azure.microsoft.com/free/)
-* (Optional) [Install the Azure CLI version 2.0.67 or higher](/cli/azure/install-azure-cli) and install the Azure Spring Cloud extension with command: `az extension add --name spring-cloud`
+* (Optional) [Install the Azure CLI version 2.0.67 or higher](/cli/azure/install-azure-cli) and install the Azure Spring Apps extension with command: `az extension add --name spring`
* (Optional) [Install the Azure Toolkit for IntelliJ](https://plugins.jetbrains.com/plugin/8053-azure-toolkit-for-intellij/) and [sign-in](/azure/developer/java/toolkit-for-intellij/create-hello-world-web-app#installation-and-sign-in)
-## Azure Spring Cloud Config Server procedures
+## Azure Spring Apps Config Server procedures
#### [Portal](#tab/Azure-portal)
The following procedure sets up the Config Server using the Azure portal to depl
3. Select **Validate**.
- ![Navigate to Config Server](media/spring-cloud-quickstart-launch-app-portal/portal-config.png)
+ :::image type="content" source="media/spring-cloud-quickstart-launch-app-portal/portal-config.png" alt-text="Screenshot of Azure portal showing Config Server page.":::
4. When validation is complete, select **Apply** to save your changes.
- ![Validating Config Server](media/spring-cloud-quickstart-launch-app-portal/validate-complete.png)
+ :::image type="content" source="media/spring-cloud-quickstart-launch-app-portal/validate-complete.png" alt-text="Screenshot of Azure portal showing Config Server page with Apply button highlighted.":::
5. Updating the configuration can take a few minutes.
- ![Updating Config Server](media/spring-cloud-quickstart-launch-app-portal/updating-config.png)
+ :::image type="content" source="media/spring-cloud-quickstart-launch-app-portal/updating-config.png" alt-text="Screenshot of Azure portal showing Config Server page with Updating status message.":::
6. You should get a notification when the configuration is complete.
The following procedure uses the Azure CLI to set up Config Server to deploy the
Run the following command to set the Default repository. ```azurecli
-az spring-cloud config-server git set -n <service instance name> --uri https://github.com/azure-samples/spring-petclinic-microservices-config
+az spring config-server git set -n <service instance name> --uri https://github.com/azure-samples/spring-petclinic-microservices-config
``` ::: zone-end
az spring-cloud config-server git set -n <service instance name> --uri https://g
> [!TIP]
> If you are using a private repository for Config Server, please refer to our [tutorial on setting up authentication](./how-to-config-server.md).
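For reference, a private Git repository is typically configured by passing credentials to the same `git set` command. The following is only a sketch of that pattern, not a copy of the tutorial, and every value shown is a placeholder assumption:

```azurecli
# Sketch: point Config Server at a private repository using basic authentication.
az spring config-server git set \
    --name <service instance name> \
    --resource-group <resource group name> \
    --uri https://github.com/<your-org>/<your-private-config-repo> \
    --username <git-username> \
    --password <git-password-or-token>
```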
-## Troubleshooting of Azure Spring Cloud Config Server
+## Troubleshooting of Azure Spring Apps Config Server
The following procedure explains how to troubleshoot config server settings.
The following procedure explains how to troubleshoot config server settings.
1. Select **Run**.
1. If you find the error **java.lang.IllegalStateException** in the logs, it indicates that the Azure Spring Apps service can't locate properties from Config Server.
- [ ![ASC portal run query](media/spring-cloud-quickstart-setup-config-server/setup-config-server-query.png) ](media/spring-cloud-quickstart-setup-config-server/setup-config-server-query.png)
+ :::image type="content" source="media/spring-cloud-quickstart-setup-config-server/setup-config-server-query.png" alt-text="Screenshot of Azure portal showing Azure Spring Apps query." lightbox="media/spring-cloud-quickstart-setup-config-server/setup-config-server-query.png":::
1. Go to the service **Overview** page.
1. Select **Diagnose and solve problems**.
1. Select **Config Server** detector.
- [ ![ASC portal diagnose problems](media/spring-cloud-quickstart-setup-config-server/setup-config-server-diagnose.png) ](media/spring-cloud-quickstart-setup-config-server/setup-config-server-diagnose.png)
+ :::image type="content" source="media/spring-cloud-quickstart-setup-config-server/setup-config-server-diagnose.png" alt-text="Screenshot of Azure portal showing Diagnose and solve problems page with Config Server button highlighted." lightbox="media/spring-cloud-quickstart-setup-config-server/setup-config-server-diagnose.png":::
1. Select **Config Server Health Check**.
- [ ![ASC portal genie](media/spring-cloud-quickstart-setup-config-server/setup-config-server-genie.png) ](media/spring-cloud-quickstart-setup-config-server/setup-config-server-genie.png)
+ :::image type="content" source="media/spring-cloud-quickstart-setup-config-server/setup-config-server-genie.png" alt-text="Screenshot of Azure portal showing Diagnose and solve problems page and the Availability and Performance tab." lightbox="media/spring-cloud-quickstart-setup-config-server/setup-config-server-genie.png":::
1. Select **Config Server Status** to see more details from the detector.
- [ ![ASC portal health status](media/spring-cloud-quickstart-setup-config-server/setup-config-server-health-status.png) ](media/spring-cloud-quickstart-setup-config-server/setup-config-server-health-status.png)
+ :::image type="content" source="media/spring-cloud-quickstart-setup-config-server/setup-config-server-health-status.png" alt-text="Screenshot of Azure portal showing Diagnose and solve problems page with Config Server Health Status highlighted." lightbox="media/spring-cloud-quickstart-setup-config-server/setup-config-server-health-status.png":::
## Clean up resources
echo "Press [ENTER] to continue ..."
## Next steps > [!div class="nextstepaction"]
-> [Quickstart: Build and deploy apps to Azure Spring Cloud](quickstart-deploy-apps.md)
+> [Quickstart: Build and deploy apps to Azure Spring Apps](quickstart-deploy-apps.md)
spring-cloud Quickstart Setup Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quickstart-setup-log-analytics.md
Title: "Quickstart - Set up a Log Analytics workspace in Azure Spring Cloud"
+ Title: "Quickstart - Set up a Log Analytics workspace in Azure Spring Apps"
description: This article describes the setup of a Log Analytics workspace for app deployment. Last updated 12/09/2021-+ ms.devlang: azurecli # Quickstart: Set up a Log Analytics workspace
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+ **This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This quickstart explains how to set up a Log Analytics workspace in Azure Spring Cloud for application development.
+This quickstart explains how to set up a Log Analytics workspace in Azure Spring Apps for application development.
Log Analytics is a tool in the Azure portal that's used to edit and run log queries with data in Azure Monitor Logs. You can write a query that returns a set of records and then use features of Log Analytics to sort, filter, and analyze those records. You can also write a more advanced query to do statistical analysis and visualize the results in a chart to identify particular trends. Whether you work with the results of your queries interactively or use them with other Azure Monitor features, Log Analytics is the tool that you use to write and test queries.
-You can set up Azure Monitor Logs for your application in Azure Spring Cloud to collect logs and run log queries via Log Analytics.
+You can set up Azure Monitor Logs for your application in Azure Spring Apps to collect logs and run log queries via Log Analytics.
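Once a workspace is attached, you can also run log queries from the command line instead of the portal. The following example is a hedged sketch; the table name and workspace placeholder are assumptions for illustration:

```azurecli
# Query the most recent application console logs from the attached workspace (sketch).
az monitor log-analytics query \
    --workspace <log-analytics-workspace-guid> \
    --analytics-query "AppPlatformLogsforSpring | sort by TimeGenerated desc | take 10"
```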
## Prerequisites
-Complete the previous quickstart in this series: [Provision an Azure Spring Cloud service](./quickstart-provision-service-instance.md).
+Complete the previous quickstart in this series: [Provision an Azure Spring Apps service](./quickstart-provision-service-instance.md).
#### [Portal](#tab/Azure-Portal)
To create a workspace, follow the steps in [Create a Log Analytics workspace in
## Set up Log Analytics for a new service
-In the wizard for creating an Azure Spring Cloud service instance, you can configure the **Log Analytics workspace** field with an existing workspace or create one.
+In the wizard for creating an Azure Spring Apps service instance, you can configure the **Log Analytics workspace** field with an existing workspace or create one.
:::image type="content" source="media/spring-cloud-quickstart-setup-log-analytics/setup-diagnostics-setting.png" alt-text="Screenshot that shows where to configure diagnostic settings during provisioning." lightbox="media/spring-cloud-quickstart-setup-log-analytics/setup-diagnostics-setting.png":::
Setting up for a new service isn't applicable when you're using the Azure CLI.
## Set up Log Analytics for an existing service
-1. Get the instance ID for the Azure Spring Cloud service:
+1. Get the instance ID for the Azure Spring Apps service:
```azurecli
- az spring-cloud show \
+ az spring show \
    --name <spring-cloud-service-name> \
    --resource-group <your-resource-group> \
    --query id --output tsv
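    # The step below is a sketch that isn't part of the original quickstart: with the instance ID
    # captured, you would typically attach the Log Analytics workspace by creating a diagnostic
    # setting. The setting name, log category list, and placeholders are illustrative assumptions.
    az monitor diagnostic-settings create \
        --name send-logs-to-workspace \
        --resource <spring-apps-resource-id> \
        --workspace <log-analytics-workspace-resource-id> \
        --logs '[{"category":"ApplicationConsole","enabled":true}]'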
echo "Press [ENTER] to continue ..."
## Next steps > [!div class="nextstepaction"]
-> [Quickstart: Monitoring Azure Spring Cloud apps with logs, metrics, and tracing](./quickstart-logs-metrics-tracing.md)
+> [Quickstart: Monitoring Azure Spring Apps apps with logs, metrics, and tracing](./quickstart-logs-metrics-tracing.md)
spring-cloud Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quickstart.md
Title: "Quickstart - Deploy your first application to Azure Spring Cloud"
-description: In this quickstart, we deploy an application to Azure Spring Cloud.
+ Title: "Quickstart - Deploy your first application to Azure Spring Apps"
+description: In this quickstart, we deploy an application to Azure Spring Apps.
Last updated 10/18/2021 -+ zone_pivot_groups: programming-languages-spring-cloud
-# Quickstart: Deploy your first application to Azure Spring Cloud
+# Quickstart: Deploy your first application to Azure Spring Apps
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier

::: zone pivot="programming-language-csharp"
-This quickstart explains how to deploy a small application to run on Azure Spring Cloud.
+This quickstart explains how to deploy a small application to run on Azure Spring Apps.
>[!NOTE]
-> Steeltoe support for Azure Spring Cloud is currently offered as a public preview. Public preview offerings allow customers to experiment with new features prior to their official release. Public preview features and services aren't meant for production use. For more information about support during previews, see the [FAQ](https://azure.microsoft.com/support/faq/) or file a [Support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
+> Steeltoe support for Azure Spring Apps is currently offered as a public preview. Public preview offerings allow customers to experiment with new features prior to their official release. Public preview features and services aren't meant for production use. For more information about support during previews, see the [FAQ](https://azure.microsoft.com/support/faq/) or file a [Support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
By following this quickstart, you'll learn how to: > [!div class="checklist"] > * Generate a basic Steeltoe .NET Core project
-> * Provision an Azure Spring Cloud service instance
+> * Provision an Azure Spring Apps service instance
> * Build and deploy the app with a public endpoint
> * Stream logs in real time
The application code used in this quickstart is a simple app built with a .NET C
## Prerequisites * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* [.NET Core 3.1 SDK](https://dotnet.microsoft.com/download/dotnet-core/3.1). The Azure Spring Cloud service supports .NET Core 3.1 and later versions.
+* [.NET Core 3.1 SDK](https://dotnet.microsoft.com/download/dotnet-core/3.1). The Azure Spring Apps service supports .NET Core 3.1 and later versions.
* [Azure CLI version 2.0.67 or later](/cli/azure/install-azure-cli). * [Git](https://git-scm.com/).
Verify that your Azure CLI version is 2.0.67 or later:
az --version ```
-Install the Azure Spring Cloud extension for the Azure CLI using the following command:
+Install the Azure Spring Apps extension for the Azure CLI using the following command:
```azurecli
-az extension add --name spring-cloud
+az extension add --name spring
```

## Sign in to Azure
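A typical sign-in sequence looks like the following sketch; the subscription placeholder is an assumption:

```azurecli
# Sign in interactively, then select the subscription to use for this quickstart (sketch).
az login
az account set --subscription <subscription-id>
```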
In Visual Studio, create an ASP.NET Core Web application named as "hello-world"
</Target> ```
- The packages are for Steeltoe Service Discovery and the Azure Spring Cloud client library. The `Zip` task is for deployment to Azure. When you run the `dotnet publish` command, it generates the binaries in the *publish* folder, and this task zips the *publish* folder into a *.zip* file that you upload to Azure.
+ The packages are for Steeltoe Service Discovery and the Azure Spring Apps client library. The `Zip` task is for deployment to Azure. When you run the `dotnet publish` command, it generates the binaries in the *publish* folder, and this task zips the *publish* folder into a *.zip* file that you upload to Azure.
-1. In the *Program.cs* file, add a `using` directive and code that uses the Azure Spring Cloud client library:
+1. In the *Program.cs* file, add a `using` directive and code that uses the Azure Spring Apps client library:
```csharp
using Microsoft.Azure.SpringCloud.Client;
In Visual Studio, create an ASP.NET Core Web application named as "hello-world"
## Provision a service instance
-The following procedure creates an instance of Azure Spring Cloud using the Azure portal.
+The following procedure creates an instance of Azure Spring Apps using the Azure portal.
1. Open the [Azure portal](https://portal.azure.com/).
-1. From the top search box, search for *Azure Spring Cloud*.
+1. From the top search box, search for *Azure Spring Apps*.
-1. Select *Azure Spring Cloud* from the results.
+1. Select *Azure Spring Apps* from the results.
- ![ASC icon start](media/spring-cloud-quickstart-launch-app-portal/find-spring-cloud-start.png)
+ :::image type="content" source="media/spring-cloud-quickstart-launch-app-portal/find-spring-cloud-start.png" alt-text="Screenshot of Azure portal showing Azure Spring Apps service in search results.":::
-1. On the Azure Spring Cloud page, select **Create**.
+1. On the Azure Spring Apps page, select **Create**.
- ![ASC icon add](media/spring-cloud-quickstart-launch-app-portal/spring-cloud-create.png)
+ :::image type="content" source="media/spring-cloud-quickstart-launch-app-portal/spring-cloud-create.png" alt-text="Screenshot of Azure portal showing Azure Spring Apps resource with Create button highlighted.":::
-1. Fill out the form on the Azure Spring Cloud **Create** page. Consider the following guidelines:
+1. Fill out the form on the Azure Spring Apps **Create** page. Consider the following guidelines:
* **Subscription**: Select the subscription you want to be billed for this resource.
* **Resource group**: Create a new resource group. The name you enter here will be used in later steps as **\<resource group name\>**.
* **Service Details/Name**: Specify the **\<service instance name\>**. The name must be between 4 and 32 characters long and can contain only lowercase letters, numbers, and hyphens. The first character of the service name must be a letter and the last character must be either a letter or a number.
* **Region**: Select the region for your service instance.
- ![ASC portal start](media/spring-cloud-quickstart-launch-app-portal/portal-start.png)
+ :::image type="content" source="media/spring-cloud-quickstart-launch-app-portal/portal-start.png" alt-text="Screenshot of Azure portal showing Azure Spring Apps Create page.":::
1. Select **Review and create**.
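If you prefer to script the provisioning instead of using the portal, an equivalent CLI sketch looks like the following; the resource names, location, and SKU are placeholder assumptions:

```azurecli
# Create a resource group and a Basic-tier Azure Spring Apps service instance (sketch).
az group create --name <resource group name> --location eastus
az spring create \
    --name <service instance name> \
    --resource-group <resource group name> \
    --sku Basic
```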
The following procedure builds and deploys the project that you created earlier.
dotnet publish -c release -o ./publish ```
-1. Create an app in your Azure Spring Cloud instance with a public endpoint assigned. Use the same application name "hello-world" that you specified in *appsettings.json*.
+1. Create an app in your Azure Spring Apps instance with a public endpoint assigned. Use the same application name "hello-world" that you specified in *appsettings.json*.
```azurecli
- az spring-cloud app create -n hello-world -s <service instance name> -g <resource group name> --assign-endpoint --runtime-version NetCore_31
+ az spring app create -n hello-world -s <service instance name> -g <resource group name> --assign-endpoint --runtime-version NetCore_31
``` 1. Deploy the *.zip* file to the app. ```azurecli
- az spring-cloud app deploy -n hello-world -s <service instance name> -g <resource group name> --runtime-version NetCore_31 --main-entry hello-world.dll --artifact-path ./deploy.zip
+ az spring app deploy -n hello-world -s <service instance name> -g <resource group name> --runtime-version NetCore_31 --main-entry hello-world.dll --artifact-path ./deploy.zip
``` The `--main-entry` option identifies the *.dll* file that contains the application's entry point. After the service uploads the *.zip* file, it extracts all the files and folders and tries to execute the entry point in the *.dll* file specified by `--main-entry`.
The app returns JSON data similar to the following example:
Use the following command to get real-time logs from the App. ```azurecli
-az spring-cloud app logs -n hello-world -s <service instance name> -g <resource group name> --lines 100 -f
+az spring app logs -n hello-world -s <service instance name> -g <resource group name> --lines 100 -f
``` Logs appear in the output: ```output
-[Azure Spring Cloud] The following environment variables are loaded:
+[Azure Spring Apps] The following environment variables are loaded:
2020-09-08 20:58:42,432 INFO supervisord started with pid 1
2020-09-08 20:58:43,435 INFO spawned: 'event-gather_00' with pid 9
2020-09-08 20:58:43,436 INFO spawned: 'dotnet-app_00' with pid 10
info: Microsoft.Hosting.Lifetime[0]
Content root path: /netcorepublish/6e4db42a-b160-4b83-a771-c91adec18c60 2020-09-08 21:00:13 [Information] [10] Start listening... info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
- Request starting HTTP/1.1 GET http://asc-svc-hello-world.azuremicroservices.io/weatherforecast
+ Request starting HTTP/1.1 GET http://asa-svc-hello-world.azuremicroservices.io/weatherforecast
info: Microsoft.AspNetCore.Routing.EndpointMiddleware[0] Executing endpoint 'hello_world.Controllers.WeatherForecastController.Get (hello-world)' info: Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker[3]
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
``` > [!TIP]
-> Use `az spring-cloud app logs -h` to explore more parameters and log stream functionalities.
+> Use `az spring app logs -h` to explore more parameters and log stream functionalities.
For advanced log analytics features, visit **Logs** tab in the menu on the [Azure portal](https://portal.azure.com/). Logs here have a latency of a few minutes.
-[ ![Logs Analytics](media/spring-cloud-quickstart-java/logs-analytics.png) ](media/spring-cloud-quickstart-java/logs-analytics.png#lightbox)
++ ::: zone-end ::: zone pivot="programming-language-java"
-This quickstart explains how to deploy a small application to Azure Spring Cloud.
+This quickstart explains how to deploy a small application to Azure Spring Apps.
The application code used in this tutorial is a simple app built with Spring Initializr. When you've completed this example, the application will be accessible online and can be managed via the Azure portal. This quickstart explains how to: > [!div class="checklist"]
-> * Generate a basic Spring Cloud project
+> * Generate a basic Spring project
> * Provision a service instance
> * Build and deploy the app with a public endpoint
> * Stream logs in real time
To complete this quickstart:
* [Install JDK 8 or JDK 11](/java/azure/jdk/) * [Sign up for an Azure subscription](https://azure.microsoft.com/free/)
-* (Optional) [Install the Azure CLI version 2.0.67 or higher](/cli/azure/install-azure-cli) and the Azure Spring Cloud extension with the command: `az extension add --name spring-cloud`
+* (Optional) [Install the Azure CLI version 2.0.67 or higher](/cli/azure/install-azure-cli) and the Azure Spring Apps extension with the command: `az extension add --name spring`
* (Optional) [Install IntelliJ IDEA](https://www.jetbrains.com/idea/)
* (Optional) [Install the Azure Toolkit for IntelliJ](https://plugins.jetbrains.com/plugin/8053-azure-toolkit-for-intellij/) and [sign-in](/azure/developer/java/toolkit-for-intellij/create-hello-world-web-app#installation-and-sign-in)
* (Optional) [Install Maven](https://maven.apache.org/guides/getting-started/maven-in-five-minutes.html). If you use the Azure Cloud Shell, this installation isn't needed.
-## Generate a Spring Cloud project
+## Generate a Spring project
-Start with [Spring Initializr](https://start.spring.io/#!type=maven-project&language=java&platformVersion=2.5.7&packaging=jar&jvmVersion=1.8&groupId=com.example&artifactId=hellospring&name=hellospring&description=Demo%20project%20for%20Spring%20Boot&packageName=com.example.hellospring&dependencies=web,cloud-eureka,actuator,cloud-config-client) to generate a sample project with recommended dependencies for Azure Spring Cloud. This link uses the following URL to provide default settings for you.
+Start with [Spring Initializr](https://start.spring.io/#!type=maven-project&language=java&platformVersion=2.5.7&packaging=jar&jvmVersion=1.8&groupId=com.example&artifactId=hellospring&name=hellospring&description=Demo%20project%20for%20Spring%20Boot&packageName=com.example.hellospring&dependencies=web,cloud-eureka,actuator,cloud-config-client) to generate a sample project with recommended dependencies for Azure Spring Apps. This link uses the following URL to provide default settings for you.
```url https://start.spring.io/#!type=maven-project&language=java&platformVersion=2.5.7&packaging=jar&jvmVersion=1.8&groupId=com.example&artifactId=hellospring&name=hellospring&description=Demo%20project%20for%20Spring%20Boot&packageName=com.example.hellospring&dependencies=web,cloud-eureka,actuator,cloud-config-client
The following image shows the recommended Initializr set up for this sample proj
This example uses Java version 8. If you want to use Java version 11, change the option under **Project Metadata**.
-![Initializr page](media/spring-cloud-quickstart-java/initializr-page.png)
1. Select **Generate** when all the dependencies are set.
1. Download and unpack the package, then create a web controller for a simple web application by adding the file *src/main/java/com/example/hellospring/HelloController.java* with the following contents:
This example uses Java version 8. If you want to use Java version 11, change th
@RequestMapping("/") public String index() {
- return "Greetings from Azure Spring Cloud!";
+ return "Greetings from Azure Spring Apps!";
} } ```
-## Provision an instance of Azure Spring Cloud
+## Provision an instance of Azure Spring Apps
-The following procedure creates an instance of Azure Spring Cloud using the Azure portal.
+The following procedure creates an instance of Azure Spring Apps using the Azure portal.
1. In a new tab, open the [Azure portal](https://portal.azure.com/).
-2. From the top search box, search for **Azure Spring Cloud**.
+2. From the top search box, search for **Azure Spring Apps**.
-3. Select **Azure Spring Cloud** from the results.
+3. Select **Azure Spring Apps** from the results.
- ![ASC icon start](media/spring-cloud-quickstart-launch-app-portal/find-spring-cloud-start.png)
+ :::image type="content" source="media/spring-cloud-quickstart-launch-app-portal/find-spring-cloud-start.png" alt-text="Screenshot of Azure portal showing Azure Spring Apps service in search results.":::
-4. On the Azure Spring Cloud page, select **Create**.
+4. On the Azure Spring Apps page, select **Create**.
- ![ASC icon add](media/spring-cloud-quickstart-launch-app-portal/spring-cloud-create.png)
+ :::image type="content" source="media/spring-cloud-quickstart-launch-app-portal/spring-cloud-create.png" alt-text="Screenshot of Azure portal showing Azure Spring Apps resource with Create button highlighted.":::
-5. Fill out the form on the Azure Spring Cloud **Create** page. Consider the following guidelines:
+5. Fill out the form on the Azure Spring Apps **Create** page. Consider the following guidelines:
- **Subscription**: Select the subscription you want to be billed for this resource.
- **Resource group**: Creating new resource groups for new resources is a best practice. You will use this resource group in later steps as **\<resource group name\>**.
- **Service Details/Name**: Specify the **\<service instance name\>**. The name must be between 4 and 32 characters long and can contain only lowercase letters, numbers, and hyphens. The first character of the service name must be a letter and the last character must be either a letter or a number.
- **Location**: Select the region for your service instance.
- ![ASC portal start](media/spring-cloud-quickstart-launch-app-portal/portal-start.png)
+ :::image type="content" source="media/spring-cloud-quickstart-launch-app-portal/portal-start.png" alt-text="Screenshot of Azure portal showing Azure Spring Apps Create page.":::
6. Select **Review and create**.
The following procedure builds and deploys the application using the Azure CLI.
mvn clean package -DskipTests ```
-1. Create the app with a public endpoint assigned. If you selected Java version 11 when generating the Spring Cloud project, include the `--runtime-version=Java_11` switch.
+1. Create the app with a public endpoint assigned. If you selected Java version 11 when generating the Spring project, include the `--runtime-version=Java_11` switch.
```azurecli
- az spring-cloud app create -n hellospring -s <service instance name> -g <resource group name> --assign-endpoint true
+ az spring app create -n hellospring -s <service instance name> -g <resource group name> --assign-endpoint true
``` 1. Deploy the Jar file for the app (`target\hellospring-0.0.1-SNAPSHOT.jar` on Windows): ```azurecli
- az spring-cloud app deploy -n hellospring -s <service instance name> -g <resource group name> --artifact-path <jar file path>/hellospring-0.0.1-SNAPSHOT.jar
+ az spring app deploy -n hellospring -s <service instance name> -g <resource group name> --artifact-path <jar file path>/hellospring-0.0.1-SNAPSHOT.jar
```

1. It takes a few minutes to finish deploying the application. To confirm that it has deployed, go to the **Apps** section in the Azure portal. You should see the status of the application.

#### [IntelliJ](#tab/IntelliJ)
-The following procedure uses the IntelliJ plug-in for Azure Spring Cloud to deploy the sample app in IntelliJ IDEA.
+The following procedure uses the IntelliJ plug-in for Azure Spring Apps to deploy the sample app in IntelliJ IDEA.
### Import project

1. Open the IntelliJ **Welcome** dialog, then select **Open** to open the import wizard.
1. Select the **hellospring** folder.
- ![Import Project](media/spring-cloud-quickstart-java/intellij-new-project.png)
+ :::image type="content" source="media/spring-cloud-quickstart-java/intellij-new-project.png" alt-text="Screenshot of IntelliJ IDEA showing Open File or Project dialog box.":::
### Deploy the app

In order to deploy to Azure, you must sign in with your Azure account, then choose your subscription. For sign-in details, see [Installation and sign-in](/azure/developer/java/toolkit-for-intellij/create-hello-world-web-app#installation-and-sign-in).
-1. Right-click your project in IntelliJ project explorer, then select **Azure** -> **Deploy to Azure Spring Cloud**.
+1. Right-click your project in IntelliJ project explorer, then select **Azure** -> **Deploy to Azure Spring Apps**.
- [ ![Where to deploy your project to Azure](media/spring-cloud-quickstart-java/intellij-deploy-azure-1.png) ](media/spring-cloud-quickstart-java/intellij-deploy-azure-1.png#lightbox)
+ :::image type="content" source="media/spring-cloud-quickstart-java/intellij-deploy-azure-1.png" alt-text="Screenshot of IntelliJ IDEA menu showing Deploy to Azure Spring Apps option." lightbox="media/spring-cloud-quickstart-java/intellij-deploy-azure-1.png":::
1. Accept the name for the app in the **Name** field. **Name** refers to the configuration, not the app name. Users don't usually need to change it.
1. In the **Artifact** textbox, select **Maven:com.example:hellospring-0.0.1-SNAPSHOT**.
1. In the **Subscription** textbox, verify your subscription is correct.
-1. In the **Service** textbox, select the instance of Azure Spring Cloud that you created in [Provision an instance of Azure Spring Cloud](./quickstart-provision-service-instance.md).
+1. In the **Service** textbox, select the instance of Azure Spring Apps that you created in [Provision an instance of Azure Spring Apps](./quickstart-provision-service-instance.md).
1. In the **App** textbox, select **+** to create a new app.
- ![Where to select to create a new IntelliJ app](media/spring-cloud-quickstart-java/intellij-create-new-app.png)
+ :::image type="content" source="media/spring-cloud-quickstart-java/intellij-create-new-app.png" alt-text="Screenshot of IntelliJ IDEA showing Deploy Azure Spring Apps dialog box.":::
1. In the **App name:** textbox, enter *hellospring*, then check the **More settings** check box.
1. Select the **Enable** button next to **Public endpoint**. The button will change to *Disable \<to be enabled\>*.
1. If you used Java 11, select **Java 11** in **Runtime**.
1. Select **OK**.
- ![How enable public endpoint should look after selecting](media/spring-cloud-quickstart-java/intellij-create-new-app-2.png)
+ :::image type="content" source="media/spring-cloud-quickstart-java/intellij-create-new-app-2.png" alt-text="Screenshot of IntelliJ IDEA Create Azure Spring Apps dialog box with public endpoint Disable button highlighted.":::
1. Under **Before launch**, select the **Run Maven Goal 'hellospring:package'** line, then select the pencil to edit the command line.
- ![Edit the Maven Goal](media/spring-cloud-quickstart-java/intellij-edit-maven-goal.png)
+ :::image type="content" source="media/spring-cloud-quickstart-java/intellij-edit-maven-goal.png" alt-text="Screenshot of IntelliJ IDEA Create Azure Spring Apps dialog box with Maven Goal edit button highlighted.":::
1. In the **Command line** textbox, enter *-DskipTests* after *package*, then select **OK**.
- ![Deploy to Azure OK](media/spring-cloud-quickstart-java/intellij-maven-goal-command-line.png)
+ :::image type="content" source="media/spring-cloud-quickstart-java/intellij-maven-goal-command-line.png" alt-text="Screenshot of IntelliJ IDEA Select Maven Goal dialog box with Command Line value highlighted.":::
-1. Start the deployment by selecting the **Run** button at the bottom of the **Deploy Azure Spring Cloud app** dialog. The plug-in will run the command `mvn package -DskipTests` on the `hellospring` app and deploy the jar generated by the `package` command.
+1. Start the deployment by selecting the **Run** button at the bottom of the **Deploy Azure Spring Apps app** dialog. The plug-in will run the command `mvn package -DskipTests` on the `hellospring` app and deploy the jar generated by the `package` command.
#### [Visual Studio Code](#tab/VS-Code)
-To deploy a simple Spring Boot web app to Azure Spring Cloud, follow the steps in [Build and Deploy Java Spring Boot Apps to Azure Spring Cloud with Visual Studio Code](https://code.visualstudio.com/docs/java/java-spring-cloud#_download-and-test-the-spring-boot-app).
+To deploy a simple Spring Boot web app to Azure Spring Apps, follow the steps in [Build and Deploy Java Spring Boot Apps to Azure Spring Apps with Visual Studio Code](https://code.visualstudio.com/docs/java/java-spring-cloud#_download-and-test-the-spring-boot-app).
Once deployment has completed, you can access the app at `https://<service instance name>-hellospring.azuremicroservices.io/`.
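You can also verify the endpoint from a terminal, for example with the following sketch that reuses the same placeholder host name:

```azurecli
# Request the home page of the deployed app; it should return the greeting text (sketch).
curl https://<service instance name>-hellospring.azuremicroservices.io/
```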
-[![Access app from browser](media/spring-cloud-quickstart-java/access-app-browser.png)](media/spring-cloud-quickstart-java/access-app-browser.png#lightbox)
## Streaming logs in real time
Once deployment has completed, you can access the app at `https://<service insta
Use the following command to get real-time logs from the App. ```azurecli
-az spring-cloud app logs -n hellospring -s <service instance name> -g <resource group name> --lines 100 -f
+az spring app logs -n hellospring -s <service instance name> -g <resource group name> --lines 100 -f
``` Logs appear in the results:
-[ ![Streaming Logs](media/spring-cloud-quickstart-java/streaming-logs.png) ](media/spring-cloud-quickstart-java/streaming-logs.png#lightbox)
>[!TIP]
-> Use `az spring-cloud app logs -h` to explore more parameters and log stream functionalities.
+> Use `az spring app logs -h` to explore more parameters and log stream functionalities.
#### [IntelliJ](#tab/IntelliJ)
Logs appear in the results:
1. Select **Streaming Logs** from the drop-down list. 1. Select instance.
- [![Select streaming logs](media/spring-cloud-quickstart-java/intellij-get-streaming-logs.png)](media/spring-cloud-quickstart-java/intellij-get-streaming-logs.png)
+ :::image type="content" source="media/spring-cloud-quickstart-java/intellij-get-streaming-logs.png" alt-text="Screenshot of IntelliJ IDEA showing Select instance dialog box." lightbox="media/spring-cloud-quickstart-java/intellij-get-streaming-logs.png":::
1. The streaming log will be visible in the output window.
- [![Streaming log output](media/spring-cloud-quickstart-java/intellij-streaming-logs-output.png)](media/spring-cloud-quickstart-java/intellij-streaming-logs-output.png)
+ :::image type="content" source="media/spring-cloud-quickstart-java/intellij-streaming-logs-output.png" alt-text="Screenshot of IntelliJ IDEA showing streaming log output." lightbox="media/spring-cloud-quickstart-java/intellij-streaming-logs-output.png":::
#### [Visual Studio Code](#tab/VS-Code)
To get real-time application logs with Visual Studio Code, follow the steps in [
For advanced logs analytics features, visit the **Logs** tab in the menu on the [Azure portal](https://portal.azure.com/). Logs here have a latency of a few minutes.
-[![Logs Analytics](media/spring-cloud-quickstart-java/logs-analytics.png)](media/spring-cloud-quickstart-java/logs-analytics.png#lightbox)
::: zone-end
echo "Press [ENTER] to continue ..."
In this quickstart, you learned how to: > [!div class="checklist"]
-> * Generate a basic Spring Cloud project
+> * Generate a basic Spring project
> * Provision a service instance
> * Build and deploy the app with a public endpoint
> * Stream logs in real time
-To learn how to use more Azure Spring capabilities, advance to the quickstart series that deploys a sample application to Azure Spring Cloud:
+To learn how to use more Azure Spring capabilities, advance to the quickstart series that deploys a sample application to Azure Spring Apps:
> [!div class="nextstepaction"] > [Introduction to the sample app](./quickstart-sample-app-introduction.md)
-More samples are available on GitHub: [Azure Spring Cloud Samples](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples).
+More samples are available on GitHub: [Azure Spring Apps Samples](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples).
spring-cloud Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quotas.md
Title: Service plans and quotas for Azure Spring Cloud
-description: Learn about service quotas and service plans for Azure Spring Cloud
+ Title: Service plans and quotas for Azure Spring Apps
+description: Learn about service quotas and service plans for Azure Spring Apps
Last updated 11/04/2019 -+
-# Quotas and service plans for Azure Spring Cloud
+# Quotas and service plans for Azure Spring Apps
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Java ✔️ C#

**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-All Azure services set default limits and quotas for resources and features. Azure Spring Cloud offers two pricing tiers: Basic and Standard. We will detail limits for both tiers in this article.
+All Azure services set default limits and quotas for resources and features. Azure Spring Apps offers Basic, Standard, and Enterprise tiers. This article details the limits for these tiers.
-## Azure Spring Cloud service tiers and limits
+## Azure Spring Apps service tiers and limits
| Resource | Scope | Basic | Standard/Enterprise |
|--|--|--|--|
| vCPU | per app instance | 1 | 4 |
| Memory | per app instance | 2 GB | 8 GB |
-| Azure Spring Cloud service instances | per region per subscription | 10 | 10 |
-| Total app instances | per Azure Spring Cloud service instance | 25 | 500 |
-| Custom Domains | per Azure Spring Cloud service instance | 0 | 25 |
-| Persistent volumes | per Azure Spring Cloud service instance | 1 GB/app x 10 apps | 50 GB/app x 10 apps |
-| Inbound Public Endpoints | per Azure Spring Cloud service instance | 10 <sup>1</sup> | 10 <sup>1</sup> |
-| Outbound Public IPs | per Azure Spring Cloud service instance | 1 <sup>2</sup> | 2 <sup>2</sup> <br> 1 if using VNet<sup>2</sup> |
+| Azure Spring Apps service instances | per region per subscription | 10 | 10 |
+| Total app instances | per Azure Spring Apps service instance | 25 | 500 |
+| Custom Domains | per Azure Spring Apps service instance | 0 | 25 |
+| Persistent volumes | per Azure Spring Apps service instance | 1 GB/app x 10 apps | 50 GB/app x 10 apps |
+| Inbound Public Endpoints | per Azure Spring Apps service instance | 10 <sup>1</sup> | 10 <sup>1</sup> |
+| Outbound Public IPs | per Azure Spring Apps service instance | 1 <sup>2</sup> | 2 <sup>2</sup> <br> 1 if using VNet<sup>2</sup> |
| User-assigned managed identities | per app instance | 20 | 20 |

<sup>1</sup> You can increase this limit via support request to a maximum of 1 per app.
spring-cloud Reference Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/reference-architecture.md
Last updated 02/16/2021 Title: Azure Spring Cloud reference architecture
+ Title: Azure Spring Apps reference architecture
-description: This reference architecture is a foundation using a typical enterprise hub and spoke design for the use of Azure Spring Cloud.
+
+description: This reference architecture is a foundation using a typical enterprise hub and spoke design for the use of Azure Spring Apps.
-# Azure Spring Cloud reference architecture
+# Azure Spring Apps reference architecture
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This reference architecture is a foundation using a typical enterprise hub and spoke design for the use of Azure Spring Cloud. In the design, Azure Spring Cloud is deployed in a single spoke that's dependent on shared services hosted in the hub. The architecture is built with components to achieve the tenets in the [Microsoft Azure Well-Architected Framework][16].
+This reference architecture is a foundation using a typical enterprise hub and spoke design for the use of Azure Spring Apps. In the design, Azure Spring Apps is deployed in a single spoke that's dependent on shared services hosted in the hub. The architecture is built with components to achieve the tenets in the [Microsoft Azure Well-Architected Framework][16].
-For an implementation of this architecture, see the [Azure Spring Cloud Reference Architecture][10] repository on GitHub.
+For an implementation of this architecture, see the [Azure Spring Apps Reference Architecture][10] repository on GitHub.
Deployment options for this architecture include Azure Resource Manager (ARM), Terraform, Azure CLI, and Bicep. The artifacts in this repository provide a foundation that you can customize for your environment. You can group resources such as Azure Firewall or Application Gateway into different resource groups or subscriptions. This grouping helps keep different functions separate, such as IT infrastructure, security, business application teams, and so on. ## Planning the address space
-Azure Spring Cloud requires two dedicated subnets:
+Azure Spring Apps requires two dedicated subnets:
* Service runtime
* Spring Boot applications
-Each of these subnets requires a dedicated Azure Spring Cloud cluster. Multiple clusters can't share the same subnets. The minimum size of each subnet is /28. The number of application instances that Azure Spring Cloud can support varies based on the size of the subnet. You can find the detailed Virtual Network (VNET) requirements in the [Virtual network requirements][11] section of [Deploy Azure Spring Cloud in a virtual network][17].
+Each of these subnets requires a dedicated Azure Spring Apps cluster. Multiple clusters can't share the same subnets. The minimum size of each subnet is /28. The number of application instances that Azure Spring Apps can support varies based on the size of the subnet. You can find the detailed Virtual Network (VNET) requirements in the [Virtual network requirements][11] section of [Deploy Azure Spring Apps in a virtual network][17].
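As an illustration of this layout, the following sketch creates a spoke virtual network with the two dedicated subnets. The names and address ranges are placeholder assumptions; size the subnets for your expected number of application instances:

```azurecli
# Create a spoke VNet with one subnet for the service runtime and one for the Spring Boot apps (sketch).
az network vnet create \
    --name spring-apps-spoke-vnet \
    --resource-group <resource group name> \
    --address-prefixes 10.1.0.0/16
az network vnet subnet create \
    --name service-runtime-subnet \
    --vnet-name spring-apps-spoke-vnet \
    --resource-group <resource group name> \
    --address-prefixes 10.1.0.0/24
az network vnet subnet create \
    --name apps-subnet \
    --vnet-name spring-apps-spoke-vnet \
    --resource-group <resource group name> \
    --address-prefixes 10.1.1.0/24
```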
> [!WARNING]
> The selected subnet size can't overlap with the existing VNET address space, and shouldn't overlap with any peered or on-premises subnet address ranges.
These use cases are similar except for their security and network traffic rules.
The following list describes the infrastructure requirements for private applications. These requirements are typical in highly regulated environments.
-* A subnet must only have one instance of Azure Spring Cloud.
+* A subnet must only have one instance of Azure Spring Apps.
* Adherence to at least one Security Benchmark should be enforced.
* Application host Domain Name Service (DNS) records should be stored in Azure Private DNS.
* Azure service dependencies should communicate through Service Endpoints or Private Link.
* Data at rest should be encrypted.
* Data in transit should be encrypted.
-* DevOps deployment pipelines can be used (for example, Azure DevOps) and require network connectivity to Azure Spring Cloud.
+* DevOps deployment pipelines can be used (for example, Azure DevOps) and require network connectivity to Azure Spring Apps.
* Egress traffic should travel through a central Network Virtual Appliance (NVA) (for example, Azure Firewall).
-* If [Azure Spring Cloud Config Server][8] is used to load config properties from a repository, the repository must be private.
+* If [Azure Spring Apps Config Server][8] is used to load config properties from a repository, the repository must be private.
* Microsoft's Zero Trust security approach requires secrets, certificates, and credentials to be stored in a secure vault. The recommended service is Azure Key Vault.
* Name resolution of hosts on-premises and in the Cloud should be bidirectional.
* No direct egress to the public Internet except for control plane traffic.
-* Resource Groups managed by the Azure Spring Cloud deployment must not be modified.
-* Subnets managed by the Azure Spring Cloud deployment must not be modified.
+* Resource Groups managed by the Azure Spring Apps deployment must not be modified.
+* Subnets managed by the Azure Spring Apps deployment must not be modified.
The following list shows the components that make up the design:
The following list describes the Azure services in this reference architecture:
* [Azure Monitor][3]: an all-encompassing suite of monitoring services for applications that deploy both in Azure and on-premises.
-* [Azure Pipelines][5]: a fully featured Continuous Integration / Continuous Development (CI/CD) service that can automatically deploy updated Spring Boot apps to Azure Spring Cloud.
+* [Azure Pipelines][5]: a fully featured Continuous Integration / Continuous Development (CI/CD) service that can automatically deploy updated Spring Boot apps to Azure Spring Apps.
* [Microsoft Defender for Cloud][4]: a unified security management and threat protection system for workloads across on-premises, multiple clouds, and Azure.
-* [Azure Spring Cloud][1]: a managed service that's designed and optimized specifically for Java-based Spring Boot applications and .NET-based [Steeltoe][9] applications.
+* [Azure Spring Apps][1]: a managed service that's designed and optimized specifically for Java-based Spring Boot applications and .NET-based [Steeltoe][9] applications.
The following diagram represents a well-architected hub and spoke design that addresses the above requirements:
The following diagram represents a well-architected hub and spoke design that ad
The following list describes the infrastructure requirements for public applications. These requirements are typical in highly regulated environments. These requirements are a superset of those in the preceding section. Additional items are indicated with italics.
-* A subnet must only have one instance of Azure Spring Cloud.
+* A subnet must only have one instance of Azure Spring Apps.
* Adherence to at least one Security Benchmark should be enforced.
* Application host Domain Name Service (DNS) records should be stored in Azure Private DNS.
* _Azure DDoS Protection standard should be enabled._
* Azure service dependencies should communicate through Service Endpoints or Private Link.
* Data at rest should be encrypted.
* Data in transit should be encrypted.
-* DevOps deployment pipelines can be used (for example, Azure DevOps) and require network connectivity to Azure Spring Cloud.
+* DevOps deployment pipelines can be used (for example, Azure DevOps) and require network connectivity to Azure Spring Apps.
* Egress traffic should travel through a central Network Virtual Appliance (NVA) (for example, Azure Firewall).
* _Ingress traffic should be managed by at least Application Gateway or Azure Front Door._
* _Internet routable addresses should be stored in Azure Public DNS._
* Microsoft's Zero Trust security approach requires secrets, certificates, and credentials to be stored in a secure vault. The recommended service is Azure Key Vault.
* Name resolution of hosts on-premises and in the Cloud should be bidirectional.
* No direct egress to the public Internet except for control plane traffic.
-* Resource Groups managed by the Azure Spring Cloud deployment must not be modified.
-* Subnets managed by the Azure Spring Cloud deployment must not be modified.
+* Resource Groups managed by the Azure Spring Apps deployment must not be modified.
+* Subnets managed by the Azure Spring Apps deployment must not be modified.
The following list shows the components that make up the design:
The following list describes the Azure services in this reference architecture:
* [Azure Monitor][3]: an all-encompassing suite of monitoring services for applications that deploy both in Azure and on-premises.
-* [Azure Pipelines][5]: a fully featured Continuous Integration / Continuous Development (CI/CD) service that can automatically deploy updated Spring Boot apps to Azure Spring Cloud.
+* [Azure Pipelines][5]: a fully featured Continuous Integration / Continuous Development (CI/CD) service that can automatically deploy updated Spring Boot apps to Azure Spring Apps.
* [Microsoft Defender for Cloud][4]: a unified security management and threat protection system for workloads across on-premises, multiple clouds, and Azure.
-* [Azure Spring Cloud][1]: a managed service that's designed and optimized specifically for Java-based Spring Boot applications and .NET-based [Steeltoe][9] applications.
+* [Azure Spring Apps][1]: a managed service that's designed and optimized specifically for Java-based Spring Boot applications and .NET-based [Steeltoe][9] applications.
The following diagram represents a well-architected hub and spoke design that addresses the above requirements. Note that only the hub-virtual-network communicates with the internet:

![Reference architecture diagram for public applications](./media/spring-cloud-reference-architecture/architecture-public.png)
-## Azure Spring Cloud on-premises connectivity
+## Azure Spring Apps on-premises connectivity
-Applications in Azure Spring Cloud can communicate to various Azure, on-premises, and external resources. By using the hub and spoke design, applications can route traffic externally or to the on-premises network using Express Route or Site-to-Site Virtual Private Network (VPN).
+Applications in Azure Spring Apps can communicate with various Azure, on-premises, and external resources. By using the hub and spoke design, applications can route traffic externally or to the on-premises network by using ExpressRoute or a site-to-site virtual private network (VPN).
## Azure Well-Architected Framework considerations
The [Azure Well-Architected Framework][16] is a set of guiding tenets to follow
### Cost optimization
-Because of the nature of distributed system design, infrastructure sprawl is a reality. This reality results in unexpected and uncontrollable costs. Azure Spring Cloud is built using components that scale so that it can meet demand and optimize cost. The core of this architecture is the Azure Kubernetes Service (AKS). The service is designed to reduce the complexity and operational overhead of managing Kubernetes, which includes efficiencies in the operational cost of the cluster.
+Because of the nature of distributed system design, infrastructure sprawl is a reality. This reality results in unexpected and uncontrollable costs. Azure Spring Apps is built using components that scale so that it can meet demand and optimize cost. The core of this architecture is the Azure Kubernetes Service (AKS). The service is designed to reduce the complexity and operational overhead of managing Kubernetes, which includes efficiencies in the operational cost of the cluster.
-You can deploy different applications and application types to a single instance of Azure Spring Cloud. The service supports autoscaling of applications triggered by metrics or schedules that can improve utilization and cost efficiency.
+You can deploy different applications and application types to a single instance of Azure Spring Apps. The service supports autoscaling of applications triggered by metrics or schedules that can improve utilization and cost efficiency.
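As an illustration only, the autoscale setting for an app's default deployment can be managed through Azure Monitor. The following Azure CLI sketch uses placeholder resource names, and the deployment resource lookup is an assumption about how you would target the setting; adapt it to your environment.

```azurecli
# Sketch: attach an Azure Monitor autoscale setting to an app's default deployment.
# All names in angle brackets are placeholders.
DEPLOYMENT_ID=$(az spring app deployment show \
    --resource-group <resource-group> \
    --service <service-instance-name> \
    --app <app-name> \
    --name default \
    --query id --output tsv)

az monitor autoscale create \
    --resource-group <resource-group> \
    --resource $DEPLOYMENT_ID \
    --name <autoscale-setting-name> \
    --min-count 1 --max-count 5 --count 1

# Metric-based or schedule-based rules can then be added with `az monitor autoscale rule create`.
```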
You can also use Application Insights and Azure Monitor to lower operational cost. With the visibility provided by the comprehensive logging solution, you can implement automation to scale the components of the system in real time. You can also analyze log data to reveal inefficiencies in the application code that you can address to improve the overall cost and performance of the system.

### Operational excellence
-Azure Spring Cloud addresses multiple aspects of operational excellence. You can combine these aspects to ensure that the service runs efficiently in production environments, as described in the following list:
+Azure Spring Apps addresses multiple aspects of operational excellence. You can combine these aspects to ensure that the service runs efficiently in production environments, as described in the following list:
* You can use Azure Pipelines to ensure that deployments are reliable and consistent while helping you avoid human error.
* You can use Azure Monitor and Application Insights to store log and telemetry data.
- You can assess collected log and metric data to ensure the health and performance of your applications. Application Performance Monitoring (APM) is fully integrated into the service through a Java agent. This agent provides visibility into all the deployed applications and dependencies without requiring extra code. For more information, see the blog post [Effortlessly monitor applications and dependencies in Azure Spring Cloud][15].
+ You can assess collected log and metric data to ensure the health and performance of your applications. Application Performance Monitoring (APM) is fully integrated into the service through a Java agent. This agent provides visibility into all the deployed applications and dependencies without requiring extra code. For more information, see the blog post [Effortlessly monitor applications and dependencies in Azure Spring Apps][15].
* You can use Microsoft Defender for Cloud to ensure that applications maintain security by providing a platform to analyze and assess the data provided.
-* The service supports various deployment patterns. For more information, see [Set up a staging environment in Azure Spring Cloud][14].
+* The service supports various deployment patterns. For more information, see [Set up a staging environment in Azure Spring Apps][14].
### Reliability
-Azure Spring Cloud is built on AKS. While AKS provides a level of resiliency through clustering, this reference architecture goes even further by incorporating services and architectural considerations to increase availability of the application if there's component failure.
+Azure Spring Apps is built on AKS. While AKS provides a level of resiliency through clustering, this reference architecture goes even further by incorporating services and architectural considerations to increase availability of the application if there's component failure.
By building on top of a well-defined hub and spoke design, the foundation of this architecture ensures that you can deploy it to multiple regions. For the private application use case, the architecture uses Azure Private DNS to ensure continued availability during a geographic failure. For the public application use case, Azure Front Door and Azure Application Gateway ensure availability.
The following list shows the CIS controls that address network security in this
| 6.5 | Ensure that Network Watcher is 'Enabled'. |
| 6.6 | Ensure that ingress using UDP is restricted from the internet. |
-Azure Spring Cloud requires management traffic to egress from Azure when deployed in a secured environment. To accomplish this, you must allow the network and application rules listed in [Customer responsibilities for running Azure Spring Cloud in VNET](./vnet-customer-responsibilities.md).
+Azure Spring Apps requires management traffic to egress from Azure when deployed in a secured environment. To accomplish this, you must allow the network and application rules listed in [Customer responsibilities for running Azure Spring Apps in VNET](./vnet-customer-responsibilities.md).
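For illustration, a single Azure Firewall application rule for one of the required FQDNs might look like the sketch below. The firewall name, collection name, source range, and `<required-fqdn>` are placeholders; take the authoritative FQDN and port list from the customer responsibilities article linked above.

```azurecli
# Sketch: allow one required management FQDN through Azure Firewall (placeholder values).
az network firewall application-rule create \
    --resource-group <resource-group> \
    --firewall-name <firewall-name> \
    --collection-name spring-apps-egress \
    --name allow-required-fqdn \
    --priority 100 \
    --action Allow \
    --source-addresses <spring-apps-subnet-cidr> \
    --protocols Https=443 \
    --target-fqdns "<required-fqdn>"
```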
#### Application security
-This design principal covers the fundamental components of identity, data protection, key management, and application configuration. By design, an application deployed in Azure Spring Cloud runs with least privilege required to function. The set of authorization controls is directly related to data protection when using the service. Key management strengthens this layered application security approach.
+This design principle covers the fundamental components of identity, data protection, key management, and application configuration. By design, an application deployed in Azure Spring Apps runs with the least privilege required to function. The set of authorization controls is directly related to data protection when using the service. Key management strengthens this layered application security approach.
The following list shows the CCM controls that address key management in this reference:
The aspects of application security set a foundation for the use of this referen
## Next steps
-Explore this reference architecture through the ARM, Terraform, and Azure CLI deployments available in the [Azure Spring Cloud Reference Architecture][10] repository.
+Explore this reference architecture through the ARM, Terraform, and Azure CLI deployments available in the [Azure Spring Apps Reference Architecture][10] repository.
<!-- Reference links in article -->
[1]: ./index.yml
[9]: https://steeltoe.io/
[10]: https://github.com/Azure/azure-spring-cloud-reference-architecture
[11]: ./how-to-deploy-in-azure-virtual-network.md#virtual-network-requirements
-[12]: ./vnet-customer-responsibilities.md#azure-spring-cloud-network-requirements
-[13]: ./vnet-customer-responsibilities.md#azure-spring-cloud-fqdn-requirements--application-rules
+[12]: ./vnet-customer-responsibilities.md#azure-spring-apps-network-requirements
+[13]: ./vnet-customer-responsibilities.md#azure-spring-apps-fqdn-requirements--application-rules
[14]: ./how-to-staging-environment.md
[15]: https://devblogs.microsoft.com/java/monitor-applications-and-dependencies-in-azure-spring-cloud/
[16]: /azure/architecture/framework/
spring-cloud Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/resources.md
Title: Resources for Azure Spring Cloud | Microsoft Docs
-description: Azure Spring Cloud resource list
+ Title: Resources for Azure Spring Apps | Microsoft Docs
+description: Azure Spring Apps resource list
Last updated 09/08/2020 -+
-# Azure Spring Cloud developer resources
+# Azure Spring Apps developer resources
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Java ✔️ C#

**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-As a developer, you might find the following Azure Spring Cloud resources useful:
+As a developer, you might find the following Azure Spring Apps resources useful:
* [Azure roadmap](https://azure.microsoft.com/updates)
* [Frequently asked questions](./faq.md)
As a developer, you might find the following Azure Spring Cloud resources useful
* [Microsoft Q&A question page](/answers/topics/azure-spring-cloud.html)
* [Spring Cloud Services for VMware Tanzu Documentation](https://docs.pivotal.io/spring-cloud-services/1-5/common/index.html)
* [Steeltoe](https://steeltoe.io/)
-* [Java Spring Cloud website](https://spring.io/)
+* [Spring](https://spring.io/)
* [Spring framework](https://spring.io/projects/spring-cloud-azure)
* [Spring on Azure](/azure/developer/java/spring-framework/)
spring-cloud Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Spring Cloud
-description: Lists Azure Policy Regulatory Compliance controls available for Azure Spring Cloud. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
+ Title: Azure Policy Regulatory Compliance controls for Azure Spring Apps
+description: Lists Azure Policy Regulatory Compliance controls available for Azure Spring Apps. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
Last updated 05/10/2022 -+
-# Azure Policy Regulatory Compliance controls for Azure Spring Cloud
+# Azure Policy Regulatory Compliance controls for Azure Spring Apps
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier

[Regulatory Compliance in Azure Policy](../governance/policy/concepts/regulatory-compliance.md) provides Microsoft created and managed initiative definitions, known as _built-ins_, for the **compliance domains** and **security controls** related to different compliance standards. This
-page lists the **compliance domains** and **security controls** for Azure Spring Cloud. You can
+page lists the **compliance domains** and **security controls** for Azure Spring Apps. You can
assign the built-ins for a **security control** individually to help make your Azure resources compliant with the specific standard.
spring-cloud Structured App Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/structured-app-log.md
Title: Structured application log for Azure Spring Cloud | Microsoft Docs
-description: This article explains how to generate and collect structured application log data in Azure Spring Cloud.
+ Title: Structured application log for Azure Spring Apps | Microsoft Docs
+description: This article explains how to generate and collect structured application log data in Azure Spring Apps.
Last updated 02/05/2021 -+
-# Structured application log for Azure Spring Cloud
+# Structured application log for Azure Spring Apps
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This article explains how to generate and collect structured application log data in Azure Spring Cloud. With proper configuration, Azure Spring Cloud provides useful application log query and analysis through Log Analytics.
+This article explains how to generate and collect structured application log data in Azure Spring Apps. With proper configuration, Azure Spring Apps provides useful application log query and analysis through Log Analytics.
## Log schema requirements
-To improve log query experience, an application log is required to be in JSON format and conform to a schema. Azure Spring Cloud uses this schema to parse your application and stream to Log Analytics.
+To improve the log query experience, an application log is required to be in JSON format and conform to a schema. Azure Spring Apps uses this schema to parse your application logs and stream them to Log Analytics.
> [!NOTE]
-> Enabling the JSON log format makes it difficult to read the log streaming output from console. To get human readable output, append the `--format-json` argument to the `az spring-cloud app logs` CLI command. See [Format JSON structured logs](./how-to-log-streaming.md#format-json-structured-logs).
+> Enabling the JSON log format makes it difficult to read the log streaming output from the console. To get human-readable output, append the `--format-json` argument to the `az spring app logs` CLI command. See [Format JSON structured logs](./how-to-log-streaming.md#format-json-structured-logs).
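For example, a log streaming call with JSON formatting might look like the following sketch; the names in angle brackets are placeholders.

```azurecli
# Sketch: stream the latest structured log lines for an app (placeholder names).
az spring app logs \
    --resource-group <resource-group> \
    --service <service-instance-name> \
    --name <app-name> \
    --lines 100 \
    --format-json
```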
**JSON schema requirements:**
The procedure:
</configuration> ```
- For local development, run the Spring Cloud application with JVM argument `-Dspring.profiles.active=dev`, then you can see human readable logs instead of JSON formatted lines.
+ For local development, run the Spring application with the JVM argument `-Dspring.profiles.active=dev`; then you can see human-readable logs instead of JSON-formatted lines.
### Log with log4j2
spring-cloud Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/troubleshoot.md
Title: Troubleshooting guide for Azure Spring Cloud | Microsoft Docs
-description: Troubleshooting guide for Azure Spring Cloud
+ Title: Troubleshooting guide for Azure Spring Apps | Microsoft Docs
+description: Troubleshooting guide for Azure Spring Apps
Last updated 09/08/2020 -+
-# Troubleshoot common Azure Spring Cloud issues
+# Troubleshoot common Azure Spring Apps issues
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This article provides instructions for troubleshooting Azure Spring Cloud development issues. For additional information, see [Azure Spring Cloud FAQ](./faq.md).
+This article provides instructions for troubleshooting Azure Spring Apps development issues. For additional information, see [Azure Spring Apps FAQ](./faq.md).
## Availability, performance, and application issues
To ascertain which situation applies, do the following:
2. Add an **App=** filter to specify which application you want to monitor.
3. Split the metrics by **Instance**.
-If *all instances* are experiencing high CPU or memory usage, you need to either scale out the application or scale up the CPU or memory usage. For more information, see [Tutorial: Scale an application in Azure Spring Cloud](./how-to-scale-manual.md).
+If *all instances* are experiencing high CPU or memory usage, you need to either scale out the application or scale up the CPU or memory usage. For more information, see [Tutorial: Scale an application in Azure Spring Apps](./how-to-scale-manual.md).
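For example, both operations can be done with the Azure CLI, as in the following sketch with placeholder names; the linked tutorial covers the full procedure.

```azurecli
# Sketch: scale out to more instances (placeholder names).
az spring app scale \
    --resource-group <resource-group> \
    --service <service-instance-name> \
    --name <app-name> \
    --instance-count 3

# Sketch: scale up the CPU and memory per instance instead.
az spring app scale \
    --resource-group <resource-group> \
    --service <service-instance-name> \
    --name <app-name> \
    --cpu 2 --memory 4Gi
```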
If *some instances* are experiencing high CPU or memory usage, check the instance status and its discovery status.
-For more information, see [Metrics for Azure Spring Cloud](./concept-metrics.md).
+For more information, see [Metrics for Azure Spring Apps](./concept-metrics.md).
If all instances are up and running, go to Azure Log Analytics to query your application logs and review your code logic. This will help you see whether any of them might affect scale partitioning. For more information, see [Analyze logs and metrics with diagnostics settings](diagnostic-services.md). To learn more about Azure Log Analytics, see [Get started with Log Analytics in Azure Monitor](../azure-monitor/logs/log-analytics-tutorial.md). Query the logs by using the [Kusto query language](/azure/kusto/query/).
-### Checklist for deploying your Spring application to Azure Spring Cloud
+### Checklist for deploying your Spring application to Azure Spring Apps
Before you onboard your application, ensure that it meets the following criteria:
Before you onboard your application, ensure that it meets the following criteria
## Configuration and management
-### I encountered a problem with creating an Azure Spring Cloud service instance
+### I encountered a problem with creating an Azure Spring Apps service instance
-When you set up an Azure Spring Cloud service instance by using the Azure portal, Azure Spring Cloud performs the validation for you.
+When you set up an Azure Spring Apps service instance by using the Azure portal, Azure Spring Apps performs the validation for you.
-But if you try to set up the Azure Spring Cloud service instance by using the [Azure CLI](/cli/azure/get-started-with-azure-cli) or the [Azure Resource Manager template](../azure-resource-manager/index.yml), verify that:
+But if you try to set up the Azure Spring Apps service instance by using the [Azure CLI](/cli/azure/get-started-with-azure-cli) or the [Azure Resource Manager template](../azure-resource-manager/index.yml), verify that:
* The subscription is active.
-* The location is [supported](./faq.md) by Azure Spring Cloud.
+* The location is [supported](./faq.md) by Azure Spring Apps.
* The resource group for the instance is already created.
* The resource name conforms to the naming rule. It must contain only lowercase letters, numbers, and hyphens. The first character must be a letter. The last character must be a letter or number. The value must contain from 2 to 32 characters.
-If you want to set up the Azure Spring Cloud service instance by using the Resource Manager template, first refer to [Understand the structure and syntax of Azure Resource Manager templates](../azure-resource-manager/templates/syntax.md).
+If you want to set up the Azure Spring Apps service instance by using the Resource Manager template, first refer to [Understand the structure and syntax of Azure Resource Manager templates](../azure-resource-manager/templates/syntax.md).
-The name of the Azure Spring Cloud service instance will be used for requesting a subdomain name under `azureapps.io`, so the setup will fail if the name conflicts with an existing one. You might find more details in the activity logs.
+The name of the Azure Spring Apps service instance will be used for requesting a subdomain name under `azureapps.io`, so the setup will fail if the name conflicts with an existing one. You might find more details in the activity logs.
### I can't deploy a .NET Core app
When you deploy your application package by using the [Azure CLI](/cli/azure/get
If the polling is interrupted, you can still use the following command to fetch the deployment logs: ```azurecli
-az spring-cloud app show-deploy-log --name <app-name>
+az spring app show-deploy-log --name <app-name>
``` Ensure that your application is packaged in the correct [executable JAR format](https://docs.spring.io/spring-boot/docs/current/reference/html/executable-jar.html). If it isn't packaged correctly, you will receive an error message similar to the following: `Error: Invalid or corrupt jarfile /jar/38bc8ea1-a6bb-4736-8e93-e8f3b52c8714`
When you deploy your application package by using the [Azure CLI](/cli/azure/get
If the polling is interrupted, you can still use the following command to fetch the build and deployment logs: ```azurecli
-az spring-cloud app show-deploy-log --name <app-name>
+az spring app show-deploy-log --name <app-name>
```
-However, note that one Azure Spring Cloud service instance can trigger only one build job for one source package at one time. For more information, see [Deploy an application](./quickstart.md) and [Set up a staging environment in Azure Spring Cloud](./how-to-staging-environment.md).
+However, note that one Azure Spring Apps service instance can trigger only one build job for one source package at one time. For more information, see [Deploy an application](./quickstart.md) and [Set up a staging environment in Azure Spring Apps](./how-to-staging-environment.md).
### My application can't be registered
In most cases, this situation occurs when *Required Dependencies* and *Service D
Wait at least two minutes before a newly registered instance starts receiving traffic.
-If you're migrating an existing Spring Cloud-based solution to Azure, ensure that your ad-hoc *Service Registry* and *Config Server* instances are removed (or disabled) to avoid conflicting with the managed instances provided by Azure Spring Cloud.
+If you're migrating an existing Spring Cloud-based solution to Azure, ensure that your ad-hoc *Service Registry* and *Config Server* instances are removed (or disabled) to avoid conflicting with the managed instances provided by Azure Spring Apps.
You can also check the *Service Registry* client logs in Azure Log Analytics. For more information, see [Analyze logs and metrics with diagnostics settings](diagnostic-services.md)
To learn more about Azure Log Analytics, see [Get started with Log Analytics in
### I want to inspect my application's environment variables
-Environment variables inform the Azure Spring Cloud framework, ensuring that Azure understands where and how to configure the services that make up your application. Ensuring that your environment variables are correct is a necessary first step in troubleshooting potential problems. You can use the Spring Boot Actuator endpoint to review your environment variables.
+Environment variables inform the Azure Spring Apps framework, ensuring that Azure understands where and how to configure the services that make up your application. Ensuring that your environment variables are correct is a necessary first step in troubleshooting potential problems. You can use the Spring Boot Actuator endpoint to review your environment variables.
> [!WARNING]
> This procedure exposes your environment variables by using your test endpoint. Do not proceed if your test endpoint is publicly accessible or if you've assigned a domain name to your application.
If your application logs can be archived to a storage account but not sent to Az
### Error 112039: Failed to purchase on Azure Marketplace
-Creating an Azure Spring Cloud Enterprise tier instance fails with error code "112039". Check the detailed error message for below for more information:
+Creating an Azure Spring Apps Enterprise tier instance fails with error code "112039". Check the detailed error message below for more information:
-* **"Failed to purchase on Azure Marketplace because the Microsoft.SaaS RP is not registered on the Azure subscription."** : Azure Spring Cloud Enterprise tier purchase a SaaS offer from VMware.
+* **"Failed to purchase on Azure Marketplace because the Microsoft.SaaS RP is not registered on the Azure subscription."** : Azure Spring Apps Enterprise tier purchase a SaaS offer from VMware.
- You must register the Microsoft.SaaS resource provider before creating Azure Spring Cloud Enterprise instance. See how to [register a resource provider](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider).
+ You must register the Microsoft.SaaS resource provider before creating an Azure Spring Apps Enterprise instance, as shown in the sketch after this list. See how to [register a resource provider](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider).
* **"Failed to load catalog product vmware-inc.azure-spring-cloud-vmware-tanzu-2 in the Azure subscription market."**: Your Azure subscription's billing account address is not in the supported location.
Creating an Azure Spring Cloud Enterprise tier instance fails with error code "1
If that doesn't help, you can contact the support team with the following information:

 * `AZURE_TENANT_ID`: the Azure tenant ID that hosts the Azure subscription
- * `AZURE_SUBSCRIPTION_ID`: the Azure subscription ID used to create the Spring Cloud instance
+ * `AZURE_SUBSCRIPTION_ID`: the Azure subscription ID used to create the Azure Spring Apps instance
* `SPRING_CLOUD_NAME`: the failed instance name
* `ERROR_MESSAGE`: the observed error message

### No plans are available for market '\<Location>'
-When you visit the SaaS offer [Azure Spring Cloud Enterprise Tier](https://aka.ms/ascmpoffer) in the Azure Marketplace, it may say "No plans are available for market '\<Location>'" as in the following image.
+When you visit the SaaS offer [Azure Spring Apps Enterprise Tier](https://aka.ms/ascmpoffer) in the Azure Marketplace, it may say "No plans are available for market '\<Location>'" as in the following image.
![No plans available error image](./media/enterprise/how-to-enterprise-marketplace-offer/no-enterprise-plans-available.png)
-Azure Spring Cloud Enterprise tier needs customers to pay for a license to Tanzu components through an Azure Marketplace offer. To purchase in the Azure Marketplace, the billing account's country or region for your Azure subscription should be in the SaaS offer's supported geographic locations.
+Azure Spring Apps Enterprise tier requires customers to pay for a license for Tanzu components through an Azure Marketplace offer. To purchase in the Azure Marketplace, the billing account's country or region for your Azure subscription must be in the SaaS offer's supported geographic locations.
-[Azure Spring Cloud Enterprise Tier](https://aka.ms/ascmpoffer) now supports all geographic locations that Azure Marketplace supports. See [Marketplace supported geographic location](../marketplace/marketplace-geo-availability-currencies.md#supported-geographic-locations).
+[Azure Spring Apps Enterprise Tier](https://aka.ms/ascmpoffer) now supports all geographic locations that Azure Marketplace supports. See [Marketplace supported geographic location](../marketplace/marketplace-geo-availability-currencies.md#supported-geographic-locations).
You can view the billing account for your subscription if you have admin access. See [view billing accounts](../cost-management-billing/manage/view-all-accounts.md#check-the-type-of-your-account).
Enterprise tier has built-in VMware Spring Runtime Support, so you can open supp
## Next steps
-* [How to self-diagnose and solve problems in Azure Spring Cloud](./how-to-self-diagnose-solve.md)
+* [How to self-diagnose and solve problems in Azure Spring Apps](./how-to-self-diagnose-solve.md)
spring-cloud Troubleshooting Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/troubleshooting-vnet.md
Title: Troubleshooting Azure Spring Cloud in virtual network
-description: Troubleshooting guide for Azure Spring Cloud virtual network.
+ Title: Troubleshooting Azure Spring Apps in virtual network
+description: Troubleshooting guide for Azure Spring Apps virtual network.
Last updated 09/19/2020 -+
-# Troubleshooting Azure Spring Cloud in virtual networks
+# Troubleshooting Azure Spring Apps in virtual networks
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This article will help you solve various problems that can arise when using Azure Spring Cloud in virtual networks.
+This article will help you solve various problems that can arise when using Azure Spring Apps in virtual networks.
-## I encountered a problem with creating an Azure Spring Cloud service instance
+## I encountered a problem with creating an Azure Spring Apps service instance
-To create an instance of Azure Spring Cloud, you must have sufficient permission to deploy the instance to the virtual network. The Spring Cloud service instance must itself [Grant Azure Spring Cloud service permission to the virtual network](./how-to-deploy-in-azure-virtual-network.md#grant-service-permission-to-the-virtual-network).
+To create an instance of Azure Spring Apps, you must have sufficient permission to deploy the instance to the virtual network. The Azure Spring Apps service instance must itself grant Azure Spring Apps service permission to the virtual network. For more information, see the [Grant service permission to the virtual network](./how-to-deploy-in-azure-virtual-network.md#grant-service-permission-to-the-virtual-network) section of [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md).
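A hypothetical sketch of that permission grant with the Azure CLI follows; the Owner role scope and the `<resource-provider-object-id>` placeholder are assumptions, so treat the linked article as the authoritative procedure.

```azurecli
# Sketch: grant the Azure Spring Apps resource provider permission on the virtual network.
# <resource-provider-object-id> is a placeholder; look it up as described in the linked article.
VNET_ID=$(az network vnet show \
    --resource-group <resource-group> \
    --name <virtual-network-name> \
    --query id --output tsv)

az role assignment create \
    --role "Owner" \
    --scope $VNET_ID \
    --assignee <resource-provider-object-id>
```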
-If you use the Azure portal to set up the Azure Spring Cloud service instance, the Azure portal will validate the permissions.
+If you use the Azure portal to set up the Azure Spring Apps service instance, the Azure portal will validate the permissions.
-To set up the Azure Spring Cloud service instance by using the [Azure CLI](/cli/azure/get-started-with-azure-cli), verify that:
+To set up the Azure Spring Apps service instance by using the [Azure CLI](/cli/azure/get-started-with-azure-cli), verify that:
- The subscription is active.
-- The location is supported by Azure Spring Cloud.
+- The location is supported by Azure Spring Apps.
- The resource group for the instance is already created.
- The resource name conforms to the naming rule. It must contain only lowercase letters, numbers, and hyphens. The first character must be a letter. The last character must be a letter or number. The value must contain from 2 to 32 characters.
-To set up the Azure Spring Cloud service instance by using the Resource Manager template, refer to [Understand the structure and syntax of Azure Resource Manager templates](../azure-resource-manager/templates/syntax.md).
+To set up the Azure Spring Apps service instance by using the Resource Manager template, refer to [Understand the structure and syntax of Azure Resource Manager templates](../azure-resource-manager/templates/syntax.md).
### Common creation issues

| Error Message | How to fix |
|---|---|
-| Resources created by Azure Spring Cloud were disallowed by policy. | Network resources will be created when deploy Azure Spring Cloud in your own virtual network. Please check whether you have [Azure Policy](../governance/policy/overview.md) defined to block those creation. Resources failed to be created can be found in error message. |
-| Required traffic is not allowlisted. | Please refer to [Customer Responsibilities for Running Azure Spring Cloud in VNET](./vnet-customer-responsibilities.md) to ensure required traffic is allowlisted. |
+| Resources created by Azure Spring Apps were disallowed by policy. | Network resources are created when you deploy Azure Spring Apps in your own virtual network. Check whether you have an [Azure Policy](../governance/policy/overview.md) definition that blocks their creation. Resources that failed to be created can be found in the error message. |
+| Required traffic is not allowlisted. | Please refer to [Customer Responsibilities for Running Azure Spring Apps in VNET](./vnet-customer-responsibilities.md) to ensure required traffic is allowlisted. |
## My application can't be registered
-This problem occurs if your virtual network is configured with custom DNS settings. In this case, the private DNS zone used by Azure Spring Cloud is ineffective. Add the Azure DNS IP 168.63.129.16 as the upstream DNS server in the custom DNS server.
+This problem occurs if your virtual network is configured with custom DNS settings. In this case, the private DNS zone used by Azure Spring Apps is ineffective. Add the Azure DNS IP 168.63.129.16 as the upstream DNS server in the custom DNS server.
## Other issues
-[Troubleshoot common Azure Spring Cloud issues](./troubleshoot.md)
+[Troubleshoot common Azure Spring Apps issues](./troubleshoot.md)
spring-cloud Tutorial Alerts Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/tutorial-alerts-action-groups.md
Title: "Tutorial: Monitor Azure Spring Cloud resources using alerts and action groups | Microsoft Docs"
-description: Learn how to use Spring Cloud alerts.
+ Title: "Tutorial: Monitor Azure Spring Apps resources using alerts and action groups | Microsoft Docs"
+description: Learn how to use Spring app alerts.
Last updated 12/29/2019-+
-# Tutorial: Monitor Spring Cloud resources using alerts and action groups
+# Tutorial: Monitor Spring app resources using alerts and action groups
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Java ✔️ C#

**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-Azure Spring Cloud alerts support monitoring resources based on conditions such as available storage, rate of requests, or data usage. An alert sends notification when rates or conditions meet the defined specifications.
+Azure Spring Apps alerts support monitoring resources based on conditions such as available storage, rate of requests, or data usage. An alert sends a notification when rates or conditions meet the defined specifications.
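This tutorial uses the Azure portal, but the same kind of metric alert can also be expressed with the Azure CLI. The following is only a sketch; the resource names, metric name, and action group are placeholders.

```azurecli
# Sketch: create a metric alert rule scoped to an Azure Spring Apps instance (placeholder values).
SPRING_APPS_ID=$(az spring show \
    --resource-group <resource-group> \
    --name <service-instance-name> \
    --query id --output tsv)

az monitor metrics alert create \
    --resource-group <resource-group> \
    --name <alert-rule-name> \
    --scopes $SPRING_APPS_ID \
    --condition "avg <metric-name> > 80" \
    --action <action-group-resource-id>
```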
There are two steps to set up an alert pipeline:
There are two steps to set up an alert pipeline:
## Prerequisites
-In addition to the Azure Spring requirements, the procedures in this tutorial work with a deployed Azure Spring Cloud instance. Follow a [quickstart](./quickstart.md) to get started.
+In addition to the Azure Spring Apps requirements, the procedures in this tutorial work with a deployed Azure Spring Apps instance. Follow a [quickstart](./quickstart.md) to get started.
-The following procedures initialize both **Action Group** and **Alert** starting from the **Alerts** option in the left navigation pane of a Spring Cloud instance. (The procedure can also start from the **Monitor Overview** page of the Azure portal.)
+The following procedures initialize both **Action Group** and **Alert** starting from the **Alerts** option in the left navigation pane of an Azure Spring Apps instance. (The procedure can also start from the **Monitor Overview** page of the Azure portal.)
-Navigate from a resource group to your Spring Cloud instance. Select **Alerts** in the left pane, then select **Manage actions**:
+Navigate from a resource group to your Azure Spring Apps instance. Select **Alerts** in the left pane, then select **Manage actions**:
![Screenshot portal resource group page](media/alerts-action-groups/action-1-a.png)
A rule can also be created using the **Metrics** page:
## Next steps
-In this tutorial you learned how to set up alerts and action groups for an application in Azure Spring Cloud. To learn more about action groups, see:
+In this tutorial you learned how to set up alerts and action groups for an application in Azure Spring Apps. To learn more about action groups, see:
> [!div class="nextstepaction"] > [Create and manage action groups in the Azure portal](../azure-monitor/alerts/action-groups.md)
spring-cloud Tutorial Circuit Breaker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/tutorial-circuit-breaker.md
Title: "Tutorial - Use Circuit Breaker Dashboard with Azure Spring Cloud"
-description: Learn how to use circuit Breaker Dashboard with Azure Spring Cloud.
+ Title: "Tutorial - Use Circuit Breaker Dashboard with Azure Spring Apps"
+description: Learn how to use circuit Breaker Dashboard with Azure Spring Apps.
Last updated 04/06/2020-+
-# Tutorial: Use Circuit Breaker Dashboard with Azure Spring Cloud
+# Tutorial: Use Circuit Breaker Dashboard with Azure Spring Apps
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Java ❌ C#

**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-Spring [Cloud Netflix Turbine](https://github.com/Netflix/Turbine) is widely used to aggregate multiple [Hystrix](https://github.com/Netflix/Hystrix) metrics streams so that streams can be monitored in a single view using Hystrix dashboard. This tutorial demonstrates how to use them on Azure Spring Cloud.
+Spring [Cloud Netflix Turbine](https://github.com/Netflix/Turbine) is widely used to aggregate multiple [Hystrix](https://github.com/Netflix/Hystrix) metrics streams so that streams can be monitored in a single view using Hystrix dashboard. This tutorial demonstrates how to use them on Azure Spring Apps.
> [!NOTE]
-> Netflix Hystrix is widely used in many existing Spring Cloud apps but it is no longer in active development. If you are developing new project, use instead Spring Cloud Circuit Breaker implementations like [resilience4j](https://github.com/resilience4j/resilience4j). Different from Turbine shown in this tutorial, the new Spring Cloud Circuit Breaker framework unifies all implementations of its metrics data pipeline into Micrometer, which is also supported by Azure Spring Cloud. [Learn More](./how-to-circuit-breaker-metrics.md).
+> Netflix Hystrix is widely used in many existing Spring apps, but it is no longer in active development. If you are developing a new project, use Spring Cloud Circuit Breaker implementations such as [resilience4j](https://github.com/resilience4j/resilience4j) instead. Unlike the Turbine approach shown in this tutorial, the new Spring Cloud Circuit Breaker framework unifies all implementations of its metrics data pipeline into Micrometer, which is also supported by Azure Spring Apps. [Learn More](./how-to-circuit-breaker-metrics.md).
## Prepare your sample applications
mvn clean package -D skipTests -f recommendation-service/pom.xml
mvn clean package -D skipTests -f hystrix-turbine/pom.xml ```
-## Provision your Azure Spring Cloud instance
+## Provision your Azure Spring Apps instance
-Follow the procedure, [Provision a service instance on the Azure CLI](./quickstart.md#provision-an-instance-of-azure-spring-cloud).
+Follow the procedure, [Provision a service instance on the Azure CLI](./quickstart.md#provision-an-instance-of-azure-spring-apps).
-## Deploy your applications to Azure Spring Cloud
+## Deploy your applications to Azure Spring Apps
-These apps do not use **Config Server**, so there is no need to set up **Config Server** for Azure Spring Cloud. Create and deploy as follows:
+These apps do not use **Config Server**, so there is no need to set up **Config Server** for Azure Spring Apps. Create and deploy as follows:
```azurecli
-az spring-cloud app create -n user-service --assign-endpoint
-az spring-cloud app create -n recommendation-service
-az spring-cloud app create -n hystrix-turbine --assign-endpoint
+az spring app create -n user-service --assign-endpoint
+az spring app create -n recommendation-service
+az spring app create -n hystrix-turbine --assign-endpoint
-az spring-cloud app deploy -n user-service --jar-path user-service/target/user-service.jar
-az spring-cloud app deploy -n recommendation-service --jar-path recommendation-service/target/recommendation-service.jar
-az spring-cloud app deploy -n hystrix-turbine --jar-path hystrix-turbine/target/hystrix-turbine.jar
+az spring app deploy -n user-service --jar-path user-service/target/user-service.jar
+az spring app deploy -n recommendation-service --jar-path recommendation-service/target/recommendation-service.jar
+az spring app deploy -n hystrix-turbine --jar-path hystrix-turbine/target/hystrix-turbine.jar
``` ## Verify your apps
As a web app, Hystrix dashboard should be working on `test-endpoint`. If it is n
## Next steps
-* [Provision a service instance on the Azure CLI](./quickstart.md#provision-an-instance-of-azure-spring-cloud)
-* [Prepare a Java Spring application for deployment in Azure Spring Cloud](how-to-prepare-app-deployment.md)
+* [Provision a service instance on the Azure CLI](./quickstart.md#provision-an-instance-of-azure-spring-apps)
+* [Prepare a Java Spring application for deployment in Azure Spring Apps](how-to-prepare-app-deployment.md)
spring-cloud Tutorial Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/tutorial-custom-domain.md
Title: "Tutorial: Map an existing custom domain to Azure Spring Cloud"
-description: How to map an existing custom Distributed Name Service (DNS) name to Azure Spring Cloud
+ Title: "Tutorial: Map an existing custom domain to Azure Spring Apps"
+description: How to map an existing custom Distributed Name Service (DNS) name to Azure Spring Apps
Last updated 03/19/2020 -+
-# Tutorial: Map an existing custom domain to Azure Spring Cloud
+# Tutorial: Map an existing custom domain to Azure Spring Apps
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Java ✔️ C#
Certificates encrypt web traffic. These TLS/SSL certificates can be stored in Az
## Prerequisites
-* An application deployed to Azure Spring Cloud (see [Quickstart: Launch an existing application in Azure Spring Cloud using the Azure portal](./quickstart.md), or use an existing app).
+* An application deployed to Azure Spring Apps (see [Quickstart: Launch an existing application in Azure Spring Apps using the Azure portal](./quickstart.md), or use an existing app).
* A domain name with access to the DNS registry for a domain provider, such as GoDaddy.
* A private certificate (that is, your self-signed certificate) from a third-party provider. The certificate must match the domain.
* A deployed instance of [Azure Key Vault](../key-vault/general/overview.md)

## Keyvault Private Link Considerations
-The Azure Spring Cloud management IPs are not yet part of the Azure Trusted Microsoft services. Therefore, to allow Azure Spring Cloud to load certificates from a Key Vault protected with Private endpoint connections, you must add the following IPs to Azure Key Vault Firewall: `20.53.123.160 52.143.241.210 40.65.234.114 52.142.20.14 20.54.40.121 40.80.210.49 52.253.84.152 20.49.137.168 40.74.8.134 51.143.48.243`
+The Azure Spring Apps management IPs are not yet part of the Azure Trusted Microsoft services. Therefore, to allow Azure Spring Apps to load certificates from a Key Vault protected with Private endpoint connections, you must add the following IPs to Azure Key Vault Firewall: `20.53.123.160 52.143.241.210 40.65.234.114 52.142.20.14 20.54.40.121 40.80.210.49 52.253.84.152 20.49.137.168 40.74.8.134 51.143.48.243`
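One way to add those addresses, assuming placeholder names for the key vault, is sketched below; repeat the rule for each IP in the list.

```azurecli
# Sketch: allow one of the listed management IPs through the Key Vault firewall (placeholder names).
az keyvault network-rule add \
    --resource-group <key-vault-resource-group> \
    --name <key-vault-name> \
    --ip-address 20.53.123.160
```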
## Import certificate
az keyvault certificate import --file <path to .pfx file> --name <certificate na
-### Grant Azure Spring Cloud access to your key vault
+### Grant Azure Spring Apps access to your key vault
-You need to grant Azure Spring Cloud access to your key vault before you import certificate:
+You need to grant Azure Spring Apps access to your key vault before you import a certificate:
#### [Portal](#tab/Azure-portal) 1. Go to your key vault instance.
You need to grant Azure Spring Cloud access to your key vault before you import
| Secret permission | Certificate permission | Select principal |
|--|--|--|
-| Get, List | Get, List | Azure Spring Cloud Domain-Management |
+| Get, List | Get, List | Azure Spring Apps Domain-Management |
![Import certificate 2](./media/custom-dns-tutorial/import-certificate-b.png) #### [CLI](#tab/Azure-CLI)
-Grant Azure Spring Cloud read access to key vault, replace the *\<key vault resource group>* and *\<key vault name>* in the following command.
+Grant Azure Spring Apps read access to your key vault. Replace the *\<key vault resource group>* and *\<key vault name>* values in the following command.
```azurecli az keyvault set-policy -g <key vault resource group> -n <key vault name> --object-id 938df8e2-2b9d-40b1-940c-c75c33494239 --certificate-permissions get list --secret-permissions get list
az keyvault set-policy -g <key vault resource group> -n <key vault name> --obje
-### Import certificate to Azure Spring Cloud
+### Import certificate to Azure Spring Apps
#### [Portal](#tab/Azure-portal) 1. Go to your service instance.
az keyvault set-policy -g <key vault resource group> -n <key vault name> --obje
#### [CLI](#tab/Azure-CLI) ```azurecli
-az spring-cloud certificate add --name <cert name> --vault-uri <key vault uri> --vault-certificate-name <key vault cert name>
+az spring certificate add --name <cert name> --vault-uri <key vault uri> --vault-certificate-name <key vault cert name>
``` To show a list of certificates imported: ```azurecli
-az spring-cloud certificate list --resource-group <resource group name> --service <service name>
+az spring certificate list --resource-group <resource group name> --service <service name>
```
az spring-cloud certificate list --resource-group <resource group name> --servic
> To secure a custom domain with this certificate, you still need to bind the certificate to a specific domain. Follow the steps in this section: [Add SSL Binding](#add-ssl-binding).

## Add Custom Domain
-You can use a CNAME record to map a custom DNS name to Azure Spring Cloud.
+You can use a CNAME record to map a custom DNS name to Azure Spring Apps.
> [!NOTE]
> The A record is not supported.

### Create the CNAME record
-Go to your DNS provider and add a CNAME record to map your domain to the <service_name>.azuremicroservices.io. Here <service_name> is the name of your Azure Spring Cloud instance. We support wildcard domain and sub domain.
+Go to your DNS provider and add a CNAME record to map your domain to <service_name>.azuremicroservices.io, where <service_name> is the name of your Azure Spring Apps instance. Wildcard domains and subdomains are supported.
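If your DNS zone happens to be hosted in Azure DNS (an assumption; this tutorial otherwise assumes a third-party provider such as GoDaddy), the CNAME record could be created with a sketch like the following.

```azurecli
# Sketch: create a CNAME record in an Azure DNS zone that points at the service (placeholder names).
az network dns record-set cname set-record \
    --resource-group <dns-zone-resource-group> \
    --zone-name <your-domain-name> \
    --record-set-name <subdomain> \
    --cname <service_name>.azuremicroservices.io
```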
After you add the CNAME, the DNS records page will resemble the following example: ![DNS records page](./media/custom-dns-tutorial/dns-records.png)
-## Map your custom domain to Azure Spring Cloud app
-If you don't have an application in Azure Spring Cloud, follow the instructions in [Quickstart: Launch an existing application in Azure Spring Cloud using the Azure portal](./quickstart.md).
+## Map your custom domain to Azure Spring Apps app
+If you don't have an application in Azure Spring Apps, follow the instructions in [Quickstart: Launch an existing application in Azure Spring Apps using the Azure portal](./quickstart.md).
#### [Portal](#tab/Azure-portal) Go to application page.
One app can have multiple domains, but one domain can only map to one app. When
#### [CLI](#tab/Azure-CLI) ```azurecli
-az spring-cloud app custom-domain bind --domain-name <domain name> --app <app name> --resource-group <resource group name> --service <service name>
+az spring app custom-domain bind --domain-name <domain name> --app <app name> --resource-group <resource group name> --service <service name>
``` To show the list of custom domains: ```azurecli
-az spring-cloud app custom-domain list --app <app name> --resource-group <resource group name> --service <service name>
+az spring app custom-domain list --app <app name> --resource-group <resource group name> --service <service name>
```
In the custom domain table, select **Add ssl binding** as shown in the previous
#### [CLI](#tab/Azure-CLI) ```azurecli
-az spring-cloud app custom-domain update --domain-name <domain name> --certificate <cert name> --app <app name> --resource-group <resource group name> --service <service name>
+az spring app custom-domain update --domain-name <domain name> --certificate <cert name> --app <app name> --resource-group <resource group name> --service <service name>
```
In your app page, in the left navigation, select **Custom Domain**. Then, set **
#### [CLI](#tab/Azure-CLI) ```azurecli
-az spring-cloud app update -n <app name> --resource-group <resource group name> --service <service name> --https-only
+az spring app update -n <app name> --resource-group <resource group name> --service <service name> --https-only
```
spring-cloud Tutorial Managed Identities Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/tutorial-managed-identities-functions.md
Title: "Tutorial: Managed identity to invoke Azure Functions"
-description: Use managed identity to invoke Azure Functions from an Azure Spring Cloud app
+description: Use managed identity to invoke Azure Functions from an Azure Spring Apps app
+ Last updated 07/10/2020
-# Tutorial: Use a managed identity to invoke Azure Functions from an Azure Spring Cloud app
+# Tutorial: Use a managed identity to invoke Azure Functions from an Azure Spring Apps app
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This article shows you how to create a managed identity for an Azure Spring Cloud app and use it to invoke Http triggered Functions.
+This article shows you how to create a managed identity for an Azure Spring Apps app and use it to invoke Http triggered Functions.
-Both Azure Functions and App Services have built in support for Azure Active Directory (Azure AD) authentication. By leveraging this built-in authentication capability along with Managed Identities for Azure Spring Cloud, we can invoke RESTful services using modern OAuth semantics. This method doesn't require storing secrets in code and provides more granular controls for controlling access to external resources.
+Both Azure Functions and App Services have built-in support for Azure Active Directory (Azure AD) authentication. By using this built-in authentication capability along with Managed Identities for Azure Spring Apps, we can invoke RESTful services using modern OAuth semantics. This method doesn't require storing secrets in code and provides more granular control over access to external resources.
## Prerequisites
Functions in <your-functionapp-name>:
Invoke url: https://<your-functionapp-name>.azurewebsites.net/api/httptrigger ```
-## Create Azure Spring Cloud service and app
+## Create Azure Spring Apps service and app
-After installing the spring-cloud extension, create an Azure Spring Cloud instance with the Azure CLI command `az spring-cloud create`.
+After installing the spring extension, create an Azure Spring Apps instance with the Azure CLI command `az spring create`.
```azurecli
-az extension add --name spring-cloud
-az spring-cloud create --name mymsispringcloud --resource-group myResourceGroup --location eastus
+az extension add --name spring
+az spring create --name mymsispringcloud --resource-group myResourceGroup --location eastus
``` The following example creates an app named `msiapp` with a system-assigned managed identity, as requested by the `--assign-identity` parameter. ```azurecli
-az spring-cloud app create --name "msiapp" --service "mymsispringcloud" --resource-group "myResourceGroup" --assign-endpoint true --assign-identity
+az spring app create --name "msiapp" --service "mymsispringcloud" --resource-group "myResourceGroup" --assign-endpoint true --assign-identity
``` ## Build sample Spring Boot app to invoke the Function
This sample will invoke the Http triggered function by first requesting an acces
vim src/main/resources/application.properties ```
- To use managed identity for Azure Spring Cloud apps, add properties with the following content to *src/main/resources/application.properties*.
+ To use managed identity for Azure Spring Apps apps, add properties with the following content to *src/main/resources/application.properties*.
```properties azure.function.uri=https://<your-functionapp-name>.azurewebsites.net
This sample will invoke the Http triggered function by first requesting an acces
mvn clean package ```
-4. Now deploy the app to Azure with the Azure CLI command `az spring-cloud app deploy`.
+4. Now deploy the app to Azure with the Azure CLI command `az spring app deploy`.
```azurecli
- az spring-cloud app deploy --name "msiapp" --service "mymsispringcloud" --resource-group "myResourceGroup" --jar-path target/sc-managed-identity-function-sample-0.1.0.jar
+ az spring app deploy --name "msiapp" --service "mymsispringcloud" --resource-group "myResourceGroup" --jar-path target/sc-managed-identity-function-sample-0.1.0.jar
``` 5. Access the public endpoint or test endpoint to test your app.
This sample will invoke the Http triggered function by first requesting an acces
## Next steps
-* [How to enable system-assigned managed identity for applications in Azure Spring Cloud](./how-to-enable-system-assigned-managed-identity.md)
+* [How to enable system-assigned managed identity for applications in Azure Spring Apps](./how-to-enable-system-assigned-managed-identity.md)
* [Learn more about managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md) * [Configure client apps to access your App Service](../app-service/configure-authentication-provider-aad.md#configure-client-apps-to-access-your-app-service)
spring-cloud Tutorial Managed Identities Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/tutorial-managed-identities-key-vault.md
Title: "Tutorial: Managed identity to connect Key Vault"
-description: Set up managed identity to connect Key Vault to an Azure Spring Cloud app
+description: Set up managed identity to connect Key Vault to an Azure Spring Apps app
Last updated 04/15/2022-+
-# Tutorial: Use a managed identity to connect Key Vault to an Azure Spring Cloud app
+# Tutorial: Use a managed identity to connect Key Vault to an Azure Spring Apps app
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Java ❌ C#

**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This article shows you how to create a managed identity for an Azure Spring Cloud app and use it to access Azure Key Vault.
+This article shows you how to create a managed identity for an Azure Spring Apps app and use it to access Azure Key Vault.
Azure Key Vault can be used to securely store and tightly control access to tokens, passwords, certificates, API keys, and other secrets for your app. You can create a managed identity in Azure Active Directory (Azure AD), and authenticate to any service that supports Azure AD authentication, including Key Vault, without having to display credentials in your code.
az keyvault secret set \
--value "jdbc:sqlserver://SERVER.database.windows.net:1433;database=DATABASE;" ```
-## Create Azure Spring Cloud service and app
+## Create Azure Spring Apps service and app
-After installing corresponding extension, create an Azure Spring Cloud instance with the Azure CLI command `az spring-cloud create`.
+After installing the corresponding extension, create an Azure Spring Apps instance with the Azure CLI command `az spring create`.
```azurecli
-az extension add --name spring-cloud
-az spring-cloud create \
+az extension add --name spring
+az spring create \
--resource-group <your-resource-group-name> \
- --name <your-Azure-Spring-Cloud-instance-name>
+ --name <your-Azure-Spring-Apps-instance-name>
``` ### [System-assigned managed identity](#tab/system-assigned-managed-identity)
az spring-cloud create \
The following example creates an app named `springapp` with a system-assigned managed identity, as requested by the `--system-assigned` parameter. ```azurecli
-az spring-cloud app create \
+az spring app create \
--resource-group <your-resource-group-name> \ --name "springapp" \
- --service <your-Azure-Spring-Cloud-instance-name> \
+ --service <your-Azure-Spring-Apps-instance-name> \
--assign-endpoint true \ --system-assigned
-export SERVICE_IDENTITY=$(az spring-cloud app show --name "springapp" -s "myspringcloud" -g "myResourceGroup" | jq -r '.identity.principalId')
+export SERVICE_IDENTITY=$(az spring app show --name "springapp" -s "myspringcloud" -g "myResourceGroup" | jq -r '.identity.principalId')
``` ### [User-assigned managed identity](#tab/user-assigned-managed-identity)
export USER_IDENTITY_CLIENT_ID={client ID of user-assigned managed identity}
The following example creates an app named `springapp` with a user-assigned managed identity, as requested by the `--user-assigned` parameter. ```azurecli
-az spring-cloud app create \
+az spring app create \
--resource-group <your-resource-group-name> \ --name "springapp" \
- --service <your-Azure-Spring-Cloud-instance-name> \
+ --service <your-Azure-Spring-Apps-instance-name> \
--assign-endpoint true \ --user-assigned $USER_IDENTITY_RESOURCE_ID
-az spring-cloud app show \
+az spring app show \
--resource-group <your-resource-group-name> \ --name "springapp" \
- --service <your-Azure-Spring-Cloud-instance-name>
+ --service <your-Azure-Spring-Apps-instance-name>
```
This app will have access to get secrets from Azure Key Vault. Use the Azure Key
vim src/main/resources/application.properties ```
-1. To use managed identity for Azure Spring Cloud apps, add properties with the following content to the *src/main/resources/application.properties* file.
+1. To use managed identity for Azure Spring Apps apps, add properties with the following content to the *src/main/resources/application.properties* file.
### [System-assigned managed identity](#tab/system-assigned-managed-identity)
azure.keyvault.client-id={Client ID of user-assigned managed identity}
1. Now you can deploy your app to Azure with the following command: ```azurecli
- az spring-cloud app deploy \
+ az spring app deploy \
--resource-group <your-resource-group-name> \ --name "springapp" \
- --service <your-Azure-Spring-Cloud-instance-name> \
+ --service <your-Azure-Spring-Apps-instance-name> \
--jar-path target/demo-0.0.1-SNAPSHOT.jar ```
To build the sample, use the following steps:
vim src/main/resources/application.properties ```
- To use managed identity for Azure Spring Cloud apps, add properties with the following content to *src/main/resources/application.properties*.
+ To use managed identity for Azure Spring Apps apps, add properties with the following content to *src/main/resources/application.properties*.
```properties azure.keyvault.enabled=true
To build the sample, use the following steps:
1. Now deploy the app to Azure with the following command: ```azurecli
- az spring-cloud app deploy \
+ az spring app deploy \
--resource-group <your-resource-group-name> \ --name "springapp" \
- --service <your-Azure-Spring-Cloud-instance-name> \
+ --service <your-Azure-Spring-Apps-instance-name> \
--jar-path target/asc-managed-identity-keyvault-sample-0.1.0.jar ```
To build the sample, use the following steps:
## Next steps
-* [How to access Storage blob with managed identity in Azure Spring Cloud](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples/tree/master/managed-identity-storage-blob)
-* [How to enable system-assigned managed identity for applications in Azure Spring Cloud](./how-to-enable-system-assigned-managed-identity.md)
+* [How to access Storage blob with managed identity in Azure Spring Apps](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples/tree/master/managed-identity-storage-blob)
+* [How to enable system-assigned managed identity for applications in Azure Spring Apps](./how-to-enable-system-assigned-managed-identity.md)
* [Learn more about managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md)
-* [Authenticate Azure Spring Cloud with Key Vault in GitHub Actions](./github-actions-key-vault.md)
+* [Authenticate Azure Spring Apps with Key Vault in GitHub Actions](./github-actions-key-vault.md)
spring-cloud Tutorial Managed Identities Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/tutorial-managed-identities-mysql.md
Title: "Tutorial: Managed identity to connect an Azure Database for MySQL to apps in Azure Spring Cloud"
-description: Set up managed identity to connect an Azure Database for MySQL to apps in Azure Spring Cloud
+ Title: "Tutorial: Managed identity to connect an Azure Database for MySQL to apps in Azure Spring Apps"
+description: Set up managed identity to connect an Azure Database for MySQL to apps in Azure Spring Apps
Last updated 03/30/2022-+
-# Tutorial: Use a managed identity to connect an Azure Database for MySQL to an app in Azure Spring Cloud
+# Tutorial: Use a managed identity to connect an Azure Database for MySQL to an app in Azure Spring Apps
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Java
-This article shows you how to create a managed identity for an app in Azure Spring Cloud. This article also shows you how to use the managed identity to access an Azure Database for MySQL with the MySQL password stored in Key Vault.
+This article shows you how to create a managed identity for an app in Azure Spring Apps. This article also shows you how to use the managed identity to access an Azure Database for MySQL with the MySQL password stored in Key Vault.
The following video describes how to manage secrets using Azure Key Vault.
az mysql db create \
--server-name <mysqlName> ```
-## Create an app and service in Azure Spring Cloud
+## Create an app and service in Azure Spring Apps
-After installing the corresponding extension, create an Azure Spring Cloud instance with the Azure CLI command [az spring-cloud create](/cli/azure/spring-cloud#az-spring-cloud-create).
+After installing the corresponding extension, create an Azure Spring Apps instance with the Azure CLI command [az spring create](/cli/azure/spring#az-spring-create).
```azurecli
-az extension add --name spring-cloud
-az spring-cloud create --name <myService> --group <myResourceGroup>
+az extension add --name spring
+az spring create --name <myService> --group <myResourceGroup>
``` The following example creates an app named `springapp` with a system-assigned managed identity, as requested by the `--assign-identity` parameter. ```azurecli
-az spring-cloud app create \
+az spring app create \
--name springapp --service <myService> --group <myResourceGroup> \ --assign-endpoint true \ --assign-identity
-export SERVICE_IDENTITY=$(az spring-cloud app show --name springapp -s <myService> -g <myResourceGroup> | jq -r '.identity.principalId')
+export SERVICE_IDENTITY=$(az spring app show --name springapp -s <myService> -g <myResourceGroup> | jq -r '.identity.principalId')
``` Make a note of the returned `url`, which will be in the format `https://<your-app-name>.azuremicroservices.io`. It will be used in the following step.
This [sample](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples/tree/m
mvn clean package ```
-4. Now deploy the app to Azure with the Azure CLI command [az spring-cloud app deploy](/cli/azure/spring-cloud/app#az-spring-cloud-app-deploy).
+4. Now deploy the app to Azure with the Azure CLI command [az spring app deploy](/cli/azure/spring/app#az-spring-app-deploy).
```azurecli
- az spring-cloud app deploy \
+ az spring app deploy \
--name springapp \ --service <myService> \ --group <myResourceGroup> \
This [sample](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples/tree/m
## Next Steps * [Managed identity to connect Key Vault](tutorial-managed-identities-key-vault.md)
-* [Managed identity to invoke Azure functions](tutorial-managed-identities-functions.md)
+* [Managed identity to invoke Azure functions](tutorial-managed-identities-functions.md)
spring-cloud Vnet Customer Responsibilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/vnet-customer-responsibilities.md
Title: "Customer responsibilities running Azure Spring Cloud in vnet"
-description: This article describes customer responsibilities running Azure Spring Cloud in vnet.
+ Title: "Customer responsibilities running Azure Spring Apps in vnet"
+description: This article describes customer responsibilities running Azure Spring Apps in vnet.
Last updated 11/02/2021-+
-# Customer responsibilities for running Azure Spring Cloud in VNET
+# Customer responsibilities for running Azure Spring Apps in VNET
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This article includes specifications for the use of Azure Spring Cloud in a virtual network.
+This article includes specifications for the use of Azure Spring Apps in a virtual network.
-When Azure Spring Cloud is deployed in your virtual network, it has outbound dependencies on services outside of the virtual network. For management and operational purposes, Azure Spring Cloud must access certain ports and fully qualified domain names (FQDNs). Azure Spring Cloud requires these endpoints to communicate with the management plane and to download and install core Kubernetes cluster components and security updates.
+When Azure Spring Apps is deployed in your virtual network, it has outbound dependencies on services outside of the virtual network. For management and operational purposes, Azure Spring Apps must access certain ports and fully qualified domain names (FQDNs). Azure Spring Apps requires these endpoints to communicate with the management plane and to download and install core Kubernetes cluster components and security updates.
-By default, Azure Spring Cloud has unrestricted outbound (egress) internet access. This level of network access allows applications you run to access external resources as needed. If you wish to restrict egress traffic, a limited number of ports and addresses must be accessible for maintenance tasks. The simplest solution to secure outbound addresses is use of a firewall device that can control outbound traffic based on domain names. Azure Firewall, for example, can restrict outbound HTTP and HTTPS traffic based on the FQDN of the destination. You can also configure your preferred firewall and security rules to allow these required ports and addresses.
+By default, Azure Spring Apps has unrestricted outbound (egress) internet access. This level of network access allows applications you run to access external resources as needed. If you wish to restrict egress traffic, a limited number of ports and addresses must remain accessible for maintenance tasks. The simplest way to secure outbound addresses is to use a firewall device that can control outbound traffic based on domain names. Azure Firewall, for example, can restrict outbound HTTP and HTTPS traffic based on the FQDN of the destination. You can also configure your preferred firewall and security rules to allow these required ports and addresses.
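+
+For example, the following Azure CLI sketch (all resource names and the target FQDN are placeholders, and it assumes an existing Azure Firewall plus the `azure-firewall` CLI extension) shows how an application rule could allow outbound HTTPS to one of the FQDNs listed in the following sections:
+
+```azurecli
+# Illustrative only: allow outbound HTTPS to a required FQDN through an existing
+# Azure Firewall. Replace the resource names and the target FQDN with your own values.
+az network firewall application-rule create \
+  --resource-group myResourceGroup \
+  --firewall-name myFirewall \
+  --collection-name spring-apps-egress \
+  --name allow-required-fqdn \
+  --priority 100 \
+  --action Allow \
+  --source-addresses '*' \
+  --protocols Https=443 \
+  --target-fqdns '<required-fqdn>'
+```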
-## Azure Spring Cloud resource requirements
+## Azure Spring Apps resource requirements
-The following list shows the resource requirements for Azure Spring Cloud services. As a general requirement, you shouldn't modify resource groups created by Azure Spring Cloud and the underlying network resources.
+The following list shows the resource requirements for Azure Spring Apps services. As a general requirement, you shouldn't modify resource groups created by Azure Spring Apps and the underlying network resources.
-- Don't modify resource groups created and owned by Azure Spring Cloud.
+- Don't modify resource groups created and owned by Azure Spring Apps.
- By default, these resource groups are named as `ap-svc-rt_[SERVICE-INSTANCE-NAME]_[REGION]*` and `ap_[SERVICE-INSTANCE-NAME]_[REGION]*`.
- - Don't block Azure Spring Cloud from updating resources in these resource groups.
-- Don't modify subnets used by Azure Spring Cloud.-- Don't create more than one Azure Spring Cloud service instance in the same subnet.-- When using a firewall to control traffic, don't block the following egress traffic to Azure Spring Cloud components that operate, maintain, and support the service instance.
+ - Don't block Azure Spring Apps from updating resources in these resource groups.
+- Don't modify subnets used by Azure Spring Apps.
+- Don't create more than one Azure Spring Apps service instance in the same subnet.
+- When using a firewall to control traffic, don't block the following egress traffic to Azure Spring Apps components that operate, maintain, and support the service instance.
-## Azure Spring Cloud network requirements
+## Azure Spring Apps network requirements
| Destination Endpoint | Port | Use | Note | | | - | -- | | | \*:1194 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - AzureCloud:1194 | UDP:1194 | Underlying Kubernetes Cluster management. | |
-| \*:443 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - AzureCloud:443 | TCP:443 | Azure Spring Cloud Service Management. | Information of service instance "requiredTraffics" could be known in resource payload, under "networkProfile" section. |
+| \*:443 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - AzureCloud:443 | TCP:443 | Azure Spring Apps Service Management. | The `requiredTraffics` information for a service instance is available in the resource payload, under the `networkProfile` section. |
| \*:9000 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - AzureCloud:9000 | TCP:9000 | Underlying Kubernetes Cluster management. | | | \*:123 *or* ntp.ubuntu.com:123 | UDP:123 | NTP time synchronization on Linux nodes. | | | \*.azure.io:443 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - AzureContainerRegistry:443 | TCP:443 | Azure Container Registry. | Can be replaced by enabling *Azure Container Registry* [service endpoint in virtual network](../virtual-network/virtual-network-service-endpoints-overview.md). | | \*.core.windows.net:443 and \*.core.windows.net:445 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - Storage:443 and Storage:445 | TCP:443, TCP:445 | Azure Files | Can be replaced by enabling *Azure Storage* [service endpoint in virtual network](../virtual-network/virtual-network-service-endpoints-overview.md). | | \*.servicebus.windows.net:443 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - EventHub:443 | TCP:443 | Azure Event Hubs. | Can be replaced by enabling *Azure Event Hubs* [service endpoint in virtual network](../virtual-network/virtual-network-service-endpoints-overview.md). |
-## Azure Spring Cloud FQDN requirements/application rules
+## Azure Spring Apps FQDN requirements/application rules
Azure Firewall provides the FQDN tag **AzureKubernetesService** to simplify the following configurations:
Azure Firewall provides the FQDN tag **AzureKubernetesService** to simplify the
<sup>1</sup> Please note that these FQDNs aren't included in the FQDN tag.
-## Azure Spring Cloud optional FQDN for third-party application performance management
+## Azure Spring Apps optional FQDN for third-party application performance management
Azure Firewall provides the FQDN tag **AzureKubernetesService** to simplify the following configurations:
static-web-apps Enterprise Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/enterprise-edge.md
az staticwebapp enterprise-edge enable -n my-static-webapp -g my-resource-group
+## Limitations
+
+- Private Endpoint cannot be used with enterprise-grade edge.
+ ## Next steps > [!div class="nextstepaction"]
storage Blob Upload Function Trigger Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-upload-function-trigger-javascript.md
+
+ Title: Upload and analyze a file with Azure Functions (JavaScript) and Blob Storage
+description: With JavaScript, learn how to upload an image to Azure Blob Storage and analyze its content using Azure Functions and Cognitive Services
++++ Last updated : 05/13/2022+++
+# JavaScript Tutorial: Upload and analyze a file with Azure Functions and Blob Storage
+
+In this tutorial, you'll learn how to upload an image to Azure Blob Storage and process it using Azure Functions and Computer Vision. You'll also learn how to implement Azure Function triggers and bindings as part of this process. Together, these services will analyze an uploaded image that contains text, extract the text out of it, and then store the text in a database row for later analysis or other purposes.
+
+Azure Blob Storage is Microsoft's massively scalable object storage solution for the cloud. Blob Storage is designed for storing images and documents, streaming media files, managing backup and archive data, and much more. You can read more about Blob Storage on the [overview page](./storage-blobs-introduction.md).
+
+Azure Functions is a serverless compute solution that allows you to write and run small blocks of code as highly scalable, event-driven functions. You can read more about Azure Functions on the [overview page](../../azure-functions/functions-overview.md).
++
+In this tutorial, you'll learn how to:
+
+> [!div class="checklist"]
+> * Upload images and files to Blob Storage
+> * Use an Azure Function event trigger to process data uploaded to Blob Storage
+> * Use Cognitive Services to analyze an image
+> * Write data to Table Storage using Azure Function output bindings
++
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- [Visual Studio Code](https://code.visualstudio.com/) installed.
+ - [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) to deploy and configure the Function App.
+ - [Azure Storage extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurestorage)
+ - [Azure Resources extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azureresourcegroups)
++
+## Create the storage account and container
+The first step is to create the storage account that will hold the uploaded blob data, which in this scenario will be images that contain text. A storage account offers several different services, but this tutorial utilizes Blob Storage and Table Storage.
++
+### [Visual Studio Code](#tab/storage-resource-visual-studio-code)
+
+1. In Visual Studio Code, select <kbd>Ctrl</kbd> + <kbd>Shift</kbd> + <kbd>P</kbd> to open the command palette.
+1. Search for **Azure Storage: Create Storage Account (Advanced)**.
+1. Use the following table to create the Storage resource.
+
+ |Setting|Value|
+ |--|--|
+ |**Name**| Enter *msdocsstoragefunction* or something similar.|
+ |**Resource Group**|Create a new resource group named `msdocs-storage-function`.|
+ |**Static web hosting**|No.|
+ |**Location**|Choose the region closest to you.|
+1. In Visual Studio Code, select <kbd>Shift</kbd> + <kbd>Alt</kbd> + <kbd>A</kbd> to open the **Azure** Explorer.
+1. Expand the **Storage** section, expand your subscription node and wait for the resource to be created.
+
+### Create the container in Visual Studio Code
+
+1. Still in the Azure Explorer, find your new Storage resource and expand it to see its nodes.
+1. Right-click on **Blob Containers** and select **Create Blob Container**.
+1. Enter the name `imageanalysis`. This creates a private container.
+
+### Change from private to public container in Azure portal
+
+This tutorial expects a container with public read access for blobs. To change the access level, make the change in the Azure portal.
+
+1. Right-click on the Storage Resource in the Azure Explorer and select **Open in Portal**.
+1. In the **Data Storage** section, select **Containers**.
+1. Find your container, `imageanalysis`, and select the `...` (ellipsis) at the end of the line.
+1. Select **Change access level**.
+1. Select **Blob (anonymous read access for blobs only)**, and then select **OK**.
+1. Return to Visual Studio Code.
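+
+If you prefer the command line, a roughly equivalent Azure CLI sketch is shown below. The connection string is a placeholder; it's the same value you copy in the next step.
+
+```azurecli
+# Hypothetical CLI alternative to the portal steps: set anonymous read access
+# for blobs on the imageanalysis container. Replace the placeholder values.
+az storage container set-permission \
+  --name imageanalysis \
+  --account-name msdocsstoragefunction \
+  --connection-string "<your-connection-string>" \
+  --public-access blob
+```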
+
+### Retrieve the connection string in Visual Studio Code
+
+1. In Visual Studio Code, select <kbd>Shift</kbd> + <kbd>Alt</kbd> + <kbd>A</kbd> to open the **Azure** Explorer.
+1. Right-click on your storage resource and select **Copy Connection String**.
+1. Paste the connection string somewhere to use later.
+1. Also make a note of the storage account name, `msdocsstoragefunction`, for later.
+
+### [Azure portal](#tab/storage-resource-azure-portal)
+
+Sign in to the [Azure portal](https://ms.portal.azure.com/#create/Microsoft.StorageAccount).
+
+1) In the search bar at the top of the portal, search for *Storage* and select the result labeled **Storage accounts**.
+
+2) On the **Storage accounts** page, select **+ Create** in the top left.
+
+3) On the **Create a storage account** page, enter the following values:
+
+ - **Subscription**: Choose your desired subscription.
+ - **Resource Group**: Select **Create new** and enter a name of `msdocs-storage-function`, and then choose **OK**.
+ - **Storage account name**: Enter a value of `msdocsstoragefunction`. The Storage account name must be unique across Azure, so you may need to add numbers after the name, such as `msdocsstoragefunction123`.
+ - **Region**: Select the region that is closest to you.
+ - **Performance**: Choose **Standard**.
+ - **Redundancy**: Leave the default value selected.
+
+ :::image type="content" source="./media/blob-upload-storage-function/portal-storage-create-small.png" alt-text="A screenshot showing how create a storage account in Azure." lightbox="media/blob-upload-storage-function/portal-storage-create.png":::
+
+4) Select **Review + Create** at the bottom and Azure will validate the information you entered. Once the settings are validated, choose **Create** and Azure will begin provisioning the storage account, which might take a moment.
+
+### Create the container
+1) After the storage account is provisioned, select **Go to Resource**. The next step is to create a storage container inside of the account to hold uploaded images for analysis.
+
+2) On the navigation panel, choose **Containers**.
+
+3) On the **Containers** page, select **+ Container** at the top. In the slide-out panel, enter a **Name** of *imageanalysis*, and make sure the **Public access level** is set to **Blob (anonymous read access for blobs only)**. Then select **Create**.
+
+ :::image type="content" source="./media/blob-upload-storage-function/portal-container-create-small.png" alt-text="A screenshot showing how to create a new storage container." lightbox="media/blob-upload-storage-function/portal-container-create.png":::
+
+You should see your new container appear in the list of containers.
+
+### Retrieve the connection string
+
+The last step is to retrieve our connection string for the storage account.
+
+1) On the left navigation panel, select **Access Keys**.
+
+2) On the **Access Keys** page, select **Show keys**. Copy the value of the **Connection String** under the **key1** section and paste it somewhere to use later. Also make a note of the storage account name, `msdocsstoragefunction`, for later.
+
+ :::image type="content" source="./media/blob-upload-storage-function/storage-account-access-small.png" alt-text="A screenshot showing how to access the storage container." lightbox="media/blob-upload-storage-function/storage-account-access.png":::
+
+These values will be necessary when we need to connect our Azure Function to this storage account.
+
+### [Azure CLI](#tab/storage-resource-azure-cli)
+
+Azure CLI commands can be run in the [Azure Cloud Shell](https://shell.azure.com) or on a workstation with the [Azure CLI installed](/cli/azure/install-azure-cli).
+
+To create the storage account and container, we can run the CLI commands seen below.
+
+```azurecli-interactive
+az group create --location eastus --name msdocs-storage-function
+
+az storage account create --name msdocsstorageaccount --resource-group msdocs-storage-function -l eastus --sku Standard_LRS
+
+az storage container create --name imageanalysis --account-name msdocsstorageaccount --resource-group msdocs-storage-function
+```
+
+You may need to wait a few moments for Azure to provision these resources.
+
+After the commands complete, we also need to retrieve the connection string for the storage account. The connection string will be used later to connect our Azure Function to the storage account.
+
+```azurecli-interactive
+az storage account show-connection-string -g msdocs-storage-function -n msdocsstorageaccount
+```
+
+Copy the value of the `connectionString` property and paste it somewhere to use later. Also make a note of the storage account name, `msdocsstorageaccount`, for later.
+++
+## Create the Computer Vision service
+Next, create the Computer Vision service account that will process our uploaded files. Computer Vision is part of Azure Cognitive Services and offers various features for extracting data out of images. You can learn more about Computer Vision on the [overview page](/azure/cognitive-services/computer-vision/overview).
+
+### [Azure portal](#tab/computer-vision-azure-portal)
+
+1) In the search bar at the top of the portal, search for *Computer* and select the result labeled **Computer vision**.
+
+2) On the **Computer vision** page, select **+ Create**.
+
+3) On the **Create Computer Vision** page, enter the following values:
+
+ - **Subscription**: Choose your desired Subscription.
+ - **Resource Group**: Use the `msdocs-storage-function` resource group you created earlier.
+ - **Region**: Select the region that is closest to you.
+ - **Name**: Enter a name of `msdocscomputervision`.
+ - **Pricing Tier**: Choose **Free** if it's available, otherwise choose **Standard S1**.
+ - Check the **Responsible AI Notice** box if you agree to the terms.
+
+ :::image type="content" lightbox="./media/blob-upload-storage-function/computer-vision-create.png" source="./media/blob-upload-storage-function/computer-vision-create-small.png" alt-text="A screenshot showing how to create a new Computer Vision service." :::
+
+4) Select **Review + Create** at the bottom. Azure will take a moment to validate the information you entered. Once the settings are validated, choose **Create** and Azure will begin provisioning the Computer Vision service, which might take a moment.
+
+5) When the operation has completed, select **Go to Resource**.
+
+### Retrieve the keys
+
+Next, we need to find the secret key and endpoint URL for the Computer Vision service to use in our Azure Function app.
+
+1) On the **Computer Vision** overview page, select **Keys and Endpoint**.
+
+2) On the **Keys and Endpoint** page, copy the **Key 1** and **Endpoint** values and paste them somewhere to use later. The endpoint should be in the format `https://YOUR-RESOURCE-NAME.cognitiveservices.azure.com/`.
++
+### [Azure CLI](#tab/computer-vision-azure-cli)
+
+To create the Computer Vision service, we can run the CLI command below.
+
+```azurecli-interactive
+az cognitiveservices account create \
+ --name msdocs-process-image \
+ --resource-group msdocs-storage-function \
+ --kind ComputerVision \
+ --sku F0 \
+ --location eastus2 \
+ --yes
+```
+
+You may need to wait a few moments for Azure to provision these resources.
+
+Once the Computer Vision service is created, you can retrieve the secret keys and URL endpoint using the commands below.
+
+```azurecli-interactive
+az cognitiveservices account keys list \
+  --name msdocs-process-image \
+  --resource-group msdocs-storage-function
+
+az cognitiveservices account show \
+  --name msdocs-process-image \
+  --resource-group msdocs-storage-function \
+  --query "properties.endpoint"
+```
++
+
+
+## Download and configure the sample project
+The code for the Azure Function used in this tutorial can be found in [this GitHub repository](https://github.com/Azure-Samples/msdocs-storage-bind-function-service/tree/main/javascript), in the JavaScript subdirectory. You can also clone the project using the command below.
+
+```terminal
+git clone https://github.com/Azure-Samples/msdocs-storage-bind-function-service.git
+cd msdocs-storage-bind-function-service/javascript
+code .
+```
+
+The sample project accomplishes the following tasks:
+
+- Retrieves environment variables to connect to the storage account and Computer Vision service
+- Accepts the uploaded file as a blob parameter
+- Analyzes the blob using the Computer Vision service
+- Sends the analyzed image text to a new table row using output bindings
+
+Once you've downloaded and opened the project, there are a few essential concepts to understand:
+
+|Concept|Purpose|
+|--|--|
+|Function|The Azure Function is defined by both the function code and the bindings. The function code is in [./ProcessImageUpload/index.js](https://github.com/Azure-Samples/msdocs-storage-bind-function-service/blob/main/javascript/ProcessImageUpload/index.js). |
+|Triggers and bindings|The triggers and bindings indicate which data is expected into or out of the function and which service sends or receives that data. The trigger and bindings for this function are defined in [./ProcessImageUpload/function.json](https://github.com/Azure-Samples/msdocs-storage-bind-function-service/blob/main/javascript/ProcessImageUpload/function.json).|
+
+### Triggers and bindings
+The following [function.json](https://github.com/Azure-Samples/msdocs-storage-bind-function-service/blob/main/javascript/ProcessImageUpload/function.json) file defines the triggers and bindings for this function:
++
+* **Data In** - The **BlobTrigger** (`"type": "blobTrigger"`) is used to bind the function to the upload event in Blob Storage. The trigger has the following parameters:
+ * `path`: The path the trigger watches for events. The path includes the container name, `imageanalysis`, and the variable substitution for the blob name. The blob name is retrieved from the `name` property.
+ * `name`: The name of the uploaded blob. `myBlob` is the parameter name for the blob coming into the function. Don't change the value `myBlob`.
+ * `connection`: The **connection string** of the storage account. The value `StorageConnection` matches the name in the `local.settings.json` file.
+
+* **Data Out** - The **TableBinding** (`"type": "table"`) is used to bind the outbound data to a Storage table.
+ * `tableName`: The name of the table to write the parsed image text value returned by the function. The table must already exist.
+ * `connection`: The **connection string** of the storage account. The value `StorageConnection` matches the name in the `local.settings.json` file.
++
+This code also retrieves essential configuration values from environment variables, such as the Blob Storage connection string and Computer Vision key. These environment variables are added to the Azure Function environment after it's deployed.
+
+The default function also utilizes a second method called `AnalyzeImage`. This code uses the URL Endpoint and Key of the Computer Vision account to make a request to Computer Vision to process the image. The request returns all of the text discovered in the image. This text is written to Table Storage, using the outbound binding.
+
+### Configure local settings
+
+To run the project locally, enter the environment variables in the `./local.settings.json` file. Fill in the placeholder values with the values you saved earlier when creating the Azure resources.
+
+Although the Azure Function code runs locally, it connects to the cloud-based services for Storage, rather than using any local emulators.
++
+## Create Azure Functions app
+
+You're now ready to deploy the application to Azure using a Visual Studio Code extension.
+
+1. In Visual Studio Code, select <kbd>Shift</kbd> + <kbd>Alt</kbd> + <kbd>A</kbd> to open the **Azure** explorer.
+1. In the **Functions** section, find and right-click the subscription, and select **Create Function App in Azure (Advanced)**.
+1. Use the following table to create the Function resource.
+
+ |Setting|Value|
+ |--|--|
+ |**Name**| Enter *msdocsprocessimage* or something similar.|
+ |**Runtime stack**| Select a **Node.js LTS** version. |
+ |**OS**| Select **Linux**. |
+ |**Resource Group**|Choose the `msdocs-storage-function` resource group you created earlier.|
+ |**Location**|Choose the region closest to you.|
+ |**Plan Type**|Select **Consumption**.|
+ |**Azure Storage**| Select the storage account you created earlier.|
+ |**Application Insights**| Skip for now.|
+
+1. Azure provisions the requested resources, which will take a few moments to complete.
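+
+If you'd rather script this step, the following Azure CLI sketch creates a comparable Linux Consumption function app. The runtime and Functions versions are illustrative, and the resource names assume the values used earlier in this tutorial.
+
+```azurecli
+# Illustrative alternative to the VS Code flow; adjust names, region, and versions.
+az functionapp create \
+  --resource-group msdocs-storage-function \
+  --name msdocsprocessimage \
+  --storage-account msdocsstoragefunction \
+  --consumption-plan-location eastus \
+  --os-type Linux \
+  --runtime node \
+  --runtime-version 16 \
+  --functions-version 4
+```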
+
+## Deploy Azure Functions app
+
+
+
+1. When the previous resource creation process finishes, right-click the new resource in the **Functions** section of the Azure explorer, and select **Deploy to Function App**.
+1. If asked **Are you sure you want to deploy...**, select **Deploy**.
+1. When the process completes, a notification appears with a choice that includes **Upload settings**. Select that option to copy the values from your *local.settings.json* file into your Azure Functions app. If the notification disappears before you can select it, continue to the next section.
+
+## Add app settings for Storage and Computer Vision
+If you selected **Upload settings** in the notification, skip this section.
+
+The Azure Function was deployed successfully, but it can't connect to our Storage account and Computer Vision services yet. The correct keys and connection strings must first be added to the configuration settings of the Azure Functions app.
+
+1. Find your resource in the **Functions** section of the Azure explorer, right-click **Application Settings**, and select **Add New Setting**.
+1. Enter a new app setting for the following secrets. Copy and paste your secret values from your local project in the `local.settings.json` file.
+
+ |Setting|
+ |--|
+ |StorageConnection|
+ |StorageAccountName|
+ |StorageContainerName|
+ |ComputerVisionKey|
+ |ComputerVisionEndPoint|
++
+All of the required environment variables to connect our Azure function to different services are now in place.
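+
+You can also add these settings from the command line. The following sketch assumes the function app name used earlier; the values in angle brackets are placeholders for your own secrets.
+
+```azurecli
+# Placeholder values: replace each <...> with the value from your local.settings.json file.
+az functionapp config appsettings set \
+  --resource-group msdocs-storage-function \
+  --name msdocsprocessimage \
+  --settings "StorageConnection=<connection-string>" \
+             "StorageAccountName=msdocsstoragefunction" \
+             "StorageContainerName=imageanalysis" \
+             "ComputerVisionKey=<computer-vision-key>" \
+             "ComputerVisionEndPoint=<computer-vision-endpoint>"
+```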
++
+## Upload an image to Blob Storage
+
+You're now ready to test out our application! You can upload a blob to the container, and then verify that the text in the image was saved to Table Storage.
+
+1. In the Azure explorer in Visual Studio Code, find and expand your Storage resource in the **Storage** section.
+1. Expand **Blob Containers** and right-click your container name, `imageanalysis`, then select **Upload files**.
+1. You can find a few sample images included in the **images** folder at the root of the downloadable sample project, or you can use one of your own.
+1. For the **Destination directory**, accept the default value, `/`.
+1. Wait until the files are uploaded and listed in the container.
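+
+You can also upload a test image from the command line, as in this sketch. The file name and connection string are placeholders.
+
+```azurecli
+# Placeholder file name and connection string; uploads one image for analysis.
+az storage blob upload \
+  --container-name imageanalysis \
+  --name sample-image.jpg \
+  --file ./images/sample-image.jpg \
+  --account-name msdocsstoragefunction \
+  --connection-string "<your-connection-string>"
+```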
+
+## View text analysis of image
+
+Next, you can verify that the upload triggered the Azure Function, and that the text in the image was analyzed and saved to Table Storage properly.
+
+1. In Visual Studio Code, in the Azure Explorer, under the same Storage resource, expand **Tables** to find your resource.
+1. An **ImageText** table should now be available. Select the table to preview the data rows inside it. You should see an entry for the processed image text of an uploaded file. You can verify this by checking the timestamp or by viewing the content of the **Text** column.
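+
+To inspect the table from the command line instead, you could query the entities directly. This sketch assumes the `ImageText` table name and uses a placeholder connection string.
+
+```azurecli
+# Placeholder connection string; lists the rows the function wrote to the ImageText table.
+az storage entity query \
+  --table-name ImageText \
+  --account-name msdocsstoragefunction \
+  --connection-string "<your-connection-string>"
+```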
+
+Congratulations! You succeeded in processing an image that was uploaded to Blob Storage using Azure Functions and Computer Vision.
+
+## Troubleshooting
+
+Use the following table to help troubleshoot issues during this procedure.
+
+|Issue|Resolution|
+|--|--|
+|`await computerVisionClient.read(url);` errors with `Only absolute URLs are supported`|Make sure your `ComputerVisionEndPoint` endpoint is in the format of `https://YOUR-RESOURCE-NAME.cognitiveservices.azure.com/`.|
+
+## Clean up resources
+
+If you're not going to continue to use this application, you can delete the resources you created by removing the resource group.
+
+1. Select **Resource groups** in the Azure explorer.
+1. Find and right-click the `msdocs-storage-function` resource group from the list.
+1. Select **Delete**. The process to delete the resource group may take a few minutes to complete.
storage Monitor Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/monitor-blob-storage.md
Title: Monitoring Azure Blob Storage+ description: Learn how to monitor the performance and availability of Azure Blob Storage. Monitor Azure Blob Storage data, learn about configuration, and analyze metric and log data.
Requests made by the Blob storage service itself, such as log creation or deleti
- Successful requests - Server errors-- Time-out errors for both client and server
+- Timeout errors for both client and server
- Failed GET requests with the error code 304 (Not Modified) All other failed anonymous requests aren't logged. For a full list of the logged data, see [Storage logged operations and status messages](/rest/api/storageservices/storage-analytics-logged-operations-and-status-messages) and [Storage log format](monitor-blob-storage-reference.md).
Get started with any of these guides.
| [Azure Monitor Logs overview](../../azure-monitor/logs/data-platform-logs.md)| The basics of logs and how to collect and analyze them | | [Transition to metrics in Azure Monitor](../common/storage-metrics-migration.md) | Move from Storage Analytics metrics to metrics in Azure Monitor. | | [Azure Blob Storage monitoring data reference](monitor-blob-storage-reference.md) | A reference of the logs and metrics created by Azure Blob Storage |
+| [Troubleshoot performance issues](../common/troubleshoot-storage-performance.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)| Common performance issues and guidance about how to troubleshoot them. |
+| [Troubleshoot availability issues](../common/troubleshoot-storage-availability.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)| Common availability issues and guidance about how to troubleshoot them.|
+| [Troubleshoot client application errors](../common/troubleshoot-storage-client-application-errors.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)| Common issues with connecting clients and how to troubleshoot them.|
storage Object Replication Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/object-replication-overview.md
Previously updated : 05/09/2022 Last updated : 05/24/2022
When a blob in the source account is deleted, the current version of the blob be
Object replication doesn't support blob snapshots. Any snapshots on a blob in the source account aren't replicated to the destination account.
+## Blob index tags
+
+Object replication does not copy the source blob's index tags to the destination blob.
+ ### Blob tiering Object replication is supported when the source and destination accounts are in the hot or cool tier. The source and destination accounts may be in different tiers. However, object replication will fail if a blob in either the source or destination account has been moved to the archive tier. For more information on blob tiers, see [Hot, Cool, and Archive access tiers for blob data](access-tiers-overview.md).
storage Secure File Transfer Protocol Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-known-issues.md
The following clients are known to be incompatible with SFTP for Azure Blob Stor
- paramiko 1.16.0 - Salesforce - SSH.NET 2016.1.0-- Workday - XFB.Gateway > [!NOTE]
storage Secure File Transfer Protocol Support How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support-how-to.md
To learn more about SFTP support for Azure Blob Storage, see [SSH File Transfer
## Prerequisites -- A standard general-purpose v2 or premium block blob storage account. You can also enable SFTP as create the account. For more information on these types of storage accounts, see [Storage account overview](../common/storage-account-overview.md).
+- A standard general-purpose v2 or premium block blob storage account. You can also enable SFTP as you create the account. For more information on these types of storage accounts, see [Storage account overview](../common/storage-account-overview.md).
- The hierarchical namespace feature of the account must be enabled. To enable the hierarchical namespace feature, see [Upgrade Azure Blob Storage with Azure Data Lake Storage Gen2 capabilities](upgrade-to-data-lake-storage-gen2-how-to.md).
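+
+For reference, a hedged Azure CLI sketch of creating an account that meets both prerequisites might look like the following; all names and the region are placeholders.
+
+```azurecli
+# Placeholder names; creates a general-purpose v2 account with hierarchical
+# namespace enabled and SFTP turned on at creation time.
+az storage account create \
+  --name <your-storage-account> \
+  --resource-group <your-resource-group> \
+  --location eastus \
+  --sku Standard_LRS \
+  --kind StorageV2 \
+  --hns true \
+  --enable-sftp true
+```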
storage Secure File Transfer Protocol Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support.md
The following clients have compatible algorithm support with SFTP for Azure Blob
- sshj 0.27.0+ - SSH.NET 2020.0.0+ - WinSCP 5.10+
+- Workday
> [!NOTE] > The supported client list above is not exhaustive and may change over time.
storage Storage Blobs Static Site Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-static-site-github-actions.md
You need to provide your application's **Client ID**, **Tenant ID**, and **Subsc
1. Go to **Actions** for your GitHub repository.
- :::image type="content" source="media/storage-blob-static-website/storage-blob-github-actions-header.png" alt-text="GitHub actions menu item":::
+ :::image type="content" source="media/storage-blob-static-website/storage-blob-github-actions-header.png" alt-text="GitHub Actions menu item":::
1. Select **Set up your workflow yourself**.
You need to provide your application's **Client ID**, **Tenant ID**, and **Subsc
1. Go to **Actions** for your GitHub repository.
- :::image type="content" source="media/storage-blob-static-website/storage-blob-github-actions-header.png" alt-text="GitHub actions menu item":::
+ :::image type="content" source="media/storage-blob-static-website/storage-blob-github-actions-header.png" alt-text="GitHub Actions menu item":::
1. Select **Set up your workflow yourself**.
You need to provide your application's **Client ID**, **Tenant ID**, and **Subsc
1. Open the first result to see detailed logs of your workflow's run.
- :::image type="content" source="../media/index/github-actions-run.png" alt-text="Log of GitHub actions run":::
+ :::image type="content" source="../media/index/github-actions-run.png" alt-text="Log of GitHub Actions run":::
## Clean up resources
storage Storage Quickstart Blobs Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-go.md
Azure CLI authentication isn't recommended for applications running in Azure.
To learn more about different authentication methods, check out [Azure authentication with the Azure SDK for Go](/azure/developer/go/azure-sdk-authentication). +
+## Assign RBAC permissions to the storage account
+
+Azure storage accounts require explicit permissions for read and write operations. To use the storage account from your application, you must assign an appropriate RBAC role to your user account. To get the `objectId` of the currently signed-in user, run `az ad signed-in-user show --query objectId`.
+
+Run the following Azure CLI command to assign permissions on the storage account:
+
+```azurecli
+az role assignment create --assignee "<ObjectID>" --role "Storage Blob Data Contributor" --scope "<StorageAccountResourceID>"
+```
+
+To learn more about Azure's built-in RBAC roles, see [Built-in roles](/azure/role-based-access-control/built-in-roles).
+
+> [!NOTE]
+> The Azure CLI has built-in helper functions that retrieve the storage access keys when permissions aren't detected. That functionality doesn't transfer to `DefaultAzureCredential`, which is why you need to assign RBAC roles to your account.
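+
+To double-check that the role assignment took effect, you can list assignments scoped to the storage account; the IDs below are placeholders.
+
+```azurecli
+# Placeholder IDs; verifies the Storage Blob Data Contributor assignment on the account.
+az role assignment list \
+  --assignee "<ObjectID>" \
+  --scope "<StorageAccountResourceID>" \
+  --output table
+```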
+ ## Run the sample This sample creates an Azure storage container, uploads a blob, lists the blobs in the container, then downloads the blob data into a buffer.
if err != nil {
## Resources for developing Go applications with blobs
-See these additional resources for Go development with Blob storage:
+See these other resources for Go development with Blob storage:
- View and install the [Go client library source code](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/storage/azblob) for Azure Storage on GitHub. - Explore [Blob storage samples](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob#example-package) written using the Go client library.
storage Storage Account Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-create.md
Previously updated : 05/11/2022 Last updated : 05/18/2022 -+
The button launches an interactive shell that you can use to run the steps outli
You can also install and use the Azure CLI locally. If you plan to use Azure CLI locally, make sure you have installed the latest version of the Azure CLI. See [Install the Azure CLI](/cli/azure/install-azure-cli).
+# [Bicep](#tab/bicep)
+
+None.
+ # [Template](#tab/template) None.
To log into your local installation of the CLI, run the [az login](/cli/azure/re
az login ```
+# [Bicep](#tab/bicep)
+
+N/A
+ # [Template](#tab/template) N/A
The following table shows which values to use for the `sku` and `kind` parameter
| Legacy standard general-purpose v1 | LRS / GRS / RA-GRS | Storage | Standard_LRS / Standard_GRS / Standard_RAGRS | No | | Legacy blob storage | LRS / GRS / RA-GRS | BlobStorage | Standard_LRS / Standard_GRS / Standard_RAGRS | No |
+# [Bicep](#tab/bicep)
+
+You can use either Azure PowerShell or Azure CLI to deploy a Bicep file to create a storage account. The Bicep file used in this how-to article is from [Azure Resource Manager quickstart templates](https://azure.microsoft.com/resources/templates/storage-account-create/). Bicep currently doesn't support deploying a remote file. Download and save [the Bicep file](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.storage/storage-account-create/main.bicep) to your local computer, and then run the scripts.
+
+```azurepowershell-interactive
+$resourceGroupName = Read-Host -Prompt "Enter the Resource Group name"
+$location = Read-Host -Prompt "Enter the location (i.e. centralus)"
+
+New-AzResourceGroup -Name $resourceGroupName -Location "$location"
+New-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName -TemplateFile "main.bicep"
+```
+
+```azurecli-interactive
+echo "Enter the Resource Group name:" &&
+read resourceGroupName &&
+echo "Enter the location (i.e. centralus):" &&
+read location &&
+az group create --name $resourceGroupName --location "$location" &&
+az deployment group create --resource-group $resourceGroupName --template-file "main.bicep"
+```
+
+> [!NOTE]
+> This Bicep file serves only as an example. There are many storage account settings that aren't configured as part of this Bicep file. For example, if you want to use [Data Lake Storage](https://azure.microsoft.com/services/storage/data-lake-storage/), you would modify this Bicep file by setting the `isHnsEnabled` property of the `StorageAccountPropertiesCreateParameters` object to `true`.
+
+To learn how to modify this Bicep file or create new ones, see:
+
+- [Azure Resource Manager documentation](../../azure-resource-manager/index.yml).
+- [Storage account template reference](/azure/templates/microsoft.storage/allversions).
+- [Additional storage account template samples](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Storage).
+ # [Template](#tab/template) You can use either Azure PowerShell or Azure CLI to deploy a Resource Manager template to create a storage account. The template used in this how-to article is from [Azure Resource Manager quickstart templates](https://azure.microsoft.com/resources/templates/storage-account-create/). To run the scripts, select **Try it** to open the Azure Cloud Shell. To paste the script, right-click the shell, and then select **Paste**.
To delete the storage account, use the [az storage account delete](/cli/azure/st
az storage account delete --name <storage-account> --resource-group <resource-group> ```
+# [Bicep](#tab/bicep)
+
+To delete the storage account, use either Azure PowerShell or Azure CLI.
+
+```azurepowershell-interactive
+$storageResourceGroupName = Read-Host -Prompt "Enter the resource group name"
+$storageAccountName = Read-Host -Prompt "Enter the storage account name"
+Remove-AzStorageAccount -Name $storageAccountName -ResourceGroupName $storageResourceGroupName
+```
+
+```azurecli-interactive
+echo "Enter the resource group name:" &&
+read resourceGroupName &&
+echo "Enter the storage account name:" &&
+read storageAccountName &&
+az storage account delete --name $storageAccountName --resource-group $resourceGroupName
+```
+ # [Template](#tab/template) To delete the storage account, use either Azure PowerShell or Azure CLI.
storage Storage Auth Abac Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-auth-abac-attributes.md
Previously updated : 05/16/2022 Last updated : 05/24/2022
This section lists the supported Azure Blob storage actions and suboperations yo
> | **Description** | All blob read operations excluding list. | > | **DataAction** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` | > | **Suboperation** | NOT `Blob.List` |
-> | **Resource attributes** | [Account name](#account-name)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name)<br/>[Blob path](#blob-path)<br/>[Encryption scope name](#encryption-scope-name) |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Is Current Version](#is-current-version)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name)<br/>[Blob path](#blob-path)<br/>[Encryption scope name](#encryption-scope-name) |
> | **Request attributes** | [Version ID](#version-id)<br/>[Snapshot](#snapshot) | > | **Principal attributes support** | True | > | **Examples** | `!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})`<br/>[Example: Read blobs in named containers with a path](storage-auth-abac-examples.md#example-read-blobs-in-named-containers-with-a-path) |
This section lists the supported Azure Blob storage actions and suboperations yo
> | **Description** | Read blobs with tags. | > | **DataAction** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` | > | **Suboperation** | `Blob.Read.WithTagConditions` |
-> | **Resource attributes** | [Account name](#account-name)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name)<br/>[Blob path](#blob-path)<br/>[Blob index tags [Values in key]](#blob-index-tags-values-in-key)<br/>[Blob index tags [Keys]](#blob-index-tags-keys)<br/>[Encryption scope name](#encryption-scope-name) |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Is Current Version](#is-current-version)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name)<br/>[Blob path](#blob-path)<br/>[Blob index tags [Values in key]](#blob-index-tags-values-in-key)<br/>[Blob index tags [Keys]](#blob-index-tags-keys)<br/>[Encryption scope name](#encryption-scope-name) |
> | **Request attributes** | [Version ID](#version-id)<br/>[Snapshot](#snapshot) | > | **Principal attributes support** | True | > | **Examples** | `!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND SubOperationMatches{'Blob.Read.WithTagConditions'})`<br/>[Example: Read blobs with a blob index tag](storage-auth-abac-examples.md#example-read-blobs-with-a-blob-index-tag) |
This section lists the supported Azure Blob storage actions and suboperations yo
> | **Description** | DataAction for reading blob index tags. | > | **DataAction** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/read` | > | **Suboperation** | |
-> | **Resource attributes** | [Account name](#account-name)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name)<br/>[Blob path](#blob-path)<br/>[Blob index tags [Values in key]](#blob-index-tags-values-in-key)<br/>[Blob index tags [Keys]](#blob-index-tags-keys) |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Is Current Version](#is-current-version)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name)<br/>[Blob path](#blob-path)<br/>[Blob index tags [Values in key]](#blob-index-tags-values-in-key)<br/>[Blob index tags [Keys]](#blob-index-tags-keys) |
> | **Request attributes** | [Version ID](#version-id)<br/>[Snapshot](#snapshot) | > | **Principal attributes support** | True | > | **Learn more** | [Manage and find Azure Blob data with blob index tags](../blobs/storage-manage-find-blobs.md) |
This section lists the supported Azure Blob storage actions and suboperations yo
> | **Description** | DataAction for writing to blobs. | > | **DataAction** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write` | > | **Suboperation** | `Blob.Write.Tier` |
-> | **Resource attributes** | [Account name](#account-name)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name)<br/>[Blob path](#blob-path)<br/>[Encryption scope name](#encryption-scope-name) |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Is Current Version](#is-current-version)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name)<br/>[Blob path](#blob-path)<br/>[Encryption scope name](#encryption-scope-name) |
> | **Request attributes** | [Version ID](#version-id)<br/>[Snapshot](#snapshot) | > | **Principal attributes support** | True | > | **Examples** | `!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'} AND SubOperationMatches{'Blob.Write.Tier'})` |
This section lists the supported Azure Blob storage actions and suboperations yo
> | **Description** | DataAction for writing blob index tags. | > | **DataAction** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write` | > | **Suboperation** | |
-> | **Resource attributes** | [Account name](#account-name)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name)<br/>[Blob path](#blob-path)<br/>[Blob index tags [Values in key]](#blob-index-tags-values-in-key)<br/>[Blob index tags [Keys]](#blob-index-tags-keys) |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Is Current Version](#is-current-version)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name)<br/>[Blob path](#blob-path)<br/>[Blob index tags [Values in key]](#blob-index-tags-values-in-key)<br/>[Blob index tags [Keys]](#blob-index-tags-keys) |
> | **Request attributes** | [Blob index tags [Values in key]](#blob-index-tags-values-in-key)<br/>[Blob index tags [Keys]](#blob-index-tags-keys)<br/>[Version ID](#version-id)<br/>[Snapshot](#snapshot) | > | **Principal attributes support** | True | > | **Examples** | `!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write'})`<br/>[Example: Existing blobs must have blob index tag keys](storage-auth-abac-examples.md#example-existing-blobs-must-have-blob-index-tag-keys) |
This section lists the supported Azure Blob storage actions and suboperations yo
> | **Description** | DataAction for deleting blobs. | > | **DataAction** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete` | > | **Suboperation** | |
-> | **Resource attributes** | [Account name](#account-name)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name)<br/>[Blob path](#blob-path) |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Is Current Version](#is-current-version)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name)<br/>[Blob path](#blob-path) |
> | **Request attributes** | [Version ID](#version-id)<br/>[Snapshot](#snapshot) | > | **Principal attributes support** | True | > | **Examples** | `!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete'})`<br/>[Example: Read, write, or delete blobs in named containers](storage-auth-abac-examples.md#example-read-write-or-delete-blobs-in-named-containers) |
This section lists the supported Azure Blob storage actions and suboperations yo
> | **Description** | DataAction for permanently deleting a blob overriding soft-delete. | > | **DataAction** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/permanentDelete/action` | > | **Suboperation** | |
-> | **Resource attributes** | [Account name](#account-name)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name)<br/>[Blob path](#blob-path) |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Is Current Version](#is-current-version)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name)<br/>[Blob path](#blob-path) |
> | **Request attributes** | [Version ID](#version-id)<br/>[Snapshot](#snapshot) | > | **Principal attributes support** | True |
This section lists the supported Azure Blob storage actions and suboperations yo
> | **Description** | DataAction for all data operations on storage accounts with hierarchical namespace enabled.<br/>If your role definition includes the `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action` action, you should target this action in your condition. Targeting this action ensures the condition will still work as expected if hierarchical namespace is enabled for a storage account. | > | **DataAction** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action` | > | **Suboperation** | |
-> | **Resource attributes** | [Account name](#account-name)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name)<br/>[Blob path](#blob-path) |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Is Current Version](#is-current-version)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name)<br/>[Blob path](#blob-path) |
> | **Request attributes** | | > | **Principal attributes support** | True |
-> | **Examples** | [Example: Read only storage accounts with hierarchical namespace enabled](storage-auth-abac-examples.md#example-read-only-storage-accounts-with-hierarchical-namespace-enabled)<br/>[Example: Read, write, or delete blobs in named containers](storage-auth-abac-examples.md#example-read-write-or-delete-blobs-in-named-containers)<br/>[Example: Read blobs in named containers with a path](storage-auth-abac-examples.md#example-read-blobs-in-named-containers-with-a-path)<br/>[Example: Read or list blobs in named containers with a path](storage-auth-abac-examples.md#example-read-or-list-blobs-in-named-containers-with-a-path)<br/>[Example: Write blobs in named containers with a path](storage-auth-abac-examples.md#example-write-blobs-in-named-containers-with-a-path) |
+> | **Examples** | [Example: Read, write, or delete blobs in named containers](storage-auth-abac-examples.md#example-read-write-or-delete-blobs-in-named-containers)<br/>[Example: Read blobs in named containers with a path](storage-auth-abac-examples.md#example-read-blobs-in-named-containers-with-a-path)<br/>[Example: Read or list blobs in named containers with a path](storage-auth-abac-examples.md#example-read-or-list-blobs-in-named-containers-with-a-path)<br/>[Example: Write blobs in named containers with a path](storage-auth-abac-examples.md#example-write-blobs-in-named-containers-with-a-path)<br/>[Example: Read only current blob versions](storage-auth-abac-examples.md#example-read-only-current-blob-versions)<br/>[Example: Read current blob versions and any blob snapshots](storage-auth-abac-examples.md#example-read-current-blob-versions-and-any-blob-snapshots)<br/>[Example: Read only storage accounts with hierarchical namespace enabled](storage-auth-abac-examples.md#example-read-only-storage-accounts-with-hierarchical-namespace-enabled) |
> | **Learn more** | [Azure Data Lake Storage Gen2 hierarchical namespace](../blobs/data-lake-storage-namespace.md) | ## Azure Queue storage actions
This section lists the Azure Blob storage attributes you can use in your conditi
> | **Examples** | `@Resource[Microsoft.Storage/storageAccounts/encryptionScopes:name] ForAnyOfAnyValues:StringEquals {'validScope1', 'validScope2'}`<br/>[Example: Read blobs with specific encryption scopes](storage-auth-abac-examples.md#example-read-blobs-with-specific-encryption-scopes) | > | **Learn more** | [Create and manage encryption scopes](../blobs/encryption-scope-manage.md) |
+### Is Current Version
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Is Current Version |
+> | **Description** | Identifies whether the resource is the current version of the blob, in contrast to a snapshot or a specific blob version. |
+> | **Attribute** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs:isCurrentVersion` |
+> | **Attribute source** | Resource |
+> | **Attribute type** | Boolean |
+> | **Examples** | `@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:isCurrentVersion] BoolEquals true`<br/>[Example: Read only current blob versions](storage-auth-abac-examples.md#example-read-only-current-blob-versions)<br/>[Example: Read current blob versions and a specific blob version](storage-auth-abac-examples.md#example-read-current-blob-versions-and-a-specific-blob-version) |
+ ### Is hierarchical namespace enabled > [!div class="mx-tdCol2BreakAll"]
This section lists the Azure Blob storage attributes you can use in your conditi
> | Property | Value | > | | | > | **Display name** | Snapshot |
-> | **Description** | The Snapshot identifier for the Blob snapshot. |
+> | **Description** | The Snapshot identifier for the Blob snapshot.<br/>Available for storage accounts where hierarchical namespace is not enabled and currently in preview for storage accounts where hierarchical namespace is enabled. |
> | **Attribute** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs:snapshot` | > | **Attribute source** | Request | > | **Attribute type** | DateTime | > | **Exists support** | True | > | **Hierarchical namespace support** | False | > | **Examples** | `Exists @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:snapshot]`<br/>[Example: Read current blob versions and any blob snapshots](storage-auth-abac-examples.md#example-read-current-blob-versions-and-any-blob-snapshots) |
-> | **Learn more** | [Azure Data Lake Storage Gen2 hierarchical namespace](../blobs/data-lake-storage-namespace.md) |
+> | **Learn more** | [Blob snapshots](../blobs/snapshots-overview.md)<br/>[Azure Data Lake Storage Gen2 hierarchical namespace](../blobs/data-lake-storage-namespace.md) |
### Version ID
This section lists the Azure Blob storage attributes you can use in your conditi
> | **Attribute type** | DateTime | > | **Exists support** | True | > | **Hierarchical namespace support** | False |
-> | **Examples** | `@Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:versionId] DateTimeEquals '2022-06-01T23:38:32.8883645Z'`<br/>[Example: Read current blob versions and a specific blob version](storage-auth-abac-examples.md#example-read-current-blob-versions-and-a-specific-blob-version) |
+> | **Examples** | `@Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:versionId] DateTimeEquals '2022-06-01T23:38:32.8883645Z'`<br/>[Example: Read current blob versions and a specific blob version](storage-auth-abac-examples.md#example-read-current-blob-versions-and-a-specific-blob-version)<br/>[Example: Read current blob versions and any blob snapshots](storage-auth-abac-examples.md#example-read-current-blob-versions-and-any-blob-snapshots) |
> | **Learn more** | [Azure Data Lake Storage Gen2 hierarchical namespace](../blobs/data-lake-storage-namespace.md) | ## Azure Queue storage attributes
storage Storage Auth Abac Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-auth-abac-examples.md
Previously updated : 05/16/2022 Last updated : 05/24/2022 #Customer intent: As a dev, devops, or it admin, I want to learn about the conditions so that I write more complex conditions.
$content = Get-AzStorageBlobContent -Container $grantedContainer -Blob "logs/Alp
## Blob versions or blob snapshots
-### Example: Read current blob versions and a specific blob version
+### Example: Read only current blob versions
-This condition allows a user to read current blob versions as well as read blobs with a version ID of 2022-06-01T23:38:32.8883645Z. The user cannot read other blob versions.
+This condition allows a user to read only current blob versions. The user cannot read other blob versions.
-> [!NOTE]
-> The condition includes a `NOT Exists` expression for the version ID attribute. This expression is included so that the Azure portal can list list the current version of the blob.
+You must add this condition to any role assignments that include the following actions.
+
+> [!div class="mx-tableFixed"]
+> | Action | Notes |
+> | | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action` | Add if role definition includes this action, such as Storage Blob Data Owner. |
+
+![Diagram of condition showing read access to current blob version only.](./media/storage-auth-abac-examples/current-version-read-only.png)
+
+Storage Blob Data Owner
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
+ AND
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action'})
+ )
+ OR
+ (
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:isCurrentVersion] BoolEquals true
+ )
+)
+```
+
+Storage Blob Data Reader, Storage Blob Data Contributor
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
+ )
+ OR
+ (
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:isCurrentVersion] BoolEquals true
+ )
+)
+```
+
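If you prefer to create the role assignment and attach this condition programmatically rather than through the portal steps below, a rough sketch using the Python SDK might look like the following. It assumes the `azure-identity` package and a recent `azure-mgmt-authorization` version that supports ABAC conditions; the subscription, resource group, storage account, and principal values are hypothetical placeholders, and the condition string is the Storage Blob Data Reader/Contributor condition shown above.

```python
import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

# Hypothetical values - replace with your own subscription, storage account, and principal.
subscription_id = "<subscription-id>"
scope = (
    f"/subscriptions/{subscription_id}/resourceGroups/<resource-group>"
    "/providers/Microsoft.Storage/storageAccounts/<storage-account>"
)
principal_id = "<object-id-of-user-group-or-service-principal>"

# Built-in Storage Blob Data Reader role definition ID (verify in your environment).
role_definition_id = (
    f"/subscriptions/{subscription_id}/providers/Microsoft.Authorization"
    "/roleDefinitions/2a2b9908-6ea1-4ae2-8e65-a410df84e7d1"
)

# Same condition text as the Storage Blob Data Reader / Contributor example above.
condition = (
    "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} "
    "AND NOT SubOperationMatches{'Blob.List'})) "
    "OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:isCurrentVersion] BoolEquals true))"
)

client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)

assignment = client.role_assignments.create(
    scope=scope,
    role_assignment_name=str(uuid.uuid4()),  # role assignment names are GUIDs
    parameters=RoleAssignmentCreateParameters(
        role_definition_id=role_definition_id,
        principal_id=principal_id,
        condition=condition,
        condition_version="2.0",
    ),
)
print(assignment.id)
```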
+#### Azure portal
+
+Here are the settings to add this condition using the Azure portal.
+
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [Read a blob](storage-auth-abac-attributes.md#read-a-blob)<br/>[All data operations for accounts with hierarchical namespace enabled](storage-auth-abac-attributes.md#all-data-operations-for-accounts-with-hierarchical-namespace-enabled) (if applicable) |
+> | Attribute source | Resource |
+> | Attribute | [Is Current Version](storage-auth-abac-attributes.md#is-current-version) |
+> | Operator | [BoolEquals](../../role-based-access-control/conditions-format.md#boolean-comparison-operators) |
+> | Value | True |
+
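As a quick sanity check after the role assignment with this condition has propagated, a user can confirm that the current version is readable while older versions are rejected. The following sketch assumes the Python `azure-storage-blob` and `azure-identity` packages, a versioning-enabled account, and hypothetical account, container, blob, and version ID values.

```python
from azure.core.exceptions import HttpResponseError
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobClient

blob = BlobClient(
    account_url="https://<storage-account>.blob.core.windows.net",
    container_name="<container>",
    blob_name="<blob>",
    credential=DefaultAzureCredential(),
)

# Reading the current version is allowed by the condition.
current = blob.download_blob().readall()
print(f"Current version: {len(current)} bytes")

# Reading an older version should be blocked by the condition (HTTP 403).
try:
    blob.download_blob(version_id="<older-version-id>").readall()
except HttpResponseError as err:
    print(f"Older version denied as expected: {err.status_code}")
```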
+### Example: Read current blob versions and a specific blob version
+
+This condition allows a user to read current blob versions as well as blobs with a version ID of 2022-06-01T23:38:32.8883645Z. The user cannot read other blob versions. The [Version ID](storage-auth-abac-attributes.md#version-id) attribute is available only for storage accounts where hierarchical namespace is not enabled.
You must add this condition to any role assignments that include the following action.
You must add this condition to any role assignments that include the following a
( @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:versionId] DateTimeEquals '2022-06-01T23:38:32.8883645Z' OR
- NOT Exists @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:versionId]
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:isCurrentVersion] BoolEquals true
) ) ```
Here are the settings to add this condition using the Azure portal.
> | Value | &lt;blobVersionId&gt; | > | **Expression 2** | | > | Operator | Or |
-> | Attribute source | Request |
-> | Attribute | [Version ID](storage-auth-abac-attributes.md#version-id) |
-> | Exists | [Checked](../../role-based-access-control/conditions-format.md#exists) |
-> | Negate this expression | Checked |
+> | Attribute source | Resource |
+> | Attribute | [Is Current Version](storage-auth-abac-attributes.md#is-current-version) |
+> | Operator | [BoolEquals](../../role-based-access-control/conditions-format.md#boolean-comparison-operators) |
+> | Value | True |
### Example: Delete old blob versions
-This condition allows a user to delete versions of a blob that are older than 06/01/2022 to perform clean up.
+This condition allows a user to delete versions of a blob that are older than 06/01/2022 to perform cleanup. The [Version ID](storage-auth-abac-attributes.md#version-id) attribute is available only for storage accounts where hierarchical namespace is not enabled.
You must add this condition to any role assignments that include the following actions.
Here are the settings to add this condition using the Azure portal.
### Example: Read current blob versions and any blob snapshots
-This condition allows a user to read current blob versions and any blob snapshots.
+This condition allows a user to read current blob versions and any blob snapshots. The [Version ID](storage-auth-abac-attributes.md#version-id) attribute is available only for storage accounts where hierarchical namespace is not enabled. The [Snapshot](storage-auth-abac-attributes.md#snapshot) attribute is available for storage accounts where hierarchical namespace is not enabled and currently in preview for storage accounts where hierarchical namespace is enabled.
You must add this condition to any role assignments that include the following action.
You must add this condition to any role assignments that include the following a
> | Action | Notes | > | | | > | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action` | Add if role definition includes this action, such as Storage Blob Data Owner. |
![Diagram of condition showing read access to current blob versions and any blob snapshots.](./media/storage-auth-abac-examples/version-id-snapshot-blob-read.png)
+Storage Blob Data Owner
+ ``` ( ( !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
+ AND
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action'})
) OR ( Exists @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:snapshot] OR
- NOT Exists @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:versionId]
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:isCurrentVersion] BoolEquals true
+ )
+)
+```
+
+Storage Blob Data Reader, Storage Blob Data Contributor
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
+ )
+ OR
+ (
+ Exists @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:snapshot]
OR
- @Resource[Microsoft.Storage/storageAccounts:isHnsEnabled] BoolEquals true
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:isCurrentVersion] BoolEquals true
) ) ```
Here are the settings to add this condition using the Azure portal.
> [!div class="mx-tableFixed"] > | Condition #1 | Setting | > | | |
-> | Actions | [Read a blob](storage-auth-abac-attributes.md#read-a-blob) |
+> | Actions | [Read a blob](storage-auth-abac-attributes.md#read-a-blob)<br/>[All data operations for accounts with hierarchical namespace enabled](storage-auth-abac-attributes.md#all-data-operations-for-accounts-with-hierarchical-namespace-enabled) (if applicable) |
> | Attribute source | Request | > | Attribute | [Snapshot](storage-auth-abac-attributes.md#snapshot) | > | Exists | [Checked](../../role-based-access-control/conditions-format.md#exists) | > | **Expression 2** | | > | Operator | Or |
-> | Attribute source | Request |
-> | Attribute | [Version ID](storage-auth-abac-attributes.md#version-id) |
-> | Exists | [Checked](../../role-based-access-control/conditions-format.md#exists) |
-> | Negate this expression | Checked |
-> | **Expression 3** | |
-> | Operator | Or |
> | Attribute source | Resource |
-> | Attribute | [Is hierarchical namespace enabled](storage-auth-abac-attributes.md#is-hierarchical-namespace-enabled) |
+> | Attribute | [Is Current Version](storage-auth-abac-attributes.md#is-current-version) |
> | Operator | [BoolEquals](../../role-based-access-control/conditions-format.md#boolean-comparison-operators) | > | Value | True |
storage Storage Explorer Support Policy Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-explorer-support-policy-lifecycle.md
This table describes the release date and the end of support date for each relea
| Storage Explorer version | Release date | End of support date | |:-:|::|:-:|
+| v1.24.1 | May 12, 2022 | May 12, 2023 |
+| v1.24.0 | May 3, 2022 | May 3, 2023 |
| v1.23.1 | April 12, 2022 | April 12, 2023 | | v1.23.0 | March 2, 2022 | March 2, 2023 | | v1.22.1 | January 25, 2022 | January 25, 2023 |
storage Storage Explorer Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-explorer-troubleshooting.md
Many libraries needed by Storage Explorer come preinstalled with Canonical's sta
- iproute2 - libasound2 - libatm1-- libgconf2-4
+- libgconf-2-4
- libnspr4 - libnss3 - libpulse0
If none of these solutions work for you, you can:
- Create a support ticket. - [Open an issue on GitHub](https://github.com/Microsoft/AzureStorageExplorer/issues) by selecting the **Report issue to GitHub** button in the lower-left corner.
-![Feedback](./media/storage-explorer-troubleshooting/feedback-button.PNG)
+![Feedback](./media/storage-explorer-troubleshooting/feedback-button.PNG)
storage Storage Monitoring Diagnosing Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-monitoring-diagnosing-troubleshooting.md
Title: Monitor, diagnose, and troubleshoot Azure Storage | Microsoft Docs
+ Title: Monitor and troubleshoot Azure Storage (classic logs & metrics) | Microsoft Docs
description: Use features like storage analytics, client-side logging, and other third-party tools to identify, diagnose, and troubleshoot Azure Storage-related issues. Previously updated : 10/08/2020 Last updated : 05/23/2022
-# Monitor, diagnose, and troubleshoot Microsoft Azure Storage
--
-## Overview
-
-Diagnosing and troubleshooting issues in a distributed application hosted in a cloud environment can be more complex than in traditional environments. Applications can be deployed in a PaaS or IaaS infrastructure, on premises, on a mobile device, or in some combination of these environments. Typically, your application's network traffic may traverse public and private networks and your application may use multiple storage technologies such as Microsoft Azure Storage Tables, Blobs, Queues, or Files in addition to other data stores such as relational and document databases.
-
-To manage such applications successfully you should monitor them proactively and understand how to diagnose and troubleshoot all aspects of them and their dependent technologies. As a user of Azure Storage services, you should continuously monitor the Storage services your application uses for any unexpected changes in behavior (such as slower than usual response times), and use logging to collect more detailed data and to analyze a problem in depth. The diagnostics information you obtain from both monitoring and logging will help you to determine the root cause of the issue your application encountered. Then you can troubleshoot the issue and determine the appropriate steps you can take to remediate it. Azure Storage is a core Azure service, and forms an important part of the majority of solutions that customers deploy to the Azure infrastructure. Azure Storage includes capabilities to simplify monitoring, diagnosing, and troubleshooting storage issues in your cloud-based applications.
--- [Introduction]
- - [How this guide is organized]
-- [Monitoring your storage service]
- - [Monitoring service health]
- - [Monitoring capacity]
- - [Monitoring availability]
- - [Monitoring performance]
-- [Diagnosing storage issues]
- - [Service health issues]
- - [Performance issues]
- - [Diagnosing errors]
- - [Storage emulator issues]
- - [Storage logging tools]
- - [Using network logging tools]
-- [End-to-end tracing]
- - [Correlating log data]
- - [Client request ID]
- - [Server request ID]
- - [Timestamps]
-- [Troubleshooting guidance]
- - [Metrics show high AverageE2ELatency and low AverageServerLatency]
- - [Metrics show low AverageE2ELatency and low AverageServerLatency but the client is experiencing high latency]
- - [Metrics show high AverageServerLatency]
- - [You are experiencing unexpected delays in message delivery on a queue]
- - [Metrics show an increase in PercentThrottlingError]
- - [Metrics show an increase in PercentTimeoutError]
- - [Metrics show an increase in PercentNetworkError]
- - [The client is receiving HTTP 403 (Forbidden) messages]
- - [The client is receiving HTTP 404 (Not found) messages]
- - [The client is receiving HTTP 409 (Conflict) messages]
- - [Metrics show low PercentSuccess or analytics log entries have operations with transaction status of ClientOtherErrors]
- - [Capacity metrics show an unexpected increase in storage capacity usage]
- - [Your issue arises from using the storage emulator for development or test]
- - [You are encountering problems installing the Azure SDK for .NET]
- - [You have a different issue with a storage service]
- - [Troubleshooting VHDs on Windows virtual machines](/troubleshoot/azure/virtual-machines/welcome-virtual-machines)
- - [Troubleshooting VHDs on Linux virtual machines](/troubleshoot/azure/virtual-machines/welcome-virtual-machines)
- - [Troubleshooting Azure Files issues with Windows](../files/storage-troubleshoot-windows-file-connection-problems.md)
- - [Troubleshooting Azure Files issues with Linux](../files/storage-troubleshoot-linux-file-connection-problems.md)
-- [Appendices]
- - [Appendix 1: Using Fiddler to capture HTTP and HTTPS traffic]
- - [Appendix 2: Using Wireshark to capture network traffic]
- - [Appendix 4: Using Excel to view metrics and log data]
- - [Appendix 5: Monitoring with Application Insights for Azure DevOps]
-
-## <a name="introduction"></a>Introduction
+# Monitor, diagnose, and troubleshoot Microsoft Azure Storage (classic)
This guide shows you how to use features such as Azure Storage Analytics, client-side logging in the Azure Storage Client Library, and other third-party tools to identify, diagnose, and troubleshoot Azure Storage related issues.
This guide is intended to be read primarily by developers of online services tha
- To provide you with the necessary processes and tools to help you decide whether an issue or problem in an application relates to Azure Storage. - To provide you with actionable guidance for resolving problems related to Azure Storage.
+> [!NOTE]
+> This article is based on using Storage Analytics metrics and logs, also referred to as *Classic metrics and logs*. We recommend that you use Azure Storage metrics and logs in Azure Monitor instead of Storage Analytics logs. To learn more, see any of the following articles:
+>
+> - [Monitoring Azure Blob Storage](../blobs/monitor-blob-storage.md)
+> - [Monitoring Azure Files](../files/storage-files-monitoring.md)
+> - [Monitoring Azure Queue Storage](../queues/monitor-queue-storage.md)
+> - [Monitoring Azure Table storage](../tables/monitor-table-storage.md)
+
+## Overview
+
+Diagnosing and troubleshooting issues in a distributed application hosted in a cloud environment can be more complex than in traditional environments. Applications can be deployed in a PaaS or IaaS infrastructure, on premises, on a mobile device, or in some combination of these environments. Typically, your application's network traffic may traverse public and private networks and your application may use multiple storage technologies such as Microsoft Azure Storage Tables, Blobs, Queues, or Files in addition to other data stores such as relational and document databases.
+
+To manage such applications successfully you should monitor them proactively and understand how to diagnose and troubleshoot all aspects of them and their dependent technologies. As a user of Azure Storage services, you should continuously monitor the Storage services your application uses for any unexpected changes in behavior (such as slower than usual response times), and use logging to collect more detailed data and to analyze a problem in depth. The diagnostics information you obtain from both monitoring and logging will help you to determine the root cause of the issue your application encountered. Then you can troubleshoot the issue and determine the appropriate steps you can take to remediate it. Azure Storage is a core Azure service, and forms an important part of the majority of solutions that customers deploy to the Azure infrastructure. Azure Storage includes capabilities to simplify monitoring, diagnosing, and troubleshooting storage issues in your cloud-based applications.
+ ### <a name="how-this-guide-is-organized"></a>How this guide is organized The section "[Monitoring your storage service]" describes how to monitor the health and performance of your Azure Storage services using Azure Storage Analytics Metrics (Storage Metrics).
storage Troubleshoot Storage Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/troubleshoot-storage-availability.md
+
+ Title: Troubleshoot availability issues in Azure Storage accounts
+description: Identify and troubleshoot availability issues in Azure Storage accounts.
+++ Last updated : 05/23/2022++++++
+# Troubleshoot availability issues in Azure Storage accounts
+
+This article helps you investigate changes in availability, such as an increase in the number of failed requests. These changes can often be identified by monitoring storage metrics in Azure Monitor. For general information about using metrics and logs in Azure Monitor, see
+
+- [Monitoring Azure Blob Storage](../blobs/monitor-blob-storage.md)
+- [Monitoring Azure Files](../files/storage-files-monitoring.md)
+- [Monitoring Azure Queue Storage](../queues/monitor-queue-storage.md)
+- [Monitoring Azure Table storage](../tables/monitor-table-storage.md)
+
+## Monitoring availability
+
+You should monitor the availability of the storage services in your storage account by monitoring the value of the **Availability** metric. The **Availability** metric contains a percentage value and is calculated by taking the total billable requests value and dividing it by the number of applicable requests, including those requests that produced unexpected errors.
+
+Any value less than 100% indicates that some storage requests are failing. You can see why they are failing by examining the **ResponseType** dimension for error types such as **ServerTimeoutError**. You should expect to see **Availability** fall temporarily below 100% for reasons such as transient server timeouts while the service moves partitions to better load-balance requests; the retry logic in your client application should handle such intermittent conditions.
+
+You can use features in Azure Monitor to alert you if **Availability** for a service falls below a threshold that you specify.
+
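To inspect the **Availability** metric outside the portal, one option is the Azure Monitor query SDK. A minimal sketch, assuming the Python `azure-monitor-query` and `azure-identity` packages and a hypothetical blob service resource ID, might look like this:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient, MetricAggregationType

# Hypothetical resource ID of the blob service in the storage account.
resource_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.Storage/storageAccounts/<storage-account>/blobServices/default"
)

client = MetricsQueryClient(DefaultAzureCredential())
response = client.query_resource(
    resource_id,
    metric_names=["Availability"],
    timespan=timedelta(hours=1),
    granularity=timedelta(minutes=5),
    aggregations=[MetricAggregationType.AVERAGE],
)

# Flag any five-minute interval where availability dipped below 100%.
for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            if point.average is not None and point.average < 100:
                print(f"{point.timestamp}: availability dropped to {point.average:.2f}%")
```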
+## Metrics show an increase in throttling errors
+
+Throttling errors occur when you exceed the scalability targets of a storage service. The storage service throttles to ensure that no single client or tenant can use the service at the expense of others. For details on scalability targets for storage accounts and performance targets for partitions within storage accounts, see [Scalability and performance targets for standard storage accounts](scalability-targets-standard-account.md).
+
+If the **ClientThrottlingError** or **ServerBusyError** value of the **ResponseType** dimension shows an increase in the percentage of requests that are failing with a throttling error, you need to investigate one of two scenarios:
+
+- Transient increase in throttling errors
+- Permanent increase in throttling errors
+
+An increase in throttling errors often occurs at the same time as an increase in the number of storage requests, or when you are initially load testing your application. This may also manifest itself in the client as "503 Server Busy" or "500 Operation Timeout" HTTP status messages from storage operations.
+
+### Transient increase in throttling errors
+
+If you are seeing spikes in throttling errors that coincide with periods of high activity for the application, you should implement an exponential (not linear) back-off strategy for retries in your client. Back-off retries reduce the immediate load on the partition and help your application to smooth out spikes in traffic. For more information about how to implement retry policies using the Storage Client Library, see the [RetryOptions.MaxRetries](/dotnet/api/microsoft.azure.storage.retrypolicies) property.
+
+> [!NOTE]
+> You may also see spikes in throttling errors that do not coincide with periods of high activity for the application: the most likely cause here is the storage service moving partitions to improve load balancing.
+
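A simple way to add exponential back-off on top of whichever client library you use is to wrap the storage call and retry on throttling responses. The following sketch, assuming the Python `azure-storage-blob` package (whose default retry policy is already exponential), makes the idea explicit by retrying on HTTP 500 and 503 with an exponentially increasing delay:

```python
import random
import time

from azure.core.exceptions import HttpResponseError

def call_with_backoff(operation, max_attempts=5, base_delay=1.0):
    """Run a storage operation, retrying throttling errors with exponential back-off."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except HttpResponseError as err:
            # 503 Server Busy and 500 Operation Timeout indicate throttling or server load.
            if err.status_code not in (500, 503) or attempt == max_attempts - 1:
                raise
            # Exponential delay with jitter: roughly 1s, 2s, 4s, 8s ... plus up to 1s of randomness.
            delay = base_delay * (2 ** attempt) + random.uniform(0, 1)
            time.sleep(delay)

# Example usage with a hypothetical blob client:
# data = call_with_backoff(lambda: blob_client.download_blob().readall())
```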
+### Permanent increase in throttling errors
+
+If you are seeing a consistently high value for throttling errors following a permanent increase in your transaction volumes, or when you are performing your initial load tests on your application, then you need to evaluate how your application is using storage partitions and whether it is approaching the scalability targets for a storage account. For example, if you are seeing throttling errors on a queue (which counts as a single partition), then you should consider using additional queues to spread the transactions across multiple partitions. If you are seeing throttling errors on a table, you need to consider using a different partitioning scheme to spread your transactions across multiple partitions by using a wider range of partition key values. One common cause of this issue is the prepend/append anti-pattern where you select the date as the partition key and then all data on a particular day is written to one partition: under load, this can result in a write bottleneck. Either consider a different partitioning design or evaluate whether using blob storage might be a better solution. Also check whether throttling is occurring as a result of spikes in your traffic and investigate ways of smoothing your pattern of requests.
+
+If you distribute your transactions across multiple partitions, you must still be aware of the scalability limits set for the storage account. For example, if you use ten queues, each processing the maximum of 2,000 1-KB messages per second, you will be at the overall limit of 20,000 messages per second for the storage account. If you need to process more than 20,000 entities per second, you should consider using multiple storage accounts. You should also bear in mind that the size of your requests and entities has an impact on when the storage service throttles your clients: if you have larger requests and entities, you may be throttled sooner.
+
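One way to spread queue transactions across partitions is to shard messages over several queues instead of a single one. A rough sketch, assuming the Python `azure-storage-queue` and `azure-identity` packages and hypothetical queue names that already exist, might look like this:

```python
import hashlib

from azure.identity import DefaultAzureCredential
from azure.storage.queue import QueueClient

ACCOUNT_URL = "https://<storage-account>.queue.core.windows.net"
QUEUE_COUNT = 10  # each queue is its own partition with its own throughput target

credential = DefaultAzureCredential()
queues = [
    QueueClient(ACCOUNT_URL, queue_name=f"orders-{i}", credential=credential)
    for i in range(QUEUE_COUNT)
]

def send_sharded(key: str, message: str) -> None:
    """Route the message to one of the queues based on a stable hash of the key."""
    shard = int(hashlib.sha256(key.encode()).hexdigest(), 16) % QUEUE_COUNT
    queues[shard].send_message(message)

send_sharded("customer-42", "process order 1001")
```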
+Inefficient query design can also cause you to hit the scalability limits for table partitions. For example, a query with a filter that only selects one percent of the entities in a partition but that scans all the entities in a partition will need to access each entity. Every entity read will count towards the total number of transactions in that partition; therefore, you can easily reach the scalability targets.
+
+> [!NOTE]
+> Your performance testing should reveal any inefficient query designs in your application.
+
+## Metrics show an increase in timeout errors
+
+Timeout errors occur when the **ResponseType** dimension is equal to **ServerTimeoutError** or **ClientTimeout**.
+
+Your metrics show an increase in timeout errors for one of your storage services. At the same time, the client receives a high volume of "500 Operation Timeout" HTTP status messages from storage operations.
+
+> [!NOTE]
+> You may see timeout errors temporarily as the storage service load balances requests by moving a partition to a new server.
+
+The server timeouts (**ServerTimeoutError**) are caused by an error on the server. The client timeouts (**ClientTimeout**) happen because an operation on the server has exceeded the timeout specified by the client; for example, a client using the Storage Client Library can set a timeout for an operation.
+
+Server timeouts indicate a problem with the storage service that requires further investigation. You can use metrics to see if you are hitting the scalability limits for the service and to identify any spikes in traffic that might be causing this problem. If the problem is intermittent, it may be due to load-balancing activity in the service. If the problem is persistent and is not caused by your application hitting the scalability limits of the service, you should raise a support issue. For client timeouts, decide whether the timeout is set to an appropriate value in the client, and then either change the timeout value set in the client or investigate how you can improve the performance of the operations in the storage service, for example by optimizing your table queries or reducing the size of your messages.
+
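Client timeout values are typically configurable per operation. As an illustration, the Python `azure-storage-blob` package accepts transport-level `connection_timeout` and `read_timeout` settings on the client and a server-side `timeout` (in seconds) on individual operations; the values and names below are hypothetical placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient(
    account_url="https://<storage-account>.blob.core.windows.net",
    credential=DefaultAzureCredential(),
    connection_timeout=20,   # seconds to establish the connection
    read_timeout=60,         # seconds to wait for data on the socket
)

blob = service.get_blob_client("<container>", "<blob>")

# 'timeout' is the server-side timeout for this single operation, in seconds.
data = blob.download_blob(timeout=30).readall()
```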
+## Metrics show an increase in network errors
+
+Network errors are recorded when the **ResponseType** dimension is equal to **NetworkError**. They occur when a storage service detects a network error while the client makes a storage request.
+
+The most common cause of this error is a client disconnecting before a timeout expires in the storage service. Investigate the code in your client to understand why and when the client disconnects from the storage service. You can also use third-party network analysis tools to investigate network connectivity issues from the client.
+
+## See also
+
+- [Troubleshoot client application errors](../common/troubleshoot-storage-client-application-errors.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)
+- [Troubleshoot performance issues](../common/troubleshoot-storage-performance.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)
+- [Monitor, diagnose, and troubleshoot your Azure Storage](/learn/modules/monitor-diagnose-and-troubleshoot-azure-storage/)
storage Troubleshoot Storage Client Application Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/troubleshoot-storage-client-application-errors.md
+
+ Title: Troubleshoot client application errors in Azure Storage accounts
+description: Identify and troubleshoot errors with client applications that connect to Azure Storage accounts.
+++ Last updated : 05/23/2022++++++
+# Troubleshoot client application errors in Azure Storage accounts
+
+This article helps you investigate client application errors by using metrics, [client side logs](/rest/api/storageservices/Client-side-Logging-with-the-.NET-Storage-Client-Library), and resource logs in Azure Monitor.
+
+## Diagnosing errors
+
+Users of your application may notify you of errors reported by the client application. Azure Monitor also records counts of different response types (values of the **ResponseType** dimension) from your storage services, such as **NetworkError**, **ClientTimeoutError**, or **AuthorizationError**. While Azure Monitor only records counts of different error types, you can obtain more detail about individual requests by examining server-side, client-side, and network logs. Typically, the HTTP status code returned by the storage service will give an indication of why the request failed.
+
+> [!NOTE]
+> Remember that you should expect to see some intermittent errors: for example, errors due to transient network conditions, or application errors.
+
+The following resources are useful for understanding storage-related status and error codes:
+
+- [Common REST API Error Codes](/rest/api/storageservices/Common-REST-API-Error-Codes)
+- [Blob Service Error Codes](/rest/api/storageservices/Blob-Service-Error-Codes)
+- [Queue Service Error Codes](/rest/api/storageservices/Queue-Service-Error-Codes)
+- [Table Service Error Codes](/rest/api/storageservices/Table-Service-Error-Codes)
+- [File Service Error Codes](/rest/api/storageservices/File-Service-Error-Codes)
+
+## The client is receiving HTTP 403 (Forbidden) messages
+
+If your client application is throwing HTTP 403 (Forbidden) errors, a likely cause is that the client is using an expired Shared Access Signature (SAS) when it sends a storage request (although other possible causes include clock skew, invalid keys, and empty headers).
+
+The Storage Client Library for .NET enables you to collect client-side log data that relates to storage operations performed by your application. For more information, see [Client-side Logging with the .NET Storage Client Library](/rest/api/storageservices/Client-side-Logging-with-the-.NET-Storage-Client-Library).
+
+The following table shows a sample from the client-side log generated by the Storage Client Library that illustrates this issue occurring:
+
+| Source | Verbosity | Verbosity level | Client request ID | Operation text |
+| | | | | |
+| Microsoft.Azure.Storage |Information |3 |85d077ab-… |`Starting operation with location Primary per location mode PrimaryOnly.` |
+| Microsoft.Azure.Storage |Information |3 |85d077ab -… |`Starting synchronous request to <https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/Synchronous_and_Asynchronous_Requests#Synchronous_request>` |
+| Microsoft.Azure.Storage |Information |3 |85d077ab -… |`Waiting for response.` |
+| Microsoft.Azure.Storage |Warning |2 |85d077ab -… |`Exception thrown while waiting for response: The remote server returned an error: (403) Forbidden.` |
+| Microsoft.Azure.Storage |Information |3 |85d077ab -… |`Response received. Status code = 403, Request ID = 9d67c64a-64ed-4b0d-9515-3b14bbcdc63d, Content-MD5 = , ETag = .` |
+| Microsoft.Azure.Storage |Warning |2 |85d077ab -… |`Exception thrown during the operation: The remote server returned an error: (403) Forbidden..` |
+| Microsoft.Azure.Storage |Information |3 |85d077ab -… |`Checking if the operation should be retried. Retry count = 0, HTTP status code = 403, Exception = The remote server returned an error: (403) Forbidden..` |
+| Microsoft.Azure.Storage |Information |3 |85d077ab -… |`The next location has been set to Primary, based on the location mode.` |
+| Microsoft.Azure.Storage |Error |1 |85d077ab -… |`Retry policy did not allow for a retry. Failing with The remote server returned an error: (403) Forbidden.` |
+
+In this scenario, you should investigate why the SAS token is expiring before the client sends the token to the server:
+
+- Typically, you should not set a start time when you create a SAS for a client to use immediately. If there are small clock differences between the host generating the SAS using the current time and the storage service, then it is possible for the storage service to receive a SAS that is not yet valid.
+
+- Do not set a very short expiry time on a SAS. Again, small clock differences between the host generating the SAS and the storage service can lead to a SAS apparently expiring earlier than anticipated.
+
+- Does the version parameter in the SAS key (for example **sv=2015-04-05**) match the version of the Storage Client Library you are using? We recommend that you always use the latest version of the storage client library.
+
+- If you regenerate your storage access keys, any existing SAS tokens may be invalidated. This issue may arise if you generate SAS tokens with a long expiry time for client applications to cache.
+
+If you are using the Storage Client Library to generate SAS tokens, then it is easy to build a valid token. However, if you are using the Storage REST API and constructing the SAS tokens by hand, see [Delegating Access with a Shared Access Signature](/rest/api/storageservices/delegate-access-with-shared-access-signature).
+
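For reference, a hedged sketch of generating a blob SAS with the Python `azure-storage-blob` package, following the guidance above (no explicit start time, and an expiry that is not too short), might look like the following. The account name, key, container, and blob are hypothetical placeholders.

```python
from datetime import datetime, timedelta

from azure.storage.blob import BlobSasPermissions, generate_blob_sas

sas_token = generate_blob_sas(
    account_name="<storage-account>",
    container_name="<container>",
    blob_name="<blob>",
    account_key="<account-key>",
    permission=BlobSasPermissions(read=True),
    # No 'start' time: the SAS is valid immediately, which avoids clock-skew issues.
    expiry=datetime.utcnow() + timedelta(hours=1),  # long enough to tolerate small clock differences
)

blob_url = (
    "https://<storage-account>.blob.core.windows.net/<container>/<blob>?" + sas_token
)
print(blob_url)
```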
+## The client is receiving HTTP 404 (Not found) messages
+
+If the client application receives an HTTP 404 (Not found) message from the server, this implies that the object the client was attempting to use (such as an entity, table, blob, container, or queue) does not exist in the storage service. There are a number of possible reasons for this, such as:
+
+- The client or another process previously deleted the object
+
+- A Shared Access Signature (SAS) authorization issue
+
+- Client-side JavaScript code does not have permission to access the object
+
+- Network failure
+
+### The client or another process previously deleted the object
+
+In scenarios where the client is attempting to read, update, or delete data in a storage service, it is usually easy to identify in the storage resource logs a previous operation that deleted the object in question from the storage service. Often, the log data shows that another user or process deleted the object. The Azure Monitor resource logs (server-side) show when a client deleted an object.
+
+In the scenario where a client is attempting to insert an object, it may not be immediately obvious why this results in an HTTP 404 (Not found) response given that the client is creating a new object. However, if the client is creating a blob it must be able to find the blob container, if the client is creating a message it must be able to find a queue, and if the client is adding a row it must be able to find the table.
+
+You can use the client-side log from the Storage Client Library to gain a more detailed understanding of when the client sends specific requests to the storage service.
+
+The following client-side log generated by the Storage Client library illustrates the problem when the client cannot find the container for the blob it is creating. This log includes details of the following storage operations:
+
+| Request ID | Operation |
+| | |
+| 07b26a5d-... |**DeleteIfExists** method to delete the blob container. Note that this operation includes a **HEAD** request to check for the existence of the container. |
+| e2d06d78… |**CreateIfNotExists** method to create the blob container. Note that this operation includes a **HEAD** request that checks for the existence of the container. The **HEAD** returns a 404 message but continues. |
+| de8b1c3c-... |**UploadFromStream** method to create the blob. The **PUT** request fails with a 404 message |
+
+Log entries:
+
+| Request ID | Operation Text |
+| | |
+| 07b26a5d-... |`Starting synchronous request to https://domemaildist.blob.core.windows.net/azuremmblobcontainer.` |
+| 07b26a5d-... |`StringToSign = HEAD............x-ms-client-request-id:07b26a5d-....x-ms-date:Tue, 03 Jun 2014 10:33:11 GMT.x-ms-version:2014-02-14./domemaildist/azuremmblobcontainer.restype:container.` |
+| 07b26a5d-... |`Waiting for response.` |
+| 07b26a5d-... |`Response received. Status code = 200, Request ID = eeead849-...Content-MD5 = , ETag = &quot;0x8D14D2DC63D059B&quot;.` |
+| 07b26a5d-... |`Response headers were processed successfully, proceeding with the rest of the operation.` |
+| 07b26a5d-... |`Downloading response body.` |
+| 07b26a5d-... |`Operation completed successfully.` |
+| 07b26a5d-... |`Starting synchronous request to https://domemaildist.blob.core.windows.net/azuremmblobcontainer.` |
+| 07b26a5d-... |`StringToSign = DELETE............x-ms-client-request-id:07b26a5d-....x-ms-date:Tue, 03 Jun 2014 10:33:12 GMT.x-ms-version:2014-02-14./domemaildist/azuremmblobcontainer.restype:container.` |
+| 07b26a5d-... |`Waiting for response.` |
+| 07b26a5d-... |`Response received. Status code = 202, Request ID = 6ab2a4cf-..., Content-MD5 = , ETag = .` |
+| 07b26a5d-... |`Response headers were processed successfully, proceeding with the rest of the operation.` |
+| 07b26a5d-... |`Downloading response body.` |
+| 07b26a5d-... |`Operation completed successfully.` |
+| e2d06d78-... |`Starting asynchronous request to https://domemaildist.blob.core.windows.net/azuremmblobcontainer.` |
+| e2d06d78-... |`StringToSign = HEAD............x-ms-client-request-id:e2d06d78-....x-ms-date:Tue, 03 Jun 2014 10:33:12 GMT.x-ms-version:2014-02-14./domemaildist/azuremmblobcontainer.restype:container.` |
+| e2d06d78-... |`Waiting for response.` |
+| de8b1c3c-... |`Starting synchronous request to https://domemaildist.blob.core.windows.net/azuremmblobcontainer/blobCreated.txt.` |
+| de8b1c3c-... |`StringToSign = PUT...64.qCmF+TQLPhq/YYK50mP9ZQ==........x-ms-blob-type:BlockBlob.x-ms-client-request-id:de8b1c3c-....x-ms-date:Tue, 03 Jun 2014 10:33:12 GMT.x-ms-version:2014-02-14./domemaildist/azuremmblobcontainer/blobCreated.txt.` |
+| de8b1c3c-... |`Preparing to write request data.` |
+| e2d06d78-... |`Exception thrown while waiting for response: The remote server returned an error: (404) Not Found..` |
+| e2d06d78-... |`Response received. Status code = 404, Request ID = 353ae3bc-..., Content-MD5 = , ETag = .` |
+| e2d06d78-... |`Response headers were processed successfully, proceeding with the rest of the operation.` |
+| e2d06d78-... |`Downloading response body.` |
+| e2d06d78-... |`Operation completed successfully.` |
+| e2d06d78-... |`Starting asynchronous request to https://domemaildist.blob.core.windows.net/azuremmblobcontainer.` |
+| e2d06d78-... |`StringToSign = PUT...0.........x-ms-client-request-id:e2d06d78-....x-ms-date:Tue, 03 Jun 2014 10:33:12 GMT.x-ms-version:2014-02-14./domemaildist/azuremmblobcontainer.restype:container.` |
+| e2d06d78-... |`Waiting for response.` |
+| de8b1c3c-... |`Writing request data.` |
+| de8b1c3c-... |`Waiting for response.` |
+| e2d06d78-... |`Exception thrown while waiting for response: The remote server returned an error: (409) Conflict..` |
+| e2d06d78-... |`Response received. Status code = 409, Request ID = c27da20e-..., Content-MD5 = , ETag = .` |
+| e2d06d78-... |`Downloading error response body.` |
+| de8b1c3c-... |`Exception thrown while waiting for response: The remote server returned an error: (404) Not Found..` |
+| de8b1c3c-... |`Response received. Status code = 404, Request ID = 0eaeab3e-..., Content-MD5 = , ETag = .` |
+| de8b1c3c-... |`Exception thrown during the operation: The remote server returned an error: (404) Not Found..` |
+| de8b1c3c-... |`Retry policy did not allow for a retry. Failing with The remote server returned an error: (404) Not Found..` |
+| e2d06d78-... |`Retry policy did not allow for a retry. Failing with The remote server returned an error: (409) Conflict..` |
+
+In this example, the log shows that the client is interleaving requests from the **CreateIfNotExists** method (request ID e2d06d78…) with the requests from the **UploadFromStream** method (de8b1c3c-...). This interleaving happens because the client application is invoking these methods asynchronously. Modify the asynchronous code in the client to ensure that it creates the container before attempting to upload any data to a blob in that container. Ideally, you should create all your containers in advance.
+
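To avoid the interleaving problem described above, await the container creation before starting the upload (or create containers ahead of time as part of deployment). A rough sketch with the async Python client (the `azure-storage-blob` aio API and `azure-identity`, with hypothetical account and blob names) might look like this:

```python
import asyncio

from azure.core.exceptions import ResourceExistsError
from azure.identity.aio import DefaultAzureCredential
from azure.storage.blob.aio import BlobServiceClient

async def upload_report(data: bytes) -> None:
    credential = DefaultAzureCredential()
    async with BlobServiceClient(
        account_url="https://<storage-account>.blob.core.windows.net",
        credential=credential,
    ) as service:
        container = service.get_container_client("azuremmblobcontainer")

        # Make sure the container exists *before* the upload starts,
        # instead of firing both calls concurrently.
        try:
            await container.create_container()
        except ResourceExistsError:
            pass  # already created earlier

        await container.upload_blob("blobCreated.txt", data, overwrite=True)
    await credential.close()

asyncio.run(upload_report(b"hello"))
```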
+### A Shared Access Signature (SAS) authorization issue
+
+If the client application attempts to use a SAS key that does not include the necessary permissions for the operation, the storage service returns an HTTP 404 (Not found) message to the client. At the same time, in Azure Monitor metrics, you will also see an **AuthorizationError** for the **ResponseType** dimension.
+
+Investigate why your client application is attempting to perform an operation for which it has not been granted permissions.
+
+### Client-side JavaScript code does not have permission to access the object
+
+If you are using a JavaScript client and the storage service is returning HTTP 404 messages, check for the following JavaScript errors in the browser:
+
+```
+SEC7120: Origin http://localhost:56309 not found in Access-Control-Allow-Origin header.
+SCRIPT7002: XMLHttpRequest: Network Error 0x80070005, Access is denied.
+```
+
+> [!NOTE]
+> You can use the F12 Developer Tools in Internet Explorer to trace the messages exchanged between the browser and the storage service when you are troubleshooting client-side JavaScript issues.
+
+These errors occur because the web browser implements the [same origin policy](https://www.w3.org/Security/wiki/Same_Origin_Policy) security restriction that prevents a web page from calling an API in a different domain from the domain the page comes from.
+
+To work around the JavaScript issue, you can configure Cross Origin Resource Sharing (CORS) for the storage service the client is accessing. For more information, see [Cross-Origin Resource Sharing (CORS) Support for Azure Storage Services](/rest/api/storageservices/Cross-Origin-Resource-Sharing--CORS--Support-for-the-Azure-Storage-Services).
+
+The following code sample shows how to configure your blob service to allow JavaScript running in the Contoso domain to access a blob in your blob storage service:
+
+#### [.NET v12 SDK](#tab/dotnet)
++++
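The .NET v12 sample referenced above is pulled in from an include file in the source repository and is not reproduced here. For comparison, a rough equivalent using the Python `azure-storage-blob` package (with a hypothetical account and credential) might look like this:

```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient, CorsRule

service = BlobServiceClient(
    account_url="https://<storage-account>.blob.core.windows.net",
    credential=DefaultAzureCredential(),
)

# Allow JavaScript served from the Contoso domain to issue GET requests against blobs.
cors_rule = CorsRule(
    allowed_origins=["http://www.contoso.com"],
    allowed_methods=["GET"],
    allowed_headers=["*"],
    exposed_headers=["*"],
    max_age_in_seconds=3600,
)

service.set_service_properties(cors=[cors_rule])
```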
+### Network Failure
+
+In some circumstances, lost network packets can lead to the storage service returning HTTP 404 messages to the client. For example, when your client application is deleting an entity from the table service, you see the client throw a storage exception reporting an "HTTP 404 (Not Found)" status message from the table service. When you investigate the table in the table storage service, you see that the service did delete the entity as requested.
+
+The exception details in the client include the request ID (7e84f12d…) assigned by the table service for the request: you can use this information to locate the request details in the storage resource logs in Azure Monitor by searching the [fields that describe how the operation was authenticated](../blobs/monitor-blob-storage-reference.md) in log entries. You could also use the metrics to identify when failures such as this occur and then search the log files based on the time the metrics recorded this error. This log entry shows that the delete failed with an "HTTP (404) Client Other Error" status message. The same log entry also includes the request ID generated by the client in the **client-request-id** column (813ea74f…).
+
+The server-side log also includes another entry with the same **client-request-id** value (813ea74f…) for a successful delete operation for the same entity, and from the same client. This successful delete operation took place very shortly before the failed delete request.
+
+The most likely cause of this scenario is that the client sent a delete request for the entity to the table service, which succeeded, but did not receive an acknowledgment from the server (perhaps due to a temporary network issue). The client then automatically retried the operation (using the same **client-request-id**), and this retry failed because the entity had already been deleted.
+
+If this problem occurs frequently, you should investigate why the client is failing to receive acknowledgments from the table service. If the problem is intermittent, you should trap the "HTTP (404) Not Found" error and log it in the client, but allow the client to continue.
+
+## The client is receiving HTTP 409 (Conflict) messages
+
+When a client deletes blob containers, tables, or queues there is a brief period before the name becomes available again. If the code in your client application deletes and then immediately recreates a blob container using the same name, the **CreateIfNotExists** method eventually fails with the HTTP 409 (Conflict) error.
+
+The client application should use unique container names whenever it creates new containers if the delete/recreate pattern is common.
+
+## Metrics show low PercentSuccess or analytics log entries have operations with transaction status of ClientOtherErrors
+
+The **Success** value of the **ResponseType** dimension captures the percentage of operations that were successful based on their HTTP status code. Operations with status codes of 2XX count as successful, whereas operations with status codes in the 3XX, 4XX, and 5XX ranges are counted as unsuccessful and lower the **Success** metric value. In storage resource logs, these operations are recorded with a transaction status of **ClientOtherError**.
+
+It is important to note that these operations have completed successfully and therefore do not affect other metrics such as availability. Some examples of operations that execute successfully but that can result in unsuccessful HTTP status codes include:
+
+- **ResourceNotFound** (Not Found 404), for example from a GET request to a blob that does not exist.
+- **ResourceAlreadyExists** (Conflict 409), for example from a **CreateIfNotExist** operation where the resource already exists.
+- **ConditionNotMet** (Not Modified 304), for example from a conditional operation such as when a client sends an **ETag** value and an HTTP **If-None-Match** header to request an image only if it has been updated since the last operation.
+
+You can find a list of common REST API error codes that the storage services return on the page [Common REST API Error Codes](/rest/api/storageservices/Common-REST-API-Error-Codes).
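In client code these "successful failures" are usually handled as normal control flow rather than treated as outages. A small sketch with the Python `azure-storage-blob` package (hypothetical account, container, and blob names) that deliberately tolerates 404 and 409 responses:

```python
from azure.core.exceptions import ResourceExistsError, ResourceNotFoundError
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient(
    account_url="https://<storage-account>.blob.core.windows.net",
    credential=DefaultAzureCredential(),
)
container = service.get_container_client("reports")

# 409 Conflict (ResourceAlreadyExists) when the container is already there: not a real failure.
try:
    container.create_container()
except ResourceExistsError:
    pass

# 404 Not Found (ResourceNotFound) when the blob does not exist yet: also expected.
try:
    data = container.download_blob("2022-05-24.csv").readall()
except ResourceNotFoundError:
    data = None  # fall back to generating the report
```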
++
+## See also
+
+- [Monitoring Azure Blob Storage](../blobs/monitor-blob-storage.md)
+- [Monitoring Azure Files](../files/storage-files-monitoring.md)
+- [Monitoring Azure Queue Storage](../queues/monitor-queue-storage.md)
+- [Monitoring Azure Table storage](../tables/monitor-table-storage.md)
+- [Troubleshoot performance issues](../common/troubleshoot-storage-performance.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)
+- [Troubleshoot availability issues](../common/troubleshoot-storage-availability.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)
+- [Monitor, diagnose, and troubleshoot your Azure Storage](/learn/modules/monitor-diagnose-and-troubleshoot-azure-storage/)
storage Troubleshoot Storage Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/troubleshoot-storage-performance.md
+
+ Title: Troubleshoot performance issues in Azure Storage accounts
+description: Identify and troubleshoot performance issues in Azure Storage accounts.
+++ Last updated : 05/23/2022++++++
+# Troubleshoot performance in Azure Storage accounts
+
+This article helps you investigate unexpected changes in behavior (such as slower than usual response times). These changes in behavior can often be identified by monitoring storage metrics in Azure Monitor. For general information about using metrics and logs in Azure Monitor, see
+
+- [Monitoring Azure Blob Storage](../blobs/monitor-blob-storage.md)
+- [Monitoring Azure Files](../files/storage-files-monitoring.md)
+- [Monitoring Azure Queue Storage](../queues/monitor-queue-storage.md)
+- [Monitoring Azure Table storage](../tables/monitor-table-storage.md)
+
+## Monitoring performance
+
+To monitor the performance of the storage services, you can use the following metrics.
+
+- The **SuccessE2ELatency** and **SuccessServerLatency** metrics show the average time the storage service or API operation type is taking to process requests. **SuccessE2ELatency** is a measure of end-to-end latency that includes the time taken to read the request and send the response in addition to the time taken to process the request (therefore includes network latency once the request reaches the storage service); **SuccessServerLatency** is a measure of just the processing time and therefore excludes any network latency related to communicating with the client.
+
+- The **Egress** and **Ingress** metrics show the total amount of data, in bytes, coming in to and going out of your storage service or through a specific API operation type.
+
+- The **Transactions** metric shows the total number of requests that the storage service or API operation receives.
+
+You can monitor for unexpected changes in any of these values. These changes could indicate an issue that requires further investigation.
+
+In the [Azure portal](https://portal.azure.com), you can add alert rules that notify you when any of the performance metrics for this service falls below or exceeds a threshold that you specify.
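+As an illustration, the following sketch creates a metric alert on **SuccessE2ELatency** by using Azure PowerShell. The threshold, window, resource IDs, and action group are assumptions that you would replace with your own values.
+
+```powershell
+# Alert when average end-to-end latency exceeds 500 ms over a 5-minute window (illustrative values).
+$criteria = New-AzMetricAlertRuleV2Criteria -MetricName "SuccessE2ELatency" -TimeAggregation Average -Operator GreaterThan -Threshold 500
+
+Add-AzMetricAlertRuleV2 -Name "storage-high-e2e-latency" -ResourceGroupName "<resource-group>" `
+    -TargetResourceId "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/blobServices/default" `
+    -WindowSize (New-TimeSpan -Minutes 5) -Frequency (New-TimeSpan -Minutes 5) `
+    -Condition $criteria -Severity 2 -ActionGroupId "<action-group-resource-id>"
+```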
+
+## Diagnose performance issues
+
+The performance of an application can be subjective, especially from a user perspective. Therefore, it is important to have baseline metrics available to help you identify where there might be a performance issue. Many factors might affect the performance of an Azure storage service from the client application perspective. These factors might operate in the storage service, in the client, or in the network infrastructure, so it is important to have a strategy for identifying the origin of the performance issue.
+
+After you have identified the likely location of the cause from the metrics, you can use the log files to find detailed information to diagnose and troubleshoot the problem further.
+
+## Metrics show high SuccessE2ELatency and low SuccessServerLatency
+
+In some cases, you might find that **SuccessE2ELatency** is significantly higher than **SuccessServerLatency**. The storage service only calculates the **SuccessE2ELatency** metric for successful requests and, unlike **SuccessServerLatency**, it includes the time the client takes to send the data and receive acknowledgment from the storage service. Therefore, a difference between **SuccessE2ELatency** and **SuccessServerLatency** could be due either to the client application being slow to respond, or to conditions on the network.
+
+> [!NOTE]
+> You can also view **E2ELatency** and **ServerLatency** for individual storage operations in the Storage Logging log data.
+
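+If you send resource logs to a Log Analytics workspace instead, you can compare per-operation end-to-end and server latency with a query like the following sketch. The workspace ID is a placeholder, and this assumes the **StorageBlobLogs** table is being populated for your account.
+
+```powershell
+# Query resource logs for operations where most of the latency is outside the storage service.
+$query = @"
+StorageBlobLogs
+| where TimeGenerated > ago(1h)
+| project TimeGenerated, OperationName, DurationMs, ServerLatencyMs, ClientOrNetworkMs = DurationMs - ServerLatencyMs
+| top 20 by ClientOrNetworkMs desc
+"@
+(Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-guid>" -Query $query).Results
+```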
+### Investigating client performance issues
+
+Possible reasons for the client responding slowly include having a limited number of available connections or threads, or being low on resources such as CPU, memory, or network bandwidth. You may be able to resolve the issue by modifying the client code to be more efficient (for example, by using asynchronous calls to the storage service), or by using a larger virtual machine (with more cores and more memory).
+
+For the table and queue services, the Nagle algorithm can also cause high **SuccessE2ELatency** as compared to **SuccessServerLatency**. For more information, see the post [Nagle's Algorithm is Not Friendly towards Small Requests](/archive/blogs/windowsazurestorage/nagles-algorithm-is-not-friendly-towards-small-requests). You can disable the Nagle algorithm in code by using the **ServicePointManager** class in the **System.Net** namespace. Do this before you make any calls to the table or queue services in your application, because the setting does not affect connections that are already open. In a worker role, this setting typically goes in the **Application_Start** method, before any storage calls are made.
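+A minimal PowerShell sketch of the same idea is shown below; the endpoint URIs are placeholders for your own storage account, and the setting must be applied before the first table or queue call.
+
+```powershell
+# Placeholder endpoints; replace <storage-account-name> with your own account.
+$tableEndpoint = [Uri]"https://<storage-account-name>.table.core.windows.net"
+$queueEndpoint = [Uri]"https://<storage-account-name>.queue.core.windows.net"
+
+# Disable the Nagle algorithm on the service points for these endpoints.
+# Existing connections are not affected, so run this before any table or queue calls.
+([System.Net.ServicePointManager]::FindServicePoint($tableEndpoint)).UseNagleAlgorithm = $false
+([System.Net.ServicePointManager]::FindServicePoint($queueEndpoint)).UseNagleAlgorithm = $false
+```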
+You should check the client-side logs to see how many requests your client application is submitting, and check for general .NET-related performance bottlenecks in your client, such as CPU, .NET garbage collection, network utilization, or memory. As a starting point for troubleshooting .NET client applications, see [Debugging, Tracing, and Profiling](/dotnet/framework/debug-trace-profile/).
+
+### Investigating network latency issues
+
+Typically, high end-to-end latency caused by the network is due to transient conditions. You can investigate both transient and persistent network issues such as dropped packets by using tools such as Wireshark.
+
+## Metrics show low SuccessE2ELatency and low SuccessServerLatency but the client is experiencing high latency
+
+In this scenario, the most likely cause is a delay in the storage request reaching the storage service. You should investigate why requests from the client are not making it through to the blob service.
+
+One possible reason for the client delaying sending requests is that there are a limited number of available connections or threads.
+
+Also check whether the client is performing multiple retries, and investigate the reason if it is. To determine whether the client is performing multiple retries, you can:
+
+- Examine logs. If multiple retries are happening, you will see multiple operations with the same client request IDs.
+
+- Examine the client logs. Verbose logging will indicate that a retry has occurred.
+
+- Debug your code, and check the properties of the **OperationContext** object associated with the request. If the operation was retried, the **RequestResults** property will include multiple unique requests. You can also check the start and end times for each request.
+
+If there are no issues in the client, you should investigate potential network issues such as packet loss. You can use tools such as Wireshark to investigate network issues.
+
+## Metrics show high SuccessServerLatency
+
+In the case of high **SuccessServerLatency** for blob download requests, you should use the storage logs to see whether there are repeated requests for the same blob (or set of blobs). For blob upload requests, you should investigate what block size the client is using (for example, blocks less than 64 KiB in size can result in overhead unless the reads are also in less-than-64-KiB chunks), and whether multiple clients are uploading blocks to the same blob in parallel. You should also check the per-minute metrics for spikes in the number of requests that result in exceeding the per-second scalability targets.
+
+If you are seeing high **SuccessServerLatency** for blob download requests when there are repeated requests for the same blob or set of blobs, then you should consider caching these blobs by using Azure Cache or the Azure Content Delivery Network (CDN). For upload requests, you can improve the throughput by using a larger block size. For queries to tables, it's also possible to implement client-side caching on clients that perform the same query operations and where the data doesn't change frequently.
+
+High **SuccessServerLatency** values can also be a symptom of poorly designed tables or queries that result in scan operations or that follow the append/prepend anti-pattern.
+
+> [!NOTE]
+> You can find a comprehensive performance checklist here: [Microsoft Azure Storage Performance and Scalability Checklist](../blobs/storage-performance-checklist.md).
+
+## You are experiencing unexpected delays in message delivery on a queue
+
+If you are experiencing a delay between the time an application adds a message to a queue and the time it becomes available to read from the queue, then you should take the following steps to diagnose the issue:
+
+- Verify the application is successfully adding the messages to the queue. Check that the application is not retrying the **AddMessage** method several times before succeeding.
+
+- Verify there is no clock skew between the worker role that adds the message to the queue and the worker role that reads the message from the queue; clock skew can make it appear as if there is a delay in processing.
+
+- Check if the worker role that reads the messages from the queue is failing. If a queue client calls the **GetMessage** method but fails to respond with an acknowledgment, the message will remain invisible on the queue until the **invisibilityTimeout** period expires. At this point, the message becomes available for processing again.
+
+- Check if the queue length is growing over time (see the sketch after this list). This can occur if you do not have sufficient workers available to process all of the messages that other workers are placing on the queue. Also check the metrics to see if delete requests are failing, and check the dequeue count on messages, which might indicate repeated failed attempts to delete the message.
+
+- Examine the Storage logs for any queue operations that have higher than expected **E2ELatency** and **ServerLatency** values over a longer period of time than usual.
+
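+As a quick way to check the queue length mentioned above, the following sketch reads the approximate message count with Azure PowerShell; the account and queue names are placeholders, and this assumes the signed-in identity has data access to the queue.
+
+```powershell
+# Check how many messages are waiting in the queue (an approximate, point-in-time value).
+$ctx = New-AzStorageContext -StorageAccountName "<storage-account>" -UseConnectedAccount
+(Get-AzStorageQueue -Name "<queue-name>" -Context $ctx).ApproximateMessageCount
+```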
+## See also
+
+- [Troubleshoot client application errors](../common/troubleshoot-storage-client-application-errors.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)
+- [Troubleshoot availability issues](../common/troubleshoot-storage-availability.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)
+- [Monitor, diagnose, and troubleshoot your Azure Storage](/learn/modules/monitor-diagnose-and-troubleshoot-azure-storage/)
storage File Sync Networking Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-networking-endpoints.md
description: Learn how to configure Azure File Sync network endpoints.
Previously updated : 04/13/2021 Last updated : 05/24/2021
Address: 192.168.0.5
### Create the Storage Sync Service private endpoint
-> [!Important]
-> In order to use private endpoints on the Storage Sync Service resource, you must use Azure File Sync agent version 10.1 or greater. Agent versions prior to 10.1 do not support private endpoints on the Storage Sync Service. All prior agent versions support private endpoints on the storage account resource.
- # [Portal](#tab/azure-portal) Navigate to the **Private Link Center** by typing *Private Link* into the search bar at the top of the Azure portal. In the table of contents for the Private Link Center, select **Private endpoints**, and then **+ Add** to create a new private endpoint.
storage File Sync Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-release-notes.md
Previously updated : 3/30/2022 Last updated : 5/24/2022
The following Azure File Sync agent versions are supported:
| V14.1 Release - [KB5001873](https://support.microsoft.com/topic/d06b8723-c4cf-4c64-b7ec-3f6635e044c5)| 14.1.0.0 | December 1, 2021 | Supported | | V14 Release - [KB5001872](https://support.microsoft.com/topic/92290aa1-75de-400f-9442-499c44c92a81)| 14.0.0.0 | October 29, 2021 | Supported | | V13 Release - [KB4588753](https://support.microsoft.com/topic/632fb833-42ed-4e4d-8abd-746bd01c1064)| 13.0.0.0 | July 12, 2021 | Supported - Agent version expires on August 8, 2022 |
-| V12.1 Release - [KB4588751](https://support.microsoft.com/topic/497dc33c-d38b-42ca-8015-01c906b96132)| 12.1.0.0 | May 20, 2021 | Supported - Agent version expires on May 23, 2022 |
-| V12 Release - [KB4568585](https://support.microsoft.com/topic/b9605f04-b4af-4ad8-86b0-2c490c535cfd)| 12.0.0.0 | March 26, 2021 | Supported - Agent version expires on May 23, 2022 |
## Unsupported versions The following Azure File Sync agent versions have expired and are no longer supported: | Milestone | Agent version number | Release date | Status | |-|-|--||
+| V12 Release | 12.0.0.0 - 12.1.0.0 | N/A | Not Supported - Agent versions expired on May 23, 2022 |
| V11 Release | 11.1.0.0 - 11.3.0.0 | N/A | Not Supported - Agent versions expired on March 28, 2022 | | V10 Release | 10.0.0.0 - 10.1.0.0 | N/A | Not Supported - Agent versions expired on June 28, 2021 | | V9 Release | 9.0.0.0 - 9.1.0.0 | N/A | Not Supported - Agent versions expired on February 16, 2021 |
The following items don't sync, but the rest of the system continues to operate
### Cloud tiering - If a tiered file is copied to another location by using Robocopy, the resulting file isn't tiered. The offline attribute might be set because Robocopy incorrectly includes that attribute in copy operations. - When copying files using robocopy, use the /MIR option to preserve file timestamps. This will ensure older files are tiered sooner than recently accessed files.-
-## Agent version 12.1.0.0
-The following release notes are for version 12.1.0.0 of the Azure File Sync agent released May 20, 2021. These notes are in addition to the release notes listed for version 12.0.0.0.
-
-### Improvements and issues that are fixed
-The v12.0 agent release had two bugs which are fixed in this release:
-- Agent auto-update fails to update the agent to a later version.-- FileSyncErrorsReport.ps1 script does not provide the list of per-item errors.-
-## Agent version 12.0.0.0
-The following release notes are for version 12.0.0.0 of the Azure File Sync agent (released March 26, 2021).
-
-### Improvements and issues that are fixed
-- New portal experience to configure network access policy and private endpoint connections
- - You can now use the portal to disable access to the Storage Sync Service public endpoint and to approve, reject and remove private endpoint connections. To configure the network access policy and private endpoint connections, open the Storage Sync Service portal, go to the Settings section and click Network.
-
-- Cloud Tiering support for volume cluster sizes larger than 64KiB
- - Cloud Tiering now supports volume cluster sizes up to 2MiB on Server 2019. To learn more, see [What is the minimum file size for a file to tier?](./file-sync-choose-cloud-tiering-policies.md#minimum-file-size-for-a-file-to-tier).
-
-- Measure bandwidth and latency to Azure File Sync service and storage account
- - The Test-StorageSyncNetworkConnectivity cmdlet can now be used to measure latency and bandwidth to the Azure File Sync service and storage account. Latency to the Azure File Sync service and storage account is measured by default when running the cmdlet. Upload and download bandwidth to the storage account is measured when using the "-MeasureBandwidth" parameter.
-
- For example, to measure bandwidth and latency to the Azure File Sync service and storage account, run the following PowerShell commands:
-
- ```powershell
- Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll"
- Test-StorageSyncNetworkConnectivity -MeasureBandwidth
- ```
-
-- Improved error messages in the portal when server endpoint creation fails
- - We heard your feedback and have improved the error messages and guidance when server endpoint creation fails.
-
-- Miscellaneous performance and reliability improvements
- - Improved change detection performance to detect files that have changed in the Azure file share.
- - Performance improvements for reconciliation sync sessions.
- - Sync improvements to reduce ECS_E_SYNC_METADATA_KNOWLEDGE_SOFT_LIMIT_REACHED and ECS_E_SYNC_METADATA_KNOWLEDGE_LIMIT_REACHED errors.
- - Fixed a bug that causes data corruption if cloud tiering is enabled and tiered files are copied using Robocopy with the /B parameter.
- - Fixed a bug that can cause files to fail to tier on Server 2019 if Data Deduplication is enabled on the volume.
- - Fixed a bug that can cause AFSDiag to fail to compress files if a file is larger than 2GiB.
-
-### Evaluation Tool
-Before deploying Azure File Sync, you should evaluate whether it is compatible with your system using the Azure File Sync evaluation tool. This tool is an Azure PowerShell cmdlet that checks for potential issues with your file system and dataset, such as unsupported characters or an unsupported OS version. For installation and usage instructions, see [Evaluation Tool](file-sync-planning.md#evaluation-cmdlet) section in the planning guide.
-
-### Agent installation and server configuration
-For more information on how to install and configure the Azure File Sync agent with Windows Server, see [Planning for an Azure File Sync deployment](file-sync-planning.md) and [How to deploy Azure File Sync](file-sync-deployment-guide.md).
--- A restart is required for servers that have an existing Azure File Sync agent installation.-- The agent installation package must be installed with elevated (admin) permissions.-- The agent is not supported on Nano Server deployment option.-- The agent is supported only on Windows Server 2019, Windows Server 2016, and Windows Server 2012 R2.-- The agent requires at least 2 GiB of memory. If the server is running in a virtual machine with dynamic memory enabled, the VM should be configured with a minimum 2048 MiB of memory. See [Recommended system resources](file-sync-planning.md#recommended-system-resources) for more information.-- The Storage Sync Agent (FileSyncSvc) service does not support server endpoints located on a volume that has the system volume information (SVI) directory compressed. This configuration will lead to unexpected results.-
-### Interoperability
-- Antivirus, backup, and other applications that access tiered files can cause undesirable recall unless they respect the offline attribute and skip reading the content of those files. For more information, see [Troubleshoot Azure File Sync](file-sync-troubleshoot.md).-- File Server Resource Manager (FSRM) file screens can cause endless sync failures when files are blocked because of the file screen.-- Running sysprep on a server that has the Azure File Sync agent installed is not supported and can lead to unexpected results. The Azure File Sync agent should be installed after deploying the server image and completing sysprep mini-setup.-
-### Sync limitations
-The following items don't sync, but the rest of the system continues to operate normally:
-- Files with unsupported characters. See [Troubleshooting guide](file-sync-troubleshoot.md#handling-unsupported-characters) for list of unsupported characters.-- Files or directories that end with a period.-- Paths that are longer than 2,048 characters.-- The system access control list (SACL) portion of a security descriptor that's used for auditing.-- Extended attributes.-- Alternate data streams.-- Reparse points.-- Hard links.-- Compression (if it's set on a server file) isn't preserved when changes sync to that file from other endpoints.-- Any file that's encrypted with EFS (or other user mode encryption) that prevents the service from reading the data.-
- > [!Note]
- > Azure File Sync always encrypts data in transit. Data is always encrypted at rest in Azure.
-
-### Server endpoint
-- A server endpoint can be created only on an NTFS volume. ReFS, FAT, FAT32, and other file systems aren't currently supported by Azure File Sync.-- Cloud tiering is not supported on the system volume. To create a server endpoint on the system volume, disable cloud tiering when creating the server endpoint.-- Failover Clustering is supported only with clustered disks, but not with Cluster Shared Volumes (CSVs).-- A server endpoint can't be nested. It can coexist on the same volume in parallel with another endpoint.-- Do not store an OS or application paging file within a server endpoint location.-- The server name in the portal is not updated if the server is renamed.-
-### Cloud endpoint
-- Azure File Sync supports making changes to the Azure file share directly. However, any changes made on the Azure file share first need to be discovered by an Azure File Sync change detection job. A change detection job is initiated for a cloud endpoint once every 24 hours. To immediately sync files that are changed in the Azure file share, the [Invoke-AzStorageSyncChangeDetection](/powershell/module/az.storagesync/invoke-azstoragesyncchangedetection) PowerShell cmdlet can be used to manually initiate the detection of changes in the Azure file share. In addition, changes made to an Azure file share over the REST protocol will not update the SMB last modified time and will not be seen as a change by sync.-- The storage sync service and/or storage account can be moved to a different resource group, subscription, or Azure AD tenant. After the storage sync service or storage account is moved, you need to give the Microsoft.StorageSync application access to the storage account (see [Ensure Azure File Sync has access to the storage account](file-sync-troubleshoot.md?tabs=portal1%252cportal#troubleshoot-rbac)).-
- > [!Note]
- > When creating the cloud endpoint, the storage sync service and storage account must be in the same Azure AD tenant. Once the cloud endpoint is created, the storage sync service and storage account can be moved to different Azure AD tenants.
-
-### Cloud tiering
-- If a tiered file is copied to another location by using Robocopy, the resulting file isn't tiered. The offline attribute might be set because Robocopy incorrectly includes that attribute in copy operations.-- When copying files using robocopy, use the /MIR option to preserve file timestamps. This will ensure older files are tiered sooner than recently accessed files.
storage Storage Files Identity Ad Ds Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-enable.md
Previously updated : 05/06/2022 Last updated : 05/24/2022
Connect-AzAccount
# $StorageAccountName is the name of an existing storage account that you want to join to AD # $SamAccountName is an AD object, see https://docs.microsoft.com/en-us/windows/win32/adschema/a-samaccountname # for more information.
-# If you want to use AES256 encryption (recommended), except for the trailing '$', the storage account name must be the same as the computer object's SamAccountName.
$SubscriptionId = "<your-subscription-id-here>" $ResourceGroupName = "<resource-group-name-here>" $StorageAccountName = "<storage-account-name-here>"
Set-AzStorageAccount `
-ActiveDirectoryForestName "<your-forest-name-here>" ` -ActiveDirectoryDomainGuid "<your-guid-here>" ` -ActiveDirectoryDomainsid "<your-domain-sid-here>" `
- -ActiveDirectoryAzureStorageSid "<your-storage-account-sid>"
+ -ActiveDirectoryAzureStorageSid "<your-storage-account-sid>" `
+ -ActiveDirectorySamAccountName "<your-domain-object-sam-account-name>" `
+ -ActiveDirectoryAccountType "<your-domain-object-account-type, the value could be 'Computer' or 'User', for AES256 must be 'Computer'>"
``` #### Enable AES-256 encryption (recommended) To enable AES-256 encryption, follow the steps in this section. If you plan to use RC4, skip this section.
-The domain object that represents your storage account must meet the following requirements:
--- The domain object must be created as a computer object in the on-premises AD domain.-- Except for the trailing '$', the storage account name must be the same as the computer object's SamAccountName.-
-If your domain object doesn't meet those requirements, delete it and create a new domain object that does.
+> [!IMPORTANT]
+> The domain object that represents your storage account must be created as a computer object in the on-premises AD domain. If your domain object doesn't meet this requirement, delete it and create a new domain object that does.
-Replace `<domain-object-identity>` and `<domain-name>` with your values, then run the following cmdlet to configure AES-256 support:
+Replace `<domain-object-identity>` and `<domain-name>` with your values, then run the following cmdlet to configure AES-256 support:
```powershell Set-ADComputer -Identity <domain-object-identity> -Server <domain-name> -KerberosEncryptionType "AES256" ```
-After you've run that cmdlet, replace `<domain-object-identity>` in the following script with your value, then run the script to refresh your domain object password:
+After you've run the above cmdlet, replace `<domain-object-identity>` in the following script with your value, then run the script to refresh your domain object password:
```powershell $KeyName = "kerb1" # Could be either the first or second kerberos key, this script assumes we're refreshing the first $KerbKeys = New-AzStorageAccountKey -ResourceGroupName $ResourceGroupName -Name $StorageAccountName -KeyName $KeyName
-$KerbKey = $KerbKeys | Where-Object {$_.KeyName -eq $KeyName} | Select-Object -ExpandProperty Value
+$KerbKey = $KerbKeys.keys | Where-Object {$_.KeyName -eq $KeyName} | Select-Object -ExpandProperty Value
$NewPassword = ConvertTo-SecureString -String $KerbKey -AsPlainText -Force Set-ADAccountPassword -Identity <domain-object-identity> -Reset -NewPassword $NewPassword
Set-ADAccountPassword -Identity <domain-object-identity> -Reset -NewPassword $Ne
### Debugging
-You can run the Debug-AzStorageAccountAuth cmdlet to conduct a set of basic checks on your AD configuration with the logged on AD user. This cmdlet is supported on AzFilesHybrid v0.1.2+ version. For more information on the checks performed in this cmdlet, see [Unable to mount Azure Files with AD credentials](storage-troubleshoot-windows-file-connection-problems.md#unable-to-mount-azure-files-with-ad-credentials) in the troubleshooting guide for Windows.
+You can run the `Debug-AzStorageAccountAuth` cmdlet to conduct a set of basic checks on your AD configuration with the logged on AD user. This cmdlet is supported on AzFilesHybrid v0.1.2+ version. For more information on the checks performed in this cmdlet, see [Unable to mount Azure Files with AD credentials](storage-troubleshoot-windows-file-connection-problems.md#unable-to-mount-azure-files-with-ad-credentials) in the troubleshooting guide for Windows.
```PowerShell Debug-AzStorageAccountAuth -StorageAccountName $StorageAccountName -ResourceGroupName $ResourceGroupName -Verbose
storage Storage Files Networking Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-networking-overview.md
description: An overview of networking options for Azure Files.
Previously updated : 04/19/2022 Last updated : 05/23/2022
This reflects the fact that the storage account can expose both the public endpo
- Forward the `core.windows.net` zone from your on-premises DNS servers to your Azure private DNS zone. The Azure private DNS host can be reached through a special IP address (`168.63.129.16`) that is only accessible inside virtual networks that are linked to the Azure private DNS zone. To work around this limitation, you can run additional DNS servers within your virtual network that will forward `core.windows.net` on to the Azure private DNS zone. To simplify this set up, we have provided PowerShell cmdlets that will auto-deploy DNS servers in your Azure virtual network and configure them as desired. To learn how to set up DNS forwarding, see [Configuring DNS with Azure Files](storage-files-networking-dns.md). ## SMB over QUIC
-Windows Server 2022 Azure Edition supports a new transport protocol called QUIC for the SMB server provided by the File Server role. QUIC is a replacement for TCP that is built on top of UDP, providing numerous advantages over TCP while still providing a reliable transport mechanism. Although there are multiple advantages to QUIC as a transport protocol, one key advantage for the SMB protocol is that all transport is done over port 443, which is widely open outbound to support HTTPS. This effectively means that SMB over QUIC offers a "SMB VPN" for file sharing over the public internet. Windows 11 ships with a SMB over QUIC capable client.
+Windows Server 2022 Azure Edition supports a new transport protocol called QUIC for the SMB server provided by the File Server role. QUIC is a replacement for TCP that is built on top of UDP, providing numerous advantages over TCP while still providing a reliable transport mechanism. One key advantage for the SMB protocol is that instead of using port 445, all transport is done over port 443, which is widely open outbound to support HTTPS. This effectively means that SMB over QUIC offers an "SMB VPN" for file sharing over the public internet. Windows 11 ships with an SMB over QUIC capable client.
-At this time, Azure Files does not directly support SMB over QUIC. However, you can create a lightweight cache of your Azure file shares on a Windows Server 2022 Azure Edition VM using Azure File Sync. To learn more about this option, see [Deploy Azure File Sync](../file-sync/file-sync-deployment-guide.md) and [SMB over QUIC](/windows-server/storage/file-server/smb-over-quic).
+At this time, Azure Files doesn't directly support SMB over QUIC. However, you can get access to Azure file shares via Azure File Sync running on Windows Server, as in the diagram below. This also gives you the option to have Azure File Sync caches either on-premises or in different Azure datacenters to provide local caches for a distributed workforce. To learn more about this option, see [Deploy Azure File Sync](../file-sync/file-sync-deployment-guide.md) and [SMB over QUIC](/windows-server/storage/file-server/smb-over-quic).
+ ## See also - [Azure Files overview](storage-files-introduction.md)
storage Storage Files Quick Create Use Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-quick-create-use-linux.md
description: This tutorial covers how to use the Azure portal to deploy a Linux
Previously updated : 03/22/2022 Last updated : 05/24/2022 #Customer intent: As an IT admin new to Azure Files, I want to try out Azure file share using NFS and Linux so I can determine whether I want to subscribe to the service.
Now that you've created an NFS share, to use it you have to mount it on your Lin
1. You should see **Connect to this NFS share from Linux** along with sample commands to use NFS on your Linux distribution and a provided mounting script.
+ > [!IMPORTANT]
+ > The provided mounting script will mount the NFS share only until the Linux machine is rebooted. To automatically mount the share every time the machine reboots, use a [static mount with /etc/fstab](storage-how-to-use-files-linux.md#static-mount-with-etcfstab).
+ :::image type="content" source="media/storage-files-quick-create-use-linux/mount-nfs-share.png" alt-text="Screenshot showing how to connect to an N F S file share from Linux using a provided mounting script." lightbox="media/storage-files-quick-create-use-linux/mount-nfs-share.png" border="true"::: 1. Select your Linux distribution (Ubuntu).
storage Monitor Queue Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/monitor-queue-storage.md
Previously updated : 05/06/2022 Last updated : 05/23/2022
Get started with any of these guides.
| [Azure Monitor Logs overview](../../azure-monitor/logs/data-platform-logs.md)| The basics of logs and how to collect and analyze them | | [Transition to metrics in Azure Monitor](../common/storage-metrics-migration.md) | Move from Storage Analytics metrics to metrics in Azure Monitor. | | [Azure Queue Storage monitoring data reference](monitor-queue-storage-reference.md) | A reference of the logs and metrics created by Azure Queue Storage |
+| [Troubleshoot performance issues](../common/troubleshoot-storage-performance.md?toc=%2fazure%2fstorage%2fqueues%2ftoc.json)| Common performance issues and guidance about how to troubleshoot them. |
+| [Troubleshoot availability issues](../common/troubleshoot-storage-availability.md?toc=%2fazure%2fstorage%2fqueues%2ftoc.json)| Common availability issues and guidance about how to troubleshoot them.|
+| [Troubleshoot client application errors](../common/troubleshoot-storage-client-application-errors.md?toc=%2fazure%2fstorage%2fqueues%2ftoc.json)| Common issues with connecting clients and how to troubleshoot them.|
storage Monitor Table Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/monitor-table-storage.md
Previously updated : 05/06/2022 Last updated : 05/23/2022 ms.devlang: csharp
No. Azure Compute supports the metrics on disks. For more information, see [Per
| [Azure Monitor Logs overview](../../azure-monitor/logs/data-platform-logs.md)| The basics of logs and how to collect and analyze them | | [Transition to metrics in Azure Monitor](../common/storage-metrics-migration.md) | Move from Storage Analytics metrics to metrics in Azure Monitor. | | [Azure Table storage monitoring data reference](monitor-table-storage-reference.md)| A reference of the logs and metrics created by Azure Table Storage |
+| [Troubleshoot performance issues](../common/troubleshoot-storage-performance.md?toc=%2fazure%2fstorage%2ftables%2ftoc.json)| Common performance issues and guidance about how to troubleshoot them. |
+| [Troubleshoot availability issues](../common/troubleshoot-storage-availability.md?toc=%2fazure%2fstorage%2ftables%2ftoc.json)| Common availability issues and guidance about how to troubleshoot them.|
+| [Troubleshoot client application errors](../common/troubleshoot-storage-client-application-errors.md?toc=%2fazure%2fstorage%2ftables%2ftoc.json)| Common issues with connecting clients and how to troubleshoot them.|
storsimple Storsimple 8000 Support Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-support-options.md
description: Describes support options for StorSimple 8000 series enterprise sto
Previously updated : 04/15/2022 Last updated : 05/18/2022
## StorSimple support
-Microsoft offers flexible support options for StorSimple enterprise storage customers. We are deeply committed to delivering a high-quality support experience that allows you to maximize the impact of your investment in the StorSimple solution and Microsoft Azure. As a StorSimple customer, you receive:
+Microsoft offers flexible support options for StorSimple enterprise storage customers. We're deeply committed to delivering a high-quality support experience that allows you to maximize the impact of your investment in the StorSimple solution and Microsoft Azure. As a StorSimple customer, you receive:
* 24x7 ability to submit support tickets through the Azure portal. * Help desk access for general support queries and deep technical assistance.
Microsoft offers flexible support options for StorSimple enterprise storage cust
<sup>4</sup> Next business day parts delivery is performed on a best-effort basis and may be subject to delays.
-<sup>5</sup> Customers using only StorSimple Virtual Arrays must purchase either StorSimple Standard or Premium support plans. Please contact your Microsoft account/sales team to purchase StorSimple support.
+<sup>5</sup> Customers using only StorSimple Virtual Arrays must purchase either StorSimple Standard or Premium support plans. Contact your Microsoft account/sales team to purchase StorSimple support.
<sup>6</sup> To expedite hardware warranty claims, replacement parts are shipped to the customer before receiving defective parts. Customer is responsible for timely return shipment of defective parts.
-If your support contract has expired, be aware, depending on how long the support contract has been expired, it may take up to three weeks after the renewal processing has completed before a part is delivered as the local stocking location for your contract will not be stocked with replacement parts for your device until after your contract is processed.
+If your support contract has expired, be aware that, depending on how long it has been expired, it may take up to three weeks after renewal processing has completed before a part is delivered. The local stocking location for your contract won't be stocked with replacement parts for your device until after your contract is processed.
## Local language support In addition to English, local language support is provided in the following languages during business hours: Spanish, Portuguese, Japanese, Korean, Taiwanese, and Traditional Chinese. ## Support scope
-Support for billing and subscription management-related issues is available at all support levels. In order to receive StorSimple support, customer must be actively enrolled for either StorSimple Standard or Premium support plans. StorSimple support team will be responsible for resolving all issues that impact the StorSimple solution. In order to receive support for Azure-related issues that are not directly related to StorSimple, customer needs to be enrolled in an appropriate Azure support plan. Refer [here](https://azure.microsoft.com/support/plans/) for details. The support team refers non-StorSimple support cases to the Azure team for followup based on customer entitlements for Azure support.
+Support for billing and subscription management-related issues is available at all support levels. To receive StorSimple support, customers must be actively enrolled in either StorSimple Standard or Premium support plans. The StorSimple support team is responsible for resolving all issues that impact the StorSimple solution. To receive support for Azure-related issues that aren't directly related to StorSimple, customers need to be enrolled in an appropriate Azure support plan. See [Azure support plans](https://azure.microsoft.com/support/plans/) for details. The support team refers non-StorSimple support cases to the Azure team for follow-up based on customer entitlements for Azure support.
| **SEVERITY** |**CUSTOMER'S SITUATION** | EXPECTED MICROSOFT RESPONSE <sup>2 | EXPECTED CUSTOMER RESPONSE | |-|--|--|-|
-| A | Critical business impact: <ul> <br> <li> Customers business has significant loss or degradation of services. <sup>1</sup> <br> <li> Needs immediate attention. | Initial response: <sup>1</sup> <ul><br> <li>1 hour or less for Premium. <br> <li> 2 hours or less for Standard. <br> <li> Continuous effort all day, every day. | <ul><li> Allocation of appropriate resources to sustain continuous effort all day, every day. <br> <li> Accurate contact information for case owner. |
-| B | Moderate business impact: <ul><br> <li> Customer's business has moderate loss or degradation of services, but work can reasonably continue in an impaired manner. | Initial response: <sup>1</sup><ul><br> <li> 2 hours or less for Premium. <br> <li> 4 hours or less for Standard. | <ul><li> Allocation of appropriate resources to sustain continuous effort during business hours unless customer requests to opt out of 24x7. <br> <li> Accurate contact information for case owner. |
-| C | Minimum business impact: <ul><br> <li> Customer's business is substantially functioning with minor or no impediments to services. | Initial response: <sup>1</sup><ul> <br> <li> 4 hours or less for Premium. <br> <li> 8 hours or less for Standard. | <ul><li>Accurate contact information for case owner |
+| A | Critical business impact: <ul> <br> <li> Customer's business has significant loss or degradation of services. <sup>1</sup> <br> <li> Needs immediate attention. | Initial response: <sup>1</sup> <ul><br> <li>One hour or less for Premium. <br> <li> Two hours or less for Standard. <br> <li> Continuous effort all day, every day. | <ul><li> Allocation of appropriate resources to sustain continuous effort all day, every day. <br> <li> Accurate contact information for case owner. |
+| B | Moderate business impact: <ul><br> <li> Customer's business has moderate loss or degradation of services, but work can reasonably continue in an impaired manner. | Initial response: <sup>1</sup><ul><br> <li> Two hours or less for Premium. <br> <li> Four hours or less for Standard. | <ul><li> Allocation of appropriate resources to sustain continuous effort during business hours unless customer requests to opt out of 24x7. <br> <li> Accurate contact information for case owner. |
+| C | Minimum business impact: <ul><br> <li> Customer's business is substantially functioning with minor or no impediments to services. | Initial response: <sup>1</sup><ul> <br> <li> Four hours or less for Premium. <br> <li> Eight hours or less for Standard. | <ul><li>Accurate contact information for case owner |
-<sup>1 </sup> Microsoft may downgrade the severity level of a Severity A case if the customer is not able to provide adequate resources or responses to enable Microsoft to continue with problem resolution efforts.
+<sup>1 </sup> Microsoft may downgrade the severity level of a Severity A case if the customer isn't able to provide adequate resources or responses to enable Microsoft to continue with problem resolution efforts.
<sup>2</sup> Expected response times are based on 24x7 support in English for Severity A and local business hours for Severity B and C, and local business hours support in the remaining local languages: Japanese, Taiwanese, Traditional Chinese, and Korean. ## Cancellation policy
-In order to receive StorSimple support, customer must purchase Standard or Premium support plans for the duration of the subscription term. Cancellation does not result in a prorated refund. StorSimple support plans are reduction eligible at EA anniversary. However, Microsoft is unable to provide support to StorSimple customers without valid support contracts.
+In order to receive StorSimple support, customer must purchase Standard or Premium support plans for the duration of the subscription term. Cancellation doesn't result in a prorated refund. StorSimple support plans are reduction eligible at EA anniversary. However, Microsoft is unable to provide support to StorSimple customers without valid support contracts.
## Renewal policy
In order to receive StorSimple support, customer must purchase Standard or Premi
Upon the purchase of StorSimple 8000 Series Storage Arrays, support is provided through the next EA anniversary. Customer must renew StorSimple support at EA anniversary. StorSimple support plan orders are coterminous. Customers are notified via e-mail about impending support expiry for StorSimple 8000 Series Storage Arrays and are expected to follow up with the Microsoft account/sales teams or their Microsoft Licensing Solution Partner (LSP) to renew StorSimple support.
-Standard Azure support does not cover StorSimple hardware support. If you are covered under Premier or Unified Microsoft support, you must still purchase Standard StorSimple support renewal. StorSimple support renewal can be aligned with EA anniversary date by acquiring the required support SKU with the license quantity equal to the number of the appliances and the unit quantity ordered being the remaining number of months of support needed until the EA anniversary date if all the units have the same support contract expiration date. If the units have different support contract expiration dates, each appliance must be covered with one support SKU with the unit quantity ordered being the remaining number of months of support needed until the EA anniversary date per each appliance.
-
-> [!NOTE]
-> StorSimple 8000 series reaches its end-of-life in December 2022. Purchase hardware support for only the months you need, not the full year. Any support purchased after December 2022 will not be used and is not eligible for refund.
+Standard Azure support doesn't cover StorSimple hardware support. If you're covered under Premier or Unified Microsoft support, you must still purchase a Standard StorSimple support renewal. StorSimple support renewal can be aligned with the EA anniversary date by acquiring the required support SKU with a license quantity equal to the number of appliances and a unit quantity equal to the remaining number of months of support needed until the EA anniversary date, provided all the units have the same support contract expiration date. If the units have different support contract expiration dates, each appliance must be covered with one support SKU, with the unit quantity being the remaining number of months of support needed until the EA anniversary date for that appliance.
StorSimple 8000 Series Storage Arrays support is provided based on how the StorSimple array was purchased.
StorSimple 8000 Series Storage Arrays support is provided based on how the StorS
| **Support SKUs** | **Subscription Model** | **ASAP + Model** | |:|:--|:| | **Standard support** <br></br> [CWZ-00023] <br></br>*AzureStorSimple ShrdSvr ALNG SubsVL MVL StdSpprt*<br><br><br><br><br> |Included.<br><br><br><br><br><br><br><br><br><br><br><br> |<ul> <li>Provided to customers with initial purchase through the next EA anniversary.<li> Customer must purchase support in subsequent years as no replacement hardware parts can be dispatched without an active StorSimple support contract.<br> <br><br><br><br><br><br> |
-| **Premium*** **support** <br></br> [CWZ-00024] <br></br> *AzureStorSimple ShrdSvr ALNG SubsVL MVL PremSpprt* <br><br><br><br><br><br> | Since standard support is automatically included with Subscription, refer to the Standard to Premium Upgrade. <br><br><br><br><br><br><br><br><br> <br><br><br> | <ul> <li>Customers covered by Microsoft Premier support contracts should refer to the standard to premium upgrade. <li>Customers who are not covered by a Microsoft Premier contract and wish to have Premium StorSimple Support should purchase this SKU at renewal time.<br><br><br><br><br><br><br><br> |
+| **Premium*** **support** <br></br> [CWZ-00024] <br></br> *AzureStorSimple ShrdSvr ALNG SubsVL MVL PremSpprt* <br><br><br><br><br><br> | Since standard support is automatically included with Subscription, refer to the Standard to Premium Upgrade. <br><br><br><br><br><br><br><br><br> <br><br><br> | <ul> <li>Customers covered by Microsoft Premier support contracts should refer to the standard to premium upgrade. <li>Customers who aren't covered by a Microsoft Premier contract and wish to have Premium StorSimple Support should purchase this SKU at renewal time.<br><br><br><br><br><br><br><br> |
| **Standard to Premium**\*\* upgrade <br></br> [CWZ- 00025] <br></br> *AzureStorSimple ShrdSvr ALNG SubsVL MVL Spprt- StepuptoPrem* <br><br><br><br><br><br><br><br><br>| Customers covered by Microsoft Premier Support contract at the time of StorSimple purchase are automatically upgraded to Premium StorSimple support free of charge for the duration of the time they remain covered by Premier support. If customers acquire Premier Support later, a free StorSimple support upgrade can be obtained by requesting it via SSSupOps@microsoft.com. <br></br>Non-Premier customers can purchase the StorSimple Standard to Premium upgrade SKU [CWZ-00025] anytime during the Enterprise Agreement(EA) contract.<br><br><br><br><br><br><br><br> | <ul><li> Customers covered by Microsoft Premier Support contract can purchase the Standard Support SKU [CWZ-00023] and the StorSimple Standard Support contract will be automatically upgraded, at no additional charges, for the duration of the time they remain covered by Premier support. <li> If customers acquire Premier Support later, a free StorSimple support upgrade can be obtained by requesting it via SSSupOps@microsoft.com. <li> Non-Premier customers covered by StorSimple Standard support can purchase the Premium upgrade SKU [CWZ-00025] anytime during the Enterprise Agreement(EA) contract. Next year, when renewing Support contract, customers should purchase directly the Premium Support SKU [CWZ-00024] and not just the upgrade SKU [CWZ-00025].<br><br> |
-\* Premium coverage is not available in all locations. Contact Microsoft at SSSupOps\@microsoft.com for geographical coverage before purchasing StorSimple Premium Support.
+\* Premium coverage isn't available in all locations. Contact Microsoft at SSSupOps\@microsoft.com for geographical coverage before purchasing StorSimple Premium Support.
\** The StorSimple appliance must be deployed in a region where the customer is covered by Premier support in order to be eligible for a free upgrade to premium StorSimple support.
storsimple Storsimple Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-overview.md
NA Previously updated : 04/15/2022 Last updated : 05/18/2022 # StorSimple 8000 series: a hybrid cloud storage solution - ## Overview
-Welcome to Microsoft Azure StorSimple, an integrated storage solution that manages storage tasks between on-premises devices and Microsoft Azure cloud storage. StorSimple is an efficient, cost-effective, and easy to manage storage area network (SAN) solution that eliminates many of the issues and expenses that are associated with enterprise storage and data protection. It uses the proprietary StorSimple 8000 series device, integrates with cloud services, and provides a set of management tools for a seamless view of all enterprise storage, including cloud storage. (The StorSimple deployment information published on the Microsoft Azure website applies to StorSimple 8000 series devices only. If you are using a StorSimple 5000/7000 series device, go to [StorSimple Help](http://onlinehelp.storsimple.com/).)
+Welcome to Microsoft Azure StorSimple, an integrated storage solution that manages storage tasks between on-premises devices and Microsoft Azure cloud storage. StorSimple is an efficient, cost-effective, and easy to manage storage area network (SAN) solution that eliminates many of the issues and expenses that are associated with enterprise storage and data protection. It uses the proprietary StorSimple 8000 series device, integrates with cloud services, and provides a set of management tools for a seamless view of all enterprise storage, including cloud storage. (The StorSimple deployment information published on the Microsoft Azure website applies to StorSimple 8000 series devices only. If you're using a StorSimple 5000/7000 series device, go to [StorSimple Help](http://onlinehelp.storsimple.com/).)
StorSimple uses [storage tiering](#automatic-storage-tiering) to manage stored data across various storage media. The current working set is stored on-premises on solid state drives (SSDs). Data that is used less frequently is stored on hard disk drives (HDDs), and archival data is pushed to the cloud. Moreover, StorSimple uses deduplication and compression to reduce the amount of storage that the data consumes. For more information, go to [Deduplication and compression](#deduplication-and-compression). For definitions of other key terms and concepts that are used in the StorSimple 8000 series documentation, go to [StorSimple terminology](#storsimple-terminology) at the end of this article.
The following table describes some of the key benefits that Microsoft Azure Stor
| Feature | Benefit | | | |
-| Transparent integration |Uses the iSCSI protocol to invisibly link data storage facilities. Data stored in the cloud, at the datacenter, or on remote servers appears to be stored at a single location. |
+| Transparent integration |Uses the iSCSI protocol to invisibly link data storage facilities. Data stored in the cloud, at the datacenter, or on remote servers appears to be stored at a single location. |
| Reduced storage costs |Allocates sufficient local or cloud storage to meet current demands and extends cloud storage only when necessary. It further reduces storage requirements and expense by eliminating redundant versions of the same data (deduplication) and by using compression. | | Simplified storage management |Provides system administration tools to configure and manage data stored on-premises, on a remote server, and in the cloud. Additionally, you can manage backup and restore functions from a Microsoft Management Console (MMC) snap-in.|
-| Improved disaster recovery and compliance |Does not require extended recovery time. Instead, it restores data as it is needed so that normal operations can continue with minimal disruption. Additionally, you can configure policies to specify backup schedules and data retention. |
+| Improved disaster recovery and compliance |Doesn't require extended recovery time. Instead, it restores data as it is needed so that normal operations can continue with minimal disruption. Additionally, you can configure policies to specify backup schedules and data retention. |
| Data mobility |Data uploaded to Microsoft Azure cloud services can be accessed from other sites for recovery and migration purposes. Additionally, you can use StorSimple to configure StorSimple Cloud Appliances on virtual machines (VMs) running in Microsoft Azure. The VMs can then use virtual devices to access stored data for test or recovery purposes. | | Business continuity |Allows StorSimple 5000-7000 series users to migrate their data to a StorSimple 8000 series device. | | Availability in the Azure Government Portal |StorSimple is available in the Azure Government Portal. For more information, see [Deploy your on-premises StorSimple device in the Government Portal](storsimple-8000-deployment-walkthrough-gov-u2.md). | | Data protection and availability |The StorSimple 8000 series supports Zone Redundant Storage (ZRS), in addition to Locally Redundant Storage (LRS) and Geo-redundant storage (GRS). Refer to [this article on Azure Storage redundancy options](../storage/common/storage-redundancy.md) for ZRS details. |
-| Support for critical applications |StorSimple lets you identify appropriate volumes as locally pinned to ensure that data that is required by critical applications is not tiered to the cloud. Locally pinned volumes are not subject to cloud latencies or connectivity issues. For more information about locally pinned volumes, see [Use the StorSimple Device Manager service to manage volumes](storsimple-8000-manage-volumes-u2.md). |
+| Support for critical applications |StorSimple lets you identify appropriate volumes as locally pinned to ensure that data that is required by critical applications isn't tiered to the cloud. Locally pinned volumes aren't subject to cloud latencies or connectivity issues. For more information about locally pinned volumes, see [Use the StorSimple Device Manager service to manage volumes](storsimple-8000-manage-volumes-u2.md). |
| Low latency and high performance |You can create cloud appliances that take advantage of the high performance, low latency features of Azure premium storage. For more information about StorSimple premium cloud appliances, see [Deploy and manage a StorSimple Cloud Appliance in Azure](storsimple-8000-cloud-appliance-u2.md). |
Only one controller is active at any point in time. If the active controller fai
For more information, go to [StorSimple hardware components and status](storsimple-8000-monitor-hardware-status.md). ## StorSimple Cloud Appliance
-You can use StorSimple to create a cloud appliance that replicates the architecture and capabilities of the physical hybrid storage device. The StorSimple Cloud Appliance (also known as the StorSimple Virtual Appliance) runs on a single node in an Azure virtual machine. (A cloud appliance can only be created on an Azure virtual machine. You cannot create one on a StorSimple device or an on-premises server.)
+You can use StorSimple to create a cloud appliance that replicates the architecture and capabilities of the physical hybrid storage device. The StorSimple Cloud Appliance (also known as the StorSimple Virtual Appliance) runs on a single node in an Azure virtual machine. (A cloud appliance can only be created on an Azure virtual machine. You can't create one on a StorSimple device or an on-premises server.)
The cloud appliance has the following features:
You can use the StorSimple Device Manager service to perform all administration
For more information, go to [Use the StorSimple Device Manager service to administer your StorSimple device](storsimple-8000-manager-service-administration.md). ## Windows PowerShell for StorSimple
-Windows PowerShell for StorSimple provides a command-line interface that you can use to create and manage the Microsoft Azure StorSimple service and set up and monitor StorSimple devices. It is a Windows PowerShellΓÇôbased, command-line interface that includes dedicated cmdlets for managing your StorSimple device. Windows PowerShell for StorSimple has features that allow you to:
+Windows PowerShell for StorSimple provides a command-line interface that you can use to create and manage the Microsoft Azure StorSimple service and set up and monitor StorSimple devices. It's a Windows PowerShell–based command-line interface that includes dedicated cmdlets for managing your StorSimple device. Windows PowerShell for StorSimple has features that allow you to:
* Register a device. * Configure the network interface on a device.
StorSimple Snapshot Manager is a Microsoft Management Console (MMC) snap-in that
Backups are captured as snapshots, which record only the changes since the last snapshot was taken and require far less storage space than full backups. You can create backup schedules or take immediate backups as needed. Additionally, you can use StorSimple Snapshot Manager to establish retention policies that control how many snapshots will be saved. If you later need to restore data from a backup, StorSimple Snapshot Manager lets you select from the catalog of local or cloud snapshots.
-If a disaster occurs or if you need to restore data for another reason, StorSimple Snapshot Manager restores it incrementally as it is needed. Data restoration does not require that you shut down the entire system while you restore a file, replace equipment, or move operations to another site.
+If a disaster occurs or if you need to restore data for another reason, StorSimple Snapshot Manager restores it incrementally as it's needed. Data restoration doesn't require that you shut down the entire system while you restore a file, replace equipment, or move operations to another site.
For more information, go to [What is StorSimple Snapshot Manager?](storsimple-what-is-snapshot-manager.md)
To enable quick access, StorSimple stores very active data (hot data) on SSDs in
> In Update 2 or later, you can specify a volume as locally pinned, in which case the data remains on the local device and is not tiered to the cloud.
-StorSimple adjusts and rearranges data and storage assignments as usage patterns change. For example, some information might become less active over time. As it becomes progressively less active, it is migrated from SSDs to HDDs and then to the cloud. If that same data becomes active again, it is migrated back to the storage device.
+StorSimple adjusts and rearranges data and storage assignments as usage patterns change. For example, some information might become less active over time. As it becomes progressively less active, it's migrated from SSDs to HDDs and then to the cloud. If that same data becomes active again, it's migrated back to the storage device.
The storage tiering process occurs as follows:
The storage tiering process occurs as follows:
StorSimple deduplicates customer data across all the snapshots and the primary data (data written by hosts). While deduplication is great for storage efficiency, it makes the question of “what is in the cloud” complicated. The tiered primary data and the snapshot data overlap with each other. A single chunk of data in the cloud could be used as tiered primary data and also be referenced by several snapshots. Every cloud snapshot ensures that a copy of all the point-in-time data is locked into the cloud until that snapshot is deleted.
-Data is only deleted from the cloud when there are no references to that data. For example, if we took a cloud snapshot of all the data that is in the StorSimple device and then deleted some primary data, we would see the _primary data_ drop immediately. The _cloud data_, which includes the tiered data and the backups, stays the same because a snapshot is still referencing the cloud data. After the cloud snapshot is deleted (and any other snapshot that referenced the same data), cloud consumption drops. Before we remove cloud data, we check that no snapshots still reference that data. This process is called _garbage collection_ and is a background service running on the device. Removal of cloud data is not immediate as the garbage collection service checks for other references to that data before the deletion. The speed of garbage collection depends on the total number of snapshots and the total data. Typically, the cloud data is cleaned up in less than a week.
+Data is only deleted from the cloud when there are no references to that data. For example, if we took a cloud snapshot of all the data that is in the StorSimple device and then deleted some primary data, we would see the _primary data_ drop immediately. The _cloud data_, which includes the tiered data and the backups, stays the same because a snapshot is still referencing the cloud data. After the cloud snapshot is deleted (and any other snapshot that referenced the same data), cloud consumption drops. Before we remove cloud data, we check that no snapshots still reference that data. This process is called _garbage collection_ and is a background service running on the device. Removal of cloud data isn't immediate as the garbage collection service checks for other references to that data before the deletion. The speed of garbage collection depends on the total number of snapshots and the total data. Typically, the cloud data is cleaned up in less than a week.
### Thin provisioning
A summary of the supported StorSimple workloads is tabulated below.
*Yes&#42; - Solution guidelines and restrictions should be applied.*
-The following workloads are not supported by StorSimple 8000 series devices. If deployed on StorSimple, these workloads will result in an unsupported configuration.
+The following workloads aren't supported by StorSimple 8000 series devices. If deployed on StorSimple, these workloads will result in an unsupported configuration.
* Medical imaging * Exchange
Before deploying your Microsoft Azure StorSimple solution, we recommend that you
| | | | access control record (ACR) |A record associated with a volume on your Microsoft Azure StorSimple device that determines which hosts can connect to it. The determination is based on the iSCSI Qualified Name (IQN) of the hosts (contained in the ACR) that are connecting to your StorSimple device. | | AES-256 |A 256-bit Advanced Encryption Standard (AES) algorithm for encrypting data as it moves to and from the cloud. |
-| allocation unit size (AUS) |The smallest amount of disk space that can be allocated to hold a file in your Windows file systems. If a file size is not an even multiple of the cluster size, extra space must be used to hold the file (up to the next multiple of the cluster size) resulting in lost space and fragmentation of the hard disk. <br>The recommended AUS for Azure StorSimple volumes is 64 KB because it works well with the deduplication algorithms. |
+| allocation unit size (AUS) |The smallest amount of disk space that can be allocated to hold a file in your Windows file systems. If a file size isn't an even multiple of the cluster size, extra space must be used to hold the file (up to the next multiple of the cluster size) resulting in lost space and fragmentation of the hard disk. <br>The recommended AUS for Azure StorSimple volumes is 64 KB because it works well with the deduplication algorithms. |
| automated storage tiering |Automatically moving less active data from SSDs to HDDs and then to a tier in the cloud, and then enabling management of all storage from a central user interface. | | backup catalog |A collection of backups, usually related by the application type that was used. This collection is displayed in the Backup Catalog blade of the StorSimple Device Manager service UI. | | backup catalog file |A file containing a list of available snapshots currently stored in the backup database of StorSimple Snapshot Manager. |
Before deploying your Microsoft Azure StorSimple solution, we recommend that you
| iSCSI initiator |A software component that enables a host computer running Windows to connect to an external iSCSI-based storage network. | | iSCSI Qualified Name (IQN) |A unique name that identifies an iSCSI target or initiator. | | iSCSI target |A software component that provides centralized iSCSI disk subsystems in storage area networks. |
-| live archiving |A storage approach in which archival data is accessible all the time (it is not stored off-site on tape, for example). Microsoft Azure StorSimple uses live archiving. |
+| live archiving |A storage approach in which archival data is accessible all the time (it isn't stored off-site on tape, for example). Microsoft Azure StorSimple uses live archiving. |
| locally pinned volume |a volume that resides on the device and is never tiered to the cloud. | | local snapshot |A point-in-time copy of volume data that is stored on the Microsoft Azure StorSimple device. | | Microsoft Azure StorSimple |A powerful solution consisting of a datacenter storage appliance and software that enables IT organizations to leverage cloud storage as though it were datacenter storage. StorSimple simplifies data protection and data management while reducing costs. The solution consolidates primary storage, archive, backup, and disaster recovery (DR) through seamless integration with the cloud. By combining SAN storage and cloud data management on an enterprise-class platform, StorSimple devices enable speed, simplicity, and reliability for all storage-related needs. |
Before deploying your Microsoft Azure StorSimple solution, we recommend that you
| StorSimple Adapter for SharePoint |A Microsoft Azure StorSimple component that transparently extends StorSimple storage and data protection to SharePoint Server farms. | | StorSimple Device Manager service |An extension of the Azure portal that allows you to manage your Azure StorSimple on-premises and virtual devices. | | StorSimple Snapshot Manager |A Microsoft Management Console (MMC) snap-in for managing backup and restore operations in Microsoft Azure StorSimple. |
-| take backup |A feature that allows the user to take an interactive backup of a volume. It is an alternate way of taking a manual backup of a volume as opposed to taking an automated backup via a defined policy. |
+| take backup |A feature that allows the user to take an interactive backup of a volume. It's an alternate way of taking a manual backup of a volume as opposed to taking an automated backup via a defined policy. |
| thin provisioning |A method of optimizing the efficiency with which the available storage space is used in storage systems. In thin provisioning, the storage is allocated among multiple users based on the minimum space required by each user at any given time. See also *fat provisioning*. | | tiering |Arranging data in logical groupings based on current usage, age, and relationship to other data. StorSimple automatically arranges data in tiers. |
-| volume |Logical storage areas presented in the form of drives. StorSimple volumes correspond to the volumes mounted by the host, including those volumes discovered through the use of iSCSI and a StorSimple device. |
+| volume |Logical storage areas presented in the form of drives. StorSimple volumes correspond to the volumes mounted by the host, including those volumes discovered by using iSCSI and a StorSimple device. |
| volume container |A grouping of volumes and the settings that apply to them. All volumes in your StorSimple device are grouped into volume containers. Volume container settings include storage accounts, encryption settings for data sent to cloud with associated encryption keys, and bandwidth consumed for operations involving the cloud. | | volume group |In StorSimple Snapshot Manager, a volume group is a collection of volumes configured to facilitate backup processing. | | Volume Shadow Copy Service (VSS) |A Windows Server operating system service that facilitates application consistency by communicating with VSS-aware applications to coordinate the creation of incremental snapshots. VSS ensures that the applications are temporarily inactive when snapshots are taken. |
stream-analytics Capture Event Hub Data Parquet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/capture-event-hub-data-parquet.md
+
+ Title: Capture data from Azure Data Lake Storage Gen2 in Parquet format
+description: Learn how to use the no code editor to automatically capture streaming data in Event Hubs in an Azure Data Lake Storage Gen2 account in Parquet format.
+++++ Last updated : 05/24/2022+
+# Capture data from Event Hubs in Parquet format
+
+This article explains how to use the no code editor to automatically capture streaming data in Event Hubs in an Azure Data Lake Storage Gen2 account in Parquet format. You have the flexibility of specifying a time or size interval.
+
+## Prerequisites
+
+- Your Azure Event Hubs and Azure Data Lake Storage Gen2 resources must be publicly accessible and can't be behind a firewall or secured in an Azure Virtual Network.
+- The data in your Event Hubs must be serialized in either JSON, CSV, or Avro format.
+
+## Configure a job to capture data
+
+Use the following steps to configure a Stream Analytics job to capture data in Azure Data Lake Storage Gen2.
+
+1. In the Azure portal, navigate to your event hub.
+1. Select **Features** > **Process Data**, and select **Start** on the **Capture data to ADLS Gen2 in Parquet format** card.
+ :::image type="content" source="./media/capture-event-hub-data-parquet/process-event-hub-data-cards.png" alt-text="Screenshot showing the Process Event Hubs data start cards." lightbox="./media/capture-event-hub-data-parquet/process-event-hub-data-cards.png" :::
+1. Enter a **name** to identify your Stream Analytics job. Select **Create**.
+ :::image type="content" source="./media/capture-event-hub-data-parquet/new-stream-analytics-job-name.png" alt-text="Screenshot showing the New Stream Analytics job window where you enter the job name." lightbox="./media/capture-event-hub-data-parquet/new-stream-analytics-job-name.png" :::
+1. Specify the **Serialization** type of your data in the Event Hubs and the **Authentication method** that the job will use to connect to Event Hubs. Then select **Connect**.
+ :::image type="content" source="./media/capture-event-hub-data-parquet/event-hub-configuration.png" alt-text="Screenshot showing the Event Hubs connection configuration." lightbox="./media/capture-event-hub-data-parquet/event-hub-configuration.png" :::
+1. When the connection is established successfully, you'll see:
+ - Fields that are present in the input data. You can choose **Add field** or you can select the three dot symbol next to a field to optionally remove, rename, or change its data type.
+ - A live sample of incoming data in the **Data preview** table under the diagram view. It refreshes periodically. You can select **Pause streaming preview** to view a static view of the sample input.
+ :::image type="content" source="./media/capture-event-hub-data-parquet/edit-fields.png" alt-text="Screenshot showing sample data under Data Preview." lightbox="./media/capture-event-hub-data-parquet/edit-fields.png" :::
+1. Select the **Azure Data Lake Storage Gen2** tile to edit the configuration.
+1. On the **Azure Data Lake Storage Gen2** configuration page, follow these steps:
+ 1. Select the subscription, storage account name and container from the drop-down menu.
+ 1. Once the subscription is selected, the authentication method and storage account key should be automatically filled in.
+ 1. For streaming blobs, the directory path pattern is expected to be a dynamic value. It's required for the date to be a part of the file path for the blob – referenced as `{date}`. To learn about custom path patterns, see [Azure Stream Analytics custom blob output partitioning](stream-analytics-custom-path-patterns-blob-storage-output.md).
+ :::image type="content" source="./media/capture-event-hub-data-parquet/blob-configuration.png" alt-text="First screenshot showing the Blob window where you edit a blob's connection configuration." lightbox="./media/capture-event-hub-data-parquet/blob-configuration.png" :::
+ 1. Select **Connect**
+1. When the connection is established, you will see fields that are present in the output data.
+1. Select **Save** on the command bar to save your configuration.
+1. Select **Start** on the command bar to start the streaming flow to capture data. Then in the Start Stream Analytics job window:
+ 1. Choose the output start time.
+ 1. Select the number of Streaming Units (SU) that the job runs with. SU represents the computing resources that are allocated to execute a Stream Analytics job. For more information, see [Streaming Units in Azure Stream Analytics](stream-analytics-streaming-unit-consumption.md).
+ 1. In the **Choose Output data error handling** list, select the behavior you want when the output of the job fails due to data error. Select **Retry** to have the job retry until it writes successfully or select another option.
+ :::image type="content" source="./media/capture-event-hub-data-parquet/start-job.png" alt-text="Screenshot showing the Start Stream Analytics job window where you set the output start time, streaming units, and error handling." lightbox="./media/capture-event-hub-data-parquet/start-job.png" :::
+
+The new job is shown on the **Stream Analytics jobs** tab. Select **Open metrics** to monitor it.
++
+## Next steps
+
+Now you know how to use the Stream Analytics no code editor to create a job that captures Event Hubs data to Azure Data Lake Storage Gen2 in Parquet format. Next, you can learn more about Azure Stream Analytics and how to monitor the job that you created.
+
+* [Introduction to Azure Stream Analytics](stream-analytics-introduction.md)
+* [Monitor Stream Analytics jobs](stream-analytics-monitoring.md)
stream-analytics Cluster Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/cluster-overview.md
- Previously updated : 06/22/2021+ Last updated : 05/10/2022 # Overview of Azure Stream Analytics Cluster
-Azure Stream Analytics Cluster offers a single-tenant deployment for complex and demanding streaming scenarios. At full scale, Stream Analytics clusters can process more than 200 MB/second in real time. Stream Analytics jobs running on dedicated clusters can leverage all the features in the Standard offering and includes support for private link connectivity to your inputs and outputs.
+Azure Stream Analytics Cluster offers a single-tenant deployment for complex and demanding streaming scenarios. At full scale, Stream Analytics clusters can process more than 400 MB/second in real time. Stream Analytics jobs running on dedicated clusters can leverage all the features in the Standard offering and includes support for private link connectivity to your inputs and outputs.
-Stream Analytics clusters are billed by Streaming Units (SUs) which represent the amount of CPU and memory resources allocated to your cluster. A Streaming Unit is the same across Standard and Dedicated offerings. You can purchase 36, 72, 108, 144, 180 or 216 SUs for each cluster. A Stream Analytics cluster can serve as the streaming platform for your organization and can be shared by different teams working on various use cases.
+Stream Analytics clusters are billed by Streaming Units (SUs), which represent the amount of CPU and memory resources allocated to your cluster. A Streaming Unit is the same across Standard and Dedicated offerings. You can purchase from 36 to 396 SUs for each cluster, in increments of 36 (36, 72, 108, and so on). A Stream Analytics cluster can serve as the streaming platform for your organization and can be shared by different teams working on various use cases.
## What are Stream Analytics clusters
Stream Analytics clusters are powered by the same engine that powers Stream Anal
* Single tenant hosting with no noise from other tenants. Your resources are truly "isolated" and perform better when there are bursts in traffic.
-* Scale your cluster between 36 to 216 SUs as your streaming usage increases over time.
+* Scale your cluster from 36 to 396 SUs as your streaming usage increases over time.
* VNet support that allows your Stream Analytics jobs to connect to other resources securely using private endpoints.
stream-analytics Create Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/create-cluster.md
- Previously updated : 04/09/2021+ Last updated : 05/10/2022 # Quickstart: Create a dedicated Azure Stream Analytics cluster using Azure portal
In this section, you create a Stream Analytics cluster resource.
|Resource Group|Resource group name|Select a resource group, or select **Create new**, then enter a unique name for the new resource group. | |Cluster Name|A unique name|Enter a name to identify your Stream Analytics cluster.| |Location|The region closest to your data sources and sinks|Select a geographic location to host your Stream Analytics cluster. Use the location that is closest to your data sources and sinks for low latency analytics.|
- |Streaming Unit Capacity|36 through 216 |Determine the size of the cluster by estimating how many Stream Analytics job you plan to run and the total SUs the job will require. You can start with 36 SUs and later scale up or down as required.|
+ |Streaming Unit Capacity|36 through 396 |Determine the size of the cluster by estimating how many Stream Analytics jobs you plan to run and the total SUs the jobs will require. You can start with 36 SUs and later scale up or down as required.|
![Create cluster](./media/create-cluster/create-cluster.png)
stream-analytics Data Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/data-protection.md
Previously updated : 07/07/2021 Last updated : 05/20/2022 # Data protection in Azure Stream Analytics
If you want to use customer-managed keys to encrypt your data, you can use your
This setting must be configured at the time of Stream Analytics job creation, and it can't be modified throughout the job's life cycle. Modification or deletion of storage that is being used by your Stream Analytics is not recommended. If you delete your storage account, you will permanently delete all private data assets, which will cause your job to fail.
-Updating or rotating keys to your storage account is not possible using the Stream Analytics portal. You can update the keys using the REST APIs.
+Updating or rotating keys for your storage account is not possible using the Stream Analytics portal. You can update the keys using the REST APIs. You can also connect to your job storage account using managed identity authentication with **Allow trusted services** enabled.
+
+If the storage account you want to use is in an Azure Virtual Network, you must use managed identity authentication mode with **Allow trusted services**. For more information, visit: [Connect Stream Analytics jobs to resources in an Azure Virtual Network (VNet)](connect-job-to-vnet.md).
### Configure storage account for private data
stream-analytics Event Hubs Parquet Capture Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/event-hubs-parquet-capture-tutorial.md
+
+ Title: Capture Event Hubs data to ADLSG2 in parquet format
+description: Use no code editor to capture Event Hubs data in parquet format
++++ Last updated : 05/23/2022+++
+# Capture Event Hubs data in parquet format and analyze with Azure Synapse Analytics
+This tutorial shows how you can use the Stream Analytics no code editor to capture Event Hubs data in Azure Data Lake Storage Gen2 in parquet format.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Deploy an event generator that sends data to your event hub
+> * Create a Stream Analytics job using the no code editor
+> * Review input data and schema
+> * Configure Azure Data Lake Storage Gen2 to which event hub data will be captured
+> * Run the Stream Analytics job
+> * Use Azure Synapse Analytics to query the parquet files
+
+## Prerequisites
+
+Before you start, make sure you've completed the following steps:
+
+* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/).
+* Deploy the TollApp event generator to Azure by using this link to [Deploy TollApp Azure Template](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-stream-analytics%2Fmaster%2FSamples%2FTollApp%2FVSProjects%2FTollAppDeployment%2Fazuredeploy.json). Set the 'interval' parameter to 1, and use a new resource group for this deployment.
+* Create an [Azure Synapse Analytics workspace](../synapse-analytics/get-started-create-workspace.md) with a Data Lake Storage Gen2 account.
+
+## Use no code editor to create a Stream Analytics job
+1. Locate the Resource Group in which the TollApp event generator was deployed.
+2. Select the Azure Event Hubs namespace. Then, under the Event Hubs section, select the **entrystream** instance.
+3. Go to **Process data** under the Features section and then click **Start** on the **Capture in parquet format** template.
+[ ![Screenshot of start capture experience from process data blade.](./media/stream-analytics-no-code/parquet-capture-start.png) ](./media/stream-analytics-no-code/parquet-capture-start.png#lightbox)
+4. Name your job **parquetcapture** and select **Create**.
+5. Configure your event hub input by specifying
+ * Consumer Group: Default
+ * Serialization type of your input data: JSON
+ * Authentication mode that the job will use to connect to your event hub: Connection String defaults
+ * Click **Connect**
+6. Within a few seconds, you'll see sample input data and the schema. You can choose to drop fields, rename fields, or change data types.
+[![Screenshot of event hub data and schema in no code editor.](./media/stream-analytics-no-code/event-hub-data-preview.png)](./media/stream-analytics-no-code/event-hub-data-preview.png#lightbox)
+7. Click the Azure Data Lake Storage Gen2 tile on your canvas and configure it by specifying
+ * Subscription where your Azure Data Lake Gen2 account is located
+ * Storage account name, which should be the same ADLS Gen2 account that you used with your Azure Synapse Analytics workspace in the Prerequisites section.
+ * Container inside which the Parquet files will be created.
+ * Path pattern set to *{date}/{time}*
+ * Date and time pattern as the default *yyyy-mm-dd* and *HH*.
+ * Click **Connect**
+8. Select **Save** in the top ribbon to save your job, and then select **Start**. Set the Streaming Unit count to 3, and then select **Start** to run your job.
+[![Screenshot of start job in no code editor.](./media/stream-analytics-no-code/no-code-start-job.png)](./media/stream-analytics-no-code/no-code-start-job.png#lightbox)
+9. You'll then see a list of all Stream Analytics jobs created using the no code editor. Within two minutes, your job will go to a **Running** state.
+[![Screenshot of job in running state after job creation.](./media/stream-analytics-no-code/no-code-job-running-state.png)](./media/stream-analytics-no-code/no-code-job-running-state.png#lightbox)
+
+## View output in your Azure Data Lake Storage Gen 2 account
+1. Locate the Azure Data Lake Storage Gen2 account that you used in the previous step.
+2. Select the container that you used in the previous step. You'll see Parquet files created based on the *{date}/{time}* path pattern that you configured earlier.
+[![Screenshot of parquet files in Azure Data Lake Storage Gen 2.](./media/stream-analytics-no-code/capture-parquet-files.png)](./media/stream-analytics-no-code/capture-parquet-files.png#lightbox)
+
+## Query the captured Event Hubs data in Parquet format with Azure Synapse Analytics
+### Query using Azure Synapse Spark
+1. Locate your Azure Synapse Analytics workspace and open Synapse Studio.
+2. [Create a serverless Apache Spark pool](../synapse-analytics/get-started-analyze-spark.md#create-a-serverless-apache-spark-pool) in your workspace if one doesn't already exist.
+3. In the Synapse Studio, go to the **Develop** hub and create a new **Notebook**.
+4. Create a new code cell and paste the following code in that cell. Replace *container* and *adlsname* with the name of the container and ADLS Gen2 account used in the previous step.
+ ```py
+ %%pyspark
+ df = spark.read.load('abfss://container@adlsname.dfs.core.windows.net/*/*/*.parquet', format='parquet')
+ display(df.limit(10))
+ df.count()
+ df.printSchema()
+ ```
+5. Select **Run All** to see the results
+[![Screenshot of spark run results in Azure Synapse Analytics.](./media/stream-analytics-no-code/spark-run-all.png)](./media/stream-analytics-no-code/spark-run-all.png#lightbox)
+
+### Query using Azure Synapse Serverless SQL
+1. In the **Develop** hub, create a new **SQL script**.
+2. Paste the following script and **Run** it using the **Built-in** serverless SQL endpoint. Replace *container* and *adlsname* with the name of the container and ADLS Gen2 account used in the previous step. A variant of this query that also returns the capture folders is shown after this step.
+ ```SQL
+ SELECT
+ TOP 100 *
+ FROM
+ OPENROWSET(
+ BULK 'https://adlsname.dfs.core.windows.net/container/*/*/*.parquet',
+ FORMAT='PARQUET'
+ ) AS [result]
+ ```
+[![Screenshot of SQL query results using Azure Synapse Analytics.](./media/stream-analytics-no-code/sql-results.png)](./media/stream-analytics-no-code/sql-results.png#lightbox)
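+
+Serverless SQL can also surface the *{date}* and *{time}* capture folders as columns by applying the `filepath()` function to the wildcard positions in the `BULK` path. The following variant of the query above is only a sketch; it uses the same *adlsname* and *container* placeholders, which you should replace as before:
+
+```SQL
+SELECT
+    TOP 100
+    -- filepath(1) returns the value matched by the first * in the BULK path (the {date} folder)
+    [result].filepath(1) AS [capture_date],
+    -- filepath(2) returns the value matched by the second * (the {time} folder)
+    [result].filepath(2) AS [capture_time],
+    *
+FROM
+    OPENROWSET(
+        BULK 'https://adlsname.dfs.core.windows.net/container/*/*/*.parquet',
+        FORMAT='PARQUET'
+    ) AS [result]
+```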
+
+## Clean up resources
+1. Locate your Event Hubs instance and see the list of Stream Analytics jobs under the **Process Data** section. Stop any jobs that are running.
+2. Go to the resource group you used while deploying the TollApp event generator.
+3. Select **Delete resource group**. Type the name of the resource group to confirm deletion.
+
+## Next steps
+In this tutorial, you learned how to create a Stream Analytics job using the no code editor to capture Event Hubs data streams in Parquet format. You then used Azure Synapse Analytics to query the parquet files using both Synapse Spark and Synapse SQL.
+
+> [!div class="nextstepaction"]
+> [No code stream processing with Azure Stream Analytics](https://aka.ms/asanocodeux)
stream-analytics Filter Ingest Data Lake Storage Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/filter-ingest-data-lake-storage-gen2.md
+
+ Title: Filter and ingest to Azure Data Lake Storage Gen2 using the Stream Analytics no code editor
+description: Learn how to use the no code editor to easily create a Stream Analytics job. It continuously reads from Event Hubs, filters the incoming data, and then writes the results continuously to Azure Data Lake Storage Gen2.
+++++ Last updated : 05/08/2022++
+# Filter and ingest to Azure Data Lake Storage Gen2 using the Stream Analytics no code editor
+
+This article describes how you can use the no code editor to easily create a Stream Analytics job. The job continuously reads from your Event Hubs, filters the incoming data, and then writes the results to Azure Data Lake Storage Gen2.
+
+## Prerequisites
+
+- Your Azure Event Hubs resources must be publicly accessible and can't be behind a firewall or secured in an Azure Virtual Network.
+- The data in your Event Hubs must be serialized in either JSON, CSV, or Avro format.
+
+## Develop a Stream Analytics job to filter and ingest real time data
+
+1. In the Azure portal, locate and select the Azure Event Hubs instance.
+1. Select **Features** > **Process Data** and then select **Start** on the **Filter and ingest to ADLS Gen2** card.
+ :::image type="content" source="./media/filter-ingest-data-lake-storage-gen2/filter-data-lake-gen2-card-start.png" alt-text="Screenshot showing the Filter and ingest to ADLS Gen2 card where you select Start." lightbox="./media/filter-ingest-data-lake-storage-gen2/filter-data-lake-gen2-card-start.png" :::
+1. Enter a name for the Stream Analytics job, then select **Create**.
+ :::image type="content" source="./media/filter-ingest-data-lake-storage-gen2/create-job.png" alt-text="Screenshot showing where to enter a job name." lightbox="./media/filter-ingest-data-lake-storage-gen2/create-job.png" :::
+1. Specify the **Serialization** type of your data in the Event Hubs window and the **Authentication method** that the job will use to connect to the Event Hubs. Then select **Connect**.
+ :::image type="content" source="./media/filter-ingest-data-lake-storage-gen2/event-hub-review-connect.png" alt-text="Screenshot showing the Event Hubs area where you select Serialization and Authentication method." lightbox="./media/filter-ingest-data-lake-storage-gen2/event-hub-review-connect.png" :::
+1. If the connection is established successfully and you have data streams flowing in to the Event Hubs instance, you'll immediately see two things:
+ 1. Fields that are present in the input data. You can choose **Add field** or select the three dot symbol next to each field to remove, rename, or change its type.
+ :::image type="content" source="./media/filter-ingest-data-lake-storage-gen2/add-field.png" alt-text="Screenshot showing where you can add a field or remove, rename, or change a field type." lightbox="./media/filter-ingest-data-lake-storage-gen2/add-field.png" :::
+ 1. A live sample of incoming data in **Data preview** table under the diagram view. It automatically refreshes periodically. You can select **Pause streaming preview** to see a static view of sample input data.
+ :::image type="content" source="./media/filter-ingest-data-lake-storage-gen2/sample-input.png" alt-text="Screenshot showing sample data on the Data preview tab." lightbox="./media/filter-ingest-data-lake-storage-gen2/sample-input.png" :::
+1. In the **Filter** area, select a field to filter the incoming data with a condition. (A query sketch that's conceptually equivalent to this step appears after the procedure.)
+ :::image type="content" source="./media/filter-ingest-data-lake-storage-gen2/filter-data.png" alt-text="Screenshot showing the Filter area where you can add a conditional filter." lightbox="./media/filter-ingest-data-lake-storage-gen2/filter-data.png" :::
+1. Select the Azure Data Lake Gen2 tile to send your filtered data:
+ 1. Select the **subscription**, **storage account name**, and **container** from the drop-down menu.
+ 1. After the **subscription** is selected, the **authentication method** and **storage account key** should be automatically filled in. Select **Connect**.
+ For more information about the fields and to see examples of path pattern, see [Blob storage and Azure Data Lake Gen2 output from Azure Stream Analytics](blob-storage-azure-data-lake-gen2-output.md).
+ :::image type="content" source="./media/filter-ingest-data-lake-storage-gen2/data-lake-configuration.png" alt-text="Screenshot showing the Azure Data Lake Gen2 blob container connection configuration settings." lightbox="./media/filter-ingest-data-lake-storage-gen2/data-lake-configuration.png" :::
+1. Optionally, select **Get static preview/Refresh static preview** to see the data preview that will be ingested into Azure Data Lake Storage Gen2.
+1. Select **Save** and then select **Start** to start the Stream Analytics job.
+ :::image type="content" source="./media/filter-ingest-data-lake-storage-gen2/no-code-save-start.png" alt-text="Screenshot showing the job Save and Start options." lightbox="./media/filter-ingest-data-lake-storage-gen2/no-code-save-start.png" :::
+1. To start the job, specify the number of **Streaming Units (SUs)** that the job runs with. SUs represent the amount of compute and memory allocated to the job. We recommend that you start with three and then adjust as needed.
+1. After you select **Start**, the job starts running within two minutes.
+
+You can see the job under the Process Data section in the **Stream Analytics jobs** tab. Select **Open metrics** to monitor it or stop and restart it, as needed.
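+
+Behind the scenes, the no code editor builds a Stream Analytics job for you, so the filter step you configured above conceptually corresponds to a query with a `WHERE` clause. The following sketch is illustrative only; the input and output aliases and the `TollAmount > 5` condition are hypothetical placeholders, not values taken from your job:
+
+```SQL
+-- Illustrative shape of a filter-and-ingest job (aliases and condition are hypothetical)
+SELECT
+    *
+INTO
+    [data-lake-storage-gen2-output]
+FROM
+    [event-hub-input]
+WHERE TollAmount > 5
+```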
++
+## Next steps
+
+Learn more about Azure Stream Analytics and how to monitor the job you've created.
+
+- [Introduction to Azure Stream Analytics](stream-analytics-introduction.md)
+- [Monitor Stream Analytics jobs](stream-analytics-monitoring.md)
stream-analytics Filter Ingest Synapse Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/filter-ingest-synapse-sql.md
+
+ Title: Filter and ingest to Azure Synapse SQL using the Stream Analytics no code editor
+description: Learn how to use the no code editor to easily create a Stream Analytics job. It continuously reads from your Event Hubs, filters the incoming data, and then writes the results continuously to a Synapse SQL table.
+++++ Last updated : 05/08/2022++
+# Filter and ingest to Azure Synapse SQL using the Stream Analytics no code editor
+
+This article describes how you can use the no code editor to easily create a Stream Analytics job. The job continuously reads from your Event Hubs, filters the incoming data, and then writes the results to a Synapse SQL table.
+
+## Prerequisites
+
+- Your Azure Event Hubs resources must be publicly accessible and can't be behind a firewall or secured in an Azure Virtual Network.
+- The data in your Event Hubs must be serialized in either JSON, CSV, or Avro format.
+
+## Develop a Stream Analytics job to filter and ingest data
+
+Use the following steps to develop a Stream Analytics job to filter and ingest real time data into a Synapse SQL table.
+
+1. In the Azure portal, locate and select your Azure Event Hubs instance.
+1. Select **Features** > **Process Data**, and select **Start** on the **Filter and ingest to Synapse SQL** card.
+ :::image type="content" source="./media/filter-ingest-synapse-sql/process-event-hub-data-cards.png" alt-text="Screenshot showing the Process Event Hubs data start cards." lightbox="./media/filter-ingest-synapse-sql/process-event-hub-data-cards.png" :::
+1. Enter a name to identify your Stream Analytics job, then select **Create**.
+ :::image type="content" source="./media/filter-ingest-synapse-sql/create-new-stream-analytics-job.png" alt-text="Screenshot showing the New Stream Analytics job window where you enter the job name." lightbox="./media/filter-ingest-synapse-sql/create-new-stream-analytics-job.png" :::
+1. Specify the **Serialization type** of your data in the Event Hubs window and the **Authentication method** that the job will use to connect to the Event Hubs. Then select **Connect**.
+ :::image type="content" source="./media/filter-ingest-synapse-sql/event-hub-configuration.png" alt-text="Screenshot showing the Event Hubs connection configuration." lightbox="./media/filter-ingest-synapse-sql/event-hub-configuration.png" :::
+1. When the connection is established successfully and you have data streams flowing into your Event Hubs instance, you'll immediately see two things:
+ - Fields that are present in the input data. You can choose **Add field** or select the three dot symbol next to a field to remove, rename, or change its type.
+ :::image type="content" source="./media/filter-ingest-synapse-sql/no-code-schema.png" alt-text="Screenshot showing the Event Hubs field list where you can remove, rename, or change the field type." lightbox="./media/filter-ingest-synapse-sql/no-code-schema.png" :::
+ - A live sample of incoming data in the **Data preview** table under the diagram view. It automatically refreshes periodically. You can select **Pause streaming preview** to see a static view of the sample input data.
+ :::image type="content" source="./media/filter-ingest-synapse-sql/no-code-sample-input.png" alt-text="Screenshot showing sample data under Data Preview." lightbox="./media/filter-ingest-synapse-sql/no-code-sample-input.png" :::
+1. In the Filter area, select a field to filter the incoming data with a condition.
+ :::image type="content" source="./media/filter-ingest-synapse-sql/no-code-filter-data.png" alt-text="Screenshot showing the Filter area where you can filter incoming data with a condition." lightbox="./media/filter-ingest-synapse-sql/no-code-filter-data.png" :::
+1. Select the Synapse SQL table to send your filtered data (an example table definition appears after these steps):
+ 1. Select the **Subscription**, **Database (dedicated SQL pool name)** and **Authentication method** from the drop-down menu.
+ 1. Enter Table name where the filtered data will be ingested. Select **Connect**.
+ :::image type="content" source="./media/filter-ingest-synapse-sql/no-code-synapse-configuration.png" alt-text="Screenshot showing Synapse SQL table connection details." lightbox="./media/filter-ingest-synapse-sql/no-code-synapse-configuration.png" :::
+ > [!NOTE]
+ > The table schema must exactly match the number of fields and their types that your data preview generates.
+1. Optionally, select **Get static preview/Refresh static preview** to see the data preview that will be ingested into the selected Synapse SQL table.
+ :::image type="content" source="./media/filter-ingest-synapse-sql/no-code-synapse-static-preview.png" alt-text="Screenshot showing the Get static preview/Refresh static preview option." lightbox="./media/filter-ingest-synapse-sql/no-code-synapse-static-preview.png" :::
+1. Select **Save** and then select **Start** to start the Stream Analytics job.
+ :::image type="content" source="./media/filter-ingest-synapse-sql/no-code-save-start.png" alt-text="Screenshot showing the Save and Start options." lightbox="./media/filter-ingest-synapse-sql/no-code-save-start.png" :::
+1. To start the job, specify:
+ - The number of **Streaming Units (SUs)** the job runs with. SUs represent the amount of compute and memory allocated to the job. We recommend that you start with three and then adjust as needed.
+ - **Output data error handling** – It allows you to specify the behavior you want when a job's output to your destination fails due to data errors. By default, your job retries until the write operation succeeds. You can also choose to drop such output events.
+ :::image type="content" source="./media/filter-ingest-synapse-sql/no-code-start-job.png" alt-text="Screenshot showing the Start Stream Analytics job options where you can change the output time, set the number of streaming units, and select the Output data error handling options." lightbox="./media/filter-ingest-synapse-sql/no-code-start-job.png" :::
+1. After you select **Start**, the job starts running within two minutes.
+
+You can see the job under the Process Data section on the **Stream Analytics jobs** tab. Select **Open metrics** to monitor it or stop and restart it, as needed.
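+
+Because the destination table's schema must exactly match the fields and types shown in your data preview, it can help to create the table in your dedicated SQL pool up front. The following definition is an example only; the column names and types are hypothetical and should be replaced with the fields that your own data preview shows:
+
+```SQL
+-- Hypothetical table whose columns mirror the fields shown in the data preview
+CREATE TABLE [dbo].[FilteredTollData]
+(
+    TollId int,
+    EntryTime datetime,
+    LicensePlate nvarchar(50),
+    TollAmount float
+)
+WITH ( CLUSTERED COLUMNSTORE INDEX );
+```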
++
+## Next steps
+
+Learn more about Azure Stream Analytics and how to monitor the job you've created.
+
+* [Introduction to Azure Stream Analytics](stream-analytics-introduction.md)
+* [Monitor Stream Analytics jobs](stream-analytics-monitoring.md)
stream-analytics Machine Learning Udf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/machine-learning-udf.md
Last updated 03/31/2022-+ # Integrate Azure Stream Analytics with Azure Machine Learning
After you have deployed your web service, you send sample request with varying b
At optimal scaling, your Stream Analytics job should be able to send multiple parallel requests to your web service and get a response within a few milliseconds. The latency of the web service's response can directly impact the latency and performance of your Stream Analytics job. If the call from your job to the web service takes a long time, you will likely see an increase in watermark delay and may also see an increase in the number of backlogged input events.
-You can achieve low latency by ensuring that your Azure Kubernetes Service (AKS) cluster has been provisioned with the [right number of nodes and replicas](../machine-learning/how-to-deploy-azure-kubernetes-service.md?tabs=python#autoscaling). It's critical that your web service is highly available and returns successful responses. If your job receives an error that is retriable such as service unavailable response (503), it will automaticaly retry with exponential back off. If your job receives one of these errors as a response from the endpoint, the job will go to a failed state.
+You can achieve low latency by ensuring that your Azure Kubernetes Service (AKS) cluster has been provisioned with the [right number of nodes and replicas](../machine-learning/v1/how-to-deploy-azure-kubernetes-service.md?tabs=python#autoscaling). It's critical that your web service is highly available and returns successful responses. If your job receives a retriable error, such as a service unavailable (503) response, it will automatically retry with exponential backoff. If your job receives one of these errors as a response from the endpoint, the job will go to a failed state.
* Bad Request (400) * Conflict (409) * Not Found (404)
stream-analytics No Code Materialize Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/no-code-materialize-cosmos-db.md
+
+ Title: Materialize data in Azure Cosmos DB using no code editor
+description: Learn how to use the no code editor in Stream Analytics to materialize data from Event Hubs to Cosmos DB.
+++++ Last updated : 05/12/2022+
+# Materialize data in Azure Cosmos DB using the Stream Analytics no code editor
+
+This article describes how you can use the no code editor to easily create a Stream Analytics job. The job continuously reads from your Event Hubs and performs aggregations like count, sum, and average. You select fields to group by over a time window and then the job writes the results continuously to Azure Cosmos DB.
+
+## Prerequisites
+
+* Your Azure Event Hubs and Azure Cosmos DB resources must be publicly accessible and can't be behind a firewall or secured in an Azure Virtual Network.
+* The data in your Event Hubs must be serialized in either JSON, CSV, or Avro format.
+
+## Develop a Stream Analytics job
+
+Use the following steps to develop a Stream Analytics job to materialize data in Cosmos DB.
+
+1. In the Azure portal, locate and select your Azure Event Hubs instance.
+2. Under **Features**, select **Process Data**. Then, select **Start** in the card titled **Materialize Data in Cosmos DB**.
+ :::image type="content" source="./media/no-code-materialize-cosmos-db/no-code-materialize-view-start.png" alt-text="Screenshot showing the Start Materialize Data Flow." lightbox="./media/no-code-materialize-cosmos-db/no-code-materialize-view-start.png" :::
+3. Enter a name for your job and select **Create**.
+4. Specify the **Serialization** type of your data in the event hub and the **Authentication method** that the job will use to connect to the Event Hubs. Then select **Connect**.
+5. If the connection is successful and you have data streams flowing into your Event Hubs instance, you'll immediately see two things:
+ - Fields that are present in your input payload. Select the three dot symbol next to a field to optionally remove, rename, or change the data type of the field.
+ :::image type="content" source="./media/no-code-materialize-cosmos-db/no-code-schema.png" alt-text="Screenshot showing the event hub fields of input for you to review." lightbox="./media/no-code-materialize-cosmos-db/no-code-schema.png" :::
+ - A sample of your input data in the bottom pane under **Data preview** that automatically refreshes periodically. You can select **Pause streaming preview** if you prefer to have a static view of your sample input data.
+ :::image type="content" source="./media/no-code-materialize-cosmos-db/no-code-sample-input.png" alt-text="Screenshot showing sample input data." lightbox="./media/no-code-materialize-cosmos-db/no-code-sample-input.png" :::
+6. In the next step, you specify the field and the **aggregate** you want to calculate, such as Average and Count. You can also specify the field that you want to **Group By** along with the **time window**. Then you can validate the results of the step in the **Data preview** section. (A query sketch that's conceptually equivalent to this aggregation appears after these steps.)
+ :::image type="content" source="./media/no-code-materialize-cosmos-db/no-code-group-by.png" alt-text="Screenshot showing the Group By area." lightbox="./media/no-code-materialize-cosmos-db/no-code-group-by.png" :::
+7. Choose the **Cosmos DB database** and **container** where you want results written.
+8. Start the Stream Analytics job by selecting **Start**.
+ :::image type="content" source="./media/no-code-materialize-cosmos-db/no-code-cosmos-db-start.png" alt-text="Screenshot showing your definition where you select Start." lightbox="./media/no-code-materialize-cosmos-db/no-code-cosmos-db-start.png" :::
+To start the job, you must specify:
+ - The number of **Streaming Units (SU)** the job runs with. SUs represent the amount of compute and memory allocated to the job. We recommend that you start with three and adjust as needed.
+ - **Output data error handling** allows you to specify the behavior you want when a job's output to your destination fails due to data errors. By default, your job retries until the write operation succeeds. You can also choose to drop output events.
+9. After you select **Start**, the job starts running within two minutes. View the job under the **Process Data** section in the Stream Analytics jobs tab. You can explore job metrics and stop and restart it as needed.
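+
+For reference, the aggregation you configure in the group-by step conceptually corresponds to a windowed Stream Analytics query like the sketch below. The field names, the one-minute tumbling window, and the input and output aliases are hypothetical examples, not values taken from your job:
+
+```SQL
+-- Illustrative shape of the materialization query (all names and the window size are hypothetical)
+SELECT
+    deviceId,
+    AVG(temperature) AS averageTemperature,
+    COUNT(*) AS eventCount,
+    System.Timestamp() AS windowEndTime
+INTO
+    [cosmos-db-output]
+FROM
+    [event-hub-input]
+GROUP BY
+    deviceId,
+    TumblingWindow(minute, 1)
+```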
+
+## Next steps
+
+Now you know how to use the Stream Analytics no code editor to develop a job that reads from Event Hubs, calculates aggregates such as counts and averages, and writes the results to your Azure Cosmos DB resource.
+
+* [Introduction to Azure Stream Analytics](stream-analytics-introduction.md)
+* [Monitor Stream Analytics jobs](stream-analytics-monitoring.md)
stream-analytics No Code Power Bi Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/no-code-power-bi-tutorial.md
+
+ Title: Build near real-time dashboard with Azure Synapse Analytics and Power BI
+description: Use no code editor to compute aggregations and write to Azure Synapse Analytics and build near-real time dashboards using Power BI
++++ Last updated : 05/23/2022+++
+# Build real time Power BI dashboards with Stream Analytics no code editor
+This tutorial shows how you can use the Stream Analytics no code editor to compute aggregates on real-time data streams and store the results in Azure Synapse Analytics.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Deploy an event generator that sends data to your event hub
+> * Create a Stream Analytics job using the no code editor
+> * Review input data and schema
+> * Select fields to group by and define aggregations like count
+> * Configure Azure Synapse Analytics to which results will be written
+> * Run the Stream Analytics job
+> * Visualize data in Power BI
+
+## Prerequisites
+
+Before you start, make sure you've completed the following steps:
+
+* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/).
+* Deploy the TollApp event generator to Azure by using this link to [Deploy TollApp Azure Template](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-stream-analytics%2Fmaster%2FSamples%2FTollApp%2FVSProjects%2FTollAppDeployment%2Fazuredeploy.json). Set the 'interval' parameter to 1, and use a new resource group for this deployment.
+* Create an [Azure Synapse Analytics workspace](../synapse-analytics/get-started-create-workspace.md) with a [Dedicated SQL pool](../synapse-analytics/get-started-analyze-sql-pool.md#create-a-dedicated-sql-pool).
+* Create a table named **carsummary** using your Dedicated SQL pool. You can do this by running the following SQL script:
+ ```SQL
+ CREATE TABLE carsummary
+ (
+ Make nvarchar(20),
+ CarCount int,
+ times datetime
+ )
+ WITH ( CLUSTERED COLUMNSTORE INDEX ) ;
+ ```
+## Use no code editor to create a Stream Analytics job
+1. Locate the Resource Group in which the TollApp event generator was deployed.
+2. Select the Azure Event Hubs namespace. Then, under the Event Hubs section, select the **entrystream** instance.
+3. Go to **Process data** under the Features section and then click **Start** on the **Start with blank canvas** template.
+[![Screenshot of real time dashboard template in no code editor.](./media/stream-analytics-no-code/real-time-dashboard-power-bi.png)](./media/stream-analytics-no-code/real-time-dashboard-power-bi.png#lightbox)
+4. Name your job **carsummary** and select **Create**.
+5. Configure your event hub input by specifying
+ * Consumer Group: Default
+ * Serialization type of your input data: JSON
+ * Authentication mode which the job will use to connect to your event hub: Connection String defaults
+ * Click **Connect**
+6. Within a few seconds, you'll see sample input data and the schema. You can choose to drop fields, rename fields, or change data types if you want.
+7. Click the **Group by** tile on the canvas and connect it to the event hub tile. Configure the **Group by** tile by specifying the following (a conceptually equivalent query is sketched after these steps):
+ * Aggregation as **Count**
+ * Field as **Make** which is a nested field inside **CarModel**
+ * Click **Save**
+ * In the **Group by** settings, select **Make** and **Tumbling window** of **3 minutes**
+8. Click the **Manage Fields** tile and connect it to the Group by tile on canvas. Configure the **Manage Fields** tile by specifying:
+ * Clicking on **Add all fields**
+ * Rename the fields by clicking on the fields and changing the names from:
+ * COUNT_make to CarCount
+ * Window_End_Time to times
+9. Click the **Azure Synapse Analytics** tile and connect it to the **Manage Fields** tile on your canvas. Configure Azure Synapse Analytics by specifying:
+ * Subscription where your Azure Synapse Analytics is located
+ * Database of the Dedicated SQL pool which you used to create the Table in the previous section.
+ * Username and password to authenticate
+ * Table name as **carsummary**
+ * Click **Connect**. You'll see sample results that will be written to your Synapse SQL table.
+ [![Screenshot of synapse output in no code editor.](./media/stream-analytics-no-code/synapse-output.png)](./media/stream-analytics-no-code/synapse-output.png#lightbox)
+10. Select **Save** in the top ribbon to save your job, and then select **Start**. Set the Streaming Unit count to 3, and then select **Start** to run your job. Specify the storage account that will be used by Synapse SQL to load data into your data warehouse.
+11. You'll then see a list of all Stream Analytics jobs created using the no code editor. Within two minutes, your job will go to a **Running** state.
+[![Screenshot of job in running state in no code editor.](./media/stream-analytics-no-code/cosmos-db-running-state.png)](./media/stream-analytics-no-code/cosmos-db-running-state.png#lightbox)
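+
+Conceptually, the **Group by** and **Manage Fields** steps you configured are equivalent to a Stream Analytics query of the following shape. This is only a sketch; the no code editor generates and manages the job for you, and the input and output aliases shown here are illustrative:
+
+```SQL
+-- Count cars by Make over a 3-minute tumbling window and write to the carsummary table
+SELECT
+    CarModel.Make AS Make,
+    COUNT(*) AS CarCount,
+    System.Timestamp() AS times
+INTO
+    [carsummary]
+FROM
+    [entrystream]
+GROUP BY
+    CarModel.Make,
+    TumblingWindow(minute, 3)
+```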
+
+## Create a Power BI visualization
+1. Download the latest version of [Power BI desktop](https://powerbi.microsoft.com/desktop).
+2. Use the Power BI connector for Azure Synapse SQL to connect to your database.
+3. Use this query to fetch data from your database
+ ```SQL
+ SELECT [Make],[CarCount],[times]
+ FROM [dbo].[carsummary]
+ WHERE times >= DATEADD(day, -1, GETDATE())
+ ```
+4. You can then create a line chart with
+ * X-axis as times
+ * Y-axis as CarCount
+ * Legend as Make
+ You'll then see a chart that can be published. You can configure [automatic page refresh](/power-bi/create-reports/desktop-automatic-page-refresh#authoring-reports-with-automatic-page-refresh-in-power-bi-desktop) and set it to 3 minutes to get a near-real time view.
+[![Screenshot of Power BI dashboard showing car summary data.](./media/stream-analytics-no-code/no-code-power-bi-real-time-dashboard.png)](./media/stream-analytics-no-code/no-code-power-bi-real-time-dashboard.png#lightbox)
+
+## Clean up resources
+1. Locate your Event Hubs instance and see the list of Stream Analytics jobs under the **Process Data** section. Stop any jobs that are running.
+2. Go to the resource group you used while deploying the TollApp event generator.
+3. Select **Delete resource group**. Type the name of the resource group to confirm deletion.
+
+## Next steps
+In this tutorial, you created a Stream Analytics job using the no code editor to define aggregations and write results to Azure Synapse Analytics. You then used Power BI to build a near real-time dashboard to see the results produced by the job.
+
+> [!div class="nextstepaction"]
+> [No code stream processing with Azure Stream Analytics](https://aka.ms/asanocodeux)
stream-analytics No Code Stream Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/no-code-stream-processing.md
+
+ Title: No code stream processing using Azure Stream Analytics
+description: Learn about processing your real time data streams in Azure Event Hubs using the Azure Stream Analytics no code editor.
+++++ Last updated : 05/08/2022++
+# No code stream processing using Azure Stream Analytics (Preview)
+
+You can process your real time data streams in Azure Event Hubs using Azure Stream Analytics. The no code editor allows you to easily develop a Stream Analytics job without writing a single line of code. Within minutes, you can develop and run a job that tackles many scenarios, including:
+
+- Filtering and ingesting to Azure Synapse SQL
+- Capturing your Event Hubs data in Parquet format in Azure Data Lake Storage Gen2
+- Materializing data in Azure Cosmos DB
+
+The experience provides a canvas that allows you to connect to input sources to quickly see your streaming data. Then you can transform it before writing to your destination of choice in Azure.
+
+You can:
+
+- Modify input schema
+- Perform data preparation operations like joins and filters
+- Tackle advanced scenarios such as time-window aggregations (tumbling, hopping, and session windows) for group-by operations
+
+After you create and run your Stream Analytics jobs, you can easily operationalize production workloads. Use the right set of [built-in metrics](stream-analytics-monitoring.md) for monitoring and troubleshooting purposes. Stream Analytics jobs are billed according to the [pricing model](https://azure.microsoft.com/pricing/details/stream-analytics/) when they're running.
+
+## Prerequisites
+
+Before you develop your Stream Analytics jobs using the no code editor, you must meet these requirements.
+
+- The Azure Event Hubs namespace and any target destination resource where you want to write must be publicly accessible and can't be in an Azure Virtual Network.
+- You must have the required permissions to access the streaming input and output resources.
+- You must have permissions to create and modify Azure Stream Analytics resources.
+
+## Azure Stream Analytics job
+
+A Stream Analytics job is built on three main components: _streaming inputs_, _transformations_, and _outputs_. You can have as many components as you want, including multiple inputs, parallel branches with multiple transformations, and multiple outputs. For more information, see [Azure Stream Analytics documentation](index.yml).
+
+To use the no code editor to easily create a Stream Analytics job, open an Event Hubs instance. Select **Process Data**, and then select any template.
++
+The following screenshot shows a finished Stream Analytics job. It highlights all the sections available to you while you author.
++
+1. **Ribbon** - On the ribbon, sections follow the order of a classic analytics process: Event Hubs as input (also known as data source), transformations (streaming ETL operations), outputs, a button to save your progress, and a button to start the job.
+2. **Diagram view** - A graphical representation of your Stream Analytics job, from input to operations to outputs.
+3. **Side pane** - Depending on which component you selected in the diagram view, you'll have settings to modify input, transformation, or output.
+4. **Tabs for data preview, authoring errors, and runtime errors** - For each tile shown, the data preview will show you results for that step (live for inputs and on-demand for transformations and outputs). This section also summarizes any authoring errors or warnings that you might have in your job when it's being developed. Selecting each error or warning will select that transform.
+
+## Event Hubs as the streaming input
+
+Azure Event Hubs is a big-data streaming platform and event ingestion service. It can receive and process millions of events per second. Data sent to an event hub can be transformed and stored by using any real-time analytics provider or batching/storage adapters.
+
+To configure an event hub as an input for your job, select the **Event Hub** symbol. A tile appears in the diagram view, including a side pane for its configuration and connection.
+
+After you set up your Event Hubs credentials and select **Connect**, you can add fields manually by using **+ Add field** if you know the field names. To instead detect fields and data types automatically based on a sample of the incoming messages, select **Autodetect fields**. Selecting the gear symbol allows you to edit the credentials if needed. When the Stream Analytics job detects the fields, you'll see them in the list. You'll also see a live preview of the incoming messages in the **Data Preview** table under the diagram view.
+
+You can always edit the field names, or remove or change the data type, by selecting the three dot symbol next to each field. You can also expand, select, and edit any nested fields from the incoming messages, as shown in the following image.
++
+The available data types are:
+
+- **DateTime** - Date and time field in ISO format
+- **Float** - Decimal number
+- **Int** - Integer number
+- **Record** - Nested object with multiple records
+- **String** - Text
+
+## Transformations
+
+Streaming data transformations are inherently different from batch data transformations. Almost all streaming data has a time component, which affects any data preparation tasks involved.
+
+To add a streaming data transformation to your job, select the transformation symbol on the ribbon for that transformation. The respective tile will be dropped in the diagram view. After you select it, you'll see the side pane for that transformation to configure it.
+
+### Filter
+
+Use the **Filter** transformation to filter events based on the value of a field in the input. Depending on the data type (number or text), the transformation will keep the values that match the selected condition.
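+
+For reference, a filter in the no code editor corresponds to a `WHERE` clause in the Stream Analytics query language. The following is a minimal sketch rather than the code the editor generates; the stream, output, and field names are placeholders:
+
+```SQL
+-- Keep only events whose toll amount is greater than 10.
+-- 'EntryStream' and 'FilteredOutput' are placeholder alias names.
+SELECT *
+INTO FilteredOutput
+FROM EntryStream
+WHERE Toll > 10
+```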
++
+> [!NOTE]
+> Inside every tile, you'll see information about what else is needed for the transformation to be ready. For example, when you're adding a new tile, you'll see a `Set-up required` message. If you're missing a node connector, you'll see either an *Error* or a *Warning* message.
+
+### Manage fields
+
+The **Manage fields** transformation allows you to add, remove, or rename fields coming in from an input or another transformation. The settings on the side pane give you the option of adding a new field by selecting **Add field**, or of adding all fields at once.
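+
+Conceptually, this transformation maps to projecting, renaming, and casting columns in a query. The following sketch is illustrative only; every identifier in it is a placeholder:
+
+```SQL
+-- Keep three fields, rename two of them, and cast one to a different type.
+SELECT
+    DeviceId AS SensorId,
+    CAST(Temp AS float) AS Temperature,
+    EventEnqueuedUtcTime
+INTO ManagedOutput
+FROM SensorStream
+```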
++
+> [!TIP]
+> After you configure a tile, the diagram view gives you a glimpse of the settings within the tile itself. For example, in the **Manage fields** area of the preceding image, you can see the first three fields being managed and the new names assigned to them. Each tile has information relevant to it.
+
+### Aggregate
+
+You can use the **Aggregate** transformation to calculate an aggregation (**Sum**, **Minimum**, **Maximum**, or **Average**) every time a new event occurs over a period of time. This operation also allows you to filter or slice the aggregation based on other dimensions in your data. You can have one or more aggregations in the same transformation.
+
+To add an aggregation, select the transformation symbol. Then connect an input, select the aggregation, add any filter or slice dimensions, and select the period of time over which the aggregation will be calculated. In this example, we're calculating the sum of the toll value by the state where the vehicle is from over the last 10 seconds.
++
+To add another aggregation to the same transformation, select **Add aggregate function**. Keep in mind that the filter or slice will apply to all aggregations in the transformation.
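+
+Expressed in the Stream Analytics query language, this kind of aggregation roughly corresponds to a windowed `GROUP BY`. The sketch below assumes a sliding window and placeholder stream, field, and output names; the editor may generate a different but equivalent query:
+
+```SQL
+-- Sum of the toll value per state over the last 10 seconds.
+SELECT State, SUM(Toll) AS TotalToll
+INTO TollSummary
+FROM EntryStream TIMESTAMP BY EntryTime
+GROUP BY State, SlidingWindow(second, 10)
+```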
+
+### Join
+
+Use the **Join** transformation to combine events from two inputs based on the field pairs that you select. If you don't select a field pair, the join will be based on time by default. The default is what makes this transformation different from a batch one.
+
+As with regular joins, you have different options for your join logic:
+
+- **Inner join** - Include only records from both tables where the pair matches. In this example, that's where the license plate matches both inputs.
+- **Left outer join** - Include all records from the left (first) table and only the records from the second one that match the pair of fields. If there's no match, the fields from the second input will be blank.
+
+To select the type of join, select the symbol for the preferred type on the side pane.
+
+Finally, select over what period you want the join to be calculated. In this example, the join looks at the last 10 seconds. Keep in mind that the longer the period is, the less frequent the output is&mdash;and the more processing resources you'll use for the transformation.
+
+By default, all fields from both tables are included. Prefixes left (first node) and right (second node) in the output help you differentiate the source.
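+
+In the Stream Analytics query language, a streaming join is always bounded in time with `DATEDIFF`. The following sketch shows an inner join equivalent to this example; the stream, field, and output names are placeholders:
+
+```SQL
+-- Match entry and exit events with the same license plate
+-- that occur within 10 seconds of each other.
+SELECT EN.LicensePlate, EN.State, EX.ExitTime
+INTO MatchedTolls
+FROM EntryStream EN TIMESTAMP BY EntryTime
+JOIN ExitStream EX TIMESTAMP BY ExitTime
+    ON EN.LicensePlate = EX.LicensePlate
+    AND DATEDIFF(second, EN, EX) BETWEEN 0 AND 10
+```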
++
+### Group by
+
+Use the **Group by** transformation to calculate aggregations across all events within a certain time window. You can group by the values in one or more fields. It's like the **Aggregate** transformation but provides more options for aggregations. It also includes more complex time-window options. Also like **Aggregate**, you can add more than one aggregation per transformation.
+
+The aggregations available in the transformation are:
+
+- **Average**
+- **Count**
+- **Maximum**
+- **Minimum**
+- **Percentile** (continuous and discrete)
+- **Standard Deviation**
+- **Sum**
+- **Variance**
+
+To configure the transformation:
+
+1. Select your preferred aggregation.
+2. Select the field that you want to aggregate on.
+3. Select an optional group-by field if you want to get the aggregate calculation over another dimension or category. For example, **State**.
+4. Select your function for time windows.
+
+To add another aggregation to the same transformation, select **Add aggregate function**. Keep in mind that the **Group by** field and the windowing function will apply to all aggregations in the transformation.
++
+A time stamp for the end of the time window is provided as part of the transformation output for reference. For more information about time windows supported by Stream Analytics jobs, see [Windowing functions (Azure Stream Analytics)](/stream-analytics-query/windowing-azure-stream-analytics).
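+
+As a point of reference, a **Group by** configuration like this maps to a windowed `GROUP BY` query. The sketch below uses a tumbling window and placeholder names; a hopping or session window could be substituted in the same position:
+
+```SQL
+-- Vehicle count and average toll per state over 3-minute tumbling windows.
+-- System.Timestamp() returns the end of each window.
+SELECT
+    State,
+    COUNT(*) AS VehicleCount,
+    AVG(Toll) AS AverageToll,
+    System.Timestamp() AS WindowEndTime
+INTO StateSummary
+FROM EntryStream TIMESTAMP BY EntryTime
+GROUP BY State, TumblingWindow(minute, 3)
+```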
+
+### Union
+
+Use the **Union** transformation to connect two or more inputs to add events with shared fields (with the same name and data type) into one table. Fields that don't match will be dropped and not included in the output.
+
+### Expand
+
+Use the **Expand** transformation to create a new row for each value within an array.
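+
+In query terms, expanding an array is comparable to using `CROSS APPLY` with the `GetArrayElements` function. This is a sketch with placeholder names, not the exact query produced by the editor:
+
+```SQL
+-- Produce one output row per element of the 'Readings' array field.
+SELECT
+    S.DeviceId,
+    R.ArrayValue AS Reading
+INTO ExpandedOutput
+FROM SensorStream S
+CROSS APPLY GetArrayElements(S.Readings) AS R
+```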
++
+## Streaming outputs
+
+The no-code drag-and-drop experience currently supports three outputs to store your processed real-time data.
++
+### Azure Data Lake Storage Gen2
+
+Data Lake Storage Gen2 makes Azure Storage the foundation for building enterprise data lakes on Azure. It's designed from the start to service multiple petabytes of information while sustaining hundreds of gigabits of throughput. It allows you to easily manage massive amounts of data. Azure Blob storage offers a cost-effective and scalable solution for storing large amounts of unstructured data in the cloud.
+
+Select **ADLS Gen2** as output for your Stream Analytics job and select the container where you want to send the output of the job. For more information about Azure Data Lake Gen2 output for a Stream Analytics job, see [Blob storage and Azure Data Lake Gen2 output from Azure Stream Analytics](blob-storage-azure-data-lake-gen2-output.md).
+
+### Azure Synapse Analytics
+
+Azure Stream Analytics jobs can output to a dedicated SQL pool table in Azure Synapse Analytics and can process throughput rates of up to 200 MB/sec. It supports the most demanding real-time analytics and hot-path data processing needs for workloads such as reporting and dashboarding.
+
+> [!IMPORTANT]
+> The dedicated SQL pool table must exist before you can add it as output to your Stream Analytics job. The table's schema must match the fields and their types in your job's output.
+
+Select **Synapse** as output for your Stream Analytics job and select the SQL pool table where you want to send the output of the job. For more information about Synapse output for a Stream Analytics job, see [Azure Synapse Analytics output from Azure Stream Analytics](azure-synapse-analytics-output.md).
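+
+For example, if your job outputs a string, an aggregate count, and a timestamp, the dedicated SQL pool table might be created as shown below. The table name, column names, types, and distribution option are illustrative only; they must match your own job's output schema:
+
+```SQL
+-- Create the target table before starting the Stream Analytics job.
+CREATE TABLE dbo.carsummary
+(
+    Make     nvarchar(100),
+    CarCount bigint,
+    times    datetime2
+)
+WITH
+(
+    DISTRIBUTION = ROUND_ROBIN
+);
+```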
+
+### Azure Cosmos DB
+
+Azure Cosmos DB is a globally distributed database service that offers limitless elastic scale around the globe, rich query, and automatic indexing over schema-agnostic data models.
+
+Select **CosmosDB** as output for your Stream Analytics job. For more information about Cosmos DB output for a Stream Analytics job, see [Azure Cosmos DB output from Azure Stream Analytics](azure-cosmos-db-output.md).
+
+## Data preview and errors
+
+The no code drag-and-drop experience provides tools to help you author, troubleshoot, and evaluate the performance of your analytics pipeline for streaming data.
+
+### Live data preview for inputs
+
+When you're connecting to an event hub and selecting its tile in the diagram view (the **Data Preview** tab), you'll get a live preview of data coming in if all the following are true:
+
+- Data is being pushed.
+- The input is configured correctly.
+- Fields have been added.
+
+As shown in the following screenshot, if you want to see or drill down into something specific, you can pause the preview (1). Or you can start it again if you're done.
+
+You can also see the details of a specific record, a _cell_ in the table, by selecting it and then selecting **Show/Hide details** (2). The screenshot shows the detailed view of a nested object in a record.
++
+### Static preview for transformations and outputs
+
+After you add and set up any steps in the diagram view, you can test their behavior by selecting **Get static preview**.
++
+After you do, the Stream Analytics job evaluates all transformations and outputs to make sure they're configured correctly. Stream Analytics then displays the results in the static data preview, as shown in the following image.
++
+You can refresh the preview by selecting **Refresh static preview** (1). When you refresh the preview, the Stream Analytics job takes new data from the input and evaluates all transformations. Then it outputs again with any updates that you might have performed. The **Show/Hide details** option is also available (2).
+
+### Authoring errors
+
+If you have any authoring errors or warnings, the Authoring errors tab will list them, as shown in the following screenshot. The list includes details about the error or warning, the type of card (input, transformation, or output), the error level, and a description of the error or warning.
++
+### Runtime errors
+
+Runtime errors are Warning, Error, or Critical level errors. These errors are helpful when you want to edit your Stream Analytics job topology or configuration for troubleshooting. In the following screenshot example, the user has configured the Synapse output with an incorrect table name. The user started the job, but there's a runtime error stating that the schema definition for the output table can't be found.
++
+## Start a Stream Analytics job
+
+Once you've configured the Event Hubs input, transformations, and streaming outputs for the job, you can save and start the job.
++
+- Output start time - When you start a job, you select a time for the job to start creating output.
+ - Now - Makes the starting point of the output event stream the same as when the job is started.
+ - Custom - You can choose the starting point of the output.
+ - When last stopped - This option is available when the job was previously started but was stopped manually or failed. When you choose this option, the last output time will be used to restart the job, so no data is lost.
+- Streaming units - Streaming Units represent the amount of compute and memory assigned to the job while running. If you're unsure how many SUs to choose, we recommend that you start with three and adjust as needed.
+- Output data error handling - Output data error handling policies only apply when the output event produced by a Stream Analytics job doesn't conform to the schema of the target sink. You can configure the policy by choosing either **Retry** or **Drop**. For more information, see [Azure Stream Analytics output error policy](stream-analytics-output-error-policy.md).
+- Start - Starts the Stream Analytics job.
++
+## Stream Analytics jobs list
+
+You can see the list of all Stream Analytics jobs created by no-code drag and drop under **Process data** > **Stream Analytics jobs**.
++
+- Filter - You can filter the list by job name.
+- Refresh - The list doesn't auto-refresh currently. Use the option to refresh the list and see the latest status.
+- Job name - The name you provided in the first step of job creation. You can't edit it. Select the job name to open the job in the no-code drag and drop experience where you can Stop the job, edit it, and Start it again.
+- Status - The status of the job. Select Refresh on top of the list to see the latest status.
+- Streaming units - The number of Streaming units selected when you started the job.
+- Output watermark - An indicator of liveliness for the data produced by the job. All events before the timestamp are already computed.
+- Job monitoring - Select **Open metrics** to see the metrics related to this Stream Analytics job. For more information about the metrics you can use to monitor your Stream Analytics job, see [Metrics available for Stream Analytics](stream-analytics-monitoring.md#metrics-available-for-stream-analytics).
+- Operations - Start, stop, or delete the job.
+
+## Next steps
+
+Learn how to use the no code editor to address common scenarios using predefined templates:
+
+- [Capture Event Hubs data in Parquet format](capture-event-hub-data-parquet.md)
+- [Filter and ingest to Azure Synapse SQL](filter-ingest-synapse-sql.md)
+- [Filter and ingest to Azure Data Lake Storage Gen2](filter-ingest-data-lake-storage-gen2.md)
+- [Materialize data to Azure Cosmos DB](no-code-materialize-cosmos-db.md)
stream-analytics Stream Analytics Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-autoscale.md
Title: Autoscale Stream Analytics jobs
-description: This article describes how to autoscale Stream Analytics job based on a predefined schedule or values of job metrics
+ Title: Azure Stream Analytics autoscale streaming units
+description: This article explains how you can use different scaling methods for your Stream Analytics job to make sure you have the right number of streaming units.
- + Previously updated : 06/03/2020 Last updated : 05/10/2022
-# Autoscale Stream Analytics jobs using Azure Automation
-You can optimize the cost of your Stream Analytics jobs by configuring autoscale. Autoscaling increases or decreases your job's Streaming Units (SUs) to match the change in your input load. Instead of over-provisioning your job, you can scale up or down as needed. There are two ways to configure your jobs to autoscale:
-1. **Pre-define a schedule** when you have a predictable input load. For example, you expect a higher rate of input events during the daytime and want your job to run with more SUs.
-2. **Trigger scale up and scale down operations based on job metrics** when you don't have a predictable input load. You can dynamically change the number of SUs based on your job metrics such as the number of input events or backlogged input events.
+# Autoscale streaming units (Preview)
+
+Streaming units (SUs) represent the computing resources that are allocated to execute a Stream Analytics job. The higher the number of SUs, the more CPU and memory resources are allocated to your job. Stream Analytics offers two types of scaling, which allow you to run the right number of [streaming units](stream-analytics-streaming-unit-consumption.md) to handle the load of your job.
+
+This article explains how you can use these different scaling methods for your Stream Analytics job in the Azure portal.
+
+The two types of scaling supported by Stream Analytics are _manual scale_ and _custom autoscale_.
+
+_Manual scale_ allows you to maintain and adjust a fixed number of streaming units for your job.
+
+_Custom autoscale_ allows you to specify the minimum and maximum number of streaming units for your job, and to adjust the number dynamically based on your rule definitions. Custom autoscale examines the preconfigured set of rules. It then determines whether to add SUs to handle increases in load, or to reduce the number of SUs when computing resources are sitting idle. For more information about autoscale in Azure Monitor, see [Overview of autoscale in Microsoft Azure](../azure-monitor/autoscale/autoscale-overview.md).
+
+> [!NOTE]
+> Although you can use manual scale regardless of the job's state, custom autoscale can only be enabled when the job is in the `running` state.
+
+Examples of custom autoscale rules include:
+
+- Increase streaming units when the average SU% utilization of the job over the last 2 minutes goes above 75%.
+- Decrease streaming units when the average SU% utilization of the job over the last 15 minutes is below 30%.
+- Use more streaming units during business hours and fewer during off hours.
+
+## Autoscale limits
+
+All Stream Analytics jobs can autoscale between 1, 3, and 6 SUs. Autoscaling beyond 6 SUs requires your job to have a parallel or [embarrassingly parallel topology](stream-analytics-parallelization.md#embarrassingly-parallel-jobs). Parallel jobs created with 6 or fewer streaming units can autoscale up to the maximum SU value for that job, based on the number of partitions.
-## Prerequisites
-Before you start to configure autoscaling for your job, complete the following steps.
-1. Your job is optimized to have a [parallel topology](./stream-analytics-parallelization.md). If you can change the scale of your job while it is running, then your job has a parallel topology and can be configured to autoscale.
-2. [Create an Azure Automation account](../automation/automation-create-standalone-account.md) with the option "RunAsAccount" enabled. This account must have permissions to manage your Stream Analytics jobs.
+## Scaling your Stream Analytics job
-## Set up Azure Automation
-### Configure variables
-Add the following variables inside the Azure Automation account. These variables will be used in the runbooks that are described in the next steps.
+First, follow these steps to navigate to the **Scale** page for your Azure Stream Analytics job.
-| Name | Type | Value |
-| | | |
-| **jobName** | String | Name of your Stream Analytics job that you want to autoscale. |
-| **resourceGroupName** | String | Name of the resource group in which your job is present. |
-| **subId** | String | Subscription ID in which your job is present. |
-| **increasedSU** | Integer | The higher SU value you want your job to scale to in a schedule. This value must be one of the valid SU options you see in the **Scale** settings of your job while it is running. |
-| **decreasedSU** | Integer | The lower SU value you want your job to scale to in a schedule. This value must be one of the valid SU options you see in the **Scale** settings of your job while it is running. |
-| **maxSU** | Integer | The maximum SU value you want your job to scale to in steps when autoscaling by load. This value must be one of the valid SU options you see in the **Scale** settings of your job while it is running. |
-| **minSU** | Integer | The minimum SU value you want your job to scale to in steps when autoscaling by load. This value must be one of the valid SU options you see in the **Scale** settings of your job while it is running. |
+1. Sign in to [Azure portal](https://portal.azure.com/)
+2. In the list of resources, find the Stream Analytics job that you want to scale and then open it.
+3. In the job page, under the **Configure** heading, select **Scale**.
+ :::image type="content" source="./media/stream-analytics-autoscale/configure-scale.png" alt-text="Screenshot showing navigation to Scale." lightbox="./media/stream-analytics-autoscale/configure-scale.png" :::
+1. Under **Configure** , you'll see two options for scaling: **Manual scale** and **Custom autoscale**.
+ :::image type="content" source="./media/stream-analytics-autoscale/configure-manual-custom-autoscale.png" alt-text="Screenshot showing the Configure area where you select Manual scale or custom autoscale." lightbox="./media/stream-analytics-autoscale/configure-manual-custom-autoscale.png" :::
-![Add variables in Azure Automation](./media/autoscale/variables.png)
+## Manual scale
-### Create runbooks
-The next step is to create two PowerShell runbooks. One for scale up and the other for scale down operations.
-1. In your Azure Automation account, go to **Runbooks** under **Process Automation** and select **Create Runbook**.
-2. Name the first runbook *ScaleUpRunbook* with the type set to PowerShell. Use the [ScaleUpRunbook PowerShell script](https://github.com/Azure/azure-stream-analytics/blob/master/Autoscale/ScaleUpRunbook.ps1) available in GitHub. Save and publish it.
-3. Create another runbook called *ScaleDownRunbook* with the type PowerShell. Use the [ScaleDownRunbook PowerShell script](https://github.com/Azure/azure-stream-analytics/blob/master/Autoscale/ScaleDownRunbook.ps1) available in GitHub. Save and publish it.
+This setting allows you to set a fixed number of streaming units for your job. Notice that the default number of SUs is 3 when creating a job.
-![Autoscale runbooks in Azure Automation](./media/autoscale/runbooks.png)
+### To manually scale your job
-You now have runbooks that can automatically trigger scale up and scale down operations on your Stream Analytics job. These runbooks can be triggered using a pre-defined schedule or can be set dynamically based on job metrics.
+1. Select **Manual scale** if it isn't already selected.
+2. Use the **Slider** to set the SUs for the job or type directly into the box. You're limited to specific SU settings when the job is running. The limitation is dependent on your job configuration.
+ :::image type="content" source="./media/stream-analytics-autoscale/manual-scale-slider.png" alt-text="Screenshot showing Manual scale where you select the number of streaming units with a slider." lightbox="./media/stream-analytics-autoscale/manual-scale-slider.png" :::
+3. Select **Save** on the toolbar to save the setting.
+ :::image type="content" source="./media/stream-analytics-autoscale/save-manual-scale-setting.png" alt-text="Screenshot showing the Save option in the Configure area." lightbox="./media/stream-analytics-autoscale/save-manual-scale-setting.png" :::
+
+## Custom autoscale - default condition
-## Autoscale based on a schedule
-Azure Automation allows you to configure a schedule to trigger your runbooks.
-1. In your Azure Automation account, select **Schedules** under **Shared resources**. Then, select **Add a schedule**.
-2. For example, you can create two schedules. One that represents when you want your job to scale up and another that represents when you want your job to scale down. You can define a recurrence for these schedules.
+You can configure automatic scaling of streaming units by using conditions. The **Default** scale condition is executed when none of the other scale conditions match. As such, you must select a Default condition for your job. You may choose a name for your Default condition or leave it as `Auto created scale condition`, which is pre-populated.
- ![Schedules in Azure Automation](./media/autoscale/schedules.png)
-3. Open your **ScaleUpRunbook** and then select **Schedules** under **Resources**. You can then link your runbook to a schedule you created in the previous steps. You can have multiple schedules linked with the same runbook which can be helpful when you want to run the same scale operation at different times of the day.
+Set the **Default** condition by choosing one of the following scale modes:
-![Scheduling runbooks in Azure Automation](./media/autoscale/schedulerunbook.png)
+- **Scale based on a metric** (such as CPU or memory usage)
+- **Scale to specific number of streaming units**
-1. Repeat the previous step for **ScaleDownRunbook**.
+> [!NOTE]
+> You can't set a **Schedule** within the Default condition. The Default is only executed when none of the other schedule conditions are met.
-## Autoscale based on load
-There might be cases where you cannot predict input load. In such cases, it is more optimal to scale up/down in steps within a minimum and maximum bound. You can configure alert rules in your Stream Analytics jobs to trigger runbooks when job metrics go above or below a threshold.
-1. In your Azure Automation account, create two more Integer variables called **minSU** and **maxSU**. This sets the bounds within which your job will scale in steps.
-2. Create two new runbooks. You can use the [StepScaleUp PowerShell script](https://github.com/Azure/azure-stream-analytics/blob/master/Autoscale/StepScaleUp.ps1) that
- increases the SUs of your job in increments until **maxSU** value. You can also use the [StepScaleDown PowerShell script](https://github.com/Azure/azure-stream-analytics/blob/master/Autoscale/StepScaleDown.ps1) that decreases the SUs of your job in steps until **minSU** value is reached. Alternatively, you can use the runbooks from the previous section if you have specific SU values you want to scale to.
-3. In your Stream Analytics job, select **Alert rules** under **Monitoring**.
-4. Create two action groups. One to be used for scale up operation and another for scale down operation. Select **Manage Actions** and then click on **Add action group**.
-5. Fill out the required fields. Choose **Automation Runbook** when you select the **Action Type**. Select the runbook you want to trigger when the alert fires. Then, create the action group.
+### Scale based on a metric
- ![Create action group](./media/autoscale/create-actiongroup.png)
-6. Create a [**New alert rule**](./stream-analytics-set-up-alerts.md#set-up-alerts-in-the-azure-portal) in your job. Specify a condition based on a metric of your choice. [*Input Events*, *SU% Utilization* or *Backlogged Input Events*](./stream-analytics-monitoring.md#metrics-available-for-stream-analytics) are recommended metrics to use for defining autoscaling logic. It is also recommended to use 1 minute *Aggregation granularity* and *Frequency of evaluation* when triggering scale up operations. Doing so ensures your job has ample resources to cope with large spikes in input volume.
-7. Select the Action Group created in the last step, and create the alert.
-8. Repeat steps 2 through 4 for any additional scale operations you want to trigger based on condition of job metrics.
+The following procedure shows you how to add a condition that automatically increases streaming units (scales out) when SU (memory) usage is greater than 75%, and decreases streaming units (scales in) when SU usage is less than 25%. Increments are made from 1 to 3 to 6. Similarly, decrements are made from 6 to 3 to 1.
-It's a best practice to run scale tests before running your job in production. When you test your job against varying input loads, you get a sense of how many SUs your job needs for different input throughput. This can inform the conditions you define in your alert rules that trigger scale up and scale down operations.
+1. On the **Scale** page, select **Custom autoscale**.
+2. In the **Default** section of the page, specify a **name** for the default condition. Select the **pencil** symbol to edit the text.
+3. Select **Scale based on a metric** for **Scale mode**.
+4. Select **+ Add a rule**.
+ :::image type="content" source="./media/stream-analytics-autoscale/scale-metric-add-role.png" alt-text="Screenshot showing the add scale rule option." lightbox="./media/stream-analytics-autoscale/scale-metric-add-role.png" :::
+5. On the **Scale rule** page, follow these steps:
+ 1. Under **Metric Namespace**, select a metric from the **Metric name** drop-down list. In this example, it's **SU % Utilization**.
+ 2. Select an Operator and threshold values. In this example, they're **Greater than** and **75** for **Metric threshold to trigger scale action**.
+ 3. Select an **operation** in the **Action** section. In this example, it's set to **Increase**.
+ 4. Then, select **Add**.
+ :::image type="content" source="./media/stream-analytics-autoscale/rule-metric-operators-add.png" alt-text="Screenshot showing adding a rule metric options." lightbox="./media/stream-analytics-autoscale/rule-metric-operators-add.png" :::
+6. Select **+ Add a rule** again, and follow these steps on the **Scale rule** page:
+ 1. Select a metric from the **Metric name** drop-down list. In this example, it's **SU % Utilization**.
+ 2. Select an operator and threshold values. In this example, they're **Less than** and **25** for **Metric threshold to trigger scale action**.
+ 3. Select an **operation** in the **Action** section. In this example, it's set to **Decrease**.
+ 4. Then, select **Add**.
+7. The autoscale feature decreases the streaming units for the job if the overall SU usage goes below 25% in this example.
+8. Set the **minimum**, **maximum**, and **default** number of streaming units. The minimum and maximum streaming units represent the scaling limits for your job. The **default** value is used in the rare instance that scaling fails. We recommend that you set the **default** value to the number of SUs that the job is currently running with.
+9. Select **Save**.
+ :::image type="content" source="./media/stream-analytics-autoscale/save-scale-rule-streaming-units-limits.png" alt-text="Screenshot showing the Save option for a rule." lightbox="./media/stream-analytics-autoscale/save-scale-rule-streaming-units-limits.png" :::
+
+### Scale to specific number of streaming units
+
+Follow these steps to configure the rule to scale the job to a specific number of streaming units. Again, the default condition is applied when none of the other scale conditions match.
+
+1. On the **Scale** page, select **Custom autoscale**.
+2. In the **Default** section of the page, specify a **name** for the default condition.
+3. Select **Scale to specific streaming units** for **Scale mode**.
+4. For **Streaming units**, select the number of default streaming units.
+
+## Custom autoscale - Add more scale conditions
+
+The previous section shows you how to add a default condition for the autoscale setting. This section shows you how to add more conditions to the autoscale setting. For these other non-default conditions, you can set a schedule based on specific days of the week or a date range.
+
+### Scale based on a metric
+
+1. On the **Scale** page, select **Custom autoscale** for the **Choose how to scale your resource** option.
+2. Select **Add a scale condition** under the **Default** block.
+ :::image type="content" source="./media/stream-analytics-autoscale/save-custom-autoscale-add-condition.png" alt-text="Screenshot showing the custom autoscale scale condition." lightbox="./media/stream-analytics-autoscale/save-custom-autoscale-add-condition.png" :::
+3. Specify a **name** for the condition.
+4. Confirm that the **Scale based on a metric** option is selected.
+5. Select **+ Add a rule** to add a rule to increase streaming units when the overall SU % utilization goes above 75%. Follow steps from the preceding **Default condition** section.
+6. Set the **minimum**, **maximum**, and **default** number of streaming units.
+7. Set **Schedule**, **Timezone**, **Start date**, and **End date** on the custom condition (but not on the default condition). You can either specify start and end dates for the condition, or select **Repeat specific days** (Monday, Tuesday, and so on) of the week.
+ - If you select **Specify start/end dates**, select the **Timezone**, **Start date and time**, and **End date and time** for the condition to be in effect.
+ - If you select **Repeat specific days**, select the days of the week, timezone, start time, and end time when the condition should apply.
+
+### Scale to specific number of streaming units
+
+1. On the **Scale** page, select **Custom autoscale** for the **Choose how to scale your resource** option.
+2. Select **Add a scale condition** under the **Default** block.
+3. Specify a **name** for the condition.
+4. Select the **Scale to specific streaming units** option for **Scale mode**.
+5. Type in the number of **streaming units**.
+6. For the **Schedule**, either specify start and end dates for the condition, or select specific days of the week (Monday, Tuesday, and so on) and times.
+ 1. If you select **Specify start/end dates**, select the **Timezone**, **Start date and time**, and **End date and time** for the condition to be in effect.
+ 2. If you select **Repeat specific days**, select the days of the week, timezone, start time, and end time when the condition should apply.
+
+To learn more about how autoscale settings work, especially how it picks a profile or condition and evaluates multiple rules, see [Understand Autoscale settings](../azure-monitor/autoscale/autoscale-understanding-settings.md).
## Next steps
-* [Create parallelizable queries in Azure Stream Analytics](stream-analytics-parallelization.md)
-* [Scale Azure Stream Analytics jobs to increase throughput](stream-analytics-scale-jobs.md)
+
+- [Understand and adjust Streaming Units](stream-analytics-streaming-unit-consumption.md)
+- [Create parallelizable queries in Azure Stream Analytics](stream-analytics-parallelization.md)
+- [Scale Azure Stream Analytics jobs to increase throughput](stream-analytics-scale-jobs.md)
stream-analytics Stream Analytics Concepts Checkpoint Replay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-concepts-checkpoint-replay.md
Previously updated : 12/06/2018- Last updated : 05/09/2022+ # Checkpoint and replay concepts in Azure Stream Analytics jobs This article describes the internal checkpoint and replay concepts in Azure Stream Analytics, and the impact those have on job recovery. Each time a Stream Analytics job runs, state information is maintained internally. That state information is saved in a checkpoint periodically. In some scenarios, the checkpoint information is used for job recovery if a job failure or upgrade occurs. In other circumstances, the checkpoint cannot be used for recovery, and a replay is necessary.
If you ever observe significant processing delay because of node failure and OS
Current Stream Analytics does not show a report when this kind of recovery process is taking place. ## Job recovery from a service upgrade
-Microsoft occasionally upgrades the binaries that run the Stream Analytics jobs in the Azure service. At these times, usersΓÇÖ running jobs are upgraded to newer version and the job restarts automatically.
-Currently, the recovery checkpoint format is not preserved between upgrades. As a result, the state of the streaming query must be restored entirely using replay technique. In order to allow Stream Analytics jobs to replay the exact same input from before, itΓÇÖs important to set the retention policy for the source data to at least the window sizes in your query. Failing to do so may result in incorrect or partial results during service upgrade, since the source data may not be retained far enough back to include the full window size.
+Microsoft occasionally upgrades the binaries that run the Stream Analytics jobs in the Azure service. At these times, users' running jobs are upgraded to a newer version and the job restarts automatically.
+
+Azure Stream Analytics uses checkpoints where possible to restore data from the last checkpointed state. In scenarios where internal checkpoints can't be used, the state of the streaming query is restored entirely using a replay technique. In order to allow Stream Analytics jobs to replay the exact same input from before, it's important to set the retention policy for the source data to at least the window sizes in your query. Failing to do so may result in incorrect or partial results during service upgrade, since the source data may not be retained far enough back to include the full window size.
In general, the amount of replay needed is proportional to the size of the window multiplied by the average event rate. As an example, for a job with an input rate of 1000 events per second, a window size greater than one hour is considered to have a large replay size. Up to one hour of data may need to be re-processed to initialize the state so it can produce full and correct results, which may cause delayed output (no output) for some extended period. Queries with no windows or other temporal operators, like `JOIN` or `LAG`, would have zero replay. ## Estimate replay catch-up time To estimate the length of the delay due to a service upgrade, you can follow this technique:
-1. Load the input Event Hub with sufficient data to cover the largest window size in your query, at expected event rate. The eventsΓÇÖ timestamp should be close to the wall clock time throughout that period of time, as if itΓÇÖs a live input feed. For example, if you have a 3-day window in your query, send events to Event Hub for three days, and continue to send events.
+1. Load the input Event Hubs with sufficient data to cover the largest window size in your query, at expected event rate. The events' timestamp should be close to the wall clock time throughout that period of time, as if it's a live input feed. For example, if you have a 3-day window in your query, send events to Event Hubs for three days, and continue to send events.
2. Start the job using **Now** as the start time.
stream-analytics Stream Analytics Parallelization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-parallelization.md
Title: Use query parallelization and scale in Azure Stream Analytics description: This article describes how to scale Stream Analytics jobs by configuring input partitions, tuning the query definition, and setting job streaming units. + Previously updated : 05/04/2020 Last updated : 05/10/2022 # Leverage query parallelization in Azure Stream Analytics This article shows you how to take advantage of parallelization in Azure Stream Analytics. You learn how to scale Stream Analytics jobs by configuring input partitions and tuning the analytics query definition.
All Azure Stream Analytics streaming inputs can take advantage of partitioning:
### Outputs When you work with Stream Analytics, you can take advantage of partitioning in the outputs:-- Azure Data Lake Storage-- Azure Functions-- Azure Table-- Blob storage (can set the partition key explicitly)-- Cosmos DB (need to set the partition key explicitly)-- Event Hubs (need to set the partition key explicitly)-- IoT Hub (need to set the partition key explicitly)-- Service Bus
+- Azure Data Lake Storage
+- Azure Functions
+- Azure Table
+- Blob storage (can set the partition key explicitly)
+- Cosmos DB (need to set the partition key explicitly)
+- Event Hubs (need to set the partition key explicitly)
+- IoT Hub (need to set the partition key explicitly)
+- Service Bus
- SQL and Azure Synapse Analytics with optional partitioning: see more information on the [Output to Azure SQL Database page](./stream-analytics-sql-output-perf.md). Power BI doesn't support partitioning. However you can still partition the input as described in [this section](#multi-step-query-with-different-partition-by-values)
Only when all inputs, outputs and query steps are using the same key will the jo
## Embarrassingly parallel jobs An *embarrassingly parallel* job is the most scalable scenario in Azure Stream Analytics. It connects one partition of the input to one instance of the query to one partition of the output. This parallelism has the following requirements:
-1. If your query logic depends on the same key being processed by the same query instance, you must make sure that the events go to the same partition of your input. For outputs to Event Hubs or IoT Hub, the event data must have the **PartitionKey** value set. Alternatively, you can use partitioned senders. For blob storage, the events are sent to the same partition folder. An example would be a query instance that aggregates data per userID where input event hub is partitioned using userID as partition key. However, if your query logic does not require the same key to be processed by the same query instance, you can ignore this requirement. An example of this logic would be a simple select-project-filter query.
+1. If your query logic depends on the same key being processed by the same query instance, you must make sure that the events go to the same partition of your input. For Event Hubs or IoT Hub, this means that the event data must have the **PartitionKey** value set. Alternatively, you can use partitioned senders. For blob storage, this means that the events are sent to the same partition folder. An example would be a query instance that aggregates data per userID where input event hub is partitioned using userID as partition key. However, if your query logic doesn't require the same key to be processed by the same query instance, you can ignore this requirement. An example of this logic would be a simple select-project-filter query.
-2. The next step is to make your query partitioned. For jobs with compatibility level 1.2 or higher (recommended), custom column can be specified as Partition Key in the input settings and the job will be parallelized automatically. Jobs with compatibility level 1.0 or 1.1, requires you to use **PARTITION BY PartitionId** in all the steps of your query. Multiple steps are allowed, but they all must be partitioned by the same key.
+2. The next step is to make your query partitioned. For jobs with compatibility level 1.2 or higher (recommended), a custom column can be specified as the partition key in the input settings, and the job will be parallelized automatically. Jobs with compatibility level 1.0 or 1.1 require you to use **PARTITION BY PartitionId** in all the steps of your query. Multiple steps are allowed, but they all must be partitioned by the same key.
-3. Most of the outputs supported in Stream Analytics can take advantage of partitioning. If you use an output type that doesn't support partitioning your job won't be *embarrassingly parallel*. For Event Hubs outputs, ensure **Partition key column** is set to the same partition key used in the query. Refer to the [output section](#outputs) for more details.
+3. Most of the outputs supported in Stream Analytics can take advantage of partitioning. If you use an output type that doesn't support partitioning your job won't be *embarrassingly parallel*. For Event Hubs output, ensure **Partition key column** is set to the same partition key used in the query. Refer to the [output section](#outputs) for more details.
4. The number of input partitions must equal the number of output partitions. Blob storage output can support partitions and inherits the partitioning scheme of the upstream query. When a partition key for Blob storage is specified, data is partitioned per input partition thus the result is still fully parallel. Here are examples of partition values that allow a fully parallel job:
Query:
GROUP BY TumblingWindow(minute, 3), TollBoothId, PartitionId ```
-This query has a grouping key. Therefore, the events grouped together must be sent to the same event hub partition. Since in this example we group by TollBoothID, we should be sure that TollBoothID is used as the partition key when the events are sent to Event Hubs. Then in ASA, we can use **PARTITION BY PartitionId** to inherit from this partition scheme and enable full parallelization. Since the output is blob storage, we don't need to worry about configuring a partition key value, as per requirement #4.
+This query has a grouping key. Therefore, the events grouped together must be sent to the same Event Hubs partition. Since in this example we group by TollBoothID, we should be sure that TollBoothID is used as the partition key when the events are sent to Event Hubs. Then in ASA, we can use **PARTITION BY PartitionId** to inherit from this partition scheme and enable full parallelization. Since the output is blob storage, we don't need to worry about configuring a partition key value, as per requirement #4.
## Example of scenarios that are *not* embarrassingly parallel
If the input partition count doesn't match the output partition count, the topol
* Input: Event hub with eight partitions * Output: Power BI
-Power BI output doesn't currently support partitioning. Therefore, this scenario isn't parallel.
+Power BI output doesn't currently support partitioning. Therefore, this scenario isn't embarrassingly parallel.
### Multi-step query with different PARTITION BY values * Input: Event hub with eight partitions
Query:
GROUP BY TumblingWindow(minute, 3), TollBoothId ```
-As you can see, the second step uses **TollBoothId** as the partitioning key. This step isn't the same as the first step, and it therefore requires us to do a shuffle. This job isn't parallel.
+As you can see, the second step uses **TollBoothId** as the partitioning key. This step isn't the same as the first step, and it therefore requires us to do a shuffle.
### Multi-step query with different PARTITION BY values * Input: Event hub with eight partitions ("Partition key column" not set, default to "PartitionId")
Query:
GROUP BY TumblingWindow(minute, 3), TollBoothId ```
-Compatibility level 1.2 or above enables parallel query execution by default. But here the keys aren't aligned. If we knew the input event hub to be partitioned by "TollBoothId", we could set it up in the input config and get a parallel job. In any case, the PARTITION BY clause isn't required.
+Compatibility level 1.2 or above enables parallel query execution by default. For example, the query from the previous section will be partitioned as long as the "TollBoothId" column is set as the input partition key. The PARTITION BY PartitionId clause isn't required.
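+
+A minimal sketch of such a multi-step query, with placeholder input and output names and no PARTITION BY clause, might look like this:
+
+```SQL
+-- With compatibility level 1.2+, set 'TollBoothId' as the partition key
+-- on the input; the query itself needs no PARTITION BY clause.
+WITH Step1 AS (
+    SELECT COUNT(*) AS Count, TollBoothId
+    FROM Input1
+    GROUP BY TumblingWindow(minute, 3), TollBoothId
+)
+SELECT SUM(Count) AS Count, TollBoothId
+INTO Output1
+FROM Step1
+GROUP BY TumblingWindow(minute, 3), TollBoothId
+```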
## Calculate the maximum streaming units of a job The total number of streaming units that can be used by a Stream Analytics job depends on the number of steps in the query defined for the job and the number of partitions for each step.
The following observations use a Stream Analytics job with stateless (passthroug
| 5 K | 6 | 6 TU | | 10 K | 12 | 10 TU |
-The [Event Hubs](https://github.com/Azure-Samples/streaming-at-scale/tree/main/eventhubs-streamanalytics-eventhubs) service scales linearly in terms of streaming units (SU) and throughput, making it the most efficient and performant way to analyze and stream data out of Stream Analytics. Jobs can be scaled up to 192 SU, which roughly translates to processing up to 200 MB/s, or 19 trillion events per day.
+The [Event Hubs](https://github.com/Azure-Samples/streaming-at-scale/tree/main/eventhubs-streamanalytics-eventhubs) solution scales linearly in terms of streaming units (SU) and throughput, making it the most efficient and performant way to analyze and stream data out of Stream Analytics. Jobs can be scaled up to 396 SU, which roughly translates to processing up to 400 MB/s, or 38 trillion events per day.
#### Azure SQL |Ingestion Rate (events per second) | Streaming Units | Output Resources |
stream-analytics Streaming Technologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/streaming-technologies.md
description: Learn about how to choose the right real-time analytics and streami
+ Previously updated : 05/15/2019 Last updated : 05/10/2022 # Choose a real-time analytics and streaming processing technology on Azure
Azure Stream Analytics is the recommended service for stream analytics on Azure.
* [Event Sourcing pattern](/azure/architecture/patterns/event-sourcing) * [IoT Edge](stream-analytics-edge.md)
-Adding an Azure Stream Analytics job to your application is the fastest way to get streaming analytics up and running in Azure, using the SQL language you already know. Azure Stream Analytics is a job service, so you don't have to spend time managing clusters, and you don't have to worry about downtime with a 99.9% SLA at the job level. Billing is also done at the job level making startup costs low (one Streaming Unit), but scalable (up to 192 Streaming Units). It's much more cost effective to run a few Stream Analytics jobs than it is to run and maintain a cluster.
+Adding an Azure Stream Analytics job to your application is the fastest way to get streaming analytics up and running in Azure, using the SQL language you already know. Azure Stream Analytics is a job service, so you don't have to spend time managing clusters, and you don't have to worry about downtime with a 99.9% SLA at the job level. Billing is also done at the job level making startup costs low (one Streaming Unit), but scalable (up to 396 Streaming Units). It's much more cost effective to run a few Stream Analytics jobs than it is to run and maintain a cluster.
Azure Stream Analytics has a rich out-of-the-box experience. You can immediately take advantage of the following features without any additional setup:
synapse-analytics Continuous Integration Delivery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/cicd/continuous-integration-delivery.md
Continuous integration (CI) is the process of automating the build and testing of code every time a team member commits a change to version control. Continuous delivery (CD) is the process of building, testing, configuring, and deploying from multiple testing or staging environments to a production environment.
-In an Azure Synapse Analytics workspace, CI/CD moves all entities from one environment (development, test, production) to another environment. Promoting your workspace to another workspace is a two-part process. First, use an [Azure Resource Manager template (ARM template)](../../azure-resource-manager/templates/overview.md) to create or update workspace resources (pools and workspace). Then, migrate artifacts like SQL scripts and notebooks, Spark job definitions, pipelines, datasets, and data flows by using Azure Synapse CI/CD tools in Azure DevOps or on GitHub.
+In an Azure Synapse Analytics workspace, CI/CD moves all entities from one environment (development, test, production) to another environment. Promoting your workspace to another workspace is a two-part process. First, use an [Azure Resource Manager template (ARM template)](../../azure-resource-manager/templates/overview.md) to create or update workspace resources (pools and workspace). Then, migrate artifacts such as SQL scripts, notebooks, Spark job definitions, pipelines, and datasets by using the **Synapse workspace deployment** tool in Azure DevOps or on GitHub.
This article outlines how to use an Azure DevOps release pipeline and GitHub Actions to automate the deployment of an Azure Synapse workspace to multiple environments.
In this section, you'll learn how to deploy an Azure Synapse workspace in Azure
:::image type="content" source="media/release-creation-arm-template-branch.png" lightbox="media/release-creation-arm-template-branch.png" alt-text="Screenshot that shows setting the resource ARM template branch.":::
-1. For the artifacts **Default branch**, select the repository [publish branch](source-control.md#configure-publishing-settings). By default, the publish branch is `workspace_publish`. For the **Default version**, select **Latest from default branch**.
+1. For the artifacts **Default branch**, select the repository [publish branch](source-control.md#configure-publishing-settings) or another non-publish branch that includes Synapse artifacts. By default, the publish branch is `workspace_publish`. For the **Default version**, select **Latest from default branch**.
:::image type="content" source="media/release-creation-publish-branch.png" alt-text="Screenshot that shows setting the artifacts branch.":::
If you have an ARM template that deploys a resource, such as an Azure Synapse wo
### Set up a stage task for Azure Synapse artifacts deployment
-Use the [Synapse workspace deployment](https://marketplace.visualstudio.com/items?itemName=AzureSynapseWorkspace.synapsecicd-deploy) extension to deploy other items in your Azure Synapse workspace. Items that you can deploy include datasets, SQL scripts and notebooks, a Spark job definition, a data flow, a pipeline, a linked service, credentials, and an integration runtime.
+Use the [Synapse workspace deployment](https://marketplace.visualstudio.com/items?itemName=AzureSynapseWorkspace.synapsecicd-deploy) extension to deploy other items in your Azure Synapse workspace. Items that you can deploy include datasets, SQL scripts, notebooks, Spark job definitions, integration runtimes, data flows, credentials, and other artifacts in the workspace.
+
+#### Install and add deployment extension
1. Search for and get the extension from [Visual Studio Marketplace](https://marketplace.visualstudio.com/azuredevops).
Use the [Synapse workspace deployment](https://marketplace.visualstudio.com/item
:::image type="content" source="media/add-extension-task.png" alt-text="Screenshot that shows searching for Synapse workspace deployment to create a task.":::
+#### Configure the deployment task
+
+The deployment task supports three types of operations: **Validate**, **Deploy**, and **Validate and deploy**.
+
+**Validate** validates the Synapse artifacts in a non-publish branch and generates the workspace template and parameter template files. The validation operation works only in a YAML pipeline. A sample YAML file is shown below:
+
+ ```yaml
+ pool:
+ vmImage: ubuntu-latest
+
+ resources:
+ repositories:
+ - repository: <repository name>
+ type: git
+ name: <name>
+ ref: <user/collaboration branch>
+
+ steps:
+ - checkout: <name>
+ - task: Synapse workspace deployment@2
+ continueOnError: true
+ inputs:
+ operation: 'validate'
+ ArtifactsFolder: '$(System.DefaultWorkingDirectory)/ArtifactFolder'
+ TargetWorkspaceName: '<target workspace name>'
+```
+
+**Deploy** takes the Synapse workspace template and parameter template as inputs. These files can be created either after publishing in the workspace publish branch or after the validation operation. This behavior is the same as in version 1.x of the task.
+
+**Validate and deploy** can be used to deploy the workspace directly from a non-publish branch by using the artifact root folder.
+
+You can choose the operation type based on your use case. The following steps show an example of the deploy operation.
+
+1. In the task, select the operation type as **Deploy**.
+
+ :::image type="content" source="media/operation-deploy.png" lightbox="media/operation-deploy.png" alt-text="Screenshot that shows the selection of operation deploy.":::
+1. In the task, next to **Template**, select **…** to choose the template file.
+
+1. Next to **Template parameters**, select **…** to choose the parameters file.
In Azure Synapse, unlike in Data Factory, some artifacts aren't Resource Manager
### Unexpected token error in release

If your parameter file has parameter values that aren't escaped, the release pipeline fails to parse the file and generates an `unexpected token` error. We suggest that you override parameters or use Key Vault to retrieve parameter values. You also can use double escape characters to resolve the issue.
+
synapse-analytics Quickstart Integrate Azure Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/quickstart-integrate-azure-machine-learning.md
-# Quickstart: Create a new Azure Machine Learning linked service in Synapse
-**[!IMPORTANT] The Azure ML integration is not currently supported in Synapse Workspaces with Data Exfiltration Protection.** If you are **not** using data exfiltration protection and want to connect to Azure ML using private endpoints, you can set up a managed AzureML private endpoint in your Synapse workspace. [Read more about managed private endpoints](../security/how-to-create-managed-private-endpoints.md)
+# Quickstart: Create a new Azure Machine Learning linked service in Synapse
+> [!IMPORTANT]
+> **The Azure ML integration is not currently supported in Synapse Workspaces with Data Exfiltration Protection.** If you are **not** using data exfiltration protection and want to connect to Azure ML using private endpoints, you can set up a managed AzureML private endpoint in your Synapse workspace. [Read more about managed private endpoints](../security/how-to-create-managed-private-endpoints.md)
In this quickstart, you'll link an Azure Synapse Analytics workspace to an Azure Machine Learning workspace. Linking these workspaces allows you to leverage Azure Machine Learning from various experiences in Synapse.
For example, this linking to an Azure Machine Learning workspace enables these e
- Enrich your data with predictions by bringing a machine learning model from the Azure Machine Learning model registry and score the model in Synapse SQL pools. For more details, see [Tutorial: Machine learning model scoring wizard for Synapse SQL pools](tutorial-sql-pool-model-scoring-wizard.md).

## Two types of authentication

There are two types of identities you can use when creating an Azure Machine Learning linked service in Azure Synapse.

* Synapse workspace Managed Identity
* Service Principal
This section will guide you on how to create an Azure Machine Learning linked se
![Create linked service](media/quickstart-integrate-azure-machine-learning/quickstart-integrate-azure-machine-learning-create-linked-service-00a.png)
-2. Fill out the form:
+1. Fill out the form:
- Provide the details about the Azure Machine Learning workspace you want to link to. This includes details about subscription and workspace name.
-
+ - Select Authentication Method: **Managed Identity**
-
-3. Click **Test Connection** to verify if the configuration is correct. If the connection test passes, click **Save**.
+
+1. Click **Test Connection** to verify if the configuration is correct. If the connection test passes, click **Save**.
If the connection test fails, make sure that the Azure Synapse workspace MSI has permissions to access this Azure Machine Learning workspace, and try again.
This step will create a new Service Principal. If you want to use an existing Se
![Create linked service](media/quickstart-integrate-azure-machine-learning/quickstart-integrate-azure-machine-learning-create-linked-service-00a.png)
-2. Fill out the form:
+1. Fill out the form:
- Provide the details about the Azure Machine Learning workspace you want to link to. This includes details about subscription and workspace name.
This step will create a new Service Principal. If you want to use an existing Se
- Service principal key: The secret you generated in the previous section.
-3. Click **Test Connection** to verify if the configuration is correct. If the connection test passes, click **Save**.
+1. Click **Test Connection** to verify if the configuration is correct. If the connection test passes, click **Save**.
If the connection test fails, make sure that the service principal ID and secret are correct and try again.
synapse-analytics 1 Design Performance Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/netezza/1-design-performance-migration.md
+
+ Title: "Design and performance for Netezza migrations"
+description: Learn how Netezza and Azure Synapse SQL databases differ in their approach to high query performance on exceptionally large data volumes.
+Last updated : 05/24/2022
+# Design and performance for Netezza migrations
+
+This article is part one of a seven-part series that provides guidance on how to migrate from Netezza to Azure Synapse Analytics. This article provides best practices for design and performance.
+
+## Overview
+
+> [!TIP]
+> More than just a database&mdash;the Azure environment includes a comprehensive set of capabilities and tools.
+
+Due to end of support from IBM, many existing users of Netezza data warehouse systems want to take advantage of the innovations provided by newer environments such as cloud, IaaS, and PaaS, and to delegate tasks like infrastructure maintenance and platform development to the cloud provider.
+
+Although Netezza and Azure Synapse are both SQL databases designed to use massively parallel processing (MPP) techniques to achieve high query performance on exceptionally large data volumes, there are some basic differences in approach:
+
+- Legacy Netezza systems are often installed on-premises and use proprietary hardware, while Azure Synapse is cloud based and uses Azure storage and compute resources.
+
+- Upgrading a Netezza configuration is a major task involving additional physical hardware and potentially lengthy database reconfiguration, or dump and reload. Since storage and compute resources are separate in the Azure environment, these resources can be scaled upwards or downwards independently, leveraging the elastic scaling capability.
+
+- Azure Synapse can be paused or resized as required to reduce resource utilization and cost.
+
+Microsoft Azure is a globally available, highly secure, scalable cloud environment that includes Azure Synapse and an ecosystem of supporting tools and capabilities. The next diagram summarizes the Azure Synapse ecosystem.
++
+> [!TIP]
+> Azure Synapse gives best-of-breed performance and price-performance in independent benchmarks.
+
+Azure Synapse provides best-of-breed relational database performance by using techniques such as massively parallel processing (MPP) and multiple levels of automated caching for frequently used data. See the results of this approach in independent benchmarks such as the one run recently by [GigaOm](https://research.gigaom.com/report/data-warehouse-cloud-benchmark/), which compares Azure Synapse to other popular cloud data warehouse offerings. Customers who have migrated to this environment have seen many benefits including:
+
+- Improved performance and price/performance.
+
+- Increased agility and shorter time to value.
+
+- Faster server deployment and application development.
+
+- Elastic scalability&mdash;only pay for actual usage.
+
+- Improved security/compliance.
+
+- Reduced storage and disaster recovery costs.
+
+- Lower overall TCO and better cost control (OPEX).
+
+To maximize these benefits, migrate new or existing data and applications to the Azure Synapse platform. In many organizations, this will include migrating an existing data warehouse from legacy on-premises platforms such as Netezza. At a high level, the basic process includes these steps:
++
+This article looks at schema migration with a goal of equivalent or better performance of your migrated Netezza data warehouse and data marts on Azure Synapse. This guidance applies specifically to migrations from an existing Netezza environment.
+
+## Design considerations
+
+### Migration scope
+
+> [!TIP]
+> Create an inventory of objects to be migrated and document the migration process.
+
+#### Preparation for migration
+
+When migrating from a Netezza environment, there are some specific topics to consider in addition to the more general subjects described in this article.
+
+#### Choosing the workload for the initial migration
+
+Legacy Netezza environments have typically evolved over time to encompass multiple subject areas and mixed workloads. When deciding where to start on an initial migration project, choose an area that can:
+
+- Prove the viability of migrating to Azure Synapse by quickly delivering the benefits of the new environment.
+
+- Allow the in-house technical staff to gain relevant experience of the processes and tools involved, which can be used in migrations to other areas.
+
+- Create a template for further migrations specific to the source Netezza environment and the current tools and processes that are already in place.
+
+A good candidate for an initial migration from the Netezza environment that would enable the items above is typically one that implements a BI/Analytics workload (rather than an OLTP workload) with a data model that can be migrated with minimal modifications&mdash;normally a star or snowflake schema.
+
+The migration data volume for the initial exercise should be large enough to demonstrate the capabilities and benefits of the Azure Synapse environment while quickly demonstrating the value&mdash;typically in the 1-10TB range.
+
+To minimize the risk and reduce implementation time for the initial migration project, confine the scope of the migration to just the data marts. However, this won't address the broader topics such as ETL migration and historical data migration as part of the initial migration project. Address these topics in later phases of the project, once the migrated data mart layer is backfilled with the data and processes required to build them.
+
+#### Lift and shift as-is versus a phased approach incorporating changes
+
+> [!TIP]
+> 'Lift and shift' is a good starting point, even if subsequent phases will implement changes to the data model.
+
+Whatever the drive and scope of the intended migration, there are&mdash;broadly speaking&mdash;two types of migration:
+
+##### Lift and shift
+
+In this case, the existing data model&mdash;such as a star schema&mdash;is migrated unchanged to the new Azure Synapse platform. The emphasis is on minimizing risk and the migration time required by reducing the work needed to realize the benefits of moving to the Azure cloud environment.
+
+This is a good fit for existing Netezza environments where a single data mart is being migrated, or where the data is already in a well-designed star or snowflake schema&mdash;or there are other pressures to move to a more modern cloud environment.
+
+##### Phased approach incorporating modifications
+
+In cases where a legacy warehouse has evolved over a long time, you might need to re-engineer to maintain the required performance levels or to support new data, such as Internet of Things (IoT) streams. Migrate to Azure Synapse to get the benefits of a scalable cloud environment as part of the re-engineering process. Migration could include a change in the underlying data model, such as a move from an Inmon model to a data vault.
+
+Microsoft recommends moving the existing data model as-is to Azure and using the performance and flexibility of the Azure environment to apply the re-engineering changes, leveraging Azure's capabilities to make the changes without impacting the existing source system.
+
+#### Use Azure Data Factory to implement a metadata-driven migration
+
+Automate and orchestrate the migration process by using the capabilities of the Azure environment. This approach minimizes the impact on the existing Netezza environment, which may already be running close to full capacity.
+
+Azure Data Factory is a cloud-based data integration service that allows creation of data-driven workflows in the cloud for orchestrating and automating data movement and data transformation. Using Data Factory, you can create and schedule data-driven workflows&mdash;called pipelines&mdash;to ingest data from disparate data stores. Data Factory can process and transform data by using compute services such as Azure HDInsight Hadoop, Spark, Azure Data Lake Analytics, and Azure Machine Learning.
+
+By creating metadata to list the data tables to be migrated and their location, you can use the Data Factory facilities to manage the migration process.
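+
+As an illustration of this metadata-driven approach, a minimal sketch of a control table follows; the table and column names here are hypothetical, not part of any product:
+
+```sql
+-- Hypothetical migration control table; a Data Factory Lookup activity can read it
+-- and a ForEach activity can drive the copy of each listed table.
+CREATE TABLE dbo.MigrationControl
+(
+    SourceDatabase  VARCHAR(128) NOT NULL,            -- Netezza database name
+    SourceTable     VARCHAR(128) NOT NULL,            -- Netezza table name
+    TargetSchema    VARCHAR(128) NOT NULL,            -- Azure Synapse schema
+    TargetTable     VARCHAR(128) NOT NULL,            -- Azure Synapse table
+    ExtractPath     VARCHAR(1024),                    -- Blob Storage folder for extracted files
+    MigrationStatus VARCHAR(20) DEFAULT 'NotStarted'  -- NotStarted, InProgress, or Done
+);
+```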
+
+### Design differences between Netezza and Azure Synapse
+
+#### Multiple databases versus a single database and schemas
+
+> [!TIP]
+> Combine multiple databases into a single database in Azure Synapse and use schemas to logically separate the tables.
+
+In a Netezza environment, there are often multiple separate databases for individual parts of the overall environment. For example, there may be a separate database for data ingestion and staging tables, a database for the core warehouse tables, and another database for data marts, sometimes called a semantic layer. The ETL/ELT pipelines that process these databases may implement cross-database joins and move data between them.
+
+> [!TIP]
+> Replace Netezza-specific features with Azure Synapse features.
+
+Querying within the Azure Synapse environment is limited to a single database. Schemas are used to separate the tables into logically separate groups. Therefore, we recommend using a series of schemas within the target Azure Synapse to mimic any separate databases migrated from the Netezza environment. If the Netezza environment already uses schemas, you may need to use a new naming convention to move the existing Netezza tables and views to the new environment&mdash;for example, concatenate the existing Netezza schema and table names into the new Azure Synapse table name and use schema names in the new environment to maintain the original separate database names. Consolidated schema names can contain dots; however, Azure Synapse Spark may have issues with dots in schema or table names. You can use SQL views over the underlying tables to maintain the logical structures, but there are some potential downsides to this approach:
+
+- Views in Azure Synapse are read-only, so any updates to the data must take place on the underlying base tables.
+
+- There may already be one or more layers of views in existence, and adding an extra layer of views might impact performance and supportability as nested views are difficult to troubleshoot.
+
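+As a minimal sketch of the schema-per-source-database approach described above (all database, schema, and table names here are hypothetical), the consolidation might look like this:
+
+```sql
+-- Each former Netezza database becomes a schema in the single Azure Synapse database.
+CREATE SCHEMA stagedb;
+GO
+CREATE SCHEMA martdb;
+GO
+
+-- A staging table that previously lived in the Netezza STAGEDB database.
+CREATE TABLE stagedb.customer_landing
+(
+    customer_id   BIGINT NOT NULL,
+    customer_name VARCHAR(100)
+)
+WITH (DISTRIBUTION = ROUND_ROBIN, HEAP);
+```
+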
+#### Table considerations
+
+> [!TIP]
+> Use existing indexes to indicate candidates for indexing in the migrated warehouse.
+
+When migrating tables between different technologies, only the raw data and the metadata that describes it gets physically moved between the two environments. Other database elements from the source system&mdash;such as indexes&mdash;aren't migrated as these may not be needed or may be implemented differently within the new target environment.
+
+However, it's important to understand where performance optimizations such as indexes have been used in the source environment, as this can indicate where to add performance optimization in the new target environment. For example, if queries in the source Netezza environment frequently use zone maps, it may indicate that a non-clustered index should be created within the migrated Azure Synapse. Other native performance optimization techniques (such as table replication) may be more applicable than a straight 'like for like' index creation.
+
+#### Unsupported Netezza database object types
+
+> [!TIP]
+> Assess the impact of unsupported data types as part of the preparation phase.
+
+Netezza implements some database objects that aren't directly supported in Azure Synapse, but there are methods to achieve the same functionality within the new environment:
+
+- Zone Maps&mdash;In Netezza, zone maps are automatically created and maintained for some column types and are used at query time to restrict the amount of data to be scanned. Zone Maps are created on the following column types:
+ - `INTEGER` columns of length 8 bytes or less.
+ - Temporal columns. For instance, `DATE`, `TIME`, and `TIMESTAMP`.
+ - `CHAR` columns, if these are part of a materialized view and mentioned in the `ORDER BY` clause.
+
+ You can find out which columns have zone maps by using the `nz_zonemap` utility, which is part of the NZ Toolkit. Azure Synapse doesn't include zone maps, but you can achieve similar results by using other user-defined index types and/or partitioning.
+
+- Clustered Base tables (CBT)&mdash;In Netezza, CBTs are commonly used for fact tables, which can have billions of records. Scanning such a huge table requires a lot of processing time, since a full table scan might be needed to get relevant records. Organizing records via CBTs allows Netezza to group records in the same or nearby extents. This process also creates zone maps that improve performance by reducing the amount of data to be scanned.
+
+ In Azure Synapse, you can achieve a similar effect by use of partitioning and/or use of other indexes.
+
+- Materialized views&mdash;Netezza supports materialized views and recommends creating one or more of these over large tables having many columns where only a few of those columns are regularly used in queries. The system automatically maintains materialized views when data in the base table is updated.
+
+ Azure Synapse supports materialized views, with the same functionality as Netezza. A sketch of an equivalent Azure Synapse definition appears after this list.
+
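+As noted in the materialized views item above, a minimal sketch of an equivalent Azure Synapse definition (the table and column names are hypothetical) could be:
+
+```sql
+-- Azure Synapse maintains this aggregate automatically as dbo.FactSales changes.
+CREATE MATERIALIZED VIEW dbo.mvSalesByCustomer
+WITH (DISTRIBUTION = HASH(CustomerKey))
+AS
+SELECT
+    CustomerKey,
+    COUNT_BIG(*)                AS SalesCount,
+    SUM(ISNULL(SalesAmount, 0)) AS TotalSalesAmount
+FROM dbo.FactSales
+GROUP BY CustomerKey;
+```
+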
+#### Netezza data type mapping
+
+Most Netezza data types have a direct equivalent in Azure Synapse. This table shows these data types together with the recommended approach for handling them.
+
+| Netezza Data Type | Azure Synapse Data Type |
+|--|-|
+| BIGINT | BIGINT |
+| BINARY VARYING(n) | VARBINARY(n) |
+| BOOLEAN | BIT |
+| BYTEINT | TINYINT |
+| CHARACTER VARYING(n) | VARCHAR(n) |
+| CHARACTER(n) | CHAR(n) |
+| DATE | DATE |
+| DECIMAL(p,s) | DECIMAL(p,s) |
+| DOUBLE PRECISION | FLOAT |
+| FLOAT(n) | FLOAT(n) |
+| INTEGER | INT |
+| INTERVAL | INTERVAL data types aren't currently directly supported in Azure Synapse but can be calculated using temporal functions such as DATEDIFF |
+| MONEY | MONEY |
+| NATIONAL CHARACTER VARYING(n) | NVARCHAR(n) |
+| NATIONAL CHARACTER(n) | NCHAR(n) |
+| NUMERIC(p,s) | NUMERIC(p,s) |
+| REAL | REAL |
+| SMALLINT | SMALLINT |
+| ST_GEOMETRY(n) | Spatial data types such as ST_GEOMETRY aren't currently supported in Azure Synapse, but the data could be stored as VARCHAR or VARBINARY |
+| TIME | TIME |
+| TIME WITH TIME ZONE | DATETIMEOFFSET |
+| TIMESTAMP | DATETIME |
+
+> [!TIP]
+> Assess the number and type of non-data objects to be migrated as part of the preparation phase.
+
+There are third-party vendors who offer tools and services to automate migration, including the mapping of data types. If a third-party ETL tool such as Informatica or Talend is already in use in the Netezza environment, those tools can implement any required data transformations.
+
+#### SQL DML syntax differences
+
+There are a few differences in SQL Data Manipulation Language (DML) syntax between Netezza SQL and Azure Synapse (T-SQL) that you should be aware of during migration:
+
+- `STRPOS`: In Netezza, the `STRPOS` function returns the position of a substring within a string. The equivalent function in Azure Synapse is `CHARINDEX`, with the order of the arguments reversed. For example, `SELECT STRPOS('abcdef','def')...` in Netezza is equivalent to `SELECT CHARINDEX('def','abcdef')...` in Azure Synapse.
+
+- `AGE`: Netezza supports the `AGE` operator to give the interval between two temporal values, such as timestamps or dates. For example, `SELECT AGE('23-03-1956','01-01-2019') FROM...`. In Azure Synapse, `DATEDIFF` gives the interval. For example, `SELECT DATEDIFF(day, '1956-03-26','2019-01-01') FROM...`. Note the date representation sequence.
+
+- `NOW()`: Netezza uses `NOW()` to return the current timestamp; the equivalent in Azure Synapse is `CURRENT_TIMESTAMP`.
+
+#### Functions, stored procedures, and sequences
+
+> [!TIP]
+> Assess the number and type of non-data objects to be migrated as part of the preparation phase.
+
+When migrating from a mature legacy data warehouse environment such as Netezza, you must often migrate elements other than simple tables and views to the new target environment. Examples include functions, stored procedures, and sequences.
+
+As part of the preparation phase, create an inventory of these objects to be migrated, and define the method of handling them. Assign an appropriate allocation of resources in the project plan.
+
+There may be facilities in the Azure environment that replace the functionality implemented as functions or stored procedures in the Netezza environment. In this case, it's more efficient to use the built-in Azure facilities rather than recoding the Netezza functions.
+
+[Data integration partners](/azure/synapse-analytics/partner/data-integration) offer tools and services that can automate the migration.
+
+##### Functions
+
+As with most database products, Netezza supports system functions and user-defined functions within an SQL implementation. When migrating to another database platform such as Azure Synapse, common system functions are available and can be migrated without change. Some system functions may have slightly different syntax, but the required changes can be automated if so.
+
+For system functions where there's no equivalent, or for arbitrary user-defined functions, recode these using the language(s) available in the target environment. Netezza user-defined functions are coded in nzlua or C++ languages while Azure Synapse uses the popular Transact-SQL language to implement user-defined functions.
+
+##### Stored procedures
+
+Most modern database products allow for procedures to be stored within the database. Netezza provides the NZPLSQL language for this purpose. NZPLSQL is based on Postgres PL/pgSQL.
+
+A stored procedure typically contains SQL statements and some procedural logic, and may return data or a status.
+
+Azure Synapse Analytics also supports stored procedures using T-SQL. If you must migrate stored procedures, recode these procedures for their new environment.
+
+##### Sequences
+
+In Netezza, a sequence is a named database object created via `CREATE SEQUENCE` that can provide the unique value via the `NEXT VALUE FOR` method. Use these to generate unique numbers for use as surrogate key values for primary key values.
+
+Within Azure Synapse, there's no `CREATE SEQUENCE`. Sequences are handled via use of [IDENTITY](/sql/t-sql/statements/create-table-transact-sql-identity-property?msclkid=8ab663accfd311ec87a587f5923eaa7b) columns or using SQL code to create the next sequence number in a series.
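+
+For example, a minimal sketch (using a hypothetical dimension table) of replacing a Netezza sequence-based surrogate key with an `IDENTITY` column:
+
+```sql
+-- IDENTITY supplies the surrogate key values that a Netezza sequence previously generated.
+-- Note that an IDENTITY column can't also be the distribution column.
+CREATE TABLE dbo.DimCustomer
+(
+    CustomerKey  INT IDENTITY(1,1) NOT NULL,
+    CustomerCode VARCHAR(20)       NOT NULL,
+    CustomerName VARCHAR(100)
+)
+WITH (DISTRIBUTION = HASH(CustomerCode), CLUSTERED COLUMNSTORE INDEX);
+```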
+
+### Extracting metadata and data from a Netezza environment
+
+#### Data Definition Language (DDL) generation
+
+> [!TIP]
+> Use Netezza external tables for the most efficient data extraction.
+
+You can edit existing Netezza CREATE TABLE and CREATE VIEW scripts to create the equivalent definitions with modified data types, if necessary, as described in the previous section. Typically, this involves removing or modifying any extra Netezza-specific clauses such as `ORGANIZE ON`.
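+
+As an illustration, a hypothetical Netezza definition and a hand-edited Azure Synapse equivalent might look like this (the table and columns are invented for the example):
+
+```sql
+-- Original Netezza DDL:
+-- CREATE TABLE SALES.FACT_SALES
+-- (
+--     SALE_ID   BIGINT,
+--     SALE_DATE DATE,
+--     AMOUNT    NUMERIC(18,2)
+-- )
+-- DISTRIBUTE ON (SALE_ID)
+-- ORGANIZE ON (SALE_DATE);
+
+-- Edited Azure Synapse equivalent (assuming a 'sales' schema exists): DISTRIBUTE ON
+-- becomes DISTRIBUTION = HASH, and ORGANIZE ON is removed (partitioning or indexing
+-- can be applied instead).
+CREATE TABLE sales.FACT_SALES
+(
+    SALE_ID   BIGINT,
+    SALE_DATE DATE,
+    AMOUNT    NUMERIC(18,2)
+)
+WITH (DISTRIBUTION = HASH(SALE_ID), CLUSTERED COLUMNSTORE INDEX);
+```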
+
+However, all the information that specifies the current definitions of tables and views within the existing Netezza environment is maintained within system catalog tables. These tables are the best source of this information, as it's guaranteed to be up to date and complete. User-maintained documentation may not be in sync with the current table definitions.
+
+Access the information in these tables via utilities such as `nz_ddl_table` and generate the `CREATE TABLE` DDL statements for the equivalent tables in Azure Synapse.
+
+Third-party migration and ETL tools also use the catalog information to achieve the same result.
+
+#### Data extraction from Netezza
+
+Migrate the raw data from existing Netezza tables into flat delimited files using standard Netezza utilities, such as nzsql, nzunload, and via external tables. Compress these files using gzip and upload them to Azure Blob Storage via AzCopy or by using Azure data transport facilities such as Azure Data Box.
+
+During a migration exercise, extract the data as efficiently as possible. Use the external tables approach as this is the fastest method. Perform multiple extracts in parallel to maximize the throughput for data extraction.
+
+This is a simple example of an external table extract:
+
+```sql
+CREATE EXTERNAL TABLE '/tmp/export_tab1.csv' USING (DELIM ',') AS SELECT * from <TABLENAME>;
+```
+
+If sufficient network bandwidth is available, extract data directly from an on-premises Netezza system into Azure Synapse tables or Azure Blob Data Storage by using Azure Data Factory processes or third-party data migration or ETL products.
+
+Recommended data formats for the extracted data include delimited text files (also called Comma Separated Values or CSV), Optimized Row Columnar (ORC), or Parquet files.
+
+For more information about the process of migrating data and ETL from a Netezza environment, see [Data migration, ETL, and load for Netezza migration](2-etl-load-migration-considerations.md).
+
+## Performance recommendations for Netezza migrations
+
+This article provides general information and guidelines about use of performance optimization techniques for Azure Synapse and adds specific recommendations for use when migrating from a Netezza environment.
+
+### Similarities in performance tuning approach concepts
+
+> [!TIP]
+> Many Netezza tuning concepts hold true for Azure Synapse.
+
+When moving from a Netezza environment, many of the performance tuning concepts for Azure Synapse will be remarkably familiar. For example:
+
+- Using data distribution to co-locate data to be joined onto the same processing node
+
+- Using the smallest data type for a given column will save storage space and accelerate query processing
+
+- Ensuring data types of columns to be joined are identical will optimize join processing by reducing the need to transform data for matching
+
+- Ensuring statistics are up to date will help the optimizer produce the best execution plan
+
+### Differences in performance tuning approach
+
+> [!TIP]
+> Prioritize early familiarity with Azure Synapse tuning options in a migration exercise.
+
+This section highlights lower-level implementation differences between Netezza and Azure Synapse for performance tuning.
+
+#### Data distribution options
+
+`CREATE TABLE` statements in both Netezza and Azure Synapse allow for specification of a distribution definition&mdash;via `DISTRIBUTE ON` in Netezza, and `DISTRIBUTION =` in Azure Synapse.
+
+Compared to Netezza, Azure Synapse provides an additional way to achieve local joins for small table-large table joins (typically a dimension table to a fact table in a star schema model): replicate the smaller dimension table across all nodes. This ensures that any value of the join key of the larger table will have a matching dimension row locally available. The overhead of replicating the dimension tables is relatively low, provided the tables aren't very large (see [Design guidance for replicated tables](/azure/synapse-analytics/sql-data-warehouse/design-guidance-for-replicated-tables))&mdash;in which case, the hash distribution approach as described previously is more appropriate. For more information, see [Distributed tables design](/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-distribute).
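+
+For example, a minimal sketch (hypothetical star schema tables) of a hash-distributed fact table joined locally to a replicated dimension table:
+
+```sql
+-- Large fact table: hash-distributed on the join key.
+CREATE TABLE dbo.FactSales
+(
+    SaleId     BIGINT NOT NULL,
+    ProductKey INT    NOT NULL,
+    SaleDate   DATE   NOT NULL,
+    Amount     DECIMAL(18,2)
+)
+WITH (DISTRIBUTION = HASH(ProductKey), CLUSTERED COLUMNSTORE INDEX);
+
+-- Small dimension table: replicated to every compute node so joins stay local.
+CREATE TABLE dbo.DimProduct
+(
+    ProductKey  INT NOT NULL,
+    ProductName VARCHAR(100)
+)
+WITH (DISTRIBUTION = REPLICATE, CLUSTERED COLUMNSTORE INDEX);
+```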
+
+#### Data indexing
+
+Azure Synapse provides several user-definable indexing options, but these are different from the system managed zone maps in Netezza. To understand the different indexing options, see [table indexes](/azure/sql-data-warehouse/sql-data-warehouse-tables-index).
+
+The existing system managed zone maps within the source Netezza environment can indicate how the data is currently used. They can identify candidate columns for indexing within the Azure Synapse environment.
+
+#### Data partitioning
+
+In an enterprise data warehouse, fact tables can contain many billions of rows. Partitioning optimizes the maintenance and querying of these tables by splitting them into separate parts to reduce the amount of data processed. The `CREATE TABLE` statement defines the partitioning specification for a table.
+
+Only one field per table can be used for partitioning. This is frequently a date field since many queries are filtered by date or a date range. It's possible to change the partitioning of a table after initial load by recreating the table with the new distribution using the `CREATE TABLE AS` (or CTAS) statement. See [table partitions](/azure/sql-data-warehouse/sql-data-warehouse-tables-partition) for a detailed discussion of partitioning in Azure Synapse.
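+
+For example, a sketch (hypothetical table and boundary values) of recreating a fact table with date partitioning via CTAS and then swapping it in:
+
+```sql
+CREATE TABLE dbo.FactSales_Partitioned
+WITH
+(
+    DISTRIBUTION = HASH(ProductKey),
+    CLUSTERED COLUMNSTORE INDEX,
+    PARTITION (SaleDate RANGE RIGHT FOR VALUES ('2021-01-01', '2022-01-01'))
+)
+AS
+SELECT * FROM dbo.FactSales;
+
+-- Swap the partitioned copy in place of the original table.
+RENAME OBJECT dbo.FactSales TO FactSales_Old;
+RENAME OBJECT dbo.FactSales_Partitioned TO FactSales;
+```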
+
+#### Data table statistics
+
+Ensure that statistics on data tables are up to date by building in a [statistics](/azure/synapse-analytics/sql/develop-tables-statistics) step to ETL/ELT jobs.
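+
+For example, a minimal sketch (hypothetical table and column) of the statements such a step might run after each load:
+
+```sql
+-- Create a statistics object on a commonly filtered column, then keep it current.
+CREATE STATISTICS stat_FactSales_SaleDate ON dbo.FactSales (SaleDate);
+UPDATE STATISTICS dbo.FactSales;
+```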
+
+#### PolyBase for data loading
+
+PolyBase is the most efficient method for loading large amounts of data into the warehouse since it can leverage parallel loading streams. For more information, see [PolyBase data loading strategy](/azure/synapse-analytics/sql/load-data-overview).
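+
+A hedged sketch of a PolyBase load follows; the storage account, file layout, and table names are assumptions, and a database-scoped credential is typically also required for non-public storage:
+
+```sql
+-- External data source pointing at the extracted, gzip-compressed CSV files.
+CREATE EXTERNAL DATA SOURCE NetezzaExtracts
+WITH (TYPE = HADOOP, LOCATION = 'wasbs://extracts@mystorageaccount.blob.core.windows.net');
+
+CREATE EXTERNAL FILE FORMAT GzipCsvFormat
+WITH (
+    FORMAT_TYPE = DELIMITEDTEXT,
+    FORMAT_OPTIONS (FIELD_TERMINATOR = ','),
+    DATA_COMPRESSION = 'org.apache.hadoop.io.compress.GzipCodec'
+);
+
+CREATE EXTERNAL TABLE dbo.FactSales_Ext
+(
+    SaleId     BIGINT,
+    ProductKey INT,
+    SaleDate   DATE,
+    Amount     DECIMAL(18,2)
+)
+WITH (LOCATION = '/factsales/', DATA_SOURCE = NetezzaExtracts, FILE_FORMAT = GzipCsvFormat);
+
+-- Parallel load into a distributed internal table via CTAS.
+CREATE TABLE dbo.FactSales_Staging
+WITH (DISTRIBUTION = HASH(ProductKey), CLUSTERED COLUMNSTORE INDEX)
+AS SELECT * FROM dbo.FactSales_Ext;
+```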
+
+#### Use Workload management
+
+Use [Workload management](/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-workload-management?context=/azure/synapse-analytics/context/context) instead of resource classes. ETL should run in its own workload group, configured to have more resources per query (lower concurrency but more resources per request). For more information, see [What is dedicated SQL pool in Azure Synapse Analytics](/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is).
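+
+For example, a sketch (the group name, user name, and percentages are illustrative) of an ETL workload group and its classifier:
+
+```sql
+-- Reserve a share of resources for data loading, with larger per-request grants.
+CREATE WORKLOAD GROUP wgDataLoads
+WITH
+(
+    MIN_PERCENTAGE_RESOURCE = 30,
+    CAP_PERCENTAGE_RESOURCE = 60,
+    REQUEST_MIN_RESOURCE_GRANT_PERCENT = 10
+);
+
+-- Route requests from the ETL service account into that group.
+CREATE WORKLOAD CLASSIFIER wcDataLoads
+WITH (WORKLOAD_GROUP = 'wgDataLoads', MEMBERNAME = 'etl_user');
+```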
+
+## Next steps
+
+To learn more about ETL and load for Netezza migration, see the next article in this series: [Data migration, ETL, and load for Netezza migration](2-etl-load-migration-considerations.md).
synapse-analytics 2 Etl Load Migration Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/netezza/2-etl-load-migration-considerations.md
+
+ Title: "Data migration, ETL, and load for Netezza migration"
+description: Learn how to plan your data migration from Netezza to Azure Synapse to minimize the risk and impact on users.
+Last updated : 05/24/2022
+# Data migration, ETL, and load for Netezza migration
+
+This article is part two of a seven-part series that provides guidance on how to migrate from Netezza to Azure Synapse Analytics. This article provides best practices for ETL and load migration.
+
+## Data migration considerations
+
+### Initial decisions for data migration from Netezza
+
+When migrating a Netezza data warehouse, you need to ask some basic data-related questions. For example:
+
+- Should unused table structures be migrated or not?
+
+- What's the best migration approach to minimize risk and impact for users?
+
+- Migrating data marts&mdash;stay physical or go virtual?
+
+The next sections discuss these points within the context of migration from Netezza.
+
+#### Migrate unused tables?
+
+> [!TIP]
+> In legacy systems, it's not unusual for tables to become redundant over time&mdash;these don't need to be migrated in most cases.
+
+It makes sense to only migrate tables that are in use in the existing system. Tables that aren't active can be archived rather than migrated, so that the data is available if necessary in future. It's best to use system metadata and log files rather than documentation to determine which tables are in use, because documentation can be out of date.
+
+If enabled, Netezza query history tables contain information that can determine when a given table was last accessed&mdash;which can in turn be used to decide whether a table is a candidate for migration.
+
+Here's an example query that looks for the usage of a specific table within a given time window:
+
+```sql
+SELECT FORMAT_TABLE_ACCESS (usage),
+ hq.submittime
+FROM "$v_hist_queries" hq
+ INNER JOIN "$hist_table_access_3" hta USING
+(NPSID, NPSINSTANCEID, OPID, SESSIONID)
+WHERE hq.dbname = 'PROD'
+AND hta.schemaname = 'ADMIN'
+AND hta.tablename = 'TEST_1'
+AND hq.SUBMITTIME > '01-01-2015'
+AND hq.SUBMITTIME <= '08-06-2015'
+AND
+(
+ instr(FORMAT_TABLE_ACCESS(usage),'ins') > 0
+ OR instr(FORMAT_TABLE_ACCESS(usage),'upd') > 0
+ OR instr(FORMAT_TABLE_ACCESS(usage),'del') > 0
+)
+AND status=0;
+```
+
+```output
+ FORMAT_TABLE_ACCESS | SUBMITTIME
+---------------------+----------------------------
+ins | 2015-06-16 18:32:25.728042
+ins | 2015-06-16 17:46:14.337105
+ins | 2015-06-16 17:47:14.430995
+(3 rows)
+```
+
+This query uses the helper function `FORMAT_TABLE_ACCESS` and the digit at the end of the `$v_hist_table_access_3` view to match the installed query history version.
+
+#### What is the best migration approach to minimize risk and impact on users?
+
+> [!TIP]
+> Migrate the existing model as-is initially, even if a change to the data model is planned in the future.
+
+This question comes up often, since companies often want to lower the impact of changes to the data warehouse data model to improve agility, and they see a migration as an opportunity to modernize that model. This approach carries a higher risk because it could impact the ETL jobs that populate the data warehouse, as well as those that feed dependent data marts from the warehouse. Because of that risk, it's usually better to redesign on this scale after the data warehouse migration.
+
+Even if a data model change is an intended part of the overall migration, it's good practice to migrate the existing model as-is to the new environment (Azure Synapse in this case), rather than do any re-engineering on the new platform during migration. This approach has the advantage of minimizing the impact on existing production systems, while also leveraging the performance and elastic scalability of the Azure platform for one-off re-engineering tasks.
+
+When migrating from Netezza, often the existing data model is already suitable for as-is migration to Azure Synapse.
+
+#### Migrating data marts&mdash;stay physical or go virtual?
+
+> [!TIP]
+> Virtualizing data marts can save on storage and processing resources.
+
+In legacy Netezza data warehouse environments, it's common practice to create several data marts that are structured to provide good performance for ad hoc self-service queries and reports for a given department or business function within an organization. As such, a data mart typically consists of a subset of the data warehouse and contains aggregated versions of the data in a form that enables users to easily query that data with fast response times via user-friendly query tools such as Microsoft Power BI, Tableau, or MicroStrategy. This form is typically a dimensional data model. One use of data marts is to expose the data in a usable form, even if the underlying warehouse data model is something different, such as a data vault.
+
+You can use separate data marts for individual business units within an organization to implement robust data security regimes, by only allowing users to access specific data marts that are relevant to them, and eliminating, obfuscating, or anonymizing sensitive data.
+
+If these data marts are implemented as physical tables, they'll require additional storage resources to store them, and additional processing to build and refresh them regularly. Also, the data in the mart will only be as up to date as the last refresh operation, and so may be unsuitable for highly volatile data dashboards.
+
+> [!TIP]
+> The performance and scalability of Azure Synapse enables virtualization without sacrificing performance.
+
+With the advent of relatively low-cost scalable MPP architectures, such as Azure Synapse, and the inherent performance characteristics of such architectures, it may be that you can provide data mart functionality without having to instantiate the mart as a set of physical tables. This is achieved by effectively virtualizing the data marts via SQL views onto the main data warehouse, or via a virtualization layer using features such as views in Azure or the [data virtualization products of Microsoft partners](/azure/synapse-analytics/partner/data-integration). This approach simplifies or eliminates the need for additional storage and aggregation processing and reduces the overall number of database objects to be migrated.
+
+There's another potential benefit to this approach: by implementing the aggregation and join logic within a virtualization layer, and presenting external reporting tools via a virtualized view, the processing required to create these views is 'pushed down' into the data warehouse, which is generally the best place to run joins, aggregations, and other related operations, on large data volumes.
+
+The primary drivers for choosing a virtual data mart implementation over a physical data mart are:
+
+- More agility&mdash;a virtual data mart is easier to change than physical tables and the associated ETL processes.
+
+- Lower total cost of ownership&mdash;a virtualized implementation requires fewer data stores and copies of data.
+
+- Elimination of ETL jobs to migrate and simplify data warehouse architecture in a virtualized environment.
+
+- Performance&mdash;although physical data marts have historically been more performant, virtualization products now implement intelligent caching techniques to mitigate this difference.
+
+### Data migration from Netezza
+
+#### Understand your data
+
+Part of migration planning is understanding in detail the volume of data that needs to be migrated since that can impact decisions about the migration approach. Use system metadata to determine the physical space taken up by the 'raw data' within the tables to be migrated. In this context, 'raw data' means the amount of space used by the data rows within a table, excluding overheads such as indexes and compression. This is especially true for the largest fact tables since these will typically comprise more than 95% of the data.
+
+Get an accurate number for the volume of data to be migrated for a given table by extracting a representative sample of the data&mdash;for example, one million rows&mdash;to an uncompressed delimited flat ASCII data file. Then, use the size of that file to get an average raw data size per row of that table. Finally, multiply that average size by the total number of rows in the full table to give a raw data size for the table. Use that raw data size in your planning.
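+
+For example, a sketch of such a capped sample extract using a Netezza external table (the output path and table name are placeholders):
+
+```sql
+-- Extract a one million row sample to an uncompressed delimited file, then use the
+-- file size to estimate the average raw bytes per row.
+CREATE EXTERNAL TABLE '/tmp/sample_tab1.csv' USING (DELIM ',')
+AS SELECT * FROM <TABLENAME> LIMIT 1000000;
+```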
+
+#### Netezza data type mapping
+
+> [!TIP]
+> Assess the impact of unsupported data types as part of the preparation phase.
+
+Most Netezza data types have a direct equivalent in Azure Synapse. The following table shows these data types, together with the recommended approach for mapping them.
++
+| Netezza data type | Azure Synapse data type |
+|--|-|
+| BIGINT | BIGINT |
+| BINARY VARYING(n) | VARBINARY(n) |
+| BOOLEAN | BIT |
+| BYTEINT | TINYINT |
+| CHARACTER VARYING(n) | VARCHAR(n) |
+| CHARACTER(n) | CHAR(n) |
+| DATE | DATE |
+| DECIMAL(p,s) | DECIMAL(p,s) |
+| DOUBLE PRECISION | FLOAT |
+| FLOAT(n) | FLOAT(n) |
+| INTEGER | INT |
+| INTERVAL | INTERVAL data types aren't currently directly supported in Azure Synapse but can be calculated using temporal functions, such as DATEDIFF |
+| MONEY | MONEY |
+| NATIONAL CHARACTER VARYING(n) | NVARCHAR(n) |
+| NATIONAL CHARACTER(n) | NCHAR(n) |
+| NUMERIC(p,s) | NUMERIC(p,s) |
+| REAL | REAL |
+| SMALLINT | SMALLINT |
+| ST_GEOMETRY(n) | Spatial data types such as ST_GEOMETRY aren't currently supported in Azure Synapse Analytics, but the data could be stored as VARCHAR or VARBINARY |
+| TIME | TIME |
+| TIME WITH TIME ZONE | DATETIMEOFFSET |
+| TIMESTAMP | DATETIME |
+
+Use the metadata from the Netezza catalog tables to determine whether any of these data types need to be migrated, and allow for this in your migration plan. The important metadata views in Netezza for this type of query are:
+
+- `_V_USER`: the user view gives information about the users in the Netezza system.
+
+- `_V_TABLE`: the table view contains the list of tables created in the Netezza performance system.
+
+- `_V_RELATION_COLUMN`: the relation column system catalog view contains the columns available in a table.
+
+- `_V_OBJECTS`: the objects view lists the different objects like tables, views, functions, and so on, that are available in Netezza.
+
+For example, this Netezza SQL query shows columns and column types:
+
+```sql
+SELECT
+tablename,
+ attname AS COL_NAME,
+ b.FORMAT_TYPE AS COL_TYPE,
+ attnum AS COL_NUM
+FROM _v_table a
+ JOIN _v_relation_column b
+ ON a.objid = b.objid
+WHERE a.tablename = 'ATT_TEST'
+AND a.schema = 'ADMIN'
+ORDER BY attnum;
+```
+
+```output
+TABLENAME | COL_NAME | COL_TYPE | COL_NUM
+----------+----------+----------+--------
+ATT_TEST | COL_INT | INTEGER | 1
+ATT_TEST | COL_NUMERIC | NUMERIC(10,2) | 2
+ATT_TEST | COL_VARCHAR | CHARACTER VARYING(5) | 3
+ATT_TEST | COL_DATE | DATE | 4
+(4 rows)
+```
+
+The query can be modified to search all tables for any occurrences of unsupported data types.
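+
+For example, a sketch of such a modification (the list of type patterns is illustrative and should be extended as needed):
+
+```sql
+SELECT
+    a.schema,
+    a.tablename,
+    b.attname     AS COL_NAME,
+    b.FORMAT_TYPE AS COL_TYPE
+FROM _v_table a
+    JOIN _v_relation_column b
+        ON a.objid = b.objid
+WHERE UPPER(b.FORMAT_TYPE) LIKE 'INTERVAL%'
+   OR UPPER(b.FORMAT_TYPE) LIKE 'ST_GEOMETRY%'
+ORDER BY a.schema, a.tablename, b.attnum;
+```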
+
+Azure Data Factory can be used to move data from a legacy Netezza environment. For more information, see [IBM Netezza connector](/azure/data-factory/connector-netezza).
+
+[Third-party vendors](/azure/sql-data-warehouse/sql-data-warehouse-partner-data-integration) offer tools and services to automate migration, including the mapping of data types as previously described. Also, third-party ETL tools, like Informatica or Talend, already in use in the Netezza environment can implement all required data transformations. The next section explores the migration of existing third-party ETL processes.
+
+## ETL migration considerations
+
+### Initial decisions regarding Netezza ETL migration
+
+> [!TIP]
+> Plan the approach to ETL migration ahead of time and leverage Azure facilities where appropriate.
+
+For ETL/ELT processing, legacy Netezza data warehouses may use custom-built scripts using Netezza utilities such as nzsql and nzload, or third-party ETL tools such as Informatica or Ab Initio. Sometimes, Netezza data warehouses use a combination of ETL and ELT approaches that's evolved over time. When planning a migration to Azure Synapse, you need to determine the best way to implement the required ETL/ELT processing in the new environment, while minimizing the cost and risk involved. To learn more about ETL and ELT processing, see [ELT vs ETL Design approach](/azure/synapse-analytics/sql-data-warehouse/design-elt-data-loading).
+
+The following sections discuss migration options and make recommendations for various use cases. This flowchart summarizes one approach:
++
+The first step is always to build an inventory of ETL/ELT processes that need to be migrated. As with other steps, it's possible that the standard 'built-in' Azure features make it unnecessary to migrate some existing processes. For planning purposes, it's important to understand the scale of the migration to be performed.
+
+In the preceding flowchart, decision 1 relates to a high-level decision about whether to migrate to a totally Azure-native environment. If you're moving to a totally Azure-native environment, we recommend that you re-engineer the ETL processing using [Pipelines and activities in Azure Data Factory](/azure/data-factory/concepts-pipelines-activities?msclkid=b6ea2be4cfda11ec929ac33e6e00db98&tabs=data-factory) or [Synapse Pipelines](/azure/synapse-analytics/get-started-pipelines?msclkid=b6e99db9cfda11ecbaba18ca59d5c95c). If you're not moving to a totally Azure-native environment, then decision 2 is whether an existing third-party ETL tool is already in use.
+
+> [!TIP]
+> Leverage investment in existing third-party tools to reduce cost and risk.
+
+If a third-party ETL tool is already in use, and especially if there's a large investment in skills or several existing workflows and schedules use that tool, then decision 3 is whether the tool can efficiently support Azure Synapse as a target environment. Ideally, the tool will include 'native' connectors that can leverage Azure facilities like PolyBase or [COPY INTO](/sql/t-sql/statements/copy-into-transact-sql), for the most efficient parallel data loading. There's a way to call an external process, such as PolyBase or `COPY INTO`, and pass in the appropriate parameters. In this case, leverage existing skills and workflows, with Azure Synapse as the new target environment.
+
+If you decide to retain an existing third-party ETL tool, there may be benefits to running that tool within the Azure environment (rather than on an existing on-premises ETL server) and having Azure Data Factory handle the overall orchestration of the existing workflows. One particular benefit is that less data needs to be downloaded from Azure, processed, and then uploaded back into Azure. So, decision 4 is whether to leave the existing tool running as-is or to move it into the Azure environment to achieve cost, performance, and scalability benefits.
+
+### Re-engineering existing Netezza-specific scripts
+
+If some or all the existing Netezza warehouse ETL/ELT processing is handled by custom scripts that utilize Netezza-specific utilities, such as nzsql or nzload, then these scripts need to be recoded for the new Azure Synapse environment. Similarly, if ETL processes were implemented using stored procedures in Netezza, then these will also have to be recoded.
+
+> [!TIP]
+> The inventory of ETL tasks to be migrated should include scripts and stored procedures.
+
+Some elements of the ETL process are easy to migrate, for example, a simple bulk data load into a staging table from an external file. It may even be possible to automate those parts of the process, for example, by using PolyBase instead of nzload. Other parts of the process that contain arbitrarily complex SQL and/or stored procedures will take more time to re-engineer.
+
+One way of testing Netezza SQL for compatibility with Azure Synapse is to capture some representative SQL statements from Netezza query history, then prefix those queries with `EXPLAIN`, and then (assuming a like-for-like migrated data model in Azure Synapse) run those EXPLAIN statements in Azure Synapse. Any incompatible SQL will generate an error, and the error information can determine the scale of the recoding task.
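+
+For example, a minimal sketch (the query itself is hypothetical) of wrapping a captured statement in `EXPLAIN` on Azure Synapse:
+
+```sql
+-- If the statement is incompatible, EXPLAIN returns an error instead of a query
+-- plan, without actually executing the query.
+EXPLAIN
+SELECT c.CustomerName, SUM(f.Amount) AS TotalAmount
+FROM dbo.FactSales AS f
+JOIN dbo.DimCustomer AS c
+    ON f.CustomerKey = c.CustomerKey
+GROUP BY c.CustomerName;
+```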
+
+[Microsoft partners](/azure/sql-data-warehouse/sql-data-warehouse-partner-data-integration) offer tools and services to migrate Netezza SQL and stored procedures to Azure Synapse.
+
+### Using existing third-party ETL tools
+
+As described in the previous section, in many cases the existing legacy data warehouse system will already be populated and maintained by third-party ETL products. For a list of Microsoft data integration partners for Azure Synapse, see [Data integration partners](/azure/sql-data-warehouse/sql-data-warehouse-partner-data-integration).
+
+## Data loading from Netezza
+
+### Choices available when loading data from Netezza
+
+> [!TIP]
+> Third-party tools can simplify and automate the migration process and therefore reduce risk.
+
+When it comes to migrating data from a Netezza data warehouse, there are some basic questions associated with data loading that need to be resolved. You'll need to decide how the data will be physically moved from the existing on-premises Netezza environment into Azure Synapse in the cloud, and which tools will be used to perform the transfer and load. Consider the following questions, which are discussed in the next sections.
+
+- Will you extract the data to files, or move it directly via a network connection?
+
+- Will you orchestrate the process from the source system, or from the Azure target environment?
+
+- Which tools will you use to automate and manage the process?
+
+#### Transfer data via files or network connection?
+
+> [!TIP]
+> Understand the data volumes to be migrated and the available network bandwidth since these factors influence the migration approach decision.
+
+Once the database tables to be migrated have been created in Azure Synapse, you can move the data that populates those tables out of the legacy Netezza system and load it into the new environment. There are two basic approaches:
+
+- **File Extract**&mdash;Extract the data from the Netezza tables to flat files, normally in CSV format, via nzsql with the -o option or via the `CREATE EXTERNAL TABLE` statement. Use an external table whenever possible since it's the most efficient in terms of data throughput. The following SQL example creates a CSV file via an external table:
+
+ ```sql
+ CREATE EXTERNAL TABLE '/data/export.csv' USING (delimiter ',')
+ AS SELECT col1, col2, expr1, expr2, col3, col1 || col2 FROM your table;
+ ```
+
+ Use an external table if you're exporting data to a mounted file system on a local Netezza host. If you're exporting data to a remote machine that has JDBC, ODBC, or OLEDB installed, then add the 'remotesource odbc' option to the `USING` clause.
+
+ This approach requires space to land the extracted data files. The space could be local to the Netezza source database (if sufficient storage is available), or remote in Azure Blob Storage. The best performance is achieved when a file is written locally, since that avoids network overhead.
+
+ To minimize the storage and network transfer requirements, it's good practice to compress the extracted data files using a utility like gzip.
+
+ Once extracted, the flat files can either be moved into Azure Blob Storage (co-located with the target Azure Synapse instance), or loaded directly into Azure Synapse using PolyBase or [COPY INTO](/sql/t-sql/statements/copy-into-transact-sql). The method for physically moving data from local on-premises storage to the Azure cloud environment depends on the amount of data and the available network bandwidth.
+
+ Microsoft provides various options to move large volumes of data, including AzCopy for moving files across the network into Azure Storage, Azure ExpressRoute for moving bulk data over a private network connection, and Azure Data Box for files moving to a physical storage device that's then shipped to an Azure data center for loading. For more information, see [data transfer](/azure/architecture/data-guide/scenarios/data-transfer).
+
+- **Direct extract and load across network**&mdash;The target Azure environment sends a data extract request, normally via a SQL command, to the legacy Netezza system to extract the data. The results are sent across the network and loaded directly into Azure Synapse, with no need to land the data into intermediate files. The limiting factor in this scenario is normally the bandwidth of the network connection between the Netezza database and the Azure environment. For very large data volumes, this approach may not be practical.
+
+There's also a hybrid approach that uses both methods. For example, you can use the direct network extract approach for smaller dimension tables and samples of the larger fact tables to quickly provide a test environment in Azure Synapse. For large volume historical fact tables, you can use the file extract and transfer approach using Azure Data Box.
+
+#### Orchestrate from Netezza or Azure?
+
+The recommended approach when moving to Azure Synapse is to orchestrate the data extract and loading from the Azure environment using [Azure Synapse Pipelines](/azure/synapse-analytics/get-started-pipelines?msclkid=b6e99db9cfda11ecbaba18ca59d5c95c) or [Azure Data Factory](/azure/data-factory/introduction?msclkid=2ccc66eccfde11ecaa58877e9d228779), as well as associated utilities, such as PolyBase or [COPY INTO](/sql/t-sql/statements/copy-into-transact-sql), for the most efficient data loading. This approach leverages Azure capabilities and provides an easy method to build reusable data loading pipelines.
+
+Other benefits of this approach include reduced impact on the Netezza system during the data load process since the management and loading process is running in Azure, and the ability to automate the process by using metadata-driven data load pipelines.
+
+#### Which tools can be used?
+
+The task of data transformation and movement is the basic function of all ETL products. If one of these products is already in use in the existing Netezza environment, then using the existing ETL tool may simplify data migration from Netezza to Azure Synapse. This approach assumes that the ETL tool supports Azure Synapse as a target environment. For more information on tools that support Azure Synapse, see [Data integration partners](/azure/sql-data-warehouse/sql-data-warehouse-partner-data-integration).
+
+If you're using an ETL tool, consider running that tool within the Azure environment to benefit from Azure cloud performance, scalability, and cost, and free up resources in the Netezza data center. Another benefit is reduced data movement between the cloud and on-premises environments.
+
+## Summary
+
+To summarize, our recommendations for migrating data and associated ETL processes from Netezza to Azure Synapse are:
+
+- Plan ahead to ensure a successful migration exercise.
+
+- Build a detailed inventory of data and processes to be migrated as soon as possible.
+
+- Use system metadata and log files to get an accurate understanding of data and process usage. Don't rely on documentation since it may be out of date.
+
+- Understand the data volumes to be migrated, and the network bandwidth between the on-premises data center and Azure cloud environments.
+
+- Leverage standard 'built-in' Azure features when appropriate, to minimize the migration workload.
+
+- Identify and understand the most efficient tools for data extract and load in both Netezza and Azure environments. Use the appropriate tools in each phase in the process.
+
+- Use Azure facilities, such as [Azure Synapse Pipelines](/azure/synapse-analytics/get-started-pipelines?msclkid=b6e99db9cfda11ecbaba18ca59d5c95c) or [Azure Data Factory](/azure/data-factory/introduction?msclkid=2ccc66eccfde11ecaa58877e9d228779), to orchestrate and automate the migration process while minimizing impact on the Netezza system.
+
+## Next steps
+
+To learn more about security access operations, see the next article in this series: [Security, access, and operations for Netezza migrations](3-security-access-operations.md).
synapse-analytics 3 Security Access Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/netezza/3-security-access-operations.md
+
+ Title: "Security, access, and operations for Netezza migrations"
+description: Learn about authentication, users, roles, permissions, monitoring, and auditing, and workload management in Azure Synapse and Netezza.
+Last updated : 05/24/2022
+# Security, access, and operations for Netezza migrations
+
+This article is part three of a seven-part series that provides guidance on how to migrate from Netezza to Azure Synapse Analytics. This article provides best practices for security access operations.
+
+## Security considerations
+
+This article discusses the methods of connection for existing legacy Netezza environments and how they can be migrated to Azure Synapse with minimal risk and user impact.
+
+It's assumed that there's a requirement to migrate the existing methods of connection and user/role/permission structure as-is. If this isn't the case, then use Azure utilities such as Azure portal to create and manage a new security regime.
+
+For more information on the [Azure Synapse security](/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-manage-security#authorization) options, see the [Security whitepaper](/azure/synapse-analytics/guidance/security-white-paper-introduction).
+
+### Connection and authentication
+
+> [!TIP]
+> Authentication in both Netezza and Azure Synapse can be "in database" or through external methods.
+
+#### Netezza authorization options
+
+The IBM&reg; Netezza&reg; system offers several authentication methods for Netezza database users:
+
+- **Local authentication**: Netezza administrators define database users and their passwords by using the `CREATE USER` command or through Netezza administrative interfaces. In local authentication, use the Netezza system to manage database accounts and passwords, and to add and remove database users from the system. This method is the default authentication method.
+
+- **LDAP authentication**: Use an LDAP name server to authenticate database users, manage passwords, database account activations, and deactivations. The Netezza system uses a Pluggable Authentication Module (PAM) to authenticate users on the LDAP name server. Microsoft Active Directory conforms to the LDAP protocol, so it can be treated like an LDAP server for the purposes of LDAP authentication.
+
+- **Kerberos authentication**: Use a Kerberos distribution server to authenticate database users, manage passwords, database account activations, and deactivations.
+
+Authentication is a system-wide setting. Users must be either locally authenticated or authenticated by using the LDAP or Kerberos method. If you choose LDAP or Kerberos authentication, you can still create users with local authentication on a per-user basis. LDAP and Kerberos can't be used at the same time to authenticate users. The Netezza host supports LDAP or Kerberos authentication for database user logins only, not for operating system logins on the host.
+
+#### Azure Synapse authorization options
+
+Azure Synapse supports two basic options for connection and authorization:
+
+- **SQL authentication**: SQL authentication is via a database connection that includes a database identifier, user ID, and password plus other optional parameters. This is functionally equivalent to Netezza local connections.
+
+- **Azure Active Directory (Azure AD) authentication**: With Azure AD authentication, you can centrally manage the identities of database users and other Microsoft services in one location. Central ID management provides a single place to manage Azure Synapse users and simplifies permission management. Azure AD can also support connections to LDAP and Kerberos services&mdash;for example, Azure AD can be used to connect to existing LDAP directories if these are to remain in place after migration of the database.
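+
+As an illustration of both options, the following minimal sketch creates a SQL-authenticated login and user, and an Azure AD-based user, in an Azure Synapse dedicated SQL pool. The login, user, and account names are hypothetical, and the statements against the `master` database and the user database must be run in separate connections.
+
+```sql
+-- SQL authentication: create a login in the master database...
+CREATE LOGIN migrated_app_login WITH PASSWORD = '<strong-password-here>';
+
+-- ...then a matching user in the dedicated SQL pool database
+CREATE USER migrated_app_user FOR LOGIN migrated_app_login;
+
+-- Azure AD authentication: map an Azure AD identity directly to a database user
+CREATE USER [report.analyst@contoso.com] FROM EXTERNAL PROVIDER;
+```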
+
+### Users, roles, and permissions
+
+#### Overview
+
+> [!TIP]
+> High-level planning is essential for a successful migration project.
+
+Both Netezza and Azure Synapse implement database access control via a combination of users, roles (groups in Netezza), and permissions. Both use standard SQL `CREATE USER` and `CREATE ROLE`/`CREATE GROUP` statements to define users and roles, and `GRANT` and `REVOKE` statements to assign or remove permissions from those users and/or roles.
+
+> [!TIP]
+> Automation of migration processes is recommended to reduce elapsed time and scope for errors.
+
+Conceptually the two databases are similar, and it might be possible to automate the migration of existing user IDs, groups, and permissions to some degree. Migrate such data by extracting the existing legacy user and group information from the Netezza system catalog tables and generating matching equivalent `CREATE USER` and `CREATE ROLE` statements to be run in Azure Synapse to recreate the same user/role hierarchy.
+
+After data extraction, use Netezza system catalog tables to generate equivalent `GRANT` statements to assign permissions (where an equivalent one exists). The following diagram shows how to use existing metadata to generate the necessary SQL.
++
+See the following sections for more details.
+
+#### Users and roles
+
+> [!TIP]
+> Migration of a data warehouse requires more than just tables, views, and SQL statements.
+
+The information about current users and groups in a Netezza system is held in the system catalog views `_v_users` and `_v_groupusers`. Use the nzsql utility or tools such as the Netezza&reg; Performance Portal, NzAdmin, or the Netezza utility scripts to list user privileges. For example, use the `dpu` and `dpgu` commands in nzsql to display users or groups with their permissions.
+
+Use or edit the utility scripts `nz_get_users` and `nz_get_user_groups` to retrieve the same information in the required format.
+
+Query system catalog views directly (if the user has `SELECT` access to those views) to obtain current lists of users and roles defined within the system. See examples to list users, groups, or users and their associated groups:
+
+```sql
+-- List of users
+SELECT USERNAME FROM _V_USER;
+
+--List of groups
+SELECT DISTINCT(GROUPNAME) FROM _V_USERGROUPS;
+
+--List of users and their associated groups
+SELECT USERNAME, GROUPNAME FROM _V_GROUPUSERS;
+```
+
+Modify the example `SELECT` statement to produce a result set that is a series of `CREATE USER` and `CREATE GROUP` statements by including the appropriate text as a literal within the `SELECT` statement.
+
+There's no way to retrieve existing passwords, so you need to implement a scheme for allocating new initial passwords on Azure Synapse.
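+
+As a minimal sketch of this approach, the following query (run on Netezza) generates Azure Synapse `CREATE LOGIN` statements with placeholder passwords. The excluded `ADMIN` account and the lowercase naming are assumptions; review and edit the output before running it against the target system.
+
+```sql
+-- Generate CREATE LOGIN statements for Azure Synapse from the Netezza catalog
+SELECT 'CREATE LOGIN ' || LOWER(USERNAME) || ' WITH PASSWORD = ''<new-initial-password>'';'
+FROM _V_USER
+WHERE USERNAME <> 'ADMIN';
+```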
+
+#### Permissions
+
+> [!TIP]
+> There are equivalent Azure Synapse permissions for basic database operations such as DML and DDL.
+
+In a Netezza system, the system table `_t_usrobj_priv` holds the access rights for users and roles. Query this table (if the user has `SELECT` access to it) to obtain the current lists of access rights defined within the system.
+
+In Netezza, the individual permissions are represented as individual bits within the `privileges` or `g_privileges` fields. See the example SQL statement at [user group permissions](http://nz2nz.blogspot.com/2016/03/netezza-user-group-permissions-view_3.html).
+
+The simplest way to obtain a DDL script that contains the `GRANT` commands to replicate the current privileges for users and groups is to use the appropriate Netezza utility scripts:
+
+```bash
+# List of group privileges
+nz_ddl_grant_group -usrobj dbname > output_file_dbname
+
+# List of user privileges
+nz_ddl_grant_user -usrobj dbname > output_file_dbname
+```
+
+The output file can be modified to produce a script that is a series of `GRANT` statements for Azure Synapse.
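+
+For reference, the Azure Synapse side of such a script is standard T-SQL. The following is a minimal sketch using hypothetical role, user, and table names, not output from the Netezza utilities:
+
+```sql
+-- Create a role and grant it schema and object permissions
+CREATE ROLE report_readers;
+GRANT SELECT ON SCHEMA::dbo TO report_readers;
+GRANT INSERT, UPDATE, DELETE ON OBJECT::dbo.fact_sales TO report_readers;
+
+-- Attach a migrated user to the role
+EXEC sp_addrolemember 'report_readers', 'migrated_user1';
+```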
+
+Netezza supports two classes of access rights: Admin and Object. See the following tables for a list of Netezza access rights and their equivalent in Azure Synapse.
+
+| Admin Privilege | Description | Azure Synapse Equivalent |
+|-|-|--|
+| Backup | Allows the user to create backups. The user can run the `nzbackup` command. | \* |
+| [Create] Aggregate | Allows the user to create user-defined aggregates (UDAs). Permission to operate on existing UDAs is controlled by object privileges. | CREATE FUNCTION \*\*\* |
+| [Create] Database | Allows the user to create databases. Permission to operate on existing databases is controlled by object privileges. | CREATE DATABASE |
+| [Create] External Table | Allows the user to create external tables. Permission to operate on existing tables is controlled by object privileges. | CREATE TABLE |
+| [Create] Function | Allows the user to create user-defined functions (UDFs). Permission to operate on existing UDFs is controlled by object privileges. | CREATE FUNCTION |
+| [Create] Group | Allows the user to create groups. Permission to operate on existing groups is controlled by object privileges. | CREATE ROLE |
+| [Create] Index | For system use only. Users can't create indexes. | CREATE INDEX |
+| [Create] Library | Allows the user to create shared libraries. Permission to operate on existing shared libraries is controlled by object privileges. | \* |
+| [Create] Materialized View | Allows the user to create materialized views. | CREATE VIEW |
+| [Create] Procedure | Allows the user to create stored procedures. Permission to operate on existing stored procedures is controlled by object privileges. | CREATE PROCEDURE |
+| [Create] Schema | Allows the user to create schemas. Permission to operate on existing schemas is controlled by object privileges. | CREATE SCHEMA |
+| [Create] Sequence | Allows the user to create database sequences. | \* |
+| [Create] Synonym | Allows the user to create synonyms. | CREATE SYNONYM |
+| [Create] Table | Allows the user to create tables. Permission to operate on existing tables is controlled by object privileges. | CREATE TABLE |
+| [Create] Temp Table | Allows the user to create temporary tables. Permission to operate on existing tables is controlled by object privileges. | CREATE TABLE |
+| [Create] User | Allows the user to create users. Permission to operate on existing users is controlled by object privileges. | CREATE USER |
+| [Create] View | Allows the user to create views. Permission to operate on existing views is controlled by object privileges. | CREATE VIEW |
+| Manage Hardware | Allows the user to do the following hardware-related operations: view hardware status, manage SPUs, manage topology and mirroring, and run diagnostic tests. The user can run the nzhw and nzds commands. | \*\*\*\* |
+| Manage Security | Allows the user to run commands and operations that relate to advanced security options, such as: managing and configuring history databases, managing multi-level security objects, specifying security for users and groups, and managing database key stores and keys for the digital signing of audit data. | \*\*\*\* |
+| Manage System | Allows the user to do the following management operations: start/stop/pause/resume the system, abort sessions, and view the distribution map, system statistics, and logs. The user can use the nzsystem, nzstate, nzstats, and nzsession commands. | \*\*\*\* |
+| Restore | Allows the user to restore the system. The user can run the nzrestore command. | \*\* |
+| Unfence | Allows the user to create or alter a user-defined function or aggregate to run in unfenced mode. | \* |
+
+| Object Privilege | Description | Azure Synapse Equivalent |
+|-|-|--|
+| Abort | Allows the user to abort sessions. Applies to groups and users. | KILL DATABASE CONNECTION |
+| Alter | Allows the user to modify object attributes. Applies to all objects. | ALTER |
+| Delete | Allows the user to delete table rows. Applies only to tables. | DELETE |
+| Drop | Allows the user to drop objects. Applies to all object types. | DROP |
+| Execute | Allows the user to run user-defined functions, user-defined aggregates, or stored procedures. | EXECUTE |
+| GenStats | Allows the user to generate statistics on tables or databases. The user can run the `GENERATE STATISTICS` command. | \*\* |
+| Groom | Allows the user to reclaim disk space for deleted or outdated rows, and reorganize a table by the organizing keys, or to migrate data for tables that have multiple stored versions. | \*\* |
+| Insert | Allows the user to insert rows into a table. Applies only to tables. | INSERT |
+| List | Allows the user to display an object name, either in a list or in another manner. Applies to all objects. | LIST |
+| Select | Allows the user to select (or query) rows within a table. Applies to tables and views. | SELECT |
+| Truncate | Allows the user to delete all rows from a table. Applies only to tables. | TRUNCATE |
+| Update | Allows the user to modify table rows. Applies to tables only. | UPDATE |
+
+Comments on the preceding table:
+
+\* There's no direct equivalent to this function in Azure Synapse.
+
+\*\* These Netezza functions are handled automatically in Azure Synapse.
+
+\*\*\* The Azure Synapse `CREATE FUNCTION` feature incorporates Netezza aggregate functionality.
+
+\*\*\*\* These features are managed automatically by the system or via the Azure portal in Azure Synapse&mdash;see the following section on operational considerations.
+
+Refer to [Azure Synapse Analytics security permissions](/azure/synapse-analytics/guidance/security-white-paper-introduction).
+
+## Operational considerations
+
+> [!TIP]
+> Operational tasks are necessary to keep any data warehouse operating efficiently.
+
+This section discusses how to implement typical Netezza operational tasks in Azure Synapse with minimal risk and impact to users.
+
+As with all data warehouse products, once in production there are ongoing management tasks that are necessary to keep the system running efficiently and to provide data for monitoring and auditing. Resource utilization and capacity planning for future growth also falls into this category, as does backup/restore of data.
+
+Netezza administration tasks typically fall into two categories:
+
+- System administration, which is managing the hardware, configuration settings, system status, access, disk space, usage, upgrades, and other tasks.
+
+- Database administration, which is managing user databases and their content, loading data, backing up data, restoring data, and controlling access to data and permissions.
+
+IBM&reg; Netezza&reg; offers several ways or interfaces that you can use to perform the various system and database management tasks:
+
+- Netezza commands (nz* commands) are installed in the /nz/kit/bin directory on the Netezza host. For many of the nz* commands, you must be able to sign into the Netezza system to access and run those commands. In most cases, users sign in as the default nz user account, but you can create other Linux user accounts on your system. Some commands require you to specify a database user account, password, and database to ensure that you have permission to do the task.
+
+- The Netezza CLI client kits package a subset of the nz* commands that can be run from Windows and UNIX client systems. The client commands might also require you to specify a database user account, password, and database to ensure that you have database administrative and object permissions to perform the task.
+
+- The SQL commands support administration tasks and queries within a SQL database session. You can run the SQL commands from the Netezza nzsql command interpreter or through SQL APIs such as ODBC, JDBC, and the OLE DB Provider. You must have a database user account to run the SQL commands with appropriate permissions for the queries and tasks that you perform.
+
+- The NzAdmin tool is a Netezza interface that runs on Windows client workstations to manage Netezza systems.
+
+While conceptually the management and operations tasks for different data warehouses are similar, the individual implementations may differ. In general, modern cloud-based products such as Azure Synapse tend to incorporate a more automated and "system managed" approach (as opposed to a more 'manual' approach in legacy data warehouses such as Netezza).
+
+The following sections compare Netezza and Azure Synapse options for various operational tasks.
+
+### Housekeeping tasks
+
+> [!TIP]
+> Housekeeping tasks keep a production warehouse operating efficiently and optimize use of resources such as storage.
+
+In most legacy data warehouse environments, regular 'housekeeping' tasks are time-consuming. Disk storage space can be reclaimed by removing old versions of updated or deleted rows, or by reorganizing data, log files, or index blocks for efficiency (`GROOM` and `VACUUM` in Netezza). Collecting statistics is also a potentially time-consuming task, required after a bulk data ingest to provide the query optimizer with up-to-date data on which to base query execution plans.
+
+Netezza recommends collecting statistics as follows:
+
+- Collect statistics on unpopulated tables to set up the interval histogram used in internal processing. This initial collection makes subsequent statistics collections faster. Make sure to recollect statistics after data is added.
+
+- During the prototype phase, collect statistics on newly populated tables.
+
+- During the production phase, collect statistics after a significant percentage of change to the table or partition (~10% of rows). For high volumes of nonunique values, such as dates or timestamps, it may be advantageous to recollect at 7%.
+
+- Collect production-phase statistics after you've created users and applied real-world query loads to the database (up to about three months of querying).
+
+- Collect statistics in the first few weeks after an upgrade or migration during periods of low CPU utilization.
+
+The Netezza database contains many log tables in the data dictionary that accumulate data, either automatically or after certain features are enabled. Because log data grows over time, purge older information to avoid using up permanent space. Options are available to automate the maintenance of these logs.
+
+> [!TIP]
+> Automate and monitor housekeeping tasks in Azure.
+
+Azure Synapse has an option to automatically create statistics so that they can be used as needed. Perform defragmentation of indexes and data blocks manually, on a scheduled basis, or automatically. Leveraging native built-in Azure capabilities can reduce the effort required in a migration exercise.
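+
+For example, the relevant statements in a dedicated SQL pool look like the following minimal sketch. The database, table, and column names are hypothetical, and automatic statistics creation is typically already enabled by default.
+
+```sql
+-- Enable automatic creation of single-column statistics
+ALTER DATABASE mydw SET AUTO_CREATE_STATISTICS ON;
+
+-- Create and refresh statistics manually after a bulk load
+CREATE STATISTICS stat_fact_sales_date ON dbo.fact_sales (sale_date);
+UPDATE STATISTICS dbo.fact_sales;
+```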
+
+### Monitoring and auditing
+
+> [!TIP]
+> Netezza Performance Portal is the recommended method of monitoring and logging for Netezza systems.
+
+Netezza provides the Netezza Performance Portal to monitor various aspects of one or more Netezza systems including activity, performance, queuing, and resource utilization. Netezza Performance Portal is an interactive GUI which allows users to drill down into low-level details for any chart.
+
+> [!TIP]
+> The Azure portal provides a GUI to manage monitoring and auditing tasks for all Azure data and processes.
+
+Similarly, Azure Synapse provides a rich monitoring experience within the Azure portal to provide insights into your data warehouse workload. The Azure portal is the recommended tool when monitoring your data warehouse as it provides configurable retention periods, alerts, recommendations, and customizable charts and dashboards for metrics and logs.
+
+The portal also enables integration with other Azure monitoring services, such as Operations Management Suite (OMS) and Azure Monitor (logs), to provide a holistic monitoring experience for not only the data warehouse but also the entire Azure analytics platform.
+
+> [!TIP]
+> Low-level and system-wide metrics are automatically logged in Azure Synapse.
+
+Resource utilization statistics for Azure Synapse are automatically logged within the system. The metrics include usage statistics for CPU, memory, cache, I/O, and temporary workspace for each query, as well as connectivity information such as failed connection attempts.
+
+Azure Synapse provides a set of [Dynamic management views](/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-manage-monitor?msclkid=3e6eefbccfe211ec82d019ada29b1834) (DMVs). These views are useful when actively troubleshooting and identifying performance bottlenecks with your workload.
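+
+For example, the following minimal sketch against a dedicated SQL pool lists the longest-running requests the pool is currently tracking:
+
+```sql
+-- The 10 longest-running requests tracked by the dedicated SQL pool
+SELECT TOP 10 request_id, session_id, status, submit_time, total_elapsed_time, command
+FROM sys.dm_pdw_exec_requests
+ORDER BY total_elapsed_time DESC;
+```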
+
+For more information, see [Azure Synapse operations and management options](/azure/sql-data-warehouse/sql-data-warehouse-how-to-manage-and-monitor-workload-importance).
+
+### High Availability (HA) and Disaster Recovery (DR)
+
+Netezza appliances are redundant, fault-tolerant systems and there are diverse options in a Netezza system to enable high availability and disaster recovery.
+
+Adding IBM&reg; Netezza Replication Services for disaster recovery improves fault tolerance by extending redundancy across local and wide area networks.
+
+IBM Netezza Replication Services protects against data loss by synchronizing data on a primary system (the primary node) with data on one or more target nodes (subordinates). These nodes make up a replication set.
+
+High-Availability Linux (also called *Linux-HA*) provides the failover capabilities from a primary or active Netezza host to a secondary or standby Netezza host. The main cluster management daemon in the Linux-HA solution is called *Heartbeat*. Heartbeat watches the hosts and manages the communication and status checks of services.
+
+Each service is a resource.
+
+Netezza groups the Netezza-specific services into the nps resource group. When Heartbeat detects problems that imply a host failure condition or loss of service to the Netezza users, Heartbeat can initiate a failover to the standby host. For details about Linux-HA and its terms and operations, see the documentation at [http://www.linux-ha.org](http://www.linux-ha.org/).
+
+Distributed Replicated Block Device (DRBD) is a block device driver that mirrors the content of block devices (hard disks, partitions, and logical volumes) between the hosts. Netezza uses the DRBD replication only on the **/nz** and **/export/home** partitions. As new data is written to the **/nz** partition and the **/export/home** partition on the primary host, the DRBD software automatically makes the same changes to the **/nz** and **/export/home** partition of the standby host.
+
+> [!TIP]
+> Azure Synapse creates snapshots automatically to ensure fast recovery times.
+
+Azure Synapse uses database snapshots to provide high availability of the warehouse. A data warehouse snapshot creates a restore point that can be used to recover or copy a data warehouse to a previous state. Since Azure Synapse is a distributed system, a data warehouse snapshot consists of many files that are in Azure storage. Snapshots capture incremental changes from the data stored in your data warehouse.
+
+> [!TIP]
+> Use user-defined snapshots to define a recovery point before key updates.
+
+> [!TIP]
+> Microsoft Azure provides automatic backups to a separate geographical location to enable DR.
+
+Azure Synapse automatically takes snapshots throughout the day, creating restore points that are available for seven days. You can't change this retention period. Azure Synapse supports an eight-hour recovery point objective (RPO). A data warehouse can be restored in the primary region from any one of the snapshots taken in the past seven days.
+
+User-defined restore points are also supported, allowing manual triggering of snapshots to create restore points of a data warehouse before and after large modifications. This capability ensures that restore points are logically consistent, which provides additional data protection in case of any workload interruptions or user errors for a desired RPO less than 8 hours.
+
+As well as the snapshots described previously, Azure Synapse also performs a standard geo-backup once per day to a [paired data center](/azure/best-practices-availability-paired-regions). The RPO for a geo-restore is 24 hours. You can restore the geo-backup to a server in any other region where Azure Synapse is supported. A geo-backup ensures that a data warehouse can be restored in case the restore points in the primary region aren't available.
+
+### Workload management
+
+> [!TIP]
+> In a production data warehouse, there are typically mixed workloads with different resource usage characteristics running concurrently.
+
+Netezza incorporates various features for managing workloads, as shown in the following table:
+
+| Technique | Description |
+|--|-|
+| **Scheduler rules** | Scheduler rules influence the scheduling of plans. Each scheduler rule specifies a condition or set of conditions. Each time the scheduler receives a plan, it evaluates all modifying scheduler rules and carries out the appropriate actions. Each time the scheduler selects a plan for execution, it evaluates all limiting scheduler rules. The plan is executed only if doing so wouldn't exceed a limit imposed by a limiting scheduler rule. Otherwise, the plan waits. This provides you with a way to classify and manipulate plans in a way that influences the other WLM techniques (SQB, GRA, and PQE). |
+| **Guaranteed resource allocation (GRA)** | You can assign a minimum share and a maximum percentage of total system resources to entities called *resource groups*. The scheduler ensures that each resource group receives system resources in proportion to its minimum share. A resource group receives a larger share of resources when other resource groups are idle, but never receives more than its configured maximum percentage. Each plan is associated with a resource group, and the settings of that resource group determine what fraction of available system resources are to be made available to process the plan. |
+| **Short query bias (SQB)** | Resources (that is, scheduling slots, memory, and preferential queuing) are reserved for short queries. A short query is a query for which the cost estimate is less than a specified maximum value (the default is two seconds). With SQB, short queries can run even when the system is busy processing other, longer queries. |
+| **Prioritized query execution (PQE)** | Based on settings that you configure, the system assigns a priority&mdash;critical, high, normal, or low&mdash;to each query. The priority depends on factors such as the user, group, or session associated with the query. The system can then use the priority as a basis for allocating resources. |
+
+In Azure Synapse, resource classes are pre-determined resource limits that govern compute resources and concurrency for query execution. Resource classes can help you manage your workload by setting limits on the number of queries that run concurrently and on the compute resources assigned to each query. There's a trade-off between memory and concurrency.
+
+See [Resource classes for workload management](/azure/sql-data-warehouse/resource-classes-for-workload-management) for detailed information.
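+
+For example, assigning a user to a larger static resource class is a role membership operation. The following minimal sketch uses a hypothetical load user name:
+
+```sql
+-- Give a load user the staticrc40 resource class so its queries get more memory
+-- (at the cost of lower overall concurrency)
+EXEC sp_addrolemember 'staticrc40', 'etl_load_user';
+```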
+
+This information can also be used for capacity planning, to determine the resources required for additional users or application workload. It also applies to planning scale-ups and scale-downs of compute resources for cost-effective support of 'peaky' workloads.
+
+### Scaling compute resources
+
+> [!TIP]
+> A major benefit of Azure is the ability to independently scale up and down compute resources on demand to handle peaky workloads cost-effectively.
+
+The architecture of Azure Synapse separates storage and compute, allowing each to scale independently. As a result, [compute resources can be scaled](/azure/synapse-analytics/sql-data-warehouse/quickstart-scale-compute-portal) to meet performance demands independent of data storage. You can also pause and resume compute resources. A natural benefit of this architecture is that billing for compute and storage is separate. If a data warehouse isn't in use, save on compute costs by pausing compute.
+
+Compute resources can be scaled up or scaled back by adjusting the data warehouse units setting for the data warehouse. Loading and query performance will increase linearly as you add more data warehouse units.
+
+Adding more compute nodes adds more compute power and ability to leverage more parallel processing. As the number of compute nodes increases, the number of distributions per compute node decreases, providing more compute power and parallel processing for queries. Similarly, decreasing data warehouse units reduces the number of compute nodes, which reduces the compute resources for queries.
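+
+Scaling can be scripted as well as performed in the Azure portal. For example, the following minimal T-SQL sketch, run against the `master` database, scales a hypothetical dedicated SQL pool to a hypothetical target service level:
+
+```sql
+-- Scale the dedicated SQL pool named mydw to 400 DWUs
+ALTER DATABASE mydw MODIFY (SERVICE_OBJECTIVE = 'DW400c');
+```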
+
+## Next steps
+
+To learn more about visualization and reporting, see the next article in this series: [Visualization and reporting for Netezza migrations](4-visualization-reporting.md).
synapse-analytics 4 Visualization Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/netezza/4-visualization-reporting.md
+
+ Title: "Visualization and reporting for Netezza migrations"
+description: Learn about Microsoft and third-party BI tools for reports and visualizations in Azure Synapse compared to Netezza.
+++
+ms.devlang:
++++ Last updated : 05/24/2022++
+# Visualization and reporting for Netezza migrations
+
+This article is part four of a seven-part series that provides guidance on how to migrate from Netezza to Azure Synapse Analytics. This article provides best practices for visualization and reporting.
+
+## Accessing Azure Synapse Analytics using Microsoft and third-party BI tools
+
+Almost every organization accesses data warehouses and data marts using a range of BI tools and applications, such as:
+
+- Microsoft BI tools, like Power BI.
+
+- Office applications, like Microsoft Excel spreadsheets.
+
+- Third-party BI tools from various vendors.
+
+- Custom analytic applications that have embedded BI tool functionality inside the application.
+
+- Operational applications that request BI on demand, by invoking queries and reports as-a-service on a BI platform, which in turn queries data in the data warehouse or data marts that are being migrated.
+
+- Interactive data science development tools, such as Azure Synapse Spark Notebooks, Azure Machine Learning, RStudio, Jupyter notebooks.
+
+The migration of visualization and reporting as part of a data warehouse migration program means that all the existing queries, reports, and dashboards generated and issued by these tools and applications need to run on Azure Synapse and yield the same results as they did in the original data warehouse prior to migration.
+
+> [!TIP]
+> Existing users, user groups, roles and assignments of access security privileges need to be migrated first for migration of reports and visualizations to succeed.
+
+To make that happen, everything that BI tools and applications depend on needs to work once you migrate your data warehouse schema and data to Azure Synapse. That includes the obvious and the not so obvious&mdash;such as access and security. Access and security are important considerations for data access in the migrated system, and are specifically discussed in [another guide](3-security-access-operations.md) in this series. When you address access and security, ensure that:
+
+- Authentication is migrated to let users sign in to the data warehouse and data mart databases on Azure Synapse.
+
+- All users are migrated to Azure Synapse.
+
+- All user groups are migrated to Azure Synapse.
+
+- All roles are migrated to Azure Synapse.
+
+- All authorization privileges governing access control are migrated to Azure Synapse.
+
+- User, role, and privilege assignments are migrated to mirror what you had on your existing data warehouse before migration. For example:
+ - Database object privileges assigned to roles
+ - Roles assigned to user groups
+ - Users assigned to user groups and/or roles
+
+> [!TIP]
+> Communication and business user involvement is critical to success.
+
+In addition, all the required data needs to be migrated to ensure the same results appear in the same reports and dashboards that now query data on Azure Synapse. User expectation will undoubtedly be that migration is seamless and there will be no surprises that destroy their confidence in the migrated system on Azure Synapse. So, this is an area where you must take extreme care and communicate as much as possible to allay any fears in your user base. Their expectations are that:
+
+- Table structure will be the same if directly referred to in queries
+
+- Table and column names remain the same if directly referred to in queries; for instance, so that calculated fields defined on columns in BI tools don't fail when aggregate reports are produced
+
+- Historical analysis remains the same
+
+- Data types should, if possible, remain the same
+
+- Query behavior remains the same
+
+- ODBC / JDBC drivers are tested to make sure nothing has changed in terms of query behavior
+
+> [!TIP]
+> Views and SQL queries using proprietary SQL query extensions are likely to result in incompatibilities that impact BI reports and dashboards.
+
+If BI tools are querying views in the underlying data warehouse or data mart database, will these views still work? They might not: if those views contain proprietary SQL extensions, specific to your legacy data warehouse DBMS, that have no equivalent in Azure Synapse, you'll need to know about them and find a way to resolve them.
+
+Other issues like the behavior of nulls or data type variations across DBMS platforms need to be tested, in case they cause slightly different calculation results. Obviously, you want to minimize these issues and take all necessary steps to shield business users from any kind of impact. Depending on your legacy data warehouse system (such as Netezza), there are [tools](/azure/synapse-analytics/partner/data-integration) that can help hide these differences so that BI tools and applications are kept unaware of them and can run unchanged.
+
+> [!TIP]
+> Use repeatable tests to ensure reports, dashboards, and other visualizations migrate successfully.
+
+Testing is critical to visualization and report migration. You need a test suite and agreed-on test data to run and rerun tests in both environments. A test harness is also useful, and a few are mentioned later in this guide. In addition, it's also important to have significant business involvement in this area of migration to keep confidence high and to keep them engaged and part of the project.
+
+Finally, you may also be thinking about switching BI tools. For example, you might want to [migrate to Power BI](/power-bi/guidance/powerbi-migration-overview). The temptation is to do all of this at the same time, while migrating your schema, data, ETL processing, and more. However, to minimize risk, it's better to migrate to Azure Synapse first and get everything working before undertaking further modernization.
+
+If your existing BI tools run on premises, ensure that they're able to connect to Azure Synapse through your firewall to run comparisons against both environments. Alternatively, if the vendor of your existing BI tools offers their product on Azure, you can try it there. The same applies for applications running on premises that embed BI or that call your BI server on-demand, requesting a "headless report" with data returned in XML or JSON, for example.
+
+There's a lot to think about here, so let's look at all this in more detail.
+
+> [!TIP]
+> A lift-and-shift data warehouse migration is likely to minimize any disruption to reports, dashboards, and other visualizations.
+
+## Minimizing the impact of data warehouse migration on BI tools and reports using data virtualization
+
+> [!TIP]
+> Data virtualization allows you to shield business users from structural changes during migration so that they remain unaware of changes.
+
+The temptation during data warehouse migration to the cloud is to take the opportunity to make changes during the migration to fulfill long-term requirements, such as open business requests, missing data, new features, and more. However, if you're going to do that, it can affect BI tool business users and applications accessing your data warehouse, especially if it involves structural changes in your data model. Even if there are no new data structures because of new requirements, if you're considering adopting a different data modeling technique (like Data Vault) in your migrated data warehouse, you're likely to cause structural changes that impact BI reports and dashboards. If you want to adopt an agile data modeling technique, do so after migration. One way in which you can minimize the impact of things like schema changes on BI tools, users, and the reports they produce is to introduce data virtualization between BI tools and your data warehouse and data marts. The following diagram shows how data virtualization can hide the migration from users.
++
+This breaks the dependency between business users utilizing self-service BI tools and the physical schema of the underlying data warehouse and data marts that are being migrated.
+
+> [!TIP]
+> Schema alterations to tune your data model for Azure Synapse can be hidden from users.
+
+By introducing data virtualization, any schema alterations made during data warehouse and data mart migration to Azure Synapse (to optimize performance, for example) can be hidden from business users because they only access virtual tables in the data virtualization layer. If structural changes are needed, only the mappings between the data warehouse or data marts and any virtual tables would need to be changed, so that users remain unaware of those changes and unaware of the migration. [Microsoft partners](/azure/synapse-analytics/partner/data-integration) provide useful data virtualization software.
+
+## Identifying high priority reports to migrate first
+
+A key question when migrating your existing reports and dashboards to Azure Synapse is which ones to migrate first. Several factors can drive the decision. For example:
+
+- Business value
+
+- Usage
+
+- Ease of migration
+
+- Data migration strategy
+
+These factors are discussed in more detail later in this article.
+
+Whatever the decision is, it must involve the business, since they produce the reports and dashboards, and consume the insights these artifacts provide in support of the decisions that are made around your business. That said, if most reports and dashboards can be migrated seamlessly, with minimal effort, and offer like-for-like results simply by pointing your BI tool(s) at Azure Synapse instead of your legacy data warehouse system, then everyone benefits. Therefore, if it's that straightforward and there's no reliance on legacy system proprietary SQL extensions, then the ease-of-migration option described above breeds confidence.
+
+### Migrating reports based on usage
+
+Usage is interesting, since it's an indicator of business value. Reports and dashboards that are never used clearly aren't contributing to supporting any decisions and don't currently offer any value. So, do you have any mechanism for finding out which reports and dashboards are currently not used? Several BI tools provide statistics on usage, which would be an obvious place to start.
+
+If your legacy data warehouse has been up and running for many years, there's a high chance you could have hundreds, if not thousands, of reports in existence. In these situations, usage is an important indicator of the business value of a specific report or dashboard. In that sense, it's worth compiling an inventory of the reports and dashboards you have, defining their business purpose, and collecting their usage statistics.
+
+For those that aren't used at all, it's an appropriate time to seek a business decision on whether to decommission those reports to optimize your migration efforts. A key question worth asking when deciding to decommission unused reports is: are they unused because people don't know they exist, because they offer no business value, or because they've been superseded by others?
+
+### Migrating reports based on business value
+
+Usage on its own isn't a clear indicator of business value. There needs to be a deeper business context to determine the value to the business. In an ideal world, we would like to know the contribution of the insights produced in a report to the bottom line of the business. That's exceedingly difficult to determine, since every decision made, and its dependency on the insights in a specific report, would need to be recorded along with the contribution that each decision makes to the bottom line of the business. You would also need to do this over time.
+
+This level of detail is unlikely to be available in most organizations. One way in which you can get deeper on business value to drive migration order is to look at alignment with business strategy. A business strategy set by your executive typically lays out strategic business objectives, key performance indicators (KPIs), and KPI targets that need to be achieved, and who is accountable for achieving them. In that sense, classifying your reports and dashboards by strategic business objective&mdash;for example, reduce fraud, improve customer engagement, and optimize business operations&mdash;will help you understand business purpose and show which objectives specific reports and dashboards contribute to. Reports and dashboards associated with high priority objectives in the business strategy can then be highlighted so that migration is focused on delivering business value in a strategic high priority area.
+
+It's also worthwhile to classify reports and dashboards as operational, tactical, or strategic, to understand the level in the business where they're used. Contribution to strategic business objectives is required at all these levels. Knowing which reports and dashboards are used, at what level, and which objectives they're associated with, helps to focus migration on high priority business value that will drive the company forward. The business contribution of reports and dashboards is needed to understand this, perhaps captured like the following **Business strategy objective** table.
+
+| **Level** | **Report / dashboard name** | **Business purpose** | **Department used** | **Usage frequency** | **Business priority** |
+|-|-|-|-|-|-|
+| **Strategic** | | | | | |
+| **Tactical** | | | | | |
+| **Operational** | | | | | |
+
+While this may seem too time-consuming, you need a mechanism to understand the contribution of reports and dashboards to business value, whether you're migrating or not. Catalogs like Azure Data Catalog are becoming very important because they give you the ability to catalog reports and dashboards, automatically capture the metadata associated with them, and let business users tag and rate them to help you understand business value.
+
+### Migrating reports based on data migration strategy
+
+> [!TIP]
+> Data migration strategy could also dictate which reports and visualizations get migrated first.
+
+If your migration strategy is based on migrating "data marts first", clearly, the order of data mart migration will have a bearing on which reports and dashboards can be migrated first to run on Azure Synapse. Again, this is likely to be a business-value-related decision. Prioritizing which data marts are migrated first reflects business priorities. Metadata discovery tools can help you here by showing you which reports rely on data in which data mart tables.
+
+## Migration incompatibility issues that can impact reports and visualizations
+
+When it comes to migrating to Azure Synapse, there are several things that can impact the ease of migration for reports, dashboards, and other visualizations. The ease of migration is affected by:
+
+- Incompatibilities that occur during schema migration between your legacy data warehouse and Azure Synapse.
+
+- Incompatibilities in SQL between your legacy data warehouse and Azure Synapse.
+
+### The impact of schema incompatibilities
+
+> [!TIP]
+> Schema incompatibilities include legacy warehouse DBMS table types and data types that are unsupported on Azure Synapse.
+
+BI tool reports and dashboards, and other visualizations, are produced by issuing SQL queries that access physical tables and/or views in your data warehouse or data mart. When it comes to migrating your data warehouse or data mart schema to Azure Synapse, there may be incompatibilities that can impact reports and dashboards, such as:
+
+- Non-standard table types supported in your legacy data warehouse DBMS that don't have an equivalent in Azure Synapse.
+
+- Data types supported in your legacy data warehouse DBMS that don't have an equivalent in Azure Synapse.
+
+In many cases, where there are incompatibilities, there may be ways around them. For example, the data in unsupported table types can be migrated into a standard table with appropriate data types and indexed or partitioned on a date/time column. Similarly, it may be possible to represent unsupported data types in another type of column and perform calculations in Azure Synapse to achieve the same result. Either way, this will need refactoring.
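+
+For example, a standard Azure Synapse target table for such data might look like the following minimal sketch, which uses hypothetical table and column names with hash distribution and date partitioning:
+
+```sql
+-- Hypothetical target table: hash-distributed, columnstore, partitioned by date
+CREATE TABLE dbo.sales_history
+(
+    sale_id    BIGINT        NOT NULL,
+    store_id   INT           NOT NULL,
+    sale_date  DATE          NOT NULL,
+    amount     DECIMAL(18,2) NULL
+)
+WITH
+(
+    DISTRIBUTION = HASH(sale_id),
+    CLUSTERED COLUMNSTORE INDEX,
+    PARTITION (sale_date RANGE RIGHT FOR VALUES ('2023-01-01', '2024-01-01'))
+);
+```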
+
+> [!TIP]
+> Querying the system catalog of your legacy warehouse DBMS is a quick and straightforward way to identify schema incompatibilities with Azure Synapse.
+
+To identify reports and visualizations impacted by schema incompatibilities, run queries against the system catalog of your legacy data warehouse to identify tables with unsupported data types. Then use metadata from your BI tool or tools to identify reports that access these structures, to see what could be impacted. Obviously, this will depend on the legacy data warehouse DBMS you're migrating from. Find details of how to identify these incompatibilities in [Design and performance for Netezza migrations](1-design-performance-migration.md).
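+
+A minimal sketch of such a catalog query follows. It assumes the Netezza `_v_relation_column` catalog view and an illustrative, non-exhaustive list of data types; adjust view and column names for your Netezza version.
+
+```sql
+-- Find columns whose data types have no direct Azure Synapse equivalent
+SELECT NAME AS TABLE_NAME, ATTNAME AS COLUMN_NAME, FORMAT_TYPE
+FROM _V_RELATION_COLUMN
+WHERE UPPER(FORMAT_TYPE) LIKE 'INTERVAL%'
+   OR UPPER(FORMAT_TYPE) LIKE 'ST_GEOMETRY%'
+   OR UPPER(FORMAT_TYPE) LIKE 'TIME WITH TIME ZONE%'
+ORDER BY 1, 2;
+```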
+
+The impact may be less than you think, because many BI tools don't support such data types. As a result, views may already exist in your legacy data warehouse that `CAST` unsupported data types to more generic types.
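+
+Such a compatibility view might look like the following minimal sketch, created on the legacy warehouse with hypothetical object names, so that BI tools only ever see a portable type:
+
+```sql
+-- Expose an unsupported INTERVAL column as text for BI tools
+CREATE VIEW v_shipment_duration AS
+SELECT order_id,
+       CAST(ship_duration AS VARCHAR(30)) AS ship_duration_text
+FROM shipments;
+```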
+
+### The impact of SQL incompatibilities and differences
+
+Additionally, any report, dashboard, or other visualization in an application or tool that makes use of proprietary SQL extensions associated with your legacy data warehouse DBMS, is likely to be impacted when migrating to Azure Synapse. This could happen because the BI tool or application:
+
+- Accesses legacy data warehouse DBMS views that include proprietary SQL functions that have no equivalent in Azure Synapse.
+
+- Issues SQL queries, which include proprietary SQL functions peculiar to the SQL dialect of your legacy data warehouse DBMS, that have no equivalent in Azure Synapse.
+
+### Gauging the impact of SQL incompatibilities on your reporting portfolio
+
+You can't rely on documentation associated with reports, dashboards, and other visualizations to gauge how big of an impact SQL incompatibility may have on the portfolio of embedded query services, reports, dashboards, and other visualizations you're intending to migrate to Azure Synapse. There must be a more precise way of doing that.
+
+#### Using EXPLAIN statements to find SQL incompatibilities
+
+> [!TIP]
+> Gauge the impact of SQL incompatibilities by harvesting your DBMS log files and running `EXPLAIN` statements.
+
+One way is to get hold of the SQL log files of your legacy data warehouse. Use a script to pull out a representative set of SQL statements into a file, prefix each SQL statement with an `EXPLAIN` statement, and then run all the `EXPLAIN` statements in Azure Synapse. Any SQL statements containing proprietary SQL extensions from your legacy data warehouse that are unsupported will be rejected by Azure Synapse when the `EXPLAIN` statements are executed. This approach would at least give you an idea of how significant the use of incompatible SQL is.
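+
+For example, a harvested query prefixed with `EXPLAIN` (the table and column names below are hypothetical) is compiled but not executed by Azure Synapse, so unsupported syntax is rejected at this stage:
+
+```sql
+-- Returns an estimated plan without executing the statement;
+-- proprietary Netezza syntax with no Azure Synapse equivalent fails here
+EXPLAIN
+SELECT store_id, SUM(sale_amount) AS total_sales
+FROM dbo.fact_sales
+GROUP BY store_id;
+```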
+
+Metadata from your legacy data warehouse DBMS will also help you when it comes to views. Again, you can capture and view SQL statements, and `EXPLAIN` them as described previously to identify incompatible SQL in views.
+
+## Testing report and dashboard migration to Azure Synapse Analytics
+
+> [!TIP]
+> Test performance and tune to minimize compute costs.
+
+A key element in data warehouse migration is the testing of reports and dashboards against Azure Synapse to verify that the migration has worked. To do this, you need to define a series of tests and a set of required outcomes for each test that needs to be run to verify success. It's important to ensure that reports and dashboards are tested and compared across your existing and migrated data warehouse systems to:
+
+- Identify whether schema changes made during migration, such as data type conversions, have impacted reports in terms of their ability to run, their results, and the corresponding visualizations.
+
+- Verify all users are migrated.
+
+- Verify all roles are migrated and users assigned to those roles.
+
+- Verify all data access security privileges are migrated to ensure access control list (ACL) migration.
+
+- Ensure consistent results of all known queries, reports, and dashboards.
+
+- Ensure that data and ETL migration is complete and error free.
+
+- Ensure data privacy is upheld.
+
+- Test performance and scalability.
+
+- Test analytical functionality.
+
+For information about how to migrate users, user groups, roles, and privileges, see [Security, access, and operations for Netezza migrations](3-security-access-operations.md), which is part of this series of articles.
+
+> [!TIP]
+> Build an automated test suite to make tests repeatable.
+
+It's also best practice to automate testing as much as possible, to make each test repeatable and to allow a consistent approach to evaluating results. This works well for known regular reports, and could be managed via [Synapse pipelines](/azure/synapse-analytics/get-started-pipelines?msclkid=8f3e7e96cfed11eca432022bc07c18de) or [Azure Data Factory](/azure/data-factory/introduction?msclkid=2ccc66eccfde11ecaa58877e9d228779) orchestration. If you already have a suite of test queries in place for regression testing, you could use the testing tools to automate the post migration testing.
+
+> [!TIP]
+> Leverage tools that can compare metadata lineage to verify results.
+
+Ad-hoc analysis and reporting are more challenging and require a set of tests to be compiled to verify that results are consistent across your legacy data warehouse DBMS and Azure Synapse. If reports and dashboards are inconsistent, then having the ability to compare metadata lineage across the original and migrated systems is extremely valuable during migration testing, as it can highlight differences and pinpoint where they occurred when these aren't easy to detect. This is discussed in more detail later in this article.
+
+In terms of security, the best way to do this is to create roles, assign access privileges to roles, and then attach users to roles. To access your newly migrated data warehouse, set up an automated process to create new users, and to do role assignment. To detach users from roles, you can follow the same steps.
+
+It's also important to communicate the cut-over to all users, so they know what's changing and what to expect.
+
+## Analyzing lineage to understand dependencies between reports, dashboards, and data
+
+> [!TIP]
+> Having access to metadata and data lineage from reports all the way back to data source is critical for verifying that migrated reports are working correctly.
+
+A critical success factor in migrating reports and dashboards is understanding lineage. Lineage is metadata that shows the journey that data has taken, so you can see the path from the report/dashboard all the way back to where the data originates. It shows how data has gone from point to point, its location in the data warehouse and/or data mart, and where it's used&mdash;for example, in what reports. It helps you understand what happens to data as it travels through different data stores&mdash;files and database&mdash;different ETL pipelines, and into reports. If business users have access to data lineage, it improves trust, breeds confidence, and enables more informed business decisions.
+
+> [!TIP]
+> Tools that automate metadata collection and show end-to-end lineage in a multi-vendor environment are valuable when it comes to migration.
+
+In multi-vendor data warehouse environments, business analysts in BI teams may map out data lineage. For example, if you have Informatica for your ETL, Oracle for your data warehouse, and Tableau for reporting, each of which has its own metadata repository, figuring out where a specific data element in a report came from can be challenging and time consuming.
+
+To migrate seamlessly from a legacy data warehouse to Azure Synapse, end-to-end data lineage helps prove like-for-like migration when comparing reports and dashboards against your legacy environment. That means that metadata from several tools needs to be captured and integrated to show the end-to-end journey. Having access to tools that support automated metadata discovery and data lineage will let you see duplicate reports and ETL processes, and reports that rely on data sources that are obsolete, questionable, or even non-existent. With this information, you can reduce the number of reports and ETL processes that you migrate.
+
+You can also compare end-to-end lineage of a report in Azure Synapse against the end-to-end lineage, for the same report in your legacy data warehouse environment, to see if there are any differences that have occurred inadvertently during migration. This helps enormously with testing and verifying migration success.
+
+Data lineage visualization not only reduces time, effort, and error in the migration process, but also enables faster execution of the migration project.
+
+By leveraging automated metadata discovery and data lineage tools that can compare lineage, you can verify if a report is produced using data migrated to Azure Synapse and if it's produced in the same way as in your legacy environment. This kind of capability also helps you determine:
+
+- What data needs to be migrated to ensure successful report and dashboard execution on Azure Synapse
+
+- What transformations have been and should be performed to ensure successful execution on Azure Synapse
+
+- How to reduce report duplication
+
+This substantially simplifies the data migration process, because the business will have a better idea of the data assets it has and what needs to be migrated to enable a solid reporting environment on Azure Synapse.
+
+> [!TIP]
+> Azure Data Factory and several third-party ETL tools support lineage.
+
+Several ETL tools provide end-to-end lineage capability, and you may be able to make use of this via your existing ETL tool if you're continuing to use it with Azure Synapse. Microsoft [Synapse pipelines](/azure/synapse-analytics/get-started-pipelines?msclkid=8f3e7e96cfed11eca432022bc07c18de) and [Azure Data Factory](/azure/data-factory/introduction?msclkid=2ccc66eccfde11ecaa58877e9d228779) let you view lineage in mapping data flows. Also, [Microsoft partners](/azure/synapse-analytics/partner/data-integration) provide automated metadata discovery, data lineage, and lineage comparison tools.
+
+## Migrating BI tool semantic layers to Azure Synapse Analytics
+
+> [!TIP]
+> Some BI tools have semantic layers that simplify business user access to physical data structures in your data warehouse or data mart, like SAP Business Objects and IBM Cognos.
+
+Some BI tools have what is known as a semantic metadata layer. The role of this metadata layer is to simplify business user access to physical data structures in an underlying data warehouse or data mart database. It does this by providing high-level objects like dimensions, measures, hierarchies, calculated metrics, and joins. These objects use business terms familiar to business analysts and are mapped to the physical data structures in the data warehouse or data mart database.
+
+When it comes to data warehouse migration, changes to column names or table names may be forced upon you. For example, in Oracle, table names can have a "#". In Azure Synapse, the "#" is only allowed as a prefix to a table name to indicate a temporary table. Therefore, you may need to change a table name if migrating from Oracle. You may need to do rework to change mappings in such cases.
+
+A good way to get everything consistent across multiple BI tools is to create a universal semantic layer, using common data names for high-level objects like dimensions, measures, hierarchies, and joins, in a data virtualization server (as shown in the next diagram) that sits between applications, BI tools, and Azure Synapse. This allows you to set up everything once (instead of in every tool), including calculated fields, joins and mappings, and then point all BI tools at the data virtualization server.
+
+> [!TIP]
+> Use data virtualization to create a common semantic layer to guarantee consistency across all BI tools in an Azure Synapse environment.
+
+In this way, you get consistency across all BI tools, while at the same time breaking the dependency between BI tools and applications, and the underlying physical data structures in Azure Synapse. Use [Microsoft partners](/azure/synapse-analytics/partner/data-integration) on Azure to implement this. The following diagram shows how a common vocabulary in the Data Virtualization server lets multiple BI tools see a common semantic layer.
++
+## Conclusions
+
+> [!TIP]
+> Identify incompatibilities early to gauge the extent of the migration effort. Migrate your users, groups, roles, and privilege assignments. Only migrate the reports and visualizations that are used and are contributing to business value.
+
+In a lift-and-shift data warehouse migration to Azure Synapse, most reports and dashboards should migrate easily.
+
+However, issues can arise if data structures change, if data is stored in unsupported data types, or if access to data in the data warehouse or data mart is via a view that includes proprietary SQL that's unsupported in your Azure Synapse environment. You'll need to deal with those issues if they arise.
+
+You can't rely on documentation to find out where the issues are likely to be. Making use of `EXPLAIN` statements is a pragmatic and quick way to identify incompatibilities in SQL. Rework these to achieve similar results in Azure Synapse. In addition, it's recommended that you make use of automated metadata discovery and lineage tools to help you identify duplicate reports, reports that are no longer valid because they're using data from data sources that you no longer use, and to understand dependencies. Some of these tools help compare lineage to verify that reports running in your legacy data warehouse environment are produced identically in Azure Synapse.
+
+Don't migrate reports that you no longer use. BI tool usage data can help determine which ones aren't in use. For the visualizations and reports that you do want to migrate, migrate all users, user groups, roles, and privileges, and associate these reports with strategic business objectives and priorities to help you identify how report insights contribute to specific objectives. This is useful if you're using business value to drive your report migration strategy. If you're migrating by data store&mdash;data mart by data mart&mdash;then metadata will also help you identify which reports depend on which tables and views, so that you can focus on migrating those first.
+
+Finally, consider data virtualization to shield BI tools and applications from structural changes to the data warehouse and/or the data mart data model that may occur during migration. You can also use a common vocabulary with data virtualization to define a common semantic layer that guarantees consistent common data names, definitions, metrics, hierarchies, joins, and more across all BI tools and applications in a migrated Azure Synapse environment.
+
+## Next steps
+
+To learn more about minimizing SQL issues, see the next article in this series: [Minimizing SQL issues for Netezza migrations](5-minimize-sql-issues.md).
synapse-analytics 5 Minimize Sql Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/netezza/5-minimize-sql-issues.md
+
+ Title: "Minimizing SQL issues for Netezza migrations"
+description: Learn how to minimize the risk of SQL issues when migrating from Netezza to Azure Synapse.
+++
+ms.devlang:
++++ Last updated : 05/24/2022++
+# Minimizing SQL issues for Netezza migrations
+
+This article is part five of a seven-part series that provides guidance on how to migrate from Netezza to Azure Synapse Analytics. This article provides best practices for minimizing SQL issues.
+
+## Overview
+
+### Characteristics of Netezza environments
+
+> [!TIP]
+> Netezza pioneered the "data warehouse appliance" concept in the early 2000s.
+
+In 2003, Netezza released its first data warehouse appliance product. It reduced the cost of entry and improved the ease of use of massively parallel processing (MPP) techniques, enabling data processing at scale more efficiently than the existing mainframe or other MPP technologies available at the time. Since then, the product has evolved and has many installations among large financial institutions, telecommunications companies, and retailers. The original implementation used proprietary hardware, including field programmable gate arrays (FPGAs), and was accessible via an ODBC or JDBC network connection over TCP/IP.
+
+Most existing Netezza installations are on-premises, so many users are considering migrating some or all their Netezza data to Azure Synapse to gain the benefits of a move to a modern cloud environment.
+
+> [!TIP]
+> Many existing Netezza installations are data warehouses using a dimensional data model.
+
+Netezza technology is often used to implement a data warehouse, supporting complex analytic queries on large data volumes using SQL. Dimensional data models&mdash;star or snowflake schemas&mdash;are common, as is the implementation of data marts for individual departments.
+
+This combination of SQL and dimensional data models simplifies migration to Azure Synapse, since the basic concepts and SQL skills are transferable. The recommended approach is to migrate the existing data model as-is to reduce risk and time taken. Even if the eventual intention is to make changes to the data model (for example, moving to a Data Vault model), perform an initial as-is migration and then make changes within the Azure cloud environment, leveraging the performance, elastic scalability, and cost advantages there.
+
+While the SQL language has been standardized, individual vendors have in some cases implemented proprietary extensions. This document highlights potential SQL differences you may encounter while migrating from a legacy Netezza environment, and provides workarounds.
+
+### Use Azure Data Factory to implement a metadata-driven migration
+
+> [!TIP]
+> Automate the migration process by using Azure Data Factory capabilities.
+
+Automate and orchestrate the migration process by making use of the capabilities in the Azure environment. This approach also minimizes the migration's impact on the existing Netezza environment, which may already be running close to full capacity.
+
+Azure Data Factory is a cloud-based data integration service that allows creation of data-driven workflows in the cloud for orchestrating and automating data movement and data transformation. Using Data Factory, you can create and schedule data-driven workflows&mdash;called pipelines&mdash;that can ingest data from disparate data stores. It can process and transform data by using compute services such as Azure HDInsight Hadoop, Spark, Azure Data Lake Analytics, and Azure Machine Learning.
+
+By creating metadata to list the data tables to be migrated and their location, you can use the Data Factory facilities to manage and automate parts of the migration process. You can also use [Synapse pipelines](/azure/synapse-analytics/get-started-pipelines?msclkid=8f3e7e96cfed11eca432022bc07c18de).
+
+## SQL DDL differences between Netezza and Azure Synapse
+
+### SQL Data Definition Language (DDL)
+
+> [!TIP]
+> SQL DDL commands `CREATE TABLE` and `CREATE VIEW` have standard core elements but are also used to define implementation-specific options.
+
+The ANSI SQL standard defines the basic syntax for DDL commands such as `CREATE TABLE` and `CREATE VIEW`. These commands are used within both Netezza and Azure Synapse, but they've also been extended to allow definition of implementation-specific features such as indexing, table distribution and partitioning options.
+
+The following sections discuss Netezza-specific options to consider during a migration to Azure Synapse.
+
+### Table considerations
+
+> [!TIP]
+> Use existing indexes to give an indication of candidates for indexing in the migrated warehouse.
+
+When migrating tables between different technologies, only the raw data and its descriptive metadata get physically moved between the two environments. Other database elements from the source system, such as indexes and log files, aren't directly migrated as these may not be needed or may be implemented differently within the new target environment. For example, the `TEMPORARY` option within Netezza's `CREATE TABLE` syntax is equivalent to prefixing the table name with a "#" character in Azure Synapse.
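+
+As a hedged illustration of that equivalence (the table and column names are hypothetical), the two forms might look like this:
+
+```sql
+-- Netezza:
+-- CREATE TEMPORARY TABLE stg_customer (cust_id INTEGER, cust_name VARCHAR(100));
+
+-- Azure Synapse dedicated SQL pool: the "#" prefix marks a session-scoped temporary table.
+CREATE TABLE #stg_customer
+(
+    cust_id   INT,
+    cust_name VARCHAR(100)
+)
+WITH (HEAP, DISTRIBUTION = ROUND_ROBIN);
+```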
+
+It's important to understand where performance optimizations&mdash;such as indexes&mdash;were used in the source environment. This indicates where performance optimization can be added in the new target environment. For example, if zone maps were created in the source Netezza environment, this might indicate that a non-clustered index should be created in the migrated Azure Synapse database. Other native performance optimization techniques, such as table replication, may be more applicable than a straight like-for-like index creation.
+
+### Unsupported Netezza database object types
+
+> [!TIP]
+> Netezza-specific features can be replaced by Azure Synapse features.
+
+Netezza implements some database objects that aren't directly supported in Azure Synapse, but there are methods to achieve the same functionality within the new environment:
+
+- Zone Maps&mdash;In Netezza, zone maps are automatically created and maintained for some column types and are used at query time to restrict the amount of data to be scanned. Zone Maps are created on the following column types:
+ - `INTEGER` columns of length 8 bytes or less.
+ - Temporal columns. For instance, `DATE`, `TIME`, and `TIMESTAMP`.
+ - `CHAR` columns, if these are part of a materialized view and mentioned in the `ORDER BY` clause.
+
+ You can find out which columns have zone maps by using the `nz_zonemap` utility, which is part of the NZ Toolkit. Azure Synapse doesn't include zone maps, but you can achieve similar results by using other user-defined index types and/or partitioning.
+
+- Clustered Base Tables (CBTs)&mdash;In Netezza, CBTs are commonly used for fact tables, which can have billions of records. Scanning such a huge table requires a lot of processing time, since a full table scan might be needed to get relevant records. Organizing records via CBTs allows Netezza to group records in the same or nearby extents. This process also creates zone maps that improve performance by reducing the amount of data to be scanned.
+
+  In Azure Synapse, you can achieve a similar effect by using partitioning and/or other indexes, as shown in the example after this list.
+
+- Materialized views&mdash;Netezza supports materialized views and recommends creating one or more of these over large tables having many columns where only a few of those columns are regularly used in queries. The system automatically maintains materialized views when data in the base table is updated.
+
+ Azure Synapse supports materialized views, with the same functionality as Netezza.
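+
+As a hedged sketch of those workarounds (the table and column names are hypothetical), partitioning and a non-clustered index in an Azure Synapse dedicated SQL pool restrict the data scanned (the role zone maps and CBTs play in Netezza), while materialized views are supported natively:
+
+```sql
+-- Partitioning plus a non-clustered index restricts the data scanned,
+-- similar in effect to zone maps and clustered base tables in Netezza.
+CREATE TABLE dbo.FactSales
+(
+    sale_date    DATE          NOT NULL,
+    store_id     INT           NOT NULL,
+    product_id   INT           NOT NULL,
+    sales_amount DECIMAL(18,2) NOT NULL
+)
+WITH
+(
+    DISTRIBUTION = HASH(product_id),
+    CLUSTERED COLUMNSTORE INDEX,
+    PARTITION (sale_date RANGE RIGHT FOR VALUES ('2022-01-01', '2022-04-01', '2022-07-01'))
+);
+
+CREATE INDEX ix_FactSales_store ON dbo.FactSales (store_id);
+
+-- Materialized views are supported natively.
+CREATE MATERIALIZED VIEW dbo.mv_SalesByStore
+WITH (DISTRIBUTION = HASH(store_id))
+AS
+SELECT store_id, COUNT_BIG(*) AS row_count, SUM(sales_amount) AS total_sales
+FROM dbo.FactSales
+GROUP BY store_id;
+```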
+
+### Netezza data type mapping
+
+> [!TIP]
+> Assess the impact of unsupported data types as part of the preparation phase.
+
+Most Netezza data types have a direct equivalent in Azure Synapse. The following table shows these data types along with the recommended approach for mapping them.
+
+| Netezza Data Type | Azure Synapse Data Type |
+|--|-|
+| BIGINT | BIGINT |
+| BINARY VARYING(n) | VARBINARY(n) |
+| BOOLEAN | BIT |
+| BYTEINT | TINYINT |
+| CHARACTER VARYING(n) | VARCHAR(n) |
+| CHARACTER(n) | CHAR(n) |
+| DATE | DATE |
+| DECIMAL(p,s) | DECIMAL(p,s) |
+| DOUBLE PRECISION | FLOAT |
+| FLOAT(n) | FLOAT(n) |
+| INTEGER | INT |
+| INTERVAL | INTERVAL data types aren't currently directly supported in Azure Synapse but can be calculated using temporal functions such as DATEDIFF |
+| MONEY | MONEY |
+| NATIONAL CHARACTER VARYING(n) | NVARCHAR(n) |
+| NATIONAL CHARACTER(n) | NCHAR(n) |
+| NUMERIC(p,s) | NUMERIC(p,s) |
+| REAL | REAL |
+| SMALLINT | SMALLINT |
+| ST_GEOMETRY(n) | Spatial data types such as ST_GEOMETRY aren't currently supported in Azure Synapse, but the data could be stored as VARCHAR or VARBINARY |
+| TIME | TIME |
+| TIME WITH TIME ZONE | DATETIMEOFFSET |
+| TIMESTAMP | DATETIME |
+
+### Data Definition Language (DDL) generation
+
+> [!TIP]
+> Use existing Netezza metadata to automate the generation of `CREATE TABLE` and `CREATE VIEW` DDL for Azure Synapse.
+
+If necessary, edit existing Netezza `CREATE TABLE` and `CREATE VIEW` scripts to create the equivalent definitions with the modified data types described previously. Typically, this involves removing or modifying any extra Netezza-specific clauses such as `ORGANIZE ON`.
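+
+As a hedged example (the table is hypothetical), an existing Netezza definition might be edited for Azure Synapse as follows:
+
+```sql
+-- Original Netezza DDL:
+-- CREATE TABLE sales_fact
+-- (
+--     sale_date   DATE,
+--     customer_id INTEGER,
+--     amount      NUMERIC(18,2)
+-- )
+-- DISTRIBUTE ON (customer_id)
+-- ORGANIZE ON (sale_date);
+
+-- Edited Azure Synapse equivalent, with the Netezza-specific clauses replaced:
+CREATE TABLE dbo.sales_fact
+(
+    sale_date   DATE,
+    customer_id INT,
+    amount      NUMERIC(18,2)
+)
+WITH
+(
+    DISTRIBUTION = HASH(customer_id),
+    CLUSTERED COLUMNSTORE INDEX
+);
+```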
+
+However, all the information that specifies the current definitions of tables and views within the existing Netezza environment is maintained within system catalog tables. This is the best source of this information as it's guaranteed to be up to date and complete. Be aware that user-maintained documentation may not be in sync with the current table definitions.
+
+Access this information by using utilities such as `nz_ddl_table` and generate the `CREATE TABLE` DDL statements. Edit these statements for the equivalent tables in Azure Synapse.
+
+> [!TIP]
+> Third-party tools and services can automate data mapping tasks.
+
+There are [Microsoft partners](/azure/synapse-analytics/partner/data-integration) who offer tools and services to automate migration, including data-type mapping. Also, if a third-party ETL tool such as Informatica or Talend is already in use in the Netezza environment, that tool can implement any required data transformations.
+
+## SQL DML differences between Netezza and Azure Synapse
+
+### SQL Data Manipulation Language (DML)
+
+> [!TIP]
+> SQL DML commands `SELECT`, `INSERT` and `UPDATE` have standard core elements but may also implement different syntax options.
+
+The ANSI SQL standard defines the basic syntax for DML commands such as `SELECT`, `INSERT`, `UPDATE` and `DELETE`. Both Netezza and Azure Synapse use these commands, but in some cases there are implementation differences.
+
+The following sections discuss the Netezza-specific DML commands that you should consider during a migration to Azure Synapse.
+
+### SQL DML syntax differences
+
+Be aware of these differences in SQL Data Manipulation Language (DML) syntax between Netezza SQL and Azure Synapse when migrating (a combined example follows this list):
+
+- `STRPOS`: In Netezza, the `STRPOS` function returns the position of a substring within a string. The equivalent function in Azure Synapse is `CHARINDEX`, with the order of the arguments reversed. For example, `SELECT STRPOS('abcdef','def')...` in Netezza is equivalent to `SELECT CHARINDEX('def','abcdef')...` in Azure Synapse.
+
+- `AGE`: Netezza supports the `AGE` operator to give the interval between two temporal values, such as timestamps or dates. For example, `SELECT AGE('23-03-1956','01-01-2019') FROM...`. In Azure Synapse, `DATEDIFF` gives the interval. For example, `SELECT DATEDIFF(day, '1956-03-23','2019-01-01') FROM...`. Note the difference in date format ordering.
+
+- `NOW()`: Netezza uses `NOW()` to return the current timestamp. The equivalent in Azure Synapse is `CURRENT_TIMESTAMP`.
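+
+The following hedged snippets (the table and column names are hypothetical) show those rewrites side by side:
+
+```sql
+-- Netezza:        SELECT STRPOS(cust_name, 'Ltd') FROM customers;
+-- Azure Synapse (argument order reversed):
+SELECT CHARINDEX('Ltd', cust_name) FROM dbo.customers;
+
+-- Netezza:        SELECT AGE('23-03-1956', '01-01-2019') FROM customers;
+-- Azure Synapse:
+SELECT DATEDIFF(day, '1956-03-23', '2019-01-01') FROM dbo.customers;
+
+-- Netezza:        SELECT NOW();
+-- Azure Synapse:
+SELECT CURRENT_TIMESTAMP;
+```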
+
+### Functions, stored procedures, and sequences
+
+> [!TIP]
+> As part of the preparation phase, assess the number and type of non-data objects being migrated.
+
+When migrating from a mature legacy data warehouse environment such as Netezza, there are often elements other than simple tables and views that need to be migrated to the new target environment. Examples of this include functions, stored procedures, and sequences.
+
+As part of the preparation phase, create an inventory of the objects that need to be migrated and define the methods for handling them. Then assign an appropriate allocation of resources in the project plan.
+
+There may be facilities in the Azure environment that replace the functionality implemented as either functions or stored procedures in the Netezza environment. In this case, it's often more efficient to use the built-in Azure facilities rather than recoding the Netezza functions.
+
+> [!TIP]
+> Third-party products and services can automate migration of non-data elements.
+
+[Microsoft partners](/azure/synapse-analytics/partner/data-integration) offer tools and services that can automate the migration, including the mapping of data types. Also, third-party ETL tools, such as Informatica or Talend, that are already in use in the IBM Netezza environment can implement any required data transformations.
+
+See the following sections for more information on each of these elements.
+
+#### Functions
+
+As with most database products, Netezza supports system functions and user-defined functions within its SQL implementation. When migrating to another database platform such as Azure Synapse, common system functions are available and can be migrated without change. Some system functions may have slightly different syntax, but the required changes can be automated. Functions with no equivalent, such as arbitrary user-defined functions, may need to be recoded using the languages available in the target environment. Azure Synapse uses the popular Transact-SQL language to implement user-defined functions, whereas Netezza user-defined functions are coded in nzLua or C++.
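+
+As a hedged sketch (the function, its logic, and its name are hypothetical), a simple Netezza UDF might be recoded as a T-SQL scalar function like this:
+
+```sql
+-- T-SQL scalar user-defined function replacing a hypothetical Netezza nzLua UDF
+-- that calculates a net amount from a gross amount and a tax rate.
+CREATE FUNCTION dbo.fn_net_amount (@gross DECIMAL(18,2), @tax_rate DECIMAL(5,4))
+RETURNS DECIMAL(18,2)
+AS
+BEGIN
+    RETURN @gross * (1.0 - @tax_rate);
+END;
+```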
+
+#### Stored procedures
+
+Most modern database products allow for procedures to be stored within the database. Netezza provides the NZPLSQL language, which is based on Postgres PL/pgSQL. A stored procedure typically contains SQL statements and some procedural logic, and may return data or a status.
+
+Azure Synapse also supports stored procedures using T-SQL, so if you must migrate stored procedures, recode them accordingly.
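+
+A minimal hedged sketch of a recoded procedure (the procedure and table names are hypothetical):
+
+```sql
+-- T-SQL stored procedure in Azure Synapse replacing a hypothetical NZPLSQL procedure.
+CREATE PROCEDURE dbo.usp_purge_old_sales
+    @cutoff_date DATE
+AS
+BEGIN
+    -- Remove fact rows older than the supplied cutoff date.
+    DELETE FROM dbo.sales_fact
+    WHERE sale_date < @cutoff_date;
+END;
+
+-- Example invocation:
+-- EXEC dbo.usp_purge_old_sales @cutoff_date = '2020-01-01';
+```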
+
+#### Sequences
+
+In Netezza, a sequence is a named database object created via `CREATE SEQUENCE` that provides unique values via the `NEXT VALUE FOR` method. Use sequences to generate unique numbers for use as surrogate key values, such as primary key values.
+
+In Azure Synapse, there's no `CREATE SEQUENCE`. Sequences are handled by using [IDENTITY to create surrogate keys](/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-identity) or [managed identity](/azure/data-factory/data-factory-service-identity?tabs=data-factory), with SQL code used to create the next sequence number in a series.
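+
+For example, a hedged sketch (the table is hypothetical) of replacing a sequence-populated surrogate key with an `IDENTITY` column:
+
+```sql
+-- Netezza: CREATE SEQUENCE customer_seq; ... NEXT VALUE FOR customer_seq ...
+
+-- Azure Synapse: IDENTITY generates the surrogate key values.
+CREATE TABLE dbo.dim_customer
+(
+    customer_key  INT IDENTITY(1,1) NOT NULL,
+    customer_id   VARCHAR(20)       NOT NULL,
+    customer_name VARCHAR(100)
+)
+WITH (DISTRIBUTION = HASH(customer_id), CLUSTERED COLUMNSTORE INDEX);
+```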
+
+### Use [EXPLAIN](/sql/t-sql/queries/explain-transact-sql?msclkid=91233fc1cff011ec9dff597671b7ae97) to validate legacy SQL
+
+> [!TIP]
+> Find potential migration issues by using real queries from the existing system query logs.
+
+Capture some representative SQL statements from the legacy query history logs to evaluate legacy Netezza SQL for compatibility with Azure Synapse. Then prefix those queries with `EXPLAIN` and&mdash;assuming a 'like for like' migrated data model in Azure Synapse with the same table and column names&mdash;run those `EXPLAIN` statements in Azure Synapse. Any incompatible SQL will return an error. Use this information to determine the scale of the recoding task. This approach doesn't require data to be loaded into the Azure environment, only that the relevant tables and views have been created.
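+
+For example, a hedged sketch (the query and table are hypothetical) of validating one captured statement:
+
+```sql
+-- Prefix a captured legacy query with EXPLAIN and run it in Azure Synapse.
+-- Compatible SQL returns a query plan; incompatible SQL returns an error to be reworked.
+EXPLAIN
+SELECT store_id, SUM(sales_amount) AS total_sales
+FROM dbo.FactSales
+WHERE sale_date >= '2022-01-01'
+GROUP BY store_id;
+```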
+
+#### IBM Netezza to T-SQL mapping
+
+The following table shows the mapping from IBM Netezza data types to T-SQL data types compliant with Azure Synapse SQL:
+
+| IBM Netezza Data Type | Azure Synapse SQL Data Type |
+||--|
+| array    | *Not supported* |
+| bigint  | bigint |
+| binary large object \[(n\[K\|M\|G\])\] | nvarchar \[(n\|max)\] |
+| blob \[(n\[K\|M\|G\])\]  | nvarchar \[(n\|max)\] |
+| byte \[(n)\] | binary \[(n)\]\|varbinary(max) |
+| byteint    | smallint |
+| char varying \[(n)\] | varchar \[(n\|max)\] |
+| character varying \[(n)\] | varchar \[(n\|max)\] |
+| char \[(n)\] | char \[(n)\]\|varchar(max) |
+| character \[(n)\] | char \[(n)\]\|varchar(max) |
+| character large object \[(n\[K\|M\|G\])\] | varchar \[(n\|max)\] |
+| clob \[(n\[K\|M\|G\])\] | varchar \[(n\|max)\] |
+| dataset    | *Not supported* |
+| date  | date |
+| dec \[(p\[,s\])\]    | decimal \[(p\[,s\])\] |
+| decimal \[(p\[,s\])\]    | decimal \[(p\[,s\])\] |
+| double precision    | float(53) |
+| float \[(n)\]    | float \[(n)\] |
+| graphic \[(n)\] | nchar \[(n)\]\| varchar(max) |
+| interval  | *Not supported* |
+| json \[(n)\]  | nvarchar \[(n\|max)\] |
+| long varchar  | nvarchar(max) |
+| long vargraphic  | nvarchar(max) |
+| mbb  | *Not supported* |
+| mbr  | *Not supported* |
+| number \[((p\|\*)\[,s\])\]  | numeric \[(p\[,s\])\]  |
+| numeric \[(p \[,s\])\]  | numeric \[(p\[,s\])\]  |
+| period  | *Not supported* |
+| real  | real |
+| smallint  | smallint |
+| st_geometry    | *Not supported* |
+| time  | time |
+| time with time zone  | datetimeoffset |
+| timestamp  | datetime2  |
+| timestamp with time zone  | datetimeoffset |
+| varbyte  | varbinary \[(n\|max)\] |
+| varchar \[(n)\] | varchar \[(n)\] |
+| vargraphic \[(n)\] | nvarchar \[(n\|max)\] |
+| varray  | *Not supported* |
+| xml  | *Not supported* |
+| xmltype  | *Not supported* |
+
+## Summary
+
+Typical legacy Netezza installations are implemented in a way that makes migration to Azure Synapse straightforward. They use SQL for analytical queries on large data volumes, and their data is typically in some form of dimensional data model. These factors make them good candidates for migration to Azure Synapse.
+
+To minimize the task of migrating the actual SQL code, follow these recommendations:
+
+- Initial migration of the data warehouse should be as-is to minimize risk and time taken, even if the eventual final environment will incorporate a different data model such as Data Vault.
+
+- Understand the differences between Netezza SQL implementation and Azure Synapse.
+
+- Use metadata and query logs from the existing Netezza implementation to assess the impact of the differences and plan an approach to mitigate.
+
+- Automate the process wherever possible to minimize errors, risk, and time for the migration.
+
+- Consider using specialist [Microsoft partners](/azure/synapse-analytics/partner/data-integration) and services to streamline the migration.
+
+## Next steps
+
+To learn more about Microsoft and third-party tools, see the next article in this series: [Tools for Netezza data warehouse migration to Azure Synapse Analytics](6-microsoft-third-party-migration-tools.md).
synapse-analytics 6 Microsoft Third Party Migration Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/netezza/6-microsoft-third-party-migration-tools.md
+
+ Title: "Tools for Netezza data warehouse migration to Azure Synapse Analytics"
+description: Learn about Microsoft and third-party data and database migration tools that can help you migrate from Netezza to Azure Synapse.
+++
+ms.devlang:
++++ Last updated : 05/24/2022++
+# Tools for Netezza data warehouse migration to Azure Synapse Analytics
+
+This article is part six of a seven-part series that provides guidance on how to migrate from Netezza to Azure Synapse Analytics. This article provides best practices for Microsoft and third-party tools.
+
+## Data warehouse migration tools
+
+Migrating your existing data warehouse to Azure Synapse enables you to utilize:
+
+- A globally secure, scalable, low-cost, cloud-native, pay-as-you-use analytical database
+
+- The rich Microsoft analytical ecosystem on Azure, which consists of technologies that help you modernize your data warehouse once it's migrated and extend your analytical capabilities to drive new value
+
+Several tools from Microsoft and third-party partner vendors can help you migrate your existing data warehouse to Azure Synapse.
+
+They include:
+
+- Microsoft data and database migration tools
+
+- Third-party data warehouse automation tools to automate and document the migration to Azure Synapse
+
+- Third-party data warehouse migration tools to migrate schema and data to Azure Synapse
+
+- Third-party tools to minimize the impact on SQL differences between your existing data warehouse DBMS and Azure Synapse
+
+Let's look at these in more detail.
+
+## Microsoft data migration tools
+
+> [!TIP]
+> Data Factory includes tools to help migrate your data and your entire data warehouse to Azure.
+
+Microsoft offers several tools to help you migrate your existing data warehouse to Azure Synapse. They are:
+
+- Microsoft Azure Data Factory
+
+- Microsoft services for physical data transfer
+
+- Microsoft services for data ingestion
+
+### Microsoft Azure Data Factory
+
+Microsoft Azure Data Factory is a fully managed, pay-as-you-use, hybrid data integration service for highly scalable ETL and ELT processing. It uses Spark to process and analyze data in parallel and in memory to maximize throughput.
+
+> [!TIP]
+> Data Factory allows you to build scalable data integration pipelines code free.
+
+[Azure Data Factory connectors](/azure/data-factory/connector-overview?msclkid=00086e4acff211ec9263dee5c7eb6e69) connect to external data sources and databases and have templates for common data integration tasks. A visual, browser-based GUI enables non-programmers to create and run process pipelines to ingest, transform, and load data, while more experienced programmers can incorporate custom code if necessary, such as Python programs.
+
+> [!TIP]
+> Data Factory enables collaborative development between business and IT professionals.
+
+Data Factory is also an orchestration tool. It's the best Microsoft tool to automate the end-to-end migration process to reduce risk and make the migration process easily repeatable. The following diagram shows a Data Factory mapping data flow.
++
+The next screenshot shows a Data Factory wrangling data flow.
++
+With a few clicks, you can develop simple or comprehensive ETL and ELT processes without coding or maintenance. These processes ingest, move, prepare, transform, and process your data. Design and manage scheduling and triggers in Azure Data Factory to build an automated data integration and loading environment. Define, manage, and schedule PolyBase bulk data load processes in Data Factory.
+
+> [!TIP]
+> Data Factory includes tools to help migrate your data and your entire data warehouse to Azure.
+
+Use Data Factory to implement and manage a hybrid environment that includes on-premises, cloud, streaming and SaaS data&mdash;for example, from applications like Salesforce&mdash;in a secure and consistent way.
+
+A new capability in Data Factory is wrangling data flows. This opens up Data Factory to business users, allowing them to make use of the platform to visually discover, explore, and prepare data at scale without writing code. This easy-to-use Data Factory capability is like Microsoft Excel Power Query or Microsoft Power BI Dataflows, where business users prepare and integrate data in a self-service fashion using a spreadsheet-style user interface with drop-down transforms.
+
+Azure Data Factory is the recommended approach for implementing data integration and ETL/ELT processes for an Azure Synapse environment, especially if existing legacy processes need to be refactored.
+
+### Microsoft services for physical data transfer
+
+> [!TIP]
+> Microsoft offers a range of products and services to assist with data transfer.
+
+#### Azure ExpressRoute
+
+Azure ExpressRoute creates private connections between Azure data centers and infrastructure on your premises or in a collocation environment. ExpressRoute connections don't go over the public Internet, and they offer more reliability, faster speeds, and lower latencies than typical Internet connections. In some cases, using ExpressRoute connections to transfer data between on-premises systems and Azure can give you significant cost benefits.
+
+#### AzCopy
+
+[AzCopy](/azure/storage/common/storage-use-azcopy-v10) is a command line utility that copies files to Azure Blob Storage via a standard internet connection. You can use it to upload extracted, compressed, delimited text files before loading via PolyBase or native parquet reader (if the exported files are parquet) in a warehouse migration project. Individual files, file selections, and file directories can be uploaded.
+
+#### Azure Data Box
+
+Microsoft offers a service called Azure Data Box. This service writes data to be migrated to a physical storage device. This device is then shipped to an Azure data center and loaded into cloud storage. This service can be cost-effective for large volumes of data&mdash;for example, tens or hundreds of terabytes&mdash;or where network bandwidth isn't readily available. Azure Data Box is typically used for the one-off historical data load when migrating a large amount of data to Azure Synapse.
+
+Another service available is Data Box Gateway, a virtualized cloud storage gateway device that resides on your premises and sends your images, media, and other data to Azure. Use Data Box Gateway for one-off migration tasks or ongoing incremental data uploads.
+
+### Microsoft services for data ingestion
+
+#### COPY INTO
+
+The [COPY](/sql/t-sql/statements/copy-into-transact-sql) statement provides the most flexibility for high-throughput data ingestion into Azure Synapse Analytics. Refer to the list of capabilities that `COPY` offers for data ingestion.
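+
+A hedged example (the storage account, container, path, and table names are hypothetical) of loading delimited text files with `COPY`:
+
+```sql
+COPY INTO dbo.sales_fact
+FROM 'https://mystorageaccount.blob.core.windows.net/exports/sales/*.csv'
+WITH
+(
+    FILE_TYPE = 'CSV',
+    FIELDTERMINATOR = ',',
+    FIRSTROW = 2,                                  -- skip the header row
+    CREDENTIAL = (IDENTITY = 'Managed Identity')   -- authentication method varies by setup
+);
+```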
+
+#### PolyBase
+
+> [!TIP]
+> PolyBase can load data in parallel from Azure Blob Storage into Azure Synapse.
+
+PolyBase provides the fastest and most scalable method of loading bulk data into Azure Synapse. PolyBase leverages the MPP architecture to use parallel loading, to give the fastest throughput, and can read data from flat files in Azure Blob Storage or directly from external data sources and other relational databases via connectors.
+
+PolyBase can also directly read from files compressed with gzip&mdash;this reduces the physical volume of data moved during the load process. PolyBase supports popular data formats such as delimited text, ORC and Parquet.
+
+> [!TIP]
+> Invoke PolyBase from Azure Data Factory as part of a migration pipeline.
+
+PolyBase is tightly integrated with Azure Data Factory (described earlier in this article) to enable data load ETL/ELT processes to be rapidly developed and scheduled via a visual GUI, leading to higher productivity and fewer errors than hand-written code.
+
+PolyBase is the recommended data load method for Azure Synapse, especially for high-volume data. PolyBase loads data using the `CREATE TABLE AS` or `INSERT...SELECT` statements&mdash;CTAS achieves the highest possible throughput as it minimizes the amount of logging required. Compressed delimited text files are the most efficient input format. For maximum throughput, split very large input files into multiple smaller files and load these in parallel. For fastest loading to a staging table, define the target table as type `HEAP` and use round-robin distribution.
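+
+A hedged sketch of that pattern (the schema, table, external data source, and file format names are hypothetical, and the external data source and file format are assumed to already exist):
+
+```sql
+-- External table over compressed delimited export files in Azure Blob Storage.
+CREATE EXTERNAL TABLE ext.sales_fact
+(
+    sale_date   DATE,
+    customer_id INT,
+    amount      DECIMAL(18,2)
+)
+WITH
+(
+    LOCATION    = '/exports/sales/',
+    DATA_SOURCE = MyBlobStorage,   -- assumed: CREATE EXTERNAL DATA SOURCE already run
+    FILE_FORMAT = CsvGzipFormat    -- assumed: CREATE EXTERNAL FILE FORMAT already run
+);
+
+-- CTAS into a round-robin heap staging table for maximum load throughput.
+CREATE TABLE stg.sales_fact
+WITH (HEAP, DISTRIBUTION = ROUND_ROBIN)
+AS
+SELECT * FROM ext.sales_fact;
+```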
+
+There are some limitations in PolyBase. Rows to be loaded must be less than 1 MB in length. Fixed-width format or nested data such as JSON and XML aren't directly readable.
+
+## Microsoft partners to help you migrate your data warehouse to Azure Synapse Analytics
+
+In addition to tools that can help you with various aspects of data warehouse migration, there are several practiced [Microsoft partners](/azure/synapse-analytics/partner/data-integration) that can bring their expertise to help you move your legacy on-premises data warehouse platform to Azure Synapse.
+
+## Next steps
+
+To learn more about implementing modern data warehouses, see the next article in this series: [Beyond Netezza migration, implementing a modern data warehouse in Microsoft Azure](7-beyond-data-warehouse-migration.md).
synapse-analytics 7 Beyond Data Warehouse Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/netezza/7-beyond-data-warehouse-migration.md
+
+ Title: "Beyond Netezza migration, implementing a modern data warehouse in Microsoft Azure"
+description: Learn how a Netezza migration to Azure Synapse lets you integrate your data warehouse with the Microsoft Azure analytical ecosystem.
+++
+ms.devlang:
++++ Last updated : 05/24/2022++
+# Beyond Netezza migration, implementing a modern data warehouse in Microsoft Azure
+
+This article is part seven of a seven-part series that provides guidance on how to migrate from Netezza to Azure Synapse Analytics. This article provides best practices for implementing modern data warehouses.
+
+## Beyond data warehouse migration to Azure
+
+One of the key reasons to migrate your existing data warehouse to Azure Synapse is to utilize a globally secure, scalable, low-cost, cloud-native, pay-as-you-use analytical database. Azure Synapse also lets you integrate your migrated data warehouse with the complete Microsoft Azure analytical ecosystem to take advantage of, and integrate with, other Microsoft technologies that help you modernize your migrated data warehouse. This includes integrating with technologies like:
+
+- Azure Data Lake Storage&mdash;for cost effective data ingestion, staging, cleansing and transformation to free up data warehouse capacity occupied by fast growing staging tables
+
+- Azure Data Factory&mdash;for collaborative IT and self-service data integration [with connectors](/azure/data-factory/connector-overview) to cloud and on-premises data sources and streaming data
+
+- [The Common Data Model](/common-data-model/)&mdash;to share consistent trusted data across multiple technologies including:
+ - Azure Synapse
+ - Azure Synapse Spark
+ - Azure HDInsight
+ - Power BI
+ - SAP
+ - Adobe Customer Experience Platform
+ - Azure IoT
+ - Microsoft ISV Partners
+
+- [Microsoft's data science technologies](/azure/architecture/data-science-process/platforms-and-tools) including:
+ - Azure ML studio
+ - Azure Machine Learning Service
+ - Azure Synapse Spark (Spark as a service)
+ - Jupyter Notebooks
+ - RStudio
+ - ML.NET
+ - Visual Studio .NET for Apache Spark to enable data scientists to use Azure Synapse data to train machine learning models at scale.
+
+- [Azure HDInsight](/azure/hdinsight/)&mdash;to leverage big data analytical processing and join big data with Azure Synapse data by creating a Logical Data Warehouse using PolyBase
+
+- [Azure Event Hubs](/azure/event-hubs/event-hubs-about), [Azure Stream Analytics](/azure/stream-analytics/stream-analytics-introduction) and [Apache Kafka](/azure/databricks/spark/latest/structured-streaming/kafka)&mdash;to integrate with live streaming data from within Azure Synapse
+
+There's often acute demand to integrate with [Machine Learning](/azure/synapse-analytics/machine-learning/what-is-machine-learning) to enable custom built, trained machine learning models for use in Azure Synapse. This would enable in-database analytics to run at scale in-batch, on an event-driven basis and on-demand. The ability to exploit in-database analytics in Azure Synapse from multiple BI tools and applications also guarantees that all get the same predictions and recommendations.
+
+In addition, there's an opportunity to integrate Azure Synapse with Microsoft partner tools on Azure to shorten time to value.
+
+Let's look at these in more detail to understand how you can take advantage of the technologies in Microsoft's analytical ecosystem to modernize your data warehouse once you've migrated to Azure Synapse.
+
+## Offloading data staging and ETL processing to Azure Data Lake and Azure Data Factory
+
+Enterprises today have a key problem resulting from digital transformation. So much new data is being generated and captured for analysis, and much of this data is finding its way into data warehouses. A good example is transaction data created by opening online transaction processing (OLTP) systems to self-service access from mobile devices. These OLTP systems are the main sources of data to a data warehouse, and with customers now driving the transaction rate rather than employees, data in data warehouse staging tables has been growing rapidly in volume.
+
+The rapid influx of data into the enterprise, along with new sources of data like Internet of Things (IoT), means that companies need to find a way to deal with unprecedented data growth and scale data integration ETL processing beyond current levels. One way to do this is to offload ingestion, data cleansing, transformation and integration to a data lake and process it at scale there, as part of a data warehouse modernization program.
+
+> [!TIP]
+> Offload ELT processing to Azure Data Lake and still run at scale as your data volumes grow.
+
+Once you've migrated your data warehouse to Azure Synapse, Microsoft provides the ability to modernize your ETL processing by ingesting data into, and staging data in, Azure Data Lake Storage. You can then clean, transform and integrate your data at scale using Data Factory before loading it into Azure Synapse in parallel using PolyBase.
+
+### Microsoft Azure Data Factory
+
+> [!TIP]
+> Data Factory allows you to build scalable data integration pipelines code free.
+
+[Microsoft Azure Data Factory](https://azure.microsoft.com/services/data-factory/) is a pay-as-you-use, hybrid data integration service for highly scalable ETL and ELT processing. Data Factory provides a simple web-based user interface to build data integration pipelines, in a code-free manner that can:
+
+- Easily acquire data at scale. Pay only for what you use, and connect to on-premises, cloud, and SaaS-based data sources.
+
+- Ingest, move, clean, transform, integrate, and analyze cloud and on-premises data at scale, and take automatic action such as a recommendation, an alert, and more.
+
+- Seamlessly author, monitor and manage pipelines that span data stores both on-premises and in the cloud.
+
+- Enable pay-as-you-go scale out in alignment with customer growth.
+
+> [!TIP]
+> Data Factory can connect to on-premises, cloud, and SaaS data.
+
+All of this can be done without writing any code. However, adding custom code to Data Factory pipelines is also supported. The next screenshot shows an example Data Factory pipeline.
++
+> [!TIP]
+> Data Factory pipelines control the integration and analysis of data. Data Factory is enterprise-class data integration software aimed at IT professionals, with a data wrangling facility for business users.
+
+Implement Data Factory pipeline development from any of several places including:
+
+- Microsoft Azure portal
+
+- Microsoft Azure PowerShell
+
+- Programmatically from .NET and Python using a multi-language SDK
+
+- Azure Resource Manager (ARM) Templates
+
+- REST APIs
+
+Developers and data scientists who prefer to write code can easily author Data Factory pipelines in Java, Python, and .NET using the software development kits (SDKs) available for those programming languages. Data Factory pipelines can also be hybrid as they can connect, ingest, clean, transform and analyze data in on-premises data centers, Microsoft Azure, other clouds, and SaaS offerings.
+
+Once you develop Data Factory pipelines to integrate and analyze data, deploy those pipelines globally and schedule them to run in batch, invoke them on demand as a service, or run them in real time on an event-driven basis. A Data Factory pipeline can also run on one or more execution engines and monitor pipeline execution to ensure performance and track errors.
+
+#### Use cases
+
+> [!TIP]
+> Build data warehouses on Microsoft Azure.
+
+Data Factory can support multiple use cases, including:
+
+- Preparing, integrating, and enriching data from cloud and on-premises data sources to populate your migrated data warehouse and data marts on Microsoft Azure Synapse.
+
+- Preparing, integrating, and enriching data from cloud and on-premises data sources to produce training data for use in machine learning model development and in retraining analytical models.
+
+- Orchestrating data preparation and analytics to create predictive and prescriptive analytical pipelines for processing and analyzing data in batch, such as sentiment analytics, and either acting on the results of the analysis or populating your data warehouse with the results.
+
+- Preparing, integrating, and enriching data for data-driven business applications running on the Azure cloud on top of operational data stores like Azure Cosmos DB.
+
+> [!TIP]
+> Build training data sets in data science to develop machine learning models.
+
+#### Data sources
+
+Data Factory lets you use [connectors](/azure/data-factory/connector-overview) from both cloud and on-premises data sources. Agent software, known as a *self-hosted integration runtime*, securely accesses on-premises data sources and supports secure, scalable data transfer.
+
+#### Transforming data using Data Factory
+
+> [!TIP]
+> Professional ETL developers can use Data Factory mapping data flows to clean, transform and integrate data without the need to write code.
+
+Within a Data Factory pipeline, ingest, clean, transform, integrate, and, if necessary, analyze any type of data from these sources. This includes structured, semi-structured&mdash;such as JSON or Avro&mdash;and unstructured data.
+
+Professional ETL developers can use Data Factory mapping data flows to filter, split, join (many types), lookup, pivot, unpivot, sort, union, and aggregate data without writing any code. In addition, Data Factory supports surrogate keys, multiple write processing options such as insert, upsert, update, table recreation, and table truncation, and several types of target data stores&mdash;also known as sinks. ETL developers can also create aggregations, including time series aggregations that require a window to be placed on data columns.
+
+> [!TIP]
+> Data Factory supports the ability to automatically detect and manage schema changes in inbound data, such as in streaming data.
+
+Run mapping data flows that transform data as activities in a Data Factory pipeline. Include multiple mapping data flows in a single pipeline, if necessary. Break up challenging data transformation and integration tasks into smaller mapping dataflows that can be combined to handle the complexity and custom code added if necessary. In addition to this functionality, Data Factory mapping data flows include these abilities:
+
+- Define expressions to clean and transform data, compute aggregations, and enrich data. For example, these expressions can perform feature engineering on a date field to break it into multiple fields to create training data during machine learning model development. Construct expressions from a rich set of functions that include mathematical, temporal, split, merge, string concatenation, conditions, pattern match, replace, and many other functions.
+
+- Automatically handle schema drift so that data transformation pipelines can avoid being impacted by schema changes in data sources. This is especially important for streaming IoT data, where schema changes can happen without notice when devices are upgraded or when readings are missed by gateway devices collecting IoT data.
+
+- Partition data to enable transformations to run in parallel at scale.
+
+- Inspect data to view the metadata of a stream you're transforming.
+
+> [!TIP]
+> Data Factory can also partition data to enable ETL processing to run at scale.
+
+The next screenshot shows an example Data Factory mapping data flow.
++
+Data engineers can profile data quality and view the results of individual data transforms by switching on a debug capability during development.
+
+> [!TIP]
+> Data Factory pipelines are also extensible since Data Factory allows you to write your own code and run it as part of a pipeline.
+
+Extend Data Factory transformational and analytical functionality by adding a linked service containing your own code into a pipeline. For example, an Azure Synapse Spark Pool notebook containing Python code could use a trained model to score the data integrated by a mapping data flow.
+
+Store integrated data and any results from analytics included in a Data Factory pipeline in one or more data stores such as Azure Data Lake storage, Azure Synapse, or Azure HDInsight (Hive Tables). Invoke other activities to act on insights produced by a Data Factory analytical pipeline.
+
+#### Utilizing Spark to scale data integration
+
+Under the covers, Data Factory utilizes Azure Synapse Spark Pools&mdash;Microsoft's Spark-as-a-service offering&mdash;at run time to clean and integrate data on the Microsoft Azure cloud. This enables it to clean, integrate, and analyze high-volume and very high-velocity data (such as click stream data) at scale. Microsoft intends to execute Data Factory pipelines on other Spark distributions. In addition to executing ETL jobs on Spark, Data Factory can also invoke Pig scripts and Hive queries to access and transform data stored in Azure HDInsight.
+
+#### Linking self-service data prep and Data Factory ETL processing using wrangling data flows
+
+> [!TIP]
+> Data Factory support for wrangling data flows in addition to mapping data flows means that business and IT can work together on a common platform to integrate data.
+
+Another new capability in Data Factory is wrangling data flows. This lets business users (also known as citizen data integrators and data engineers) make use of the platform to visually discover, explore and prepare data at scale without writing code. This easy-to-use Data Factory capability is similar to Microsoft Excel Power Query or Microsoft Power BI Dataflows, where self-service data preparation business users use a spreadsheet-style UI with drop-down transforms to prepare and integrate data. The following screenshot shows an example Data Factory wrangling data flow.
++
+This differs from Excel and Power BI, as Data Factory wrangling data flows uses Power Query Online to generate M code and translate it into a massively parallel in-memory Spark job for cloud scale execution. The combination of mapping data flows and wrangling data flows in Data Factory lets IT professional ETL developers and business users collaborate to prepare, integrate, and analyze data for a common business purpose. The preceding Data Factory mapping data flow diagram shows how both Data Factory and Azure Synapse Spark Pool Notebooks can be combined in the same Data Factory pipeline. This allows IT and business to be aware of what each has created. Mapping data flows and wrangling data flows can then be available for reuse to maximize productivity and consistency and minimize reinvention.
+
+#### Linking Data and Analytics in Analytical Pipelines
+
+In addition to cleaning and transforming data, Data Factory can combine data integration and analytics in the same pipeline. Use Data Factory to create both data integration and analytical pipelines&mdash;the latter being an extension of the former. Drop an analytical model into a pipeline so that clean, integrated data can be stored to provide predictions or recommendations. Act on this information immediately or store it in your data warehouse to provide you with new insights and recommendations that can be viewed in BI tools.
+
+Models developed code-free with Azure ML Studio or with the Azure Machine Learning Service SDK using Azure Synapse Spark Pool Notebooks or using R in RStudio can be invoked as a service from within a Data Factory pipeline to batch score your data. Analysis happens at scale by executing Spark machine learning pipelines on Azure Synapse Spark Pool Notebooks.
+
+Store integrated data and any results from analytics included in a Data Factory pipeline in one or more data stores, such as Azure Data Lake storage, Azure Synapse, or Azure HDInsight (Hive Tables). Invoke other activities to act on insights produced by a Data Factory analytical pipeline.
+
+## A lake database to share consistent trusted data
+
+> [!TIP]
+> Microsoft has created a lake database to describe core data entities to be shared across the enterprise.
+
+A key objective in any data integration set-up is the ability to integrate data once and reuse it everywhere, not just in a data warehouse&mdash;for example, in data science. Reuse avoids reinvention and ensures consistent, commonly understood data that everyone can trust.
+
+> [!TIP]
+> Azure Data Lake is shared storage that underpins Microsoft Azure Synapse, Azure ML, Azure Synapse Spark, and Azure HDInsight.
+
+To achieve this goal, establish a set of common data names and definitions describing logical data entities that need to be shared across the enterprise&mdash;such as customer, account, product, supplier, orders, payments, returns, and so forth. Once this is done, IT and business professionals can use data integration software to create these common data assets and store them to maximize their reuse to drive consistency everywhere.
+
+> [!TIP]
+> Integrating data to create lake database logical entities in shared storage enables maximum reuse of common data assets.
+
+Microsoft has done this by creating a [lake database](/azure/synapse-analytics/database-designer/concepts-lake-database). The lake database is a common language for business entities that represents commonly used concepts and activities across a business. Azure Synapse Analytics provides industry-specific database templates to help standardize data in the lake. [Lake database templates](/azure/synapse-analytics/database-designer/concepts-database-templates) provide schemas for predefined business areas, enabling data to be loaded into a lake database in a structured way. The power comes when data integration software is used to create lake database common data assets. This results in self-describing trusted data that can be consumed by applications and analytical systems. Create a lake database in Azure Data Lake storage using Azure Data Factory, and consume it with Power BI, Azure Synapse Spark, Azure Synapse, and Azure ML. The following diagram shows a lake database used in Azure Synapse Analytics.
++
+## Integration with Microsoft data science technologies on Azure
+
+Another key requirement in modernizing your migrated data warehouse is to integrate it with Microsoft and third-party data science technologies on Azure to produce insights for competitive advantage. Let's look at what Microsoft offers in terms of machine learning and data science technologies and see how these can be used with Azure Synapse in a modern data warehouse environment.
+
+### Microsoft technologies for data science on Azure
+
+> [!TIP]
+> Develop machine learning models using a no/low code approach or from a range of programming languages like Python, R and .NET.
+
+Microsoft offers a range of technologies to build predictive analytical models using machine learning, analyze unstructured data using deep learning, and perform other kinds of advanced analytics. This includes:
+
+- Azure ML Studio
+
+- Azure Machine Learning Service
+
+- Azure Synapse Spark Pool Notebooks
+
+- ML.NET (API, CLI or .NET Model Builder for Visual Studio)
+
+- Visual Studio .NET for Apache Spark
+
+Data scientists can use RStudio (R) and Jupyter Notebooks (Python) to develop analytical models, or they can use other frameworks such as Keras or TensorFlow.
+
+#### Azure ML Studio
+
+Azure ML Studio is a fully managed cloud service that lets you easily build, deploy, and share predictive analytics via a drag-and-drop web-based user interface. The next screenshot shows an Azure Machine Learning studio user interface.
++
+#### Azure Machine Learning Service
+
+> [!TIP]
+> Azure Machine Learning Service provides an SDK for developing machine learning models using several open-source frameworks.
+
+Azure Machine Learning Service provides a software development kit (SDK) and services for Python to quickly prepare data, as well as train and deploy machine learning models. Use Azure Machine Learning Service from Azure notebooks (a Jupyter notebook service) and utilize open-source frameworks, such as PyTorch, TensorFlow, Spark MLlib (Azure Synapse Spark Pool Notebooks), or scikit-learn. Azure Machine Learning Service provides an AutoML capability that automatically identifies the most accurate algorithms to expedite model development. You can also use it to build machine learning pipelines that manage end-to-end workflow, programmatically scale on the cloud, and deploy models both to the cloud and the edge. Azure Machine Learning Service uses logical containers called workspaces, which can be either created manually from the Azure portal or created programmatically. These workspaces keep compute targets, experiments, data stores, trained machine learning models, docker images, and deployed services all in one place to enable teams to work together. Use Azure Machine Learning Service from Visual Studio with a Visual Studio for AI extension.
+
+> [!TIP]
+> Organize and manage related data stores, experiments, trained models, docker images and deployed services in workspaces.
+
+#### Azure Synapse Spark Pool Notebooks
+
+> [!TIP]
+> Azure Synapse Spark is Microsoft's dynamically scalable Spark-as-a-service offering, providing scalable execution of data preparation, model development, and deployed model execution.
+
+[Azure Synapse Spark Pool Notebooks](/azure/synapse-analytics/spark/apache-spark-development-using-notebooks?msclkid=cbe4b8ebcff511eca068920ea4bf16b9) is an Apache Spark service optimized to run on Azure which:
+
+- Allows data engineers to build and execute scalable data preparation jobs using Azure Data Factory
+
+- Allows data scientists to build and execute machine learning models at scale using notebooks written in languages such as Scala, R, Python, Java, and SQL; and to visualize results
+
+> [!TIP]
+> Azure Synapse Spark can access data in a range of Microsoft analytical ecosystem data stores on Azure.
+
+Jobs running in Azure Synapse Spark Pool Notebook can retrieve, process, and analyze data at scale from Azure Blob Storage, Azure Data Lake Storage, Azure Synapse, Azure HDInsight, and streaming data services such as Kafka.
+
+Autoscaling and auto-termination are also supported to reduce total cost of ownership (TCO). Data scientists can use the MLflow open-source framework to manage the machine learning lifecycle.
+
+#### ML.NET
+
+> [!TIP]
+> Microsoft has extended its machine learning capability to .NET developers.
+
+ML.NET is an open-source and cross-platform machine learning framework (Windows, Linux, macOS), created by Microsoft for .NET developers so that they can use existing tools&mdash;like .NET Model Builder for Visual Studio&mdash;to develop custom machine learning models and integrate them into .NET applications.
+
+#### Visual Studio .NET for Apache Spark
+
+Visual Studio .NET for Apache® Spark™ aims to make Spark accessible to .NET developers across all Spark APIs. It takes Spark support beyond R, Scala, Python, and Java to .NET. While initially only available on Apache Spark on HDInsight, Microsoft intends to make this available on Azure Synapse Spark Pool Notebook.
+
+### Utilizing Azure Analytics with your data warehouse
+
+> [!TIP]
+> Train, test, evaluate, and execute machine learning models at scale on Azure Synapse Spark Pool Notebook using data in your Azure Synapse.
+
+Combine machine learning models built using these tools with Azure Synapse by:
+
+- Using machine learning models in batch mode or in real time to produce new insights, and add them to what you already know in Azure Synapse.
+
+- Using the data in Azure Synapse to develop and train new predictive models for deployment elsewhere, such as in other applications.
+
+- Deploying machine learning models&mdash;including those trained elsewhere&mdash;in Azure Synapse to analyze data in the data warehouse and drive new business value.
+
+> [!TIP]
+> Produce new insights using machine learning on Azure in batch or in real-time and add to what you know in your data warehouse.
+
+In terms of machine learning model development, data scientists can use RStudio, Jupyter notebooks, and Azure Synapse Spark Pool notebooks together with Microsoft Azure Machine Learning Service to develop machine learning models that run at scale on Azure Synapse Spark Pool Notebooks using data in Azure Synapse. For example, they could create an unsupervised model to segment customers for use in driving different marketing campaigns. Use supervised machine learning to train a model to predict a specific outcome, such as predicting a customer's propensity to churn, or recommending the next best offer for a customer to try to increase their value. The next diagram shows how Azure Synapse Analytics can be leveraged for Machine Learning.
++
+In addition, you can ingest big data&mdash;such as social network data or review website data&mdash;into Azure Data Lake, then prepare and analyze it at scale on Azure Synapse Spark Pool Notebook, using natural language processing to score sentiment about your products or your brand. Add these scores to your data warehouse to understand the impact of&mdash;for example&mdash;negative sentiment on product sales, and to leverage big data analytics to add to what you already know in your data warehouse.
+
+## Integrating live streaming data into Azure Synapse Analytics
+
+When analyzing data in a modern data warehouse, you must be able to analyze streaming data in real time and join it with historical data in your data warehouse. An example of this would be combining IoT data with product or asset data.
+
+> [!TIP]
+> Integrate your data warehouse with streaming data from IoT devices or clickstream.
+
+Once you've successfully migrated your data warehouse to Azure Synapse, you can introduce this capability as part of a data warehouse modernization exercise. Do this by taking advantage of additional functionality in Azure Synapse.
+
+> [!TIP]
+> Ingest streaming data into Azure Data Lake Storage from Microsoft Event Hub or Kafka, and access it from Azure Synapse using PolyBase external tables.
+
+To do this, ingest streaming data via Microsoft Event Hubs or other technologies, such as Kafka, using Azure Data Factory (or using an existing ETL tool if it supports the streaming data sources) and land it in Azure Data Lake Storage (ADLS). Next, create an external table in Azure Synapse using PolyBase and point it at the data being streamed into Azure Data Lake. Your migrated data warehouse will now contain new tables that provide access to real-time streaming data. Query this external table as if the data was in the data warehouse, via standard T-SQL from any BI tool that has access to Azure Synapse. You can also join this data to other tables containing historical data and create views that join live streaming data to historical data, making it easier for business users to access. In the following diagram, a real-time data warehouse on Azure Synapse Analytics is integrated with streaming data in Azure Data Lake.
+
+## Creating a logical data warehouse using PolyBase
+
+> [!TIP]
+> PolyBase simplifies business user access to multiple underlying analytical data stores on Azure.
+
+PolyBase offers the capability to create a logical data warehouse to simplify user access to multiple analytical data stores.
+
+This is attractive because many companies have adopted 'workload optimized' analytical data stores over the last several years in addition to their data warehouses. Examples of these platforms on Azure include:
+
+- Azure Data Lake Storage with Azure Synapse Spark Pool Notebook (Spark-as-a-service), for big data analytics
+
+- Azure HDInsight (Hadoop as-a-service), also for big data analytics
+
+- NoSQL Graph databases for graph analysis, which could be done in Azure Cosmos DB
+
+- Azure Event Hubs and Azure Stream Analytics, for real-time analysis of data in motion
+
+You may have non-Microsoft equivalents of some of these. You may also have a master data management (MDM) system that needs to be accessed for consistent trusted data on customers, suppliers, products, assets, and more.
+
+These additional analytical platforms have emerged because of the explosion of new data sources&mdash;both inside and outside the enterprises&mdash;that business users want to capture and analyze. Examples include:
+
+- Machine generated data, such as IoT sensor data and clickstream data.
+
+- Human generated data, such as social network data, review web site data, customer in-bound email, image, and video.
+
+- Other external data, such as open government data and weather data.
+
+This data is over and above the structured transaction data and master data sources that typically feed data warehouses. These new data sources include semi-structured data (like JSON, XML, or Avro) or unstructured data (like text, voice, image, or video), which is more complex to process and analyze. This data could be very high volume, high velocity, or both.
+
+As a result, the need for new kinds of more complex analysis has emerged, such as natural language processing, graph analysis, deep learning, streaming analytics, or complex analysis of large volumes of structured data. All of this is typically not happening in a data warehouse, so it's not surprising to see different analytical platforms for different types of analytical workloads, as shown in this diagram.
+
+Since these platforms are producing new insights, it's normal to see a requirement to combine these insights with what you already know in Azure Synapse. That's what PolyBase makes possible.
+
+> [!TIP]
+> The ability to make data in multiple analytical data stores look like it's all in one system and join it to Azure Synapse is known as a logical data warehouse architecture.
+
+By leveraging PolyBase data virtualization inside Azure Synapse, you can implement a logical data warehouse. Join data in Azure Synapse to data in other Azure and on-premises analytical data stores&mdash;like Azure HDInsight or Cosmos DB&mdash;or to streaming data flowing into Azure Data Lake storage from Azure Stream Analytics and Event Hubs. Users access external tables in Azure Synapse, unaware that the data they're accessing is stored in multiple underlying analytical systems. The next diagram shows the complex data warehouse structure accessed through comparatively simpler but still powerful user interface methods.
+
+The previous diagram shows how other technologies of the Microsoft analytical ecosystem can be combined with the capability of Azure Synapse logical data warehouse architecture. For example, data can be ingested into Azure Data Lake Storage (ADLS) and curated using Azure Data Factory to create trusted data products that represent Microsoft [lake database](/azure/synapse-analytics/database-designer/concepts-lake-database) logical data entities. This trusted, commonly understood data can then be consumed and reused in different analytical environments such as Azure Synapse, Azure Synapse Spark Pool Notebooks, or Azure Cosmos DB. All insights produced in these environments are accessible via a logical data warehouse data virtualization layer made possible by PolyBase.
+
+> [!TIP]
+> A logical data warehouse architecture simplifies business user access to data and adds new value to what you already know in your data warehouse.
+
+## Conclusions
+
+> [!TIP]
+> Migrating your data warehouse to Azure Synapse lets you make use of a rich Microsoft analytical ecosystem running on Azure.
+
+Once you migrate your data warehouse to Azure Synapse, you can leverage other technologies in the Microsoft analytical ecosystem. You can not only modernize your data warehouse, but also combine insights produced in other Azure analytical data stores into an integrated analytical architecture.
+
+Broaden your ETL processing to ingest data of any type into Azure Data Lake Storage. Prepare and integrate it at scale using Azure Data Factory to produce trusted, commonly understood data assets that can be consumed by your data warehouse and accessed by data scientists and other applications. Build real-time and batch-oriented analytical pipelines and create machine learning models that run in batch, in real time on streaming data, and on demand as a service.
+
+Leverage PolyBase and `COPY INTO` to go beyond your data warehouse. Simplify access to insights from multiple underlying analytical platforms on Azure by creating holistic integrated views in a logical data warehouse. Easily access streaming, big data, and traditional data warehouse insights from BI tools and applications to drive new value in your business.
+
+## Next steps
+
+To learn more about migrating to a dedicated SQL pool, see [Migrate a data warehouse to a dedicated SQL pool in Azure Synapse Analytics](../migrate-to-synapse-analytics-guide.md).
synapse-analytics 1 Design Performance Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/1-design-performance-migration.md
+
+ Title: "Design and performance for Teradata migrations"
+description: Learn how Teradata and Azure Synapse SQL databases differ in their approach to high query performance on exceptionally large data volumes.
+
+ Last updated : 05/24/2022
+
+# Design and performance for Teradata migrations
+
+This article is part one of a seven-part series that provides guidance on how to migrate from Teradata to Azure Synapse Analytics. This article provides best practices for design and performance.
+
+## Overview
+
+> [!TIP]
+> More than just a database&mdash;the Azure environment includes a comprehensive set of capabilities and tools.
+
+Many existing users of Teradata data warehouse systems want to take advantage of the innovations provided by newer environments such as cloud, IaaS, or PaaS, and to delegate tasks like infrastructure maintenance and platform development to the cloud provider.
+
+Although Teradata and Azure Synapse are both SQL databases designed to use massively parallel processing (MPP) techniques to achieve high query performance on exceptionally large data volumes, there are some basic differences in approach:
+
+- Legacy Teradata systems are often installed on-premises and use proprietary hardware, while Azure Synapse is cloud based and uses Azure storage and compute resources.
+
+- Since storage and compute resources are separate in the Azure environment, these resources can be scaled upwards and downwards independently, leveraging the elastic scaling capability.
+
+- Azure Synapse can be paused or resized as required to reduce resource utilization and cost.
+
+- Upgrading a Teradata configuration is a major task involving additional physical hardware and potentially lengthy database reconfiguration or reload.
+
+Microsoft Azure is a globally available, highly secure, scalable cloud environment that includes Azure Synapse and an ecosystem of supporting tools and capabilities. The next diagram summarizes the Azure Synapse ecosystem.
+
+> [!TIP]
+> Azure Synapse gives best-of-breed performance and price-performance in independent benchmarks.
+
+Azure Synapse provides best-of-breed relational database performance by using techniques such as massively parallel processing (MPP) and multiple levels of automated caching for frequently used data. See the results of this approach in independent benchmarks such as the one run recently by [GigaOm](https://research.gigaom.com/report/data-warehouse-cloud-benchmark/), which compares Azure Synapse to other popular cloud data warehouse offerings. Customers who have migrated to this environment have seen many benefits including:
+
+- Improved performance and price/performance.
+
+- Increased agility and shorter time to value.
+
+- Faster server deployment and application development.
+
+- Elastic scalability&mdash;only pay for actual usage.
+
+- Improved security/compliance.
+
+- Reduced storage and disaster recovery costs.
+
+- Lower overall TCO and better cost control (OPEX).
+
+To maximize these benefits, migrate new or existing data and applications to the Azure Synapse platform. In many organizations, this will include migrating an existing data warehouse from legacy on-premises platforms such as Teradata. At a high level, the basic process includes these steps:
+
+This article looks at schema migration, with the goal of achieving equivalent or better performance for your migrated Teradata data warehouse and data marts on Azure Synapse. It applies specifically to migrations from an existing Teradata environment.
+
+## Design considerations
+
+### Migration scope
+
+> [!TIP]
+> Create an inventory of objects to be migrated and document the migration process.
+
+#### Preparation for migration
+
+When migrating from a Teradata environment, there are some specific topics to consider in addition to the more general subjects described in this article.
+
+#### Choosing the workload for the initial migration
+
+Legacy Teradata environments have typically evolved over time to encompass multiple subject areas and mixed workloads. When deciding where to start on an initial migration project, choose an area that can:
+
+- Prove the viability of migrating to Azure Synapse by quickly delivering the benefits of the new environment.
+
+- Allow the in-house technical staff to gain relevant experience with the processes and tools involved, which can then be applied to migrations of other areas.
+
+- Create a template for further migrations specific to the source Teradata environment and the current tools and processes that are already in place.
+
+A good candidate for an initial migration from the Teradata environment that would enable the preceding items is typically one that implements a BI/Analytics workload (rather than an OLTP workload), with a data model that can be migrated with minimal modifications&mdash;normally a star or snowflake schema.
+
+The migration data volume for the initial exercise should be large enough to demonstrate the capabilities and benefits of the Azure Synapse environment while quickly demonstrating the value&mdash;typically in the 1-10TB range.
+
+To minimize the risk and reduce implementation time for the initial migration project, confine the scope of the migration to just the data marts, such as the OLAP DB part of a Teradata warehouse. However, this won't address broader topics such as ETL migration and historical data migration. Address these topics in later phases of the project, once the migrated data mart layer is backfilled with the data and processes required to build them.
+
+#### Lift and shift as-is versus a phased approach incorporating changes
+
+> [!TIP]
+> 'Lift and shift' is a good starting point, even if subsequent phases will implement changes to the data model.
+
+Whatever the drive and scope of the intended migration, there are&mdash;broadly speaking&mdash;two types of migration:
+
+##### Lift and shift
+
+In this case, the existing data model&mdash;such as a star schema&mdash;is migrated unchanged to the new Azure Synapse platform. The emphasis is on minimizing risk and the migration time required by reducing the work needed to realize the benefits of moving to the Azure cloud environment.
+
+This is a good fit for existing Teradata environments where a single data mart is being migrated, or where the data is already in a well-designed star or snowflake schema&mdash;or there are other pressures to move to a more modern cloud environment.
+
+##### Phased approach incorporating modifications
+
+In cases where a legacy warehouse has evolved over a long time, you may need to reengineer to maintain the required performance levels or to support new data like IoT streams. Migrate to Azure Synapse to get the benefits of a scalable cloud environment as part of the re-engineering process. Migration could include a change in the underlying data model, such as a move from an Inmon model to a data vault.
+
+Microsoft recommends moving the existing data model as-is to Azure (optionally using a VM Teradata instance in Azure) and using the performance and flexibility of the Azure environment to apply the re-engineering changes, leveraging Azure's capabilities to make the changes without impacting the existing source system.
+
+#### Using a VM Teradata instance as part of a migration
+
+> [!TIP]
+> Use Azure's VM capability to create a temporary Teradata instance to speed up migration and minimize impact on the source system.
+
+When migrating from an on-premises Teradata environment, you can leverage the Azure environment. Azure provides cheap cloud storage and elastic scalability to create a Teradata instance within a VM in Azure, collocated with the target Azure Synapse environment.
+
+With this approach, standard Teradata utilities such as Teradata Parallel Data Transporter can efficiently move the subset of Teradata tables being migrated onto the VM instance. Then, all migration tasks can take place within the Azure environment. This approach has several benefits:
+
+- After the initial replication of data, the source system isn't impacted by the migration tasks
+
+- The familiar Teradata interfaces, tools, and utilities are available within the Azure environment
+
+- Once in the Azure environment, there are no potential issues with network bandwidth availability between the on-premises source system and the cloud target system
+
+- Tools like Azure Data Factory can efficiently call utilities like Teradata Parallel Transporter to migrate data quickly and easily
+
+- The migration process is orchestrated and controlled entirely within the Azure environment, keeping everything in a single place
+
+#### Use Azure Data Factory to implement a metadata-driven migration
+
+Automate and orchestrate the migration process by making use of the capabilities in the Azure environment. This approach minimizes the impact on the existing Teradata environment, which may already be running close to full capacity.
+
+Data Factory is a cloud-based data integration service that allows creation of data-driven workflows in the cloud for orchestrating and automating data movement and data transformation. Using Data Factory, you can create and schedule data-driven workflows&mdash;called pipelines&mdash;to ingest data from disparate data stores. It can process and transform data by using compute services such as Azure HDInsight Hadoop, Spark, Azure Data Lake Analytics, and Azure Machine Learning.
+
+By creating metadata to list the data tables to be migrated and their location, you can use the Data Factory facilities to manage the migration process.
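+
+As an illustration, the metadata could live in a simple control table along the following lines; the table name and columns are hypothetical, not a Data Factory requirement.
+
+```sql
+-- Illustrative control table. A Data Factory pipeline can read these rows
+-- (for example, with a Lookup activity feeding a ForEach activity) and copy
+-- each listed table from Teradata into Azure Synapse.
+CREATE TABLE dbo.MigrationControl (
+    SourceDatabase   VARCHAR(128)  NOT NULL,
+    SourceTable      VARCHAR(128)  NOT NULL,
+    TargetSchema     VARCHAR(128)  NOT NULL,
+    TargetTable      VARCHAR(128)  NOT NULL,
+    ExtractPredicate VARCHAR(4000) NULL,          -- optional WHERE clause for incremental extracts
+    MigrationStatus  VARCHAR(20)   NOT NULL DEFAULT 'Pending',
+    LastRunTime      DATETIME2     NULL
+);
+```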
+
+### Design differences between Teradata and Azure Synapse
+
+#### Multiple databases versus a single database and schemas
+
+> [!TIP]
+> Combine multiple databases into a single database in Azure Synapse and use schemas to logically separate the tables.
+
+In a Teradata environment, there are often multiple separate databases for individual parts of the overall environment. For example, there may be a separate database for data ingestion and staging tables, a database for the core warehouse tables, and another database for data marts, sometimes called a semantic layer. ETL/ELT pipelines that process these databases may implement cross-database joins and move data between them.
+
+Querying within the Azure Synapse environment is limited to a single database. Schemas are used to separate the tables into logically separate groups. Therefore, we recommend using a series of schemas within the target Azure Synapse database to mimic any separate databases migrated from the Teradata environment (a short sketch follows the list below). If the Teradata environment already uses schemas, you may need to use a new naming convention to move the existing Teradata tables and views to the new environment&mdash;for example, concatenate the existing Teradata schema and table names into the new Azure Synapse table name, and use schema names in the new environment to maintain the original separate database names. Consolidated schema names can contain dots, but Azure Synapse Spark may have issues with them. You can use SQL views over the underlying tables to maintain the logical structures, but there are some potential downsides to this approach:
+
+- Views in Azure Synapse are read-only, so any updates to the data must take place on the underlying base tables.
+
+- There may already be one or more layers of views in existence, and adding an extra layer of views might impact performance and supportability as nested views are difficult to troubleshoot.
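+
+A minimal sketch of this schema consolidation, with hypothetical database, schema, and table names: the Teradata databases STAGING, EDW, and MART become schemas within a single Azure Synapse database.
+
+```sql
+-- One schema per original Teradata database (each CREATE SCHEMA in its own batch).
+CREATE SCHEMA staging;
+GO
+CREATE SCHEMA edw;
+GO
+CREATE SCHEMA mart;
+GO
+
+-- A table that lived in the Teradata MART database keeps its name but now
+-- sits in the mart schema of the single Azure Synapse database.
+CREATE TABLE mart.SalesSummary (
+    SalesDate   DATE          NOT NULL,
+    StoreId     INT           NOT NULL,
+    SalesAmount DECIMAL(18,2) NOT NULL
+)
+WITH (DISTRIBUTION = ROUND_ROBIN, CLUSTERED COLUMNSTORE INDEX);
+```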
+
+#### Table considerations
+
+> [!TIP]
+> Use existing indexes to indicate candidates for indexing in the migrated warehouse.
+
+When migrating tables between different technologies, only the raw data and the metadata that describes it gets physically moved between the two environments. Other database elements from the source system&mdash;such as indexes&mdash;aren't migrated, as these may not be needed or may be implemented differently within the new target environment.
+
+However, it's important to understand where performance optimizations such as indexes have been used in the source environment, as this can indicate where to add performance optimization in the new target environment. For example, if a NUSI (Non-unique secondary index) has been created within the source Teradata environment, it may indicate that a non-clustered index should be created within the migrated Azure Synapse. Other native performance optimization techniques, such as table replication, may be more applicable than a straight 'like for like' index creation.
+
+#### High availability for the database
+
+Teradata supports data replication across nodes via the FALLBACK option, where table rows that reside physically on a given node are replicated to another node within the system. This approach guarantees that data won't be lost if there's a node failure and provides the basis for failover scenarios.
+
+The goal of the high availability architecture in Azure SQL Database is to guarantee that your database is up and running 99.9% of time, without worrying about the impact of maintenance operations and outages. Azure automatically handles critical servicing tasks such as patching, backups, and Windows and SQL upgrades, as well as unplanned events such as underlying hardware, software, or network failures.
+
+Data storage in Azure Synapse is automatically [backed up](/azure/synapse-analytics/sql-data-warehouse/backup-and-restore) with snapshots. These snapshots are a built-in feature of the service that creates restore points. You don't have to enable this capability. Users can't currently delete automatic restore points because the service uses them to maintain SLAs for recovery.
+
+Azure Synapse Dedicated SQL pool takes snapshots of the data warehouse throughout the day creating restore points that are available for seven days. This retention period can't be changed. SQL Data Warehouse supports an eight-hour recovery point objective (RPO). You can restore your data warehouse in the primary region from any one of the snapshots taken in the past seven days. If you require more granular backups, other user-defined options are available.
+
+#### Unsupported Teradata table types
+
+> [!TIP]
+> Standard tables in Azure Synapse can support migrated Teradata time series and temporal data.
+
+Teradata supports special table types for time series and temporal data. The syntax and some of the functions for these table types aren't directly supported in Azure Synapse, but the data can be migrated into a standard table with appropriate data types and indexing or partitioning on the date/time column.
+
+Teradata implements the temporal query functionality via query rewriting to add additional filters within a temporal query to limit the applicable date range. If this functionality is currently used in the source Teradata environment and is to be migrated, add this additional filtering into the relevant temporal queries.
+
+The Azure environment also includes specific features for complex analytics on time-series data at scale, called [Azure Time Series Insights](https://azure.microsoft.com/services/time-series-insights/). This is aimed at IoT data analysis applications and may be more appropriate for that use case.
+
+#### SQL DML syntax differences
+
+There are a few differences in SQL Data Manipulation Language (DML) syntax between Teradata SQL and Azure Synapse (T-SQL) that you should be aware of during migration:
+
+- `QUALIFY`&mdash;Teradata supports the `QUALIFY` operator. For example:
+
+ ```sql
+ SELECT col1
+ FROM tab1
+ WHERE col1='XYZ'
+    QUALIFY ROW_NUMBER() OVER (PARTITION BY col1 ORDER BY col1) = 1;
+ ```
+
+ The equivalent Azure Synapse syntax is:
+
+ ```sql
+    SELECT * FROM (
+        SELECT col1, ROW_NUMBER() OVER (PARTITION BY col1 ORDER BY col1) AS rn
+        FROM tab1 WHERE col1='XYZ'
+    ) AS t
+    WHERE rn = 1;
+ ```
+
+- Date arithmetic&mdash;Azure Synapse has functions such as `DATEADD` and `DATEDIFF`, which can be used on `DATE` or `DATETIME` fields. Teradata supports direct subtraction on dates, such as `SELECT DATE1 - DATE2 FROM...`.
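+
+    A possible T-SQL equivalent, shown with hypothetical table and column names:
+
+    ```sql
+    -- Teradata: SELECT Order_Date - Ship_Date FROM Orders; (returns days)
+    -- T-SQL equivalent, plus a DATEADD example:
+    SELECT DATEDIFF(day, Ship_Date, Order_Date) AS DaysToShip,
+           DATEADD(day, 30, Order_Date) AS PaymentDueDate
+    FROM Orders;
+    ```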
+
+- `GROUP BY` ordinal&mdash;Teradata allows grouping by a column's ordinal position; in T-SQL, explicitly provide the column name instead.
+
+- Teradata supports `LIKE ANY` syntax. For example:
+
+ ```sql
+ SELECT * FROM CUSTOMER
+ WHERE POSTCODE LIKE ANY
+ ('CV1%', 'CV2%', 'CV3%');
+ ```
+
+ The equivalent in Azure Synapse syntax is:
+
+ ```sql
+ SELECT * FROM CUSTOMER
+ WHERE
+ (POSTCODE LIKE 'CV1%') OR (POSTCODE LIKE 'CV2%') OR (POSTCODE LIKE 'CV3%');
+ ```
+
+- Depending on system settings, character comparisons in Teradata may be case insensitive by default. In Azure Synapse, character comparisons are always case sensitive.
+
+#### Functions, stored procedures, triggers, and sequences
+
+> [!TIP]
+> Assess the number and type of non-data objects to be migrated as part of the preparation phase.
+
+When migrating from a mature legacy data warehouse environment such as Teradata, you must often migrate elements other than simple tables and views to the new target environment. Examples include functions, stored procedures, triggers, and sequences.
+
+As part of the preparation phase, create an inventory of these objects to be migrated, and define the method of handling them. Assign an appropriate allocation of resources in the project plan.
+
+There may be facilities in the Azure environment that replace the functionality implemented as functions or stored procedures in the Teradata environment. In this case, it's more efficient to use the built-in Azure facilities rather than recoding the Teradata functions.
+
+[Data integration partners](/azure/synapse-analytics/partner/data-integration) offer tools and services that can automate the migration.
+
+##### Functions
+
+As with most database products, Teradata supports system functions and user-defined functions within an SQL implementation. When migrating to another database platform such as Azure Synapse, common system functions are available and can be migrated without change. Some system functions may have slightly different syntax, but in those cases the required changes can be automated.
+
+For system functions where there's no equivalent, or for arbitrary user-defined functions, recode these using the language(s) available in the target environment. Azure Synapse uses the popular Transact-SQL language to implement user-defined functions.
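+
+For example, a simple Teradata user-defined function might be recoded as a T-SQL scalar function along these lines; the logic and names are purely illustrative.
+
+```sql
+-- Hypothetical scalar user-defined function recoded in T-SQL.
+CREATE FUNCTION dbo.udf_FullName (@FirstName VARCHAR(100), @LastName VARCHAR(100))
+RETURNS VARCHAR(250)
+AS
+BEGIN
+    RETURN @LastName + ', ' + @FirstName;
+END;
+```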
+
+##### Stored procedures
+
+Most modern database products allow for procedures to be stored within the database. Teradata provides the SPL language for this purpose.
+
+A stored procedure typically contains SQL statements and some procedural logic, and may return data or a status.
+
+Azure Synapse (formerly Azure SQL Data Warehouse) also supports stored procedures using T-SQL. If you must migrate stored procedures, recode them for the new environment.
+
+##### Triggers
+
+Azure Synapse doesn't support trigger creation, but equivalent functionality can be implemented with Azure Data Factory.
+
+##### Sequences
+
+With Azure Synapse, sequences are handled in a similar way to Teradata. Use [IDENTITY](/sql/t-sql/statements/create-table-transact-sql-identity-property?msclkid=8ab663accfd311ec87a587f5923eaa7b) columns or SQL code to create the next sequence number in a series.
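+
+A minimal sketch of an `IDENTITY` column standing in for a Teradata sequence; the table and column names are hypothetical.
+
+```sql
+CREATE TABLE dbo.DimCustomer (
+    CustomerKey  INT IDENTITY(1,1) NOT NULL,  -- surrogate key; values are unique but not guaranteed contiguous
+    CustomerId   VARCHAR(20)       NOT NULL,
+    CustomerName VARCHAR(200)      NULL
+)
+WITH (DISTRIBUTION = ROUND_ROBIN, CLUSTERED COLUMNSTORE INDEX);
+```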
+
+### Extracting metadata and data from a Teradata environment
+
+#### Data Definition Language (DDL) generation
+
+> [!TIP]
+> Use existing Teradata metadata to automate the generation of CREATE TABLE and CREATE VIEW DDL for Azure Synapse Analytics.
+
+You can edit existing Teradata CREATE TABLE and CREATE VIEW scripts to create the equivalent definitions with modified data types, if necessary, as described in the previous section. Typically, this involves removing extra Teradata-specific clauses such as FALLBACK.
+
+However, all the information that specifies the current definitions of tables and views within the existing Teradata environment is maintained within system catalog tables. These tables are the best source of this information, as it's guaranteed to be up to date and complete. User-maintained documentation may not be in sync with the current table definitions.
+
+Access the information in these tables via views into the catalog such as `DBC.ColumnsV`, and generate the equivalent CREATE TABLE DDL statements for the equivalent tables in Azure Synapse.
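+
+For example, a query along these lines against `DBC.ColumnsV` returns the column metadata needed to script the target table definitions; the database name is a placeholder, and data type mappings still need to be applied for Azure Synapse.
+
+```sql
+SELECT DatabaseName, TableName, ColumnName, ColumnType, ColumnLength,
+       DecimalTotalDigits, DecimalFractionalDigits, Nullable
+FROM DBC.ColumnsV
+WHERE DatabaseName = 'databasename'
+ORDER BY TableName, ColumnId;
+```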
+
+Third-party migration and ETL tools also use the catalog information to achieve the same result.
+
+#### Data extraction from Teradata
+
+> [!TIP]
+> Use Teradata Parallel Transporter for the most efficient data extraction.
+
+Migrate the raw data from existing Teradata tables using standard Teradata utilities, such as BTEQ and FASTEXPORT. During a migration exercise, extract the data as efficiently as possible. Use Teradata Parallel Transporter, which uses multiple parallel FASTEXPORT streams to achieve the best throughput.
+
+Call Teradata Parallel Transporter directly from Azure Data Factory. This is the recommended approach for managing the data migration process, whether the Teradata instance is on-premises or copied to a VM in the Azure environment, as described in the previous section.
+
+Recommended data formats for the extracted data include delimited text files (also called Comma Separated Values or CSV), Optimized Row Columnar (ORC), or Parquet files.
+
+For more detailed information on the process of migrating data and ETL from a Teradata environment, see the next article in this series, [Data migration, ETL, and load for Teradata migrations](2-etl-load-migration-considerations.md).
+
+## Performance recommendations for Teradata migrations
+
+This section provides general information and guidelines about the use of performance optimization techniques for Azure Synapse, and adds specific recommendations for migrations from a Teradata environment.
+
+### Differences in performance tuning approach
+
+> [!TIP]
+> Prioritize early familiarity with Azure Synapse tuning options in a migration exercise.
+
+This section highlights lower-level implementation differences between Teradata and Azure Synapse for performance tuning.
+
+#### Data distribution options
+
+Azure Synapse enables the specification of data distribution methods for individual tables. The aim is to reduce the amount of data that must be moved between processing nodes when executing a query.
+
+For large table-large table joins, hash distribute one or, ideally, both tables on one of the join columns, preferably one with a wide range of values, to help ensure an even distribution. Join processing can then be performed locally, because the data rows to be joined will already be collocated on the same processing node.
+
+Another way to achieve local joins for small table-large table joins&mdash;typically dimension table to fact table in a star schema model&mdash;is to replicate the smaller dimension table across all nodes. This ensures that any value of the join key of the larger table will have a matching dimension row locally available. The overhead of replicating the dimension tables is relatively low, provided the tables aren't very large (see [Design guidance for replicated tables](/azure/synapse-analytics/sql-data-warehouse/design-guidance-for-replicated-tables))&mdash;in which case, the hash distribution approach as described above is more appropriate. For more information, see [Distributed tables design](/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-distribute).
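+
+A brief sketch of both approaches, with hypothetical table and column names:
+
+```sql
+-- Large fact table hash distributed on a join column with many distinct values.
+CREATE TABLE dbo.FactSales (
+    CustomerKey  INT           NOT NULL,
+    OrderDateKey INT           NOT NULL,
+    SalesAmount  DECIMAL(18,2) NOT NULL
+)
+WITH (DISTRIBUTION = HASH(CustomerKey), CLUSTERED COLUMNSTORE INDEX);
+
+-- Small dimension table replicated to every compute node for local joins.
+CREATE TABLE dbo.DimProduct (
+    ProductKey  INT          NOT NULL,
+    ProductName VARCHAR(200) NULL
+)
+WITH (DISTRIBUTION = REPLICATE, CLUSTERED INDEX (ProductKey));
+```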
+
+#### Data indexing
+
+Azure Synapse provides several indexing options, but these are different from the indexing options implemented in Teradata. More details of the different indexing options are described in [table indexes](/azure/sql-data-warehouse/sql-data-warehouse-tables-index).
+
+Existing indexes within the source Teradata environment can, however, provide a useful indication of how the data is currently used, and can identify candidates for indexing within the Azure Synapse environment.
+
+#### Data partitioning
+
+In an enterprise data warehouse, fact tables can contain many billions of rows. Partitioning optimizes the maintenance and querying of these tables by splitting them into separate parts to reduce the amount of data processed. The `CREATE TABLE` statement defines the partitioning specification for a table. Partitioning should only be done on very large tables where each partition will contain at least 60 million rows.
+
+Only one field per table can be used for partitioning. That field is frequently a date field since many queries are filtered by date or a date range. It's possible to change the partitioning of a table after initial load by recreating the table with the new distribution using the `CREATE TABLE AS` (or CTAS) statement. See [table partitions](/azure/sql-data-warehouse/sql-data-warehouse-tables-partition) for a detailed discussion of partitioning in Azure Synapse.
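+
+A possible partitioned table definition; the boundary values, distribution column, and names are illustrative only.
+
+```sql
+CREATE TABLE dbo.FactOrders (
+    OrderDateKey INT           NOT NULL,
+    CustomerKey  INT           NOT NULL,
+    OrderAmount  DECIMAL(18,2) NOT NULL
+)
+WITH (
+    DISTRIBUTION = HASH(CustomerKey),
+    CLUSTERED COLUMNSTORE INDEX,
+    PARTITION (OrderDateKey RANGE RIGHT FOR VALUES (20210101, 20220101))
+);
+```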
+
+#### Data table statistics
+
+Ensure that statistics on data tables are up to date by building in a [statistics](/azure/synapse-analytics/sql/develop-tables-statistics) step to ETL/ELT jobs.
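+
+For example, an ELT job might finish with statements along these lines; the object names are hypothetical.
+
+```sql
+-- Create statistics on a commonly filtered column, then refresh the table's
+-- statistics after each load.
+CREATE STATISTICS stat_FactOrders_OrderDateKey ON dbo.FactOrders (OrderDateKey);
+UPDATE STATISTICS dbo.FactOrders;
+```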
+
+#### PolyBase for data loading
+
+PolyBase is the most efficient method for loading large amounts of data into the warehouse since it can leverage parallel loading streams. For more information, see [PolyBase data loading strategy](/azure/synapse-analytics/sql/load-data-overview).
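+
+A common PolyBase load pattern is a CTAS statement over an external table. The following is a hedged sketch, assuming an external table `dbo.ExternalSales` has already been defined over the staged files.
+
+```sql
+-- Parallel load from external files into a distributed staging table.
+CREATE TABLE dbo.StageSales
+WITH (DISTRIBUTION = ROUND_ROBIN, CLUSTERED COLUMNSTORE INDEX)
+AS
+SELECT * FROM dbo.ExternalSales;
+```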
+
+#### Use workload management
+
+Use [Workload management](/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-workload-management?context=/azure/synapse-analytics/context/context) instead of resource classes. Run ETL in its own workload group, configured to have more resources per query (lower concurrency, but more resources for each request). For more information, see [What is dedicated SQL pool in Azure Synapse Analytics](/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is).
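+
+A possible workload isolation setup for ETL; the group name, classifier name, thresholds, and member name are illustrative only.
+
+```sql
+-- Reserve resources for data loading and give each load request a larger grant.
+CREATE WORKLOAD GROUP wgDataLoads
+WITH (
+    MIN_PERCENTAGE_RESOURCE = 30,
+    CAP_PERCENTAGE_RESOURCE = 60,
+    REQUEST_MIN_RESOURCE_GRANT_PERCENT = 30
+);
+
+-- Route requests from the ETL service account into that workload group.
+CREATE WORKLOAD CLASSIFIER wcDataLoads
+WITH (WORKLOAD_GROUP = 'wgDataLoads', MEMBERNAME = 'etl_service_account');
+```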
+
+## Next steps
+
+To learn more about ETL and load for Teradata migration, see the next article in this series: [Data migration, ETL, and load for Teradata migration](2-etl-load-migration-considerations.md).
synapse-analytics 2 Etl Load Migration Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/2-etl-load-migration-considerations.md
+
+ Title: "Data migration, ETL, and load for Teradata migrations"
+description: Learn how to plan your data migration from Teradata to Azure Synapse to minimize the risk and impact on users.
+
+ Last updated : 05/24/2022
+
+# Data migration, ETL, and load for Teradata migrations
+
+This article is part two of a seven-part series that provides guidance on how to migrate from Teradata to Azure Synapse Analytics. This article provides best practices for ETL and load migration.
+
+## Data migration considerations
+
+### Initial decisions for data migration from Teradata
+
+When migrating a Teradata data warehouse, you need to ask some basic data-related questions. For example:
+
+- Should unused table structures be migrated?
+
+- What's the best migration approach to minimize risk and user impact?
+
+- When migrating data marts&mdash;stay physical or go virtual?
+
+The next sections discuss these points within the context of migration from Teradata.
+
+#### Migrate unused tables?
+
+> [!TIP]
+> In legacy systems, it's not unusual for tables to become redundant over time&mdash;these don't need to be migrated in most cases.
+
+It makes sense to only migrate tables that are in use in the existing system. Tables that aren't active can be archived rather than migrated, so that the data is available if necessary in future. It's best to use system metadata and log files rather than documentation to determine which tables are in use, because documentation can be out of date.
+
+If enabled, Teradata system catalog tables and logs contain information that can determine when a given table was last accessed&mdash;which can in turn be used to decide whether a table is a candidate for migration.
+
+Here's an example query on `DBC.Tables` that provides the date of last access and last modification:
+
+```sql
+SELECT TableName, CreatorName, CreateTimeStamp, LastAlterName,
+LastAlterTimeStamp, AccessCount, LastAccessTimeStamp
+FROM DBC.Tables t
+WHERE DataBaseName = 'databasename'
+```
+
+If logging is enabled and the log history is accessible, other information, such as SQL query text, is available in table DBQLogTbl and associated logging tables. For more information, see [Teradata log history](https://docs.teradata.com/reader/wada1XMYPkZVTqPKz2CNaw/PuQUxpyeCx4jvP8XCiEeGA).
+
+#### What is the best migration approach to minimize risk and impact on users?
+
+> [!TIP]
+> Migrate the existing model as-is initially, even if a change to the data model is planned in the future.
+
+This question comes up often, since companies frequently want to change the data warehouse data model to improve agility, and see a migration as an opportunity to do so. However, this approach carries a higher risk because it could impact the ETL jobs that populate the data warehouse, as well as the feeds from the data warehouse into dependent data marts. Because of that risk, it's usually better to redesign on this scale after the data warehouse migration.
+
+Even if a data model change is an intended part of the overall migration, it's good practice to migrate the existing model as-is to the new environment (Azure Synapse in this case), rather than do any re-engineering on the new platform during migration. This approach has the advantage of minimizing the impact on existing production systems, while also leveraging the performance and elastic scalability of the Azure platform for one-off re-engineering tasks.
+
+When migrating from Teradata, consider creating a Teradata environment in a VM within Azure as a stepping stone in the migration process.
+
+#### Using a VM Teradata instance as part of a migration
+
+One optional approach for migrating from an on-premises Teradata environment is to leverage the Azure environment to create a Teradata instance in a VM within Azure, co-located with the target Azure Synapse environment. This is possible because Azure provides cheap cloud storage and elastic scalability.
+
+With this approach, standard Teradata utilities, such as Teradata Parallel Data Transporter&mdash;or third-party data replication tools, such as Attunity Replicate&mdash;can be used to efficiently move the subset of Teradata tables that need to be migrated to the VM instance. Then, all migration tasks can take place within the Azure environment. This approach has several benefits:
+
+- After the initial replication of data, migration tasks don't impact the source system.
+
+- The Azure environment has familiar Teradata interfaces, tools, and utilities.
+
+- Once the data is in the Azure environment, there are no potential issues with network bandwidth between the on-premises source system and the cloud target system.
+
+- Tools like Azure Data Factory can efficiently call utilities like Teradata Parallel Transporter to migrate data quickly and easily.
+
+- The migration process is orchestrated and controlled entirely within the Azure environment.
+
+#### Migrating data marts - stay physical or go virtual?
+
+> [!TIP]
+> Virtualizing data marts can save on storage and processing resources.
+
+In legacy Teradata data warehouse environments, it's common practice to create several data marts that are structured to provide good performance for ad hoc self-service queries and reports for a given department or business function within an organization. As such, a data mart typically consists of a subset of the data warehouse and contains aggregated versions of the data in a form that enables users to easily query that data with fast response times via user-friendly query tools such as Microsoft Power BI, Tableau, or MicroStrategy. This form is typically a dimensional data model. One use of data marts is to expose the data in a usable form, even if the underlying warehouse data model is something different, such as a data vault.
+
+You can use separate data marts for individual business units within an organization to implement robust data security regimes, by only allowing users to access specific data marts that are relevant to them, and eliminating, obfuscating, or anonymizing sensitive data.
+
+If these data marts are implemented as physical tables, they'll require additional storage resources to store them, and additional processing to build and refresh them regularly. Also, the data in the mart will only be as up to date as the last refresh operation, and so may be unsuitable for highly volatile data dashboards.
+
+> [!TIP]
+> The performance and scalability of Azure Synapse enables virtualization without sacrificing performance.
+
+With the advent of relatively low-cost scalable MPP architectures, such as Azure Synapse, and the inherent performance characteristics of such architectures, it may be that you can provide data mart functionality without having to instantiate the mart as a set of physical tables. This is achieved by effectively virtualizing the data marts via SQL views onto the main data warehouse, or via a virtualization layer using features such as views in Azure or the [virtualization products of Microsoft partners](/azure/synapse-analytics/partner/data-integration). This approach simplifies or eliminates the need for additional storage and aggregation processing and reduces the overall number of database objects to be migrated.
+
+There's another potential benefit to this approach: by implementing the aggregation and join logic within a virtualization layer, and presenting external reporting tools via a virtualized view, the processing required to create these views is pushed down into the data warehouse, which is generally the best place to run joins, aggregations, and other related operations on large data volumes.
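+
+As an illustration, a virtualized data mart object can be as simple as a view that aggregates warehouse tables directly; all schema, table, and column names below are hypothetical.
+
+```sql
+CREATE VIEW mart.vw_MonthlySalesByRegion AS
+SELECT d.CalendarYear, d.CalendarMonth, s.Region,
+       SUM(f.SalesAmount) AS TotalSales
+FROM edw.FactSales AS f
+JOIN edw.DimDate   AS d ON f.OrderDateKey = d.DateKey
+JOIN edw.DimStore  AS s ON f.StoreKey = s.StoreKey
+GROUP BY d.CalendarYear, d.CalendarMonth, s.Region;
+```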
+
+The primary drivers for choosing a virtual data mart implementation over a physical data mart are:
+
+- More agility&mdash;a virtual data mart is easier to change than physical tables and the associated ETL processes.
+
+- Lower total cost of ownership&mdash;a virtualized implementation requires fewer data stores and copies of data.
+
+- Fewer ETL jobs to migrate, and a simpler data warehouse architecture, in a virtualized environment.
+
+- Performance&mdash;although physical data marts have historically been more performant, virtualization products now implement intelligent caching techniques to mitigate this difference.
+
+### Data migration from Teradata
+
+#### Understand your data
+
+Part of migration planning is understanding in detail the volume of data that needs to be migrated, since that can impact decisions about the migration approach. Use system metadata to determine the physical space taken up by the raw data within the tables to be migrated. In this context, 'raw data' means the amount of space used by the data rows within a table, excluding overheads such as indexes and compression. This is especially true for the largest fact tables since these will typically comprise more than 95% of the data.
+
+You can get an accurate number for the volume of data to be migrated for a given table by extracting a representative sample of the data&mdash;for example, one million rows&mdash;to an uncompressed delimited flat ASCII data file. Then, use the size of that file to get an average raw data size per row of that table. Finally, multiply that average size by the total number of rows in the full table to give a raw data size for the table. Use that raw data size in your planning.
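+
+For example, if a one-million-row sample extract produces a 250 MB flat file, the average raw row size is roughly 250 bytes, so a two-billion-row version of that table would hold approximately 500 GB of raw data. These figures are purely illustrative.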
+
+## ETL migration considerations
+
+### Initial decisions regarding Teradata ETL migration
+
+> [!TIP]
+> Plan the approach to ETL migration ahead of time and leverage Azure facilities where appropriate.
+
+For ETL/ELT processing, legacy Teradata data warehouses may use custom-built scripts using Teradata utilities such as BTEQ and Teradata Parallel Transporter (TPT), or third-party ETL tools such as Informatica or Ab Initio. Sometimes, Teradata data warehouses use a combination of ETL and ELT approaches that's evolved over time. When planning a migration to Azure Synapse, you need to determine the best way to implement the required ETL/ELT processing in the new environment while minimizing the cost and risk involved. To learn more about ETL and ELT processing, see [ELT vs ETL Design approach](/azure/synapse-analytics/sql-data-warehouse/design-elt-data-loading).
+
+The following sections discuss migration options and make recommendations for various use cases. This flowchart summarizes one approach:
+
+The first step is always to build an inventory of ETL/ELT processes that need to be migrated. As with other steps, it's possible that the standard 'built-in' Azure features make it unnecessary to migrate some existing processes. For planning purposes, it's important to understand the scale of the migration to be performed.
+
+In the preceding flowchart, decision 1 relates to a high-level decision about whether to migrate to a totally Azure-native environment. If you're moving to a totally Azure-native environment, we recommend that you re-engineer the ETL processing using [Pipelines and activities in Azure Data Factory](/azure/data-factory/concepts-pipelines-activities?msclkid=b6ea2be4cfda11ec929ac33e6e00db98&tabs=data-factory) or [Synapse Pipelines](/azure/synapse-analytics/get-started-pipelines?msclkid=b6e99db9cfda11ecbaba18ca59d5c95c). If you're not moving to a totally Azure-native environment, then decision 2 is whether an existing third-party ETL tool is already in use.
+
+In the Teradata environment, some or all ETL processing may be performed by custom scripts using Teradata-specific utilities like BTEQ and TPT. In this case, your approach should be to re-engineer using Data Factory.
+
+> [!TIP]
+> Leverage investment in existing third-party tools to reduce cost and risk.
+
+If a third-party ETL tool is already in use, and especially if there's a large investment in skills or several existing workflows and schedules use that tool, then decision 3 is whether the tool can efficiently support Azure Synapse as a target environment. Ideally, the tool will include 'native' connectors that can leverage Azure facilities like PolyBase or [COPY INTO](/sql/t-sql/statements/copy-into-transact-sql) for the most efficient data loading. Even without native connectors, there may be a way to call an external process, such as PolyBase or `COPY INTO`, and pass in the appropriate parameters. In either case, leverage existing skills and workflows, with Azure Synapse as the new target environment.
+
+If you decide to retain an existing third-party ETL tool, there may be benefits to running that tool within the Azure environment (rather than on an existing on-premises ETL server) and having Azure Data Factory handle the overall orchestration of the existing workflows. One particular benefit is that less data needs to be downloaded from Azure, processed, and then uploaded back into Azure. So, decision 4 is whether to leave the existing tool running as-is or move it into the Azure environment to achieve cost, performance, and scalability benefits.
+
+### Re-engineering existing Teradata-specific scripts
+
+If some or all the existing Teradata warehouse ETL/ELT processing is handled by custom scripts that utilize Teradata-specific utilities, such as BTEQ, MLOAD, or TPT, these scripts need to be recoded for the new Azure Synapse environment. Similarly, if ETL processes were implemented using stored procedures in Teradata, then these will also have to be recoded.
+
+> [!TIP]
+> The inventory of ETL tasks to be migrated should include scripts and stored procedures.
+
+Some elements of the ETL process are easy to migrate&mdash;for example, by simple bulk data load into a staging table from an external file. It may even be possible to automate those parts of the process, for example, by using PolyBase instead of fast load or MLOAD. If the exported files are Parquet, you can use a native Parquet reader, which is a faster option than PolyBase. Other parts of the process that contain arbitrary complex SQL and/or stored procedures will take more time to reengineer.
+
+One way of testing Teradata SQL for compatibility with Azure Synapse is to capture some representative SQL statements from Teradata logs, then prefix those queries with `EXPLAIN`, and then&mdash;assuming a like-for-like migrated data model in Azure Synapse&mdash;run those `EXPLAIN` statements in Azure Synapse. Any incompatible SQL will generate an error, and the error information can determine the scale of the recoding task.
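+
+For example, a captured query can be tested as follows; the query itself is hypothetical.
+
+```sql
+-- If the statement compiles, EXPLAIN returns the query plan instead of running it;
+-- incompatible syntax raises an error that indicates the recoding needed.
+EXPLAIN
+SELECT c.Region, SUM(s.SalesAmount) AS TotalSales
+FROM dbo.FactSales AS s
+JOIN dbo.DimCustomer AS c ON s.CustomerKey = c.CustomerKey
+GROUP BY c.Region;
+```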
+
+[Microsoft partners](/azure/sql-data-warehouse/sql-data-warehouse-partner-data-integration) offer tools and services to migrate Teradata SQL and stored procedures to Azure Synapse.
+
+### Using existing third party ETL tools
+
+As described in the previous section, in many cases the existing legacy data warehouse system will already be populated and maintained by third-party ETL products. For a list of Microsoft data integration partners for Azure Synapse, see [Data Integration partners](/azure/sql-data-warehouse/sql-data-warehouse-partner-data-integration).
+
+## Data loading from Teradata
+
+### Choices available when loading data from Teradata
+
+> [!TIP]
+> Third-party tools can simplify and automate the migration process and therefore reduce risk.
+
+When migrating data from a Teradata data warehouse, there are some basic questions associated with data loading that need to be resolved. You'll need to decide how the data will be physically moved from the existing on-premises Teradata environment into Azure Synapse in the cloud, and which tools will be used to perform the transfer and load. Consider the following questions, which are discussed in the next sections.
+
+- Will you extract the data to files, or move it directly via a network connection?
+
+- Will you orchestrate the process from the source system, or from the Azure target environment?
+
+- Which tools will you use to automate and manage the process?
+
+#### Transfer data via files or network connection?
+
+> [!TIP]
+> Understand the data volumes to be migrated and the available network bandwidth since these factors influence the migration approach decision.
+
+Once the database tables to be migrated have been created in Azure Synapse, you can move the data to populate those tables out of the legacy Teradata system and load it into the new environment. There are two basic approaches:
+
+- **File Extract**&mdash;Extract the data from the Teradata tables to flat files, normally in CSV format, via BTEQ, Fast Export, or Teradata Parallel Transporter (TPT). Use TPT whenever possible since it's the most efficient in terms of data throughput.
+
+ This approach requires space to land the extracted data files. The space could be local to the Teradata source database (if sufficient storage is available), or remote in Azure Blob Storage. The best performance is achieved when a file is written locally, since that avoids network overhead.
+
+ To minimize the storage and network transfer requirements, it's good practice to compress the extracted data files using a utility like gzip.
+
+ Once extracted, the flat files can either be moved into Azure Blob Storage (collocated with the target Azure Synapse instance) or loaded directly into Azure Synapse using PolyBase or [COPY INTO](/sql/t-sql/statements/copy-into-transact-sql). The method for physically moving data from local on-premises storage to the Azure cloud environment depends on the amount of data and the available network bandwidth.
+
+ Microsoft provides different options to move large volumes of data, including AZCopy for moving files across the network into Azure Storage, Azure ExpressRoute for moving bulk data over a private network connection, and Azure Data Box where the files are moved to a physical storage device that's then shipped to an Azure data center for loading. For more information, see [data transfer](/azure/architecture/data-guide/scenarios/data-transfer).
+
+- **Direct extract and load across network**&mdash;The target Azure environment sends a data extract request, normally via a SQL command, to the legacy Teradata system to extract the data. The results are sent across the network and loaded directly into Azure Synapse, with no need to 'land' the data into intermediate files. The limiting factor in this scenario is normally the bandwidth of the network connection between the Teradata database and the Azure environment. For very large data volumes this approach may not be practical.
+
+There's also a hybrid approach that uses both methods. For example, you can use the direct network extract approach for smaller dimension tables and samples of the larger fact tables to quickly provide a test environment in Azure Synapse. For the large volume historical fact tables, you can use the file extract and transfer approach using Azure Data Box.
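+
+As a sketch of the file-based path, a `COPY INTO` load of gzip-compressed CSV extracts landed in Azure Blob Storage might look like the following; the storage account, container, target table, and credential are hypothetical.
+
+```sql
+COPY INTO dbo.StageOrders
+FROM 'https://mymigrationstore.blob.core.windows.net/teradata-extracts/orders/*.csv.gz'
+WITH (
+    FILE_TYPE = 'CSV',
+    COMPRESSION = 'GZIP',
+    FIELDTERMINATOR = ',',
+    FIRSTROW = 2,
+    CREDENTIAL = (IDENTITY = 'Storage Account Key', SECRET = '<storage-account-key>')
+);
+```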
+
+#### Orchestrate from Teradata or Azure?
+
+The recommended approach when moving to Azure Synapse is to orchestrate the data extract and loading from the Azure environment using [Azure Synapse Pipelines](/azure/synapse-analytics/get-started-pipelines?msclkid=b6e99db9cfda11ecbaba18ca59d5c95c) or [Azure Data Factory](/azure/data-factory/introduction?msclkid=2ccc66eccfde11ecaa58877e9d228779), as well as associated utilities, such as PolyBase or [COPY INTO](/sql/t-sql/statements/copy-into-transact-sql), for most efficient data loading. This approach leverages the Azure capabilities and provides an easy method to build reusable data loading pipelines.
+
+Other benefits of this approach include reduced impact on the Teradata system during the data load process since the management and loading process is running in Azure, and the ability to automate the process by using metadata-driven data load pipelines.
+
+#### Which tools can be used?
+
+The task of data transformation and movement is the basic function of all ETL products. If one of these products is already in use in the existing Teradata environment, then using the existing ETL tool may simplify data migration from Teradata to Azure Synapse. This approach assumes that the ETL tool supports Azure Synapse as a target environment. For more information on tools that support Azure Synapse, see [Data integration partners](/azure/sql-data-warehouse/sql-data-warehouse-partner-data-integration).
+
+If you're using an ETL tool, consider running that tool within the Azure environment to benefit from Azure cloud performance, scalability, and cost, and free up resources in the Teradata data center. Another benefit is reduced data movement between the cloud and on-premises environments.
+
+## Summary
+
+To summarize, our recommendations for migrating data and associated ETL processes from Teradata to Azure Synapse are:
+
+- Plan ahead to ensure a successful migration exercise.
+
+- Build a detailed inventory of data and processes to be migrated as soon as possible.
+
+- Use system metadata and log files to get an accurate understanding of data and process usage. Don't rely on documentation since it may be out of date.
+
+- Understand the data volumes to be migrated, and the network bandwidth between the on-premises data center and Azure cloud environments.
+
+- Consider using a Teradata instance in an Azure VM as a stepping stone to offload migration from the legacy Teradata environment.
+
+- Leverage standard built-in Azure features to minimize the migration workload.
+
+- Identify and understand the most efficient tools for data extraction and loading in both Teradata and Azure environments. Use the appropriate tools at each phase in the process.
+
+- Use Azure facilities such as [Azure Synapse Pipelines](/azure/synapse-analytics/get-started-pipelines?msclkid=b6e99db9cfda11ecbaba18ca59d5c95c) or [Azure Data Factory](/azure/data-factory/introduction?msclkid=2ccc66eccfde11ecaa58877e9d228779) to orchestrate and automate the migration process while minimizing impact on the Teradata system.
+
+## Next steps
+
+To learn more about security access operations, see the next article in this series: [Security, access, and operations for Teradata migrations](3-security-access-operations.md).
synapse-analytics 3 Security Access Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/3-security-access-operations.md
+
+ Title: "Security, access, and operations for Teradata migrations"
+description: Learn about authentication, users, roles, permissions, monitoring and auditing, and workload management in Azure Synapse and Teradata.
+
+ Last updated : 05/24/2022
+
+# Security, access, and operations for Teradata migrations
+
+This article is part three of a seven-part series that provides guidance on how to migrate from Teradata to Azure Synapse Analytics. This article provides best practices for security access operations.
+
+## Security considerations
+
+This article discusses connection methods for existing legacy Teradata environments and how they can be migrated to Azure Synapse with minimal risk and user impact.
+
+We assume there's a requirement to migrate the existing connection methods and the user, role, and permission structure as is. If this isn't the case, then you can use Azure utilities such as the Azure portal to create and manage a new security regime.
+
+For more information on [Azure Synapse security](/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-manage-security#authorization) options, see the [security white paper](/azure/synapse-analytics/guidance/security-white-paper-introduction).
+
+### Connection and authentication
+
+#### Teradata authorization options
+
+> [!TIP]
+> Authentication in both Teradata and Azure Synapse can be "in database" or through external methods.
+
+Teradata supports several mechanisms for connection and authorization. Valid mechanism values are:
+
+- **TD1**&mdash;selects Teradata 1 as the authentication mechanism. Username and password are required.
+
+- **TD2**&mdash;selects Teradata 2 as the authentication mechanism. Username and password are required.
+
+- **TDNEGO**&mdash;selects one of the authentication mechanisms automatically based on the policy, without user involvement.
+
+- **LDAP**&mdash;selects Lightweight Directory Access Protocol (LDAP) as the Authentication Mechanism. The application provides the username and password.
+
+- **KRB5**&mdash;selects Kerberos (KRB5) on Windows clients working with Windows servers. To log on using KRB5, the user needs to supply a domain, username, and password. The domain is specified by setting the username to `MyUserName@MyDomain`.
+
+- **NTLM**&mdash;selects NTLM on Windows clients working with Windows servers. The application provides the username and password.
+
+Kerberos (KRB5), Kerberos Compatibility (KRB5C), NT LAN Manager (NTLM), and NT LAN Manager Compatibility (NTLMC) are for Windows only.
+
+#### Azure Synapse authorization options
+
+Azure Synapse supports two basic options for connection and authorization:
+
+- **SQL authentication**: SQL authentication is via a database connection that includes a database identifier, user ID, and password, plus other optional parameters. This is functionally equivalent to Teradata TD1, TD2, and default connections.
+
+- **Azure Active Directory (Azure AD) authentication**: With Azure AD authentication, you can centrally manage the identities of database users and other Microsoft services. Central ID management provides a single place to manage SQL Data Warehouse users and simplifies permission management. Azure AD can also support connections to LDAP and Kerberos services&mdash;for example, Azure AD can be used to connect to existing LDAP directories if these are to remain in place after migration of the database. See the sketch after this list for an example of creating an Azure AD-based user.
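+
+The following hypothetical sketch shows what creating an Azure AD-based user in a dedicated SQL pool database can look like; the user principal name and role are placeholders, not part of the original guidance.
+
+```sql
+-- Example only: create a database user for an Azure AD identity and add it to a database role.
+CREATE USER [adeline@contoso.com] FROM EXTERNAL PROVIDER;
+EXEC sp_addrolemember 'db_datareader', 'adeline@contoso.com';
+```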
+
+### Users, roles, and permissions
+
+#### Overview
+
+> [!TIP]
+> High-level planning is essential for a successful migration project.
+
+Both Teradata and Azure Synapse implement database access control via a combination of users, roles, and permissions. Both use standard SQL `CREATE USER` and `CREATE ROLE` statements to define users and roles, and `GRANT` and `REVOKE` statements to assign permissions to, or remove them from, those users and roles.
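+
+For example, statements along these lines are standard SQL that runs on both platforms; the role and object names are hypothetical.
+
+```sql
+-- Example only: standard role and permission statements common to both platforms.
+CREATE ROLE report_reader;
+GRANT SELECT ON Sales.FactSales TO report_reader;
+REVOKE SELECT ON Sales.FactSales FROM report_reader;
+```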
+
+> [!TIP]
+> Automation of migration processes is recommended to reduce elapsed time and scope for errors.
+
+Conceptually the two databases are similar, and it might be possible to automate the migration of existing user IDs, roles, and permissions to some degree. Migrate such data by extracting the existing legacy user and role information from the Teradata system catalog tables and generating matching equivalent `CREATE USER` and `CREATE ROLE` statements to be run in Azure Synapse to recreate the same user/role hierarchy.
+
+After data extraction, use Teradata system catalog tables to generate equivalent `GRANT` statements to assign permissions (where an equivalent one exists). The following diagram shows how to use existing metadata to generate the necessary SQL.
++
+#### Users and roles
+
+> [!TIP]
+> Migration of a data warehouse requires more than just tables, views, and SQL statements.
+
+The information about current users and roles in a Teradata system is found in the system catalog tables `DBC.USERS` (or `DBC.DATABASES`) and `DBC.ROLEMEMBERS`. Query these tables (if the user has `SELECT` access to them) to obtain current lists of the users and roles defined within the system. The following are example queries to list all users and all roles:
+
+```sql
+/***SQL to find all users***/
+SELECT
+    DatabaseName AS UserName
+FROM DBC.Databases
+WHERE dbkind = 'u';
+
+/***SQL to find all roles***/
+SELECT A.ROLENAME, A.GRANTEE, A.GRANTOR,
+ A.DefaultRole,
+ A.WithAdmin,
+ B.DATABASENAME,
+ B.TABLENAME,
+ B.COLUMNNAME,
+ B.GRANTORNAME,
+ B.AccessRight
+FROM DBC.ROLEMEMBERS A
+JOIN DBC.ALLROLERIGHTS B
+ON A.ROLENAME = B.ROLENAME
+GROUP BY 1,2,3,4,5,6,7
+ORDER BY 2,1,6;
+```
+
+Modify these example `SELECT` statements to produce a result set that is a series of `CREATE USER` and `CREATE ROLE` statements, by including the appropriate text as a literal within the `SELECT` statement.
+
+There's no way to retrieve existing passwords, so you need to implement a scheme for allocating new initial passwords on Azure Synapse.
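+
+As an illustrative sketch, a Teradata query such as the following generates the T-SQL text to recreate each user; the bracketed names, the login/user split, and the placeholder password scheme are assumptions to adapt to your own standards.
+
+```sql
+/* Example only: generate CREATE LOGIN and CREATE USER statements for Azure Synapse.
+   Run the generated CREATE LOGIN statements in the master database and the
+   generated CREATE USER statements in the dedicated SQL pool database. */
+SELECT 'CREATE LOGIN [' || TRIM(DatabaseName) || '] WITH PASSWORD = ''<initial-password>'';' AS create_login_sql,
+       'CREATE USER ['  || TRIM(DatabaseName) || '] FOR LOGIN [' || TRIM(DatabaseName) || '];' AS create_user_sql
+FROM DBC.Databases
+WHERE dbkind = 'u';
+```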
+
+#### Permissions
+
+> [!TIP]
+> There are equivalent Azure Synapse permissions for basic database operations such as DML and DDL.
+
+In a Teradata system, the system tables `DBC.ALLRIGHTS` and `DBC.ALLROLERIGHTS` hold the access rights for users and roles. Query these tables (if the user has `SELECT` access to those tables) to obtain current lists of access rights defined within the system. The following are examples of queries for individual users:
+
+```sql
+/**SQL for AccessRights held by a USER***/
+SELECT UserName, DatabaseName, TableName, ColumnName,
+       CASE WHEN Abbv.AccessRight IS NOT NULL THEN Abbv.Description
+            ELSE ALRTS.AccessRight
+       END AS AccessRight,
+       GrantAuthority, GrantorName, AllnessFlag, CreatorName, CreateTimeStamp
+FROM DBC.ALLRIGHTS ALRTS
+LEFT OUTER JOIN AccessRightsAbbv Abbv
+  ON ALRTS.AccessRight = Abbv.AccessRight
+WHERE UserName = 'UserXYZ'
+ORDER BY 2,3,4,5;
+
+/**SQL for AccessRights held by a ROLE***/
+SELECT RoleName, DatabaseName, TableName, ColumnName,
+       CASE WHEN Abbv.AccessRight IS NOT NULL THEN Abbv.Description
+            ELSE ALRTS.AccessRight
+       END AS AccessRight,
+       GrantorName, CreateTimeStamp
+FROM DBC.ALLROLERIGHTS ALRTS
+LEFT OUTER JOIN AccessRightsAbbv Abbv
+  ON ALRTS.AccessRight = Abbv.AccessRight
+WHERE RoleName = 'BI_DEVELOPER'
+ORDER BY 2,3,4,5;
+```
+
+Modify these example `SELECT` statements to produce a result set that is a series of `GRANT` statements, by including the appropriate text as a literal within the `SELECT` statement.
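+
+As a sketch of the approach, a query like the following generates `GRANT` statements for rights that map directly, such as `SELECT`, `INSERT`, and `UPDATE`; rights without a direct equivalent must be resolved using the mapping table that follows.
+
+```sql
+/* Example only: generate GRANT statements for directly equivalent access rights.
+   'R', 'I', and 'U' are the Teradata type codes for SELECT, INSERT, and UPDATE. */
+SELECT 'GRANT ' || TRIM(Abbv.Description) || ' ON [' || TRIM(ALRTS.DatabaseName) ||
+       '].[' || TRIM(ALRTS.TableName) || '] TO [' || TRIM(ALRTS.UserName) || '];' AS grant_sql
+FROM DBC.ALLRIGHTS ALRTS
+JOIN AccessRightsAbbv Abbv
+  ON ALRTS.AccessRight = Abbv.AccessRight
+WHERE ALRTS.AccessRight IN ('R', 'I', 'U');
+```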
+
+Use the table `AccessRightsAbbv` to look up the full text of the access right, as the join key is an abbreviated 'type' field. See the following table for a list of Teradata access rights and their equivalent in Azure Synapse.
+
+| Teradata permission name | Teradata type | Azure Synapse equivalent |
+|||--|
+| **ABORT SESSION** | AS | KILL DATABASE CONNECTION |
+| **ALTER EXTERNAL PROCEDURE** | AE | \*\*\*\* |
+| **ALTER FUNCTION** | AF | ALTER FUNCTION |
+| **ALTER PROCEDURE** | AP | ALTER PROCEDURE |
+| **CHECKPOINT** | CP | CHECKPOINT |
+| **CREATE AUTHORIZATION** | CA | CREATE LOGIN |
+| **CREATE DATABASE** | CD | CREATE DATABASE |
+| **CREATE EXTERNAL PROCEDURE** | CE | \*\*\*\* |
+| **CREATE FUNCTION** | CF | CREATE FUNCTION |
+| **CREATE GLOP** | GC | \*\*\* |
+| **CREATE MACRO** | CM | CREATE PROCEDURE \*\* |
+| **CREATE OWNER PROCEDURE** | OP | CREATE PROCEDURE |
+| **CREATE PROCEDURE** | PC | CREATE PROCEDURE |
+| **CREATE PROFILE** | CO | CREATE LOGIN \* |
+| **CREATE ROLE** | CR | CREATE ROLE |
+| **DROP DATABASE** | DD | DROP DATABASE|
+| **DROP FUNCTION** | DF | DROP FUNCTION |
+| **DROP GLOP** | GD | \*\*\* |
+| **DROP MACRO** | DM | DROP PROCEDURE \*\* |
+| **DROP PROCEDURE** | PD | DELETE PROCEDURE |
+| **DROP PROFILE** | DO | DROP LOGIN \* |
+| **DROP ROLE** | DR | DELETE ROLE |
+| **DROP TABLE** | DT | DROP TABLE |
+| **DROP TRIGGER** | DG | \*\*\* |
+| **DROP USER** | DU | DROP USER |
+| **DROP VIEW** | DV | DROP VIEW |
+| **DUMP** | DP | \*\*\*\* |
+| **EXECUTE** | E | EXECUTE |
+| **EXECUTE FUNCTION** | EF | EXECUTE |
+| **EXECUTE PROCEDURE** | PE | EXECUTE |
+| **GLOP MEMBER** | GM | \*\*\* |
+| **INDEX** | IX | CREATE INDEX |
+| **INSERT** | I | INSERT |
+| **MONRESOURCE** | MR | \*\*\*\*\* |
+| **MONSESSION** | MS | \*\*\*\*\* |
+| **OVERRIDE DUMP CONSTRAINT** | OA | \*\*\*\* |
+| **OVERRIDE RESTORE CONSTRAINT** | OR | \*\*\*\* |
+| **REFERENCES** | RF | REFERENCES |
+| **REPLCONTROL** | RO | \*\*\*\*\* |
+| **RESTORE** | RS | \*\*\*\* |
+| **SELECT** | R | SELECT |
+| **SETRESRATE** | SR | \*\*\*\*\* |
+| **SETSESSRATE** | SS | \*\*\*\*\* |
+| **SHOW** | SH | \*\*\* |
+| **UPDATE** | U | UPDATE |
+
+Comments on the `AccessRightsAbbv` table:
+
+\* Teradata `PROFILE` is functionally equivalent to `LOGIN` in Azure Synapse
+
+\*\* In Teradata there are macros and stored procedures. The following table summarizes the differences between them:
+
+ | MACRO | Stored procedure |
+ |-|-|
+ | Contains SQL | Contains SQL |
+ | May contain BTEQ dot commands | Contains comprehensive SPL |
+ | May receive parameter values passed to it | May receive parameter values passed to it |
+ | May retrieve one or more rows | Must use a cursor to retrieve more than one row |
+ | Stored in DBC PERM space | Stored in DATABASE or USER PERM |
+ | Returns rows to the client | May return one or more values to client as parameters |
+
+In Azure Synapse, procedures can be used to provide this functionality.
+
+\*\*\* `SHOW`, `GLOP`, and `TRIGGER` have no direct equivalent in Azure Synapse.
+
+\*\*\*\* These features are managed automatically by the system in Azure Synapse&mdash;see [Operational considerations](#operational-considerations).
+
+\*\*\*\*\* In Azure Synapse, these features are handled outside of the database.
+
+Refer to [Azure Synapse Analytics security permissions](/azure/synapse-analytics/guidance/security-white-paper-introduction).
+
+## Operational considerations
+
+> [!TIP]
+> Operational tasks are necessary to keep any data warehouse operating efficiently.
+
+This section discusses how to implement typical Teradata operational tasks in Azure Synapse with minimal risk and impact to users.
+
+As with all data warehouse products, once in production there are ongoing management tasks necessary to keep the system running efficiently and to provide data for monitoring and auditing. Resource utilization and capacity planning for future growth also fall into this category, as does backup/restore of data.
+
+While conceptually the management and operations tasks for different data warehouses are similar, the individual implementations may differ. In general, modern cloud-based products such as Azure Synapse tend to incorporate a more automated and "system managed" approach (as opposed to a more manual approach in legacy data warehouses such as Teradata).
+
+The following sections compare Teradata and Azure Synapse options for various operational tasks.
+
+### Housekeeping tasks
+
+> [!TIP]
+> Housekeeping tasks keep a production warehouse operating efficiently and optimize use of resources such as storage.
+
+In most legacy data warehouse environments, there's a requirement to perform regular 'housekeeping' tasks, such as reclaiming disk storage space that can be freed up by removing old versions of updated or deleted rows, or reorganizing data log files or index blocks for efficiency. Collecting statistics is another potentially time-consuming task that's required after a bulk data ingest, so that the query optimizer has up-to-date data on which to base query execution plans.
+
+Teradata recommends collecting statistics as follows:
+
+- Collect statistics on unpopulated tables to set up the interval histogram used in internal processing. This initial collection makes subsequent statistics collections faster. Make sure to recollect statistics after data is added.
+
+- Prototype phase: collect statistics on newly populated tables.
+
+- Production phase: recollect statistics after a significant percentage of change to the table or partition (about 10% of rows). For high volumes of nonunique values, such as dates or timestamps, it may be advantageous to recollect at 7%.
+
+- Recommendation: collect production phase statistics after you've created users and applied real-world query loads to the database (up to about three months of querying).
+
+- Collect statistics in the first few weeks after an upgrade or migration during periods of low CPU utilization.
+
+Statistics collection can be managed manually using Automated Statistics Management open APIs or automatically using the Teradata Viewpoint Stats Manager portlet.
+
+> [!TIP]
+> Automate and monitor housekeeping tasks in Azure.
+
+Teradata Database contains many log tables in the Data Dictionary that accumulate data, either automatically or after certain features are enabled. Because log data grows over time, purge older information to avoid using up permanent space. Options are available to automate the maintenance of these logs. The Teradata dictionary tables that require maintenance are discussed next.
+
+#### Dictionary tables to maintain
+
+Reset accumulators and peak values using the `DBC.AMPUsage` view and the `ClearPeakDisk` macro provided with the software:
+
+- `DBC.Acctg`: resource usage by account/user
+
+- `DBC.DataBaseSpace`: database and table space accounting
+
+Teradata automatically maintains these tables, but good practices can reduce their size:
+
+- `DBC.AccessRights`: user rights on objects
+
+- `DBC.RoleGrants`: role rights on objects
+
+- `DBC.Roles`: defined roles
+
+- `DBC.Accounts`: account codes by user
+
+Archive these logging tables (if desired) and purge information 60-90 days old. Retention depends on customer requirements:
+
+- `DBC.SW_Event_Log`: database console log
+
+- `DBC.ResUsage`: resource monitoring tables
+
+- `DBC.EventLog`: session logon/logoff history
+
+- `DBC.AccLogTbl`: logged user/object events
+
+- `DBC.DBQL tables`: logged user/SQL activity
+
+- `DBC.NETSecPolicyLogTbl`: logs dynamic security policy audit trails
+
+- `DBC.NETSecPolicyLogRuleTbl`: controls when and how dynamic security policy is logged
+
+Purge these tables when the associated removable media is expired and overwritten:
+
+- `DBC.RCEvent`: archive/recovery events
+
+- `DBC.RCConfiguration`: archive/recovery config
+
+- `DBC.RCMedia`: VolSerial for Archive/recovery
+
+Azure Synapse has an option to automatically create statistics so that they can be used as needed. Perform defragmentation of indexes and data blocks manually, on a scheduled basis, or automatically. Leveraging built-in Azure capabilities can reduce the effort required in a migration exercise.
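+
+The following hypothetical commands sketch this housekeeping in Azure Synapse; the database and table names are placeholders.
+
+```sql
+-- Example only: enable automatic statistics creation, then refresh statistics and
+-- rebuild indexes after a bulk load.
+ALTER DATABASE [MyDedicatedPool] SET AUTO_CREATE_STATISTICS ON;
+
+UPDATE STATISTICS dbo.FactSales;
+ALTER INDEX ALL ON dbo.FactSales REBUILD;
+```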
+
+### Monitoring and auditing
+
+> [!TIP]
+> Over time, several different tools have been implemented to allow monitoring and logging of Teradata systems.
+
+Teradata provides several tools to monitor operations, including Teradata Viewpoint and Ecosystem Manager. For logging query history, the Database Query Log (DBQL) is a Teradata Database feature that provides a series of predefined tables that can store historical records of queries and their duration, performance, and target activity based on user-defined rules.
+
+Database administrators can use Teradata Viewpoint to determine system status, trends, and individual query status. By observing trends in system usage, system administrators are better able to plan project implementations, batch jobs, and maintenance to avoid peak periods of use. Business users can use Teradata Viewpoint to quickly access the status of reports and queries and drill down into details.
+
+> [!TIP]
+> Azure portal provides a UI to manage monitoring and auditing tasks for all Azure data and processes.
+
+Similarly, Azure Synapse provides a rich monitoring experience within the Azure portal to provide insights into your data warehouse workload. The Azure portal is the recommended tool when monitoring your data warehouse as it provides configurable retention periods, alerts, recommendations, and customizable charts and dashboards for metrics and logs.
+
+The portal also enables integration with other Azure monitoring services, such as Operations Management Suite (OMS) and [Azure Monitor](/azure/synapse-analytics/monitoring/how-to-monitor-using-azure-monitor?msclkid=d5e9e46ecfe111ec8ba8ee5360e77c4c) (logs), to provide a holistic monitoring experience for not only the data warehouse but also the entire Azure analytics platform.
+
+> [!TIP]
+> Low-level and system-wide metrics are automatically logged in Azure Synapse.
+
+Resource utilization statistics for Azure Synapse are automatically logged within the system. The metrics include usage statistics for CPU, memory, cache, I/O, and temporary workspace for each query, as well as connectivity information such as failed connection attempts.
+
+Azure Synapse provides a set of [Dynamic Management Views](/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-manage-monitor?msclkid=3e6eefbccfe211ec82d019ada29b1834) (DMVs). These views are useful when actively troubleshooting and identifying performance bottlenecks with your workload.
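+
+For example, a query like the following (a sketch with an illustrative filter) lists the longest-running active requests, which is a common starting point when troubleshooting:
+
+```sql
+-- Example only: find the longest-running active requests in a dedicated SQL pool.
+SELECT TOP 10 request_id, session_id, [status], submit_time, total_elapsed_time, command
+FROM sys.dm_pdw_exec_requests
+WHERE [status] NOT IN ('Completed', 'Failed', 'Cancelled')
+ORDER BY total_elapsed_time DESC;
+```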
+
+For more information, see [Azure Synapse operations and management options](/azure/sql-data-warehouse/sql-data-warehouse-how-to-manage-and-monitor-workload-importance).
+
+### High Availability (HA) and Disaster Recovery (DR)
+
+Teradata implements features such as Fallback, Archive Restore Copy utility (ARC), and Data Stream Architecture (DSA) to provide protection against data loss and high availability (HA) via replication and archive of data. Disaster Recovery options include Dual-Active systems, DR as a service, or a replacement system depending on the recovery time requirement.
+
+> [!TIP]
+> Azure Synapse creates snapshots automatically to ensure fast recovery times.
+
+Azure Synapse uses database snapshots to provide high availability of the warehouse. A data warehouse snapshot creates a restore point that can be used to recover or copy a data warehouse to a previous state. Since Azure Synapse is a distributed system, a data warehouse snapshot consists of many files that are in Azure storage. Snapshots capture incremental changes from the data stored in your data warehouse.
+
+Azure Synapse automatically takes snapshots throughout the day creating restore points that are available for seven days. This retention period can't be changed. Azure Synapse supports an eight-hour recovery point objective (RPO). A data warehouse can be restored in the primary region from any one of the snapshots taken in the past seven days.
+
+> [!TIP]
+> Use user-defined snapshots to define a recovery point before key updates.
+
+User-defined restore points are also supported, allowing manual triggering of snapshots to create restore points of a data warehouse before and after large modifications. This capability ensures that restore points are logically consistent, which provides additional data protection in case of any workload interruptions or user errors for a desired RPO of less than 8 hours.
+
+> [!TIP]
+> Microsoft Azure provides automatic backups to a separate geographical location to enable DR.
+
+As well as the snapshots described previously, Azure Synapse also performs, as standard, a geo-backup once per day to a [paired data center](/azure/best-practices-availability-paired-regions). The RPO for a geo-restore is 24 hours. You can restore the geo-backup to a server in any other region where Azure Synapse is supported. A geo-backup ensures that a data warehouse can be restored in case the restore points in the primary region aren't available.
+
+### Workload management
+
+> [!TIP]
+> In a production data warehouse, there are typically mixed workloads which have different resource usage characteristics running concurrently.
+
+A workload is a class of database requests with common traits whose access to the database can be managed with a set of rules. Workloads are useful for:
+
+- Setting different access priorities for different types of requests.
+
+- Monitoring resource usage patterns, performance tuning, and capacity planning.
+
+- Limiting the number of requests or sessions that can run at the same time.
+
+In a Teradata system, workload management is the act of managing workload performance by monitoring system activity and acting when pre-defined limits are reached. Workload management uses rules, and each rule applies only to some database requests. However, the collection of all rules applies to all active work on the platform. Teradata Active System Management (TASM) performs full workload management in a Teradata Database.
+
+In Azure Synapse, resource classes are pre-determined resource limits that govern compute resources and concurrency for query execution. Resource classes can help you manage your workload by setting limits on the number of queries that run concurrently and on the compute resources assigned to each query. There's a trade-off between memory and concurrency.
+
+See [Resource classes for workload management](/azure/sql-data-warehouse/resource-classes-for-workload-management) for detailed information.
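+
+Resource class assignment is done through role membership, as in the following sketch; the user name is hypothetical.
+
+```sql
+-- Example only: give a load user a larger resource class for data loading,
+-- then return it to the default resource class when finished.
+EXEC sp_addrolemember 'largerc', 'load_user';
+EXEC sp_droprolemember 'largerc', 'load_user';
+```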
+
+This information can also be used for capacity planning, to determine the resources required for additional users or application workload. It also applies to planning scale-ups and scale-downs of compute resources for cost-effective support of 'peaky' workloads.
+
+### Scaling compute resources
+
+> [!TIP]
+> A major benefit of Azure is the ability to independently scale up and down compute resources on demand to handle peaky workloads cost-effectively.
+
+The architecture of Azure Synapse separates storage and compute, allowing each to scale independently. As a result, [compute resources can be scaled](/azure/synapse-analytics/sql-data-warehouse/quickstart-scale-compute-portal) to meet performance demands independent of data storage. You can also pause and resume compute resources. A natural benefit of this architecture is that billing for compute and storage is separate. If a data warehouse isn't in use, you can save on compute costs by pausing compute.
+
+Compute resources can be scaled up or scaled back by adjusting the data warehouse units setting for the data warehouse. Loading and query performance will increase linearly as you add more data warehouse units.
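+
+For example, a scale operation can be issued with T-SQL from the master database; the pool name and target DWU value below are placeholders.
+
+```sql
+-- Example only: change the data warehouse units (service objective) of a dedicated SQL pool.
+ALTER DATABASE [MyDedicatedPool] MODIFY (SERVICE_OBJECTIVE = 'DW1000c');
+```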
+
+Adding more compute nodes adds more compute power and ability to leverage more parallel processing. As the number of compute nodes increases, the number of distributions per compute node decreases, providing more compute power and parallel processing for queries. Similarly, decreasing data warehouse units reduces the number of compute nodes, which reduces the compute resources for queries.
+
+## Next steps
+
+To learn more about visualization and reporting, see the next article in this series: [Visualization and reporting for Teradata migrations](4-visualization-reporting.md).
synapse-analytics 4 Visualization Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/4-visualization-reporting.md
+
+ Title: "Visualization and reporting for Teradata migrations"
+description: Learn about Microsoft and third-party BI tools for reports and visualizations in Azure Synapse compared to Teradata.
+++
+ms.devlang:
++++ Last updated : 05/24/2022++
+# Visualization and reporting for Teradata migrations
+
+This article is part four of a seven-part series that provides guidance on how to migrate from Teradata to Azure Synapse Analytics. This article provides best practices for visualization and reporting.
+
+## Accessing Azure Synapse Analytics using Microsoft and third-party BI tools
+
+Almost every organization accesses data warehouses and data marts by using a range of BI tools and applications, such as:
+
+- Microsoft BI tools, like Power BI.
+
+- Office applications, like Microsoft Excel spreadsheets.
+
+- Third-party BI tools from various vendors.
+
+- Custom analytic applications that have embedded BI tool functionality inside the application.
+
+- Operational applications that request BI on demand by invoking queries and reports as-a-service on a BI platform that, in turn, queries data in the data warehouse or data marts being migrated.
+
+- Interactive data science development tools, such as Azure Synapse Spark notebooks, Azure Machine Learning, RStudio, and Jupyter notebooks.
+
+Migrating visualization and reporting as part of a data warehouse migration program means that all the existing queries, reports, and dashboards generated and issued by these tools and applications need to run on Azure Synapse and yield the same results as they did in the original data warehouse prior to migration.
+
+> [!TIP]
+> Existing users, user groups, roles and assignments of access security privileges need to be migrated first for migration of reports and visualizations to succeed.
+
+To make that happen, everything that BI tools and applications depend on still needs to work once you migrate your data warehouse schema and data to Azure Synapse. That includes the obvious and the not so obvious&mdash;such as access and security. While access and security are discussed in [another guide](3-security-access-operations.md) in this series, it's a prerequisite to accessing data in the migrated system. Access and security include ensuring that:
+
+- Authentication is migrated to let users sign in to the data warehouse and data mart databases on Azure Synapse.
+
+- All users are migrated to Azure Synapse.
+
+- All user groups are migrated to Azure Synapse.
+
+- All roles are migrated to Azure Synapse.
+
+- All authorization privileges governing access control are migrated to Azure Synapse.
+
+- User, role, and privilege assignments are migrated to mirror what you had on your existing data warehouse before migration. For example:
+ - Database object privileges assigned to roles
+ - Roles assigned to user groups
+ - Users assigned to user groups and/or roles
+
+> [!TIP]
+> Communication and business user involvement is critical to success.
+
+In addition, all the required data needs to be migrated to ensure the same results appear in the same reports and dashboards that now query data on Azure Synapse. User expectation will undoubtedly be that migration is seamless and there will be no surprises that destroy their confidence in the migrated system on Azure Synapse. So, this is an area where you must take extreme care and communicate as much as possible to allay any fears in your user base. Their expectations are that:
+
+- Table structure will be the same if directly referred to in queries
+
+- Table and column names remain the same if directly referred to in queries; for instance, so that calculated fields defined on columns in BI tools don't fail when aggregate reports are produced
+
+- Historical analysis remains the same
+
+- Data types should, if possible, remain the same
+
+- Query behavior remains the same
+
+- ODBC / JDBC drivers are tested to make sure nothing has changed in terms of query behavior
+
+> [!TIP]
+> Views and SQL queries using proprietary SQL query extensions are likely to result in incompatibilities that impact BI reports and dashboards.
+
+If BI tools query views in the underlying data warehouse or data mart database, will those views still work after migration? They might, but if the views contain proprietary SQL extensions specific to your legacy data warehouse DBMS that have no equivalent in Azure Synapse, you'll need to know about them and find a way to resolve them.
+
+Other issues like the behavior of nulls or data type variations across DBMS platforms need to be tested, in case they cause slightly different calculation results. Obviously, you want to minimize these issues and take all necessary steps to shield business users from any kind of impact. Depending on your legacy data warehouse system (such as Teradata), there are [tools](/azure/synapse-analytics/partner/data-integration) that can help hide these differences so that BI tools and applications are kept unaware of them and can run unchanged.
+
+> [!TIP]
+> Use repeatable tests to ensure reports, dashboards, and other visualizations migrate successfully.
+
+Testing is critical to visualization and report migration. You need a test suite and agreed-on test data to run and rerun tests in both environments. A test harness is also useful, and a few are mentioned later in this guide. In addition, it's also important to have significant business involvement in this area of migration to keep confidence high and to keep them engaged and part of the project.
+
+Finally, you may also be thinking about switching BI tools. For example, you might want to [migrate to Power BI](/power-bi/guidance/powerbi-migration-overview). The temptation is to do all of this at the same time, while migrating your schema, data, ETL processing, and more. However, to minimize risk, it's better to migrate to Azure Synapse first and get everything working before undertaking further modernization.
+
+If your existing BI tools run on premises, ensure that they're able to connect to Azure Synapse through your firewall to run comparisons against both environments. Alternatively, if the vendor of your existing BI tools offers their product on Azure, you can try it there. The same applies for applications running on premises that embed BI or that call your BI server on-demand, requesting a "headless report" with data returned in XML or JSON, for example.
+
+There's a lot to think about here, so let's look at all this in more detail.
+
+> [!TIP]
+> A lift-and-shift data warehouse migration is likely to minimize any disruption to reports, dashboards, and other visualizations.
+
+## Minimizing the impact of data warehouse migration on BI tools and reports using data virtualization
+
+> [!TIP]
+> Data virtualization allows you to shield business users from structural changes during migration so that they remain unaware of changes.
+
+The temptation during data warehouse migration to the cloud is to take the opportunity to make changes during the migration to fulfill long-term requirements, such as open business requests, missing data, and new features. However, if you do that, it can affect BI tool business users and applications accessing your data warehouse, especially if it involves structural changes in your data model. Even if there are no new data structures because of new requirements, adopting a different data modeling technique (like Data Vault) in your migrated data warehouse is likely to cause structural changes that impact BI reports and dashboards. If you want to adopt an agile data modeling technique, do so after migration. One way to minimize the impact of things like schema changes on BI tools, users, and the reports they produce is to introduce data virtualization between BI tools and your data warehouse and data marts. The following diagram shows how data virtualization can hide the migration from users.
++
+This breaks the dependency between business users utilizing self-service BI tools and the physical schema of the underlying data warehouse and data marts that are being migrated.
+
+> [!TIP]
+> Schema alterations to tune your data model for Azure Synapse can be hidden from users.
+
+By introducing data virtualization, any schema alterations made during data warehouse and data mart migration to Azure Synapse (to optimize performance, for example) can be hidden from business users because they only access virtual tables in the data virtualization layer. If structural changes are needed, only the mappings between the data warehouse or data marts and any virtual tables would need to be changed, so that users remain unaware of those changes and unaware of the migration. [Microsoft partners](/azure/synapse-analytics/partner/data-integration) provide useful data virtualization software.
+
+## Identifying high priority reports to migrate first
+
+A key question when migrating your existing reports and dashboards to Azure Synapse is which ones to migrate first. Several factors can drive the decision. For example:
+
+- Business value
+
+- Usage
+
+- Ease of migration
+
+- Data migration strategy
+
+These factors are discussed in more detail later in this article.
+
+Whatever the decision is, it must involve the business, since they produce the reports and dashboards, and consume the insights these artifacts provide in support of the decisions that are made around your business. That said, if most reports and dashboards can be migrated seamlessly, with minimal effort, and offer like-for-like results simply by pointing your BI tool(s) at Azure Synapse instead of your legacy data warehouse system, then everyone benefits. Therefore, if it's that straightforward and there's no reliance on legacy system proprietary SQL extensions, then the ease-of-migration option clearly breeds confidence.
+
+### Migrating reports based on usage
+
+Usage is interesting, since it's an indicator of business value. Reports and dashboards that are never used clearly aren't contributing to supporting any decisions and don't currently offer any value. So, do you have any mechanism for finding out which reports and dashboards are currently not used? Several BI tools provide statistics on usage, which would be an obvious place to start.
+
+If your legacy data warehouse has been up and running for many years, there's a high chance you could have hundreds, if not thousands, of reports in existence. In these situations, usage is an important indicator of the business value of a specific report or dashboard. In that sense, it's worth compiling an inventory of the reports and dashboards you have and defining their business purpose and usage statistics.
+
+For those that aren't used at all, it's an appropriate time to seek a business decision on whether it's necessary to decommission those reports to optimize your migration efforts. A key question worth asking when deciding to decommission unused reports is: are they unused because people don't know they exist, because they offer no business value, or because they've been superseded by others?
+
+### Migrating reports based on business value
+
+Usage on its own isn't a clear indicator of business value. There needs to be a deeper business context to determine the value to the business. In an ideal world, we would like to know the contribution of the insights produced in a report to the bottom line of the business. That's exceedingly difficult to determine, since every decision made, and its dependency on the insights in a specific report, would need to be recorded along with the contribution that each decision makes to the bottom line of the business. You would also need to do this over time.
+
+This level of detail is unlikely to be available in most organizations. One way to get deeper on business value to drive migration order is to look at alignment with business strategy. A business strategy set by your executive typically lays out strategic business objectives, key performance indicators (KPIs), and KPI targets that need to be achieved and who is accountable for achieving them. In that sense, classifying your reports and dashboards by strategic business objectives&mdash;for example, reduce fraud, improve customer engagement, and optimize business operations&mdash;will help you understand their business purpose and show which objectives specific reports and dashboards contribute to. Reports and dashboards associated with high priority objectives in the business strategy can then be highlighted so that migration is focused on delivering business value in a strategic high priority area.
+
+It's also worthwhile to classify reports and dashboards as operational, tactical, or strategic, to understand the level in the business where they're used. Contribution to strategic business objectives is required at all these levels. Knowing which reports and dashboards are used, at what level, and what objectives they're associated with, helps to focus migration on high priority business value that will drive the company forward. Business contribution of reports and dashboards is needed to understand this, perhaps captured in a table like the following **Business strategy objective** example.
+
+| **Level** | **Report / dashboard name** | **Business purpose** | **Department used** | **Usage frequency** | **Business priority** |
+|-|-|-|-|-|-|
+| **Strategic** | | | | | |
+| **Tactical** | | | | | |
+| **Operational** | | | | | |
+
+While this may seem too time-consuming, you need a mechanism to understand the contribution of reports and dashboards to business value, whether you're migrating or not. Catalogs like Azure Data Catalog are becoming very important because they give you the ability to catalog reports and dashboards, automatically capture the metadata associated with them, and let business users tag and rate them to help you understand business value.
+
+### Migrating reports based on data migration strategy
+
+> [!TIP]
+> Data migration strategy could also dictate which reports and visualizations get migrated first.
+
+If your migration strategy is based on migrating "data marts first", clearly, the order of data mart migration will have a bearing on which reports and dashboards can be migrated first to run on Azure Synapse. Again, this is likely to be a business-value-related decision. Prioritizing which data marts are migrated first reflects business priorities. Metadata discovery tools can help you here by showing you which reports rely on data in which data mart tables.
+
+## Migration incompatibility issues that can impact reports and visualizations
+
+When it comes to migrating to Azure Synapse, there are several things that can impact the ease of migration for reports, dashboards, and other visualizations. The ease of migration is affected by:
+
+- Incompatibilities that occur during schema migration between your legacy data warehouse and Azure Synapse.
+
+- Incompatibilities in SQL between your legacy data warehouse and Azure Synapse.
+
+### The impact of schema incompatibilities
+
+> [!TIP]
+> Schema incompatibilities include legacy warehouse DBMS table types and data types that are unsupported on Azure Synapse.
+
+BI tool reports and dashboards, and other visualizations, are produced by issuing SQL queries that access physical tables and/or views in your data warehouse or data mart. When it comes to migrating your data warehouse or data mart schema to Azure Synapse, there may be incompatibilities that can impact reports and dashboards, such as:
+
+- Non-standard table types supported in your legacy data warehouse DBMS that don't have an equivalent in Azure Synapse (like the Teradata Time-Series tables)
+
+- Data types supported in your legacy data warehouse DBMS that don't have an equivalent in Azure Synapse. For example, Teradata Geospatial or Interval data types.
+
+In many cases where there are incompatibilities, there may be ways around them. For example, the data in unsupported table types can be migrated into a standard table with appropriate data types and indexed or partitioned on a date/time column. Similarly, it may be possible to represent unsupported data types in another type of column and perform calculations in Azure Synapse to achieve the same results. Either way, it will need refactoring.
+
+> [!TIP]
+> Querying the system catalog of your legacy warehouse DBMS is a quick and straightforward way to identify schema incompatibilities with Azure Synapse.
+
+To identify reports and visualizations impacted by schema incompatibilities, run queries against the system catalog of your legacy data warehouse to identify tables with unsupported data types. Then use metadata from your BI tool or tools to identify reports that access these structures, to see what could be impacted. Obviously, this will depend on the legacy data warehouse DBMS you're migrating from. Find details of how to identify these incompatibilities in [Design and performance for Teradata migrations](1-design-performance-migration.md).
+
+The impact may be less than you think, because many BI tools don't support such data types. As a result, views may already exist in your legacy data warehouse that `CAST` unsupported data types to more generic types.
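+
+Such a view might look like the following hypothetical Teradata sketch, which exposes an `INTERVAL` column as text so that dependent reports keep working after migration; the object names are placeholders.
+
+```sql
+-- Example only: expose an unsupported data type through a more generic type in a view.
+REPLACE VIEW Sales.OrderFacts_V AS
+SELECT
+    OrderId,
+    OrderDate,
+    CAST(DeliveryDelay AS VARCHAR(30)) AS DeliveryDelay  -- INTERVAL value exposed as text
+FROM Sales.OrderFacts;
+```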
+
+### The impact of SQL incompatibilities and differences
+
+Additionally, any report, dashboard, or other visualization in an application or tool that makes use of proprietary SQL extensions associated with your legacy data warehouse DBMS is likely to be impacted when migrating to Azure Synapse. This could happen because the BI tool or application:
+
+- Accesses legacy data warehouse DBMS views that include proprietary SQL functions that have no equivalent in Azure Synapse.
+
+- Issues SQL queries that include proprietary SQL functions, peculiar to the SQL dialect of your legacy data warehouse DBMS, that have no equivalent in Azure Synapse.
+
+### Gauging the impact of SQL incompatibilities on your reporting portfolio
+
+You can't rely on documentation associated with reports, dashboards, and other visualizations to gauge how big of an impact SQL incompatibility may have on the portfolio of embedded query services, reports, dashboards, and other visualizations you're intending to migrate to Azure Synapse. There must be a more precise way of doing that.
+
+#### Using EXPLAIN statements to find SQL incompatibilities
+
+> [!TIP]
+> Gauge the impact of SQL incompatibilities by harvesting your DBMS log files and running `EXPLAIN` statements.
+
+One way is to get hold of the SQL log files of your legacy data warehouse. Use a script to pull out a representative set of SQL statements into a file, prefix each SQL statement with an `EXPLAIN` statement, and then run all the `EXPLAIN` statements in Azure Synapse. Any SQL statements containing proprietary SQL extensions from your legacy data warehouse that are unsupported will be rejected by Azure Synapse when the `EXPLAIN` statements are executed. This approach would at least give you an idea of how significant the use of incompatible SQL is.
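+
+For instance, prefixing a harvested statement with `EXPLAIN` and running it against Azure Synapse either returns a query plan or raises an error for unsupported syntax; the query below is purely illustrative.
+
+```sql
+-- Example only: a harvested legacy query wrapped in EXPLAIN for compatibility testing.
+EXPLAIN
+SELECT OrderDate, SUM(SalesAmount) AS TotalSales
+FROM dbo.FactSales
+GROUP BY OrderDate;
+```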
+
+Metadata from your legacy data warehouse DBMS will also help you when it comes to views. Again, you can capture and view SQL statements, and `EXPLAIN` them as described previously to identify incompatible SQL in views.
+
+## Testing report and dashboard migration to Azure Synapse Analytics
+
+> [!TIP]
+> Test performance and tune to minimize compute costs.
+
+A key element in data warehouse migration is the testing of reports and dashboards against Azure Synapse to verify that the migration has worked. To do this, you need to define a series of tests and a set of required outcomes for each test that needs to be run to verify success. It's important to ensure that reports and dashboards are tested and compared across your existing and migrated data warehouse systems to:
+
+- Identify whether schema changes made during migration, such as converted data types, have impacted reports in terms of their ability to run, their results, and the corresponding visualizations.
+
+- Verify all users are migrated.
+
+- Verify all roles are migrated and users assigned to those roles.
+
+- Verify all data access security privileges are migrated to ensure access control list (ACL) migration.
+
+- Ensure consistent results of all known queries, reports, and dashboards.
+
+- Ensure that data and ETL migration is complete and error free.
+
+- Ensure data privacy is upheld.
+
+- Test performance and scalability.
+
+- Test analytical functionality.
+
+For information about how to migrate users, user groups, roles, and privileges, see [Security, access, and operations for Teradata migrations](3-security-access-operations.md), which is part of this series of articles.
+
+> [!TIP]
+> Build an automated test suite to make tests repeatable.
+
+It's also best practice to automate testing as much as possible, to make each test repeatable and to allow a consistent approach to evaluating results. This works well for known regular reports, and could be managed via [Synapse pipelines](/azure/synapse-analytics/get-started-pipelines?msclkid=8f3e7e96cfed11eca432022bc07c18de) or [Azure Data Factory](/azure/data-factory/introduction?msclkid=2ccc66eccfde11ecaa58877e9d228779) orchestration. If you already have a suite of test queries in place for regression testing, you could use the testing tools to automate the post migration testing.
+
+> [!TIP]
+> Leverage tools that can compare metadata lineage to verify results.
+
+Ad-hoc analysis and reporting are more challenging and require a set of tests to verify that results are consistent across your legacy data warehouse DBMS and Azure Synapse. If reports and dashboards are inconsistent, then the ability to compare metadata lineage across the original and migrated systems is extremely valuable during migration testing, as it can highlight differences and pinpoint where they occurred when these aren't easy to detect. This is discussed in more detail later in this article.
+
+In terms of security, the best way to do this is to create roles, assign access privileges to roles, and then attach users to roles. To access your newly migrated data warehouse, set up an automated process to create new users, and to do role assignment. To detach users from roles, you can follow the same steps.
+
+It's also important to communicate the cut-over to all users, so they know what's changing and what to expect.
+
+## Analyzing lineage to understand dependencies between reports, dashboards, and data
+
+> [!TIP]
+> Having access to metadata and data lineage from reports all the way back to data source is critical for verifying that migrated reports are working correctly.
+
+A critical success factor in migrating reports and dashboards is understanding lineage. Lineage is metadata that shows the journey that data has taken, so you can see the path from the report/dashboard all the way back to where the data originates. It shows how data has gone from point to point, its location in the data warehouse and/or data mart, and where it's used&mdash;for example, in what reports. It helps you understand what happens to data as it travels through different data stores&mdash;files and database&mdash;different ETL pipelines, and into reports. If business users have access to data lineage, it improves trust, breeds confidence, and enables more informed business decisions.
+
+> [!TIP]
+> Tools that automate metadata collection and show end-to-end lineage in a multi-vendor environment are valuable when it comes to migration.
+
+In multi-vendor data warehouse environments, business analysts in BI teams may map out data lineage. For example, if you have Informatica for your ETL, Oracle for your data warehouse, and Tableau for reporting, each with its own metadata repository, figuring out where a specific data element in a report came from can be challenging and time consuming.
+
+To migrate seamlessly from a legacy data warehouse to Azure Synapse, end-to-end data lineage helps prove like-for-like migration when comparing reports and dashboards against your legacy environment. That means that metadata from several tools needs to be captured and integrated to show the end-to-end journey. Having access to tools that support automated metadata discovery and data lineage will let you see duplicate reports and ETL processes, and reports that rely on data sources that are obsolete, questionable, or even non-existent. With this information, you can reduce the number of reports and ETL processes that you migrate.
+
+You can also compare end-to-end lineage of a report in Azure Synapse against the end-to-end lineage, for the same report in your legacy data warehouse environment, to see if there are any differences that have occurred inadvertently during migration. This helps enormously with testing and verifying migration success.
+
+Data lineage visualization not only reduces time, effort, and error in the migration process, but also enables faster execution of the migration project.
+
+By leveraging automated metadata discovery and data lineage tools that can compare lineage, you can verify if a report is produced using data migrated to Azure Synapse and if it's produced in the same way as in your legacy environment. This kind of capability also helps you determine:
+
+- What data needs to be migrated to ensure successful report and dashboard execution on Azure Synapse
+
+- What transformations have been and should be performed to ensure successful execution on Azure Synapse
+
+- How to reduce report duplication
+
+This substantially simplifies the data migration process, because the business will have a better idea of the data assets it has and what needs to be migrated to enable a solid reporting environment on Azure Synapse.
+
+> [!TIP]
+> Azure Data Factory and several third-party ETL tools support lineage.
+
+Several ETL tools provide end-to-end lineage capability, and you may be able to make use of this via your existing ETL tool if you're continuing to use it with Azure Synapse. [Synapse pipelines](/azure/synapse-analytics/get-started-pipelines?msclkid=8f3e7e96cfed11eca432022bc07c18de) and [Azure Data Factory](/azure/data-factory/introduction?msclkid=2ccc66eccfde11ecaa58877e9d228779) let you view lineage in mapping data flows. Also, [Microsoft partners](/azure/synapse-analytics/partner/data-integration) provide automated metadata discovery, data lineage, and lineage comparison tools.
+
+## Migrating BI tool semantic layers to Azure Synapse Analytics
+
+> [!TIP]
+> Some BI tools have semantic layers that simplify business user access to physical data structures in your data warehouse or data mart, like SAP Business Objects and IBM Cognos.
+
+Some BI tools have what is known as a semantic metadata layer. The role of this metadata layer is to simplify business user access to physical data structures in an underlying data warehouse or data mart database. It does this by providing high-level objects like dimensions, measures, hierarchies, calculated metrics, and joins. These objects use business terms familiar to business analysts and are mapped to the physical data structures in the data warehouse or data mart database.
+
+When it comes to data warehouse migration, changes to column names or table names may be forced upon you. For example, in Oracle, table names can have a "#". In Azure Synapse, the "#" is only allowed as a prefix to a table name to indicate a temporary table. Therefore, you may need to change a table name if migrating from Oracle. You may need to do rework to change mappings in such cases.
+
+A good way to get everything consistent across multiple BI tools is to create a universal semantic layer, using common data names for high-level objects like dimensions, measures, hierarchies, and joins, in a data virtualization server (as shown in the next diagram) that sits between applications, BI tools, and Azure Synapse. This allows you to set up everything once (instead of in every tool), including calculated fields, joins and mappings, and then point all BI tools at the data virtualization server.
+
+> [!TIP]
+> Use data virtualization to create a common semantic layer to guarantee consistency across all BI tools in an Azure Synapse environment.
+
+In this way, you get consistency across all BI tools, while at the same time breaking the dependency between BI tools and applications, and the underlying physical data structures in Azure Synapse. Use [Microsoft partners](/azure/synapse-analytics/partner/data-integration) on Azure to implement this. The following diagram shows how a common vocabulary in the Data Virtualization server lets multiple BI tools see a common semantic layer.
++
+## Conclusions
+
+> [!TIP]
+> Identify incompatibilities early to gauge the extent of the migration effort. Migrate your users, user groups, roles, and privilege assignments. Only migrate the reports and visualizations that are used and are contributing to business value.
+
+In a lift-and-shift data warehouse migration to Azure Synapse, most reports and dashboards should migrate easily.
+
+However, issues can arise if data structures change, if data is stored in unsupported data types, or if access to data in the data warehouse or data mart is via a view that includes proprietary SQL with no equivalent in Azure Synapse. You'll need to deal with those issues if they arise.
+
+You can't rely on documentation to find out where the issues are likely to be. Making use of `EXPLAIN` statements is a pragmatic and quick way to identify incompatibilities in SQL. Rework the incompatible SQL to achieve similar results in Azure Synapse. In addition, it's recommended that you make use of automated metadata discovery and lineage tools to identify duplicate reports, to find reports that are no longer valid because they use data from data sources you no longer use, and to understand dependencies. Some of these tools help compare lineage to verify that reports running in your legacy data warehouse environment are produced identically in Azure Synapse.
+
+Don't migrate reports that you no longer use. BI tool usage data can help determine which ones aren't in use. For the visualizations and reports that you do want to migrate, migrate all users, user groups, roles, and privileges, and associate these reports with strategic business objectives and priorities to help you identify report insight contribution to specific objectives. This is useful if you're using business value to drive your report migration strategy. If you're migrating by data store&mdash;data mart by data mart&mdash;then metadata will also help you identify which reports are dependent on which tables and views, so that you can focus on migrating those first.
+
+Finally, consider data virtualization to shield BI tools and applications from structural changes to the data warehouse and/or the data mart data model that may occur during migration. You can also use a common vocabulary with data virtualization to define a common semantic layer that guarantees consistent common data names, definitions, metrics, hierarchies, joins, and more across all BI tools and applications in a migrated Azure Synapse environment.
+
+## Next steps
+
+To learn more about minimizing SQL issues, see the next article in this series: [Minimizing SQL issues for Teradata migrations](5-minimize-sql-issues.md).
synapse-analytics 5 Minimize Sql Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/5-minimize-sql-issues.md
+
+ Title: "Minimizing SQL issues for Teradata migrations"
+description: Learn how to minimize the risk of SQL issues when migrating from Teradata to Azure Synapse.
+++
+ms.devlang:
++++ Last updated : 05/24/2022++
+# Minimizing SQL issues for Teradata migrations
+
+This article is part five of a seven-part series that provides guidance on how to migrate from Teradata to Azure Synapse Analytics. This article provides best practices for minimizing SQL issues.
+
+## Overview
+
+### Characteristics of Teradata environments
+
+> [!TIP]
+> Teradata pioneered large scale SQL databases using MPP in the 1980s.
+
+Teradata released its first database product in 1984. It introduced massively parallel processing (MPP) techniques to enable data processing at scale more efficiently than the mainframe technologies available at the time. Since then, the product has evolved and has many installations among large financial institutions, telecommunications, and retail companies. The original implementation used proprietary hardware and was channel attached to mainframes&mdash;typically IBM or IBM-compatible processors.
+
+While more recent announcements have included network connectivity and the availability of the Teradata technology stack in the cloud (including Azure), most existing installations are on premises, so many users are considering migrating some or all of their Teradata data to Azure Synapse to gain the benefits of a move to a modern cloud environment.
+
+> [!TIP]
+> Many existing Teradata installations are data warehouses using a dimensional data model.
+
+Teradata technology is often used to implement a data warehouse, supporting complex analytic queries on large data volumes using SQL. Dimensional data models&mdash;star or snowflake schemas&mdash;are common, as is the implementation of data marts for individual departments.
+
+This combination of SQL and dimensional data models simplifies migration to Azure Synapse, since the basic concepts and SQL skills are transferable. The recommended approach is to migrate the existing data model as-is to reduce risk and time taken. Even if the eventual intention is to make changes to the data model (for example, moving to a Data Vault model), perform an initial as-is migration and then make changes within the Azure cloud environment, leveraging the performance, elastic scalability, and cost advantages there.
+
+While the SQL language has been standardized, individual vendors have in some cases implemented proprietary extensions. This document highlights potential SQL differences you may encounter while migrating from a legacy Teradata environment and provides workarounds for them.
+
+### Using a VM Teradata instance as part of a migration
+
+> [!TIP]
+> Use the VM capability in Azure to create a temporary Teradata instance to speed up migration and minimize impact on the source system.
+
+Leverage the Azure environment when running a migration from an on-premises Teradata environment. Azure provides affordable cloud storage and elastic scalability to create a Teradata instance within a VM in Azure, collocated with the target Azure Synapse environment.
+
+With this approach, standard Teradata utilities such as Teradata Parallel Data Transporter (or third-party data replication tools such as Attunity Replicate) can be used to efficiently move the subset of Teradata tables that are to be migrated onto the VM instance, and then all migration tasks can take place within the Azure environment. This approach has several benefits:
+
+- After the initial replication of data, the source system isn't impacted by the migration tasks
+
+- The familiar Teradata interfaces, tools and utilities are available within the Azure environment
+
+- Once in the Azure environment there are no potential issues with network bandwidth availability between the on-premises source system and the cloud target system
+
+- Tools such as Azure Data Factory can efficiently call utilities such as Teradata Parallel Transporter to migrate data quickly and easily
+
+- The migration process is orchestrated and controlled entirely within the Azure environment
+
+### Use Azure Data Factory to implement a metadata-driven migration
+
+> [!TIP]
+> Automate the migration process by using Azure Data Factory capabilities.
+
+Automate and orchestrate the migration process by making use of the capabilities in the Azure environment. This approach also minimizes the migration's impact on the existing Teradata environment, which may already be running close to full capacity.
+
+Azure Data Factory is a cloud-based data integration service that allows creation of data-driven workflows in the cloud for orchestrating and automating data movement and data transformation. Using Data Factory, you can create and schedule data-driven workflows&mdash;called pipelines&mdash;that can ingest data from disparate data stores. It can process and transform data by using compute services such as Azure HDInsight Hadoop, Spark, Azure Data Lake Analytics, and Azure Machine Learning.
+
+By creating metadata to list the data tables to be migrated and their location, you can use the Data Factory facilities to manage and automate parts of the migration process. You can also use [Synapse pipelines](/azure/synapse-analytics/get-started-pipelines?msclkid=8f3e7e96cfed11eca432022bc07c18de).
+
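+The exact shape of this metadata is up to you. As a minimal sketch, assuming a hypothetical control table and column names (not part of any Microsoft tooling), a Data Factory Lookup activity could read a list like this and feed a ForEach loop that copies each table:
+
+```sql
+-- Hypothetical control table listing the Teradata tables to migrate
+CREATE TABLE migration_control
+(
+    source_database  VARCHAR(128) NOT NULL,
+    source_table     VARCHAR(128) NOT NULL,
+    target_schema    VARCHAR(128) NOT NULL,
+    target_table     VARCHAR(128) NOT NULL,
+    migration_status VARCHAR(20)  NOT NULL
+);
+
+-- Query used by a Lookup activity to drive the per-table copy activities
+SELECT source_database, source_table, target_schema, target_table
+FROM migration_control
+WHERE migration_status = 'PENDING';
+```
+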
+## SQL DDL differences between Teradata and Azure Synapse
+
+### SQL Data Definition Language (DDL)
+
+> [!TIP]
+> SQL DDL commands `CREATE TABLE` and `CREATE VIEW` have standard core elements but are also used to define implementation-specific options.
+
+The ANSI SQL standard defines the basic syntax for DDL commands such as `CREATE TABLE` and `CREATE VIEW`. These commands are used within both Teradata and Azure Synapse, but they've also been extended to allow definition of implementation-specific features such as indexing, table distribution and partitioning options.
+
+The following sections discuss Teradata-specific options to consider during a migration to Azure Synapse.
+
+### Table considerations
+
+> [!TIP]
+> Use existing indexes to give an indication of candidates for indexing in the migrated warehouse.
+
+When migrating tables between different technologies, only the raw data and its descriptive metadata get physically moved between the two environments. Other database elements from the source system, such as indexes and log files, aren't directly migrated because they may not be needed or may be implemented differently within the new target environment. For example, the `MULTISET` option in Teradata's `CREATE TABLE` syntax has no direct equivalent in Azure Synapse.
+
+It's important to understand where performance optimizations&mdash;such as indexes&mdash;were used in the source environment. This indicates where performance optimization can be added in the new target environment. For example, if a NUSI has been created in the source Teradata environment, this might indicate that a non-clustered index should be created in the migrated Azure Synapse. Other native performance optimization techniques, such as table replication, may be more applicable than a straight 'like for like' index creation.
+
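+As a sketch of that kind of mapping, the following shows how a Teradata secondary index might translate to a nonclustered index in Azure Synapse. The table and column names are illustrative only; whether an index is worthwhile should be decided from query patterns rather than copied blindly:
+
+```sql
+-- Teradata: a non-unique secondary index (NUSI) on a frequently filtered column
+-- CREATE INDEX (customer_region) ON sales_db.customer;
+
+-- Azure Synapse: a possible nonclustered index on the migrated table
+CREATE INDEX ix_customer_region ON dbo.customer (customer_region);
+```
+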
+### Unsupported Teradata table types
+
+> [!TIP]
+> Standard tables within Azure Synapse can support migrated Teradata time series and temporal tables.
+
+Teradata includes support for special table types for time series and temporal data. The syntax and some of the functions for these table types aren't directly supported within Azure Synapse, but the data can be migrated into a standard table with appropriate data types and indexing or partitioning on the date/time column.
+
+Teradata implements the temporal query functionality via query rewriting to add additional filters within a temporal query to limit the applicable date range. If this functionality is currently in use within the source Teradata environment and is to be migrated, then this additional filtering will need to be added into the relevant temporal queries.
+
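+As a minimal sketch, assuming the migrated table exposes the validity period as two explicit columns (hypothetical names), a Teradata "current" temporal query can be expressed as an ordinary date-range filter:
+
+```sql
+-- Return only the rows that are currently valid
+SELECT policy_id, premium
+FROM dbo.policy_history
+WHERE valid_start <= GETDATE()
+  AND valid_end > GETDATE();
+```
+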
+The Azure environment also includes [Time Series Insights](https://azure.microsoft.com/services/time-series-insights/), a feature for complex analytics on time series data at scale. It's aimed at IoT data analysis applications and may be more appropriate for this use case.
+
+### Teradata data type mapping
+
+> [!TIP]
+> Assess the impact of unsupported data types as part of the preparation phase.
+
+Most Teradata data types have a direct equivalent in Azure Synapse. This table shows these data types together with the recommended approach for handling them. In the table, Teradata column type is the type that's stored within the system catalog&mdash;for example, in `DBC.ColumnsV`.
+
+| Teradata column type | Teradata data type | Azure Synapse data type |
+|-|--|-|
+| ++ | TD_ANYTYPE | Not supported in Azure Synapse |
+| A1 | ARRAY | Not supported in Azure Synapse |
+| AN | ARRAY | Not supported in Azure Synapse |
+| AT | TIME | TIME |
+| BF | BYTE | BINARY |
+| BO | BLOB | BLOB data type isn't directly supported but can be replaced with BINARY |
+| BV | VARBYTE | BINARY |
+| CF | CHAR | CHAR |
+| CO | CLOB | CLOB data type isn't directly supported but can be replaced with VARCHAR |
+| CV | VARCHAR | VARCHAR |
+| D | DECIMAL | DECIMAL |
+| DA | DATE | DATE |
+| DH | INTERVAL DAY TO HOUR | INTERVAL data types aren't supported in Azure Synapse, but date calculations can be done with the date comparison functions (for example, DATEDIFF and DATEADD) |
+| DM | INTERVAL DAY TO MINUTE | INTERVAL data types aren't supported in Azure Synapse, but date calculations can be done with the date comparison functions (for example, DATEDIFF and DATEADD) |
+| DS | INTERVAL DAY TO SECOND | INTERVAL data types aren't supported in Azure Synapse, but date calculations can be done with the date comparison functions (for example, DATEDIFF and DATEADD) |
+| DT | DATASET | DATASET data type is supported in Azure Synapse |
+| DY | INTERVAL DAY | INTERVAL data types aren't supported in Azure Synapse, but date calculations can be done with the date comparison functions (for example, DATEDIFF and DATEADD) |
+| F | FLOAT | FLOAT |
+| HM | INTERVAL HOUR TO MINUTE | INTERVAL data types aren't supported in Azure Synapse, but date calculations can be done with the date comparison functions (for example, DATEDIFF and DATEADD) |
+| HR | INTERVAL HOUR | INTERVAL data types aren't supported in Azure Synapse, but date calculations can be done with the date comparison functions (for example, DATEDIFF and DATEADD) |
+| HS | INTERVAL HOUR TO SECOND | INTERVAL data types aren't supported in Azure Synapse, but date calculations can be done with the date comparison functions (for example, DATEDIFF and DATEADD) |
+| I1 | BYTEINT | TINYINT |
+| I2 | SMALLINT | SMALLINT |
+| I8 | BIGINT | BIGINT |
+| I | INTEGER | INT |
+| JN | JSON | JSON data type isn't currently directly supported within Azure Synapse, but JSON data can be stored in a VARCHAR field |
+| MI | INTERVAL MINUTE | INTERVAL data types aren't supported in Azure Synapse, but date calculations can be done with the date comparison functions (for example, DATEDIFF and DATEADD) |
+| MO | INTERVAL MONTH | INTERVAL data types aren't supported in Azure Synapse, but date calculations can be done with the date comparison functions (for example, DATEDIFF and DATEADD) |
+| MS | INTERVAL MINUTE TO SECOND | INTERVAL data types aren't supported in Azure Synapse, but date calculations can be done with the date comparison functions (for example, DATEDIFF and DATEADD) |
+| N | NUMBER | NUMERIC |
+| PD | PERIOD(DATE) | Can be converted to VARCHAR or split into two separate dates |
+| PM | PERIOD (TIMESTAMP WITH TIME ZONE) | Can be converted to VARCHAR or split into two separate timestamps (DATETIMEOFFSET) |
+| PS | PERIOD(TIMESTAMP) | Can be converted to VARCHAR or split into two separate timestamps (DATETIMEOFFSET) |
+| PT | PERIOD(TIME) | Can be converted to VARCHAR or split into two separate times |
+| PZ | PERIOD (TIME WITH TIME ZONE) | Can be converted to VARCHAR or split into two separate times, but WITH TIME ZONE isn't supported for TIME |
+| SC | INTERVAL SECOND | INTERVAL data types aren't supported in Azure Synapse, but date calculations can be done with the date comparison functions (for example, DATEDIFF and DATEADD) |
+| SZ | TIMESTAMP WITH TIME ZONE | DATETIMEOFFSET |
+| TS | TIMESTAMP | DATETIME or DATETIME2 |
+| TZ | TIME WITH TIME ZONE | TIME WITH TIME ZONE isn't supported because TIME is stored using "wall clock" time only, without a time zone offset |
+| XM | XML | XML data type isn't currently directly supported within Azure Synapse, but XML data can be stored in a VARCHAR field |
+| YM | INTERVAL YEAR TO MONTH | INTERVAL data types aren't supported in Azure Synapse, but date calculations can be done with the date comparison functions (for example, DATEDIFF and DATEADD) |
+| YR | INTERVAL YEAR | INTERVAL data types aren't supported in Azure Synapse, but date calculations can be done with the date comparison functions (for example, DATEDIFF and DATEADD) |
+
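+For the PERIOD column types in the preceding table, one option is to split the period into explicit start and end columns in the target table. A minimal sketch, using hypothetical table and column names:
+
+```sql
+-- Teradata source (simplified):
+-- CREATE TABLE contract_history (contract_id INTEGER, validity PERIOD(TIMESTAMP));
+
+-- Possible Azure Synapse target, with the period split into two columns
+CREATE TABLE dbo.contract_history
+(
+    contract_id    INT       NOT NULL,
+    validity_start DATETIME2 NOT NULL,
+    validity_end   DATETIME2 NOT NULL
+)
+WITH (DISTRIBUTION = ROUND_ROBIN);
+```
+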
+Use the metadata from the Teradata catalog tables to determine whether any of these data types are to be migrated and allow for this in the migration plan. For example, use a SQL query like this one to find any occurrences of unsupported data types that need attention.
+
+```sql
+SELECT
+    ColumnType,
+    CASE
+        WHEN ColumnType = '++' THEN 'TD_ANYTYPE'
+        WHEN ColumnType = 'A1' THEN 'ARRAY'
+        WHEN ColumnType = 'AN' THEN 'ARRAY'
+        WHEN ColumnType = 'BO' THEN 'BLOB'
+        WHEN ColumnType = 'CO' THEN 'CLOB'
+        WHEN ColumnType = 'DH' THEN 'INTERVAL DAY TO HOUR'
+        WHEN ColumnType = 'DM' THEN 'INTERVAL DAY TO MINUTE'
+        WHEN ColumnType = 'DS' THEN 'INTERVAL DAY TO SECOND'
+        WHEN ColumnType = 'DT' THEN 'DATASET'
+        WHEN ColumnType = 'DY' THEN 'INTERVAL DAY'
+        WHEN ColumnType = 'HM' THEN 'INTERVAL HOUR TO MINUTE'
+        WHEN ColumnType = 'HR' THEN 'INTERVAL HOUR'
+        WHEN ColumnType = 'HS' THEN 'INTERVAL HOUR TO SECOND'
+        WHEN ColumnType = 'JN' THEN 'JSON'
+        WHEN ColumnType = 'MI' THEN 'INTERVAL MINUTE'
+        WHEN ColumnType = 'MO' THEN 'INTERVAL MONTH'
+        WHEN ColumnType = 'MS' THEN 'INTERVAL MINUTE TO SECOND'
+        WHEN ColumnType = 'PD' THEN 'PERIOD(DATE)'
+        WHEN ColumnType = 'PM' THEN 'PERIOD (TIMESTAMP WITH TIME ZONE)'
+        WHEN ColumnType = 'PS' THEN 'PERIOD(TIMESTAMP)'
+        WHEN ColumnType = 'PT' THEN 'PERIOD(TIME)'
+        WHEN ColumnType = 'PZ' THEN 'PERIOD (TIME WITH TIME ZONE)'
+        WHEN ColumnType = 'SC' THEN 'INTERVAL SECOND'
+        WHEN ColumnType = 'SZ' THEN 'TIMESTAMP WITH TIME ZONE'
+        WHEN ColumnType = 'XM' THEN 'XML'
+        WHEN ColumnType = 'YM' THEN 'INTERVAL YEAR TO MONTH'
+        WHEN ColumnType = 'YR' THEN 'INTERVAL YEAR'
+    END AS Data_Type,
+    COUNT(*) AS Data_Type_Count
+FROM DBC.ColumnsV
+WHERE DatabaseName IN ('UserDB1', 'UserDB2', 'UserDB3') -- select databases to be migrated
+GROUP BY 1, 2
+ORDER BY 1;
+```
+
+> [!TIP]
+> Third-party tools and services can automate data mapping tasks.
+
+There are third-party vendors who offer tools and services to automate migration, including the mapping of data types. If a third-party ETL tool such as Informatica or Talend is already in use in the Teradata environment, those tools can implement any required data transformations.
+
+### Data Definition Language (DDL) generation
+
+> [!TIP]
+> Use existing Teradata metadata to automate the generation of `CREATE TABLE` and `CREATE VIEW` DDL for Azure Synapse.
+
+Edit existing Teradata `CREATE TABLE` and `CREATE VIEW` scripts to create the equivalent definitions with modified data types as described previously if necessary. Typically, this involves removing extra Teradata-specific clauses such as `FALLBACK` or `MULTISET`.
+
+However, all the information that specifies the current definitions of tables and views within the existing Teradata environment is maintained within system catalog tables. This is the best source of this information as it's guaranteed to be up to date and complete. Be aware that user-maintained documentation may not be in sync with the current table definitions.
+
+Access this information via views onto the catalog such as `DBC.ColumnsV` and generate the equivalent `CREATE TABLE` DDL statements for the equivalent tables in Azure Synapse.
+
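+As a sketch of the kind of translation involved (the table and column names are illustrative), a Teradata definition and its Azure Synapse counterpart might look like this:
+
+```sql
+-- Teradata DDL (simplified):
+-- CREATE MULTISET TABLE sales_db.orders, FALLBACK
+-- (
+--     order_id   INTEGER NOT NULL,
+--     order_date DATE,
+--     amount     DECIMAL(18,2)
+-- )
+-- PRIMARY INDEX (order_id);
+
+-- Possible Azure Synapse equivalent
+CREATE TABLE dbo.orders
+(
+    order_id   INT NOT NULL,
+    order_date DATE,
+    amount     DECIMAL(18,2)
+)
+WITH
+(
+    DISTRIBUTION = HASH(order_id),
+    CLUSTERED COLUMNSTORE INDEX
+);
+```
+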
+> [!TIP]
+> Third-party tools and services can automate data mapping tasks.
+
+There are [Microsoft partners](/azure/synapse-analytics/partner/data-integration) who offer tools and services to automate migration, including data-type mapping. Also, if a third-party ETL tool such as Informatica or Talend is already in use in the Teradata environment, that tool can implement any required data transformations.
+
+## SQL DML differences between Teradata and Azure Synapse
+
+### SQL Data Manipulation Language (DML)
+
+> [!TIP]
+> SQL DML commands `SELECT`, `INSERT` and `UPDATE` have standard core elements but may also implement different syntax options.
+
+The ANSI SQL standard defines the basic syntax for DML commands such as `SELECT`, `INSERT`, `UPDATE` and `DELETE`. Both Teradata and Azure Synapse use these commands, but in some cases there are implementation differences.
+
+The following sections discuss the Teradata-specific DML commands that you should consider during a migration to Azure Synapse.
+
+### SQL DML syntax differences
+
+Be aware of these differences in SQL Data Manipulation Language (DML) syntax between Teradata SQL and Azure Synapse when migrating:
+
+- `QUALIFY`&mdash;Teradata supports the `QUALIFY` operator. For example:
+
+ ```sql
+ SELECT col1
+ FROM tab1
+ WHERE col1='XYZ'
+ QUALIFY ROW_NUMBER () OVER (PARTITION by
+ col1 ORDER BY col1) = 1;
+ ```
+
+ The equivalent Azure Synapse syntax is:
+
+ ```sql
+    SELECT * FROM (
+      SELECT col1, ROW_NUMBER() OVER (PARTITION BY col1 ORDER BY col1) AS rn
+      FROM tab1 WHERE col1 = 'XYZ'
+    ) AS ranked
+    WHERE rn = 1;
+ ```
+
+- Date arithmetic&mdash;Azure Synapse has functions such as `DATEADD` and `DATEDIFF`, which can be used on `DATE` or `DATETIME` fields. Teradata supports direct subtraction on dates, such as `SELECT DATE1 - DATE2 FROM...` (see the sketch after this list).
+
+- `GROUP BY` ordinal&mdash;Teradata allows grouping by column position (for example, `GROUP BY 1`). In Azure Synapse T-SQL, explicitly provide the column name in the `GROUP BY` clause.
+
+- `LIKE ANY`&mdash;Teradata supports `LIKE ANY` syntax such as:
+
+ ```sql
+ SELECT * FROM CUSTOMER
+ WHERE POSTCODE LIKE ANY
+ ('CV1%', 'CV2%', 'CV3%');
+ ```
+
+ The equivalent in Azure Synapse syntax is:
+
+ ```sql
+ SELECT * FROM CUSTOMER
+ WHERE
+ (POSTCODE LIKE 'CV1%') OR (POSTCODE LIKE 'CV2%') OR (POSTCODE LIKE 'CV3%');
+ ```
+
+- Depending on system settings, character comparisons in Teradata may be case insensitive by default. In Azure Synapse, character comparisons are always case sensitive.
+
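+As a small example of the date arithmetic difference (hypothetical table and column names):
+
+```sql
+-- Teradata: direct date subtraction returns the number of days
+-- SELECT order_date - ship_date FROM orders;
+
+-- Azure Synapse: equivalent results using DATEDIFF and DATEADD
+SELECT DATEDIFF(day, ship_date, order_date) AS days_to_ship,
+       DATEADD(day, 30, order_date)         AS payment_due_date
+FROM dbo.orders;
+```
+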
+### Use EXPLAIN to validate legacy SQL
+
+> [!TIP]
+> Use real queries from the existing system query logs to find potential migration issues.
+
+One way of testing legacy Teradata SQL for compatibility with Azure Synapse is to capture some representative SQL statements from the legacy system query logs, prefix those queries with [EXPLAIN](/sql/t-sql/queries/explain-transact-sql?msclkid=91233fc1cff011ec9dff597671b7ae97), and (assuming a 'like for like' migrated data model in Azure Synapse with the same table and column names) run those `EXPLAIN` statements in Azure Synapse. Any incompatible SQL will throw an error&mdash;use this information to determine the scale of the recoding task. This approach doesn't require that data is loaded into the Azure environment, only that the relevant tables and views have been created.
+
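+For example, a captured legacy query could be checked like this (assuming hypothetical table names that exist in the migrated schema):
+
+```sql
+-- Prefix the captured query with EXPLAIN and run it against the migrated
+-- (possibly empty) schema; incompatible syntax raises an error
+EXPLAIN
+SELECT c.customer_id, SUM(o.amount) AS total_amount
+FROM dbo.customer AS c
+JOIN dbo.orders AS o ON o.customer_id = c.customer_id
+GROUP BY c.customer_id;
+```
+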
+### Functions, stored procedures, triggers, and sequences
+
+> [!TIP]
+> As part of the preparation phase, assess the number and type of non-data objects being migrated.
+
+When migrating from a mature legacy data warehouse environment such as Teradata, there are often elements other than simple tables and views that need to be migrated to the new target environment. Examples of this include functions, stored procedures, triggers, and sequences.
+
+As part of the preparation phase, create an inventory of the objects that need to be migrated and define the methods for handling them. Then assign an appropriate allocation of resources in the project plan.
+
+There may be facilities in the Azure environment that replace the functionality implemented as either functions or stored procedures in the Teradata environment. In this case, it's often more efficient to use the built-in Azure facilities rather than recoding the Teradata functions.
+
+> [!TIP]
+> Third-party products and services can automate migration of non-data elements.
+
+[Microsoft partners](/azure/synapse-analytics/partner/data-integration) offer tools and services that can automate the migration.
+
+See the following sections for more information on each of these elements.
+
+#### Functions
+
+As with most database products, Teradata supports system functions and user-defined functions within its SQL implementation. When migrating to another database platform such as Azure Synapse, common system functions are available and can be migrated without change. Some system functions may have slightly different syntax, but the required changes can be automated. System functions that have no equivalent, as well as arbitrary user-defined functions, may need to be recoded using the languages available in the target environment. Azure Synapse uses the popular Transact-SQL language to implement user-defined functions.
+
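+As a minimal sketch of a recoded function (the name and logic are hypothetical), a simple scalar user-defined function in T-SQL looks like this:
+
+```sql
+-- Hypothetical scalar function recoded in T-SQL for Azure Synapse
+CREATE FUNCTION dbo.fn_full_name (@first_name VARCHAR(50), @last_name VARCHAR(50))
+RETURNS VARCHAR(101)
+AS
+BEGIN
+    RETURN @first_name + ' ' + @last_name;
+END;
+```
+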
+#### Stored procedures
+
+Most modern database products allow for procedures to be stored within the database. Teradata provides the SPL language for this purpose. A stored procedure typically contains SQL statements and some procedural logic, and may return data or a status.
+
+The dedicated SQL pools of Azure Synapse Analytics also support stored procedures using T-SQL, so if you must migrate stored procedures, recode them accordingly.
+
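+As a sketch only (the names and logic are hypothetical), a recoded procedure in a dedicated SQL pool might look like this:
+
+```sql
+-- Hypothetical stored procedure recoded in T-SQL
+CREATE PROCEDURE dbo.usp_refresh_daily_sales @sales_date DATE
+AS
+BEGIN
+    DELETE FROM dbo.daily_sales WHERE sales_date = @sales_date;
+
+    INSERT INTO dbo.daily_sales (sales_date, store_id, total_amount)
+    SELECT order_date, store_id, SUM(amount)
+    FROM dbo.orders
+    WHERE order_date = @sales_date
+    GROUP BY order_date, store_id;
+END;
+```
+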
+#### Triggers
+
+Azure Synapse doesn't support the creation of triggers, but you can implement them within Azure Data Factory.
+
+#### Sequences
+
+Azure Synapse sequences are handled in a similar way to Teradata, using [Identity to create surrogate keys](/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-identity) or [managed identity](/azure/data-factory/data-factory-service-identity?tabs=data-factory).
+
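+As a brief sketch (the table is hypothetical), an `IDENTITY` column can supply surrogate key values; note that the generated values are unique but not guaranteed to be contiguous:
+
+```sql
+-- Surrogate key generation in Azure Synapse using an IDENTITY column
+CREATE TABLE dbo.dim_customer
+(
+    customer_key  INT IDENTITY(1,1) NOT NULL,
+    customer_code VARCHAR(20)       NOT NULL,
+    customer_name VARCHAR(100)
+)
+WITH (DISTRIBUTION = ROUND_ROBIN);
+```
+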
+#### Teradata to T-SQL mapping
+
+This table shows the mapping from Teradata data types to the T-SQL data types supported by Azure Synapse SQL:
+
+| Teradata Data Type | Azure Synapse SQL Data Type |
+|-|--|
+| bigint  | bigint |
+| bool  | bit |
+| boolean  | bit |
+| byteint  | tinyint |
+| char \[(*p*)\]  | char \[(*p*)\] |
+| char varying \[(*p*)\]  | varchar \[(*p*)\] |
+| character \[(*p*)\]  | char \[(*p*)\] |
+| character varying \[(*p*)\]  | varchar \[(*p*)\] |
+| date  | date |
+| datetime  | datetime |
+| dec \[(*p*\[,*s*\])\]  | decimal \[(*p*\[,*s*\])\]  |
+| decimal \[(*p*\[,*s*\])\]  | decimal \[(*p*\[,*s*\])\] |
+| double  | float(53) |
+| double precision  | float(53) |
+| float \[(*p*)\]  | float \[(*p*)\] |
+| float4  | float(53) |
+| float8  | float(53) |
+| int  | int |
+| int1  | tinyint  |
+| int2  | smallint |
+| int4  | int  |
+| int8  | bigint  |
+| integer  | integer |
+| interval  | *Not supported* |
+| national char varying \[(*p*)\]  | nvarchar \[(*p*)\]  |
+| national character \[(*p*)\]  | nchar \[(*p*)\] |
+| national character varying \[(*p*)\]  | nvarchar \[(*p*)\] |
+| nchar \[(*p*)\]  | nchar \[(*p*)\] |
+| numeric \[(*p*\[,*s*\])\]  | numeric \[(*p*\[,*s*\])\] |
+| nvarchar \[(*p*)\]  | nvarchar \[(*p*)\] |
+| real  | real |
+| smallint  | smallint |
+| time  | time |
+| time with time zone  | datetimeoffset |
+| time without time zone  | time |
+| timespan  | *Not supported* |
+| timestamp  | datetime2 |
+| timetz  | datetimeoffset |
+| varchar \[(*p*)\]  | varchar \[(*p*)\] |
+
+## Summary
+
+Typical existing legacy Teradata installations are implemented in a way that makes migration to Azure Synapse straightforward. They use SQL for analytical queries on large data volumes, and the data is generally held in some form of dimensional data model. These factors make them good candidates for migration to Azure Synapse.
+
+To minimize the task of migrating the actual SQL code, follow these recommendations:
+
+- Initial migration of the data warehouse should be as-is to minimize risk and time taken, even if the eventual final environment will incorporate a different data model such as Data Vault.
+
+- Consider using a Teradata instance in an Azure VM as a stepping stone as part of the migration process.
+
+- Understand the differences between Teradata SQL implementation and Azure Synapse.
+
+- Use metadata and query logs from the existing Teradata implementation to assess the impact of the differences and plan an approach to mitigate.
+
+- Automate the process wherever possible to minimize errors, risk, and time for the migration.
+
+- Consider using specialist [Microsoft partners](/azure/synapse-analytics/partner/data-integration) and services to streamline the migration.
+
+## Next steps
+
+To learn more about Microsoft and third-party tools, see the next article in this series: [Tools for Teradata data warehouse migration to Azure Synapse Analytics](6-microsoft-third-party-migration-tools.md).
synapse-analytics 6 Microsoft Third Party Migration Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/6-microsoft-third-party-migration-tools.md
+
+ Title: "Tools for Teradata data warehouse migration to Azure Synapse Analytics"
+description: Learn about Microsoft and third-party data and database migration tools that can help you migrate from Teradata to Azure Synapse.
+ Last updated : 05/24/2022
+# Tools for Teradata data warehouse migration to Azure Synapse Analytics
+
+This article is part six of a seven-part series that provides guidance on how to migrate from Teradata to Azure Synapse Analytics. This article provides best practices for using Microsoft and third-party tools.
+
+## Data warehouse migration tools
+
+Migrating your existing data warehouse to Azure Synapse enables you to utilize:
+
+- A globally secure, scalable, low-cost, cloud-native, pay-as-you-use analytical database
+
+- The rich Microsoft analytical ecosystem on Azure, which consists of technologies to help you modernize your data warehouse once it's migrated, and to extend your analytical capabilities to drive new value
+
+Several tools from Microsoft and third-party partner vendors can help you migrate your existing data warehouse to Azure Synapse.
+
+They include:
+
+- Microsoft data and database migration tools
+
+- Third-party data warehouse automation tools to automate and document the migration to Azure Synapse
+
+- Third-party data warehouse migration tools to migrate schema and data to Azure Synapse
+
+- Third-party tools to minimize the impact on SQL differences between your existing data warehouse DBMS and Azure Synapse
+
+Let's look at these in more detail.
+
+## Microsoft data migration tools
+
+> [!TIP]
+> Data Factory includes tools to help migrate your data and your entire data warehouse to Azure.
+
+Microsoft offers several tools to help you migrate your existing data warehouse to Azure Synapse. They are:
+
+- Microsoft Azure Data Factory
+
+- Microsoft services for physical data transfer
+
+- Microsoft services for data ingestion
+
+### Microsoft Azure Data Factory
+
+Microsoft Azure Data Factory is a fully managed, pay-as-you-use, hybrid data integration service for highly scalable ETL and ELT processing. It uses Spark to process and analyze data in parallel and in memory to maximize throughput.
+
+> [!TIP]
+> Data Factory allows you to build scalable data integration pipelines code free.
+
+[Azure Data Factory connectors](/azure/data-factory/connector-overview?msclkid=00086e4acff211ec9263dee5c7eb6e69) connect to external data sources and databases and have templates for common data integration tasks. A visual, browser-based GUI enables non-programmers to create and run process pipelines to ingest, transform, and load data, while more experienced programmers have the option to incorporate custom code if necessary, such as Python programs.
+
+> [!TIP]
+> Data Factory enables collaborative development between business and IT professionals.
+
+Data Factory is also an orchestration tool. It's the best Microsoft tool to automate the end-to-end migration process to reduce risk and make the migration process easily repeatable. The following diagram shows a Data Factory mapping data flow.
+
+The next screenshot shows a Data Factory wrangling data flow.
+
+With a few clicks, you can develop simple or comprehensive ETL and ELT processes without coding or maintenance. These processes ingest, move, prepare, transform, and process your data. Design and manage scheduling and triggers in Azure Data Factory to build an automated data integration and loading environment. Define, manage, and schedule PolyBase bulk data load processes in Data Factory.
+
+> [!TIP]
+> Data Factory includes tools to help migrate your data and your entire data warehouse to Azure.
+
+Use Data Factory to implement and manage a hybrid environment that includes on-premises, cloud, streaming and SaaS data&mdash;for example, from applications like Salesforce&mdash;in a secure and consistent way.
+
+A new capability in Data Factory is wrangling data flows. This opens up Data Factory to business users, allowing them to visually discover, explore, and prepare data at scale without writing code. This easy-to-use Data Factory capability is like Microsoft Excel Power Query or Microsoft Power BI Dataflows, where self-service data preparation business users use a spreadsheet-style user interface, with drop-down transforms, to prepare and integrate data.
+
+Azure Data Factory is the recommended approach for implementing data integration and ETL/ELT processes for an Azure Synapse environment, especially if existing legacy processes need to be refactored.
+
+### Microsoft services for physical data transfer
+
+> [!TIP]
+> Microsoft offers a range of products and services to assist with data transfer.
+
+#### Azure ExpressRoute
+
+Azure ExpressRoute creates private connections between Azure data centers and infrastructure on your premises or in a colocation environment. ExpressRoute connections don't go over the public Internet, and they offer more reliability, faster speeds, and lower latencies than typical Internet connections. In some cases, using ExpressRoute connections to transfer data between on-premises systems and Azure can give you significant cost benefits.
+
+#### AzCopy
+
+[AzCopy](/azure/storage/common/storage-use-azcopy-v10) is a command-line utility that copies files to Azure Blob Storage via a standard internet connection. In a warehouse migration project, you can use it to upload extracted, compressed, delimited text files before loading them via PolyBase, or via the native Parquet reader if the exported files are in Parquet format. Individual files, file selections, and file directories can be uploaded.
+
+#### Azure Data Box
+
+Microsoft offers a service called Azure Data Box. This service writes the data to be migrated to a physical storage device, which is then shipped to an Azure data center and loaded into cloud storage. This service can be cost-effective for large volumes of data&mdash;for example, tens or hundreds of terabytes&mdash;or where network bandwidth isn't readily available. Azure Data Box is typically used for the one-off historical data load when migrating a large amount of data to Azure Synapse.
+
+Another service available is Data Box Gateway, a virtualized cloud storage gateway device that resides on your premises and sends your images, media, and other data to Azure. Use Data Box Gateway for one-off migration tasks or ongoing incremental data uploads.
+
+### Microsoft services for data ingestion
+
+#### COPY INTO
+
+The [COPY](/sql/t-sql/statements/copy-into-transact-sql) statement provides the most flexibility for high-throughput data ingestion into Azure Synapse Analytics. Refer to the list of capabilities that `COPY` offers for data ingestion.
+
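+As a minimal sketch (the storage account, container, and table names are hypothetical, and authentication options are omitted), loading delimited text files might look like this:
+
+```sql
+-- Load CSV files from Azure Blob Storage into an existing table
+COPY INTO dbo.orders
+FROM 'https://mystorageaccount.blob.core.windows.net/staging/orders/*.csv'
+WITH (
+    FILE_TYPE = 'CSV',
+    FIELDTERMINATOR = ',',
+    FIRSTROW = 2
+);
+```
+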
+#### PolyBase
+
+> [!TIP]
+> PolyBase can load data in parallel from Azure Blob Storage into Azure Synapse.
+
+PolyBase provides the fastest and most scalable method of loading bulk data into Azure Synapse. PolyBase leverages the MPP architecture to use parallel loading, to give the fastest throughput, and can read data from flat files in Azure Blob Storage or directly from external data sources and other relational databases via connectors.
+
+PolyBase can also directly read from files compressed with gzip&mdash;this reduces the physical volume of data moved during the load process. PolyBase supports popular data formats such as delimited text, ORC and Parquet.
+
+> [!TIP]
+> Invoke PolyBase from Azure Data Factory as part of a migration pipeline.
+
+PolyBase is tightly integrated with Azure Data Factory (described earlier in this article) to enable data load ETL/ELT processes to be rapidly developed and scheduled via a visual GUI, leading to higher productivity and fewer errors than hand-written code.
+
+PolyBase is the recommended data load method for Azure Synapse, especially for high-volume data. PolyBase loads data using the `CREATE TABLE AS` or `INSERT...SELECT` statements&mdash;CTAS achieves the highest possible throughput as it minimizes the amount of logging required. Compressed delimited text files are the most efficient input format. For maximum throughput, split very large input files into multiple smaller files and load these in parallel. For fastest loading to a staging table, define the target table as type `HEAP` and use round-robin distribution.
+
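+As a brief sketch, assuming an external table has already been defined over the exported files (the names here are hypothetical), a CTAS load into a round-robin heap staging table looks like this:
+
+```sql
+-- Load from an external table into a staging table using CTAS
+CREATE TABLE dbo.stg_orders
+WITH
+(
+    DISTRIBUTION = ROUND_ROBIN,
+    HEAP
+)
+AS
+SELECT *
+FROM dbo.ext_orders;
+```
+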
+There are some limitations in PolyBase. Rows to be loaded must be less than 1 MB in length. Fixed-width format or nested data such as JSON and XML aren't directly readable.
+
+## Microsoft partners to help you migrate your data warehouse to Azure Synapse Analytics
+
+In addition to tools that can help you with various aspects of data warehouse migration, there are several practiced [Microsoft partners](/azure/synapse-analytics/partner/data-integration) that can bring their expertise to help you move your legacy on-premises data warehouse platform to Azure Synapse.
+
+## Next steps
+
+To learn more about implementing modern data warehouses, see the next article in this series: [Beyond Teradata migration, implementing a modern data warehouse in Microsoft Azure](7-beyond-data-warehouse-migration.md).
synapse-analytics 7 Beyond Data Warehouse Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/7-beyond-data-warehouse-migration.md
+
+ Title: "Beyond Teradata migration, implementing a modern data warehouse in Microsoft Azure"
+description: Learn how a Teradata migration to Azure Synapse lets you integrate your data warehouse with the Microsoft Azure analytical ecosystem.
+ Last updated : 05/24/2022
+# Beyond Teradata migration, implementing a modern data warehouse in Microsoft Azure
+
+This article is part seven of a seven-part series that provides guidance on how to migrate from Teradata to Azure Synapse Analytics. This article provides best practices for implementing a modern data warehouse.
+
+## Beyond data warehouse migration to Azure
+
+One of the key reasons to migrate your existing data warehouse to Azure Synapse is to utilize a globally secure, scalable, low-cost, cloud-native, pay-as-you-use analytical database. Azure Synapse also lets you integrate your migrated data warehouse with the complete Microsoft Azure analytical ecosystem to take advantage of, and integrate with, other Microsoft technologies that help you modernize your migrated data warehouse. This includes integrating with technologies like:
+
+- Azure Data Lake Storage&mdash;for cost effective data ingestion, staging, cleansing and transformation to free up data warehouse capacity occupied by fast growing staging tables
+
+- Azure Data Factory&mdash;for collaborative IT and self-service data integration [with connectors](/azure/data-factory/connector-overview) to cloud and on-premises data sources and streaming data
+
+- [The Common Data Model](/common-data-model/)&mdash;to share consistent trusted data across multiple technologies, including:
+ - Azure Synapse
+ - Azure Synapse Spark
+ - Azure HDInsight
+ - Power BI
+ - SAP
+ - Adobe Customer Experience Platform
+ - Azure IoT
+ - Microsoft ISV Partners
+
+- [Microsoft's data science technologies](/azure/architecture/data-science-process/platforms-and-tools) including:
+ - Azure ML studio
+ - Azure Machine Learning Service
+ - Azure Synapse Spark (Spark as a service)
+ - Jupyter Notebooks
+ - RStudio
+ - ML.NET
+ - Visual Studio .NET for Apache Spark to enable data scientists to use Azure Synapse data to train machine learning models at scale.
+
+- [Azure HDInsight](/azure/hdinsight/)&mdash;to leverage big data analytical processing and join big data with Azure Synapse data by creating a Logical Data Warehouse using PolyBase
+
+- [Azure Event Hubs](/azure/event-hubs/event-hubs-about), [Azure Stream Analytics](/azure/stream-analytics/stream-analytics-introduction) and [Apache Kafka](/azure/databricks/spark/latest/structured-streaming/kafka)&mdash;to integrate with live streaming data from within Azure Synapse
+
+There's often acute demand to integrate with [Machine Learning](/azure/synapse-analytics/machine-learning/what-is-machine-learning) to enable custom-built, trained machine learning models for use in Azure Synapse. This enables in-database analytics to run at scale in batch, on an event-driven basis, and on demand. The ability to exploit in-database analytics in Azure Synapse from multiple BI tools and applications also guarantees that all of them get the same predictions and recommendations.
+
+In addition, there's an opportunity to integrate Azure Synapse with Microsoft partner tools on Azure to shorten time to value.
+
+Let's look at these in more detail to understand how you can take advantage of the technologies in Microsoft's analytical ecosystem to modernize your data warehouse once you've migrated to Azure Synapse.
+
+## Offloading data staging and ETL processing to Azure Data Lake and Azure Data Factory
+
+Enterprises today have a key problem resulting from digital transformation. So much new data is being generated and captured for analysis, and much of this data is finding its way into data warehouses. A good example is transaction data created by opening online transaction processing (OLTP) systems to self-service access from mobile devices. These OLTP systems are the main sources of data to a data warehouse, and with customers now driving the transaction rate rather than employees, data in data warehouse staging tables has been growing rapidly in volume.
+
+This, along with other new data like Internet of Things (IoT) data coming into the enterprise, means that companies need to find a way to deal with unprecedented data growth and scale data integration ETL processing beyond current levels. One way to do this is to offload ingestion, data cleansing, transformation, and integration to a data lake and process it at scale there, as part of a data warehouse modernization program.
+
+> [!TIP]
+> Offload ELT processing to Azure Data Lake and still run at scale as your data volumes grow.
+
+Once you've migrated your data warehouse to Azure Synapse, Microsoft provides the ability to modernize your ETL processing by ingesting data into, and staging data in, Azure Data Lake Storage. You can then clean, transform and integrate your data at scale using Data Factory before loading it into Azure Synapse in parallel using PolyBase.
+
+### Microsoft Azure Data Factory
+
+> [!TIP]
+> Data Factory allows you to build scalable data integration pipelines code free.
+
+[Microsoft Azure Data Factory](https://azure.microsoft.com/services/data-factory/) is a pay-as-you-use, hybrid data integration service for highly scalable ETL and ELT processing. Data Factory provides a simple web-based user interface to build data integration pipelines, in a code-free manner that can:
+
+- Easily acquire data at scale. Pay only for what you use, and connect to on-premises, cloud, and SaaS-based data sources.
+
+- Ingest, move, clean, transform, integrate, and analyze cloud and on-premises data at scale, and take automatic action, such as a recommendation or an alert.
+
+- Seamlessly author, monitor and manage pipelines that span data stores both on-premises and in the cloud.
+
+- Enable pay-as-you-go scale-out in alignment with customer growth.
+
+> [!TIP]
+> Data Factory can connect to on-premises, cloud, and SaaS data.
+
+All of this can be done without writing any code. However, adding custom code to Data Factory pipelines is also supported. The next screenshot shows an example Data Factory pipeline.
+
+> [!TIP]
+> Data Factory pipelines control the integration and analysis of data. Data Factory is enterprise-class data integration software aimed at IT professionals, with a data wrangling facility for business users.
+
+Implement Data Factory pipeline development from any of several places including:
+
+- Microsoft Azure portal
+
+- Microsoft Azure PowerShell
+
+- Programmatically from .NET and Python using a multi-language SDK
+
+- Azure Resource Manager (ARM) Templates
+
+- REST APIs
+
+Developers and data scientists who prefer to write code can easily author Data Factory pipelines in Java, Python, and .NET using the software development kits (SDKs) available for those programming languages. Data Factory pipelines can also be hybrid as they can connect, ingest, clean, transform and analyze data in on-premises data centers, Microsoft Azure, other clouds, and SaaS offerings.
+
+Once you develop Data Factory pipelines to integrate and analyze data, deploy those pipelines globally and schedule them to run in batch, invoke them on demand as a service, or run them in real time on an event-driven basis. A Data Factory pipeline can also run on one or more execution engines and monitor pipeline execution to ensure performance and track errors.
+
+#### Use cases
+
+> [!TIP]
+> Build data warehouses on Microsoft Azure.
+
+> [!TIP]
+> Build training data sets in data science to develop machine learning models.
+
+Data Factory can support multiple use cases, including:
+
+- Preparing, integrating, and enriching data from cloud and on-premises data sources to populate your migrated data warehouse and data marts on Microsoft Azure Synapse.
+
+- Preparing, integrating, and enriching data from cloud and on-premises data sources to produce training data for use in machine learning model development and in retraining analytical models.
+
+- Orchestrating data preparation and analytics to create predictive and prescriptive analytical pipelines for processing and analyzing data in batch, such as sentiment analytics, and either acting on the results of the analysis or populating your data warehouse with the results.
+
+- Preparing, integrating, and enriching data for data-driven business applications running on the Azure cloud on top of operational data stores like Azure Cosmos DB.
+
+#### Data sources
+
+Data Factory lets you connect with [connectors](/azure/data-factory/connector-overview) from both cloud and on-premises data sources. Agent software, known as a Self-Hosted Integration Runtime, securely accesses on-premises data sources and supports secure, scalable data transfer.
+
+#### Transforming data using Data Factory
+
+> [!TIP]
+> Professional ETL developers can use Data Factory mapping data flows to clean, transform and integrate data without the need to write code.
+
+Within a Data Factory pipeline, ingest, clean, transform, integrate, and, if necessary, analyze any type of data from these sources. This includes structured, semi-structured&mdash;such as JSON or Avro&mdash;and unstructured data.
+
+Professional ETL developers can use Data Factory mapping data flows to filter, split, join (many types), lookup, pivot, unpivot, sort, union, and aggregate data without writing any code. In addition, Data Factory supports surrogate keys, multiple write processing options such as insert, upsert, update, table recreation, and table truncation, and several types of target data stores&mdash;also known as sinks. ETL developers can also create aggregations, including time series aggregations that require a window to be placed on data columns.
+
+> [!TIP]
+> Data Factory supports the ability to automatically detect and manage schema changes in inbound data, such as in streaming data.
+
+Run mapping data flows that transform data as activities in a Data Factory pipeline. Include multiple mapping data flows in a single pipeline, if necessary. Break up challenging data transformation and integration tasks into smaller mapping data flows that can be combined to handle the complexity, with custom code added if necessary. In addition to this functionality, Data Factory mapping data flows include these abilities:
+
+- Define expressions to clean and transform data, compute aggregations, and enrich data. For example, these expressions can perform feature engineering on a date field to break it into multiple fields to create training data during machine learning model development. Construct expressions from a rich set of functions that include mathematical, temporal, split, merge, string concatenation, conditions, pattern match, replace, and many other functions.
+
+- Automatically handle schema drift so that data transformation pipelines can avoid being impacted by schema changes in data sources. This is especially important for streaming IoT data, where schema changes can happen without notice when devices are upgraded or when readings are missed by gateway devices collecting IoT data.
+
+- Partition data to enable transformations to run in parallel at scale.
+
+- Inspect data to view the metadata of a stream you're transforming.
+
+> [!TIP]
+> Data Factory can also partition data to enable ETL processing to run at scale.
+
+The next screenshot shows an example Data Factory mapping data flow.
+
+Data engineers can profile data quality and view the results of individual data transforms by switching on a debug capability during development.
+
+> [!TIP]
+> Data Factory pipelines are also extensible since Data Factory allows you to write your own code and run it as part of a pipeline.
+
+Extend Data Factory transformational and analytical functionality by adding a linked service containing your own code into a pipeline. For example, an Azure Synapse Spark Pool notebook containing Python code could use a trained model to score the data integrated by a mapping data flow.
+
+Store integrated data and any results from analytics included in a Data Factory pipeline in one or more data stores such as Azure Data Lake storage, Azure Synapse, or Azure HDInsight (Hive Tables). Invoke other activities to act on insights produced by a Data Factory analytical pipeline.
+
+#### Utilizing Spark to scale data integration
+
+Under the covers, Data Factory utilizes Azure Synapse Spark Pools&mdash;Microsoft's Spark-as-a-service offering&mdash;at run time to clean and integrate data on the Microsoft Azure cloud. This enables it to clean, integrate, and analyze high-volume and very high-velocity data (such as click stream data) at scale. Microsoft intends to execute Data Factory pipelines on other Spark distributions. In addition to executing ETL jobs on Spark, Data Factory can also invoke Pig scripts and Hive queries to access and transform data stored in Azure HDInsight.
+
+#### Linking self-service data prep and Data Factory ETL processing using wrangling data flows
+
+> [!TIP]
+> Data Factory support for wrangling data flows in addition to mapping data flows means that business and IT can work together on a common platform to integrate data.
+
+Another new capability in Data Factory is wrangling data flows. This lets business users (also known as citizen data integrators and data engineers) make use of the platform to visually discover, explore and prepare data at scale without writing code. This easy-to-use Data Factory capability is similar to Microsoft Excel Power Query or Microsoft Power BI Dataflows, where self-service data preparation business users use a spreadsheet-style UI with drop-down transforms to prepare and integrate data. The following screenshot shows an example Data Factory wrangling data flow.
+
+This differs from Excel and Power BI because Data Factory wrangling data flows use Power Query Online to generate M code and translate it into a massively parallel in-memory Spark job for cloud-scale execution. The combination of mapping data flows and wrangling data flows in Data Factory lets professional IT ETL developers and business users collaborate to prepare, integrate, and analyze data for a common business purpose. The preceding Data Factory mapping data flow diagram shows how both Data Factory and Azure Synapse Spark Pool notebooks can be combined in the same Data Factory pipeline. This allows IT and business to be aware of what each has created. Mapping data flows and wrangling data flows can then be available for reuse to maximize productivity and consistency and minimize reinvention.
+
+#### Linking Data and Analytics in Analytical Pipelines
+
+In addition to cleaning and transforming data, Data Factory can combine data integration and analytics in the same pipeline. Use Data Factory to create both data integration and analytical pipelines&mdash;the latter being an extension of the former. Drop an analytical model into a pipeline so that clean, integrated data can be stored to provide predictions or recommendations. Act on this information immediately or store it in your data warehouse to provide you with new insights and recommendations that can be viewed in BI tools.
+
+Models developed code-free with Azure ML Studio, Azure Machine Learning Service SDK using Azure Synapse Spark Pool Notebooks, or using R in RStudio, can be invoked as a service from within a Data Factory pipeline to batch score your data. Analysis happens at scale by executing Spark machine learning pipelines on Azure Synapse Spark Pool Notebooks.
+
+Store integrated data and any results from analytics included in a Data Factory pipeline in one or more data stores, such as Azure Data Lake storage, Azure Synapse, or Azure HDInsight (Hive Tables). Invoke other activities to act on insights produced by a Data Factory analytical pipeline.
+
+## A lake database to share consistent trusted data
+
+> [!TIP]
+> Microsoft has created a lake database to describe core data entities to be shared across the enterprise.
+
+A key objective in any data integration set-up is the ability to integrate data once and reuse it everywhere, not just in a data warehouse&mdash;for example, in data science. Reuse avoids reinvention and ensures consistent, commonly understood data that everyone can trust.
+
+> [!TIP]
+> Azure Data Lake is shared storage that underpins Microsoft Azure Synapse, Azure ML, Azure Synapse Spark, and Azure HDInsight.
+
+To achieve this goal, establish a set of common data names and definitions describing logical data entities that need to be shared across the enterprise&mdash;such as customer, account, product, supplier, orders, payments, returns, and so forth. Once this is done, IT and business professionals can use data integration software to create these common data assets and store them to maximize their reuse to drive consistency everywhere.
+
+> [!TIP]
+> Integrating data to create lake database logical entities in shared storage enables maximum reuse of common data assets.
+
+Microsoft has done this by creating a [lake database](/azure/synapse-analytics/database-designer/concepts-lake-database). The lake database is a common language for business entities that represents commonly used concepts and activities across a business. Azure Synapse Analytics provides industry-specific database templates to help standardize data in the lake. [Lake database templates](/azure/synapse-analytics/database-designer/concepts-database-templates) provide schemas for predefined business areas, enabling data to be loaded into a lake database in a structured way. The power comes when data integration software is used to create lake database common data assets. This results in self-describing trusted data that can be consumed by applications and analytical systems. Create a lake database in Azure Data Lake storage using Azure Data Factory, and consume it with Power BI, Azure Synapse Spark, Azure Synapse, and Azure ML. The following diagram shows a lake database used in Azure Synapse Analytics.
+
+## Integration with Microsoft data science technologies on Azure
+
+Another key requirement in modernizing your migrated data warehouse is to integrate it with Microsoft and third-party data science technologies on Azure to produce insights for competitive advantage. Let's look at what Microsoft offers in terms of machine learning and data science technologies and see how these can be used with Azure Synapse in a modern data warehouse environment.
+
+### Microsoft technologies for data science on Azure
+
+> [!TIP]
+> Develop machine learning models using a no/low code approach or from a range of programming languages like Python, R and .NET.
+
+Microsoft offers a range of technologies to build predictive analytical models using machine learning, analyze unstructured data using deep learning, and perform other kinds of advanced analytics. This includes:
+
+- Azure ML Studio
+
+- Azure Machine Learning Service
+
+- Azure Synapse Spark Pool Notebooks
+
+- ML.NET (API, CLI or .NET Model Builder for Visual Studio)
+
+- Visual Studio .NET for Apache Spark
+
+Data scientists can use RStudio (R) and Jupyter Notebooks (Python) to develop analytical models, or they can use other frameworks such as Keras or TensorFlow.
+
+#### Azure ML Studio
+
+Azure ML Studio is a fully managed cloud service that lets you easily build, deploy, and share predictive analytics via a drag-and-drop web-based user interface. The next screenshot shows an Azure Machine Learning studio user interface.
+
+#### Azure Machine Learning Service
+
+> [!TIP]
+> Azure Machine Learning Service provides an SDK for developing machine learning models using several open-source frameworks.
+
+Azure Machine Learning Service provides a software development kit (SDK) and services for Python to quickly prepare data, as well as train and deploy machine learning models. Use Azure Machine Learning Service from Azure notebooks (a Jupyter notebook service) and utilize open-source frameworks, such as PyTorch, TensorFlow, Spark MLlib (Azure Synapse Spark Pool Notebooks), or scikit-learn. Azure Machine Learning Service provides an AutoML capability that automatically identifies the most accurate algorithms to expedite model development. You can also use it to build machine learning pipelines that manage end-to-end workflow, programmatically scale on the cloud, and deploy models both to the cloud and the edge. Azure Machine Learning Service uses logical containers called workspaces, which can be either created manually from the Azure portal or created programmatically. These workspaces keep compute targets, experiments, data stores, trained machine learning models, docker images, and deployed services all in one place to enable teams to work together. Use Azure Machine Learning Service from Visual Studio with a Visual Studio for AI extension.
+
+> [!TIP]
+> Organize and manage related data stores, experiments, trained models, docker images and deployed services in workspaces.
+
+#### Azure Synapse Spark Pool Notebooks
+
+> [!TIP]
+> Azure Synapse Spark is Microsoft's dynamically scalable Spark-as-a-service offering, providing scalable execution of data preparation, model development, and deployed model execution.
+
+[Azure Synapse Spark Pool Notebooks](/azure/synapse-analytics/spark/apache-spark-development-using-notebooks?msclkid=cbe4b8ebcff511eca068920ea4bf16b9) is an Apache Spark service optimized to run on Azure which:
+
+- Allows data engineers to build and execute scalable data preparation jobs using Azure Data Factory
+
+- Allows data scientists to build and execute machine learning models at scale using notebooks written in languages such as Scala, R, Python, Java, and SQL; and to visualize results
+
+> [!TIP]
+> Azure Synapse Spark can access data in a range of Microsoft analytical ecosystem data stores on Azure.
+
+Jobs running in Azure Synapse Spark Pool Notebook can retrieve, process, and analyze data at scale from Azure Blob Storage, Azure Data Lake Storage, Azure Synapse, Azure HDInsight, and streaming data services such as Kafka.
+
+Autoscaling and auto-termination are also supported to reduce total cost of ownership (TCO). Data scientists can use the MLflow open-source framework to manage the machine learning lifecycle.
+
+#### ML.NET
+
+> [!TIP]
+> Microsoft has extended its machine learning capability to .NET developers.
+
+ML.NET is an open-source and cross-platform machine learning framework (Windows, Linux, macOS), created by Microsoft for .NET developers so that they can use existing tools&mdash;like .NET Model Builder for Visual Studio&mdash;to develop custom machine learning models and integrate them into .NET applications.
+
+#### Visual Studio .NET for Apache Spark
+
+Visual Studio .NET for Apache® Spark™ aims to make Spark accessible to .NET developers across all Spark APIs. It takes Spark support beyond R, Scala, Python, and Java to .NET. While initially only available on Apache Spark on HDInsight, Microsoft intends to make this available on Azure Synapse Spark Pool Notebook.
+
+### Utilizing Azure Analytics with your data warehouse
+
+> [!TIP]
+> Train, test, evaluate, and execute machine learning models at scale on Azure Synapse Spark Pool Notebook using data in your Azure Synapse.
+
+Combine machine learning models built using these tools with Azure Synapse by:
+
+- Using machine learning models in batch mode or in real time to produce new insights, and add them to what you already know in Azure Synapse.
+
+- Using the data in Azure Synapse to develop and train new predictive models for deployment elsewhere, such as in other applications.
+
+- Deploying machine learning models&mdash;including those trained elsewhere&mdash;in Azure Synapse to analyze data in the data warehouse and drive new business value.
+
+> [!TIP]
+> Produce new insights using machine learning on Azure in batch or in real-time and add to what you know in your data warehouse.
+
+In terms of machine learning model development, data scientists can use RStudio, Jupyter notebooks, and Azure Synapse Spark Pool notebooks together with Microsoft Azure Machine Learning Service to develop machine learning models that run at scale on Azure Synapse Spark Pool Notebooks using data in Azure Synapse. For example, they could create an unsupervised model to segment customers for use in driving different marketing campaigns. Use supervised machine learning to train a model to predict a specific outcome, such as predicting a customer's propensity to churn, or recommending the next best offer for a customer to try to increase their value. The next diagram shows how Azure Synapse Analytics can be leveraged for Machine Learning.
+
+In addition, you can ingest big data&mdash;such as social network data or review website data&mdash;into Azure Data Lake, then prepare and analyze it at scale on Azure Synapse Spark Pool Notebook, using natural language processing to score sentiment about your products or your brand. Add these scores to your data warehouse to understand the impact of&mdash;for example&mdash;negative sentiment on product sales, and to leverage big data analytics to add to what you already know in your data warehouse.
+
+## Integrating live streaming data into Azure Synapse Analytics
+
+When analyzing data in a modern data warehouse, you must be able to analyze streaming data in real time and join it with historical data in your data warehouse. An example of this would be combining IoT data with product or asset data.
+
+> [!TIP]
+> Integrate your data warehouse with streaming data from IoT devices or clickstream.
+
+Once you've successfully migrated your data warehouse to Azure Synapse, you can introduce this capability as part of a data warehouse modernization exercise. Do this by taking advantage of additional functionality in Azure Synapse.
+
+> [!TIP]
+> Ingest streaming data into Azure Data Lake Storage from Azure Event Hubs or Kafka, and access it from Azure Synapse using PolyBase external tables.
+
+To do this, ingest streaming data via Azure Event Hubs or other technologies, such as Kafka, using Azure Data Factory (or an existing ETL tool if it supports the streaming data sources) and land it in Azure Data Lake Storage (ADLS). Next, create an external table in Azure Synapse using PolyBase and point it at the data being streamed into Azure Data Lake. Your migrated data warehouse will now contain new tables that provide access to real-time streaming data. Query this external table as if the data was in the data warehouse via standard T-SQL from any BI tool that has access to Azure Synapse. You can also join this data to other tables containing historical data, and create views that join live streaming data to historical data to make it easier for business users to access. In the following diagram, a real-time data warehouse on Azure Synapse Analytics is integrated with streaming data in Azure Data Lake.
++
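+
+As a minimal, hedged sketch of this pattern (the storage account, folder path, table, and column names below are hypothetical, and a database scoped credential may also be required for your storage account), the external table and view in a dedicated SQL pool might look like:
+
+```sql
+-- External data source pointing at the lake folder where the streamed files land.
+CREATE EXTERNAL DATA SOURCE StreamingLake
+WITH (
+    TYPE = HADOOP,
+    LOCATION = 'abfss://streaming@mydatalake.dfs.core.windows.net'
+);
+
+-- File format describing the landed files (CSV with a header row in this sketch).
+CREATE EXTERNAL FILE FORMAT CsvFormat
+WITH (
+    FORMAT_TYPE = DELIMITEDTEXT,
+    FORMAT_OPTIONS (FIELD_TERMINATOR = ',', FIRST_ROW = 2)
+);
+
+-- External table over the streamed IoT readings.
+CREATE EXTERNAL TABLE dbo.IotReadings_ext (
+    DeviceId    varchar(50),
+    EventTime   datetime2,
+    Temperature float
+)
+WITH (
+    LOCATION = '/iot/readings/',
+    DATA_SOURCE = StreamingLake,
+    FILE_FORMAT = CsvFormat
+);
+GO  -- CREATE VIEW must run in its own batch.
+
+-- View that joins live streamed readings to historical device data in the warehouse.
+CREATE VIEW dbo.DeviceReadingsWithHistory
+AS
+SELECT r.DeviceId, r.EventTime, r.Temperature, d.DeviceModel, d.InstallDate
+FROM dbo.IotReadings_ext AS r
+JOIN dbo.DimDevice AS d ON d.DeviceId = r.DeviceId;
+```
+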
+## Creating a logical data warehouse using PolyBase
+
+> [!TIP]
+> PolyBase simplifies access to multiple underlying analytical data stores on Azure for business users.
+
+PolyBase offers the capability to create a logical data warehouse to simplify user access to multiple analytical data stores.
+
+This is attractive because many companies have adopted 'workload optimized' analytical data stores over the last several years in addition to their data warehouses. Examples of these platforms on Azure include:
+
+- Azure Data Lake Storage with Azure Synapse Spark Pool Notebook (Spark-as-a-service), for big data analytics
+
+- Azure HDInsight (Hadoop as-a-service), also for big data analytics
+
+- NoSQL Graph databases for graph analysis, which could be done in Azure Cosmos DB
+
+- Azure Event Hubs and Azure Stream Analytics, for real-time analysis of data in motion
+
+You may have non-Microsoft equivalents of some of these. You may also have a master data management (MDM) system that needs to be accessed for consistent trusted data on customers, suppliers, products, assets, and more.
+
+These additional analytical platforms have emerged because of the explosion of new data sources&mdash;both inside and outside the enterprise&mdash;that business users want to capture and analyze. Examples include:
+
+- Machine generated data, such as IoT sensor data and clickstream data.
+
+- Human generated data, such as social network data, review website data, customer in-bound email, images, and video.
+
+- Other external data, such as open government data and weather data.
+
+This data is over and above the structured transaction data and master data sources that typically feed data warehouses. These new data sources include semi-structured data (like JSON, XML, or Avro) or unstructured data (like text, voice, image, or video), which is more complex to process and analyze. This data could be very high volume, high velocity, or both.
+
+As a result, the need for new kinds of more complex analysis has emerged, such as natural language processing, graph analysis, deep learning, streaming analytics, or complex analysis of large volumes of structured data. All of this is typically not happening in a data warehouse, so it's not surprising to see different analytical platforms for different types of analytical workloads, as shown in this diagram.
++
+Since these platforms are producing new insights, it's normal to see a requirement to combine these insights with what you already know in Azure Synapse. That's what PolyBase makes possible.
+
+> [!TIP]
+> The ability to make data in multiple analytical data stores look like it's all in one system and join it to Azure Synapse is known as a logical data warehouse architecture.
+
+By leveraging PolyBase data virtualization inside Azure Synapse, you can implement a logical data warehouse. Join data in Azure Synapse to data in other Azure and on-premises analytical data stores&mdash;like Azure HDInsight or Cosmos DB&mdash;or to streaming data flowing into Azure Data Lake Storage from Azure Stream Analytics and Event Hubs. Users access external tables in Azure Synapse, unaware that the data they're accessing is stored in multiple underlying analytical systems. The next diagram shows the complex data warehouse structure accessed through comparatively simpler but still powerful user interface methods.
++
+The previous diagram shows how other technologies of the Microsoft analytical ecosystem can be combined with the capability of Azure Synapse logical data warehouse architecture. For example, data can be ingested into Azure Data Lake Storage (ADLS) and curated using Azure Data Factory to create trusted data products that represent Microsoft [lake database](/azure/synapse-analytics/database-designer/concepts-lake-database) logical data entities. This trusted, commonly understood data can then be consumed and reused in different analytical environments such as Azure Synapse, Azure Synapse Spark Pool Notebooks, or Azure Cosmos DB. All insights produced in these environments are accessible via a logical data warehouse data virtualization layer made possible by PolyBase.
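+
+To make the idea concrete, here is a minimal, hedged sketch of a logical data warehouse view. The table and column names are hypothetical, and the external tables are assumed to have been created over the other stores with PolyBase, as shown earlier:
+
+```sql
+-- Logical data warehouse view: combine a native warehouse table with
+-- external tables that surface data produced in other analytical stores.
+CREATE VIEW dbo.Customer360
+AS
+SELECT c.CustomerId,
+       c.CustomerName,
+       s.SentimentScore,     -- scored on Spark from social data landed in ADLS
+       w.LifetimeValue       -- produced by another analytical workload
+FROM dbo.DimCustomer AS c
+LEFT JOIN dbo.SocialSentiment_ext AS s ON s.CustomerId = c.CustomerId
+LEFT JOIN dbo.CustomerScores_ext  AS w ON w.CustomerId = c.CustomerId;
+```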
+
+> [!TIP]
+> A logical data warehouse architecture simplifies business user access to data and adds new value to what you already know in your data warehouse.
+
+## Conclusions
+
+> [!TIP]
+> Migrating your data warehouse to Azure Synapse lets you make use of a rich Microsoft analytical ecosystem running on Azure.
+
+Once you migrate your data warehouse to Azure Synapse, you can leverage other technologies in the Microsoft analytical ecosystem. You can not only modernize your data warehouse, but also combine insights produced in other Azure analytical data stores into an integrated analytical architecture.
+
+Broaden your ETL processing to ingest data of any type into Azure Data Lake Storage. Prepare and integrate it at scale using Azure Data Factory to produce trusted, commonly understood data assets that can be consumed by your data warehouse and accessed by data scientists and other applications. Build real-time and batch-oriented analytical pipelines, and create machine learning models to run in batch, in real time on streaming data, and on-demand as a service.
+
+Leverage PolyBase and `COPY INTO` to go beyond your data warehouse. Simplify access to insights from multiple underlying analytical platforms on Azure by creating holistic integrated views in a logical data warehouse. Easily access streaming, big data, and traditional data warehouse insights from BI tools and applications to drive new value in your business.
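+
+As a hedged illustration of the `COPY INTO` path (the table name, storage URL, and file layout are hypothetical), loading curated Parquet files from the lake into a warehouse table might look like:
+
+```sql
+-- Load curated Parquet files from Azure Data Lake Storage into a dedicated SQL pool table.
+COPY INTO dbo.FactSales
+FROM 'https://mydatalake.dfs.core.windows.net/curated/sales/*.parquet'
+WITH (
+    FILE_TYPE = 'PARQUET',
+    CREDENTIAL = (IDENTITY = 'Managed Identity')
+);
+```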
+
+## Next steps
+
+To learn more about migrating to a dedicated SQL pool, see [Migrate a data warehouse to a dedicated SQL pool in Azure Synapse Analytics](../migrate-to-synapse-analytics-guide.md).
synapse-analytics How To Set Up Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/how-to-set-up-access-control.md
Title: How to set up access control for your Azure Synapse workspace description: This article will teach you how to control access to an Azure Synapse workspace using Azure roles, Synapse roles, SQL permissions, and Git permissions. -+ Last updated 3/07/2022-+
synapse-analytics Apache Spark Azure Create Spark Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-azure-create-spark-configuration.md
+
+ Title: Manage Apache Spark configuration
+description: Learn how to create an Apache Spark configuration in Synapse Studio.
+++++++ Last updated : 04/21/2022++
+# Manage Apache Spark configuration
+
+In this tutorial, you will learn how to create an Apache Spark configuration in Synapse Studio. Apache Spark configurations can be managed in a standardized manner, and when you create a notebook or an Apache Spark job definition, you can select the Apache Spark configuration that you want to use with your Apache Spark pool. When you select a configuration, its details are displayed.
+
+## Create an Apache Spark Configuration
+
+You can create custom configurations from different entry points, such as the Apache Spark configurations page or the Apache Spark configuration page of an existing Spark pool.
+
+### Create custom configurations in Apache Spark configurations
+
+Follow the steps below to create an Apache Spark Configuration in Synapse Studio.
+
+ 1. Select **Manage** > **Apache Spark configurations**.
+ 2. Click the **New** button to create a new Apache Spark configuration, or click **Import** to import a local .json file into your workspace.
+ 3. The **New Apache Spark configuration** page opens after you click the **New** button.
+ 4. For **Name**, enter a valid name of your choice.
+ 5. For **Description**, optionally enter a description.
+ 6. For **Annotations**, add annotations by clicking the **New** button, and delete existing annotations by selecting them and clicking the **Delete** button.
+ 7. For **Configuration properties**, customize the configuration by clicking the **Add** button to add properties. If you don't add a property, Azure Synapse will use the default value when applicable.
+
+ ![Screenshot that create spark configuration.](./media/apache-spark-azure-log-analytics/create-spark-configuration.png)
+
+ 8. Click the **Continue** button.
+ 9. Click the **Create** button when validation succeeds.
+ 10. Click **Publish all** to publish the configuration.
++
+> [!NOTE]
+>
+> The **Upload Apache Spark configuration** feature has been removed, but Synapse Studio keeps your previously uploaded configurations.
+
+### Create an Apache Spark Configuration in an existing Apache Spark pool
+
+Follow the steps below to create an Apache Spark configuration in an existing Apache Spark pool.
+
+ 1. Select an existing Apache Spark pool, and click the action "..." button.
+ 2. Select **Apache Spark configuration** in the content list.
+
+ ![Screenshot that apache spark configuration.](./media/apache-spark-azure-create-spark-configuration/create-spark-configuration-by-right-click-on-spark-pool.png)
+
+ 3. For Apache Spark configuration, you can select an existing configuration from the drop-down list, or click **+New** to create a new configuration.
+
+ * If you click **+New**, the Apache Spark Configuration page opens, and you can create a new configuration by following the steps in [Create custom configurations in Apache Spark configurations](#create-custom-configurations-in-apache-spark-configurations).
+ * If you select an existing configuration, its details are displayed at the bottom of the page. You can also click the **Edit** button to edit the existing configuration.
+
+ ![Screenshot that edit spark configuration.](./media/apache-spark-azure-create-spark-configuration/edit-spark-config.png)
+
+ 4. Click **View Configurations** to open the **Select a Configuration** page. All configurations will be displayed on this page. You can select a configuration that you want to use on this Apache Spark pool.
+
+ ![Screenshot that select a configuration.](./media/apache-spark-azure-create-spark-configuration/select-a-configuration.png)
+
+ 5. Click the **Apply** button to save your changes.
++
+### Create an Apache Spark Configuration in the Notebook's configure session
+
+If you need to use a custom Apache Spark Configuration when creating a Notebook, you can create and configure it in the **configure session** by following the steps below.
+
+ 1. Create a new notebook or open an existing one.
+ 2. Open the **Properties** of this notebook.
+ 3. Click on **Configure session** to open the Configure session page.
+ 4. Scroll down the Configure session page. For Apache Spark configuration, expand the drop-down menu; you can click the **New** button to [create a new configuration](#create-custom-configurations-in-apache-spark-configurations), or select an existing configuration. If you select an existing configuration, click the **Edit** icon to go to the Edit Apache Spark configuration page and edit the configuration.
+ 5. Click **View Configurations** to open the **Select a Configuration** page. All configurations will be displayed on this page. You can select a configuration that you want to use.
+
+ ![Screenshot that create configuration in configure session.](./media/apache-spark-azure-create-spark-configuration/create-spark-config-in-configure-session.png)
+
+### Create an Apache Spark Configuration in Apache Spark job definitions
+
+When you create an Apache Spark job definition, you need to use an Apache Spark configuration, which can be created by following the steps below:
+
+ 1. Create a new Apache Spark job definition or open an existing one.
+ 2. For **Apache Spark configuration**, you can click the **New** button to [create a new configuration](#create-custom-configurations-in-apache-spark-configurations), or select an existing configuration in the drop-down menu. If you select an existing configuration, click the **Edit** icon to go to the Edit Apache Spark configuration page to edit the configuration.
+ 3. Click **View Configurations** to open the **Select a Configuration** page. All configurations will be displayed on this page. You can select a configuration that you want to use.
+
+ ![Screenshot that create configuration in spark job definitions.](./media/apache-spark-azure-create-spark-configuration/create-spark-config-in-spark-job-definition.png)
++
+> [!NOTE]
+>
+> If you don't customize the Apache Spark configuration in the notebook or Apache Spark job definition, the default configuration will be used when running the job.
++
+## Next steps
+
+ - [Use serverless Apache Spark pool in Synapse Studio](../quickstart-create-apache-spark-pool-studio.md).
+ - [Run a Spark application in notebook](./apache-spark-development-using-notebooks.md).
+ - [Create an Apache Spark job definition in Synapse Studio](./apache-spark-job-definitions.md).
+ - [Collect Apache Spark applications logs and metrics with Azure Storage account](./azure-synapse-diagnostic-emitters-azure-storage.md).
+ - [Collect Apache Spark applications logs and metrics with Azure Event Hubs](./azure-synapse-diagnostic-emitters-azure-eventhub.md).
synapse-analytics Apache Spark Azure Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-azure-log-analytics.md
Follow these steps to configure the necessary information in Synapse Studio.
### Step 1: Create a Log Analytics workspace Consult one of the following resources to create this workspace:-- [Create a workspace in the Azure portal](../../azure-monitor/logs/quick-create-workspace.md)-- [Create a workspace with Azure CLI](../../azure-monitor/logs/resource-manager-workspace.md)-- [Create and configure a workspace in Azure Monitor by using PowerShell](../../azure-monitor/logs/powershell-workspace-configuration.md)
+- [Create a workspace in the Azure portal.](../../azure-monitor/logs/quick-create-workspace.md)
+- [Create a workspace with Azure CLI.](../../azure-monitor/logs/resource-manager-workspace.md)
+- [Create and configure a workspace in Azure Monitor by using PowerShell.](../../azure-monitor/logs/powershell-workspace-configuration.md)
### Step 2: Prepare an Apache Spark configuration file
spark.synapse.logAnalytics.keyVault.linkedServiceName <LINKED_SERVICE_NAME>
[uri_suffix]: ../../azure-monitor/logs/data-collector-api.md#request-uri
-### Step 3: Upload your Apache Spark configuration to an Apache Spark pool
+### Step 3: Upload your Apache Spark configuration to an Apache Spark pool
+
+> [!NOTE]
+>
+> This step will be replaced by step 4.
+ You can upload the configuration file to your Azure Synapse Analytics Apache Spark pool. In Synapse Studio: 1. Select **Manage** > **Apache Spark pools**.
You can upload the configuration file to your Azure Synapse Analytics Apache Spa
> > All the Apache Spark applications submitted to the Apache Spark pool will use the configuration setting to push the Apache Spark application metrics and logs to your specified workspace. +
+### Step 4: Create an Apache Spark Configuration
+
+You can create an Apache Spark configuration in your workspace, and when you create a notebook or an Apache Spark job definition, you can select the Apache Spark configuration that you want to use with your Apache Spark pool. When you select a configuration, its details are displayed.
+
+ 1. Select **Manage** > **Apache Spark configurations**.
+ 2. Click the **New** button to create a new Apache Spark configuration, or click **Import** to import a local .json file into your workspace.
+ 3. The **New Apache Spark configuration** page opens after you click the **New** button.
+ 4. For **Name**, enter a valid name of your choice.
+ 5. For **Description**, optionally enter a description.
+ 6. For **Annotations**, add annotations by clicking the **New** button, and delete existing annotations by selecting them and clicking the **Delete** button.
+ 7. For **Configuration properties**, customize the configuration by clicking the **Add** button to add properties. If you don't add a property, Azure Synapse will use the default value when applicable.
+
+ ![Screenshot that create spark configuration.](./media/apache-spark-azure-log-analytics/create-spark-configuration.png)
+ ## Submit an Apache Spark application and view the logs and metrics Here's how:
Users can query to evaluate metrics and logs at a set frequency, and fire an ale
After the Synapse workspace is created with [data exfiltration protection](../security/workspace-data-exfiltration-protection.md) enabled.
-When you want to enabled this feature, you need to create managed private endpoint connection requests to [Azure Monitor private link scopes (AMPLS)](../../azure-monitor/logs/private-link-security.md) in the workspaceΓÇÖs approved Azure AD tenants.
+When you want to enable this feature, you need to create managed private endpoint connection requests to [Azure Monitor private link scopes (A M P L S)](../../azure-monitor/logs/private-link-security.md) in the workspace's approved Azure AD tenants.
-You can follow below steps to create a managed private endpoint connection to Azure Monitor private link scopes (AMPLS):
+You can follow the steps below to create a managed private endpoint connection to Azure Monitor private link scopes (A M P L S):
-1. If there is no existing AMPLS, you can follow [Azure Monitor Private Link connection setup](../../azure-monitor/logs/private-link-security.md) to create one.
-2. Navigate to your AMPLS in Azure portal, on the **Azure Monitor Resources** page, click **Add** to add connection to your Azure Log Analytics workspace.
+1. If there is no existing A M P L S, you can follow [Azure Monitor Private Link connection setup](../../azure-monitor/logs/private-link-security.md) to create one.
+2. Navigate to your A M P L S in the Azure portal. On the **Azure Monitor Resources** page, click **Add** to add a connection to your Azure Log Analytics workspace.
3. Navigate to **Synapse Studio > Manage > Managed private endpoints**, click **New** button, select **Azure Monitor Private Link Scopes**, and **continue**. > [!div class="mx-imgBorder"]
- > ![Create AMPLS managed private endpoint 1](./media/apache-spark-azure-log-analytics/create-ampls-private-endpoint-1.png)
+ > ![Screenshot of create A M P L S managed private endpoint 1.](./media/apache-spark-azure-log-analytics/create-ampls-private-endpoint-1.png)
4. Choose your Azure Monitor Private Link Scope you created, and click **Create** button. > [!div class="mx-imgBorder"]
- > ![Create AMPLS managed private endpoint 2](./media/apache-spark-azure-log-analytics/create-ampls-private-endpoint-2.png)
+ > ![Screenshot of create A M P L S managed private endpoint 2.](./media/apache-spark-azure-log-analytics/create-ampls-private-endpoint-2.png)
5. Wait a few minutes for private endpoint provisioning.
-6. Navigate to your AMPLS in Azure portal again, on the **Private Endpoint connections** page, select the connection provisioned and **Approve**.
+6. Navigate to your A M P L S in the Azure portal again. On the **Private Endpoint connections** page, select the provisioned connection, and then select **Approve**.
> [!NOTE]
-> - The AMPLS object has a number of limits you should consider when planning your Private Link setup. See [AMPLS limits](../../azure-monitor/logs/private-link-security.md) for a deeper review of these limits.
+> - The A M P L S object has a number of limits you should consider when planning your Private Link setup. See [A M P L S limits](../../azure-monitor/logs/private-link-security.md) for a deeper review of these limits.
> - Check if you have [right permission](../security/synapse-workspace-access-control-overview.md) to create managed private endpoint. ## Next steps
synapse-analytics Apache Spark Development Using Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-development-using-notebooks.md
Notebook reference works in both interactive mode and Synapse pipeline.
> [!NOTE] > - ```%run``` command currently only supports passing an absolute path or notebook name as a parameter; relative paths are not supported. > - ```%run``` command currently supports only 4 parameter value types: `int`, `float`, `bool`, `string`; variable replacement operation is not supported.
-> - The referenced notebooks are required to be published. You need to publish the notebooks to reference them. Synapse Studio does not recognize the unpublished notebooks from the Git repo.
+> - The referenced notebooks must be published. You need to publish the notebooks to reference them, unless [Reference unpublished notebook](#reference-unpublished-notebook) is enabled. Synapse Studio does not recognize unpublished notebooks from the Git repo.
> - Referenced notebooks do not support statement that depth is larger than **five**. >
The number of tasks per each job or stage help you to identify the parallel leve
You can specify the timeout duration, the number, and the size of executors to give to the current Spark session in **Configure session**. Restart the Spark session for configuration changes to take effect. All cached notebook variables are cleared.
+You can also create a configuration from the Apache Spark configuration page, or select an existing configuration. For details, see [Apache Spark Configuration Management](../../synapse-analytics/spark/apache-spark-azure-create-spark-configuration.md).
+ [![Screenshot of session-management](./media/apache-spark-development-using-notebooks/synapse-azure-notebook-spark-session-management.png)](./media/apache-spark-development-using-notebooks/synapse-azure-notebook-spark-session-management.png#lightbox) #### Spark session configuration magic command
Notebook will use default value if run a notebook in interactive mode directly o
During the pipeline run mode, you can configure pipeline Notebook activity settings as below: ![Screenshot of parameterized session configuration](./media/apache-spark-development-using-notebooks/parameterized-session-config.png)
-If you want to change the session configuration, pipeline Notebook activity parameters name should be same as activityParameterName in the notebook. When run this pipeline, in this example driverCores in %%configure will be replaced by 8 and livy.rsc.sql.num-rows will be replaced by 4000.
+If you want to change the session configuration, the pipeline Notebook activity parameter name should be the same as activityParameterName in the notebook. When this pipeline runs, in this example driverCores in %%configure will be replaced by 8 and livy.rsc.sql.num-rows will be replaced by 4000.
> [!NOTE] > If run pipeline failed because of using this new %%configure magic, you can check more error information by running %%configure magic cell in the interactive mode of the notebook.
Widgets are eventful python objects that have a representation in the browser, o
```python import ipywidgets as widgets ```
-2. You can use top-level `display` function to render a widget, or leave a expression of **widget** type at the last line of code cell.
+2. You can use the top-level `display` function to render a widget, or leave an expression of **widget** type on the last line of the code cell.
```python slider = widgets.IntSlider() display(slider)
Available cell magics:
-## Reference unpublished notebook
+
+<h2 id="reference-unpublished-notebook">Reference unpublished notebook</h2>
+Reference unpublished notebook is helpful when you want to debug "locally". When this feature is enabled, a notebook run fetches the current content from the web cache. If you run a cell that includes a reference notebook statement, you will reference the notebooks presented in the current notebook browser instead of the saved versions in the cluster. This means that changes in your notebook editor can be referenced immediately by other notebooks without having to be published (Live mode) or committed (Git mode). By leveraging this approach, you can easily avoid common libraries getting polluted during the development or debugging process.
synapse-analytics Apache Spark Job Definitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-job-definitions.md
In this section, you create an Apache Spark job definition for PySpark (Python).
|Executors| Number of executors to be given in the specified Apache Spark pool for the job.| |Executor size| Number of cores and memory to be used for executors given in the specified Apache Spark pool for the job.| |Driver size| Number of cores and memory to be used for driver given in the specified Apache Spark pool for the job.|
+ |Apache Spark configuration| Customize configurations by adding properties below. If you do not add a property, Azure Synapse will use the default value when applicable.|
![Set the value of the Spark job definition for Python](./media/apache-spark-job-definitions/create-py-definition.png)
In this section, you create an Apache Spark job definition for Apache Spark(Scal
|Executors| Number of executors to be given in the specified Apache Spark pool for the job.| |Executor size| Number of cores and memory to be used for executors given in the specified Apache Spark pool for the job.| |Driver size| Number of cores and memory to be used for driver given in the specified Apache Spark pool for the job.|
+ |Apache Spark configuration| Customize configurations by adding properties below. If you do not add a property, Azure Synapse will use the default value when applicable.|
![Set the value of the Spark job definition for scala](./media/apache-spark-job-definitions/create-scala-definition.png)
+
7. Select **Publish** to save the Apache Spark job definition.
In this section, you create an Apache Spark job definition for .NET Spark(C#/F#)
|Executors| Number of executors to be given in the specified Apache Spark pool for the job.| |Executor size| Number of cores and memory to be used for executors given in the specified Apache Spark pool for the job.| |Driver size| Number of cores and memory to be used for driver given in the specified Apache Spark pool for the job.|-
+ |Apache Spark configuration| Customize configurations by adding properties below. If you do not add a property, Azure Synapse will use the default value when applicable.|
+
![Set the value of the Spark job definition for dotnet](./media/apache-spark-job-definitions/create-dotnet-definition.png) 7. Select **Publish** to save the Apache Spark job definition. ![publish dotnet definition](./media/apache-spark-job-definitions/publish-dotnet-definition.png) +
+> [!NOTE]
+>
+> For the Apache Spark configuration, if you don't customize it in the Apache Spark job definition, the default configuration will be used when running the job.
++++ ## Create Apache Spark job definition by importing a JSON file You can import an existing local JSON file into Azure Synapse workspace from the **Actions** (...) menu of the Apache Spark job definition Explorer to create a new Apache Spark job definition.
synapse-analytics Synapse Spark Sql Pool Import Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/synapse-spark-sql-pool-import-export.md
Previously updated : 03/18/2022 Last updated : 05/10/2022
## Introduction
-The Azure Synapse Dedicated SQL Pool Connector for Apache Spark in Azure Synapse Analytics enables efficient transfer of large data sets between the [Apache Spark runtime](../../synapse-analytics/spark/apache-spark-overview.md) and the [Dedicated SQL pool](../../synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md). The connector is implemented using `Scala` language. The connector is shipped as a default library with Azure Synapse Workspace. To use the Connector with other notebook language choices, use the Spark magic command - `%%spark`.
+The Azure Synapse Dedicated SQL Pool Connector for Apache Spark in Azure Synapse Analytics enables efficient transfer of large data sets between the [Apache Spark runtime](../../synapse-analytics/spark/apache-spark-overview.md) and the [Dedicated SQL pool](../../synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md). The connector is shipped as a default library with Azure Synapse Workspace. It is implemented in `Scala` and supports both Scala and Python. To use the connector with other notebook language choices, use the Spark magic command - `%%spark`.
At a high-level, the connector provides the following capabilities:
At a high-level, the connector provides the following capabilities:
* Introduces an optional call-back handle (a Scala function argument) that clients can use to receive post-write metrics. * Few examples include - number of records, duration to complete certain action, and failure reason. + ## Orchestration approach ### Read
There are two ways to grant access permissions to Azure Data Lake Storage Gen2 -
* `Write` enables ability to write. * It's important to configure ACLs such that the Connector can successfully write and read from the storage locations.
->[!Note]
+> [!Note]
> * If you'd like to run notebooks using Synapse Workspace pipelines you must also grant above listed access permissions to the Synapse Workspace default managed identity. The workspace's default managed identity name is same as the name of the workspace. > > * To use the Synapse workspace with secured storage accounts, a managed private end point must be [configured](../../storage/common/storage-network-security.md?tabs=azure-portal) from the notebook. The managed private end point must be approved from the ADLS Gen2 storage account's `Private endpoint connections` section in the `Networking` pane.
Following is the list of configuration options based on usage scenario:
This section presents reference code templates to describe how to use and invoke the Azure Synapse Dedicated SQL Pool Connector for Apache Spark.
+> [!Note]
+> Using the connector in Python:
+> * The connector is supported in Python for Spark 3 only. For Spark 2.4, we can use the Scala connector API to interact with content from a DataFrame in PySpark by using DataFrame.createOrReplaceTempView or DataFrame.createOrReplaceGlobalTempView. See Section - [Using materialized data across cells](#using-materialized-data-across-cells).
+> * The call back handle is not available in Python.
+ ### Read from Azure Synapse Dedicated SQL Pool #### Read Request - `synapsesql` method signature
This section presents reference code templates to describe how to use and invoke
synapsesql(tableName:String) => org.apache.spark.sql.DataFrame ```
+```python
+synapsesql(table_name: str) -> org.apache.spark.sql.DataFrame
+```
+ #### Read using Azure AD based authentication
+##### [Scala](#tab/scala)
+ ```Scala //Use case is to read data from an internal table in Synapse Dedicated SQL Pool DB //Azure Active Directory based authentication approach is preferred here.
val dfToReadFromTable:DataFrame = spark.read.
dfToReadFromTable.show() ```
+##### [Python](#tab/python)
+
+```python
+# Add required imports
+import com.microsoft.spark.sqlanalytics
+from com.microsoft.spark.sqlanalytics.Constants import Constants
+from pyspark.sql.functions import col
+
+# Read from existing internal table
+dfToReadFromTable = (spark.read
+ # If `Constants.SERVER` is not provided, the `<database_name>` from the three-part table name argument
+ # to `synapsesql` method is used to infer the Synapse Dedicated SQL End Point.
+ .option(Constants.SERVER, "<sql-server-name>.sql.azuresynapse.net")
+ # Defaults to storage path defined in the runtime configurations
+ .option(Constants.TEMP_FOLDER, "abfss://<container_name>@<storage_account_name>.dfs.core.windows.net/<some_base_path_for_temporary_staging_folders>")
+ # Three-part table name from where data will be read.
+ .synapsesql("<database_name>.<schema_name>.<table_name>")
+ # Column-pruning i.e., query select column values.
+ .select("<some_column_1>", "<some_column_5>", "<some_column_n>")
+ # Push-down filter criteria that gets translated to SQL Push-down Predicates.
+ .filter(col("Title").contains("E"))
+ # Fetch a sample of 10 records
+ .limit(10))
+
+# Show contents of the dataframe
+dfToReadFromTable.show()
+```
+ #### Read using basic authentication
+##### [Scala](#tab/scala1)
+ ```Scala //Use case is to read data from an internal table in Synapse Dedicated SQL Pool DB //Azure Active Directory based authentication approach is preferred here.
val dfToReadFromTable:DataFrame = spark.read.
dfToReadFromTable.show() ```
+##### [Python](#tab/python1)
+
+```python
+# Add required imports
+import com.microsoft.spark.sqlanalytics
+from com.microsoft.spark.sqlanalytics.Constants import Constants
+from pyspark.sql.functions import col
+
+# Read from existing internal table
+dfToReadFromTable = (spark.read
+ # If `Constants.SERVER` is not provided, the `<database_name>` from the three-part table name argument
+ # to `synapsesql` method is used to infer the Synapse Dedicated SQL End Point.
+ .option(Constants.SERVER, "<sql-server-name>.sql.azuresynapse.net")
+ # Set database user name
+ .option(Constants.USER, "<user_name>")
+ # Set user's password to the database
+ .option(Constants.PASSWORD, "<user_password>")
+ # Set name of the data source definition that is defined with database scoped credentials.
+ # https://docs.microsoft.com/sql/t-sql/statements/create-external-data-source-transact-sql?view=sql-server-ver15&tabs=dedicated#h-create-external-data-source-to-access-data-in-azure-storage-using-the-abfs-interface
+ # Data extracted from the SQL query will be staged to the storage path defined on the data source's location setting.
+ .option(Constants.DATA_SOURCE, "<data_source_name>")
+ # Three-part table name from where data will be read.
+ .synapsesql("<database_name>.<schema_name>.<table_name>")
+ # Column-pruning i.e., query select column values.
+ .select("<some_column_1>", "<some_column_5>", "<some_column_n>")
+ # Push-down filter criteria that gets translated to SQL Push-down Predicates.
+ .filter(col("Title").contains("E"))
+ # Fetch a sample of 10 records
+ .limit(10))
+
+# Show contents of the dataframe
+dfToReadFromTable.show()
+
+```
+ ### Write to Azure Synapse Dedicated SQL Pool #### Write Request - `synapsesql` method signature
synapsesql(tableName:String,
callBackHandle=Option[(Map[String, Any], Option[Throwable])=>Unit]):Unit ```
+```python
+synapsesql(table_name: str, table_type: str = Constants.INTERNAL, location: str = None) -> None
+```
+ #### Write using Azure AD based authentication Following is a comprehensive code template that describes how to use the Connector for write scenarios:
+##### [Scala](#tab/scala2)
+ ```Scala //Add required imports import org.apache.spark.sql.DataFrame
readDF.
if(errorDuringWrite.isDefined) throw errorDuringWrite.get ```
+##### [Python](#tab/python2)
+
+```python
+
+# Write using AAD Auth to internal table
+# Add required imports
+import com.microsoft.spark.sqlanalytics
+from com.microsoft.spark.sqlanalytics.Constants import Constants
+
+# Configure and submit the request to write to Synapse Dedicated SQL Pool
+# Sample below is using AAD-based authentication approach; See further examples to leverage SQL Basic auth.
+(df.write
+ # If `Constants.SERVER` is not provided, the `<database_name>` from the three-part table name argument
+ # to `synapsesql` method is used to infer the Synapse Dedicated SQL End Point.
+ .option(Constants.SERVER, "<sql-server-name>.sql.azuresynapse.net")
+ # Like-wise, if `Constants.TEMP_FOLDER` is not provided, the connector will use the runtime staging directory config (see section on Configuration Options for details).
+ .option(Constants.TEMP_FOLDER, "abfss://<container_name>@<storage_account_name>.dfs.core.windows.net/<some_base_path_for_temporary_staging_folders>")
+ # Choose a save mode that is apt for your use case.
+ # Options for save modes are "error" or "errorifexists" (default), "overwrite", "append", "ignore".
+ # refer to https://spark.apache.org/docs/latest/sql-data-sources-load-save-functions.html#save-modes
+ .mode("overwrite")
+ # Required parameter - Three-part table name to which data will be written
+ .synapsesql("<database_name>.<schema_name>.<table_name>"))
++
+# Write using AAD Auth to external table
+# Add required imports
+import com.microsoft.spark.sqlanalytics
+from com.microsoft.spark.sqlanalytics.Constants import Constants
+
+# Setup and trigger the read DataFrame for write to Synapse Dedicated SQL Pool.
+# Sample below is using AAD-based authentication approach; See further examples to leverage SQL Basic auth.
+(df.write
+ # If `Constants.SERVER` is not provided, the `<database_name>` from the three-part table name argument
+ # to `synapsesql` method is used to infer the Synapse Dedicated SQL End Point.
+ .option(Constants.SERVER, "<sql-server-name>.sql.azuresynapse.net")
+ # Set name of the data source definition that is defined with database scoped credentials.
+ # https://docs.microsoft.com/sql/t-sql/statements/create-external-data-source-transact-sql?view=sql-server-ver15&tabs=dedicated#h-create-external-data-source-to-access-data-in-azure-storage-using-the-abfs-interface
+ .option(Constants.DATA_SOURCE, "<data_source_name>")
+ # Choose a save mode that is apt for your use case.
+ # Options for save modes are "error" or "errorifexists" (default), "overwrite", "append", "ignore".
+ # refer to https://spark.apache.org/docs/latest/sql-data-sources-load-save-functions.html#save-modes
+ .mode("overwrite")
+ # Required parameter - Three-part table name to which data will be written
+ .synapsesql("<database_name>.<schema_name>.<table_name>",
+ # Optional Parameter which is used to specify table type. Default is internal i.e. Constants.INTERNAL.
+ # For external table type, the value is Constants.EXTERNAL.
+ Constants.EXTERNAL,
+ # Optional parameter that is used to specify external table's base folder; defaults to `database_name/schema_name/table_name`
+ "/path/to/external/table"))
+
+```
+ #### Write using basic authentication Following code snippet replaces the write definition described in the [Write using Azure AD based authentication](#write-using-azure-ad-based-authentication) section, to submit write request using SQL basic authentication approach:
+##### [Scala](#tab/scala3)
+ ```Scala //Define write options to use SQL basic authentication val writeOptionsWithBasicAuth:Map[String, String] = Map(Constants.SERVER -> "<dedicated-pool-sql-server-name>.sql.azuresynapse.net",
readDF.
callBackHandle = Some(callBackFunctionToReceivePostWriteMetrics)) ```
+##### [Python](#tab/python3)
+
+```python
+# Write using Basic Auth to Internal table
+# Add required imports
+import com.microsoft.spark.sqlanalytics
+from com.microsoft.spark.sqlanalytics.Constants import Constants
+
+# Setup and trigger the read DataFrame for write to Synapse Dedicated SQL Pool.
+
+(df.write
+ # If `Constants.SERVER` is not provided, the `<database_name>` from the three-part table name argument
+ # to `synapsesql` method is used to infer the Synapse Dedicated SQL End Point.
+ .option(Constants.SERVER, "<sql-server-name>.sql.azuresynapse.net")
+ # Set database user name
+ .option(Constants.USER, "<user_name>")
+ # Set user's password to the database
+ .option(Constants.PASSWORD, "<user_password>")
+ # if `Constants.TEMP_FOLDER` is not provided, the connector will use the runtime staging directory config (see section on Configuration Options for details).
+ .option(Constants.TEMP_FOLDER, "abfss://<container_name>@<storage_account_name>.dfs.core.windows.net/<some_base_path_for_temporary_staging_folders>")
+ # For Basic Auth, need the storage account key for the storage account where the data will be staged
+ # .option(Constants.STAGING_STORAGE_ACCOUNT_KEY, "<storage_account_key>")
+ # Choose a save mode that is apt for your use case.
+ # Options for save modes are "error" or "errorifexists" (default), "overwrite", "append", "ignore".
+ # refer to https://spark.apache.org/docs/latest/sql-data-sources-load-save-functions.html#save-modes
+ .mode("overwrite")
+ # Required parameter - Three-part table name to which data will be written
+ .synapsesql("<database_name>.<schema_name>.<table_name>"))
+
+# Write using Basic Auth to External table
+# Add required imports
+import com.microsoft.spark.sqlanalytics
+from com.microsoft.spark.sqlanalytics.Constants import Constants
+
+# Setup and trigger the read DataFrame for write to Synapse Dedicated SQL Pool.
+(df.write
+ # If `Constants.SERVER` is not provided, the `<database_name>` from the three-part table name argument
+ # to `synapsesql` method is used to infer the Synapse Dedicated SQL End Point.
+ .option(Constants.SERVER, "<sql-server-name>.sql.azuresynapse.net")
+ # Set database user name
+ .option(Constants.USER, "<user_name>")
+ # Set user's password to the database
+ .option(Constants.PASSWORD, "<user_password>")
+ # Set name of the data source with database scoped credentials for external table.
+ # https://docs.microsoft.com/sql/t-sql/statements/create-external-data-source-transact-sql?view=sql-server-ver15&tabs=dedicated#h-create-external-data-source-to-access-data-in-azure-storage-using-the-abfs-interface
+ .option(Constants.DATA_SOURCE, "<data_source_name>")
+ # For Basic Auth, need the storage account key for the storage account where the data will be staged
+ .option(Constants.STAGING_STORAGE_ACCOUNT_KEY,"<storage_account_key>")
+ # Choose a save mode that is apt for your use case.
+ # Options for save modes are "error" or "errorifexists" (default), "overwrite", "append", "ignore".
+ # refer to https://spark.apache.org/docs/latest/sql-data-sources-load-save-functions.html#save-modes
+ .mode("overwrite")
+ # Required parameter - Three-part table name to which data will be written
+ .synapsesql("<database_name>.<schema_name>.<table_name>",
+ # Optional Parameter which is used to specify table type. Default is internal i.e. Constants.INTERNAL.
+ # For external table type, the value is Constants.EXTERNAL.
+ Constants.EXTERNAL,
+ # Optional parameter that is used to specify external table's base folder; defaults to `database_name/schema_name/table_name`
+ "/path/to/external/table"))
+
+```
+ In a basic authentication approach, in order to read data from a source storage path other configuration options are required. Following code snippet provides an example to read from an Azure Data Lake Storage Gen2 data source using Service Principal credentials: ```Scala
Following is a sample JSON string with post-write metrics:
### More code samples
-#### Using the Connector with other language preferences
-
-Example that demonstrates how to use the Connector with `PySpark (Python)` language preference:
-
-```Python
-%%spark
-
-import org.apache.spark.sql.DataFrame
-import com.microsoft.spark.sqlanalytics.utils.Constants
-import org.apache.spark.sql.SqlAnalyticsConnector._
-
-//Code to write or read goes here (refer to the aforementioned code templates)
-
-```
- #### Using materialized data across cells Spark DataFrame's `createOrReplaceTempView` can be used to access data fetched in another cell, by registering a temporary view.
Spark DataFrame's `createOrReplaceTempView` can be used to access data fetched i
spark.sql("select * from <temporary_view_name>").show() ```
-## Response handling
+### Response handling
Invoking `synapsesql` has two possible end states - Success or a Failed State. This section describes how to handle the request response for each scenario.
-### Read request response
+#### Read request response
Upon completion, the read response snippet is displayed in the cell's output. Failure in the current cell will also cancel subsequent cell executions. Detailed error information is available in the Spark Application Logs.
-### Write request response
+#### Write request response
By default, a write response is printed to the cell output. On failure, the current cell is marked as failed, and subsequent cell executions will be aborted. The other approach is to pass the [callback handle](#write-request-callback-handle) option to the `synapsesql` method. The callback handle will provide programmatic access to the write response.
synapse-analytics Quickstart Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-bicep.md
+
+ Title: Create an Azure Synapse Analytics dedicated SQL pool (formerly SQL DW) using Bicep
+description: Learn how to create an Azure Synapse Analytics SQL pool using Bicep.
+++++ Last updated : 05/20/2022+++
+# Quickstart: Create an Azure Synapse Analytics dedicated SQL pool (formerly SQL DW) using Bicep
+
+This Bicep file will create a dedicated SQL pool (formerly SQL DW) with Transparent Data Encryption enabled. Dedicated SQL pool (formerly SQL DW) refers to the enterprise data warehousing features that are generally available in Azure Synapse.
++
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/sql-data-warehouse-transparent-encryption-create/).
++
+The Bicep file defines one resource:
+
+- [Microsoft.Sql/servers](/azure/templates/microsoft.sql/servers)
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as `main.bicep` to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters sqlAdministratorLogin=<admin-login> databasesName=<db-name> capacity=<int>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -sqlAdministratorLogin "<admin-login>" -databasesName "<db-name>" -capacity <int>
+ ```
+
+
+
+ > [!NOTE]
+ > Replace **\<admin-login\>** with the administrator login username for the SQL server. Replace **\<db-name\>** with the name of the database. Replace **\<int\>** with the DW performance level. The minimum value is 900 and the maximum value is 54000. You'll also be prompted to enter **sqlAdministratorPassword**.
+
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Review deployed resources
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+## Clean up resources
+
+When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and its resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+In this quickstart, you created a dedicated SQL pool (formerly SQL DW) using Bicep and validated the deployment. To learn more about Azure Synapse Analytics and Bicep, see the articles below.
+
+- Read an [Overview of Azure Synapse Analytics](sql-data-warehouse-overview-what-is.md)
+- Learn more about [Bicep](../../azure-resource-manager/bicep/overview.md)
+- [Quickstart: Create Bicep files with Visual Studio Code](../../azure-resource-manager/bicep/quickstart-create-bicep-use-visual-studio-code.md)
synapse-analytics Create Use Views https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/create-use-views.md
Last updated 05/20/2020 --++ # Create and use views using serverless SQL pool in Azure Synapse Analytics
from openrowset(
The `OPENJSON` function parses each line from the JSONL file containing one JSON document per line in textual format.
-## CosmosDB view
+## <a id="cosmosdb-view"></a> Cosmos DB views on containers
-The views can be created on top of the Azure CosmosDB containers if the CosmosDB analytical storage is enabled on the container. CosmosDB account name, database name, and container name should be added as a part of the view, and the read-only access key should be placed in the database scoped credential that the view references.
+Views can be created on top of Azure Cosmos DB containers if Cosmos DB analytical storage is enabled on the container. The Cosmos DB account name, database name, and container name should be added as a part of the view, and the read-only access key should be placed in the database scoped credential that the view references.
```sql CREATE DATABASE SCOPED CREDENTIAL MyCosmosDbAccountCredential
FROM OPENROWSET(
) with ( date_rep varchar(20), cases bigint, geo_id varchar(6) ) as rows ```
-Find more details about [querying CosmosDB containers using Synapse Link here](query-cosmos-db-analytical-store.md).
+For more information, see [Query Azure Cosmos DB data with a serverless SQL pool in Azure Synapse Link](query-cosmos-db-analytical-store.md).
## Use a view
synapse-analytics Develop Tables External Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-tables-external-tables.md
CREATE EXTERNAL DATA SOURCE SqlOnDemandDemo WITH (
CREDENTIAL = sqlondemand ); ```
+> [!NOTE]
+> SQL users need to have proper permissions on database scoped credentials to access the data source in Azure Synapse Analytics serverless SQL pool. For more information, see [Access external storage using serverless SQL pool in Azure Synapse Analytics](https://docs.microsoft.com/azure/synapse-analytics/sql/develop-storage-files-overview?tabs=impersonation#permissions).
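+
+As a minimal, hedged sketch (the user name `reporting_user` is hypothetical; the credential name matches the `sqlondemand` credential created above), granting a user access to the database scoped credential might look like:
+
+```sql
+-- Allow the database user to reference the database scoped credential
+-- that the external data source uses.
+GRANT REFERENCES ON DATABASE SCOPED CREDENTIAL::sqlondemand TO [reporting_user];
+```
+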
The following example creates an external data source for Azure Data Lake Gen2 pointing to the publicly available New York data set:
Specifies the row number that is read first and applies to all files. Setting th
USE_TYPE_DEFAULT = { TRUE | **FALSE** } - Specifies how to handle missing values in delimited text files when retrieving data from the text file.
+> [!NOTE]
+> USE_TYPE_DEFAULT=true is not supported for FORMAT_TYPE = DELIMITEDTEXT, PARSER_VERSION = '2.0'.
+ TRUE - If you're retrieving data from the text file, store each missing value by using the default value's data type for the corresponding column in the external table definition. For example, replace a missing value with:
synapse-analytics Overview Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/overview-features.md
+ Last updated 03/24/2022-+ # Transact-SQL features supported in Azure Synapse SQL
Query languages used in Synapse SQL can have different supported features depend
| **Built-in/system functions ([string](/sql/t-sql/functions/string-functions-transact-sql))** | Yes. All Transact-SQL [String](/sql/t-sql/functions/string-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true), [JSON](/sql/t-sql/functions/json-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true), and Collation functions, except [STRING_ESCAPE](/sql/t-sql/functions/string-escape-transact-sql?view=azure-sqldw-latest&preserve-view=true) and [TRANSLATE](/sql/t-sql/functions/translate-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes. All Transact-SQL [String](/sql/t-sql/functions/string-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true), [JSON](/sql/t-sql/functions/json-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true), and Collation functions are supported. | | **Built-in/system functions ([Cryptographic](/sql/t-sql/functions/cryptographic-functions-transact-sql))** | Some | `HASHBYTES` is the only supported cryptographic function in serverless SQL pools. | | **Built-in/system table-value functions** | Yes, [Transact-SQL Rowset functions](/sql/t-sql/functions/functions?view=azure-sqldw-latest&preserve-view=true#rowset-functions), except [OPENXML](/sql/t-sql/functions/openxml-transact-sql?view=azure-sqldw-latest&preserve-view=true), [OPENDATASOURCE](/sql/t-sql/functions/opendatasource-transact-sql?view=azure-sqldw-latest&preserve-view=true), [OPENQUERY](/sql/t-sql/functions/openquery-transact-sql?view=azure-sqldw-latest&preserve-view=true), and [OPENROWSET](/sql/t-sql/functions/openrowset-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes, all [Transact-SQL Rowset functions](/sql/t-sql/functions/functions?view=azure-sqldw-latest&preserve-view=true#rowset-functions) are supported, except [OPENXML](/sql/t-sql/functions/openxml-transact-sql?view=azure-sqldw-latest&preserve-view=true), [OPENDATASOURCE](/sql/t-sql/functions/opendatasource-transact-sql?view=azure-sqldw-latest&preserve-view=true), and [OPENQUERY](/sql/t-sql/functions/openquery-transact-sql?view=azure-sqldw-latest&preserve-view=true). |
-| **Built-in/system aggregates** | Transact-SQL built-in aggregates except, except [CHECKSUM_AGG](/sql/t-sql/functions/checksum-agg-transact-sql?view=azure-sqldw-latest&preserve-view=true) and [GROUPING_ID](/sql/t-sql/functions/grouping-id-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes, all Transact-SQL built-in [aggregates](/sql/t-sql/functions/aggregate-functions-transact-sql?view=sql-server-ver15&preserve-view=true) are supported. |
+| **Built-in/system aggregates** | Transact-SQL built-in aggregates, except [CHECKSUM_AGG](/sql/t-sql/functions/checksum-agg-transact-sql?view=azure-sqldw-latest&preserve-view=true) and [GROUPING_ID](/sql/t-sql/functions/grouping-id-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes, all Transact-SQL built-in [aggregates](/sql/t-sql/functions/aggregate-functions-transact-sql?view=sql-server-ver15&preserve-view=true) are supported. |
| **Operators** | Yes, all [Transact-SQL operators](/sql/t-sql/language-elements/operators-transact-sql?view=azure-sqldw-latest&preserve-view=true) except [!>](/sql/t-sql/language-elements/not-greater-than-transact-sql?view=azure-sqldw-latest&preserve-view=true) and [!<](/sql/t-sql/language-elements/not-less-than-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes, all [Transact-SQL operators](/sql/t-sql/language-elements/operators-transact-sql?view=azure-sqldw-latest&preserve-view=true) are supported. | | **Control of flow** | Yes. All [Transact-SQL Control-of-flow statement](/sql/t-sql/language-elements/control-of-flow?view=azure-sqldw-latest&preserve-view=true) except [CONTINUE](/sql/t-sql/language-elements/continue-transact-sql?view=azure-sqldw-latest&preserve-view=true), [GOTO](/sql/t-sql/language-elements/goto-transact-sql?view=azure-sqldw-latest&preserve-view=true), [RETURN](/sql/t-sql/language-elements/return-transact-sql?view=azure-sqldw-latest&preserve-view=true), [USE](/sql/t-sql/language-elements/use-transact-sql?view=azure-sqldw-latest&preserve-view=true), and [WAITFOR](/sql/t-sql/language-elements/waitfor-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes. All [Transact-SQL Control-of-flow statements](/sql/t-sql/language-elements/control-of-flow?view=azure-sqldw-latest&preserve-view=true) are supported. SELECT query in `WHILE (...)` condition is not supported. | | **DDL statements (CREATE, ALTER, DROP)** | Yes. All Transact-SQL DDL statement applicable to the supported object types | Yes, all Transact-SQL DDL statement applicable to the supported object types are supported. |
Data that is analyzed can be stored on various storage types. The following tabl
| **Azure Data Lake v2** | Yes | Yes, you can use external tables and the `OPENROWSET` function to read data from ADLS. Learn how to [set up access control](develop-storage-files-storage-access-control.md). |
| **Azure Blob Storage** | Yes | Yes, you can use external tables and the `OPENROWSET` function to read data from Azure Blob Storage. Learn how to [set up access control](develop-storage-files-storage-access-control.md). |
| **Azure SQL/SQL Server (remote)** | No | No, serverless SQL pool cannot reference Azure SQL Database. You can reference serverless SQL pools from Azure SQL using [elastic queries](https://devblogs.microsoft.com/azure-sql/read-azure-storage-files-using-synapse-sql-external-tables/) or [linked servers](https://devblogs.microsoft.com/azure-sql/linked-server-to-synapse-sql-to-implement-polybase-like-scenarios-in-managed-instance). |
-| **Dataverse** | No, you can [load CosmosDB data into a dedicated pool using Synapse Link in serverless SQL pool (via ADLS)](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/loading-cosmosdb-and-dataverse-data-into-dedicated-sql-pool-dw/ba-p/3104168) or Spark. | Yes, you can read Dataverse tables using [Synapse link](/powerapps/maker/data-platform/azure-synapse-link-data-lake). |
+| **Dataverse** | No, you can [load CosmosDB data into a dedicated pool using Azure Synapse Link in serverless SQL pool (via ADLS)](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/loading-cosmosdb-and-dataverse-data-into-dedicated-sql-pool-dw/ba-p/3104168) or Spark. | Yes, you can read Dataverse tables using [Azure Synapse link for Dataverse with Azure Data Lake](/powerapps/maker/data-platform/azure-synapse-link-data-lake). |
| **Azure Cosmos DB transactional storage** | No | No, you cannot access Cosmos DB containers to update data or read data from the Cosmos DB transactional storage. Use [Spark pools to update the Cosmos DB](../synapse-link/how-to-query-analytical-store-spark.md) transactional storage. |
-| **Azure Cosmos DB analytical storage** | No, you can [load CosmosDB data into a dedicated pool using Synapse Link in serverless SQL pool (via ADLS)](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/loading-cosmosdb-and-dataverse-data-into-dedicated-sql-pool-dw/ba-p/3104168), ADF, Spark or some other load tool. | Yes, you can [query Cosmos DB analytical storage](query-cosmos-db-analytical-store.md) using [Synapse Link](../../cosmos-db/synapse-link.md?toc=%2fazure%2fsynapse-analytics%2ftoc.json). |
+| **Azure Cosmos DB analytical storage** | No, you can [load CosmosDB data into a dedicated pool using Azure Synapse Link in serverless SQL pool (via ADLS)](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/loading-cosmosdb-and-dataverse-data-into-dedicated-sql-pool-dw/ba-p/3104168), ADF, Spark or some other load tool. | Yes, you can [query Cosmos DB analytical storage](query-cosmos-db-analytical-store.md) using [Azure Synapse Link](../../cosmos-db/synapse-link.md?toc=%2fazure%2fsynapse-analytics%2ftoc.json). |
| **Apache Spark tables (in workspace)** | No | Yes, serverless pool can read PARQUET and CSV tables using [metadata synchronization](develop-storage-files-spark-tables.md). |
| **Apache Spark tables (remote)** | No | No, serverless pool can access only the PARQUET and CSV tables that are [created in Apache Spark pools in the same Synapse workspace](develop-storage-files-spark-tables.md). However, you can manually create an external table that references the external Spark table location. |
| **Databricks tables (remote)** | No | No, serverless pool can access only the PARQUET and CSV tables that are [created in Apache Spark pools in the same Synapse workspace](develop-storage-files-spark-tables.md). However, you can manually create an external table that references the Databricks table location. |
Data that is analyzed can be stored in various storage formats. The following ta
## Next steps

Additional information on best practices for dedicated SQL pool and serverless SQL pool can be found in the following articles:
-- [Best Practices for dedicated SQL pool](best-practices-dedicated-sql-pool.md)
+- [Best practices for dedicated SQL pool](best-practices-dedicated-sql-pool.md)
- [Best practices for serverless SQL pool](best-practices-serverless-sql-pool.md)
synapse-analytics Query Cosmos Db Analytical Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/query-cosmos-db-analytical-store.md
Title: Query Azure Cosmos DB data using a serverless SQL pool in Azure Synapse Link description: In this article, you'll learn how to query Azure Cosmos DB by using a serverless SQL pool in Azure Synapse Link.- Previously updated : 03/02/2021 Last updated : 05/10/2022 --++ # Query Azure Cosmos DB data with a serverless SQL pool in Azure Synapse Link
A serverless SQL pool allows you to analyze data in your Azure Cosmos DB contain
For querying Azure Cosmos DB, the full [SELECT](/sql/t-sql/queries/select-transact-sql?view=azure-sqldw-latest&preserve-view=true) surface area is supported through the [OPENROWSET](develop-openrowset.md) function, which includes the majority of [SQL functions and operators](overview-features.md). You can also store results of the query that reads data from Azure Cosmos DB along with data in Azure Blob Storage or Azure Data Lake Storage by using [create external table as select](develop-tables-cetas.md#cetas-in-serverless-sql-pool) (CETAS). You can't currently store serverless SQL pool query results to Azure Cosmos DB by using CETAS.
-In this article, you'll learn how to write a query with a serverless SQL pool that will query data from Azure Cosmos DB containers that are enabled with Azure Synapse Link. You can then learn more about building serverless SQL pool views over Azure Cosmos DB containers and connecting them to Power BI models in [this tutorial](./tutorial-data-analyst.md). This tutorial uses a container with an [Azure Cosmos DB well-defined schema](../../cosmos-db/analytical-store-introduction.md#schema-representation). You can also checkout the learn module on how to [Query Azure Cosmos DB with SQL Serverless for Azure Synapse Analytics](/learn/modules/query-azure-cosmos-db-with-sql-serverless-for-azure-synapse-analytics/)
+In this article, you'll learn how to write a query with a serverless SQL pool that will query data from Azure Cosmos DB containers that are enabled with Azure Synapse Link. You can then learn more about building serverless SQL pool views over Azure Cosmos DB containers and connecting them to Power BI models in [this tutorial](./tutorial-data-analyst.md). This tutorial uses a container with an [Azure Cosmos DB well-defined schema](../../cosmos-db/analytical-store-introduction.md#schema-representation). You can also check out the learn module on how to [Query Azure Cosmos DB with SQL Serverless for Azure Synapse Analytics](/learn/modules/query-azure-cosmos-db-with-sql-serverless-for-azure-synapse-analytics/)
## Prerequisites
For more information about the SQL types that should be used for Azure Cosmos DB
## Create view
-Creating views in the master or default databases is not recommended or supported. So you need to create a user database for your views.
+Creating views in the `master` or default databases is not recommended or supported. So you need to create a user database for your views.
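For example, a minimal sketch (the database name `MyLdw` follows the example used later in this article) that creates a user database with a UTF-8 default collation:

```sql
-- Create a user database to hold the views; a UTF-8 default collation helps avoid
-- conversion issues when reading string data from Azure Cosmos DB.
CREATE DATABASE MyLdw COLLATE Latin1_General_100_BIN2_UTF8;
```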
Once you identify the schema, you can prepare a view on top of your Azure Cosmos DB data. You should place your Azure Cosmos DB account key in a separate credential and reference this credential from `OPENROWSET` function. Do not keep your account key in the view definition.
FROM OPENROWSET(
) with ( date_rep varchar(20), cases bigint, geo_id varchar(6) ) as rows
```
-Do not use `OPENROWSET` without explicitly defined schema because it might impact your performance. Make sure that you use the smallest possible sizes for your columns (for example VARCHAR(100) instead of default VARCHAR(8000)). You should use some UTF-8 collation as default database collation or set it as explicit column collation to avoid [UTF-8 conversion issue](../troubleshoot/reading-utf8-text.md). Collation `Latin1_General_100_BIN2_UTF8` provides best performance when yu filter data using some string columns.
+Do not use `OPENROWSET` without an explicitly defined schema, because it might impact your performance. Make sure that you use the smallest possible sizes for your columns (for example, VARCHAR(100) instead of the default VARCHAR(8000)). Use a UTF-8 collation as the default database collation, or set it as an explicit column collation, to avoid [UTF-8 conversion issues](../troubleshoot/reading-utf8-text.md). The collation `Latin1_General_100_BIN2_UTF8` provides the best performance when you filter data using string columns.
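Putting this guidance together, the following sketch is one way to do it (the credential name is hypothetical, the account key is a placeholder, and the container `Ecdc`, database `covid`, and columns mirror the sample used in this article): the account key lives in a server-level credential, and the view defines an explicit, narrowly sized schema with a UTF-8 collation on the string columns.

```sql
-- Sketch only: keep the Azure Cosmos DB account key in a server-level credential
-- (hypothetical name, placeholder secret) instead of in the view definition.
CREATE CREDENTIAL MyCosmosDbAccountCredential
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = '<cosmos-db-account-key>';
GO

-- Reference the credential from the view and define an explicit, narrowly sized schema.
CREATE VIEW Ecdc
AS SELECT *
FROM OPENROWSET(
         PROVIDER = 'CosmosDB',
         CONNECTION = 'Account=synapselink-cosmosdb-sqlsample;Database=covid',
         OBJECT = 'Ecdc',
         SERVER_CREDENTIAL = 'MyCosmosDbAccountCredential'
     )
WITH (
    date_rep varchar(20) COLLATE Latin1_General_100_BIN2_UTF8,
    cases    bigint,
    geo_id   varchar(6)  COLLATE Latin1_General_100_BIN2_UTF8
) AS rows;
```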
## Query nested objects
FROM OPENROWSET(
'CosmosDB', 'Account=synapselink-cosmosdb-sqlsample;Database=covid;Key=s5zarR2pT0JWH9k8roipnWxUYBegOuFGjJpSjGlR36y86cW0GQ6RaaG8kGjsRAQoWMw1QKTkkX8HQtFpJjC8Hg==', Cord19)
-WITH ( paper_id varchar(8000),
+WITH ( paper_id varchar(8000),
title varchar(1000) '$.metadata.title', metadata varchar(max), authors varchar(max) '$.metadata.authors'
The result of this query might look like the following table:
Learn more about analyzing [complex data types in Azure Synapse Link](../how-to-analyze-complex-schema.md) and [nested structures in a serverless SQL pool](query-parquet-nested-types.md). > [!IMPORTANT]
-> If you see unexpected characters in your text like `Mélade` instead of `Mélade`, then your database collation isn't set to [UTF-8](/sql/relational-databases/collations/collation-and-unicode-support#utf8) collation.
+> If you see unexpected characters in your text like `MÃÂ©lade` instead of `Mélade`, then your database collation isn't set to [UTF-8](/sql/relational-databases/collations/collation-and-unicode-support#utf8) collation.
> [Change collation of the database](/sql/relational-databases/collations/set-or-change-the-database-collation#to-change-the-database-collation) to UTF-8 collation by using a SQL statement like `ALTER DATABASE MyLdw COLLATE LATIN1_GENERAL_100_CI_AS_SC_UTF8`.

## Flatten nested arrays
The result of this query might look like the following table:
| title | authors | first | last | affiliation |
| --- | --- | --- | --- | --- |
-| Supplementary Information An eco-epidemi… | `[{"first":"Julien","last":"Mélade","suffix":"","affiliation":{"laboratory":"Centre de Recher…` | Julien | Mélade | ` {"laboratory":"Centre de Recher…` |
+| Supplementary Information An eco-epidemi… | `[{"first":"Julien","last":"Mélade","suffix":"","affiliation":{"laboratory":"Centre de Recher…` | Julien | Mélade | ` {"laboratory":"Centre de Recher…` |
| Supplementary Information An eco-epidemi… | `[{"first":"Nicolas","last":"4#","suffix":"","affiliation":{"laboratory":"","institution":"U…` | Nicolas | 4# | `{"laboratory":"","institution":"U…` |
-| Supplementary Information An eco-epidemi… | `[{"first":"Beza","last":"Ramazindrazana","suffix":"","affiliation":{"laboratory":"Centre de Recher…` | Beza | Ramazindrazana | `{"laboratory":"Centre de Recher…` |
-| Supplementary Information An eco-epidemi… | `[{"first":"Olivier","last":"Flores","suffix":"","affiliation":{"laboratory":"UMR C53 CIRAD, …` | Olivier | Flores |`{"laboratory":"UMR C53 CIRAD, …` |
+| Supplementary Information An eco-epidemi… | `[{"first":"Beza","last":"Ramazindrazana","suffix":"","affiliation":{"laboratory":"Centre de Recher…` | Beza | Ramazindrazana | `{"laboratory":"Centre de Recher…` |
+| Supplementary Information An eco-epidemi… | `[{"first":"Olivier","last":"Flores","suffix":"","affiliation":{"laboratory":"UMR C53 CIRAD, …` | Olivier | Flores |`{"laboratory":"UMR C53 CIRAD, …` |
> [!IMPORTANT]
-> If you see unexpected characters in your text like `Mélade` instead of `Mélade`, then your database collation isn't set to [UTF-8](/sql/relational-databases/collations/collation-and-unicode-support#utf8) collation. [Change collation of the database](/sql/relational-databases/collations/set-or-change-the-database-collation#to-change-the-database-collation) to UTF-8 collation by using a SQL statement like `ALTER DATABASE MyLdw COLLATE LATIN1_GENERAL_100_CI_AS_SC_UTF8`.
+> If you see unexpected characters in your text like `MÃÂ©lade` instead of `Mélade`, then your database collation isn't set to [UTF-8](/sql/relational-databases/collations/collation-and-unicode-support#utf8) collation. [Change collation of the database](/sql/relational-databases/collations/set-or-change-the-database-collation#to-change-the-database-collation) to UTF-8 collation by using a SQL statement like `ALTER DATABASE MyLdw COLLATE LATIN1_GENERAL_100_CI_AS_SC_UTF8`.
## Azure Cosmos DB to SQL type mappings
For more information, see the following articles:
- [Use Power BI and serverless SQL pool with Azure Synapse Link](../../cosmos-db/synapse-link-power-bi.md)
- [Create and use views in a serverless SQL pool](create-use-views.md)
- [Tutorial on building serverless SQL pool views over Azure Cosmos DB and connecting them to Power BI models via DirectQuery](./tutorial-data-analyst.md)
-- Visit [Synapse link for Cosmos DB self-help page](resources-self-help-sql-on-demand.md#azure-cosmos-db) if you are getting some errors or experiencing performance issues.
+- Visit the [Azure Synapse Link for Cosmos DB self-help page](resources-self-help-sql-on-demand.md#azure-cosmos-db) if you encounter errors or performance issues.
- Check out the Learn module on how to [Query Azure Cosmos DB with SQL Serverless for Azure Synapse Analytics](/learn/modules/query-azure-cosmos-db-with-sql-serverless-for-azure-synapse-analytics/).
synapse-analytics Resources Self Help Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/resources-self-help-sql-on-demand.md
Title: Serverless SQL pool self-help description: This article contains information that can help you troubleshoot problems with serverless SQL pool.- Previously updated : 9/23/2021+ Last updated : 05/16/2022 - # Self-help for serverless SQL pool
For more information, see:
#### Content of Dataverse table can't be listed
-If you use the Azure Synapse link for Dataverse to read the linked Dataverse tables, you must use an Azure AD account to access the linked data by using serverless SQL pool. If you try to use a SQL sign-in to read an external table that's referencing the Dataverse table, you'll get the following error:
+If you are using Azure Synapse Link for Dataverse to read the linked Dataverse tables, you need to use an Azure AD account to access the linked data with a serverless SQL pool. For more information, see [Azure Synapse Link for Dataverse with Azure Data Lake](/powerapps/maker/data-platform/azure-synapse-link-data-lake).
+
+If you try to use a SQL login to read an external table that references the Dataverse table, you will get the following error:
```
External table '???' is not accessible because content of directory cannot be listed.
Apply best practices before you file a support ticket.
### Query fails with an error handling an external file (max errors count reached)
-If your query fails with the error message "Error handling external file: Max errors count reached," it means there's a mismatch between a specified column type and the data that needs to be loaded. To get more information about the error and which rows and columns to look at, change the parser version from 2.0 to 1.0.
+If your query fails with the error message 'error handling external file: Max errors count reached', it means that there is a mismatch between a specified column type and the data that needs to be loaded.
-**Example**
+To get more information about the error and which rows and columns to look at, change the parser version from `2.0` to `1.0`.
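In practice the re-run is the same query with only the `PARSER_VERSION` argument changed. A sketch (the storage path is a placeholder; the `ID` and `Firstname` columns follow the example used later in this section):

```sql
-- Re-run the failing query with parser version 1.0 so the error reports the
-- offending row and column (placeholder storage path).
SELECT *
FROM OPENROWSET(
        BULK 'https://<storage-account>.dfs.core.windows.net/<container>/names.csv',
        FORMAT = 'CSV',
        PARSER_VERSION = '1.0',  -- switched from '2.0' to surface detailed error information
        FIRSTROW = 2             -- parser 1.0 has no HEADER_ROW option; skip a header row this way
     )
WITH (ID SMALLINT, Firstname VARCHAR(100)) AS [result];
```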
-If you want to query the file names.csv with this Query 1, Azure Synapse serverless SQL pool returns with the following error:
+#### Example
+
+If you want to query the file `names.csv` with this Query 1, Azure Synapse serverless SQL pool returns with the following error:
names.csv
FROM
AS [result]
```
-Causes:
+
+#### Cause
"Error handling external file: 'Max error count reached'. File/External table name: [filepath]."
To resolve this problem, inspect the file and the data types you chose. Also che
For more information on field terminators, row delimiters, and escape quoting characters, see [Query CSV files](query-single-csv-file.md).
-**Example**
+#### Example
If you want to query the file names.csv:
Azure Synapse serverless SQL pool returns the error "Bulk load data conversion e
It's necessary to browse the data and make an informed decision to handle this problem. To look at the data that causes this problem, the data type needs to be changed first. Instead of querying the ID column with the data type SMALLINT, VARCHAR(100) is now used to analyze this issue.
+It is necessary to browse the data and make an informed decision to handle this problem.
+To look at the data that causes this problem, the data type needs to be changed first. Instead of querying column "ID" with the data type "SMALLINT", VARCHAR(100) is now used to analyze this issue.
+ With this slightly changed Query 2, the data can now be processed to return the list of names. Query 2:
Your query might not fail, but you might see that your result set isn't as expec
To resolve this problem, take another look at the data and change those settings. Debugging this query is easy, as shown in the following example.
-**Example**
+#### Example
-If you want to query the file names.csv with the query in Query 1, Azure Synapse serverless SQL pool returns with a result that looks odd:
+If you want to query the file `names.csv` with the query in Query 1, Azure Synapse serverless SQL pool returns with a result that looks odd:
names.csv
FROM
| 4,David | NULL |
| 5,Eva | NULL |
-There seems to be no value in the column Firstname. Instead, all values ended up being in the ID column. Those values are separated by a comma. The problem was caused by this line of code because it's necessary to choose the comma instead of the semicolon symbol as field terminator:
+There seems to be no value in the column `Firstname`. Instead, all values ended up being in the `ID` column. Those values are separated by a comma. The problem was caused by this line of code because it's necessary to choose the comma instead of the semicolon symbol as field terminator:
```sql FIELDTERMINATOR =';',
FROM
AS [result] ```
-"Column 'SumTripDistance' of type 'INT' is not compatible with external data type 'Parquet physical type: DOUBLE', please try with 'FLOAT'. File/External table name: '<filepath>taxi-data.parquet'."
+`Column 'SumTripDistance' of type 'INT' is not compatible with external data type 'Parquet physical type: DOUBLE', please try with 'FLOAT'. File/External table name: '<filepath>taxi-data.parquet'.`
This error message tells you that data types aren't compatible and comes with the suggestion to use FLOAT instead of INT. The error is caused by this line of code:
More information about syntax and usage:
When the file format is Parquet, the query won't recover automatically. It needs to be retried by the client application.
-### Azure Synapse Link for Dataverse
+### Synapse Link for Dataverse
-This error can occur when reading data from Azure Synapse Link for Dataverse, when Azure Synapse Link is syncing data to the lake and the data is being queried at the same time. The product group has a goal to improve this behavior.
+This error can occur when reading data from Synapse Link for Dataverse, when Synapse Link is syncing data to the lake and the data is being queried at the same time. The product group has a goal to improve this behavior.
### [0x800700A1](#tab/x800700A1)
If you use Synapse Studio, try using a desktop client such as SQL Server Managem
Check the following issues if you experience slow query execution:
-- Make sure that the client applications are collocated with the serverless SQL pool endpoint. Executing a query across the region can cause more latency and slow streaming of the result set.
-- Make sure that you don't have networking issues that can cause the slow streaming of the result set.
-- Make sure that the client application has enough resources. For example, it's not using 100% CPU.
-- Make sure that the storage account or Azure Cosmos DB analytical storage is placed in the same region as your serverless SQL endpoint.
+- Make sure that the client applications are collocated with the serverless SQL pool endpoint. Executing a query across regions can cause additional latency and slow streaming of the result set.
+- Make sure that you don't have networking issues that can cause slow streaming of the result set.
+- Make sure that the client application has enough resources (for example, it's not using 100% CPU).
+- Make sure that the storage account or Azure Cosmos DB analytical storage is placed in the same region as your serverless SQL endpoint.
See best practices for [collocating the resources](best-practices-serverless-sql-pool.md#client-applications-and-network-connections).
synapse-analytics Shared Databases Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/shared-databases-access-control.md
reviewer: vvasic-msft, jovanpop-msft, WilliamDAssafMSFT + Last updated 12/30/2021
Once these databases and tables are synchronized from Spark to serverless SQL po
`External table '<table>' is not accessible because content of directory cannot be listed.` despite them having access to data on the underlying storage account(s).
-Since synchronized databases in serverless SQL pool are read-only, they can’t be modified. Creating a user, or giving other permissions will fail if attempted. To read synchronized databases, one must have privileged server-level permissions (like sysadmin).
-This limitation is also present on external tables in serverless SQL pool when using [Synapse Link for Dataverse](/powerapps/maker/data-platform/export-to-data-lake) and lake databases tables.
+Since synchronized databases in serverless SQL pool are read-only, they can't be modified. Creating a user, or giving other permissions will fail if attempted. To read synchronized databases, one must have privileged server-level permissions (like sysadmin).
+This limitation is also present on external tables in serverless SQL pool when using [Azure Synapse Link for Dataverse](/powerapps/maker/data-platform/export-to-data-lake) and lake databases tables.
## Non-admin access to synchronized databases

A user who needs to read data and create reports usually doesn't have full administrator access (sysadmin). This user is usually a data analyst who just needs to read and analyze data using the existing tables. They don't need to create new objects. A user with minimal permission should be able to:
-- Connect to a database that is replicated from Spark
-- Select data via external tables and access the underlying ADLS data.
+- Connect to a database that is replicated from Spark
+- Select data via external tables and access the underlying ADLS data.
The code script below allows non-admin users to have server-level permissions to connect to any database. It also allows users to view data from all schema-level objects, such as tables or views. Data access security can be managed on the storage layer.
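A minimal sketch of the kind of server-level grants described above, assuming a hypothetical Azure AD login `reportuser@contoso.com`:

```sql
-- Sketch only: create a login for the analyst and grant read-only, server-level access.
CREATE LOGIN [reportuser@contoso.com] FROM EXTERNAL PROVIDER;

-- Lets the login connect to any database, including the synchronized (read-only) ones.
GRANT CONNECT ANY DATABASE TO [reportuser@contoso.com];

-- Lets the login SELECT from all user objects (tables, views) without sysadmin rights.
GRANT SELECT ALL USER SECURABLES TO [reportuser@contoso.com];
```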
Access to the data on storage account can be managed via [ACL](../../storage/blo
## Next steps
-For more information, see [SQL Authentication](sql-authentication.md).
+For more information, see [SQL Authentication](sql-authentication.md).
synapse-analytics Connect Synapse Link Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/connect-synapse-link-sql-database.md
+
+ Title: Get started with Azure Synapse Link for Azure SQL Database (Preview)
+description: Learn how to connect an Azure SQL database to an Azure Synapse workspace with Azure Synapse Link (Preview).
+++++ Last updated : 05/09/2022++++
+# Get started with Azure Synapse Link for Azure SQL Database (Preview)
+
+This article provides a step-by-step guide for getting started with Azure Synapse Link for Azure SQL Database. For more information, see [Synapse Link for Azure SQL Database (Preview)](sql-database-synapse-link.md).
+
+> [!IMPORTANT]
+> Azure Synapse Link for SQL is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Prerequisites
+
+* [Create a new Synapse workspace](https://portal.azure.com/#create/Microsoft.Synapse) to get Azure Synapse Link for SQL. Make sure to check "Disable Managed virtual network" and "Allow connections from all IP addresses" when creating your Synapse workspace.
+
+* For DTU-based provisioning, make sure your Azure SQL Database service is at least Standard tier with a minimum of 100 DTUs. Free, Basic, or Standard tiers with fewer than 100 DTUs provisioned are not supported.
+
+## Configure your source Azure SQL Database
+
+1. Go to Azure portal, navigate to your Azure SQL Server, select **Identity**, and then set **System assigned managed identity** to **On**.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/set-identity-sql-database.png" alt-text="Screenshot of turning on system assigned managed identity.":::
+
+1. Navigate to **Networking**, then check **Allow Azure services and resources to access this server**.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/configure-network-firewall-sql-database.png" alt-text="Screenshot of configuring firewalls for your SQL DB using Azure portal.":::
+
+1. Using Microsoft SQL Server Management Studio (SSMS) or Azure Data Studio, connect to the Azure SQL Server. If you want your Synapse workspace to connect to your Azure SQL Database using a managed identity, set the Azure Active Directory admin on the Azure SQL Server, and connect with that same admin account so that you have the administrative privileges required for the script in step 5.
+
+1. Expand **Databases**, right-click the database you created above, and select **New Query**.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/ssms-new-query.png" alt-text="Select your database and create a new query.":::
+
+1. If you want to have your Synapse workspace connect to your source Azure SQL Database using a [managed identity](../../active-directory/managed-identities-azure-resources/overview.md), run the following script to provide the managed identity permission to the source database.
+
+ **You can skip this step** if you instead want to have your Synapse workspace connect to your source Azure SQL Database via SQL authentication.
+
+ ```sql
+ CREATE USER <workspace name> FROM EXTERNAL PROVIDER;
+ ALTER ROLE [db_owner] ADD MEMBER <workspace name>;
+ ```
+
+1. You can create a table with your own schema; the following is just an example for a `CREATE TABLE` query. You can also insert some rows into this table to ensure there's data to be replicated.
+
+ ```sql
+ CREATE TABLE myTestTable1 (c1 int primary key, c2 int, c3 nvarchar(50))
+ ```
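+
+    For example, a few hypothetical sample rows are enough to verify that replication works end to end:
+
+    ```sql
+    -- Hypothetical sample rows so there is data to replicate.
+    INSERT INTO myTestTable1 (c1, c2, c3) VALUES
+        (1, 100, N'alpha'),
+        (2, 200, N'beta'),
+        (3, 300, N'gamma');
+    ```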
+
+## Create your target Synapse SQL pool
+
+1. Launch [Synapse Studio](https://web.azuresynapse.net/).
+
+1. Open the **Manage** hub, navigate to **SQL pools**, and select **+ New**.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/studio-new-sql-pool.png" alt-text="Create a new SQL dedicated pool from Synapse Studio.":::
+
+1. Enter a unique pool name, use the default settings, and create the dedicated pool.
+
+1. You need to create a schema if your expected schema is not available in the target Synapse SQL database. If your schema is `dbo`, you can skip this step.
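+
+    For example, if your source tables live in a hypothetical `sales` schema, create it in the target database first:
+
+    ```sql
+    -- Hypothetical schema name; run this in the target Synapse SQL database.
+    CREATE SCHEMA sales;
+    ```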
++
+## Create the Azure Synapse Link connection
+
+1. Open the **Integrate** hub, and select **+ Link connection(Preview)**.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/studio-new-link-connection.png" alt-text="Select a new link connection from Synapse Studio.":::
+
+1. Under **Source linked service**, select **New**.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/studio-new-linked-service-dropdown.png" alt-text="Select a new linked service.":::
+
+1. Enter the information for your source Azure SQL Database.
+
+ * Select the subscription, server, and database corresponding to your Azure SQL Database.
+ * If you wish to connect your Synapse workspace to the source DB using the workspace's managed identity, set **Authentication type** to **Managed Identity**.
+ * If you wish to use SQL authentication instead and know the username/password to use, select **SQL Authentication** instead.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/studio-new-linked-service.png" alt-text="Enter the server, database details to create a new linked service.":::
+
+1. Select **Test connection** to ensure the firewall rules are properly configured and the workspace can successfully connect to the source Azure SQL Database.
+
+1. Select **Create**.
+ > [!NOTE]
+ > The linked service that you create here is not dedicated to Azure Synapse Link for SQL - it can be used by any workspace user that has the appropriate permissions. Please take time to understand the scope of users who may have access to this linked service and its credentials. For more information on permissions in Azure Synapse workspaces, see [Azure Synapse workspace access control overview - Azure Synapse Analytics](/synapse-analytics/security/synapse-workspace-access-control-overview).
+
+1. Select one or more source tables to replicate to your Synapse workspace and select **Continue**.
+
+ > [!NOTE]
+ > A given source table can only be enabled in at most one link connection at a time.
+
+1. Select a target Synapse SQL database and pool.
+
+1. Provide a name for your Azure Synapse Link connection, and select the number of cores. These cores will be used for the movement of data from the source to the target.
+
+ > [!NOTE]
+ > We recommend starting low and increasing as needed.
+
+1. Select **OK**.
+
+1. With the new Azure Synapse Link connection open, you can update the target table name, distribution type and structure type.
+
+ > [!NOTE]
+ > * Consider heap table for structure type when your data contains varchar(max), nvarchar(max), and varbinary(max).
+ > * Make sure the schema in your Synapse dedicated SQL pool has already been created before you start the link connection. Azure Synapse Link for SQL will create tables automatically under your schema in the Synapse dedicated SQL pool.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/studio-edit-link.png" alt-text="Edit Azure Synapse Link connection from Synapse Studio.":::
+
+1. Select **Publish all** to save the new link connection to the service.
+
+## Start the Azure Synapse Link connection
+
+1. Select **Start** and wait a few minutes for the data to be replicated.
+
+ > [!NOTE]
+ > When being started, a link connection will start from a full initial load from your source database followed by incremental change feeds via the change feed feature in Azure SQL database. For more information, see [Azure Synapse Link for SQL change feed](/sql/sql-server/synapse-link/synapse-link-sql-server-change-feed).
+
+## Monitor the status of the Azure Synapse Link connection
+
+You may monitor the status of your Azure Synapse Link connection, see which tables are being initially copied over (Snapshotting), and see which tables are in continuous replication mode (Replicating).
+
+1. Navigate to the **Monitor** hub, and select **Link connections**.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/studio-monitor-link-connections.png" alt-text="Monitor the status of Azure Synapse Link connection from the monitor hub.":::
+
+1. Open the Azure Synapse Link connection you started and view the status of each table.
+
+1. Select **Refresh** on the monitoring view for your connection to observe any updates to the status.
+
+## Query replicated data
+
+Wait for a few minutes, then check that the target database has the expected tables and data. You can also now explore the replicated tables in your target Synapse dedicated SQL pool.
+
+1. In the **Data** hub, under **Workspace**, open your target database, and within **Tables**, right-click one of your target tables.
+
+1. Choose **New SQL script**, then **Select TOP 100 rows**.
+
+1. Run this query to view the replicated data in your target Synapse dedicated SQL pool.
+
+1. You can also query the target database with SSMS (or other tools). Use the dedicated SQL endpoint for your workspace as the server name. This is typically `<workspacename>.sql.azuresynapse.net`. Add `Database=databasename@poolname` as another connection string parameter when connecting via SSMS (or other tools).
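+
+    For example, a quick check (assuming the sample table from earlier kept the name `myTestTable1` under the `dbo` schema in the target pool) might look like this:
+
+    ```sql
+    -- Verify that the replicated rows arrived in the dedicated SQL pool.
+    SELECT TOP 100 * FROM dbo.myTestTable1;
+    ```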
+
+## Add/remove table in existing Azure Synapse Link connection
+
+You can add/remove tables on Synapse Studio as follows:
+
+1. Open the **Integrate Hub**.
+
+1. Select the **Link connection** you want to edit and open it.
+
+1. Select **+New** table to add tables on Synapse Studio, or select the trash can icon to the right of a table to remove an existing table. You can add or remove tables while the link connection is running.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-server-2022/link-connection-add-remove-tables.png" alt-text="Screenshot of link connection to add table.":::
+
+ > [!NOTE]
+ > You can directly add or remove tables when a link connection is running.
+
+## Stop the Azure Synapse Link connection
+
+You can stop the Azure Synapse Link connection in Synapse Studio as follows:
+
+1. Open the **Integrate Hub** of your Synapse workspace.
+
+1. Select the **Link connection** you want to edit and open it.
+
+1. Select **Stop** to stop the link connection, and it will stop replicating your data.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-server-2022/stop-link-connection.png" alt-text="Screenshot of link connection to stop link.":::
+
+ > [!NOTE]
+ > If you restart a link connection after stopping it, it will start from a full initial load from your source database followed by incremental change feeds.
++
+## Next steps
+
+If you are using a different type of database, see how to:
+
+* [Configure Azure Synapse Link for Azure Cosmos DB](../../cosmos-db/configure-synapse-link.md?context=/azure/synapse-analytics/context/context)
+* [Configure Azure Synapse Link for Dataverse](/powerapps/maker/data-platform/azure-synapse-link-synapse?context=/azure/synapse-analytics/context/context)
+* [Get started with Azure Synapse Link for SQL Server 2022](connect-synapse-link-sql-server-2022.md)
synapse-analytics Connect Synapse Link Sql Server 2022 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/connect-synapse-link-sql-server-2022.md
+
+ Title: Create Azure Synapse Link for SQL Server 2022 (Preview)
+description: Learn how to create and connect a SQL Server 2022 instance to an Azure Synapse workspace with Azure Synapse Link (Preview).
+++++ Last updated : 05/09/2022++++
+# Get started with Azure Synapse Link for SQL Server 2022 (Preview)
+
+This article provides a step-by-step guide for getting started with Azure Synapse Link for SQL Server 2022. For more information, see [Get started with Azure Synapse Link for SQL Server 2022 (Preview)](sql-server-2022-synapse-link.md).
+
+> [!IMPORTANT]
+> Azure Synapse Link for SQL is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Prerequisites
+
+* [Create a new Synapse workspace](https://portal.azure.com/#create/Microsoft.Synapse) to get Azure Synapse Link for SQL. Make sure to check "Disable Managed virtual network" and "Allow connections from all IP addresses" when creating your Synapse workspace. If you have a workspace created after May 24, 2022, you do not need to create a new workspace.
+
+* Create an Azure Data Lake Storage Gen2 account (different from the account created with the Azure Synapse Analytics workspace) to be used as the landing zone to stage the data submitted by SQL Server 2022. For more information, see [how to create an Azure Data Lake Storage Gen2 account](../../storage/blobs/create-data-lake-storage-account.md).
++
+* Make sure your database in SQL Server 2022 has a master key created.
+
+ ```sql
+ CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<a new password>'
+ ```
+
+## Create your target Synapse dedicated SQL pool
+
+1. Launch [Synapse Studio](https://ms.web.azuresynapse.net/).
+
+1. Open the **Manage** hub, navigate to **SQL pools**, and select **+ New**.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/studio-new-sql-pool.png" alt-text="Screenshot of creating a new SQL dedicated pool from Synapse Studio.":::
+
+1. Enter a unique pool name, use the default settings, and create the dedicated pool.
+
+1. From the **Data** hub, under **Workspace**, you should see your new Synapse SQL database listed under **Databases**. From your new Synapse SQL database, select **New SQL script**, then **Empty script**.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-server-2022/studio-new-empty-sql-script.png" alt-text="Screenshot of creating a new empty SQL script from Synapse Studio.":::
+
+1. Paste the following script and select **Run** to create the master key for your target Synapse SQL database. You also need to create a schema if your expected schema is not available in the target Synapse SQL database.
+
+ ```sql
+ CREATE MASTER KEY
+ ```
+
+## Create linked service for your source SQL Server 2022
+
+1. Open the **Manage** hub, and navigate to **Linked services**.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-server-2022/studio-linked-service-navigation.png" alt-text="Navigate to linked services from Synapse studio.":::
+
+1. Press **+ New**, select **SQL Server** and select **Continue**.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-server-2022/studio-linked-service-select.png" alt-text="Create a SQL server linked service.":::
+
+1. Enter the **name** of the linked service for SQL Server 2022.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-server-2022/studio-linked-service-new.png" alt-text="Enter server and database names to connect.":::
+
+1. When selecting the integration runtime, choose your **self-hosted integration runtime**. If your Synapse workspace doesn't have a self-hosted integration runtime available, create one.
+
+1. Use the following steps to create a self-hosted integration runtime to connect to your source SQL Server 2022 (optional):
+
+ * Select **+New**.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-server-2022/create-new-integration-runtime.png" alt-text="Creating a new self-hosted integration runtime.":::
+
+ * Select **Self-hosted** and select **continue**.
+
+ * Input the **name** of Self-hosted integration runtime and select **Create**.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-server-2022/input-name-integration-runtime.png" alt-text="Enter a name for the self-hosted integration runtime.":::
+
+    * Now a self-hosted integration runtime is available in your Synapse workspace. Follow the prompts in the UI to **download**, **install**, and use the key to **register** your integration runtime agent on your Windows machine that has direct access to your SQL Server 2022 instance. For more information, see [Create a self-hosted integration runtime - Azure Data Factory & Azure Synapse](../../data-factory/create-self-hosted-integration-runtime.md?context=%2Fazure%2Fsynapse-analytics%2Fcontext%2Fcontext&tabs=synapse-analytics#install-and-register-a-self-hosted-ir-from-microsoft-download-center).
+
+ :::image type="content" source="../media/connect-synapse-link-sql-server-2022/set-up-integration-runtime.png" alt-text="Download, install and register the integration runtime.":::
+
+    * Select **Close**, and then go to the monitoring page to make sure your self-hosted integration runtime is running. Select **Refresh** to get the latest status of the integration runtime.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-server-2022/integration-runtime-status.png" alt-text="Get the status of integration runtime.":::
+
+1. Continue to input the rest of the information for your linked service, including the **SQL Server name**, **Database name**, **Authentication type**, **User name**, and **Password** to connect to your SQL Server 2022.
+
+ > [!NOTE]
+ > We recommend that you enable encryption on this connection. To enable encryption, add the `Encrypt` property with a value of `true` as an Additional connection property, and also set the `Trust Server Certificate` property to either `true` or `false` - depending on your server configuration. For more information, see [Enable encrypted connections to the Database Engine](/sql/database-engine/configure-windows/enable-encrypted-connections-to-the-database-engine).
+
+1. Select **Test Connection** to ensure your self-hosted integration runtime can access your SQL Server instance.
+
+1. Select **Create**, and you'll have your new linked service connecting to SQL Server 2022 available in your workspace.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-server-2022/view-linked-service-connection.png" alt-text="View the linked service connection.":::
+
+ > [!NOTE]
+ > The linked service that you create here is not dedicated to Azure Synapse Link for SQL - it can be used by any workspace user that has the appropriate permissions. Please take time to understand the scope of users who may have access to this linked service and its credentials. For more information on permissions in Azure Synapse workspaces, see [Azure Synapse workspace access control overview - Azure Synapse Analytics](/synapse-analytics/security/synapse-workspace-access-control-overview).
+
+## Create linked service to connect to your landing zone on Azure Data Lake Storage Gen2
+
+1. Go to your created Azure Data Lake Storage Gen2 account, navigate to **Access Control (IAM)**, select **+Add**, and select **Add role assignment**.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-server-2022/adls-gen2-access-control.png" alt-text="Navigate to Access Control (IAM) of the Data Lake Storage Gen2 account.":::
+
+1. Select **Storage Blob Data Contributor** as the role, choose **Managed identity** as the member type, and select your Synapse workspace under **Members**. Adding the role assignment may take a few minutes to take effect.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-server-2022/adls-gen2-assign-blob-data-contributor-role.png" alt-text="Add a role assignment.":::
+
+ > [!NOTE]
+    > Make sure that you have granted your Synapse workspace managed identity permissions to the ADLS Gen2 storage account used as the landing zone. For more information, see how to [Grant permissions to managed identity in Synapse workspace - Azure Synapse Analytics](../security/how-to-grant-workspace-managed-identity-permissions.md#grant-the-managed-identity-permissions-to-adls-gen2-storage-account).
+
+1. Open the **Manage** hub in your Synapse workspace, and navigate to **Linked services**.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-server-2022/studio-linked-service-navigation.png" alt-text="Navigate to the linked service.":::
+
+1. Press **+ New** and select **Azure Data Lake Storage Gen2**.
+
+1. Input the following settings:
+
+    * Enter the **name** of the linked service for your landing zone.
+
+    * For **Authentication method**, select **Managed Identity**.
+
+    * Select the **Storage account name** that you created earlier.
+
+1. Select **Test Connection** to ensure you can access your Azure Data Lake Storage Gen2 account.
+
+1. Select **Create** and you'll have your new linked service connecting to Azure Data Lake Storage Gen2.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-server-2022/storage-gen2-linked-service-created.png" alt-text="New linked service to Azure Data Lake Storage Gen2.":::
+
+ > [!NOTE]
+ > The linked service that you create here is not dedicated to Azure Synapse Link for SQL - it can be used by any workspace user that has the appropriate permissions. Please take time to understand the scope of users who may have access to this linked service and its credentials. For more information on permissions in Azure Synapse workspaces, see [Azure Synapse workspace access control overview - Azure Synapse Analytics](/synapse-analytics/security/synapse-workspace-access-control-overview).
+
+## Create the Azure Synapse Link connection
+
+1. From the Synapse studio, open the **Integrate** hub, and select **+Link connection(Preview)**.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-server-2022/new-link-connection.png" alt-text="New link connection.":::
+
+1. Input your source database:
+
+    * Set the source type to **SQL Server**.
+
+ * Select your source **linked service** to connect to your SQL Server 2022.
+
+ * Select **table names** from your SQL Server to be replicated to your Synapse SQL pool.
+
+ * Select **Continue**.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-server-2022/input-source-database-details-link-connection.png" alt-text="Input source database details.":::
+
+1. Select a target database name from **Synapse SQL Dedicated Pools**.
+
+1. Select **Continue**.
+
+1. Input your link connection settings:
+
+ * Input your **link connection name**.
+
+    * Select your **Core count**. We recommend starting from a small number and increasing as needed.
+
+ * Configure your landing zone. Select your **linked service** connecting to your landing zone.
+
+    * Input your ADLS Gen2 **container name or container/folder name** as the landing zone folder path for staging the data. The container must be created first.
+
+    * Input your ADLS Gen2 shared access signature (SAS) token. The SAS token is required for the SQL change feed to access the landing zone. If your ADLS Gen2 account doesn't have a SAS token, you can create one by selecting **+Generate token**.
+
+ * Select **OK**.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-server-2022/link-connection-compute-settings.png" alt-text="Input the link connection settings.":::
+
+1. With the new Azure Synapse Link connection open, you can update the target table name, distribution type, and structure type.
+
+ > [!NOTE]
+ > * Consider heap table for structure type when your data contains varchar(max), nvarchar(max), and varbinary(max).
+ > * Make sure the schema in your Synapse SQL pool has already been created before you start the link connection. Azure Synapse Link will help you to create tables automatically under your schema in Azure Synapse SQL Pool.
+
+1. Select **Publish all** to save the new link connection to the service.
+
+## Start the Azure Synapse Link connection
+
+1. Select **Start** and wait a few minutes for the data to be replicated.
+
+ > [!NOTE]
+ > When being started, a link connection will start from a full initial load from your source database followed by incremental change feeds via the change feed feature in SQL Server 2022. For more information, see [Azure Synapse Link for SQL change feed](/sql/sql-server/synapse-link/synapse-link-sql-server-change-feed).
+
+## Monitor Azure Synapse Link for SQL Server 2022
+
+You may monitor the status of your Azure Synapse Link connection, see which tables are being initially copied over (Snapshotting), and see which tables are in continuous replication mode (Replicating).
+
+1. Navigate to the **Monitor hub** of your Synapse workspace.
+
+1. Select **Link connections**.
+
+1. Open the link connection you started and view the status of each table.
+
+1. Select **Refresh** on the monitoring view for your connection to observe any updates to the status.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-server-2022/monitor-link-connection.png" alt-text="Monitor the linked connection.":::
+
+## Query replicated data
+
+Wait for a few minutes, then check that the target database has the expected tables and data. You can now explore the replicated tables in your target Synapse dedicated SQL pool destination store.
+
+1. In the **Data** hub, under **Workspace**, open your target database, and within **Tables**, right-click one of your target tables.
+
+1. Choose **New SQL script**, then **Select TOP 100 rows**.
+
+1. Run this query to view the replicated data in your target Synapse dedicated SQL pool.
+
+1. You can also query the target database with SSMS (or other tools). Use the dedicated SQL endpoint for your workspace as the server name. This is typically `<workspacename>.sql.azuresynapse.net`. Add `Database=databasename@poolname` as an extra connection string parameter when connecting via SSMS (or other tools).
+
+## Add/remove table in existing Azure Synapse Link connection
+
+You can add/remove tables on Synapse Studio as follows:
+
+1. Open the **Integrate Hub**.
+
+1. Select the **Link connection** you want to edit and open it.
+
+1. Select **+New** table to add tables on Synapse Studio or select the trash can icon to the right of a table to remove an existing table. You can add or remove tables when the link connection is running.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-server-2022/link-connection-add-remove-tables.png" alt-text="Link connection add table.":::
+
+ > [!NOTE]
+ > You can directly add or remove tables when a link connection is running.
+
+## Stop the Azure Synapse Link connection
+
+You can stop the Azure Synapse Link connection on Synapse Studio as follows:
+
+1. Open the **Integrate Hub** of your Synapse workspace.
+
+1. Select the **Link connection** you want to edit and open it.
+
+1. Select **Stop** to stop the link connection, and it will stop replicating your data.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-server-2022/stop-link-connection.png" alt-text="Link connection stop link.":::
+
+ > [!NOTE]
+ > If you restart a link connection after stopping it, it will start from a full initial load from your source database followed by incremental change feeds.
+
+## Rotate the SAS token for landing zone
+
+A SAS token is required for SQL change feed to get access to the landing zone and push data there. It has an expiration date so you need to rotate the SAS token before the expiration date. Otherwise, Azure Synapse Link will fail to replicate the data from SQL Server to the Synapse dedicated SQL pool.
+
+1. Open the **Integrate Hub** of your Synapse workspace.
+
+1. Select the **Link connection** you want to edit and open it.
+
+1. Select **Rotate token**.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-server-2022/link-connection-locate-rotate-token.png" alt-text="Rotate S A S token.":::
+
+1. Select **Generate automatically** or **Input manually** to get the new SAS token, and then select **OK**.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-server-2022/landing-zone-rotate-sas-token.png" alt-text="Get the new S A S token.":::
++
+## Next steps
+
+If you are using a different type of database, see how to:
+
+* [Configure Azure Synapse Link for Azure Cosmos DB](../../cosmos-db/configure-synapse-link.md?context=/azure/synapse-analytics/context/context)
+* [Configure Azure Synapse Link for Dataverse](/powerapps/maker/data-platform/azure-synapse-link-synapse?context=/azure/synapse-analytics/context/context)
+* [Get started with Azure Synapse Link for Azure SQL Database](connect-synapse-link-sql-database.md)
synapse-analytics How To Copy To Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/how-to-copy-to-sql-pool.md
Title: Copy Synapse Link for Azure Cosmos DB data into a dedicated SQL pool using Apache Spark
+ Title: Copy Azure Synapse Link for Azure Cosmos DB data into a dedicated SQL pool using Apache Spark
description: Load the data into a Spark dataframe, curate the data, and load it into a dedicated SQL pool table
Last updated 08/10/2020 --++ # Copy data from Azure Cosmos DB into a dedicated SQL pool using Apache Spark
-Azure Synapse Link for Azure Cosmos DB enables users to run near real-time analytics over operational data in Azure Cosmos DB. However, there are times when some data needs to be aggregated and enriched to serve data warehouse users. Curating and exporting Synapse Link data can be done with just a few cells in a notebook.
+Azure Synapse Link for Azure Cosmos DB enables users to run near real-time analytics over operational data in Azure Cosmos DB. However, there are times when some data needs to be aggregated and enriched to serve data warehouse users. Curating and exporting Azure Synapse Link data can be done with just a few cells in a notebook.
## Prerequisites

* [Provision a Synapse workspace](../quickstart-create-workspace.md) with:
In that example, we use an HTAP container called **RetailSales**. It's part of a
* weekStarting: long (nullable = true)
* _etag: string (nullable = true)
-We'll aggregate the sales (*quantity*, *revenue* (price x quantity) by *productCode* and *weekStarting* for reporting purposes. Finally, we'll export that data into a dedicated SQL pool table called **dbo.productsales**.
+We'll aggregate the sales (*quantity*, and *revenue*, which is price x quantity) by *productCode* and *weekStarting* for reporting purposes. Finally, we'll export that data into a dedicated SQL pool table called `dbo.productsales`.
## Configure a Spark Notebook

Create a Spark notebook with Spark (Scala) as the main language. We use the notebook's default settings for the session.
synapse-analytics Sql Database Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/sql-database-synapse-link.md
+
+ Title: Azure Synapse Link for Azure SQL Database (Preview)
+description: Learn about Azure Synapse Link for Azure SQL Database, the link connection, and monitoring the Synapse Link.
++++++ Last updated : 05/09/2022++++
+# Azure Synapse Link for Azure SQL Database (Preview)
+
+This article helps you to understand the functions of Azure Synapse Link for Azure SQL Database. You can use the Azure Synapse Link for SQL functionality to replicate your operational data into an Azure Synapse Analytics dedicated SQL pool from Azure SQL Database.
+
+> [!IMPORTANT]
+> Azure Synapse Link for SQL is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Link connection
+
+A link connection identifies a mapping relationship between an Azure SQL database and an Azure Synapse Analytics dedicated SQL pool. You can create, manage, monitor and delete link connections in your Synapse workspace. When creating a link connection, you can select both source database and a destination Synapse dedicated SQL pool so that the operational data from your source database will be automatically replicated to the specified destination Synapse dedicated SQL pool. You can also add or remove one or more tables from your source database to be replicated.
+
+You can start or stop a link connection. When started, a link connection will start from a full initial load from your source database followed by incremental change feeds via the change feed feature in Azure SQL database. When you stop a link connection, the updates made to the operational data won't be synchronized to your Synapse dedicated SQL pool. For more information, see [Azure Synapse Link change feed for SQL Server 2022 and Azure SQL Database](/sql/sql-server/synapse-link/synapse-link-sql-server-change-feed).
+
+You need to select a compute core count for each link connection to replicate your data. The core count represents the compute power, and it impacts your data replication latency and cost.
+
+## Monitoring
+
+You can monitor Azure Synapse Link for SQL at the link and table levels. For each link connection, you'll see the following statuses:
+
+* **Initial:** a link connection is created but not started. You won't be charged in the initial state.
+* **Starting:** a link connection is setting up compute engines to replicate data.
+* **Running:** a link connection is replicating data.
+* **Stopping:** a link connection is shutting down the compute engines.
+* **Stopped:** a link connection is stopped. You won't be charged in the stopped state.
+
+For each table, you'll see the following statuses:
+
+* **Snapshotting:** a source table is initially loaded to the destination with a full snapshot.
+* **Replicating:** any updates on the source table are replicated to the destination.
+* **Failed:** the data in the source table can't be replicated to the destination due to a fatal error. If you want to retry after fixing the error, remove the table from the link connection and add it back.
+* **Suspended:** replication is suspended for this table due to an error. It will resume after the error is resolved.
+
+## Transactional consistency across tables
+
+You can enable transactional consistency across tables for each link connection. However, it limits overall replication throughput.
+
+## <a name="known-issues"></a>Known limitations
+
+A consolidated list of known limitations and issues can be found at [Known limitations and issues with Azure Synapse Link for SQL (Preview)](synapse-link-for-sql-known-issues.md).
+
+## Next steps
+
+* To learn more, see how to [Configure Synapse Link for Azure SQL Database (Preview)](connect-synapse-link-sql-database.md).
synapse-analytics Sql Server 2022 Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/sql-server-2022-synapse-link.md
+
+ Title: Azure Synapse Link for SQL Server 2022 (Preview)
+description: Learn about Azure Synapse Link for SQL Server 2022, the link connection, landing zone, Self-hosted integration runtime, and monitoring the Azure Synapse Link for SQL.
++++++ Last updated : 05/09/2022++++
+# Azure Synapse Link for SQL Server 2022 (Preview)
+
+This article helps you to understand the functions of Azure Synapse Link for SQL Server 2022. You can use the Azure Synapse Link for SQL functionality to replicate your operational data into an Azure Synapse Analytics dedicated SQL pool from SQL Server 2022.
+
+> [!IMPORTANT]
+> Azure Synapse Link for SQL is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Link connection
+
+A link connection identifies a mapping relationship between a SQL Server 2022 database and an Azure Synapse Analytics dedicated SQL pool. You can create, manage, monitor, and delete link connections in your Synapse workspace. When creating a link connection, you can select both a source database and a destination Synapse dedicated SQL pool so that the operational data from your source database will be automatically replicated to the specified destination Synapse dedicated SQL pool. You can also add or remove one or more tables from your source database to be replicated.
+
+You can start or stop a link connection. When started, a link connection performs a full initial load from your source database, followed by incremental change feeds via the change feed feature in SQL Server 2022. When you stop a link connection, the updates made to the operational data won't be synchronized to your Synapse dedicated SQL pool. For more information, see [Azure Synapse Link change feed for SQL Server 2022 and Azure SQL Database](/sql/sql-server/synapse-link/synapse-link-sql-server-change-feed).
+
+You need to select compute core counts for each link connection to replicate your data. The core counts represent the compute power, and they affect your data replication latency and cost.
+
+## Landing zone
+
+The landing zone is an interim staging store that's required for Azure Synapse Link for SQL Server 2022. First, the operational data is loaded from SQL Server 2022 to the landing zone. Next, the data is copied from the landing zone to the Synapse dedicated SQL pool. You need to provide your own Azure Data Lake Storage Gen2 account to be used as the landing zone. Using this landing zone for anything other than Azure Synapse Link for SQL isn't supported.
+
+The shared access signature (SAS) token from your Azure Data Lake Storage Gen2 account is required for a link connection to access the landing zone. The SAS token has an expiration date, so make sure to rotate it before it expires to keep it valid. Otherwise, Azure Synapse Link for SQL will fail to replicate the data from SQL Server 2022.
+
+## Self-hosted integration runtime
+
+The self-hosted integration runtime is a software agent that you can download and install on an on-premises machine or a virtual machine. It's required for Azure Synapse Link for SQL Server 2022 to access data on an on-premises SQL Server 2022 instance that's behind a firewall. Currently, the self-hosted IR is only supported on the Windows operating system. For more information, see [Create a self-hosted integration runtime](../../data-factory/create-self-hosted-integration-runtime.md?tabs=synapse-analytics).
+
+## Monitoring
+
+You can monitor Azure Synapse Link for SQL at the link and table levels. For each link connection, you'll see the following statuses:
+
+* **Initial:** a link connection is created but not started. You won't be charged in the initial state.
+* **Starting:** a link connection is setting up compute engines to replicate data.
+* **Running:** a link connection is replicating data.
+* **Stopping:** a link connection is shutting down the compute engines.
+* **Stopped:** a link connection is stopped. You won't be charged in the stopped state.
+
+For each table, you'll see the following statuses:
+
+* **Snapshotting:** a source table is initially loaded to the destination with a full snapshot.
+* **Replicating:** any updates on the source table are replicated to the destination.
+* **Failed:** the data in the source table can't be replicated to the destination. If you want to retry after fixing the error, remove the table from the link connection and add it back.
+* **Suspended:** replication is suspended for this table due to an error. It will resume after the error is resolved.
+
+For more information, see [Manage Synapse Link for SQL change feed](/sql/sql-server/synapse-link/synapse-link-sql-server-change-feed-manage).
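+
+In addition to the statuses shown in Synapse Studio, you can inspect change feed activity directly on the source database with T-SQL. The following is a minimal, hedged sketch; the object names are assumptions based on the change feed management article linked above, so confirm them there before relying on them:
+
+```sql
+-- Review change feed configuration and table state on the source database (assumed procedure name).
+EXEC sys.sp_help_change_feed;
+
+-- Check recent log scan sessions to confirm changes are being harvested (assumed DMV name).
+SELECT TOP (10) * FROM sys.dm_change_feed_log_scan_sessions;
+
+-- Surface any errors the change feed has reported (assumed DMV name).
+SELECT TOP (10) * FROM sys.dm_change_feed_errors;
+```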
+
+## Transactional consistency across tables
+
+You can enable transactional consistency across tables for each link connection. However, it limits overall replication throughput.
+
+## Known limitations
+
+A consolidated list of known limitations and issues can be found at [Known limitations and issues with Azure Synapse Link for SQL (Preview)](synapse-link-for-sql-known-issues.md).
+
+## Next steps
+
+* To learn more, see how to [Configure Synapse Link for SQL Server 2022 (Preview)](connect-synapse-link-sql-server-2022.md).
synapse-analytics Sql Synapse Link Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/sql-synapse-link-overview.md
+
+ Title: What is Azure Synapse Link for SQL? (Preview)
+description: Learn about Azure Synapse Link for SQL, the benefits it offers, and pricing.
++++++ Last updated : 04/18/2022++++
+# What is Azure Synapse Link for SQL? (Preview)
+
+Azure Synapse Link for SQL enables near real-time analytics over operational data in Azure SQL Database or SQL Server 2022. With seamless integration between operational stores (Azure SQL Database and SQL Server 2022) and Azure Synapse Analytics, Azure Synapse Link for SQL uses a new change feed technology to let you run analytics, business intelligence, and machine learning scenarios on your operational data with minimal impact on source databases.
+
+> [!IMPORTANT]
+> Azure Synapse Link for Azure SQL is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+The following image shows the Azure Synapse Link integration with Azure SQL DB, SQL Server 2022, and Azure Synapse Analytics:
++
+## Benefit
+
+Azure Synapse Link for SQL provides a fully managed, turnkey experience for landing operational data in Azure Synapse Analytics dedicated SQL pools. It does this by continuously replicating the data from Azure SQL Database or SQL Server 2022 with full consistency. By using Azure Synapse Link for SQL, you can get the following benefits:
+
+* **Minimum impact on operational workload**
+With the new change feed technology in Azure SQL Database and SQL Server 2022, Azure Synapse Link for SQL can automatically extract incremental changes from Azure SQL Database or SQL Server 2022. It then replicates to Azure Synapse Analytics dedicated SQL pool with minimal impact on the operational workload.
+
+* **Reduced complexity with no ETL jobs to manage**
+After a few clicks, including selecting your operational database and tables, updates made to the operational data in Azure SQL Database or SQL Server 2022 are visible in the Azure Synapse Analytics dedicated SQL pool. They're available in near real-time with no ETL or data integration logic. You can focus on analytical and reporting logic against operational data via all the capabilities within Azure Synapse Analytics.
+
+* **Near real-time insights into your operational data**
+You can now get rich insights by analyzing operational data in Azure SQL Database or SQL Server 2022 in near real-time via Azure Synapse Link for SQL, enabling new business scenarios such as operational BI reporting, real-time scoring and personalization, and supply chain forecasting.
+
+## Next steps
+
+* [Azure Synapse Link for Azure SQL Database (Preview)](sql-database-synapse-link.md).
+* [Azure Synapse Link for SQL Server 2022 (Preview)](sql-server-2022-synapse-link.md).
+* How to [Configure Azure Synapse Link for SQL Server 2022 (Preview)](connect-synapse-link-sql-server-2022.md).
+* How to [Configure Azure Synapse Link for Azure SQL Database (Preview)](connect-synapse-link-sql-database.md).
synapse-analytics Synapse Link For Sql Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/synapse-link-for-sql-known-issues.md
+
+ Title: Known limitations and issues with Azure Synapse Link for SQL (Preview)
+description: Learn about known limitations and issues with Azure Synapse Link for SQL (Preview).
+++++ Last updated : 05/24/2022++++
+# Known limitations and issues with Azure Synapse Link for SQL
+
+This article lists the known limitations and issues with Azure Synapse Link for SQL.
+
+## Known limitations
+
+This is the list of known limitations for Azure Synapse Link for SQL.
+
+### Azure SQL DB and SQL Server 2022
+* Users must use an Azure Synapse Analytics workspace created on or after May 24, 2022, to get access to Azure Synapse Link for SQL functionality.
+* Running Azure Synapse Analytics in a managed virtual network isn't supported. Users need to check "Disable Managed virtual network" and "Allow connections from all IP addresses" when creating their workspace.
+* If you are using a schema other than `dbo`, that schema must be manually created in the target dedicated SQL pool before it can be used (see the example after this list).
+* Source tables must have primary keys.
+* The following data types aren't supported for primary keys in the source tables:
+ * real
+ * float
+ * hierarchyid
+ * sql_variant
+ * timestamp
+* Source table row size can't exceed 7,500 bytes. For tables where variable-length columns are stored off-row, a 24-byte pointer is stored in the main record.
+* Tables enabled for Azure Synapse Link for SQL can have a maximum of 1,020 columns (not 1,024).
+* While a database can have multiple links enabled, a given table can't belong to multiple links.
+* When a database owner doesn't have a mapped login, Azure Synapse Link for SQL will run into an error when enabling a link connection. To fix this issue, set the database owner to a valid user with the `ALTER AUTHORIZATION` command (see the example after this list).
+* If the source table contains computed columns or columns with data types that aren't supported by Azure Synapse Analytics dedicated SQL pools, these columns won't be replicated to Azure Synapse Analytics. Unsupported columns include:
+ * image
+ * text
+ * xml
+ * timestamp
+ * sql_variant
+ * UDT
+ * geometry
+ * geography
+* A maximum of 5,000 tables can be added to a single link connection.
+* When a source column is of type datetime2(7) or time(7), the last digit will be truncated when data is replicated to Azure Synapse Analytics.
+* The following table DDL operations aren't allowed on source tables when they are enabled for Azure Synapse Link for SQL. All other DDL operations are allowed, but they won't be replicated to Azure Synapse Analytics.
+ * Switch Partition
+ * Add/Drop/Alter Column
+ * Alter Primary Key
+ * Drop/Truncate Table
+ * Rename Table
+* If DDL + DML is executed in an explicit transaction (between `BEGIN TRANSACTION` and `COMMIT TRANSACTION` statements), replication for the corresponding tables will fail within the link connection.
+ > [!NOTE]
+ > If a table is critical for transactional consistency at the link connection level, please review the state of the Azure Synapse Link table in the Monitoring tab.
+* Azure Synapse Link for SQL can't be enabled if any of the following features are in use for the source table:
+ * Change Data Capture
+ * Temporal history table
+ * Always encrypted
+ * In-Memory OLTP
+ * Column Store Index
+ * Graph
+* System tables can't be replicated.
+* The security configuration from the source database will **NOT** be reflected in the target dedicated SQL pool.
+* Enabling Azure Synapse Link for SQL will create a new schema called `changefeed`. Don't use this schema, as it is reserved for system use.
+* Source tables with non-default collations (for example, UTF-8 or Japanese collations) can't be replicated to Synapse. See the [supported collations in Synapse SQL Pool](../sql/reference-collation-types.md).
+* Single row updates (including off-page storage) of > 370MB are not supported.
+* Single transactions of > 500MB could cause data ingestion to the Azure Synapse Analytics dedicated SQL pool to fail.
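+
+For the schema limitation noted above, a minimal sketch of pre-creating a non-`dbo` schema in the target dedicated SQL pool might look like this (the schema name `sales` is only an illustrative placeholder):
+
+```sql
+-- Run against the target Azure Synapse Analytics dedicated SQL pool
+-- before adding tables from this schema to a link connection.
+CREATE SCHEMA [sales];
+```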
+
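+For the database owner limitation noted above, a minimal sketch of mapping the owner to a valid login with `ALTER AUTHORIZATION` might look like this (the database and login names are illustrative placeholders):
+
+```sql
+-- Run on the source server: set the database owner to a valid login
+-- before enabling the link connection.
+ALTER AUTHORIZATION ON DATABASE::[ContosoSales] TO [ContosoAdmin];
+```
+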
+### Azure SQL DB only
+* Azure Synapse Link for SQL isn't supported on Free, Basic or Standard tier with fewer than 100 DTUs.
+* Azure Synapse Link for SQL isn't supported on SQL Managed Instances.
+* Users need to check "Allow Azure services and resources to access this server" in the firewall settings of their source database server.
+* Service principals and user-assigned managed identities aren't supported for authenticating to the source Azure SQL database. When creating the Azure SQL Database linked service, choose SQL authentication or system-assigned managed identity (SAMI).
+* Azure Synapse Link can't be enabled on the secondary database once a GeoDR failover has happened if the secondary database has a different name from the primary database.
+* If you enabled Azure Synapse Link for SQL on your database as a Microsoft Azure Active Directory (Azure AD) user, point-in-time restore (PITR) will fail. PITR will only work when you enable Azure Synapse Link for SQL on your database as a SQL user.
+* If you create a database as an Azure AD user and enable Azure Synapse Link for SQL, a SQL authentication user (even one in the sysadmin role) won't be able to disable or make changes to Azure Synapse Link for SQL artifacts. However, another Azure AD user will be able to enable or disable Azure Synapse Link for SQL on the same database. Similarly, if you create a database as a SQL authentication user, enabling or disabling Azure Synapse Link for SQL as an Azure AD user won't work.
+* When enabling Azure Synapse Link for SQL on your Azure SQL Database, you should ensure that aggressive log truncation is disabled.
+
+### SQL Server 2022 only
+* When creating the SQL Server linked service, choose SQL authentication, Windows authentication, or Azure AD authentication.
+* Azure Synapse Link for SQL works with SQL Server on Linux, but high availability (HA) scenarios with Linux Pacemaker aren't supported. The self-hosted IR can't be installed in a Linux environment.
+* Azure Synapse Link for SQL can't be enabled on databases that are transactional replication publishers or distributors.
+* If the SAS key of the landing zone expires and gets rotated during the snapshot process, the new key won't get picked up. The snapshot will fail and restart automatically with the new key.
+* Prior to breaking an Availability Group, disable any running links. Otherwise both databases will attempt to write their changes to the landing zone.
+* When using asynchronous replicas, transactions need to be written to all replicas prior to them being published to Azure Synapse Link for SQL.
+* Azure Synapse Link for SQL isn't supported on databases with database mirroring enabled.
+* Restoring an Azure Synapse Link for SQL-enabled database from on-premises to Azure SQL Managed Instance isn't supported.
+* Azure Synapse Link for SQL is not supported on databases that are using Managed Instance Link.
+
+## Known issues
+### Deleting an Azure Synapse Analytics workspace with a running link could cause the log in the source database to fill
+* Applies To - Azure SQL Database and SQL Server 2022
+* Issue - When you delete an Azure Synapse Analytics workspace, it's possible that running links might not be stopped. The source database then assumes the link is still operational, which can lead to the transaction log filling up because it isn't truncated.
+* Resolution - There are two possible resolutions to this situation:
+1. Stop any running links prior to deleting the Azure Synapse Analytics workspace.
+1. Manually clean up the link definition in the source database.
+ 1. Find the table_group_id for the link(s) that need to be stopped using the following query:
+ ```sql
+ SELECT table_group_id, workspace_id, synapse_workgroup_name
+ FROM [changefeed].[change_feed_table_groups]
+ WHERE synapse_workgroup_name = <synapse workspace name>
+ ```
+ 1. Drop each link identified using the following procedure:
+ ```sql
+ EXEC sys.sp_change_feed_drop_table_group @table_group_id = <table_group_id>
+ ```
+ 1. Optionally, if you are disabling all of the table groups for a given database, you can also disable change feed on the database with the following command:
+ ```sql
+    EXEC sys.sp_change_feed_disable_db
+    ```
+
+### User may receive error indicating invalid primary key column data type even when primary key is of a supported type
+* Applies To - Azure SQL Database
+* Issue - If your source database contains a table with a primary key that is an unsupported data type (real, float, hierarchyid, sql_variant, or timestamp), it could cause a table with a supported primary key data type to not be enabled for Azure Synapse Link for SQL.
+* Resolution - Change the data type of all primary key columns to a supported data type. A query sketch for finding affected primary key columns follows this list.
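+
+As a minimal sketch (not part of the official guidance), the following catalog-view query can help locate primary key columns that use the unsupported data types listed above. Run it against the source database:
+
+```sql
+-- List primary key columns whose data types aren't supported by Azure Synapse Link for SQL.
+SELECT s.name AS schema_name, t.name AS table_name, c.name AS column_name, ty.name AS data_type
+FROM sys.tables AS t
+JOIN sys.schemas AS s ON s.schema_id = t.schema_id
+JOIN sys.indexes AS i ON i.object_id = t.object_id AND i.is_primary_key = 1
+JOIN sys.index_columns AS ic ON ic.object_id = i.object_id AND ic.index_id = i.index_id
+JOIN sys.columns AS c ON c.object_id = ic.object_id AND c.column_id = ic.column_id
+JOIN sys.types AS ty ON ty.user_type_id = c.user_type_id
+WHERE ty.name IN (N'real', N'float', N'hierarchyid', N'sql_variant', N'timestamp');
+```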
+
+## Next steps
+
+If you are using a different type of database, see how to:
+
+* [Configure Synapse Link for Azure Cosmos DB](../../cosmos-db/configure-synapse-link.md?context=/azure/synapse-analytics/context/context)
+* [Configure Synapse Link for Dataverse](/powerapps/maker/data-platform/azure-synapse-link-synapse?context=/azure/synapse-analytics/context/context)
synapse-analytics Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/whats-new-archive.md
Previously updated : 04/15/2022 Last updated : 05/20/2022 # Previous monthly updates in Azure Synapse Analytics This article describes previous month updates to Azure Synapse Analytics. For the most current month's release, check out [Azure Synapse Analytics latest updates](whats-new.md). Each update links to the Azure Synapse Analytics blog and an article that provides more information.
+## Mar 2022 update
+
+The following updates are new to Azure Synapse Analytics this month.
+
+### Developer Experience
+
+* Code cells in Synapse notebooks that result in an exception will now show standard output along with the exception message. This feature is supported for the Python and Scala languages. To learn more, see the [example output when a code statement fails](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_1).
+
+* Synapse notebooks now support partial output when running code cells. To learn more, see the [examples at this blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_1)
+
+* You can now dynamically control Spark session configuration for the notebook activity with pipeline parameters. To learn more, see the [variable explorer feature of Synapse notebooks.](./spark/apache-spark-development-using-notebooks.md?tabs=classical#parameterized-session-configuration-from-pipeline)
+
+* You can now reuse and manage notebook sessions without having to start a new one. You can easily connect a selected notebook to an active session in the list started from another notebook. You can detach a session from a notebook, stop the session, and monitor it. To learn more, see [how to manage your active notebook sessions.](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_3)
+
+* Synapse notebooks now capture anything written through the Python logging module, in addition to the driver logs. To learn more, see [support for Python logging.](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_4)
+
+### SQL
+
+* Column Level Encryption for Azure Synapse dedicated SQL Pools is now Generally Available. With column level encryption, you can use different protection keys for each column with each key having its own access permissions. The data in CLE-enforced columns are encrypted on disk and remain encrypted in memory until the DECRYPTBYKEY function is used to decrypt it. To learn more, see [how to encrypt a data column](/sql/relational-databases/security/encryption/encrypt-a-column-of-data?view=azure-sqldw-latest&preserve-view=true).
+
+* Serverless SQL pools now support better performance for CETAS (Create External Table as Select) and subsequent SELECT queries. The performance improvements include a parallel execution plan, resulting in faster CETAS execution and outputting multiple files. To learn more, see the [CETAS with Synapse SQL](./sql/develop-tables-cetas.md) article and the [blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_7)
+
+### Apache Spark for Synapse
+
+* Synapse Spark Common Data Model (CDM) Connector is now Generally Available. The CDM format reader/writer enables a Spark program to read and write CDM entities in a CDM folder via Spark dataframes. To learn more, see [how the CDM connector supports reading, writing data, examples, & known issues](./spark/data-sources/apache-spark-cdm-connector.md).
+
+* Synapse Spark Dedicated SQL Pool (DW) Connector now supports improved performance. The new architecture eliminates redundant data movement and uses COPY-INTO instead of PolyBase. You can authenticate through SQL basic authentication or opt into the Azure Active Directory/Azure AD based authentication method. It now has ~5x improvements over the previous version. To learn more, see [Azure Synapse Dedicated SQL Pool Connector for Apache Spark](./spark/synapse-spark-sql-pool-import-export.md)
+
+* Synapse Spark Dedicated SQL Pool (DW) Connector now supports all Spark Dataframe SaveMode choices. It supports Append, Overwrite, ErrorIfExists, and Ignore modes. The Append and Overwrite are critical for managing data ingestion at scale. To learn more, see [DataFrame write SaveMode support](./spark/synapse-spark-sql-pool-import-export.md#supported-dataframe-save-modes)
+
+* Accelerate Spark execution speed using the new Intelligent Cache feature. This feature is currently in public preview. Intelligent Cache automatically stores each read within the allocated cache storage space, detecting underlying file changes and refreshing the files to provide the most recent data. To learn more, see how to [Enable/Disable the cache for your Apache Spark pool](./spark/apache-spark-intelligent-cache-concept.md) or see the [blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_12)
+
+### Security
+
+* Azure Synapse Analytics now supports Azure Active Directory (Azure AD) authentication. You can turn on Azure AD authentication during the workspace creation or after the workspace is created. To learn more, see [how to use Azure AD authentication with Synapse SQL](./sql/active-directory-authentication.md).
+
+* API support to raise or lower minimal TLS version for workspace managed SQL Server Dedicated SQL. To learn more, see [how to update the minimum TLS setting](/rest/api/synapse/sqlserver/workspace-managed-sql-server-dedicated-sql-minimal-tls-settings/update) or read the [blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_15) for more details.
+
+### Data Integration
+
+* Flowlets and CDC Connectors are now Generally Available. Flowlets in Synapse Data Flows allow for reusable and composable ETL logic. To learn more, see [Flowlets in mapping data flow](../data-factory/concepts-data-flow-flowlet.md) or see the [blog post.](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_17)
+
+* sFTP connector for Synapse data flows. You can read and write data while transforming data from sftp using the visual low-code data flows interface in Synapse. To learn more, see [source transformation](../data-factory/connector-sftp.md#source-transformation)
+
+* Data flow improvements to Data Preview. To learn more, see [Data Preview and debug improvements in Mapping Data Flows](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/data-preview-and-debug-improvements-in-mapping-data-flows/ba-p/3268254?wt.mc_id=azsynapseblog_mar2022_blog_azureeng)
+
+* Pipeline script activity. The Script Activity enables data engineers to build powerful data integration pipelines that can read from and write to Synapse databases, and other database types. To learn more, see [Transform data by using the Script activity in Azure Data Factory or Synapse Analytics](../data-factory/transform-data-using-script.md)
+ ## Feb 2022 update The following updates are new to Azure Synapse Analytics this month.
The following updates are new to Azure Synapse Analytics this month.
## Next steps
-[Get started with Azure Synapse Analytics](get-started.md)
+[Get started with Azure Synapse Analytics](get-started.md)
synapse-analytics Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/whats-new.md
This article lists updates to Azure Synapse Analytics that are published in Mar
The following updates are new to Azure Synapse Analytics this month.
-## Developer Experience
-
-* Code cells in Synapse notebooks that result in exception will now show standard output along with the exception message. This feature is supported for Python and Scala languages. To learn more, see the [example output when a code statement fails](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_1).
+## SQL
-* Synapse notebooks now support partial output when running code cells. To learn more, see the [examples at this blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_1)
+* Cross-subscription restore for Azure Synapse SQL is now generally available. Previously, it took many undocumented steps to restore a dedicated SQL pool to another subscription. Now, with the PowerShell Az.Sql module 3.8 update, the Restore-AzSqlDatabase cmdlet can be used for cross-subscription restore. To learn more, see [Restore a dedicated SQL pool (formerly SQL DW) to a different subscription](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-april-update-2022/ba-p/3280185).
-* You can now dynamically control Spark session configuration for the notebook activity with pipeline parameters. To learn more, see the [variable explorer feature of Synapse notebooks.](./spark/apache-spark-development-using-notebooks.md?tabs=classical#parameterized-session-configuration-from-pipeline)
+* It is now possible to recover a SQL pool from a dropped server or workspace. With the PowerShell Restore cmdlets in Az.Sql and Az.Synapse modules, you can now restore from a deleted server or workspace without filing a support ticket. For more information, read [Synapse workspace SQL pools](./backuprestore/restore-sql-pool-from-deleted-workspace.md) or [standalone SQL pools (formerly SQL DW)](./sql-data-warehouse/sql-data-warehouse-restore-from-deleted-server.md), depending on your scenario.
-* You can now reuse and manage notebook sessions without having to start a new one. You can easily connect a selected notebook to an active session in the list started from another notebook. You can detach a session from a notebook, stop the session, and monitor it. To learn more, see [how to manage your active notebook sessions.](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_3)
+## Synapse database templates and database designer
-* Synapse notebooks now capture anything written through the Python logging module, in addition to the driver logs. To learn more, see [support for Python logging.](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_4)
+* Based on popular customer feedback, we've made significant improvements to our exploration experience when creating a lake database using an industry template. To learn more, read [Quickstart: Create a new Lake database leveraging database templates](./database-designer/quick-start-create-lake-database.md).
-## SQL
+* We've added the option to clone a lake database. This unlocks additional opportunities to manage new versions of databases or support schemas that evolve in discrete steps. You can quickly clone a database using the action menu available on the lake database. To learn more, read [How-to: Clone a lake database](./database-designer/clone-lake-database.md).
-* Column Level Encryption for Azure Synapse dedicated SQL Pools is now Generally Available. With column level encryption, you can use different protection keys for each column with each key having its own access permissions. The data in CLE-enforced columns are encrypted on disk and remain encrypted in memory until the DECRYPTBYKEY function is used to decrypt it. To learn more, see [how to encrypt a data column](/sql/relational-databases/security/encryption/encrypt-a-column-of-data?view=azure-sqldw-latest&preserve-view=true).
-
-* Serverless SQL pools now support better performance for CETAS (Create External Table as Select) and subsequent SELECT queries. The performance improvements include, a parallel execution plan resulting in faster CETAS execution and outputting multiple files. To learn more, see [CETAS with Synapse SQL](./sql/develop-tables-cetas.md) article and the [blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_7)
+* You can now use wildcards to specify custom folder hierarchies. Lake databases sit on top of data that is in the lake and this data can live in nested folders that don't fit into clean partition patterns. Previously, querying lake databases required that your data exists in a simple directory structure that you could browse using the folder icon without the ability to manually specify directory structure or use wildcard characters. To learn more, read [How-to: Modify a datalake](./database-designer/modify-lake-database.md).
## Apache Spark for Synapse
-* Synapse Spark Common Data Model (CDM) Connector is now Generally Available. The CDM format reader/writer enables a Spark program to read and write CDM entities in a CDM folder via Spark dataframes. To learn more, see [how the CDM connector supports reading, writing data, examples, & known issues](./spark/data-sources/apache-spark-cdm-connector.md).
-
-* Synapse Spark Dedicated SQL Pool (DW) Connector now supports improved performance. The new architecture eliminates redundant data movement and uses COPY-INTO instead of PolyBase. You can authenticate through SQL basic authentication or opt into the Azure Active Directory/Azure AD based authentication method. It now has ~5x improvements over the previous version. To learn more, see [Azure Synapse Dedicated SQL Pool Connector for Apache Spark](./spark/synapse-spark-sql-pool-import-export.md)
+* We are excited to announce the preview availability of Apache Spark™ 3.2 on Synapse Analytics. This new version incorporates user-requested enhancements and resolves 1,700+ Jira tickets. Please review the [official release notes](https://spark.apache.org/releases/spark-release-3-2-0.html) for the complete list of fixes and features and review the [migration guidelines between Spark 3.1 and 3.2](https://spark.apache.org/docs/latest/sql-migration-guide.html#upgrading-from-spark-sql-31-to-32) to assess potential changes to your applications. For more details, read [Apache Spark version support and Azure Synapse Runtime for Apache Spark 3.2](./spark/apache-spark-version-support.md).
-* Synapse Spark Dedicated SQL Pool (DW) Connector now supports all Spark Dataframe SaveMode choices. It supports Append, Overwrite, ErrorIfExists, and Ignore modes. The Append and Overwrite are critical for managing data ingestion at scale. To learn more, see [DataFrame write SaveMode support](./spark/synapse-spark-sql-pool-import-export.md#supported-dataframe-save-modes)
+* Assigning parameters dynamically based on variables, metadata, or specifying Pipeline specific parameters has been one of your top feature requests. Now, with the release of parameterization for the Spark job definition activity, you can do just that. For more details, read [Transform data using Apache Spark job definition](quickstart-transform-data-using-spark-job-definition.md#settings-tab).
-* Accelerate Spark execution speed using the new Intelligent Cache feature. This feature is currently in public preview. Intelligent Cache automatically stores each read within the allocated cache storage space, detecting underlying file changes and refreshing the files to provide the most recent data. To learn more, see how to [Enable/Disable the cache for your Apache Spark pool](./spark/apache-spark-intelligent-cache-concept.md) or see the [blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_12)
+* We often receive customer requests to access the snapshot of the Notebook when there is a Pipeline Notebook run failure or there is a long-running Notebook job. With the release of the Synapse Notebook snapshot feature, you can now view the snapshot of the Notebook activity run with the original Notebook code, the cell output, and the input parameters. You can also access the snapshot of the referenced Notebook from the referencing Notebook cell output if you refer to other Notebooks through Spark utils. To learn more, read [Transform data by running a Synapse notebook](synapse-notebook-activity.md?tabs=classical#see-notebook-activity-run-history) and [Introduction to Microsoft Spark utilities](/spark/microsoft-spark-utilities.md?pivots=programming-language-scala#reference-a-notebook-1).
## Security
-* Azure Synapse Analytics now supports Azure Active Directory (Azure AD) authentication. You can turn on Azure AD authentication during the workspace creation or after the workspace is created. To learn more, see [how to use Azure AD authentication with Synapse SQL](./sql/active-directory-authentication.md).
-
-* API support to raise or lower minimal TLS version for workspace managed SQL Server Dedicated SQL. To learn more, see [how to update the minimum TLS setting](/rest/api/synapse/sqlserver/workspace-managed-sql-server-dedicated-sql-minimal-tls-settings/update) or read the [blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_15) for more details.
-
-## Data Integration
-
-* Flowlets and CDC Connectors are now Generally Available. Flowlets in Synapse Data Flows allow for reusable and composable ETL logic. To learn more, see [Flowlets in mapping data flow](../data-factory/concepts-data-flow-flowlet.md) or see the [blog post.](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_17)
-
-* sFTP connector for Synapse data flows. You can read and write data while transforming data from sftp using the visual low-code data flows interface in Synapse. To learn more, see [source transformation](../data-factory/connector-sftp.md#source-transformation)
+* The Synapse Monitoring Operator RBAC role is now generally available. Since the GA of Synapse, customers have asked for a fine-grained RBAC (role-based access control) role that allows a user persona to monitor the execution of Synapse Pipelines and Spark applications without having the ability to run or cancel the execution of these applications. Now, customers can assign the Synapse Monitoring Operator role to such monitoring personas. This allows organizations to stay compliant while having flexibility in the delegation of tasks to individuals or teams. Learn more by reading [Synapse RBAC Roles](security/synapse-workspace-synapse-rbac-roles.md).
+## Data integration
-* Data flow improvements to Data Preview. To learn more, see [Data Preview and debug improvements in Mapping Data Flows](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/data-preview-and-debug-improvements-in-mapping-data-flows/ba-p/3268254?wt.mc_id=azsynapseblog_mar2022_blog_azureeng)
+* Microsoft has added Dataverse as a source and sink connector to Synapse Data Flows so that you can now build low-code data transformation ETL jobs in Synapse directly accessing your Dataverse environment. For more details on how to use this new connector, read [Mapping data flow properties](../data-factory/connector-dynamics-crm-office-365.md#mapping-data-flow-properties).
-* Pipeline script activity. The Script Activity enables data engineers to build powerful data integration pipelines that can read from and write to Synapse databases, and other database types. To learn more, see [Transform data by using the Script activity in Azure Data Factory or Synapse Analytics](../data-factory/transform-data-using-script.md)
+* We heard from you that a 1-minute timeout for the Web activity was not long enough, especially in cases of synchronous APIs. Now, with the response timeout property 'httpRequestTimeout', you can define a timeout of up to 10 minutes for the HTTP request. Learn more by reading [Web activity response timeout improvements](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/web-activity-response-timeout-improvement/ba-p/3260307).
+
+## Developer experience
+* Previously, if you wanted to reference a notebook in another notebook, you could only reference published or committed content. Now, when using %run notebooks, you can enable 'unpublished notebook reference' which will allow you to reference unpublished notebooks. When enabled, notebook run will fetch the current contents in the notebook web cache, meaning the changes in your notebook editor can be referenced immediately by other notebooks without having to be published (Live mode) or committed (Git mode). To learn more, read [Reference unpublished notebook](spark/apache-spark-development-using-notebooks.md#reference-unpublished-notebook).
## Next steps [Get started with Azure Synapse Analytics](get-started.md)
virtual-desktop Data Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/data-locations.md
We currently support storing the aforementioned data in the following locations:
- Europe (EU) - United Kingdom (UK) - Canada (CA)
+- Japan (JP) (Public Preview)
In addition, we aggregate service-generated data from all locations where the service infrastructure is, then send it to the US geography. The data sent to the US region includes scrubbed data, but not customer data.
virtual-desktop Disaster Recovery Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/disaster-recovery-concepts.md
+
+ Title: Azure Virtual Desktop disaster recovery concepts
+description: Understand what a disaster recovery plan for Azure Virtual Desktop is and how each plan works.
+++++ Last updated : 05/24/2022++++
+# Azure Virtual Desktop disaster recovery concepts
+
+Azure Virtual Desktop has grown tremendously as a remote and hybrid work solution in recent years. Because so many users now work remotely, organizations require solutions with high deployment speed and reduced costs. Users also need to have a remote work environment with guaranteed availability and resiliency that lets them access their virtual machines even during disasters. This document describes disaster recovery plans that we recommend for keeping your organization up and running.
+
+To prevent system outages or downtime, every system and component in your Azure Virtual Desktop deployment must be fault-tolerant. Fault tolerance is when you have a duplicate configuration or system in another Azure region that takes over for the main configuration during an outage. This secondary configuration or system reduces the impact of a localized outage. There are many ways you can set up fault tolerance, but this article will focus on the methods currently available in Azure.
+
+## Azure Virtual Desktop infrastructure
+
+In order to figure out which areas to make fault-tolerant, we first need to know who's responsible for maintaining each area. You can divide responsibility in the Azure Virtual Desktop service into two areas: Microsoft-managed and customer-managed. Metadata like the host pools, app groups, and workspaces is controlled by Microsoft. The metadata is always available and doesn't require extra setup by the customer to replicate host pool data or configurations. We've designed the gateway infrastructure that connects people to their session hosts to be a global, highly resilient service managed by Microsoft. Meanwhile, customer-managed areas involve the virtual machines (VMs) used in Azure Virtual Desktop and the settings and configurations unique to the customer's deployment. The following table gives a clearer idea of which areas are managed by which party.
+
+| Managed by Microsoft | Managed by customer |
+|-|-|
+| Load balancer | Network |
+| Session broker | Session hosts |
+| Gateway | Storage |
+| Diagnostics | User profile data |
+| Cloud identity platform | Identity |
+
+In this article, we're going to focus on customer-managed components, as these are settings you can configure yourself.
+
+## Disaster recovery basics
+
+In this section, we'll discuss actions and design principles that can protect your data and prevent huge data recovery efforts after small outages or full-blown disasters. For smaller outages, taking a few preventive steps can help keep them from becoming bigger disasters. Let's go over some basic terms that will help you when you start setting up your disaster recovery plan.
+
+When you design a disaster recovery plan, you should keep the following three things in mind:
+
+- High availability: distributing infrastructure so smaller, more localized outages don't interrupt your entire deployment. Designing with HA in mind can minimize outage impact and avoid the need for a full disaster recovery.
+- Business continuity: how an organization can keep operating during outages of any size.
+- Disaster recovery: the process of getting back to operation after a full outage.
+
+Azure has many built-in, free-of-charge features that can deliver high availability at many levels. The first feature is [availability sets](../virtual-machines/availability-set-overview.md), which distribute VMs across different fault and update domains within Azure. Next are [availability zones](../availability-zones/az-region.md), which are physically isolated and geographically distributed groups of data centers that can reduce the impact of an outage. Finally, distributing session hosts across multiple [Azure regions](../best-practices-availability-paired-regions.md) provides even more geographical distribution, which further reduces outage impact. All three features provide a certain level of protection within Azure Virtual Desktop, and you should carefully consider them along with any cost implications.
+
+Basically, the disaster recovery strategy we recommend for Azure Virtual Desktop is to deploy resources across multiple availability zones within a region. If you need more protection, you can also deploy resources across multiple paired Azure regions.
+
+## Active-passive and active-active deployments
+
+Something else you should keep in mind is the difference between active-passive and active-active plans. Active-passive plans are when you have a region with one set of resources that's active and one that's turned off until it's needed (passive). If the active region is taken offline by an emergency, the organization can switch to the passive region by turning it on and moving all their users there.
+
+Another option is an active-active deployment, where you use both sets of infrastructure at the same time. While some users may be affected by outages, the impact is limited to the users in the region that went down. Users in the other region that's still online won't be affected, and the recovery is limited to the users in the affected region reconnecting to the functioning active region. Active-active deployments can take many forms, including:
+
+- Overprovisioning infrastructure in each region to accommodate affected users in the event one of the regions goes down. A potential drawback to this method is that maintaining the additional resources costs more.
+- Having extra session hosts in both active regions, but deallocating them when they aren't needed, which reduces costs.
+- Only provisioning new infrastructure during disaster recovery and allowing affected users to connect to the newly provisioned session hosts. This method requires regular testing with infrastructure-as-code tools so you can deploy the new infrastructure as quickly as possible during a disaster.
+
+## Recommended disaster recovery methods
+
+The disaster recovery methods we recommend are:
+
+- Configure and deploy Azure resources across multiple availability zones.
+
+- Configure and deploy Azure resources across multiple regions in either active-active or active-passive configurations. These configurations are typically found in [shared host pools](create-host-pools-azure-marketplace.md).
+
+- For personal host pools with dedicated VMs, [replicate VMs using Azure Site Recovery](../site-recovery/azure-to-azure-how-to-enable-replication.md) to another region.
+
+- Configure a separate "disaster recovery" host pool in the secondary region and use FSLogix Cloud Cache to replicate the user profile. During a disaster, you can switch users over to the secondary region.
+
+In the following sections, we'll go into more detail about the two main ways to implement these methods for shared and personal host pools.
+
+## Disaster recovery for shared host pools
+
+In this section, we'll discuss shared (or "pooled") host pools using an active-passive approach. The active-passive approach is when you divide up existing resources into a primary and secondary region. Normally, your organization would do all its work in the primary (or "active") region, but during a disaster, all it takes to switch over to the secondary (or "passive") region is to turn off the resources in the primary region (if you can do so, depending on the outage's extent) and turn on the ones in the secondary one.
+
+The following diagram shows an example of a deployment with redundant infrastructure in a secondary region. "Redundant" means that a copy of the original infrastructure exists in this other region, and is standard in deployments to provide resiliency for all components. Beneath a single Azure Active Directory, there are two regions: West US and East US. Each region has two session hosts running a multi-session operating system (OS), a server running Azure AD Connect, an Active Directory Domain Controller, an Azure Files premium file share for FSLogix profiles, a storage account, and a virtual network (VNET). In the primary region, West US, all resources are turned on. In the secondary region, East US, the session hosts in the host pool are either turned off or in drain mode, and the Azure AD Connect server is in staging mode. The two VNETs in both regions are connected by peering.
++
+In most cases, if a component fails or the primary region isn't available, then the only action the customer needs to perform is to turn on the hosts or remove drain mode in the secondary region to enable end-user connections. This scenario focuses on reducing downtime. However, a redundancy-based disaster recovery plan may cost more due to having to maintain those extra components in the secondary region.
+
+The potential benefits of this plan are as follows:
+
+- Less time spent recovering from disasters. For example, you'll spend less time on provisioning, configuring, integrating, and validating newly deployed resources.
+- There's no need to use complicated procedures.
+- It's easy to test failover outside of disasters.
+
+The potential drawbacks are as follows:
+
+- May cost more due to having more infrastructure to maintain, such as storage accounts, hosts, and so on.
+- You'll need to spend more time configuring your deployment to accommodate this plan.
+- You need to maintain the extra infrastructure you set up even when you don't need it.
+
+## Important information for shared host pool recovery
+
+When using this disaster recovery strategy, it's important to keep the following things in mind:
+
+- Having multiple session hosts online across many regions can impact user experience. The managed network load balancer doesn't account for geographic proximity, instead treating all hosts in a host pool equally.
+
+- Having multiple active user sessions across regions using the same FSLogix cloud cache can corrupt user profiles. We recommend you have only one active Azure Virtual Desktop session using the same FSLogix cloud cache at a time. The service evaluates RemoteApps as multi-session occurrences, and desktops as single-session occurrences, which means you should avoid multiple connections to the same FSLogix profile.
+
+- Make sure that you configure your virtual machines (VMs) exactly the same way within your host pool. Also, make sure all VMs within your host pool are the same size. If your VMs aren't the same, the managed network load balancer will distribute user connections evenly across all available VMs. The smaller VMs may become resource-constrained earlier than expected compared to larger VMs, resulting in a negative user experience.
+
+- Region availability affects data or workspace monitoring. If a region isn't available, the service may lose all historical monitoring data during a disaster. We recommend using a custom export or dump of historical monitoring data.
+
+- We recommend you update your session hosts at least once every month. This recommendation applies to session hosts you keep turned off for extended periods of time.
+
+- Test your deployment by running a controlled failover at least once every six months.
+
+The following table lists deployment recommendations for host pool disaster recovery strategies:
+
+| Technology | Recommendations |
+|-|--|
+| Network | Create and deploy a secondary virtual network in another region and configure [Azure Peering](../virtual-network/virtual-network-manage-peering.md) with your primary virtual network. |
+| Session hosts | [Create and deploy an Azure Virtual Desktop shared host pool](create-host-pools-azure-marketplace.md) with multi-session OS SKU and include VMs from other availability zones and another region. |
+| Storage | Create storage accounts in multiple regions using premium-tier accounts. |
+| User profile data | Create separate [FSLogix cloud cache GPOs](/fslogix/configure-cloud-cache-tutorial) pointing at separate Azure Files SMB locations using Azure storage accounts in different regions. |
+| Identity | Active Directory Domain Controllers from the same directory. |
+
+## Disaster recovery for personal host pools
+
+For personal host pools, your disaster recovery strategy should involve replicating your resources to a secondary region using Azure Site Recovery Services Vault. If your primary region goes down during a disaster, Azure Site Recovery can fail over and turn on the resources in your secondary region.
+
+For example, let's say we have a deployment with a primary region in the West US and a secondary region in the East US. The primary region has a personal host pool with two session hosts. Each session host has its own local disk containing the user profile data, and each region has its own VNET that isn't peered with anything. If there's a disaster, you can use Azure Site Recovery to fail over to the secondary region in East US (or to a different availability zone in the same region). Unlike the primary region, the secondary region doesn't have local machines or disks. During the failover, Azure Site Recovery takes the replicated data from the Azure Site Recovery vault and uses it to create two new VMs that are copies of the original session hosts, including the local disk and user profile data. The secondary region has its own independent VNET, so the VNET going offline in the primary region won't affect functionality.
+
+The following diagram shows the example deployment we just described.
++
+The benefits of this plan include a lower overall cost and not requiring maintenance to patch or update due to resources only being provisioned when you need them. However, a potential drawback is that you'll spend more time provisioning, integrating, and validating failover infrastructure than you would with a shared host pool disaster recovery setup.
+
+## Important information about personal host pool recovery
+
+When using this disaster recovery strategy, it's important to keep the following things in mind:
+
+- There may be requirements that the host pool VMs need to function in the secondary site, such as virtual networks, subnets, network security, or VPNs to access a directory such as on-premises Active Directory.
+
+ >[!NOTE]
+ > Using an [Azure Active Directory (AD)-joined VM](deploy-azure-ad-joined-vm.md) fulfills some of these requirements automatically.
+
+- You may experience integration, performance, or contention issues for resources if a large-scale disaster affects multiple customers or tenants.
+
+- Personal host pools use VMs that are dedicated to one user, which means affinity load-balancing rules direct all user sessions back to a specific VM. This one-to-one mapping between user and VM means that if a VM is down, the user won't be able to sign in until the VM comes back online or the VM is recovered after disaster recovery is finished.
+
+- VMs in a personal host pool store the user profile on drive C, which means FSLogix isn't required.
+
+- Region availability affects data or workspace monitoring. If a region isn't available, the service may lose all historical monitoring data during a disaster. We recommend using a custom export or dump of historical monitoring data.
+
+- We recommend you avoid using FSLogix when using a personal host pool configuration.
+
+- Run [controlled failover](../site-recovery/azure-to-azure-tutorial-dr-drill.md) and [failback](../site-recovery/azure-to-azure-tutorial-failback.md) tests at least once every six months.
+
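To run such a controlled test failover with PowerShell, the following is a minimal sketch only; the vault, resource group, VM, and virtual network names are hypothetical placeholders, and you'd normally follow the linked DR drill tutorial for the full procedure.

```powershell
# Minimal sketch of a test failover for one replicated personal session host.
# All resource names below are hypothetical; replace them with your own.
$vault = Get-AzRecoveryServicesVault -ResourceGroupName 'rg-avd-dr' -Name 'rsv-avd-asr'
Set-AzRecoveryServicesAsrVaultContext -Vault $vault

# Find the replicated item for the session host.
$fabric    = Get-AzRecoveryServicesAsrFabric | Select-Object -First 1
$container = Get-AzRecoveryServicesAsrProtectionContainer -Fabric $fabric | Select-Object -First 1
$item      = Get-AzRecoveryServicesAsrReplicationProtectedItem -ProtectionContainer $container -FriendlyName 'avd-personal-sh-0'

# Start a test failover into an isolated test virtual network in the secondary region.
$testVnetId = '/subscriptions/<subscription-id>/resourceGroups/rg-avd-dr/providers/Microsoft.Network/virtualNetworks/vnet-avd-test'
Start-AzRecoveryServicesAsrTestFailoverJob -ReplicationProtectedItem $item `
    -Direction PrimaryToRecovery `
    -AzureVMNetworkId $testVnetId

# After validating the test VM, clean up the test failover.
Start-AzRecoveryServicesAsrTestFailoverCleanupJob -ReplicationProtectedItem $item -Comment 'DR drill complete'
```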
+The following table lists deployment recommendations for host pool disaster recovery strategies:
+
+| Technology | Recommendations |
+|--|--|
+| Network | Create and deploy a secondary virtual network in another region to follow custom naming conventions or security requirements outside of the Azure Site Recovery default naming scheme. |
+| Session hosts | [Enable and configure Azure Site Recovery for VMs](../site-recovery/azure-to-azure-tutorial-enable-replication.md). Optionally, you can pre-stage an image manually or use the Azure Image Builder service for ongoing provisioning. |
+| Storage | Optionally, create an Azure Storage account to store user profiles. |
+| User profile data | User profile data is locally stored on drive C. |
+| Identity | Active Directory Domain Controllers from the same directory across multiple regions.|
+
+## Next steps
+
+For more in-depth information about disaster recovery in Azure, check out these articles:
+
+- [Cloud Adoption Framework Azure Virtual Desktop business continuity and disaster recovery documentation](/azure/cloud-adoption-framework/scenarios/wvd/eslz-business-continuity-and-disaster-recovery)
+
+- [Azure Virtual Desktop Handbook: Disaster Recovery](https://azure.microsoft.com/resources/azure-virtual-desktop-handbook-disaster-recovery/)
virtual-desktop Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/disaster-recovery.md
Previously updated : 10/09/2020 Last updated : 05/24/2022 + # Azure Virtual Desktop disaster recovery
-To keep your organization's data safe, you may need to adopt a business continuity and disaster recovery (BCDR) strategy. A sound BCDR strategy keeps your apps and workload up and running during planned and unplanned service or Azure outages.
+To keep your organization's data safe, you should adopt and manage a business continuity and disaster recovery (BCDR) strategy. A sound BCDR strategy keeps your apps and workloads up and running during planned and unplanned service or Azure outages. These plans should cover the session host virtual machines (VMs) managed by customers, as opposed to the Azure Virtual Desktop service that's managed by Microsoft. For more information about management areas, see [Azure Virtual Desktop disaster recovery concepts](disaster-recovery-concepts.md).
-Azure Virtual Desktop offers BCDR for the Azure Virtual Desktop service to preserve customer metadata during outages. When an outage occurs in a region, the service infrastructure components will fail over to the secondary location and continue functioning as normal. You can still access service-related metadata, and users can still connect to available hosts. End-user connections will stay online as long as the hosts remain accessible.
+The Azure Virtual Desktop service is designed with high availability in mind. Azure Virtual Desktop is a global service managed by Microsoft, with multiple instances of its independent components distributed across multiple Azure regions. If there's an unexpected outage in any of the components, your traffic will be diverted to one of the remaining instances or Microsoft will initiate a full failover to redundant infrastructure in another Azure region.
-To make sure users can still connect during a region outage, you need to replicate their virtual machines (VMs) in a different location. During outages, the primary site fails over to the replicated VMs in the secondary location. Users can continue to access apps from the secondary location without interruption. On top of VM replication, you'll need to keep user identities accessible at the secondary location. If you're using profile containers, you'll also need to replicate them. Finally, make sure your business apps that rely on data in the primary location can fail over with the rest of the data.
+To make sure users can still connect during a region outage in session host VMs, you need to design your infrastructure with high availability and disaster recovery in mind. A typical disaster recovery plan includes replicating virtual machines (VMs) to a different location. During outages, the primary site fails over to the replicated VMs in the secondary location. Users can continue to access apps from the secondary location without interruption. On top of VM replication, you'll need to keep user identities accessible at the secondary location. If you're using profile containers, you'll also need to replicate them. Finally, make sure your business apps that rely on data in the primary location can fail over with the rest of the data.
-To summarize, to keep your users connected during an outage, you'll need to do the following things in this order:
+To summarize, to keep your users connected during an outage, you'll need to do the following things:
-- Replicate the VMs in a secondary location.
+- Replicate the VMs to a secondary location.
- If you're using profile containers, set up data replication in the secondary location.-- Make sure user identities you set up in the primary location are available in the secondary location.-- Make sure any line-of-business applications relying on data in your primary location are failed over to the secondary location.
+- Make sure user identities you set up in the primary location are available in the secondary location. To ensure availability, make sure your Active Directory Domain Controllers are available in or from the secondary location.
+- Make sure any line-of-business applications and data in your primary location are also failed over to the secondary location.
+
+## Active-passive and active-active disaster recovery plans
+
+There are two different types of disaster recovery infrastructure: active-passive and active-active. Each type of infrastructure works a different way, so let's look at what those differences are.
+
+In an active-passive plan, one region hosts the set of resources that's active, while a second region hosts a set that stays turned off until it's needed (passive). If an outage or disaster takes the active region offline, the organization can switch to the passive region by turning it on and directing all users there.
+
+Another option is an active-active deployment, where you use both sets of infrastructure at the same time. If an outage takes one region down, only the users in that region are affected; users in the region that's still online keep working, and recovery consists of the affected users reconnecting to the functioning active region. Active-active deployments can take many forms, including:
+
+- Overprovisioning infrastructure in each region to accommodate affected users in the event one of the regions goes down. A potential drawback to this method is that maintaining the additional resources costs more.
+- Have extra session hosts in both active regions, but deallocate them when they aren't needed, which reduces costs (see the sketch after this list).
+- Only provision new infrastructure during disaster recovery and allow affected users to connect to the newly provisioned session hosts. This method requires regular testing with infrastructure-as-code tools so you can deploy the new infrastructure as quickly as possible during a disaster.
+
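As an illustration of the deallocate-when-idle approach, a minimal sketch with the Az PowerShell module might look like the following; the resource group and naming pattern are hypothetical and not from this article.

```powershell
# Hypothetical resource group and naming pattern for standby session hosts.
$standbyHosts = Get-AzVM -ResourceGroupName 'rg-avd-secondary' |
    Where-Object { $_.Name -like 'avd-sh-standby-*' }

# Deallocate the standby hosts while they're not needed so they stop accruing compute charges.
$standbyHosts | ForEach-Object {
    Stop-AzVM -ResourceGroupName $_.ResourceGroupName -Name $_.Name -Force -NoWait
}

# During a failover, power the same hosts back on so users can reconnect.
$standbyHosts | ForEach-Object {
    Start-AzVM -ResourceGroupName $_.ResourceGroupName -Name $_.Name -NoWait
}
```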
+For more information about types of disaster recovery plans you can use, see [Azure Virtual Desktop disaster recovery concepts](disaster-recovery-concepts.md).
+
+Identifying which method works best for your organization is the first thing you should do before you get started. Once you have your plan in place, you can start building your recovery plan.
## VM replication First, you'll need to replicate your VMs to the secondary location. Your options for doing so depend on how your VMs are configured: -- You can configure all your VMs for both pooled and personal host pools with Azure Site Recovery. With this method, you'll only need to set up one host pool and its related app groups and workspaces.-- You can create a new host pool in the failover region while keeping all resources in your failover location turned off. For this method, you'd need to set up new app groups and workspaces in the failover region. You can then use an Azure Site Recovery plan to turn host pools on.
+- You can configure replication for all your VMs in both pooled and personal host pools with Azure Site Recovery. For more information about how this process works, see [Replicate Azure VMs to another Azure region](../site-recovery/azure-to-azure-how-to-enable-replication.md). However, if you have pooled host pools that you built from the same image and don't have any personal user data stored locally, you can choose not to replicate them. Instead, you have the option to build the VMs ahead of time and keep them powered off. You can also choose to only provision new VMs in the secondary region while a disaster is happening. If you choose these methods, you'll only need to set up one host pool and its related app groups and workspaces.
+- You can create a new host pool in the failover region while keeping all resources in your failover location turned off. For this method, you'd need to set up new app groups and workspaces in the failover region. You can then use an Azure Site Recovery plan to turn on host pools.
- You can create a host pool that's populated by VMs built in both the primary and failover regions while keeping the VMs in the failover region turned off. In this case, you only need to set up one host pool and its related app groups and workspaces. You can use an Azure Site Recovery plan to power on host pools with this method.
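For the Azure Site Recovery plan option, a minimal PowerShell sketch (with a hypothetical vault and plan name) could look like this; the recovery plan itself would be authored in advance with the VMs and any pre/post scripts you need.

```powershell
# Hypothetical vault and recovery plan names; replace with your own.
$vault = Get-AzRecoveryServicesVault -ResourceGroupName 'rg-avd-dr' -Name 'rsv-avd-asr'
Set-AzRecoveryServicesAsrVaultContext -Vault $vault

# Fail over the recovery plan that powers on the host pool VMs in the secondary region.
$plan = Get-AzRecoveryServicesAsrRecoveryPlan -Name 'rp-avd-failover'
Start-AzRecoveryServicesAsrUnplannedFailoverJob -RecoveryPlan $plan -Direction PrimaryToRecovery
```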
-We recommend you use [Azure Site Recovery](../site-recovery/site-recovery-overview.md) to manage replicating VMs in other Azure locations, as described in [Azure-to-Azure disaster recovery architecture](../site-recovery/azure-to-azure-architecture.md). We especially recommend using Azure Site Recovery for personal host pools, because Azure Site Recovery supports both [server-based and client-based SKUs](../site-recovery/azure-to-azure-support-matrix.md#replicated-machine-operating-systems).
+We recommend you use [Azure Site Recovery](../site-recovery/site-recovery-overview.md) to manage replicating VMs to other Azure locations, as described in [Azure-to-Azure disaster recovery architecture](../site-recovery/azure-to-azure-architecture.md). We especially recommend using Azure Site Recovery for personal host pools because, true to their name, personal host pools tend to have something personal about them for their users. Azure Site Recovery supports both [server-based and client-based SKUs](../site-recovery/azure-to-azure-support-matrix.md#replicated-machine-operating-systems).
If you use Azure Site Recovery, you won't need to register these VMs manually. The Azure Virtual Desktop agent in the secondary VM will automatically use the latest security token to connect to the service instance closest to it. The VM (session host) in the secondary location will automatically become part of the host pool. The end-user will have to reconnect during the process, but apart from that, there are no other manual operations.
-If there are existing user connections during the outage, before the admin can start failover to the secondary region, you need to end the user connections in the current region.
+If there are existing user connections during the outage, before the admin can start failing over to the secondary region, you need to end the user connections in the current region.
To disconnect users in Azure Virtual Desktop (classic), run this cmdlet:

```powershell
Invoke-RdsUserSessionLogoff
```
-To disconnect users in the Azure-integrated version of Azure Virtual Desktop, run this cmdlet:
+To disconnect users in Azure Virtual Desktop, run this cmdlet:
```powershell
Remove-AzWvdUserSession
```
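For example, to sign out every active session in a host pool before failing over, a sketch like the following (with assumed resource group and host pool names) iterates the sessions returned by `Get-AzWvdUserSession`:

```powershell
# Assumed names; replace with your own resource group and host pool.
$rg       = 'rg-avd-primary'
$hostPool = 'hp-avd-pooled'

Get-AzWvdUserSession -ResourceGroupName $rg -HostPoolName $hostPool | ForEach-Object {
    # The session name is returned as '<hostpool>/<sessionhost>/<session-id>'.
    $parts = $_.Name.Split('/')
    Remove-AzWvdUserSession -ResourceGroupName $rg `
        -HostPoolName $hostPool `
        -SessionHostName $parts[1] `
        -Id $parts[2]
}
```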
-Once you've signed out all users in the primary region, you can fail over the VMs in the primary region and let users connect to the VMs in the secondary region. For more information about how this process works, see [Replicate Azure VMs to another Azure region](../site-recovery/azure-to-azure-how-to-enable-replication.md).
+Once you've signed out all users in the primary region, you can fail over the VMs in the primary region and let users connect to the VMs in the secondary region.
## Virtual network
Next, ensure that the domain controller is available at the secondary location.
There are three ways to keep the domain controller available:
- - Have Active Directory Domain Controller at secondary location
+ - Have one or more Active Directory Domain Controllers in the secondary location
 - Use an on-premises Active Directory Domain Controller
 - Replicate Active Directory Domain Controller using [Azure Site Recovery](../site-recovery/site-recovery-active-directory.md)
-## User and app data
+## Replicating user and app profile data
+
+If you're using profile containers, the next step is to set up data replication to the secondary location.
-If you're using profile containers, the next step is to set up data replication in the secondary location. You have five options to store FSLogix profiles:
+You have five options to store FSLogix profiles:
 - Storage Spaces Direct (S2D)
 - Network drives (VM with extra drives)
 - Azure Files
 - Azure NetApp Files
- - Cloud Cache for replication
+ - Third-party storage services available on the Azure Marketplace
For more information, check out [Storage options for FSLogix profile containers in Azure Virtual Desktop](store-fslogix-profile.md).
-If you're setting up disaster recovery for profiles, these are your options:
+If you're setting up disaster recovery for user profiles, you'll need to either use the storage service's own replication to copy the data to another region, or use FSLogix Cloud Cache to manage the replication without relying on the underlying storage service.
- - Set up Native Azure Replication (for example, Azure Files Standard storage account replication, Azure NetApp Files replication, or Azure Files Sync for file servers).
+Let's go over the five options for user profile disaster recovery plans in more detail in the following sections.
+
+### Native Azure replication
+
+One way you can set up disaster recovery is to set up native Azure replication. For example, you can set up native replication with Azure Files Standard storage account replication, Azure NetApp Files replication, or Azure Files Sync for file servers.
- >[!NOTE]
- >NetApp replication is automatic after you first set it up. With Azure Site Recovery plans, you can add pre-scripts and post-scripts to fail over non-VM resources replicate Azure Storage resources.
+>[!NOTE]
+>NetApp replication is automatic after you first set it up. With Azure Site Recovery plans, you can add pre-scripts and post-scripts to fail over non-VM resources and replicate Azure Storage resources.
+
+### Storage Spaces Direct
+
+Another option you can use is Storage Spaces Direct. Since Storage Spaces Direct handles replication across regions internally, you don't need to manually set up the secondary path.
+
+### Network drives (VM with extra drives)
- - Set up FSLogix Cloud Cache for both app and user data.
- - Set up disaster recovery for app data only to ensure access to business-critical data at all times. With this method, you can retrieve user data after the outage is over.
+You can use VMs with extra drives for disaster recovery, too. If you replicate the network storage VMs using Azure Site Recovery like the session host VMs, then the recovery keeps the same path, which means you don't need to reconfigure FSLogix.
-LetΓÇÖs take a look at how to configure FSLogix to set up disaster recovery for each option.
+### Azure Files
+
+Azure Files supports cross-region asynchronous replication that you can specify when you create the storage account. If the asynchronous nature of Azure Files already covers your disaster recovery goals, then you don't need to do extra configuration.
+
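For instance, geo-redundant storage (GRS) on a standard storage account gives you this kind of asynchronous cross-region copy. A minimal sketch follows; the account and resource group names are hypothetical.

```powershell
# Hypothetical names; standard-tier accounts support geo-redundant (GRS) replication,
# which asynchronously copies data to the paired region.
New-AzStorageAccount -ResourceGroupName 'rg-avd-profiles' `
    -Name 'avdprofilesgrs001' `
    -Location 'centralus' `
    -SkuName 'Standard_GRS' `
    -Kind 'StorageV2'
```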
+If you need synchronous replication to minimize data loss, then we recommend you use FSLogix Cloud Cache instead.
+
+>[!NOTE]
+>This section doesn't cover the failover authentication mechanism for Azure Files.
+
+### Azure NetApp Files
+
+You can also use Azure NetApp Files to replicate your Azure resources. Learn more about Azure NetApp Files at [Create replication peering for Azure NetApp Files](../azure-netapp-files/cross-region-replication-create-peering.md).
### FSLogix configuration
-The FSLogix agent can support multiple profile locations if you configure the registry entries for FSLogix.
+The FSLogix agent can support multiple profile locations using the standard [VHDLocations](/fslogix/profile-container-configuration-reference#vhd-locations) option. This method doesn't have anything to do with the Cloud Cache, so if you'd rather use the cache, skip ahead to [FSLogix Cloud Cache](#fslogix-cloud-cache). This option also doesn't replicate data itself; instead, it lists multiple storage providers that the FSLogix agent can search to find or create your user profile. You still need to replicate the storage separately so the profile can be made available in the secondary region.
To configure the registry entries:
If the first location is unavailable, the FSLogix agent will automatically fail over to the second, and so on.
-We recommend you configure the FSLogix agent with a path to the secondary location in the main region. Once the primary location shuts down, the FLogix agent will replicate as part of the VM Azure Site Recovery replication. Once the replicated VMs are ready, the agent will automatically attempt to path to the secondary region.
+We recommend you configure the FSLogix VHDLocations registry setting with both storage locations on the session hosts in both Azure regions you've deployed to. To do this, you'll need to set up two different group policies. The first group policy is for the session hosts in the primary region, with the storage locations ordered so the primary location is listed first and the secondary location second. The second group policy is for the session hosts in the secondary location, with the order reversed so that the secondary storage location is listed first for only the VMs in the secondary or failover site.
-For example, let's say your primary session host VMs are in the Central US region, but your profile container is in the Central US region for performance reasons.
+For example, let's say your primary session host VMs are in the Central US region, and the profile container is also in the Central US region for performance reasons. In this case, you'd configure the FSLogix agent with a path to the storage in the Central US region listed first. Next, you'd configure the storage service you used in the previous example to replicate to the West US region. Once the path to Central US fails, the agent will try to load the profile in West US instead.
-In this case, you would configure the FSLogix agent with a path to the storage in Central US. You would configure the session host VMs to replicate in West US. Once the path to Central US fails, the agent will try to create a new path for storage in West US instead.
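As a minimal illustration of that ordering (the share paths are hypothetical, and a group policy or your own configuration tooling would normally set this), the registry configuration on a primary-region session host might look like this:

```powershell
# Hypothetical SMB paths: primary-region storage first, secondary-region storage second.
# Reverse the order in the policy that targets session hosts in the secondary region.
$vhdLocations = @(
    '\\centralusprofiles.file.core.windows.net\profiles',
    '\\westusprofiles.file.core.windows.net\profiles'
)

New-Item -Path 'HKLM:\SOFTWARE\FSLogix\Profiles' -Force | Out-Null
New-ItemProperty -Path 'HKLM:\SOFTWARE\FSLogix\Profiles' -Name 'Enabled' `
    -PropertyType DWord -Value 1 -Force | Out-Null
New-ItemProperty -Path 'HKLM:\SOFTWARE\FSLogix\Profiles' -Name 'VHDLocations' `
    -PropertyType MultiString -Value $vhdLocations -Force | Out-Null
```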
+### VHDLocations
-### S2D
+VHDLocations contributes to business continuity, but this setting on its own isn't a complete high availability or disaster recovery solution. What VHDLocations does is let users fall back to a replicated or new profile if a disaster makes the primary storage unavailable, keeping them productive during an outage.
-Since S2D handles replication across regions internally, you don't need to manually set up the secondary path.
+Here's how VHDLocations works, as well as some things you should consider if you plan to make VHDLocations part of your disaster recovery strategy:
-### Network drives (VM with extra drives)
+- If the primary storage is unavailable for whatever reason and a user signs in, the FSLogix agent won't be able to access the existing user profile from that primary share. The user can still sign in, but FSLogix will either use the profile it finds in the secondary storage location (if you've already replicated it with storage replication) or it'll create a new profile on the secondary share. Because the user is now using either a replicated or new profile, they won't be using their original profile. When they use this secondary profile, any updates they make will apply only to the secondary profile. They won't be able to access their original profile until the primary storage becomes available again and they sign back in.
-If you replicate the network storage VMs using Azure Site Recovery like the session host VMs, then the recovery keeps the same path, which means you don't need to reconfigure FSlogix.
+- Once the primary storage is available again, the user won't be able to merge changes they made in the secondary or new profile back into the original profile. When a user signs in after the primary share is available again, they will return to using their original profile as it was before the disaster. Any changes they made in the secondary or new profile during the disaster will be lost.
-### Azure Files
+### FSLogix Cloud cache
-Azure Files supports cross-region asynchronous replication that you can specify when you create the storage account. If the asynchronous nature of Azure Files already covers your disaster recovery goals, then you don't need to do additional configuration.
+FSLogix supports replicating user and Office containers from the agent running on the session host itself. While you'll need to deploy multiple storage providers in multiple regions to store the replicated profiles, you won't need to configure the storage service's replication capabilities with multiple entries like you did with the VHDLocations settings in the previous section. However, before you start configuring FSLogix Cloud cache, you should be aware this method requires extra processing and storage space on the session host itself. Make sure you review [Cloud Cache to create resiliency and availability](/fslogix/cloud-cache-resiliency-availability-cncpt) before you get started.
-If you need synchronous replication to minimize data loss, then we recommend you use FSLogix Cloud Cache instead.
+You can configure FSLogix Cloud Cache directly in the registry based on the VHDLocations example in the previous section. However, we recommend you configure the cloud cache using a group policy instead. To create or edit a group policy object, go to **Computer Configuration** > **Administrative Templates** > **FSLogix** > **Profile Containers** (and **Office 365 Containers**, if necessary) > **Cloud Cache** > **Cloud Cache Locations**. Once you've created or edited your policy object, you'll need to enable it, then list all the storage provider locations you want FSLogix to replicate the profile to, as shown in the following image.
->[!NOTE]
->This section doesn't cover the failover authentication mechanism for
-Azure Files.
+> [!div class="mx-imgBorder"]
+> ![A screenshot of the FSLogix Cloud Cache Group Policy Cloud Cache Locations is selected.](media/fslogix-locations.png)
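If you do set Cloud Cache directly in the registry instead of using the Group Policy object shown above, a minimal sketch (with hypothetical share paths) uses the `CCDLocations` value:

```powershell
# Hypothetical storage providers in two regions. When CCDLocations is set,
# FSLogix uses Cloud Cache and the VHDLocations value is ignored.
$ccdLocations = 'type=smb,connectionString=\\centralusprofiles.file.core.windows.net\profiles;' +
                'type=smb,connectionString=\\westusprofiles.file.core.windows.net\profiles'

New-Item -Path 'HKLM:\SOFTWARE\FSLogix\Profiles' -Force | Out-Null
New-ItemProperty -Path 'HKLM:\SOFTWARE\FSLogix\Profiles' -Name 'Enabled' `
    -PropertyType DWord -Value 1 -Force | Out-Null
New-ItemProperty -Path 'HKLM:\SOFTWARE\FSLogix\Profiles' -Name 'CCDLocations' `
    -PropertyType String -Value $ccdLocations -Force | Out-Null
```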
-### Azure NetApp Files
+## Back up your data
+
+You also have the option to back up your data. You can choose one of the following methods to back up your Azure Virtual Desktop data:
-Learn more about Azure NetApp Files at [Create replication peering for Azure NetApp Files](../azure-netapp-files/cross-region-replication-create-peering.md).
+- For Compute data, we recommend only backing up personal host pools with [Azure Backup](../backup/backup-azure-vms-introduction.md). A minimal sketch follows this list.
+- For Storage data, the backup solution we recommend varies based on the back-end storage you used to store user profiles:
+ - If you used Azure Files Share, we recommend using [Azure Backup for File Share](../backup/azure-file-share-backup-overview.md).
+ - If you used Azure NetApp Files, we recommend using either [Snapshots/Policies](../azure-netapp-files/snapshots-manage-policy.md) or [Azure NetApp Files Backup](../azure-netapp-files/backup-introduction.md).
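For the compute recommendation above, here's a minimal sketch of protecting a personal session host with Azure Backup; the vault, policy, and VM names are assumed placeholders.

```powershell
# Assumed names; replace with your Recovery Services vault, policy, and session host VM.
$vault  = Get-AzRecoveryServicesVault -ResourceGroupName 'rg-avd-primary' -Name 'rsv-avd-backup'
$policy = Get-AzRecoveryServicesBackupProtectionPolicy -Name 'DefaultPolicy' -VaultId $vault.ID

# Enable backup protection for the personal host pool VM.
Enable-AzRecoveryServicesBackupProtection -VaultId $vault.ID `
    -Policy $policy `
    -Name 'avd-personal-sh-0' `
    -ResourceGroupName 'rg-avd-primary'
```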
## App dependencies
After you're done setting up disaster recovery, you'll want to test your plan to make sure it works as intended.
Here are some suggestions for how to test your plan: -- If the test VMs have internet access, they will take over any existing session host for new connections, but all existing connections to the original session host will remain active. Make sure the admin running the test signs out all active users before testing the plan. -- You should only do full disaster recovery tests during a maintenance window to not disrupt your users. You can also use a host pool in the validation environment for the test. -- Make sure your test covers all business-critical apps.
+- If the test VMs have internet access, they'll take over any existing session host for new connections, but all existing connections to the original session host will remain active. Make sure the admin running the test signs out all active users before testing the plan.
+- You should only do full disaster recovery tests during a maintenance window to not disrupt your users.
+- Make sure your test covers all business-critical applications and data.
- We recommend you only fail over up to 100 VMs at a time. If you have more VMs than that, we recommend you fail them over in batches 10 minutes apart.

## Next steps
-If you have questions about how to keep your data secure in addition to planning for outages, check out our [security guide](security-guide.md).
+If you have questions about how to keep your data secure in addition to planning for outages, check out our [security guide](security-guide.md).
virtual-desktop Scheduled Agent Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/scheduled-agent-updates.md
> The Scheduled Agent Updates feature is currently in preview. > See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-The Scheduled Agent Updates feature (preview) lets you create up to two maintenance windows for the Azure Virtual Desktop agent, side-by-side stack, and Geneva Monitoring agent to get updated so that updates don't happen during peak business hours. To monitor agent updates, you can use Log Analytics to see when agent component updates are available and when updates are unsuccessful.
+The Scheduled Agent Updates feature (preview) lets you create up to two maintenance windows for the Azure Virtual Desktop agent, side-by-side stack, and Geneva Monitoring agent to get updated so that updates don't happen during peak business hours. To monitor agent updates, you can use Log Analytics to see when agent component updates are available and when updates are unsuccessful.
This article describes how the Scheduled Agent Updates feature works and how to set it up. >[!NOTE]
-> Azure Virtual Desktop (classic) doesn't support the Scheduled Agent Updates feature.
+> Azure Virtual Desktop (classic) doesn't support the Scheduled Agent Updates feature.
>[!IMPORTANT] >The preview version of this feature currently has the following limitations:
The agent component update won't succeed if the session host VM is shut down or
- All maintenance windows are two hours long to account for situations where all three agent components must be updated at the same time. For example, if your maintenance window is Saturday at 9:00 AM PST, the updates will happen between 9:00 AM PST and 11:00 AM PST. -- The **Use session host local time** parameter isn't selected by default. If you want the agent component update to be in the same time zone for all session hosts in your host pool, you'll need to specify a single time zone for your maintenance windows. Having a single time zone helps when all your session hosts or users are located in the same time zone.
+- The **Use session host local time** parameter isn't selected by default. If you want the agent component update to be in the same time zone for all session hosts in your host pool, you'll need to specify a single time zone for your maintenance windows. Having a single time zone helps when all your session hosts or users are located in the same time zone.
- If you select **Use session host local time**, the agent component update will be in the local time zone of each session host in the host pool. Use this setting when all session hosts in your host pool or their assigned users are in different time zones. For example, let's say you have one host pool with session hosts in West US in the Pacific Standard Time zone and session hosts in East US in the Eastern Standard Time zone, and you've set the maintenance window to be Saturday at 9:00 PM. Enabling **Use session host local time** ensures that updates to all session hosts in the host pool will happen at 9:00 PM in their respective time zones. Disabling **Use session host local time** and setting the time zone to be Central Standard Time ensures that updates to the session hosts in the host pool will happen at 9:00 PM Central Standard Time, regardless of the session hosts' local time zones. -- The local time zone for VMs you create using the Azure portal is set to Coordinated Universal Time (UTC) by default. If you want to change the VM time zone, run the [Set-TimeZone PowerShell cmdlet](/powershell/module/microsoft.powershell.management/set-timezone?view=powershell-7.1&preserve-view=true) on the VM.
+- The local time zone for VMs you create using the Azure portal is set to Coordinated Universal Time (UTC) by default. If you want to change the VM time zone, run the [Set-TimeZone PowerShell cmdlet](/powershell/module/microsoft.powershell.management/set-timezone) on the VM.
-- To get a list of available time zones for a VM, run the [Get-TimeZone PowerShell cmdlet]/powershell/module/microsoft.powershell.management/get-timezone?view=powershell-7.1&preserve-view=true) on the VM.
+- To get a list of available time zones for a VM, run the [Get-TimeZone PowerShell cmdlet](/powershell/module/microsoft.powershell.management/get-timezone) on the VM, as shown in the following example.
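For instance, this is a quick way to check the available time zone IDs and set one on a session host (the ID shown is just an example):

```powershell
# List the time zone IDs the session host recognizes, then set one.
Get-TimeZone -ListAvailable | Select-Object Id, DisplayName
Set-TimeZone -Id 'Pacific Standard Time'
```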
## Next steps
virtual-desktop Start Virtual Machine Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/start-virtual-machine-connect.md
# Set up Start VM on Connect
-The Start VM On Connect feature lets you reduce costs by enabling end users to turn on their session host virtual machines (VMs) only when they need them. You can them turn off VMs when they're not needed.
+Start VM On Connect lets you reduce costs by enabling end users to turn on their session host virtual machines (VMs) only when they need them. You can then turn off VMs when they're not needed.
You can configure Start VM on Connect for personal or pooled host pools using the Azure portal or PowerShell. Start VM on Connect is a host pool setting.
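For example, with the Az.DesktopVirtualization PowerShell module you can enable the setting on an existing host pool; the resource group and host pool names here are placeholders.

```powershell
# Assumed resource names; enable Start VM on Connect as a host pool property.
Update-AzWvdHostPool -ResourceGroupName 'rg-avd-primary' `
    -Name 'hp-personal' `
    -StartVMOnConnect:$true
```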
For personal host pools, Start VM On Connect will only turn on an existing session host.
The time it takes for a user to connect to a session host VM that is powered off (deallocated) increases because the VM needs time to turn on again, much like turning on a physical computer. The Remote Desktop client has an indicator that lets the user know the VM is being powered on while they're connecting. > [!NOTE]
-> Azure Virtual Desktop (classic) doesn't support this feature.
+> Azure Virtual Desktop (classic) doesn't support Start VM On Connect.
## Prerequisites
You need to make sure you have the names of the resource group and host pool you
## Troubleshooting
-If the feature runs into any issues, we recommend you use the Azure Virtual Desktop [diagnostics feature](diagnostics-log-analytics.md) to check for problems. If you receive an error message, make sure to pay close attention to the message content and make a note of the error name for reference. You can also use [Azure Monitor for Azure Virtual Desktop](azure-monitor.md) to get suggestions for how to resolve issues.
+If you run into any issues with Start VM On Connect, we recommend you use the Azure Virtual Desktop [diagnostics feature](diagnostics-log-analytics.md) to check for problems. If you receive an error message, make sure to pay close attention to the message content and make a note of the error name for reference. You can also use [Azure Monitor for Azure Virtual Desktop](azure-monitor.md) to get suggestions for how to resolve issues.
If the session host VM doesn't turn on, you'll need to check the health of the VM you tried to turn on as a first step.
virtual-desktop Teams On Avd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/teams-on-avd.md
Title: Microsoft Teams on Azure Virtual Desktop - Azure
description: How to use Microsoft Teams on Azure Virtual Desktop. Previously updated : 04/25/2022 Last updated : 05/24/2022
Using Teams in a virtualized environment is different from using Teams in a non-virtualized environment.
- The Teams desktop client in Azure Virtual Desktop environments doesn't support creating live events, but you can join live events. For now, we recommend you create live events from the [Teams web client](https://teams.microsoft.com) in your remote session instead. When watching a live event in the browser, [enable multimedia redirection (MMR) for Teams live events](multimedia-redirection.md#how-to-use-mmr-for-teams-live-events) for smoother playback. - Calls or meetings don't currently support application sharing. Desktop sessions support desktop sharing. - Give control and take control aren't currently supported.-- Teams on Azure Virtual Desktop only supports one incoming video input at a time. This means that whenever someone tries to share their screen, their screen will appear instead of the meeting leader's screen. - Due to WebRTC limitations, incoming and outgoing video stream resolution is limited to 720p. - The Teams app doesn't support HID buttons or LED controls with other devices. - New Meeting Experience (NME) is not currently supported in VDI environments.
+- Teams for Azure Virtual Desktop doesn't currently support uploading custom background images.
For Teams known issues that aren't related to virtualized environments, see [Support Teams in your organization](/microsoftteams/known-issues). ### Known issues for Teams for macOS (preview)
-You can't configure audio devices from the Teams app, and the client will automatically use the default client audio device. To switch audio devices, you'll need to configure your settings from the client audio preferences instead.
+- You can't configure audio devices from the Teams app, and the client will automatically use the default client audio device. To switch audio devices, you'll need to configure your settings from the client audio preferences instead.
+- Teams for Azure Virtual Desktop on macOS doesn't currently support background effects such as background blur and background images.
## Collect Teams logs
virtual-desktop Connect Microsoft Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/user-documentation/connect-microsoft-store.md
To subscribe to a workspace:
- If you're using a Workspace URL, use the URL your admin gave you. - If you're connecting from Azure Virtual Desktop, use one of the following URLs depending on which version of the service you're using: - Azure Virtual Desktop (classic): `https://rdweb.wvd.microsoft.com/api/feeddiscovery/webfeeddiscovery.aspx`.
- - Azure Virtual Desktop: `https://rdweb.wvd.microsoft.com/arm/webclient/https://docsupdatetracker.net/index.html`.
+ - Azure Virtual Desktop: `https://rdweb.wvd.microsoft.com/api/arm/feeddiscovery`.
3. Tap **Subscribe**.
4. Provide your credentials when prompted.
virtual-desktop Manage Resources Using Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/manage-resources-using-ui.md
The instructions in this article will tell you how to deploy the UI by using an
Since the app requires consent to interact with Azure Virtual Desktop, this tool doesn't support Business-to-Business (B2B) scenarios. Each Azure Active Directory (AAD) tenant's subscription will need its own separate deployment of the management tool.
-This management tool is a sample. Microsoft will provide important security and quality updates. The [source code is available in GitHub](https://github.com/Azure/RDS-Templates/tree/master/wvd-templates/wvd-management-ux/deploy). Customers and partners are encouraged to customize the tool to fit their business needs.
+This management tool is a sample. Microsoft will provide important security and quality updates. The [source code is available in GitHub](https://github.com/Azure/RDS-Templates/tree/master/wvd-templates/wvd-management-ux/deploy). Microsoft Support is not handling issues for the management tool. If you come across any issues, follow the directions in Azure Resource Manager templates for Remote Desktop Services to report them on [GitHub](https://github.com/Azure/RDS-Templates/tree/master/wvd-templates/wvd-management-ux/deploy).
+
+Customers and partners are encouraged to customize the tool to fit their business needs.
The following browsers are compatible with the management tool:
- Google Chrome 68 or later
virtual-machines Av1 Series Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/av1-series-retirement.md
Title: Av1-series retirement
description: Retirement information for the Av1 series VM sizes. -+ Last updated 07/26/2021
virtual-machines Av2 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/av2-series.md
Title: Av2-series description: Specifications for the Av2-series VMs.-+ -+ Last updated 02/03/2020-+ # Av2-series
virtual-machines Azure Compute Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/azure-compute-gallery.md
There are some limitations for sharing your gallery to the community:
**A**: Users should exercise caution while using images from non-verified sources, since these images are not subject to Azure certification.
-**Q**: If an image that is shared to the community doesnΓÇÖt work, who do I contact for support?**
+**Q: If an image that is shared to the community doesn't work, who do I contact for support?**
**A**: Azure is not responsible for any issues users might encounter with community-shared images. The support is provided by the image publisher. Please look up the publisher contact information for the image and reach out to them for any support.
virtual-machines Dasv5 Dadsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dasv5-dadsv5-series.md
-+ Last updated 10/8/2021
virtual-machines Dav4 Dasv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dav4-dasv4-series.md
description: Specifications for the Dav4 and Dasv4-series VMs.
-+ Last updated 02/03/2020
virtual-machines Dcasv5 Dcadsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dcasv5-dcadsv5-series.md
-+ Last updated 11/15/2021
virtual-machines Dcv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dcv2-series.md
Title: DCsv2-series - Azure Virtual Machines description: Specifications for the DCsv2-series VMs.-+ -+ Last updated 02/20/2020-+
virtual-machines Dcv3 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dcv3-series.md
Title: DCsv3 and DCdsv3-series - Azure Virtual Machines
-description: Specifications for the DCsv3 and DCdsv3-series VMs.
-
+ Title: DCsv3 and DCdsv3-series
+description: Specifications for the DCsv3 and DCdsv3-series Azure Virtual Machines.
+ -+ Previously updated : 11/01/2021- Last updated : 05/24/2022+
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-> [!IMPORTANT]
-> DCsv3 and DCdsv3 are in public preview as of November 1st, 2021.
+The DCsv3 and DCdsv3-series Azure Virtual Machines help protect the confidentiality and integrity of your code and data while they're being processed in the public cloud. By using Intel&reg; Software Guard Extensions and Intel&reg; [Total Memory Encryption - Multi Key](https://itpeernetwork.intel.com/memory-encryption/), customers can ensure their data is always encrypted and protected in use.
-The DCsv3 and DCdsv3-series virtual machines help protect the confidentiality and integrity of your code and data whilst it’s processed in the public cloud. By leveraging Intel® Software Guard Extensions and Intel® Total Memory Encryption - Multi Key, customers can ensure their data is always encrypted and protected in use.
+These machines are powered by the latest 3rd Generation Intel&reg; Xeon Scalable processors, and use Intel&reg; Turbo Boost Max Technology 3.0 to reach 3.5 GHz.
-These machines are powered by the latest 3rd Generation Intel® Xeon Scalable processors, and leverage Intel® Turbo Boost Max Technology 3.0 to reach 3.5 GHz.
-
-With this generation, CPU Cores have increased 6x (up to a maximum of 48 physical cores), Encrypted Memory (EPC) has increased 1500x to 256GB, Regular Memory has increased 12x to 384GB. All these changes substantially improve the performance gen-on-gen and unlock new entirely new scenarios.
+With this generation, CPU Cores have increased 6x (up to a maximum of 48 physical cores). Encrypted Memory (EPC) has increased 1500x to 256 GB. Regular Memory has increased 12x to 384 GB. All these changes substantially improve the performance and unlock entirely new scenarios.
> [!NOTE] > Hyperthreading is disabled for added security posture. Pricing is the same as Dv5 and Dsv5-series per physical core.
-We are offering two variants dependent on whether the workload benefits from a local disk or not. Whether you choose a VM with a local disk or not, you can attach remote persistent disk storage to all VMs. Remote disk options (such as for the VM boot disk) are billed separately from the VMs in any case, as always.
-
-## Configuration
-
-CPU: 3rd Generation Intel® Xeon Scalable Processor 8370C<br>
-Base All-Core Frequency: 2.8 GHz<br>
-[Turbo Boost Max 3.0](https://www.intel.com/content/www/us/en/gaming/resources/turbo-boost.html): Enabled, Max Frequency 3.5 GHz<br>
-[Hyper-Threading](https://www.intel.com/content/www/us/en/gaming/resources/hyper-threading.html): Not Supported<br>
-[Total Memory Encryption - Multi Key](https://itpeernetwork.intel.com/memory-encryption/): Enabled<br>
-[Premium Storage](premium-storage-performance.md): Supported<br>
-[Ultra-Disk Storage](disks-enable-ultra-ssd.md): Supported<br>
-[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported<br>
-[Azure Kubernetes Service](../aks/intro-kubernetes.md): Supported (CLI provisioning only initially)<br>
-[Live Migration](maintenance-and-updates.md): Not Supported<br>
-[Memory Preserving Updates](maintenance-and-updates.md): Not Supported<br>
-[VM Generation Support](generation-2.md): Generation 2<br>
-[Trusted Launch](trusted-launch.md): Coming Soon<br>
-[Ephemeral OS Disks](ephemeral-os-disks.md): Supported for DCdsv3-series<br>
-[Dedicated Host](dedicated-hosts.md): Coming Soon<br>
-[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not Supported <br>
-
-## DCsv3-series Technical specifications
+There are two variants for each series, depending on whether the workload benefits from a local disk or not. You can attach remote persistent disk storage to all VMs, whether or not the VM has a local disk. As always, remote disk options (such as for the VM boot disk) are billed separately from the VMs in any case.
+
+Dcsv3-series instances run on a 3rd Generation Intel&reg; Xeon Scalable Processor 8370C. The base All-Core frequency is 2.8 GHz. [Turbo Boost Max 3.0](https://www.intel.com/content/www/us/en/gaming/resources/turbo-boost.html) is enabled with a max frequency of 3.5 GHz.
+
+- [Premium Storage](premium-storage-performance.md): Supported
+- [Live Migration](maintenance-and-updates.md): Not supported
+- [Memory Preserving Updates](maintenance-and-updates.md): Not supported
+- [VM Generation Support](generation-2.md): Generation 2
+- [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported
+- [Ephemeral OS Disks](ephemeral-os-disks.md): Supported
+- [Ultra-Disk Storage](disks-enable-ultra-ssd.md): Supported
+- [Azure Kubernetes Service](../aks/intro-kubernetes.md): Supported (CLI provisioning only)
+- [Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not Supported
+- [Hyper-Threading](https://www.intel.com/content/www/us/en/gaming/resources/hyper-threading.html): Not supported
+- [Trusted Launch](trusted-launch.md): Not supported
+- [Dedicated Host](dedicated-hosts.md): Not supported
++
+## DCsv3-series
| Size | Physical Cores | Memory GB | Temp storage (SSD) GiB | Max data disks | Max NICs | EPC Memory GB |
|--|--|--|--|--|--|--|
Base All-Core Frequency: 2.8 GHz<br>
| Standard_DC32s_v3 | 32 | 256 | Remote Storage Only | 32 | 8 | 192 |
| Standard_DC48s_v3 | 48 | 384 | Remote Storage Only | 32 | 8 | 256 |
-## DCdsv3-series Technical specifications
+## DCdsv3-series
| Size | Physical Cores | Memory GB | Temp storage (SSD) GiB | Max data disks | Max NICs | EPC Memory GB |
|--|--|--|--|--|--|--|
Base All-Core Frequency: 2.8 GHz<br>
| Standard_DC32ds_v3 | 32 | 256 | 2400 | 32 | 8 | 192 |
| Standard_DC48ds_v3 | 48 | 384 | 2400 | 32 | 8 | 256 |
-## Get started
--- Create DCsv3 and DCdsv3 VMs using the [Azure portal](./linux/quick-create-portal.md)-- DCsv3 and DCdsv3 VMs are [Generation 2 VMs](./generation-2.md#creating-a-generation-2-vm) and only support `Gen2` images.-- Currently available in the regions listed in [Azure Products by Region](https://azure.microsoft.com/global-infrastructure/services/?products=virtual-machines&regions=all). ## More sizes and information
Base All-Core Frequency: 2.8 GHz<br>
- [Previous generations](sizes-previous-gen.md) - [Pricing Calculator](https://azure.microsoft.com/pricing/calculator/)
-Pricing Calculator : [Pricing Calculator](https://azure.microsoft.com/pricing/calculator/)
+## Next steps
+
+- Create DCsv3 and DCdsv3 VMs using the [Azure portal](./linux/quick-create-portal.md)
+- DCsv3 and DCdsv3 VMs are [Generation 2 VMs](./generation-2.md#creating-a-generation-2-vm) and only support `Gen2` images.
+- Currently available in the regions listed in [Azure Products by Region](https://azure.microsoft.com/global-infrastructure/services/?products=virtual-machines&regions=all).
+
+Pricing Calculator: [Pricing Calculator](https://azure.microsoft.com/pricing/calculator/)
Learn more about how [Azure compute units (ACU)](acu.md) can help you compare compute performance across Azure SKUs.
virtual-machines Ddv4 Ddsv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ddv4-ddsv4-series.md
-+ Last updated 06/01/2020
virtual-machines Ddv5 Ddsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ddv5-ddsv5-series.md
-+ Last updated 10/20/2021
virtual-machines Dedicated Host Compute Optimized Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dedicated-host-compute-optimized-skus.md
The following packing configuration outlines the max packing of uniform VMs you
### Fsv2-Type3
-The Fsv2-Type3 is a Dedicated Host SKU utilizing the Intel® Cascade Lake (Xeon® Platinum 8272CL) processor. It offers 52 physical cores, 84 vCPUs, and 504 GiB of RAM. The Fsv2-Type3 runs [Fsv2-series](fsv2-series.md) VMs.
+The Fsv2-Type3 is a Dedicated Host SKU utilizing the Intel® Cascade Lake (Xeon® Platinum 8272CL) processor. It offers 52 physical cores, 80 vCPUs, and 504 GiB of RAM. The Fsv2-Type3 runs [Fsv2-series](fsv2-series.md) VMs.
The following packing configuration outlines the max packing of uniform VMs you can put onto a Fsv2-Type3 host.

| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
|--|--|--|--|--|
-| 52 | 84 | 504 GiB | F2s v2 | 32 |
+| 52 | 80 | 504 GiB | F2s v2 | 32 |
| | | | F4s v2 | 21 |
| | | | F8s v2 | 10 |
| | | | F16s v2 | 5 |
The following packing configuration outlines the max packing of uniform VMs you
### Fsv2-Type4
-The Fsv2-Type4 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 96 vCPUs, and 768 GiB of RAM. The Fsv2-Type4 runs [Fsv2-series](fsv2-series.md) VMs.
+The Fsv2-Type4 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 119 vCPUs, and 768 GiB of RAM. The Fsv2-Type4 runs [Fsv2-series](fsv2-series.md) VMs.
The following packing configuration outlines the max packing of uniform VMs you can put onto a Fsv2-Type4 host.

| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
|--|--|--|--|--|
-| 64 | 96 | 768 GiB | F2s v2 | 32 |
+| 64 | 119 | 768 GiB | F2s v2 | 32 |
| | | | F4s v2 | 24 |
| | | | F8s v2 | 12 |
| | | | F16s v2 | 6 |
virtual-machines Dedicated Host General Purpose Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dedicated-host-general-purpose-skus.md
The following packing configuration outlines the max packing of uniform VMs you
## Ddsv5

### Ddsv5-Type1
-The Ddsv5-Type1 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 96 vCPUs, and 768 GiB of RAM. The Ddsv5-Type1 runs [Ddsv5-series](ddv5-ddsv5-series.md#ddsv5-series) VMs. Please refer to the VM size documentation to better understand specific VM performance information.
+The Ddsv5-Type1 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 119 vCPUs, and 768 GiB of RAM. The Ddsv5-Type1 runs [Ddsv5-series](ddv5-ddsv5-series.md#ddsv5-series) VMs. Please refer to the VM size documentation to better understand specific VM performance information.
The following packing configuration outlines the max packing of uniform VMs you can put onto a Ddsv5-Type1 host.

| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
|--|--|--|--|--|
-| 64 | 96 | 768 GiB | D2ds v5 | 32 |
+| 64 | 119 | 768 GiB | D2ds v5 | 32 |
| | | | D4ds v5 | 22 |
| | | | D8ds v5 | 11 |
| | | | D16ds v5 | 5 |
The following packing configuration outlines the max packing of uniform VMs you
## Dsv5

### Dsv5-Type1
-The Dsv5-Type1 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 100 vCPUs, and 768 GiB of RAM. The Dsv5-Type1 runs [Dsv5-series](dv5-dsv5-series.md#dsv5-series) VMs. Please refer to the VM size documentation to better understand specific VM performance information.
+The Dsv5-Type1 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 119 vCPUs, and 768 GiB of RAM. The Dsv5-Type1 runs [Dsv5-series](dv5-dsv5-series.md#dsv5-series) VMs. Please refer to the VM size documentation to better understand specific VM performance information.
The following packing configuration outlines the max packing of uniform VMs you can put onto a Dsv5-Type1 host.

| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
|--|--|--|--|--|
-| 64 | 100 | 768 GiB | D2s v5 | 32 |
+| 64 | 119 | 768 GiB | D2s v5 | 32 |
| | | | D4s v5 | 25 |
| | | | D8s v5 | 12 |
| | | | D16s v5 | 6 |
You can also mix multiple VM sizes on the Dasv4-Type1. The following are sample
- 20 D4asv4 + 8 D2asv4

### Dasv4-Type2
-The Dasv4-Type2 is a Dedicated Host SKU utilizing AMD's EPYCΓäó 7763v processor. It offers 64 physical cores, 110 vCPUs, and 768 GiB of RAM. The Dasv4-Type2 runs [Dasv4-series](dav4-dasv4-series.md#dasv4-series) VMs. Please refer to the VM size documentation to better understand specific VM performance information.
+The Dasv4-Type2 is a Dedicated Host SKU utilizing AMD's EPYC™ 7763v processor. It offers 64 physical cores, 112 vCPUs, and 768 GiB of RAM. The Dasv4-Type2 runs [Dasv4-series](dav4-dasv4-series.md#dasv4-series) VMs. Please refer to the VM size documentation to better understand specific VM performance information.
The following packing configuration outlines the max packing of uniform VMs you can put onto a Dasv4-Type2 host.

| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
|--|--|--|--|--|
-| 64 | 110 | 768 GiB | D2as v4 | 32 |
+| 64 | 112 | 768 GiB | D2as v4 | 32 |
| | | | D4as v4 | 25 |
| | | | D8as v4 | 12 |
| | | | D16as v4 | 6 |
The following packing configuration outlines the max packing of uniform VMs you
## Ddsv4

### Ddsv4-Type1
-The Ddsv4-Type1 is a Dedicated Host SKU utilizing the Intel® Cascade Lake (Xeon® Platinum 8272CL) processor. It offers 52 physical cores, 68 vCPUs, and 504 GiB of RAM. The Ddsv4-Type1 runs [Ddsv4-series](ddv4-ddsv4-series.md#ddsv4-series) VMs.
+The Ddsv4-Type1 is a Dedicated Host SKU utilizing the Intel® Cascade Lake (Xeon® Platinum 8272CL) processor. It offers 52 physical cores, 80 vCPUs, and 504 GiB of RAM. The Ddsv4-Type1 runs [Ddsv4-series](ddv4-ddsv4-series.md#ddsv4-series) VMs.
The following packing configuration outlines the max packing of uniform VMs you can put onto a Ddsv4-Type1 host.

| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
|--|--|--|--|--|
-| 52 | 68 | 504 GiB | D2ds v4 | 32 |
+| 52 | 80 | 504 GiB | D2ds v4 | 32 |
| | | | D4ds v4 | 17 |
| | | | D8ds v4 | 8 |
| | | | D16ds v4 | 4 |
You can also mix multiple VM sizes on the Ddsv4-Type1. The following are sample
- 10 D4dsv4 + 14 D2dsv4

### Ddsv4-Type2
-The Ddsv4-Type2 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 76 vCPUs, and 768 GiB of RAM. The Ddsv4-Type2 runs [Ddsv4-series](ddv4-ddsv4-series.md#ddsv4-series) VMs.
+The Ddsv4-Type2 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 119 vCPUs, and 768 GiB of RAM. The Ddsv4-Type2 runs [Ddsv4-series](ddv4-ddsv4-series.md#ddsv4-series) VMs.
The following packing configuration outlines the max packing of uniform VMs you can put onto a Ddsv4-Type2 host.

| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
|--|--|--|--|--|
-| 64 | 76 | 768 GiB | D2ds v4 | 32 |
+| 64 | 119 | 768 GiB | D2ds v4 | 32 |
| | | | D4ds v4 | 19 |
| | | | D8ds v4 | 9 |
| | | | D16ds v4 | 4 |
You can also mix multiple VM sizes on the Dsv4-Type1. The following are sample c
### Dsv4-Type2
-The Dsv4-Type2 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 96 vCPUs, and 768 GiB of RAM. The Dsv4-Type2 runs [Dsv4-series](dv4-dsv4-series.md#dsv4-series) VMs.
+The Dsv4-Type2 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 119 vCPUs, and 768 GiB of RAM. The Dsv4-Type2 runs [Dsv4-series](dv4-dsv4-series.md#dsv4-series) VMs.
The following packing configuration outlines the max packing of uniform VMs you can put onto a Dsv4-Type2 host.

| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
|--|--|--|--|--|
-| 64 | 96 | 768 GiB | D2s v4 | 32 |
+| 64 | 119 | 768 GiB | D2s v4 | 32 |
| | | | D4s v4 | 25 |
| | | | D8s v4 | 12 |
| | | | D16s v4 | 6 |
The following packing configuration outlines the max packing of uniform VMs you
## Dsv3

### Dsv3-Type1
-The Dsv3-Type1 is a Dedicated Host SKU utilizing the Intel® Broadwell (2.3 GHz Xeon® E5-2673 v4) processor. It offers 40 physical cores, 68 vCPUs, and 256 GiB of RAM. The Dsv3-Type1 runs [Dsv3-series](dv3-dsv3-series.md#dsv3-series) VMs.
+The Dsv3-Type1 is a Dedicated Host SKU utilizing the Intel® Broadwell (2.3 GHz Xeon® E5-2673 v4) processor. It offers 40 physical cores, 64 vCPUs, and 256 GiB of RAM. The Dsv3-Type1 runs [Dsv3-series](dv3-dsv3-series.md#dsv3-series) VMs.
The following packing configuration outlines the max packing of uniform VMs you can put onto a Dsv3-Type1 host.

| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
|--|--|--|--|--|
-| 40 | 68 | 256 GiB | D2s v3 | 32 |
+| 40 | 64 | 256 GiB | D2s v3 | 32 |
| | | | D4s v3 | 17 |
| | | | D8s v3 | 8 |
| | | | D16s v3 | 4 |
You can also mix multiple VM sizes on the Dsv3-Type1. The following are sample c
### Dsv3-Type2
-The Dsv3-Type2 is a Dedicated Host SKU utilizing the Intel® Skylake (2.1 GHz Xeon® Platinum 8171M) processor. It offers 48 physical cores, 80 vCPUs, and 504 GiB of RAM. The Dsv3-Type2 runs [Dsv3-series](dv3-dsv3-series.md#dsv3-series) VMs.
+The Dsv3-Type2 is a Dedicated Host SKU utilizing the Intel® Skylake (2.1 GHz Xeon® Platinum 8171M) processor. It offers 48 physical cores, 76 vCPUs, and 504 GiB of RAM. The Dsv3-Type2 runs [Dsv3-series](dv3-dsv3-series.md#dsv3-series) VMs.
The following packing configuration outlines the max packing of uniform VMs you can put onto a Dsv3-Type2 host. | Physical cores | Available vCPUs | Available RAM | VM Size | # VMs | |-|--|||-|
-| 48 | 80 | 504 GiB | D2s v3 | 32 |
+| 48 | 76 | 504 GiB | D2s v3 | 32 |
| | | | D4s v3 | 20 | | | | | D8s v3 | 10 | | | | | D16s v3 | 5 |
You can also mix multiple VM sizes on the Dsv3-Type2. The following are sample c
### Dsv3-Type3
-The Dsv3-Type3 is a Dedicated Host SKU utilizing the Intel® Cascade Lake (Xeon® Platinum 8272CL) processor. It offers 52 physical cores, 84 vCPUs, and 504 GiB of RAM. The Dsv3-Type3 runs [Dsv3-series](dv3-dsv3-series.md#dsv3-series) VMs.
+The Dsv3-Type3 is a Dedicated Host SKU utilizing the Intel® Cascade Lake (Xeon® Platinum 8272CL) processor. It offers 52 physical cores, 80 vCPUs, and 504 GiB of RAM. The Dsv3-Type3 runs [Dsv3-series](dv3-dsv3-series.md#dsv3-series) VMs.
The following packing configuration outlines the max packing of uniform VMs you can put onto a Dsv3-Type3 host. | Physical cores | Available vCPUs | Available RAM | VM Size | # VMs | |-|--|||-|
-| 52 | 84 | 504 GiB | D2s v3 | 32 |
+| 52 | 80 | 504 GiB | D2s v3 | 32 |
| | | | D4s v3 | 21 | | | | | D8s v3 | 10 | | | | | D16s v3 | 5 |
You can also mix multiple VM sizes on the Dsv3-Type3. The following are sample c
### Dsv3-Type4
-The Dsv3-Type4 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 96 vCPUs, and 768 GiB of RAM. The Dsv3-Type4 runs [Dsv3-series](dv3-dsv3-series.md#dsv3-series) VMs.
+The Dsv3-Type4 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 119 vCPUs, and 768 GiB of RAM. The Dsv3-Type4 runs [Dsv3-series](dv3-dsv3-series.md#dsv3-series) VMs.
The following packing configuration outlines the max packing of uniform VMs you can put onto a Dsv3-Type4 host. | Physical cores | Available vCPUs | Available RAM | VM Size | # VMs | |-|--|||-|
-| 64 | 96 | 768 GiB | D2s v3 | 32 |
+| 64 | 119 | 768 GiB | D2s v3 | 32 |
| | | | D4s v3 | 24 | | | | | D8s v3 | 12 | | | | | D16s v3 | 6 |
virtual-machines Dedicated Host Gpu Optimized Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dedicated-host-gpu-optimized-skus.md
The following packing configuration outlines the max packing of uniform VMs you
## NVsv3 ### NVsv3-Type1
-The NVsv3-Type1 is a Dedicated Host SKU utilizing the Intel® Broadwell (E5-2690 v4) processor with NVDIDIA Tesla M60 GPUs and NVIDIA GRID technology. It offers 28 physical cores, 48 vCPUs, and 448 GiB of RAM. The NVsv3-Type1 runs [NVv3-series](nvv3-series.md) VMs.
+The NVsv3-Type1 is a Dedicated Host SKU utilizing the Intel® Broadwell (E5-2690 v4) processor with NVIDIA Tesla M60 GPUs and NVIDIA GRID technology. It offers 28 physical cores, 48 vCPUs, and 448 GiB of RAM. The NVsv3-Type1 runs [NVv3-series](nvv3-series.md) VMs.
The following packing configuration outlines the max packing of uniform VMs you can put onto a NVsv3-Type1 host.
virtual-machines Dedicated Host Memory Optimized Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dedicated-host-memory-optimized-skus.md
The sizes and hardware types available for dedicated hosts vary by region. Refer
## Eadsv5 ### Eadsv5-Type1
-The Eadsv5-Type1 is a Dedicated Host SKU utilizing AMD's EPYC™ 7763v processor. It offers 64 physical cores, 96 vCPUs, and 768 GiB of RAM. The Eadsv5-Type1 runs [Eadsv5-series](easv5-eadsv5-series.md#eadsv5-series) VMs.
+The Eadsv5-Type1 is a Dedicated Host SKU utilizing AMD's EPYC™ 7763v processor. It offers 64 physical cores, 112 vCPUs, and 768 GiB of RAM. The Eadsv5-Type1 runs [Eadsv5-series](easv5-eadsv5-series.md#eadsv5-series) VMs.
-The following packing configuration outlines the max packing of uniform VMs you can put onto a Eadsv5-Type1 host.
+The following packing configuration outlines the max packing of uniform VMs you can put onto an Eadsv5-Type1 host.
| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs | |-|--||--|-|
-| 64 | 96 | 768 GiB | E2ads v5 | 32 |
+| 64 | 112 | 768 GiB | E2ads v5 | 32 |
| | | | E4ads v5 | 21 | | | | | E8ads v5 | 10 | | | | | E16ads v5 | 5 |
The following packing configuration outlines the max packing of uniform VMs you
## Easv5 ### Easv5-Type1
-The Easv5-Type1 is a Dedicated Host SKU utilizing AMD's EPYC™ 7763v processor. It offers 64 physical cores, 96 vCPUs, and 768 GiB of RAM. The Easv5-Type1 runs [Easv5-series](easv5-eadsv5-series.md#easv5-series) VMs.
+The Easv5-Type1 is a Dedicated Host SKU utilizing AMD's EPYC™ 7763v processor. It offers 64 physical cores, 112 vCPUs, and 768 GiB of RAM. The Easv5-Type1 runs [Easv5-series](easv5-eadsv5-series.md#easv5-series) VMs.
-The following packing configuration outlines the max packing of uniform VMs you can put onto a Easv5-Type1 host.
+The following packing configuration outlines the max packing of uniform VMs you can put onto an Easv5-Type1 host.
| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs | |-|--||-|-|
-| 64 | 96 | 768 GiB | E2as v5 | 32 |
+| 64 | 112 | 768 GiB | E2as v5 | 32 |
| | | | E4as v5 | 21 | | | | | E8as v5 | 10 | | | | | E16as v5 | 5 |
The following packing configuration outlines the max packing of uniform VMs you
## Edsv5 ### Edsv5-Type1
-The Edsv5-Type1 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 96 vCPUs, and 768 GiB of RAM. The Edsv5-Type1 runs [Edsv5-series](edv5-edsv5-series.md#edsv5-series) VMs.
+The Edsv5-Type1 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 119 vCPUs, and 768 GiB of RAM. The Edsv5-Type1 runs [Edsv5-series](edv5-edsv5-series.md#edsv5-series) VMs.
-The following packing configuration outlines the max packing of uniform VMs you can put onto a Edsv5-Type1 host.
+The following packing configuration outlines the max packing of uniform VMs you can put onto an Edsv5-Type1 host.
| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs | |-|--||-|-|
-| 64 | 96 | 768 GiB | E2ds v5 | 32 |
+| 64 | 119 | 768 GiB | E2ds v5 | 32 |
| | | | E4ds v5 | 21 | | | | | E8ds v5 | 10 | | | | | E16ds v5 | 5 |
The following packing configuration outlines the max packing of uniform VMs you
## Esv5 ### Esv5-Type1
-The Esv5-Type1 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 84 vCPUs, and 768 GiB of RAM. The Esv5-Type1 runs [Esv5-series](ev5-esv5-series.md#esv5-series) VMs.
+The Esv5-Type1 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 119 vCPUs, and 768 GiB of RAM. The Esv5-Type1 runs [Esv5-series](ev5-esv5-series.md#esv5-series) VMs.
-The following packing configuration outlines the max packing of uniform VMs you can put onto a Esv5-Type1 host.
+The following packing configuration outlines the max packing of uniform VMs you can put onto an Esv5-Type1 host.
| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs | |-|--|||-|
-| 64 | 84 | 768 GiB | E2s v5 | 32 |
+| 64 | 119 | 768 GiB | E2s v5 | 32 |
| | | | E4s v5 | 21 | | | | | E8s v5 | 10 | | | | | E16s v5 | 5 |
The following packing configuration outlines the max packing of uniform VMs you
The Easv4-Type1 is a Dedicated Host SKU utilizing AMD's 2.35 GHz EPYC™ 7452 processor. It offers 64 physical cores, 96 vCPUs, and 672 GiB of RAM. The Easv4-Type1 runs [Easv4-series](eav4-easv4-series.md#easv4-series) VMs.
-The following packing configuration outlines the max packing of uniform VMs you can put onto a Easv4-Type1 host.
+The following packing configuration outlines the max packing of uniform VMs you can put onto an Easv4-Type1 host.
| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs | |-|--||-|-|
The following packing configuration outlines the max packing of uniform VMs you
### Easv4-Type2
-The Easv4-Type2 is a Dedicated Host SKU utilizing AMD's EPYC™ 7763v processor. It offers 64 physical cores, 96 vCPUs, and 768 GiB of RAM. The Easv4-Type2 runs [Easv4-series](eav4-easv4-series.md#easv4-series) VMs.
+The Easv4-Type2 is a Dedicated Host SKU utilizing AMD's EPYC™ 7763v processor. It offers 64 physical cores, 112 vCPUs, and 768 GiB of RAM. The Easv4-Type2 runs [Easv4-series](eav4-easv4-series.md#easv4-series) VMs.
-The following packing configuration outlines the max packing of uniform VMs you can put onto a Easv4-Type2 host.
+The following packing configuration outlines the max packing of uniform VMs you can put onto an Easv4-Type2 host.
| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs | |-|--||-|-|
-| 64 | 96 | 768 GiB | E2as v4 | 32 |
+| 64 | 112 | 768 GiB | E2as v4 | 32 |
| | | | E4as v4 | 21 | | | | | E8as v4 | 10 | | | | | E16as v4 | 5 |
The following packing configuration outlines the max packing of uniform VMs you
## Edsv4 ### Edsv4-Type1
-The Edsv4-Type1 is a Dedicated Host SKU utilizing the Intel® Cascade Lake (Xeon® Platinum 8272CL) processor. It offers 52 physical cores, 64 vCPUs, and 504 GiB of RAM. The Edsv4-Type1 runs [Edsv4-series](edv4-edsv4-series.md#edsv4-series) VMs.
+The Edsv4-Type1 is a Dedicated Host SKU utilizing the Intel® Cascade Lake (Xeon® Platinum 8272CL) processor. It offers 52 physical cores, 80 vCPUs, and 504 GiB of RAM. The Edsv4-Type1 runs [Edsv4-series](edv4-edsv4-series.md#edsv4-series) VMs.
-The following packing configuration outlines the max packing of uniform VMs you can put onto a Edsv4-Type1 host.
+The following packing configuration outlines the max packing of uniform VMs you can put onto an Edsv4-Type1 host.
| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs | |-|--||-|-|
-| 52 | 64 | 504 GiB | E2ds v4 | 31 |
+| 52 | 80 | 504 GiB | E2ds v4 | 31 |
| | | | E4ds v4 | 15 | | | | | E8ds v4 | 7 | | | | | E16ds v4 | 3 |
The following packing configuration outlines the max packing of uniform VMs you
### Edsv4-Type2
-The Edsv4-Type2 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 76 vCPUs, and 768 GiB of RAM. The Edsv4-Type2 runs [Edsv4-series](edv4-edsv4-series.md#edsv4-series) VMs.
+The Edsv4-Type2 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 119 vCPUs, and 768 GiB of RAM. The Edsv4-Type2 runs [Edsv4-series](edv4-edsv4-series.md#edsv4-series) VMs.
-The following packing configuration outlines the max packing of uniform VMs you can put onto a Edsv4-Type2 host.
+The following packing configuration outlines the max packing of uniform VMs you can put onto an Edsv4-Type2 host.
| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs | |-|--||-|-|
-| 64 | 76 | 768 GiB | E2ds v4 | 32 |
+| 64 | 119 | 768 GiB | E2ds v4 | 32 |
| | | | E4ds v4 | 19 | | | | | E8ds v4 | 9 | | | | | E16ds v4 | 4 |
The following packing configuration outlines the max packing of uniform VMs you
## Esv4 ### Esv4-Type1
-The Esv4-Type1 is a Dedicated Host SKU utilizing the Intel® Cascade Lake (Xeon® Platinum 8272CL) processor. It offers 52 physical cores, 64 vCPUs, and 504 GiB of RAM. The Esv4-Type1 runs [Esv4-series](ev4-esv4-series.md#esv4-series) VMs.
+The Esv4-Type1 is a Dedicated Host SKU utilizing the Intel® Cascade Lake (Xeon® Platinum 8272CL) processor. It offers 52 physical cores, 80 vCPUs, and 504 GiB of RAM. The Esv4-Type1 runs [Esv4-series](ev4-esv4-series.md#esv4-series) VMs.
-The following packing configuration outlines the max packing of uniform VMs you can put onto a Esv4-Type1 host.
+The following packing configuration outlines the max packing of uniform VMs you can put onto an Esv4-Type1 host.
| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs | |-|--|||-|
-| 52 | 64 | 504 GiB | E2s v4 | 31 |
+| 52 | 80 | 504 GiB | E2s v4 | 31 |
| | | | E4s v4 | 15 | | | | | E8s v4 | 7 | | | | | E16s v4 | 3 |
The following packing configuration outlines the max packing of uniform VMs you
### Esv4-Type2
-The Esv4-Type2 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 84 vCPUs, and 768 GiB of RAM. The Esv4-Type2 runs [Esv4-series](ev4-esv4-series.md#esv4-series) VMs.
+The Esv4-Type2 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 119 vCPUs, and 768 GiB of RAM. The Esv4-Type2 runs [Esv4-series](ev4-esv4-series.md#esv4-series) VMs.
-The following packing configuration outlines the max packing of uniform VMs you can put onto a Esv4-Type2 host.
+The following packing configuration outlines the max packing of uniform VMs you can put onto an Esv4-Type2 host.
| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs | |-|--|||-|
-| 64 | 84 | 768 GiB | E2s v4 | 32 |
+| 64 | 119 | 768 GiB | E2s v4 | 32 |
| | | | E4s v4 | 21 | | | | | E8s v4 | 10 | | | | | E16s v4 | 5 |
The following packing configuration outlines the max packing of uniform VMs you
The Esv3-Type1 is a Dedicated Host SKU utilizing the Intel® Broadwell (2.3 GHz Xeon® E5-2673 v4) processor. It offers 40 physical cores, 64 vCPUs, and 448 GiB of RAM. The Esv3-Type1 runs [Esv3-series](ev3-esv3-series.md#ev3-series) VMs.
-The following packing configuration outlines the max packing of uniform VMs you can put onto a Esv3-Type1 host.
+The following packing configuration outlines the max packing of uniform VMs you can put onto an Esv3-Type1 host.
| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs | |-|--|||-|
The following packing configuration outlines the max packing of uniform VMs you
### Esv3-Type2
-The Esv3-Type2 is a Dedicated Host SKU utilizing the Intel® Skylake (Xeon® 8171M) processor. It offers 48 physical cores, 64 vCPUs, and 504 GiB of RAM. The Esv3-Type2 runs [Esv3-series](ev3-esv3-series.md#ev3-series) VMs.
+The Esv3-Type2 is a Dedicated Host SKU utilizing the Intel® Skylake (Xeon® 8171M) processor. It offers 48 physical cores, 76 vCPUs, and 504 GiB of RAM. The Esv3-Type2 runs [Esv3-series](ev3-esv3-series.md#ev3-series) VMs.
-The following packing configuration outlines the max packing of uniform VMs you can put onto a Esv3-Type2 host.
+The following packing configuration outlines the max packing of uniform VMs you can put onto an Esv3-Type2 host.
| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs | |-|--|||-|
-| 48 | 64 | 504 GiB | E2s v3 | 31 |
+| 48 | 76 | 504 GiB | E2s v3 | 31 |
| | | | E4s v3 | 15 | | | | | E8s v3 | 7 | | | | | E16s v3 | 3 |
The following packing configuration outlines the max packing of uniform VMs you
### Esv3-Type3
-The Esv3-Type3 is a Dedicated Host SKU utilizing the Intel® Cascade Lake (Xeon® Platinum 8272CL) processor. It offers 52 physical cores, 64 vCPUs, and 504 GiB of RAM. The Esv3-Type3 runs [Esv3-series](ev3-esv3-series.md#ev3-series) VMs.
+The Esv3-Type3 is a Dedicated Host SKU utilizing the Intel® Cascade Lake (Xeon® Platinum 8272CL) processor. It offers 52 physical cores, 80 vCPUs, and 504 GiB of RAM. The Esv3-Type3 runs [Esv3-series](ev3-esv3-series.md#ev3-series) VMs.
-The following packing configuration outlines the max packing of uniform VMs you can put onto a Esv3-Type3 host.
+The following packing configuration outlines the max packing of uniform VMs you can put onto an Esv3-Type3 host.
| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs | |-|--|||-|
-| 52 | 64 | 504 GiB | E2s v3 | 31 |
+| 52 | 80 | 504 GiB | E2s v3 | 31 |
| | | | E4s v3 | 15 | | | | | E8s v3 | 7 | | | | | E16s v3 | 3 |
The following packing configuration outlines the max packing of uniform VMs you
### Esv3-Type4
-The Esv3-Type4 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 84 vCPUs, and 768 GiB of RAM. The Esv3-Type4 runs [Esv3-series](ev3-esv3-series.md#ev3-series) VMs.
+The Esv3-Type4 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 119 vCPUs, and 768 GiB of RAM. The Esv3-Type4 runs [Esv3-series](ev3-esv3-series.md#ev3-series) VMs.
-The following packing configuration outlines the max packing of uniform VMs you can put onto a Esv3-Type4 host.
+The following packing configuration outlines the max packing of uniform VMs you can put onto an Esv3-Type4 host.
| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs | |-|--|||-|
-| 64 | 84 | 768 GiB | E2s v3 | 32 |
+| 64 | 119 | 768 GiB | E2s v3 | 32 |
| | | | E4s v3 | 21 | | | | | E8s v3 | 10 | | | | | E16s v3 | 5 |
The Mdsv2MedMem-Type1 is a Dedicated Host SKU utilizing the Intel® Cascade Lake
- For more information, see the [Dedicated hosts](dedicated-hosts.md) overview. -- There is sample template, available at [Azure quickstart templates](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.compute/vm-dedicated-hosts/README.md), that uses both zones and fault domains for maximum resiliency in a region.
+- There's a sample template, available at [Azure Quickstart Templates](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.compute/vm-dedicated-hosts/README.md), that uses both zones and fault domains for maximum resiliency in a region.
virtual-machines Dedicated Hosts How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dedicated-hosts-how-to.md
You can also decide to use both availability zones and fault domains.
### [Portal](#tab/portal)
-In this example, we will create a host group using one availability zone and two fault domains.
+In this example, we'll create a host group using one availability zone and two fault domains.
1. Open the Azure [portal](https://portal.azure.com). 1. Select **Create a resource** in the upper left corner.
Not all host SKUs are available in all regions, and availability zones. You can
az vm list-skus -l eastus2 -r hostGroups/hosts -o table ```
-In this example, we will use [az vm host group create](/cli/azure/vm/host/group#az-vm-host-group-create) to create a host group using both availability zones and fault domains.
+In this example, we'll use [az vm host group create](/cli/azure/vm/host/group#az-vm-host-group-create) to create a host group using both availability zones and fault domains.
```azurecli-interactive az vm host group create \
If you set a fault domain count for your host group, you'll need to specify the
### [CLI](#tab/cli)
-Use [az vm host create](/cli/azure/vm/host#az-vm-host-create) to create a host. If you set a fault domain count for your host group, you will be asked to specify the fault domain for your host.
+Use [az vm host create](/cli/azure/vm/host#az-vm-host-create) to create a host. If you set a fault domain count for your host group, you'll be asked to specify the fault domain for your host.
```azurecli-interactive az vm host create \
You can add an existing VM to a dedicated host, but the VM must first be Stop\De
- The VM can't be in an availability set. - If the VM is in an availability zone, it must be the same availability zone as the host group. The availability zone settings for the VM and the host group must match.
-### [Portal](#tab/portal2)
+### [Portal](#tab/portal)
Move the VM to a dedicated host using the [portal](https://portal.azure.com).
Move the VM to a dedicated host using the [portal](https://portal.azure.com).
1. At the top of the page, select **Start** to restart the VM.
-### [PowerShell](#tab/powershell2)
+### [CLI](#tab/cli)
+
+Move the existing VM to a dedicated host using the CLI. The VM must be stopped and deallocated using [az vm deallocate](/cli/azure/vm#az-vm-deallocate) before you can assign it to a dedicated host.
+
+Replace the values with your own information.
+
+```azurecli-interactive
+az vm deallocate -n myVM -g myResourceGroup
+az vm update -n myVM -g myResourceGroup --host myHost
+az vm start -n myVM -g myResourceGroup
+```
+
+For automatically placed VMs, only update the host group. For more information, see [Manual vs. automatic placement](dedicated-hosts.md#manual-vs-automatic-placement).
+
+Replace the values with your own information.
+
+```azurecli-interactive
+az vm deallocate -n myVM -g myResourceGroup
+az vm update -n myVM -g myResourceGroup --host-group myHostGroup
+az vm start -n myVM -g myResourceGroup
+```
+
+### [PowerShell](#tab/powershell)
Replace the values of the variables with your own information.
Remove-AzResourceGroup -Name $rgName
- For more information, see the [Dedicated hosts](dedicated-hosts.md) overview. -- There's sample template, available at [Azure quickstart templates](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.compute/vm-dedicated-hosts/README.md), that uses both zones and fault domains for maximum resiliency in a region.
+- There's a sample template, available at [Azure Quickstart Templates](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.compute/vm-dedicated-hosts/README.md), that uses both zones and fault domains for maximum resiliency in a region.
virtual-machines Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/delete.md
PATCH https://management.azure.com/subscriptions/subID/resourceGroups/resourcegr
## Force Delete for VMs
-Force delete allows you to forcefully delete your virtual machine, reducing delete latency and immediately freeing up attached resources. Force delete should only be used when you are not intending to re-use virtual hard disks. You can use force delete through Portal, CLI, PowerShell, and Rest API.
+Force delete allows you to forcefully delete your virtual machine, reducing delete latency and immediately freeing up attached resources. Use force delete only when you don't intend to reuse the virtual hard disks. You can use force delete through the portal, CLI, PowerShell, and REST API.
### [Portal](#tab/portal3)
You can use the Azure REST API to apply force delete to your virtual machines. U
## Force Delete for virtual machine scale sets
-Force delete allows you to forcefully delete your **Uniform** virtual machine scale sets, reducing delete latency and immediately freeing up attached resources. Force delete should only be used when you are not intending to re-use virtual hard disks. You can use force delete through Portal, CLI, PowerShell, and Rest API.
+Force delete allows you to forcefully delete your **Uniform** virtual machine scale sets, reducing delete latency and immediately freeing up attached resources. Use force delete only when you don't intend to reuse the virtual hard disks. You can use force delete through the portal, CLI, PowerShell, and REST API.
### [Portal](#tab/portal4)
virtual-machines Dv2 Dsv2 Series Memory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dv2-dsv2-series-memory.md
description: Specifications for the Dv2 and DSv2-series VMs.
-+ Last updated 02/03/2020
virtual-machines Dv2 Dsv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dv2-dsv2-series.md
description: Specifications for the Dv2 and Dsv2-series VMs.
-+ Last updated 02/03/2020
virtual-machines Dv3 Dsv3 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dv3-dsv3-series.md
description: Specifications for the Dv3 and Dsv3-series VMs.
-+ Last updated 09/22/2020
virtual-machines Dv4 Dsv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dv4-dsv4-series.md
description: Specifications for the Dv4 and Dsv4-series VMs.
-+ Last updated 06/08/2020
virtual-machines Dv5 Dsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dv5-dsv5-series.md
description: Specifications for the Dv5 and Dsv5-series VMs.
-+ Last updated 10/20/2021
virtual-machines Easv5 Eadsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/easv5-eadsv5-series.md
-+ Last updated 10/8/2021
virtual-machines Eav4 Easv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/eav4-easv4-series.md
Title: Eav4-series and Easv4-series
description: Specifications for the Eav4 and Easv4-series VMs. -+ Last updated 07/13/2021
virtual-machines Ecasv5 Ecadsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ecasv5-ecadsv5-series.md
-+ Last updated 11/15/2021
virtual-machines Edv4 Edsv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/edv4-edsv4-series.md
description: Specifications for the Ev4, Edv4, Esv4 and Edsv4-series VMs.
-+ Last updated 10/20/2021
virtual-machines Edv5 Edsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/edv5-edsv5-series.md
description: Specifications for the Edv5 and Edsv5-series VMs.
-+ Last updated 10/20/2021
virtual-machines Ev3 Esv3 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ev3-esv3-series.md
Title: Ev3-series and Esv3-series description: Specifications for the Ev3 and Esv3-series VMs. -+ Last updated 09/22/2020
virtual-machines Ev4 Esv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ev4-esv4-series.md
description: Specifications for the Ev4, and Esv4-series VMs.
-+ Last updated 6/8/2020
virtual-machines Ev5 Esv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ev5-esv5-series.md
description: Specifications for the Ev5 and Esv5-series VMs.
-+ Last updated 10/20/2021
virtual-machines Features Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/features-linux.md
Title: Azure VM extensions and features for Linux
+ Title: Azure VM extensions and features for Linux
description: Learn what extensions are available for Azure virtual machines on Linux, grouped by what they provide or improve.
Last updated 03/30/2018
# Virtual machine extensions and features for Linux
-Azure virtual machine (VM) extensions are small applications that provide post-deployment configuration and automation tasks on Azure VMs. For example, if a virtual machine requires software installation, antivirus protection, or the ability to run a script inside it, you can use a VM extension.
+Azure virtual machine (VM) extensions are small applications that provide post-deployment configuration and automation tasks on Azure VMs. For example, if a virtual machine requires software installation, antivirus protection, or the ability to run a script inside it, you can use a VM extension.
You can run Azure VM extensions by using the Azure CLI, PowerShell, Azure Resource Manager templates (ARM templates), and the Azure portal. You can bundle extensions with a new VM deployment or run them against any existing system.
-This article provides an overview of Azure VM extensions, prerequisites for using them, and guidance on how to detect, manage, and remove them. This article provides generalized information because many VM extensions are available. Each has a potentially unique configuration and its own documentation.
+This article provides an overview of Azure VM extensions, prerequisites for using them, and guidance on how to detect, manage, and remove them. This article provides generalized information because many VM extensions are available. Each has a potentially unique configuration and its own documentation.
## Use cases and samples
Each Azure VM extension has a specific use case. Examples include:
- Apply PowerShell desired state configurations (DSCs) to a VM by using the [DSC extension for Linux](https://github.com/Azure/azure-linux-extensions/tree/master/DSC). - Configure monitoring of a VM by using the [Microsoft Monitoring Agent VM extension](/previous-versions/azure/virtual-machines/linux/tutorial-monitor).-- Configure monitoring of your Azure infrastructure by using the [Chef](https://docs.chef.io/) or [Datadog](https://www.datadoghq.com/blog/introducing-azure-monitoring-with-one-click-datadog-deployment/) extension.
+- Configure monitoring of your Azure infrastructure by using the [Chef](https://docs.chef.io/) or [Datadog](https://www.datadoghq.com/blog/introducing-azure-monitoring-with-one-click-datadog-deployment/) extension.
-In addition to process-specific extensions, a Custom Script extension is available for both Windows and Linux virtual machines. The [Custom Script extension for Linux](custom-script-linux.md) allows any Bash script to be run on a VM. Custom scripts are useful for designing Azure deployments that require configuration beyond what native Azure tooling can provide.
+In addition to process-specific extensions, a Custom Script extension is available for both Windows and Linux virtual machines. The [Custom Script extension for Linux](custom-script-linux.md) allows any Bash script to be run on a VM. Custom scripts are useful for designing Azure deployments that require configuration beyond what native Azure tooling can provide.
## Prerequisites
In addition to process-specific extensions, a Custom Script extension is availab
To handle the extension on the VM, you need the [Azure Linux Agent](agent-linux.md) installed. Some individual extensions have prerequisites, such as access to resources or dependencies.
-The Azure Linux Agent manages interactions between an Azure VM and the Azure fabric controller. The agent is responsible for many functional aspects of deploying and managing Azure VMs, including running VM extensions.
+The Azure Linux Agent manages interactions between an Azure VM and the Azure fabric controller. The agent is responsible for many functional aspects of deploying and managing Azure VMs, including running VM extensions.
The Azure Linux Agent is preinstalled on Azure Marketplace images. It can also be installed manually on supported operating systems.
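If you're not sure whether the agent is installed and running on a VM, you can check it directly. This is a minimal sketch; the service name varies by distribution (for example, `walinuxagent` on Ubuntu and Debian, `waagent` on RHEL and SUSE):

```bash
# Check the agent service status (service name depends on the distribution)
systemctl status walinuxagent || systemctl status waagent

# Show the installed package version and the goal state agent version
waagent --version
```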
The agent runs on multiple operating systems. However, the extensions framework
### Network access
-Extension packages are downloaded from the Azure Storage extension repository. Extension status uploads are posted to Azure Storage.
+Extension packages are downloaded from the Azure Storage extension repository. Extension status uploads are posted to Azure Storage.
If you use a [supported version of the Azure Linux Agent](https://support.microsoft.com/en-us/help/4049215/extensions-and-virtual-machine-agent-minimum-version-support), you don't need to allow access to Azure Storage in the VM region. You can use the agent to redirect the communication to the Azure fabric controller for agent communications. If you're on an unsupported version of the agent, you need to allow outbound access to Azure Storage in that region from the VM.
To redirect agent traffic requests, the Azure Linux Agent has proxy server suppo
## Discover VM extensions
+### [Azure CLI](#tab/azure-cli)
+ Many VM extensions are available for use with Azure VMs. To see a complete list, use [az vm extension image list](/cli/azure/vm/extension/image#az-vm-extension-image-list). The following example lists all available extensions in the *westus* location: ```azurecli az vm extension image list --location westus --output table ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+Many VM extensions are available for use with Azure VMs. To see a complete list, use [Get-AzVMExtensionImage](/powershell/module/az.compute/get-azvmextensionimage). The following example lists all available extensions in the *westus* location:
+
+```azurepowershell
+Get-AzVmImagePublisher -Location "westus" |
+Get-AzVMExtensionImageType |
+Get-AzVMExtensionImage | Select-Object Type, PublisherName, Version
+```
+++ ## Run VM extensions Azure VM extensions run on existing VMs. That's useful when you need to make configuration changes or recover connectivity on an already deployed VM. VM extensions can also be bundled with ARM template deployments. By using extensions with ARM templates, you can deploy and configure Azure VMs without post-deployment intervention.
You can use the following methods to run an extension against an existing VM.
### Azure CLI
-You can run Azure VM extensions against an existing VM by using the [az vm extension set](/cli/azure/vm/extension#az-vm-extension-set) command. The following example runs the Custom Script extension against a VM named *myVM* in a resource group named *myResourceGroup*. Replace the example resource group name, VM name, and script to run (https:\//raw.githubusercontent.com/me/project/hello.sh) with your own information.
+You can run Azure VM extensions against an existing VM by using the [az vm extension set](/cli/azure/vm/extension#az-vm-extension-set) command. The following example runs the Custom Script extension against a VM named *myVM* in a resource group named *myResourceGroup*. Replace the example resource group name, VM name, and script to run (https:\//raw.githubusercontent.com/me/project/hello.sh) with your own information.
```azurecli az vm extension set \
info: Executing command vm extension set
info: vm extension set command OK ```
+### Azure PowerShell
+
+You can run Azure VM extensions against an existing VM by using the [Set-AzVMExtension](/powershell/module/az.compute/set-azvmextension) command. The following example runs the Custom Script extension against a VM named *myVM* in a resource group named *myResourceGroup*. Replace the example resource group name, VM name, and script to run (https:\//raw.githubusercontent.com/me/project/hello.sh) with your own information.
+
+```azurepowershell
+$Params = @{
+ ResourceGroupName = 'myResourceGroup'
+ VMName = 'myVM'
+ Name = 'CustomScript'
+ Publisher = 'Microsoft.Azure.Extensions'
+ ExtensionType = 'CustomScript'
+ TypeHandlerVersion = '2.1'
+ Settings = @{fileUris = @('https://raw.githubusercontent.com/me/project/hello.sh'); commandToExecute = './hello.sh'}
+}
+Set-AzVMExtension @Params
+```
+When the extension runs correctly, the output is similar to the following example:
+
+```Output
+RequestId IsSuccessStatusCode StatusCode ReasonPhrase
+--------- ------------------- ---------- ------------
+                         True         OK OK
+```
+ ### Azure portal You can apply VM extensions to an existing VM through the Azure portal. Select the VM in the portal, select **Extensions**, and then select **Add**. Choose the extension that you want from the list of available extensions, and follow the instructions in the wizard.
The following image shows the installation of the Custom Script extension for Li
### Azure Resource Manager templates
-You can add VM extensions to an ARM template and run them with the deployment of the template. When you deploy an extension with a template, you can create fully configured Azure deployments.
+You can add VM extensions to an ARM template and run them with the deployment of the template. When you deploy an extension with a template, you can create fully configured Azure deployments.
For example, the following JSON is taken from a [full ARM template](https://github.com/Microsoft/dotnet-core-sample-templates/tree/master/dotnet-core-music-linux) that deploys a set of load-balanced VMs and an Azure SQL database, and then installs a .NET Core application on each VM. The VM extension takes care of the software installation.
For example, the following JSON is taken from a [full ARM template](https://gith
"properties": { "publisher": "Microsoft.Azure.Extensions", "type": "CustomScript",
- "typeHandlerVersion": "2.0",
+ "typeHandlerVersion": "2.1",
"autoUpgradeMinorVersion": true, "settings": { "fileUris": [
The following example shows an instance of the Custom Script extension for Linux
"properties": { "publisher": "Microsoft.Azure.Extensions", "type": "CustomScript",
- "typeHandlerVersion": "2.0",
+ "typeHandlerVersion": "2.1",
"autoUpgradeMinorVersion": true, "settings": { "fileUris": [
Moving the `commandToExecute` property to the `protected` configuration helps se
"properties": { "publisher": "Microsoft.Azure.Extensions", "type": "CustomScript",
- "typeHandlerVersion": "2.0",
+ "typeHandlerVersion": "2.1",
"autoUpgradeMinorVersion": true, "settings": { "fileUris": [
Publishers make updates available to regions at various times, so it's possible
#### Agent updates
-The Linux VM Agent contains *Provisioning Agent code* and *extension-handling code* in one package. They can't be separated.
+The Linux VM Agent contains *Provisioning Agent code* and *extension-handling code* in one package. They can't be separated.
You can disable the Provisioning Agent when you want to [provision on Azure by using cloud-init](../linux/using-cloud-init.md).
waagent --version
The output is similar to the following example: ```bash
-WALinuxAgent-2.2.17 running on ubuntu 16.04
-Python: 3.6.0
-Goal state agent: 2.2.18
+WALinuxAgent-2.2.45 running on ubuntu 18.04
+Python: 3.6.9
+Goal state agent: 2.7.1.0
```
-In the preceding example output, the parent (or package deployed version) is `WALinuxAgent-2.2.17`. The `Goal state agent` value is the auto-update version.
+In the preceding example output, the parent (or package deployed version) is `WALinuxAgent-2.2.45`. The `Goal state agent` value is the auto-update version.
We highly recommend that you always enable automatic update for the agent: [AutoUpdate.Enabled=y](./update-linux-agent.md). If you don't enable automatic update, you'll need to keep manually updating the agent, and you won't get bug and security fixes.
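For example, you can confirm the setting on the VM itself. This is a minimal sketch; the agent service name (`walinuxagent` or `waagent`) depends on the distribution:

```bash
# Check whether automatic agent updates are enabled
grep -i '^AutoUpdate.Enabled' /etc/waagent.conf

# If the value is 'n', switch it to 'y' and restart the agent
sudo sed -i 's/^AutoUpdate.Enabled=n/AutoUpdate.Enabled=y/' /etc/waagent.conf
sudo systemctl restart walinuxagent   # or 'waagent', depending on the distribution
```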
Automatic extension updates are either *minor* or *hotfix*. You can opt in or op
```json "publisher": "Microsoft.Azure.Extensions", "type": "CustomScript",
- "typeHandlerVersion": "2.0",
+ "typeHandlerVersion": "2.1",
"autoUpgradeMinorVersion": true, "settings": { "fileUris": [
Automatic extension updates are either *minor* or *hotfix*. You can opt in or op
To get the latest minor-release bug fixes, we highly recommend that you always select automatic update in your extension deployments. You can't opt out of hotfix updates that carry security or key bug fixes.
-If you disable automatic updates or you need to upgrade a major version, use [az vm extension set](/cli/azure/vm/extension#az-vm-extension-set) and specify the target version.
+If you disable automatic updates or you need to upgrade a major version, use [az vm extension set](/cli/azure/vm/extension#az-vm-extension-set) or [Set-AzVMExtension](/powershell/module/az.compute/set-azvmextension) and specify the target version.
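For example, the following Azure CLI sketch pins the Custom Script extension to a specific version; the resource group, VM name, and version shown are placeholders:

```azurecli
az vm extension set \
  --resource-group myResourceGroup \
  --vm-name myVM \
  --name CustomScript \
  --publisher Microsoft.Azure.Extensions \
  --version 2.1
```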
### How to identify extension updates #### Identify if the extension is set with autoUpgradeMinorVersion on a VM
+### [Azure CLI](#tab/azure-cli)
+ You can see from the VM model if the extension was provisioned with `autoUpgradeMinorVersion`. To check, use [az vm show](/cli/azure/vm#az-vm-show) and provide the resource group and VM name as follows: ```azurecli
The following example output shows that `autoUpgradeMinorVersion` is set to `tru
{ "autoUpgradeMinorVersion": true, "forceUpdateTag": null,
- "id": "/subscriptions/guid/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM/extensions/CustomScriptExtension",
+ "id": "/subscriptions/guid/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM/extensions/customScript",
+```
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+You can see from the VM model if the extension was provisioned with `AutoUpgradeMinorVersion`. To check, use [Get-AzVM](/powershell/module/az.compute/get-azvm) and provide the resource group and VM name as follows:
+
+```azurepowershell
+Get-AzVM -ResourceGroupName myResourceGroup -Name myVM | Select-Object -ExpandProperty Extensions
+```
+
+The following example output shows that `AutoUpgradeMinorVersion` is set to `True`:
+
+```Output
+ForceUpdateTag :
+Publisher : Microsoft.Azure.Extensions
+VirtualMachineExtensionType : CustomScript
+TypeHandlerVersion : 2.1
+AutoUpgradeMinorVersion : True
+EnableAutomaticUpgrade :
+...
``` ++ #### Identify when an autoUpgradeMinorVersion event occurred To see when an update to the extension occurred, review the agent logs on the VM at */var/log/waagent.log*.
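For example, a quick search of the agent log shows activity for a given extension handler. This is a minimal sketch, and the exact log wording varies by agent version:

```bash
# Show recent agent log entries for the Custom Script extension handler
sudo grep -i 'Microsoft.Azure.Extensions.CustomScript' /var/log/waagent.log | tail -n 20
```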
To perform its tasks, the agent needs to run as *root*.
## Troubleshoot VM extensions
-Each VM extension might have specific troubleshooting steps. For example, when you use the Custom Script extension, you can find script execution details locally on the VM where the extension was run.
+Each VM extension might have specific troubleshooting steps. For example, when you use the Custom Script extension, you can find script execution details locally on the VM where the extension was run.
The following troubleshooting actions apply to all VM extensions:
The following troubleshooting actions apply to all VM extensions:
### View extension status
+### [Azure CLI](#tab/azure-cli)
+ After a VM extension has been run against a VM, use [az vm get-instance-view](/cli/azure/vm#az-vm-get-instance-view) to return extension status as follows: ```azurecli az vm get-instance-view \
- --resource-group rgName \
+ --resource-group myResourceGroup \
--name myVM \ --query "instanceView.extensions" ```
The output is similar to the following example:
} ], "substatuses": null,
- "type": "Microsoft.Azure.Extensions.customScript",
- "typeHandlerVersion": "2.0.6"
+ "type": "Microsoft.Azure.Extensions.CustomScript",
+ "typeHandlerVersion": "2.1.6"
} ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+After a VM extension has been run against a VM, use [Get-AzVM](/powershell/module/az.compute/get-azvm) and specify the `-Status` switch parameter to return extension status as follows:
+
+```azurepowershell
+Get-AzVM -ResourceGroupName myResourceGroup -Name myVM -Status |
+Select-Object -ExpandProperty Extensions |
+Select-Object -ExpandProperty Statuses
+```
+
+The output is similar to the following example:
+
+```Output
+Code : ProvisioningState/failed/0
+Level : Error
+DisplayStatus : Provisioning failed
+Message : Enable failed: failed to execute command: command terminated with exit status=127
+ [stdout]
+
+ [stderr]
+ /bin/sh: 1: ./hello.sh: not found
+
+Time :
+```
+++ You can also find extension execution status in the Azure portal. Select the VM, select **Extensions**, and then select the desired extension. ### Rerun a VM extension
-There might be cases in which a VM extension needs to be rerun. You can rerun an extension by removing it, and then rerunning the extension with an execution method of your choice. To remove an extension, use [az vm extension delete](/cli/azure/vm/extension#az-vm-extension-delete) as follows:
+There might be cases in which a VM extension needs to be rerun. You can rerun an extension by removing it, and then rerunning the extension with an execution method of your choice.
+
+### [Azure CLI](#tab/azure-cli)
+
+To remove an extension, use [az vm extension delete](/cli/azure/vm/extension#az-vm-extension-delete) as follows:
```azurecli az vm extension delete \
az vm extension delete \
--name customScript ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+To remove an extension, use [Remove-AzVMExtension](/powershell/module/az.compute/remove-azvmextension) as follows:
+
+```azurepowershell
+Remove-AzVMExtension -ResourceGroupName myResourceGroup -VMName myVM -Name customScript
+```
+
+To force the command to run without asking for user confirmation, specify the `-Force` switch parameter.
+++ You can also remove an extension in the Azure portal: 1. Select a VM.
virtual-machines Oms Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/oms-windows.md
The following table provides a mapping of the version of the Windows Log Analyti
| Log Analytics Windows agent bundle version | Log Analytics Windows VM extension version | Release Date | Release Notes | |--|--|--|--|
-| 1.20.18064.0|1.0.18064 | December 2021 | <ul><li>Bug fix for intermittent crashes</li></ul> |
-| 1.20.18062.0| 1.0.18062 | November 2021 | <ul><li>Minor bug fixes and stabilizattion improvements</li></ul> |
+| 10.20.18064.0|1.0.18064 | December 2021 | <ul><li>Bug fix for intermittent crashes</li></ul> |
+| 10.20.18062.0| 1.0.18062 | November 2021 | <ul><li>Minor bug fixes and stabilization improvements</li></ul> |
| 10.20.18053| 1.0.18053.0 | October 2020 | <ul><li>New Agent Troubleshooter</li><li>Updates to how the agent handles certificate changes to Azure services</li></ul> | | 10.20.18040 | 1.0.18040.2 | August 2020 | <ul><li>Resolves an issue on Azure Arc</li></ul> | | 10.20.18038 | 1.0.18038 | April 2020 | <ul><li>Enables connectivity over Private Link using Azure Monitor Private Link Scopes</li><li>Adds ingestion throttling to avoid a sudden, accidental influx in ingestion to a workspace</li><li>Adds support for additional Azure Government clouds and regions</li><li>Resolves a bug where HealthService.exe crashed</li></ul> |
virtual-machines Field Programmable Gate Arrays Attestation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/field-programmable-gate-arrays-attestation.md
Title: Azure FPGA Attestation Service description: Attestation service for the NP-series VMs. -+ Last updated 04/01/2021
virtual-machines Fsv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/fsv2-series.md
Title: Fsv2-series description: Specifications for the Fsv2-series VMs.-+ -+ Last updated 02/03/2020-+ # Fsv2-series
virtual-machines Fx Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/fx-series.md
Title: FX-series description: Specifications for the FX-series VMs.-+ -+ Last updated 06/10/2021-+ # FX-series
virtual-machines Generation 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/generation-2.md
Title: Azure support for generation 2 VMs description: Overview of Azure support for generation 2 VMs-+ Last updated 02/26/2021-+ # Support for generation 2 VMs on Azure
virtual-machines H Series Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/h-series-retirement.md
Title: H-series retirement description: H-series retirement started September 1, 2021. -+ Last updated 08/02/2021
virtual-machines H Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/h-series.md
Title: H-series - Azure Virtual Machines description: Specifications for the H-series VMs. -+ Last updated 09/11/2021
virtual-machines Hb Series Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hb-series-retirement.md
Title: HB-series retirement description: HB-series retirement started September 1, 2021. -+ Last updated 08/02/2021
virtual-machines Hb Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hb-series.md
Title: HB-series description: Specifications for the HB-series VMs. -+ Last updated 03/22/2021
virtual-machines Hbv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hbv2-series.md
Title: HBv2-series - Azure Virtual Machines description: Specifications for the HBv2-series VMs. -+ Last updated 03/08/2021
virtual-machines Hbv3 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hbv3-series.md
Title: HBv3-series - Azure Virtual Machines description: Specifications for the HBv3-series VMs. -+ Last updated 01/10/2022
virtual-machines Hc Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hc-series.md
Title: HC-series - Azure Virtual Machines description: Specifications for the HC-series VMs. -+ Last updated 03/05/2021
virtual-machines Image Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/image-version.md
# Create an image definition and an image version
-A [Azure Compute Gallery](shared-image-galleries.md) (formerly known as Shared Image Gallery)simplifies custom image sharing across your organization. Custom images are like marketplace images, but you create them yourself. Images can be created from a VM, VHD, snapshot, managed image, or another image version.
+An [Azure Compute Gallery](shared-image-galleries.md) (formerly known as Shared Image Gallery) simplifies custom image sharing across your organization. Custom images are like marketplace images, but you create them yourself. Images can be created from a VM, VHD, snapshot, managed image, or another image version.
The Azure Compute Gallery lets you share your custom VM images with others in your organization, within or across regions, within an Azure AD tenant, or publicly using a [community gallery (preview)](azure-compute-gallery.md#community). Choose which images you want to share, which regions you want to make them available in, and who you want to share them with. You can create multiple galleries so that you can logically group images.
virtual-machines Azure Hybrid Benefit Byos Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/azure-hybrid-benefit-byos-linux.md
# How Azure Hybrid Benefit for BYOS VMs (AHB BYOS) applies for Linux virtual machines >[!IMPORTANT]
->The below article is scoped to Azure Hybrid Benefit for BYOS VMs (AHB BYOS) which caters to conversion of custom on-prem image VMs and RHEL or SLES BYOS VMs. For conversion of RHEL PAYG or SLES PAYG VMs, refer to [Azure Hybrid Benefit for PAYG VMs here](./azure-hybrid-benefit-linux.md).
+>The article below is scoped to Azure Hybrid Benefit for BYOS VMs (AHB BYOS), which covers the conversion of custom image VMs and RHEL or SLES BYOS VMs. For the conversion of RHEL PAYG or SLES PAYG VMs, refer to [Azure Hybrid Benefit for PAYG VMs here](./azure-hybrid-benefit-linux.md).
>[!NOTE]
->Azure Hybrid Benefit for BYOS VMs is in Preview now. You can [sign up for the preview here.](https://aka.ms/ahb-linux-form) You will receive a mail from Microsoft once your subscriptions are enabled for Preview.
+>Azure Hybrid Benefit for BYOS VMs is in Preview now. You can start using the capability on Azure by following steps provided in the [section below](#get-started).
-Azure Hybrid Benefit for BYOS VMs is a licensing benefit that helps you to get software updates and integrated support for Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES) virtual machines (VMs) directly from Azure infrastructure. This benefit is available to RHEL and SLES custom on-prem image VMs (VMs generated from on-prem images), and to RHEL and SLES Marketplace bring-your-own-subscription (BYOS) VMs.
+Azure Hybrid Benefit for BYOS VMs is a licensing benefit that helps you to get software updates and integrated support for Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES) virtual machines (VMs) directly from Azure infrastructure. This benefit is available to RHEL and SLES custom image VMs (VMs generated from on-premises images), and to RHEL and SLES Marketplace bring-your-own-subscription (BYOS) VMs.
## Benefit description
-Before AHB BYOS, RHEL and SLES customers who migrated their on-prem machines to Azure by creating images of on-prem systems and migrating them as VMs on Azure did not have the flexibility to get software updates directly from Azure similar to Marketplace PAYG VMs. Hence, you needed to still buy cloud access licenses from the Enterprise Linux distributors to get security support as well as software updates. With Azure Hybrid Benefit for BYOS VMs, we will allow you to get software updates and support for on-prem custom image VMs as well as RHEL and SLES BYOS VMs similar to PAYG VMs by paying the same software fees as charged to PAYG VMs. In addition, these conversions can happen without any redeployment, so you can avoid any downtime risk.
+Azure Hybrid Benefit for BYOS VMs allows you to get software updates and integrated support for Marketplace BYOS or on-premises migrated RHEL and SLES BYOS VMs without a reboot. This benefit converts the bring-your-own-subscription (BYOS) billing model to the pay-as-you-go (PAYG) billing model, and you pay the same software fees as charged to PAYG VMs.
:::image type="content" source="./media/ahb-linux/azure-hybrid-benefit-byos-cost.png" alt-text="Azure Hybrid Benefit cost visualization on Linux VMs.":::
-After you enable the AHB for BYOS VMs benefit on RHEL or SLES VM, you will be charged for the additional software fee typically incurred on a PAYG VM and you will also start getting software updates typically provided to a PAYG VM.
+After you enable the AHB for BYOS VMs benefit on a RHEL or SLES VM, you'll be charged the software fee typically incurred on a PAYG VM, and you'll also start getting the software updates typically provided to a PAYG VM.
-You can also choose to convert a VM that has had the benefit enabled on it back to a BYOS billing model which will stop software billing and software updates from Azure infrastructure.
+You can also choose to convert a VM that has had the benefit enabled on it back to a BYOS billing model, which will stop software billing and software updates from Azure infrastructure.
## Scope of Azure Hybrid Benefit for BYOS VMs eligibility for Linux VMs
-**Azure Hybrid Benefit for BYOS VMs** is available for all RHEL and SLES custom on-prem image VMs as well as RHEL and SLES Marketplace BYOS VMs. For RHEL and SLES PAYG Marketplace VMs, [refer to AHB for PAYG VMs here](./azure-hybrid-benefit-linux.md)
+**Azure Hybrid Benefit for BYOS VMs** is available for all RHEL and SLES custom image VMs as well as RHEL and SLES Marketplace BYOS VMs. For RHEL and SLES PAYG Marketplace VMs, [refer to AHB for PAYG VMs here](./azure-hybrid-benefit-linux.md).
-Azure Dedicated Host instances, and SQL hybrid benefits are not eligible for Azure Hybrid Benefit for BYOS VMs if you're already using the benefit with Linux VMs. Virtual Machine Scale Sets (VMSS) are Reserved Instances (RIs) are not in scope for AHB BYOS.
+Azure Dedicated Host instances and SQL hybrid benefits aren't eligible for Azure Hybrid Benefit for BYOS VMs if you're already using the benefit with Linux VMs. Virtual Machine Scale Sets and Reserved Instances (RIs) aren't in scope for AHB BYOS.
## Get started
Azure Dedicated Host instances, and SQL hybrid benefits are not eligible for Azu
To start using the benefit for Red Hat:
-1. Install the 'AHBForRHEL' extension on the virtual machine on which you wish to apply the AHB BYOS benefit. This is a prerequisite before moving to next step. You can do this via the portal or use Azure CLI.
+1. Install the 'AHBForRHEL' extension on the virtual machine on which you wish to apply the AHB BYOS benefit. You can do this installation via the Azure CLI or PowerShell (a sample CLI sequence follows these steps).
1. Depending on the software updates you want, change the license type to relevant value. Here are the available license type values and the software updates associated with them: | License Type | Software Updates | Allowed VMs| ||||
- | RHEL_BASE | Installs Red Hat regular/base repositories into your virtual machine. | RHEL BYOS VMs, RHEL custom on-prem image VMs|
- | RHEL_EUS | Installs Red Hat Extended Update Support (EUS) repositories into your virtual machine. | RHEL BYOS VMs, RHEL custom on-prem image VMs|
- | RHEL_SAPAPPS | Installs RHEL for SAP Business Apps repositories into your virtual machine. | RHEL BYOS VMs, RHEL custom on-prem image VMs|
- | RHEL_SAPHA | Installs RHEL for SAP with HA repositories into your virtual machine. | RHEL BYOS VMs, RHEL custom on-prem image VMs|
- | RHEL_BASESAPAPPS | Installs RHEL regular/base SAP Business Apps repositories into your virtual machine. | RHEL BYOS VMs, RHEL custom on-prem image VMs|
- | RHEL_BASESAPHA | Installs regular/base RHEL for SAP with HA repositories into your virtual machine.| RHEL BYOS VMs, RHEL custom on-prem image VMs|
+ | RHEL_BASE | Installs Red Hat regular/base repositories into your virtual machine. | RHEL BYOS VMs, RHEL custom image VMs|
+ | RHEL_EUS | Installs Red Hat Extended Update Support (EUS) repositories into your virtual machine. | RHEL BYOS VMs, RHEL custom image VMs|
+ | RHEL_SAPAPPS | Installs RHEL for SAP Business Apps repositories into your virtual machine. | RHEL BYOS VMs, RHEL custom image VMs|
+ | RHEL_SAPHA | Installs RHEL for SAP with HA repositories into your virtual machine. | RHEL BYOS VMs, RHEL custom image VMs|
+ | RHEL_BASESAPAPPS | Installs RHEL regular/base SAP Business Apps repositories into your virtual machine. | RHEL BYOS VMs, RHEL custom image VMs|
+ | RHEL_BASESAPHA | Installs regular/base RHEL for SAP with HA repositories into your virtual machine.| RHEL BYOS VMs, RHEL custom image VMs|
1. Wait for one hour for the extension to read the license type value and install the repositories. 1. You should now be connected to Azure Red Hat Update Infrastructure and the relevant repositories will be installed in your machine.
-1. In case the extension is not running by itself, you can run it on demand as well.
+1. In case the extension isn't running by itself, you can run it on demand as well.
-1. In case you want to switch back to the bring-your-own-subscription model, just change the license type to 'None' and run the extension. This will remove all RHUI repositories from your virtual machine and stop the billing.
+1. In case you want to switch back to the bring-your-own-subscription model, just change the license type to 'None' and run the extension. This action will remove all RHUI repositories from your virtual machine and stop the billing.
>[!Note]
-> In the unlikely event that extension is not able to install repositories or there are any issues, please change the license type back to empty and reach out to support for help. This will ensure you are not getting billed for software updates.
+> In the unlikely event that the extension isn't able to install repositories or there are any issues, change the license type back to empty and reach out to support for help. This ensures you aren't billed for software updates.
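The following Azure CLI commands tie these steps together. This is a sketch rather than content from the article: the resource group and VM names are placeholders, and because the extension publisher isn't named here, the sketch looks it up instead of assuming a value.

```azurecli
# Find the publisher that ships the AHBForRHEL extension in your region
az vm extension image list --name AHBForRHEL --location eastus --latest --output table

# Install the extension on the target VM (replace <publisher> with the value returned above)
az vm extension set \
  --resource-group myResourceGroup \
  --vm-name myRhelVM \
  --name AHBForRHEL \
  --publisher <publisher>

# Enable the benefit by setting one of the license types from the table above
az vm update --resource-group myResourceGroup --name myRhelVM --license-type RHEL_BASE

# Switch back to BYOS billing later by setting the license type to None
az vm update --resource-group myResourceGroup --name myRhelVM --license-type None
```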
### SUSE customers
-To start using the benefit for SUSE:
+To start using the benefit for SLES VMs:
-1. Install the Azure Hybrid Benefit for BYOS VMs extension on the virtual machine on which you wish to apply the AHB BYOS benefit. This is a prerequisite before moving to next step.
+1. Install the Azure Hybrid Benefit for BYOS VMs extension on the virtual machine on which you wish to apply the AHB BYOS benefit (see the sketch after these steps).
1. Depending on the software updates you want, change the license type to relevant value. Here are the available license type values and the software updates associated with them: | License Type | Software Updates | Allowed VMs| ||||
- | SLES | Installs SLES standard repositories into your virtual machine. | SLES BYOS VMs, SLES custom on-prem image VMs|
- | SLES_SAP | Installs SLES SAP repositories into your virtual machine. | SLES SAP BYOS VMs, SLES custom on-prem image VMs|
- | SLES_HPC | Installs SLES High Performance Compute related repositories into your virtual machine. | SLES HPC BYOS VMs, SLES custom on-prem image VMs|
+ | SLES | Installs SLES standard repositories into your virtual machine. | SLES BYOS VMs, SLES custom image VMs|
+ | SLES_SAP | Installs SLES SAP repositories into your virtual machine. | SLES SAP BYOS VMs, SLES custom image VMs|
+ | SLES_HPC | Installs SLES High Performance Compute related repositories into your virtual machine. | SLES HPC BYOS VMs, SLES custom image VMs|
1. Wait for 5 minutes for the extension to read the license type value and install the repositories. 1. You should now be connected to the SUSE Public Cloud Update Infrastructure on Azure and the relevant repositories will be installed in your machine.
-1. In case the extension is not running by itself, you can run it on demand as well.
+1. In case the extension isn't running by itself, you can run it on demand as well.
-1. In case you want to switch back to the bring-your-own-subscription model, just change the license type to 'None' and run the extension. This will remove all repositories from your virtual machine and stop the billing.
+1. In case you want to switch back to the bring-your-own-subscription model, just change the license type to 'None' and run the extension. This action will remove all repositories from your virtual machine and stop the billing.
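The SLES flow mirrors the RHEL sketch above. Again, this is only a sketch with placeholder names; the license type value comes from the table above.

```azurecli
# Install the AHBForSLES extension (look up <publisher> with: az vm extension image list --name AHBForSLES --location <region> --latest)
az vm extension set --resource-group myResourceGroup --vm-name mySlesVM --name AHBForSLES --publisher <publisher>

# Enable the benefit with one of the SLES license types from the table above
az vm update --resource-group myResourceGroup --name mySlesVM --license-type SLES

# Switch back to BYOS billing by setting the license type to None
az vm update --resource-group myResourceGroup --name mySlesVM --license-type None
```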
## Enable and disable the benefit for RHEL You can install the `AHBForRHEL` extension to enable the benefit. After successfully installing the extension,
-you can use the `az vm update` command to update existing license type on running VMs. For SLES VMs, run the command and set `--license-type` parameter to one of the following: `RHEL_BASE`, `RHEL_EUS`, `RHEL_SAPHA`, `RHEL_SAPAPPS`, `RHEL_BASESAPAPPS` or `RHEL_BASESAPHA`.
+you can use the `az vm update` command to update the existing license type on running VMs. For RHEL VMs, run the command and set the `--license-type` parameter to one of the following license types: `RHEL_BASE`, `RHEL_EUS`, `RHEL_SAPHA`, `RHEL_SAPAPPS`, `RHEL_BASESAPAPPS`, or `RHEL_BASESAPHA`.
### CLI example to enable the benefit for RHEL
you can use the `az vm update` command to update existing license type on runnin
``` 1. Wait for 5 minutes for the extension to read the license type value and install the repositories.
-1. You should now be connected to Azure Red Hat Update Infrastructure and the relevant repositories will be installed in your machine. You can check the same by performing the command below on your VM which outputs installed repository packages on your VM:
+1. You should now be connected to Azure Red Hat Update Infrastructure and the relevant repositories will be installed on your machine. You can validate this by running the following command on your VM:
```bash yum repolist ```
- 1. In case the extension is not running by itself, you can try the below command on the VM:
+ 1. In case the extension isn't running by itself, you can try the below command on the VM:
```bash
- systemctl start azure-hybrid-benefit.service
+ systemctl start azure-hybrid-benefit.service
+ ```
+ 1. You can use the below command in your RHEL VM to get the current status of the service:
+ ```bash
+ ahb-service -status
   ``` ## Enable and disable the benefit for SLES You can install the `AHBForSLES` extension to enable the benefit. After successfully installing the extension,
-you can use the `az vm update` command to update existing license type on running VMs. For SLES VMs, run the command and set `--license-type` parameter to one of the following: `SLES`, `SLES_SAP` or `SLES_HPC`.
+you can use the `az vm update` command to update the existing license type on running VMs. For SLES VMs, run the command and set the `--license-type` parameter to one of the following license types: `SLES_STANDARD`, `SLES_SAP`, or `SLES_HPC`.
### CLI example to enable the benefit for SLES 1. Install the Azure Hybrid Benefit extension on running VM using the portal or via Azure CLI using the command below:
you can use the `az vm update` command to update existing license type on runnin
``` 1. Wait for 5 minutes for the extension to read the license type value and install the repositories.
-1. You should now be connected to the SUSE Public Cloud Update Infrastructure on Azure and the relevant repositories will be installed in your machine. You can verify this by performing the command below on your VM which list SUSE repositories on your VM:
+1. You should now be connected to the SUSE Public Cloud Update Infrastructure on Azure and the relevant repositories will be installed on your machine. You can verify this change by running the following command on your VM, which lists the SUSE repositories:
```bash zypper repos ```
Customers who use Azure Hybrid Benefit for BYOS VMs for RHEL agree to the standa
### SUSE
-To use Azure Hybrid Benefit for BYOS VMs for your SLES VMs, and for information about moving from SLES PAYG to BYOS or moving from SLES BYOS to PAYG, see [SUSE Linux Enterprise and Azure Hybrid Benefit](https://aka.ms/suse-ahb).
+If you use Azure Hybrid Benefit for BYOS VMs for SLES and want more information about moving from SLES PAYG to BYOS, or from SLES BYOS to PAYG, see [SUSE Linux Enterprise and Azure Hybrid Benefit](https://aka.ms/suse-ahb).
## Frequently asked questions
-*Q: What are the additional licensing cost I pay with AHB for BYOS VMs?*
+*Q: What is the licensing cost I pay with AHB for BYOS VMs?*
-A: On using AHB for BYOS VMs, you will essentially convert your bring your own subscription (BYOS) billing model to pay as you go (PAYG) billing model. Hence, you will be paying similar to PAYG VMs for software subscription cost. The table below maps the PAYG flavors available on Azure and links to pricing page to help you understand the cost associated with AHB for BYOS VMs.
+A: When you use AHB for BYOS VMs, you essentially convert the bring-your-own-subscription (BYOS) billing model to the pay-as-you-go (PAYG) billing model, so you'll pay a software subscription cost similar to PAYG VMs. The table below maps the PAYG flavors available on Azure and links to the pricing pages to help you understand the cost associated with AHB for BYOS VMs.
| License type | Relevant PAYG VM image & Pricing Link (Keep the AHB for PAYG filter off) | ||||
A: RHEL versions greater than 7.4 are supported with AHB for BYOS VMs.
*Q: I've uploaded my own RHEL or SLES image from on-premises (via Azure Migrate, Azure Site Recovery, or otherwise) to Azure. Can I convert the billing on these images from BYOS to PAYG?*
-A: Yes, this is the capability AHB for BYOS VMs supports. Please [follow steps shared here](#get-started).
+A: Yes, AHB for BYOS VMs supports images uploaded from on-premises to Azure. [Follow the steps shared here](#get-started).
*Q: Can I use Azure Hybrid Benefit for BYOS VMs on RHEL and SLES PAYG Marketplace VMs?*
A: No, as these VMs are already pay-as-you-go (PAYG). However, with AHB v1 and v
*Q: Can I use Azure Hybrid Benefit for BYOS VMs on virtual machine scale sets for RHEL and SLES?*
-A: No, Azure Hybrid Benefit for BYOS VMs is not available for virtual machine scale sets currently.
+A: No, Azure Hybrid Benefit for BYOS VMs isn't available for virtual machine scale sets currently.
*Q: Can I use Azure Hybrid Benefit for BYOS VMs on a virtual machine deployed for SQL Server on RHEL images?*
-A: No, you can't. There is no plan for supporting these virtual machines.
+A: No, you can't. There's no plan for supporting these virtual machines.
*Q: Can I use Azure Hybrid Benefit for BYOS VMs on my RHEL Virtual Data Center subscription?*
-A: No, you cannot. VDC is not supported on Azure at all, including AHB.
+A: No, you can't. VDC isn't supported on Azure at all, including AHB.
## Next steps
virtual-machines Scheduled Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/scheduled-events.md
Title: Scheduled Events for Linux VMs in Azure
-description: Schedule events by using Azure Metadata Service for your Linux virtual machines.
+description: Scheduled events using the Azure Metadata Service for your Linux virtual machines.
With Scheduled Events, your application can discover when maintenance will occur
Scheduled Events provides events in the following use cases: -- [Platform initiated maintenance](../maintenance-and-updates.md?bc=/azure/virtual-machines/linux/breadcrumb/toc.json&toc=/azure/virtual-machines/linux/toc.json) (for example, VM reboot, live migration or memory preserving updates for host)-- Virtual machine is running on [degraded host hardware](https://azure.microsoft.com/blog/find-out-when-your-virtual-machine-hardware-is-degraded-with-scheduled-events) that is predicted to fail soon-- Virtual machine was running on a host that suffered a hardware failure-- User-initiated maintenance (for example, a user restarts or redeploys a VM)
+- [Platform initiated maintenance](../maintenance-and-updates.md?bc=/azure/virtual-machines/linux/breadcrumb/toc.json&toc=/azure/virtual-machines/linux/toc.json) (for example, VM reboot, live migration or memory preserving updates for host).
+- Virtual machine is running on [degraded host hardware](https://azure.microsoft.com/blog/find-out-when-your-virtual-machine-hardware-is-degraded-with-scheduled-events) that is predicted to fail soon.
+- Virtual machine was running on a host that suffered a hardware failure.
+- User-initiated maintenance (for example, a user restarts or redeploys a VM).
- [Spot VM](../spot-vms.md) and [Spot scale set](../../virtual-machine-scale-sets/use-spot.md) instance evictions. ## The Basics
Scheduled events are delivered to:
- All the VMs in a scale set placement group. > [!NOTE]
-> Specific to VMs in an availability zone, the scheduled events go to single VMs in a zone.
+> Scheduled events for any virtual machine (VM) in a Fabric Controller (FC) tenant are delivered to all VMs in that FC tenant. An FC tenant equates to a standalone VM, an entire Cloud Service, an entire Availability Set, or a Placement Group for a Virtual Machine Scale Set, regardless of Availability Zone usage.
> For example, if you have 100 VMs in an availability set and there is an update to one of them, the scheduled event will go to all 100, whereas if there are 100 single VMs in a zone, the event will only go to the VM that is being impacted. As a result, check the `Resources` field in the event to identify which VMs are affected.
-### Endpoint Discovery
+### Endpoint discovery
For VNET enabled VMs, Metadata Service is available from a static nonroutable IP, `169.254.169.254`. The full endpoint for the latest version of Scheduled Events is: > `http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01`
The Scheduled Events service is versioned. Versions are mandatory; the current v
| Version | Release Type | Regions | Release Notes | | - | - | - | - | | 2020-07-01 | General Availability | All | <li> Added support for Event Duration |
-| 2019-08-01 | General Availability | All | <li> Added support for Event Source |
+| 2019-08-01 | General Availability | All | <li> Added support for EventSource |
| 2019-04-01 | General Availability | All | <li> Added support for Event Description | | 2019-01-01 | General Availability | All | <li> Added support for virtual machine scale sets EventType 'Terminate' | | 2017-11-01 | General Availability | All | <li> Added support for Spot VM eviction EventType 'Preempt'<br> |
Scheduled Events is disabled for your service if it does not make a request for
### User-initiated Maintenance User-initiated VM maintenance via the Azure portal, API, CLI, or PowerShell results in a scheduled event. You then can test the maintenance preparation logic in your application, and your application can prepare for user-initiated maintenance.
-If you restart a VM, an event with the type `Reboot` is scheduled. If you redeploy a VM, an event with the type `Redeploy` is scheduled.
+If you restart a VM, an event with the type `Reboot` is scheduled. If you redeploy a VM, an event with the type `Redeploy` is scheduled. Typically, events with a user event source can be approved immediately to avoid a delay in user-initiated actions.
## Use the API
You can query for scheduled events by making the following call:
``` curl -H Metadata:true http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01 ```
+#### Python sample
+```python
+import json
+import requests
+
+metadata_url ="http://169.254.169.254/metadata/scheduledevents"
+header = {'Metadata' : 'true'}
+query_params = {'api-version':'2020-07-01'}
+
+def get_scheduled_events():
+ resp = requests.get(metadata_url, headers = header, params = query_params)
+ data = resp.json()
+ return data
+
+```
+ A response contains an array of scheduled events. An empty array means that currently no events are scheduled. In the case where there are scheduled events, the response contains an array of events.
In the case where there are scheduled events, the response contains an array of
### Event Properties |Property | Description | | - | - |
+| Document Incarnation | Integer that increases when the events array changes. Documents with the same incarnation contain the same event information, and the incarnation will be incremented when an event changes. |
| EventId | Globally unique identifier for this event. <br><br> Example: <br><ul><li>602d9444-d2cd-49c7-8624-8643e7171297 | | EventType | Impact this event causes. <br><br> Values: <br><ul><li> `Freeze`: The Virtual Machine is scheduled to pause for a few seconds. CPU and network connectivity may be suspended, but there is no impact on memory or open files.<li>`Reboot`: The Virtual Machine is scheduled for reboot (non-persistent memory is lost). This event is made available on a best effort basis <li>`Redeploy`: The Virtual Machine is scheduled to move to another node (ephemeral disks are lost). <li>`Preempt`: The Spot Virtual Machine is being deleted (ephemeral disks are lost). <li> `Terminate`: The virtual machine is scheduled to be deleted. | | ResourceType | Type of resource this event affects. <br><br> Values: <ul><li>`VirtualMachine`| | Resources| List of resources this event affects. The list is guaranteed to contain machines from at most one [update domain](../availability.md), but it might not contain all machines in the UD. <br><br> Example: <br><ul><li> ["FrontEnd_IN_0", "BackEnd_IN_0"] | | EventStatus | Status of this event. <br><br> Values: <ul><li>`Scheduled`: This event is scheduled to start after the time specified in the `NotBefore` property.<li>`Started`: This event has started.</ul> No `Completed` or similar status is ever provided. The event is no longer returned when the event is finished.
-| NotBefore| Time after which this event can start. <br><br> Example: <br><ul><li> Mon, 19 Sep 2016 18:29:47 GMT |
+| NotBefore| Time after which this event can start. The event is guaranteed to not start before this time. Will be blank if the event has already started. <br><br> Example: <br><ul><li> Mon, 19 Sep 2016 18:29:47 GMT |
| Description | Description of this event. <br><br> Example: <br><ul><li> Host server is undergoing maintenance. | | EventSource | Initiator of the event. <br><br> Example: <br><ul><li> `Platform`: This event is initiated by platform. <li>`User`: This event is initiated by user. |
-| DurationInSeconds | The expected duration of the interruption caused by the event. <br><br> Example: <br><ul><li> `9`: The interruption caused by the event will last for 9 seconds. <li>`-1`: The default value used if the impact duration is either unknown or not applicable. |
+| DurationInSeconds | The expected duration of the interruption caused by the event. <br><br> Example: <br><ul><li> `9`: The interruption caused by the event will last for 9 seconds. <li>`-1`: The default value used if the impact duration is either unknown or not applicable. |
### Event Scheduling Each event is scheduled a minimum amount of time in the future based on the event type. This time is reflected in an event's `NotBefore` property.
Each event is scheduled a minimum amount of time in the future based on the even
> In some cases, Azure is able to predict host failure due to degraded hardware and will attempt to mitigate disruption to your service by scheduling a migration. Affected virtual machines will receive a scheduled event with a `NotBefore` that is typically a few days in the future. The actual time varies depending on the predicted failure risk assessment. Azure tries to give 7 days' advance notice when possible, but the actual time varies and might be smaller if the prediction is that there is a high chance of the hardware failing imminently. To minimize risk to your service in case the hardware fails before the system-initiated migration, we recommend that you self-redeploy your virtual machine as soon as possible. >[!NOTE]
-> In the case the host node experiences a hardware failure Azure will bypass the minimum notice period an immediately begin the recovery process for affected virtual machines. This reduces recovery time in the case that the affected VMs are unable to respond. During the recovery process an event will be created for all impacted VMs with EventType = Reboot and EventStatus = Started
+> In the case the host node experiences a hardware failure, Azure will bypass the minimum notice period and immediately begin the recovery process for affected virtual machines. This reduces recovery time in the case that the affected VMs are unable to respond. During the recovery process, an event will be created for all impacted VMs with `EventType = Reboot` and `EventStatus = Started`.
### Polling frequency
You can poll the endpoint for updates as frequently or infrequently as you like.
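As a rough illustration (not from the article, and the one-minute interval is an arbitrary choice), a Bash loop that polls the endpoint could look like this:

```bash
# Query the Scheduled Events endpoint once a minute; adjust the interval to your needs
while true; do
  curl -s -H Metadata:true "http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01"
  echo    # print a newline between responses
  sleep 60
done
```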
### Start an event
-After you learn of an upcoming event and finish your logic for graceful shutdown, you can approve the outstanding event by making a `POST` call to Metadata Service with `EventId`. This call indicates to Azure that it can shorten the minimum notification time (when possible).
+After you learn of an upcoming event and finish your logic for graceful shutdown, you can approve the outstanding event by making a `POST` call to Metadata Service with `EventId`. This call indicates to Azure that it can shorten the minimum notification time (when possible). The event may not start immediately upon approval; in some cases, Azure requires the approval of all the VMs hosted on the node before proceeding with the event.
The following JSON sample is expected in the `POST` request body. The request should contain a list of `StartRequests`. Each `StartRequest` contains `EventId` for the event you want to expedite: ```
The following JSON sample is expected in the `POST` request body. The request sh
} ```
+The service will always return a 200 success code in the case of a valid event ID, even if it was already approved by a different VM. A 400 error code indicates that the request header or payload was malformed.
++ #### Bash sample ``` curl -H Metadata:true -X POST -d '{"StartRequests": [{"EventId": "f020ba2e-3bc0-4c40-a10b-86575a9eabd5"}]}' http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01 ```
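To see the status code that the approval call returns, a small variation on the Bash sample above prints only the HTTP code; the event ID here is a placeholder:

```bash
# Print only the HTTP status code of the approval request (200 = accepted, 400 = malformed request)
curl -s -o /dev/null -w "%{http_code}\n" \
  -H Metadata:true \
  -X POST \
  -d '{"StartRequests": [{"EventId": "<your-event-id>"}]}' \
  "http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01"
```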
+#### Python sample
+```python
+import json
+import requests
+
+def confirm_scheduled_event(event_id):
+ # This payload confirms a single event with id event_id
+ payload = json.dumps({"StartRequests": [{"EventId": event_id }]})
+ response = requests.post("http://169.254.169.254/metadata/scheduledevents",
+ headers = {'Metadata' : 'true'},
+ params = {'api-version':'2020-07-01'},
+ data = payload)
+ return response.status_code
+```
> [!NOTE] > Acknowledging an event allows the event to proceed for all `Resources` in the event, not just the VM that acknowledges the event. Therefore, you can choose to elect a leader to coordinate the acknowledgement, which might be as simple as the first machine in the `Resources` field.
-## Python Sample
+## Example Responses
+The following is an example of a series of events that were seen by two VMs that were live migrated to another node.
-The following sample queries Metadata Service for scheduled events and approves each outstanding event:
+The `DocumentIncarnation` changes every time there is new information in `Events`. An approval of the event would allow the freeze to proceed for both WestNO_0 and WestNO_1. The `DurationInSeconds` of -1 indicates that the platform does not know how long the operation will take.
-```python
-#!/usr/bin/python
+```JSON
+{
+ "DocumentIncarnation": 1,
+ "Events": [
+ ]
+}
-import json
-import socket
-import urllib2
+{
+ "DocumentIncarnation": 2,
+ "Events": [
+ {
+ "EventId": "C7061BAC-AFDC-4513-B24B-AA5F13A16123",
+ "EventStatus": "Scheduled",
+ "EventType": "Freeze",
+ "ResourceType": "VirtualMachine",
+ "Resources": [
+ "WestNO_0",
+ "WestNO_1"
+ ],
+ "NotBefore": "Mon, 11 Apr 2022 22:26:58 GMT",
+ "Description": "Virtual machine is being paused because of a memory-preserving Live Migration operation.",
+ "EventSource": "Platform",
+ "DurationInSeconds": -1
+ }
+ ]
+}
-metadata_url = "http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01"
-this_host = socket.gethostname()
+{
+ "DocumentIncarnation": 3,
+ "Events": [
+ {
+ "EventId": "C7061BAC-AFDC-4513-B24B-AA5F13A16123",
+ "EventStatus": "Started",
+ "EventType": "Freeze",
+ "ResourceType": "VirtualMachine",
+ "Resources": [
+ "WestNO_0",
+ "WestNO_1"
+ ],
+ "NotBefore": "",
+ "Description": "Virtual machine is being paused because of a memory-preserving Live Migration operation.",
+ "EventSource": "Platform",
+ "DurationInSeconds": -1
+ }
+ ]
+}
+{
+ "DocumentIncarnation": 4,
+ "Events": [
+ ]
+}
-def get_scheduled_events():
- req = urllib2.Request(metadata_url)
- req.add_header('Metadata', 'true')
- resp = urllib2.urlopen(req)
- data = json.loads(resp.read())
- return data
+```
+## Python Sample
+
+The following sample queries Metadata Service for scheduled events and approves each outstanding event:
-def handle_scheduled_events(data):
- for evt in data['Events']:
- eventid = evt['EventId']
- status = evt['EventStatus']
- resources = evt['Resources']
- eventtype = evt['EventType']
- resourcetype = evt['ResourceType']
- notbefore = evt['NotBefore'].replace(" ", "_")
- description = evt['Description']
- eventSource = evt['EventSource']
- if this_host in resources:
- print("+ Scheduled Event. This host " + this_host +
- " is scheduled for " + eventtype +
- " by " + eventSource +
- " with description " + description +
- " not before " + notbefore)
- # Add logic for handling events here
+```python
+#!/usr/bin/python
+import json
+import requests
+from time import sleep
+
+# The URL to access the metadata service
+metadata_url ="http://169.254.169.254/metadata/scheduledevents"
+# This must be sent otherwise the request will be ignored
+header = {'Metadata' : 'true'}
+# Current version of the API
+query_params = {'api-version':'2020-07-01'}
+
+def get_scheduled_events():
+ resp = requests.get(metadata_url, headers = header, params = query_params)
+ data = resp.json()
+ return data
+def confirm_scheduled_event(event_id):
+ # This payload confirms a single event with id event_id
+ # You can confirm multiple events in a single request if needed
+ payload = json.dumps({"StartRequests": [{"EventId": event_id }]})
+ response = requests.post(metadata_url,
+ headers= header,
+ params = query_params,
+ data = payload)
+ return response.status_code
+
+def log(event):
+ # This is an optional placeholder for logging events to your system
+ print(event["Description"])
+ return
+
+def advanced_sample(last_document_incarnation):
+ # Poll every second to see if there are new scheduled events to process
+ # Since some events may have necessarily short warning periods, it is
+ # recommended to poll frequently
+ found_document_incarnation = last_document_incarnation
+ while (last_document_incarnation == found_document_incarnation):
+ sleep(1)
+ payload = get_scheduled_events()
+ found_document_incarnation = payload["DocumentIncarnation"]
+
+ # We recommend processing all events in a document together,
+ # even if you won't be actioning on them right away
+ for event in payload["Events"]:
+
+ # Events that have already started, logged for tracking
+ if (event["EventStatus"] == "Started"):
+ log(event)
+
+ # Approve all user initiated events. These are typically created by an
+ # administrator and approving them immediately can help to avoid delays
+ # in admin actions
+ elif (event["EventSource"] == "User"):
+ confirm_scheduled_event(event["EventId"])
+
+        # For this application, freeze events less than 9 seconds are considered
+ # no impact. This will immediately approve them
+ elif (event["EventType"] == "Freeze" and
+ int(event["DurationInSeconds"]) >= 0 and
+ int(event["DurationInSeconds"]) < 9):
+ confirm_scheduled_event(event["EventId"])
+
+        # Events that may be impactful (for example, Reboot or Redeploy) may need custom
+ # handling for your application
+ else:
+ #TODO Custom handling for impactful events
+ log(event)
+ print("Processed events from document: " + str(found_document_incarnation))
+ return found_document_incarnation
def main():
- data = get_scheduled_events()
- handle_scheduled_events(data)
+ # This will track the last set of events seen
+ last_document_incarnation = "-1"
+
+ input_text = "\
+ Press 1 to poll for new events \n\
+ Press 2 to exit \n "
+ program_exit = False
+ while program_exit == False:
+ user_input = input(input_text)
+ if (user_input == "1"):
+ last_document_incarnation = advanced_sample(last_document_incarnation)
+ elif (user_input == "2"):
+ program_exit = True
if __name__ == '__main__': main()
if __name__ == '__main__':
## Next steps - Review the Scheduled Events code samples in the [Azure Instance Metadata Scheduled Events GitHub repository](https://github.com/Azure-Samples/virtual-machines-scheduled-events-discover-endpoint-for-non-vnet-vm).
+- Review the Node.js Scheduled Events code samples in [Azure Samples GitHub repository](https://github.com/Azure/vm-scheduled-events).
- Read more about the APIs that are available in the [Instance Metadata Service](instance-metadata-service.md). - Learn about [planned maintenance for Linux virtual machines in Azure](../maintenance-and-updates.md?bc=/azure/virtual-machines/linux/breadcrumb/toc.json&toc=/azure/virtual-machines/linux/toc.json).
+- Learn how to log scheduled events by using Azure Event Hubs in the [Azure Samples GitHub repository](https://github.com/Azure-Samples/virtual-machines-python-scheduled-events-central-logging).
virtual-machines Use Remote Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/use-remote-desktop.md
# Install and configure xrdp to use Remote Desktop with Ubuntu
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
Linux virtual machines (VMs) in Azure are usually managed from the command line using a secure shell (SSH) connection. When new to Linux, or for quick troubleshooting scenarios, the use of remote desktop may be easier. This article details how to install and configure a desktop environment ([xfce](https://www.xfce.org)) and remote desktop ([xrdp](http://xrdp.org)) for your Linux VM running Ubuntu.
-The article was writen and tested using an Ubuntu 18.04 VM.
+The article was written and tested using an Ubuntu 18.04 VM.
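As a rough sketch of the commands involved on Ubuntu 18.04 (the package and service names below are the commonly used ones, not quoted from the article), the desktop environment and xrdp can be installed from an SSH session:

```bash
# Install the xfce desktop environment and the xrdp remote desktop server
sudo apt-get update
sudo apt-get -y install xfce4 xrdp

# Have xrdp start an xfce session for the connecting user, then restart the service
echo xfce4-session >~/.xsession
sudo systemctl enable xrdp
sudo systemctl restart xrdp
```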
## Prerequisites
sudo passwd azureuser
## Create a Network Security Group rule for Remote Desktop traffic To allow Remote Desktop traffic to reach your Linux VM, a network security group rule needs to be created that allows TCP on port 3389 to reach your VM. For more information about network security group rules, see [What is a network security group?](../../virtual-network/network-security-groups-overview.md) You can also [use the Azure portal to create a network security group rule](../windows/nsg-quickstart-portal.md).
+### [Azure CLI](#tab/azure-cli)
+ The following example creates a network security group rule with [az vm open-port](/cli/azure/vm#az-vm-open-port) on port *3389*. From the Azure CLI, not the SSH session to your VM, open the following network security group rule: ```azurecli az vm open-port --resource-group myResourceGroup --name myVM --port 3389 ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+The following example adds a network security group rule with [Add-AzNetworkSecurityRuleConfig](/powershell/module/az.network/add-aznetworksecurityruleconfig) on port *3389* to the existing network security group. From Azure PowerShell, not the SSH session to your VM, get the existing network security group named *myVMnsg*:
+
+```azurepowershell
+$nsg = Get-AzNetworkSecurityGroup -ResourceGroupName myResourceGroup -Name myVMnsg
+```
+
+Add an RDP network security rule named *open-port-3389* to your `$nsg` network security group and update the network security group with [Set-AzNetworkSecurityGroup](/powershell/module/az.network/set-aznetworksecuritygroup) in order for your changes to take effect:
+
+```azurepowershell
+$params = @{
+ Name = 'open-port-3389'
+ Description = 'Allow RDP'
+ NetworkSecurityGroup = $nsg
+ Access = 'Allow'
+ Protocol = 'TCP'
+ Direction = 'Inbound'
+ Priority = 100
+ SourceAddressPrefix = 'Internet'
+ SourcePortRange = '*'
+ DestinationAddressPrefix = '*'
+ DestinationPortRange = '3389'
+}
+
+Add-AzNetworkSecurityRuleConfig @params | Set-AzNetworkSecurityGroup
+```
++ ## Connect your Linux VM with a Remote Desktop client
-Open your local remote desktop client and connect to the IP address or DNS name of your Linux VM.
+Open your local remote desktop client and connect to the IP address or DNS name of your Linux VM.
:::image type="content" source="media/use-remote-desktop/remote-desktop.png" alt-text="Screenshot of the remote desktop client.":::
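If you don't have the address handy, one way to look it up is with the Azure CLI. This is a sketch with placeholder resource names:

```azurecli
# Show the public IP address of the VM to use in the remote desktop client
az vm show --show-details --resource-group myResourceGroup --name myVM --query publicIps --output tsv
```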
virtual-machines Lsv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/lsv2-series.md
Title: Lsv2-series - Azure Virtual Machines
description: Specifications for the Lsv2-series VMs. -+ Last updated 02/03/2020-+ # Lsv2-series
virtual-machines M Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/m-series.md
Title: M-series - Azure Virtual Machines description: Specifications for the M-series VMs.-+ -+ Last updated 03/31/2020-+ # M-series
virtual-machines Maintenance And Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/maintenance-and-updates.md
In the rare case where VMs need to be rebooted for planned maintenance, you'll b
During the *self-service phase*, which typically lasts four weeks, you start the maintenance on your VMs. As part of the self-service, you can query each VM to see its status and the result of your last maintenance request.
-When you start self-service maintenance, your VM is redeployed to an already updated node. Because the VM reboots, the temporary disk is lost and dynamic IP addresses associated with the virtual network interface are updated.
+When you start self-service maintenance, your VM is redeployed to an already updated node. Because the VM is redeployed, the temporary disk is lost and dynamic IP addresses associated with the virtual network interface are updated.
If an error arises during self-service maintenance, the operation stops, the VM isn't updated, and you get the option to retry the self-service maintenance.
virtual-machines Msv2 Mdsv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/msv2-mdsv2-series.md
Title: Msv2/Mdsv2 Medium Memory Series - Azure Virtual Machines description: Specifications for the Msv2-series VMs.-+ -+ Last updated 04/07/2020-+ # Msv2 and Mdsv2-series Medium Memory
virtual-machines Mv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/mv2-series.md
Title: Mv2-series - Azure Virtual Machines description: Specifications for the Mv2-series VMs.-+ -+ Last updated 04/07/2020-+ # Mv2-series
virtual-machines N Series Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/n-series-migration.md
Title: Migration Guide for GPU Compute Workloads in Azure description: NC, ND, NCv2-series migration guide. -+ Last updated 08/15/2020
virtual-machines Nc A100 V4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nc-a100-v4-series.md
description: Specifications for the NC A100 v4-series Azure VMs. These VMs inclu
-+ Last updated 03/01/2022
virtual-machines Nc Series Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nc-series-retirement.md
Title: NC-series retirement
description: NC-series retirement by August 31, 2023 -+ Last updated 09/01/2021
virtual-machines Nc Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nc-series.md
Title: NC-series - Azure Virtual Machines
description: Specifications for the NC-series VMs. -+ Last updated 02/03/2020-+ # NC-series
virtual-machines Nct4 V3 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nct4-v3-series.md
Title: NCas T4 v3-series description: Specifications for the NCas T4 v3-series VMs. -+ Last updated 01/12/2021
virtual-machines Ncv2 Series Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ncv2-series-retirement.md
Title: NCv2-series retirement
description: NCv2-series retirement by August 31, 2023 -+ Last updated 09/01/2021
virtual-machines Ncv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ncv2-series.md
Title: NCv2-series - Azure Virtual Machines
description: Specifications for the NCv2-series VMs. -+ Last updated 02/03/2020-+ # NCv2-series
virtual-machines Ncv3 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ncv3-series.md
Title: NCv3-series - Azure Virtual Machines
description: Specifications for the NCv3-series VMs. -+ Last updated 02/03/2020-+ # NCv3-series
virtual-machines Nd Series Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nd-series-retirement.md
Title: ND-series retirement
description: ND-series retirement by August 31, 2023 -+ Last updated 09/01/2021
virtual-machines Nd Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nd-series.md
Title: ND-series - Azure Virtual Machines
description: Specifications for the ND-series VMs. -+ Last updated 02/03/2020-+ # ND-series
virtual-machines Nda100 V4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nda100-v4-series.md
Title: ND A100 v4-series description: Specifications for the ND A100 v4-series VMs. -+
virtual-machines Ndm A100 V4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ndm-a100-v4-series.md
description: Specifications for the NDm A100 v4-series VMs.
-+ Last updated 10/26/2021
virtual-machines Ndv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ndv2-series.md
Title: NDv2-series
description: Specifications for the NDv2-series VMs. -+ Last updated 02/03/2020-+ # Updated NDv2-series
virtual-machines Np Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/np-series.md
Title: NP-series - Azure Virtual Machines
description: Specifications for the NP-series VMs. -+ Last updated 02/09/2021
virtual-machines Nv Series Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nv-series-migration-guide.md
Title: NV series migration guide
description: NV series migration guide -+ Last updated 01/12/2020
virtual-machines Nv Series Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nv-series-retirement.md
Title: NV series retirement
description: NV series retirement starting September 1, 2021 -+ Last updated 01/12/2020
virtual-machines Nv Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nv-series.md
Title: NV-series - Azure Virtual Machines
description: Specifications for the NV-series VMs. -+ Last updated 03/29/2022-+ # NV-series
virtual-machines Nva10v5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nva10v5-series.md
Title: NV A10 v5-series
description: Specifications for the NV A10 v5-series VMs. -+ Last updated 02/01/2022
virtual-machines Nvv3 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nvv3-series.md
description: Specifications for the NVv3-series VMs.
-+ Last updated 02/03/2020-+ # NVv3-series
virtual-machines Nvv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nvv4-series.md
Title: NVv4-series
description: Specifications for the NVv4-series VMs. -+ Last updated 01/12/2020
virtual-machines Sizes B Series Burstable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes-b-series-burstable.md
Title: B-series burstable - Azure Virtual Machines description: Describes the B-series of burstable Azure VM sizes. -+
virtual-machines Sizes Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes-compute.md
Title: Azure VM sizes - Compute optimized | Microsoft Docs
description: Lists the different compute optimized sizes available for virtual machines in Azure. Lists information about the number of vCPUs, data disks, and NICs as well as storage throughput and network bandwidth for sizes in this series. -+ Last updated 02/03/2020
virtual-machines Sizes Field Programmable Gate Arrays https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes-field-programmable-gate-arrays.md
Title: Azure virtual machine sizes for field-programmable gate arrays (FPGA)
description: Lists the different FPGA optimized sizes available for virtual machines in Azure. Lists information about the number of vCPUs, data disks and NICs as well as storage throughput and network bandwidth for sizes in this series. -+ Last updated 02/03/2020
virtual-machines Sizes General https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes-general.md
Title: Azure VM sizes - General purpose | Microsoft Docs
description: Lists the different general purpose sizes available for virtual machines in Azure. Lists information about the number of vCPUs, data disks, and NICs as well as storage throughput and network bandwidth for sizes in this series. -+ Last updated 10/20/2021
virtual-machines Sizes Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes-gpu.md
Title: Azure VM sizes - GPU | Microsoft Docs
description: Lists the different GPU optimized sizes available for virtual machines in Azure. Lists information about the number of vCPUs, data disks and NICs as well as storage throughput and network bandwidth for sizes in this series. -+ Last updated 02/03/2020-+ # GPU optimized virtual machine sizes
virtual-machines Sizes Hpc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes-hpc.md
Title: Azure VM sizes - HPC | Microsoft Docs description: Lists the different sizes available for high performance computing virtual machines in Azure. Lists information about the number of vCPUs, data disks and NICs as well as storage throughput and network bandwidth for sizes in this series. -+ Last updated 03/19/2021
virtual-machines Sizes Memory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes-memory.md
tags: azure-resource-manager,azure-service-management
keywords: VM isolation,isolated VM,isolation,isolated ms.assetid: -+ Last updated 04/04/2022
virtual-machines Sizes Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes-storage.md
Title: Azure VM sizes - Storage | Microsoft Docs description: Lists the different storage optimized sizes available for virtual machines in Azure. Lists information about the number of vCPUs, data disks, and NICs as well as storage throughput and network bandwidth for sizes in this series.-+ documentationcenter: '' Last updated 02/03/2020-+
virtual-machines Sizes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes.md
Title: VM sizes description: Lists the different sizes available for virtual machines in Azure.-+ Last updated 04/04/2022-+ # Sizes for virtual machines in Azure
virtual-machines Vm Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/vm-applications.md
Previously updated : 02/03/2022 Last updated : 05/18/2022
Application packages provide benefits over other deployment and packaging method
- Support for virtual machines, and both flexible and uniform scale sets - If you have Network Security Group (NSG) rules applied on your VM or scale set, downloading the packages from an internet repository might not be possible. And with storage accounts, downloading packages onto locked-down VMs would require setting up private links.-- VM applications can be used with the [DeployIfNotExists](../governance/policy/concepts/effects.md) policy.+ ## What are VM app packages?
The VM application packages use multiple resource types:
| Resource | Description| |-|| | **Azure compute gallery** | A gallery is a repository for managing and sharing application packages. Users can share the gallery resource and all the child resources will be shared automatically. The gallery name must be unique per subscription. For example, you may have one gallery to store all your OS images and another gallery to store all your VM applications.|
-| **VM application** | The definition of your VM application. It is a *logical* resource that stores the common metadata for all the versions under it. For example, you may have an application definition for Apache Tomcat and have multiple versions within it. |
+| **VM application** | The definition of your VM application. It's a *logical* resource that stores the common metadata for all the versions under it. For example, you may have an application definition for Apache Tomcat and have multiple versions within it. |
| **VM Application version** | The deployable resource. You can globally replicate your VM application versions to target regions closer to your VM infrastructure. The VM Application Version must be replicated to a region before it may be deployed on a VM in that region. |
The VM application packages use multiple resource types:
- **Retrying failed installations**: Currently, the only way to retry a failed installation is to remove the application from the profile, then add it back. -- **Only 5 applications per VM**: No more than 5 applications may be deployed to a VM at any point.
+- **Only 5 applications per VM**: No more than five applications may be deployed to a VM at any point.
-- **1GB application size**: The maximum file size of an application version is 1GB.
+- **1GB application size**: The maximum file size of an application version is 1 GB.
- **No guarantees on reboots in your script**: If your script requires a reboot, the recommendation is to place that application last during deployment. While the code attempts to handle reboots, it may fail.
The VM application packages use multiple resource types:
## Cost
-There is no extra charge for using VM Application Packages, but you will be charged for the following resources:
+There's no extra charge for using VM Application Packages, but you'll be charged for the following resources:
- Storage costs of storing each package and any replicas. -- Network egress charges for replication of the first image version from the source region to the replicated regions. Subsequent replicas are handled within the region, so there are no additional charges.
+- Network egress charges for replication of the first image version from the source region to the replicated regions. Subsequent replicas are handled within the region, so there are no extra charges.
For more information on network egress, see [Bandwidth pricing](https://azure.microsoft.com/pricing/details/bandwidth/).
VM application versions are the deployable resource. Versions are defined with t
- Link to the application package file in a storage account - Install string for installing the application - Remove string to show how to properly remove the app-- Package file name to use when it is downloaded to the VM
+- Package file name to use when it's downloaded to the VM
- Configuration file name to be used to configure the app on the VM - A link to the configuration file for the VM application - Update string for how to update the VM application to a newer version-- End-of-life date. End-of-life dates are informational; you will still be able to deploy a VM application versions past the end-of-life date.
+- End-of-life date. End-of-life dates are informational; you'll still be able to deploy VM application versions past the end-of-life date.
- Exclude from latest. You can keep a version from being used as the latest version of the application. - Target regions for replication - Replica count per region
The install/update/remove commands should be written assuming the application pa
## File naming
-During the preview, when the application file gets downloaded to the VM, the file name is the same as the name you use when you create the VM application. For example, if I name my VM application `myApp`, the file that will be downloaded to the VM will also be named `myApp`, regardless of what the file name is used in the storage account. If you VM application also has a configuration file, that file is the name of the application with `_config` appended. If `myApp` has a configuration file, it will be named `myApp_config`.
+During the preview, when the application file gets downloaded to the VM, the file name is the same as the name you use when you create the VM application. For example, if I name my VM application `myApp`, the file that will be downloaded to the VM will also be named `myApp`, regardless of the file name used in the storage account. If your VM application also has a configuration file, that file is the name of the application with `_config` appended. If `myApp` has a configuration file, it will be named `myApp_config`.
-For example, if I name my VM application `myApp` when I create it in the Gallery, but it is stored as `myApplication.exe` in the storage account, when it gets downloaded to the VM the file name will be `myApp`. My install string should start by renaming the file to be whatever the it needs to be to run on the VM (like myApp.exe).
+For example, if I name my VM application `myApp` when I create it in the Gallery, but it's stored as `myApplication.exe` in the storage account, when it gets downloaded to the VM the file name will be `myApp`. My install string should start by renaming the file to be whatever it needs to be to run on the VM (like myApp.exe).
-The install, update, and remove commands must be written with this in mind.
+The install, update, and remove commands must be written with file naming in mind.
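For Linux, a hypothetical install string that follows the same renaming pattern might look like the following; the application name `myApp` and the target directory are illustrative only:

```bash
# The package is downloaded as 'myApp' with no extension; rename it, then extract it
mv myApp myApp.tar.gz
mkdir -p /opt/myapp
tar -C /opt/myapp -xzf myApp.tar.gz
```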
## Command interpreter
The default command interpreters are:
- Linux: `/bin/sh` - Windows: `cmd.exe`
-It's possible to use a different interpreter, as long as it is installed on the machine, by calling the executable and passing the command to it. For example, to have your command run in PowerShell on Windows instead of cmd, you can pass `powershell.exe -Command '<powershell commmand>'`
+It's possible to use a different interpreter, as long as it's installed on the machine, by calling the executable and passing the command to it. For example, to have your command run in PowerShell on Windows instead of cmd, you can pass `powershell.exe -Command '<powershell command>'`
## How updates are handled
-When you update a application version, the update command you provided during deployment will be used. If the updated version doesn't have an update command, then the current version will be removed and the new version will be installed.
+When you update an application version, the update command you provided during deployment will be used. If the updated version doesn't have an update command, then the current version will be removed and the new version will be installed.
Update commands should be written with the expectation that it could be updating from any older version of the VM application. ## Tips for creating VM Applications on Linux
-3rd party applications for Linux can be packaged in a few ways. Let's explore how to handle creating the install commands for some of the most common.
+Third party applications for Linux can be packaged in a few ways. Let's explore how to handle creating the install commands for some of the most common.
### .tar and .gz files
-These are compressed archives and can simply be extracted to a desired location. Check the installation instructions for the original package to in case they need to be extracted to a specific location. If .tar.gz file contains source code, refer to the instructions for the package for how to install from source.
+These files are compressed archives and can be extracted to a desired location. Check the installation instructions for the original package in case the files need to be extracted to a specific location. If the .tar.gz file contains source code, refer to the instructions for the package for how to install from source.
Example to install command to install `golang` on a Linux machine:
rm -rf /usr/local/go
``` ### .deb, .rpm, and other platform specific packages
-You can download individual packages for platform specific package managers, but they usually do not contain all the dependencies. For these files, you must also include all dependencies in the application package, or have the system package manager download the dependencies through the repositories that are available to the VM. If you are working with a VM with restricted internet access, you must package all the dependencies yourself.
+You can download individual packages for platform specific package managers, but they usually don't contain all the dependencies. For these files, you must also include all dependencies in the application package, or have the system package manager download the dependencies through the repositories that are available to the VM. If you're working with a VM with restricted internet access, you must package all the dependencies yourself.
Figuring out the dependencies can be a bit tricky. There are third party tools that can show you the entire dependency tree.
-On Ubuntu, you can run `apt-get install <name> --simulate` to show all the packages that will be installed for the `apt-get install <name>` command. Then you can use that output to download all .deb files to create an archive that can be used as the application package. The downside to this method is that is doesn't show the dependencies that are already installed on the VM.
+On Ubuntu, you can run `apt-get install <name> --simulate` to show all the packages that will be installed for the `apt-get install <name>` command. Then you can use that output to download all .deb files to create an archive that can be used as the application package. The downside to this method is that it doesn't show the dependencies that are already installed on the VM.
Example, to create a VM application package to install PowerShell for Ubuntu, run the command `apt-get install powershell --simulate` on a new Ubuntu VM. Check the output of the line **The following NEW packages will be installed** which lists the following packages: - `liblttng-ust-ctl4`
dpkg -i <appname> || apt --fix-broken install -y
## Tips for creating VM Applications on Windows
-Most 3rd party applications in Windows are available as .exe or .msi installers. Some are also available as extract and run zip files. Let us look at the best practices for each of them.
+Most third party applications in Windows are available as .exe or .msi installers. Some are also available as extract and run zip files. Let us look at the best practices for each of them.
### .exe installer
-Installer executables typically launch a user interface (UI) and require someone to click through the UI. If the installer supports a silent mode parameter, it should be included in your installation string.
+Installer executables typically launch a user interface (UI) and require someone to step through the UI. If the installer supports a silent mode parameter, it should be included in your installation string.
Cmd.exe also expects executable files to have the extension .exe, so you need to rename the file to have the .exe extension.
If I wanted to create a VM application package for myApp.exe, which ships as an
"move .\\myApp .\\myApp.exe & myApp.exe /S -config myApp_config" ```
-If the installer executable file doesn't support an uninstall parameter you can sometimes look up the registry on a test machine to know here the uninstaller is located.
+If the installer executable file doesn't support an uninstall parameter, you can sometimes look up the registry on a test machine to find where the uninstaller is located.
In the registry, the uninstall string is stored in `Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\<installed application name>\UninstallString` so I would use the contents as my remove command:
rmdir /S /Q C:\\myapp
## Troubleshooting during preview
-During the preview, the VM application extension always returns a success regardless of whether any VM app failed while being installed/updated/removed. The VM Application extension will only report the extension status as failure when there is a problem with the extension or the underlying infrastructure. To know whether a particular VM application was successfully added to the VM instance, please check the message of the VMApplication extension.
+During the preview, the VM application extension always returns a success regardless of whether any VM app failed while being installed/updated/removed. The VM Application extension will only report the extension status as failure when there's a problem with the extension or the underlying infrastructure. To know whether a particular VM application was successfully added to the VM instance, check the message of the VM Application extension.
To learn more about getting the status of VM extensions, see [Virtual machine extensions and features for Linux](extensions/features-linux.md#view-extension-status) and [Virtual machine extensions and features for Windows](extensions/features-windows.md#view-extension-status).
Get-AzVmss -name <VMSS name> -ResourceGroupName <resource group name> -Status |
| Current VM Application Version {name} was deprecated at {date}. | You tried to deploy a VM Application version that has already been deprecated. Try using `latest` instead of specifying a specific version. |
| Current VM Application Version {name} supports OS {OS}, while current OSDisk's OS is {OS}. | You tried to deploy a Linux application to Windows instance or vice versa. |
| The maximum number of VM applications (max=5, current={count}) has been exceeded. Use fewer applications and retry the request. | We currently only support five VM applications per VM or scale set. |
-| More than one VMApplication was specified with the same packageReferenceId. | The same application was specified more than once. |
+| More than one VM Application was specified with the same packageReferenceId. | The same application was specified more than once. |
| Subscription not authorized to access this image. | The subscription doesn't have access to this application version. |
| Storage account in the arguments doesn't exist. | There are no applications for this subscription. |
-| The platform image {image} is not available. Verify that all fields in the storage profile are correct. For more details about storage profile information, please refer to https://aka.ms/storageprofile. | The application doesn't exist. |
+| The platform image {image} isn't available. Verify that all fields in the storage profile are correct. For more details about storage profile information, please refer to https://aka.ms/storageprofile. | The application doesn't exist. |
| The gallery image {image} is not available in {region} region. Please contact image owner to replicate to this region, or change your requested region. | The gallery application version exists, but it was not replicated to this region. |
| The SAS is not valid for source uri {uri}. | A `Forbidden` error was received from storage when attempting to retrieve information about the url (either mediaLink or defaultConfigurationLink). |
| The blob referenced by source uri {uri} doesn't exist. | The blob provided for the mediaLink or defaultConfigurationLink properties doesn't exist. |
| The gallery application version url {url} cannot be accessed due to the following error: remote name not found. Ensure that the blob exists and that it's either publicly accessible or is a SAS url with read privileges. | The most likely case is that a SAS uri with read privileges was not provided. |
| The gallery application version url {url} cannot be accessed due to the following error: {error description}. Ensure that the blob exists and that it's either publicly accessible or is a SAS url with read privileges. | There was an issue with the storage blob provided. The error description will provide more information. |
| Operation {operationName} is not allowed on {application} since it is marked for deletion. You can only retry the Delete operation (or wait for an ongoing one to complete). | Attempt to update an application that's currently being deleted. |
-| The value {value} of parameter 'galleryApplicationVersion.properties.publishingProfile.replicaCount' is out of range. The value must be between 1 and 3, inclusive. | Only between 1 and 3 replicas are allowed for VMApplication versions. |
+| The value {value} of parameter 'galleryApplicationVersion.properties.publishingProfile.replicaCount' is out of range. The value must be between 1 and 3, inclusive. | Only between 1 and 3 replicas are allowed for VM Application versions. |
| Changing property 'galleryApplicationVersion.properties.publishingProfile.manageActions.install' is not allowed. (or update, delete) | It is not possible to change any of the manage actions on an existing VmApplication. A new VmApplication version must be created. |
| Changing property ' galleryApplicationVersion.properties.publishingProfile.settings.packageFileName ' is not allowed. (or configFileName) | It is not possible to change any of the settings, such as the package file name or config file name. A new VmApplication version must be created. |
| The blob referenced by source uri {uri} is too big: size = {size}. The maximum blob size allowed is '1 GB'. | The maximum size for a blob referred to by mediaLink or defaultConfigurationLink is currently 1 GB. |
virtual-machines Download Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/download-vhd.md
In this article, you learn how to download a Windows virtual hard disk (VHD) fil
## Optional: Generalize the VM
-If you want to use the VHD as an [image](tutorial-custom-images.md) to create other VMs, you should use [Sysprep](/windows-hardware/manufacture/desktop/sysprep--generalize--a-windows-installation) to generalize the operating system. Otherwise, you will have to make a copy the disk for each VM you want to create.
+If you want to use the VHD as an [image](tutorial-custom-images.md) to create other VMs, you should use [Sysprep](/windows-hardware/manufacture/desktop/sysprep--generalize--a-windows-installation) to generalize the operating system. Otherwise, you will have to make a copy of the disk for each VM you want to create.
To use the VHD as an image to create other VMs, generalize the VM.
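As a rough sketch with the Az PowerShell module, the flow is to run Sysprep inside the guest, let the VM shut down, then deallocate it and mark it as generalized before capturing or downloading the disk. The resource names below are placeholders.

```powershell
# Inside the guest, run: C:\Windows\System32\Sysprep\sysprep.exe /generalize /shutdown /oobe
# Then, from a management workstation, deallocate and flag the VM as generalized.
Stop-AzVM -ResourceGroupName 'myResourceGroup' -Name 'myVM' -Force
Set-AzVM -ResourceGroupName 'myResourceGroup' -Name 'myVM' -Generalized
```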
virtual-machines N Series Amd Driver Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/n-series-amd-driver-setup.md
description: How to set up AMD GPU drivers for N-series VMs running Windows Serv
-+
virtual-machines N Series Driver Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/n-series-driver-setup.md
description: How to set up NVIDIA GPU drivers for N-series VMs running Windows S
-+ Last updated 09/24/2018
virtual-machines Run Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/run-command.md
The Run Command feature uses the virtual machine (VM) agent to run PowerShell sc
## Benefits
-You can access your virtual machines in multiple ways. Run Command can run scripts on your virtual machines remotely by using the VM agent. You use Run Command through the Azure portal, [REST API](/rest/api/compute/virtual-machines-run-commands/run-command), or [PowerShell](/powershell/module/az.compute/invoke-azvmruncommand) for Windows VMs.
+You can access your virtual machines in multiple ways. Run Command can run scripts on your virtual machines remotely by using the VM agent. You use Run Command through the Azure portal, [REST API](/rest/api/compute/virtual-machine-run-commands), or [PowerShell](/powershell/module/az.compute/invoke-azvmruncommand) for Windows VMs.
This capability is useful in all scenarios where you want to run a script within a virtual machine. It's one of the only ways to troubleshoot and remediate a virtual machine that doesn't have the RDP or SSH port open because of improper network or administrative user configuration.
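A minimal hedged example with the Az PowerShell module, assuming a hypothetical local script `./CheckService.ps1` that you want to execute inside the Windows VM:

```powershell
# Run a local PowerShell script inside the VM through the VM agent
# and print whatever the script wrote to standard output.
$result = Invoke-AzVMRunCommand -ResourceGroupName 'myResourceGroup' -VMName 'myVM' `
    -CommandId 'RunPowerShellScript' -ScriptPath './CheckService.ps1'
$result.Value[0].Message
```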
virtual-machines Storage Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/storage-performance.md
Title: Optimize performance on Azure Lsv2-series virtual machines
description: Learn how to optimize performance for your solution on the Lsv2-series virtual machines using a Windows example. -+ Last updated 04/17/2019
virtual-machines Oracle Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-design.md
**Applies to:** :heavy_check_mark: Linux VMs
-Suppose you're planning to migrate an Oracle database from an on-premises location to Azure. You have the [Diagnostics Pack](https://docs.oracle.com/cd/E11857_01/license.111/e11987/database_management.htm) or the [Automatic Workload Repository](https://www.oracle.com/technetwork/database/manageability/info/other-manageability/wp-self-managing-database18c-4412450.pdf) for the Oracle database you're looking to migrate. Further, you have an understanding of the various metrics in Oracle, and you have a baseline understanding of application performance and platform utilization.
+Azure is home for all Oracle workloads, including those that need to continue to run optimally in Azure with Oracle. If you have the [Diagnostic Pack](https://www.oracle.com/technetwork/database/enterprise-edition/overview/diagnostic-pack-11g-datasheet-1-129197.pdf) or the [Automatic Workload Repository](https://docs.oracle.com/en-us/iaas/operations-insights/doc/analyze-automatic-workload-repository-awr-performance-data.html), you can use this data to assess the Oracle workload, size the resource needs, and migrate it to Azure. The various metrics provided by Oracle in these reports can provide a baseline understanding of application performance and platform utilization.
-This article helps you understand how to optimize your Oracle deployment in Azure. You explore performance tuning options for an Oracle database in an Azure environment. And you develop clear expectations about the limits of physical tuning through architecture, the advantages of logical tuning of database code, and the overall database design.
+This article helps you understand how to size an Oracle workload to run in Azure and explore the best architecture solutions to provide optimal cloud performance. The data provided by Oracle in the Statspack, and even more so in its descendant, the AWR, will assist you in developing clear expectations about the limits of physical tuning through architecture, the advantages of logical tuning of database code, and the overall database design.
## Differences between the two environments
virtual-machines Redhat Imagelist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/redhat-imagelist.md
Title: Red Hat Enterprise Linux images available in Azure description: Learn about Red Hat Enterprise Linux images in Microsoft Azure-+ Last updated 04/16/2020-+ # Red Hat Enterprise Linux (RHEL) images available in Azure **Applies to:** :heavy_check_mark: Linux VMs
-Azure offers a variety of RHEL images for different use cases.
+Azure offers various RHEL images for different use cases.
> [!NOTE]
> All RHEL images are available in Azure public and Azure Government clouds. They are not available in Azure China clouds.

## List of RHEL images
-This is a list of RHEL images available in Azure. Unless otherwise stated, all images are LVM-partitioned and attached to regular RHEL repositories (not EUS, not E4S). The following images are currently available for general use:
+This section provides a list of RHEL images available in Azure. Unless otherwise stated, all images are LVM-partitioned and attached to regular RHEL repositories (not EUS, not E4S). The following images are currently available for general use:
> [!NOTE]
> RAW images are no longer being produced in favor of LVM-partitioned images. LVM provides several advantages over the older raw (non-LVM) partitioning scheme, including significantly more flexible partition resizing options.

Offer| SKU | Partitioning | Provisioning | Notes
:-|:-|:-|:-|:--
-RHEL | 6.7 | RAW | Linux Agent | Extended Lifecycle Support available from December 1st. [More details here.](redhat-extended-lifecycle-support.md)
-| | 6.8 | RAW | Linux Agent | Extended Lifecycle Support available from December 1st. [More details here.](redhat-extended-lifecycle-support.md)
-| | 6.9 | RAW | Linux Agent | Extended Lifecycle Support available from December 1st. [More details here.](redhat-extended-lifecycle-support.md)
-| | 6.10 | RAW | Linux Agent | Extended Lifecycle Support available from December 1st. [More details here.](redhat-extended-lifecycle-support.md)
+RHEL | 6.7 | RAW | Linux Agent | Extended Lifecycle Support available. [More details here.](redhat-extended-lifecycle-support.md)
+| | 6.8 | RAW | Linux Agent | Extended Lifecycle Support available. [More details here.](redhat-extended-lifecycle-support.md)
+| | 6.9 | RAW | Linux Agent | Extended Lifecycle Support available. [More details here.](redhat-extended-lifecycle-support.md)
+| | 6.10 | RAW | Linux Agent | Extended Lifecycle Support available. [More details here.](redhat-extended-lifecycle-support.md)
| | 7-RAW | RAW | Linux Agent | RHEL 7.x family of images. <br> Attached to regular repositories by default (not EUS).
-| | 7-LVM | LVM | Linux Agent | RHEL 7.x family of images. <br> Attached to regular repositories by default (not EUS). If you are looking for a standard RHEL image to deploy, use this set of images and/or its Generation 2 counterpart.
-| | 7lvm-gen2| LVM | Linux Agent | Generation 2, RHEL 7.x family of images. <br> Attached to regular repositories by default (not EUS). If you are looking for a standard RHEL image to deploy, use this set of images and/or its Generation 1 counterpart.
+| | 7-LVM | LVM | Linux Agent | RHEL 7.x family of images. <br> Attached to regular repositories by default (not EUS). If you're looking for a standard RHEL image to deploy, use this set of images and/or its Generation 2 counterpart.
+| | 7lvm-gen2| LVM | Linux Agent | Generation 2, RHEL 7.x family of images. <br> Attached to regular repositories by default (not EUS). If you're looking for a standard RHEL image to deploy, use this set of images and/or its Generation 1 counterpart.
| | 7-RAW-CI | RAW-CI | cloud-init | RHEL 7.x family of images. <br> Attached to regular repositories by default (not EUS).
| | 7.2 | RAW | Linux Agent |
| | 7.3 | RAW | Linux Agent |
RHEL | 6.7 | RAW | Linux Agent | Extended Lifecycle Support ava
| | 82gen2 | LVM | Linux Agent | Hyper-V Generation 2 - Attached to EUS repositories as of November 2020.
| | 8.3 | LVM | Linux Agent | Attached to regular repositories (EUS unavailable for RHEL 8.3)
| | 83-gen2 | LVM | Linux Agent |Hyper-V Generation 2 - Attached to regular repositories (EUS unavailable for RHEL 8.3)
-RHEL-SAP | 7.4 | LVM | Linux Agent | RHEL 7.4 for SAP HANA and Business Apps. Attached to E4S repositories, will charge a premium for SAP and RHEL as well as the base compute fee.
-| | 74sap-gen2| LVM | Linux Agent | RHEL 7.4 for SAP HANA and Business Apps. Generation 2 image. Attached to E4S repositories, will charge a premium for SAP and RHEL as well as the base compute fee.
-| | 7.5 | LVM | Linux Agent | RHEL 7.5 for SAP HANA and Business Apps. Attached to E4S repositories, will charge a premium for SAP and RHEL as well as the base compute fee.
-| | 75sap-gen2| LVM | Linux Agent | RHEL 7.5 for SAP HANA and Business Apps. Generation 2 image. Attached to E4S repositories, will charge a premium for SAP and RHEL as well as the base compute fee.
-| | 7.6 | LVM | Linux Agent | RHEL 7.6 for SAP HANA and Business Apps. Attached to E4S repositories, will charge a premium for SAP and RHEL as well as the base compute fee.
-| | 76sap-gen2| LVM | Linux Agent | RHEL 7.6 for SAP HANA and Business Apps. Generation 2 image. Attached to E4S repositories, will charge a premium for SAP and RHEL as well as the base compute fee.
-| | 7.7 | LVM | Linux Agent | RHEL 7.7 for SAP HANA and Business Apps. Attached to E4S repositories, will charge a premium for SAP and RHEL as well as the base compute fee.
-RHEL-SAP-HANA (To be removed in November 2020) | 6.7 | RAW | Linux Agent | RHEL 6.7 for SAP HANA. Outdated in favor of the RHEL-SAP images. This image will be removed in November 2020. More details about Red Hat's SAP cloud offerings are available at [SAP offerings on certified cloud providers](https://access.redhat.com/articles/3751271).
-| | 7.2 | LVM | Linux Agent | RHEL 7.2 for SAP HANA. Outdated in favor of the RHEL-SAP images. This image will be removed in November 2020. More details about Red Hat's SAP cloud offerings are available at [SAP offerings on certified cloud providers](https://access.redhat.com/articles/3751271).
-| | 7.3 | LVM | Linux Agent | RHEL 7.3 for SAP HANA. Outdated in favor of the RHEL-SAP images. This image will be removed in November 2020. More details about Red Hat's SAP cloud offerings are available at [SAP offerings on certified cloud providers](https://access.redhat.com/articles/3751271).
+| | 8.4 | LVM | Linux Agent | Attached to EUS repositories
+| | 84-gen2 | LVM | Linux Agent |Hyper-V Generation 2 - Attached to EUS repositories
+| | 8.5 | LVM | Linux Agent | Attached to regular repositories (EUS unavailable for RHEL 8.5)
+| | 85-gen2 | LVM | Linux Agent |Hyper-V Generation 2 - Attached to regular repositories (EUS unavailable for RHEL 8.5)
+| | 8.6 | LVM | Linux Agent | Attached to EUS repositories
+| | 86-gen2 | LVM | Linux Agent |Hyper-V Generation 2 - Attached to EUS repositories
RHEL-SAP-APPS | 6.8 | RAW | Linux Agent | RHEL 6.8 for SAP Business Applications. Outdated in favor of the RHEL-SAP images.
| | 7.3 | LVM | Linux Agent | RHEL 7.3 for SAP Business Applications. Outdated in favor of the RHEL-SAP images.
-| | 7.4 | LVM | Linux Agent | RHEL 7.4 for SAP Business Applications.
-| | 7.6 | LVM | Linux Agent | RHEL 7.6 for SAP Business Applications.
-| | 7.7 | LVM | Linux Agent | RHEL 7.7 for SAP Business Applications.
+| | 7.4 | LVM | Linux Agent | RHEL 7.4 for SAP Business Applications
+| | 7.6 | LVM | Linux Agent | RHEL 7.6 for SAP Business Applications
+| | 7.7 | LVM | Linux Agent | RHEL 7.7 for SAP Business Applications
| | 77-gen2 | LVM | Linux Agent | RHEL 7.7 for SAP Business Applications. Generation 2 image
-| | 8.1 | LVM | Linux Agent | RHEL 8.1 for SAP Business Applications.
-| | 81-gen2 | LVM | Linux Agent | RHEL 8.1 for SAP Business Applications. Generation 2 image.
-| | 8.2 | LVM | Linux Agent | RHEL 8.2 for SAP Business Applications.
-| | 82-gen2 | LVM | Linux Agent | RHEL 8.2 for SAP Business Applications. Generation 2 image.
-RHEL-HA | 7.4 | LVM | Linux Agent | RHEL 7.4 with HA Add-On. Will charge a premium for HA and RHEL on top of the base compute fee. Outdated in favor of the RHEL-SAP-HA images.
-| | 7.5 | LVM | Linux Agent | RHEL 7.5 with HA Add-On. Will charge a premium for HA and RHEL on top of the base compute fee. Outdated in favor of the RHEL-SAP-HA images.
-| | 7.6 | LVM | Linux Agent | RHEL 7.6 with HA Add-On. Will charge a premium for HA and RHEL on top of the base compute fee. Outdated in favor of the RHEL-SAP-HA images.
-RHEL-SAP-HA | 7.4 | LVM | Linux Agent | RHEL 7.4 for SAP with HA and Update Services. Attached to E4S repositories. Will charge a premium for SAP and HA repositories as well as RHEL, on top of the base compute fees.
-| | 74sapha-gen2 | LVM | Linux Agent | RHEL 7.4 for SAP with HA and Update Services. Generation 2 image. Attached to E4S repositories. Will charge a premium for SAP and HA repositories as well as RHEL, on top of the base compute fees.
-| | 7.5 | LVM | Linux Agent | RHEL 7.5 for SAP with HA and Update Services. Attached to E4S repositories. Will charge a premium for SAP and HA repositories as well as RHEL, on top of the base compute fees.
-| | 7.6 | LVM | Linux Agent | RHEL 7.6 for SAP with HA and Update Services. Attached to E4S repositories. Will charge a premium for SAP and HA repositories as well as RHEL, on top of the base compute fees.
-| | 76sapha-gen2 | LVM | Linux Agent | RHEL 7.6 for SAP with HA and Update Services. Generation 2 image. Attached to E4S repositories. Will charge a premium for SAP and HA repositories as well as RHEL, on top of the base compute fees.
-| | 7.7 | LVM | Linux Agent | RHEL 7.7 for SAP with HA and Update Services. Attached to E4S repositories. Will charge a premium for SAP and HA repositories as well as RHEL, on top of the base compute fees.
-| | 77sapha-gen2 | LVM | Linux Agent | RHEL 7.7 for SAP with HA and Update Services. Generation 2 image. Attached to E4S repositories. Will charge a premium for SAP and HA repositories as well as RHEL, on top of the base compute fees.
-| | 8.1 | LVM | Linux Agent | RHEL 8.1 for SAP with HA and Update Services. Attached to E4S repositories. Will charge a premium for SAP and HA repositories as well as RHEL, on top of the base compute fees.
-| | 81sapha-gen2 | LVM | Linux Agent | RHEL 8.1 for SAP with HA and Update Services. Generation 2 images Attached to E4S repositories. Will charge a premium for SAP and HA repositories as well as RHEL, on top of the base compute fees.
-| | 8.2 | LVM | Linux Agent | RHEL 8.2 for SAP with HA and Update Services. Will charge a premium for SAP and HA repositories as well as RHEL, on top of the base compute fees.
-| | 82sapha-gen2 | LVM | Linux Agent | RHEL 8.2 for SAP with HA and Update Services. Generation 2 images Attached to E4S repositories. Will charge a premium for SAP and HA repositories as well as RHEL, on top of the base compute fees.
-rhel-byos |rhel-lvm74| LVM | Linux Agent | RHEL 7.4 BYOS images, not attached to any source of updates, will not charge a RHEL premium.
-| |rhel-lvm75| LVM | Linux Agent | RHEL 7.5 BYOS images, not attached to any source of updates, will not charge a RHEL premium.
-| |rhel-lvm76| LVM | Linux Agent | RHEL 7.6 BYOS images, not attached to any source of updates, will not charge a RHEL premium.
-| |rhel-lvm76-gen2| LVM | Linux Agent | RHEL 7.6 Generation 2 BYOS images, not attached to any source of updates, will not charge a RHEL premium.
-| |rhel-lvm77| LVM | Linux Agent | RHEL 7.7 BYOS images, not attached to any source of updates, will not charge a RHEL premium.
-| |rhel-lvm77-gen2| LVM | Linux Agent | RHEL 7.7 Generation 2 BYOS images, not attached to any source of updates, will not charge a RHEL premium.
-| |rhel-lvm78| LVM | Linux Agent | RHEL 7.8 BYOS images, not attached to any source of updates, will not charge a RHEL premium.
-| |rhel-lvm78-gen2| LVM | Linux Agent | RHEL 7.8 Generation 2 BYOS images, not attached to any source of updates, will not charge a RHEL premium.
-| |rhel-lvm8 | LVM | Linux Agent | RHEL 8.0 BYOS images , not attached to any source of updates, will not charge a RHEL premium.
-| |rhel-lvm8-gen2 | LVM | Linux Agent | RHEL 8.0 Generation 2 BYOS images , not attached to any source of updates, will not charge a RHEL premium.
-| |rhel-lvm81 | LVM | Linux Agent | RHEL 8.1 BYOS images , not attached to any source of updates, will not charge a RHEL premium.
-| |rhel-lvm81-gen2 | LVM | Linux Agent | RHEL 8.1 Generation 2 BYOS images , not attached to any source of updates, will not charge a RHEL premium.
-| |rhel-lvm82 | LVM | Linux Agent | RHEL 8.2 BYOS images , not attached to any source of updates, will not charge a RHEL premium.
-| |rhel-lvm82-gen2 | LVM | Linux Agent | RHEL 8.2 Generation 2 BYOS images , not attached to any source of updates, will not charge a RHEL premium.
+| | 8.1 | LVM | Linux Agent | RHEL 8.1 for SAP Business Applications
+| | 81-gen2 | LVM | Linux Agent | RHEL 8.1 for SAP Business Applications. Generation 2 image
+| | 8.2 | LVM | Linux Agent | RHEL 8.2 for SAP Business Applications
+| | 82-gen2 | LVM | Linux Agent | RHEL 8.2 for SAP Business Applications. Generation 2 image
+| | 8.4 | LVM | Linux Agent | RHEL 8.4 for SAP Business Applications
+| | 84-gen2 | LVM | Linux Agent | RHEL 8.4 for SAP Business Applications. Generation 2 image
+| | 8.6 | LVM | Linux Agent | RHEL 8.6 for SAP Business Applications
+| | 86-gen2 | LVM | Linux Agent | RHEL 8.6 for SAP Business Applications. Generation 2 image
+RHEL-SAP-HA | 7.4 | LVM | Linux Agent | RHEL 7.4 for SAP with HA and Update Services. Images are attached to E4S repositories. Will charge a premium for SAP and HA repositories and RHEL, on top of the base compute fees
+| | 74sapha-gen2 | LVM | Linux Agent | RHEL 7.4 for SAP with HA and Update Services. Generation 2 image. Attached to E4S repositories. Will charge a premium for SAP and HA repositories and RHEL, on top of the base compute fees
+| | 7.5 | LVM | Linux Agent | RHEL 7.5 for SAP with HA and Update Services. Images are attached to E4S repositories. Will charge a premium for SAP and HA repositories and RHEL, on top of the base compute fees
+| | 7.6 | LVM | Linux Agent | RHEL 7.6 for SAP with HA and Update Services. Images are attached to E4S repositories. Will charge a premium for SAP and HA repositories and RHEL, on top of the base compute fees
+| | 76sapha-gen2 | LVM | Linux Agent | RHEL 7.6 for SAP with HA and Update Services. Generation 2 image. Attached to E4S repositories. Will charge a premium for SAP and HA repositories and RHEL, on top of the base compute fees
+| | 7.7 | LVM | Linux Agent | RHEL 7.7 for SAP with HA and Update Services. Images are attached to E4S repositories. Will charge a premium for SAP and HA repositories and RHEL, on top of the base compute fees
+| | 77sapha-gen2 | LVM | Linux Agent | RHEL 7.7 for SAP with HA and Update Services. Generation 2 image. Attached to E4S repositories. Will charge a premium for SAP and HA repositories and RHEL, on top of the base compute fees
+| | 8.1 | LVM | Linux Agent | RHEL 8.1 for SAP with HA and Update Services. Images are attached to E4S repositories. Will charge a premium for SAP and HA repositories and RHEL, on top of the base compute fees
+| | 81sapha-gen2 | LVM | Linux Agent | RHEL 8.1 for SAP with HA and Update Services. Generation 2 image. Attached to E4S repositories. Will charge a premium for SAP and HA repositories and RHEL, on top of the base compute fees
+| | 8.2 | LVM | Linux Agent | RHEL 8.2 for SAP with HA and Update Services. Will charge a premium for SAP and HA repositories and RHEL, on top of the base compute fees
+| | 82sapha-gen2 | LVM | Linux Agent | RHEL 8.2 for SAP with HA and Update Services. Generation 2 image. Attached to E4S repositories. Will charge a premium for SAP and HA repositories and RHEL, on top of the base compute fees
+| | 8.4 | LVM | Linux Agent | RHEL 8.4 for SAP with HA and Update Services. Will charge a premium for SAP and HA repositories and RHEL, on top of the base compute fees
+| | 84sapha-gen2 | LVM | Linux Agent | RHEL 8.4 for SAP with HA and Update Services. Generation 2 image. Attached to E4S repositories. Will charge a premium for SAP and HA repositories and RHEL, on top of the base compute fees
+| | 8.6 | LVM | Linux Agent | RHEL 8.6 for SAP with HA and Update Services. Will charge a premium for SAP and HA repositories and RHEL, on top of the base compute fees
+| | 86sapha-gen2 | LVM | Linux Agent | RHEL 8.6 for SAP with HA and Update Services. Generation 2 image. Attached to E4S repositories. Will charge a premium for SAP and HA repositories and RHEL, on top of the base compute fees
+rhel-byos |rhel-lvm74| LVM | Linux Agent | RHEL 7.4 BYOS images, not attached to any source of updates, won't charge an RHEL premium
+| |rhel-lvm75| LVM | Linux Agent | RHEL 7.5 BYOS images, not attached to any source of updates, won't charge an RHEL premium
+| |rhel-lvm76| LVM | Linux Agent | RHEL 7.6 BYOS images, not attached to any source of updates, won't charge an RHEL premium
+| |rhel-lvm76-gen2| LVM | Linux Agent | RHEL 7.6 Generation 2 BYOS images, not attached to any source of updates, won't charge an RHEL premium
+| |rhel-lvm77| LVM | Linux Agent | RHEL 7.7 BYOS images, not attached to any source of updates, won't charge an RHEL premium
+| |rhel-lvm77-gen2| LVM | Linux Agent | RHEL 7.7 Generation 2 BYOS images, not attached to any source of updates, won't charge an RHEL premium
+| |rhel-lvm78| LVM | Linux Agent | RHEL 7.8 BYOS images, not attached to any source of updates, won't charge an RHEL premium
+| |rhel-lvm78-gen2| LVM | Linux Agent | RHEL 7.8 Generation 2 BYOS images, not attached to any source of updates, won't charge an RHEL premium
+| |rhel-lvm8 | LVM | Linux Agent | RHEL 8.0 BYOS images, not attached to any source of updates, won't charge an RHEL premium
+| |rhel-lvm8-gen2 | LVM | Linux Agent | RHEL 8.0 Generation 2 BYOS images, not attached to any source of updates, won't charge an RHEL premium
+| |rhel-lvm81 | LVM | Linux Agent | RHEL 8.1 BYOS images, not attached to any source of updates, won't charge an RHEL premium
+| |rhel-lvm81-gen2 | LVM | Linux Agent | RHEL 8.1 Generation 2 BYOS images, not attached to any source of updates, won't charge an RHEL premium
+| |rhel-lvm82 | LVM | Linux Agent | RHEL 8.2 BYOS images, not attached to any source of updates, won't charge an RHEL premium
+| |rhel-lvm82-gen2 | LVM | Linux Agent | RHEL 8.2 Generation 2 BYOS images, not attached to any source of updates, won't charge an RHEL premium
+| |rhel-lvm83 | LVM | Linux Agent | RHEL 8.3 BYOS images, not attached to any source of updates, won't charge an RHEL premium
+| |rhel-lvm83-gen2 | LVM | Linux Agent | RHEL 8.3 Generation 2 BYOS images, not attached to any source of updates, won't charge an RHEL premium
+| |rhel-lvm84 | LVM | Linux Agent | RHEL 8.4 BYOS images, not attached to any source of updates, won't charge an RHEL premium
+| |rhel-lvm84-gen2 | LVM | Linux Agent | RHEL 8.4 Generation 2 BYOS images, not attached to any source of updates, won't charge an RHEL premium
+| |rhel-lvm85 | LVM | Linux Agent | RHEL 8.5 BYOS images, not attached to any source of updates, won't charge an RHEL premium
+| |rhel-lvm85-gen2 | LVM | Linux Agent | RHEL 8.5 Generation 2 BYOS images, not attached to any source of updates, won't charge an RHEL premium
+| |rhel-lvm86 | LVM | Linux Agent | RHEL 8.6 BYOS images, not attached to any source of updates, won't charge an RHEL premium
+| |rhel-lvm86-gen2 | LVM | Linux Agent | RHEL 8.6 Generation 2 BYOS images, not attached to any source of updates, won't charge an RHEL premium
+RHEL-SAP (out of support) | 7.4 | LVM | Linux Agent | RHEL 7.4 for SAP HANA and Business Apps. Images are attached to E4S repositories and will charge a premium for SAP and RHEL on top of the base compute fee
+| | 74sap-gen2| LVM | Linux Agent | RHEL 7.4 for SAP HANA and Business Apps. Generation 2 image. Images are attached to E4S repositories and will charge a premium for SAP and RHEL on top of the base compute fee
+| | 7.5 | LVM | Linux Agent | RHEL 7.5 for SAP HANA and Business Apps. Images are attached to E4S repositories and will charge a premium for SAP and RHEL on top of the base compute fee
+| | 75sap-gen2| LVM | Linux Agent | RHEL 7.5 for SAP HANA and Business Apps. Generation 2 image. Images are attached to E4S repositories and will charge a premium for SAP and RHEL on top of the base compute fee
+| | 7.6 | LVM | Linux Agent | RHEL 7.6 for SAP HANA and Business Apps. Images are attached to E4S repositories and will charge a premium for SAP and RHEL on top of the base compute fee
+| | 76sap-gen2| LVM | Linux Agent | RHEL 7.6 for SAP HANA and Business Apps. Generation 2 image. Images are attached to E4S repositories and will charge a premium for SAP and RHEL on top of the base compute fee
+| | 7.7 | LVM | Linux Agent | RHEL 7.7 for SAP HANA and Business Apps. Images are attached to E4S repositories and will charge a premium for SAP and RHEL on top of the base compute fee
+RHEL-SAP-HANA (out of support) | 6.7 | RAW | Linux Agent | RHEL 6.7 for SAP HANA. Outdated in favor of the RHEL-SAP images. This image will be removed in November 2020. More details about Red Hat's SAP cloud offerings are available at [SAP offerings on certified cloud providers](https://access.redhat.com/articles/3751271)
+| | 7.2 | LVM | Linux Agent | RHEL 7.2 for SAP HANA. Outdated in favor of the RHEL-SAP images. This image will be removed in November 2020. More details about Red Hat's SAP cloud offerings are available at [SAP offerings on certified cloud providers](https://access.redhat.com/articles/3751271)
+| | 7.3 | LVM | Linux Agent | RHEL 7.3 for SAP HANA. Outdated in favor of the RHEL-SAP images. This image will be removed in November 2020. More details about Red Hat's SAP cloud offerings are available at [SAP offerings on certified cloud providers](https://access.redhat.com/articles/3751271)
+RHEL-HA (out of support) | 7.4 | LVM | Linux Agent | RHEL 7.4 with HA Add-On. Will charge a premium for HA and RHEL on top of the base compute fee. Outdated in favor of the RHEL-SAP-HA images
+| | 7.5 | LVM | Linux Agent | RHEL 7.5 with HA Add-On. Will charge a premium for HA and RHEL on top of the base compute fee. Outdated in favor of the RHEL-SAP-HA images
+| | 7.6 | LVM | Linux Agent | RHEL 7.6 with HA Add-On. Will charge a premium for HA and RHEL on top of the base compute fee. Outdated in favor of the RHEL-SAP-HA images
> [!NOTE]
-> The RHEL-SAP-HANA product offering is considered end of life by Red Hat. Existing deployments will continue to work normally, but Red Hat recommends that customers migrate from the RHEL-SAP-HANA images to the RHEL-SAP-HA images which includes the SAP HANA repositories as well as the HA add-on. More details about Red Hat's SAP cloud offerings are available at [SAP offerings on certified cloud providers](https://access.redhat.com/articles/3751271).
+> The RHEL-SAP-HANA product offering is considered end of life by Red Hat. Existing deployments will continue to work normally, but Red Hat recommends that customers migrate from the RHEL-SAP-HANA images to the RHEL-SAP-HA images, which include the SAP HANA repositories and the HA add-on. More details about Red Hat's SAP cloud offerings are available at [SAP offerings on certified cloud providers](https://access.redhat.com/articles/3751271).
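To check which of these offers and SKUs are actually published in a given region, a hedged sketch with the Az PowerShell module follows; the region value is an assumption to adapt.

```powershell
# List Red Hat offers in a region, then the SKUs available for one offer.
$location = 'westeurope'
Get-AzVMImageOffer -Location $location -PublisherName 'RedHat' | Select-Object Offer
Get-AzVMImageSku -Location $location -PublisherName 'RedHat' -Offer 'RHEL' | Select-Object Skus
```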
## Next steps

* Learn more about the [Red Hat images in Azure](./redhat-images.md).
* Learn more about the [Red Hat Update Infrastructure](./redhat-rhui.md).
* Learn more about the [RHEL BYOS offer](./byos.md).
-* Information on Red Hat support policies for all versions of RHEL can be found on the [Red Hat Enterprise Linux Life Cycle](https://access.redhat.com/support/policy/updates/errata) page.
+* Information on Red Hat support policies for all versions of RHEL can be found on the [Red Hat Enterprise Linux Life Cycle](https://access.redhat.com/support/policy/updates/errata) page.
virtual-machines Sap High Availability Architecture Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-high-availability-architecture-scenarios.md
Azure is in process of rolling out a concepts of [Azure Availability Zones](../.
Using Availability Zones, there are some things to consider. The considerations list like: -- You can't deploy Azure Availability Sets within an Availability Zone. You need to choose either an Availability Zone or an Availability Set as deployment frame for a VM.
+- You can't deploy Azure Availability Sets within an Availability Zone. The only way to combine Availability Sets and Availability Zones is with [proximity placement groups](../../co-location.md). For more information, see [Combine availability sets and availability zones with proximity placement groups](./sap-proximity-placement-scenarios.md#combine-availability-sets-and-availability-zones-with-proximity-placement-groups).
- You can't use the [Basic Load Balancer](../../../load-balancer/load-balancer-overview.md) to create failover cluster solutions based on Windows Failover Cluster Services or Linux Pacemaker. Instead you need to use the [Azure Standard Load Balancer SKU](../../../load-balancer/load-balancer-standard-availability-zones.md)
- Azure Availability Zones don't guarantee a certain distance between the different zones within one region
- The network latency between different Azure Availability Zones within the different Azure regions might be different from Azure region to region. There will be cases where you as a customer can reasonably run the SAP application layer deployed across different zones, since the network latency from one zone to the active DBMS VM is still acceptable from a business process impact. Whereas there will be customer scenarios where the latency between the active DBMS VM in one zone and an SAP application instance in a VM in another zone can be too intrusive and not acceptable for the SAP business processes. As a result, the deployment architectures need to be different with an active/active architecture for the application or active/passive architecture if latency is too high.
virtual-machines Sap Rise Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-rise-integration.md
vm-linux Previously updated : 03/14/2022 Last updated : 05/09/2022
RISE with SAP S/4HANA Cloud, private edition and SAP Enterprise Cloud Services a
## Virtual network peering with SAP RISE/ECS
-A vnet peering is the most performant way to connect securely and privately two standalone vnets, utilizing the Microsoft private backbone network. The peered networks appear as one for connectivity purposes, allowing applications to talk to each other. Applications running in different vnets, subscriptions, Azure tenants or regions are enabled to communicate directly. Like network traffic on a single vnet, vnet peering traffic remains on MicrosoftΓÇÖs private network and does not traverse the internet.
+A vnet peering is the most performant way to connect two standalone vnets securely and privately, utilizing the Microsoft private backbone network. The peered networks appear as one for connectivity purposes, allowing applications to talk to each other. Applications running in different vnets, subscriptions, Azure tenants or regions are enabled to communicate directly. Like network traffic on a single vnet, vnet peering traffic remains on Microsoft's private network and doesn't traverse the internet.
For SAP RISE/ECS deployments, virtual peering is the preferred way to establish connectivity with customer's existing Azure environment. Both the SAP vnet and customer vnet(s) are protected with network security groups (NSG), enabling communication on SAP and database ports through the vnet peering. Communication between the peered vnets is secured through these NSGs, limiting communication to customer's SAP environment. For details and a list of open ports, contact your SAP representative.
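On the customer side, the peering itself is a standard operation. A hedged sketch with the Az PowerShell module is shown below; the hub vnet names are placeholders, and the remote resource ID of the SAP RISE/ECS vnet is a value you receive from SAP.

```powershell
# Peer the customer hub vnet with the SAP RISE/ECS vnet. Cross-tenant peering
# requires the remote vnet resource ID and matching permissions on both sides.
$hubVnet = Get-AzVirtualNetwork -ResourceGroupName 'rg-hub' -Name 'vnet-hub'
Add-AzVirtualNetworkPeering -Name 'peer-to-sap-rise' `
    -VirtualNetwork $hubVnet `
    -RemoteVirtualNetworkId '/subscriptions/<sap-subscription-id>/resourceGroups/<sap-rg>/providers/Microsoft.Network/virtualNetworks/<sap-vnet>'
```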
Integration of customer owned networks with Cloud-based infrastructure and provi
This diagram describes one of the common integration scenarios of SAP owned subscriptions, VNets and DNS infrastructure with customer's local network and DNS services. In this setup, on-premise DNS servers hold all DNS entries. The DNS infrastructure is capable of resolving DNS requests coming from all sources (on-premise clients, customer's Azure services and SAP managed environments).
- This diagram shows a typical SAP customer's hub and spoke virtual networks. Cross-tenant virtual network peering connects SAP RISE vnet to customer's hub vnet. On-premise connectivity is provided from customer's hub. DNS servers are located both within customer's hub vnet as well as SAP RISE vnet, with DNS zone transfer between them. DNS Queries from customer's VMs query the customer's DNS servers.
+[![Diagram shows customer DNS servers are located both within customer's hub vnet as well as SAP RISE vnet, with DNS zone transfer between them.](./media/sap-rise-integration/sap-rise-dns.png)](./media/sap-rise-integration/sap-rise-dns.png#lightbox)
Design description and specifics: - Custom DNS configuration for SAP-owned VNets
- - 2 VMs in the RISE/STE/ECS Azure vnet hosting DNS servers
+ - Two VMs in the RISE/STE/ECS Azure vnet hosting DNS servers
- - Customers must provide and delegate to SAP a subdomain/zone (for example, \*hec.contoso.com) which will be used to assign names and create forward and reverse DNS entries for the virtual machines that run SAP managed environment. SAP DNS servers are holding a master DNS role for the delegated zone
+ - Customers must provide and delegate to SAP a subdomain/zone (for example, \*ecs.contoso.com) which will be used to assign names and create forward and reverse DNS entries for the virtual machines that run SAP managed environment. SAP DNS servers are holding a master DNS role for the delegated zone
 - DNS zone transfer from SAP DNS server to customer's DNS servers is the primary method to replicate DNS entries from RISE/STE/ECS environment to on-premise DNS
 - Customer-owned Azure vnets are also using custom DNS configuration referring to customer DNS servers located in Azure Hub vnet.
- - Optionally, customers can set up a DNS forwarder within their Azure vnets. Such forwarder then pushes DNS requests coming from Azure services to SAP DNS servers that are targeted to the delegated zone (\*hec.contoso.com).
+ - Optionally, customers can set up a DNS forwarder within their Azure vnets. Such forwarder then pushes DNS requests coming from Azure services to SAP DNS servers that are targeted to the delegated zone (\*ecs.contoso.com).
Alternatively, DNS zone transfer from SAP DNS servers could be performed to a customer's DNS servers located in the Azure Hub VNet (diagram above). This is applicable for designs where customers operate a custom DNS solution (for example, [AD DS](/windows-server/identity/ad-ds/active-directory-domain-services) or BIND servers) within their Hub VNet.
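If the customer DNS servers in the hub vnet run Windows Server DNS (an assumption; AD DS-integrated DNS or BIND behave similarly), a conditional forwarder for the delegated zone is a minimal alternative to a zone transfer. The zone name follows the example above and the server IPs are placeholders agreed with SAP.

```powershell
# Forward queries for the SAP-delegated zone to the SAP RISE/ECS DNS servers.
Add-DnsServerConditionalForwarderZone -Name 'ecs.contoso.com' `
    -MasterServers 10.100.0.10, 10.100.0.11
```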
-**Important to note**, that both Azure provided DNS and Azure private zones **do not** support DNS zone transfer capability, hence, cannot be used to accept DNS replication from SAP RISE/STE/ECS DNS servers. Additionally, external DNS service providers are typically not supported by SAP RISE/ECS.
+**Important to note**, that both Azure provided DNS and Azure private zones **do not** support DNS zone transfer capability, hence, can't be used to accept DNS replication from SAP RISE/STE/ECS DNS servers. Additionally, external DNS service providers are typically not supported by SAP RISE/ECS.
To further read about the usage of Azure DNS for SAP, outside the usage with SAP RISE/ECS see details in following [blog post](https://www.linkedin.com/posts/k-popov_sap-on-azure-dns-integration-whitepaper-activity-6841398577495977984-ui9V/).
To further read about the usage of Azure DNS for SAP, outside the usage with SAP
SAP workloads communicating with external applications or inbound connections from a company's user base (for example, SAP Fiori) could require a network path to the Internet, depending on customer's requirements. Within SAP RISE/ECS managed workloads, work with your SAP representative to explore needs for such https/RFC/other communication paths. Network communication to/from the Internet is by default not enabled for SAP RISE/ECS customers and default networking is utilizing private IP ranges only. Internet connectivity requires planning with SAP, to optimally protect customer's SAP landscape.
-Should you enable Internet bound or incoming traffic with your SAP representatives, the network communication is protected through various Azure technologies such as NSGs, ASGs, Application Gateway with Web Application Firewall (WAF), proxy servers and others. These services are entirely managed through SAP within the SAP RISE/ECS vnet and subscription. The network path SAP RISE/ECS to and from Internet remains typically within the SAP RISE/ECS vnet only and does not transit into/from customerΓÇÖs own vnet(s).
+Should you enable Internet bound or incoming traffic with your SAP representatives, the network communication is protected through various Azure technologies such as NSGs, ASGs, Application Gateway with Web Application Firewall (WAF), proxy servers and others. These services are entirely managed through SAP within the SAP RISE/ECS vnet and subscription. The network path between SAP RISE/ECS and the Internet typically remains within the SAP RISE/ECS vnet only and doesn't transit into/from customer's own vnet(s).
- This diagram shows a typical SAP customer's hub and spoke virtual networks. Cross-tenant virtual network peering connects SAP RISE vnet to customer's hub vnet. On-premise connectivity is provided from customer's hub. SAP Cloud Connector VM from SAP RISE vnet connects through Internet to SAP BTP. Another SAP Cloud Connector VM connects through Internet to SAP BTP, with internet inbound and outbound connectivity facilitated by customer's hub vnet.
+[![Diagram shows SAP Cloud Connector VM from SAP RISE vnet connecting through Internet to SAP BTP. SAP RISE/ECS provides inbound/outbound internet connectivity. Customer's own workloads go through own internet breakout, not crossing over to SAP RISE vnet](./media/sap-rise-integration/sap-rise-internet.png)](./media/sap-rise-integration/sap-rise-internet.png#lightbox)
Applications within a customer's own vnet connect to the Internet directly from the respective vnet or through customer's centrally managed services such as Azure Firewall, Azure Application Gateway, NAT Gateway and others. Connectivity to SAP BTP from non-SAP RISE/ECS applications takes the same network Internet bound path. Should an SAP Cloud Connector be needed for such integration, it's placed with customer's non-SAP VMs requiring SAP BTP communication, and the network path is managed by the customer themselves.
SAP has a [preview program](https://help.sap.com/products/PRIVATE_LINK/42acd88cb
See a series of blog posts on the architecture of the SAP BTP Private Link Service and private connectivity methods, dealing with DNS and certificates in following SAP blog series [Getting Started with BTP Private Link Service for Azure](https://blogs.sap.com/2021/12/29/getting-started-with-btp-private-link-service-for-azure/)
+## Integration with Azure services
+
+Your SAP landscape runs within the SAP RISE/ECS subscription, and you can access the SAP system through the available ports. Each application communicating with your SAP system might require different ports to access it.
+
+For SAP Fiori, standalone or embedded within the SAP S/4 HANA or NetWeaver system, the customer can connect applications through OData or Rest API. Both use https for incoming requests to the SAP system. Applications running on-premise or within the customer's own Azure subscription and vnet use the established vnet peering or VPN vnet-to-vnet connection through a private IP address. Applications accessing a publicly available IP, exposed through SAP RISE managed Azure application gateway, are also able to contact the SAP system through https. For details and security for the application gateway and NSG open ports, contact SAP.
+
+Applications using remote function calls (RFC) or direct database connections using JDBC/ODBC protocols are only possible through private networks and thus via the vnet peering or VPN from customer's vnet(s).
+
+ Diagram of open ports on an SAP RISE/ECS system. RFC connections for BAPI and IDoc, https for OData and Rest/SOAP. ODBC/JDBC for direct database connections to SAP HANA. All connections through the private vnet peering. Application Gateway with public IP for https as a potential option, managed through SAP.
+
+With the information about available interfaces to the SAP RISE/ECS landscape, several methods of integration with Azure services are possible. For data scenarios with Azure Data Factory or Synapse Analytics, a self-hosted integration runtime or Azure Integration Runtime is available and described in the next chapter. For Logic Apps, Power Apps, and Power BI, the intermediary between the SAP RISE system and the Azure service is the on-premise data gateway, described in further chapters. Most services in the [Azure Integration Services](https://azure.microsoft.com/product-categories/integration/) don't require any intermediary gateway and thus can communicate directly with these available SAP interfaces.
+
+## Integration with self-hosted integration runtime
+
+Integrating your SAP system with Azure cloud native services such as Azure Data Factory or Azure Synapse would use these communication channels to the SAP RISE/ECS managed environment.
+
+The following high-level architecture shows a possible integration scenario with Azure data services such as [Data Factory](/azure/data-factory) or [Synapse Analytics](/azure/synapse-analytics). For these Azure services, either a self-hosted integration runtime (self-hosted IR or IR) or Azure integration runtime (Azure IR) can be used. The use of either integration runtime depends on the [chosen data connector](/azure/data-factory/copy-activity-overview#supported-data-stores-and-formats); most SAP connectors are only available for the self-hosted IR. The [SAP ECC connector](/azure/data-factory/connector-sap-ecc?tabs=data-factory) can be used through both the Azure IR and the self-hosted IR. The choice of IR governs the network path taken. The SAP .NET connector is used for the [SAP table connector](/azure/data-factory/connector-sap-ecc?tabs=data-factory), [SAP BW](/azure/data-factory/connector-sap-business-warehouse?tabs=data-factory) and [SAP OpenHub](/azure/data-factory/connector-sap-business-warehouse-open-hub) connectors alike. All these connectors use SAP function modules (FM) on the SAP system, executed through RFC connections. Lastly, if direct database access has been agreed with SAP, along with users and connection path opened, the ODBC/JDBC connector for [SAP HANA](/azure/data-factory/connector-sap-hana?tabs=data-factory) can be used from the self-hosted IR as well.
+
+[![SAP RISE/ECS accessed by Azure ADF or Synapse.](./media/sap-rise-integration/sap-rise-adf-synapse.png)](./media/sap-rise-integration/sap-rise-adf-synapse.png#lightbox)
+
+For data connectors using the Azure IR, this IR accesses your SAP environment through a public IP address. SAP RISE/ECS provides this endpoint through an application gateway, and communication and data movement happen over https.
+
+Data connectors within the self-hosted integration runtime communicate with the SAP system within SAP RISE/ECS subscription and vnet through the established vnet peering and private network address only. The established network security group rules limit which application can communicate with the SAP system.
+
+The customer is responsible for deployment and operation of the self-hosted integration runtime within their subscription and vnet. The communication between Azure PaaS services such as Data Factory or Synapse Analytics and self-hosted integration runtime is within the customer's subscription. SAP RISE/ECS exposes the communication ports for these applications to use but has no knowledge or support about any details of the connected application or service.
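As a hedged sketch, creating the self-hosted integration runtime resource in your own Data Factory with the Az PowerShell module could look like this; the resource names are placeholders, and the returned key is what you enter when installing the IR software on the VM that has network line of sight to the SAP RISE/ECS vnet.

```powershell
# Create the self-hosted IR resource and retrieve its authentication key.
Set-AzDataFactoryV2IntegrationRuntime -ResourceGroupName 'rg-data' `
    -DataFactoryName 'adf-sap' -Name 'shir-sap-rise' -Type SelfHosted `
    -Description 'Self-hosted IR with network access to SAP RISE/ECS'
Get-AzDataFactoryV2IntegrationRuntimeKey -ResourceGroupName 'rg-data' `
    -DataFactoryName 'adf-sap' -Name 'shir-sap-rise'
```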
+
+> [!Note]
+> Contact SAP for details on communication paths available to you with SAP RISE and the necessary steps to open them. SAP must also be contacted for SAP license details regarding any implications of accessing SAP data through any Azure Data Factory or Synapse connectors.
+
+To learn about the overall support for SAP data integration scenarios, see the [SAP data integration using Azure Data Factory whitepaper](https://github.com/Azure/Azure-DataFactory/blob/master/whitepaper/SAP%20Data%20Integration%20using%20Azure%20Data%20Factory.pdf) with a detailed introduction on each SAP connector, comparison and guidance.
+
+## On-premise data gateway
+Further Azure Services such as [Logic Apps](/azure/logic-apps/logic-apps-using-sap-connector), [Power Apps](/connectors/saperp/) or [Power BI](/power-bi/connect-data/desktop-sap-bw-connector) communicate and exchange data with SAP systems through an on-premise data gateway. The on-premise data gateway is a virtual machine, running in Azure or on-premise. It provides secure data transfer between these Azure Services and your SAP systems.
+
+With SAP RISE, the on-premise data gateway can connect to Azure services running in the customer's Azure subscription. The VM running the data gateway is deployed and operated by the customer. The following high-level architecture serves as an overview; a similar method can be used for either service.
+
+[![SAP RISE/ECS accessed from Azure on-premise data gateway and connected Azure services.](./media/sap-rise-integration/sap-rise-on-premises-data-gateway.png)](./media/sap-rise-integration/sap-rise-on-premises-data-gateway.png#lightbox)
+
+The SAP RISE environment here provides access to the SAP ports for RFC and https described earlier. The communication ports are accessed by the private network address through the vnet peering or VPN site-to-site connection. The on-premise data gateway VM running in the customer's Azure subscription uses the [SAP .NET connector](https://support.sap.com/en/product/connectors/msnet.html) to run RFC, BAPI or IDoc calls through the RFC connection. Additionally, depending on the service and the way the communication is set up, a way to connect to the public IP of the SAP system's REST API through https might be required. The https connection to a public IP can be exposed through an SAP RISE/ECS managed application gateway. This high-level architecture shows the possible integration scenario. Alternatives, such as using Logic Apps single tenant and [private endpoints](/azure/logic-apps/secure-single-tenant-workflow-virtual-network-private-endpoint) to secure the communication, can be seen as extensions and aren't described here.
+
+SAP RISE/ECS exposes the communication ports for these applications to use but has no knowledge about any details of the connected application or service running in a customer's subscription.
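From the data gateway VM, a quick hedged connectivity check against the SAP system's private address can confirm that the peering and NSG rules are in place before configuring the connectors. The IP address and ports below are assumptions; confirm the actual values with SAP (the RFC gateway port depends on the instance number).

```powershell
# Test the RFC gateway port (33<instance-number>) and the https port over the peering.
Test-NetConnection -ComputerName 10.200.1.10 -Port 3300
Test-NetConnection -ComputerName 10.200.1.10 -Port 443
```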
+
+> [!Note]
+> SAP must be contacted for any SAP license details for any implications accessing SAP data through Azure service connecting to the SAP system or database.
+
+## Azure Monitoring for SAP with SAP RISE
+
+[Azure Monitoring for SAP](/azure/virtual-machines/workloads/sap/monitor-sap-on-azure) is an Azure-native solution for monitoring your SAP system. It extends the Azure monitor platform monitoring capability with support to gather data about SAP NetWeaver, database, and operating system details.
+
+> [!Note]
+> SAP RISE/ECS is a fully managed service for your SAP landscape and thus Azure Monitoring for SAP is not intended to be utilized for such managed environment.
+
+SAP RISE/ECS doesn't support any integration with Azure Monitoring for SAP. SAP RISE/ECS's own monitoring and reporting is provided to the customer as defined by your service description with SAP.
+ ## Next steps Check out the documentation: - [SAP workloads on Azure: planning and deployment checklist](./sap-deployment-checklist.md) - [Virtual network peering](../../../virtual-network/virtual-network-peering-overview.md)-- [Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md)
+- [Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md)
+- [SAP Data Integration Using Azure Data Factory](https://github.com/Azure/Azure-DataFactory/blob/main/whitepaper/SAP%20Data%20Integration%20using%20Azure%20Data%20Factory.pdf)
virtual-network-manager Concept Security Admins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-security-admins.md
Title: 'Security admin rules in Azure Virtual Network Manager (Preview)' description: Learn about what security admin rules are in Azure Virtual Network Manager.--++ Previously updated : 11/02/2021 Last updated : 05/24/2022
A security admin rule allows you to enforce security policy criteria that matche
:::image type="content" source="./media/concept-security-admins/traffic-evaluation.png" alt-text="Diagram of how traffic is evaluated with security admin rules and NSG.":::
-Security admin rules can be used to enforce security rules. For example, an administrator can deny all high-risk ports or protocol from the Internet with security admin rules because these security admin rules will be evaluated prior to all NSG rules.
+## Network intent policies and security admin rules
-> [!IMPORTANT]
-> Some services have network intent policies to ensure the network traffic is working as needed for their services. When you use security admin rules, you could break the network intent policies created for those services. For example, creating a deny admin rule can block some traffic allowed by the *SQL managed instance* service, which is defined by their network intent policy. Make sure to review your environment before applying a security admin configuration. For more information, see [How can I explicitly allow SQLMI traffic before having deny rules](faq.md#how-can-i-explicitly-allow-sqlmi-traffic-before-having-deny-rules).
+ A network intent policy is applied to some network services to ensure the network traffic is working as needed for these services. By default, deployed security admin rules aren't applied on virtual networks with services that use network intent policies such as SQL managed instance service. If you deploy a service in a virtual network with existing security admin rules, those security admin rules will be removed from those virtual networks.
+
+If you need to apply security admin rules on virtual networks with services that use network intent policies, contact AVNMFeatureRegister@microsoft.com to enable this functionality. Overriding the default behavior described above could break the network intent policies created for those services. For example, creating a deny admin rule can block some traffic allowed by the SQL Managed Instance service, as defined by its network intent policies. Make sure to review your environment before applying a security admin configuration. For an example of how to allow the traffic of services that use network intent policies, see [How can I explicitly allow SQLMI traffic before having deny rules](faq.md#how-can-i-explicitly-allow-sqlmi-traffic-before-having-deny-rules).
+
+## Security admin fields
-The following are fields you can define in a security admin rule:
+When you define a security admin rule, there are required and optional fields.
-## Required fields
+### Required fields
-### Priority
+#### Priority
Security rule priority is determined by an integer between 0 and 99. The lower the value, the higher the priority of the rule. For example, a deny rule with a priority of 10 overrides an allow rule with a priority of 20.
-### <a name = "action"></a>Action
+#### <a name = "action"></a>Action
You can define one of three actions for a security rule:
You can define one of three actions for a security rule:
* **Deny**: Block traffic on the specified port, protocol, and source/destination IP prefixes in the specified direction.
* **Always allow**: Regardless of other rules with lower priority or user-defined NSGs, allow traffic on the specified port, protocol, and source/destination IP prefixes in the specified direction.
-### Direction
+#### Direction
You can specify the direction of traffic for which the rule applies. You can define either inbound or outbound.
-### Protocol
+#### Protocol
Protocols currently supported with security admin rules are:
Protocols currently supported with security admin rules are:
* AH
* Any protocols
-## Optional fields
+### Optional fields
-### Source and destination types
+#### Source and destination types
* **IP addresses**: You can provide IPv4 or IPv6 addresses or blocks of addresses in CIDR notation. To list multiple IP addresses, separate each IP address with a comma.
* **Service Tag**: You can define specific service tags based on regions or a whole service. See [Available service tags](../virtual-network/service-tags-overview.md#available-service-tags) for the list of supported tags.
-### Source and destination ports
+#### Source and destination ports
You can define specific common ports to block from the source or to the destination. See below for a list of common TCP ports:
virtual-network-manager Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/faq.md
Title: Frequently asked questions about Azure Virtual Network Manager description: Find answers to frequently asked questions about Azure Virtual Network Manager. -+ Last updated 4/18/2022-+
* Central India
-* All regions that have [Availability Zones](../availability-zones/az-overview.md#azure-regions-with-availability-zones), except France Central.
+* All regions that have [Availability Zones](../availability-zones/az-overview.md#azure-regions-with-availability-zones), except France Central.
> [!NOTE] > Even if an Azure Virtual Network Manager instance isn't available because all zones are down, configurations applied to resources will still persist.
Azure SQL Managed Instance has some network requirements. If your security admin
| 443, 12000 | TCP | **VirtualNetwork** | AzureCloud | Allow |
| Any | Any | **VirtualNetwork** | **VirtualNetwork** | Allow |
+
+## Can an Azure Virtual WAN hub be part of a network group?
+
+No, an Azure Virtual WAN hub can't be in a network group at this time.
++
+## Can an Azure Virtual WAN be used as the hub in AVNM's hub and spoke topology configuration?
+
+No, an Azure Virtual WAN hub isn't supported as the hub in a hub and spoke topology at this time.
++ ## Limits ### What are the service limitations of Azure Virtual Network Manager?
virtual-network-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/overview.md
Title: 'What is Azure Virtual Network Manager (Preview)?' description: Learn how Azure Virtual Network Manager can simplify management and scalability of your virtual networks. -+ Last updated 11/02/2021-+ #Customer intent: As an IT administrator, I want to learn about Azure Virtual Network Manager and what I can use it for.
A connectivity configuration enables you to create a mesh or a hub-and-spoke net
* North Central US
+* South Central US
+ * West US * West US 2
A connectivity configuration enables you to create a mesh or a hub-and-spoke net
* East US 2
+* Canada Central
+ * North Europe * West Europe
-* France Central
+* UK South
+
+* Switzerland North
+
+* Southeast Asia
+
+* Japan East
+
+* Japan West
+
+* Australia East
+
+* Central India
## Next steps
virtual-network Nat Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/nat-overview.md
Virtual Network NAT is a software defined networking service. A NAT gateway won'
## Virtual Network NAT basics
-* A NAT gateway can be created in a specific availability zone. Redundancy is built in within the specified zone. Virtual Network NAT is non-zonal by default. A non-zonal Virtual Network NAT isn't associated to a specific zone and is assigned to a specific zone by Azure. A NAT gateway can be isolated in a specific zone when you create [availability zones](../../availability-zones/az-overview.md) scenarios. This deployment is called a zonal deployment.
+* A NAT gateway can be created in a specific availability zone or placed in 'no zone'. Virtual Network NAT is placed in no zone by default. A non-zonal NAT gateway is placed in a zone for you by Azure and doesn't guarantee redundancy. A NAT gateway can be isolated in a specific zone when you create [availability zones](../../availability-zones/az-overview.md) scenarios. This deployment is called a zonal deployment. After a NAT gateway is deployed, the zone selection can't be changed.
* Outbound connectivity can be defined for each subnet with a NAT gateway. Multiple subnets within the same virtual network can have different NAT gateways associated. Multiple subnets within the same virtual network can use the same NAT gateway. A subnet is configured by specifying which NAT gateway resource to use. All outbound traffic for the subnet is processed by the NAT gateway without any customer configuration. A NAT gateway takes precedence over other outbound scenarios and replaces the default Internet destination of a subnet.
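As a minimal sketch of configuring a subnet to use a NAT gateway (the resource group, virtual network, subnet, and NAT gateway names below are assumptions):

```azurecli
# Associate an existing NAT gateway with a subnet; all outbound traffic from the
# subnet is then processed by the NAT gateway (names below are placeholders).
az network vnet subnet update \
  --resource-group MyResourceGroup \
  --vnet-name MyVnet \
  --name MySubnet \
  --nat-gateway MyNatGateway
```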
virtual-network Network Security Groups Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/network-security-groups-overview.md
A network security group contains zero, or as many rules as desired, within Azur
|Property |Explanation | ||| |Name|A unique name within the network security group.|
-|Priority | A number between 100 and 4096. Rules are processed in priority order, with lower numbers processed before higher numbers, because lower numbers have higher priority. Once traffic matches a rule, processing stops. As a result, any rules that exist with lower priorities (higher numbers) that have the same attributes as rules with higher priorities are not processed.|
-|Source or destination| Any, or an individual IP address, classless inter-domain routing (CIDR) block (10.0.0.0/24, for example), service tag, or application security group. If you specify an address for an Azure resource, specify the private IP address assigned to the resource. Network security groups are processed after Azure translates a public IP address to a private IP address for inbound traffic, and before Azure translates a private IP address to a public IP address for outbound traffic. Specifying a range, a service tag, or application security group, enables you to create fewer security rules. The ability to specify multiple individual IP addresses and ranges (you cannot specify multiple service tags or application groups) in a rule is referred to as [augmented security rules](#augmented-security-rules). Augmented security rules can only be created in network security groups created through the Resource Manager deployment model. You cannot specify multiple IP addresses and IP address ranges in network security groups created through the classic deployment model.|
+|Priority | A number between 100 and 4096. Rules are processed in priority order, with lower numbers processed before higher numbers, because lower numbers have higher priority. Once traffic matches a rule, processing stops. As a result, any rules that exist with lower priorities (higher numbers) that have the same attributes as rules with higher priorities aren't processed.|
+|Source or destination| Any, or an individual IP address, classless inter-domain routing (CIDR) block (10.0.0.0/24, for example), service tag, or application security group. If you specify an address for an Azure resource, specify the private IP address assigned to the resource. Network security groups are processed after Azure translates a public IP address to a private IP address for inbound traffic, and before Azure translates a private IP address to a public IP address for outbound traffic. Fewer security rules are needed when you specify a range, a service tag, or application security group. The ability to specify multiple individual IP addresses and ranges (you can't specify multiple service tags or application groups) in a rule is referred to as [augmented security rules](#augmented-security-rules). Augmented security rules can only be created in network security groups created through the Resource Manager deployment model. You can't specify multiple IP addresses and IP address ranges in network security groups created through the classic deployment model.|
|Protocol | TCP, UDP, ICMP, ESP, AH, or Any.| |Direction| Whether the rule applies to inbound, or outbound traffic.|
-|Port range |You can specify an individual or range of ports. For example, you could specify 80 or 10000-10005. Specifying ranges enables you to create fewer security rules. Augmented security rules can only be created in network security groups created through the Resource Manager deployment model. You cannot specify multiple ports or port ranges in the same security rule in network security groups created through the classic deployment model. |
+|Port range |You can specify an individual or range of ports. For example, you could specify 80 or 10000-10005. Specifying ranges enables you to create fewer security rules. Augmented security rules can only be created in network security groups created through the Resource Manager deployment model. You can't specify multiple ports or port ranges in the same security rule in network security groups created through the classic deployment model. |
|Action | Allow or deny |
-Network security group security rules are evaluated by priority using the 5-tuple information (source, source port, destination, destination port, and protocol) to allow or deny the traffic. You may not create two security rules with the same priority and direction. A flow record is created for existing connections. Communication is allowed or denied based on the connection state of the flow record. The flow record allows a network security group to be stateful. If you specify an outbound security rule to any address over port 80, for example, it's not necessary to specify an inbound security rule for the response to the outbound traffic. You only need to specify an inbound security rule if communication is initiated externally. The opposite is also true. If inbound traffic is allowed over a port, it's not necessary to specify an outbound security rule to respond to traffic over the port.
+Security rules are evaluated and applied based on the five-tuple (source, source port, destination, destination port, and protocol) information. You can't create two security rules with the same priority and direction. A flow record is created for existing connections. Communication is allowed or denied based on the connection state of the flow record. The flow record allows a network security group to be stateful. If you specify an outbound security rule to any address over port 80, for example, it's not necessary to specify an inbound security rule for the response to the outbound traffic. You only need to specify an inbound security rule if communication is initiated externally. The opposite is also true. If inbound traffic is allowed over a port, it's not necessary to specify an outbound security rule to respond to traffic over the port.
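To make the properties above concrete, here's a hedged Azure CLI sketch of a single augmented inbound rule that combines two source address prefixes and two destination ports; the resource group, NSG, and rule names are assumptions:

```azurecli
# One augmented inbound rule covering two source ranges and two destination ports.
az network nsg rule create \
  --resource-group MyResourceGroup \
  --nsg-name MyNsg \
  --name Allow-Web-Inbound \
  --priority 200 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes 10.0.0.0/24 10.1.0.0/24 \
  --source-port-ranges '*' \
  --destination-address-prefixes VirtualNetwork \
  --destination-port-ranges 80 443
```

Because the rule is stateful, return traffic for connections matched by this rule is allowed automatically; no separate outbound rule is needed.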
Existing connections may not be interrupted when you remove a security rule that enabled the flow. Traffic flows are interrupted when connections are stopped and no traffic is flowing in either direction, for at least a few minutes.
Azure creates the following default rules in each network security group that yo
In the **Source** and **Destination** columns, *VirtualNetwork*, *AzureLoadBalancer*, and *Internet* are [service tags](service-tags-overview.md), rather than IP addresses. In the protocol column, **Any** encompasses TCP, UDP, and ICMP. When creating a rule, you can specify TCP, UDP, ICMP or Any. *0.0.0.0/0* in the **Source** and **Destination** columns represents all addresses. Clients like Azure portal, Azure CLI, or PowerShell can use * or any for this expression.
-You cannot remove the default rules, but you can override them by creating rules with higher priorities.
+You can't remove the default rules, but you can override them by creating rules with higher priorities.
### <a name="augmented-security-rules"></a> Augmented security rules
Application security groups enable you to configure network security as a natura
## Azure platform considerations -- **Virtual IP of the host node**: Basic infrastructure services like DHCP, DNS, IMDS, and health monitoring are provided through the virtualized host IP addresses 168.63.129.16 and 169.254.169.254. These IP addresses belong to Microsoft and are the only virtualized IP addresses used in all regions for this purpose. Effective security rules and effective routes will not include these platform rules. To override this basic infrastructure communication, you can create a security rule to deny traffic by using the following [service tags](service-tags-overview.md) on your Network Security Group rules: AzurePlatformDNS, AzurePlatformIMDS, AzurePlatformLKM. Learn how to [diagnose network traffic filtering](diagnose-network-traffic-filter-problem.md) and [diagnose network routing](diagnose-network-routing-problem.md).
+- **Virtual IP of the host node**: Basic infrastructure services like DHCP, DNS, IMDS, and health monitoring are provided through the virtualized host IP addresses 168.63.129.16 and 169.254.169.254. These IP addresses belong to Microsoft and are the only virtualized IP addresses used in all regions for this purpose. By default, these services aren't subject to the configured network security groups unless targeted by [service tags](service-tags-overview.md) specific to each service. To override this basic infrastructure communication, you can create a security rule to deny traffic by using the following service tags on your Network Security Group rules: AzurePlatformDNS, AzurePlatformIMDS, AzurePlatformLKM. Learn how to [diagnose network traffic filtering](diagnose-network-traffic-filter-problem.md) and [diagnose network routing](diagnose-network-routing-problem.md).
- **Licensing (Key Management Service)**: Windows images running in virtual machines must be licensed. To ensure licensing, a request is sent to the Key Management Service host servers that handle such queries. The request is made outbound through port 1688. For deployments using [default route 0.0.0.0/0](virtual-networks-udr-overview.md#default-route) configuration, this platform rule will be disabled. - **Virtual machines in load-balanced pools**: The source port and address range applied are from the originating computer, not the load balancer. The destination port and address range are for the destination computer, not the load balancer.-- **Azure service instances**: Instances of several Azure services, such as HDInsight, Application Service Environments, and Virtual Machine Scale Sets are deployed in virtual network subnets. For a complete list of services you can deploy into virtual networks, see [Virtual network for Azure services](virtual-network-for-azure-services.md#services-that-can-be-deployed-into-a-virtual-network). Ensure you familiarize yourself with the port requirements for each service before applying a network security group to the subnet the resource is deployed in. If you deny ports required by the service, the service doesn't function properly.-- **Sending outbound email**: Microsoft recommends that you utilize authenticated SMTP relay services (typically connected via TCP port 587, but often others, as well) to send email from Azure Virtual Machines. SMTP relay services specialize in sender reputation, to minimize the possibility that third-party email providers reject messages. Such SMTP relay services include, but are not limited to, Exchange Online Protection and SendGrid. Use of SMTP relay services is in no way restricted in Azure, regardless of your subscription type.
+- **Azure service instances**: Instances of several Azure services, such as HDInsight, Application Service Environments, and Virtual Machine Scale Sets are deployed in virtual network subnets. For a complete list of services you can deploy into virtual networks, see [Virtual network for Azure services](virtual-network-for-azure-services.md#services-that-can-be-deployed-into-a-virtual-network). Before applying a network security group to the subnet, familiarize yourself with the port requirements for each service. If you deny ports required by the service, the service won't function properly.
+- **Sending outbound email**: Microsoft recommends that you utilize authenticated SMTP relay services (typically connected via TCP port 587, but often others, as well) to send email from Azure Virtual Machines. SMTP relay services specialize in sender reputation, to minimize the possibility that third-party email providers reject messages. Such SMTP relay services include, but aren't limited to, Exchange Online Protection and SendGrid. Use of SMTP relay services is in no way restricted in Azure, regardless of your subscription type.
If you created your Azure subscription prior to November 15, 2017, in addition to being able to use SMTP relay services, you can send email directly over TCP port 25. If you created your subscription after November 15, 2017, you may not be able to send email directly over port 25. The behavior of outbound communication over port 25 depends on the type of subscription you have, as follows:
- - **Enterprise Agreement**: Outbound port 25 communication is allowed. You are able to send an outbound email directly from virtual machines to external email providers, with no restrictions from the Azure platform.
- - **Pay-as-you-go:** Outbound port 25 communication is blocked from all resources. If you need to send email from a virtual machine directly to external email providers (not using an authenticated SMTP relay), you can make a request to remove the restriction. Requests are reviewed and approved at Microsoft's discretion and are only granted after anti-fraud checks are performed. To make a request, open a support case with the issue type *Technical*, *Virtual Network Connectivity*, *Cannot send e-mail (SMTP/Port 25)*. In your support case, include details about why your subscription needs to send email directly to mail providers, instead of going through an authenticated SMTP relay. If your subscription is exempted, only virtual machines created after the exemption date are able to communicate outbound over port 25.
- - **MSDN, Azure Pass, Azure in Open, Education, BizSpark, and Free trial**: Outbound port 25 communication is blocked from all resources. No requests to remove the restriction can be made, because requests are not granted. If you need to send email from your virtual machine, you have to use an SMTP relay service.
- - **Cloud service provider**: Customers that are consuming Azure resources via a cloud service provider can create a support case with their cloud service provider, and request that the provider create an unblock case on their behalf, if a secure SMTP relay cannot be used.
+ - **Enterprise Agreement**: Outbound port 25 communication is allowed. You're able to send an outbound email directly from virtual machines to external email providers, with no restrictions from the Azure platform.
+ - **Pay-as-you-go:** Outbound port 25 communication is blocked from all resources. If you need to send email from a virtual machine directly to external email providers (not using an authenticated SMTP relay), you can make a request to remove the restriction. Requests are reviewed and approved at Microsoft's discretion and are only granted after anti-fraud checks are performed. To make a request, open a support case with the issue type *Technical*, *Virtual Network Connectivity*, *Can't send e-mail (SMTP/Port 25)*. In your support case, include details about why your subscription needs to send email directly to mail providers, instead of going through an authenticated SMTP relay. If your subscription is exempted, only virtual machines created after the exemption date are able to communicate outbound over port 25.
+ - **MSDN, Azure Pass, Azure in Open, Education, BizSpark, and Free trial**: Outbound port 25 communication is blocked from all resources. No requests to remove the restriction can be made, because requests aren't granted. If you need to send email from your virtual machine, you have to use an SMTP relay service.
+ - **Cloud service provider**: Outbound port 25 communication may be blocked for Azure customers using a cloud service provider. In cases where a secure SMTP relay can't be used, you can create a support case with your cloud service provider, and request that the provider create an unblock case on your behalf.
- If Azure allows you to send email over port 25, Microsoft cannot guarantee email providers will accept inbound email from your virtual machine. If a specific provider rejects mail from your virtual machine, work directly with the provider to resolve any message delivery or spam filtering issues, or use an authenticated SMTP relay service.
+ If Azure allows you to send email over port 25, Microsoft can't guarantee email providers will accept inbound email from your virtual machine. If a specific provider rejects mail from your virtual machine, work directly with the provider to resolve any message delivery or spam filtering issues, or use an authenticated SMTP relay service.
## Next steps
virtual-network Tutorial Restrict Network Access To Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-restrict-network-access-to-resources.md
description: In this tutorial, you learn how to limit and restrict network acces
documentationcenter: virtual-network -+ editor: '' tags: azure-resource-manager # Customer intent: I want only resources in a virtual network subnet to access an Azure PaaS resource, such as an Azure Storage account.
virtual-network Previously updated : 07/16/2021 Last updated : 05/17/2022
By default, all virtual machine instances in a subnet can communicate with any r
|Setting|Value| |-|-|
- |Source| Select **VirtualNetwork** |
+ |Source| Select **Service Tag** |
+ |Source service tag | Select **VirtualNetwork** |
|Source port ranges| * | |Destination | Select **Service Tag**| |Destination service tag | Select **Storage**|
By default, all virtual machine instances in a subnet can communicate with any r
|Setting|Value| |-|-|
- |Source| Select **VirtualNetwork** |
+ |Source| Select **Service Tag** |
+ |Source service tag | Select **VirtualNetwork** |
|Source port ranges| * | |Destination | Select **Service Tag**| |Destination service tag| Select **Internet**|
By default, all virtual machine instances in a subnet can communicate with any r
|Protocol|Any| |Action| Change default to **Deny**. | |Priority|110|
- |Name|Change to *Deny-Internet-All*|
+ |Name|Change to **Deny-Internet-All**|
:::image type="content" source="./media/tutorial-restrict-network-access-to-resources/create-outbound-internet-rule.png" alt-text="Screenshot of creating an outbound security to block internet access.":::
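If you'd rather script this step than use the portal, a hedged Azure CLI equivalent of the **Deny-Internet-All** rule above could look like the following; the resource group and network security group names are assumptions:

```azurecli
# Outbound rule that denies all traffic from the virtual network to the Internet service tag.
az network nsg rule create \
  --resource-group MyResourceGroup \
  --nsg-name myNsgPrivate \
  --name Deny-Internet-All \
  --priority 110 \
  --direction Outbound \
  --access Deny \
  --protocol '*' \
  --source-address-prefixes VirtualNetwork \
  --source-port-ranges '*' \
  --destination-address-prefixes Internet \
  --destination-port-ranges '*'
```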
By default, all virtual machine instances in a subnet can communicate with any r
|-|-| |Source| Any | |Source port ranges| * |
- |Destination | Select **VirtualNetwork**|
+ |Destination | Select **Service Tag**|
+ |Destination service tag | Select **VirtualNetwork** |
+ |Service| Leave default as *Custom*. |
|Destination port ranges| Change to *3389* | |Protocol|Any| |Action|Allow|
virtual-network Virtual Networks Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-networks-faq.md
Yes. For more information about public IP address ranges, see [Create a virtual
Yes. See [Azure limits](../azure-resource-manager/management/azure-subscription-service-limits.md?toc=%2fazure%2fvirtual-network%2ftoc.json#networking-limits) for details. Subnet address spaces cannot overlap one another. ### Are there any restrictions on using IP addresses within these subnets?
-Yes. Azure reserves 5 IP addresses within each subnet. These are x.x.x.0-x.x.x.3 and the last address of the subnet. x.x.x.1-x.x.x.3 is reserved in each subnet for Azure services.
-- x.x.x.0: Network address-- x.x.x.1: Reserved by Azure for the default gateway-- x.x.x.2, x.x.x.3: Reserved by Azure to map the Azure DNS IPs to the VNet space-- x.x.x.255: Network broadcast address for subnets of size /25 and larger. This will be a different address in smaller subnets.
-For example, for the subnet with addressing 172.16.1.128/26:
+Yes. Azure reserves the first four IP addresses and the last IP address in each subnet, for a total of five reserved IP addresses.
-- 172.16.1.128: Network address-- 172.16.1.129: Reserved by Azure for the default gateway-- 172.16.1.130, 172.16.1.131: Reserved by Azure to map the Azure DNS IPs to the VNet space-- 172.16.1.191: Network broadcast address
+For example, the IP address range of 192.168.1.0/24 has the following reserved addresses:
+- 192.168.1.0 : Network address
+- 192.168.1.1 : Reserved by Azure for the default gateway
+- 192.168.1.2, 192.168.1.3 : Reserved by Azure to map the Azure DNS IPs to the VNet space
+- 192.168.1.255 : Network broadcast address.
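A /24 subnet therefore leaves 256 - 5 = 251 addresses usable by your resources. A minimal sketch of creating such a subnet, assuming hypothetical resource names:

```azurecli
# Create a VNet with a single /24 subnet; 5 of its 256 addresses are reserved by Azure.
az network vnet create \
  --resource-group MyResourceGroup \
  --name MyVnet \
  --address-prefixes 192.168.1.0/24 \
  --subnet-name default \
  --subnet-prefixes 192.168.1.0/24
```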
### How small and how large can VNets and subnets be? The smallest supported IPv4 subnet is /29, and the largest is /2 (using CIDR subnet definitions). IPv6 subnets must be exactly /64 in size.
virtual-network Virtual Networks Udr Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-networks-udr-overview.md
Each route contains an address prefix and next hop type. When traffic leaving a
The next hop types listed in the previous table represent how Azure routes traffic destined for the address prefix listed. Explanations for the next hop types follow:
-* **Virtual network**: Routes traffic between address ranges within the [address space](manage-virtual-network.md#add-or-remove-an-address-range) of a virtual network. Azure creates a route with an address prefix that corresponds to each address range defined within the address space of a virtual network. If the virtual network address space has multiple address ranges defined, Azure creates an individual route for each address range. Azure automatically routes traffic between subnets using the routes created for each address range. You don't need to define gateways for Azure to route traffic between subnets. Though a virtual network contains subnets, and each subnet has a defined address range, Azure doesn't* create default routes for subnet address ranges, because each subnet address range is within an address range of the address space of a virtual network.
+* **Virtual network**: Routes traffic between address ranges within the [address space](manage-virtual-network.md#add-or-remove-an-address-range) of a virtual network. Azure creates a route with an address prefix that corresponds to each address range defined within the address space of a virtual network. If the virtual network address space has multiple address ranges defined, Azure creates an individual route for each address range. Azure automatically routes traffic between subnets using the routes created for each address range. You don't need to define gateways for Azure to route traffic between subnets. Though a virtual network contains subnets, and each subnet has a defined address range, Azure doesn't create default routes for subnet address ranges. This is because each subnet address range is within an address range of the address space of a virtual network.
* **Internet**: Routes traffic specified by the address prefix to the Internet. The system default route specifies the 0.0.0.0/0 address prefix. If you don't override Azure's default routes, Azure routes traffic for any address not specified by an address range within a virtual network, to the Internet, with one exception. If the destination address is for one of Azure's services, Azure routes the traffic directly to the service over Azure's backbone network, rather than routing the traffic to the Internet. Traffic between Azure services doesn't traverse the Internet, regardless of which Azure region the virtual network exists in, or which Azure region an instance of the Azure service is deployed in. You can override Azure's default system route for the 0.0.0.0/0 address prefix with a [custom route](#custom-routes). * **None**: Traffic routed to the **None** next hop type is dropped, rather than routed outside the subnet. Azure automatically creates default routes for the following address prefixes:
The same command for CLI will be:
az network route-table route create -g MyResourceGroup --route-table-name MyRouteTable -n StorageRoute --address-prefix Storage --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.100.4 ``` -
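Similarly, the **None** next hop type described earlier can be used to drop traffic that would otherwise follow the default 0.0.0.0/0 route. A minimal sketch, reusing the hypothetical resource group and route table names from the example above:

```azurecli
# Override the 0.0.0.0/0 system route so traffic to the internet is dropped.
az network route-table route create \
  --resource-group MyResourceGroup \
  --route-table-name MyRouteTable \
  --name DropInternet \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type None
```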
-#### Known Issues (April 2021)
-
-When BGP routes are present or a Service Endpoint is configured on your subnet, routes may not be evaluated with the correct priority. This feature doesn't currently work for dual stack (IPv4+IPv6) virtual networks. A fix for these scenarios is currently in progress </br>
- ## Next hop types across Azure tools The name displayed and referenced for next hop types is different between the Azure portal and command-line tools, and the Azure Resource Manager and classic deployment models. The following table lists the names used to refer to each next hop type with the different tools and [deployment models](../azure-resource-manager/management/deployment-models.md?toc=%2fazure%2fvirtual-network%2ftoc.json):
virtual-wan Gateway Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/gateway-settings.md
+
+ Title: 'About gateway settings for Virtual WAN'
+
+description: This article answers common questions about Virtual WAN gateway settings.
+++ Last updated : 05/20/2022++++
+# About Virtual WAN gateway settings
+
+This article helps you understand Virtual WAN gateway settings.
+
+## <a name="capacity"></a>Gateway scale units
+
+The gateway scale unit setting lets you pick the aggregate throughput of the gateway in the virtual hub. Each type of gateway scale unit (site-to-site, user-vpn, and ExpressRoute) is configured separately.
+
+Gateway scale units are different than routing infrastructure units. You adjust gateway scale units when you need more aggregated throughput for the gateway itself. You adjust hub infrastructure units when you want the hub router to support more VMs. For more information about hub settings and infrastructure units, see [About virtual hub settings](hub-settings.md).
+
+### <a name="s2s"></a>Site-to-site
+
+Site-to-site VPN gateway scale units are configured on the **Site to site** page of the virtual hub. When configuring scale units, keep the following information in mind:
+
+If you pick 1 scale unit = 500 Mbps, it implies that two instances for redundancy will be created, each having a maximum throughput of 500 Mbps.
+
+For example, if you have five branches, each doing 10 Mbps at the branch, you'll need an aggregate of 50 Mbps at the head end. Planning for aggregate capacity of the Azure VPN gateway should be done after assessing the capacity needed to support the number of branches to the hub.
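For the five-branch example above, a single scale unit (500 Mbps per instance) comfortably covers the 50 Mbps aggregate. A minimal Azure CLI sketch, assuming an existing virtual hub and hypothetical resource names:

```azurecli
# Create a site-to-site VPN gateway in an existing virtual hub with 1 gateway scale unit.
az network vpn-gateway create \
  --resource-group MyResourceGroup \
  --name MyHubVpnGateway \
  --vhub MyVirtualHub \
  --scale-unit 1
```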
++
+### <a name="p2s"></a>Point-to-site (User VPN)
+
+User VPN gateway scale units are configured on the **Point to site** page of the virtual hub. Gateway scale units represent the aggregate capacity of the User VPN gateway. When configuring scale units, keep the following information in mind:
+
+If you select 40 or more gateway scale units, plan your client address pool accordingly. For information about how this setting impacts the client address pool, see [About client address pools](about-client-address-pools.md).
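As a hedged sketch of how the scale unit and client address pool come together at creation time (the gateway, hub, VPN server configuration, and address ranges below are all assumptions):

```azurecli
# Create a point-to-site (User VPN) gateway with 40 scale units and two client address pools.
az network p2s-vpn-gateway create \
  --resource-group MyResourceGroup \
  --name MyUserVpnGateway \
  --vhub MyVirtualHub \
  --vpn-server-config MyVpnServerConfig \
  --scale-unit 40 \
  --address-space 10.100.0.0/16 10.101.0.0/16
```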
++
+### <a name="expressroute"></a>ExpressRoute
+
+ExpressRoute gateway scale units are configured on the **ExpressRoute** page of the virtual hub.
++
+## <a name="type"></a>Basic and Standard
+
+The virtual WAN type (Basic or Standard) determines the types of resources that can be created within a hub, including the type of gateways that can be created (site-to-site VPN, point-to-site User VPN, and ExpressRoute). This setting is configured on the virtual WAN object. For more information, see [Upgrade from Basic to Standard](upgrade-virtual-wan.md).
+
+The following table shows the configurations available for each virtual WAN type:
++
+## Next steps
+
+* For current pricing, see [Virtual WAN pricing](https://azure.microsoft.com/pricing/details/virtual-wan/).
+
+* For more information about Virtual WAN, see the [FAQ](virtual-wan-faq.md).
virtual-wan High Availability Vpn Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/high-availability-vpn-client.md
- Title: 'Configure High Availability connections for P2S User VPN clients'-
-description: Learn how to configure High Availability connections for Virtual WAN P2S User VPN clients.
----- Previously updated : 04/18/2022---
-# Configure High Availability connections for Virtual WAN P2S User VPN clients
-
-This article helps you configure and connect using the High Availability setting for Virtual WAN point-to-site (P2S) User VPN clients. This feature is only available for P2S clients connecting to Virtual WAN VPN gateways using the OpenVPN protocol.
-
-By default, every Virtual WAN VPN gateway consists of two instances in an active-active configuration. If anything happens to the gateway instance that the VPN client is connected to, the tunnel will be disconnected. P2S VPN clients must then initiate a connection to the new active instance.
-
-When **High Availability** is configured for the Azure VPN Client, if a failover occurs, the client connection isn't interrupted.
-
-> [!NOTE]
-> High Availability is supported for OpenVPN® protocol connections only and requires the Azure VPN Client.
-
-## <a name = "windows"></a>Windows
-
-### <a name = "download"></a>Download the Azure VPN Client
-
-To use this feature, you must install version **2.1901.41.0** or later of the Azure VPN Client.
--
-### <a name = "import"></a>Configure VPN client settings
-
-1. Use the [Point-to-site VPN for Azure AD authentication](virtual-wan-point-to-site-azure-ad.md#download-profile) article as a general guideline to generate client profile files. The OpenVPN® tunnel type is required for High Availability. If the generated client profile files don't contain an **OpenVPN** folder, your point-to-site User VPN configuration settings need to be modified to use the OpenVPN tunnel type.
-
-1. Configure the Azure VPN Client using the steps in the [Configure the Azure VPN Client](virtual-wan-point-to-site-azure-ad.md#configure-client) article as a guideline.
-
-### <a name = "HA"></a>Configure High Availability settings
-
-1. Open the Azure VPN Client and go to **Settings**.
-
- :::image type="content" source="./media/high-availability-vpn-client/settings.png" alt-text="Screenshot shows VPN client with settings selected." lightbox="./media/high-availability-vpn-client/settings-expand.png":::
-
-1. On the **Settings** page, select **Enable High Availability**.
-
- :::image type="content" source="./media/high-availability-vpn-client/enable.png" alt-text="Screenshot shows High Availability checkbox." lightbox="./media/high-availability-vpn-client/enable-expand.png":::
-
-1. On the home page for the client, save your settings.
-
-1. Connect to the VPN. After connecting, you'll see **Connected (HA)** in the left pane. You can also see the connection in the **Status logs**.
-
- :::image type="content" source="./media/high-availability-vpn-client/ha-logs.png" alt-text="Screenshot shows High Availability in left pane and in status logs." lightbox="./media/high-availability-vpn-client/ha-logs-expand.png":::
-
-1. If you later decide that you don't want to use HA, deselect the **Enable High Availability** checkbox on the Azure VPN Client and reconnect to the VPN.
-
-## <a name = "macOS"></a>macOS
-
-1. Use the steps in the [Azure AD - macOS](openvpn-azure-ad-client-mac.md) article as a configuration guideline. The settings you configure may be different than the configuration example in the article, depending on what type of authentication you're using. Configure the Azure VPN Client with the settings specified in the VPN client profile.
-
-1. Open the **Azure VPN Client** and click **Settings** at the bottom of the page.
-
- :::image type="content" source="./media/high-availability-vpn-client/mac-settings.png" alt-text="Screenshot click Settings button." lightbox="./media/high-availability-vpn-client/mac-settings.png":::
-
-1. On the **Settings** page, select **Enable High Availability**. Settings are automatically saved.
-
- :::image type="content" source="./media/high-availability-vpn-client/mac-ha-settings.png" alt-text="Screenshot shows Enable High Availability." lightbox="./media/high-availability-vpn-client/mac-ha-settings-expand.png":::
-
-1. Click **Connect**. Once you're connected, you can view the connection status in the left pane and in the **Status logs**.
-
- :::image type="content" source="./media/high-availability-vpn-client/mac-connected.png" alt-text="Screenshot mac logs and H A connection status." lightbox="./media/high-availability-vpn-client/mac-connected-expand.png":::
-
-## Next steps
-
-For VPN client profile information, see [Global and hub-based profiles](global-hub-profile.md).
virtual-wan Hub Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/hub-settings.md
+
+ Title: 'About virtual hub settings'
+
+description: This article answers common questions about virtual hub settings and routing infrastructure units.
+++ Last updated : 05/20/2022+++
+# About virtual hub settings
+
+This article helps you understand the various settings available for virtual hubs. A virtual hub is a Microsoft-managed virtual network that contains various service endpoints to enable connectivity. The virtual hub is the core of your network in a region. Multiple virtual hubs can be created in the same region.
+
+A virtual hub can contain gateways for site-to-site VPN, ExpressRoute, or point-to-site User VPN. For example, when using Virtual WAN, you don't create a site-to-site connection from your on-premises site directly to your VNet. Instead, you create a site-to-site connection to the virtual hub. The traffic always goes through the virtual hub gateway. This means that your VNets don't need their own virtual network gateway. Virtual WAN lets your VNets take advantage of scaling easily through the virtual hub and the virtual hub gateway. For more information about gateways, see [Gateway settings](gateway-settings.md). Note that a virtual hub gateway isn't the same as a virtual network gateway that you use for ExpressRoute and VPN Gateway.
+
+When you create a virtual hub, a virtual hub router is deployed. The virtual hub router, within the Virtual WAN hub, is the central component that manages all routing between gateways and virtual networks (VNets). Routing infrastructure units determine the minimum throughput of the virtual hub router, and the number of virtual machines that can be deployed in VNets that are connected to the Virtual WAN virtual hub.
+
+You can create an empty virtual hub (a virtual hub that doesn't contain any gateways) and then add gateways (S2S, P2S, ExpressRoute, etc.) later, or create the virtual hub and gateways at the same time. Once a virtual hub is created, virtual hub pricing applies, even if you don't create any gateways within the virtual hub. For more information, see [Azure Virtual WAN pricing](https://azure.microsoft.com/pricing/details/virtual-wan/).
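For instance, creating an empty virtual hub in an existing virtual WAN might look like the following hedged Azure CLI sketch; the resource names, region, and address prefix are assumptions:

```azurecli
# Create an empty virtual hub in an existing virtual WAN; gateways can be added later.
az network vhub create \
  --resource-group MyResourceGroup \
  --name MyVirtualHub \
  --vwan MyVirtualWan \
  --address-prefix 10.10.0.0/23 \
  --location eastus
```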
+
+## <a name="capacity"></a>Virtual hub capacity
+
+By default, the virtual hub router is automatically configured to deploy with a virtual hub capacity of 2 routing infrastructure units. This supports a minimum of 3 Gbps aggregate throughput, and 2000 connected VMs deployed in all virtual networks connected to that virtual hub.
+
+When you deploy a new virtual hub, you can specify additional routing infrastructure units to increase the default virtual hub capacity in increments of 1 Gbps and 1000 VMs. This feature gives you the ability to secure upfront capacity without having to wait for the virtual hub to scale out when more throughput is needed. The scale unit on which the virtual hub is created becomes the minimum capacity. You can view routing infrastructure units, router Gbps, and number of VMs supported, in the Azure portal **Virtual hub** pages for **Create virtual hub** and **Edit virtual hub**.
+
+### Configure virtual hub capacity
+
+Capacity is configured on the **Basics** tab **Virtual hub capacity** setting when you create your virtual hub.
++
+#### Edit virtual hub capacity
+
+Adjust the virtual hub capacity when you need to support additional virtual machines and the aggregate throughput of the virtual hub router.
+
+To add additional virtual hub capacity, go to the virtual hub in the Azure portal. On the **Overview** page, click **Edit virtual hub**. Adjust the **Virtual hub capacity** using the dropdown, then **Confirm**.
+
+### Routing infrastructure unit table
+
+For pricing information, see [Azure Virtual WAN pricing](https://azure.microsoft.com/pricing/details/virtual-wan/).
+
+| Routing infrastructure unit | Aggregate throughput<br>Gbps| Number of VMs |
+| | | |
+| 2 | 3 | 2000 |
+| 3 | 3 | 3000 |
+| 4 | 4 | 4000 |
+| 5 | 5 | 5000 |
+| 6 | 6 | 6000 |
+| 7 | 7 | 7000 |
+| 8 | 8 | 8000 |
+| 9 | 9 | 9000 |
+| 10 | 10 | 10000 |
+| 11 | 11 | 11000 |
+| 12 | 12 | 12000 |
+| 13 | 13 | 13000 |
+| 14 | 14 | 14000 |
+| 15 | 15 | 15000 |
+| 16 | 16 | 16000 |
+| 17 | 17 | 17000 |
+| 18 | 18 | 18000 |
+| 19 | 19 | 19000 |
+| 20 | 20 | 20000 |
+| 21 | 21 | 21000 |
+| 22 | 22 | 22000 |
+| 23 | 23 | 23000 |
+| 24 | 24 | 24000 |
+| 25 | 25 | 25000 |
+| 26 | 26 | 26000 |
+| 27 | 27 | 27000 |
+| 28 | 28 | 28000 |
+| 29 | 29 | 29000 |
+| 30 | 30 | 30000 |
+| 31 | 31 | 31000 |
+| 32 | 32 | 32000 |
+| 33 | 33 | 33000 |
+| 34 | 34 | 34000 |
+| 35 | 35 | 35000 |
+| 36 | 36 | 36000 |
+| 37 | 37 | 37000 |
+| 38 | 38 | 38000 |
+| 39 | 39 | 39000 |
+| 40 | 40 | 40000 |
+| 41 | 41 | 41000 |
+| 42 | 42 | 42000 |
+| 43 | 43 | 43000 |
+| 44 | 44 | 44000 |
+| 45 | 45 | 45000 |
+| 46 | 46 | 46000 |
+| 47 | 47 | 47000 |
+| 48 | 48 | 48000 |
+| 49 | 49 | 49000 |
+| 50 | 50 | 50000 |
+
+## <a name="gateway"></a>Gateway settings
+
+Each virtual hub can contain multiple gateways (site-to-site, point-to-site User VPN, and ExpressRoute). When you create your virtual hub, you can configure gateways at the same time, or create an empty virtual hub and add the gateway settings later. When you edit a virtual hub, you'll see settings that pertain to gateways. For example, gateway scale units.
+
+Gateway scale units are different than routing infrastructure units. You adjust gateway scale units when you need more aggregated throughput for the gateway itself. You adjust virtual hub infrastructure units when you want the virtual hub router to support more VMs.
+
+For more information about gateway settings, see [Gateway settings](gateway-settings.md).
+
+## <a name="type"></a>Basic and Standard
+
+The virtual WAN type (Basic or Standard) determines the types of resources that can be created within a virtual hub, including the type of gateways that can be created (site-to-site VPN, point-to-site User VPN, and ExpressRoute). This setting is configured on the virtual WAN object. For more information, see [Upgrade from Basic to Standard](upgrade-virtual-wan.md).
+
+The following table shows the configurations available for each virtual WAN type:
++
+## <a name="router-status"></a>Virtual hub router status
++
+## Next steps
+
+For virtual hub routing, see [About virtual hub routing](about-virtual-hub-routing.md).
virtual-wan Manage Secure Access Resources Spoke P2s https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/manage-secure-access-resources-spoke-p2s.md
This article shows you how to use Virtual WAN and Azure Firewall rules and filte
The steps in this article help you create the architecture in the following diagram to allow User VPN clients to access a specific resource (VM1) in a spoke VNet connected to the virtual hub, but not other resources (VM2). Use this architecture example as a basic guideline. ## Prerequisites
In this section, you create the virtual hub with a point-to-site gateway. When c
## <a name="generate"></a>Generate VPN client configuration files
-In this section, you generate and download the configuration profile files. These files are used to configure the native VPN client on the client computer. For information about the contents of the client profile files, see [Point-to-site configuration - certificates](../vpn-gateway/point-to-site-vpn-client-configuration-azure-cert.md).
+In this section, you generate and download the configuration profile files. These files are used to configure the native VPN client on the client computer. For information about the contents of the client profile files, see [Point-to-site configuration - certificates](../vpn-gateway/point-to-site-vpn-client-cert-windows.md#generate).
[!INCLUDE [Download profile](../../includes/virtual-wan-p2s-download-profile-include.md)]
virtual-wan Pricing Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/pricing-concepts.md
Previously updated : 09/02/2021 Last updated : 05/20/2022
Azure Virtual WAN is a networking service that brings many networking, security,
This article discusses three commonly deployed scenarios with Azure Virtual WAN and typical price estimates for the deployments based on the listed prices. Additionally, there can be many other scenarios where Virtual WAN may be useful. > [!IMPORTANT]
-> The pricing shown in this article is intended to be used for example purposes only.
-> * Pricing can change at any point. For current pricing information, see the [Virtual WAN pricing](https://azure.microsoft.com/pricing/details/virtual-wan/) page.
-> * Inter-hub (hub-to-hub) charges do not show in the Virtual WAN pricing page because pricing is subject to Inter-Region (Intra/Inter-continental) charges. For more information, see [Azure data transfer charges](https://azure.microsoft.com/pricing/details/bandwidth/).
+> The pricing shown in this article is intended to be used for example purposes only.
+>
+> * Pricing can change at any point. For current pricing information, see the [Virtual WAN pricing](https://azure.microsoft.com/pricing/details/virtual-wan/) page.
+> * Inter-hub (hub-to-hub) charges do not show in the Virtual WAN pricing page because pricing is subject to Inter-Region (Intra/Inter-continental) charges. For more information, see [Azure data transfer charges](https://azure.microsoft.com/pricing/details/bandwidth/).
+> * For virtual hub routing infrastructure unit pricing, see the [Virtual WAN pricing](https://azure.microsoft.com/pricing/details/virtual-wan/) page.
> ## <a name="pricing"></a>Pricing components
The following diagram shows the typical data routes in a network involving Virtu
|2'|Data transfer from a VPN Site-to-Site branch via Standard vWAN hub to ExpressRoute connected Data Center/ HQ in East US|Deployment hour ($0.25/hr) + VPN S2S Scale Unit ($0.261/hr) + VPN S2S Connection Unit ($0.05/hr) + ExpressRoute Scale Unit ($0.42/hr) + ExpressRoute Connection Unit ($0.05/hr) = $1.03/hr|ExpressRoute Metered Outbound Zone 1 ($0.025/GB) = $0.025/GB| |3|Data transfer from a VPN Site-to-Site branch via Standard vWAN hub to ExpressRoute connected Data Center/ HQ in East US|Deployment hour ($0.25/hr) + VPN S2S Scale Unit ($0.261/hr) + VPN S2S Connection Unit ($0.05/hr) + ExpressRoute Scale Unit ($0.42/hr) + ExpressRoute Connection Unit ($0.05/hr) = $1.03/hr|ExpressRoute Metered Outbound Zone 1 ($0.025/GB) = $0.025/GB| |4|Data transfer from a spoke VNet to ExpressRoute connected Data Center/ HQ via Standard vWAN hub in East US|Deployment hour ($0.25/hr) + ExpressRoute Scale Unit ($0.42/hr) + ExpressRoute Connection Unit ($0.05/hr) = $0.72/hr|VNet peering (outbound) ($0.01/GB) + ExpressRoute Metered Outbound Zone 1 ($0.025/GB) = $0.035/GB|
-|4'|Data transfer from ExpressRoute connected Data Center/ HQ to a spoke VNet via Standard vWAN hub in East US|Deployment hour ($0.25/hr) + ExpressRoute Scale Unit ($0.42/hr) + ExpressRoute Connection Unit ($0.05/hr) = $0.72/hr|VNet peering (inbound) ($0.01/GB) = $0.01/GB|
-|4"|Data transfer from ExpressRoute connected Data Center/ HQ to a remote spoke VNet via Standard vWAN hub in Europe|Deployment hour (2x$0.25/hr) + ExpressRoute Scale Unit ($0.42/hr) + ExpressRoute Connection Unit ($0.05/hr) = $0.97/hr|VNet peering (inbound) ($0.01/GB) + hub Data Processing (Europe) ($0.02/GB) + Inter-Region data transfer (East US to Europe) ($0.05/GB) = $0.08/GB|
+|4'|Data transfer from ExpressRoute connected Data Center/ HQ to a spoke VNet via Standard vWAN hub in East US|Deployment hour ($0.25/hr) + ExpressRoute Scale Unit ($0.42/hr) + ExpressRoute Connection Unit ($0.05/hr) = $0.72/hr|VNet peering (inbound) ($0.01/GB) = $0.01/GB|
+|4"|Data transfer from ExpressRoute connected Data Center/ HQ to a remote spoke VNet via Standard vWAN hub in Europe|Deployment hour (2x$0.25/hr) + ExpressRoute Scale Unit ($0.42/hr) + ExpressRoute Connection Unit ($0.05/hr) = $0.97/hr|VNet peering (inbound) ($0.01/GB) + hub Data Processing (Europe) ($0.02/GB) + Inter-Region data transfer (East US to Europe) ($0.05/GB) = $0.08/GB|
|5|Data transfer from a spoke VNet to another spoke VNet via Standard vWAN hub in East US|Deployment hour ($0.25/hr) = $0.25/hr|VNet peering (outbound + inbound) (2x$0.01/GB) + hub Data Processing ($0.02/GB) = $0. 04/GB| |6|Data transfer from a spoke VNet connected to a hub in East US to another spoke VNet in Europe (a different region) that is connected to a hub in Europe|Deployment hour (2x$0.25/hr) = $0.50/hr|VNet peering (outbound + inbound) (2x$0.01/GB) + hub Data Processing (2x$0.02/GB) + Inter-Region data transfer (East US to Europe) ($0.05/GB) = $0. 11/GB| |7|Data transfer from a spoke VNet to a User VPN (Point-to-Site) via Standard vWAN hub in Europe|Deployment hour ($0.25/hr) + VPN P2S Scale Unit ($0.261/hr) + VPN P2S Connection Unit ($0.0125/hr) = $0.524/hr|VNet peering (outbound) ($0.01/GB) + Standard Outbound Zone 1 ($0.087/GB) = $0.097/GB|
In this scenario, we assumed a total of 8-TB data flowing through the global net
| Value | Calculation | | | |
-|S2S VPN hub Singapore |(1 S2S VPN scale unit ($0.361/hr) + 1 connection unit ($0.05/hr)) x 730 hours = $300 per month|
+|S2S VPN hub Singapore |(1 S2S VPN scale unit ($0.361/hr) + 1 connection unit ($0.05/hr)) x 730 hours = $300 per month|
|ExpressRoute hub US E |(1 ER scale unit ($0.42/hr) + 1 connection unit ($0.05/hr)) x 730 hours = $343 per month| |ExpressRoute hub EU|(1 ER scale unit ($0.42/hr) + 1 connection unit ($0.05/hr)) x 730 hours = $343 per month| |Standard hub deployment cost |3 hubs x 730 hours x $0.25/hr = $548 per month|
A **connection unit** applies to any on-premises/non-Microsoft endpoint connecti
### <a name="data-transfer"></a>How are data transfer charges calculated?
-* Any traffic entering Azure is not charged. Traffic leaving Azure (via VPN, ExpressRoute, or Point-to-site User VPN connections) is subject to the standard [Azure data transfer charges](https://azure.microsoft.com/pricing/details/bandwidth/) or, in the case of ExpressRoute, [ExpressRoute pricing](https://azure.microsoft.com/pricing/details/expressroute/).
+* Any traffic entering Azure isn't charged. Traffic leaving Azure (via VPN, ExpressRoute, or Point-to-site User VPN connections) is subject to the standard [Azure data transfer charges](https://azure.microsoft.com/pricing/details/bandwidth/) or, in the case of ExpressRoute, [ExpressRoute pricing](https://azure.microsoft.com/pricing/details/expressroute/).
* Peering charges are applicable when a VNet connected to a vWAN hub sends or receives data. For more information, see [Virtual Network pricing](https://azure.microsoft.com/pricing/details/virtual-network/).
-* For data transfer charges between a Virtual WAN hub, and a remote Virtual WAN hub or VNet in a different region than the source hub, data transfer charges apply for traffic leaving a hub. Example: Traffic leaving an East US hub will be charged $0.02/GB going to a West US hub. There is no charge for traffic entering the West US hub. All hub to hub traffic is subject to Inter-Region (Intra/Inter-continental) charges [Azure data transfer charges](https://azure.microsoft.com/pricing/details/bandwidth/).
+* For data transfer charges between a Virtual WAN hub, and a remote Virtual WAN hub or VNet in a different region than the source hub, data transfer charges apply for traffic leaving a hub. Example: Traffic leaving an East US hub will be charged $0.02/GB going to a West US hub. There's no charge for traffic entering the West US hub. All hub to hub traffic is subject to Inter-Region (Intra/Inter-continental) charges [Azure data transfer charges](https://azure.microsoft.com/pricing/details/bandwidth/).
### <a name="fee"></a>What is the difference between a Standard hub fee and a Standard hub processing fee? Virtual WAN comes in two flavors:
-* A **Basic virtual WAN**, where users can deploy multiple hubs and use VPN Site-to-site connectivity. A Basic virtual WAN does not have advanced capabilities such as fully meshed hubs, ExpressRoute connectivity, User VPN/Point-to-site VPN connectivity, VNet-to-VNet transitive connectivity, VPN and ExpressRoute transit connectivity, or Azure Firewall. There is no base fee or data processing fee for hubs in a Basic virtual WAN.
+* A **Basic virtual WAN**, where users can deploy multiple hubs and use VPN Site-to-site connectivity. A Basic virtual WAN doesn't have advanced capabilities such as fully meshed hubs, ExpressRoute connectivity, User VPN/Point-to-site VPN connectivity, VNet-to-VNet transitive connectivity, VPN and ExpressRoute transit connectivity, or Azure Firewall. There's no base fee or data processing fee for hubs in a Basic virtual WAN.
-* A **Standard virtual WAN** provides advanced capabilities, such as fully meshed hubs, ExpressRoute connectivity, User VPN/Point-to-site VPN connectivity, VNet-to-VNet transitive connectivity, VPN and ExpressRoute transit connectivity, and Azure Firewall, etc. All of the virtual hub routing is provided by a router that enables multiple services in a virtual hub. There is a base fee for the hub, which is priced at $0.25/hr. There is also a charge for data processing in the virtual hub router for VNet-to-VNet transit connectivity. The data processing charge in the virtual hub router is not applicable for branch-to-branch transfers (Scenario 2, 2', 3), or VNet-to-branch transfers via the same vWAN hub (Scenario 1, 1') as shown in the [Pricing Components](#pricing).
+* A **Standard virtual WAN** provides advanced capabilities, such as fully meshed hubs, ExpressRoute connectivity, User VPN/Point-to-site VPN connectivity, VNet-to-VNet transitive connectivity, VPN and ExpressRoute transit connectivity, and Azure Firewall, etc. All of the virtual hub routing is provided by a router that enables multiple services in a virtual hub. There's a base fee for the hub, which is priced at $0.25/hr. There's also a charge for data processing in the virtual hub router for VNet-to-VNet transit connectivity. The data processing charge in the virtual hub router isn't applicable for branch-to-branch transfers (Scenario 2, 2', 3), or VNet-to-branch transfers via the same vWAN hub (Scenario 1, 1') as shown in the [Pricing Components](#pricing).
## Next steps * For current pricing, see [Virtual WAN pricing](https://azure.microsoft.com/pricing/details/virtual-wan/).
-* For more information about Virtual WAN, see the [FAQ](virtual-wan-faq.md).
--
+* For more information about Virtual WAN, see the [FAQ](virtual-wan-faq.md).
virtual-wan Virtual Wan About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-about.md
Previously updated : 04/19/2022 Last updated : 05/20/2022 # Customer intent: As someone with a networking background, I want to understand what Virtual WAN is and if it is the right choice for my Azure network.
To configure an end-to-end virtual WAN, you create the following resources:
* **Virtual WAN:** The virtualWAN resource represents a virtual overlay of your Azure network and is a collection of multiple resources. It contains links to all your virtual hubs that you would like to have within the virtual WAN. Virtual WAN resources are isolated from each other and can't contain a common hub. Virtual hubs across Virtual WAN don't communicate with each other.
-* **Hub:** A virtual hub is a Microsoft-managed virtual network. The hub contains various service endpoints to enable connectivity. From your on-premises network (vpnsite), you can connect to a VPN Gateway inside the virtual hub, connect ExpressRoute circuits to a virtual hub, or even connect mobile users to a point-to-site gateway in the virtual hub. The hub is the core of your network in a region. Multiple virtual hubs can be created in the same region.
+* **Hub:** A virtual hub is a Microsoft-managed virtual network. The hub contains various service endpoints to enable connectivity. From your on-premises network (vpnsite), you can connect to a VPN gateway inside the virtual hub, connect ExpressRoute circuits to a virtual hub, or even connect mobile users to a point-to-site gateway in the virtual hub. The hub is the core of your network in a region. Multiple virtual hubs can be created in the same region.
A hub gateway isn't the same as a virtual network gateway that you use for ExpressRoute and VPN Gateway. For example, when using Virtual WAN, you don't create a site-to-site connection from your on-premises site directly to your VNet. Instead, you create a site-to-site connection to the hub. The traffic always goes through the hub gateway. This means that your VNets don't need their own virtual network gateway. Virtual WAN lets your VNets take advantage of scaling easily through the virtual hub and the virtual hub gateway.
-* **Hub virtual network connection:** The Hub virtual network connection resource is used to connect the hub seamlessly to your virtual network. One virtual network can be connected to only one virtual hub.
+* **Hub virtual network connection:** The hub virtual network connection resource is used to connect the hub seamlessly to your virtual network. One virtual network can be connected to only one virtual hub.
* **Hub-to-hub connection:** Hubs are all connected to each other in a virtual WAN. This implies that a branch, user, or VNet connected to a local hub can communicate with another branch or VNet using the full mesh architecture of the connected hubs. You can also connect VNets within a hub transiting through the virtual hub, as well as VNets across hubs, using the hub-to-hub connected framework.
You can connect an Azure virtual network to a virtual hub. For more information,
Virtual WAN allows transit connectivity between VNets. VNets connect to a virtual hub via a virtual network connection. Transit connectivity between the VNets in **Standard Virtual WAN** is enabled due to the presence of a router in every virtual hub. This router is instantiated when the virtual hub is first created.
-The router can have four routing statuses: Provisioned, Provisioning, Failed, or None. The **Routing status** is located in the Azure portal by navigating to the Virtual Hub page.
-
-* A **None** status indicates that the virtual hub didn't provision the router. This can happen if the Virtual WAN is of type *Basic*, or if the virtual hub was deployed prior to the service being made available.
-* A **Failed** status indicates failure during instantiation. In order to instantiate or reset the router, you can locate the **Reset Router** option by navigating to the virtual hub Overview page in the Azure portal.
Every virtual hub router supports an aggregate throughput of up to 50 Gbps.
-Connectivity between the virtual network connections assumes, by default, a maximum total of 2000 VM workload across all VNets connected to a single virtual Hub. This [limit](../azure-resource-manager/management/azure-subscription-service-limits.md#virtual-wan-limits) can be increased opening an online customer support request. For cost implications, see **Routing Infrastructure Unit** cost on the [Azure Virtual WAN Pricing](https://azure.microsoft.com/pricing/details/virtual-wan/) page.
+Connectivity between the virtual network connections assumes, by default, a maximum total of 2,000 VM workloads across all VNets connected to a single virtual hub. **Hub infrastructure units** can be adjusted to support additional VMs. For more information about hub infrastructure units, see [Hub settings](hub-settings.md).
#### <a name="transit-er"></a>Transit connectivity between VPN and ExpressRoute
virtual-wan Virtual Wan Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-faq.md
description: See answers to frequently asked questions about Azure Virtual WAN n
Previously updated : 04/19/2022 Last updated : 05/20/2022 # Customer intent: As someone with a networking background, I want to read more details about Virtual WAN in a FAQ format.
Yes as long as the device supports IPsec IKEv1 or IKEv2. Virtual WAN partners au
### How do new partners that aren't listed in your launch partner list get onboarded?
-All virtual WAN APIs are open API. You can go over the documentation [Virtual WAN partner automation](virtual-wan-configure-automation-providers.md) to assess technical feasibility. An ideal partner is one that has a device that can be provisioned for IKEv1 or IKEv2 IPsec connectivity. Once the company has completed the automation work for their CPE device based on the automation guidelines provided above, you can reach out to azurevirtualwan@microsoft.com to be listed here [Connectivity through partners](virtual-wan-locations-partners.md#partners). If you're a customer that would like a certain company solution to be listed as a Virtual WAN partner, have the company contact the Virtual WAN by sending an email to azurevirtualwan@microsoft.com.
+All Virtual WAN APIs are open APIs. You can review the [Virtual WAN partner automation](virtual-wan-configure-automation-providers.md) documentation to assess technical feasibility. An ideal partner is one that has a device that can be provisioned for IKEv1 or IKEv2 IPsec connectivity. Once the company has completed the automation work for their CPE device based on the automation guidelines provided above, you can reach out to azurevirtualwan@microsoft.com to be listed in [Connectivity through partners](virtual-wan-locations-partners.md#partners). If you're a customer that would like a certain company solution to be listed as a Virtual WAN partner, have the company contact the Virtual WAN team by sending an email to azurevirtualwan@microsoft.com.
### How is Virtual WAN supporting SD-WAN devices?
Yes. Virtual WAN prefers ExpressRoute over VPN for traffic egressing Azure.
### When a Virtual WAN hub has an ExpressRoute circuit and a VPN site connected to it, what would cause a VPN connection route to be preferred over ExpressRoute?
-When an ExpressRoute circuit is connected to virtual hub, the Microsoft edge routers are the first node for communication between on-premises and Azure. These edge routers communicate with the Virtual WAN ExpressRoute gateways that, in turn, learn routes from the virtual hub router that controls all routes between any gateways in Virtual WAN. The Microsoft edge routers process virtual hub ExpressRoute routes with higher preference over routes learned from on-premises.
+When an ExpressRoute circuit is connected to a virtual hub, the Microsoft Edge routers are the first node for communication between on-premises and Azure. These edge routers communicate with the Virtual WAN ExpressRoute gateways that, in turn, learn routes from the virtual hub router that controls all routes between any gateways in Virtual WAN. The Microsoft Edge routers process virtual hub ExpressRoute routes with higher preference over routes learned from on-premises.
-For any reason, if the VPN connection becomes the primary medium for the virtual hub to learn routes from (e.g failover scenarios between ExpressRoute and VPN), unless the VPN site has a longer AS Path length, the virtual hub will continue to share VPN learned routes with the ExpressRoute gateway. This causes the Microsoft edge routers to prefer VPN routes over on-premises routes.
+If, for any reason, the VPN connection becomes the primary medium for the virtual hub to learn routes from (for example, failover scenarios between ExpressRoute and VPN), then unless the VPN site has a longer AS path length, the virtual hub will continue to share VPN-learned routes with the ExpressRoute gateway. This causes the Microsoft Edge routers to prefer VPN routes over on-premises routes.
### <a name="expressroute-bow-tie"></a>When two hubs (hub 1 and 2) are connected and there's an ExpressRoute circuit connected as a bow-tie to both the hubs, what is the path for a VNet connected to hub 1 to reach a VNet connected in hub 2?
The current behavior is to prefer the ExpressRoute circuit path over hub-to-hub
* Configure multiple ExpressRoute circuits (different providers) to connect to one hub and use the hub-to-hub connectivity provided by Virtual WAN for inter-region traffic flows.
-* Contact the product team to take part in the gated public preview. In this preview, traffic between the 2 hubs traverses through the Azure Virtual WAN router in each hub and uses a hub-to-hub path instead of the ExpressRoute path (which traverses through the Microsoft edge routers/MSEE). To use this feature during preview, email **previewpreferh2h@microsoft.com** with the Virtual WAN IDs, Subscription ID, and the Azure region. Expect a response within 48 business hours (Monday-Friday) with confirmation that the feature is enabled.
+* Contact the product team to take part in the gated public preview. In this preview, traffic between the two hubs traverses the Azure Virtual WAN router in each hub and uses a hub-to-hub path instead of the ExpressRoute path (which traverses the Microsoft Edge routers/MSEE). To use this feature during preview, email **previewpreferh2h@microsoft.com** with the Virtual WAN IDs, Subscription ID, and the Azure region. Expect a response within 48 business hours (Monday-Friday) with confirmation that the feature is enabled.
### Can hubs be created in different resource group in Virtual WAN?
The recommended Virtual WAN hub address space is /23. Virtual WAN hub assigns su
### Can you resize or change the address prefixes of a spoke virtual network connected to the Virtual WAN hub?
-No. This is currently not possible. To change the address prefixes of a spoke virtual network, remove the connection between the spoke virtual network and the Virtual WAN hub, modify the address spaces of the spoke virtual network, and then re-create the connection between the spoke virtual network and the Virtual WAN hub.
+No. This currently isn't possible. To change the address prefixes of a spoke virtual network, remove the connection between the spoke virtual network and the Virtual WAN hub, modify the address spaces of the spoke virtual network, and then re-create the connection between the spoke virtual network and the Virtual WAN hub. Also, connecting two virtual networks with overlapping address spaces to the virtual hub is currently not supported.
### Is there support for IPv6 in Virtual WAN?
Yes. For a list of Managed Service Provider (MSP) solutions enabled via Azure Ma
Both Azure Virtual WAN hub and Azure Route Server provide Border Gateway Protocol (BGP) peering capabilities that network virtual appliances (NVAs) can use to advertise IP addresses from the NVA to the user's Azure virtual networks. The deployment options differ in that Azure Route Server is typically deployed in a self-managed customer hub VNet, whereas Azure Virtual WAN provides a zero-touch, fully meshed hub service to which customers connect their various spoke endpoints (Azure VNets, on-premises branches with site-to-site VPN or SD-WAN, remote users with point-to-site/remote user VPN, and private connections with ExpressRoute). Customers also get BGP peering for NVAs deployed in spoke VNets, along with other Virtual WAN capabilities such as VNet-to-VNet transit connectivity, transit connectivity between VPN and ExpressRoute, custom/advanced routing, custom route association and propagation, routing intent/policies for no-hassle inter-region security, and secured hub/Azure Firewall. For more details about Virtual WAN BGP peering, see [How to peer BGP with a virtual hub](scenario-bgp-peering-hub.md).
-### If I'm using a third-party security provider (Zscaler, iBoss or Checkpoint) to secure my internet traffic, why don't I see the VPN site associated to the third-party security provider in the Azure portal?
+### If I'm using a third-party security provider (Zscaler, iBoss, or Checkpoint) to secure my internet traffic, why don't I see the VPN site associated with the third-party security provider in the Azure portal?
-When you choose to deploy a security partner provider to protect Internet access for your users, the third-party security provider creates a VPN site on your behalf. Because the third-party security provider is created automatically by the provider and isn't a user-created VPN site, this VPN site won't show up in the Azure portal.
+When you choose to deploy a security partner provider to protect internet access for your users, the third-party security provider creates a VPN site on your behalf. Because this VPN site is created automatically by the provider and isn't a user-created VPN site, it won't show up in the Azure portal.
For more information about the available options for third-party security providers and how to set this up, see [Deploy a security partner provider](../firewall-manager/deploy-trusted-security-partner.md).
Yes, BGP communities generated by on-premises will be preserved in Virtual WAN.
### Why am I seeing a message and button called "Update router to latest software version" in portal?
-The Virtual WAN team has been working on upgrading virtual routers from their current Cloud Services infrastructure to Virtual Machine Scale Sets (VMSS) based deployments. This will enable the virtual hub router to now be availability zone aware. If you navigate to your Virtual WAN hub resource and see this message and button, then you can upgrade your router to the latest version by clicking on the button. The Cloud Services infrastructure will be deprecated soon. If you would like to take advantage of new Virtual WAN features, such as [BGP peering with the hub](create-bgp-peering-hub-portal.md), you will have to update your virtual hub router via Azure Portal.
+The Virtual WAN team has been working on upgrading virtual routers from their current Cloud Services infrastructure to deployments based on Virtual Machine Scale Sets. This will enable the virtual hub router to be availability zone aware. If you navigate to your Virtual WAN hub resource and see this message and button, you can upgrade your router to the latest version by selecting the button. The Cloud Services infrastructure will be deprecated soon. If you would like to take advantage of new Virtual WAN features, such as [BGP peering with the hub](create-bgp-peering-hub-portal.md), you'll have to update your virtual hub router via the Azure portal.
-YouΓÇÖll only be able to update your virtual hub router if all the resources (gateways/route tables/VNet connections) in your hub are in a succeeded state. Additionally, as this operation requires deployment of new VMSS based virtual hub routers, youΓÇÖll face an expected downtime of 30 minutes per hub. Within a single Virtual WAN resource, hubs should be updated one at a time instead of updating multiple at the same time. When the Router Version says ΓÇ£LatestΓÇ¥, then the hub is done updating. There will be no routing behavior changes after this update. If the update fails for any reason, your hub will be auto recovered to the old version to ensure there is still a working setup.
+You'll only be able to update your virtual hub router if all the resources (gateways/route tables/VNet connections) in your hub are in a succeeded state. Additionally, because this operation requires deployment of new virtual hub routers based on Virtual Machine Scale Sets, you'll face an expected downtime of 30 minutes per hub. Within a single Virtual WAN resource, hubs should be updated one at a time instead of updating multiple at the same time. When the Router Version says "Latest", the hub is done updating. There will be no routing behavior changes after this update. If the update fails for any reason, your hub will be automatically recovered to the old version to ensure there's still a working setup.
### Is there a route limit for OpenVPN clients connecting to an Azure P2S VPN gateway?
virtual-wan Virtual Wan Site To Site Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-site-to-site-portal.md
Previously updated : 03/18/2022 Last updated : 05/02/2022 # Customer intent: As someone with a networking background, I want to connect my local site to my VNets using Virtual WAN and I don't want to go through a Virtual WAN partner.
vpn-gateway Howto Point To Site Multi Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/howto-point-to-site-multi-auth.md
VPN clients must be configured with client configuration settings. The VPN clien
For instructions to generate and install VPN client configuration files, use the article that pertains to your configuration:
-* [Create and install VPN client configuration files for native Azure certificate authentication P2S configurations](point-to-site-vpn-client-configuration-azure-cert.md).
-* [Azure Active Directory authentication: Configure a VPN client for P2S OpenVPN protocol connections](openvpn-azure-ad-client.md).
## <a name="faq"></a>Point-to-Site FAQ
vpn-gateway Ikev2 Openvpn From Sstp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/ikev2-openvpn-from-sstp.md
Last updated 05/04/2022-+ # Transition to OpenVPN protocol or IKEv2 from SSTP
You can enable OpenVPN alongside IKEv2 if you desire. OpenVPN is TLS-based
:::image type="content" source="./media/ikev2-openvpn-from-sstp/change-tunnel-type.png" alt-text="Screenshot that shows the Point-to-site configuration page with Open VPN selected." lightbox="./media/ikev2-openvpn-from-sstp/change-tunnel-type.png":::
-Once the gateway has been configured, existing clients won't be able to connect until you [deploy and configure the OpenVPN clients](./vpn-gateway-howto-openvpn-clients.md).
+Once the gateway has been configured, existing clients won't be able to connect until you [deploy and configure the OpenVPN clients](point-to-site-vpn-client-cert-windows.md#view-openvpn).
-If you're using Windows 10, you can also use the [Azure VPN Client for Windows](./openvpn-azure-ad-client.md#download)
+If you're using Windows 10 or later, you can also use the [Azure VPN Client](point-to-site-vpn-client-cert-windows.md#azurevpn).
## <a name="faq"></a>Frequently asked questions
A P2S configuration requires quite a few specific steps. The following articles
* [Configure a P2S connection - RADIUS authentication](point-to-site-how-to-radius-ps.md)
-* [Configure a P2S connection - Azure native certificate authentication](vpn-gateway-howto-point-to-site-rm-ps.md)
+* [Configure a P2S connection - Azure certificate authentication](vpn-gateway-howto-point-to-site-rm-ps.md)
**"OpenVPN" is a trademark of OpenVPN Inc.**
vpn-gateway Point To Site How To Vpn Client Install Azure Cert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-how-to-vpn-client-install-azure-cert.md
Title: 'Install a Point-to-Site client certificate' description: Learn how to install client certificates for P2S certificate authentication - Windows, Mac, Linux.- - Previously updated : 09/03/2021 Last updated : 05/06/2022
If you want to generate a client certificate from a self-signed root certificate
* [Generate certificates - PowerShell](vpn-gateway-certificates-point-to-site.md) * [Generate certificates - MakeCert](vpn-gateway-certificates-point-to-site-makecert.md)
-* [Generate certificates - Linux](vpn-gateway-certificates-point-to-site-linux.md)
+* [Generate certificates - Linux](vpn-gateway-certificates-point-to-site-linux.md)
## <a name="installwin"></a>Windows [!INCLUDE [Install on Windows](../../includes/vpn-gateway-certificates-install-client-cert-include.md)]
-## <a name="installmac"></a>Mac
+## <a name="installmac"></a>macOS
>[!NOTE]
->Mac VPN clients are supported for the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md) only. They are not supported for the classic deployment model.
->
->
+>macOS VPN clients are supported for the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md) only. They are not supported for the classic deployment model.
[!INCLUDE [Install on Mac](../../includes/vpn-gateway-certificates-install-mac-client-cert-include.md)] ## <a name="installlinux"></a>Linux
-The Linux client certificate is installed on the client as part of the client configuration. See [Client configuration - Linux](point-to-site-vpn-client-configuration-azure-cert.md#linuxinstallcli) for instructions.
+The Linux client certificate is installed on the client as part of the client configuration. See [Client configuration - Linux](point-to-site-vpn-client-cert-linux.md) for instructions.
## Next steps
-Continue with the Point-to-Site configuration steps to [Create and install VPN client configuration files](point-to-site-vpn-client-configuration-azure-cert.md).
+Continue with the point-to-site configuration steps to create and install VPN client configuration files for [Windows](point-to-site-vpn-client-cert-windows.md), [macOS](point-to-site-vpn-client-cert-mac.md), or [Linux](point-to-site-vpn-client-cert-linux.md).
vpn-gateway Point To Site Vpn Client Cert Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-vpn-client-cert-linux.md
+
+ Title: 'Configure P2S VPN clients -certificate authentication - Linux (strongSwan)'
+
+description: Learn how to configure the Linux (strongSwan) VPN client solution for VPN Gateway P2S configurations that use certificate authentication. This article applies to Linux (strongSwan).
+++ Last updated : 05/18/2022+++
+# Configure point-to-site VPN clients - certificate authentication - Linux (strongSwan)
+
+When you connect to an Azure virtual network (VNet) using point-to-site (P2S) and certificate authentication from a Linux computer, you can use strongSwan. All of the necessary configuration settings for the VPN clients are contained in a VPN client configuration zip file. The settings in the zip file help you easily configure the VPN clients for Linux.
+
+The VPN client configuration files that you generate are specific to the P2S VPN gateway configuration for the virtual network. If there are any changes to the P2S VPN configuration after you generate the files, such as changes to the VPN protocol type or authentication type, you need to generate new VPN client configuration files and apply the new configuration to all of the VPN clients that you want to connect. For more information about P2S connections, see [About point-to-site VPN](point-to-site-about.md).
+
+## <a name="generate"></a>Before you begin
+
+Before beginning, verify that you are on the correct article. The following table shows the configuration articles available for Azure VPN Gateway P2S VPN clients. Steps differ, depending on the authentication type, tunnel type, and the client OS.
++
+>[!IMPORTANT]
+>[!INCLUDE [TLS](../../includes/vpn-gateway-tls-change.md)]
+
+## <a name="strongswan"></a>1. Install strongSwan
+
+The steps in this article use strongSwan.
++
+## <a name="certificates"></a>2. Install certificates
+
+A client certificate is required for authentication when using the Azure certificate authentication type. A client certificate must be installed on each client computer. The client certificate must be exported with its private key and must contain all certificates in the certification path. Make sure that the client computer has the appropriate client certificate installed before proceeding to the next section.
+
+For information about client certificates, see [Generate certificates - Linux](vpn-gateway-certificates-point-to-site-linux.md).
+
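+If you want to confirm that the exported bundle contains the certificate chain before copying it to the client, you can inspect it with OpenSSL. This is an optional check; the file name `client.p12` is only an example that matches the name used in the CLI steps later in this article.
+
+```cli
+# List the certificates inside the PKCS#12 bundle (you'll be prompted for its password).
+openssl pkcs12 -in client.p12 -info -nokeys
+```
+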
+## <a name="generate"></a>3. Generate VPN client configuration files
+
+You can generate client configuration files using PowerShell, or by using the Azure portal. Either method returns the same zip file.
+
+### <a name="portal"></a>Generate profile config files using the Azure portal
+
+1. In the Azure portal, navigate to the virtual network gateway for the virtual network that you want to connect to.
+1. On the virtual network gateway page, select **Point-to-site configuration** to open the Point-to-site configuration page.
+1. At the top of the Point-to-site configuration page, select **Download VPN client**. This doesn't download VPN client software; it generates the configuration package used to configure VPN clients. It takes a few minutes for the client configuration package to generate. During this time, you may not see any indication of progress until the package has been generated.
+
+ :::image type="content" source="./media/point-to-site-vpn-client-cert-linux/download-configuration.png" alt-text="Download the VPN client configuration." lightbox="./media/point-to-site-vpn-client-cert-linux/download-configuration.png":::
+1. Once the configuration package has been generated, your browser indicates that a client configuration zip file is available. It has the same name as your gateway.
+
+### <a name="powershell"></a>Generate profile config files using PowerShell
+
+1. When generating VPN client configuration files, the value for '-AuthenticationMethod' is 'EapTls'. Generate the VPN client configuration files using the following command:
+
+ ```azurepowershell-interactive
+ $profile=New-AzVpnClientConfiguration -ResourceGroupName "TestRG" -Name "VNet1GW" -AuthenticationMethod "EapTls"
+
+ $profile.VPNProfileSASUrl
+ ```
+
+1. Copy the URL to your browser to download the zip file.
+
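+As an alternative to downloading through a browser, you can fetch and extract the package from the command line. This is an optional sketch: replace `<profile-SAS-URL>` with the value printed by `$profile.VPNProfileSASUrl`, and adjust the example file and folder names as you like.
+
+```cli
+# Download the client configuration package and extract it for the next step.
+curl -Lo vpnclientconfiguration.zip "<profile-SAS-URL>"
+unzip vpnclientconfiguration.zip -d vpnclientconfiguration
+```
+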
+## 4. View the folder and files
+
+Unzip the file to view the following folders:
+
+* **WindowsAmd64** and **WindowsX86**, which contain the Windows 64-bit and 32-bit installer packages, respectively. The **WindowsAmd64** installer package is for all supported 64-bit Windows clients, not just AMD.
+* **Generic**, which contains general information used to create your own VPN client configuration. The Generic folder is provided if IKEv2 or SSTP+IKEv2 was configured on the gateway. If only SSTP is configured, then the Generic folder isn't present.
+
+## 5. Select the configuration instructions
+
+The sections below contain instructions to help you configure your VPN client. Select the tunnel type that your P2S configuration uses, then select the method that you want to use to configure.
+
+* [IKEv2 tunnel type steps](#ike)
+* [OpenVPN tunnel type steps](#openvpn)
+
+## <a name="ike"></a>IKEv2 tunnel type steps
+
+This section helps you configure Linux clients for certificate authentication that uses the IKEv2 tunnel type. To connect to Azure, you manually configure an IKEv2 VPN client.
+
+Go to the downloaded VPN client profile configuration files. You can find all of the information that you need for configuration in the **Generic** folder. Azure doesn't provide a *mobileconfig* file for this configuration.
+
+If you don't see the Generic folder, check the following items, then generate the zip file again.
+
+* Check the tunnel type for your configuration. It's likely that IKEv2 wasn't selected as a tunnel type.
+* On the VPN gateway, verify that the SKU isn't Basic. The VPN Gateway Basic SKU doesn't support IKEv2. Then, select IKEv2 and generate the zip file again to retrieve the Generic folder.
+
+The Generic folder contains the following files:
+
+* **VpnSettings.xml**, which contains important settings like server address and tunnel type.
+* **VpnServerRoot.cer**, which contains the root certificate required to validate the Azure VPN gateway during P2S connection setup.
+
+### <a name="gui"></a>GUI instructions
+
+This section walks you through the configuration using the strongSwan GUI. The following instructions were created on Ubuntu 18.04. Ubuntu 16.10 doesn't support the strongSwan GUI. If you want to use Ubuntu 16.10, you'll have to use the [command line](#linuxinstallcli). The following examples may not match the screens that you see, depending on your version of Linux and strongSwan.
+
+1. Open the **Terminal** to install **strongSwan** and its Network Manager by running the command in the example.
+
+ ```
+ sudo apt install network-manager-strongswan
+ ```
+1. Select **Settings**, then select **Network**. Select the **+** button to create a new connection.
+
+ :::image type="content" source="./media/point-to-site-vpn-client-cert-linux/edit-connections.png" alt-text="Screenshot shows the network connections page." lightbox="./media/point-to-site-vpn-client-cert-linux/expanded/edit-connections.png":::
+
+1. Select **IPsec/IKEv2 (strongSwan)** from the menu, and double-click.
+
+ :::image type="content" source="./media/point-to-site-vpn-client-cert-linux/add-connection.png" alt-text="Screenshot shows the Add VPN page." lightbox="./media/point-to-site-vpn-client-cert-linux/expanded/add-connection.png":::
+
+1. On the **Add VPN** page, add a name for your VPN connection.
+
+ :::image type="content" source="./media/point-to-site-vpn-client-cert-linux/choose-type.png" alt-text="Screenshot shows Choose a connection type." lightbox="./media/point-to-site-vpn-client-cert-linux/expanded/choose-type.png":::
+
+1. Open the **VpnSettings.xml** file from the **Generic** folder contained in the downloaded VPN client profile configuration files. Find the tag called **VpnServer** and copy the name, beginning with 'azuregateway' and ending with '.cloudapp.net'.
+
+ :::image type="content" source="./media/point-to-site-vpn-client-cert-linux/vpn-server.png" alt-text="Screenshot shows copy data." lightbox="./media/point-to-site-vpn-client-cert-linux/expanded/vpn-server.png":::
+
+1. Paste the name in the **Address** field of your new VPN connection in the **Gateway** section. Next, select the folder icon at the end of the **Certificate** field, browse to the **Generic** folder, and select the **VpnServerRoot** file.
+
+1. In the **Client** section of the connection, for **Authentication**, select **Certificate/private key**. For **Certificate** and **Private key**, choose the certificate and the private key that were created earlier. In **Options**, select **Request an inner IP address**. Then, select **Add**.
+
+ :::image type="content" source="./media/point-to-site-vpn-client-cert-linux/ip-request.png" alt-text="Screenshot shows Request an inner IP address." lightbox="./media/point-to-site-vpn-client-cert-linux/expanded/ip-request.png":::
+
+1. Turn the connection **On**.
+
+ :::image type="content" source="./media/point-to-site-vpn-client-cert-linux/turn-on.png" alt-text="Screenshot shows copy." lightbox="./media/point-to-site-vpn-client-cert-linux/expanded/turn-on.png":::
+
+### <a name="linuxinstallcli"></a>CLI instructions
+
+This section walks you through the configuration using the strongSwan CLI.
+
+1. From the VPN client profile configuration files **Generic** folder, copy or move the **VpnServerRoot.cer** to **/etc/ipsec.d/cacerts**.
+
+1. Copy or move **client.p12** to **/etc/ipsec.d/private/**. This file is the client certificate used to authenticate to the VPN gateway.
+
+1. Open the **VpnSettings.xml** file and copy the `<VpnServer>` value. You'll use this value in the next step.
+
+1. Adjust the values in the following example, then add the example to the **/etc/ipsec.conf** configuration.
+
+ ```cli
+ conn azure
+ keyexchange=ikev2
+ type=tunnel
+ leftfirewall=yes
+ left=%any
+ leftauth=eap-tls
+ leftid=%client # use the DNS alternative name prefixed with the %
+ right=<VpnServer value> # Azure VPN gateway address (the VpnServer value from VpnSettings.xml)
+ rightid=%<VpnServer value> # Azure VPN gateway FQDN, prefixed with %
+ rightsubnet=0.0.0.0/0
+ leftsourceip=%config
+ auto=add
+ ```
+
+1. Add the following values to **/etc/ipsec.secrets**.
+
+ ```cli
+ : P12 client.p12 'password' # key filename inside /etc/ipsec.d/private directory
+ ```
+
+1. Run the following commands:
+
+ ```cli
+ # ipsec restart
+ # ipsec up azure
+ ```
+
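+If the connection doesn't come up, strongSwan's status output is usually the quickest place to look. The following optional check (run as root, matching the prompt style above) shows the state of the IKE and CHILD security associations:
+
+```cli
+# ipsec statusall
+```
+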
+## <a name="openvpn"></a>OpenVPN tunnel type steps
+
+This section helps you configure Linux clients for certificate authentication that uses the OpenVPN tunnel type. To connect to Azure, you download the OpenVPN client and configure the connection profile.
++
+## Next steps
+
+For additional steps, return to the original point-to-site article that you were working from.
+
+* [PowerShell configuration steps](vpn-gateway-howto-point-to-site-rm-ps.md).
+* [Azure portal configuration steps](vpn-gateway-howto-point-to-site-resource-manager-portal.md).
vpn-gateway Point To Site Vpn Client Cert Mac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-vpn-client-cert-mac.md
+
+ Title: 'Configure P2S VPN clients -certificate authentication - macOS and iOS'
+
+description: Learn how to configure the VPN client for VPN Gateway P2S configurations that use certificate authentication. This article applies to macOS and iOS.
+++ Last updated : 05/18/2022+++
+# Configure point-to-site VPN clients - certificate authentication - macOS and iOS
+
+When you connect to an Azure virtual network (VNet) using VPN Gateway point-to-site (P2S), IKEv2, and certificate authentication, you use the VPN client that is natively installed on the operating system from which you're connecting. For OpenVPN connections, you use an OpenVPN client. All of the necessary configuration settings for the VPN clients are contained in a VPN client configuration zip file. The settings in the zip file help you easily configure the VPN clients for macOS and iOS.
+
+The VPN client configuration files that you generate are specific to the P2S VPN gateway configuration for the virtual network. If there are any changes to the P2S VPN configuration after you generate the files, such as changes to the VPN protocol type or authentication type, you need to generate new VPN client configuration files and apply the new configuration to all of the VPN clients that you want to connect. For more information about P2S connections, see [About point-to-site VPN](point-to-site-about.md).
+
+## <a name="generate"></a>Before you begin
+
+Before beginning, verify that you are on the correct article. The following table shows the configuration articles available for Azure VPN Gateway P2S VPN clients. Steps differ, depending on the authentication type, tunnel type, and the client OS.
++
+>[!IMPORTANT]
+>[!INCLUDE [TLS](../../includes/vpn-gateway-tls-change.md)]
+
+## <a name="generate"></a>Generate VPN client configuration files
+
+You can generate client configuration files using PowerShell, or by using the Azure portal. Either method returns the same zip file.
+
+### <a name="zipportal"></a>Generate files using the Azure portal
+
+1. In the Azure portal, navigate to the virtual network gateway for the virtual network that you want to connect to.
+1. On the virtual network gateway page, select **Point-to-site configuration** to open the Point-to-site configuration page.
+1. At the top of the Point-to-site configuration page, select **Download VPN client**. This doesn't download VPN client software; it generates the configuration package used to configure VPN clients. It takes a few minutes for the client configuration package to generate. During this time, you may not see any indication of progress until the package has been generated.
+
+ :::image type="content" source="./media/point-to-site-vpn-client-cert-mac/download-configuration.png" alt-text="Download the VPN client configuration." lightbox="./media/point-to-site-vpn-client-cert-mac/download-configuration.png":::
+1. Once the configuration package has been generated, your browser indicates that a client configuration zip file is available. It has the same name as your gateway. Unzip the file to view the folders.
+
+### <a name="zipps"></a>Generate files using PowerShell
+
+1. When generating VPN client configuration files, the value for '-AuthenticationMethod' is 'EapTls'. Generate the VPN client configuration files using the following command:
+
+ ```azurepowershell-interactive
+ $profile=New-AzVpnClientConfiguration -ResourceGroupName "TestRG" -Name "VNet1GW" -AuthenticationMethod "EapTls"
+
+ $profile.VPNProfileSASUrl
+ ```
+
+1. Copy the URL to your browser to download the zip file, then unzip the file to view the folders.
+
+## IKEv2 - macOS steps
+
+### <a name="view"></a>View files
+
+Unzip the file to view the following folders.
+
+* **WindowsAmd64** and **WindowsX86**, which contain the Windows 64-bit and 32-bit installer packages, respectively. The **WindowsAmd64** installer package is for all supported 64-bit Windows clients, not just AMD.
+* **Generic**, which contains general information used to create your own VPN client configuration. The Generic folder is provided if IKEv2 or SSTP+IKEv2 was configured on the gateway. If only SSTP is configured, then the Generic folder isn't present.
+
+To connect to Azure, you manually configure the native IKEv2 VPN client. Azure doesn't provide a *mobileconfig* file. You can find all of the information that you need for configuration in the **Generic** folder.
+
+If you don't see the Generic folder, check the following items, then generate the zip file again.
+
+* Check the tunnel type for your configuration. It's likely that IKEv2 wasn't selected as a tunnel type.
+* On the VPN gateway, verify that the SKU isn't Basic. The VPN Gateway Basic SKU doesn't support IKEv2. Then, select IKEv2 and generate the zip file again to retrieve the Generic folder.
+
+The **Generic** folder contains the following files.
+
+* **VpnSettings.xml**, which contains important settings like server address and tunnel type.
+* **VpnServerRoot.cer**, which contains the root certificate required to validate the Azure VPN gateway during P2S connection setup.
+
+Use the following steps to configure the native VPN client on Mac for certificate authentication. These steps must be completed on every Mac that you want to connect to Azure.
+
+### <a name="certificate"></a>Install certificates
+
+1. Copy the root certificate file, **VpnServerRoot.cer**, to your Mac. Double-click the certificate. The certificate will either install automatically, or you'll see the **Add Certificates** page.
+1. On the **Add Certificates** page, select **login** from the dropdown.
+
+ :::image type="content" source="./media/point-to-site-vpn-client-cert-mac/login.png" alt-text="Screenshot shows Add Certificates page with login selected.":::
+1. Click **Add** to import the file.
+
+ :::image type="content" source="./media/point-to-site-vpn-client-cert-mac/add.png" alt-text="Screenshot shows Add Certificates page with Add selected.":::
+
+### Verify certificate install
+
+Verify that both the client and the root certificate are installed. The client certificate is used for authentication and is required. For information about how to install a client certificate, see [Install a client certificate](point-to-site-how-to-vpn-client-install-azure-cert.md).
+
+1. Open the **Keychain Access** application.
+1. Navigate to the **Certificates** tab.
+1. Verify that both the client and the root certificate are installed.
+
+ :::image type="content" source="./media/point-to-site-vpn-client-cert-mac/keychain.png" alt-text="Screenshot shows Keychain Access with certificates installed." lightbox="./media/point-to-site-vpn-client-cert-mac/expanded/keychain.png":::
+
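+If you prefer the command line, macOS also includes the `security` tool. As an optional check, the following command lists the valid identities (certificate and private key pairs) in your default keychain search list, where the installed client certificate should appear:
+
+```cli
+security find-identity -v
+```
+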
+### <a name="create"></a>Configure VPN client profile
+
+1. Navigate to **System Preferences -> Network**. On the Network page, select **'+'** to create a new VPN client connection profile for a P2S connection to the Azure virtual network.
+1. For **Interface**, from the dropdown, select **VPN**.
+
+ :::image type="content" source="./media/point-to-site-vpn-client-cert-mac/select-vpn.png" alt-text="Screenshot shows the Network window with the option to select an interface, VPN is selected." lightbox="./media/point-to-site-vpn-client-cert-mac/expanded/select-vpn.png":::
+
+1. For **VPN Type**, from the dropdown, select **IKEv2**. In the **Service Name** field, specify a friendly name for the profile.
+
+ :::image type="content" source="./media/point-to-site-vpn-client-cert-mac/vpn-type.png" alt-text="Screenshot shows the Network window with the option to select an interface, select VPN type, and enter a service name." lightbox="./media/point-to-site-vpn-client-cert-mac/expanded/vpn-type.png":::
+
+1. Select **Create** to create the VPN client connection profile.
+1. In the **Generic** folder, open the **VpnSettings.xml** file using a text editor, and copy the **VpnServer** tag value.
+
+ :::image type="content" source="./media/point-to-site-vpn-client-cert-mac/server-tag.png" alt-text="Screenshot shows the VpnSettings.xml file open with the VpnServer tag highlighted." lightbox="./media/point-to-site-vpn-client-cert-mac/expanded/server-tag.png":::
+
+1. Paste the **VpnServer** tag value in both the **Server Address** and **Remote ID** fields of the profile.
+
+ :::image type="content" source="./media/point-to-site-vpn-client-cert-mac/paste-value.png" alt-text="Screenshot shows the Network window with the value pasted." lightbox="./media/point-to-site-vpn-client-cert-mac/expanded/paste-value.png":::
+
+### <a name="auth"></a>Configure authentication settings
+
+Configure authentication settings. There are two sets of instructions. Choose the instructions that correspond to your OS version.
+
+#### Catalina
+
+* For **Authentication Settings**, select **None**.
+* Select **Certificate**, select **Select**, and then choose the correct client certificate that you installed earlier. Then, select **OK**.
+
+ :::image type="content" source="./media/point-to-site-vpn-client-cert-mac/catalina.png" alt-text="Screenshot shows the Network window with None selected for Authentication Settings and Certificate selected.":::
+
+#### Big Sur
+
+* Select **Authentication Settings**, then select **Certificate**.
+
+ :::image type="content" source="./media/point-to-site-vpn-client-cert-mac/authentication-certificate.png" alt-text="Screenshot shows authentication settings with certificate selected." lightbox="./media/point-to-site-vpn-client-cert-mac/expanded/authentication-certificate.png":::
+
+* Select **Select** to open the **Choose An Identity** page. The **Choose An Identity** page displays a list of certificates for you to choose from. If you're unsure which certificate to use, you can select **Show Certificate** to see more information about each certificate.
+
+ :::image type="content" source="./media/point-to-site-vpn-client-cert-mac/show-certificate.png" alt-text="Screenshot shows certificate properties." lightbox="./media/point-to-site-vpn-client-cert-mac/expanded/show-certificate.png":::
+
+* Select the proper certificate, then select **Continue**.
+
+ :::image type="content" source="./media/point-to-site-vpn-client-cert-mac/choose-identity.png" alt-text="Screenshot shows Choose an Identity, where you can select a certificate." lightbox="./media/point-to-site-vpn-client-cert-mac/expanded/choose-identity.png":::
+
+* On the **Authentication Settings** page, verify that the correct certificate is shown, then select **OK**.
+
+ :::image type="content" source="./media/point-to-site-vpn-client-cert-mac/certificate.png" alt-text="Screenshot shows the Choose An Identity dialog box where you can select the proper certificate." lightbox="./media/point-to-site-vpn-client-cert-mac/expanded/certificate.png":::
+
+### <a name="certificate"></a>Specify certificate
+
+1. For both Catalina and Big Sur, in the **Local ID** field, specify the name of the certificate. In this example, it's `P2SChildCert`.
+
+ :::image type="content" source="./media/point-to-site-vpn-client-cert-mac/local-id.png" alt-text="Screenshot shows local ID value." lightbox="./media/point-to-site-vpn-client-cert-mac/expanded/local-id.png":::
+1. Select **Apply** to save all changes.
+
+### <a name="connect"></a>Connect
+
+1. Select **Connect** to start the P2S connection to the Azure virtual network.
+
+ :::image type="content" source="./media/point-to-site-vpn-client-cert-mac/select-connect.png" alt-text="Screenshot shows connect button." lightbox="./media/point-to-site-vpn-client-cert-mac/expanded/select-connect.png":::
+
+1. Once the connection has been established, the status shows as **Connected** and you can view the IP address that was pulled from the VPN client address pool.
+
+ :::image type="content" source="./media/point-to-site-vpn-client-cert-mac/connected.png" alt-text="Screenshot shows Connected." lightbox="./media/point-to-site-vpn-client-cert-mac/expanded/connected.png":::
+
+## OpenVPN - macOS steps
+
+>[!INCLUDE [OpenVPN Mac](../../includes/vpn-gateway-vwan-config-openvpn-mac.md)]
+
+## OpenVPN - iOS steps
+
+>[!INCLUDE [OpenVPN iOS](../../includes/vpn-gateway-vwan-config-openvpn-ios.md)]
+
+## Next steps
+
+For additional steps, return to the original point-to-site article that you were working from.
+
+* [PowerShell configuration steps](vpn-gateway-howto-point-to-site-rm-ps.md).
+* [Azure portal configuration steps](vpn-gateway-howto-point-to-site-resource-manager-portal.md).
vpn-gateway Point To Site Vpn Client Cert Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-vpn-client-cert-windows.md
+
+ Title: 'Configure P2S native VPN clients -certificate authentication - Windows'
+
+description: Learn how to configure the native VPN client VPN Gateway P2S configurations that use certificate authentication. This article applies to Windows.
+++ Last updated : 05/18/2022+++
+# Configure point-to-site native VPN clients - certificate authentication - Windows
+
+When you connect to an Azure virtual network (VNet) using point-to-site (P2S) and certificate authentication, you use the VPN client that is natively installed on the operating system from which you're connecting. All of the necessary configuration settings for the VPN clients are contained in a VPN client configuration zip file. The settings in the zip file help you easily configure the VPN clients for Windows.
+
+The VPN client configuration files that you generate are specific to the P2S VPN gateway configuration for the VNet. If there are any changes to the P2S VPN configuration after you generate the files, such as changes to the VPN protocol type or authentication type, you need to generate new VPN client configuration files and apply the new configuration to all of the VPN clients that you want to connect. For more information about P2S connections, see [About point-to-site VPN](point-to-site-about.md).
+
+## <a name="generate"></a>Before you begin
+
+Before beginning, verify that you are on the correct article. The following table shows the configuration articles available for Azure VPN Gateway P2S VPN clients. Steps differ, depending on the authentication type, tunnel type, and the client OS.
++
+>[!IMPORTANT]
+>[!INCLUDE [TLS](../../includes/vpn-gateway-tls-change.md)]
+
+## <a name="certificates"></a>1. Install certificates
+
+A client certificate is required for authentication when using the Azure certificate authentication type. A client certificate must be installed on each client computer. The client certificate must be exported with its private key and must contain all certificates in the certification path.
+
+* For information about client certificates, see [Point-to-site: generate certificates](vpn-gateway-howto-point-to-site-resource-manager-portal.md#generatecert).
+* To view an installed client certificate, open **Manage User Certificates**. The client certificate is installed in **Current User\Personal\Certificates**.
+
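+As an optional command-line check, you can list the certificates in the **Current User\Personal** ("My") store with the built-in `certutil` tool and confirm that the client certificate is present:
+
+```cli
+certutil -user -store My
+```
+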
+## <a name="generate"></a>2. Generate VPN client configuration files
+
+You can generate VPN client profile configuration files using PowerShell, or by using the Azure portal. Either method returns the same zip file.
+
+### <a name="zip-portal"></a>Generate files using the Azure portal
+
+1. In the Azure portal, navigate to the virtual network gateway for the VNet that you want to connect to.
+
+1. On the virtual network gateway page, select **Point-to-site configuration** to open the Point-to-site configuration page.
+
+1. At the top of the Point-to-site configuration page, select **Download VPN client**. This doesn't download VPN client software; it generates the configuration package used to configure VPN clients. It takes a few minutes for the client configuration package to generate. During this time, you may not see any indication of progress until the package has been generated.
+
+ :::image type="content" source="./media/point-to-site-vpn-client-cert-windows/download-configuration.png" alt-text="Download the VPN client configuration." lightbox="./media/point-to-site-vpn-client-cert-windows/download-configuration.png":::
+
+1. Once the configuration package has been generated, your browser indicates that a client configuration zip file is available. It has the same name as your gateway. The folders and files that the zip file contains depend on the settings that you selected when creating your P2S configuration.
+
+1. For next steps, depending on your P2S configuration, go to one of the following sections:
+
+ * [IKEv2 and SSTP - native client steps](#ike)
+ * [OpenVPN - OpenVPN client steps](#openvpn)
+ * [OpenVPN - Azure VPN client steps](#azurevpn)
+
+### <a name="zip-powershell"></a>Generate files using PowerShell
+
+1. When generating VPN client configuration files, the value for '-AuthenticationMethod' is 'EapTls'. Generate the VPN client configuration files using the following command:
+
+ ```azurepowershell-interactive
+ $profile=New-AzVpnClientConfiguration -ResourceGroupName "TestRG" -Name "VNet1GW" -AuthenticationMethod "EapTls"
+
+ $profile.VPNProfileSASUrl
+ ```
+
+1. Copy the URL to your browser to download the zip file. The folders and files that the zip file contains depend on the settings that you selected when creating your P2S configuration.
+
+1. For next steps, depending on your P2S configuration, go to one of the following sections:
+
+ * [IKEv2 and SSTP - native client steps](#ike)
+ * [OpenVPN - OpenVPN client steps](#openvpn)
+ * [OpenVPN - Azure VPN client steps](#azurevpn)
+
+## <a name="ike"></a>IKEv2 and SSTP - native VPN client steps
+
+This section helps you configure the native VPN client on your Windows computer to connect to your VNet. This configuration doesn't require additional client software.
+
+### <a name="view-ike"></a>View config files
+
+Unzip the configuration file to view the following folders:
+
+* **WindowsAmd64** and **WindowsX86**, which contain the Windows 64-bit and 32-bit installer packages, respectively. The **WindowsAmd64** installer package is for all supported 64-bit Windows clients, not just AMD.
+* **Generic**, which contains general information used to create your own VPN client configuration. The Generic folder is provided if IKEv2 or SSTP+IKEv2 was configured on the gateway. If only SSTP is configured, then the Generic folder isn't present.
+
+### <a name="install"></a>Configure VPN client profile
+
+You can use the same VPN client configuration package on each Windows client computer, as long as the version matches the architecture for the client. For the list of client operating systems that are supported, see the point-to-site section of the [VPN Gateway FAQ](vpn-gateway-vpn-faq.md#P2S).
+
+>[!NOTE]
+>You must have Administrator rights on the Windows client computer from which you want to connect.
+
+1. Select the VPN client configuration files that correspond to the architecture of the Windows computer. For a 64-bit processor architecture, choose the 'VpnClientSetupAmd64' installer package. For a 32-bit processor architecture, choose the 'VpnClientSetupX86' installer package.
+
+1. Double-click the package to install it. If you see a SmartScreen popup, click **More info**, then **Run anyway**.
+
+## <a name="azurevpn"></a>OpenVPN - Azure VPN client steps
+
+This section applies to certificate authentication configurations that are configured to use the OpenVPN tunnel type. The following steps help you download, install, and configure the Azure VPN client to connect to your VNet. To connect to your VNet, each client must have the following items:
+
+* The Azure VPN client software is installed.
+* Azure VPN client profile is configured using the downloaded **azurevpnconfig.xml** configuration file.
+* The client certificate is installed locally.
+
+### <a name="view-azurevpn"></a>View config files
+
+When you open the zip file, you'll see the **AzureVPN** folder. Locate the **azurevpnconfig.xml** file. This file contains the settings you use to configure the VPN client profile. If you don't see the file, verify the following items:
+
+* Verify that your VPN gateway is configured to use the OpenVPN tunnel type.
+* If you're using Azure AD authentication, you may not have an AzureVPN folder. See the [Azure AD](openvpn-azure-ad-client.md) configuration article instead.
+
+### Download the Azure VPN client
++
+### Configure the VPN client profile
+
+1. Open the Azure VPN client.
+
+1. Click **+** on the bottom left of the page, then select **Import**.
+
+1. In the window, navigate to the **azurevpnconfig.xml** file, select it, then click **Open**.
+
+1. From the **Certificate Information** dropdown, select the name of the child certificate (the client certificate). For example, **P2SChildCert**.
+
+ :::image type="content" source="./media/point-to-site-vpn-client-cert-windows/configure-certificate.png" alt-text="Screenshot showing Azure VPN client profile configuration page." lightbox="./media/point-to-site-vpn-client-cert-windows/configure-certificate.png":::
+
+   If you don't see a client certificate in the **Certificate Information** dropdown, you'll need to cancel the profile configuration import and fix the issue before proceeding. It's possible that one of the following things is true:
+
+ * The client certificate isn't installed locally on the client computer.
+ * There are multiple certificates with exactly the same name installed on your local computer (common in test environments).
+ * The child certificate is corrupt.
+
+1. After the import validates (imports with no errors), click **Save**.
+
+1. In the left pane, locate the **VPN connection**, then click **Connect**.
+
+## <a name="openvpn"></a>OpenVPN - OpenVPN client steps
+
+This section applies to certificate authentication configurations that are configured to use the OpenVPN tunnel type. The following steps help you configure the **OpenVPN &reg; Protocol** client and connect to your VNet.
+
+### <a name="view-openvpn"></a>View config files
+
+When you open the zip file, you should see an OpenVPN folder. If you don't see the folder, verify the following items:
+
+* Verify that your VPN gateway is configured to use the OpenVPN tunnel type.
+* If you're using Azure AD authentication, you may not have an OpenVPN folder. See the [Azure AD](openvpn-azure-ad-client.md) configuration article instead.
++
+## Connect
+
+To connect, return to the previous article that you were working from, and see [Connect to Azure](vpn-gateway-howto-point-to-site-resource-manager-portal.md#connect).
+
+## Next steps
+
+For additional steps, return to the point-to-site article that you were working from.
+
+* [PowerShell configuration steps](vpn-gateway-howto-point-to-site-rm-ps.md).
+* [Azure portal configuration steps](vpn-gateway-howto-point-to-site-resource-manager-portal.md).
vpn-gateway Point To Site Vpn Client Configuration Azure Cert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-vpn-client-configuration-azure-cert.md
- Title: 'P2S VPN client profile configuration files: certificate authentication'-
-description: Learn how to generate and install VPN client configuration files for Windows, Linux (strongSwan), and macOS. This article applies to VPN Gateway P2S configurations that use certificate authentication.
---- Previously updated : 07/15/2021---
-# Generate and install VPN client profile configuration files for certificate authentication
-
-When you connect to an Azure VNet using Point-to-Site and certificate authentication, you use the VPN client that is natively installed on the operating system from which you're connecting. All of the necessary configuration settings for the VPN clients are contained in a VPN client configuration zip file. The settings in the zip file help you easily configure the VPN clients for Windows, Mac IKEv2 VPN, or Linux.
-
-The VPN client configuration files that you generate are specific to the P2S VPN gateway configuration for the virtual network. If there are any changes to the Point-to-Site VPN configuration after you generate the files, such as changes to the VPN protocol type or authentication type, you need to generate new VPN client configuration files and apply the new configuration to all of the VPN clients that you want to connect.
-
-* For more information about Point-to-Site connections, see [About Point-to-Site VPN](point-to-site-about.md).
-* For OpenVPN instructions, see [Configure OpenVPN for P2S](vpn-gateway-howto-openvpn.md) and [Configure OpenVPN clients](vpn-gateway-howto-openvpn-clients.md).
-
->[!IMPORTANT]
->[!INCLUDE [TLS](../../includes/vpn-gateway-tls-change.md)]
->
-
-## <a name="generate"></a>Generate VPN client configuration files
-
-You can generate client configuration files using PowerShell, or by using the Azure portal. Either method returns the same zip file. Unzip the file to view the following folders:
-
-* **WindowsAmd64** and **WindowsX86**, which contain the Windows 64-bit and 32-bit installer packages, respectively. The **WindowsAmd64** installer package is for all supported 64-bit Windows clients, not just AMD.
-* **Generic**, which contains general information used to create your own VPN client configuration. The Generic folder is provided if IKEv2 or SSTP+IKEv2 was configured on the gateway. If only SSTP is configured, then the Generic folder isn't present.
-
-### <a name="zipportal"></a>Generate files using the Azure portal
-
-1. In the Azure portal, navigate to the virtual network gateway for the virtual network that you want to connect to.
-1. On the virtual network gateway page, select **Point-to-site configuration** to open the Point-to-site configuration page.
-1. At the top of the Point-to-site configuration page, select **Download VPN client**. This doesn't download VPN client software; it generates the configuration package used to configure VPN clients. It takes a few minutes for the client configuration package to generate. During this time, you may not see any indications until the package has been generated.
-
- :::image type="content" source="./media/point-to-site-vpn-client-configuration-azure-cert/download-client.png" alt-text="Download the VPN client configuration." lightbox="./media/point-to-site-vpn-client-configuration-azure-cert/expanded/download-client.png":::
-1. Once the configuration package has been generated, your browser indicates that a client configuration zip file is available. It has the same name as your gateway. Unzip the file to view the folders.
-
-### <a name="zipps"></a>Generate files using PowerShell
-
-1. When generating VPN client configuration files, the value for '-AuthenticationMethod' is 'EapTls'. Generate the VPN client configuration files using the following command:
-
- ```azurepowershell-interactive
- $profile=New-AzVpnClientConfiguration -ResourceGroupName "TestRG" -Name "VNet1GW" -AuthenticationMethod "EapTls"
-
- $profile.VPNProfileSASUrl
- ```
-
-1. Copy the URL to your browser to download the zip file, then unzip the file to view the folders.
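-
-   If you prefer to stay in PowerShell, you can also download and extract the package directly. The following commands are a minimal sketch; the local file and folder names are examples only.
-
-   ```azurepowershell-interactive
-   # Download the generated package by using the SAS URL from the previous step,
-   # then extract it to view the folders.
-   Invoke-WebRequest -Uri $profile.VPNProfileSASUrl -OutFile .\vpnclientconfiguration.zip
-   Expand-Archive -Path .\vpnclientconfiguration.zip -DestinationPath .\vpnclientconfiguration
-   ```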
-
-## <a name="installwin"></a>Windows
--
-## <a name="installmac"></a>Mac (macOS)
-
-In order to connect to Azure, you must manually configure the native IKEv2 VPN client. Azure doesn't provide a *mobileconfig* file. You can find all of the information that you need for configuration in the **Generic** folder.
-
-If you don't see the Generic folder in your download, it's likely that IKEv2 wasn't selected as a tunnel type. Note that the VPN gateway Basic SKU doesn't support IKEv2. On the VPN gateway, verify that the SKU isn't Basic. Then, select IKEv2 and generate the zip file again to retrieve the Generic folder.
-
-The Generic folder contains the following files:
-
-* **VpnSettings.xml**, which contains important settings like server address and tunnel type.
-* **VpnServerRoot.cer**, which contains the root certificate required to validate the Azure VPN gateway during P2S connection setup.
-
-Use the following steps to configure the native VPN client on Mac for certificate authentication. These steps must be completed on every Mac that you want to connect to Azure.
-
-### Import root certificate file
-
-1. Copy the root certificate file to your Mac. Double-click the certificate. The certificate will either automatically install, or you'll see the **Add Certificates** page.
-1. On the **Add Certificates** page, select **login** from the dropdown.
-
- :::image type="content" source="./media/point-to-site-vpn-client-configuration-azure-cert/login.png" alt-text="Screenshot shows Add Certificates page with login selected.":::
-1. Click **Add** to import the file.
-
- :::image type="content" source="./media/point-to-site-vpn-client-configuration-azure-cert/add.png" alt-text="Screenshot shows Add Certificates page with Add selected.":::
-
-### Verify certificate install
-
-Verify that both the client and the root certificate are installed. The client certificate is used for authentication and is required. For information about how to install a client certificate, see [Install a client certificate](point-to-site-how-to-vpn-client-install-azure-cert.md).
-
-1. Open the **Keychain Access** application.
-1. Navigate to the **Certificates** tab.
-1. Verify that both the client and the root certificate are installed.
-
- :::image type="content" source="./media/point-to-site-vpn-client-configuration-azure-cert/keychain.png" alt-text="Screenshot shows Keychain Access with certificates installed." lightbox="./media/point-to-site-vpn-client-configuration-azure-cert/expanded/keychain.png":::
-
-### Create VPN client profile
-
-1. Navigate to **System Preferences -> Network**. On the Network page, select **'+'** to create a new VPN client connection profile for a P2S connection to the Azure virtual network.
-1. For **Interface**, from the dropdown, select **VPN**.
-
- :::image type="content" source="./media/point-to-site-vpn-client-configuration-azure-cert/select-vpn.png" alt-text="Screenshot shows the Network window with the option to select an interface, VPN is selected." lightbox="./media/point-to-site-vpn-client-configuration-azure-cert/expanded/select-vpn.png":::
-
-1. For **VPN Type**, from the dropdown, select **IKEv2**. In the **Service Name** field, specify a friendly name for the profile.
-
- :::image type="content" source="./media/point-to-site-vpn-client-configuration-azure-cert/vpn-type.png" alt-text="Screenshot shows the Network window with the option to select an interface, select VPN type, and enter a service name." lightbox="./media/point-to-site-vpn-client-configuration-azure-cert/expanded/vpn-type.png":::
-
-1. Select **Create** to create the VPN client connection profile.
-1. In the **Generic** folder, open the **VpnSettings.xml** file using a text editor, and copy the **VpnServer** tag value.
-
- :::image type="content" source="./media/point-to-site-vpn-client-configuration-azure-cert/server-tag.png" alt-text="Screenshot shows the VpnSettings.xml file open with the VpnServer tag highlighted." lightbox="./media/point-to-site-vpn-client-configuration-azure-cert/expanded/server-tag.png":::
-
-1. Paste the **VpnServer** tag value in both the **Server Address** and **Remote ID** fields of the profile.
-
- :::image type="content" source="./media/point-to-site-vpn-client-configuration-azure-cert/paste-value.png" alt-text="Screenshot shows the Network window with the value pasted." lightbox="./media/point-to-site-vpn-client-configuration-azure-cert/expanded/paste-value.png":::
-
-1. Configure authentication settings. There are two sets of instructions. Choose the instructions that correspond to your OS version.
-
- **Catalina:**
-
- * For **Authentication Settings** select **None**.
- * Select **Certificate**, select **Select**, and then choose the correct client certificate that you installed earlier. Then, select **OK**.
-
- :::image type="content" source="./media/point-to-site-vpn-client-configuration-azure-cert/catalina.png" alt-text="Screenshot shows the Network window with None selected for Authentication Settings and Certificate selected.":::
-
- **Big Sur:**
-
- * Select **Authentication Settings**, then select **Certificate**.
-
- :::image type="content" source="./media/point-to-site-vpn-client-configuration-azure-cert/authentication-certificate.png" alt-text="Screenshot shows authentication settings with certificate selected." lightbox="./media/point-to-site-vpn-client-configuration-azure-cert/expanded/authentication-certificate.png":::
-
- * Select **Select** to open the **Choose An Identity** page. The **Choose An Identity** page displays a list of certificates for you to choose from. If you're unsure which certificate to use, you can select **Show Certificate** to see more information about each certificate.
-
- :::image type="content" source="./media/point-to-site-vpn-client-configuration-azure-cert/show-certificate.png" alt-text="Screenshot shows certificate properties." lightbox="./media/point-to-site-vpn-client-configuration-azure-cert/expanded/show-certificate.png":::
- * Select the proper certificate, then select **Continue**.
-
- :::image type="content" source="./media/point-to-site-vpn-client-configuration-azure-cert/choose-identity.png" alt-text="Screenshot shows Choose an Identity, where you can select a certificate." lightbox="./media/point-to-site-vpn-client-configuration-azure-cert/expanded/choose-identity.png":::
-
- * On the **Authentication Settings** page, verify that the correct certificate is shown, then select **OK**.
-
- :::image type="content" source="./media/point-to-site-vpn-client-configuration-azure-cert/certificate.png" alt-text="Screenshot shows the Choose An Identity dialog box where you can select the proper certificate." lightbox="./media/point-to-site-vpn-client-configuration-azure-cert/expanded/certificate.png":::
-
-1. For both Catalina and Big Sur, in the **Local ID** field, specify the name of the certificate. In this example, it's `P2SChildCert`.
-
- :::image type="content" source="./media/point-to-site-vpn-client-configuration-azure-cert/local-id.png" alt-text="Screenshot shows local ID value." lightbox="./media/point-to-site-vpn-client-configuration-azure-cert/expanded/local-id.png":::
-1. Select **Apply** to save all changes.
-1. Select **Connect** to start the P2S connection to the Azure virtual network.
-
- :::image type="content" source="./media/point-to-site-vpn-client-configuration-azure-cert/select-connect.png" alt-text="Screenshot shows connect button." lightbox="./media/point-to-site-vpn-client-configuration-azure-cert/expanded/select-connect.png":::
-
-1. Once the connection has been established, the status shows as **Connected** and you can view the IP address that was pulled from the VPN client address pool.
-
- :::image type="content" source="./media/point-to-site-vpn-client-configuration-azure-cert/connected.png" alt-text="Screenshot shows Connected." lightbox="./media/point-to-site-vpn-client-configuration-azure-cert/expanded/connected.png":::
-
-## <a name="linuxgui"></a>Linux (strongSwan GUI)
-
-### <a name="installstrongswan"></a>Install strongSwan
--
-### <a name="genlinuxcerts"></a>Generate certificates
-
-If you haven't already generated certificates, use the following steps:
--
-### <a name="install"></a>Install and configure
-
-The following instructions were created on Ubuntu 18.0.4. Ubuntu 16.0.10 doesn't support strongSwan GUI. If you want to use Ubuntu 16.0.10, you'll have to use the [command line](#linuxinstallcli). The following examples may not match screens that you see, depending on your version of Linux and strongSwan.
-
-1. Open the **Terminal** to install **strongSwan** and its Network Manager by running the command in the example.
-
- ```
- sudo apt install network-manager-strongswan
- ```
-1. Select **Settings**, then select **Network**. Select the **+** button to create a new connection.
-
- :::image type="content" source="./media/point-to-site-vpn-client-configuration-azure-cert/edit-connections.png" alt-text="Screenshot shows the network connections page." lightbox="./media/point-to-site-vpn-client-configuration-azure-cert/expanded/edit-connections.png":::
-
-1. Select **IPsec/IKEv2 (strongSwan)** from the menu, and double-click.
-
- :::image type="content" source="./media/point-to-site-vpn-client-configuration-azure-cert/add-connection.png" alt-text="Screenshot shows the Add VPN page." lightbox="./media/point-to-site-vpn-client-configuration-azure-cert/expanded/add-connection.png":::
-
-1. On the **Add VPN** page, add a name for your VPN connection.
-
- :::image type="content" source="./media/point-to-site-vpn-client-configuration-azure-cert/choose-type.png" alt-text="Screenshot shows Choose a connection type." lightbox="./media/point-to-site-vpn-client-configuration-azure-cert/expanded/choose-type.png":::
-1. Open the **VpnSettings.xml** file from the **Generic** folder contained in the downloaded client configuration files. Find the tag called **VpnServer** and copy the name, beginning with 'azuregateway' and ending with '.cloudapp.net'.
-
- :::image type="content" source="./media/point-to-site-vpn-client-configuration-azure-cert/vpn-server.png" alt-text="Screenshot shows copy data." lightbox="./media/point-to-site-vpn-client-configuration-azure-cert/expanded/vpn-server.png":::
-1. Paste the name in the **Address** field of your new VPN connection in the **Gateway** section. Next, select the folder icon at the end of the **Certificate** field, browse to the **Generic** folder, and select the **VpnServerRoot** file.
-1. In the **Client** section of the connection, for **Authentication**, select **Certificate/private key**. For **Certificate** and **Private key**, choose the certificate and the private key that were created earlier. In **Options**, select **Request an inner IP address**. Then, select **Add**.
-
- :::image type="content" source="./media/point-to-site-vpn-client-configuration-azure-cert/ip-request.png" alt-text="Screenshot shows Request an inner IP address." lightbox="./media/point-to-site-vpn-client-configuration-azure-cert/expanded/ip-request.png":::
-
-1. Turn the connection **On**.
-
- :::image type="content" source="./media/point-to-site-vpn-client-configuration-azure-cert/turn-on.png" alt-text="Screenshot shows copy." lightbox="./media/point-to-site-vpn-client-configuration-azure-cert/expanded/turn-on.png":::
-
-## <a name="linuxinstallcli"></a>Linux (strongSwan CLI)
-
-### Install strongSwan
--
-### Generate certificates
-
-If you haven't already generated certificates, use the following steps:
--
-### Install and configure
-
-1. Download the VPNClient package from Azure portal.
-1. Extract the file.
-1. From the **Generic** folder, copy or move the **VpnServerRoot.cer** to **/etc/ipsec.d/cacerts**.
-1. Copy or move **client.p12** to **/etc/ipsec.d/private/**. This file is the client certificate for the VPN gateway.
-1. Open the **VpnSettings.xml** file and copy the `<VpnServer>` value. You'll use this value in the next step.
-1. Adjust the values in the following example, then add the example to the **/etc/ipsec.conf** configuration.
-
- ```
- conn azure
- keyexchange=ikev2
- type=tunnel
- leftfirewall=yes
- left=%any
- leftauth=eap-tls
- leftid=%client # use the DNS alternative name prefixed with the %
- right= # Enter the VpnServer value here (Azure VPN gateway address)
- rightid=% # Enter the VpnServer value here, prefixed with % (Azure VPN gateway FQDN)
- rightsubnet=0.0.0.0/0
- leftsourceip=%config
- auto=add
- ```
-1. Add the following values to **/etc/ipsec.secrets**.
-
- ```
- : P12 client.p12 'password' # key filename inside /etc/ipsec.d/private directory
- ```
-
-1. Run the following commands:
-
- ```
- # ipsec restart
- # ipsec up azure
- ```
-
-## Next steps
-
-Return to the original article that you were working from, then complete your P2S configuration.
-
-* [PowerShell configuration steps](vpn-gateway-howto-point-to-site-rm-ps.md).
-* [Azure portal configuration steps](vpn-gateway-howto-point-to-site-resource-manager-portal.md).
vpn-gateway Site To Site Vpn Private Peering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/site-to-site-vpn-private-peering.md
In both of these examples, Azure will send traffic to 10.0.1.0/24 over the VPN c
1. To enable **Use Azure Private IP Address** on the connection, select **Configuration**. Set **Use Azure Private IP Address** to **Enabled**, then select **Save**. :::image type="content" source="media/site-to-site-vpn-private-peering/connection.png" alt-text="Gateway Private IPs - Enabled":::
-1. From your firewall, ping the private IP that you wrote down in step 3. The private IP should be reachable over the ExpressRoute private peering.
-1. Use this private IP as the remote IP on your on-premises firewall to establish the Site-to-Site tunnel over the ExpressRoute private peering.
+1. Use the private IP that you wrote down in step 3 as the remote IP on your on-premises firewall to establish the Site-to-Site tunnel over the ExpressRoute private peering.
## <a name="powershell"></a>PowerShell steps
vpn-gateway Vpn Gateway Certificates Point To Site Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-certificates-point-to-site-linux.md
Title: 'Generate and export certificates for Point-to-Site: Linux: CLI'
+ Title: 'Generate and export certificates for point-to-site: Linux - strongSwan'
description: Learn how to create a self-signed root certificate, export the public key, and generate client certificates using the Linux (strongSwan) CLI. - - Previously updated : 09/02/2020- Last updated : 05/17/2022+
-# Generate and export certificates
+# Generate and export certificates - Linux (strongSwan)
+
+VPN Gateway point-to-site connections can use certificates to authenticate. This article shows you how to create a self-signed root certificate and generate client certificates using strongSwan. You can also use [PowerShell](vpn-gateway-certificates-point-to-site.md) or [MakeCert](vpn-gateway-certificates-point-to-site-makecert.md).
-Point-to-Site connections use certificates to authenticate. This article shows you how to create a self-signed root certificate and generate client certificates using the Linux CLI and strongSwan. If you are looking for different certificate instructions, see the [PowerShell](vpn-gateway-certificates-point-to-site.md) or [MakeCert](vpn-gateway-certificates-point-to-site-makecert.md) articles. For information about how to install strongSwan using the GUI instead of CLI, see the steps in the [Client configuration](point-to-site-vpn-client-configuration-azure-cert.md#install) article.
+Each client must have a client certificate installed locally to connect. Additionally, the root certificate public key information must be uploaded to Azure. For more information, see [Point-to-site configuration - certificate authentication](vpn-gateway-howto-point-to-site-resource-manager-portal.md).
-## Install strongSwan
+## <a name="install"></a>Install strongSwan
+
+The following steps help you install strongSwan.
[!INCLUDE [strongSwan Install](../../includes/vpn-gateway-strongswan-install-include.md)]
-## Generate and export certificates
+## <a name="cli"></a>Linux CLI instructions (strongSwan)
+
+The following steps help you generate and export certificates using the Linux CLI (strongSwan).
++
+## <a name="gui"></a>Linux GUI instructions (strongSwan)
+
+The following steps help you generate and export certificates using the Linux GUI (strongSwan).
[!INCLUDE [strongSwan certificates](../../includes/vpn-gateway-strongswan-certificates-include.md)] ## Next steps
-Continue with your Point-to-Site configuration to [Create and install VPN client configuration files](point-to-site-vpn-client-configuration-azure-cert.md#linuxinstallcli).
+Continue with your point-to-site configuration to [Create and install VPN client configuration files - Linux](point-to-site-vpn-client-cert-linux.md).
vpn-gateway Vpn Gateway Howto Openvpn Clients https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-openvpn-clients.md
- Title: 'How to configure OpenVPN clients for P2S VPN gateways'-
-description: Learn how to configure OpenVPN clients for Azure VPN Gateway. This article helps you configure Windows, Linux, iOS, and Mac clients.
--- Previously updated : 05/05/2022---
-# Configure an OpenVPN client for Azure VPN Gateway P2S connections
-
-This article helps you configure the **OpenVPN &reg; Protocol** client for Azure VPN Gateway point-to-site configurations. This article pertains specifically to OpenVPN clients, not the Azure VPN Client or native VPN clients.
-
-For these authentication types, see the following articles instead:
-
-* Azure AD authentication
- * [Windows clients](openvpn-azure-ad-client.md)
- * [macOS clients](openvpn-azure-ad-client.md)
-
-* RADIUS authentication
- * [All RADIUS clients](point-to-site-vpn-client-configuration-radius.md)
-
-## Before you begin
-
-Verify that you've completed the steps to configure OpenVPN for your VPN gateway. For details, see [Configure OpenVPN for Azure VPN Gateway](vpn-gateway-howto-openvpn.md).
-
-## VPN client configuration files
-
-You can generate and download the VPN client configuration files from the portal, or by using PowerShell. Either method returns the same zip file. Unzip the file to view the OpenVPN folder.
-
-When you open the zip file, if you don't see the OpenVPN folder, verify that your VPN gateway is configured to use the OpenVPN tunnel type. Additionally, if you're using Azure AD authentication, you may not have an OpenVPN folder. See the links at the top of this article for Azure AD instructions.
---
-## Next steps
-
-If you want the VPN clients to be able to access resources in another VNet, then follow the instructions on the [VNet-to-VNet](vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) article to set up a vnet-to-vnet connection. Be sure to enable BGP on the gateways and the connections, otherwise traffic won't flow.
-
-**"OpenVPN" is a trademark of OpenVPN Inc.**
vpn-gateway Vpn Gateway Howto Openvpn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-openvpn.md
Title: 'How to enable OpenVPN for P2S VPN gateways' description: Learn how to enable OpenVPN Protocol on VPN gateways for point-to-site configurations.- - Previously updated : 04/20/2022 Last updated : 05/16/2022
This article helps you set up **OpenVPN® Protocol** on Azure VPN Gateway. This
## Next steps
-To configure clients for OpenVPN, see [Configure OpenVPN clients](vpn-gateway-howto-openvpn-clients.md).
+To configure clients for OpenVPN, see Configure OpenVPN clients for [Windows](point-to-site-vpn-client-cert-windows.md), [macOS and iOS](point-to-site-vpn-client-cert-mac.md), and [Linux](point-to-site-vpn-client-cert-linux.md).
**"OpenVPN" is a trademark of OpenVPN Inc.**
vpn-gateway Vpn Gateway Howto Point To Site Resource Manager Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-point-to-site-resource-manager-portal.md
For install steps, see [Install a client certificate](point-to-site-how-to-vpn-c
## <a name="clientconfig"></a>Configure settings for VPN clients
-To connect to the virtual network gateway using P2S, each computer uses the VPN client that is natively installed as a part of the operating system. For example, when you go to VPN settings on your Windows computer, you can add VPN connections without installing a separate VPN client. You configure each VPN client by using a client configuration package. The client configuration package contains settings that are specific to the VPN gateway that you created.
+To connect to the virtual network gateway using P2S, each computer can use the VPN client that is natively installed as a part of the operating system. For example, when you go to VPN settings on your Windows computer, you can add VPN connections without installing a separate VPN client. You configure each VPN client by using a client configuration package. The client configuration package contains settings that are specific to the VPN gateway that you created.
-For steps to generate and install VPN client configuration files, see [Create and install VPN client configuration files for Azure certificate authentication P2S configurations](point-to-site-vpn-client-configuration-azure-cert.md).
+For steps to generate and install VPN client configuration files, see [Configure point-to-site VPN clients - certificate authentication](point-to-site-vpn-client-cert-windows.md).
## <a name="connect"></a>Connect to Azure
For steps to generate and install VPN client configuration files, see [Create an
### To connect from a Mac VPN client
-From the Network dialog box, locate the client profile that you want to use, specify the settings from the [VpnSettings.xml](point-to-site-vpn-client-configuration-azure-cert.md#installmac), and then select **Connect**. For detailed instructions, see [Generate and install VPN client configuration files - macOS](./point-to-site-vpn-client-configuration-azure-cert.md#installmac).
+From the Network dialog box, locate the client profile that you want to use, specify the settings from the [VpnSettings.xml](point-to-site-vpn-client-cert-mac.md), and then select **Connect**. For detailed instructions, see [Configure point-to-site VPN clients - certificate authentication - macOS](point-to-site-vpn-client-cert-mac.md).
If you're having trouble connecting, verify that the virtual network gateway isn't using a Basic SKU. The Basic SKU isn't supported for Mac clients.
- :::image type="content" source="./media/point-to-site-vpn-client-configuration-azure-cert/select-connect.png" alt-text="Screenshot shows connect button." lightbox="./media/point-to-site-vpn-client-configuration-azure-cert/expanded/select-connect.png":::
+ :::image type="content" source="./media/point-to-site-vpn-client-cert-mac/select-connect.png" alt-text="Screenshot shows connect button." lightbox="./media/point-to-site-vpn-client-cert-mac/select-connect.png":::
## <a name="verify"></a>To verify your connection
vpn-gateway Vpn Gateway Howto Point To Site Rm Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-point-to-site-rm-ps.md
Previously updated : 06/03/2021 Last updated : 05/05/2022
Make sure the client certificate was exported as a .pfx along with the entire ce
## <a name="clientconfig"></a>Configure the VPN client
-To connect to the virtual network gateway using P2S, each computer uses the VPN client that is natively installed as a part of the operating system. For example, when you go to VPN settings on your Windows computer, you can add VPN connections without installing a separate VPN client. You configure each VPN client by using a client configuration package. The client configuration package contains settings that are specific to the VPN gateway that you created.
+To connect to the virtual network gateway using P2S, each computer uses the VPN client that is natively installed as a part of the operating system. For example, when you go to VPN settings on your Windows computer, you can add VPN connections without installing a separate VPN client. You configure each VPN client by using a client configuration package. The client configuration package contains settings that are specific to the VPN gateway that you created.
-You can use the following quick examples to generate and install the client configuration package. For more information about package contents and additional instructions about to generate and install VPN client configuration files, see [Create and install VPN client configuration files](point-to-site-vpn-client-configuration-azure-cert.md).
+You can use the following quick examples to generate and install the client configuration package. For more information about package contents and additional instructions about how to generate and install VPN client configuration files, see [Create and install VPN client configuration files](point-to-site-vpn-client-cert-windows.md).
If you need to declare your variables again, you can find them [here](#declare).
$profile.VPNProfileSASUrl
### Mac VPN client From the Network dialog box, locate the client profile that you want to use, then click **Connect**.
-Check [Install - Mac (macOS)](./point-to-site-vpn-client-configuration-azure-cert.md#installmac) for detailed instructions. If you are having trouble connecting, verify that the virtual network gateway is not using a Basic SKU. Basic SKU is not supported for Mac clients.
+Check [Install - Mac (macOS)](point-to-site-vpn-client-cert-mac.md) for detailed instructions. If you are having trouble connecting, verify that the virtual network gateway is not using a Basic SKU. Basic SKU is not supported for Mac clients.
![Mac connection](./media/vpn-gateway-howto-point-to-site-rm-ps/applyconnect.png)
web-application-firewall Waf Front Door Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-monitor.md
Last updated 05/11/2022
+zone_pivot_groups: front-door-tiers
# Azure Web Application Firewall monitoring and logging
For logging on the classic tier, use [FrontdoorAccessLog](../../frontdoor/front-
The following query example returns WAF logs on blocked requests:

``` WAFlogQuery
AzureDiagnostics
| where ResourceType == "FRONTDOORS" and Category == "FrontdoorWebApplicationFirewallLog"
| where action_s == "Block"
+```
+
+``` WAFlogQuery
+AzureDiagnostics
+| where ResourceProvider == "MICROSOFT.CDN" and Category == "FrontDoorWebApplicationFirewallLog"
+| where action_s == "Block"
```

Here is an example of a logged request in WAF log:
Here is an example of a logged request in WAF log:
The following example query returns AccessLogs entries:

``` AccessLogQuery
AzureDiagnostics
| where ResourceType == "FRONTDOORS" and Category == "FrontdoorAccessLog"
```
++
+``` AccessLogQuery
+AzureDiagnostics
+| where ResourceProvider == "MICROSOFT.CDN" and Category == "FrontDoorAccessLog"
+```
Here is an example of a logged request in Access log:
web-application-firewall Application Gateway Waf Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-waf-configuration.md
description: This article provides information on Web Application Firewall exclu
Previously updated : 04/21/2022 Last updated : 05/18/2022
There can be any number of reasons to disable evaluating this header. There coul
You can use the following approaches to exclude the `User-Agent` header from evaluation by all of the SQL injection rules:
-> [!NOTE]
-> As of early May 2022, we are rolling out updates to the Azure portal for these features. If you don't see configuration options in the portal, please use PowerShell, the Azure CLI, Bicep, or ARM templates to configure global or per-rule exclusions.
+# [Azure portal](#tab/portal)
+
+To configure a per-rule exclusion by using the Azure portal, follow these steps:
+
+1. Navigate to the WAF policy, and select **Managed rules**.
+
+1. Select **Add exclusions**.
+
+ :::image type="content" source="../media/application-gateway-waf-configuration/waf-policy-exclusions-rule-add.png" alt-text="Screenshot of the Azure portal that shows how to add a new per-rule exclusion for the W A F policy.":::
+
+1. In **Applies to**, select the CRS ruleset to apply the exclusion to, such as **OWASP_3.2**.
+
+ :::image type="content" source="../media/application-gateway-waf-configuration/waf-policy-exclusions-rule-edit.png" alt-text="Screenshot of the Azure portal that shows the per-rule exclusion configuration for the W A F policy.":::
+
+1. Select **Add rules**, and select the rules you want to apply exclusions to.
+
+1. Configure the match variable, operator, and selector. Then select **Save**.
+
+You can configure multiple exclusions.
# [Azure PowerShell](#tab/powershell)
Suppose you want to exclude the value in the *user* parameter that is passed in
The following example shows how you can exclude the `user` query string argument from evaluation:
-> [!NOTE]
-> As of early May 2022, we are rolling out updates to the Azure portal for these features. If you don't see configuration options in the portal, please use PowerShell, the Azure CLI, Bicep, or ARM templates to configure global or per-rule exclusions.
+# [Azure portal](#tab/portal)
+
+To configure a global exclusion by using the Azure portal, follow these steps:
+
+1. Navigate to the WAF policy, and select **Managed rules**.
+
+1. Select **Add exclusions**.
+
+ :::image type="content" source="../media/application-gateway-waf-configuration/waf-policy-exclusions-rule-add.png" alt-text="Screenshot of the Azure portal that shows how to add a new global exclusion for the W A F policy.":::
+
+1. In **Applies to**, select **Global**.
+
+ :::image type="content" source="../media/application-gateway-waf-configuration/waf-policy-exclusions-global-edit.png" alt-text="Screenshot of the Azure portal that shows the global exclusion configuration for the W A F policy.":::
+
+1. Configure the match variable, operator, and selector. Then select **Save**.
+
+You can configure multiple exclusions.
# [Azure PowerShell](#tab/powershell)
web-application-firewall Application Gateway Web Application Firewall Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-web-application-firewall-portal.md
Previously updated : 04/26/2022 Last updated : 05/23/2022 #Customer intent: As an IT administrator, I want to use the Azure portal to set up an application gateway with Web Application Firewall so I can protect my applications.
On the **Configuration** tab, you'll connect the frontend and backend pool you c
1. Select **Add a routing rule** in the **Routing rules** column. 2. In the **Add a routing rule** window that opens, enter *myRoutingRule* for the **Rule name**.
+1. For **Priority**, type a priority number.
3. A routing rule requires a listener. On the **Listener** tab within the **Add a routing rule** window, enter the following values for the listener:
On the **Configuration** tab, you'll connect the frontend and backend pool you c
Accept the default values for the other settings on the **Listener** tab, then select the **Backend targets** tab to configure the rest of the routing rule.
- ![Create new application gateway: listener](../media/application-gateway-web-application-firewall-portal/application-gateway-create-rule-listener.png)
+ :::image type="content" source="../media/application-gateway-web-application-firewall-portal/application-gateway-create-rule-listener.png" alt-text="Screenshot showing Create new application gateway: listener." lightbox="../media/application-gateway-web-application-firewall-portal/application-gateway-create-rule-listener.png":::
4. On the **Backend targets** tab, select **myBackendPool** for the **Backend target**.
-5. For the **HTTP setting**, select **Add new** to create a new HTTP setting. The HTTP setting will determine the behavior of the routing rule. In the **Add an HTTP setting** window that opens, enter *myHTTPSetting* for the **HTTP setting name**. Accept the default values for the other settings in the **Add an HTTP setting** window, then select **Add** to return to the **Add a routing rule** window.
+5. For the **Backend settings**, select **Add new** to create a new Backend setting. This setting determines the behavior of the routing rule. In the **Add Backend setting** window that opens, enter *myBackendSetting* for the **Backend settings name**. Accept the default values for the other settings in the window, then select **Add** to return to the **Add a routing rule** window.
- ![Create new application gateway: HTTP setting](../media/application-gateway-web-application-firewall-portal/application-gateway-create-httpsetting.png)
+ :::image type="content" source="../media/application-gateway-web-application-firewall-portal/application-gateway-create-backend-setting.png" alt-text="Screenshot showing Create new application gateway, Backend setting." lightbox="../media/application-gateway-web-application-firewall-portal/application-gateway-create-backend-setting.png":::
6. On the **Add a routing rule** window, select **Add** to save the routing rule and return to the **Configuration** tab.
- ![Create new application gateway: routing rule](../media/application-gateway-web-application-firewall-portal/application-gateway-create-rule-backends.png)
+ :::image type="content" source="../media/application-gateway-web-application-firewall-portal/application-gateway-create-rule-backends.png" alt-text="Screenshot showing Create new application gateway: routing rule." lightbox="../media/application-gateway-web-application-firewall-portal/application-gateway-create-rule-backends.png":::
7. Select **Next: Tags** and then **Next: Review + create**.
In this example, you install IIS on the virtual machines only to verify Azure cr
2. Set the location parameter for your environment, and then run the following command to install IIS on the virtual machine: ```azurepowershell-interactive
+ $location = 'east us'
+ Set-AzVMExtension ` -ResourceGroupName myResourceGroupAG ` -ExtensionName IIS `
In this example, you install IIS on the virtual machines only to verify Azure cr
-ExtensionType CustomScriptExtension ` -TypeHandlerVersion 1.4 ` -SettingString '{"commandToExecute":"powershell Add-WindowsFeature Web-Server; powershell Add-Content -Path \"C:\\inetpub\\wwwroot\\Default.htm\" -Value $($env:computername)"}' `
- -Location EastUS
+ -Location $location
``` 3. Create a second virtual machine and install IIS by using the steps that you previously completed. Use *myVM2* for the virtual machine name and for the **VMName** setting of the **Set-AzVMExtension** cmdlet.